8,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SMILES enumeration, vectorization and batch generation
SMILES enumeration is the process of writing out all possible SMILES forms of a molecule. It's a useful technique for data augmentation before sequence-based modeling of molecules. You can read more about the background in this blog post or this preprint on arxiv.org
Import the SmilesEnumerator and instantiate the object
Step1: A few SMILES strings will be enumerated as a demonstration.
Step2: Vectorization
Before vectorization, the SMILES must be stored as strings in a numpy array. The transform takes numpy arrays or pandas series with the SMILES as strings.
Step3: Fit the charset and the padding to the SMILES array, alternatively they can be specified when instantiating the object.
Step4: Some extra padding has been added to the maximum length observed in the smiles array. The SMILES can be transformed to one-hot encoded vectors and shown with matplotlib.
Step5: It's a nice piano roll. If the vectorization is repeated, the result will be different due to the enumeration, as sme.enumerate and sme.canonical are set to True and False, respectively (default settings).
Step6: The reverse_transform() function can be used to translate back to a SMILES string, as long as the charset is the same as was used to vectorize.
Step7: Batch generation for Keras RNN modeling
The SmilesEnumerator class can be used together with the SmilesIterator batch generator for on-the-fly vectorization for RNN modeling of molecules. Below it's briefly demonstrated how this can be done.
Step8: Build a SMILES based RNN QSAR model with Keras.
Step9: Use the generator object for training.
Step10: Not the best model so far. However, prolonged training, with the learning rate lowered towards the end, will improve the model. | Python Code:
from SmilesEnumerator import SmilesEnumerator
sme = SmilesEnumerator()
print(help(SmilesEnumerator))
Explanation: SMILES enumeration, vectorization and batch generation
SMILES enumeration is the process of writing out all possible SMILES forms of a molecule. It's a useful technique for data augmentation before sequence-based modeling of molecules. You can read more about the background in this blog post or this preprint on arxiv.org
Import the SmilesEnumerator and instantiate the object
End of explanation
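As a rough illustration of what such randomization can look like internally (an illustrative sketch assuming RDKit is available; the actual SmilesEnumerator implementation may differ in detail):
from rdkit import Chem
import numpy as np
def randomize_smiles_sketch(smiles):
    # Parse the molecule, shuffle its atom order, and write a non-canonical SMILES
    mol = Chem.MolFromSmiles(smiles)
    atom_order = list(range(mol.GetNumAtoms()))
    np.random.shuffle(atom_order)
    mol = Chem.RenumberAtoms(mol, atom_order)
    return Chem.MolToSmiles(mol, canonical=False)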
for i in range(10):
print(sme.randomize_smiles("CCC(=O)O[C@@]1(CC[NH+](C[C@H]1CC=C)C)c2ccccc2"))
Explanation: A few SMILES strings will be enumerated as a demonstration.
End of explanation
import numpy as np
smiles = np.array(["CCC(=O)O[C@@]1(CC[NH+](C[C@H]1CC=C)C)c2ccccc2"])
print(smiles.shape)
Explanation: Vectorization
Before vectorization, the SMILES must be stored as strings in a numpy array. The transform takes numpy arrays or pandas series with the SMILES as strings.
End of explanation
sme.fit(smiles)
print(sme.charset)
print(sme.pad)
Explanation: Fit the charset and the padding to the SMILES array, alternatively they can be specified when instantiating the object.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
vect = sme.transform(smiles)
plt.imshow(vect[0])
Explanation: Some extra padding has been added to the maximum length observed in the smiles array. The SMILES can be transformed to one-hot encoded vectors and shown with matplotlib.
End of explanation
print(sme.enumerate, sme.canonical)
vect = sme.transform(smiles)
plt.imshow(vect[0])
Explanation: It's a nice piano roll. If the vectorization is repeated, the result will be different due to the enumeration, as sme.enumerate and sme.canonical are set to True and False, respectively (default settings).
End of explanation
print(sme.reverse_transform(vect))
Explanation: The reverse_transform() function can be used to translate back to a SMILES string, as long as the charset is the same as was used to vectorize.
End of explanation
import pandas as pd
data = pd.read_csv("Example_data/Sutherland_DHFR.csv")
print(data.head())
from sklearn.model_selection import train_test_split
#We ignore the > signs, and use random splitting for simplicity
X_train, X_test, y_train, y_test = train_test_split(data["smiles_parent"],
np.log(data["PC_uM_value"]).values.reshape(-1,1),
random_state=42)
from sklearn.preprocessing import RobustScaler
rbs = RobustScaler(with_centering=True, with_scaling=True, quantile_range=(5.0, 95.0), copy=True)
y_train = rbs.fit_transform(y_train)
y_test = rbs.transform(y_test)
_ = plt.hist(y_train, bins=25)
import tensorflow.keras.backend as K
from SmilesEnumerator import SmilesIterator
#The SmilesEnumerator must be fit to the entire dataset, so that all chars are registered
sme.fit(data["smiles_parent"])
sme.leftpad = True
print(sme.charset)
print(sme.pad)
#The dtype is set for the K.floatx(), which is the numerical type configured for Tensorflow or Theano
generator = SmilesIterator(X_train, y_train, sme, batch_size=200, dtype=K.floatx())
X,y = generator.next()
print(X.shape)
print(y.shape)
Explanation: Batch generation for Keras RNN modeling
The SmilesEnumerator class can be used together with the SmilesIterator batch generator for on-the-fly vectorization for RNN modeling of molecules. Below it's briefly demonstrated how this can be done.
End of explanation
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras import regularizers
from tensorflow.keras.optimizers import RMSprop
input_shape = X.shape[1:]
output_shape = 1
model = Sequential()
model.add(LSTM(64,
input_shape=input_shape,
dropout = 0.19
#unroll= True
))
model.add(Dense(output_shape,
kernel_regularizer=regularizers.l1_l2(0.005,0.01),
activation="linear"))
model.compile(loss="mse", optimizer=RMSprop(learning_rate=0.005))
print(model.summary())
Explanation: Build a SMILES based RNN QSAR model with Keras.
End of explanation
model.fit(generator, steps_per_epoch=100, epochs=25, workers=4)
y_pred_train = model.predict(sme.transform(X_train))
y_pred_test = model.predict(sme.transform(X_test))
plt.scatter(y_train, y_pred_train, label="Train")
plt.scatter(y_test, y_pred_test, label="Test")
plt.legend()
Explanation: Use the generator object for training.
End of explanation
#The Enumerator can be used in sampling
i = 0
y_true = y_test[i]
y_pred = model.predict(sme.transform(X_test.iloc[i:i+1]))
print(y_true)
print(y_true - y_pred)
#Enumeration of the SMILES before sampling stabilises the result
smiles_repeat = np.array([X_test.iloc[i:i+1].values[0]]*50)
y_pred = model.predict(sme.transform(smiles_repeat))
print(y_pred.std())
print(y_true - np.median(y_pred))
_ = plt.hist(y_pred)
Explanation: Not the best model so far. However, prolonged training, with the learning rate lowered towards the end, will improve the model.
End of explanation |
8,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
'mesh' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Dataset Parameters
Let's create the ParameterSet which would be added to the Bundle when calling add_dataset. Later we'll call add_dataset, which will create and attach this ParameterSet for us.
Step3: times
Step4: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to meshes
Step5: mesh_method
Step6: The 'mesh_method' parameter determines how each component in the system is discretized into its mesh, but currently only has one option
Step7: gridsize
The 'gridsize' parameter is only relevant if mesh_method=='wd' (so will not be available unless that is the case).
Step8: Synthetics
Step9: Per-Mesh Parameters
Step10: Per-Time Parameters
Step11: Per-Element Parameters
Step12: Plotting
By default, MESH datasets plot as 'ys' vs 'xs' (plane of sky) of just the surface elements, taken from the vertices vectors.
Step13: Any of the 1-D fields (i.e. not vertices or normals) or matplotlib-recognized colornames can be used to color either the faces or edges of the triangles. Passing None for edgecolor or facecolor turns off the coloring (you may want to set edgecolor=None when setting facecolor, to disable the black outline).
Step14: Alternatively, if you provide simple 1-D fields to plot, a 2D x-y plot will be created using the values from each element (always for a single time - if meshes exist for multiple times in the model, you must provide a single time either in the twig or as an argument to plot).
Step15: The exception to needing to provide a time is for the per-time parameters mentioned above. For these, time can be the x-array (not very exciting in this case with only a single time).
For more examples see the following | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: 'mesh' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
ps, constraints = phoebe.dataset.mesh()
print(ps)
Explanation: Dataset Parameters
Let's create the ParameterSet which would be added to the Bundle when calling add_dataset. Later we'll call add_dataset, which will create and attach this ParameterSet for us.
End of explanation
print(ps['times'])
Explanation: times
End of explanation
ps_compute = phoebe.compute.phoebe()
print(ps_compute)
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to meshes
End of explanation
print(ps_compute['mesh_method'])
Explanation: mesh_method
End of explanation
print(ps_compute['ntriangles'])
Explanation: The 'mesh_method' parameter determines how each component in the system is discretized into its mesh, but currently only has one option:
* marching (default): this is the new method introduced in PHOEBE 2. The star is discretized into triangles, with the attempt to make them each of equal-area and nearly equilateral. Although not as fast as 'wd', this method is more robust and will always form a closed surface (when possible).
ntriangles
The 'ntriangles' parameter is only relevant if mesh_method=='marching' (so will not be available unless that is the case).
End of explanation
print(ps_compute['gridsize'])
Explanation: gridsize
The 'gridsize' parameter is only relevant if mesh_method=='wd' (so will not be available unless that is the case).
End of explanation
b.add_dataset('mesh', times=[0], dataset='mesh01')
b.run_compute()
b['mesh@model'].twigs
Explanation: Synthetics
End of explanation
print(b['times@primary@mesh01@model'])
Explanation: Per-Mesh Parameters
End of explanation
print(b['volume@primary@mesh01@model'])
print(b['rpole@primary@mesh01@model'])
print(b['pot@primary@mesh01@model'])
Explanation: Per-Time Parameters
End of explanation
print(b['vertices@primary@mesh01@model'])
print(b['xs@primary@mesh01@model'])
print(b['rs@primary@mesh01@model'])
print(b['r_projs@primary@mesh01@model'])
print(b['cosbetas@primary@mesh01@model'])
print(b['normals@primary@mesh01@model'])
print(b['nxs@primary@mesh01@model'])
print(b['mus@primary@mesh01@model'])
print(b['vxs@primary@mesh01@model'])
print(b['areas@primary@mesh01@model'])
print(b['loggs@primary@mesh01@model'])
print(b['teffs@primary@mesh01@model'])
print(b['visibilities@primary@mesh01@model'])
Explanation: Per-Element Parameters
End of explanation
axs, artists = b['mesh@model'].plot()
Explanation: Plotting
By default, MESH datasets plot as 'ys' vs 'xs' (plane of sky) of just the surface elements, taken from the vertices vectors.
End of explanation
axs, artists = b['mesh@model'].plot(facecolor='teffs', edgecolor=None)
Explanation: Any of the 1-D fields (i.e. not vertices or normals) or matplotlib-recognized colornames can be used to color either the faces or edges of the triangles. Passing None for edgecolor or facecolor turns off the coloring (you may want to set edgecolor=None when setting facecolor, to disable the black outline).
End of explanation
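For instance (an illustrative call only; any of the per-element fields printed above could be substituted), the faces and edges can be colored by different fields at the same time:
axs, artists = b['mesh@model'].plot(facecolor='loggs', edgecolor='teffs')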
axs, artists = b['mesh@model'].plot(time=0.0, x='mus', y='teffs')
Explanation: Alternatively, if you provide simple 1-D fields to plot, a 2D x-y plot will be created using the values from each element (always for a single time - if meshes exist for multiple times in the model, you must provide a single time either in the twig or as an argument to plot).
End of explanation
axs, artists = b['mesh@model'].plot(x='times', y='rpole', marker='s')
Explanation: The exception to needing to provide a time is for the per-time parameters mentioned above. For these, time can be the x-array (not very exciting in this case with only a single time).
For more examples see the following:
- Passband Luminosity Tutorial
End of explanation |
8,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamic factors and coincident indices
Factor models generally try to find a small number of unobserved "factors" that influence a substantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data.
Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the Index of Coincident Economic Indicators) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them.
Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index.
Macroeconomic data
The coincident index is created by considering the comovements in four macroeconomic variables (versions of these variables are available on FRED; the ID of the series used below is given in parentheses)
Step1: Note
Step2: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized.
Step3: Dynamic factors
A general dynamic factor model is written as
Step4: Estimates
Once the model has been estimated, there are two components that we can use for analysis or inference
Step5: Estimated factors
While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons
Step6: Post-estimation
Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given, regressing them (and a constant) each (one at a time) on each of the observed variables, and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not.
In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables).
In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in industrial production index and a large portion of the variation in sales and employment, it is less helpful in explaining income.
Step7: Coincident Index
As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991).
In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED).
Step8: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
Step9: Appendix 1
Step10: So what did we just do?
__init__
The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with factor_order=4, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks.
start_params
start_params are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short.
param_names
param_names are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names.
transform_params and untransform_params
The optimizer selects possible parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and transform_params is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. untransform_params is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine).
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify these functions for two reasons
Step11: Although this model increases the likelihood, it is not preferred by the AIC and BIC measures which penalize the additional three parameters.
Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True, linewidth=120)
from pandas_datareader.data import DataReader
# Get the datasets from FRED
start = '1979-01-01'
end = '2014-12-01'
indprod = DataReader('IPMAN', 'fred', start=start, end=end)
income = DataReader('W875RX1', 'fred', start=start, end=end)
sales = DataReader('CMRMTSPL', 'fred', start=start, end=end)
emp = DataReader('PAYEMS', 'fred', start=start, end=end)
# dta = pd.concat((indprod, income, sales, emp), axis=1)
# dta.columns = ['indprod', 'income', 'sales', 'emp']
Explanation: Dynamic factors and coincident indices
Factor models generally try to find a small number of unobserved "factors" that influence a substantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data.
Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the Index of Coincident Economic Indicators) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them.
Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index.
Macroeconomic data
The coincident index is created by considering the comovements in four macroeconomic variables (versions of these variables are available on FRED; the ID of the series used below is given in parentheses):
Industrial production (IPMAN)
Real aggregate income (excluding transfer payments) (W875RX1)
Manufacturing and trade sales (CMRMTSPL)
Employees on non-farm payrolls (PAYEMS)
In all cases, the data is at the monthly frequency and has been seasonally adjusted; the time-frame considered is 1972 - 2005.
End of explanation
# HMRMT = DataReader('HMRMT', 'fred', start='1967-01-01', end=end)
# CMRMT = DataReader('CMRMT', 'fred', start='1997-01-01', end=end)
# HMRMT_growth = HMRMT.diff() / HMRMT.shift()
# sales = pd.Series(np.zeros(emp.shape[0]), index=emp.index)
# # Fill in the recent entries (1997 onwards)
# sales[CMRMT.index] = CMRMT
# # Backfill the previous entries (pre 1997)
# idx = sales.ix[:'1997-01-01'].index
# for t in range(len(idx)-1, 0, -1):
# month = idx[t]
# prev_month = idx[t-1]
# sales.ix[prev_month] = sales.ix[month] / (1 + HMRMT_growth.ix[prev_month].values)
dta = pd.concat((indprod, income, sales, emp), axis=1)
dta.columns = ['indprod', 'income', 'sales', 'emp']
dta.loc[:, 'indprod':'emp'].plot(subplots=True, layout=(2, 2), figsize=(15, 6));
Explanation: Note: in a recent update on FRED (8/12/15) the time series CMRMTSPL was truncated to begin in 1997; this is probably a mistake due to the fact that CMRMTSPL is a spliced series, so the earlier period is from the series HMRMT and the latter period is defined by CMRMT.
This has since (02/11/16) been corrected, however the series could also be constructed by hand from HMRMT and CMRMT, as shown below (process taken from the notes in the Alfred xls file).
End of explanation
# Create log-differenced series
dta['dln_indprod'] = (np.log(dta.indprod)).diff() * 100
dta['dln_income'] = (np.log(dta.income)).diff() * 100
dta['dln_sales'] = (np.log(dta.sales)).diff() * 100
dta['dln_emp'] = (np.log(dta.emp)).diff() * 100
# De-mean and standardize
dta['std_indprod'] = (dta['dln_indprod'] - dta['dln_indprod'].mean()) / dta['dln_indprod'].std()
dta['std_income'] = (dta['dln_income'] - dta['dln_income'].mean()) / dta['dln_income'].std()
dta['std_sales'] = (dta['dln_sales'] - dta['dln_sales'].mean()) / dta['dln_sales'].std()
dta['std_emp'] = (dta['dln_emp'] - dta['dln_emp'].mean()) / dta['dln_emp'].std()
Explanation: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized.
End of explanation
# Get the endogenous data
endog = dta.loc['1979-02-01':, 'std_indprod':'std_emp']
# Create the model
mod = sm.tsa.DynamicFactor(endog, k_factors=1, factor_order=2, error_order=2)
initial_res = mod.fit(method='powell', disp=False)
res = mod.fit(initial_res.params)
Explanation: Dynamic factors
A general dynamic factor model is written as:
$$
\begin{align}
y_t & = \Lambda f_t + B x_t + u_t \
f_t & = A_1 f_{t-1} + \dots + A_p f_{t-p} + \eta_t \qquad \eta_t \sim N(0, I)\
u_t & = C_1 u_{t-1} + \dots + C_1 f_{t-q} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \Sigma)
\end{align}
$$
where $y_t$ are observed data, $f_t$ are the unobserved factors (evolving as a vector autoregression), $x_t$ are (optional) exogenous variables, and $u_t$ is the error, or "idiosyncratic", process ($u_t$ is also optionally allowed to be autocorrelated). The $\Lambda$ matrix is often referred to as the matrix of "factor loadings". The variance of the factor error term is set to the identity matrix to ensure identification of the unobserved factors.
This model can be cast into state space form, and the unobserved factor estimated via the Kalman filter. The likelihood can be evaluated as a byproduct of the filtering recursions, and maximum likelihood estimation used to estimate the parameters.
Model specification
The specific dynamic factor model in this application has 1 unobserved factor which is assumed to follow an AR(2) process. The innovations $\varepsilon_t$ are assumed to be independent (so that $\Sigma$ is a diagonal matrix) and the error term associated with each equation, $u_{i,t}$, is assumed to follow an independent AR(2) process.
Thus the specification considered here is:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
where $i$ is one of: [indprod, income, sales, emp].
This model can be formulated using the DynamicFactor model built-in to Statsmodels. In particular, we have the following specification:
k_factors = 1 - (there is 1 unobserved factor)
factor_order = 2 - (it follows an AR(2) process)
error_var = False - (the errors evolve as independent AR processes rather than jointly as a VAR - note that this is the default option, so it is not specified below)
error_order = 2 - (the errors are autocorrelated of order 2: i.e. AR(2) processes)
error_cov_type = 'diagonal' - (the innovations are uncorrelated; this is again the default)
Once the model is created, the parameters can be estimated via maximum likelihood; this is done using the fit() method.
Note: recall that we have de-meaned and standardized the data; this will be important in interpreting the results that follow.
Aside: in their empirical example, Kim and Nelson (1999) actually consider a slightly different model in which the employment variable is allowed to also depend on lagged values of the factor - this model does not fit into the built-in DynamicFactor class, but can be accommodated by using a subclass to implement the required new parameters and restrictions - see Appendix 1, below.
Parameter estimation
Multivariate models can have a relatively large number of parameters, and it may be difficult to escape from local minima to find the maximized likelihood. In an attempt to mitigate this problem, I perform an initial maximization step (from the model-defined starting parameters) using the modified Powell method available in Scipy (see the minimize documentation for more information). The resulting parameters are then used as starting parameters in the standard LBFGS optimization method.
End of explanation
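As a quick illustrative sanity check before looking at the estimates, the implied state space dimensions can be inspected directly (2 factor states from the AR(2) factor plus 4 AR(2) error processes contributing 8 states gives 10 states for the 4 observed series):
# Illustrative check of the state space dimensions
print(mod.k_states, mod.k_endog)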
print(res.summary(separate_params=False))
Explanation: Estimates
Once the model has been estimated, there are two components that we can use for analysis or inference:
The estimated parameters
The estimated factor
Parameters
The estimated parameters can be helpful in understanding the implications of the model, although in models with a larger number of observed variables and / or unobserved factors they can be difficult to interpret.
One reason for this difficulty is due to identification issues between the factor loadings and the unobserved factors. One easy-to-see identification issue is the sign of the loadings and the factors: an equivalent model to the one displayed below would result from reversing the signs of all factor loadings and the unobserved factor.
Here, one of the easy-to-interpret implications in this model is the persistence of the unobserved factor: we find that it exhibits substantial persistence.
End of explanation
fig, ax = plt.subplots(figsize=(13,3))
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, res.factors.filtered[0], label='Factor')
ax.legend()
# Retrieve and also plot the NBER recession indicators
rec = DataReader('USREC', 'fred', start=start, end=end)
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
Explanation: Estimated factors
While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons:
The sign-related identification issue described above.
Since the data was differenced, the estimated factor explains the variation in the differenced data, not the original data.
It is for these reasons that the coincident index is created (see below).
With these reservations, the unobserved factor is plotted below, along with the NBER indicators for US recessions. It appears that the factor is successful at picking up some degree of business cycle activity.
End of explanation
res.plot_coefficients_of_determination(figsize=(8,2));
Explanation: Post-estimation
Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given, regressing them (and a constant) each (one at a time) on each of the observed variables, and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not.
In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables).
In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in industrial production index and a large portion of the variation in sales and employment, it is less helpful in explaining income.
End of explanation
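For reference, these $R^2$ values could also be computed by hand (an illustrative sketch using OLS; statsmodels computes them internally for the plot above):
# Illustrative: regress each observed series on the estimated factor
factor = res.factors.filtered[0]
exog = sm.add_constant(factor)
for col in endog.columns:
    ols_res = sm.OLS(endog[col].values, exog).fit()
    print(col, ols_res.rsquared)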
usphci = DataReader('USPHCI', 'fred', start='1979-01-01', end='2014-12-01')['USPHCI']
usphci.plot(figsize=(13,3));
dusphci = usphci.diff()[1:].values
def compute_coincident_index(mod, res):
# Estimate W(1)
spec = res.specification
design = mod.ssm['design']
transition = mod.ssm['transition']
ss_kalman_gain = res.filter_results.kalman_gain[:,:,-1]
k_states = ss_kalman_gain.shape[0]
W1 = np.linalg.inv(np.eye(k_states) - np.dot(
np.eye(k_states) - np.dot(ss_kalman_gain, design),
transition
)).dot(ss_kalman_gain)[0]
# Compute the factor mean vector
    factor_mean = np.dot(W1, dta.loc['1972-02-01':, 'dln_indprod':'dln_emp'].mean())
# Normalize the factors
factor = res.factors.filtered[0]
factor *= np.std(usphci.diff()[1:]) / np.std(factor)
# Compute the coincident index
coincident_index = np.zeros(mod.nobs+1)
# The initial value is arbitrary; here it is set to
# facilitate comparison
coincident_index[0] = usphci.iloc[0] * factor_mean / dusphci.mean()
for t in range(0, mod.nobs):
coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean
# Attach dates
coincident_index = pd.Series(coincident_index, index=dta.index).iloc[1:]
# Normalize to use the same base year as USPHCI
    coincident_index *= (usphci.loc['1992-07-01'] / coincident_index.loc['1992-07-01'])
return coincident_index
Explanation: Coincident Index
As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991).
In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED).
End of explanation
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
coincident_index = compute_coincident_index(mod, res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, label='Coincident index')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
Explanation: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
End of explanation
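As an additional illustrative check, the two indices can also be compared numerically over the overlapping sample:
# Illustrative: correlation between the computed index and USPHCI
joint = pd.concat([coincident_index, usphci], axis=1).dropna()
print(joint.corr())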
from statsmodels.tsa.statespace import tools
class ExtendedDFM(sm.tsa.DynamicFactor):
def __init__(self, endog, **kwargs):
# Setup the model as if we had a factor order of 4
super(ExtendedDFM, self).__init__(
endog, k_factors=1, factor_order=4, error_order=2,
**kwargs)
# Note: `self.parameters` is an ordered dict with the
# keys corresponding to parameter types, and the values
# the number of parameters of that type.
# Add the new parameters
self.parameters['new_loadings'] = 3
# Cache a slice for the location of the 4 factor AR
# parameters (a_1, ..., a_4) in the full parameter vector
offset = (self.parameters['factor_loadings'] +
self.parameters['exog'] +
self.parameters['error_cov'])
self._params_factor_ar = np.s_[offset:offset+2]
self._params_factor_zero = np.s_[offset+2:offset+4]
@property
def start_params(self):
# Add three new loading parameters to the end of the parameter
# vector, initialized to zeros (for simplicity; they could
# be initialized any way you like)
return np.r_[super(ExtendedDFM, self).start_params, 0, 0, 0]
@property
def param_names(self):
# Add the corresponding names for the new loading parameters
# (the name can be anything you like)
return super(ExtendedDFM, self).param_names + [
'loading.L%d.f1.%s' % (i, self.endog_names[3]) for i in range(1,4)]
def transform_params(self, unconstrained):
# Perform the typical DFM transformation (w/o the new parameters)
constrained = super(ExtendedDFM, self).transform_params(
unconstrained[:-3])
# Redo the factor AR constraint, since we only want an AR(2),
# and the previous constraint was for an AR(4)
ar_params = unconstrained[self._params_factor_ar]
constrained[self._params_factor_ar] = (
tools.constrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[constrained, unconstrained[-3:]]
def untransform_params(self, constrained):
# Perform the typical DFM untransformation (w/o the new parameters)
unconstrained = super(ExtendedDFM, self).untransform_params(
constrained[:-3])
# Redo the factor AR unconstraint, since we only want an AR(2),
# and the previous unconstraint was for an AR(4)
ar_params = constrained[self._params_factor_ar]
unconstrained[self._params_factor_ar] = (
tools.unconstrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[unconstrained, constrained[-3:]]
def update(self, params, transformed=True, complex_step=False):
# Peform the transformation, if required
if not transformed:
params = self.transform_params(params)
params[self._params_factor_zero] = 0
# Now perform the usual DFM update, but exclude our new parameters
super(ExtendedDFM, self).update(params[:-3], transformed=True, complex_step=complex_step)
# Finally, set our new parameters in the design matrix
self.ssm['design', 3, 1:4] = params[-3:]
Explanation: Appendix 1: Extending the dynamic factor model
Recall that the previous specification was described by:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
Written in state space form, the previous specification of the model had the following observation equation:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \
y_{\text{income}, t} \
y_{\text{sales}, t} \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{income} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{sales} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{emp} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix}
$$
and transition equation:
$$
\begin{bmatrix}
f_t \
f_{t-1} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \
0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \
0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \
0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
u_{\text{indprod}, t-2} \
u_{\text{income}, t-2} \
u_{\text{sales}, t-2} \
u_{\text{emp}, t-2} \
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
the DynamicFactor model handles setting up the state space representation and, in the DynamicFactor.update method, it fills in the fitted parameter values into the appropriate locations.
The extended specification is the same as in the previous example, except that we also want to allow employment to depend on lagged values of the factor. This creates a change to the $y_{\text{emp},t}$ equation. Now we have:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \qquad & i \in \{\text{indprod}, \text{income}, \text{sales}\} \
y_{i,t} & = \lambda_{i,0} f_t + \lambda_{i,1} f_{t-1} + \lambda_{i,2} f_{t-2} + \lambda_{i,3} f_{t-3} + u_{i,t} \qquad & i = \text{emp} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
Now, the corresponding observation equation should look like the following:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \
y_{\text{income}, t} \
y_{\text{sales}, t} \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{income} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{sales} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{emp,1} & \lambda_\text{emp,2} & \lambda_\text{emp,3} & \lambda_\text{emp,4} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix}
$$
Notice that we have introduced two new state variables, $f_{t-2}$ and $f_{t-3}$, which means we need to update the transition equation:
$$
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
f_{t-3} \
f_{t-4} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
u_{\text{indprod}, t-2} \
u_{\text{income}, t-2} \
u_{\text{sales}, t-2} \
u_{\text{emp}, t-2} \
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
This model cannot be handled out-of-the-box by the DynamicFactor class, but it can be handled by creating a subclass that alters the state space representation in the appropriate way.
First, notice that if we had set factor_order = 4, we would almost have what we wanted. In that case, the last line of the observation equation would be:
$$
\begin{bmatrix}
\vdots \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\vdots & & & & & & & & & & & \vdots \
\lambda_\text{emp,1} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
\vdots
\end{bmatrix}
$$
and the first line of the transition equation would be:
$$
\begin{bmatrix}
f_t \
\vdots
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & a_3 & a_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\vdots & & & & & & & & & & & \vdots \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
f_{t-3} \
f_{t-4} \
\vdots
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
Relative to what we want, we have the following differences:
In the above situation, the $\lambda_{\text{emp}, j}$ are forced to be zero for $j > 0$, and we want them to be estimated as parameters.
We only want the factor to transition according to an AR(2), but under the above situation it is an AR(4).
Our strategy will be to subclass DynamicFactor, and let it do most of the work (setting up the state space representation, etc.) where it assumes that factor_order = 4. The only things we will actually do in the subclass will be to fix those two issues.
First, here is the full code of the subclass; it is discussed below. It is important to note at the outset that none of the methods defined below could have been omitted. In fact, the methods __init__, start_params, param_names, transform_params, untransform_params, and update form the core of all state space models in Statsmodels, not just the DynamicFactor class.
End of explanation
# Create the model
extended_mod = ExtendedDFM(endog)
initial_extended_res = extended_mod.fit(maxiter=1000, disp=False)
extended_res = extended_mod.fit(initial_extended_res.params, method='nm', maxiter=1000)
print(extended_res.summary(separate_params=False))
Explanation: So what did we just do?
__init__
The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with factor_order=4, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks.
start_params
start_params are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short.
param_names
param_names are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names.
transform_params and untransform_params
The optimizer selects possible parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and transform_params is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. untransform_params is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine).
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify these functions for two reasons:
The version in the DynamicFactor class is expecting 3 fewer parameters than we have now. At a minimum, we need to handle the three new parameters.
The version in the DynamicFactor class constrains the factor lag coefficients to be stationary as though it was an AR(4) model. Since we actually have an AR(2) model, we need to re-do the constraint. We also set the last two autoregressive coefficients to be zero here.
update
The most important reason we need to specify a new update method is because we have three new parameters that we need to place into the state space formulation. In particular, we let the parent DynamicFactor.update method handle placing all the parameters except the three new ones into the state space representation, and then we put the last three in manually.
End of explanation
extended_res.plot_coefficients_of_determination(figsize=(8,2));
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
extended_coincident_index = compute_coincident_index(extended_mod, extended_res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, '-', linewidth=1, label='Basic model')
ax.plot(dates, extended_coincident_index, '--', linewidth=3, label='Extended model')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
ax.set(title='Coincident indices, comparison')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
Explanation: Although this model increases the likelihood, it is not preferred by the AIC and BIC measures which penalize the additional three parameters.
Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results.
End of explanation |
8,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EOF analysis in central pacific ocean
In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. It is similar to performing a principal components analysis on the data, except that the EOF method finds both temporal projections and spatial patterns. The term is also interchangeable with the geographically weighted PCAs in geophysics (https://en.wikipedia.org/wiki/Empirical_orthogonal_functions).
Step1: 2. Load SST data
2.1 Read SST
Step2: 2.2 Declare variables
Step3: 3. Detrend
Step4: 4. Remove seasonal cycle
4.1 Rearrange data for seasonal removal
Step5: 4.2 Calculate seasonal cycle
Step6: 4.3 Remove seasonal cycle
Here we utilize the broadcasting property of numpy.array
Step7: 4.4 Rearrange array to original format
Step8: 5. Carry out EOF analysis
5.2 Create an EOF solver to do the EOF analysis
Square-root of cosine of latitude weights are applied before the computation of EOFs.
Step9: 5.3 Retrieve the leading EOFs
Expressed as the correlation between the leading PC time series and the input SST anomalies at each grid point, and the
leading PC time series itself.
Step10: 6. Visualize leading EOFs
Expressed as correlation in the Pacific domain.
6.1 Plot EOFs and PCs
Step11: 6.2 Check variances explained by leading EOFs | Python Code:
% matplotlib inline
import numpy as np
from scipy import signal
import numpy.polynomial.polynomial as poly
from netCDF4 import Dataset
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from eofs.standard import Eof
Explanation: EOF analysis in central pacific ocean
In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. It is similar to performing a principal components analysis on the data, except that the EOF method finds both temporal projections and spatial patterns. The term is also interchangeable with the geographically weighted PCAs in geophysics (https://en.wikipedia.org/wiki/Empirical_orthogonal_functions).
The spatial patterns are the EOFs, and can be thought of as basis functions in terms of variance. The associated temporal projections are the principal components (PCs) and are the temporal coefficients of the EOF patterns.
This notebook is inspired by the blog presented by yiboj, where you can download the sample code and sample data. For convenience, yearly data have been converted into a single file using CDO. However, it should be noted that this notebook simplifies the original source code through some advanced NumPy syntax and fixes a bug where zeros were filled in over land.
The SST data is from the AVHRR Level 4 dataset in the central Pacific Ocean, from 1982 to 2000.
1. Load basic libraries
End of explanation
startY = 1982
endY = 2000
ndy = 36
infile = 'data\eof_data\sst.sw.AVHRR.l4.1982.2000.nc'
ncin = Dataset(infile, 'r')
sst_raw = ncin.variables['analysed_sst'][:]
lons = ncin.variables['lon'][:]
lats = ncin.variables['lat'][:]
ncin.close()
nt,nlat,nlon = sst_raw.shape
ny = nt//ndy  # integer division: number of years
mask = sst_raw.mask
Explanation: 2. Load SST data
2.1 Read SST
End of explanation
sst_detrend = np.empty(sst_raw.shape)
sst_coeffs = np.empty((2, nlat, nlon))
sst_detrend[:,:,:] = np.nan
Explanation: 2.2 Declare variables
End of explanation
x = np.linspace(1,nt,nt)
for i in range(0, nlat):
for j in range(0,nlon):
y = sst_raw[:,i,j]
b = ~np.isnan(y)
coefs = poly.polyfit(x[b], y[b], 1)
sst_coeffs[0,i,j] = coefs[0]
sst_coeffs[1,i,j] = coefs[1]
ffit = poly.polyval(x[b], coefs)
sst_detrend[b,i,j] = y[b] - ffit
Explanation: 3. Detrend
End of explanation
sst_all = sst_detrend.reshape((ndy,ny,nlat,nlon), order='F').transpose((1,0,2,3)) # year, 36, lat, lon
Explanation: 4. Remove seasonal cycle
4.1 Rearrange data for seasonal removal
End of explanation
sst_season = np.mean(sst_all, axis=0)
Explanation: 4.2 Calculate seasonal cycle
End of explanation
sst_diff = sst_all - sst_season
sst_diff = np.ma.masked_array(sst_diff, mask=mask) # have to do this, or fill in zeros in sst_diff.
Explanation: 4.3 Remove seasonal cycle
Here we utilize the broadcasting property of numpy.array
End of explanation
sst_final = sst_diff.transpose((1,0,2,3)).reshape((ndy*ny,nlat,nlon), order='F')
Explanation: 4.4 Rearrange array to original format
End of explanation
coslat = np.cos(np.deg2rad(lats))
wgts = np.sqrt(coslat)[..., np.newaxis]
solver = Eof(sst_final, weights=wgts)
print(coslat.shape)
print(wgts.shape)
Explanation: 5. Carry out EOF analysis
5.2 Create an EOF solver to do the EOF analysis
Square-root of cosine of latitude weights are applied before the computation of EOFs.
End of explanation
eof1 = solver.eofs(neofs=10)
pc1 = solver.pcs(npcs=10, pcscaling=0)
varfrac = solver.varianceFraction()
lambdas = solver.eigenvalues()
Explanation: 5.3 Retrieve the leading EOFs
Expressed as the correlation between the leading PC time series and the input SST anomalies at each grid point, and the
leading PC time series itself.
End of explanation
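As an aside (illustrative), the solver can also reconstruct the anomaly field from a truncated set of leading modes, which is a useful check of how much structure the retained EOFs capture:
# Illustrative: reconstruct the anomalies from the leading 10 modes
sst_rec = solver.reconstructedField(10)
print(sst_rec.shape)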
parallels = np.arange(-90,90,10.)
meridians = np.arange(-180,180,20)
for i in range(0, 3):
fig = plt.figure(figsize=(9,7))
plt.subplot(211)
#ax=fig.add_axes([0.1,0.1,0.8,0.8])
m = Basemap(projection='cyl', llcrnrlon=min(lons), llcrnrlat=min(lats),
urcrnrlon=max(lons), urcrnrlat=max(lats))
x, y = m(*np.meshgrid(lons, lats))
clevs = np.linspace(np.min(eof1[i,:,:].squeeze()), np.max(eof1[i,:,:].squeeze()), 11)
cs = m.contourf(x, y, eof1[i,:,:].squeeze(), clevs, cmap=plt.cm.RdBu_r)
m.drawcoastlines()
#m.fillcontinents(color='#000000',lake_color='#99ffff')
m.drawparallels(parallels,labels=[1,0,0,0])
m.drawmeridians(meridians,labels=[1,0,0,1])
#cb = plt.colorbar(cs, orientation='horizontal')
cb = m.colorbar(cs, 'right', size='5%', pad='2%')
cb.set_label('EOF', fontsize=12)
plt.title('EOF ' + str(i+1), fontsize=16)
plt.subplot(212)
days = [startY+(x*10+1)/365.0 for x in range(0, nt)]
plt.plot(days, pc1[:,i], linewidth=2)
plt.xticks(range(startY, endY), rotation='vertical')
plt.axhline(0, color='k')
plt.xlabel('Year')
plt.ylabel('PC Amplitude')
plt.xlim(startY, endY)
plt.ylim(np.min(pc1.squeeze()), np.max(pc1.squeeze()))
plt.xticks(range(startY, endY))
plt.tight_layout()
Explanation: 6. Visualize leading EOFs
Expressed as correlation in the Pacific domain.
6.1 Plot EOFs and PCs
End of explanation
plt.figure(figsize=(11,6))
eof_num = range(1, 16)
plt.plot(eof_num, varfrac[0:15], linewidth=2)
plt.plot(eof_num, varfrac[0:15], linestyle='None', marker="o", color='r', markersize=8)
plt.axhline(0, color='k')
plt.xticks(range(1, 16))
plt.title('Fraction of the total variance represented by each EOF')
plt.xlabel('EOF #')
plt.ylabel('Variance Fraction')
plt.xlim(1, 15)
plt.ylim(np.min(varfrac), np.max(varfrac)+0.01)
Explanation: 6.2 Check variances explained by leading EOFs
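Relatedly (illustrative), the eofs package also implements North et al.'s rule of thumb for the typical sampling error of each eigenvalue, which helps judge which modes are well separated:
# Illustrative: typical errors of the leading eigenvalues (North et al. 1982)
errors = solver.northTest(neigs=15, vfscaled=True)
print(errors)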
End of explanation |
8,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Transformer-based recommendation system
Author
Step1: Prepare the data
Download and prepare the DataFrames
First, let's download the movielens data.
The downloaded folder will contain three data files
Step2: Then, we load the data into pandas DataFrames with their proper column names.
Step3: Here, we do some simple data processing to fix the data types of the columns.
Step4: Each movie has multiple genres. We split them into separate columns in the movies
DataFrame.
Step5: Transform the movie ratings data into sequences
First, let's sort the ratings data using the unix_timestamp, and then group the
movie_id values and the rating values by user_id.
The output DataFrame will have a record for each user_id, with two ordered lists
(sorted by rating datetime)
Step6: Now, let's split the movie_ids list into a set of sequences of a fixed length.
We do the same for the ratings. Set the sequence_length variable to change the length
of the input sequence to the model. You can also change the step_size to control the
number of sequences to generate for each user.
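One possible implementation of this windowing step is sketched below (illustrative; the helper name create_sequences and its exact truncation behavior are our assumptions, not fixed by the text):
def create_sequences(values, window_size, step_size):
    # Slide a fixed-size window over the list, advancing by step_size
    sequences = []
    start = 0
    while start + window_size <= len(values):
        sequences.append(values[start : start + window_size])
        start += step_size
    return sequences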
Step7: After that, we process the output to have each sequence as a separate record in
the DataFrame. In addition, we join the user features with the ratings data.
Step8: With sequence_length of 4 and step_size of 2, we end up with 498,623 sequences.
Finally, we split the data into training and testing splits, with 85% and 15% of
the instances, respectively, and store them to CSV files.
Step9: Define metadata
Step10: Create tf.data.Dataset for training and evaluation
Step11: Create model inputs
Step12: Encode input features
The encode_input_features method works as follows
Step13: Create a BST model
Step14: Run training and evaluation experiment | Python Code:
import os
import math
from zipfile import ZipFile
from urllib.request import urlretrieve
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import StringLookup
Explanation: A Transformer-based recommendation system
Author: Khalid Salama<br>
Date created: 2020/12/30<br>
Last modified: 2020/12/30<br>
Description: Rating prediction using the Behavior Sequence Transformer (BST) model on the Movielens dataset.
Introduction
This example demonstrates the Behavior Sequence Transformer (BST)
model, by Qiwei Chen et al., using the Movielens dataset.
The BST model leverages the sequential behaviour of the users in watching and rating movies,
as well as user profile and movie features, to predict the rating of the user to a target movie.
More precisely, the BST model aims to predict the rating of a target movie by accepting
the following inputs:
A fixed-length sequence of movie_ids watched by a user.
A fixed-length sequence of the ratings for the movies watched by a user.
A set of user features, including user_id, sex, occupation, and age_group.
A set of genres for each movie in the input sequence and the target movie.
A target_movie_id for which to predict the rating.
This example modifies the original BST model in the following ways:
We incorporate the movie features (genres) into the processing of the embedding of each
movie of the input sequence and the target movie, rather than treating them as "other features"
outside the transformer layer.
We utilize the ratings of movies in the input sequence, along with their positions
in the sequence, to update them before feeding them into the self-attention layer.
Note that this example should be run with TensorFlow 2.4 or higher.
The dataset
We use the 1M version of the Movielens dataset.
The dataset includes around 1 million ratings from 6000 users on 4000 movies,
along with some user features and movie genres. In addition, the timestamp of each user-movie
rating is provided, which allows creating sequences of movie ratings for each user,
as expected by the BST model.
Setup
End of explanation
urlretrieve("http://files.grouplens.org/datasets/movielens/ml-1m.zip", "movielens.zip")
ZipFile("movielens.zip", "r").extractall()
Explanation: Prepare the data
Download and prepare the DataFrames
First, let's download the movielens data.
The downloaded folder will contain three data files: users.dat, movies.dat,
and ratings.dat.
End of explanation
users = pd.read_csv(
"ml-1m/users.dat",
sep="::",
names=["user_id", "sex", "age_group", "occupation", "zip_code"],
)
ratings = pd.read_csv(
"ml-1m/ratings.dat",
sep="::",
names=["user_id", "movie_id", "rating", "unix_timestamp"],
)
movies = pd.read_csv(
"ml-1m/movies.dat", sep="::", names=["movie_id", "title", "genres"]
)
Explanation: Then, we load the data into pandas DataFrames with their proper column names.
End of explanation
users["user_id"] = users["user_id"].apply(lambda x: f"user_{x}")
users["age_group"] = users["age_group"].apply(lambda x: f"group_{x}")
users["occupation"] = users["occupation"].apply(lambda x: f"occupation_{x}")
movies["movie_id"] = movies["movie_id"].apply(lambda x: f"movie_{x}")
ratings["movie_id"] = ratings["movie_id"].apply(lambda x: f"movie_{x}")
ratings["user_id"] = ratings["user_id"].apply(lambda x: f"user_{x}")
ratings["rating"] = ratings["rating"].apply(lambda x: float(x))
Explanation: Here, we do some simple data processing to fix the data types of the columns.
End of explanation
genres = [
"Action",
"Adventure",
"Animation",
"Children's",
"Comedy",
"Crime",
"Documentary",
"Drama",
"Fantasy",
"Film-Noir",
"Horror",
"Musical",
"Mystery",
"Romance",
"Sci-Fi",
"Thriller",
"War",
"Western",
]
for genre in genres:
movies[genre] = movies["genres"].apply(
lambda values: int(genre in values.split("|"))
)
Explanation: Each movie has multiple genres. We split them into separate columns in the movies
DataFrame.
End of explanation
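As a side note, pandas can produce the same one-hot columns in a single call; this sketch only inspects the result rather than modifying movies:
# Equivalent one-hot encoding of the pipe-separated genres column
genre_dummies = movies["genres"].str.get_dummies(sep="|")
print(genre_dummies.sum().sort_values(ascending=False).head())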
ratings_group = ratings.sort_values(by=["unix_timestamp"]).groupby("user_id")
ratings_data = pd.DataFrame(
data={
"user_id": list(ratings_group.groups.keys()),
"movie_ids": list(ratings_group.movie_id.apply(list)),
"ratings": list(ratings_group.rating.apply(list)),
"timestamps": list(ratings_group.unix_timestamp.apply(list)),
}
)
Explanation: Transform the movie ratings data into sequences
First, let's sort the ratings data using the unix_timestamp, and then group the
movie_id values and the rating values by user_id.
The output DataFrame will have a record for each user_id, with two ordered lists
(sorted by rating datetime): the movies they have rated, and their ratings of these movies.
End of explanation
sequence_length = 4
step_size = 2
def create_sequences(values, window_size, step_size):
sequences = []
start_index = 0
while True:
end_index = start_index + window_size
seq = values[start_index:end_index]
if len(seq) < window_size:
seq = values[-window_size:]
if len(seq) == window_size:
sequences.append(seq)
break
sequences.append(seq)
start_index += step_size
return sequences
ratings_data.movie_ids = ratings_data.movie_ids.apply(
lambda ids: create_sequences(ids, sequence_length, step_size)
)
ratings_data.ratings = ratings_data.ratings.apply(
lambda ids: create_sequences(ids, sequence_length, step_size)
)
del ratings_data["timestamps"]
Explanation: Now, let's split the movie_ids list into a set of sequences of a fixed length.
We do the same for the ratings. Set the sequence_length variable to change the length
of the input sequence to the model. You can also change the step_size to control the
number of sequences to generate for each user.
End of explanation
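To see what create_sequences produces, here is a quick sanity check on a toy list (the values are made up for illustration):
# With window_size=4 and step_size=2, a 9-item list yields overlapping windows,
# plus a final window anchored at the end of the list:
print(create_sequences(list(range(9)), window_size=4, step_size=2))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [5, 6, 7, 8]]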
ratings_data_movies = ratings_data[["user_id", "movie_ids"]].explode(
"movie_ids", ignore_index=True
)
ratings_data_rating = ratings_data[["ratings"]].explode("ratings", ignore_index=True)
ratings_data_transformed = pd.concat([ratings_data_movies, ratings_data_rating], axis=1)
ratings_data_transformed = ratings_data_transformed.join(
users.set_index("user_id"), on="user_id"
)
ratings_data_transformed.movie_ids = ratings_data_transformed.movie_ids.apply(
lambda x: ",".join(x)
)
ratings_data_transformed.ratings = ratings_data_transformed.ratings.apply(
lambda x: ",".join([str(v) for v in x])
)
del ratings_data_transformed["zip_code"]
ratings_data_transformed.rename(
columns={"movie_ids": "sequence_movie_ids", "ratings": "sequence_ratings"},
inplace=True,
)
Explanation: After that, we process the output to have each sequence in a separate record in
the DataFrame. In addition, we join the user features with the ratings data.
End of explanation
random_selection = np.random.rand(len(ratings_data_transformed.index)) <= 0.85
train_data = ratings_data_transformed[random_selection]
test_data = ratings_data_transformed[~random_selection]
train_data.to_csv("train_data.csv", index=False, sep="|", header=False)
test_data.to_csv("test_data.csv", index=False, sep="|", header=False)
Explanation: With sequence_length of 4 and step_size of 2, we end up with 498,623 sequences.
Finally, we split the data into training and testing splits, with 85% and 15% of
the instances, respectively, and store them to CSV files.
End of explanation
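As a quick sanity check (not in the original), the realized proportions of the random split can be verified:
n_total = len(ratings_data_transformed)
print(f"train: {len(train_data) / n_total:.3f}, test: {len(test_data) / n_total:.3f}")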
CSV_HEADER = list(ratings_data_transformed.columns)
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
"user_id": list(users.user_id.unique()),
"movie_id": list(movies.movie_id.unique()),
"sex": list(users.sex.unique()),
"age_group": list(users.age_group.unique()),
"occupation": list(users.occupation.unique()),
}
USER_FEATURES = ["sex", "age_group", "occupation"]
MOVIE_FEATURES = ["genres"]
Explanation: Define metadata
End of explanation
def get_dataset_from_csv(csv_file_path, shuffle=False, batch_size=128):
def process(features):
movie_ids_string = features["sequence_movie_ids"]
sequence_movie_ids = tf.strings.split(movie_ids_string, ",").to_tensor()
# The last movie id in the sequence is the target movie.
features["target_movie_id"] = sequence_movie_ids[:, -1]
features["sequence_movie_ids"] = sequence_movie_ids[:, :-1]
ratings_string = features["sequence_ratings"]
sequence_ratings = tf.strings.to_number(
tf.strings.split(ratings_string, ","), tf.dtypes.float32
).to_tensor()
# The last rating in the sequence is the target for the model to predict.
target = sequence_ratings[:, -1]
features["sequence_ratings"] = sequence_ratings[:, :-1]
return features, target
dataset = tf.data.experimental.make_csv_dataset(
csv_file_path,
batch_size=batch_size,
column_names=CSV_HEADER,
num_epochs=1,
header=False,
field_delim="|",
shuffle=shuffle,
).map(process)
return dataset
Explanation: Create tf.data.Dataset for training and evaluation
End of explanation
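To verify the parsing logic, one can pull a single batch and inspect the tensor shapes (an illustrative sketch that assumes train_data.csv was written above):
example_ds = get_dataset_from_csv("train_data.csv", batch_size=2)
for features, target in example_ds.take(1):
    for name, value in features.items():
        print(name, value.shape)
    print("target", target.shape)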
def create_model_inputs():
return {
"user_id": layers.Input(name="user_id", shape=(1,), dtype=tf.string),
"sequence_movie_ids": layers.Input(
name="sequence_movie_ids", shape=(sequence_length - 1,), dtype=tf.string
),
"target_movie_id": layers.Input(
name="target_movie_id", shape=(1,), dtype=tf.string
),
"sequence_ratings": layers.Input(
name="sequence_ratings", shape=(sequence_length - 1,), dtype=tf.float32
),
"sex": layers.Input(name="sex", shape=(1,), dtype=tf.string),
"age_group": layers.Input(name="age_group", shape=(1,), dtype=tf.string),
"occupation": layers.Input(name="occupation", shape=(1,), dtype=tf.string),
}
Explanation: Create model inputs
End of explanation
def encode_input_features(
inputs,
include_user_id=True,
include_user_features=True,
include_movie_features=True,
):
encoded_transformer_features = []
encoded_other_features = []
other_feature_names = []
if include_user_id:
other_feature_names.append("user_id")
if include_user_features:
other_feature_names.extend(USER_FEATURES)
## Encode user features
for feature_name in other_feature_names:
# Convert the string input values into integer indices.
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
idx = StringLookup(vocabulary=vocabulary, mask_token=None, num_oov_indices=0)(
inputs[feature_name]
)
# Compute embedding dimensions
embedding_dims = int(math.sqrt(len(vocabulary)))
# Create an embedding layer with the specified dimensions.
embedding_encoder = layers.Embedding(
input_dim=len(vocabulary),
output_dim=embedding_dims,
name=f"{feature_name}_embedding",
)
# Convert the index values to embedding representations.
encoded_other_features.append(embedding_encoder(idx))
## Create a single embedding vector for the user features
if len(encoded_other_features) > 1:
encoded_other_features = layers.concatenate(encoded_other_features)
elif len(encoded_other_features) == 1:
encoded_other_features = encoded_other_features[0]
else:
encoded_other_features = None
## Create a movie embedding encoder
movie_vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY["movie_id"]
movie_embedding_dims = int(math.sqrt(len(movie_vocabulary)))
# Create a lookup to convert string values to integer indices.
movie_index_lookup = StringLookup(
vocabulary=movie_vocabulary,
mask_token=None,
num_oov_indices=0,
name="movie_index_lookup",
)
# Create an embedding layer with the specified dimensions.
movie_embedding_encoder = layers.Embedding(
input_dim=len(movie_vocabulary),
output_dim=movie_embedding_dims,
name=f"movie_embedding",
)
# Create a vector lookup for movie genres.
genre_vectors = movies[genres].to_numpy()
movie_genres_lookup = layers.Embedding(
input_dim=genre_vectors.shape[0],
output_dim=genre_vectors.shape[1],
embeddings_initializer=tf.keras.initializers.Constant(genre_vectors),
trainable=False,
name="genres_vector",
)
# Create a processing layer for genres.
movie_embedding_processor = layers.Dense(
units=movie_embedding_dims,
activation="relu",
name="process_movie_embedding_with_genres",
)
## Define a function to encode a given movie id.
def encode_movie(movie_id):
# Convert the string input values into integer indices.
movie_idx = movie_index_lookup(movie_id)
movie_embedding = movie_embedding_encoder(movie_idx)
encoded_movie = movie_embedding
if include_movie_features:
movie_genres_vector = movie_genres_lookup(movie_idx)
encoded_movie = movie_embedding_processor(
layers.concatenate([movie_embedding, movie_genres_vector])
)
return encoded_movie
## Encoding target_movie_id
target_movie_id = inputs["target_movie_id"]
encoded_target_movie = encode_movie(target_movie_id)
## Encoding sequence movie_ids.
sequence_movies_ids = inputs["sequence_movie_ids"]
encoded_sequence_movies = encode_movie(sequence_movies_ids)
# Create positional embedding.
position_embedding_encoder = layers.Embedding(
input_dim=sequence_length,
output_dim=movie_embedding_dims,
name="position_embedding",
)
positions = tf.range(start=0, limit=sequence_length - 1, delta=1)
encoded_positions = position_embedding_encoder(positions)
# Retrieve sequence ratings to incorporate them into the encoding of the movie.
sequence_ratings = tf.expand_dims(inputs["sequence_ratings"], -1)
# Add the positional encoding to the movie encodings and multiply them by rating.
encoded_sequence_movies_with_position_and_rating = layers.Multiply()(
[(encoded_sequence_movies + encoded_positions), sequence_ratings]
)
# Construct the transformer inputs.
for encoded_movie in tf.unstack(
encoded_sequence_movies_with_position_and_rating, axis=1
):
encoded_transformer_features.append(tf.expand_dims(encoded_movie, 1))
encoded_transformer_features.append(encoded_target_movie)
encoded_transformer_features = layers.concatenate(
encoded_transformer_features, axis=1
)
return encoded_transformer_features, encoded_other_features
Explanation: Encode input features
The encode_input_features method works as follows:
Each categorical user feature is encoded using layers.Embedding, with embedding
dimension equals to the square root of the vocabulary size of the feature.
The embeddings of these features are concatenated to form a single input tensor.
Each movie in the movie sequence and the target movie is encoded layers.Embedding,
where the dimension size is the square root of the number of movies.
A multi-hot genres vector for each movie is concatenated with its embedding vector,
and processed using a non-linear layers.Dense to output a vector of the same movie
embedding dimensions.
A positional embedding is added to each movie embedding in the sequence, and then
multiplied by its rating from the ratings sequence.
The target movie embedding is concatenated to the sequence movie embeddings, producing
a tensor with the shape of [batch size, sequence length, embedding size], as expected
by the attention layer for the transformer architecture.
The method returns a tuple of two elements: encoded_transformer_features and
encoded_other_features.
End of explanation
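The core lookup-then-embed pattern used above can be illustrated in isolation (a toy sketch with a made-up vocabulary):
toy_vocab = ["movie_1", "movie_2", "movie_3", "movie_4"]
toy_lookup = StringLookup(vocabulary=toy_vocab, mask_token=None, num_oov_indices=0)
toy_dims = int(math.sqrt(len(toy_vocab)))  # embedding size = sqrt(vocabulary size)
toy_embedding = layers.Embedding(input_dim=len(toy_vocab), output_dim=toy_dims)
print(toy_embedding(toy_lookup(tf.constant(["movie_3"]))).shape)  # (1, 2)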
include_user_id = False
include_user_features = False
include_movie_features = False
hidden_units = [256, 128]
dropout_rate = 0.1
num_heads = 3
def create_model():
inputs = create_model_inputs()
transformer_features, other_features = encode_input_features(
inputs, include_user_id, include_user_features, include_movie_features
)
# Create a multi-headed attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=transformer_features.shape[2], dropout=dropout_rate
)(transformer_features, transformer_features)
# Transformer block.
attention_output = layers.Dropout(dropout_rate)(attention_output)
x1 = layers.Add()([transformer_features, attention_output])
x1 = layers.LayerNormalization()(x1)
x2 = layers.LeakyReLU()(x1)
x2 = layers.Dense(units=x2.shape[-1])(x2)
x2 = layers.Dropout(dropout_rate)(x2)
transformer_features = layers.Add()([x1, x2])
transformer_features = layers.LayerNormalization()(transformer_features)
features = layers.Flatten()(transformer_features)
# Include the other features.
if other_features is not None:
features = layers.concatenate(
[features, layers.Reshape([other_features.shape[-1]])(other_features)]
)
# Fully-connected layers.
for num_units in hidden_units:
features = layers.Dense(num_units)(features)
features = layers.BatchNormalization()(features)
features = layers.LeakyReLU()(features)
features = layers.Dropout(dropout_rate)(features)
outputs = layers.Dense(units=1)(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
model = create_model()
Explanation: Create a BST model
End of explanation
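Before compiling, it can be helpful to inspect the assembled architecture:
# Print a layer-by-layer overview of the BST model defined above
model.summary()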
# Compile the model.
model.compile(
optimizer=keras.optimizers.Adagrad(learning_rate=0.01),
loss=keras.losses.MeanSquaredError(),
metrics=[keras.metrics.MeanAbsoluteError()],
)
# Read the training data.
train_dataset = get_dataset_from_csv("train_data.csv", shuffle=True, batch_size=265)
# Fit the model with the training data.
model.fit(train_dataset, epochs=5)
# Read the test data.
test_dataset = get_dataset_from_csv("test_data.csv", batch_size=265)
# Evaluate the model on the test data.
_, mae = model.evaluate(test_dataset, verbose=0)  # evaluate returns (loss, MAE)
print(f"Test MAE: {round(mae, 3)}")
Explanation: Run training and evaluation experiment
End of explanation |
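After evaluation, the trained model can also score individual batches; a minimal sketch using the test dataset defined above:
for features, target in test_dataset.take(1):
    predictions = model.predict(features)
    print("predicted:", predictions[:5].flatten())
    print("actual:   ", target[:5].numpy())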
8,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step 1
Step1: Get rid of void coordinates
Step2: Recap of step 0
Adopting building coordinates
It turns out that there is a slight mismatch between real world building coordinates w.r.t given data. So that only median building dimension info is reserved from the building info we got from online open data at data.detroitmi.gov.
Step3: For example
Step4: In real world data, this corresponds to
Step5: The coordinate of this building from data.detroitmi.gov is slightly different from data given in our course material.
Only building dimension info is adopted for our analysis.
Step6: Visualization
Step7: Distribution of blighted buildings | Python Code:
data_events = pd.read_csv('../data/events.csv')
data_events.head(10)
data_events.shape
# To get rid of duplicates with same coordinates and possibly different address names
building_pool = data_events.drop_duplicates(subset=['lon','lat'])
building_pool.shape
# 1. sort data according to longitude
# init new_data
# 2. for each record:
# if record[lon] - prev[lon] > length:
# add new record into new_data
# else:
# find previous coords that are close
# if no coords in bbox:
# add new record into new_data
# else:
# for each of these coords:
# if record in bbox:
# append event_id
#
# At the same time, if building is assigned one permit or more for demolition, blighted will be assigned to one.
#
def gen_buildings(data):
'''generate buildings from coordinates'''
from assign_bbox import nearest_pos, is_in_bbox, raw_dist # defined in assign_bbox.py in current dir
new_data = {'addr': [], 'lon': [], 'lat': [], 'event_id_list': [], 'blighted': []}
data_sorted = data.sort_values(by='lon', inplace=False)
length = 4.11e-4 # longitude
width = 2.04e-4 # latitude
prev_lon = 0
prev_lat = 0
max_distX = abs(length/2)
max_distY = abs(width/2)
for i, entry in data_sorted.iterrows():
lon = entry['lon']
lat = entry['lat']
b = entry['type']
if abs(lon - prev_lon) > length:
new_data['addr'].append(entry['addr'])
new_data['lon'].append(lon)
new_data['lat'].append(lat)
# below line is different from the loop for events_part2
new_data['event_id_list'].append([entry['event_id']])
if b == 4: # if demolition permit
new_data['blighted'].append(1)
else:
new_data['blighted'].append(0)
prev_lon = lon
prev_lat = lat
else:
listX = np.array(new_data['lon'])
listY = np.array(new_data['lat'])
poses = nearest_pos((lon,lat), listX, listY, length, width)
# if already in new_data
if poses.size > 0:
has_pos = False
for pos in poses:
temp_lon = new_data['lon'][pos]
temp_lat = new_data['lat'][pos]
if (abs(temp_lon - lon) < max_distX) & (abs(temp_lat - lat) < max_distY):
new_data['event_id_list'][pos] += [entry['event_id']]
if b == 4:
new_data['blighted'][pos] = 1
has_pos = True
if has_pos:
continue
new_data['addr'].append(entry['addr'])
new_data['lon'].append(lon)
new_data['lat'].append(lat)
# below line is different from the loop for events_part2
new_data['event_id_list'].append([entry['event_id']])
if b == 4:
new_data['blighted'].append(1)
else:
new_data['blighted'].append(0)
prev_lon = lon
prev_lat = lat
return pd.DataFrame(new_data)
buildings_concise = gen_buildings(building_pool)
buildings_concise.shape  # shorter than before
buildings_concise.tail()
buildings = buildings_concise
Explanation: Step 1: Building List and Labels
Collecting instances from 311 calls, crimes, blight violations, and demolition permits.
Data already cleaned by this notebook
The collection of data was saved at ../data/events.csv
End of explanation
buildings = buildings[(buildings['lat']>42.25) & (buildings['lat']<42.5) & (buildings['lon']>-83.3) & (buildings['lon']<-82.9)]
buildings.shape
buildings['blighted'].value_counts()
Explanation: Get rid of void coordinates
End of explanation
data_dir = '../data/'
buildings_step_0 = pd.read_csv(data_dir+'buildings_step_0.csv')
permits = pd.read_csv(data_dir+'permits.csv')
permits = permits[['PARCEL_NO', 'BLD_PERMIT_TYPE', 'addr', 'lon', 'lat']]
permits['BLD_PERMIT_TYPE'].unique()
Explanation: Recap of step 0
Adopting building coordinates
It turns out that there is a slight mismatch between the real-world building coordinates and the given data, so only the median building dimension info is retained from the building info we got from online open data at data.detroitmi.gov.
End of explanation
demo01 = permits.loc[0,['PARCEL_NO','addr','lon','lat']]
print(demo01)
Explanation: For example: the very first entry of permit has coordinate:
End of explanation
c = buildings_step_0['addr'].apply(lambda x: x == permits.loc[0,'addr'])
buildings_step_0[c][['PARCELNO','lon','lat','addr']]
Explanation: In real world data, this corresponds to:
End of explanation
length = 0.000411
width = 0.000204 # These results come from step 0.
buildings.loc[:,'llcrnrlon'] = buildings.loc[:,'lon'] - length/2
buildings.loc[:,'llcrnrlat'] = buildings.loc[:,'lat'] - width/2
buildings.loc[:,'urcrnrlon'] = buildings.loc[:,'lon'] + length/2
buildings.loc[:,'urcrnrlat'] = buildings.loc[:,'lat'] + width/2
buildings.loc[:,'building_id'] = np.arange(0,buildings.shape[0])
buildings = buildings.reindex()
buildings.tail()
buildings.to_csv('../data/buildings.csv', index=False)
Explanation: The coordinate of this building from data.detroitmi.gov is slightly different from data given in our course material.
Only building dimension info is adopted for our analysis.
End of explanation
from bbox import draw_screen_bbox
from matplotlib import pyplot as plt
%matplotlib inline
buildings = pd.read_csv('../data/buildings.csv')
bboxes = buildings.loc[:,['llcrnrlon','llcrnrlat','urcrnrlon','urcrnrlat']]
bboxes = bboxes.to_numpy()  # .as_matrix() was removed in pandas 1.0
fig = plt.figure(figsize=(8,6), dpi=2000)
for box in bboxes:
draw_screen_bbox(box, fig)
plt.xlim(-83.3,-82.9)
plt.ylim(42.25,42.45)
plt.savefig('../data/buildings_distribution.png')
plt.show()
Explanation: Visualization
End of explanation
blighted_buildings = buildings[buildings.loc[:,'blighted'] == 1]
blighted_bboxes = blighted_buildings.loc[:,['llcrnrlon','llcrnrlat','urcrnrlon','urcrnrlat']]
blighted_bboxes = blighted_bboxes.to_numpy()  # .as_matrix() was removed in pandas 1.0
fig = plt.figure(figsize=(8,6), dpi=2000)
for box in blighted_bboxes:
draw_screen_bbox(box, fig)
plt.xlim(-83.3,-82.9)
plt.ylim(42.25,42.46)
plt.title("Distribution of Blighted Buildings in Detroit")
plt.savefig('../data/blighted_buildings_distribution.png')
plt.show()
Explanation: Distribution of blighted buildings
End of explanation |
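A quick summary statistic (added here for illustration) shows how imbalanced the blight label is:
# Fraction of buildings labeled as blighted -- useful for judging class imbalance
print(blighted_buildings.shape[0] / buildings.shape[0])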
8,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
8,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducing AI Platform Training Service
Learning Objectives
Step1: Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes
Step2: Jupyter allows the substitution of Python variables into bash commands when using the !<cmd> format.
It is also possible using the %%bash magic but requires an additional parameter.
Step3: Move code into a python package
When you execute an AI Platform Training Service training job, the service zips up your code and ships it to the Cloud so it can be run on Cloud infrastructure. In order to do this, AI Platform Training Service requires your code to be a Python package.
A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.
Create Package Directory and __init__.py
The bash command touch creates an empty file in the specified location.
Step4: Paste existing code into model.py
A Python package requires our code to be in a .py file, as opposed to notebook cells. So we simply copy and paste our existing code for the previous notebook into a single file.
The %%writefile magic writes the contents of its cell to disk with the specified name.
Exercise 1
In the cell below, write the content of the model.py to the file taxifaremodel/model.py. This will allow us to package the model we
developed in the previous labs so that we can deploy it to AI Platform Training Service. You'll also need to reuse the input functions and the EvalSpec, TrainSpec, RunConfig, etc. that we implemented in the previous labs.
Complete all the TODOs in the cell below by copy/pasting the code we developed in the previous labs. This will write all the necessary components we developed in our notebook to a single model.py file.
Once we have the code running well locally, we will execute the next cells to train and deploy your packaged model to AI Platform Training Service.
Step5: Modify code to read data from and write checkpoint files to GCS
If you look closely above, you'll notice two changes to the code
The input function now supports reading a list of files matching a file name pattern instead of just a single CSV
This is useful because large datasets tend to exist in shards.
The train and evaluate portion is wrapped in a function that takes a parameter dictionary as an argument.
This is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both.
We specify these parameters at run time via the command line, which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file.
Exposing parameters to the command line also allows us to use AI Platform Training Service's automatic hyperparameter tuning feature which we'll cover in a future lesson.
Exercise 2
Add two additional command line parameter parsers to the list we've started below. You should add code to parse command line parameters for the output_dir and the job-dir. Look at the examples below to make sure you have the correct format, including a help description and required specification.
Step6: Train using AI Platform Training Service (local)
AI Platform Training Service comes with a local test tool (gcloud ai-platform local train) to ensure we've packaged our code correctly. It's best to first run that for a few steps before trying a Cloud job.
The arguments before -- \ are for AI Platform Training Service
- package-path
Step7: Train using AI Platform Training Service (Cloud)
To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service
Step8: You can track your job and view logs using cloud console. It will take 5-10 minutes to complete. Wait until the job finishes before moving on.
Deploy model
Now let's take our exported SavedModel and deploy it behind a REST API. To do so we'll use AI Platform Training Service's managed TF Serving feature which auto-scales based on load.
Step9: AI Platform Training Service uses a model versioning system. First you create a model folder, and within the folder you create versions of the model.
Note
Step10: Online prediction
Now that we have deployed our model behind a production grade REST API, we can invoke it remotely.
We could invoke it directly calling the REST API with an HTTP POST request reference docs, however AI Platform Training Service provides an easy way to invoke it via command line.
Invoke prediction REST API via command line
First we write our prediction requests to file in json format
Step11: Then we use gcloud ai-platform predict and specify the model name and location of the json file. Since we don't explicitly specify --version, the default model version will be used.
Since we only have one version it is already the default, but if we had multiple model versions we can designate the default using gcloud ai-platform versions set-default or using cloud console
Step12: Invoke prediction REST API via python
Exercise 3
In the cell below, use the Google Python client library to query the model you just deployed on AI Platform Training Service. Find the estimated taxi fare for a ride with the following properties
- ride occurs on Monday
- at 8 | Python Code:
# Uncomment and run if you need to update your Google SDK
# !sudo apt-get update && sudo apt-get --only-upgrade install google-cloud-sdk
Explanation: Introducing AI Platform Training Service
Learning Objectives:
- Learn how to make code compatible with AI Platform Training Service
- Train your model using cloud infrastructure via AI Platform Training Service
- Deploy your model behind a production grade REST API using AI Platform Training Service
Introduction
In this notebook we'll make the jump from training and predicting locally, to doing both in the cloud. We'll take advantage of Google Cloud's AI Platform Training Service.
AI Platform Training Service is a managed service that allows the training and deployment of ML models without having to provision or maintain servers. The infrastructure is handled seamlessly by the managed service for us.
End of explanation
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for AI Platform Training Service
TFVERSION = "1.14" # TF version for AI Platform Training Service to use
Explanation: Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes:
Upload data to Google Cloud Storage
Move code into a Python package
Modify code to read data from and write checkpoint files to GCS
Upload data to Google Cloud Storage (GCS)
Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.
Specify your project name and bucket name in the cell below.
End of explanation
!gcloud config set project {PROJECT}
!gsutil mb -l {REGION} gs://{BUCKET}
!gsutil -m cp *.csv gs://{BUCKET}/taxifare/smallinput/
Explanation: Jupyter allows the substitution of Python variables into bash commands when using the !<cmd> format.
It is also possible using the %%bash magic but requires an additional parameter.
End of explanation
%%bash
mkdir taxifaremodel
touch taxifaremodel/__init__.py
Explanation: Move code into a python package
When you execute an AI Platform Training Service training job, the service zips up your code and ships it to the Cloud so it can be run on Cloud infrastructure. In order to do this, AI Platform Training Service requires your code to be a Python package.
A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.
Create Package Directory and __init__.py
The bash command touch creates an empty file in the specified location.
End of explanation
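A quick way to confirm the package skeleton exists (an optional check, not part of the original lab):
!ls taxifaremodel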
%%writefile taxifaremodel/model.py
# TODO: Your code goes here. Import the necessary libraries (e.g. tensorflow, etc)
CSV_COLUMN_NAMES = # TODO: Your code goes here
CSV_DEFAULTS = # TODO: Your code goes here
FEATURE_NAMES = # TODO: Your code goes here
def parse_row(row):
# TODO: Your code goes here
return features, label
def read_dataset(csv_path):
# TODO: Your code goes here
return dataset
def train_input_fn(csv_path, batch_size = 128):
# TODO: Your code goes here
return dataset
def eval_input_fn(csv_path, batch_size = 128):
# TODO: Your code goes here
return dataset
def serving_input_receiver_fn():
# TODO: Your code goes here
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = receiver_tensors)
def my_rmse(labels, predictions):
# TODO: Your code goes here
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
def create_model(model_dir, train_steps):
# TODO: Your code goes here
return model
def train_and_evaluate(params):
OUTDIR = params["output_dir"]
TRAIN_DATA_PATH = params["train_data_path"]
EVAL_DATA_PATH = params["eval_data_path"]
TRAIN_STEPS = params["train_steps"]
model = # TODO: Your code goes here.
train_spec = # TODO: Your code goes here
exporter = # TODO: Your code goes here
eval_spec = # TODO: Your code goes here
tf.logging.set_verbosity(tf.logging.INFO)
shutil.rmtree(path = OUTDIR, ignore_errors = True)
tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)
Explanation: Paste existing code into model.py
A Python package requires our code to be in a .py file, as opposed to notebook cells. So we simply copy and paste our existing code for the previous notebook into a single file.
The %%writefile magic writes the contents of its cell to disk with the specified name.
Exercise 1
In the cell below, write the content of the model.py to the file taxifaremodel/model.py. This will allow us to package the model we
developed in the previous labs so that we can deploy it to AI Platform Training Service. You'll also need to reuse the input functions and the EvalSpec, TrainSpec, RunConfig, etc. that we implemented in the previous labs.
Complete all the TODOs in the cell below by copy/pasting the code we developed in the previous labs. This will write all the necessary components we developed in our notebook to a single model.py file.
Once we have the code running well locally, we will execute the next cells to train and deploy your packaged model to AI Platform Training Service.
End of explanation
%%writefile taxifaremodel/task.py
import argparse
import json
import os
from . import model
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--train_data_path",
help = "GCS or local path to training data",
required = True
)
parser.add_argument(
"--train_steps",
help = "Steps to run the training job for (default: 1000)",
type = int,
default = 1000
)
parser.add_argument(
"--eval_data_path",
help = "GCS or local path to evaluation data",
required = True
)
parser.add_argument(
# TODO: Your code goes here
)
parser.add_argument(
# TODO: Your code goes here
)
args = parser.parse_args().__dict__
model.train_and_evaluate(args)
Explanation: Modify code to read data from and write checkpoint files to GCS
If you look closely above, you'll notice two changes to the code
The input function now supports reading a list of files matching a file name pattern instead of just a single CSV
This is useful because large datasets tend to exist in shards.
The train and evaluate portion is wrapped in a function that takes a parameter dictionary as an argument.
This is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both.
We specify these parameters at run time via the command line, which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file.
Exposing parameters to the command line also allows us to use AI Platform Training Service's automatic hyperparameter tuning feature which we'll cover in a future lesson.
Exercise 2
Add two additional command line parameter parsers to the list we've started below. You should add code to parse command line parameters for the output_dir and the job-dir. Look at the examples below to make sure you have the correct format, including a help description and required specification.
End of explanation
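For reference, one possible shape of the two missing parsers, mirroring the existing ones (the exact help strings here are illustrative, not prescriptive):
parser.add_argument(
    "--output_dir",
    help = "GCS location to write checkpoints and export models",
    required = True
)
parser.add_argument(
    "--job-dir",
    help = "Not used by our model, but passed in by gcloud when submitting a job",
    default = "junk"
)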
%%time
!gcloud ai-platform local train \
--package-path=taxifaremodel \
--module-name=taxifaremodel.task \
-- \
--train_data_path=taxi-train.csv \
--eval_data_path=taxi-valid.csv \
--train_steps=1 \
--output_dir=taxi_trained
Explanation: Train using AI Platform Training Service (local)
AI Platform Training Service comes with a local test tool (gcloud ai-platform local train) to ensure we've packaged our code correctly. It's best to first run that for a few steps before trying a Cloud job.
The arguments before -- \ are for AI Platform Training Service
- package-path: specifies the location of the Python package
- module-name: specifies which .py file should be run within the package. task.py is our entry point so we specify that
The arguments after -- \ are sent to our task.py.
End of explanation
OUTDIR = "gs://{}/taxifare/trained_small".format(BUCKET)
!gsutil -m rm -rf {OUTDIR} # start fresh each time
!gcloud ai-platform jobs submit training taxifare_$(date -u +%y%m%d_%H%M%S) \
--package-path=taxifaremodel \
--module-name=taxifaremodel.task \
--job-dir=gs://{BUCKET}/taxifare \
--python-version=3.5 \
--runtime-version={TFVERSION} \
--region={REGION} \
-- \
--train_data_path=gs://{BUCKET}/taxifare/smallinput/taxi-train.csv \
--eval_data_path=gs://{BUCKET}/taxifare/smallinput/taxi-valid.csv \
--train_steps=1000 \
--output_dir={OUTDIR}
Explanation: Train using AI Platform Training Service (Cloud)
To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:
- jobname: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness
- job-dir: A GCS location to upload the Python package to
- runtime-version: Version of TF to use. Defaults to 1.0 if not specified
- python-version: Version of Python to use. Defaults to 2.7 if not specified
- region: Cloud region to train in. See here for supported AI Platform Training Service regions
Below the -- \ note how we've changed our task.py args to be GCS locations
End of explanation
!gsutil ls gs://{BUCKET}/taxifare/trained_small/export/exporter
Explanation: You can track your job and view logs using cloud console. It will take 5-10 minutes to complete. Wait until the job finishes before moving on.
Deploy model
Now let's take our exported SavedModel and deploy it behind a REST API. To do so we'll use AI Platform Training Service's managed TF Serving feature which auto-scales based on load.
End of explanation
VERSION='v1'
!gcloud ai-platform models create taxifare --regions us-central1
!gcloud ai-platform versions delete {VERSION} --model taxifare --quiet
!gcloud ai-platform versions create {VERSION} --model taxifare \
--origin $(gsutil ls gs://{BUCKET}/taxifare/trained_small/export/exporter | tail -1) \
--python-version=3.5 \
--runtime-version {TFVERSION}
Explanation: AI Platform Training Service uses a model versioning system. First you create a model folder, and within the folder you create versions of the model.
Note: You will see an error below if the model folder already exists, it is safe to ignore
End of explanation
%%writefile ./test.json
{"dayofweek": 1, "hourofday": 0, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403}
Explanation: Online prediction
Now that we have deployed our model behind a production grade REST API, we can invoke it remotely.
We could invoke it directly calling the REST API with an HTTP POST request reference docs, however AI Platform Training Service provides an easy way to invoke it via command line.
Invoke prediction REST API via command line
First we write our prediction requests to file in json format
End of explanation
!gcloud ai-platform predict --model=taxifare --json-instances=./test.json
Explanation: Then we use gcloud ai-platform predict and specify the model name and location of the json file. Since we don't explicitly specify --version, the default model version will be used.
Since we only have one version it is already the default, but if we had multiple model versions we can designate the default using gcloud ai-platform versions set-default or using cloud console
End of explanation
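For reference, designating a default version would look like this (a sketch; v1 stands for whichever version should become the default):
!gcloud ai-platform versions set-default v1 --model taxifare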
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
credentials = # TODO: Your code goes here
api = # TODO: Your code goes here
request_data = {"instances":
[
{
# TODO: Your code goes here
}
]
}
parent = # TODO: Your code goes here
response = # TODO: Your code goes here
print("response = {0}".format(response))
Explanation: Invoke prediction REST API via python
Exercise 3
In the cell below, use the Google Python client library to query the model you just deployed on AI Platform Training Service. Find the estimated taxi fare for a ride with the following properties
- ride occurs on Monday
- at 8:00 am
- pick up at (40.773, -73.885)
- drop off at (40.732, -73.987)
Have a look at this post and examples on "Using the Python Client Library" and "Getting Online Predictions" from Google Cloud.
End of explanation |
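One possible solution sketch (the dayofweek encoding for Monday is assumed to follow the test.json example above, and PROJECT is the variable set earlier in this notebook):
credentials = GoogleCredentials.get_application_default()
api = discovery.build("ml", "v1", credentials = credentials)
request_data = {"instances":
  [
      {
        "dayofweek": 1,  # assumed encoding for Monday
        "hourofday": 8,
        "pickuplon": -73.885,
        "pickuplat": 40.773,
        "dropofflon": -73.987,
        "dropofflat": 40.732
      }
  ]
}
parent = "projects/{}/models/taxifare".format(PROJECT)
response = api.projects().predict(body = request_data, name = parent).execute()
print("response = {0}".format(response))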
8,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pixels and their neighbours
Step1: Mesh
Where we solve things! See mesh.ipynb for a discussion of how we construct a mesh and the associated properties we need.
Step2: Physical Property Model
Define an electrical conductivity ($\sigma$) model, on the cell-centers of the mesh.
Step3: Define a source
Define location of the positive and negative electrodes
Step4: Assemble and solve the DC system of equations
How we construct the divergence operator is discussed in divergence.ipynb, and the inner product matrix in weakformulation.ipynb. The final system is assembled and discussed in play.ipynb (with widgets!). | Python Code:
# Import numpy, python's n-dimensional array package,
# the mesh class with differential operators from SimPEG
# matplotlib, the basic python plotting package
import numpy as np
from SimPEG import Mesh, Utils
import matplotlib.pyplot as plt
%matplotlib inline
plt.set_cmap(plt.get_cmap('viridis')) # use a nice colormap!
Explanation: Pixels and their neighbours: Finite volume
Rowan Cockett, Lindsey Heagy and Doug Oldenburg
This notebook uses Python 2.7 and the open source package SimPEG. SimPEG can be installed from the Python package index (PyPI) by running:
pip install SimPEG
Alternatively, these notebooks can be run on the web using binders
This tutorial consists of 3 parts: here we introduce the problem, in divergence.ipynb we build the discrete divergence operator, and in weakformulation.ipynb we discretize and solve the DC equations using the weak formulation.
Contents
- DC Resistivity setup
- mesh
- divergence
- weakformulation
- all together now
DC Resistivity
<img src="./images/DCSurvey.png" width=60% align="center">
<h4 align="center"> Figure 1. Setup of a DC resistivity survey.</h4>
DC resistivity surveys obtain information about subsurface electrical conductivity, $\sigma$. This physical property is often diagnostic in mineral exploration, geotechnical, environmental and hydrogeologic problems, where the target of interest has a significant electrical conductivity contrast from the background. In a DC resistivity survey, steady state currents are set up in the subsurface by injecting current through a positive electrode and completing the circuit with a return electrode.
Deriving the DC equations
<img src="images/DCEquations.png" width=70% align="center">
<h4 align="center">Figure 2. Derivation of the DC resistivity equations</h4>
Conservation of charge (which can be derived by taking the divergence of Ampere’s law at steady state) connects the divergence of the current density everywhere in space to the source term which consists of two point sources, one positive and one negative. The flow of current sets up electric fields according to Ohm’s law, which relates current density to electric fields through the electrical conductivity. From Faraday’s law for steady state fields, we can describe the electric field in terms of a scalar potential, $\phi$, which we sample at potential electrodes to obtain data in the form of potential differences.
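In symbols (with $\vec{j}$ the current density and $q$ the point-source term; a compact summary of the paragraph above):
$$ \nabla \cdot \vec{j} = q, \qquad \vec{j} = \sigma \vec{E}, \qquad \vec{E} = -\nabla \phi \quad \Rightarrow \quad \nabla \cdot (\sigma \nabla \phi) = -q $$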
The finish line
Where are we going??
Here, we are going to do a run through of how to set up and solve the DC resistivity equations for a 2D problem using SimPEG. This is meant to give you a once-over of the whole picture. We will break down the steps to get here in the series of notebooks that follow...
End of explanation
# Define a unit-cell mesh
mesh = Mesh.TensorMesh([100, 80]) # setup a mesh on which to solve
print("The mesh has {nC} cells.".format(nC=mesh.nC))
mesh.plotGrid()
plt.axis('tight');
Explanation: Mesh
Where we solve things! See mesh.ipynb for a discussion of how we construct a mesh and the associated properties we need.
End of explanation
# model parameters
sigma_background = 1. # Conductivity of the background, S/m
sigma_block = 10. # Conductivity of the block, S/m
# add a block to our model
x_block = np.r_[0.4, 0.6]
y_block = np.r_[0.4, 0.6]
# assign them on the mesh
sigma = sigma_background * np.ones(mesh.nC) # create a physical property model
block_indices = ((mesh.gridCC[:,0] >= x_block[0]) & # left boundary
(mesh.gridCC[:,0] <= x_block[1]) & # right boundary
(mesh.gridCC[:,1] >= y_block[0]) & # bottom boundary
(mesh.gridCC[:,1] <= y_block[1])) # top boundary
# add the block to the physical property model
sigma[block_indices] = sigma_block
# plot it!
plt.colorbar(mesh.plotImage(sigma)[0])
plt.title('electrical conductivity, $\sigma$')
Explanation: Physical Property Model
Define an electrical conductivity ($\sigma$) model, on the cell-centers of the mesh.
End of explanation
# Define a source
a_loc, b_loc = np.r_[0.2, 0.5], np.r_[0.8, 0.5]
source_locs = [a_loc, b_loc]
# locate it on the mesh
source_loc_inds = Utils.closestPoints(mesh, source_locs)
a_loc_mesh = mesh.gridCC[source_loc_inds[0],:]
b_loc_mesh = mesh.gridCC[source_loc_inds[1],:]
# plot it
plt.colorbar(mesh.plotImage(sigma)[0])
plt.plot(a_loc_mesh[0], a_loc_mesh[1],'wv', markersize=8) # a-electrode
plt.plot(b_loc_mesh[0], b_loc_mesh[1],'w^', markersize=8) # b-electrode
plt.title('electrical conductivity, $\sigma$')
Explanation: Define a source
Define location of the positive and negative electrodes
End of explanation
mesh.faceDiv??
# Assemble and solve the DC resistivity problem
Div = mesh.faceDiv
Sigma = mesh.getFaceInnerProduct(sigma, invProp=True, invMat=True)
Vol = Utils.sdiag(mesh.vol)
# assemble the system matrix
A = Vol * Div * Sigma * Div.T * Vol
# right hand side
q = np.zeros(mesh.nC)
q[source_loc_inds] = np.r_[+1, -1]
from SimPEG import Solver # import the default solver (LU)
# solve the DC resistivity problem
Ainv = Solver(A) # create a matrix that behaves like A inverse
phi = Ainv * q
# look at the results!
plt.colorbar(mesh.plotImage(phi)[0])
plt.title('Electric Potential, $\phi$');
Explanation: Assemble and solve the DC system of equations
How we construct the divergence operator is discussed in divergence.ipynb, and the inner product matrix in weakformulation.ipynb. The final system is assembled and discussed in play.ipynb (with widgets!).
End of explanation |
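As a quick sanity check on the assembled system (a sketch; A, q and phi are the objects created above):
print(np.linalg.norm(A * phi - q))  # solve residual, should be near machine precision
print(q.sum())  # the +1/-1 point sources should cancel exactly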
8,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Potential Host Queries
Now that we're using SWIRE directly instead of querying Gator, we have to quickly find the potential hosts in a neighbourhood. I can think of two ways, one which is $O(n)$ and one which is $O(\log n)$, but the former is a lot easier. How slow is it? Let's grab 1000 subjects and find all the potential hosts in the $2' \times 2'$ neighbourhood, then average the times.
Step1: Now let's try using a tree. I'll use a $k$-d tree to store SWIRE indices.
Step2: Note that this measurement actually depends a lot on leafsize. Smaller values seem to be slower to some extent. I think this is because I am returning many points.
I kinda prefer sklearn to scipy, so let's have a try of sklearn. | Python Code:
import csv
import time
import h5py
import numpy
CROWDASTRO_H5_PATH = '../crowdastro.h5'
CROWDASTRO_CSV_PATH = '../crowdastro.csv'
ARCMIN = 0.0166667
with h5py.File(CROWDASTRO_H5_PATH) as f_h5:
positions = f_h5['/swire/cdfs/catalogue'][:, :2]
times = []
for i in range(1000):
now = time.time()
sx, sy = f_h5['/atlas/cdfs/positions'][i]
lt_x = positions[:, 0] <= sx + ARCMIN
gt_x = positions[:, 0] >= sx - ARCMIN
lt_y = positions[:, 1] <= sy + ARCMIN
gt_y = positions[:, 1] >= sy - ARCMIN
enclosed = numpy.all([lt_x, gt_x, lt_y, gt_y], axis=0)
potential_hosts = positions[enclosed]
total = time.time() - now
times.append(total)
print('{:.02} +- {:1.1} s'.format(numpy.mean(times), numpy.std(times)))
Explanation: Potential Host Queries
Now that we're using SWIRE directly instead of querying Gator, we have to quickly find the potential hosts in a neighbourhood. I can think of two ways, one which is $O(n)$ and one which is $O(\log n)$, but the former is a lot easier. How slow is it? Let's grab 1000 subjects and find all the potential hosts in the $2' \times 2'$ neighbourhood, then average the times.
End of explanation
import scipy.spatial
with h5py.File(CROWDASTRO_H5_PATH) as f_h5:
positions = f_h5['/swire/cdfs/catalogue'][:, :2]
tree = scipy.spatial.KDTree(positions, leafsize=100)
radius = numpy.sqrt(2) * ARCMIN
times = []
for i in range(1000):
now = time.time()
point = f_h5['/atlas/cdfs/positions'][i]
enclosed = tree.query_ball_point(point, radius)
potential_hosts = positions[enclosed]
lt_x = potential_hosts[:, 0] <= sx + ARCMIN
gt_x = potential_hosts[:, 0] >= sx - ARCMIN
lt_y = potential_hosts[:, 1] <= sy + ARCMIN
gt_y = potential_hosts[:, 1] >= sy - ARCMIN
enclosed = numpy.all([lt_x, gt_x, lt_y, gt_y], axis=0)
potential_hosts = potential_hosts[enclosed]
total = time.time() - now
times.append(total)
print('{:.02} +- {:1.1} s'.format(numpy.mean(times), numpy.std(times)))
Explanation: Now let's try using a tree. I'll use a $k$-d tree to store SWIRE indices.
End of explanation
import sklearn.neighbors
with h5py.File(CROWDASTRO_H5_PATH) as f_h5:
positions = f_h5['/swire/cdfs/catalogue'][:, :2]
tree = sklearn.neighbors.KDTree(positions, leaf_size=100)
radius = numpy.sqrt(2) * ARCMIN
times = []
for i in range(1000):
now = time.time()
point = f_h5['/atlas/cdfs/positions'][i]
(enclosed,) = tree.query_radius(point.reshape((1, -1)), r=radius)
potential_hosts = positions[enclosed]
lt_x = potential_hosts[:, 0] <= sx + ARCMIN
gt_x = potential_hosts[:, 0] >= sx - ARCMIN
lt_y = potential_hosts[:, 1] <= sy + ARCMIN
gt_y = potential_hosts[:, 1] >= sy - ARCMIN
enclosed = numpy.all([lt_x, gt_x, lt_y, gt_y], axis=0)
potential_hosts = potential_hosts[enclosed]
total = time.time() - now
times.append(total)
print('{:.02} +- {:1.1} s'.format(numpy.mean(times), numpy.std(times)))
with h5py.File(CROWDASTRO_H5_PATH) as f_h5:
positions = f_h5['/swire/cdfs/catalogue'][:, :2]
tree = sklearn.neighbors.KDTree(positions, leaf_size=20)
radius = numpy.sqrt(2) * ARCMIN
now = time.time()
points = f_h5['/atlas/cdfs/positions'][:1000]
all_enclosed = tree.query_radius(points, r=radius)
# potential_hosts = positions[enclosed]
# lt_x = potential_hosts[:, 0] <= sx + ARCMIN
# gt_x = potential_hosts[:, 0] >= sx - ARCMIN
# lt_y = potential_hosts[:, 1] <= sy + ARCMIN
# gt_y = potential_hosts[:, 1] >= sy - ARCMIN
# enclosed = numpy.all([lt_x, gt_x, lt_y, gt_y], axis=0)
# potential_hosts = potential_hosts[enclosed]
total = time.time() - now
print('{:.06f} s'.format(total / 1000))
Explanation: Note that this measurement actually depends a lot on leafsize. Smaller values seem to be slower to some extent. I think this is because I am returning many points.
I kinda prefer sklearn to scipy, so let's have a try of sklearn.
End of explanation |
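To quantify the leafsize effect directly, one could time the same bulk query for a few values (a sketch reusing the positions and radius arrays from above):
for leaf_size in [10, 50, 100, 500]:
    tree = sklearn.neighbors.KDTree(positions, leaf_size=leaf_size)
    now = time.time()
    tree.query_radius(positions[:1000], r=radius)
    print(leaf_size, time.time() - now)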
8,010 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Say I have two dataframes: | Problem:
import pandas as pd
df1 = pd.DataFrame({'Timestamp': ['2019/04/02 11:00:01', '2019/04/02 11:00:15', '2019/04/02 11:00:29', '2019/04/02 11:00:30'],
'data': [111, 222, 333, 444]})
df2 = pd.DataFrame({'Timestamp': ['2019/04/02 11:00:14', '2019/04/02 11:00:15', '2019/04/02 11:00:16', '2019/04/02 11:00:30', '2019/04/02 11:00:31'],
'stuff': [101, 202, 303, 404, 505]})
df1['Timestamp'] = pd.to_datetime(df1['Timestamp'])
df2['Timestamp'] = pd.to_datetime(df2['Timestamp'])
def g(df1, df2):
return pd.merge_asof(df1, df2, on='Timestamp', direction='forward')
result = g(df1.copy(), df2.copy()) |
8,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Frequency and time-frequency sensor analysis
The objective is to show you how to explore the spectral content
of your data (frequency and time-frequency). Here we'll work on Epochs.
We will use this dataset
Step1: Set parameters
Step2: Frequency analysis
We start by exploring the frequency content of our epochs.
Let's first check out all channel types by averaging across epochs.
Step3: Now let's take a look at the spatial distributions of the PSD.
Step4: Alternatively, you can also create PSDs from Epochs objects with functions
that start with psd_ such as
Step5: Notably,
Step6: Lastly, we can also retrieve the unaggregated segments by passing
average=None to
Step7: Time-frequency analysis
Step8: Inspect power
<div class="alert alert-info"><h4>Note</h4><p>The generated figures are interactive. In the topo you can click
on an image to visualize the data for one sensor.
You can also select a portion in the time-frequency plane to
obtain a topomap for a certain time-frequency region.</p></div>
Step9: Joint Plot
You can also create a joint plot showing both the aggregated TFR
across channels and topomaps at specific times and frequencies to obtain
a quick overview regarding oscillatory effects across time and space.
Step10: Inspect ITC | Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
# Richard Höchenberger <richard.hoechenberger@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet, psd_multitaper, psd_welch
from mne.datasets import somato
Explanation: Frequency and time-frequency sensor analysis
The objective is to show you how to explore the spectral content
of your data (frequency and time-frequency). Here we'll work on Epochs.
We will use this dataset: somato-dataset. It contains so-called event
related synchronizations (ERS) / desynchronizations (ERD) in the beta band.
End of explanation
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False)
# Construct Epochs
event_id, tmin, tmax = 1, -1., 3.
baseline = (None, 0)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6),
preload=True)
epochs.resample(200., npad='auto') # resample to reduce computation time
Explanation: Set parameters
End of explanation
epochs.plot_psd(fmin=2., fmax=40., average=True, spatial_colors=False)
Explanation: Frequency analysis
We start by exploring the frequency content of our epochs.
Let's first check out all channel types by averaging across epochs.
End of explanation
epochs.plot_psd_topomap(ch_type='grad', normalize=True)
Explanation: Now let's take a look at the spatial distributions of the PSD.
End of explanation
f, ax = plt.subplots()
psds, freqs = psd_multitaper(epochs, fmin=2, fmax=40, n_jobs=1)
psds = 10. * np.log10(psds)
psds_mean = psds.mean(0).mean(0)
psds_std = psds.mean(0).std(0)
ax.plot(freqs, psds_mean, color='k')
ax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,
color='k', alpha=.5)
ax.set(title='Multitaper PSD (gradiometers)', xlabel='Frequency (Hz)',
ylabel='Power Spectral Density (dB)')
plt.show()
Explanation: Alternatively, you can also create PSDs from Epochs objects with functions
that start with psd_ such as
:func:mne.time_frequency.psd_multitaper and
:func:mne.time_frequency.psd_welch.
End of explanation
# Estimate PSDs based on "mean" and "median" averaging for comparison.
kwargs = dict(fmin=2, fmax=40, n_jobs=1)
psds_welch_mean, freqs_mean = psd_welch(epochs, average='mean', **kwargs)
psds_welch_median, freqs_median = psd_welch(epochs, average='median', **kwargs)
# Convert power to dB scale.
psds_welch_mean = 10 * np.log10(psds_welch_mean)
psds_welch_median = 10 * np.log10(psds_welch_median)
# We will only plot the PSD for a single sensor in the first epoch.
ch_name = 'MEG 0122'
ch_idx = epochs.info['ch_names'].index(ch_name)
epo_idx = 0
_, ax = plt.subplots()
ax.plot(freqs_mean, psds_welch_mean[epo_idx, ch_idx, :], color='k',
ls='-', label='mean of segments')
ax.plot(freqs_median, psds_welch_median[epo_idx, ch_idx, :], color='k',
ls='--', label='median of segments')
ax.set(title='Welch PSD ({}, Epoch {})'.format(ch_name, epo_idx),
xlabel='Frequency (Hz)', ylabel='Power Spectral Density (dB)')
ax.legend(loc='upper right')
plt.show()
Explanation: Notably, :func:mne.time_frequency.psd_welch supports the keyword argument
average, which specifies how to estimate the PSD based on the individual
windowed segments. The default is average='mean', which simply calculates
the arithmetic mean across segments. Specifying average='median', in
contrast, returns the PSD based on the median of the segments (corrected for
bias relative to the mean), which is a more robust measure.
End of explanation
psds_welch_unagg, freqs_unagg = psd_welch(epochs, average=None, **kwargs)
print(psds_welch_unagg.shape)
Explanation: Lastly, we can also retrieve the unaggregated segments by passing
average=None to :func:mne.time_frequency.psd_welch. The dimensions of
the returned array are (n_epochs, n_sensors, n_freqs, n_segments).
End of explanation
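For example, averaging that last (segments) axis ourselves should reproduce the average='mean' estimate computed earlier (a sketch):
psds_manual_mean = 10 * np.log10(psds_welch_unagg.mean(axis=-1))
print(np.allclose(psds_manual_mean, psds_welch_mean))  # expect True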
freqs = np.logspace(*np.log10([6, 35]), num=8)
n_cycles = freqs / 2. # different number of cycle per frequency
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, use_fft=True,
return_itc=True, decim=3, n_jobs=1)
Explanation: Time-frequency analysis: power and inter-trial coherence
We now compute time-frequency representations (TFRs) from our Epochs.
We'll look at power and inter-trial coherence (ITC).
To this we'll use the function :func:mne.time_frequency.tfr_morlet
but you can also use :func:mne.time_frequency.tfr_multitaper
or :func:mne.time_frequency.tfr_stockwell.
<div class="alert alert-info"><h4>Note</h4><p>The ``decim`` parameter reduces the sampling rate of the time-frequency
decomposition by the defined factor. This is usually done to reduce
memory usage. For more information refer to the documentation of
:func:`mne.time_frequency.tfr_morlet`.</p></div>
define frequencies of interest (log-spaced)
End of explanation
power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power')
power.plot([82], baseline=(-0.5, 0), mode='logratio', title=power.ch_names[82])
fig, axis = plt.subplots(1, 2, figsize=(7, 4))
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12,
baseline=(-0.5, 0), mode='logratio', axes=axis[0],
title='Alpha', show=False)
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25,
baseline=(-0.5, 0), mode='logratio', axes=axis[1],
title='Beta', show=False)
mne.viz.tight_layout()
plt.show()
Explanation: Inspect power
<div class="alert alert-info"><h4>Note</h4><p>The generated figures are interactive. In the topo you can click
on an image to visualize the data for one sensor.
You can also select a portion in the time-frequency plane to
obtain a topomap for a certain time-frequency region.</p></div>
End of explanation
power.plot_joint(baseline=(-0.5, 0), mode='mean', tmin=-.5, tmax=2,
timefreqs=[(.5, 10), (1.3, 8)])
Explanation: Joint Plot
You can also create a joint plot showing both the aggregated TFR
across channels and topomaps at specific times and frequencies to obtain
a quick overview regarding oscillatory effects across time and space.
End of explanation
itc.plot_topo(title='Inter-Trial coherence', vmin=0., vmax=1., cmap='Reds')
Explanation: Inspect ITC
End of explanation |
8,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computational Quantum Dynamics
(Lorenzo Biasi)
Project
I import some libraries and initialize settings
Step1: 1. Superexchange in a three-level system.
(a)
For calculating the occupation probability we can diagonalize the Hamiltonian and compute $\Psi(t)$ as
Step2: (b)
One can plot the given functions on top of the computed $|\Psi_{1, 2}(t)|^2$ for values of $V = 1$ and $E_2 = 20$. To be able to distinguish the approximation from $|\Psi_{1, 2}(t)|^2$ we plot an enlargement for the first coordinate.
Step3: 2. The one-dimensional soft-core potential.
We can discretize the Hamiltonian with the finite difference scheme and compute the eigenvalues and eigenvectors. I plotted the eigenfunctions for the 3 lowest energy eigenstates, and in the second plot you can see the different bound energies compared to the potential.
Step4: 3. Ionization from a one-dimensional soft-core potential.
(a)
Step5: (b)
We know that in our simulation we work with a system with low ionization probability, which suggests that the wave function does not move much from the initial position. All of this indicates that the kinetic momentum is low, so the optimal gauge is the length gauge, as the velocity gauge would shift the momentum by $A(t)$.
I will first do (d) and then (c) and (e) together.
(d)
We can approximate our system as an electron in a sinusoidal electric field in the classical regime. Newton's equation then is
Step6: (f)
I run the previous program for different values of $E_0$.
Step7: We see as expected that the lower the amplitude of the electric field the lower the final ionization probability will be. | Python Code:
from pylab import *
from copy import deepcopy
from matplotlib import animation, rc
from IPython.display import HTML
%matplotlib inline
rc('text', usetex=True)
font = {'family' : 'normal',
'weight' : 'bold',
'size' : 15}
matplotlib.rc('font', **font)
Explanation: Computational Quantum Dynamics
(Lorenzo Biasi)
Project
I import some libraries and initialize settings
End of explanation
E1, E2, E3 = 0., 20., 0.
V12, V23 = 1., 1.
psi0 = array([1, 0, 0], dtype='complex')
Nt = int(1e4)
psi = zeros((Nt, 3), dtype='complex')
psi[0, :] = psi0
for E2, tf in zip(arange(4) * 20, [20, 200, 200, 200]):
times = linspace(0, tf, Nt)
H = array([[E1, V12, 0],
[V12, E2, V23],
[0, V23, E3]])
lambd, Q = eigh(H)
Q_inv = Q.T.conj()
for it in range(1, Nt):
psi[it, :] = Q_inv @ psi0
psi[it, :] = diag(np.exp(-1j * lambd * times[it])) @ psi[it, :]
psi[it, :] = Q @ psi[it, :]
plot(times, abs(psi) ** 2)
ylabel(r'$\|\Psi(t)\|^2$')
xlabel(r'$t$')
legend(['$\|\Psi(t)_1\|^2$', '$\|\Psi(t)_2\|^2$', '$\|\Psi(t)_3\|^2$'], loc=1)
figure()
Explanation: 1. Superexchange in a three-level system.
(a)
For calculating the occupation probability we can diagonalize the Hamiltonian and compute $\Psi(t)$ as:
\begin{equation}
\Psi(t) = D^\dagger e^{-i\Lambda t /\hbar} D \Psi(0)
\end{equation}
where $e^{-i\Lambda t /\hbar}$ is a diagonal matrix whose diagonal entries are $e^{-i\lambda_i t /\hbar}$, one for each eigenvalue $\lambda_i$.
End of explanation
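Equivalently, one can propagate with the matrix exponential directly; a sketch (using SciPy, as a cross-check of the diagonalization above):
from scipy.linalg import expm
t = 1.0  # any chosen time
psi_t = expm(-1j * H * t) @ psi0  # should match the diagonalization result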
y = cos(V12 ** 2 / E2 * times) ** 2
plot(times, y)
y = sin(V12 ** 2 / E2 * times) ** 2
plot(times, y)
plot(times, abs(psi[:, 0]) ** 2, label='$\|\Psi(t)_1\|^2$')
plot(times, abs(psi[:, 2]) ** 2, label='$\|\Psi(t)_3\|^2$')
ylabel(r'$\|\Psi(t)\|^2$')
xlabel(r'$t$')
legend(loc=1)
figure()
plot(times, abs(psi[:, 0]) ** 2, label='$\|\Psi(t)_1\|^2$')
y = cos(V12 ** 2 / E2 * times) ** 2
plot(times, y, label=r'$\cos(V_{12}^2 / (E_2 t)^2)$')
ylabel(r'$\|\Psi(t)\|^2$')
xlabel(r'$t$')
legend(loc=1)
ylim([.99, 1.01])
xlim([-.3, 3]);
Explanation: (b)
One can plot the given functions on top of the computed $|\Psi_{1, 2}(t)|^2$ for values of $V = 1$ and $E_2 = 20$. To be able to distinguish the approximation from $|\Psi_{1, 2}(t)|^2$ we plot an enlargement for the first coordinate.
End of explanation
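The cosine/sine approximation follows from adiabatically eliminating the far-detuned middle level, which gives the standard second-order effective two-level coupling
\begin{equation}
V_{\mathrm{eff}} \simeq \frac{V_{12} V_{23}}{E_2},
\end{equation}
so with $V_{12} = V_{23} = V$ the populations oscillate at the rate $V^2/E_2$ used above.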
def V(x, Z=1):
return -Z / sqrt(2 / Z ** 2 + x ** 2)
N = 2 ** 10
x0, x1 = -25, 25
x = linspace(x0, x1, N)
dx = (x1 - x0) / (N - 1)
H = diag(ones(N - 1), -1) - 2 * diag(ones(N)) + diag(ones(N - 1), 1)
H *= -1 / (2 * dx**2)
H += diag(V(x))
E, Psi_tot = eigh(H)
E_bound=E[E<0]
for k, E_ in enumerate(sorted(E_bound)[:3]):
print('E_{' + str(k) + '} = ' + "{:1.4f}".format(E_))
plot(x, Psi_tot[:, 0] / sqrt(dx), label=r'$\Psi_0(x)$')
plot(x, Psi_tot[:, 1] / sqrt(dx), label=r'$\Psi_1(x)$')
plot(x, Psi_tot[:, 2] / sqrt(dx), label=r'$\Psi_2(x)$')
legend(loc=1)
xlabel('x')
ylabel('$\Psi(t)$')
figure()
plot(x, V(x))
plot(x, E_bound * ones_like(x)[:, newaxis])
legend([r'$V(x)$', r'$E_0$', r'$E_1$', r'$E_2$'])
xlabel('x')
ylabel('Energy')
Explanation: 2. The one-dimensional soft-core potential.
We can discretize the Hamiltonian with the finite difference scheme and compute the eigenvalues and eigenvectors. I plotted the eigenfunctions for the 3 lowest energy eigenstates, and in the second plot you can see the different bound energies compared to the potential.
End of explanation
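As a quick convergence check on the finite-difference discretization, one can repeat the diagonalization for a few grid sizes (a sketch reusing V from above):
for N in [2 ** 8, 2 ** 10, 2 ** 12]:
    x = linspace(-25, 25, N)
    dx = x[1] - x[0]
    H = diag(ones(N - 1), -1) - 2 * diag(ones(N)) + diag(ones(N - 1), 1)
    H *= -1 / (2 * dx ** 2)
    H += diag(V(x))
    print(N, eigh(H)[0][0])  # the ground-state energy should converge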
def E(t, E0, omega, n):
t_ = maximum(omega * t, 0)
t_ = minimum(t_, 2 * np.pi * n)
    return E0 * sin(t_) * sin(t_ / (2 * n)) ** 2  # E_0 sin(wt) sin^2(wt / 2n), per the derivation in (d)
def A(t, E0, omega, n):
pref = -E0 / omega
t_ = maximum(omega * t, 0.)
t_ = minimum(t_, 2 * np.pi * n)
return pref * (cos(t_) * (n * n * cos(t_ / n) - n * n + 1) + n * sin(t_) *
sin(t_ / n) - 1) / (2 * (n * n - 1))
def vanish(V0, x, x0, x1):
V0 *= 2
xs, xe = x[0], x[-1]
potential = np.maximum(0, (V0 * (x - x0) / (xs - x0)))
return np.maximum(potential, (V0 * (x - x1) / (xe - x1)))
omega = .02
n = 3 / 2
E0 = .05
Z = 1
x0, x1 = -15, 15
dx = .1
x = arange(x0, x1, dx)
N = len(x)
p = fftfreq(N, d=dx / (2 * pi))
dt = 0.5
ts = np.arange(- pi / omega, 2 * np.pi * (n + .5) / omega, dt)
plot(ts, E(ts, E0, omega, n))
title('Electric field for n = 3/2')
xlabel('t')
ylabel('E(t)')
figure()
t_star = np.arange(-pi / omega, 2 * np.pi * (5 + .5) / omega, 0.01)
plot(t_star, E(t_star, E0, omega, 5))
xlabel('t')
ylabel('E(t)')
title('Electric field for n = 5')
figure()
plot(ts, A(ts, E0, omega, n))
xlabel('t')
ylabel('A(t)')
title('Magnetic potential field for n = 3/2')
figure()
plot(t_star, A(t_star, E0, omega, 5))
xlabel('t')
ylabel('A(t)')
title('Magnetic potential field for n = 5')
Explanation: 3. Ionization from a one-dimensional soft-core potential.
(a)
End of explanation
omega = .02
n = 3 / 2
E0 = .05
Z = 1
x0, x1 = -15, 15
xl, xr = -10, 10
d = x1 - xr
t_temp = np.linspace(0, 2 * np.pi * (n + .5) / omega, 1000)
A_max = max(A(t_temp, E0, omega, n)) # the maximum momentum is equal to the
# maximum value of the magnetic potential
p_tilde = n**2 * E0 /(n**2 - 1) / omega
print('dx using the approximation ',\
"{:1.4f}".format(pi / p_tilde), 'a.u.')
print('dx using the maximum of the momentum calculated numerically',\
"{:1.4f}".format(pi / A_max), 'a.u.')
print('dt using the approximation ',\
"{:1.4f}".format(2 * pi / p_tilde ** 2), 'a.u.')
print('dt using the maximum of the momentum calculated numerically',\
"{:1.4f}".format(2 * pi / A_max ** 2), 'a.u.')
print("{:1.4f}".format(p_tilde / (8 * d)), 'a.u. < tilde_V <',\
"{:1.4f}".format(p_tilde ** 3 / 2 ** 4), 'a.u.')
V_tilde = 5.
dx = pi / p_tilde
x = arange(x0, x1, dx)
N = len(x)
p = fftfreq(N, d=dx / (2 * pi))
dt = 2 * pi / p_tilde ** 2
ts = np.arange(0, 2 * np.pi * (n + .5) / omega, dt)
H = diag(ones(N - 1), -1) - 2 * diag(ones(N)) + diag(ones(N - 1), 1)
H *= -1 / (2 * dx ** 2)
H += diag(V(x, Z))
U_2 = exp(-1j * 0.5 * p ** 2 * dt)
_, Psi_tot = eigh(H)
Psi = Psi_tot[:, 0].astype('complex')
Psi /= np.sqrt(sum(abs(Psi) ** 2 * dx))
psi0 = deepcopy(Psi)
norm = np.zeros(len(ts))
overlap = np.zeros(len(ts))
for k, t in enumerate(ts):
U_1 = exp(-0.5 * 1j * (V(x, 1) - 1j *
vanish(V_tilde, x, xl, xr) - x * E(t, E0, omega, n)) * dt)
Psi *= U_1
Psi = fft(Psi)
Psi *= U_2
Psi = ifft(Psi) # go to real space
Psi *= U_1
norm[k] = sum(abs(Psi) ** 2 * dx)
overlap[k] = abs(vdot(Psi, psi0)) * dx
Explanation: (b)
We know that in our simulation we work with a system with low ionization probability, which suggests that the wave function does not move much from the initial position. All of this indicates that the kinetic momentum is low, so the optimal gauge is the length gauge, as the velocity gauge would shift the momentum by $A(t)$.
I will first do (d) and then (c) and (e) together.
(d)
We can approximate our system as an electron in a sinusoidal electric field in the classical regime. Newton's equation then is:
\begin{equation}
m\ddot{x} = E_0 \sin(\omega t) \sin^2\left(\frac{\omega t}{2 n}\right)
\end{equation}
by integrating this equation we obtain:
\begin{equation}
m\dot{x} = \frac{E_0}{\omega} \frac{\cos(\omega t) (n^2 \cos(\frac{\omega t}{n}) - n^2 + 1) + n \sin(\omega t) \sin(\frac{\omega t}{n}) -1}{2 (n^2 - 1)}
\end{equation}
Looking at the possible maxima of this function, one can put an upper bound on the momentum in the case of $n > 1$:
\begin{equation}
\tilde{p} \leq (n^2 E_0) / ((n^2 -1) \omega)
\end{equation}
If a general bound is needed one can use $((n^2 + 1) E_0)$ in place of the numerator. Obviously this is just an upper bound on the momentum and not a maximum; one can obtain a better value, depending on the parameters, by calculating the maximum numerically.
Having an approximation for the maximum momentum, we estimate the spacing of the spatial grid and the temporal step with:
\begin{equation}
\Delta x \leq \frac{\hbar\pi}{\tilde{p}} \quad \text{and} \quad \Delta t \leq \frac{\hbar\pi}{\tilde{E}} = \frac{2\pi \hbar}{\tilde{p}^2}
\end{equation}
For the simulation we also need a maximum and a minimum for $x$. Since we do not expect our wave function to move dramatically, we can look at the ground state of the soft-core potential and check where the function is sufficiently close to 0. In the simulation we opted for -15 a.u. and 15 a.u. as boundaries.
Lastly we will choose the mean of the vanishing potential $\tilde{V}$ based around the size of the damping region $d$ and the average momentum $p$, by using:
\begin{equation}
\frac{\hbar p}{4m_0d} \ll \tilde{V} \ll \frac{d p^3}{2 m_0 \hbar}
\end{equation}
(c) and (e)
In this first part we are going to set up the various parameters needed for the simulation. For $\tilde p$ we could also decide to use the maximum momentum obtained numerically, but we decided to be consistent with what was said previously.
End of explanation
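As a cross-check of the antiderivative used for $A(t)$, one can verify symbolically that $-\partial_t A = E$ (a sketch using SymPy):
import sympy as sp
t, E0, w, n = sp.symbols('t E_0 omega n', positive=True)
A = -E0 / w * (sp.cos(w * t) * (n ** 2 * sp.cos(w * t / n) - n ** 2 + 1)
               + n * sp.sin(w * t) * sp.sin(w * t / n) - 1) / (2 * (n ** 2 - 1))
E_field = E0 * sp.sin(w * t) * sp.sin(w * t / (2 * n)) ** 2
print(sp.simplify(-sp.diff(A, t) - E_field))  # should print 0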
N_e = 20
ionizs = np.zeros(N_e)
norms = np.zeros((N_e, len(ts)))
for j, E0 in enumerate(np.linspace(0, .05, N_e)):
Psi = deepcopy(psi0)
for k, t in enumerate(ts):
U_1 = exp(-0.5 * 1j * (V(x, 1) - 1j * vanish(V_tilde, x, -10, 10) - x * E(t, E0, omega, n)) * dt)
Psi *= U_1
Psi = fft(Psi)
Psi *= U_2
Psi = ifft(Psi) # go to real space
Psi *= U_1
norms[j, k] = sum(abs(Psi) ** 2 * dx)
ionizs[j] = 1 - sum(abs(Psi) ** 2 * dx)
Explanation: (f)
I run the previous program for different values of $E_0$.
End of explanation
title('Ionization probabilities in time')
plot(ts, 1 - norms.T[:, ::-1])
legend([r'$E_0 = 0.05$ a.u.'])
xlabel(r't')
ylabel(r'$1 - |<\Psi(t)|\Psi(t)>|^2$')
figure()
title(r'ionization probabilities at $t_{end}$')
ylabel(r'$1 - |<\Psi(t_{end})|\Psi(t_{end})>|^2$')
xlabel(r'$E_0$')
plot(np.linspace(0, .05, N_e), ionizs)
Explanation: We see as expected that the lower the amplitude of the electric field the lower the final ionization probability will be.
End of explanation |
8,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A tutorial on Markowitz portfolio optimization in Python using cvxopt
Authors
Step1: Assume that we have 4 assets, each with a return series of length 1000. We can use numpy.random.randn to sample returns from a normal distribution.
Step2: These return series can be used to create a wide range of portfolios, which all
have different returns and risks (standard deviation). We can produce a wide range
of random weight vectors and plot those portfolios. As we want all our capital to be invested, this vector will have to sum to one.
Step3: Next, let's evaluate how these random portfolios would perform. Towards this goal we are calculating the mean returns as well as the volatility (here we are using standard deviation). You can also see that there is
a filter that only allows plotting portfolios with a standard deviation of < 2 for better illustration.
Step4: In the code you will notice the calculation of the return with
Step5: Upon plotting those you will observe that they form a characteristic parabolic
shape called the ‘Markowitz bullet‘ with the boundaries being called the ‘efficient
frontier', where we have the lowest variance for a given expected return.
Step6: Markowitz optimization and the Efficient Frontier
Once we have a good representation of our portfolios as the blue dots show we can calculate the efficient frontier Markowitz-style. This is done by minimising
$$ w^T C w$$
for $w$ on the expected portfolio return $R^T w$ whilst keeping the sum of all the
weights equal to 1
Step7: In yellow you can see the optimal portfolios for each of the desired returns (i.e. the mus). In addition, we get the one optimal portfolio returned
Step8: Backtesting on real market data
This is all very interesting but not very applied. We next demonstrate how you can create a simple algorithm in zipline -- the open-source backtester that powers Quantopian -- to test this optimization on actual historical stock data.
First, let's load in some historical data using Quantopian's data (if we are running in the Quantopian Research Platform), or the load_bars_from_yahoo() function from zipline.
Step9: Next, we'll create a zipline algorithm by defining two functions -- initialize() which is called once before the simulation starts, and handle_data() which is called for every trading bar. We then instantiate the algorithm object.
If you are confused about the syntax of zipline, check out the tutorial. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import cvxopt as opt
from cvxopt import blas, solvers
import pandas as pd
np.random.seed(123)
# Turn off progress printing
solvers.options['show_progress'] = False
Explanation: A tutorial on Markowitz portfolio optimization in Python using cvxopt
Authors: Dr. Thomas Starke, David Edwards, Dr. Thomas Wiecki
Notebook released under the Creative Commons Attribution 4.0 License.
About the author:
Today's blog post is written in collaboration with Dr. Thomas Starke. It is based on a longer whitepaper by Thomas Starke on the relationship between Markowitz portfolio optimization and Kelly optimization. The full whitepaper can be found here.
Introduction
In this blog post you will learn about the basic idea behind Markowitz portfolio optimization as well as how to do it in Python. We will then show how you can create a simple backtest that rebalances its portfolio in a Markowitz-optimal way. We hope you enjoy it and get a little more enlightened in the process.
We will start by using random data and only later use actual stock data. This will hopefully help you to get a sense of how to use modelling and simulation to improve your understanding of the theoretical concepts. Don‘t forget that the skill of an algo-trader is to put mathematical models into code and this example is great practice.
Let's start with importing a few modules, which we need later and produce a series of normally distributed returns. cvxopt is a convex solver which you can easily download with
sudo pip install cvxopt.
Simulations
End of explanation
## NUMBER OF ASSETS
n_assets = 4
## NUMBER OF OBSERVATIONS
n_obs = 1000
return_vec = np.random.randn(n_assets, n_obs)
plt.plot(return_vec.T, alpha=.4);
plt.xlabel('time')
plt.ylabel('returns')
Explanation: Assume that we have 4 assets, each with a return series of length 1000. We can use numpy.random.randn to sample returns from a normal distribution.
End of explanation
def rand_weights(n):
''' Produces n random weights that sum to 1 '''
k = np.random.rand(n)
return k / sum(k)
print rand_weights(n_assets)
print rand_weights(n_assets)
Explanation: These return series can be used to create a wide range of portfolios, which all
have different returns and risks (standard deviation). We can produce a wide range
of random weight vectors and plot those portfolios. As we want all our capital to be invested, this vector will have to sum to one.
End of explanation
def random_portfolio(returns):
'''
Returns the mean and standard deviation of returns for a random portfolio
'''
p = np.asmatrix(np.mean(returns, axis=1))
w = np.asmatrix(rand_weights(returns.shape[0]))
C = np.asmatrix(np.cov(returns))
mu = w * p.T
sigma = np.sqrt(w * C * w.T)
# This recursion reduces outliers to keep plots pretty
if sigma > 2:
return random_portfolio(returns)
return mu, sigma
Explanation: Next, let's evaluate how these random portfolios would perform. Towards this goal we are calculating the mean returns as well as the volatility (here we are using standard deviation). You can also see that there is
a filter that only allows plotting portfolios with a standard deviation of < 2 for better illustration.
End of explanation
n_portfolios = 500
means, stds = np.column_stack([
random_portfolio(return_vec)
for _ in xrange(n_portfolios)
])
Explanation: In the code you will notice the calculation of the return with:
$$ R = p^T w $$
where $R$ is the expected return, $p^T$ is the transpose of the vector for the mean
returns for each time series and w is the weight vector of the portfolio. $p$ is a Nx1
column vector, so $p^T$ turns into a 1xN row vector which can be multiplied with the
Nx1 weight (column) vector w to give a scalar result. This is equivalent to the dot
product used in the code. Keep in mind that Python has a reversed definition of
rows and columns and the accurate NumPy version of the previous equation would
be R = w * p.T
Next, we calculate the standard deviation with
$$\sigma = \sqrt{w^T C w}$$
where $C$ is the covariance matrix of the returns which is a NxN matrix. Please
note that if we simply calculated the simple standard deviation with the appropriate weighting using std(array(ret_vec).T*w) we would get a slightly different
’bullet’. This is because the simple standard deviation calculation would not take
covariances into account. In the covariance matrix, the values on the diagonal
represent the simple variances of each asset while the off-diagonals are the covariances between the assets. By using ordinary std() we effectively only regard the
diagonal and miss the rest. A small but significant difference.
Let's generate the mean returns and volatility for 500 random portfolios:
End of explanation
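To see this difference in practice, here is a quick sketch (reusing rand_weights and return_vec from above) comparing the naive element-wise standard deviation with the covariance-aware portfolio volatility:
w = rand_weights(n_assets)
# element-wise weighting ignores the cross-asset covariances
naive_sigma = np.std(return_vec.T * w)
# the quadratic form with the covariance matrix accounts for them
proper_sigma = np.sqrt(np.dot(w, np.dot(np.cov(return_vec), w)))
print naive_sigma, proper_sigma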
plt.plot(stds, means, 'o', markersize=5)
plt.xlabel('std')
plt.ylabel('mean')
plt.title('Mean and standard deviation of returns of randomly generated portfolios')
Explanation: Upon plotting those you will observe that they form a characteristic parabolic
shape called the ‘Markowitz bullet‘, with the boundaries being called the ‘efficient
frontier‘, where we have the lowest variance for a given expected return.
End of explanation
def optimal_portfolio(returns):
n = len(returns)
returns = np.asmatrix(returns)
N = 100
mus = [10**(5.0 * t/N - 1.0) for t in range(N)]
# Convert to cvxopt matrices
S = opt.matrix(np.cov(returns))
pbar = opt.matrix(np.mean(returns, axis=1))
# Create constraint matrices
G = -opt.matrix(np.eye(n)) # negative n x n identity matrix
h = opt.matrix(0.0, (n ,1))
A = opt.matrix(1.0, (1, n))
b = opt.matrix(1.0)
# Calculate efficient frontier weights using quadratic programming
portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x']
for mu in mus]
## CALCULATE RISKS AND RETURNS FOR FRONTIER
returns = [blas.dot(pbar, x) for x in portfolios]
risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
## CALCULATE THE 2ND DEGREE POLYNOMIAL OF THE FRONTIER CURVE
m1 = np.polyfit(returns, risks, 2)
x1 = np.sqrt(m1[2] / m1[0])
# CALCULATE THE OPTIMAL PORTFOLIO
wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
return np.asarray(wt), returns, risks
weights, returns, risks = optimal_portfolio(return_vec)
plt.plot(stds, means, 'o')
plt.ylabel('mean')
plt.xlabel('std')
plt.plot(risks, returns, 'y-o')
Explanation: Markowitz optimization and the Efficient Frontier
Once we have a good representation of our portfolios, as the blue dots show, we can calculate the efficient frontier Markowitz-style. This is done by minimising
$$ w^T C w$$
over $w$, subject to a given expected portfolio return $R^T w$, whilst keeping the sum of all the
weights equal to 1:
$$ \sum_{i}{w_i} = 1 $$
Here we parametrically run through $R^T w = \mu$ and find the minimum variance
for different $\mu$‘s. This can be done with scipy.optimize.minimize but we have
to define quite a complex problem with bounds, constraints and a Lagrange multiplier. Conveniently, the cvxopt package, a convex solver, does all of that for us. We used one of their examples with some modifications as shown below. You will notice that there are some conditioning expressions in the code. They are simply needed to set up the problem. For more information please have a look at the cvxopt example.
The mus vector produces a series of expected return values $\mu$ in a non-linear and more appropriate way. We will see later that we don‘t need to calculate a lot of these as they perfectly fit a parabola, which can safely be extrapolated for higher values.
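For reference, solvers.qp expects problems in cvxopt's standard quadratic programming form
$$ \min_x \; \frac{1}{2} x^T P x + q^T x \quad \text{subject to} \quad Gx \preceq h, \quad Ax = b $$
so in the call above P = mu*S penalises variance, q = -pbar rewards expected return, G and h encode the no-short-selling constraint $w \geq 0$, and A and b enforce the budget constraint $\sum_{i}{w_i} = 1$.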
End of explanation
print weights
Explanation: In yellow you can see the optimal portfolios for each of the desired returns (i.e. the mus). In addition, we get the one optimal portfolio returned:
End of explanation
from zipline.utils.factory import load_bars_from_yahoo
end = pd.Timestamp.utcnow()
start = end - 2500 * pd.tseries.offsets.BDay()
data = load_bars_from_yahoo(stocks=['IBM', 'GLD', 'XOM', 'AAPL',
'MSFT', 'TLT', 'SHY'],
start=start, end=end)
data.loc[:, :, 'price'].plot(figsize=(8,5))
plt.ylabel('price in $')
Explanation: Backtesting on real market data
This is all very interesting but not very applied. We next demonstrate how you can create a simple algorithm in zipline -- the open-source backtester that powers Quantopian -- to test this optimization on actual historical stock data.
First, let's load in some historical data using Quantopian's data (if we are running in the Quantopian Research Platform), or the load_bars_from_yahoo() function from zipline.
End of explanation
import zipline
from zipline.api import (add_history,
history,
set_slippage,
slippage,
set_commission,
commission,
order_target_percent)
from zipline import TradingAlgorithm
def initialize(context):
'''
Called once at the very beginning of a backtest (and live trading).
Use this method to set up any bookkeeping variables.
The context object is passed to all the other methods in your algorithm.
Parameters
context: An initialized and empty Python dictionary that has been
augmented so that properties can be accessed using dot
notation as well as the traditional bracket notation.
Returns None
'''
# Register history container to keep a window of the last 100 prices.
add_history(100, '1d', 'price')
# Turn off the slippage model
set_slippage(slippage.FixedSlippage(spread=0.0))
# Set the commission model (Interactive Brokers Commission)
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(context, data):
'''
Called when a market event occurs for any of the algorithm's
securities.
Parameters
data: A dictionary keyed by security id containing the current
state of the securities in the algo's universe.
context: The same context object from the initialize function.
Stores the up to date portfolio as well as any state
variables defined.
Returns None
'''
# Allow history to accumulate 100 days of prices before trading
# and rebalance every day thereafter.
context.tick += 1
if context.tick < 100:
return
# Get rolling window of past prices and compute returns
prices = history(100, '1d', 'price').dropna()
returns = prices.pct_change().dropna()
try:
# Perform Markowitz-style portfolio optimization
weights, _, _ = optimal_portfolio(returns.T)
# Rebalance portfolio accordingly
for stock, weight in zip(prices.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
# Instantiate algorithm
algo = TradingAlgorithm(initialize=initialize,
handle_data=handle_data)
# Run algorithm
results = algo.run(data)
results.portfolio_value.plot()
Explanation: Next, we'll create a zipline algorithm by defining two functions -- initialize() which is called once before the simulation starts, and handle_data() which is called for every trading bar. We then instantiate the algorithm object.
If you are confused about the syntax of zipline, check out the tutorial.
End of explanation |
8,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Datasets
Datasets tell PHOEBE how and at what times to compute the model. In some cases these will include the actual observational data, and in other cases may only include the times at which you want to compute a synthetic model.
Adding a dataset - even if it doesn't contain any observational data - is required in order to compute a synthetic model (which will be described in the Compute Tutorial).
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: Adding a Dataset from Arrays
To add a dataset, you need to provide the function in
phoebe.parameters.dataset for the particular type of data you're dealing with, as well
as any of your "observed" arrays.
The current available methods include
Step2: Without Observations
The simplest case of adding a dataset is when you do not have observational "data" and only want to compute a synthetic model. Here all you need to provide is an array of times and information about the type of data and how to compute it.
Here we'll do just that - we'll add an orbit dataset which will track the positions and velocities of both our 'primary' and 'secondary' stars (by their component tags) at each of the provided times.
Unlike other datasets, the mesh and orb dataset cannot accept actual observations, so there is no times parameter, only the compute_times and compute_phases parameters. For more details on these, see the Advanced: Compute Times & Phases tutorial.
Step3: Here we used phoebe.linspace. This is essentially just a shortcut to np.linspace, but using nparray to allow these generated arrays to be serialized and stored easier within the Bundle. Other nparray constructor functions available at the top-level of PHOEBE include
Step4: You may notice that add_dataset does take some time to complete. In the background, the passband is being loaded (when applicable) and many parameters are created and attached to the Bundle.
If you do not provide a list of component(s), they will be assumed for you based on the dataset method. LCs (light curves) and meshes can only attach at the system level (component=None), for instance, whereas RVs and ORBs can attach for each star.
Step5: Here we added an RV dataset and can see that it was automatically created for both stars in our system. Under the hood, another entry is created for component='_default'. The default parameters hold the values that will be replicated if a new component is added to the system in the future. In order to see these hidden parameters, you need to pass check_default=False to any filter-type call (and note that '_default' is otherwise not exposed when calling .components). Also note that for set_value_all, this is automatically set to False.
Since we did not explicitly state that we only wanted the primary and secondary components, the time array on '_default' is filled as well. If we were then to add a tertiary component, its RVs would automatically be computed because of this replicated time array.
Step6: With Observations
Loading datasets with observations is (nearly) as simple.
Passing arrays to any of the dataset columns will apply it to all of the same components in which the time will be applied (see the 'Without Observations' section above for more details). This makes perfect sense for fluxes in light curves where the time and flux arrays are both at the system level
Step7: For datasets which attach to individual components, however, this isn't always the desired behavior.
For a single-lined RV where we only attach to one component, everything is as expected.
Step8: However, for a double-lined RV we probably don't want to do the following
Step9: Instead we want to pass different arrays to the 'rvs@primary' and 'rvs@secondary'. This can be done by explicitly stating the components in a dictionary sent to that argument | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: Datasets
Datasets tell PHOEBE how and at what times to compute the model. In some cases these will include the actual observational data, and in other cases may only include the times at which you want to compute a synthetic model.
Adding a dataset - even if it doesn't contain any observational data - is required in order to compute a synthetic model (which will be described in the Compute Tutorial).
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
phoebe.list_available_datasets()
Explanation: Adding a Dataset from Arrays
To add a dataset, you need to provide the function in
phoebe.parameters.dataset for the particular type of data you're dealing with, as well
as any of your "observed" arrays.
The current available methods include:
lc light curves (tutorial)
rv radial velocity curves (tutorial)
lp spectral line profiles (tutorial)
orb orbit/positional data (tutorial)
mesh discretized mesh of stars (tutorial)
which can always be listed via phoebe.list_available_datasets
End of explanation
b.add_dataset(phoebe.dataset.orb,
compute_times=phoebe.linspace(0,10,20),
dataset='orb01',
component=['primary', 'secondary'])
Explanation: Without Observations
The simplest case of adding a dataset is when you do not have observational "data" and only want to compute a synthetic model. Here all you need to provide is an array of times and information about the type of data and how to compute it.
Here we'll do just that - we'll add an orbit dataset which will track the positions and velocities of both our 'primary' and 'secondary' stars (by their component tags) at each of the provided times.
Unlike other datasets, the mesh and orb dataset cannot accept actual observations, so there is no times parameter, only the compute_times and compute_phases parameters. For more details on these, see the Advanced: Compute Times & Phases tutorial.
End of explanation
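For example, the same kind of dataset could be specified in phase space instead of time space (a minimal sketch; 'orb02' is just a hypothetical dataset tag):
b.add_dataset('orb',
              compute_phases=phoebe.linspace(0, 1, 101),
              dataset='orb02',
              component=['primary', 'secondary'])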
b.add_dataset('orb',
compute_times=phoebe.linspace(0,10,20),
component=['primary', 'secondary'],
dataset='orb01',
overwrite=True)
Explanation: Here we used phoebe.linspace. This is essentially just a shortcut to np.linspace, but using nparray to allow these generated arrays to be serialized and stored easier within the Bundle. Other nparray constructor functions available at the top-level of PHOEBE include:
phoebe.arange
phoebe.invspace
phoebe.linspace
phoebe.logspace
phoebe.geomspace
Any nparray object, list, or numpy array is acceptable as input to FloatArrayParameters.
b.add_dataset can either take a function or the name of a function in phoebe.parameters.dataset as its first argument. The following line would do the same thing (and we'll pass overwrite=True to avoid the error of overwriting dataset='orb01').
End of explanation
b.add_dataset('rv', times=phoebe.linspace(0,10,20), dataset='rv01')
print(b.filter(qualifier='times', dataset='rv01').components)
Explanation: You may notice that add_dataset does take some time to complete. In the background, the passband is being loaded (when applicable) and many parameters are created and attached to the Bundle.
If you do not provide a list of component(s), they will be assumed for you based on the dataset method. LCs (light curves) and meshes can only attach at the system level (component=None), for instance, whereas RVs and ORBs can attach for each star.
End of explanation
print(b.filter(qualifier='times', dataset='rv01', check_default=False).components)
print(b.get('times@_default@rv01', check_default=False))
Explanation: Here we added an RV dataset and can see that it was automatically created for both stars in our system. Under the hood, another entry is created for component='_default'. The default parameters hold the values that will be replicated if a new component is added to the system in the future. In order to see these hidden parameters, you need to pass check_default=False to any filter-type call (and note that '_default' is otherwise not exposed when calling .components). Also note that for set_value_all, this is automatically set to False.
Since we did not explicitly state that we only wanted the primary and secondary components, the time array on '_default' is filled as well. If we were then to add a tertiary component, its RVs would automatically be computed because of this replicated time array.
End of explanation
b.add_dataset('lc', times=[0,1], fluxes=[1,0.5], dataset='lc01')
print(b.get_parameter(qualifier='fluxes', dataset='lc01', context='dataset'))
Explanation: With Observations
Loading datasets with observations is (nearly) as simple.
Passing arrays to any of the dataset columns will apply it to all of the same components in which the time will be applied (see the 'Without Observations' section above for more details). This makes perfect sense for fluxes in light curves where the time and flux arrays are both at the system level:
End of explanation
b.add_dataset('rv',
times=[0,1],
rvs=[-3,3],
component='primary',
dataset='rv01',
overwrite=True)
print(b.get_parameter(qualifier='rvs', dataset='rv01', context='dataset'))
Explanation: For datasets which attach to individual components, however, this isn't always the desired behavior.
For a single-lined RV where we only attach to one component, everything is as expected.
End of explanation
b.add_dataset('rv',
times=[0,0.5,1],
rvs=[-3,3],
dataset='rv02')
print(b.filter(qualifier='rvs', dataset='rv02', context='dataset'))
Explanation: However, for a double-lined RV we probably don't want to do the following:
End of explanation
b.add_dataset('rv',
times=[0,0.5,1],
rvs={'primary': [-3,3], 'secondary': [4,-4]},
dataset='rv02',
overwrite=True)
print(b.filter(qualifier='rvs', dataset='rv02', context='dataset'))
Explanation: Instead we want to pass different arrays to the 'rvs@primary' and 'rvs@secondary'. This can be done by explicitly stating the components in a dictionary sent to that argument:
End of explanation |
8,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review of lists, loops, and more...
Here is another broken piece of code. Using what you learned from yesterday's lessons
fix what is broken
make comments to explain what is going on line-by-line (talk to the duck)
if you have time make some improvements!
Step1: Moving on to dictionaries
We are just about done with the data structures we will use in this Python course. There are other Python data structures, and many more concepts, but for now we will consider dictionaries
Step2: As with lists, a dictionary can be initialized empty, this time with the braces {}. Dictionaries have some properties in common with lists and strings, but there are some key differences. Dictionaries are
Step3: Based on the chart above, add the values for Group Id, Average Mass (g), and Number of Mice for the beta and gamma groups using parallel variable names (e.g. group_id...)
Step4: You can also explicitly add individual entries to your dictionary
Step5: You can also use variables and other string slicing methods we used earlier
Step6: One important property of a dictionary is that you can call entries explicitly (rather than referencing indices like 0, 1, or 2). First, here is some terminology for a dictionary object
Step7: You can also see a list of the keys a dictionary has
Step8: You can also check the values
Step9: Translating RNA > Protein
RNA codons are translated into amino acids according to a standard genetic code (see chart below); amino acids are represented here by their one-letter abbreviations.
Step10: Using what you have learned so far
Write the appropriate code to translate an RNA string to a protein sequence
Step11: Does your code work on the following RNA sequence?
Step12: Bonus
Step13: Translating DNA to RNA TO PROTEIN
Now that we have done each of the major biological sequences, you should be able to translate a DNA sequence into RNA, and then into a protein
Step14: FUNctions
Now that we have learned how to do several things together, let's wrap them into a function. We have already been using several functions, so now we can write our own
Step15: Two things are happening in this function, let's examine some of its elements
Step16: Local vs global
One other important element of functions is that variables included in the function, are not defined outside of the function
Step17: Variables defined inside the function are local to that function. Conversely, variables defined outside of the function are global and are defined everywhere in the block of code. This concept is referred to as namespace.
Step18: Returning values
Functions can also themselves return a value for use in other parts of your code. In this case the return keyword explicitly returns the value rna_1.
Step19: Challenge
Step20: In the statement above, we tell the function that it should be called with one parameter,
and that the value passed in is assigned to the local name rna within the function.
Step21: Function parameters can also be made optional. To make a parameter optional, you must give it a default value. That value could be the keyword None, an empty value like '', or a default value
# build a random dna sequence...
from numpy import random
final_sequence_length = eighty
initial_sequence_length = 81
dna_sequence = ''
my_nucleotides = [a,t,g,c]
my_nucleotide_probs = [0.25,0.25,0.25,0.3]
while initial_sequence_length < final_sequence_length:
nucleotide = random.choice(my_nucleotides,p=my_nucleotide_p)
dna_sequence = dna_sequence + nucleotide
initial_sequence_length = initial_sequence_length + 1
print '>random_sequence (length:%d)\n%s' % (len(dna_sequence), dna_sequence)
Explanation: Review of lists, loops, and more...
Here is another broken piece of code. Using what you learned from yesterday's lessons
fix what is broken
make comments to explain what is going on line-by-line (talk to the duck)
if you have time make some improvements!
End of explanation
my_dictionary = {}
Explanation: Moving on to dictionaries
We are just about done with the data structures we will use in this Python course. There are other Python data structures, and many more concepts, but for now we will consider dictionaries:
Check the type of my_dictionary in the cell below
End of explanation
my_mouse_exp = {'alpha_id':'CGJ28371',
'alpha_avr_mass':17.0,
'alpha_no_mice':'3'}
print my_mouse_exp
Explanation: As with lists, a dictionary can be initialized empty, this time with the braces {}. Dictionaries have some properties in common with lists and strings, but there are some key differences. Dictionaries are:
iterable
unordered
indexed (by keys)
Try printing the following dictionary based on some of the data recorded in a chart we used earlier
|Group|Number of Mice|Average Mass(g)|Group Id|
|-----|--------------|---------------|--------|
|alpha|3|17.0|CGJ28371|
|beta|5|16.4|SJW99399|
|gamma|6|17.8|PWS29382|
End of explanation
my_mouse_exp = {'alpha_id':'CGJ28371',
'alpha_avr_mass':17.0,
'alpha_no_mice':'3',}
Explanation: Based on the chart above, add the values for Group Id, Average Mass (g), and Number of Mice for the beta and gamma groups using parallel variable names (e.g. group_id...):
End of explanation
my_mouse_exp['alpha_experimenter'] = 'CGJ'
print my_mouse_exp
Explanation: You can also explicitly add individual entries to your dictionary:
End of explanation
beta_group_id = 'SJW99399'
my_mouse_exp['beta_experimenter'] = beta_group_id[0:3]
print my_mouse_exp
Explanation: You can also use variables and other string slicing methods we used earlier:
End of explanation
my_mouse_exp['alpha_id']
Explanation: One important property of a dictionary is that you can call entries explicitly (rather than referencing indices like 0, 1, or 2). First, here is some terminology for a dictionary object:
dictionary = { key:value }
A dictionary consists of some key (this is the name you choose for your entry) and some value (this is the entry itself). Generally keys are strings, but they can be almost any immutable (hashable) type; a mutable object like a list cannot be used as a key. A value can be just about anything.
You can call a specific value from a dictionary by giving its key:
End of explanation
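Note that asking for a key that does not exist raises a KeyError. If you are not sure a key is present, the .get() method lets you supply a default value instead (a quick sketch; 'delta_id' is a made-up key):
print my_mouse_exp.get('delta_id', 'no such group')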
my_mouse_exp.keys()
Explanation: You can also see a list of the keys a dictionary has:
End of explanation
my_mouse_exp.values()
Explanation: You can also check the values
End of explanation
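Because dictionaries are iterable, you can also loop over the keys and values together (a quick sketch using the dictionary from above):
for key, value in my_mouse_exp.items():
    print key, value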
amino_acids = {
'AUA':'I', 'AUC':'I', 'AUU':'I', 'AUG':'M',
'ACA':'T', 'ACC':'T', 'ACG':'T', 'ACU':'T',
'AAC':'N', 'AAU':'N', 'AAA':'K', 'AAG':'K',
'AGC':'S', 'AGU':'S', 'AGA':'R', 'AGG':'R',
'CUA':'L', 'CUC':'L', 'CUG':'L', 'CUU':'L',
'CCA':'P', 'CCC':'P', 'CCG':'P', 'CCU':'P',
'CAC':'H', 'CAU':'H', 'CAA':'Q', 'CAG':'Q',
'CGA':'R', 'CGC':'R', 'CGG':'R', 'CGU':'R',
'GUA':'V', 'GUC':'V', 'GUG':'V', 'GUU':'V',
'GCA':'A', 'GCC':'A', 'GCG':'A', 'GCU':'A',
'GAC':'D', 'GAU':'D', 'GAA':'E', 'GAG':'E',
'GGA':'G', 'GGC':'G', 'GGG':'G', 'GGU':'G',
'UCA':'S', 'UCC':'S', 'UCG':'S', 'UCU':'S',
'UUC':'F', 'UUU':'F', 'UUA':'L', 'UUG':'L',
'UAC':'Y', 'UAU':'Y', 'UAA':'_', 'UAG':'_',
'UGC':'C', 'UGU':'C', 'UGA':'_', 'UGG':'W'
}
Explanation: Translating RNA > Protein
RNA codons are translated into amino acids according to a standard genetic code (see chart below); amino acids are represented here by their one-letter abbreviations.
End of explanation
rna = 'AUGCAUGCGAAUGCAGCGGCUAGCAGACUGACUGUUAUGCUGGGAUCGUGCCGCUAG'
#This may or may not be helpful, but remember
#you can iterate over an arbitrary range of elements/numbers
#using the range() function
Explanation: Using what you have learned so far
Write the appropriate code to translate an RNA string to a protein sequence:
End of explanation
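If you get stuck, here is one possible approach (a minimal sketch; it assumes the amino_acids dictionary defined above):
protein = ''
# step through the rna string three letters (one codon) at a time
for index in range(0, len(rna) - 2, 3):
    codon = rna[index:index + 3]
    protein = protein + amino_acids[codon]
print protein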
rna = 'AUGCAAGACAGGGAUCUAUUUACGAUCAGGCAUCGAUCGAUCGAUGCUAGCUAGCGGGAUCGCACGAUACUAGCCCGAUGCUAGCUUUUAUGCUCGUAGCUGCCCGUACGUUAUUUAGCCUGCUGUGCGAAUGCAGCGGCUAGCAGACUGACUGUUAUGCUGGGAUCGUGCCGCUAG'
Explanation: Does your code work on the following RNA sequence?
End of explanation
rna = 'AUGCAAGACAGGGAUCUAUUUACGAUCAGGCAUCGAUCGAUCGAUGCUAGCUAGCGGGAUCGCACGAUACUAGCCCGAUGCUAGCUUUUAUGCUCGUAGCUGCCCGUACGUUAUUUAGCCUGCUGUGCGAAUGCAGCGGCUAGCAGACUGACUGUUAUGCUGGGAUCGUGCCGCUAG'
Explanation: Bonus: Can you translate this sequence in all 3 reading frames?
End of explanation
dna = 'ACGTCGTTTACGTACGGGAGTCGTACGATCCTCCCGTAGCTCGGGATCGTTTTATCGTAGCGGGAT'
Explanation: Translating DNA to RNA TO PROTEIN
Now that we have done each of the major biological sequences, you should be able to translate a DNA sequence into RNA, and then into a protein
End of explanation
def print_double():
print "Hello world"
print "Hello world"
print_double()
Explanation: FUNctions
Now that we have learned how to do several things together, let's wrap them into a function. We have already been using several functions, so now we can write our own:
End of explanation
print_tripple()
def print_tripple():
print "Hello world"
print "Hello world"
print "Hello world"
Explanation: Two things are happening in this function, let's examine some of its elements:
def function_name():
    (indent) instruction_block
def - this special word indicates you are defining a new Python function
function_name(): - this is the arbitrary name for your function, followed by parentheses
instruction_block - following an indent, this is a block of instructions. Everything included in this function must be indented to this level.
There is also one other element
function_name( ):
This line is the function call. As long as a function is defined above this call, the function will be run.
fix this code block, and call the function twice:
End of explanation
def prints_dna_len():
dna = 'gatgcattatcgtgagc'
prints_dna_len()
print dna
print len(dna)
Explanation: Local vs global
One other important element of functions is that variables included in the function, are not defined outside of the function:
End of explanation
more_dna = 'aaatcgatttttttt'
def prints_dna_twice():
print more_dna
print more_dna
prints_dna_twice()
Explanation: Variables defined inside the function are local to that function. Conversely, variables defined outside of the function are global and are defined everywhere in the block of code. This concept is referred to as namespace.
End of explanation
def dna_to_rna():
dna_1 = 'agcttttacgtcgatcctgcta'
rna_1 = dna_1.replace('t','u')
return rna_1
print dna_to_rna()
print type(dna_to_rna())
Explanation: Returning values
Functions can also themselves return a value for use in other parts of your code. In this case the return keyword explicitly returns the value rna_1.
End of explanation
def prints_rna_sequence(rna):
if 't' not in rna:
print rna
else:
print 'this is not rna!'
prints_rna_sequence('agaucgagcuacgua')
prints_rna_sequence('atcgcgcatcgatct')
Explanation: Challenge: Write some functions to do the following:
Write a function that calculates the GC content of a DNA string
Write a function that generates a random string of DNA of a random length
Parameters
Functions can also accept one or more parameters; we could expand our definition like this:
def function_name(parameter1, parameter2, parameterN):
    (indent) instruction_block
The parameter can have any name: the name of the parameter becomes the name of a local variable used within the function:
End of explanation
prints_rna_sequence()
#.find method returns the string index (e.g. string[x]) if the search string is found
# my_string = 'abc'
# my_string.find('a') would have the value 0 (e.g. string[0])
# If there is no match to the search, the .find() function returns -1
def print_dna_and_rna(dna,rna):
if dna.find('t')!= -1:
print 'here is your dna %s' % dna
elif dna.find('u')!= -1:
print 'This is RNA!: %s' % dna
if rna.find('t')!= -1:
print 'This is DNA!: %s' % rna
elif rna.find('u')!= -1:
print 'here is your rna %s' % rna
print_dna_and_rna('agatccgtcg','uagcugacug')
print_dna_and_rna('uagcugacug','agatccgtcg')
Explanation: In the statement above, we tell the function that it should be called with one parameter,
and that the value passed in is assigned to the local name rna within the function.
End of explanation
def print_dna_and_rna(dna, rna='', number_of_times_to_print=1):
if dna.find('t')!= -1:
print 'here is your dna %s \n' % dna * number_of_times_to_print
elif dna.find('u')!= -1:
print 'This is RNA!: %s \n' % dna * number_of_times_to_print
if rna.find('t')!= -1:
print 'This is DNA!: %s \n' % rna * number_of_times_to_print
elif rna.find('u')!= -1:
print 'here is your rna %s \n'% rna * number_of_times_to_print
print_dna_and_rna('agatccgtcg',)
print_dna_and_rna('uagcugacug','agatccgtcg',2)
print_dna_and_rna('agatccgtcg', number_of_times_to_print=6)
Explanation: Function parameters can also be made optional. To make a parameter optional, you must give it a default value. That value could be the keyword None, an empty value like '', or a default value:
End of explanation |
8,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Natural and artificial perturbations
Step1: Atmospheric drag
The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor decay of the near-Earth orbit over time using our new module poliastro.twobody.perturbations!
Step2: Evolution of RAAN due to the J2 perturbation
We can also see how the J2 perturbation changes RAAN over time!
Step3: 3rd body
Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
Step4: Thrusts
Apart from natural perturbations, there are artificial thrusts aimed at an intentional change of orbit parameters. One such change is a simultaneous change of eccentricity and inclination.
# Temporary hack, see https://github.com/poliastro/poliastro/issues/281
from IPython.display import HTML
HTML('<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.1.10/require.min.js"></script>')
import numpy as np
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
%matplotlib inline
import matplotlib.pyplot as plt
import functools
import numpy as np
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import solar_system_ephemeris
from poliastro.twobody.propagation import cowell
from poliastro.ephem import build_ephem_interpolant
from poliastro.core.elements import rv2coe
from poliastro.core.util import norm
from poliastro.util import time_range
from poliastro.core.perturbations import (
atmospheric_drag, third_body, J2_perturbation
)
from poliastro.bodies import Earth, Moon
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter, plot, OrbitPlotter3D
Explanation: Natural and artificial perturbations
End of explanation
R = Earth.R.to(u.km).value
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format='jd', scale='tdb'))
# parameters of a body
C_D = 2.2 # dimensionless (any value would do)
A = ((np.pi / 4.0) * (u.m**2)).to(u.km**2).value # km^2
m = 100 # kg
B = C_D * A / m
# parameters of the atmosphere
rho0 = Earth.rho0.to(u.kg / u.km**3).value # kg/km^3
H0 = Earth.H0.to(u.km).value
tof = (100000 * u.s).to(u.day).value
tr = time_range(0.0, periods=2000, end=tof, format='jd', scale='tdb')
cowell_with_ad = functools.partial(cowell, ad=atmospheric_drag,
R=R, C_D=C_D, A=A, m=m, H0=H0, rho0=rho0)
rr = orbit.sample(tr, method=cowell_with_ad)
plt.ylabel('h(t)')
plt.xlabel('t, days')
plt.plot(tr.value, rr.data.norm() - Earth.R)
Explanation: Atmospheric drag
The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor decay of the near-Earth orbit over time using our new module poliastro.twobody.perturbations!
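The perturbing acceleration follows the standard exponential-atmosphere drag model,
$$ \mathbf{a}_{drag} = -\frac{1}{2}\,\frac{C_D A}{m}\,\rho\,\|\mathbf{v}\|\,\mathbf{v}, \qquad \rho = \rho_0\, e^{-(r - R)/H_0} $$
which is why the code above passes the ballistic inputs C_D, A and m together with the atmosphere parameters rho0 and H0.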
End of explanation
r0 = np.array([-2384.46, 5729.01, 3050.46]) # km
v0 = np.array([-7.36138, -2.98997, 1.64354]) # km/s
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.from_vectors(Earth, r0 * u.km, v0 * u.km / u.s)
tof = (48.0 * u.h).to(u.s).value
rr, vv = cowell(orbit, np.linspace(0, tof, 2000), ad=J2_perturbation, J2=Earth.J2.value, R=Earth.R.to(u.km).value)
raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]
plt.ylabel('RAAN(t)')
plt.xlabel('t, s')
plt.plot(np.linspace(0, tof, 2000), raans)
Explanation: Evolution of RAAN due to the J2 perturbation
We can also see how the J2 perturbation changes RAAN over time!
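To first order, secular theory predicts a constant nodal drift rate
$$ \dot{\Omega} = -\frac{3}{2}\, J_2 \left(\frac{R}{p}\right)^{2} n \cos i, \qquad p = a\,(1 - e^{2}) $$
with $n$ the mean motion, so the numerically propagated RAAN above should change roughly linearly with time.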
End of explanation
# database keeping positions of bodies in Solar system over time
solar_system_ephemeris.set('de432s')
j_date = 2454283.0 * u.day # setting the exact event date is important
tof = (60 * u.day).to(u.s).value
# create interpolant of 3rd body coordinates (calling it on every iteration would be just too slow)
body_r = build_ephem_interpolant(Moon, 28 * u.day, (j_date, j_date + 60 * u.day), rtol=1e-2)
epoch = Time(j_date, format='jd', scale='tdb')
initial = Orbit.from_classical(Earth, 42164.0 * u.km, 0.0001 * u.one, 1 * u.deg,
0.0 * u.deg, 0.0 * u.deg, 0.0 * u.rad, epoch=epoch)
# multiply Moon gravity by 400 so that effect is visible :)
cowell_with_3rdbody = functools.partial(cowell, rtol=1e-6, ad=third_body,
k_third=400 * Moon.k.to(u.km**3 / u.s**2).value,
third_body=body_r)
tr = time_range(j_date.value, periods=1000, end=j_date.value + 60, format='jd', scale='tdb')
rr = initial.sample(tr, method=cowell_with_3rdbody)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label='orbit influenced by Moon')
frame.show()
Explanation: 3rd body
Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
End of explanation
from poliastro.twobody.thrust import change_inc_ecc
ecc_0, ecc_f = 0.4, 0.0
a = 42164 # km
inc_0 = 0.0 # rad, baseline
inc_f = (20.0 * u.deg).to(u.rad).value # rad
argp = 0.0 # rad, the method is efficient for 0 and 180
f = 2.4e-6 # km / s2
k = Earth.k.to(u.km**3 / u.s**2).value
s0 = Orbit.from_classical(
Earth,
a * u.km, ecc_0 * u.one, inc_0 * u.deg,
0 * u.deg, argp * u.deg, 0 * u.deg,
epoch=Time(0, format='jd', scale='tdb')
)
a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)
cowell_with_ad = functools.partial(cowell, rtol=1e-6, ad=a_d)
tr = time_range(0.0, periods=1000, end=(t_f * u.s).to(u.day).value, format='jd', scale='tdb')
rr = s0.sample(tr, method=cowell_with_ad)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label='orbit with artificial thrust')
frame.show()
Explanation: Thrusts
Apart from natural perturbations, there are artificial thrusts aimed at an intentional change of orbit parameters. One such change is a simultaneous change of eccentricity and inclination.
End of explanation |
8,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm4', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: GFDL-ESM4
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
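# Example (hypothetical selection) - an ENUM value must match one of the
# valid choices above verbatim (including any spelling quirks in the
# controlled vocabulary), e.g.:
# DOC.set_value("Nitrogen (N)")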
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
8,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: int16 활성화를 사용한 훈련 후 정수 양자화
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 16x8 양자화 모드를 사용할 수 있는지 확인합니다.
Step3: 모델 훈련 및 내보내기
Step4: 예를 들어, 단일 epoch에 대해서만 모델을 훈련시켰으므로 ~96% 정확성으로만 훈련합니다.
TensorFlow Lite 모델로 변환하기
이제 Python TFLiteConverter를 사용하여 훈련된 모델을 TensorFlow Lite 모델로 변환할 수 있습니다.
이제 TFliteConverter를 사용하여 모델을 기본 float32 형식으로 변환합니다.
Step5: .tflite 파일에 작성합니다.
Step6: 대신 모델을 16x8 양자화 모드로 양자화하려면 먼저 기본 최적화를 사용하도록 optimizations 플래그를 설정합니다. 그런 다음 16x8 양자화 모드가 대상 사양에서 지원되는 필수 연산임을 지정합니다.
Step7: int8 훈련 후 양자화의 경우와 마찬가지로, 변환기 옵션 inference_input(output)_type을 tf.int16으로 설정하여 완전히 정수 양자화된 모델을 생성할 수 있습니다.
보정 데이터를 설정합니다.
Step8: 마지막으로, 평소와 같이 모델을 변환합니다. 기본적으로 변환된 모델은 호출 편의를 위해 여전히 float 입력 및 출력을 사용합니다.
Step9: 결과 파일이 약 1/3 크기인 것에 주목합니다.
Step10: TensorFlow Lite 모델 실행하기
Python TensorFlow Lite 인터프리터를 사용하여 TensorFlow Lite 모델을 실행합니다.
인터프리터에 모델 로드하기
Step11: 하나의 이미지에서 모델 테스트하기
Step12: 모델 평가하기
Step13: 16x8 양자화된 모델에 대해 평가를 반복합니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
Explanation: Post-training integer quantization with int16 activations
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_integer_quant_16x8"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lite/performance/post_training_integer_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lite/performance/post_training_integer_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lite/performance/post_training_integer_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
TensorFlow Lite now supports converting activations to 16-bit integer values and weights to 8-bit integer values while converting a model from TensorFlow to TensorFlow Lite's flat buffer format. This mode is called the "16x8 quantization mode". It can significantly improve the accuracy of a quantized model when the activations are sensitive to quantization, while still reducing model size by roughly 3-4x. Moreover, the resulting fully quantized model can be consumed by integer-only hardware accelerators.
Some examples of models that benefit from this post-training quantization mode include:
super-resolution,
audio signal processing such as noise cancelling and beamforming,
image de-noising,
HDR reconstruction from a single image.
In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a TensorFlow Lite flatbuffer using this mode. At the end you check the accuracy of the converted model and compare it to the original float32 model. Note that this example demonstrates the usage of the mode; it does not show its benefits over the other quantization techniques available in TensorFlow Lite.
Build an MNIST model
Setup
End of explanation
tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
Explanation: Check that the 16x8 quantization mode is available.
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
Explanation: Train and export the model
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Explanation: For example, since the model was trained for only a single epoch, it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Now you can convert the trained model to a TensorFlow Lite model using the Python TFLiteConverter.
Now convert the model to the default float32 format using the TFLiteConverter.
End of explanation
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
Explanation: Write it out to a .tflite file.
End of explanation
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
Explanation: To instead quantize the model to the 16x8 quantization mode, first set the optimizations flag to use default optimizations, then specify that the 16x8 quantization mode is a required supported operation in the target specification.
End of explanation
mnist_train, _ = tf.keras.datasets.mnist.load_data()
images = tf.cast(mnist_train[0], tf.float32) / 255.0
mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)
def representative_data_gen():
for input_value in mnist_ds.take(100):
# Model has only one input so each data point has one element.
yield [input_value]
converter.representative_dataset = representative_data_gen
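# Optional (sketch): to force fully integer int16 inputs/outputs instead of the
# default float interface, the converter also exposes the following options
# (int16 support here may depend on the TensorFlow version):
# converter.inference_input_type = tf.int16
# converter.inference_output_type = tf.int16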
Explanation: As with int8 post-training quantization, you can produce a fully integer-quantized model by setting the converter options inference_input(output)_type to tf.int16.
Set the calibration data.
End of explanation
tflite_16x8_model = converter.convert()
tflite_model_16x8_file = tflite_models_dir/"mnist_model_quant_16x8.tflite"
tflite_model_16x8_file.write_bytes(tflite_16x8_model)
Explanation: Finally, convert the model as usual. By default, the converted model still uses float inputs and outputs for invocation convenience.
End of explanation
!ls -lh {tflite_models_dir}
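# A shell-free alternative (sketch) to the command above, using pathlib:
# print(tflite_model_file.stat().st_size, tflite_model_16x8_file.stat().st_size)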
Explanation: Note that the resulting file is roughly 1/3 the size.
End of explanation
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file))
interpreter_16x8.allocate_tensors()
Explanation: Run the TensorFlow Lite model
Run the TensorFlow Lite model using the Python TensorFlow Lite interpreter.
Load the model into the interpreter
End of explanation
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter_16x8.get_input_details()[0]["index"]
output_index = interpreter_16x8.get_output_details()[0]["index"]
interpreter_16x8.set_tensor(input_index, test_image)
interpreter_16x8.invoke()
predictions = interpreter_16x8.get_tensor(output_index)
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
Explanation: Test the model on a single image
End of explanation
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
Explanation: Evaluate the model
End of explanation
# NOTE: This quantization mode is an experimental post-training mode,
# it does not have any optimized kernels implementations or
# specialized machine learning hardware accelerators. Therefore,
# it could be slower than the float interpreter.
print(evaluate_model(interpreter_16x8))
Explanation: Repeat the evaluation for the 16x8 quantized model.
End of explanation |
8,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:00
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
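# Example (hypothetical name and email):
# DOC.set_author("Jane Doe", "jane.doe@example.org")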
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
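# Example (hypothetical value) - an INTEGER property takes a whole number:
# DOC.set_value(25)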
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
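# Example (hypothetical choice) - the string must match one of the valid
# choices above exactly:
# DOC.set_value("Explicit")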
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
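# Example (hypothetical value) - a FLOAT property takes a plain number:
# DOC.set_value(11.25)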
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
8,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2T_Pandas로 배우는 SQL 시작하기 (2) - JOIN ON
Step3: 텍스트 마이닝
특정 텍스트가 포함되어 있는 row를 가져오는 방법
GovernmentForm 에 "Republic"이라는 텍스트가 포함된 열 가져오기
1. pandas로
contains, startswith, endswith
Step4: JOIN(pandas에서는 merge 기능)
"sakila"로 들어와서 실습
Step5: address(주소), customer(이름)
pandas merge를 이용해서 유저의 이름과 주소가 같이 있는 DataFrame을 만들어보기
Step8: sql로 하는 첫번째 방법 => 바보 같은 방법이지만 중요하다. 원리를 이해해라
Step10: customer 599개 address는 603개가 있는 상황
Step12: 599 * 603 = 361197
데이터를 받아오는 데 오랜 시간이 걸렸다.
Step16: 즉, 데이터 테이블을 합친 다음에 ( 모든 row에 대한 곱 )
WHERE 조건문을 통해서 우리가 원하는 데이터만 가져오자.
Step17: pandas보다 sql에서 JOIN을 활용
sql 1번과 2번 중 무엇을 써야 하는가? JOIN!! JOIN을 쓰면 코드가 한 눈에 안 들어온다.
SQL (1) - SELECT ____ WHERE
SQL (2) - SELECT _ FROM _ JOIN ____
굳이 JOIN을 안 쓰더라도 상관 없어. WHERE문만 써도 괜찮아
World DB에서 Country와 City를 합쳐라. (Country Name, City Name)
3가지 방법으로. SQL(where, join)과 Pandas(merge)
Step18: (1) Pandas_merge
Step21: 1. 바보 같지만 원리가 중요한 방법(sql)
Step23: 2. JOIN을 이용한 방법(DB를 생각해주는 착한 방법) | Python Code:
import pymysql
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"world",
charset='utf8',
)
df = pd.read_sql("SELECT * FROM Country;", db)
#cursor
cursor = db.cursor()
# 1. 실제로 명령을 수행하는 부분 - 서버
cursor.execute("SELECT * FROM Country;")
# 2. 데이터를 가져오는 부분 - 서버 => 클라이언트
cursor.fetchall() #결과를 불러 온다. Pandas는 사실 이걸 읽는 것이다.
pd.read_sql("SELECT * FROM Country;", db)
Explanation: 2T_Pandas로 배우는 SQL 시작하기 (2) - JOIN ON
End of explanation
country_df = pd.read_sql("SELECT * FROM Country;", db)
country_df[country_df["GovernmentForm"].str.contains("Republic")].head(2)
country_df[country_df["GovernmentForm"].str.startswith("Republic")].head(2)
country_df[country_df["GovernmentForm"].str.endswith("Republic")].head(2)
gf_str = country_df["GovernmentForm"].str
gf_str. #이렇게 해서 str에 어떤 기능들이 있는 지 확인할 수 있다.
# SQL을 이용
SQL_QUERY =
SELECT Name, GovernmentForm
From Country
WHERE
GovernmentForm LIKE "Republic"
;
pd.read_sql(SQL_QUERY, db).head()
SQL_QUERY =
SELECT Name, GovernmentForm
From Country
WHERE
GovernmentForm LIKE "%Republic%"
;
pd.read_sql(SQL_QUERY, db)
Explanation: 텍스트 마이닝
특정 텍스트가 포함되어 있는 row를 가져오는 방법
GovernmentForm 에 "Republic"이라는 텍스트가 포함된 열 가져오기
1. pandas로
contains, startswith, endswith
End of explanation
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"sakila",
charset = "utf8",
)
Explanation: JOIN(pandas에서는 merge 기능)
"sakila"로 들어와서 실습
End of explanation
customer_df = pd.read_sql("SELECT * FROM customer;", db)
address_df = pd.read_sql("SELECT * FROM address;", db)
customer_df.columns
address_df.columns
customer_df.merge(address_df, on="address_id")[["first_name", "last_name", "address"]]
# left_on="address_id"
# right_on="address_id"
# => on...
Explanation: address(주소), customer(이름)
pandas merge를 이용해서 유저의 이름과 주소가 같이 있는 DataFrame을 만들어보기
End of explanation
SQL_QUERY =
SELECT COUNT(*)
FROM customer
;
pd.read_sql(SQL_QUERY, db).head()
SQL_QUERY =
SELECT COUNT(*)
FROM address
;
pd.read_sql(SQL_QUERY, db).head()
Explanation: sql로 하는 첫번째 방법 => 바보 같은 방법이지만 중요하다. 원리를 이해해라
End of explanation
SQL_QUERY =
SELECT COUNT(*)
FROM customer, address
;
pd.read_sql(SQL_QUERY, db).head()
Explanation: customer 599개 address는 603개가 있는 상황
End of explanation
SQL_QUERY =
SELECT customer.first_name, customer.last_name, address.address
FROM customer, address
WHERE
customer.address_id = address.address_id
;
df = pd.read_sql(SQL_QUERY, db).head()
Explanation: 599 * 603 = 361197
데이터를 받아오는 데 오랜 시간이 걸렸다.
End of explanation
SQL_QUERY =
SELECT customer.first_name, customer.last_name, address.address
FROM customer
JOIN address ON customer.address_id = address.address_id
;
pd.read_sql(SQL_QUERY, db).head()
import time
start_time = time.time()
customer_df = pd.read_sql("SELECT * FROM customer;", db)
address_df = pd.read_sql("SELECT * FROM address;", db)
df = customer_df.merge(address_df, on="address_id")
end_time = time.time()
exec_time = end_time - start_time
print(exec_time)
start_time = time.time()
SQL_QUERY =
SELECT customer.first_name, customer.last_name, address.address
FROM customer, address
WHERE
customer.address_id = address.address_id
;
df = pd.read_sql(SQL_QUERY, db)
end_time = time.time()
exec_time = end_time - start_time
print(exec_time)
start_time = time.time()
SQL_QUERY =
SELECT customer.first_name, customer.last_name, address.address
FROM customer
JOIN address ON customer.address_id = address.address_id
;
pd.read_sql(SQL_QUERY, db)
end_time = time.time()
exec_time = end_time - start_time
print(exec_time)
Explanation: 즉, 데이터 테이블을 합친 다음에 ( 모든 row에 대한 곱 )
WHERE 조건문을 통해서 우리가 원하는 데이터만 가져오자.
End of explanation
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"world",
charset='utf8',
)
country_df = pd.read_sql("SELECT * FROM Country;", db)
city_df = pd.read_sql("SELECT * FROM City;", db)
Explanation: pandas보다 sql에서 JOIN을 활용
sql 1번과 2번 중 무엇을 써야 하는가? JOIN!! JOIN을 쓰면 코드가 한 눈에 안 들어온다.
SQL (1) - SELECT ____ WHERE
SQL (2) - SELECT _ FROM _ JOIN ____
굳이 JOIN을 안 쓰더라도 상관 없어. WHERE문만 써도 괜찮아
World DB에서 Country와 City를 합쳐라. (Country Name, City Name)
3가지 방법으로. SQL(where, join)과 Pandas(merge)
End of explanation
country_df.columns
city_df.columns
city_df.merge(country_df, right_on="Code", left_on="CountryCode")[["Name_x", "Name_y"]]
Explanation: (1) Pandas_merge
End of explanation
SQL_QUERY =
SELECT Country.Name "Country Name", City.Name "City Name"
FROM Country, City
WHERE Country.Code = City.CountryCode
;
pd.read_sql(SQL_QUERY, db).head(2)
SQL_QUERY =
SELECT co.Name "Country Name", ci.Name "City Name"
FROM Country co, City ci
WHERE co.Code = ci.CountryCode
;
pd.read_sql(SQL_QUERY, db).head(2)
Explanation: 1. 바보 같지만 원리가 중요한 방법(sql)
End of explanation
SQL_QUERY =
SELECT co.Name "Country Name", ci.Name "City Name"
FROM Country co
JOIN City ci
ON co.code = ci.CountryCode
;
pd.read_sql(SQL_QUERY, db).head(2)
Explanation: 2. JOIN을 이용한 방법(DB를 생각해주는 착한 방법)
End of explanation |
8,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning and Adjusting Tuning Curves
This is a quick notebook just to sketch out some initial stages of looking at modelling Aaron Batista's data on training macaques to use BCIs, and how that interacts with the represented low-dimensional manifolds in M1.
First, we start with the required dependencies. They are all Python libraries that can be installed with pip (e.g. pip install nengo)
Step1: Now we build our actual model. We start by defining M1. Even though I'll only be extracting 2 dimensions out of this neural activity, I assume that it's probably encoding more than that, so let's go with saying that it's a 3-dimensional manifold.
Step2: M1 is gets its inputs from earlier motor areas, so let's add that in. We'll call it PMC (or maybe SMA), and we'll assume that it's doing a few different things, so let's arbitarily pick that it's a 6-dimensional manifold.
We then connect it to M1. When we do this, we can specify the relationship that we want between these manifolds and Nengo will find the synaptic connection weights that best approximate that mapping. This mapping can be a non-linear function (although the more non-linear it is, the less accurately it'll approximate that function). Here, we just do a linear function that grabs the first 3 dimensions from pmc and sends it to m1.
This is the connection that will actually end up being adjusted during learning, so we also define a learning rule. This is the PES rule, which is really just standard delta rule (i.e. a supervised learning rule on just that one set of connections, which ends up being backprop without backpropagation).
Step3: We should have some actual input to our system. Let's just do a random band-limited white noise signal, with a maximum frequency of 5Hz. Note that this input is a 6-dimensional input (i.e. it's in the low-D manifold space, not in the 500-D neurons space). The first 2 dimensions of this we will consider to be our target X,Y location that we want to decode out of the M1 representation.
Note that because of how we set up our connection above, the initial neuron model will be one that does send that information (the values we want to decode) to M1.
Step4: Now we define our BCI node. This will take in spiking data from an ensemble of neurons, and apply some linear transform on the spikes, projecting it down into some smaller space. The begin with, this calls into nengo to compute the ideal default decoding. However, if we change this self.decoder, then we change the mapping that the model has to learn.
Note
Step5: If we left it like this, then the way we're decoding the spikes in the BCI is exactly what the neural model is already doing, so it would give perfect behaviour instantly and there'd be nothing to learn.
Let's give it something to learn by swapping the 2 dimensions being decoded. That is, we're staying in manifold, but swapping dimension 1 and dimension 2
Step6: Now we need to give the system a learning signal. The solution provided here is cheating a fair bit, in that it already knows which direction to change things to improve the results. A more complete solution would require learning to characterize the relationship between the observed change in cursor position and the desired change. This is exactly the sort of learning we've done elsewhere in the adaptive Jacobian kinematics learning models (such as http
Step7: Finally, we mark particular data to be recorded during the model run.
Step8: Now we run the simulation for 500 seconds.
Step9: Let's see what it's doing at a behavioural level. First, we plot behaviour in the first second
Step10: As expected, the initial output of the system is exactly backwards (the first dimension and the second dimension are swapped).
What happens after 100 seconds of learning?
Step11: Things have changed, but it's still rather wrong.
What happens after 200 seconds of learning?
Step12: Getting better.
What happens after 500 seconds of learning?
Step13: It has successfully learned the new mapping.
Tuning curves
The whole point of all this was to see what's happening to the tuning curves.
To plot these, the simplest thing to do is just do a density plot of what target x,y locations the neurons spike for at different times in the experiment.
Step14: Here is the tuning curve for neuron #2, with the initial tuning curve in red (using data from t=0 to t=100) and the final tuning curve in blue (using data from t=400 to t=500).
Step15: Here's a neuron that didn't change much
Step16: And some more neurons. | Python Code:
%matplotlib inline
import pylab # plotting
import seaborn # plotting
import numpy as np # math functions
import nengo # neural modelling
Explanation: Learning and Adjusting Tuning Curves
This is a quick notebook just to sketch out some initial stages of looking at modelling Aaron Batista's data on training macaques to use BCIs, and how that interacts with the represented low-dimensional manifolds in M1.
First, we start with the required dependencies. They are all Python libraries that can be installed with pip (e.g. pip install nengo)
End of explanation
model = nengo.Network()
with model:
m1 = nengo.Ensemble(n_neurons=500, dimensions=3)
Explanation: Now we build our actual model. We start by defining M1. Even though I'll only be extracting 2 dimensions out of this neural activity, I assume that it's probably encoding more than that, so let's go with saying that it's a 3-dimensional manifold.
End of explanation
with model:
pmc = nengo.Ensemble(n_neurons=500, dimensions=6)
# function to approximate
def starting_map(x):
return x[0], x[1], x[2]
c = nengo.Connection(pmc, m1, function=starting_map,
learning_rule_type=nengo.PES(learning_rate=1e-5))
Explanation: M1 is gets its inputs from earlier motor areas, so let's add that in. We'll call it PMC (or maybe SMA), and we'll assume that it's doing a few different things, so let's arbitarily pick that it's a 6-dimensional manifold.
We then connect it to M1. When we do this, we can specify the relationship that we want between these manifolds and Nengo will find the synaptic connection weights that best approximate that mapping. This mapping can be a non-linear function (although the more non-linear it is, the less accurately it'll approximate that function). Here, we just do a linear function that grabs the first 3 dimensions from pmc and sends it to m1.
This is the connection that will actually end up being adjusted during learning, so we also define a learning rule. This is the PES rule, which is really just standard delta rule (i.e. a supervised learning rule on just that one set of connections, which ends up being backprop without backpropagation).
End of explanation
with model:
stim = nengo.Node(nengo.processes.WhiteSignal(period=500, high=5), size_out=6)
nengo.Connection(stim, pmc)
Explanation: We should have some actual input to our system. Let's just do a random band-limited white noise signal, with a maximum frequency of 5Hz. Note that this input is a 6-dimensional input (i.e. it's in the low-D manifold space, not in the 500-D neurons space). The first 2 dimensions of this we will consider to be our target X,Y location that we want to decode out of the M1 representation.
Note that because of how we set up our connection above, the initial neuron model will be one that does send that information (the values we want to decode) to M1.
End of explanation
class BCINode(nengo.Node):
def __init__(self, ensemble, dimensions, seed=1):
ensemble.seed = seed
self.decoder = self.get_decoder(ensemble)[:dimensions]
super(BCINode, self).__init__(self.decode, size_in=ensemble.n_neurons, size_out=dimensions)
# defines the behaviour of this Node in the running model
def decode(self, t, x):
return np.dot(self.decoder, x)
# use nengo to compute the ideal decoder for this neural population
def get_decoder(self, ens):
assert ens.seed is not None
net = nengo.Network(add_to_container=False)
net.ensembles.append(ens)
with net:
c = nengo.Connection(ens, ens)
sim = nengo.Simulator(net, progress_bar=False)
return sim.data[c].weights
with model:
bci = BCINode(m1, dimensions=2)
nengo.Connection(m1.neurons, bci)
Explanation: Now we define our BCI node. This will take in spiking data from an ensemble of neurons, and apply some linear transform on the spikes, projecting it down into some smaller space. The begin with, this calls into nengo to compute the ideal default decoding. However, if we change this self.decoder, then we change the mapping that the model has to learn.
Note: this is using a few different rather esoteric Nengo tricks, so probably isn't all that readable. But the final result is something that has as input spikes from N neurons, and as output has a 2-element vector that is formed by linearly combining the spike trains. By default it's using the same linear combination that the full model is using (i.e. as if we were able to find the actual mapping that the macaque is using), but we'll change that to a different mapping that it has to learn.
End of explanation
bci.decoder = np.vstack([bci.decoder[1], bci.decoder[0]])
Explanation: If we left it like this, then the way we're decoding the spikes in the BCI is exactly what the neural model is already doing, so it would give perfect behaviour instantly and there'd be nothing to learn.
Let's give it something to learn by swapping the 2 dimensions being decoded. That is, we're staying in manifold, but swapping dimension 1 and dimension 2
End of explanation
with model:
# population representing the error signal
error = nengo.Ensemble(n_neurons=500, dimensions=3)
# feedback from the BCI as to where the cursor actually went to
nengo.Connection(bci, error[:2], transform=1)
# minus the actual desired location
nengo.Connection(pmc[:2], error[:2], transform=-1)
# use this difference to drive the learning from pmc to m1
nengo.Connection(error, c.learning_rule, transform=[[0,1,0],[1,0,0],[0,0,1]])
Explanation: Now we need to give the system a learning signal. The solution provided here is cheating a fair bit, in that it already knows which direction to change things to improve the results. A more complete solution would require learning to characterize the relationship between the observed change in cursor position and the desired change. This is exactly the sort of learning we've done elsewhere in the adaptive Jacobian kinematics learning models (such as http://rspb.royalsocietypublishing.org/content/283/1843/20162134) but for this demonstration I'm cheating and skipping that part.
End of explanation
with model:
p_out = nengo.Probe(bci) # the output from the BCI
p_stim = nengo.Probe(stim[:2]) # the desired target location
p_spikes = nengo.Probe(m1.neurons) # the spike data in m1
Explanation: Finally, we mark particular data to be recorded during the model run.
End of explanation
sim = nengo.Simulator(model)
sim.run(500)
Explanation: Now we run the simulation for 500 seconds.
End of explanation
pylab.plot(sim.trange(), sim.data[p_out], label='BCI output')
pylab.gca().set_color_cycle(None)
pylab.plot(sim.trange(), sim.data[p_stim], linestyle='--', label='desired')
pylab.xlim(0,1)
pylab.legend(loc='best')
pylab.show()
Explanation: Let's see what it's doing at a behavioural level. First, we plot behaviour in the first second
End of explanation
pylab.plot(sim.trange(), sim.data[p_out], label='BCI output')
pylab.gca().set_color_cycle(None)
pylab.plot(sim.trange(), sim.data[p_stim], linestyle='--', label='desired')
pylab.xlim(100, 101)
pylab.legend(loc='best')
pylab.show()
Explanation: As expected, the initial output of the system is exactly backwards (the first dimension and the second dimension are swapped).
What happens after 100 seconds of learning?
End of explanation
pylab.plot(sim.trange(), sim.data[p_out], label='BCI output')
pylab.gca().set_color_cycle(None)
pylab.plot(sim.trange(), sim.data[p_stim], linestyle='--', label='desired')
pylab.xlim(200, 201)
pylab.legend(loc='best')
pylab.show()
Explanation: Things have changed, but it's still rather wrong.
What happens after 200 seconds of learning?
End of explanation
pylab.plot(sim.trange(), sim.data[p_out], label='BCI output')
pylab.gca().set_color_cycle(None)
pylab.plot(sim.trange(), sim.data[p_stim], linestyle='--', label='desired')
pylab.xlim(499, 500)
pylab.legend(loc='best')
pylab.show()
Explanation: Getting better.
What happens after 500 seconds of learning?
End of explanation
def plot_tuning(index, t_start=0, t_end=10, cmap='Reds'):
times = sim.trange()
spikes = sim.data[p_spikes][:,index]
spikes = np.where(times>t_end, 0, spikes)
spikes = np.where(times<t_start, 0, spikes)
value = sim.data[p_stim]
v = value[np.where(spikes>0)]
seaborn.kdeplot(v[:,0], v[:,1], shade=True, shade_lowest=False, cmap=cmap, alpha=1.0)
Explanation: It has successfully learned the new mapping.
Tuning curves
The whole point of all this was to see what's happening to the tuning curves.
To plot these, the simplest thing to do is just do a density plot of what target x,y locations the neurons spike for at different times in the experiment.
End of explanation
index = 2
pylab.figure(figsize=(6,6))
plot_tuning(index, t_start=0, t_end=100, cmap='Reds')
plot_tuning(index, t_start=400, t_end=500, cmap='Blues')
Explanation: Here is the tuning curve for neuron #2, with the initial tuning curve in red (using data from t=0 to t=100) and the final tuning curve in blue (using data from t=400 to t=500).
End of explanation
index = 3
pylab.figure(figsize=(6,6))
plot_tuning(index, t_start=0, t_end=100, cmap='Reds')
plot_tuning(index, t_start=400, t_end=500, cmap='Blues')
Explanation: Here's a neuron that didn't change much
End of explanation
index = 5
pylab.figure(figsize=(6,6))
plot_tuning(index, t_start=0, t_end=100, cmap='Reds')
plot_tuning(index, t_start=400, t_end=500, cmap='Blues')
index = 6
pylab.figure(figsize=(6,6))
plot_tuning(index, t_start=0, t_end=100, cmap='Reds')
plot_tuning(index, t_start=400, t_end=500, cmap='Blues')
Explanation: And some more neurons.
End of explanation |
8,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Sequence classification with LSTM on MNIST</center>
<div class="alert alert-block alert-info">
<font size = 3><strong>In this notebook you will learn the How to use TensorFlow for create a Recurrent Neural Network</strong></font>
<br>
- <a href="#intro">Introduction</a>
<br>
- <p><a href="#arch">Architectures</a></p>
- <a href="#lstm">Long Short-Term Memory Model (LSTM)</a>
- <p><a href="#build">Building a LSTM with TensorFlow</a></p>
</div>
<a id="intro"/> Introduction
Recurrent Neural Networks are Deep Learning models with simple structures and a feedback mechanism builted-in, or in different words, the output of a layer is added to the next input and fed back to the same layer.
The Recurrent Neural Network is a specialized type of Neural Network that solves the issue of maintaining context for Sequential data -- such as Weather data, Stocks, Genes, etc. At each iterative step, the processing unit takes in an input and the current state of the network, and produces an output and a new state that is re-fed into the network.
However, this model has some problems. It's very computationally expensive to maintain the state for a large amount of units, even more so over a long amount of time. Additionally, Recurrent Networks are very sensitive to changes in their parameters. As such, they are prone to different problems with their Gradient Descent optimizer -- they either grow exponentially (Exploding Gradient) or drop down to near zero and stabilize (Vanishing Gradient), both problems that greatly harm a model's learning capability.
To solve these problems, Hochreiter and Schmidhuber published a paper in 1997 describing a way to keep information over long periods of time and additionally solve the oversensitivity to parameter changes, i.e., make backpropagating through the Recurrent Networks more viable.
(In this notebook, we will cover only LSTM and its implementation using TensorFlow)
<a id="arch"/>Architectures
Fully Recurrent Network
Recursive Neural Networks
Hopfield Networks
Elman Networks and Jordan Networks
Echo State Networks
Neural history compressor
The Long Short-Term Memory Model (LSTM)
<img src="https
Step1: The function input_data.read_data_sets(...) loads the entire dataset and returns an object tensorflow.contrib.learn.python.learn.datasets.mnist.DataSets
The argument (one_hot=False) creates the label arrays as 10-dimensional binary vectors (only zeros and ones), in which the index cell for the number one, is the class label.
Step2: Let's get one sample, just to understand the structure of MNIST dataset
The next code snippet prints the label vector (one_hot format), the class and actual sample formatted as image
Step3: Let's Understand the parameters, inputs and outputs
We will treat the MNIST image $\in \mathcal{R}^{28 \times 28}$ as $28$ sequences of a vector $\mathbf{x} \in \mathcal{R}^{28}$.
Our simple RNN consists of
One input layer which converts a $28$ dimensional input to an $128$ dimensional hidden layer,
One intermediate recurrent neural network (LSTM)
One output layer which converts an $128$ dimensional output of the LSTM to $10$ dimensional output indicating a class label.
Step4: Construct a Recurrent Neural Network
Step5: The input should be a Tensor of shape
Step6: labels and logits should be tensors of shape [100x10]
Step7: Just recall that we will treat the MNIST image $\in \mathcal{R}^{28 \times 28}$ as $28$ sequences of a vector $\mathbf{x} \in \mathcal{R}^{28}$. | Python Code:
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("../../data/MNIST/", one_hot=True)
Explanation: <center> Sequence classification with LSTM on MNIST</center>
<div class="alert alert-block alert-info">
<font size = 3><strong>In this notebook you will learn the How to use TensorFlow for create a Recurrent Neural Network</strong></font>
<br>
- <a href="#intro">Introduction</a>
<br>
- <p><a href="#arch">Architectures</a></p>
- <a href="#lstm">Long Short-Term Memory Model (LSTM)</a>
- <p><a href="#build">Building a LSTM with TensorFlow</a></p>
</div>
<a id="intro"/> Introduction
Recurrent Neural Networks are Deep Learning models with simple structures and a feedback mechanism builted-in, or in different words, the output of a layer is added to the next input and fed back to the same layer.
The Recurrent Neural Network is a specialized type of Neural Network that solves the issue of maintaining context for Sequential data -- such as Weather data, Stocks, Genes, etc. At each iterative step, the processing unit takes in an input and the current state of the network, and produces an output and a new state that is re-fed into the network.
However, this model has some problems. It's very computationally expensive to maintain the state for a large amount of units, even more so over a long amount of time. Additionally, Recurrent Networks are very sensitive to changes in their parameters. As such, they are prone to different problems with their Gradient Descent optimizer -- they either grow exponentially (Exploding Gradient) or drop down to near zero and stabilize (Vanishing Gradient), both problems that greatly harm a model's learning capability.
To solve these problems, Hochreiter and Schmidhuber published a paper in 1997 describing a way to keep information over long periods of time and additionally solve the oversensitivity to parameter changes, i.e., make backpropagating through the Recurrent Networks more viable.
(In this notebook, we will cover only LSTM and its implementation using TensorFlow)
<a id="arch"/>Architectures
Fully Recurrent Network
Recursive Neural Networks
Hopfield Networks
Elman Networks and Jordan Networks
Echo State Networks
Neural history compressor
The Long Short-Term Memory Model (LSTM)
<img src="https://ibm.box.com/shared/static/v7p90neiaqghmpwawpiecmz9n7080m59.png" alt="Representation of a Recurrent Neural Network" width=80%>
<a id="lstm"/>LSTM
LSTM is one of the proposed solutions or upgrades to the Recurrent Neural Network model.
It is an abstraction of how computer memory works. It is "bundled" with whatever processing unit is implemented in the Recurrent Network, although outside of its flow, and is responsible for keeping, reading, and outputting information for the model. The way it works is simple: you have a linear unit, which is the information cell itself, surrounded by three logistic gates responsible for maintaining the data. One gate is for inputting data into the information cell, one is for outputting data from the input cell, and the last one is to keep or forget data depending on the needs of the network.
Thanks to that, it not only solves the problem of keeping states, because the network can choose to forget data whenever information is not needed, it also solves the gradient problems, since the Logistic Gates have a very nice derivative.
Long Short-Term Memory Architecture
As seen before, the Long Short-Term Memory is composed of a linear unit surrounded by three logistic gates. The name for these gates vary from place to place, but the most usual names for them are:
- the "Input" or "Write" Gate, which handles the writing of data into the information cell,
- the "Output" or "Read" Gate, which handles the sending of data back onto the Recurrent Network, and
- the "Keep" or "Forget" Gate, which handles the maintaining and modification of the data stored in the information cell.
<img src=https://ibm.box.com/shared/static/zx10duv5egw0baw6gh2hzsgr8ex45gsg.png width="720"/>
<center>Diagram of the Long Short-Term Memory Unit</center>
The three gates are the centerpiece of the LSTM unit. The gates, when activated by the network, perform their respective functions. For example, the Input Gate will write whatever data it is passed onto the information cell, the Output Gate will return whatever data is in the information cell, and the Keep Gate will maintain the data in the information cell. These gates are analog and multiplicative, and as such, can modify the data based on the signal they are sent.
<a id="build"/> Building a LSTM with TensorFlow
LSTM for Classification
Although RNN is mostly used to model sequences and predict sequential data, we can still classify images using a LSTM network. If we consider every image row as a sequence of pixels, we can feed a LSTM network for classification. Lets use the famous MNIST dataset here. Because MNIST image shape is 28*28px, we will then handle 28 sequences of 28 steps for every sample.
MNIST Dataset
Tensor flow already provides helper functions to download and process the MNIST dataset.
End of explanation
trainimgs = mnist.train.images
trainlabels = mnist.train.labels
testimgs = mnist.test.images
testlabels = mnist.test.labels
ntrain = trainimgs.shape[0]
ntest = testimgs.shape[0]
dim = trainimgs.shape[1]
nclasses = trainlabels.shape[1]
print("Train Images: ", trainimgs.shape)
print("Train Labels ", trainlabels.shape)
print()
print("Test Images: " , testimgs.shape)
print("Test Labels: ", testlabels.shape)
Explanation: The function input_data.read_data_sets(...) loads the entire dataset and returns an object tensorflow.contrib.learn.python.learn.datasets.mnist.DataSets
The argument (one_hot=False) creates the label arrays as 10-dimensional binary vectors (only zeros and ones), in which the index cell for the number one, is the class label.
End of explanation
samplesIdx = [100, 101, 102] #<-- You can change these numbers here to see other samples
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.imshow(testimgs[samplesIdx[0]].reshape([28,28]), cmap='gray')
xx, yy = np.meshgrid(np.linspace(0,28,28), np.linspace(0,28,28))
X = xx ; Y = yy
Z = 100*np.ones(X.shape)
img = testimgs[77].reshape([28,28])
ax = fig.add_subplot(122, projection='3d')
ax.set_zlim((0,200))
offset=200
for i in samplesIdx:
img = testimgs[i].reshape([28,28]).transpose()
ax.contourf(X, Y, img, 200, zdir='z', offset=offset, cmap="gray")
offset -= 100
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
plt.show()
for i in samplesIdx:
print("Sample: {0} - Class: {1} - Label Vector: {2} ".format(i, np.nonzero(testlabels[i])[0], testlabels[i]))
Explanation: Let's get one sample, just to understand the structure of MNIST dataset
The next code snippet prints the label vector (one_hot format), the class and actual sample formatted as image:
End of explanation
n_input = 28 # MNIST data input (img shape: 28*28)
n_steps = 28 # timesteps
n_hidden = 128 # hidden layer num of features
n_classes = 10 # MNIST total classes (0-9 digits)
learning_rate = 0.001
training_iters = 100000
batch_size = 100
display_step = 10
Explanation: Let's Understand the parameters, inputs and outputs
We will treat the MNIST image $\in \mathcal{R}^{28 \times 28}$ as $28$ sequences of a vector $\mathbf{x} \in \mathcal{R}^{28}$.
Our simple RNN consists of
One input layer which converts a $28$ dimensional input to an $128$ dimensional hidden layer,
One intermediate recurrent neural network (LSTM)
One output layer which converts an $128$ dimensional output of the LSTM to $10$ dimensional output indicating a class label.
End of explanation
x = tf.placeholder(dtype="float", shape=[None, n_steps, n_input], name="x")
y = tf.placeholder(dtype="float", shape=[None, n_classes], name="y")
weights = {
'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([n_classes]))
}
Explanation: Construct a Recurrent Neural Network
End of explanation
# Define a lstm cell with tensorflow
lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
#initial state
#initial_state = (tf.zeros([1,n_hidden]),)*2
def RNN(x, weights, biases):
# Prepare data shape to match `rnn` function requirements
# Current data input shape: (batch_size, n_steps, n_input) [100x28x28]
# Define a lstm cell with tensorflow
lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Get lstm cell output
outputs, states = tf.nn.dynamic_rnn(lstm_cell, inputs=x, dtype=tf.float32)
# Get lstm cell output
#outputs, states = lstm_cell(x , initial_state)
# The output of the rnn would be a [100x28x128] matrix. we use the linear activation to map it to a [?x10 matrix]
# Linear activation, using rnn inner loop last output
# output [100x128] x weight [128, 10] + []
output = tf.reshape(tf.split(outputs, 28, axis=1, num=None, name='split')[-1],[-1,128])
return tf.matmul(output, weights['out']) + biases['out']
with tf.variable_scope('forward3'):
pred = RNN(x, weights, biases)
Explanation: The input should be a Tensor of shape: [batch_size, max_time, ...], in our case it would be (?, 28, 28)
End of explanation
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred ))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
accuracy_v2 = tf.contrib.metrics.accuracy(
labels=tf.arg_max(y, dimension=1),
predictions=tf.arg_max(pred, dimension=1)
)
Explanation: labels and logits should be tensors of shape [100x10]
End of explanation
sess = tf.InteractiveSession()
init = tf.global_variables_initializer()
sess.run(init)
step = 1
# Keep training until reach max iterations
while step * batch_size < training_iters:
# We will read a batch of 100 images [100 x 784] as batch_x
# batch_y is a matrix of [100x10]
batch_x, batch_y = mnist.train.next_batch(batch_size)
# We consider each row of the image as one sequence
# Reshape data to get 28 seq of 28 elements, so that, batxh_x is [100x28x28]
batch_x = batch_x.reshape((batch_size, n_steps, n_input))
# Run optimization op (backprop)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
if step % display_step == 0:
# Calculate batch accuracy
acc, acc2 = sess.run([accuracy, accuracy_v2], feed_dict={x: batch_x, y: batch_y})
# Calculate batch loss
loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
print("({} / {}) Minibatch loss={:.6f} Accuracy={:.5f} Accuracy (tf)={:.5f}".format(
step*batch_size,
training_iters,
loss,
acc,
acc2
))
step += 1
print("Optimization Finished!")
# Calculate accuracy for the whole test set
test_data = mnist.test.images.reshape((-1, n_steps, n_input))
test_label = mnist.test.labels
print("Testing Accuracy: {:.3%}".format(sess.run(accuracy, feed_dict={x: test_data, y: test_label})))
sess.close()
Explanation: Just recall that we will treat the MNIST image $\in \mathcal{R}^{28 \times 28}$ as $28$ sequences of a vector $\mathbf{x} \in \mathcal{R}^{28}$.
End of explanation |
8,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Nonsensical Language Model using Theano LSTM
Today we will train a nonsensical language model !
We will first collect some language data, convert it to numbers, and then feed it to a recurrent neural network and ask it to predict upcoming words. When we are done we will have a machine that can generate sentences from our made-up language ad-infinitum !
Collect Language Data
The first step here is to get some data. Since we are basing our language on nonsense, we need to generate good nonsense using a sampler.
Our sampler will take a probability table as input, e.g. a language where people are equally likely to say "a" or "b" would be written as follows
Step1: Parts of Speech
Now that we have a Sampler we can create a couple different word groups that our language uses to distinguish between different probability distributions easily
Step2: Simple Grammar
To create sentences from our language we create a simple recursion that goes as follows
Step4: Utilities
Now that we have our training corpus for our language model (optionally you could gather an actual corpus from the web
Step5: Create a Mapping from numbers to words
Now we can use the Vocab class to gather all the words and store an Index
Step6: To send our sentences in one big chunk to our neural network we transform each sentence into a row vector and place each of these rows into a bigger matrix that holds all these rows. Not all sentences have the same length, so we will pad those that are too short with 0s in pad_into_matrix
Step12: Build a Recurrent Neural Network
Now the real work is upon us! Thank goodness we have our language data ready. We now create a recurrent neural network by connecting an Embedding $E$ for each word in our corpus, and stacking some special cells together to form a prediction function. Mathematically we want
Step16: Construct model
We now declare the model and parametrize it to use an RNN, and make predictions in the range provided by our vocabulary. We also tell the greedy reconstruction search that it can consider a sentence as being over when the symbol corresponding to a period appears
Step17: Train Model
We run 10,000 times through our data and every 500 epochs of training we output what the model considers to be a natural continuation to the sentence "the" | Python Code:
## Fake dataset:
class Sampler:
def __init__(self, prob_table):
total_prob = 0.0
if type(prob_table) is dict:
for key, value in prob_table.items():
total_prob += value
elif type(prob_table) is list:
prob_table_gen = {}
for key in prob_table:
prob_table_gen[key] = 1.0 / (float(len(prob_table)))
total_prob = 1.0
prob_table = prob_table_gen
else:
raise ArgumentError("__init__ takes either a dict or a list as its first argument")
if total_prob <= 0.0:
raise ValueError("Probability is not strictly positive.")
self._keys = []
self._probs = []
for key in prob_table:
self._keys.append(key)
self._probs.append(prob_table[key] / total_prob)
def __call__(self):
sample = random.random()
seen_prob = 0.0
for key, prob in zip(self._keys, self._probs):
if (seen_prob + prob) >= sample:
return key
else:
seen_prob += prob
return key
Explanation: A Nonsensical Language Model using Theano LSTM
Today we will train a nonsensical language model !
We will first collect some language data, convert it to numbers, and then feed it to a recurrent neural network and ask it to predict upcoming words. When we are done we will have a machine that can generate sentences from our made-up language ad-infinitum !
Collect Language Data
The first step here is to get some data. Since we are basing our language on nonsense, we need to generate good nonsense using a sampler.
Our sampler will take a probability table as input, e.g. a language where people are equally likely to say "a" or "b" would be written as follows:
nonsense = Sampler({"a": 0.5, "b": 0.5})
We get samples from this language like this:
word = nonsense()
We overloaded the __call__ method and got this syntactic sugar.
End of explanation
samplers = {
"punctuation": Sampler({".": 0.49, ",": 0.5, ";": 0.03, "?": 0.05, "!": 0.05}),
"stop": Sampler({"the": 10, "from": 5, "a": 9, "they": 3, "he": 3, "it" : 2.5, "she": 2.7, "in": 4.5}),
"noun": Sampler(["cat", "broom", "boat", "dog", "car", "wrangler", "mexico", "lantern", "book", "paper", "joke","calendar", "ship", "event"]),
"verb": Sampler(["ran", "stole", "carried", "could", "would", "do", "can", "carry", "catapult", "jump", "duck"]),
"adverb": Sampler(["rapidly", "calmly", "cooly", "in jest", "fantastically", "angrily", "dazily"])
}
Explanation: Parts of Speech
Now that we have a Sampler we can create a couple different word groups that our language uses to distinguish between different probability distributions easily:
End of explanation
def generate_nonsense(word = ""):
if word.endswith("."):
return word
else:
if len(word) > 0:
word += " "
word += samplers["stop"]()
word += " " + samplers["noun"]()
if random.random() > 0.7:
word += " " + samplers["adverb"]()
if random.random() > 0.7:
word += " " + samplers["adverb"]()
word += " " + samplers["verb"]()
if random.random() > 0.8:
word += " " + samplers["noun"]()
if random.random() > 0.9:
word += "-" + samplers["noun"]()
if len(word) > 500:
word += "."
else:
word += " " + samplers["punctuation"]()
return generate_nonsense(word)
def generate_dataset(total_size, ):
sentences = []
for i in range(total_size):
sentences.append(generate_nonsense())
return sentences
# generate dataset
lines = generate_dataset(100)
Explanation: Simple Grammar
To create sentences from our language we create a simple recursion that goes as follows:
If the sentence we have ends with a full stop, a question mark, or an exclamation point then end at once!
Else our sentence should have:
A stop word
A noun
An adverb (with prob 0.3), or 2 adverbs (with prob 0.3*0.3=0.09)
A verb
Another noun (with prob 0.2), or 2 more nouns connected by a dash (with prob 0.2*0.1=0.02)
If our sentence is now over 500 characters, add a full stop and end at once!
Else add some punctuation and go back to (1)
End of explanation
### Utilities:
class Vocab:
__slots__ = ["word2index", "index2word", "unknown"]
def __init__(self, index2word = None):
self.word2index = {}
self.index2word = []
# add unknown word:
self.add_words(["**UNKNOWN**"])
self.unknown = 0
if index2word is not None:
self.add_words(index2word)
def add_words(self, words):
for word in words:
if word not in self.word2index:
self.word2index[word] = len(self.word2index)
self.index2word.append(word)
def __call__(self, line):
Convert from numerical representation to words
and vice-versa.
if type(line) is np.ndarray:
return " ".join([self.index2word[word] for word in line])
if type(line) is list:
if len(line) > 0:
if line[0] is int:
return " ".join([self.index2word[word] for word in line])
indices = np.zeros(len(line), dtype=np.int32)
else:
line = line.split(" ")
indices = np.zeros(len(line), dtype=np.int32)
for i, word in enumerate(line):
indices[i] = self.word2index.get(word, self.unknown)
return indices
@property
def size(self):
return len(self.index2word)
def __len__(self):
return len(self.index2word)
Explanation: Utilities
Now that we have our training corpus for our language model (optionally you could gather an actual corpus from the web :), we can now create our first utility, Vocab, that will hold the mapping from words to an index, and perfom the conversions from words to indices and vice-versa:
End of explanation
vocab = Vocab()
for line in lines:
vocab.add_words(line.split(" "))
Explanation: Create a Mapping from numbers to words
Now we can use the Vocab class to gather all the words and store an Index:
End of explanation
def pad_into_matrix(rows, padding = 0):
if len(rows) == 0:
return np.array([0, 0], dtype=np.int32)
lengths = map(len, rows)
width = max(lengths)
height = len(rows)
mat = np.empty([height, width], dtype=rows[0].dtype)
mat.fill(padding)
for i, row in enumerate(rows):
mat[i, 0:len(row)] = row
return mat, list(lengths)
# transform into big numerical matrix of sentences:
numerical_lines = []
for line in lines:
numerical_lines.append(vocab(line))
numerical_lines, numerical_lengths = pad_into_matrix(numerical_lines)
Explanation: To send our sentences in one big chunk to our neural network we transform each sentence into a row vector and place each of these rows into a bigger matrix that holds all these rows. Not all sentences have the same length, so we will pad those that are too short with 0s in pad_into_matrix:
End of explanation
from theano_lstm import Embedding, LSTM, RNN, StackedCells, Layer, create_optimization_updates, masked_loss
def softmax(x):
Wrapper for softmax, helps with
pickling, and removing one extra
dimension that Theano adds during
its exponential normalization.
return T.nnet.softmax(x.T)
def has_hidden(layer):
Whether a layer has a trainable
initial hidden state.
return hasattr(layer, 'initial_hidden_state')
def matrixify(vector, n):
return T.repeat(T.shape_padleft(vector), n, axis=0)
def initial_state(layer, dimensions = None):
Initalizes the recurrence relation with an initial hidden state
if needed, else replaces with a "None" to tell Theano that
the network **will** return something, but it does not need
to send it to the next step of the recurrence
if dimensions is None:
return layer.initial_hidden_state if has_hidden(layer) else None
else:
return matrixify(layer.initial_hidden_state, dimensions) if has_hidden(layer) else None
def initial_state_with_taps(layer, dimensions = None):
Optionally wrap tensor variable into a dict with taps=[-1]
state = initial_state(layer, dimensions)
if state is not None:
return dict(initial=state, taps=[-1])
else:
return None
class Model:
Simple predictive model for forecasting words from
sequence using LSTMs. Choose how many LSTMs to stack
what size their memory should be, and how many
words can be predicted.
def __init__(self, hidden_size, input_size, vocab_size, stack_size=1, celltype=LSTM):
# declare model
self.model = StackedCells(input_size, celltype=celltype, layers =[hidden_size] * stack_size)
# add an embedding
self.model.layers.insert(0, Embedding(vocab_size, input_size))
# add a classifier:
self.model.layers.append(Layer(hidden_size, vocab_size, activation = softmax))
# inputs are matrices of indices,
# each row is a sentence, each column a timestep
self._stop_word = theano.shared(np.int32(999999999), name="stop word")
self.for_how_long = T.ivector()
self.input_mat = T.imatrix()
self.priming_word = T.iscalar()
self.srng = T.shared_randomstreams.RandomStreams(np.random.randint(0, 1024))
# create symbolic variables for prediction:
self.predictions = self.create_prediction()
# create symbolic variable for greedy search:
self.greedy_predictions = self.create_prediction(greedy=True)
# create gradient training functions:
self.create_cost_fun()
self.create_training_function()
self.create_predict_function()
def stop_on(self, idx):
self._stop_word.set_value(idx)
@property
def params(self):
return self.model.params
def create_prediction(self, greedy=False):
def step(idx, *states):
# new hiddens are the states we need to pass to LSTMs
# from past. Because the StackedCells also include
# the embeddings, and those have no state, we pass
# a "None" instead:
new_hiddens = [None] + list(states)
new_states = self.model.forward(idx, prev_hiddens = new_hiddens)
if greedy:
new_idxes = new_states[-1]
new_idx = new_idxes.argmax()
# provide a stopping condition for greedy search:
return ([new_idx.astype(self.priming_word.dtype)] + new_states[1:-1]), theano.scan_module.until(T.eq(new_idx,self._stop_word))
else:
return new_states[1:]
# in sequence forecasting scenario we take everything
# up to the before last step, and predict subsequent
# steps ergo, 0 ... n - 1, hence:
inputs = self.input_mat[:, 0:-1]
num_examples = inputs.shape[0]
# pass this to Theano's recurrence relation function:
# choose what gets outputted at each timestep:
if greedy:
outputs_info = [dict(initial=self.priming_word, taps=[-1])] + [initial_state_with_taps(layer) for layer in self.model.layers[1:-1]]
result, _ = theano.scan(fn=step,
n_steps=200,
outputs_info=outputs_info)
else:
outputs_info = [initial_state_with_taps(layer, num_examples) for layer in self.model.layers[1:]]
result, _ = theano.scan(fn=step,
sequences=[inputs.T],
outputs_info=outputs_info)
if greedy:
return result[0]
# softmaxes are the last layer of our network,
# and are at the end of our results list:
return result[-1].transpose((2,0,1))
# we reorder the predictions to be:
# 1. what row / example
# 2. what timestep
# 3. softmax dimension
def create_cost_fun (self):
# create a cost function that
# takes each prediction at every timestep
# and guesses next timestep's value:
what_to_predict = self.input_mat[:, 1:]
# because some sentences are shorter, we
# place masks where the sentences end:
# (for how long is zero indexed, e.g. an example going from `[2,3)`)
# has this value set 0 (here we substract by 1):
for_how_long = self.for_how_long - 1
# all sentences start at T=0:
starting_when = T.zeros_like(self.for_how_long)
self.cost = masked_loss(self.predictions,
what_to_predict,
for_how_long,
starting_when).sum()
def create_predict_function(self):
self.pred_fun = theano.function(
inputs=[self.input_mat],
outputs =self.predictions,
allow_input_downcast=True
)
self.greedy_fun = theano.function(
inputs=[self.priming_word],
outputs=T.concatenate([T.shape_padleft(self.priming_word), self.greedy_predictions]),
allow_input_downcast=True
)
def create_training_function(self):
updates, _, _, _, _ = create_optimization_updates(self.cost, self.params, method="adadelta")
self.update_fun = theano.function(
inputs=[self.input_mat, self.for_how_long],
outputs=self.cost,
updates=updates,
allow_input_downcast=True)
def __call__(self, x):
return self.pred_fun(x)
Explanation: Build a Recurrent Neural Network
Now the real work is upon us! Thank goodness we have our language data ready. We now create a recurrent neural network by connecting an Embedding $E$ for each word in our corpus, and stacking some special cells together to form a prediction function. Mathematically we want:
$$\mathrm{argmax_{E, \Phi}} {\bf P}(w_{k+1}| w_{k}, \dots, w_{0}; E, \Phi) = f(x, h)$$
with $f(\cdot, \cdot)$ the function our recurrent neural network performs at each timestep that takes as inputs:
an observation $x$, and
a previous state $h$,
and outputs a probability distribution $\hat{p}$ over the next word.
We have $x = E[ w_{k}]$ our observation at time $k$, and $h$ the internal state of our neural network, and $\Phi$ is the set of parameters used by our classifier, and recurrent neural network, and $E$ is the embedding for our words.
In practice we will obtain $E$ and $\Phi$ iteratively using gradient descent on the error our network is making in its prediction. To do this we define our error as the Kullback-Leibler divergence (a distance between probability distributions) between our estimate of $\hat{p} = {\bf P}(w_{k+1}| w_{k}, \dots, w_{0}; E, \Phi)$ and the actual value of ${\bf P}(w_{k+1}| w_{k}, \dots, w_{0})$ from the data (e.g. a probability distribution that is 1 for word $w_k$ and 0 elsewhere).
Theano LSTM StackedCells function
To build this predictive model we make use of theano_lstm, a Python module for building recurrent neural networks using Theano. The first step we take is to declare what kind of cells we want to use by declaring a celltype. There are many different celltypes we can use, but the most common these days (and incidentally most effective) are RNN and LSTM. For a more in-depth discussion of how these work I suggest checking out Arxiv, or Alex Graves' website, or Wikipedia. Here we use celltype = LSTM.
self.model = StackedCells(input_size, celltype=celltype, layers =[hidden_size] * stack_size)
Once we've declared what kind of cells we want to use, we can now choose to add an Embedding to map integers (indices) to vectors (and in our case map words to their indices, then indices to word vectors we wish to train). Intuitively this lets the network separate and recognize what it is "seeing" or "receiving" at each timestep. To add an Embedding we create Embedding(vocabulary_size, size_of_embedding_vectors) and insert it at the begging of the StackedCells's layers list (thereby telling StackedCells that this Embedding layer needs to be activated before the other ones):
# add an embedding
self.model.layers.insert(0, Embedding(vocab_size, input_size))
The final output of our network needs to be a probability distribution over the next words (but in different application areas this could be a sentiment classification, a decision, a topic, etc...) so we add another layer that maps the internal state of the LSTMs to a probability distribution over the all the words in our language. To ensure that our prediction is indeed a probability distribution we "activate" our layer with a Softmax, meaning that we will exponentiate every value of the output, $q_i = e^{x_i}$, so that all values are positive, and then we will divide the output by its sum so that the output sums to 1:
$$p_i = \frac{q_i}{\sum_j q_j}\text{, and }\sum_i p_i = 1.$$
# add a classifier:
self.model.layers.append(Layer(hidden_size, vocab_size, activation = softmax))
For convenience we wrap this all in one class below.
Prediction
We have now defined our network. At each timestep we can produce a probability distribution for each input index:
def create_prediction(self, greedy=False):
def step(idx, *states):
# new hiddens are the states we need to pass to LSTMs
# from past. Because the StackedCells also include
# the embeddings, and those have no state, we pass
# a "None" instead:
new_hiddens = [None] + list(states)
new_states = self.model.forward(idx, prev_hiddens = new_hiddens)
return new_states[1:]
...
Our inputs are an integer matrix Theano symbolic variable:
...
# in sequence forecasting scenario we take everything
# up to the before last step, and predict subsequent
# steps ergo, 0 ... n - 1, hence:
inputs = self.input_mat[:, 0:-1]
num_examples = inputs.shape[0]
# pass this to Theano's recurrence relation function:
....
Scan receives our recurrence relation step from above, and also needs to know what will be outputted at each step in outputs_info. We give outputs_info a set of variables corresponding to the hidden states of our StackedCells. Some of the layers have no hidden state, and thus we should simply pass a None to Theano, while others do require some initial state. In those cases with wrap their initial state inside a dictionary:
def has_hidden(layer):
Whether a layer has a trainable
initial hidden state.
return hasattr(layer, 'initial_hidden_state')
def matrixify(vector, n):
return T.repeat(T.shape_padleft(vector), n, axis=0)
def initial_state(layer, dimensions = None):
Initalizes the recurrence relation with an initial hidden state
if needed, else replaces with a "None" to tell Theano that
the network **will** return something, but it does not need
to send it to the next step of the recurrence
if dimensions is None:
return layer.initial_hidden_state if has_hidden(layer) else None
else:
return matrixify(layer.initial_hidden_state, dimensions) if has_hidden(layer) else None
def initial_state_with_taps(layer, dimensions = None):
Optionally wrap tensor variable into a dict with taps=[-1]
state = initial_state(layer, dimensions)
if state is not None:
return dict(initial=state, taps=[-1])
else:
return None
Let's now create these inital states (note how we skip layer 1, the embeddings by doing self.model.layers[1:] in the iteration, this is because there is no point in passing these embeddings around in our recurrence because word vectors are only seen at the timestep they are received in this network):
# choose what gets outputted at each timestep:
outputs_info = [initial_state_with_taps(layer, num_examples) for layer in self.model.layers[1:]]
result, _ = theano.scan(fn=step,
sequences=[inputs.T],
outputs_info=outputs_info)
if greedy:
return result[0]
# softmaxes are the last layer of our network,
# and are at the end of our results list:
return result[-1].transpose((2,0,1))
# we reorder the predictions to be:
# 1. what row / example
# 2. what timestep
# 3. softmax dimension
Error Function:
Our error function uses theano_lstm's masked_loss method. This method allows us to define ranges over which a probability distribution should obey a particular target distribution. We control this method by setting start and end points for these ranges. In doing so we mask the areas where we do not care what the network predicted.
In our case our network predicts words we care about during the sentence, but when we pad our short sentences with 0s to fill our matrix, we do not care what the network does there, because this is happening outside the sentence we collected:
def create_cost_fun (self):
# create a cost function that
# takes each prediction at every timestep
# and guesses next timestep's value:
what_to_predict = self.input_mat[:, 1:]
# because some sentences are shorter, we
# place masks where the sentences end:
# (for how long is zero indexed, e.g. an example going from `[2,3)`)
# has this value set 0 (here we substract by 1):
for_how_long = self.for_how_long - 1
# all sentences start at T=0:
starting_when = T.zeros_like(self.for_how_long)
self.cost = masked_loss(self.predictions,
what_to_predict,
for_how_long,
starting_when).sum()
Training Function
We now have a cost function. To perform gradient descent we now need to tell Theano how each parameter must be updated at every training epoch. We theano_lstm's create_optimization_udpates method to generate a dictionary of updates and to apply special gradient descent rules that accelerate and facilitate training (for instance scaling the gradients when they are too large or too little, and preventing gradients from becoming too big and making our model numerically unstable -- in this example we use Adadelta:
def create_training_function(self):
updates, _, _, _, _ = create_optimization_updates(self.cost, self.params, method="adadelta")
self.update_fun = theano.function(
inputs=[self.input_mat, self.for_how_long],
outputs=self.cost,
updates=updates,
allow_input_downcast=True)
PS: our parameters are obtained by calling self.model.params:
@property
def params(self):
return self.model.params
Final Code
End of explanation
# construct model & theano functions:
model = Model(
input_size=10,
hidden_size=10,
vocab_size=len(vocab),
stack_size=1, # make this bigger, but makes compilation slow
celltype=RNN # use RNN or LSTM
)
model.stop_on(vocab.word2index["."])
Explanation: Construct model
We now declare the model and parametrize it to use an RNN, and make predictions in the range provided by our vocabulary. We also tell the greedy reconstruction search that it can consider a sentence as being over when the symbol corresponding to a period appears:
End of explanation
# train:
for i in range(10000):
error = model.update_fun(numerical_lines, numerical_lengths)
if i % 100 == 0:
print("epoch %(epoch)d, error=%(error).2f" % ({"epoch": i, "error": error}))
if i % 500 == 0:
print(vocab(model.greedy_fun(vocab.word2index["the"])))
Explanation: Train Model
We run 10,000 times through our data and every 500 epochs of training we output what the model considers to be a natural continuation to the sentence "the":
End of explanation |
8,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HDF5
HDF5 stands for Hierarchical Data Format 5, and it is developed by the HDF Group. From their website
Step1: HDF5 is organized in a hierarchical structure, and its syntax is similar to the Linux/OSX file structure. A group can be thought of as a folder.
Step2: While a table can be thought of as a file in a folder.
Step3: PyTables is very low level and is a little difficult to use by hand. Luckily Pandas has integrated PyTables so that you can quickly dump a Pandas DataFrame to an HDF5 file.
Pandas + PyTables
Now I am going to create a table in pandas and dump it to an HDF5 file.
Step4: As I have mentioned, there are libraries for reading HDF5 files in R. Now we can open this file in R using the following
Step5: h5py
h5py can be installed using pip | Python Code:
# Import packages
import numpy as np
import tables as pt # PyTables
import h5py as hp # h5py
import pandas as pd
import rpy2
%load_ext rpy2.ipython
# Create a New HDF5 File
h5file = pt.open_file('test.h5', mode='w', title='Test file')
Explanation: HDF5
HDF5 stands for Hierarchical Data Format 5, and it is developed by the HDF Group. From their website:
HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data. HDF5 is portable and is extensible, allowing applications to evolve in their use of HDF5. The HDF5 Technology suite includes tools and applications for managing, manipulating, viewing, and analyzing data in the HDF5 format.
Various programming languages have developed APIs for interacting with HDF formatted files, for example there are libraries in Python and R which I will briefly cover. There are also a set of command line tools developed by the HDF Group HERE, I will talk a little about h5ls and h5dump.
My goal here is just to give a little taste, the true power of HDF5 is not apparent until you look at real use cases for example the python package vcfnp converts a vcf file into an HDF5 file allowing you to quickly access different parts of the VCF, see here.
For all of these tools to work you need to install the HDF5 software from HDF5 group!
On Linux (Mint) you can run the following:
sudo apt-get update
sudo apt-get install h5utils hdf5-tools hdfview libhdf5-dev
On OSX take a look at MacPorts
For Linux, OSX, and Windows you can download and install from the HDF group
HDF5 in Python
There are two major packages for interacting with HDF5 files (PyTables and h5py). The two packages have slightly different interfaces, which are discussed HERE. I will go over a quick example usage of PyTables, h5py, and Pandas + PyTables.
You will need to have installed:
* Python >= 2.6 including Python 3.x (Python >= 2.7 is highly recommended)
* NumPy >= 1.7.1
* Numexpr >= 2.4
* Cython >= 0.14
* Pandas >= 0.14
PyTables
PyTables can be installed using pip:
pip install tables --user
or using your python distributions package manager.
End of explanation
# Create new group
group = h5file.create_group('/', 'pytables', 'PyTables Test')
print(group)
Explanation: HDF5 is organized in a hierarchical structure, and its syntax is similar to the Linux/OSX file structure. A group can be thought of as a folder.
End of explanation
# Create new table
class HgSnpCall(pt.IsDescription):
chrom = pt.StringCol(16) # 16-character String
start = pt.UInt32Col() # Unsigned 32-bit integer
end = pt.UInt32Col() # Unsigned 32-bit integer
call = pt.StringCol(16) # 16-character String
table = h5file.create_table(group, 'hg19', HgSnpCall, 'Human SNP Calls')
# Add a row of data to the table.
position = table.row
position['chrom'] = 'chr4'
position['start'] = 10023
position['end'] = 10024
position['call'] = 'A/G'
position.append()
# Flush table, similar to SQL
table.flush()
%%bash
# Lets look at the table we created using an external utility
hdfview 'test.h5'
# Close the h5file
h5file.close()
Explanation: While a table can be thought of as a file in a folder.
End of explanation
# Create a DataFrame
df_snp = pd.DataFrame({'chrom': [ 'chr4', 'chr4', 'chr2', 'chr2'],
'start': [10023, 3020, 40404, 20202],
'end': [10024, 3023, 40405, 20203],
'call': ['A/G', 'AA/G', 'T/C', 'A/C']},
columns=['chrom', 'start', 'end', 'call'])
print(df_snp)
# Save to hdf5 file
hdf = pd.HDFStore('test.h5')
hdf.put('pandas_test', df_snp, format='table', data_columns=True)
hdf.close()
%%bash
# Now lets look at it again
hdfview 'test.h5'
Explanation: PyTables is very low level and is a little difficult to use by hand. Luckily Pandas has integrated PyTables so that you can quickly dump a Pandas DataFrame to an HDF5 file.
Pandas + PyTables
Now I am going to create a table in pandas and dump it to an HDF5 file.
End of explanation
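For completeness, the table written above can be read straight back into pandas; a minimal sketch (the key 'pandas_test' matches the put call above):
df_back = pd.read_hdf('test.h5', 'pandas_test')
print(df_back)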
%%R
library(rhdf5)
library(bit64)
data = h5read('test.h5', 'pandas_test/table', bit64conversion='bit64')
print(data)
Explanation: As I have mentioned, there are libraries for reading HDF5 files in R. Now we can open this file in R using the following:
End of explanation
# Open a new hdf5 file
hdf = hp.File('test.h5', 'a')
# Create a new group
group = hdf.create_group('h5py_test')
# Create a new dataset object
dat = group.create_dataset('matrix', shape=(100, 100), dtype='i')
# I made a 100 x 100 matrix
dat[...]
# We can then do things to this matrix
dat[0,0] = 999
print(dat[...])
hdf.close()
%%bash
hdfview test.h5
%%bash
# On the command line we can also list the contents of an hdf5 file
h5ls test.h5
%%bash
# On the command line we can look at the contents an hdf5 file
h5dump -d /h5py_test/matrix -s "0,0" -c "5,15" test.h5
%%bash
# Clean up our mess
#rm test.h5
Explanation: h5py
h5py can be installed using pip:
pip install h5py --user
or using your python distributions package manager.
While Pandas + PyTables is very useful for traditional data sets, HDF5 can store a variety of data types. The python package h5py is nice for higher level access to an HDF5 file and can quickly add and store arrays and lists.
End of explanation |
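As a quick illustration of that last point, here is a minimal, hedged h5py sketch that stores and reads back a numpy array in a few lines ('arrays.h5' is a hypothetical throwaway file):
import h5py as hp
import numpy as np

with hp.File('arrays.h5', 'w') as f:
    f['my_array'] = np.arange(10)   # assignment implicitly creates the dataset
with hp.File('arrays.h5', 'r') as f:
    print(f['my_array'][...])       # [...] reads the full dataset into memory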
8,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
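The function above is left as a TODO; one possible implementation sketch (not the only valid solution; the _sketch suffix is mine so it does not clobber the graded stub) splits on newlines and whitespace and appends the <EOS> id to every target sentence:
def text_to_ids_sketch(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # one list of word ids per line; every target line gets <EOS> appended
    source_id_text = [[source_vocab_to_int[word] for word in line.split()]
                      for line in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int[word] for word in line.split()]
                      + [target_vocab_to_int['<EOS>']]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text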
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
return None, None, None, None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
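A possible sketch for the TODO above, following the placeholder names required by the instructions (the names for the targets and learning rate placeholders are not constrained, so those chosen here are assumptions):
def model_inputs_sketch():
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None])
    learning_rate = tf.placeholder(tf.float32)
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length')
    max_target_len = tf.reduce_max(target_sequence_length, name='max_target_len')
    source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length')
    return (inputs, targets, learning_rate, keep_prob,
            target_sequence_length, max_target_len, source_sequence_length)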
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
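One common sketch for this TODO uses tf.strided_slice to drop the last column and tf.fill to prepend the <GO> id (offered as one possibility, not the graded solution):
def process_decoder_input_sketch(target_data, target_vocab_to_int, batch_size):
    # drop the last word id of every row, then prepend <GO>
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    go_ids = tf.fill([batch_size, 1], target_vocab_to_int['<GO>'])
    return tf.concat([go_ids, ending], 1)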
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
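A possible sketch following the three bullets above (stacked LSTM cells with dropout applied via the wrapper; this is one plausible implementation, not the only one):
def encoding_layer_sketch(rnn_inputs, rnn_size, num_layers, keep_prob,
                          source_sequence_length, source_vocab_size,
                          encoding_embedding_size):
    enc_embed = tf.contrib.layers.embed_sequence(
        rnn_inputs, vocab_size=source_vocab_size,
        embed_dim=encoding_embedding_size)
    cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size),
                                           output_keep_prob=keep_prob)
             for _ in range(num_layers)]
    enc_cell = tf.contrib.rnn.MultiRNNCell(cells)
    # returns the (RNN output, RNN state) tuple required by the spec
    return tf.nn.dynamic_rnn(enc_cell, enc_embed,
                             sequence_length=source_sequence_length,
                             dtype=tf.float32)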
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_translation_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_translation_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
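A possible sketch of the training decoder described above (one plausible implementation under the stated API):
def decoding_layer_train_sketch(encoder_state, dec_cell, dec_embed_input,
                                target_sequence_length, max_translation_length,
                                output_layer, keep_prob):
    helper = tf.contrib.seq2seq.TrainingHelper(
        inputs=dec_embed_input, sequence_length=target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(
        dec_cell, helper, encoder_state, output_layer)
    # dynamic_decode returns the BasicDecoderOutput first; keep only that
    return tf.contrib.seq2seq.dynamic_decode(
        decoder, impute_finished=True,
        maximum_iterations=max_translation_length)[0]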
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
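A possible sketch of the inference decoder; greedy decoding starts every row of the batch from the <GO> id and stops on <EOS>:
def decoding_layer_infer_sketch(encoder_state, dec_cell, dec_embeddings,
                                start_of_sequence_id, end_of_sequence_id,
                                max_target_sequence_length, vocab_size,
                                output_layer, batch_size, keep_prob):
    start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32),
                           [batch_size])
    helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
        dec_embeddings, start_tokens, end_of_sequence_id)
    decoder = tf.contrib.seq2seq.BasicDecoder(
        dec_cell, helper, encoder_state, output_layer)
    return tf.contrib.seq2seq.dynamic_decode(
        decoder, impute_finished=True,
        maximum_iterations=max_target_sequence_length)[0]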
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
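A possible sketch that wires the two decoders together; the variable scope is reused so training and inference share weights (this assumes the two sketch functions above, or your own implementations, and dropout is applied on the cell via the wrapper):
def decoding_layer_sketch(dec_input, encoder_state,
                          target_sequence_length, max_target_sequence_length,
                          rnn_size, num_layers, target_vocab_to_int,
                          target_vocab_size, batch_size, keep_prob,
                          decoding_embedding_size):
    dec_embeddings = tf.Variable(
        tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
    cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size),
                                           output_keep_prob=keep_prob)
             for _ in range(num_layers)]
    dec_cell = tf.contrib.rnn.MultiRNNCell(cells)
    output_layer = Dense(target_vocab_size)
    with tf.variable_scope('decode'):
        train_output = decoding_layer_train_sketch(
            encoder_state, dec_cell, dec_embed_input, target_sequence_length,
            max_target_sequence_length, output_layer, keep_prob)
    with tf.variable_scope('decode', reuse=True):
        infer_output = decoding_layer_infer_sketch(
            encoder_state, dec_cell, dec_embeddings,
            target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
            max_target_sequence_length, target_vocab_size, output_layer,
            batch_size, keep_prob)
    return train_output, infer_output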
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
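A possible composition sketch; it simply chains the three pieces, and the encoder outputs are discarded since only the final state feeds the decoder (assumes the sketch functions defined above):
def seq2seq_model_sketch(input_data, target_data, keep_prob, batch_size,
                         source_sequence_length, target_sequence_length,
                         max_target_sentence_length,
                         source_vocab_size, target_vocab_size,
                         enc_embedding_size, dec_embedding_size,
                         rnn_size, num_layers, target_vocab_to_int):
    _, enc_state = encoding_layer_sketch(
        input_data, rnn_size, num_layers, keep_prob,
        source_sequence_length, source_vocab_size, enc_embedding_size)
    dec_input = process_decoder_input_sketch(
        target_data, target_vocab_to_int, batch_size)
    return decoding_layer_sketch(
        dec_input, enc_state, target_sequence_length,
        max_target_sentence_length, rnn_size, num_layers,
        target_vocab_to_int, target_vocab_size, batch_size, keep_prob,
        dec_embedding_size)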
# Number of Epochs
epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Number of Layers
num_layers = None
# Embedding Size
encoding_embedding_size = None
decoding_embedding_size = None
# Learning Rate
learning_rate = None
# Dropout Keep Probability
keep_probability = None
display_step = None
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
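One plausible starting configuration, offered only as a hedged suggestion (the right values depend on your data, GPU, and patience):
epochs = 10
batch_size = 128
rnn_size = 256
num_layers = 2
encoding_embedding_size = 200
decoding_embedding_size = 200
learning_rate = 0.001
keep_probability = 0.7
display_step = 100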
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
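A possible sketch for the TODO above (again suffixed _sketch to avoid clobbering the graded stub):
def sentence_to_seq_sketch(sentence, vocab_to_int):
    unk_id = vocab_to_int['<UNK>']
    # lowercase, split on whitespace, fall back to <UNK> for unseen words
    return [vocab_to_int.get(word, unk_id) for word in sentence.lower().split()]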
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
8,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Google 'Image Box' analysis
First import all the libraries we want, together with some formatting options
Step1: Read in Google Scraper search results table
Step2: Programmatically identify unique images
Read in each image, calculate a unique hash based on the image data, a bit like a fingerprint, and store the hash along with the image filename in a shelf (a persistent dictionary). Many images were repeats, so there were many image filenames stored with the same hash key.
The code is based on this blog post
Step3: Add the hash to each row in our data dataframe we loaded above
Step4: Open new dataframe with image hashes
Step5: HILLARY CLINTON DATA
Step6: Find an image file representative of each unique hash so we can look at each unique image
Step7: Collect unique images and put in separate directory
Step8: DONALD TRUMP DATA
Step9: Find an image file representative of each unique hash.
Step10: Collect unique images and put in separate directory
Step11: NEWS SOURCE INFORMATION
Step12: Getting Political Leaning from Allsides from News Sources of all images in baseline dataset
Allsides bias data was generously provided by Allsides
Step13: Getting political leaning from a Facebook political bias ratings study
Data is available here
Step14: Combining ratings from Allsides and Facebook, together with crowdsourced political bias ratings from Mondo Times and author's judgement
First, Facebook ratings were categorised into 2, 1, 0, -1, and -2 integers so as to be compatible with the Allsides data. Between-category thresholds were selected so as to make each bucket equal in float range of values.
Step15: Next, ratings from Allsides and the Facebook study were combined. Where ratings from both Allsides and Facebook are absent, or disagree, bias rating was decided by Mondo Times where an outlet received more than 20 votes, and /or the author based on outlets' "About" pages and other information. Where doubt remained, or ratings are not applicable (eg. Getty Images), Unknown / Unreliable was assigned.
Step16: Make list of unique news sources, combine lists together from HC and DT, and output to a csv
Step17: Despite YouTube being in the Facebook study, the channel observed here in Hillary Clinton's list is a channel that displays a distinct bias toward Donald Trump and against Hillary Clinton. Therefore, it will be manually rated.
In addition, Reddit, which is also channel-based, will be rated on the specific channel, and not the overall site.
Step18: Read in my manual bias ratings | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 3)
plt.rcParams['font.family'] = 'sans-serif'
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
Explanation: Google 'Image Box' analysis
First import all the libraries we want, together with some formatting options
End of explanation
cols = ['requested_at', 'search_query', 'visible_link', 'rank' ,'image_path']
data = pd.read_csv('./image_box_data.csv', parse_dates=['requested_at'], usecols=cols)
data.head()
type(data.requested_at[0])
Explanation: Read in Google Scraper search results table
End of explanation
# Check the shelf has content. If nothing is printed out, run the following two scripts/cells
import shelve
db = shelve.open('db.shelve')
for k in db.keys():
print(k)
db.close()
%run ../index.py --dataset ./clinton --shelve db.shelve
%run ../index.py --dataset ./trump --shelve db.shelve
Explanation: Programmatically identify unique images
Read in each image, calculate a unique hash based on the image data, a bit like a fingerprint, and store the hash along with the image filename in a shelf (a persistent dictionary). Many images were repeats, so there were many image filenames stored with the same hash key.
The code is based on this blog post: https://realpython.com/blog/python/fingerprinting-images-for-near-duplicate-detection/ which I adapted to work with Python3
End of explanation
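For reference, a minimal difference-hash (dHash) sketch in the spirit of that blog post; this standalone version assumes Pillow is installed and is illustrative rather than the exact code in ../index.py:
from PIL import Image

def dhash(path, hash_size=8):
    # shrink to (hash_size+1) x hash_size grayscale and compare neighbours
    image = Image.open(path).convert('L').resize((hash_size + 1, hash_size))
    pixels = list(image.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append(left > right)
    # pack the booleans into a single integer fingerprint
    return sum(1 << i for i, bit in enumerate(bits) if bit)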
import shelve
db = shelve.open('db.shelve')
for key in db.keys(): # For every hash KEY
for f in db[key]: # for every file path name ITEM within the KEY
for index, i in enumerate(data.image_path): # For every Image path in each row of my DF
if f in i: # If the ITEM file path is also in the IMAGE PATH of my DF
data.loc[index, 'image_hash'] = key # Put the KEY into the 'image_hash' Column
data.to_csv("./hashedDF.csv", index=False)
db.close()
Explanation: Add the hash to each row in our data dataframe we loaded above
End of explanation
hashedDF = pd.read_csv('./hashedDF.csv')
hashedDF.head()
hashedDF['requested_at'] = pd.to_datetime(hashedDF['requested_at'])
type(hashedDF.requested_at[0])
len(hashedDF)
Explanation: Open new dataframe with image hashes
End of explanation
HC = hashedDF[hashedDF.search_query == 'hillary clinton']
HC.to_csv('HC_hashed.csv', index=False)
HC = pd.read_csv('HC_hashed.csv')
# What are the news sources, and how many times do they appear in the dataset?
HC.visible_link.value_counts()
# What are the hashes, and how many times does each one appear in the dataset?
HC.image_hash.value_counts()
print(len(HC.visible_link.unique()))
print(len(HC.image_hash.unique()))
HC.head()
print(type(HC.image_hash[0]))
print(type(HC['rank'][0]))
print(type(HC.requested_at[0]))
Explanation: HILLARY CLINTON DATA
End of explanation
HC_unique_images = HC.groupby('image_hash').first().reset_index()
HC_unique_images
Explanation: Find an image file representative of each unique hash so we can look at each unique image
End of explanation
from shutil import copyfile
import os
def select_images(df_series, src_dir, dest_dir):
'''
provide dataframe series with all the image file names, the directory containing the images, and directory
where the unique images should go.
'''
try:
os.mkdir(dest_dir)
for file in df_series:
file = file.split('/')[-1]
copyfile(src_dir + file, dest_dir + file)
except FileExistsError:
pass
select_images(HC_unique_images.image_path,'clinton/', 'clinton_unique/' )
Explanation: Collect unique images and put in separate directory
End of explanation
DT = hashedDF[hashedDF.search_query == 'donald trump']
DT.to_csv('DT_hashed.csv', index=False)
DT = pd.read_csv('DT_hashed.csv')
DT.visible_link.value_counts()
DT.image_hash.value_counts()
Explanation: DONALD TRUMP DATA
End of explanation
DT_unique_images = DT.groupby('image_hash').first().reset_index()
DT_unique_images
Explanation: Find an image file representative of each unique hash.
End of explanation
select_images(DT_unique_images.image_path,'trump/', 'trump_unique/' )
Explanation: Collect unique images and put in separate directory
End of explanation
HC.visible_link.describe()
DT.visible_link.describe()
Explanation: NEWS SOURCE INFORMATION
End of explanation
allsides = pd.read_json('../BASELINE/allsides_data.json')
allsides.head()
HC_unique_news_sources = []
HC.visible_link.unique()
HC[HC.visible_link.isnull()]
def get_unique_news_sources(col, source_list):
print(len(col.unique()))
for i in col.unique():
print(i)
source_list.append(i.split('//')[1].split('/')[0])
get_unique_news_sources(HC.visible_link, HC_unique_news_sources)
HC_unique_news_sources
DT_unique_news_sources = []
get_unique_news_sources(DT.visible_link, DT_unique_news_sources)
DT_unique_news_sources
def get_url(col):
col = col.split('//')[1]
col = col.split('/')[0]
return col
HC.loc[:, 'news_source_url'] = HC.visible_link.apply(get_url)
DT.loc[:, 'news_source_url'] = DT.visible_link.apply(get_url)
HC.head()
allsides.head()
def tag_bias_rating(candidate):
candidate['allsides_bias_rating'] = 999  # initialise the rating column (999 = unrated)
allsides = pd.read_json('../BASELINE/allsides_data.json')
for i, valuei in enumerate(candidate.news_source_url):
for j, valuej in enumerate(allsides.url):
if 'http' in valuej:
# print("Found an HTTP in ", valuej)
valuej = valuej.split('//')[1]
# print(valuej)
try:
if valuei in valuej:
print(valuei, valuej)
if allsides.loc[j, 'bias_rating'] == 71: # Left
candidate.loc[i, 'allsides_bias_rating'] = -2
elif allsides.loc[j, 'bias_rating'] == 72: # Lean left
candidate.loc[i, 'allsides_bias_rating'] = -1
elif allsides.loc[j, 'bias_rating'] == 73: # center
candidate.loc[i, 'allsides_bias_rating'] = 0
elif allsides.loc[j, 'bias_rating'] == 74: # lean right
candidate.loc[i, 'allsides_bias_rating'] = 1
elif allsides.loc[j, 'bias_rating'] == 75: # Right
candidate.loc[i, 'allsides_bias_rating'] = 2
else:
candidate.loc[i, 'allsides_bias_rating'] = 999
except TypeError:
continue
tag_bias_rating(HC)
tag_bias_rating(DT)
for i in allsides.url:
if 'http' in i:
print(i.split('//')[1])
else:
print(i)
Explanation: Getting Political Leaning from Allsides from News Sources of all images in baseline dataset
Allsides bias data was generously provided by Allsides
End of explanation
facebook = pd.read_csv('../Facebook_study.csv')
facebook.head()
cols = ['p', 'avg_align']
facebook = pd.read_csv('../Facebook_study.csv', usecols=cols)
facebook.head()
def tag_facebookbias_rating(candidate):
candidate['facebook_p'] = ''
candidate['facebookbias_rating'] = 999
count = 0
for i, valuei in enumerate(candidate.visible_link):
count += 1
valuei = valuei.split('//')[1]
valuei = valuei.split('/')[0]
print(valuei, count)
for j, valuej in enumerate(facebook.p):
if valuej == valuei:
print(valuei, valuej)
candidate.loc[i, 'facebookbias_rating'] = facebook.loc[j, 'avg_align']
candidate.loc[i, 'facebook_p'] = valuej
tag_facebookbias_rating(HC)
tag_facebookbias_rating(DT)
DT.facebookbias_rating[DT.facebookbias_rating < 3].plot.hist(alpha=0.5, bins=20, range=(-1,1), color='red')
HC.facebookbias_rating[HC.facebookbias_rating < 3].plot.hist(alpha=0.5, bins=20, range=(-1,1), color='blue')
plt.savefig('imagebox_facebookbias_hist.png')
DT.facebookbias_rating[DT.facebookbias_rating > 3].plot.hist(alpha=0.5, bins=10, range=(998,1000), color='red')
HC.facebookbias_rating[HC.facebookbias_rating > 3].plot.hist(alpha=0.5, bins=10, range=(998,1000), color='blue')
plt.savefig('imagebox_facebookbias_hist_unknowns.png')
HC.facebookbias_rating[HC.facebookbias_rating > 3].plot.hist()
DT.facebookbias_rating[DT.facebookbias_rating > 3].plot.hist()
HC.facebookbias_rating.value_counts()
DT.facebookbias_rating.value_counts()
Explanation: Getting political leaning from a Facebook political bias ratings study
Data is available here
End of explanation
def convert_facebookbias_toInts(col):
if col >= 0.6 and col <= 1:
return 2
elif col >= 0.2 and col < 0.6:
return 1
elif col > -0.2 and col < 0.2:
return 0
elif col > -0.6 and col <= -0.2:
return -1
elif col <= -0.6:
return -2
elif col == 999:
return 999
else:
return 999
HC['facebook_int'] = HC.facebookbias_rating.apply(convert_facebookbias_toInts)
DT['facebook_int'] = DT.facebookbias_rating.apply(convert_facebookbias_toInts)
HC.head()
HC.facebook_int.value_counts()
DT.facebook_int.value_counts()
Explanation: Combining ratings from Allsides and Facebook, together with crowdsourced political bias ratings from Mondo Times and author's judgement
First, Facebook ratings were categorised into 2, 1, 0, -1, and -2 integers so as to be compatible with the Allsides data. Between-category thresholds were selected so as to make each bucket equal in float range of values.
End of explanation
def combine_ratings(candidate):
candidate['combine_rating'] = 'Not Rated'
for i, valuei in enumerate(candidate.allsides_bias_rating):
try:
# STATEMENTS FOR IF BOTH RATINGS AGREE:
# Both bias ratings say LEFT
if (valuei < 0) and (candidate.loc[i, 'facebook_int'] < 0):
print(valuei, candidate.loc[i, 'facebook_int'], "Left")
candidate.loc[i, 'combine_rating'] = "Left"
# Both bias ratings say CENTER
elif (valuei == 0.0) and (candidate.loc[i, 'facebook_int'] == 0):
print(valuei, candidate.loc[i, 'facebook_int'], "Center")
candidate.loc[i, 'combine_rating'] = "Center"
# Both bias ratings say RIGHT
elif (0 < valuei < 3) and (0 < candidate.loc[i, 'facebook_int'] < 3):
print(valuei, candidate.loc[i, 'facebook_int'], "Right")
candidate.loc[i, 'combine_rating'] = "Right"
# STATEMENTS FOR IF RATINGS ARE ONLY PRESENT IN ONE (ALLSIDES OR FACEBOOK STUDY)
# Only one scale has a rating of LEFT, while the other has no entry
elif (valuei < 0 and candidate.loc[i, 'facebook_int'] == 999) or (valuei == 999 and candidate.loc[i, 'facebook_int'] < 0):
print(valuei, candidate.loc[i, 'facebook_int'], "Left")
candidate.loc[i, 'combine_rating'] = "Left"
# Only one scale has a rating of CENTER, while the other has no entry
elif (valuei == 0 and candidate.loc[i, 'facebook_int'] == 999) or (valuei == 999 and candidate.loc[i, 'facebook_int'] == 0):
print(valuei, candidate.loc[i, 'facebook_int'], "Center")
candidate.loc[i, 'combine_rating'] = "Center"
# Only one scale has a rating of RIGHT, while the other has no entry
elif (0 < valuei < 3 and candidate.loc[i, 'facebook_int'] == 999) or (valuei == 999 and 0 < candidate.loc[i, 'facebook_int'] < 3):
print(valuei, candidate.loc[i, 'facebook_int'], "Right")
candidate.loc[i, 'combine_rating'] = "Right"
# ALL OTHER RATINGS ARE EITHER ABSENT FOR BOTH SCALES OR THE SCALES DISAGREE
else:
print(valuei, candidate.loc[i, 'facebook_int'], "Not Rated")
candidate.loc[i, 'combine_rating'] = "Unknown / unreliable"
except KeyError:
continue
combine_ratings(HC)
len(HC)
combine_ratings(DT)
Explanation: Next, ratings from Allsides and the Facebook study were combined. Where ratings from both Allsides and Facebook are absent, or disagree, bias rating was decided by Mondo Times where an outlet received more than 20 votes, and/or the author based on outlets' "About" pages and other information. Where doubt remained, or ratings are not applicable (e.g. Getty Images), Unknown / Unreliable was assigned.
End of explanation
HC_Unrated = HC[HC.combine_rating == "Unknown / unreliable"]
DT_Unrated = DT[DT.combine_rating == "Unknown / unreliable"]
HC_Unrated.news_source_url.unique()
DT_Unrated.news_source_url.unique()
Unrated_newssource_list = HC_Unrated.news_source_url.unique().tolist()
DT_Unrated_newssource_list = DT_Unrated.news_source_url.unique().tolist()
print(len(Unrated_newssource_list))
print(len(DT_Unrated_newssource_list))
Unrated_newssource_list
DT_Unrated_newssource_list
HC[HC.news_source_url == 'www.youtube.com']
Explanation: Make list of unique news sources, combine lists together from HC and DT, and output to a csv:
End of explanation
Unrated_newssource_list.append('www.youtube.com')
for i in DT_Unrated_newssource_list:
if i not in Unrated_newssource_list:
Unrated_newssource_list.append(i)
len(Unrated_newssource_list)
#tmp = pd.DataFrame(Unrated_newssource_list, columns=["news_source"])
#tmp.to_csv('unrated_newssources_imagebox.csv', index=False)
Explanation: Despite YouTube being in the Facebook study, the channel observed here in Hillary Clinton's list is a channel that displays a distinct bias toward Donald Trump and against Hillary Clinton. Therefore, it will be manually rated.
In addition, Reddit, which is also channel-based, will be rated on the specific channel, and not the overall site.
End of explanation
manual_rating = pd.read_csv('unrated_newssources_imagebox.csv')
manual_rating
def merge_manual_ratings(candidate, col):
candidate['final_rating'] = ''
for i, valuei in enumerate(candidate.news_source_url):
for j, valuej in enumerate(manual_rating.news_source):
if (valuei == valuej):
print(valuei, valuej, manual_rating.loc[j, col])
try:
if manual_rating.loc[j, col] < 0:
print("Left")
candidate.loc[i, 'final_rating'] = "Left"
elif manual_rating.loc[j, col] == 0:
print("Center")
candidate.loc[i, 'final_rating'] = "Center"
elif 999 > manual_rating.loc[j, col] > 0:
print("Right")
candidate.loc[i, 'final_rating'] = "Right"
elif manual_rating.loc[j, col] == 999:
print("Unknown/Unreliable")
candidate.loc[i, 'final_rating'] = "Unknown / unreliable"
except KeyError:
continue
for i, valuei in enumerate(candidate.final_rating):
if valuei == '':
try:
print("currently empty. Let's fill it up!!")
candidate.loc[i, 'final_rating'] = candidate.loc[i, 'combine_rating']
except KeyError:
continue
merge_manual_ratings(HC, 'final_rating_HC')
merge_manual_ratings(DT, 'final_rating_DT')
HC.allsides_bias_rating.value_counts()
DT.allsides_bias_rating.value_counts()
HC.facebookbias_rating.value_counts()
DT.facebookbias_rating.value_counts()
HC.final_rating.value_counts()
DT.final_rating.value_counts()
HC.final_rating.value_counts().plot(kind='bar', alpha=0.5, color='blue')
DT.final_rating.value_counts().plot(kind='bar', alpha=0.5, color='red')
HC.allsides_bias_rating.value_counts().plot(kind='bar', alpha=0.5, color='blue')
DT.allsides_bias_rating.value_counts().plot(kind='bar', alpha=0.5, color='red')
HC[HC.news_source_url == 'russia-insider.com']
DT[DT.final_rating == 'Unknown / unreliable']
HC.to_csv('HC_imagebox_full_ratings.csv', index=False)
DT.to_csv('DT_imagebox_full_ratings.csv', index=False)
HC.columns
Explanation: Read in my manual bias ratings:
End of explanation |
8,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustering Online Retail Sales Data
Dataset
Step1: Load data
Step2: Find out if there are any nan in the columns
Step3: Drop all records having nan CustomerId
Step4: Find number of unique values for each column
Step5: Use describe to find range of each column
Step6: We see the Quantity column has a large negative value. Negative values are probably not valid for this analysis. Drop the records having negative values in Quantity. Verify that there are no more negative values in the columns
Step7: Convert the InvoiceDate to datetime.
Step8: Create a calculated field to compute TotalPrice
Step9: For calculating recency, use the max of InvoiceDate as the point of reference.
Step10: Calculate the R-F-M.
Step11: Rename the columns - "recency", "frequency", "monetary"
Step12: Digitize the columns for R-F-M into 5 equal buckets. To achieve this, find percentile values as bucket boundaries. These will create 5 buckets of equal sizes.
Step13: Find what could be an optimal number of clusters using an elbow plot. As we see in the plot below, we can use 5 or 6 clusters (K) for the KMeans algorithm. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing, metrics, cluster
%matplotlib inline
Explanation: Clustering Online Retail Sales Data
Dataset: https://archive.ics.uci.edu/ml/datasets/online+retail
End of explanation
df = pd.read_excel("/data/Online Retail.xlsx")
df.head()
Explanation: Load data
End of explanation
df.info()
df.isna().any()
Explanation: Find out if there are any nan in the columns
End of explanation
df = df[~df.CustomerID.isna()]
df.info()
Explanation: Drop all records having nan CustomerId
End of explanation
df.nunique()
Explanation: Find number of unique values for each column
End of explanation
df.describe()
Explanation: Use describe to find range of each column
End of explanation
df = df[df.Quantity>0]
df.info()
df.describe()
df.head()
Explanation: We see the Quantity column has a large negative value. Negative values are probably not valid for this analysis. Drop the records having negative values in Quantity. Verify that there are no more negative values in the columns
End of explanation
df.InvoiceDate = pd.to_datetime(df.InvoiceDate)
df.head()
Explanation: Convert the InvoiceDate to datetime.
End of explanation
df["TotalPrice"] = df.Quantity * df.UnitPrice
df.head()
Explanation: Create a calculated field to compute TotalPrice
End of explanation
last_date = df.InvoiceDate.max()
last_date
Explanation: For calculating recency, use the max of InvoiceDate as the point of reference.
End of explanation
rfm = df.groupby("CustomerID").agg({
"InvoiceDate": lambda values: (last_date - values.max()).days,
"InvoiceNo" : lambda values: len(values),
"TotalPrice": lambda values: np.sum(values)
})
rfm.head()
Explanation: Calculate the R-F-M.
End of explanation
rfm.columns = ["recency", "frequency", "monetary"]
rfm.head()
Explanation: Rename the columns - "recency", "frequency", "monetary"
End of explanation
quantiles = np.arange(1, 6) * 20
quantiles
rfm["r_score"] = np.digitize(rfm.recency, bins = np.percentile(rfm.recency, quantiles)
, right=True)
rfm["m_score"] = np.digitize(rfm.monetary, bins = np.percentile(rfm.monetary, quantiles)
, right=True)
rfm["f_score"] = np.digitize(rfm.frequency, bins = np.percentile(rfm.frequency, quantiles)
, right=True)
rfm["r_score"] = 4 - rfm["r_score"]
rfm["r_score"] = rfm["r_score"] + 1
rfm["f_score"] = rfm["f_score"] + 1
rfm["m_score"] = rfm["m_score"] + 1
rfm.head()
rfm.sample(10, random_state=123)
scaler = preprocessing.StandardScaler()
X = rfm[["r_score", "f_score", "m_score"]].values
X = scaler.fit_transform(X.astype("float32"))
X
Explanation: Digitize the columns for R-F-M into 5 equal buckets. To achieve this, find percentile values as bucket boundaries. These will create 5 buckets of equal sizes.
End of explanation
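A tiny worked example of how np.digitize behaves with right=True (values at or below a boundary fall into the lower bucket):
import numpy as np
np.digitize([1, 5, 20], bins=[2, 10], right=True)  # -> array([0, 1, 2])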
inertias = {}
for k in range(2, 10):
kmeans = cluster.KMeans(n_clusters=k, random_state=1)
kmeans.fit(X)
inertias[k] = kmeans.inertia_
pd.Series(inertias).plot()
plt.xlabel("K (num of clusters)")
plt.ylabel("Inertia Score")
k = 5
kmeans = cluster.KMeans(n_clusters=k, random_state = 1)
rfm["cluster"] = kmeans.fit_predict(X)
rfm.cluster.value_counts()
rfm["distance"] = 0.0
for i in range(k):
centroid = kmeans.cluster_centers_[i].reshape(1, -1)
cluster_points = X[rfm.cluster == i]
rfm["distance"][rfm.cluster == i] = metrics.euclidean_distances(centroid, cluster_points).flatten()
rfm.sample(20)
rfm.groupby("cluster").distance.agg(["mean", "count"])
Explanation: Find what could be an optimal number of clusters using an elbow plot. As we see in the plot below, we can use 5 or 6 clusters (K) for the KMeans algorithm.
End of explanation |
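As a hedged cross-check of the elbow choice, silhouette scores for a few candidate K values can be compared (this reuses X from the scaling cell above):
from sklearn import cluster, metrics

for k in range(2, 10):
    labels = cluster.KMeans(n_clusters=k, random_state=1).fit_predict(X)
    print(k, metrics.silhouette_score(X, labels))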
8,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic statistical analysis of time series
In this example, basic time series statistical analysis is demonstrated.
Step1: Generating a Gaussian stochastic signal
Lets start by generating a Gaussian time series signal. The duration is set to 3 hours and sampling rate is 10Hz.
Step2: The signal will be generated using a rectangular power spectrum between 5 and 15 seconds of period. It is important to use sufficiently many discretizations to obtain a non-repeating approximation.
Step3: Generating the signal by super-positioning a large number of cosine functions with stochastic amplitudes and phases.
Step4: Statistical analysis of the signal
Lets see if the signal resembles a Gaussian signal with zero mean.
Step5: Estimate the normal distribution parameters from data and plot the normal pdf with the histogram for data.
Step6: It is clear that the signal is indeed Gaussian!
Extreme value analysis
Finding the peaks
evapy package provides robust and fast methods to identify upcrossings and peaks. The declustered peaks refer to the largest peak per upcrossing. Thus, for a narrow-banded signal, all peaks are declustered. In this example, declustering will occur at the mean-upcrossing.
Step7: Counting upcrossings
Lets find the number of upcrossings for different levels using the evapy package.
Step8: Since the signal is 10800 seconds long, this means that the Tz (mean zero-crossing period) of the signal is
Step9: Now, lets find the upcrossing rate at mean+std level.
Step10: The upcrossing rate is then
Step11: Statistical analysis of peaks
Now that we have established the peaks, lets investigate their distribution using the different distribution functions the evapy package offers.
Step12: Using Rayleigh distribution
Step13: Using the Weibull distribution
Step14: The shape parameter should be close to 2. Sometimes the estimation algorithm fails and diverges for the 3-parameter Weibull using the MLE.
Lets see how the histogram looks.
Step15: Often, it helps to provide a good starting value for the scale parameter, or to lock the location parameter based on prior knowledge. In our case, the signal mean can be a reasonable choice.
Step16: Using the generalized exponential tail distribution
Step17: We have now discarded most of our data
Step18: WARNING
Step19: Estimating the largest maxima
We can use the relationship between the peak distributions and the maxima distributions to establish an estimate for the largest maxima. First, lets see what's the largest observed value in our signal.
Step20: Using Rayleig
Step21: Using Weibull
Step22: Using generalized exponential tail | Python Code:
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
import evapy
Explanation: Basic statistical analysis of time series
In this example, basic time series statistical analysis is demonstrated.
End of explanation
t = np.arange(0., 3*3600., 0.1)
Explanation: Generating a Gaussian stochastic signal
Let's start by generating a Gaussian time series signal. The duration is set to 3 hours and the sampling rate to 10 Hz.
End of explanation
start_period = 15.
stop_period = 5.
discrete_size = 1000
amp_norm = stats.norm.rvs(size=discrete_size)
phase = stats.uniform.rvs(size=discrete_size)*2.*np.pi
signal_mean = 0.
freq = np.linspace(2.*np.pi*1./start_period, 2.*np.pi*1./stop_period, num=discrete_size)
Explanation: The signal will be generated using a rectangular power spectrum between 5 and 15 seconds of period. It is important to use sufficiently many discretizations to obtain a non-repeating approximation.
End of explanation
signal = np.zeros(len(t)) + signal_mean
for i, freq_i in enumerate(freq):
signal = signal + amp_norm[i]*np.cos(freq_i*t + phase[i])
plt.figure('Snapshot of signal', figsize=(16,9))
plt.title('Snapshot of the signal')
plt.plot(t[1500:2000], signal[1500:2000])
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.grid()
plt.show()
Explanation: Generating the signal by super-positioning a large number of cosine functions with stochastic amplitudes and phases.
End of explanation
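For reference, the same superposition can be written without the Python loop using NumPy broadcasting; a sketch (illustrative, and note that it materializes a discrete_size-by-len(t) intermediate array, so it trades memory for speed on long signals):
# Vectorized equivalent of the superposition loop above
signal_vec = signal_mean + (amp_norm[:, None] * np.cos(np.outer(freq, t) + phase[:, None])).sum(axis=0)
print(np.allclose(signal, signal_vec))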
descr_stats = stats.describe(signal)
print('mean: {}'.format(descr_stats.mean))
print('std: {}'.format(np.sqrt(descr_stats.variance)))
print('skewness: {}'.format(descr_stats.skewness))
Explanation: Statistical analysis of the signal
Let's see if the signal resembles a Gaussian signal with zero mean.
End of explanation
params = stats.norm.fit(signal)
signal_dist = stats.norm(*params)
x = np.linspace(-100, 100, num=200)
plt.figure('Signal histogram', figsize=(16,9))
plt.title('Signal histogram and Normal PDF')
plt.plot(x, signal_dist.pdf(x), color='r')
plt.hist(signal, bins=50, density=True)
plt.show()
Explanation: Estimate the normal distribution parameters from the data and plot the normal PDF with the histogram of the data.
End of explanation
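To quantify the visual fit, a one-sample Kolmogorov-Smirnov test can be run against the fitted distribution; a minimal sketch (strictly speaking, estimating the parameters from the same data biases the p-value, but it is a useful sanity check):
ks_stat, p_value = stats.kstest(signal, 'norm', args=params)
print('KS statistic: {}, p-value: {}'.format(ks_stat, p_value))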
peak_index = evapy.evstats.argrelmax(signal)
peak = signal[peak_index]
peak_dc_index = evapy.evstats.argrelmax_decluster(signal, x_up=np.mean(signal))
peak_dc = signal[peak_dc_index]
plt.figure('Finding the peaks', figsize=(16,9))
plt.title('Finding the peaks')
plt.plot(t, signal, 'k-', label='signal')
plt.plot(t[peak_dc_index], peak_dc, 'r^', markersize=12, label='peak declustered')
plt.plot(t[peak_index], peak, 'bo', label='peak')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.xlim(100,200)
plt.grid()
plt.legend()
plt.show()
Explanation: It is clear that the signal is indeed Gaussian!
Extreme value analysis
Finding the peaks
The evapy package provides robust and fast methods to identify upcrossings and peaks. The declustered peaks refer to the largest peak per upcrossing. Thus, for a narrow-banded signal, all peaks are declustered. In this example, declustering will occur at the mean-upcrossing level.
End of explanation
uc_0 = evapy.evstats.argupcross(signal, x_up=np.mean(signal))
print('number of upcrossings above the mean: {}'.format(len(uc_0)))
Explanation: Counting upcrossings
Let's find the number of upcrossings for different levels using the evapy package.
End of explanation
t[-1]/len(uc_0)
Explanation: Since the signal is 10800 seconds long, this mean that the Tz ((mean or) zero crossing period) of the signal is
End of explanation
uc_1 = evapy.evstats.argupcross(signal, x_up=np.mean(signal)+np.std(signal))
print('number of upcrossings above the mean+std: {}'.format(len(uc_1)))
Explanation: Now, let's find the upcrossing rate at the mean+std level.
End of explanation
len(uc_1)/t[-1]
Explanation: The upcrossing rate is then:
End of explanation
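As a sanity check, for a stationary Gaussian process Rice's formula predicts how the upcrossing rate decays with level: nu(u) = nu(mean) * exp(-(u - mean)^2 / (2*std^2)). A sketch comparing that prediction with the counts above (illustrative, using only quantities already computed):
nu_mean = len(uc_0) / t[-1] # empirical mean-upcrossing rate from above
u = np.mean(signal) + np.std(signal)
nu_theory = nu_mean * np.exp(-(u - np.mean(signal))**2 / (2. * np.std(signal)**2))
print('theoretical upcrossing rate at mean+std: {}'.format(nu_theory))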
peak_index = evapy.evstats.argrelmax(signal)
peak = signal[peak_index]
peak_dc_index = evapy.evstats.argrelmax_decluster(signal, x_up=np.mean(signal))
peak_dc = signal[peak_dc_index]
r = np.linspace(-10, 100, num=100)
Explanation: Statistical analysis of peaks
Now that we have established the peaks, let's investigate their distribution using the different distribution functions the evapy package offers.
End of explanation
params_ray = evapy.distributions.rayleigh.fit(peak_dc)
peak_dc_ray = evapy.distributions.rayleigh(*params_ray)
print('location: {}'.format(params_ray[0]))
print('scale: {}'.format(params_ray[1]))
plt.figure('Peaks histogram and Rayleigh distribution', figsize=(16,9))
plt.plot(r, peak_dc_ray.pdf(r), color='r')
plt.hist(peak_dc, bins=25, density=True)
plt.show()
Explanation: Using Rayleigh distribution:
End of explanation
params_wb = evapy.distributions.weibull.fit(peak_dc, scale=20.)
peak_dc_wb = evapy.distributions.weibull(*params_wb)
print('shape: {}'.format(params_wb[0]))
print('location: {}'.format(params_wb[1]))
print('scale: {}'.format(params_wb[2]))
Explanation: Using the Weibull distribution:
Note: Sometimes, the estimation algorithm fails if the starting value for the scale parameter is too far off. This is a known issue and we are working on it. In the meantime, you may provide a reasonable starting value for the scale parameter.
End of explanation
plt.figure('Peaks histogram and Weibull distribution', figsize=(16,9))
plt.plot(r, peak_dc_wb.pdf(r), color='r')
plt.hist(peak_dc, bins=25, density=True)
plt.show()
Explanation: The shape parameter should be close to 2. Sometimes the estimation algorithm fails and diverges for the 3-parameter Weibull using the MLE.
Let's see how the histogram looks.
End of explanation
params_wb = evapy.distributions.weibull.fit(peak_dc, floc=np.mean(signal))
peak_dc_wb = evapy.distributions.weibull(*params_wb)
print('shape: {}'.format(params_wb[0]))
print('location: {}'.format(params_wb[1]))
print('scale: {}'.format(params_wb[2]))
plt.figure('Peaks histogram and Weibull distribution', figsize=(16,9))
plt.plot(r, peak_dc_wb.pdf(r), color='r')
plt.hist(peak_dc, bins=25, density=True)
plt.show()
Explanation: Often, it helps to provide a good starting value for the scale parameter, or to lock the location parameter based on prior knowledge. In our case, the signal mean can be a reasonable choice.
End of explanation
peak_dc_trunc = peak_dc[peak_dc>=35.]
Explanation: Using the generalized exponential tail distribution:
Note that the generalized exponential tail distribution is particularly suitable for truncated data. Let's truncate our peak data at 35.
End of explanation
print('original data size: {}'.format(len(peak_dc)))
print('truncated data size: {}'.format(len(peak_dc_trunc)))
Explanation: We have now discarded most of our data:
End of explanation
params_get = evapy.distributions.genexptail.fit(peak_dc_trunc, scale=20., floc=np.mean(signal))
peak_dc_get = evapy.distributions.genexptail(*params_get)
print('shape: {}'.format(params_get[0]))
print('q: {}'.format(params_get[1]))
print('location: {}'.format(params_get[2]))
print('scale: {}'.format(params_get[3]))
plt.figure('Peaks histogram and generalized exponential tail distribution', figsize=(16,9))
plt.plot(r, peak_dc_get.pdf(r), color='r')
plt.hist(peak_dc_trunc, bins=15, density=True)
plt.show()
Explanation: WARNING: At this point, the estimation algorithm for the generalized exponential tail distribution is very unstable. It is recommended to lock the location parameter and provide a good starting value for the scale parameter. We are working on it!
End of explanation
max_obs = np.max(signal)
print('largest maxima: {}'.format(max_obs))
Explanation: Estimating the largest maxima
We can use the relationship between the peak distributions and the maxima distributions to establish an estimate for the largest maxima. First, let's see what the largest observed value in our signal is.
End of explanation
rloc, rscale = evapy.distributions.rayleigh.fit(peak_dc, floc=0.)
rN = len(peak_dc)
maxima_ray_dist = evapy.distributions.acer_o1(2., rN, loc=rloc, scale=np.sqrt(2.)*rscale)
median_maxima = maxima_ray_dist.ppf(0.5)
print('median maxima: {}'.format(median_maxima))
Explanation: Using Rayleigh:
End of explanation
wshape, wloc, wscale = evapy.distributions.weibull.fit(peak_dc, floc=0.)
wN = len(peak_dc)
maxima_wb_dist = evapy.distributions.acer_o1(wshape, wN, loc=wloc, scale=wscale)
median_maxima = maxima_wb_dist.ppf(0.5)
print('median maxima: {}'.format(median_maxima))
Explanation: Using Weibull:
End of explanation
gshape, qn, gloc, gscale = evapy.distributions.genexptail.fit(peak_dc_trunc, floc=0., scale=20.)
gN = len(peak_dc_trunc)
maxima_get_dist = evapy.distributions.acer_o1(gshape, gN*qn, loc=gloc, scale=gscale)
median_maxima = maxima_get_dist.ppf(0.5)
print('median maxima: {}'.format(median_maxima))
Explanation: Using generalized exponential tail:
End of explanation |
8,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
============================================================
Define target events based on time lag, plot evoked response
============================================================
This script shows how to define higher order events based on
time lag between reference and target events. For
illustration, we will put face stimuli presented into two
classes, that is 1) followed by an early button press
(within 590 milliseconds) and 2) followed by a late button
press (later than 590 milliseconds). Finally, we will
visualize the evoked responses to both 'quickly-processed'
and 'slowly-processed' face stimuli.
Step1: Set parameters
Step2: Find stimulus event followed by quick button presses
Step3: View evoked response | Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.event import define_target_events
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
Explanation: ============================================================
Define target events based on time lag, plot evoked response
============================================================
This script shows how to define higher order events based on
time lag between reference and target events. For
illustration, we will put face stimuli presented into two
classes, that is 1) followed by an early button press
(within 590 milliseconds) and 2) followed by a late button
press (later than 590 milliseconds). Finally, we will
visualize the evoked responses to both 'quickly-processed'
and 'slowly-processed' face stimuli.
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + STI 014 - bad channels (modify to your needs)
include = [] # or stim channels ['STI 014']
raw.info['bads'] += ['EEG 053'] # bads
# pick MEG channels
picks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False, eog=True,
include=include, exclude='bads')
Explanation: Set parameters
End of explanation
reference_id = 5 # presentation of a smiley face
target_id = 32 # button press
sfreq = raw.info['sfreq'] # sampling rate
tmin = 0.1 # trials leading to very early responses will be rejected
tmax = 0.59 # ignore face stimuli followed by button press later than 590 ms
new_id = 42 # the new event id for a hit. If None, reference_id is used.
fill_na = 99 # the fill value for misses
events_, lag = define_target_events(events, reference_id, target_id,
sfreq, tmin, tmax, new_id, fill_na)
print(events_) # The 99 indicates missing or too late button presses
# besides the events also the lag between target and reference is returned
# this could e.g. be used as parametric regressor in subsequent analyses.
print(lag[lag != fill_na]) # lag in milliseconds
# #############################################################################
# Construct epochs
tmin_ = -0.2
tmax_ = 0.4
event_id = dict(early=new_id, late=fill_na)
epochs = mne.Epochs(raw, events_, event_id, tmin_,
tmax_, picks=picks, baseline=(None, 0),
reject=dict(mag=4e-12))
# average epochs and get an Evoked dataset.
early, late = epochs['early'].average(), epochs['late'].average() # explicit keys avoid dict-ordering ambiguity
Explanation: Find stimulus event followed by quick button presses
End of explanation
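Before averaging, it can be worth checking how many epochs fell into each class; a minimal sketch (illustrative, not part of the original example):
# Count epochs per condition
print('early presses: %d, late/missed: %d' % (len(epochs['early']), len(epochs['late'])))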
times = 1e3 * epochs.times # time in milliseconds
title = 'Evoked response followed by %s button press'
plt.clf()
ax = plt.subplot(2, 1, 1)
early.plot(axes=ax)
plt.title(title % 'early')
plt.ylabel('Evoked field (fT)')
ax = plt.subplot(2, 1, 2)
late.plot(axes=ax)
plt.title(title % 'late')
plt.ylabel('Evoked field (fT)')
plt.show()
Explanation: View evoked response
End of explanation |
8,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook to add co-ordinates for 1999 Polling Places
The AEC started putting co-ordinates on polling place files from 2007.
The code below matches to 2007 polling places, where the name of the venue is the same.
While not perfect, this should result, the vast majority of the time, in a co-ordinate closely representing the location of the polling place in 1999.
Libraries
Step1: Helper functions
Step2: Import polling places
import_polling_places(filepath)
* Takes a path to a polling place file
* Returns a tidy data frame
* Renames some columns
* Drops duplicate rows
Step3: Import 1999 polling places
Step4: Matches
Pandas setting I need for below to behave
Pandas generates warnings when you work with a data frame that's a copy of another
It thinks I might think I'm changing df_pp_1999 when I'm working with df_pp_1999_working
I'm turning this warning off because I'm doing this on purpose, so I can keep df_pp_1999 as a 'yet to be matched' file, and update it with each subsequent working file
Step5: Functions
match_polling_places(df_pp_1999, df_pp, settings)
For the 1999 data frame, and a given other polling place data frame, and a set of settings, run a merge, and return the rows that matched based on the join you specified
Step6: match_unmatched_polling_places(df1, settings)
This is a wrapper function for match_polling_places
It will only pass data that is NOT yet matched in df1 to the match function, so that we keep track of at what point in our order we matched the data frame (rather than overriding each time it matches)
This will matter as we do less high quality matches at the bottom of the pile
Step7: match_status(df1)
A function to tell me, for a given data frame, what the match status is
Step8: Match attempts
Match 1 - 2007 on premises name, state, and postcode
Other than schools that have moved, these places should be the same
And for schools that have moved, the postcode test should ensure it's not too far
Step9: Match 2 through 4 - 2010 through 2016 on premises name, state, and postcode
Other than schools that have moved, these places should be the same
And for schools that have moved, the postcode test should ensure it's not too far
Step10: Match 5 - 2007 polling places on polling place name, state, and postcode
This will match to a polling place name in a different location, as long as it is in the same suburb.
For the purposes of this analysis, this should be good enough.
Step11: Match 6-8 - 2010-2016 polling places on polling place name, state, and postcode
Step12: Google geocoder
keys.json contains a Google Maps API key, so it's not in this notebook.
Step13: Google geocode example
Step14: Match unmatched so far by google geocoder
Step15: Set the rest to the centroid of their suburb
Step16: How many are left now?
Step17: Hooray! WE HAVE A RESULT FOR EVERYWHERE
Let's write a CSV | Python Code:
import pandas as pd
import numpy as np
from IPython.display import display, HTML
import json
import googlemaps
Explanation: Notebook to add co-ordinates for 1999 Polling Places
The AEC started putting co-ordinates on polling place files from 2007.
The code below matches to 2007 polling places, where the name of the venue is the same.
While not perfect, this should result, the vast majority of the time, in a co-ordinate closely representing the location of the polling place in 1999.
Libraries
End of explanation
def left_of_bracket(s):
if '(' in s:
needle = s.find('(')
r = s[:needle-1].strip()
return r
else:
return s
def state_abbreviation(state):
spaces = state.count(' ')
if spaces == 2:
bits = state.split(' ')
r=''
for b in bits:
r = r + b[:1].upper() # for each word in state grab first letter
return r
elif 'Australia' in state:
r = state[:1].upper() + 'A'
return r
elif state == 'Queensland':
return 'QLD'
elif state == 'Northern Territory':
return 'NT'
else:
r = state[:3].upper()
return r
# i keep forgetting the syntax for this so writing a wrapper
def dedup_df(df, keys, keep = False):
# for a data frame, drop anything thats a duplicate
# if you change keep to first, it'll keep the first row rather than none
df_dedup = df.drop_duplicates(keys, keep)
return df_dedup
Explanation: Helper functions
End of explanation
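A few quick sanity checks of these helpers (the example strings are illustrative):
print(left_of_bracket('Richmond (West)'))      # -> 'Richmond'
print(state_abbreviation('New South Wales'))   # -> 'NSW'
print(state_abbreviation('Western Australia')) # -> 'WA'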
def import_polling_places(filepath):
# read csv
df_pp = pd.read_csv(
filepath
)
# pick the columns I want to keep
cols = [
'State',
'PollingPlaceNm',
'PremisesNm',
'PremisesAddress1',
'PremisesAddress2',
'PremisesAddress3',
'PremisesSuburb',
'PremisesStateAb',
'PremisesPostCode',
'Latitude',
'Longitude'
]
# filter for those
df_pp = df_pp[cols]
# create a polling place column missing the bracket
lambda_polling_places = lambda x: left_of_bracket(x)
df_pp['polling_place'] = df_pp['PollingPlaceNm'].apply(lambda_polling_places)
# rename columns to make joining easier
df_pp['premises'] = df_pp['PremisesNm']
df_pp['postcode'] = df_pp['PremisesPostCode']
# replace in the col headers list where I've modified/added the column
cols = [c.replace('PollingPlaceNm', 'polling_place') for c in cols]
cols = [c.replace('PremisesNm', 'premises') for c in cols]
cols = [c.replace('PremisesPostCode', 'postcode') for c in cols]
# reorder df
df_pp = df_pp[cols]
# dedup
df_pp = df_pp.drop_duplicates()
# make all headers lower case
df_pp.columns = [x.lower() for x in df_pp.columns]
return df_pp
filepath = 'federal_election_polling_places/pp_2007_election.csv'
test = import_polling_places(filepath)
display('Rows: ' + str(len(test.index)))
Explanation: Import polling places
import_polling_places(filepath)
* Takes a path to a polling place file
* Returns a tidy data frame
* Renames some columns
* Drops duplicate rows
End of explanation
def import_1999_pp(filepath):
df_pp_1999 = pd.read_csv(
filepath
)
# add blank columns for match types and lat/lng
df_pp_1999['match_source'] = np.nan
df_pp_1999['match_type'] = np.nan
df_pp_1999['latitude'] = np.nan
df_pp_1999['longitude'] = np.nan
# tell it to index on state and polling place
df_pp_1999 = df_pp_1999.set_index(['state','polling_place'])
return df_pp_1999
filepath = '1999_referenda_output/polling_places.csv'
df_pp_1999 = import_1999_pp(filepath)
display(df_pp_1999.head(3))
Explanation: Import 1999 polling places
End of explanation
pd.set_option('chained_assignment',None)
Explanation: Matches
Pandas setting I need for below to behave
Pandas generates warnings when you work with a data frame that's a copy of another
It thinks I might think I'm changing df_pp_1999 when I'm working with df_pp_1999_working
I'm turning this warning off because I'm doing this on purpose, so I can keep df_pp_1999 as a 'yet to be matched' file, and update it with each subsequent working file
End of explanation
def match_polling_places(df1, df2, settings):
# split up our meta field
keys = settings['keys']
match_source = settings['match_source']
match_type = settings['match_type']
# filter for those columns
df_working = df1.reset_index()[[
'state',
'polling_place',
'premises',
'address',
'suburb',
'postcode',
'wheelchair_access'
]]
# the keys I want to keep from the second df in the join are the group_by keys, and also lat/lng
cols_df2 = keys + ['latitude','longitude']
# add cols for match type
df_working['match_source'] = match_source
df_working['match_type'] = match_type
# run the join
df_working = pd.merge(
df_working,
df2[cols_df2],
on=keys,
how='left'
)
# delete those which we didn't match
df_working = df_working[~df_working['latitude'].isnull()]
# dedup on the keys we matched on
df_working = dedup_df(df_working, keys)
return df_working
# test match_polling_places
filepath = '1999_referenda_output/polling_places.csv'
df1 = import_1999_pp(filepath)
filepath2 = 'federal_election_polling_places/pp_2007_election.csv'
df2 = import_polling_places(filepath2)
test = match_polling_places(
df1,
df2,
dict(
keys = ['state','premises','postcode'],
match_source = '2007 Polling Places',
match_type = 'Match 01 - state, premises, postcode'
)
)
display(test.head(3))
Explanation: Functions
match_polling_places(df_pp_1999, df_pp, settings)
For the 1999 data frame, and a given other polling place data frame, and a set of settings, run a merge, and return the rows that matched based on the join you specified
<br />
E.g:
match_polling_places(
df_pp_1999,
df_pp,
dict(
keys = ['state','premises','postcode'],
match_source = '2007 Polling Places',
match_type = 'Match 01 - state, premises, postcode'
)
)
runs a join on state, premises, and postcode between df1 and df2
keeps a defined set of columns from df1
adds the columns match_source and match_type, and sets their value
replaces the latitude and longitude columns of df1 with those from df2
returns this data frame, deleting all rows that didn't match from df1
End of explanation
def match_unmatched_polling_places(df1, settings):
# get polling place file from settings
filepath = settings['pp_filepath']
df2 = import_polling_places(filepath)
# work out which rows we haven't yet matched
df1_unmatched = df1[df1.match_source.isnull()]
# run match for those
df1_matches = match_polling_places(df1_unmatched, df2, settings)
# dedup this file for combinations of state/polling_place (my unique key)
keys = ['state','polling_place']
df1_matches = dedup_df(df1_matches, keys)
# check that worked by making it a key now
df1_matches = df1_matches.set_index(keys)
# update with matches
df1.update(df1_matches)
# return
return df1
Explanation: match_unmatched_polling_places(df1, settings)
This is a wrapper function for match_polling_places
It will only pass data that is NOT yet matched in df1 to the match function, so that we keep track of at what point in our order we matched the data frame (rather than overriding each time it matches)
This will matter as we do less high quality matches at the bottom of the pile
End of explanation
def match_status(df1):
# how many Nans are in match_type?
not_matched = len(df1[df1['match_type'].isnull()].index)
# make a df for none
none = pd.DataFrame(dict(
match_type = 'Not yet matched',
count = not_matched
), index=[0])
if not_matched == len(df1.index): # if all values are not-matched
return none
else:
df = pd.DataFrame(
df1.groupby('match_type')['match_type'].count().reset_index(name='count')
)
# add the non-matched row
df = df.append(none)
return df
Explanation: match_status(df1)
A function to tell me, for a given data frame, what the match status is
End of explanation
# first match attempt - set up file
filepath = '1999_referenda_output/polling_places.csv'
df_pp_1999 = import_1999_pp(filepath)
# double check none are somehow magically matched yet
print('before')
display(match_status(df_pp_1999))
# configure match settings
settings = dict(
pp_filepath = 'federal_election_polling_places/pp_2007_election.csv',
keys = ['state','premises','postcode'],
match_source = '2007 Polling Places',
match_type = 'Match 01 - state, premises, postcode'
)
# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)
print('after')
display(match_status(df_pp_1999))
Explanation: Match attempts
Match 1 - 2007 on premises name, state, and postcode
Other than schools that have moved, these places should be the same
And for schools that have moved, the postcode test should ensure it's not too far
End of explanation
## 2
print('before 2')
display(match_status(df_pp_1999))
# configure match settings
settings = dict(
pp_filepath = 'federal_election_polling_places/pp_2010_election.csv',
keys = ['state','premises','postcode'],
match_source = '2010 Polling Places',
match_type = 'Match 02 - state, premises, postcode'
)
# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)
print('after')
display(match_status(df_pp_1999))
## 3
print('before 3')
display(match_status(df_pp_1999))
# configure match settings
settings = dict(
pp_filepath = 'federal_election_polling_places/pp_2013_election.csv',
keys = ['state','premises','postcode'],
match_source = '2013 Polling Places',
match_type = 'Match 03 - state, premises, postcode'
)
# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)
print('after')
display(match_status(df_pp_1999))
## 4
print('before 4')
display(match_status(df_pp_1999))
# configure match settings
settings = dict(
pp_filepath = 'federal_election_polling_places/pp_2016_election.csv',
keys = ['state','premises','postcode'],
match_source = '2016 Polling Places',
match_type = 'Match 04 - state, premises, postcode'
)
# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)
print('after')
display(match_status(df_pp_1999))
Explanation: Match 2 through 4 - 2010 through 2016 on premises name, state, and postcode
Other than schools that have moved, these places should be the same
And for schools that have moved, the postcode test should ensure it's not too far
End of explanation
print('before 5')
display(match_status(df_pp_1999))
# configure match settings
settings = dict(
pp_filepath = 'federal_election_polling_places/pp_2007_election.csv',
keys = ['state','polling_place','postcode'],
match_source = '2007 Polling Places',
match_type = 'Match 05 - state, polling_place, postcode'
)
# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)
print('after')
display(match_status(df_pp_1999))
Explanation: Match 5 - 2007 polling places on polling place name, state, and postcode
This will match to a polling place name in a different location, as long as it is in the same suburb.
For the purposes of this analysis, this should be good enough.
End of explanation
print('before 6')
display(match_status(df_pp_1999))
# configure match settings
settings = dict(
pp_filepath = 'federal_election_polling_places/pp_2010_election.csv',
keys = ['state','polling_place','postcode'],
match_source = '2010 Polling Places',
match_type = 'Match 06 - state, polling_place, postcode'
)
# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)
print('after')
display(match_status(df_pp_1999))
## 7
print('before 7')
display(match_status(df_pp_1999))
# configure match settings
settings = dict(
pp_filepath = 'federal_election_polling_places/pp_2013_election.csv',
keys = ['state','polling_place','postcode'],
match_source = '2013 Polling Places',
match_type = 'Match 07 - state, polling_place, postcode'
)
# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)
print('after')
display(match_status(df_pp_1999))
## 8
print('before 8')
display(match_status(df_pp_1999))
# configure match settings
settings = dict(
pp_filepath = 'federal_election_polling_places/pp_2016_election.csv',
keys = ['state','polling_place','postcode'],
match_source = '2016 Polling Places',
match_type = 'Match 08 - state, polling_place, postcode'
)
# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)
print('after')
display(match_status(df_pp_1999))
Explanation: Match 6-8 - 2010-2016 polling places on polling place name, state, and postcode
End of explanation
def get_google_api_key():
filepath = 'config/keys.json'
with open(filepath) as data_file:
data = json.load(data_file)
key = data['google_maps']
return key
Explanation: Google geocoder
keys.json contains a Google Maps API key, so it's not in this notebook.
End of explanation
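For reference, get_google_api_key expects config/keys.json to be shaped like this (placeholder value, not a real key):
# config/keys.json (hypothetical contents):
# {
#     "google_maps": "YOUR-API-KEY-HERE"
# }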
def geocode_address(address):
key = get_google_api_key()
componentRestrictions = {'country':'AU'}
gmaps = googlemaps.Client(key=key)
# geocode_result = gmaps.geocode(address)
geocode_result = gmaps.geocode(address, componentRestrictions)
return geocode_result
# display(geocode_address('10/44 Lord St Richmond VIC 3121'))
display(geocode_address('Salt Creek Rd SALT CREEK 5275'))
def geocode_polling_place(row, match_source = 'google geocode'):
address = row['address'] + ' ' + row['suburb'] + ' ' + str(row['postcode'])[:4]
# geocode address
geocode = geocode_address(address)
# update the row
if geocode:
row['match_source'] = match_source
row['match_type'] = geocode[0]['geometry']['location_type']
row['latitude'] = geocode[0]['geometry']['location']['lat']
row['longitude'] = geocode[0]['geometry']['location']['lng']
else:
row['match_source'] = 'google geocode'
row['match_type'] = 'failed'
row = pd.DataFrame(row, index=[0])
return row
# test above function on a few rows
test = df_pp_1999.head(3)
geocode_test = pd.DataFrame()
i = 0
for row in test.reset_index().to_dict('records')[:3]:
row = geocode_polling_place(row, 'Match 09 - Google Geocode')
if geocode_test.empty:
geocode_test = row
else:
geocode_test = geocode_test.append(row)
geocode_test
Explanation: Google geocode example
End of explanation
# test above function on a few rows
unmatched_places = df_pp_1999[df_pp_1999.match_source.isnull()]
geocode_matches = pd.DataFrame()
for row in unmatched_places.reset_index().to_dict('records'):
row = geocode_polling_place(row)
if geocode_matches.empty:
geocode_matches = row
else:
geocode_matches = geocode_matches.append(row)
geocode_matches.head(5)
# reorder geocode in same pattern as non-matches
# get all keys from that table
keys = df_pp_1999.reset_index().columns.values
#reorder by that
geocode_matches_ordered = geocode_matches[keys]
# add state indexes
keys = ['state','polling_place']
# df1_matches = dedup_df(df1_matches, keys)
# check that worked by making it a key now
geocode_matches_ordered = geocode_matches_ordered.set_index(keys)
# update with matches
df_pp_1999.update(geocode_matches_ordered)
# where are we at?
display(match_status(df_pp_1999))
Explanation: Match unmatched so far by google geocoder
End of explanation
filepath = 'from_abs/ssc_2016_aust_centroid.csv'
subtown_centroids = results = pd.read_csv(
filepath
)
# create a column for abbreviated states
lambda_states = lambda x: state_abbreviation(x)
subtown_centroids['state'] = subtown_centroids['STE_NAME16'].apply(lambda_states)
# strip the brackets out of suburb names, and make all caps, to match the 1999 file
lambda_suburbs = lambda x: left_of_bracket(x).upper()
subtown_centroids['suburb'] = subtown_centroids['SSC_NAME16'].apply(lambda_suburbs)
display(subtown_centroids.head(5))
print('before suburbs')
display(match_status(df_pp_1999))
# configure match settings
settings = dict(
pp_filepath = 'from_abs/ssc_2016_aust_centroid.csv',
keys = ['state','suburb'],
match_source = 'ABS Suburb Centroids',
match_type = 'Match 10 - suburb centroid'
)
# rows to try
## try not matched rows, OR rows that the geocode failed on
df_to_match = df_pp_1999[df_pp_1999.match_source.isnull()]
df_to_match = df_to_match.append(df_pp_1999[df_pp_1999['match_type'] == 'failed'])
# get matches
df1_matches = match_polling_places(
df_to_match, # only run on empty rows
subtown_centroids,
settings
)
# dedup this file for combinations of state/polling_place (my unique key)
keys = ['state','polling_place']
# df1_matches = dedup_df(df1_matches, keys)
# check that worked by making it a key now
df1_matches = df1_matches.set_index(keys)
# update with matches
df_pp_1999.update(df1_matches)
print('after')
display(match_status(df_pp_1999))
Explanation: Set the rest to the centroid of their suburb
End of explanation
df_to_match = df_pp_1999[df_pp_1999.match_source.isnull()]
df_to_match = df_to_match.append(df_pp_1999[df_pp_1999['match_type'] == 'failed'])
display(df_to_match)
Explanation: How many are left now?
End of explanation
df_pp_1999.to_csv(
'1999_referenda_output/polling_places_geocoded.csv',
sep = ','
)
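A quick round-trip read can confirm the write succeeded; a sketch:
check = pd.read_csv('1999_referenda_output/polling_places_geocoded.csv')
print(len(check.index))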
Explanation: Hooray! WE HAVE A RESULT FOR EVERYWHERE
Let's write a CSV:
End of explanation |
8,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 10 - eigenvalues and eigenvectors
An eigenvector $\boldsymbol{x}$ and corresponding eigenvalue $\lambda$ of a square matrix $\boldsymbol{A}$ satisfy
$$
\boldsymbol{A} \boldsymbol{x} = \lambda \boldsymbol{x}
$$
Rearranging this expression,
$$
\left( \boldsymbol{A} - \lambda \boldsymbol{I}\right) \boldsymbol{x} = \boldsymbol{0}
$$
The above equation has solutions (other than $\boldsymbol{x} = \boldsymbol{0}$) if
$$
\det \left( \boldsymbol{A} - \lambda \boldsymbol{I}\right) = 0
$$
The determinant of $\boldsymbol{A} - \lambda \boldsymbol{I}$ is an $n$th degree polynomial in $\lambda$, so finding the eigenvalues of an $n \times n$ matrix requires computing the roots of an $n$th degree polynomial. It is known how to compute roots of polynomials up to and including degree four (e.g., see http://en.wikipedia.org/wiki/Quartic_function). For matrices with $n > 4$, numerical methods must be used to compute eigenvalues and eigenvectors.
Step1: We can compute the eigenvectors and eigenvalues using the NumPy function linalg.eig
Step2: The $i$th column of evectors is the $i$th eigenvector.
Symmetric matrices
Note that the above eigenvalues and the eigenvectors are real valued. This is always the case for symmetric matrices. Another feature of symmetric matrices is that the eigenvectors are orthogonal. We can verify this for the above matrix
Step3: Non-symmetric matrices
In general, the eigenvalues and eigenvectors of a non-symmetric, real-valued matrix are complex. We can see this by example
Step4: Unlike symmetric matrices, the eigenvectors are in general not orthogonal, which we can test | Python Code:
# Import NumPy and seed random number generator to make generated matrices deterministic
import numpy as np
np.random.seed(1)
# Create a symmetric matrix with random entries
A = np.random.rand(5, 5)
A = A + A.T
print(A)
Explanation: Lecture 10 - eigenvalues and eigenvectors
An eigenvector $\boldsymbol{x}$ and corresponding eigenvalue $\lambda$ of a square matrix $\boldsymbol{A}$ satisfy
$$
\boldsymbol{A} \boldsymbol{x} = \lambda \boldsymbol{x}
$$
Rearranging this expression,
$$
\left( \boldsymbol{A} - \lambda \boldsymbol{I}\right) \boldsymbol{x} = \boldsymbol{0}
$$
The above equation has solutions (other than $\boldsymbol{x} = \boldsymbol{0}$) if
$$
\det \left( \boldsymbol{A} - \lambda \boldsymbol{I}\right) = 0
$$
The determinant of $\boldsymbol{A} - \lambda \boldsymbol{I}$ is an $n$th degree polynomial in $\lambda$, so finding the eigenvalues of an $n \times n$ matrix requires computing the roots of an $n$th degree polynomial. It is known how to compute roots of polynomials up to and including degree four (e.g., see http://en.wikipedia.org/wiki/Quartic_function). For matrices with $n > 4$, numerical methods must be used to compute eigenvalues and eigenvectors.
An $n \times n$ matrix will have $n$ eigenvalue/eigenvector pairs (eigenpairs).
Computing eigenvalues with NumPy
NumPy provides a function to compute eigenvalues and eigenvectors. To demonstrate how to compute eigenpairs, we first create a $5 \times 5$ symmetric matrix:
End of explanation
# Compute eigenvectors of A
evalues, evectors = np.linalg.eig(A)
print("Eigenvalues: {}".format(evalues))
print("Eigenvectors: {}".format(evectors))
Explanation: We can compute the eigenvectors and eigenvalues using the NumPy function linalg.eig
End of explanation
import itertools
# Build pairs (0,0), (0,1), . . . (0, n-1), (1, 2), (1, 3), . . .
pairs = itertools.combinations_with_replacement(range(len(evectors)), 2)
# Compute dot product of eigenvectors x_{i} \cdot x_{j}
for p in pairs:
e0, e1 = p[0], p[1]
print ("Dot product of eigenvectors {}, {}: {}".format(e0, e1, evectors[:, e0].dot(evectors[:, e1])))
print("Testing Ax and (lambda)x: \n {}, \n {}".format(A.dot(evectors[:,1]), evalues[1]*evectors[:,1]))
Explanation: The $i$th column of evectors is the $i$th eigenvector.
Symmetric matrices
Note that the above eigenvalues and the eigenvectors are real valued. This is always the case for symmetric matrices. Another feature of symmetric matrices is that the eigenvectors are orthogonal. We can verify this for the above matrix:
We can also check that the second eigenpair is indeed an eigenpair (Python/NumPy use base 0, so the second eigenpair has index 1):
End of explanation
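Because the eigenvectors of a symmetric matrix are orthonormal, $\boldsymbol{A}$ can be rebuilt from its eigendecomposition, $\boldsymbol{A} = \boldsymbol{Q} \boldsymbol{\Lambda} \boldsymbol{Q}^{T}$. A quick numerical check, reusing evalues and evectors from above:
# Reconstruct A from its eigendecomposition
Lam = np.diag(evalues)
print(np.allclose(A, evectors.dot(Lam).dot(evectors.T)))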
B = np.random.rand(5, 5)
evalues, evectors = np.linalg.eig(B)
print("Eigenvalues: {}".format(evalues))
print("Eigenvectors: {}".format(evectors))
Explanation: Non-symmetric matrices
In general, the eigenvalues and eigenvectors of a non-symmetric, real-valued matrix are complex. We can see this by example:
End of explanation
# Compute dot product of eigenvectors x_{i} \cdot x_{j}
pairs = itertools.combinations_with_replacement(range(len(evectors)), 2)
for p in pairs:
e0, e1 = p[0], p[1]
print ("Dot product of eigenvectors {}, {}: {}".format(e0, e1, evectors[:, e0].dot(evectors[:, e1])))
Explanation: Unlike symmetric matrices, the eigenvectors are in general not orthogonal, which we can test:
End of explanation |
8,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 1
Imports
Step3: Word counting
Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic
Step5: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
Step7: Write a function sort_word_counts that returns a list of sorted word counts
Step8: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt
Step9: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research... | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
Explanation: Algorithms Exercise 1
Imports
End of explanation
def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'):
"""Split a string into a list of words, removing punctuation and stop words."""
low = []
t = s.splitlines()
for i in range(len(t)):
u = t[i].split()
for j in range(len(u)):
low.append(u[j]) # Turns multi-lined string into neat list
no_more_punc = []
for k in range(len(low)):
no_more_punc.append(''.join(list(filter(lambda x: not x in punctuation, low[k])))) # Removes punctuation
if type(stop_words)==list:
no_more_stops = list(filter(lambda x: not x in stop_words, no_more_punc)) # Removes stop words
elif type(stop_words)==str:
no_more_stops = list(filter(lambda x: not x in stop_words.split(), no_more_punc)) # for different input types
elif stop_words==None:
no_more_stops = no_more_punc
no_gaps = list(filter(lambda x: not x=='', no_more_stops)) # Removes empty strings
all_low = []
for l in range(len(no_gaps)):
all_low.append(no_gaps[l].lower()) # Removes any capitalization
return all_low
tokenize("This, is the way; that things will end", stop_words=['the', 'is'])
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
['this', 'way', 'that', 'things', 'will', 'end']
wasteland = """
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
"""
assert tokenize(wasteland, stop_words='is the of and') == \
['april','cruellest','month','breeding','lilacs','out','dead','land',
'mixing','memory','desire','stirring','dull','roots','with','spring',
'rain']
Explanation: Word counting
Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:
Split the string into lines using splitlines.
Split each line into a list of words and merge the lists for each line.
Use Python's builtin filter function to remove all punctuation.
If stop_words is a list, remove all occurrences of the words in the list.
If stop_words is a space delimited string of words, split them and remove them.
Remove any remaining empty words.
Make all words lowercase.
End of explanation
p = [1,2,3]
q = ['one','two','three']
pq = []
for i in range(len(p)):
pq.append((p[i],q[i]))
d = dict(pq)
d
def count_words(data):
"""Return a word count dictionary from the list of words in data."""
word = []
count = []
for i in range(len(data)):
if not data[i] in word: # first time this word appears
word.append(data[i])
count.append(1)
elif data[i] in word: # word seen before: find it and bump its count
for j in range(len(word)):
if word[j]==data[i]:
count[j]+=1
wc = []
for j in range(len(word)):
wc.append((word[j],count[j])) # pair each word with its count
wcd = dict(wc)
return wcd
count_words(tokenize('this and the this from and a a a'))
assert count_words(tokenize('this and the this from and a a a')) == \
{'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
Explanation: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
End of explanation
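For comparison, Python's standard library implements the same idea in collections.Counter; a minimal sketch (not part of the exercise solution):
from collections import Counter
counts = Counter(tokenize('this and the this from and a a a'))
print(counts)
print(counts.most_common(2))  # top-2 (word, count) pairs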
wiggity = [(1,'one'),(3,'three'),(2,'two')]
sorted(wiggity, key=lambda x: x[0], reverse=True)
boogity = {1:2,2:3,3:4}
list(iter(boogity)),boogity[1]
def sort_word_counts(wc):
"""Return a list of 2-tuples of (word, count), sorted by count descending."""
word = list(iter(wc))
count = []
for i in word:
count.append(wc[i])
tups = []
for j in range(len(word)):
tups.append((word[j],count[j]))
return sorted(tups, key=lambda x: x[1], reverse=True)
sort_word_counts(count_words(tokenize('this and a the this this and a a a')))
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
[('a', 4), ('this', 3), ('and', 2), ('the', 1)]
Explanation: Write a function sort_word_counts that returns a list of sorted word counts:
Each element of the list should be a (word, count) tuple.
The list should be sorted by the word counts, with the highest counts coming first.
To perform this sort, look at using the sorted function with a custom key and reverse
argument.
End of explanation
with open('mobydick_chapter1.txt', 'r') as f:
read_text = f.read()
f.closed
swc = sort_word_counts(count_words(tokenize(read_text, 'the of and a to in is it that as')))
assert swc[0]==('i',43)
assert len(swc)==848
Explanation: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:
Read the file into a string.
Tokenize with stop words of 'the of and a to in is it that as'.
Perform a word count, then sort and save the result in a variable named swc.
End of explanation
swc
swc[1][1]
x = []
y = []
for i in range(50):
j = 1
while j <= swc[i][1]:
x.append(i)
y.append(j)
j+=1
print(x, y) # Perfect data for normal dot plot... Oops.
plt.scatter(x,y)
x = []
for i in range(50):
x.append(swc[i][1])
y = list(range(1,51))[::-1]
x,y
words = []
for i in range(50):
words.append(swc[i][0])
rev_words = words[::-1]
warray = np.array(rev_words)
warray
f = plt.figure(figsize=(7,9))
plt.scatter(x, y)
plt.xlim(0,45)
plt.ylim(0,51)
plt.yticks(np.arange(1,51), warray)
plt.grid(axis='y')
plt.title('Moby Dick, Ch. 1: Word Count')
plt.xlabel('Count')
plt.ylabel('Words');
assert True # use this for grading the dotplot
Explanation: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
End of explanation |
8,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Illumina Overview Tutorial
Step1: You can use the FileLink and FileLinks features of the IPython notebook to view or download data. FileLinks is used for viewing or downloading directories, while FileLink is used for viewing or downloading single files.
Step2: We'll change to the moving_pictures_tutorial-1.9.0/illumina directory for the remaining steps. We also need to prepare the FileLink and FileLinks functions to work from this new location.
Step3: Check our mapping file for errors
The QIIME mapping file contains all of the per-sample metadata, including technical information such as primers and barcodes that were used for each sample, and information about the samples, including what body site they were taken from. In this data set we're looking at human microbiome samples from four sites on the bodies of two individuals at multiple time points. The metadata in this case therefore includes a subject identifier, a timepoint, and a body site for each sample. You can review the map.tsv file at the link in the previous cell to see an example of the data (or view the published Google Spreadsheet version, which is more nicely formatted).
In this step, we run validate_mapping_file.py to ensure that our mapping file is compatible with QIIME.
Step4: In this case there were no errors, but if there were we would review the resulting HTML summary to find out what errors are present. You could then fix those in a spreadsheet program or text editor and rerun validate_mapping_file.py on the updated mapping file.
For the sake of illustrating what errors in a mapping file might look like, we've created a bad mapping file (map-bad.tsv). We'll next call validate_mapping_file.py on the file map-bad.tsv. Review the resulting HTML report. What are the issues with this mapping file?
Step5: Demultiplexing and quality filtering sequences
We next need to demultiplex and quality filter our sequences (i.e. assigning barcoded reads to the samples they are derived from). In general, you should get separate fastq files for your sequence and barcode reads. Note that we pass these files while still gzipped. split_libraries_fastq.py can handle gzipped or unzipped fastq files. The default strategy in QIIME for quality filtering of Illumina data is described in Bokulich et al (2013).
Step6: We often want to see the results of running a command. Here we can do that by calling our output formatter again, this time passing the output directory from the previous step.
Step7: We can see how many sequences we ended up with using count_seqs.py.
Step8: OTU picking
Step9: The primary output that we get from this command is the OTU table, or the number of times each operational taxonomic unit (OTU) is observed in each sample. QIIME uses the Genomics Standards Consortium Biological Observation Matrix standard (BIOM) format for representing OTU tables. You can find additional information on the BIOM format here, and information on converting these files to tab-separated text that can be viewed in spreadsheet programs here. Several OTU tables are generated by this command. The one we typically want to work with is otus/otu_table_mc2_w_tax_no_pynast_failures.biom. This has singleton OTUs (or OTUs with a total count of 1) removed, as well as OTUs whose representative (i.e., centroid) sequence couldn't be aligned with PyNAST. It also contains taxonomic assignments for each OTU as observation metadata.
The open-reference OTU picking command also produces a phylogenetic tree where the tips are the OTUs. The file containing the tree is otus/rep_set.tre, and is the file that should be used with otus/otu_table_mc2_w_tax_no_pynast_failures.biom in downstream phylogenetic diversity calculations. The tree is stored in the widely used newick format.
To view the output of this command, call FileLink on the index.html file in the output directory.
Step10: To compute some summary statistics of the OTU table we can run the following command.
Step11: The key piece of information you need to pull from this output is the depth of sequencing that should be used in diversity analyses. Many of the analyses that follow require that there are an equal number of sequences in each sample, so you need to review the Counts/sample detail and decide what depth you'd like. Any samples that don't have at least that many sequences will not be included in the analyses, so this is always a trade-off between the number of sequences you throw away and the number of samples you throw away. For some perspective on this, see Kuczynski 2010.
Run diversity analyses
Here we're running the core_diversity_analyses.py script which applies many of the "first-pass" diversity analyses that users are generally interested in. The main output that users will interact with is the index.html file, which provides links into the different analysis results.
Note that in this step we're passing -e which is the sampling depth that should be used for diversity analyses. I chose 1114 here, based on reviewing the above output from biom summarize-table. This value will be study-specific, so don't just use this value on your own data (though it's fine to use that value for this tutorial).
The commands in this section (combined) can take about 15 minutes to complete.
You may see a RuntimeWarning generated by this command. As the warning indicates, it's not something that you should be concerned about in this case. QIIME (and scikit-bio, which implements a lot of QIIME's core functionality) will sometimes provide these types of warnings to help you figure out if your analyses are valid, but you should always be thinking about whether a particular test or analysis is relevant for your data. Just because something can be passed as input to a QIIME script, doesn't necessarily mean that the analysis it performs is appropriate.
Step12: Next open the index.html file in the resulting directory. This will link you into the different results.
Step13: The results above treat all samples independently, but sometimes (for example, in the taxonomic summaries) it's useful to categorize samples by their metadata. We can do this by passing categories (i.e., headers from our mapping file) to core_diversity_analyses.py with the -c parameter. Because core_diversity_analyses.py can take a long time to run, it has a --recover_from_failure option, which can allow it to be rerun from a point where it previously failed in some cases (for example, if you accidentally turned your computer off while it was running). This option can also be used to add categorical analyses if you didn't include them in your initial run. Next we'll rerun core_diversity_analyses.py with two sets of categorical analyses.
Step14: One thing you may notice in the PCoA plots generated by core_diversity_analyses.py is that the samples don't cluster perfectly by SampleType. This is unexpected, based on what we know about the human microbiome. Since this is a time series, let's explore this in a little more detail integrating a time axis into our PCoA plots. We can do this by re-running Emperor directly, replacing our previously generated PCoA plots. (Emperor is a tool for the visualization of PCoA plots with many advanced features that you can explore in the Emperor tutorial. If you use Emperor in your research you should be sure to cite it directly, as with the other tools that QIIME wraps, such as uclust and RDPClassifier.)
After this runs, you can reload the Emperor plots that you accessed from the above cdout/index.html links. Try making the samples taken during AntibioticUsage invisible.
Step15: IMPORTANT | Python Code:
!(wget ftp://ftp.microbio.me/qiime/tutorial_files/moving_pictures_tutorial-1.9.0.tgz || curl -O ftp://ftp.microbio.me/qiime/tutorial_files/moving_pictures_tutorial-1.9.0.tgz)
!tar -xzf moving_pictures_tutorial-1.9.0.tgz
Explanation: Illumina Overview Tutorial: Moving Pictures of the Human Microbiome
This tutorial covers a full QIIME workflow using Illumina sequencing data. This tutorial is intended to be quick to run, and as such, uses only a subset of a full Illumina Genome Analyzer II (GAIIx) run. We'll make use of the Greengenes reference OTUs, which is the default reference database used by QIIME. You can determine which version of Greengenes is being used by running print_qiime_config.py. This will be Greengenes, unless you've configured QIIME to use a different reference database by default.
The data used in this tutorial are derived from the Moving Pictures of the Human Microbiome study, where two human subjects collected daily samples from four body sites: the tongue, the palm of the left hand, the palm of the right hand, and the gut (via fecal samples obtained by swabbing used toilet paper). These data were sequenced using the barcoded amplicon sequencing protocol described in Global patterns of 16S rRNA diversity at a depth of millions of sequences per sample. A more recent version of this protocol that can be used with the Illumina HiSeq 2000 and MiSeq can be found here.
This tutorial is presented as an IPython Notebook. For more information on using QIIME with IPython, see Ragan-Kelley et al. (2013). You can find more information on the IPython Notebook here.
Getting started
We'll begin by downloading the tutorial data.
End of explanation
from IPython.display import FileLinks, FileLink
FileLinks('moving_pictures_tutorial-1.9.0')
FileLink('moving_pictures_tutorial-1.9.0/README.txt')
Explanation: You can use the FileLink and FileLinks features of the IPython notebook to view or download data. FileLinks is used for viewing or downloading directories, while FileLink is used for viewing or downloading single files.
End of explanation
from functools import partial
from os import chdir
chdir('moving_pictures_tutorial-1.9.0/illumina')
FileLink = partial(FileLink, url_prefix='moving_pictures_tutorial-1.9.0/illumina/')
FileLinks = partial(FileLinks, url_prefix='moving_pictures_tutorial-1.9.0/illumina/')
Explanation: We'll change to the moving_pictures_tutorial-1.9.0/illumina directory for the remaining steps. We also need to prepare the FileLink and FileLinks functions to work from this new location.
End of explanation
!validate_mapping_file.py -o vmf-map/ -m map.tsv
Explanation: Check our mapping file for errors
The QIIME mapping file contains all of the per-sample metadata, including technical information such as primers and barcodes that were used for each sample, and information about the samples, including what body site they were taken from. In this data set we're looking at human microbiome samples from four sites on the bodies of two individuals at multiple time points. The metadata in this case therefore includes a subject identifier, a timepoint, and a body site for each sample. You can review the map.tsv file at the link in the previous cell to see an example of the data (or view the published Google Spreadsheet version, which is more nicely formatted).
In this step, we run validate_mapping_file.py to ensure that our mapping file is compatible with QIIME.
End of explanation
!validate_mapping_file.py -o vmf-map-bad/ -m map-bad.tsv
FileLinks('vmf-map-bad/')
Explanation: In this case there were no errors, but if there were we would review the resulting HTML summary to find out what errors are present. You could then fix those in a spreadsheet program or text editor and rerun validate_mapping_file.py on the updated mapping file.
For the sake of illustrating what errors in a mapping file might look like, we've created a bad mapping file (map-bad.tsv). We'll next call validate_mapping_file.py on the file map-bad.tsv. Review the resulting HTML report. What are the issues with this mapping file?
End of explanation
!split_libraries_fastq.py -o slout/ -i forward_reads.fastq.gz -b barcodes.fastq.gz -m map.tsv
Explanation: Demultiplexing and quality filtering sequences
We next need to demultiplex and quality filter our sequences (i.e. assigning barcoded reads to the samples they are derived from). In general, you should get separate fastq files for your sequence and barcode reads. Note that we pass these files while still gzipped. split_libraries_fastq.py can handle gzipped or unzipped fastq files. The default strategy in QIIME for quality filtering of Illumina data is described in Bokulich et al (2013).
End of explanation
FileLinks('slout/')
Explanation: We often want to see the results of running a command. Here we can do that by calling our output formatter again, this time passing the output directory from the previous step.
End of explanation
!count_seqs.py -i slout/seqs.fna
Explanation: We can see how many sequences we ended up with using count_seqs.py.
End of explanation
!pick_open_reference_otus.py -o otus/ -i slout/seqs.fna -p ../uc_fast_params.txt
Explanation: OTU picking: using an open-reference OTU picking protocol by searching reads against the Greengenes database.
Now that we have demultiplexed sequences, we're ready to cluster these sequences into OTUs. There are three high-level ways to do this in QIIME. We can use de novo, closed-reference, or open-reference OTU picking. Open-reference OTU picking is currently our preferred method. Discussion of these methods can be found in Rideout et al. (2014).
Here we apply open-reference OTU picking. Note that this command takes the seqs.fna file that was generated in the previous step. We're also specifying some parameters to the pick_otus.py command, which is internal to this workflow. Specifically, we set enable_rev_strand_match to True, which allows sequences to match the reference database if either their forward or reverse orientation matches to a reference sequence. This parameter is specified in the parameters file which is passed as -p. You can find information on defining parameters files here.
This step can take about 10 minutes to complete.
End of explanation
FileLink('otus/index.html')
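# As the explanation below mentions, the BIOM table can be converted to tab-separated
# text for spreadsheet viewing; a sketch using the biom CLI (biom-format 2.x flags):
!biom convert -i otus/otu_table_mc2_w_tax_no_pynast_failures.biom -o otus/otu_table.tsv --to-tsv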
Explanation: The primary output that we get from this command is the OTU table, or the number of times each operational taxonomic unit (OTU) is observed in each sample. QIIME uses the Genomics Standards Consortium Biological Observation Matrix standard (BIOM) format for representing OTU tables. You can find additional information on the BIOM format here, and information on converting these files to tab-separated text that can be viewed in spreadsheet programs here. Several OTU tables are generated by this command. The one we typically want to work with is otus/otu_table_mc2_w_tax_no_pynast_failures.biom. This has singleton OTUs (or OTUs with a total count of 1) removed, as well as OTUs whose representative (i.e., centroid) sequence couldn't be aligned with PyNAST. It also contains taxonomic assignments for each OTU as observation metadata.
The open-reference OTU picking command also produces a phylogenetic tree where the tips are the OTUs. The file containing the tree is otus/rep_set.tre, and is the file that should be used with otus/otu_table_mc2_w_tax_no_pynast_failures.biom in downstream phylogenetic diversity calculations. The tree is stored in the widely used newick format.
To view the output of this command, call FileLink on the index.html file in the output directory.
End of explanation
!biom summarize-table -i otus/otu_table_mc2_w_tax_no_pynast_failures.biom
Explanation: To compute some summary statistics of the OTU table we can run the following command.
End of explanation
!core_diversity_analyses.py -o cdout/ -i otus/otu_table_mc2_w_tax_no_pynast_failures.biom -m map.tsv -t otus/rep_set.tre -e 1114
Explanation: The key piece of information you need to pull from this output is the depth of sequencing that should be used in diversity analyses. Many of the analyses that follow require that there are an equal number of sequences in each sample, so you need to review the Counts/sample detail and decide what depth you'd like. Any samples that don't have at least that many sequences will not be included in the analyses, so this is always a trade-off between the number of sequences you throw away and the number of samples you throw away. For some perspective on this, see Kuczynski 2010.
Run diversity analyses
Here we're running the core_diversity_analyses.py script which applies many of the "first-pass" diversity analyses that users are generally interested in. The main output that users will interact with is the index.html file, which provides links into the different analysis results.
Note that in this step we're passing -e which is the sampling depth that should be used for diversity analyses. I chose 1114 here, based on reviewing the above output from biom summarize-table. This value will be study-specific, so don't just use this value on your own data (though it's fine to use that value for this tutorial).
The commands in this section (combined) can take about 15 minutes to complete.
You may see a RuntimeWarning generated by this command. As the warning indicates, it's not something that you should be concerned about in this case. QIIME (and scikit-bio, which implements a lot of QIIME's core functionality) will sometimes provide these types of warnings to help you figure out if your analyses are valid, but you should always be thinking about whether a particular test or analysis is relevant for your data. Just because something can be passed as input to a QIIME script, doesn't necessarily mean that the analysis it performs is appropriate.
End of explanation
FileLink('cdout/index.html')
Explanation: Next open the index.html file in the resulting directory. This will link you into the different results.
End of explanation
!core_diversity_analyses.py -o cdout/ --recover_from_failure -c "SampleType,DaysSinceExperimentStart" -i otus/otu_table_mc2_w_tax_no_pynast_failures.biom -m map.tsv -t otus/rep_set.tre -e 1114
FileLink('cdout/index.html')
Explanation: The results above treat all samples independently, but sometimes (for example, in the taxonomic summaries) it's useful to categorize samples by their metadata. We can do this by passing categories (i.e., headers from our mapping file) to core_diversity_analyses.py with the -c parameter. Because core_diversity_analyses.py can take a long time to run, it has a --recover_from_failure option, which can allow it to be rerun from a point where it previously failed in some cases (for example, if you accidentally turned your computer off while it was running). This option can also be used to add categorical analyses if you didn't include them in your initial run. Next we'll rerun core_diversity_analyses.py with two sets of categorical analyses: one for the "SampleType category, and one for the DaysSinceExperimentStart category. Remember the --recover_from_failure option: it can save you a lot of time.
End of explanation
!make_emperor.py -i cdout/bdiv_even1114/weighted_unifrac_pc.txt -o cdout/bdiv_even1114/weighted_unifrac_emperor_pcoa_plot -m map.tsv --custom_axes DaysSinceExperimentStart
!make_emperor.py -i cdout/bdiv_even1114/unweighted_unifrac_pc.txt -o cdout/bdiv_even1114/unweighted_unifrac_emperor_pcoa_plot -m map.tsv --custom_axes DaysSinceExperimentStart
Explanation: One thing you may notice in the PCoA plots generated by core_diversity_analyses.py is that the samples don't cluster perfectly by SampleType. This is unexpected, based on what we know about the human microbiome. Since this is a time series, let's explore this in a little more detail integrating a time axis into our PCoA plots. We can do this by re-running Emperor directly, replacing our previously generated PCoA plots. (Emperor is a tool for the visualization of PCoA plots with many advanced features that you can explore in the Emperor tutorial. If you use Emperor in your research you should be sure to cite it directly, as with the other tools that QIIME wraps, such as uclust and RDPClassifier.)
After this runs, you can reload the Emperor plots that you accessed from the above cdout/index.html links. Try making the samples taken during AntibioticUsage invisible.
End of explanation
FileLinks("precomputed-output/")
Explanation: IMPORTANT: Removing points from a PCoA plot, as is suggested above for data exploration purposes, is not the same as computing PCoA without those points. If, after running this, you'd like to remove the samples taken during AntibioticUsage from the analysis, you can do this with filter_samples_from_otu_table.py, which is discussed here (a sketch of such a command appears in the code cell above). As an exercise, try removing the samples taken during AntibioticUsage from the OTU table and re-running core_diversity_analyses.py. You should output the results to a different directory than the one you created above (e.g., cdout_no_abx).
Next steps
This tutorial illustrated some of the basic features of QIIME, and there are a lot of places to go from here. If you're interested in seeing additional visualizations, you should check out the QIIME overview tutorial. The Procrustes analysis tutorial illustrates a really cool analysis, allowing you to continue with the same data used here, comparing against the samples sequenced on 454 (rather than Illumina, as in this analysis). If you're interested in some possibilities for statistical analyses you can try the supervised learning or distance matrix comparison tutorials, both of which can be adapted to use data generated in this tutorial.
Precomputed results
In case you're having trouble running the steps above, for example because of a broken QIIME installation, all of the output generated above has been precomputed. You can access this by running the cell below.
End of explanation |
8,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow, Mini-Batch/Stochastic Gradient Descent with Momentum
Step1: Input
We generate a degree-4 sample
Step2: Problem
Compute the coefficients that best fit the sample, knowing it is of degree 4
We build the degree-4 coefficient matrix
Step3: Solution 1
Step4: <img src="capturas/gradient_descent.png">
Solution 2
Step5: <img src="capturas/mini_batch_gradient_descent.png">
Solution 3
Step6: <img src="capturas/minibatch_gradient_descent_momentum.png">
Solution 4
Step7: <img src="capturas/stocastic_gradient_descent_momentum_fail.png"> | Python Code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
%matplotlib inline
import sys
import time
from IPython.display import Image
sys.path.append('/home/pedro/git/ElCuadernillo/ElCuadernillo/20160301_TensorFlowGradientDescentWithMomentum')
import gradient_descent_with_momentum as gdt
Explanation: TensorFlow, Mini-Batch/Stochastic Gradient Descent with Momentum
End of explanation
grado=4
tamano=100000
x,y,coeficentes=gdt.generar_muestra(grado,tamano)
print ("Coeficientes: ",coeficentes)
plt.plot(x,y,'.')
Explanation: Input
We generate a degree-4 sample
End of explanation
train_x=gdt.generar_matriz_coeficientes(x,grado) # design matrix A
train_y=np.reshape(y,(y.shape[0],-1)) # reshape y into a column vector
learning_rate_inicial=1e-2
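# gdt.generar_matriz_coeficientes presumably builds a Vandermonde-style design matrix
# with columns x^0 ... x^grado; a minimal numpy sketch of that idea (an assumption,
# not necessarily identical to the helper's output):
train_x_vander = np.vander(x, grado + 1, increasing=True)
print(train_x_vander.shape)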
Explanation: Problem
Compute the coefficients that best fit the sample, knowing it is of degree 4
We build the degree-4 coefficient matrix
End of explanation
pesos_gd,ecm_gd,tiempo_gd=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=1,
learning_rate_inicial=learning_rate_inicial,
momentum=0.0)
Explanation: Solution 1: Using full-batch gradient descent
End of explanation
pesos_mgd,ecm_mgd,tiempo_mgd=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=10000,
learning_rate_inicial=learning_rate_inicial,
momentum=0.0)
Explanation: <img src="capturas/gradient_descent.png">
Solution 2: Using mini-batch gradient descent (num_mini_batch=10000, matching the code)
End of explanation
pesos_mgdm,ecm_mgdm,tiempo_mgdm=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=10000,
learning_rate_inicial=learning_rate_inicial,
momentum=0.9)
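# For reference, the classical momentum update that the gdt helper presumably applies
# at each step (a sketch of the standard rule, not the library's exact code):
def momentum_step(pesos, grad, v, lr, mu):
    v_new = mu * v - lr * grad   # exponentially decaying velocity accumulates gradients
    return pesos + v_new, v_new  # move the weights along the velocity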
Explanation: <img src="capturas/mini_batch_gradient_descent.png">
Solution 3: Using mini-batch gradient descent with momentum (num_mini_batch=10000)
End of explanation
pesos_sgdm,ecm_sgdm,tiempo_sgdm=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=len(train_x),
learning_rate_inicial=learning_rate_inicial,
momentum=0.9)
Explanation: <img src="capturas/minibatch_gradient_descent_momentum.png">
Solution 4: Using stochastic gradient descent with momentum (batch size 1)
End of explanation
pesos_sgdm,ecm_sgdm,tiempo_sgdm=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=len(train_x),
                                                           learning_rate_inicial=1e-3, # we lower the learning rate
momentum=0.9)
Explanation: <img src="capturas/stocastic_gradient_descent_momentum_fail.png">
End of explanation |
8,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of pyesgf.search usage
Prelude
Step1: Warning
Step2: Find how many datasets contain 'humidity' in a given experiment family
Step3: Search using a partial ESGF dataset ID (and get first download URL)
Step4: Find the OpenDAP URL for an aggregated dataset
Step5: Find download URLs for all files in a dataset
Step6: Define a search for datasets that includes a temporal range
Step7: Or do the same thing by searching without temporal constraints and then applying the constraint | Python Code:
from pyesgf.search import SearchConnection
conn = SearchConnection('http://esgf-index1.ceda.ac.uk/esg-search',
distrib=True)
Explanation: Examples of pyesgf.search usage
Prelude:
End of explanation
facets='project,experiment_family'
Explanation: Warning: don't use default search with facets=*.
This behavior is kept for backward-compatibility, but ESGF indexes might not
successfully perform a distributed search when this option is used, so some
results may be missing. For full results, it is recommended to pass a list of
facets of interest when instantiating a context object. For example,
ctx = conn.new_context(facets='project,experiment_id')
Only the facets that you specify will be present in the facets_counts dictionary.
This warning is displayed when a distributed search is performed while using the
facets=* default, a maximum of once per context object. To suppress this warning,
set the environment variable ESGF_PYCLIENT_NO_FACETS_STAR_WARNING to any value
or explicitly use conn.new_context(facets='*')
End of explanation
ctx = conn.new_context(project='CMIP5', query='humidity', facets=facets)
ctx.hit_count
ctx.facet_counts['experiment_family']
Explanation: Find how many datasets contain 'humidity' in a given experiment family:
End of explanation
conn = SearchConnection('http://esgf-index1.ceda.ac.uk/esg-search', distrib=False)
ctx = conn.new_context(facets=facets)
dataset_id_pattern = "cmip5.output1.MOHC.HadGEM2-CC.historical.mon.atmos.Amon.*"
results = ctx.search(query="id:%s" % dataset_id_pattern)
len(results)
files = results[0].file_context().search()
len(files)
download_url = files[0].download_url
print(download_url)
Explanation: Search using a partial ESGF dataset ID (and get first download URL):
End of explanation
conn = SearchConnection('http://esgf-data.dkrz.de/esg-search', distrib=False)
ctx = conn.new_context(project='CMIP5', model='MPI-ESM-LR', experiment='decadal2000', time_frequency='day')
print('Hits: {}, Realms: {}, Ensembles: {}'.format(
ctx.hit_count,
ctx.facet_counts['realm'],
ctx.facet_counts['ensemble']))
ctx = ctx.constrain(realm='atmos', ensemble='r1i1p1')
ctx.hit_count
result = ctx.search()[0]
agg_ctx = result.aggregation_context()
agg = agg_ctx.search()[0]
print(agg.opendap_url)
Explanation: Find the OpenDAP URL for an aggregated dataset:
End of explanation
conn = SearchConnection('http://esgf-data.dkrz.de/esg-search', distrib=False)
ctx = conn.new_context(project='obs4MIPs')
ctx.hit_count
ds = ctx.search()[0]
files = ds.file_context().search()
len(files)
for f in files:
print(f.download_url)
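# To actually fetch one of these files, the standard library suffices; a sketch
# (left commented out to avoid a large download; some nodes also require authentication):
import urllib.request
# urllib.request.urlretrieve(files[0].download_url, 'obs4mips_file.nc')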
Explanation: Find download URLs for all files in a dataset:
End of explanation
conn = SearchConnection('http://esgf-index1.ceda.ac.uk/esg-search', distrib=False)
ctx = conn.new_context(
project="CMIP5", model="HadGEM2-ES",
time_frequency="mon", realm="atmos", ensemble="r1i1p1", latest=True,
from_timestamp="2100-12-30T23:23:59Z", to_timestamp="2200-01-01T00:00:00Z")
ctx.hit_count
Explanation: Define a search for datasets that includes a temporal range:
End of explanation
ctx = conn.new_context(
project="CMIP5", model="HadGEM2-ES",
time_frequency="mon", realm="atmos", ensemble="r1i1p1", latest=True)
ctx.hit_count
ctx = ctx.constrain(from_timestamp = "2100-12-30T23:23:59Z", to_timestamp = "2200-01-01T00:00:00Z")
ctx.hit_count
Explanation: Or do the same thing by searching without temporal constraints and then applying the constraint:
End of explanation |
8,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representational Similarity Analysis
Representational Similarity Analysis is used to perform summary statistics
on supervised classifications where the number of classes is relatively high.
It consists in characterizing the structure of the confusion matrix to infer
the similarity between brain responses and serves as a proxy for characterizing
the space of mental representations
Step1: Let's restrict the number of conditions to speed up computation
Step2: Define stimulus - trigger mapping
Step3: Let's make the event_id dictionary
Step4: Read MEG data
Step5: Epoch data
Step6: Let's plot some conditions
Step7: Representational Similarity Analysis (RSA) is a neuroimaging-specific
appellation for statistics applied to the confusion matrix,
also referred to as the representational dissimilarity matrix (RDM).
Compared to the approach from Cichy et al. we'll use a multiclass
classifier (Multinomial Logistic Regression) while the paper uses
all pairwise binary classification tasks to make the RDM.
Also, we use the ROC-AUC as the performance metric while the
paper uses accuracy. Finally, for the sake of time, we apply
RSA to a window of data while Cichy et al. did it for all time
instants separately.
Step8: Compute confusion matrix using ROC-AUC
Step9: Plot
Step10: Confusion matrices related to mental representations have historically been
summarized with dimensionality reduction using multi-dimensional scaling [1].
See how the face samples cluster together. | Python Code:
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
from pandas import read_csv
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.manifold import MDS
import mne
from mne.io import read_raw_fif, concatenate_raws
from mne.datasets import visual_92_categories
print(__doc__)
data_path = visual_92_categories.data_path()
# Define stimulus - trigger mapping
fname = op.join(data_path, 'visual_stimuli.csv')
conds = read_csv(fname)
print(conds.head(5))
Explanation: Representational Similarity Analysis
Representational Similarity Analysis is used to perform summary statistics
on supervised classifications where the number of classes is relatively high.
It consists in characterizing the structure of the confusion matrix to infer
the similarity between brain responses and serves as a proxy for characterizing
the space of mental representations
:footcite:Shepard1980,LaaksoCottrell2000,KriegeskorteEtAl2008.
In this example, we perform RSA on responses to 24 object images (among
a list of 92 images). Subjects were presented with images of human, animal
and inanimate objects :footcite:CichyEtAl2014. Here we use the 24 unique
images of faces and body parts.
<div class="alert alert-info"><h4>Note</h4><p>this example will download a very large (~6GB) file, so we will not
build the images below.</p></div>
End of explanation
max_trigger = 24
conds = conds[:max_trigger] # take only the first 24 rows
Explanation: Let's restrict the number of conditions to speed up computation
End of explanation
conditions = []
for c in conds.values:
cond_tags = list(c[:2])
cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
for k, i in enumerate(c[2:], 2)]
conditions.append('/'.join(map(str, cond_tags)))
print(conditions[:10])
Explanation: Define stimulus - trigger mapping
End of explanation
event_id = dict(zip(conditions, conds.trigger + 1))
event_id['0/human bodypart/human/not-face/animal/natural']
Explanation: Let's make the event_id dictionary
End of explanation
n_runs = 4 # 4 for full data (use less to speed up computations)
fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
raws = [read_raw_fif(fname % block, verbose='error')
for block in range(n_runs)] # ignore filename warnings
raw = concatenate_raws(raws)
events = mne.find_events(raw, min_duration=.002)
events = events[events[:, 2] <= max_trigger]
Explanation: Read MEG data
End of explanation
picks = mne.pick_types(raw.info, meg=True)
epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
picks=picks, tmin=-.1, tmax=.500, preload=True)
Explanation: Epoch data
End of explanation
epochs['face'].average().plot()
epochs['not-face'].average().plot()
Explanation: Let's plot some conditions
End of explanation
# Classify using the average signal in the window 50ms to 300ms
# to focus the classifier on the time interval with best SNR.
clf = make_pipeline(StandardScaler(),
LogisticRegression(C=1, solver='liblinear',
multi_class='auto'))
X = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)
y = epochs.events[:, 2]
classes = set(y)
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
# Compute confusion matrix for each cross-validation fold
y_pred = np.zeros((len(y), len(classes)))
for train, test in cv.split(X, y):
# Fit
clf.fit(X[train], y[train])
# Probabilistic prediction (necessary for ROC-AUC scoring metric)
y_pred[test] = clf.predict_proba(X[test])
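# Equivalently, scikit-learn can produce the same out-of-fold probabilities in one call
# (a sketch; it should match the manual loop above up to fold ordering):
from sklearn.model_selection import cross_val_predict
# y_pred = cross_val_predict(clf, X, y, cv=cv, method='predict_proba')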
Explanation: Representational Similarity Analysis (RSA) is a neuroimaging-specific
appellation for statistics applied to the confusion matrix,
also referred to as the representational dissimilarity matrix (RDM).
Compared to the approach from Cichy et al. we'll use a multiclass
classifier (Multinomial Logistic Regression) while the paper uses
all pairwise binary classification tasks to make the RDM.
Also, we use the ROC-AUC as the performance metric while the
paper uses accuracy. Finally, for the sake of time, we apply
RSA to a window of data while Cichy et al. did it for all time
instants separately.
End of explanation
confusion = np.zeros((len(classes), len(classes)))
for ii, train_class in enumerate(classes):
for jj in range(ii, len(classes)):
confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
confusion[jj, ii] = confusion[ii, jj]
Explanation: Compute confusion matrix using ROC-AUC
End of explanation
labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
fig, ax = plt.subplots(1)
im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(labels)
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(labels, rotation=40, ha='left')
ax.axhline(11.5, color='k')
ax.axvline(11.5, color='k')
plt.colorbar(im)
plt.tight_layout()
plt.show()
Explanation: Plot
End of explanation
fig, ax = plt.subplots(1)
mds = MDS(2, random_state=0, dissimilarity='precomputed')
chance = 0.5
summary = mds.fit_transform(chance - confusion)
cmap = plt.get_cmap('rainbow')
colors = ['r', 'b']
names = list(conds['condition'].values)
for color, name in zip(colors, set(names)):
sel = np.where([this_name == name for this_name in names])[0]
size = 500 if name == 'human face' else 100
ax.scatter(summary[sel, 0], summary[sel, 1], s=size,
facecolors=color, label=name, edgecolors='k')
ax.axis('off')
ax.legend(loc='lower right', scatterpoints=1, ncol=2)
plt.tight_layout()
plt.show()
Explanation: Confusion matrices related to mental representations have historically been
summarized with dimensionality reduction using multi-dimensional scaling [1].
See how the face samples cluster together.
End of explanation |
8,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lightweight python components
Lightweight python components do not require you to build a new container image for every code change. They're intended for fast iteration in a notebook environment.
Building a lightweight python component
To build a component just define a stand-alone python function and then call kfp.components.func_to_container_op(func) to convert it to a component that can be used in a pipeline.
There are several requirements for the function
Step1: Important
Step2: Simple function that just adds two numbers
Step3: Convert the function to a pipeline operation
Step4: A bit more advanced function which demonstrates how to use imports, helper functions and produce multiple outputs.
Step5: Test running the python function directly
Step6: Convert the function to a pipeline operation
You can specify an alternative base container image (the image needs to have Python 3.5+ installed).
Step7: Define the pipeline
Pipeline function has to be decorated with the @dsl.pipeline decorator
Step8: Compile and run the pipeline into Tekton yaml using kfp-tekton SDK | Python Code:
# Install the dependency packages
!pip install --upgrade pip
!pip install numpy tensorflow kfp-tekton
Explanation: Lightweight python components
Lightweight python components do not require you to build a new container image for every code change. They're intended for fast iteration in a notebook environment.
Building a lightweight python component
To build a component just define a stand-alone python function and then call kfp.components.func_to_container_op(func) to convert it to a component that can be used in a pipeline.
There are several requirements for the function:
The function should be stand-alone. It should not use any code declared outside of the function definition. Any imports should be added inside the main function. Any helper functions should also be defined inside the main function.
The function can only import packages that are available in the base image. If you need to import a package that's not available you can try to find a container image that already includes the required packages. (As a workaround you can use the module subprocess to run pip install for the required package. There is an example below in my_divmod function.)
If the function operates on numbers, the parameters need to have type hints. Supported types are [int, float, bool]. Everything else is passed as string.
To build a component with multiple output values, use the typing.NamedTuple type hint syntax: NamedTuple('MyFunctionOutputs', [('output_name_1', type), ('output_name_2', float)])
End of explanation
import kfp
import kfp.components as comp
Explanation: Important: If you are running this notebook using the Kubeflow Jupyter Server, you need to restart the Python kernel, because the packages above overwrote some of the default packages inside the Kubeflow Jupyter image.
End of explanation
#Define a Python function
def add(a: float, b: float) -> float:
'''Calculates sum of two arguments'''
return a + b
Explanation: Simple function that just adds two numbers:
End of explanation
add_op = comp.func_to_container_op(add)
Explanation: Convert the function to a pipeline operation
End of explanation
#Advanced function
#Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple
def my_divmod(dividend: float, divisor:float) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float), ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
'''Divides two numbers and calculate the quotient and remainder'''
#Pip installs inside a component function.
#NOTE: installs should be placed right at the beginning to avoid upgrading a package
# after it has already been imported and cached by python
import sys, subprocess;
subprocess.run([sys.executable, '-m', 'pip', 'install', 'tensorflow==1.8.0'])
#Imports inside a component function:
import numpy as np
#This function demonstrates how to use nested functions inside a component function:
def divmod_helper(dividend, divisor):
return np.divmod(dividend, divisor)
(quotient, remainder) = divmod_helper(dividend, divisor)
from tensorflow.python.lib.io import file_io
import json
# Exports a sample tensorboard:
metadata = {
'outputs' : [{
'type': 'tensorboard',
'source': 'gs://ml-pipeline-dataset/tensorboard-train',
}]
}
# Exports two sample metrics:
metrics = {
'metrics': [{
'name': 'quotient',
'numberValue': float(quotient),
},{
'name': 'remainder',
'numberValue': float(remainder),
}]}
from collections import namedtuple
divmod_output = namedtuple('MyDivmodOutput', ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
Explanation: A bit more advanced function which demonstrates how to use imports, helper functions and produce multiple outputs.
End of explanation
my_divmod(100, 7)
Explanation: Test running the python function directly
End of explanation
divmod_op = comp.func_to_container_op(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
Explanation: Convert the function to a pipeline operation
You can specify an alternative base container image (the image needs to have Python 3.5+ installed).
End of explanation
import kfp.dsl as dsl
@dsl.pipeline(
name='Calculation pipeline',
description='A toy pipeline that performs arithmetic calculations.'
)
# Currently kfp-tekton doesn't support pass parameter to the pipelinerun yet, so we hard code the number here
def calc_pipeline(
a='7',
b='8',
c='17',
):
#Passing pipeline parameter and a constant value as operation arguments
add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance.
#Passing a task output reference as operation arguments
#For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax
divmod_task = divmod_op(add_task.output, b)
#For an operation with a multiple return values, the output references can be accessed using `task.outputs['output_name']` syntax
result_task = add_op(divmod_task.outputs['quotient'], c)
Explanation: Define the pipeline
Pipeline function has to be decorated with the @dsl.pipeline decorator
End of explanation
# Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
# Specify Kubeflow Pipeline Host
host=None
# Submit a pipeline run using the KFP Tekton client.
from kfp_tekton import TektonClient
TektonClient(host=host).create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
# For Argo users, submit the pipeline run using the below client.
# kfp.Client(host=host).create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
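# The same pipeline can also be compiled to a Tekton YAML file without running it
# (a sketch; TektonCompiler is the documented kfp-tekton compiler entry point):
from kfp_tekton.compiler import TektonCompiler
TektonCompiler().compile(calc_pipeline, 'calc_pipeline.yaml')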
Explanation: Compile and run the pipeline into Tekton yaml using kfp-tekton SDK
End of explanation |
8,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Function h2percentile
Synopse
The h2percentile function computes the percentile given an image histogram.
g = h2percentile(h,p)
Output
g
Step1: Examples
Step2: Numeric Example
Comparison with the NumPy percentile implementation
Step3: Image Example | Python Code:
def h2percentile(h,p):
import numpy as np
s = h.sum()
k = ((s-1) * p/100.)+1
dw = np.floor(k)
up = np.ceil(k)
hc = np.cumsum(h)
if isinstance(p, int):
k1 = np.argmax(hc>=dw)
k2 = np.argmax(hc>=up)
else:
k1 = np.argmax(hc>=dw[:,np.newaxis],axis=1)
k2 = np.argmax(hc>=up[:,np.newaxis],axis=1)
d0 = k1 * (up-k)
d1 = k2 * (k -dw)
return np.where(dw==up,k1,d0+d1)
Explanation: Function h2percentile
Synopse
The h2percentile function computes the percentile given an image histogram.
g = h2percentile(h,p)
Output
g: percentile value(s)
Input
h: 1D ndarray: histogram
p: 1D float ndarray with values in the range [0,100]. Default value = 1.
Description
The h2percentile function computes the percentiles from a given histogram.
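Internally (see the function code below) the fractional rank $k = (s-1)\,p/100 + 1$ is located in the cumulative histogram, where $s$ is the total count, and the result is linearly interpolated between the two nearest gray levels $k_1$ (rank $\lfloor k \rfloor$) and $k_2$ (rank $\lceil k \rceil$):
$$g = k_1\,(\lceil k \rceil - k) + k_2\,(k - \lfloor k \rfloor)$$
This mirrors numpy's default linear interpolation, which the examples below verify.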
Function Code
End of explanation
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python h2percentile.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
Explanation: Examples
End of explanation
if testing:
f = np.array([0,1,2,3,4,5,6,7,8])
h = ia.histogram(f)
print('h2percentile 1 = %f, np.percentile 1 = %f'%(ia.h2percentile(h,1),np.percentile(f,1)))
print('h2percentile 10 = %f, np.percentile 10 = %f'%(ia.h2percentile(h,10),np.percentile(f,10)))
print('h2percentile 50 = %f, np.percentile 50 = %f'%(ia.h2percentile(h,50),np.percentile(f,50)))
print('h2percentile 90 = %f, np.percentile 90 = %f'%(ia.h2percentile(h,90),np.percentile(f,90)))
print('h2percentile 99 = %f, np.percentile 99 = %f'%(ia.h2percentile(h,99),np.percentile(f,99)))
if testing:
f = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
h = ia.histogram(f)
p = [1, 10, 50, 90, 99]
print('percentiles:', p)
print('h2percentile', ia.h2percentile(h,np.array(p)))
print('np.percentile', np.percentile(f,p))
Explanation: Numeric Example
Comparison with the NumPy percentile implementation:
End of explanation
if testing:
import matplotlib.image as mpimg
f = mpimg.imread('../data/cameraman.tif')
h = ia.histogram(f)
p = [1, 10, 50, 90, 99]
print('percentiles:', p)
print('h2percentile', ia.h2percentile(h,np.array(p)))
print('np.percentile', np.percentile(f,p))
print('median', np.median(f))
Explanation: Image Example
End of explanation |
8,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Encoder-Decoders Analysis
Model Architecture
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths, treats each pair as a translation pair, and computes its BLEU score. We expect very low scores on the ground truth; high scores can expose hyper-common generations
Step7: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU: we expect low scores on the ground truth, while hyper-common generations raise the scores
report_files = ['/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb.json','/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json','/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb.json', '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb.json']
log_files = ['/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb_logs.json','/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json','/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb_logs.json', '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb_logs.json']
reports = []
logs = []
import json
import matplotlib.pyplot as plt
import numpy as np
for report_file in report_files:
with open(report_file) as f:
reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for log_file in log_files:
with open(log_file) as f:
logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for report_name, report in reports:
print '\n', report_name, '\n'
print 'Encoder: \n', report['architecture']['encoder']
print 'Decoder: \n', report['architecture']['decoder']
Explanation: Comparing Encoder-Decoders Analysis
Model Architecture
End of explanation
%matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
plt.xticks(index + bar_width / 3, [d[0] for d in data])
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
Explanation: Perplexity on Each Dataset
End of explanation
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Loss vs. Epoch
End of explanation
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Explanation: Perplexity vs. Epoch
End of explanation
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(samples, best_bleu=False):
for enc_input in samples:
data = []
for rname, sample in samples[enc_input]:
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Generated: </b>' + sample['generated']])
if best_bleu:
cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])
data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])
display_table(data)
def process_samples(samples):
# consolidate samples with identical inputs
result = {}
for rname, t_samples, t_cbms in samples:
for i, sample in enumerate(t_samples):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
if t_cbms is not None:
sample.update(t_cbms[i])
if enc_input in result:
result[enc_input].append((rname, sample))
else:
result[enc_input] = [(rname, sample)]
return result
samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])
samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])
samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])
Explanation: Generations
End of explanation
def print_bleu(blue_structs):
data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]
for rname, blue_struct in blue_structs:
data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])
Explanation: BLEU Analysis
End of explanation
# Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])
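# A sketch of how such an n-pairs score can be computed with NLTK
# (illustrative only; the numbers above were precomputed by the training code):
import random
from nltk.translate.bleu_score import sentence_bleu
def n_pairs_bleu(sentences, n=1000, seed=0):
    rng = random.Random(seed)
    scores = []
    for _ in range(n):
        a, b = rng.sample(sentences, 2)  # draw a random pair of sentences
        scores.append(sentence_bleu([a.split()], b.split()))
    return sum(scores) / len(scores)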
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths, treats each pair as a translation pair, and computes its BLEU score. We expect very low scores on the ground truth; high scores can expose hyper-common generations
End of explanation
def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports)
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU: we expect low scores on the ground truth, while hyper-common generations raise the scores
End of explanation |
8,040 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KNN
Some runs using scikit-learn
We will use the library's K-Nearest Neighbors implementation to find the best K for our dataset.
The dataset used was obtained from http://archive.ics.uci.edu/ml/datasets/Haberman's+Survival
Step1: Now let's split the training and test sets
Step2: With our data split, we can create a classifier and train it
Step3: After that we can make predictions
Step4: And evaluate it using score
Step5: Now let's do the same for a knn that weights neighbors by distance
Step6: Or let's change k to 1
Step7: Practice
For k from 1 to 10, check which value of k and which weighting scheme work best
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
import math
data = np.loadtxt("haberman.data",delimiter=",")
print(data)
Explanation: KNN
Some runs using scikit-learn
We will use the library's K-Nearest Neighbors implementation to find the best K for our dataset.
The dataset used was obtained from http://archive.ics.uci.edu/ml/datasets/Haberman's+Survival and contains data on the survival of patients who underwent breast cancer surgery.
End of explanation
ndata = np.random.permutation(data)
size = len(ndata)
nt = int(math.floor(size*0.7))
trfeatures = ndata[0:nt,0:3]
ttfeatures = ndata[nt:size,0:3]
trlabels = ndata[0:nt,3]
ttlabels = ndata[nt:size,3]
Explanation: Now let's split the training and test sets
End of explanation
knn3 = KNeighborsClassifier(n_neighbors=3)
knn3.fit(trfeatures, trlabels)
Explanation: With our data split, we can create a classifier and train it
End of explanation
pred = knn3.predict(ttfeatures)
print(pred)
Explanation: After that we can make predictions:
End of explanation
knn3.score(ttfeatures,ttlabels)
Explanation: And evaluate it using score:
End of explanation
wknn3 = KNeighborsClassifier(n_neighbors=3,weights='distance')
wknn3.fit(trfeatures, trlabels)
wknn3.score(ttfeatures,ttlabels)
Explanation: Now let's do the same for a knn that weights neighbors by distance:
End of explanation
wknn1 = KNeighborsClassifier(n_neighbors=1,weights='uniform')
wknn1.fit(trfeatures, trlabels)
wknn1.score(ttfeatures,ttlabels)
Explanation: Or let's change k to 1:
End of explanation
print("K \t Uniform \t Distance")
for k in range(1,11):
UniformKnnClassifier = KNeighborsClassifier(n_neighbors=k,weights='uniform')
UniformKnnClassifier.fit(trfeatures, trlabels)
uScore = UniformKnnClassifier.score(ttfeatures, ttlabels)
DistanceKnnClassifier = KNeighborsClassifier(n_neighbors=k,weights='distance')
DistanceKnnClassifier.fit(trfeatures, trlabels)
dScore = DistanceKnnClassifier.score(ttfeatures, ttlabels)
    print(k, "\t{:f} \t{:f}".format(uScore, dScore))
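# A single random split makes these scores noisy; cross-validation gives a steadier
# estimate (a sketch reusing the full arrays loaded above):
from sklearn.model_selection import cross_val_score
features, labels = ndata[:, 0:3], ndata[:, 3]
for k in (1, 3, 5):
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), features, labels, cv=5)
    print(k, scores.mean())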
Explanation: Practice
For k from 1 to 10, check which value of k and which weighting scheme work best
End of explanation |
8,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Librosa tutorial
Version: 0.4.3
Step1: Documentation!
Librosa has extensive documentation with examples.
When in doubt, go to http://librosa.github.io/librosa/
Step2: Resampling is easy
Step3: But what's that in seconds?
Step4: Spectral representations
Short-time Fourier transform underlies most analysis.
librosa.stft returns a complex matrix D.
D[f, t] is the FFT value at frequency f, time (frame) t.
Step5: Often, we only care about the magnitude.
D contains both magnitude S and phase $\phi$.
$$
D_{ft} = S_{ft} \exp\left(j \phi_{ft}\right)
$$
Step6: Constant-Q transforms
The CQT gives a logarithmically spaced frequency basis.
This representation is more natural for many analysis tasks.
Step7: Exercise 0
Load a different audio file
Compute its STFT with a different hop length
Step8: librosa.feature
Standard features
Step9: librosa.display
Plotting routines for spectra and waveforms
Note
Step10: Waveform display
Step11: A basic spectrogram display
Step12: Exercise 1
Pick a feature extractor from the librosa.feature submodule and plot the output with librosa.display.specshow
Bonus
Step13: librosa.beat
Beat tracking and tempo estimation
The beat tracker returns the estimated tempo and beat positions (measured in frames)
Step14: Let's sonify it!
Step15: Beats can be used to downsample features
Step16: librosa.segment
Self-similarity / recurrence
Segmentation
Recurrence matrices encode self-similarity
R[i, j] = similarity between frames (i, j)
Librosa computes recurrence between k-nearest neighbors.
Step17: We can include affinity weights for each link as well.
Step18: Exercise 2
Plot a recurrence matrix using different features
Bonus
Step19: librosa.decompose
hpss
Step20: NMF is pretty easy also! | Python Code:
import librosa
print(librosa.__version__)
y, sr = librosa.load(librosa.util.example_audio_file())
print(len(y), sr)
Explanation: Librosa tutorial
Version: 0.4.3
Tutorial home: https://github.com/librosa/tutorial
Librosa home: http://librosa.github.io/
User forum: https://groups.google.com/forum/#!forum/librosa
Environments
We assume that you have already installed Anaconda.
If you don't have an environment, create one by following command:
bash
conda create --name YOURNAME scipy jupyter ipython
(Replace YOURNAME by whatever you want to call the new environment.)
Then, activate the new environment
bash
source activate YOURNAME
Installing librosa
Librosa can then be installed by the following [🔗]:
bash
conda install -c conda-forge librosa
NOTE: Windows need to install audio decoding libraries separately. We recommend ffmpeg.
Test drive
Start Jupyter:
bash
jupyter notebook
and open a new notebook.
Then, run the following:
End of explanation
y_orig, sr_orig = librosa.load(librosa.util.example_audio_file(),
sr=None)
print(len(y_orig), sr_orig)
Explanation: Documentation!
Librosa has extensive documentation with examples.
When in doubt, go to http://librosa.github.io/librosa/
Conventions
All data are basic numpy types
Audio buffers are called y
Sampling rate is called sr
The last axis is time-like:
y[1000] is the 1001st sample
S[:, 100] is the 101st frame of S
Defaults sr=22050, hop_length=512
Roadmap for today
librosa.core
librosa.feature
librosa.display
librosa.beat
librosa.segment
librosa.decompose
librosa.core
Low-level audio processes
Unit conversion
Time-frequency representations
To load a signal at its native sampling rate, use sr=None
End of explanation
sr = 22050
y = librosa.resample(y_orig, sr_orig, sr)
print(len(y), sr)
Explanation: Resampling is easy
End of explanation
print(librosa.samples_to_time(len(y), sr))
Explanation: But what's that in seconds?
End of explanation
D = librosa.stft(y)
print(D.shape, D.dtype)
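# The row index of D maps to frequency; a sketch of recovering the Hz values
# (assuming librosa.stft's default n_fft=2048):
freqs = librosa.fft_frequencies(sr=sr, n_fft=2048)
print(freqs[0], freqs[-1])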
Explanation: Spectral representations
Short-time Fourier transform underlies most analysis.
librosa.stft returns a complex matrix D.
D[f, t] is the FFT value at frequency f, time (frame) t.
End of explanation
import numpy as np
S, phase = librosa.magphase(D)
print(S.dtype, phase.dtype, np.allclose(D, S * phase))
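# Keeping the phase lets us invert the transform; a quick round-trip sketch:
y_hat = librosa.istft(S * phase)
print(len(y), len(y_hat))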
Explanation: Often, we only care about the magnitude.
D contains both magnitude S and phase $\phi$.
$$
D_{ft} = S_{ft} \exp\left(j \phi_{ft}\right)
$$
End of explanation
C = librosa.cqt(y, sr=sr)
print(C.shape, C.dtype)
Explanation: Constant-Q transforms
The CQT gives a logarithmically spaced frequency basis.
This representation is more natural for many analysis tasks.
End of explanation
# Exercise 0 solution
y2, sr2 = librosa.load(librosa.util.example_audio_file(), duration=10.0)  # substitute a file of your own here
D = librosa.stft(y2, hop_length=256)  # any hop length other than the default 512
Explanation: Exercise 0
Load a different audio file
Compute its STFT with a different hop length
End of explanation
melspec = librosa.feature.melspectrogram(y=y, sr=sr)
# Melspec assumes power, not energy as input
melspec_stft = librosa.feature.melspectrogram(S=S**2, sr=sr)
print(np.allclose(melspec, melspec_stft))
Explanation: librosa.feature
Standard features:
librosa.feature.melspectrogram
librosa.feature.mfcc
librosa.feature.chroma
Lots more...
Feature manipulation:
librosa.feature.stack_memory
librosa.feature.delta
Most features work either with audio or STFT input
End of explanation
# Displays are built with matplotlib
import matplotlib.pyplot as plt
# Let's make plots pretty
import matplotlib.style as ms
ms.use('seaborn-muted')
# Render figures interactively in the notebook
%matplotlib nbagg
# IPython gives us an audio widget for playback
from IPython.display import Audio
Explanation: librosa.display
Plotting routines for spectra and waveforms
Note: major overhaul coming in 0.5
End of explanation
plt.figure()
librosa.display.waveplot(y=y, sr=sr)
Explanation: Waveform display
End of explanation
plt.figure()
librosa.display.specshow(melspec, y_axis='mel', x_axis='time')
plt.colorbar()
Explanation: A basic spectrogram display
End of explanation
# Exercise 1 solution
X = librosa.feature.mfcc(y=y, sr=sr)  # one possibility; any librosa.feature extractor works
plt.figure()
librosa.display.specshow(X, x_axis='time')
plt.colorbar()
Explanation: Exercise 1
Pick a feature extractor from the librosa.feature submodule and plot the output with librosa.display.specshow
Bonus: Customize the plot using either specshow arguments or pyplot functions
End of explanation
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
print(tempo)
print(beats)
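# Beat positions are frame indices; a sketch of converting them to seconds
# (assuming the default hop_length=512):
beat_times = librosa.frames_to_time(beats, sr=sr)
print(beat_times[:4])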
Explanation: librosa.beat
Beat tracking and tempo estimation
The beat tracker returns the estimated tempo and beat positions (measured in frames)
End of explanation
clicks = librosa.clicks(frames=beats, sr=sr, length=len(y))
Audio(data=y + clicks, rate=sr)
Explanation: Let's sonify it!
End of explanation
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
chroma_sync = librosa.feature.sync(chroma, beats)
plt.figure(figsize=(6, 3))
plt.subplot(2, 1, 1)
librosa.display.specshow(chroma, y_axis='chroma')
plt.ylabel('Full resolution')
plt.subplot(2, 1, 2)
librosa.display.specshow(chroma_sync, y_axis='chroma')
plt.ylabel('Beat sync')
Explanation: Beats can be used to downsample features
End of explanation
R = librosa.segment.recurrence_matrix(chroma_sync)
plt.figure(figsize=(4, 4))
librosa.display.specshow(R)
Explanation: librosa.segment
Self-similarity / recurrence
Segmentation
Recurrence matrices encode self-similarity
R[i, j] = similarity between frames (i, j)
Librosa computes recurrence between k-nearest neighbors.
End of explanation
R2 = librosa.segment.recurrence_matrix(chroma_sync,
mode='affinity',
sym=True)
plt.figure(figsize=(5, 4))
librosa.display.specshow(R2)
plt.colorbar()
Explanation: We can include affinity weights for each link as well.
End of explanation
# Exercise 2 solution
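# One possible solution (illustrative; any feature/metric pair works):
mfcc = librosa.feature.mfcc(y=y, sr=sr)
mfcc_sync = librosa.feature.sync(mfcc, beats)
R_mfcc = librosa.segment.recurrence_matrix(mfcc_sync, metric='cosine', mode='affinity')
plt.figure(figsize=(4, 4))
librosa.display.specshow(R_mfcc)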
Explanation: Exercise 2
Plot a recurrence matrix using different features
Bonus: Use a custom distance metric
End of explanation
D_harm, D_perc = librosa.decompose.hpss(D)
y_harm = librosa.istft(D_harm)
y_perc = librosa.istft(D_perc)
Audio(data=y_harm, rate=sr)
Audio(data=y_perc, rate=sr)
Explanation: librosa.decompose
hpss: Harmonic-percussive source separation
nn_filter: Nearest-neighbor filtering, non-local means, Repet-SIM
decompose: NMF, PCA and friends
Separating harmonics from percussives is easy
End of explanation
# Fit the model
W, H = librosa.decompose.decompose(S, n_components=16, sort=True)
plt.figure(figsize=(6, 3))
plt.subplot(1, 2, 1), plt.title('W')
librosa.display.specshow(librosa.logamplitude(W**2), y_axis='log')
plt.subplot(1, 2, 2), plt.title('H')
librosa.display.specshow(H, x_axis='time')
# Reconstruct the signal using only the first component
S_rec = W[:, :1].dot(H[:1, :])
y_rec = librosa.istft(S_rec * phase)
Audio(data=y_rec, rate=sr)
Explanation: NMF is pretty easy also!
End of explanation |
8,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Source Panel Method
The panel method was developed in the early 1970s and remains a widely used method in industry for computing the flow around airfoils and arbitrary other bodies. Thanks to its low computational cost, the panel method combines very well with optimization procedures. It usually serves to generate a first airfoil geometry with the desired properties, which is then refined by more accurate CFD computations.
The method is very well suited to computing the pressure distribution on arbitrary bodies. In a first version of the method, the source panel method, only bodies without lift can be computed. The vortex panel method then provides the extension to bodies with lift.
The idea of the method is to first divide the body contour into discrete "panels". Each panel consists of an infinite number of sources (or sinks) placed side by side, which ensure that the streamlines run tangential to the panel.
The Source Panel
We therefore first have to find out what the velocity, potential and stream functions of a source panel look like. The potential function of a single source at $(x_s,y_s)$ is, as shown before
Step1: Time to visualize all of this. We first create an object of the Panel class named panel1, with the corner points $a=(0,-2)$ and $b=(0,2)$ and a volume flow per unit length and depth of $2~\text{m}/\text{s}$.
Then we compute the velocity distribution at all points of the computational grid and plot the streamlines as usual.
Step2: Our source panel already does (almost) what it is supposed to. It has just one remaining blemish
Step3: Next we create the 4 panels
Step4: and then set up the system of equations
Step5: To solve the system of equations we use a solver that ships with Python (or rather NumPy)
Step6: Now that every panel has been assigned the correct strength $\lambda_i$, we can compute the velocity field
Step7: With the source panel method we can compute the pressure distribution on arbitrary bodies in a flow, as long as the flow is steady and inviscid and the body produces no lift.
To describe flows with lift as well, the method still has to be extended. As we saw for the flow around a cylinder, lift can only be simulated in potential theory by superimposing potential vortices. The procedure is quite similar to the one used so far, except that a potential vortex is superimposed on each panel.
The extension to the vortex panel method follows here.
Copyright (c) 2019, Florian Theobald and Matthias Stripf
The following Python code can be ignored. It only loads the correct style template for the Jupyter notebooks. | Python Code:
import math
import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt
%matplotlib inline
class Panel:
# Initialize an object of the Panel class
def __init__(self, ax, ay, bx, by, lamb=0):
# panel strength lambda
self.lamb = lamb
# take the coordinates of the panel's two end points
# as attributes of the object
self.ax = ax
self.ay = ay
self.bx = bx
self.by = by
# compute the coordinates of the panel midpoint
self.mx = 0.5*(ax+bx)
self.my = 0.5*(ay+by)
# compute the length of the panel
self.laenge = math.sqrt((bx-ax)**2+(by-ay)**2)
# angle between the x-axis and the panel normal
if bx-ax <= 0:
self.beta = math.acos((by-ay)/self.laenge)
else:
self.beta = math.pi + math.acos(-(by-ay)/self.laenge)
def x(self, s):
return self.ax + s*(self.bx-self.ax)/self.laenge # intercept theorem: linear interpolation along the panel
def y(self, s):
return self.ay + s*(self.by-self.ay)/self.laenge # intercept theorem: linear interpolation along the panel
def phi(self, x,y):
def integrand(s, x, y, xs, ys):
return np.log(np.sqrt((x-xs(s))**2+(y-ys(s))**2))  # np, not numpy: only 'import numpy as np' exists
def integral(s_min, s_max, x, y, xs, ys):
return integrate.quad(integrand, s_min, s_max,
args=(x,y,self.x,self.y))[0]
vec_integral = np.vectorize(integral)
return (self.lamb / (2 * math.pi)
* vec_integral(0, self.laenge, x, y, self.x, self.y))
def psi(self, x,y):
def integrand(s, x, y, xs, ys):
return np.arctan2(y-ys(s),x-xs(s))  # np, not numpy: only 'import numpy as np' exists
def integral(s_min, s_max, x, y, xs, ys):
return integrate.quad(integrand, s_min, s_max,
args=(x,y,self.x,self.y))[0]
vec_integral = np.vectorize(integral)
return (self.lamb / (2 * math.pi)
* vec_integral(0, self.laenge, x, y, self.x, self.y))
def vel(self, x,y):
def integrand_u(s, x, y, xs, ys):
return (x-xs(s))/((x-xs(s))**2+(y-ys(s))**2)
def integrand_v(s, x, y, xs, ys):
return (y-ys(s))/((x-xs(s))**2+(y-ys(s))**2)
def integral_u(s_min, s_max, x, y, xs, ys):
return integrate.quad(integrand_u, s_min, s_max,
args=(x,y,self.x,self.y))[0]
def integral_v(s_min, s_max, x, y, xs, ys):
return integrate.quad(integrand_v, s_min, s_max,
args=(x,y,self.x,self.y))[0]
vec_integral_u = np.vectorize(integral_u)
vec_integral_v = np.vectorize(integral_v)
u = (self.lamb / (2 * math.pi)
* vec_integral_u(0, self.laenge, x, y, self.x, self.y))
v = (self.lamb / (2 * math.pi)
* vec_integral_v(0, self.laenge, x, y, self.x, self.y))
return u,v
Explanation: The Source Panel Method
The panel method was developed in the early 1970s and remains a widely used method in industry for computing the flow around airfoils and arbitrary other bodies. Thanks to its low computational cost, the panel method combines very well with optimization procedures. It is mostly used to generate a first airfoil geometry with the desired properties, which is then refined later on through more accurate CFD computations.
The method is very well suited to computing the pressure distribution on arbitrary bodies. In a first version of the method, the source panel method, only bodies without lift can be computed. The vortex panel method then provides the extension to computing bodies with lift.
The idea of the method is to first split the body contour into discrete "panels". Each panel consists of an infinite number of sources (or sinks) arranged side by side, which ensure that the streamlines run tangentially to the panel.
The Source Panel
So we first have to find out what the velocity, potential and stream functions of a source panel look like. As shown before, the potential function of a single source at $(x_s,y_s)$ is:
$$\Phi = \frac{Q}{2\pi}\ln\sqrt{(x-x_s)^2+(y-y_s)^2}$$
If we now define the quantity $\lambda$ as the source strength per length along the panel and per unit depth, the potential function of a small segment $\text{d}s$ of the panel becomes:
$$\text{d}\Phi = \frac{\lambda}{2\pi}\ln\sqrt{(x-x(s))^2+(y-y(s))^2} \text{d}s$$
The potential of the entire panel is then obtained by integrating over the length of the panel:
$$\Phi(x,y) = \frac{\lambda}{2\pi}\int_a^b\ln\sqrt{(x-x(s))^2+(y-y(s))^2} \text{d}s$$
The stream function and the velocity components can be determined in a similar way:
$$\Psi(x,y) = \frac{\lambda}{2\pi}\int_a^b\text{arctan}\frac{y-y(s)}{x-x(s)} \text{d}s$$
$$u(x,y) = \frac{\lambda}{2\pi}\int_a^b\frac{x-x(s)}{(x-x(s))^2+(y-y(s))^2} \text{d}s$$
$$v(x,y) = \frac{\lambda}{2\pi}\int_a^b\frac{y-y(s)}{(x-x(s))^2+(y-y(s))^2} \text{d}s$$
For the implementation in Python it makes sense to use an object-oriented approach and to collect the attributes and functions we need to represent a panel in a Panel class.
End of explanation
lamb = 1.0
panel1 = Panel(ax=0,ay=-2,bx=0,by=2,lamb=lamb) # create a new panel
nx = 400 # number of points in the x-direction
ny = 200 # number of points in the y-direction
x = np.linspace(-10, 10, nx, dtype=float) # 1D array of x-coordinates
y = np.linspace(-5, 5, ny, dtype=float) # 1D array of y-coordinates
X, Y = np.meshgrid(x, y) # build the grid of nx * ny points
u,v = panel1.vel(X, Y)
u += 0.5
# set up a new plot
plt.figure(figsize=(10, 5))
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-10,10)
plt.ylim(-5,5)
# draw the streamlines with Matplotlib
plt.streamplot(X, Y, u, v,
density=2, linewidth=1, arrowsize=2, arrowstyle='->')
plt.plot([panel1.ax, panel1.bx],[panel1.ay, panel1.by],
color='red', linewidth=3);
Explanation: Time to visualize all of this. We first create an object of the Panel class named panel1 with the end points $a=(0,-2)$ and $b=(0,2)$ and a volume flow per length and unit depth of $2~\text{m/s}$.
Then we compute the velocity distribution at all points of the computational grid and, as usual, plot the streamlines.
End of explanation
def norm_int(panel_i, panel_j):
# compute the integral appearing in the coefficient matrix
def integrand(s):
delta_x = panel_j.mx - panel_i.x(s)
delta_y = panel_j.my - panel_i.y(s)
return ((delta_x*math.cos(panel_j.beta)
+ delta_y*math.sin(panel_j.beta))
/ (delta_x*delta_x + delta_y*delta_y))
return integrate.quad(integrand, 0.0, panel_i.laenge)[0]
Explanation: Our source panel (almost) does what it is supposed to. It only has one remaining blemish: a volume flow seems to leave the panel against the oncoming flow direction. We want the panel to simulate an impermeable wall towards the oncoming flow, however. (The downstream side will later lie on the inside of the immersed body, so the flow leaving there is of no consequence.)
We therefore have to choose the strength $\lambda$ exactly so that the stagnation point comes to lie right on the panel. Since, without a superimposed free stream, the volume flow per length and unit depth leaving the source panel is the same on both sides, exactly $\lambda/2$ leaves on the side facing the oncoming flow. For the stagnation point to lie on the source panel, this volume flow per length and unit depth must be exactly compensated by the free stream:
$$\frac{\lambda}{2} = u_\infty$$
If we set lamb = 1 in the Python code above, we should therefore get the desired result.
Superposition of several panels to describe an immersed body
To represent the immersed body with panels, we have to make sure that the velocity at the panels is exactly tangential. Put differently, the velocity component normal to the panel, $u_n$, must be zero. For simplicity we only require $u_n$ to be zero at the midpoint of each panel.
Since the normal component of the velocity is just the derivative of the potential function in the normal direction, we have:
$$u_n(x_m,y_m) = \frac{\partial \phi}{\partial n}\left(x_m, y_m\right) \stackrel{!}{=} 0$$
And now it gets a bit more complicated, because the potential function $\phi$ in this equation is not simply that of a single panel, but that of all $N$ linearly superimposed panels plus the superimposed free stream:
$$\phi(x,y) = u_\infty x + v_\infty y + \sum_{i=1}^N \frac{\lambda_i}{2\pi}\int\limits_a^b\ln\sqrt{(x-x_i(s_i))^2+(y-y_i(s_i))^2} \text{d}s_i$$
The derivative of $\phi$ at $(x_{m_j}, y_{m_j})$ in the normal direction is then, for panel $j$:
$$\frac{\partial \phi(x_{m_j},y_{m_j})}{\partial n_j} = u_\infty\frac{\partial x_{m_j}}{\partial n_j} + v_\infty\frac{\partial y_{m_j}}{\partial n_j} + \frac{\lambda_j}{2}$$
$$+ \sum_{i\ne j}^N \frac{\lambda_i}{2\pi}\int\limits_{a_i}^{b_i}
\frac{\left(x_{m_j}-x_i(s_i)\right)\frac{\partial x_{m_j}}{\partial n_j}
+ \left(y_{m_j}-y_i(s_i)\right)\frac{\partial y_{m_j}}{\partial n_j}}
{\left(x_{m_j}-x_i(s_i)\right)^2+\left(y_{m_j}-y_i(s_i)\right)^2} \text{d}s_i $$
where the contribution of the $j$-th panel to itself is treated separately as $\lambda/2$ (cf. above), and the partial derivatives in the normal direction, with the angle $\beta$ between the $x$-axis and the normal (cf. the sketch of the panel above), are
$$\frac{\partial x_{m_j}}{\partial n_j} = \cos \beta_j \qquad \text{and} \qquad \frac{\partial y_{m_j}}{\partial n_j} = \sin \beta_j$$
We now have one boundary-condition equation for each of the $N$ panels, and with them we are looking for the $N$ values of the panel strengths $\lambda_i$. All that remains is to solve a system of $N$ linear algebraic equations, much like we already did with the finite-difference method.
$$
A_{ij}\cdot
\begin{pmatrix}
\lambda_1 \\
\lambda_2 \\
\vdots \\
\lambda_N
\end{pmatrix}
=
\begin{pmatrix}
-u_\infty \cos \beta_1 - v_\infty \sin \beta_1 \\
-u_\infty \cos \beta_2 - v_\infty \sin \beta_2 \\
\vdots \vphantom{\ddots} \\
-u_\infty \cos \beta_N - v_\infty \sin \beta_N
\end{pmatrix}
$$
with the coefficient matrix
$$A_{ij} =
\begin{cases}
\frac{1}{2} & \text{for } i = j \\
\frac{1}{2\pi}\displaystyle\int\limits_{a_i}^{b_i} \frac{\left(x_{m_j}-x_i(s_i)\right)\cos \beta_j + \left(y_{m_j}-y_i(s_i)\right)\sin \beta_j}{\left(x_{m_j}-x_i(s_i)\right)^2+\left(y_{m_j}-y_i(s_i)\right)^2} \text{d}s_i & \text{for } i \ne j
\end{cases}$$
Example
As an example, we simulate the flow around the oval shown at the very top. Let the free-stream velocity be $u_\infty = 1~\text{m/s}$. We approximate the oval with 4 panels, which have the following end points:
Panel 1: ( 0.0, 0.0) -> ( 1.0, 0.5)
Panel 2: ( 1.0, 0.5) -> ( 2.0, 0.0)
Panel 3: ( 2.0, 0.0) -> ( 1.0,-0.5)
Panel 4: ( 1.0,-0.5) -> ( 0.0, 0.0)
Before we can set up the system of equations to be solved, we still have to program a function for the integral in the coefficient matrix. We do this analogously to the approach in the Panel class:
End of explanation
panels = [] # list of panels
#panels.append(Panel(ax=-5.0,ay= 0.0, bx= 0.0, by= 2.0))
#panels.append(Panel(ax= 0.0,ay= 2.0, bx= 5.0, by= 0.0))
#panels.append(Panel(ax= 5.0,ay= 0.0, bx= 0.0, by=-2.0))
#panels.append(Panel(ax= 0.0,ay=-2.0, bx=-5.0, by= 0.0))
panels.append(Panel(bx=-5.0,by= 0.0, ax= 0.0, ay= 2.0))
panels.append(Panel(bx= 0.0,by= 2.0, ax= 15, ay= 0.0))
panels.append(Panel(bx= 15,by= 0.0, ax= 0.0, ay=-2.0))
panels.append(Panel(bx= 0.0,by=-2.0, ax=-5.0, ay= 0.0))
N = len(panels)
Explanation: Next we create the 4 panels:
End of explanation
u_oo = 1.0
v_oo = 0.0
# the coefficient matrix A_ij
A_ij = np.zeros((N,N), dtype=float)
for i, panel_i in enumerate(panels):
for j, panel_j in enumerate(panels):
if i == j:
A_ij[i,j] = 0.5
else:
A_ij[i,j] = norm_int(panel_i, panel_j) / (2 * math.pi)
# and the right-hand side of the system of equations
b_i = np.zeros(N, dtype=float)
for i, panel_i in enumerate(panels):
b_i[i] = -u_oo * math.cos(panel_i.beta) - v_oo * math.sin(panel_i.beta)
Explanation: and then set up the system of equations:
End of explanation
lambda_i = np.linalg.solve(A_ij, b_i)
for i, panel_i in enumerate(panels):
panel_i.lamb = lambda_i[i]
Explanation: To solve the system of equations we use a solver readily available in Python (more precisely, NumPy):
End of explanation
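As a quick plausibility check (a sketch, not part of the original notebook): for a closed body the sources and sinks must cancel overall, so the sum of the panel strengths weighted by the panel lengths should be numerically close to zero:
print(sum(p.lamb * p.laenge for p in panels))  # should be close to 0 for a closed body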
# contribution of the free stream:
u = np.full_like(X, u_oo)
v = np.full_like(Y, v_oo)
# contribution of the individual panels
for i, panel_i in enumerate(panels):
u_i,v_i = panel_i.vel(X, Y)
u += u_i
v += v_i
# set up a new plot
plt.figure(figsize=(10, 5))
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-10,10)
plt.ylim(-5,5)
# draw the streamlines with Matplotlib
plt.streamplot(X, Y, u, v,
density=2, linewidth=1, arrowsize=2, arrowstyle='->');
for i, panel_i in enumerate(panels):
plt.plot([panel_i.ax, panel_i.bx],[panel_i.ay, panel_i.by],
color='red', linewidth=3);
Explanation: Now that every panel has been assigned its correct strength $\lambda_i$, we can compute the velocity field:
End of explanation
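With the panel strengths known, the pressure distribution mentioned in the closing remarks can also be evaluated. A minimal sketch (not in the original notebook) computes $c_p = 1 - V^2/V_\infty^2$ at points slightly offset from each panel midpoint along the outward normal, to stay clear of the singular self-induced integral:
eps = 1e-3  # small offset along the outward normal (assumed value)
for p in panels:
    xm = p.mx + eps * math.cos(p.beta)
    ym = p.my + eps * math.sin(p.beta)
    u_m, v_m = u_oo, v_oo
    for q in panels:
        uq, vq = q.vel(xm, ym)
        u_m += uq
        v_m += vq
    print(1.0 - (u_m**2 + v_m**2) / (u_oo**2 + v_oo**2))  # c_p near this panel's midpoint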
from IPython.core.display import HTML
def css_styling():
styles = open('TFDStyle.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: With the source panel method we can compute the pressure distribution on arbitrary immersed bodies, as long as the flow is steady and inviscid and the body experiences no lift.
To also describe flows with lift, the method has to be extended. As we saw for the flow around a cylinder, lift can only be simulated in potential theory by superimposing potential vortices. The procedure is very similar to the one used so far, except that a potential vortex is additionally superimposed on every panel.
The extension to the vortex panel method follows here.
Copyright (c) 2019, Florian Theobald and Matthias Stripf
The following Python code may be ignored. It only serves to load the correct style template for the Jupyter notebooks.
End of explanation |
8,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Requirements
From this url http://www.5metal.com.hk/ajax/pager/company_fea?view_amount=36&page=1, we get a list of urls for detail pages like http://www.5metal.com.hk/wongwahkee/contact to scrape the companies' info.
Step1: Get all the detail page links to scrape
Since the index page has multiple pages (10 pages for now), we'll need to handle pagination, and note that the page parameter is zero-indexed
Step2: The index page HTML source
Step3: The index page looks like
Step4: Develop scraping function for detail page
Now we work on getting company info from each detail page url. It turns out that each detail page url points to a company website with a nearly standard format (some after url redirection), and all the info we need can be found in the Contact page of the detail page
Note that currently there're 2 designs, so we need two ways for scraping
This is how we handle redirection
Step5: so we create this helper function
Step6: test~
Step7: handle design 1 (e.g. http://www.5metal.com.hk/node/562)
Step8: The section we're interested in
Step9: Combining all the steps above, we can create a function that, given a detail page url, returns the company's info
Step10: test run
Step11: handle design 2 (e.g. http://www.5metal.com.hk/node/13112)
Step12: The section we're interested in
Step13: Scraping function for design 2
Step14: test run
Step15: The main scraping
Now, let's go and scrape them all
Step16: Profit
save our important information | Python Code:
from IPython.core.display import display, HTML
import urllib2
import bs4
import urlparse
import pandas as pd
import numpy as np
Explanation: Requirements
From this url http://www.5metal.com.hk/ajax/pager/company_fea?view_amount=36&page=1,
We get a list of urls for detail pages like http://www.5metal.com.hk/wongwahkee/contact to scrape the companies' info
Info to scrape:
address
company name
company website
contact number
contact person
email
mobile phone
End of explanation
index_page_url = 'http://www.5metal.com.hk/ajax/pager/company?keyword=&page=0&tid=1'
index_page_src = urllib2.urlopen(index_page_url).read()
index_page_soup = bs4.BeautifulSoup(index_page_src, 'html.parser')
Explanation: Get all the detail page links to scrape
Since the index page has multiple pages (10 pages for now), we'll need to handle pagination, and note that the page parameter is zero-indexed
End of explanation
print(index_page_soup.prettify())
Explanation: The index page HTML source:
End of explanation
display(HTML(index_page_src))
number_of_index_pages = int(index_page_soup.select_one('.pager-current').getText().split('of')[1])
print(number_of_index_pages)
detail_pages_links = []
for i in xrange(number_of_index_pages):
current_index_page_url = ('http://www.5metal.com.hk/ajax/pager/company?keyword=&page=%d&tid=1' % i)
print('scraping index page: %s' % current_index_page_url)
try:
current_index_page_src = urllib2.urlopen(current_index_page_url).read()
current_index_page_soup = bs4.BeautifulSoup(current_index_page_src, 'html.parser')
detail_pages_links_in_current_page = [anchor.get('href') for anchor in current_index_page_soup.select('.views-field .company .logo a[href]')]
detail_pages_links_in_current_page = [urlparse.urljoin(current_index_page_url, link) for link in detail_pages_links_in_current_page if link is not None]
detail_pages_links += detail_pages_links_in_current_page
except Exception:
print('there was an exception when scraping index page: %s' % current_index_page_url)
scraped_data = pd.DataFrame(data={'detail_page_url': detail_pages_links})
scraped_data['address'] = None
scraped_data['company_name'] = None
scraped_data['company_website'] = None
scraped_data['contact_number'] = None
scraped_data['contact_person'] = None
scraped_data['fax'] = None
scraped_data['email'] = None
scraped_data['mobile_phone'] = None
print('No. of links got: ' + str(scraped_data.shape[0]))
print(scraped_data.head(10))
Explanation: The index page looks like:
End of explanation
sample_detail_page_url = detail_pages_links[0]
print('sample_detail_page_url:')
print(sample_detail_page_url)
redirected_sample_detail_page_url = urllib2.urlopen(sample_detail_page_url).geturl()
print('redirected_sample_detail_page_url:')
print(redirected_sample_detail_page_url)
sample_contact_page_url = redirected_sample_detail_page_url + '/contact'
print('sample_contact_page_url:')
print(sample_contact_page_url)
Explanation: Develop scraping function for detail page
Now we work on getting company info from each detail page url. It turns out that each detail page url points to a company website with a nearly standard format (some after url redirection), and all the info we need can be found in the Contact page of the detail page
Note that currently there're 2 designs, so we need two ways for scraping
This is how we handle redirection:
End of explanation
def get_contact_page_soup_from_detail_page_url(detail_page_url):
redirected_detail_page_url = urllib2.urlopen(detail_page_url).geturl()
contact_page_url = redirected_detail_page_url + '/contact'
contact_page_src = urllib2.urlopen(contact_page_url).read()
return bs4.BeautifulSoup(contact_page_src, 'html.parser')
Explanation: so we create this helper function:
End of explanation
get_contact_page_soup_from_detail_page_url(detail_pages_links[0])
Explanation: test~
End of explanation
detail_page_url_design1 = 'http://www.5metal.com.hk/node/562'
sample_contact_page_soup_design1 = get_contact_page_soup_from_detail_page_url(detail_page_url_design1)
print(sample_contact_page_soup_design1.prettify())
Explanation: handle design 1 (eg. http://www.5metal.com.hk/node/562):
End of explanation
contact_info_soup = sample_contact_page_soup_design1.select_one('.company_right')
print contact_info_soup.prettify()
fields_and_selectors = [
( 'company_name', '.node_title'),
('address', '.field-name-field-company-address .field-item'),
('contact_person', '.field-name-field-contact-person .field-item'),
('contact_number', '.field-name-field-company-tel .field-item'),
('mobile_phone', '.field-name-field-mobile .field-item'),
('fax', '.field-name-field-company-fax .field-item'),
('email', '.field-name-field-email .field-item'),
('company_website', '.field-name-field-company-url .field-item')
]
values = [(value and value.getText().strip()) for value in [contact_info_soup.select_one(x[1]) for x in fields_and_selectors]]
company_info = pd.DataFrame(data={'values': values}, index=[field for (field, css_selector) in fields_and_selectors])
print company_info
Explanation: The section we're interested in:
End of explanation
def get_company_info_from_soup_design1(contact_page_soup):
contact_info_soup = contact_page_soup.select_one('.company_right')
fields_and_selectors = [
( 'company_name', '.node_title'),
('address', '.field-name-field-company-address .field-item'),
('contact_person', '.field-name-field-contact-person .field-item'),
('contact_number', '.field-name-field-company-tel .field-item'),
('mobile_phone', '.field-name-field-mobile .field-item'),
('fax', '.field-name-field-company-fax .field-item'),
('email', '.field-name-field-email .field-item'),
('company_website', '.field-name-field-company-url .field-item')
]
fields = [field for (field, css_selector) in fields_and_selectors]
values = [(value and value.getText().strip()) for value in [contact_info_soup.select_one(x[1]) for x in fields_and_selectors]]
return {field:value for (field, value) in zip(fields, values)}
Explanation: Combining all the steps above, we can create a function that, given a detail page url, returns the company's info:
End of explanation
test_run_result = get_company_info_from_soup_design1(get_contact_page_soup_from_detail_page_url(detail_page_url_design1))
print(test_run_result)
Explanation: test run :)
End of explanation
detail_page_url_design2 = 'http://www.5metal.com.hk/node/13112'
sample_contact_page_soup_design2 = get_contact_page_soup_from_detail_page_url(detail_page_url_design2)
print(sample_contact_page_soup_design2.prettify())
Explanation: handle design 2 (eg. http://www.5metal.com.hk/node/13112):
End of explanation
contact_info_soup = sample_contact_page_soup_design2.select_one('.cp-contact .cp-box-content')
print contact_info_soup.prettify()
contact_info_soup = sample_contact_page_soup_design2.select_one('.cp-contact .cp-box-content')
fields = ['company_name', 'address', 'contact_person', 'contact_number', 'mobile_phone', 'fax', 'email', 'company_website']
values = [contact_info_soup.select_one('.row.company_name' + ' + .row' * i + ' .val').getText().strip() for i in xrange(len(fields))]
{field:value for (field, value) in zip(fields, values)}
Explanation: The section we're interested in:
End of explanation
def get_company_info_from_soup_design2(contact_page_soup):
contact_info_soup = contact_page_soup.select_one('.cp-contact .cp-box-content')
fields = ['company_name', 'address', 'contact_person', 'contact_number', 'mobile_phone', 'fax', 'email', 'company_website']
values = [contact_info_soup.select_one('.row.company_name' + ' + .row' * i + ' .val').getText().strip() for i in xrange(len(fields))]
return {field:value for (field, value) in zip(fields, values)}
Explanation: Scraping function for design 2
End of explanation
test_run_result = get_company_info_from_soup_design2(get_contact_page_soup_from_detail_page_url(detail_page_url_design2))
print(test_run_result)
Explanation: test run:
End of explanation
final_scraped_data = scraped_data.copy()
n = final_scraped_data.shape[0]
start_index = 0
end_index = n
print('%d detail pages to scrape from' % (end_index - start_index))
for i in xrange(start_index, end_index):
url_to_scrape_from = final_scraped_data.iloc[i]['detail_page_url']
print('[%4d/%4d] scraping url: %s' % (i, n, url_to_scrape_from))
try:
contact_page_soup = get_contact_page_soup_from_detail_page_url(url_to_scrape_from)
is_design1 = (contact_page_soup.select_one('.company_right') is not None)
if is_design1:
company_info = get_company_info_from_soup_design1(contact_page_soup)
else:
company_info = get_company_info_from_soup_design2(contact_page_soup)
for col in final_scraped_data.columns:
if col != 'detail_page_url':  # '!=' compares values; 'is not' only compares identity
final_scraped_data[col].iloc[i] = company_info[col]
except Exception:
print('there was an exception when scraping %dth url: %s' % (i, url_to_scrape_from))
final_scraped_data
Explanation: The main scraping
Now, let's go and scrape them all:
End of explanation
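Scraping a few hundred pages over the network will occasionally fail transiently. A small retry wrapper (a sketch; the retry count and delay are arbitrary assumptions, and it is not wired into the loop above) could make the scraping more robust:
import time
def urlopen_with_retry(url, retries=3, delay=2):
    # try a few times before giving up, sleeping between attempts
    for attempt in xrange(retries):
        try:
            return urllib2.urlopen(url)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay)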
import datetime
timestamp = str(datetime.datetime.now()).split('.')[0]
filename = './scraped_result %s.csv' % timestamp
print('saving data...')
final_scraped_data.to_csv(filename, encoding='utf-8')
print('saved data to %s'%filename)
!cat '$filename'
Explanation: Profit
save our important information :)
End of explanation |
8,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Choropleth Maps
Offline Plotly Usage
Get imports and set everything up to be working offline.
Step1: Now set up everything so that the figures show up in the notebook
Step2: More info on other options for Offline Plotly usage can be found here.
Choropleth US Maps
Plotly's mapping can be a bit hard to get used to at first, remember to reference the cheat sheet in the data visualization folder, or find it online here.
Step3: Now we need to begin to build our data dictionary. Easiest way to do this is to use the dict() function of the general form
Step4: Then we create the layout nested dictionary
Step5: Then we use
Step6: Real Data US Map Choropleth
Now let's show an example with some real data as well as some other options we can add to the dictionaries in data and layout.
Step7: Now our data dictionary with some extra marker and colorbar arguments
Step8: And our layout dictionary with some more arguments
Step9: World Choropleth Map
Now let's see an example with a World Map | Python Code:
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Choropleth Maps
Offline Plotly Usage
Get imports and set everything up to be working offline.
End of explanation
init_notebook_mode(connected=True)
Explanation: Now set up everything so that the figures show up in the notebook:
End of explanation
import pandas as pd
Explanation: More info on other options for Offline Plotly usage can be found here.
Choropleth US Maps
Plotly's mapping can be a bit hard to get used to at first, remember to reference the cheat sheet in the data visualization folder, or find it online here.
End of explanation
data = dict(type = 'choropleth',
locations = ['AZ','CA','NY'],
locationmode = 'USA-states',
colorscale= 'Portland',
text= ['text1','text2','text3'],
z=[1.0,2.0,3.0],
colorbar = {'title':'Colorbar Title'})
Explanation: Now we need to begin to build our data dictionary. Easiest way to do this is to use the dict() function of the general form:
type = 'choropleth',
locations = list of states
locationmode = 'USA-states'
colorscale=
Either a predefined string:
'pairs' | 'Greys' | 'Greens' | 'Bluered' | 'Hot' | 'Picnic' | 'Portland' | 'Jet' | 'RdBu' | 'Blackbody' | 'Earth' | 'Electric' | 'YIOrRd' | 'YIGnBu'
or create a custom colorscale
text= list or array of text to display per point
z= array of values on z axis (color of state)
colorbar = {'title':'Colorbar Title'})
Here is a simple example:
End of explanation
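The notes above mention that a custom colorscale can be supplied instead of a named string; in plotly that takes the form of a list of [fraction, color] pairs. A small sketch (the colors here are arbitrary choices, not from the original):
custom_scale = [[0.0, 'rgb(255,255,255)'], [0.5, 'rgb(255,140,0)'], [1.0, 'rgb(178,34,34)']]
data_custom = dict(type = 'choropleth',
                   locations = ['AZ','CA','NY'],
                   locationmode = 'USA-states',
                   colorscale = custom_scale,
                   z = [1.0,2.0,3.0],
                   colorbar = {'title':'Colorbar Title'})
iplot(go.Figure(data = [data_custom], layout = dict(geo = {'scope':'usa'})))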
layout = dict(geo = {'scope':'usa'})
Explanation: Then we create the layout nested dictionary:
End of explanation
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap)
Explanation: Then we use:
go.Figure(data = [data],layout = layout)
to set up the object that finally gets passed into iplot()
End of explanation
df = pd.read_csv('2011_US_AGRI_Exports')
df.head()
Explanation: Real Data US Map Choropleth
Now let's show an example with some real data as well as some other options we can add to the dictionaries in data and layout.
End of explanation
data = dict(type='choropleth',
colorscale = 'YIOrRd',
locations = df['code'],
z = df['total exports'],
locationmode = 'USA-states',
text = df['text'],
marker = dict(line = dict(color = 'rgb(255,255,255)',width = 2)),
colorbar = {'title':"Millions USD"}
)
Explanation: Now our data dictionary with some extra marker and colorbar arguments:
End of explanation
layout = dict(title = '2011 US Agriculture Exports by State',
geo = dict(scope='usa',
showlakes = True,
lakecolor = 'rgb(85,173,240)')
)
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap)
Explanation: And our layout dictionary with some more arguments:
End of explanation
df = pd.read_csv('2014_World_GDP')
df.head()
data = dict(
type = 'choropleth',
locations = df['CODE'],
z = df['GDP (BILLIONS)'],
text = df['COUNTRY'],
colorbar = {'title' : 'GDP Billions US'},
)
layout = dict(
title = '2014 Global GDP',
geo = dict(
showframe = False,
projection = {'type':'Mercator'}
)
)
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap)
Explanation: World Choropleth Map
Now let's see an example with a World Map:
End of explanation |
8,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with ECoG data
MNE supports working with more than just MEG and EEG data. Here we show some
of the functions that can be used to facilitate working with
electrocorticography (ECoG) data.
Step1: Let's load some ECoG electrode locations and names, and turn them into
a :class:mne.channels.DigMontage class.
Step2: Now that we have our electrode positions in MRI coordinates, we can create
our measurement info structure.
Step3: We can then plot the locations of our electrodes on our subject's brain.
<div class="alert alert-info"><h4>Note</h4><p>These are not real electrodes for this subject, so they
do not align to the cortical surface perfectly.</p></div>
Step4: Sometimes it is useful to make a scatterplot for the current figure view.
This is best accomplished with matplotlib. We can capture an image of the
current mayavi view, along with the xy position of each electrode, with the
snapshot_brain_montage function. | Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Chris Holdgraf <choldgraf@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from mayavi import mlab
import mne
from mne.viz import plot_alignment, snapshot_brain_montage
print(__doc__)
Explanation: Working with ECoG data
MNE supports working with more than just MEG and EEG data. Here we show some
of the functions that can be used to facilitate working with
electrocorticography (ECoG) data.
End of explanation
mat = loadmat(mne.datasets.misc.data_path() + '/ecog/sample_ecog.mat')
ch_names = mat['ch_names'].tolist()
elec = mat['elec'] # electrode positions given in meters
dig_ch_pos = dict(zip(ch_names, elec))
mon = mne.channels.DigMontage(dig_ch_pos=dig_ch_pos)
print('Created %s channel positions' % len(ch_names))
Explanation: Let's load some ECoG electrode locations and names, and turn them into
a :class:mne.channels.DigMontage class.
End of explanation
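As a quick sanity check (a sketch, not in the original example), we can print the first few electrode positions to confirm they look like plausible coordinates in meters:
for name in ch_names[:3]:
    print(name, dig_ch_pos[name])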
info = mne.create_info(ch_names, 1000., 'ecog', montage=mon)
Explanation: Now that we have our electrode positions in MRI coordinates, we can create
our measurement info structure.
End of explanation
subjects_dir = mne.datasets.sample.data_path() + '/subjects'
fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,
surfaces=['pial'])
mlab.view(200, 70)
Explanation: We can then plot the locations of our electrodes on our subject's brain.
<div class="alert alert-info"><h4>Note</h4><p>These are not real electrodes for this subject, so they
do not align to the cortical surface perfectly.</p></div>
End of explanation
# We'll once again plot the surface, then take a snapshot.
fig_scatter = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,
surfaces='pial')
mlab.view(200, 70)
xy, im = snapshot_brain_montage(fig_scatter, mon)
# Convert from a dictionary to array to plot
xy_pts = np.vstack([xy[ch] for ch in info['ch_names']])
# Define an arbitrary "activity" pattern for viz
activity = np.linspace(100, 200, xy_pts.shape[0])
# This allows us to use matplotlib to create arbitrary 2d scatterplots
_, ax = plt.subplots(figsize=(10, 10))
ax.imshow(im)
ax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm')
ax.set_axis_off()
plt.show()
Explanation: Sometimes it is useful to make a scatterplot for the current figure view.
This is best accomplished with matplotlib. We can capture an image of the
current mayavi view, along with the xy position of each electrode, with the
snapshot_brain_montage function.
End of explanation |
8,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Networks in Keras
Step1: The original GAN!
See this paper for details of the approach we'll try first for our first GAN. We'll see if we can generate hand-drawn numbers based on MNIST, so let's load that dataset first.
We'll be referring to the discriminator as 'D' and the generator as 'G'.
Step2: Train
This is just a helper to plot a bunch of generated images.
Step3: Create some random data for the generator.
Step4: Create a batch of some real and some generated data, with appropriate labels, for the discriminator.
Step5: Train a few epochs, and return the losses for D and G. In each epoch we:
Train D on one batch from data_D()
Train G to create images that the discriminator predicts as real.
Step6: MLP GAN
We'll keep things simple by making D & G plain ole' MLPs.
Step7: The loss plots for most GANs are nearly impossible to interpret - which is one of the things that make them hard to train.
Step8: This is what's known in the literature as "mode collapse".
Step9: OK, so that didn't work. Can we do better?...
DCGAN
There's lots of ideas out there to make GANs train better, since they are notoriously painful to get working. The paper introducing DCGANs is the main basis for our next section. And see https://github.com/soumith/ganhacks for many tips!
Step10: Our generator uses a number of upsampling steps as suggested in the above papers. We use nearest neighbor upsampling rather than fractionally strided convolutions, as discussed in our style transfer notebook.
Step11: The discriminator uses a few downsampling steps through strided convolutions.
Step12: We train D a "little bit" so it can at least tell a real image from random noise.
Step13: Now we can train D & G iteratively.
Step14: Better than our first effort, but still a lot to be desired | Python Code:
%matplotlib inline
import importlib
import utils2; importlib.reload(utils2)
from utils2 import *
from tqdm import tqdm
Explanation: Generative Adversarial Networks in Keras
End of explanation
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape
n = len(X_train)
X_train = X_train.reshape(n, -1).astype(np.float32)
X_test = X_test.reshape(len(X_test), -1).astype(np.float32)
X_train /= 255.; X_test /= 255.
Explanation: The original GAN!
See this paper for details of the approach we'll try first for our first GAN. We'll see if we can generate hand-drawn numbers based on MNIST, so let's load that dataset first.
We'll be referring to the discriminator as 'D' and the generator as 'G'.
End of explanation
def plot_gen(G, n_ex=16):
plot_multi(G.predict(noise(n_ex)).reshape(n_ex, 28,28), cmap='gray')
Explanation: Train
This is just a helper to plot a bunch of generated images.
End of explanation
def noise(bs): return np.random.rand(bs,100)
Explanation: Create some random data for the generator.
End of explanation
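One tip from the ganhacks list referenced in the DCGAN section below is to sample z from a Gaussian rather than a uniform distribution; a drop-in alternative would be the following sketch (the notebook itself sticks with uniform noise):
def gaussian_noise(bs): return np.random.normal(0., 1., size=(bs, 100))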
def data_D(sz, G):
real_img = X_train[np.random.randint(0,n,size=sz)]
X = np.concatenate((real_img, G.predict(noise(sz))))
return X, [0]*sz + [1]*sz
def make_trainable(net, val):
net.trainable = val
for l in net.layers: l.trainable = val
Explanation: Create a batch of some real and some generated data, with appropriate labels, for the discriminator.
End of explanation
def train(D, G, m, nb_epoch=5000, bs=128):
dl,gl=[],[]
for e in tqdm(range(nb_epoch)):
X,y = data_D(bs//2, G)
dl.append(D.train_on_batch(X,y))
make_trainable(D, False)
gl.append(m.train_on_batch(noise(bs), np.zeros([bs])))
make_trainable(D, True)
return dl,gl
Explanation: Train a few epochs, and return the losses for D and G. In each epoch we:
Train D on one batch from data_D()
Train G to create images that the discriminator predicts as real.
End of explanation
MLP_G = Sequential([
Dense(200, input_shape=(100,), activation='relu'),
Dense(400, activation='relu'),
Dense(784, activation='sigmoid'),
])
MLP_D = Sequential([
Dense(300, input_shape=(784,), activation='relu'),
Dense(300, activation='relu'),
Dense(1, activation='sigmoid'),
])
MLP_D.compile(Adam(1e-4), "binary_crossentropy")
MLP_m = Sequential([MLP_G,MLP_D])
MLP_m.compile(Adam(1e-4), "binary_crossentropy")
dl,gl = train(MLP_D, MLP_G, MLP_m, 4000)
Explanation: MLP GAN
We'll keep things simple by making D & G plain ole' MLPs.
End of explanation
plt.plot(dl[100:])
plt.plot(gl[100:])
Explanation: The loss plots for most GANs are nearly impossible to interpret - which is one of the things that make them hard to train.
End of explanation
plot_gen(MLP_G)
Explanation: This is what's known in the literature as "mode collapse".
End of explanation
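A common mitigation, again from the ganhacks tips, is one-sided label smoothing; the sketch below uses 0.9 as an assumed smoothing value. With this notebook's label convention (real=0, generated=1), the smoothed target goes on the generated half of the batch:
def data_D_smooth(sz, G):
    real_img = X_train[np.random.randint(0, n, size=sz)]
    X = np.concatenate((real_img, G.predict(noise(sz))))
    return X, [0.]*sz + [0.9]*sz  # soft targets for the generated samples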
X_train = X_train.reshape(n, 28, 28, 1)
X_test = X_test.reshape(len(X_test), 28, 28, 1)
Explanation: OK, so that didn't work. Can we do better?...
DCGAN
There's lots of ideas out there to make GANs train better, since they are notoriously painful to get working. The paper introducing DCGANs is the main basis for our next section. And see https://github.com/soumith/ganhacks for many tips!
Because we're using a CNN from now on, we'll reshape our digits into proper images.
End of explanation
CNN_G = Sequential([
Dense(512*7*7, input_dim=100, activation=LeakyReLU()),
BatchNormalization(mode=2),
Reshape((7, 7, 512)),
UpSampling2D(),
Convolution2D(64, 3, 3, border_mode='same', activation=LeakyReLU()),
BatchNormalization(mode=2),
UpSampling2D(),
Convolution2D(32, 3, 3, border_mode='same', activation=LeakyReLU()),
BatchNormalization(mode=2),
Convolution2D(1, 1, 1, border_mode='same', activation='sigmoid')
])
Explanation: Our generator uses a number of upsampling steps as suggested in the above papers. We use nearest neighbor upsampling rather than fractionally strided convolutions, as discussed in our style transfer notebook.
End of explanation
CNN_D = Sequential([
Convolution2D(256, 5, 5, subsample=(2,2), border_mode='same',
input_shape=(28, 28, 1), activation=LeakyReLU()),
Convolution2D(512, 5, 5, subsample=(2,2), border_mode='same', activation=LeakyReLU()),
Flatten(),
Dense(256, activation=LeakyReLU()),
Dense(1, activation = 'sigmoid')
])
CNN_D.compile(Adam(1e-3), "binary_crossentropy")
Explanation: The discriminator uses a few downsampling steps through strided convolutions.
End of explanation
sz = n//200
x1 = np.concatenate([np.random.permutation(X_train)[:sz], CNN_G.predict(noise(sz))])
CNN_D.fit(x1, [0]*sz + [1]*sz, batch_size=128, nb_epoch=1, verbose=2)
CNN_m = Sequential([CNN_G, CNN_D])
CNN_m.compile(Adam(1e-4), "binary_crossentropy")
K.set_value(CNN_D.optimizer.lr, 1e-3)
K.set_value(CNN_m.optimizer.lr, 1e-3)
Explanation: We train D a "little bit" so it can at least tell a real image from random noise.
End of explanation
dl,gl = train(CNN_D, CNN_G, CNN_m, 250)
plt.plot(dl[10:])
plt.plot(gl[10:])
Explanation: Now we can train D & G iteratively.
End of explanation
plot_gen(CNN_G)
Explanation: Better than our first effort, but still a lot to be desired...
End of explanation |
8,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
__getitem__ AND __len__ methods
Below we look at an example that builds a deck of playing cards and operates on it
Step1: Iteration
Step2: The in operator
Iteration is usually implicit. If a collection has no __contains__ method, the in operator falls back to a sequential iterative search, so in can be used on the FrenchDeck class because it is iterable
Step3: Sorting
Playing cards are usually ranked by number (with A highest), and the suits are ordered spades (highest), hearts, diamonds, clubs (lowest); let's implement this ranking | Python Code:
import collections
Card = collections.namedtuple('Card', ['rank', 'suit']) # 'Card' is the namedtuple's name; the list gives its fields
class FrenchDeck:
ranks = [str(n) for n in range(2, 11)] + list('JQKA')
suits = 'spades diamonds clubs hearts'.split() # spades, diamonds, clubs, hearts
def __init__(self):
self._cards = [Card(rank, suit) for suit in self.suits
for rank in self.ranks]
def __len__(self):
return len(self._cards)
def __getitem__(self, position): # delegates the [] operator to self._cards
return self._cards[position]
beer_card = Card('7', 'diamonds')
beer_card
deck = FrenchDeck()
len(deck) # implicitly calls __len__()
deck[0] # calls __getitem__()
deck[-1]
from random import choice
choice(deck)
deck[:3] # because __getitem__() hands [] off to the self._cards list, our deck automatically supports slicing
deck[12::13]
Explanation: __getitem__ AND __len__ methods
Below we look at an example that builds a deck of playing cards and operates on it
End of explanation
# once we have written the __getitem__() method, the class becomes iterable
for card in deck[:10]:
print(card)
# we can also iterate in reverse
for card in reversed(deck):
print(card)
Explanation: Iteration
End of explanation
Card('Q', 'hearts') in deck
Card('Q', 'beasts') in deck
Explanation: The in operator
Iteration is usually implicit. If a collection has no __contains__ method, the in operator falls back to a sequential iterative search, so in can be used on the FrenchDeck class because it is iterable
End of explanation
suit_values = dict(spades = 3, hearts = 2, diamonds = 1, clubs = 0)
suit_values
def spades_high(card):
rank_value = FrenchDeck.ranks.index(card.rank)
return rank_value * len(suit_values) + suit_values[card.suit]
for card in sorted(deck, key=spades_high):
print(card)
Explanation: Sorting
Playing cards are usually ranked by number (with A highest), and the suits are ordered spades (highest), hearts, diamonds, clubs (lowest); let's implement this ranking
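As a follow-up sketch (an illustration, not in the original): random.shuffle cannot shuffle our deck because FrenchDeck lacks __setitem__; monkey patching one in makes shuffling work:
from random import shuffle
def set_card(deck, position, card):
    deck._cards[position] = card
FrenchDeck.__setitem__ = set_card  # now item assignment, and hence shuffle, works
shuffle(deck)
deck[:5]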
End of explanation |
8,048 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring Ensemble Methods
In this assignment, we will explore the use of boosting. We will use the pre-implemented gradient boosted trees in GraphLab Create. You will:
Use SFrames to do some feature engineering.
Train a boosted ensemble of decision-trees (gradient boosted trees) on the LendingClub dataset.
Predict whether a loan will default along with prediction probabilities (on a validation set).
Evaluate the trained model and compare it with a baseline.
Find the most positive and negative loans using the learned model.
Explore how the number of trees influences classification performance.
Step1: Load LendingClub dataset
We will be using the LendingClub data. As discussed earlier, the LendingClub is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors.
Just like we did in previous assignments, we will build a classification model to predict whether or not a loan provided by lending club is likely to default.
Let us start by loading the data.
Step2: Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset. We have done this in previous assignments, so we won't belabor this here.
Step3: Modifying the target column
The target column (label column) of the dataset that we are interested in is called bad_loans. In this column 1 means a risky (bad) loan and 0 means a safe loan.
As in past assignments, in order to make this more intuitive and consistent with the lectures, we reassign the target to be
Step4: Selecting features
In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are described in the code comments below. If you are a finance geek, the LendingClub website has a lot more details about these features.
The features we will be using are described in the code comments below
Step5: Skipping observations with missing values
Recall from the lectures that one common approach to coping with missing values is to skip observations that contain missing values.
We run the following code to do so
Step6: Fortunately, there are not too many missing values. We are retaining most of the data.
Make sure the classes are balanced
We saw in an earlier assignment that this dataset is also imbalanced. We will undersample the larger class (safe loans) in order to balance out our dataset. We used seed=1 to make sure everyone gets the same results.
Step7: Checkpoint
Step8: Gradient boosted tree classifier
Gradient boosted trees are a powerful variant of boosting methods; they have been used to win many Kaggle competitions, and have been widely used in industry. We will explore the predictive power of multiple decision trees as opposed to a single decision tree.
Additional reading
Step9: Making predictions
Just like we did in previous sections, let us consider a few positive and negative examples from the validation set. We will do the following
Step10: Predicting on sample validation data
For each row in the sample_validation_data, write code to make model_5 predict whether or not the loan is classified as a safe loan.
Hint
Step11: Quiz question
Step12: Quiz Question
Step13: Calculate the number of false positives made by the model.
Step14: Quiz question
Step15: Comparison with decision trees
In the earlier assignment, we saw that the prediction accuracy of the decision trees was around 0.64 (rounded). In this assignment, we saw that model_5 has an accuracy of 0.67 (rounded).
Here, we quantify the benefit of the extra 3% increase in accuracy of model_5 in comparison with a single decision tree from the original decision tree assignment.
As we explored in the earlier assignment, we calculated the cost of the mistakes made by the model. We again consider the same costs as follows
Step16: Reminder
Step17: Checkpoint
Step18: Now, we are ready to go to Step 3. You can now use the prediction column to sort the loans in validation_data (in descending order) by prediction probability. Find the top 5 loans with the highest probability of being predicted as a safe loan.
Step19: Quiz question
Step20: Checkpoint
Step21: Now, train 4 models with max_iterations to be
Step22: Compare accuracy on entire validation set
Now we will compare the predictive accuracy of our models on the validation set. Evaluate the accuracy of the 10, 50, 100, 200, and 500 tree models on the validation_data. Use the .evaluate method.
Step23: Quiz Question
Step24: Plot the training and validation error vs. number of trees
Recall from the lecture that the classification error is defined as
$$
\mbox{classification error} = 1 - \mbox{accuracy}
$$
In this section, we will plot the training and validation errors versus the number of trees to get a sense of how these models are performing. We will compare the 10, 50, 100, 200, and 500 tree models. You will need matplotlib in order to visualize the plots.
First, make sure this block of code runs on your computer.
Step25: In order to plot the classification errors (on the train_data and validation_data) versus the number of trees, we will need lists of these accuracies, which we get by applying the method .evaluate.
Steps to follow
Step26: Now, let us run Step 2. Save the training errors into a list called training_errors
Step27: Now, onto Step 3. Write code to compute the classification error on the validation_data for models model_10, model_50, model_100, model_200, and model_500.
Step28: Now, let us run Step 4. Save the validation errors into a list called validation_errors
Step29: Now, we will plot the training_errors and validation_errors versus the number of trees. We will compare the 10, 50, 100, 200, and 500 tree models. We provide some plotting code to visualize the plots within this notebook.
Run the following code to visualize the plots. | Python Code:
import graphlab
Explanation: Exploring Ensemble Methods
In this assignment, we will explore the use of boosting. We will use the pre-implemented gradient boosted trees in GraphLab Create. You will:
Use SFrames to do some feature engineering.
Train a boosted ensemble of decision-trees (gradient boosted trees) on the LendingClub dataset.
Predict whether a loan will default along with prediction probabilities (on a validation set).
Evaluate the trained model and compare it with a baseline.
Find the most positive and negative loans using the learned model.
Explore how the number of trees influences classification performance.
Let's get started!
Fire up Graphlab Create
End of explanation
loans = graphlab.SFrame('lending-club-data.gl/')
Explanation: Load LendingClub dataset
We will be using the LendingClub data. As discussed earlier, the LendingClub is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors.
Just like we did in previous assignments, we will build a classification model to predict whether or not a loan provided by lending club is likely to default.
Let us start by loading the data.
End of explanation
loans.column_names()
Explanation: Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset. We have done this in previous assignments, so we won't belabor this here.
End of explanation
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
Explanation: Modifying the target column
The target column (label column) of the dataset that we are interested in is called bad_loans. In this column 1 means a risky (bad) loan and 0 means a safe loan.
As in past assignments, in order to make this more intuitive and consistent with the lectures, we reassign the target to be:
* +1 as a safe loan,
* -1 as a risky (bad) loan.
We put this in a new column called safe_loans.
End of explanation
target = 'safe_loans'
features = ['grade', # grade of the loan (categorical)
'sub_grade_num', # sub-grade of the loan as a number from 0 to 1
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'payment_inc_ratio', # ratio of the monthly payment to income
'delinq_2yrs', # number of delinquincies
'delinq_2yrs_zero', # no delinquincies in last 2 years
'inq_last_6mths', # number of creditor inquiries in last 6 months
'last_delinq_none', # has borrower had a delinquincy
'last_major_derog_none', # has borrower had 90 day or worse rating
'open_acc', # number of open credit accounts
'pub_rec', # number of derogatory public records
'pub_rec_zero', # no derogatory public records
'revol_util', # percent of available credit being used
'total_rec_late_fee', # total late fees received to day
'int_rate', # interest rate of the loan
'total_rec_int', # interest received to date
'annual_inc', # annual income of borrower
'funded_amnt', # amount committed to the loan
'funded_amnt_inv', # amount committed by investors for the loan
'installment', # monthly payment owed by the borrower
]
Explanation: Selecting features
In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are described in the code comments below. If you are a finance geek, the LendingClub website has a lot more details about these features.
The features we will be using are described in the code comments below:
End of explanation
loans, loans_with_na = loans[[target] + features].dropna_split()
# Count the number of rows with missing data
num_rows_with_na = loans_with_na.num_rows()
num_rows = loans.num_rows()
print 'Dropping %s observations; keeping %s ' % (num_rows_with_na, num_rows)
Explanation: Skipping observations with missing values
Recall from the lectures that one common approach to coping with missing values is to skip observations that contain missing values.
We run the following code to do so:
End of explanation
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
Explanation: Fortunately, there are not too many missing values. We are retaining most of the data.
Make sure the classes are balanced
We saw in an earlier assignment that this dataset is also imbalanced. We will undersample the larger class (safe loans) in order to balance out our dataset. We used seed=1 to make sure everyone gets the same results.
End of explanation
train_data, validation_data = loans_data.random_split(.8, seed=1)
Explanation: Checkpoint: You should now see that the dataset is balanced (approximately 50-50 safe vs risky loans).
Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Split data into training and validation sets
We split the data into training data and validation data. We used seed=1 to make sure everyone gets the same results. We will use the validation data to help us select model parameters.
End of explanation
model_5 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 5)
Explanation: Gradient boosted tree classifier
Gradient boosted trees are a powerful variant of boosting methods; they have been used to win many Kaggle competitions, and have been widely used in industry. We will explore the predictive power of multiple decision trees as opposed to a single decision tree.
Additional reading: If you are interested in gradient boosted trees, here is some additional reading material:
* GraphLab Create user guide
* Advanced material on boosted trees
We will now train models to predict safe_loans using the features above. In this section, we will experiment with training an ensemble of 5 trees. To cap the ensemble classifier at 5 trees, we call the function with max_iterations=5 (recall that each iteration corresponds to adding a tree). We set validation_set=None to make sure everyone gets the same results.
End of explanation
# Select all positive and negative examples.
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
# Select 2 examples from the validation set for positive & negative loans
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
# Append the 4 examples into a single dataset
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
Explanation: Making predictions
Just like we did in previous sections, let us consider a few positive and negative examples from the validation set. We will do the following:
* Predict whether or not a loan is likely to default.
* Predict the probability with which the loan is likely to default.
End of explanation
model_5.predict(dataset=sample_validation_data)
Explanation: Predicting on sample validation data
For each row in the sample_validation_data, write code to make model_5 predict whether or not the loan is classified as a safe loan.
Hint: Use the predict method in model_5 for this.
End of explanation
model_5.predict(dataset=sample_validation_data, output_type='probability')
Explanation: Quiz question: What percentage of the predictions on sample_validation_data did model_5 get correct?
Prediction probabilities
For each row in the sample_validation_data, what is the probability (according to model_5) of a loan being classified as safe?
Hint: Set output_type='probability' to make probability predictions using model_5 on sample_validation_data:
End of explanation
e = model_5.evaluate(validation_data)
e
Explanation: Quiz Question: According to model_5, which loan is the least likely to be a safe loan?
Checkpoint: Can you verify that for all the predictions with probability >= 0.5, the model predicted the label +1?
Evaluating the model on the validation data
Recall that the accuracy is defined as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
Evaluate the accuracy of the model_5 on the validation_data.
Hint: Use the .evaluate() method in the model.
End of explanation
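As a quick cross-check (a sketch, not part of the original assignment), the same accuracy can be computed by hand from the hard class predictions:
predictions = model_5.predict(validation_data)
correct = (predictions == validation_data[target]).sum()
print "Manual accuracy : %s" % (correct / float(len(validation_data)))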
confusion_matrix = e['confusion_matrix']
confusion_matrix[(confusion_matrix['target_label']==-1) & (confusion_matrix['predicted_label']==1)]
Explanation: Calculate the number of false positives made by the model.
End of explanation
confusion_matrix[(confusion_matrix['target_label']==1) & (confusion_matrix['predicted_label']==-1)]
Explanation: Quiz question: What is the number of false positives on the validation_data?
Calculate the number of false negatives made by the model.
End of explanation
cost = 10000 * 1463 + 20000 * 1618
cost
Explanation: Comparison with decision trees
In the earlier assignment, we saw that the prediction accuracy of the decision trees was around 0.64 (rounded). In this assignment, we saw that model_5 has an accuracy of 0.67 (rounded).
Here, we quantify the benefit of the extra 3% increase in accuracy of model_5 in comparison with a single decision tree from the original decision tree assignment.
As we explored in the earlier assignment, we calculated the cost of the mistakes made by the model. We again consider the same costs as follows:
False negatives: Assume a cost of \$10,000 per false negative.
False positives: Assume a cost of \$20,000 per false positive.
Assume that the number of false positives and false negatives for the learned decision tree was
False negatives: 1936
False positives: 1503
Using the costs defined above and the number of false positives and false negatives for the decision tree, we can calculate the total cost of the mistakes made by the decision tree model as follows:
cost = $10,000 * 1936 + $20,000 * 1503 = $49,420,000
The total cost of the mistakes of the model is $49.42M. That is a lot of money!
Quiz Question: Using the same costs of the false positives and false negatives, what is the cost of the mistakes made by the boosted tree model (model_5) as evaluated on the validation_set?
End of explanation
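To avoid hard-coding the counts, the same cost can be derived from the confusion matrix computed earlier. A minimal sketch, assuming the confusion-matrix SFrame exposes a 'count' column (an assumption about GraphLab's layout, not confirmed above):
# Hedged sketch: price the mistakes using the confusion-matrix counts.
fp_count = confusion_matrix[(confusion_matrix['target_label'] == -1) &
                            (confusion_matrix['predicted_label'] == 1)]['count'][0]
fn_count = confusion_matrix[(confusion_matrix['target_label'] == 1) &
                            (confusion_matrix['predicted_label'] == -1)]['count'][0]
cost_model_5 = 10000 * fn_count + 20000 * fp_count  # $10k per FN, $20k per FP
cost_model_5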
validation_data['predictions'] = model_5.predict(dataset=validation_data, output_type='probability')
Explanation: Reminder: Compare the cost of the mistakes made by the boosted trees model with the decision tree model. The extra 3% improvement in prediction accuracy can translate to several million dollars! And, it was so easy to get by simply boosting our decision trees.
Most positive & negative loans.
In this section, we will find the loans that are most likely to be predicted safe. We can do this in a few steps:
Step 1: Use the model_5 (the model with 5 trees) and make probability predictions for all the loans in the validation_data.
Step 2: Similar to what we did in the very first assignment, add the probability predictions as a column called predictions into the validation_data.
Step 3: Sort the data (in decreasing order) by the probability predictions.
Start here with Step 1 & Step 2. Make predictions using model_5 for examples in the validation_data. Use output_type = probability.
End of explanation
validation_data.sort('predictions', ascending=False)
print "Your loans : %s\n" % validation_data['predictions'].head(4)
print "Expected answer : %s" % [0.4492515948736132, 0.6119100103640573,
0.3835981314851436, 0.3693306705994325]
Explanation: Checkpoint: For each row, the probabilities should be a number in the range [0, 1]. We have provided a simple check here to make sure your answers are correct.
End of explanation
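A quick programmatic version of that checkpoint; a sketch assuming the SArray min()/max() methods are available:
# Hedged check: every predicted probability should lie in [0, 1].
assert validation_data['predictions'].min() >= 0
assert validation_data['predictions'].max() <= 1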
s = validation_data.sort('predictions', ascending=True)
Explanation: Now, we are ready to go to Step 3. You can now use the prediction column to sort the loans in validation_data (in descending order) by prediction probability. Find the top 5 loans with the highest probability of being predicted as a safe loan.
End of explanation
print "Your loans : %s\n" % s['grade'].head(5)
Explanation: Quiz question: What grades are the top 5 loans?
Let us repeat this exercise to find the top 5 loans (in the validation_data) with the lowest probability of being predicted as a safe loan:
End of explanation
model_10 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 10, verbose=False)
Explanation: Checkpoint: You should expect to see 5 loans with the grade ['D', 'C', 'C', 'C', 'B'].
Effect of adding more trees
In this assignment, we will train 5 different ensemble classifiers in the form of gradient boosted trees. We will train models with 10, 50, 100, 200, and 500 trees. We use the max_iterations parameter in the boosted tree module.
Let's get started with a model with max_iterations = 10:
End of explanation
model_50 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 50, verbose=False)
model_100 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 100, verbose=False)
model_200 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 200, verbose=False)
model_500 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
target = target, features = features, max_iterations = 500, verbose=False)
Explanation: Now, train 4 models with max_iterations to be:
* max_iterations = 50,
* max_iterations = 100
* max_iterations = 200
* max_iterations = 500.
Let us call these models model_50, model_100, model_200, and model_500. You can pass in verbose=False in order to suppress the printed output.
Warning: This could take a couple of minutes to run.
End of explanation
e_50 = model_50.evaluate(validation_data)
e_100 = model_100.evaluate(validation_data)
e_200 = model_200.evaluate(validation_data)
e_500 = model_500.evaluate(validation_data)
Explanation: Compare accuracy on entire validation set
Now we will compare the predictive accuracy of our models on the validation set. Evaluate the accuracy of the 10, 50, 100, 200, and 500 tree models on the validation_data. Use the .evaluate method.
End of explanation
print "Model 50 accuracy: %s\n" % e_50['accuracy']
print "Model 100 accuracy: %s\n" % e_100['accuracy']
print "Model 200 accuracy: %s\n" % e_200['accuracy']
print "Model 500 accuracy: %s\n" % e_500['accuracy']
Explanation: Quiz Question: Which model has the best accuracy on the validation_data?
Quiz Question: Is it always true that the model with the most trees will perform best on test data?
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
def make_figure(dim, title, xlabel, ylabel, legend):
plt.rcParams['figure.figsize'] = dim
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
if legend is not None:
plt.legend(loc=legend, prop={'size':15})
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
Explanation: Plot the training and validation error vs. number of trees
Recall from the lecture that the classification error is defined as
$$
\mbox{classification error} = 1 - \mbox{accuracy}
$$
In this section, we will plot the training and validation errors versus the number of trees to get a sense of how these models are performing. We will compare the 10, 50, 100, 200, and 500 tree models. You will need matplotlib in order to visualize the plots.
First, make sure this block of code runs on your computer.
End of explanation
train_err_10 = 1 - model_10.evaluate(train_data)['accuracy']
train_err_50 = 1 - model_50.evaluate(train_data)['accuracy']
train_err_100 = 1 - model_100.evaluate(train_data)['accuracy']
train_err_200 = 1 - model_200.evaluate(train_data)['accuracy']
train_err_500 = 1 - model_500.evaluate(train_data)['accuracy']
Explanation: In order to plot the classification errors (on the train_data and validation_data) versus the number of trees, we will need lists of these accuracies, which we get by applying the method .evaluate.
Steps to follow:
Step 1: Calculate the classification error for model on the training data (train_data).
Step 2: Store the training errors into a list (called training_errors) that looks like this:
[train_err_10, train_err_50, ..., train_err_500]
Step 3: Calculate the classification error of each model on the validation data (validation_data).
Step 4: Store the validation classification error into a list (called validation_errors) that looks like this:
[validation_err_10, validation_err_50, ..., validation_err_500]
Once that has been completed, the rest of the code should be able to evaluate correctly and generate the plot.
Let us start with Step 1. Write code to compute the classification error on the train_data for models model_10, model_50, model_100, model_200, and model_500.
End of explanation
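Since the same 1 - accuracy computation repeats for every model below, a small helper keeps it in one place; a sketch, not part of the original assignment:
def classification_error(model, data):
    # classification error = 1 - accuracy, per the definition above
    return 1 - model.evaluate(data)['accuracy']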
training_errors = [train_err_10, train_err_50, train_err_100,
train_err_200, train_err_500]
Explanation: Now, let us run Step 2. Save the training errors into a list called training_errors
End of explanation
validation_err_10 = 1 - model_10.evaluate(validation_data)['accuracy']
validation_err_50 = 1 - model_50.evaluate(validation_data)['accuracy']
validation_err_100 = 1 - model_100.evaluate(validation_data)['accuracy']
validation_err_200 = 1 - model_200.evaluate(validation_data)['accuracy']
validation_err_500 = 1 - model_500.evaluate(validation_data)['accuracy']
Explanation: Now, onto Step 3. Write code to compute the classification error on the validation_data for models model_10, model_50, model_100, model_200, and model_500.
End of explanation
validation_errors = [validation_err_10, validation_err_50, validation_err_100,
validation_err_200, validation_err_500]
Explanation: Now, let us run Step 4. Save the validation errors into a list called validation_errors
End of explanation
plt.plot([10, 50, 100, 200, 500], training_errors, linewidth=4.0, label='Training error')
plt.plot([10, 50, 100, 200, 500], validation_errors, linewidth=4.0, label='Validation error')
make_figure(dim=(10,5), title='Error vs number of trees',
xlabel='Number of trees',
ylabel='Classification error',
legend='best')
Explanation: Now, we will plot the training_errors and validation_errors versus the number of trees. We will compare the 10, 50, 100, 200, and 500 tree models. We provide some plotting code to visualize the plots within this notebook.
Run the following code to visualize the plots.
End of explanation |
8,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This is a basic tutorial on getting direct info and help of python modules from within a Jupyter notebook. Some of this tutorial is specific to Linux (particularly the commands that start with "!").
Getting Basic Help from the Python interpreter
Let's get help on Python's built-in abs function
Step1: We have to import the numpy module in order to get help on any of its functions which we attempt below. But if we ask for help on all of np (numpy), it blocks it due to simply too much output for the browser to handle. Besides, we won't be able to read it all in one sitting. So see the Help menu in the notebook web page for numpy which today points to
Step2: but we can execute commands from the Linux shell like this
Step3: We can count the number of lines of output of whatever command we want. Here it is the simple one-liner of python that is generating two lines of output
Step4: And here is why we got the IOPub data rate exceeded error above! There is a lot of output from the help on the np module, and we do not want to see it all, anyhow.
Step5: But we can explore some of it by using basic Linux shell commands
Step6: We can ignore the big traceback from python. All it is telling us that it failed to write to the stdout, because the head command is closing its side of the pipe after 20 lines of output received.
That is nothing to worry about, but if we cared, we could just use 2>/dev/null to send all of the error (on stderr!) to the bit bucket to hide it (which normally, we should not do because it would hide errors we do need to pay attention to, but not in this peculiar case)
Step7: Getting help on numpy functions
If we know what function within the particular module from online resources such as https
Step8: Getting help on scipy functions
Likewise, we can get help on scipi module functions
Step9: It seems that the np.info function is close to the same as the builtin help function
Step10: Finding numpy functions with keywords in doc strings | Python Code:
help(abs)
Explanation: Introduction
This is a basic tutorial on getting info and help on Python modules directly from within a Jupyter notebook. Some of this tutorial is specific to Linux (particularly the commands that start with "!").
Getting Basic Help from the Python interpreter
Let's get help on Python's built-in abs function:
End of explanation
import numpy as np
help(np)
Explanation: We have to import the numpy module in order to get help on any of its functions which we attempt below. But if we ask for help on all of np (numpy), it blocks it due to simply too much output for the browser to handle. Besides, we won't be able to read it all in one sitting. So see the Help menu in the notebook web page for numpy which today points to: https://docs.scipy.org/doc/numpy/reference/
End of explanation
!echo this is output from the echo command from the Linux shell
!python --version
!python -c 'print("foo"); print("bar")'
Explanation: but we can execute commands from the Linux shell like this:
End of explanation
!python -c 'print("foo"); print("bar")' | wc -l
Explanation: We can count the number of lines of output of whatever command we want. Here it is a simple Python one-liner that generates two lines of output:
End of explanation
!python -c 'import numpy as np; help(np)' | wc -l
Explanation: And here is why we got the IOPub data rate exceeded error above! There is a lot of output from the help on the np module, and we do not want to see it all, anyhow.
End of explanation
!python -c 'import numpy as np; help(np)' | head -20
Explanation: But we can explore some of it by using basic Linux shell commands:
End of explanation
!python -c 'import numpy as np; help(np)' 2>/dev/null | head -20
Explanation: We can ignore the big traceback from python. All it is telling us is that it failed to write to stdout, because the head command is closing its side of the pipe after 20 lines of output received.
That is nothing to worry about, but if we cared, we could just use 2>/dev/null to send all of the error (on stderr!) to the bit bucket to hide it (which normally, we should not do because it would hide errors we do need to pay attention to, but not in this peculiar case):
End of explanation
import numpy as np
help(np.linspace)
Explanation: Getting help on numpy functions
If we know, from online resources such as https://docs.scipy.org/doc/numpy/reference/, what function within the particular module we are interested in, we can directly get help on it within the Jupyter notebook. But we do need to import the module before asking for it, as otherwise we will see an error:
End of explanation
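Jupyter's IPython kernel also has its own shortcut for the same thing; a hedged aside, since this only works in IPython-based front ends, not plain Python:
# Appending a question mark pops the docstring up in the notebook's pager,
# and two question marks show the source when it is available.
np.linspace?
# np.linspace??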
import numpy as np
import matplotlib as mpl
from scipy import linalg, optimize
# I don't know what difference there is between np.info and just Python's built-in help:
# np.info(optimize.fmin)
help(optimize.fmin)
Explanation: Getting help on scipy functions
Likewise, we can get help on scipy module functions:
End of explanation
help(np.info)
Explanation: It seems that the np.info function is close to the same as the builtin help function:
End of explanation
help(np.lookfor)
np.lookfor('root')
Explanation: Finding numpy functions with keywords in doc strings
End of explanation |
8,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a Jupyter notebook for David Dobrinskiy's HSE Thesis
How Venture Capital Affects Startups' Success
Step1: Let us look at the dynamics of total US VC investment
Step3: Deals and investments are in alternating rows of frame, let's separate them
Step5: Plot data from MoneyTree report
http
Step6: prepare data for ols | Python Code:
# You should be running python3
import sys
print(sys.version)
import pandas as pd # http://pandas.pydata.org/
import numpy as np # http://numpy.org/
import statsmodels.api as sm # http://statsmodels.sourceforge.net/stable/index.html
import statsmodels.formula.api as smf
import statsmodels
print("Pandas Version: {}".format(pd.__version__)) # pandas version
print("StatsModels Version: {}".format(statsmodels.__version__)) # StatsModels version
Explanation: This is a Jupyter notebook for David Dobrinskiy's HSE Thesis
How Venture Capital Affects Startups' Success
End of explanation
# load the pwc dataset from azure
frame = pd.read_csv('pwc_moneytree.csv')
frame.head()
del frame['Grand Total']
frame.columns = ['year', 'type', 'q1', 'q2', 'q3', 'q4']
frame['year'] = frame['year'].fillna(method='ffill')
frame.head()
Explanation: Let us look at the dynamics of total US VC investment
End of explanation
deals_df = frame.iloc[0::2]
investments_df = frame.iloc[1::2]
# once separated, 'type' field is identical within each df
# let's delete it
del deals_df['type']
del investments_df['type']
deals_df.head()
investments_df.head()
def unstack_to_series(df):
Takes q1-q4 in a dataframe and converts it to a series
input: a dataframe containing ['q1', 'q2', 'q3', 'q4']
ouput: a pandas series
quarters = ['q1', 'q2', 'q3', 'q4']
d = dict()
for i, row in df.iterrows():
for q in quarters:
key = str(int(row['year'])) + q
d[key] = row[q]
# print(key, q, row[q])
return pd.Series(d)
deals = unstack_to_series(deals_df ).dropna()
investments = unstack_to_series(investments_df).dropna()
def string_to_int(money_string):
numerals = [c if c.isnumeric() else '' for c in money_string]
return int(''.join(numerals))
# convert deals from string to integers
deals = deals.apply(string_to_int)
deals.tail()
# investment in billions USD
# converts to integers - which is ok, since data is in dollars
investments_b = investments.apply(string_to_int)
# in python3 division automatically converts numbers to floats, we don't loose precicion
investments_b = investments_b / 10**9
# round data to 2 decimals
investments_b = investments_b.apply(round, ndigits=2)
investments_b.tail()
Explanation: Deals and investments are in alternating rows of frame, let's separate them
End of explanation
import matplotlib.pyplot as plt # http://matplotlib.org/
import matplotlib.patches as mpatches
import matplotlib.ticker as ticker
%matplotlib inline
# change matplotlib inline display size
# import matplotlib.pylab as pylab
# pylab.rcParams['figure.figsize'] = (8, 6) # that's default image size for this interactive session
fig, ax1 = plt.subplots()
ax1.set_title("VC historical trend (US Data)")
t = range(len(investments_b)) # need to substitute tickers for years later
width = t[1]-t[0]
y1 = investments_b
# create filled step chart for investment amount
ax1.bar(t, y1, width=width, facecolor='0.80', edgecolor='', label = 'Investment ($ Bln.)')
ax1.set_ylabel('Investment ($ Bln.)')
# set up xlabels with years
years = [str(year)[:-2] for year in deals.index][::4] # get years without quarter
ax1.set_xticks(t[::4]) # set 1 tick per year
ax1.set_xticklabels(years, rotation=50) # set tick names
ax1.set_xlabel('Year') # name X axis
# format Y1 tickers to $ billions
formatter = ticker.FormatStrFormatter('$%1.0f Bil.')
ax1.yaxis.set_major_formatter(formatter)
for tick in ax1.yaxis.get_major_ticks():
tick.label1On = False
tick.label2On = True
# create second Y2 axis for Num of Deals
ax2 = ax1.twinx()
y2 = deals
ax2.plot(t, y2, color = 'k', ls = '-', label = 'Num. of Deals')
ax2.set_ylabel('Num. of Deals')
# add annotation bubbles
ax2.annotate('1997-2000 dot-com bubble', xy=(23, 2100), xytext=(3, 1800),
bbox=dict(boxstyle="round4", fc="w"),
arrowprops=dict(arrowstyle="-|>",
connectionstyle="arc3,rad=0.2",
fc="w"),
)
ax2.annotate('2007-08 Financial Crisis', xy=(57, 800), xytext=(40, 1300),
bbox=dict(boxstyle="round4", fc="w"),
arrowprops=dict(arrowstyle="-|>",
connectionstyle="arc3,rad=-0.2",
fc="w"),
)
# add legend
ax1.legend(loc="best")
ax2.legend(bbox_to_anchor=(0.95, 0.88))
fig.tight_layout() # solves cropping problems when saving png
fig.savefig('vc_trend_3.png', dpi=250)
plt.show()
# load countries dataset from azure
ds = ws.datasets['country_data.csv']
# data for 2015
country_data = ds.to_dataframe()
country_data
def tex(df):
Print dataframe contents in latex-ready format
for line in df.to_latex().split('\n'):
print(line)
params = pd.DataFrame(country_data['Criteria'])
params.index = ['y'] + ['X'+str(i) for i in range(1, len(country_data))]
tex(params)
# set index
country_data = country_data.set_index('Criteria')
# convert values to floats (note: comas need to be replaced by dots for python conversion to work)
country_data.index = ['y'] + ['X'+str(i) for i in range(1, len(country_data))]
country_data
Explanation: Plot data from MoneyTree report
http://www.pwcmoneytree.com
End of explanation
const = pd.Series([1]*len(country_data.columns), index = country_data.columns, name = 'X0')
const
country_data = pd.concat([pd.DataFrame(const).T, country_data])
country_data = country_data.sort_index()
country_data
tex(country_data)
y = country_data.iloc[-1,:]
y
X = country_data.iloc[:-1, :].T
X
# Fit regression model
results = sm.OLS(y, X).fit()
# Inspect the results in latex doc, {tab:vc_ols_1}
print(results.summary())
# Inspect the results in latex doc, {tab:vc_ols_1}
print(results.summary().as_latex())
# equation for eq:OLS_1_coeffs in LaTeX
equation = 'Y ='
for i, coeff in results.params.iteritems():
sign = '+' if coeff >= 0 else '-'
equation += ' ' + sign + str(abs(round(coeff,2))) + '*' + i
print(equation)
# correlation table
corr = country_data.T.corr().iloc[1:,1:]
corr = corr.applymap(lambda x: round(x, 2))
corr
# corr table to latex
tex(corr)
import itertools
# set of unique parameter pairs
pairs = set([frozenset(pair) for pair in itertools.permutations(list(corr.index), 2)])
for pair in pairs:
pair = sorted(list(pair))
corr_pair = corr.loc[pair[0],pair[1]]
if corr_pair > 0.7:
print(pair, round(corr_pair, 2))
print('-'*40)
print('a')
for i in corr.columns:
for j in corr.columns:
if abs(corr.loc[i, j]) > 0.7 and i != j:
print(i+'~'+j, corr.loc[i, j])
Explanation: prepare data for ols
End of explanation |
8,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: CMCC
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
8,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/csdms_logo.jpg">
Using a BMI
Step1: Import the Waves class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
Step2: Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model are get the names of the input variables.
Step3: Or the output variables.
Step4: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main output of the Waves model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"
Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not a one).
Step5: OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Waves to use some defaults.
Step6: Before running the model, let's set a couple input parameters. These two parameters represent the frequency for which waves approach the shore at a high angle and if they come from a prefered direction.
Step7: To advance the model in time, we use the update method. We'll advance the model one day.
Step8: Let's double-check that the model advanced to the given time and see what the new wave angle is.
Step9: We'll put all this in a loop and advance the model in time to generate a time series of waves angles. | Python Code:
%matplotlib inline
Explanation: <img src="images/csdms_logo.jpg">
Using a BMI: Waves
This example explores how to use a BMI implementation using the Waves model as an example.
Links
Waves source code: Look at the files that have waves in their name.
Waves description on CSDMS: Detailed information on the Waves model.
Interacting with the Waves BMI using Python
Some magic that allows us to view images within the notebook.
End of explanation
from cmt.components import Waves
waves = Waves()
Explanation: Import the Waves class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
End of explanation
waves.get_output_var_names()
Explanation: Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model are get the names of the output variables.
End of explanation
waves.get_input_var_names()
Explanation: Or the input variables.
End of explanation
angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'
print "Data type: %s" % waves.get_var_type(angle_name)
print "Units: %s" % waves.get_var_units(angle_name)
print "Grid id: %d" % waves.get_var_grid(angle_name)
print "Number of elements in grid: %d" % waves.get_grid_size(0)
print "Type of grid: %s" % waves.get_grid_type(0)
Explanation: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main output of the Waves model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"
Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not a spatial one).
End of explanation
waves.initialize(None)
Explanation: OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Waves to use some defaults.
End of explanation
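While we're at it, a BMI also standardizes time metadata; a hedged sketch using standard BMI calls (assuming the cmt wrapper exposes them unchanged):
# Hedged sketch: query the model's time metadata through the BMI.
print 'Start time: %f' % waves.get_start_time()
print 'End time: %f' % waves.get_end_time()
print 'Time step: %f' % waves.get_time_step()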
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_asymmetry_parameter', .25)
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_highness_parameter', .7)
Explanation: Before running the model, let's set a couple of input parameters. These two parameters represent how often waves approach the shore at a high angle and whether they come from a preferred direction.
End of explanation
waves.update()
Explanation: To advance the model in time, we use the update method. We'll advance the model one day.
End of explanation
print 'Current model time: %f' % waves.get_current_time()
val = waves.get_value(angle_name)
print 'The current wave angle is: %f' % val[0]
Explanation: Let's double-check that the model advanced to the given time and see what the new wave angle is.
End of explanation
import numpy as np
number_of_time_steps = 400
angles = np.empty(number_of_time_steps)
for time in xrange(number_of_time_steps):
waves.update()
angles[time] = waves.get_value(angle_name)
import matplotlib.pyplot as plt
plt.plot(np.array(angles) * 180 / np.pi)
plt.xlabel('Time (days)')
plt.ylabel('Incoming wave angle (degrees)')
plt.hist(np.array(angles) * 180 / np.pi, bins=25)
plt.xlabel('Incoming wave angle (degrees)')
plt.ylabel('Number of occurrences')
Explanation: We'll put all this in a loop and advance the model in time to generate a time series of waves angles.
End of explanation |
8,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
how long until I get into the Hardrock 100?
The Hardrock 100 is a 100-mile footrace through the San Juan mountains of Colorado. It is considered a "post-graduate level" event with 66000 feet elevation change and an average elevation of over 11000 ft! It is one of my lifelong goals to run in this event.
Hardrock is a small event and maintains a lottery system to gain entry. Runners must run a qualifier race in order to enter the lottery. They allot 45 slots in the lottery each year to newcomers. Based on the calculations of the Hardrock organizers, each newcomer gets 2^n tickets in the lottery, where n is the number of DNSs (did not start) from all previous years. So, if you have tried and failed to get into the race for several years, then your chances of getting in go up dramatically.
I wanted to figure out how long it would take me to have a good chance of getting into HR100. To do this, I utilized lottery data the organizers (Blake Wood) nicely release for the last three years. Blake uses simulation to predict what the odds of newcomers with x number of tickets in the lottery. Using this data, I decided to build a simple probabilistic model using PyMC of my entrance into the lottery process.
TL;DR
Step1: I transferred the lottery odds data from the PDF reports from the last three years to a CSV for easy read to python.
Step2: The total number of tickets appears to be increasing linearly, so a linear model seems like a decent first approximation of the process. So we'll start with that in the model below.
model assumptions
Step3: The figure above shows the mean (blue line) and 95% HPD (grey shaded area) of the distribution of number of draws necessary to pull my name each year in the HR100 lottery. The red horizontal line is 45, which is the number of slots in the loterry for newcomers. So in 2029 when the mean of the distribution passes below 45 on the y-axis, I have a better than 50% chance of getting in. This is shown in the table below as well. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import pymc
import numpy as np
from scipy import stats
%matplotlib inline
Explanation: how long until I get into the Hardrock 100?
The Hardrock 100 is a 100-mile footrace through the San Juan mountains of Colorado. It is considered a "post-graduate level" event with 66000 feet elevation change and an average elevation of over 11000 ft! It is one of my lifelong goals to run in this event.
Hardrock is a small event and maintains a lottery system to gain entry. Runners must run a qualifier race in order to enter the lottery. They allot 45 slots in the lottery each year to newcomers. Based on the calculations of the Hardrock organizers, each newcomer gets 2^n tickets in the lottery, where n is the number of DNSs (did not start) from all previous years. So, if you have tried and failed to get into the race for several years, then your chances of getting in go up dramatically.
I wanted to figure out how long it would take me to have a good chance of getting into HR100. To do this, I utilized lottery data the organizers (Blake Wood) nicely release for the last three years. Blake uses simulation to predict what the odds of newcomers with x number of tickets in the lottery. Using this data, I decided to build a simple probabilistic model using PyMC of my entrance into the lottery process.
TL;DR: It's going to take around 10 years.
End of explanation
data = pd.read_csv('./hr100_odds.csv',header=0,index_col=0)
data
# calculate total number of tickets per lottery year
total_tix = []
for x in ['2017','2018','2019']:
total_tix.append((data[x]*data.index).sum())
plt.plot(total_tix)
plt.scatter([0,1,2],total_tix)
plt.suptitle('total tickets',size=14)
plt.xticks([0,1,2],[2017,2018,2019])
Explanation: I transferred the lottery odds data from the PDF reports from the last three years to a CSV for easy read to python.
End of explanation
# years to predict
ytp = np.array([3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18])
ytp_real = 2017+ytp
print ytp_real
b0 = pymc.Normal('b0',7000,0.00001) # intercept of model of total tickets in lottery
b1 = pymc.Normal('b1',5000,0.00001) # slope of total_tickets linear model
err= pymc.Uniform('err',0,500) # error on total_tickets model
x=np.array([0,1,2])
# distribution of number of tickets per year with 0.9 prob of drawing 1 tix per year
my_tickets = pymc.Binomial('num tix',p=0.9,n=np.array([ytp-1]))
# the model
@pymc.deterministic
def total_pool_pred(b0=b0,b1=b1,x=x):
return b0+b1*x
#estimate values of the model based on data
total_pool = pymc.Normal('y', total_pool_pred , err, value=np.array(total_tix), observed=True)
# use fitted params to estimate population size at each year
pop_size = pymc.Normal('population',mu=b1*ytp+b0,tau=err,size=len(ytp))
def chance_final(foonum=my_tickets,pop_size=pop_size):
tmp = (2**foonum)/pop_size
tmp[tmp>1] = 1
return tmp
chances = pymc.Deterministic(name='chances',eval=chance_final,parents={"foonum":my_tickets,"pop_size":pop_size},doc='foo')
# how many draws until success,
final = pymc.Geometric('final_odds',p=chances)
model = pymc.Model([total_pool_pred, b0, b1, total_pool, err, x,pop_size,chances,my_tickets,final])
mcmc = pymc.MCMC(model)
mcmc.sample(100000, 20000)
fo_central = final.stats()['quantiles'][50]
fo_ub = final.stats()['95% HPD interval'][1]
fo_lb = final.stats()['95% HPD interval'][0]
plt.figure(figsize=[7.5,7.5])
plt.suptitle('number of draws needed to pull my name',size=14)
plt.plot(fo_central,linewidth=3)
plt.plot(fo_ub,c='grey',linewidth=2)
plt.plot(fo_lb,c='grey',linewidth=2)
plt.fill_between(np.arange(0,len(ytp_real+1),1),fo_ub,fo_lb,color='grey',alpha=0.25)
plt.plot([0,15],[45,45],c='red')
plt.xticks(np.arange(0,len(ytp_real+1),1),ytp_real,rotation=45)
plt.ylim([0,250])
plt.xlabel('year')
plt.ylabel('number of draws')
Explanation: The total number of tickets appears to be increasing linearly, so a linear model seems like a decent first approximation of the process. So we'll start with that in the model below.
model assumptions:
the total number of tickets increases linearly over time. Likely this is not true and I expect it will plateau at a given number, or even increase at greater than linear rates. More modeling to come on this.
each year, I model my number of tickets as a binomial distribution with p = 0.90, meaning each year I expect there's a 90% chance I'll qualify to enter the lottery. That is a huge assumption, essentially saying in 5 years I think I'll be healthy, fit, and have enough time to run a 100 mile race. Oof...
the model:
Normal linear regression model inferring values of the slope and intercept based on our available data. Use that model to build a distribution of what we think the total number of tickets will look like each year into the future.
My number of tickets each year is a binomial distribution with p = 0.90 and n = year - 2019.
The output of the model is a geometric distribution showing how many draws from the lottery are necessary until my name gets picked the first time. Since there are (currently) 45 draws for newbies, then if that number is less than 45, I expect to get picked!
End of explanation
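Before reading too much into the posterior, it's worth eyeballing convergence; a hedged sketch using PyMC2's built-in trace plots (assuming the pymc.Matplot module is available in this install):
# Hedged sketch: trace, autocorrelation and histogram diagnostics.
pymc.Matplot.plot(b1)   # diagnostics for the slope
pymc.Matplot.plot(err)  # diagnostics for the noise term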
tmp_perc = []
for x in xrange(len(ytp)):
tmp_perc.append(round(sum(np.mean(mcmc.trace('final_odds')[:],1)[:,x]<=45)/80000.0,4)*100)
pd.DataFrame(index=ytp_real,data={'percent chance':tmp_perc})
Explanation: The figure above shows the mean (blue line) and 95% HPD (grey shaded area) of the distribution of the number of draws necessary to pull my name each year in the HR100 lottery. The red horizontal line is 45, which is the number of slots in the lottery for newcomers. So in 2029, when the mean of the distribution passes below 45 on the y-axis, I have a better than 50% chance of getting in. This is shown in the table below as well.
End of explanation |
8,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Of the three parts of this app, part 2 should be very familiar by now -- load some taxi dropoff locations, declare a Points object, datashade them, and set some plot options.
Step 1 is new
Step2: Iteratively building a bokeh app in the notebook
The above app script can be built entirely without using Jupyter, though we displayed it here using Jupyter for convenience in the tutorial. Jupyter notebooks are also often helpful when initially developing such apps, allowing you to quickly iterate over visualizations in the notebook, deploying it as a standalone app only once we are happy with it.
To illustrate this process, let's quickly go through such a workflow. As before we will set up our imports, load the extension, and load the taxi dataset
Step3: Next we define a Counter stream which we will use to select taxi trips by hour.
Step4: Up to this point, we have a normal HoloViews notebook that we could display using Jupyter's rich display of overlay, as we would with an any notebook. But having come up with the objects we want interactively in this way, we can now display the result as a Bokeh app, without leaving the notebook. To do that, first edit the following cell to change "8888" to whatever port your jupyter session is using, in case your URL bar doesn't say "localhost
Step5: We could stop here, having launched an app, but so far the app will work just the same as in the normal Jupyter notebook, responding to user inputs as they occur. Having defined a Counter stream above, let's go one step further and add a series of periodic events that will let the visualization play on its own even without any user input
Step6: You can stop this ongoing process by clearing the cell displaying the app.
Now let's open the text editor again and make this edit to a separate app, which we can then launch using Bokeh Server separately from this notebook.
Step7: Combining HoloViews with bokeh models
Now for a last hurrah let's put everything we have learned to good use and create a bokeh app with it. This time we will go straight to a Python script containing the app. If you run the app with bokeh serve --show ./apps/player_app.py from your terminal you should see something like this | Python Code:
with open('./apps/server_app.py', 'r') as f:
print(f.read())
Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a>
<div style="float:right;"><h2>08. Deploying Bokeh Apps</h2></div>
In the previous sections we discovered how to use a HoloMap to build a Jupyter notebook with interactive visualizations that can be exported to a standalone HTML file, as well as how to use DynamicMap and Streams to set up dynamic interactivity backed by the Jupyter Python kernel. However, frequently we want to package our visualization or dashboard for wider distribution, backed by Python but run outside of the notebook environment. Bokeh Server provides a flexible and scalable architecture to deploy complex interactive visualizations and dashboards, integrating seamlessly with Bokeh and with HoloViews.
For a detailed background on Bokeh Server see the bokeh user guide. In this tutorial we will discover how to deploy the visualizations we have created so far as a standalone bokeh server app, and how to flexibly combine HoloViews and Bokeh APIs to build highly customized apps. We will also reuse a lot of what we have learned so far---loading large, tabular datasets, applying datashader operations to them, and adding linked streams to our app.
A simple bokeh app
The preceding sections of this tutorial focused solely on the Jupyter notebook, but now let's look at a bare Python script that can be deployed using Bokeh Server:
End of explanation
# Exercise: Modify the app to display the pickup locations and add a tilesource, then run the app with bokeh serve
# Tip: Refer to the previous notebook
Explanation: Of the three parts of this app, part 2 should be very familiar by now -- load some taxi dropoff locations, declare a Points object, datashade them, and set some plot options.
Step 1 is new: Instead of loading the bokeh extension using hv.extension('bokeh'), we get a direct handle on a bokeh renderer using the hv.renderer function. This has to be done at the top of the script, to be sure that options declared are passed to the Bokeh renderer.
Step 3 is also new: instead of typing app to see the visualization as we would in the notebook, here we create a Bokeh document from it by passing the HoloViews object to the renderer.server_doc method.
Steps 1 and 3 are essentially boilerplate, so you can now use this simple skeleton to turn any HoloViews object into a fully functional, deployable Bokeh app!
Deploying the app
Assuming that you have a terminal window open with the hvtutorial environment activated, in the notebooks/ directory, you can launch this app using Bokeh Server:
bokeh serve --show apps/server_app.py
If you don't already have a favorite way to get a terminal, one way is to open it from within Jupyter, then make sure you are in the notebooks directory, and activate the environment using source activate hvtutorial (or activate tutorial on Windows). You can also open the app script file in the inbuilt text editor, or you can use your own preferred editor.
End of explanation
import holoviews as hv
import geoviews as gv
import dask.dataframe as dd
from holoviews.operation.datashader import datashade, aggregate, shade
from bokeh.models import WMTSTileSource
hv.extension('bokeh', logo=False)
usecols = ['tpep_pickup_datetime', 'dropoff_x', 'dropoff_y']
ddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime'], usecols=usecols)
ddf['hour'] = ddf.tpep_pickup_datetime.dt.hour
ddf = ddf.persist()
Explanation: Iteratively building a bokeh app in the notebook
The above app script can be built entirely without using Jupyter, though we displayed it here using Jupyter for convenience in the tutorial. Jupyter notebooks are also often helpful when initially developing such apps, allowing you to quickly iterate over visualizations in the notebook, deploying it as a standalone app only once we are happy with it.
To illustrate this process, let's quickly go through such a workflow. As before we will set up our imports, load the extension, and load the taxi dataset:
End of explanation
stream = hv.streams.Counter()
points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])
dmap = hv.DynamicMap(lambda counter: points.select(hour=counter%24).relabel('Hour: %s' % (counter % 24)),
streams=[stream])
shaded = datashade(dmap)
hv.opts('RGB [width=800, height=600, xaxis=None, yaxis=None]')
url = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'
wmts = gv.WMTS(WMTSTileSource(url=url))
overlay = wmts * shaded
Explanation: Next we define a Counter stream which we will use to select taxi trips by hour.
End of explanation
renderer = hv.renderer('bokeh')
server = renderer.app(overlay, show=True, websocket_origin='localhost:8888')
Explanation: Up to this point, we have a normal HoloViews notebook that we could display using Jupyter's rich display of overlay, as we would with any notebook. But having come up with the objects we want interactively in this way, we can now display the result as a Bokeh app, without leaving the notebook. To do that, first edit the following cell to change "8888" to whatever port your jupyter session is using, in case your URL bar doesn't say "localhost:8888/".
Then run this cell to launch the Bokeh app within this notebook:
End of explanation
dmap.periodic(1)
Explanation: We could stop here, having launched an app, but so far the app will work just the same as in the normal Jupyter notebook, responding to user inputs as they occur. Having defined a Counter stream above, let's go one step further and add a series of periodic events that will let the visualization play on its own even without any user input:
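If you want the playback to stop on its own rather than running indefinitely, periodic also accepts count and timeout arguments (a hedged variation; check your HoloViews version for the exact signature):
python
# Advance the Counter stream once per second, stopping after 60 events
# or after 120 seconds, whichever comes first.
dmap.periodic(1, count=60, timeout=120)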
End of explanation
# Exercise: Copy the example above into periodic_app.py and modify it so it can be run with bokeh serve
# Hint: Use hv.renderer and renderer.server_doc
# Note that you have to run periodic **after** creating the bokeh document
Explanation: You can stop this ongoing process by clearing the cell displaying the app.
Now let's open the text editor again and make this edit to a separate app, which we can then launch using Bokeh Server separately from this notebook.
End of explanation
# Advanced Exercise: Add a histogram to the bokeh layout next to the datashaded plot
# Hint: Declare the histogram like this: hv.operation.histogram(aggregated, bin_range=(0, 20))
# then use renderer.get_plot and hist_plot.state and add it to the layout
Explanation: Combining HoloViews with bokeh models
Now for a last hurrah let's put everything we have learned to good use and create a bokeh app with it. This time we will go straight to a Python script containing the app. If you run the app with bokeh serve --show ./apps/player_app.py from your terminal you should see something like this:
<img src="./assets/tutorial_app.gif"></img>
This more complex app consists of several components:
1. A datashaded plot of points for the indicated hour of the day (in the slider widget)
2. A linked PointerX stream, to compute a cross-section
3. A set of custom bokeh widgets linked to the hour-of-day stream
We have already covered 1. and 2. so we will focus on 3., which shows how easily we can combine a HoloViews plot with custom Bokeh models. We will not look at the precise widgets in too much detail, instead let's have a quick look at the callback defined for slider widget updates:
python
def slider_update(attrname, old, new):
stream.event(hour=new)
Whenever the slider value changes this will trigger a stream event updating our plots. The second part is how we combine HoloViews objects and Bokeh models into a single layout we can display. Once again we can use the renderer to convert the HoloViews object into something we can display with Bokeh:
python
renderer = hv.renderer('bokeh')
plot = renderer.get_plot(hvobj, doc=curdoc())
The plot instance here has a state attribute that represents the actual Bokeh model, which means we can combine it into a Bokeh layout just like any other Bokeh model:
python
layout = layout([[plot.state], [slider, button]], sizing_mode='fixed')
curdoc().add_root(layout)
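Putting these pieces together, a stripped-down sketch of such an app might look like this (the stream definition and widget set are simplified stand-ins, not the actual player_app.py contents):
python
import numpy as np
import holoviews as hv
from bokeh.io import curdoc
from bokeh.layouts import layout
from bokeh.models import Slider

renderer = hv.renderer('bokeh')

# Simplified stand-ins for the app's stream and plot.
HourSelect = hv.streams.Stream.define('HourSelect', hour=0)
stream = HourSelect()
points = hv.Points(np.random.randn(1000, 2))
hvobj = hv.DynamicMap(lambda hour: points.relabel('Hour: %d' % hour), streams=[stream])

# Convert the HoloViews object into a Bokeh plot attached to this document.
plot = renderer.get_plot(hvobj, doc=curdoc())

def slider_update(attrname, old, new):
    stream.event(hour=new)

slider = Slider(start=0, end=23, value=0, step=1, title='Hour')
slider.on_change('value', slider_update)

# plot.state is a regular Bokeh model, so it composes with other widgets.
doc_layout = layout([[plot.state], [slider]], sizing_mode='fixed')
curdoc().add_root(doc_layout)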
End of explanation |
8,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Uncertainty-aware Deep Learning with SNGP
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Define visualization macros
Step4: The two moon dataset
Create the training and evaluation datasets from the two moon dataset.
Step6: Evaluate the model's predictive behavior over the entire 2D input space.
Step7: To evaluate model uncertainty, add an out-of-domain (OOD) dataset that belongs to a third class. The model never sees these OOD examples during training.
Step11: Here the blue and orange represent the positive and negative classes, and the red represents the OOD data. A model that quantifies the uncertainty well is expected to be confident when close to training data (i.e., $p(x_{test})$ close to 0 or 1), and be uncertain when far away from the training data regions (i.e., $p(x_{test})$ close to 0.5).
The deterministic model
Define model
Start from the (baseline) deterministic model
Step12: This tutorial uses a 6-layer ResNet with 128 hidden units.
Step13: Train model
Configure the training parameters to use SparseCategoricalCrossentropy as the loss function and the Adam optimizer.
Step14: Train the model for 100 epochs with batch size 128.
Step16: Visualize uncertainty
Step17: Now visualize the predictions of the deterministic model. First plot the class probability
Step18: In this plot, the yellow and purple are the predictive probabilities for the two classes. The deterministic model did a good job in classifying the two known classes (blue and orange) with a nonlinear decision boundary. However, it is not distance-aware, and classified the never-seen red out-of-domain (OOD) examples confidently as the orange class.
Visualize the model uncertainty by computing the predictive variance
Step19: In this plot, the yellow indicates high uncertainty, and the purple indicates low uncertainty. A deterministic ResNet's uncertainty depends only on the test examples' distance from the decision boundary. This leads the model to be over-confident when out of the training domain. The next section shows how SNGP behaves differently on this dataset.
The SNGP model
Define SNGP model
Let's now implement the SNGP model. Both the SNGP components, SpectralNormalization and RandomFeatureGaussianProcess, are available at the tensorflow_model's built-in layers.
Let's look at these two components in more detail. (You can also jump to the The SNGP model section to see how the full model is implemented.)
Spectral Normalization wrapper
SpectralNormalization is a Keras layer wrapper. It can be applied to an existing Dense layer like this
Step20: Spectral normalization regularizes the hidden weight $W$ by gradually guiding its spectral norm (i.e., the largest eigenvalue of $W$) toward the target value norm_multiplier.
Note
Step21: The main parameters of the GP layers are
Step24: Note
Step25: Use the same architecture as the deterministic model.
Step27: <a name="covariance-reset-callback"></a>
Implement a Keras callback to reset the covariance matrix at the beginning of a new epoch.
Step29: Add this callback to the DeepResNetSNGP model class.
Step30: Train model
Use tf.keras.model.fit to train the model.
Step31: Visualize uncertainty
First compute the predictive logits and variances.
Step32: <a name="mean-field-logits"></a>
Now compute the posterior predictive probability. The classic method for computing the predictive probability of a probabilistic model is to use Monte Carlo sampling, i.e.,
$$E(p(x)) = \frac{1}{M} \sum_{m=1}^M softmax(logit_m(x)), $$
where $M$ is the sample size, and $logit_m(x)$ are random samples from the SNGP posterior $MultivariateNormal$(sngp_logits, sngp_covmat). However, this approach can be slow for latency-sensitive applications such as autonomous driving or real-time bidding. Instead, we can approximate $E(p(x))$ using the mean-field method
Step33: Note
Step35: SNGP Summary
Step36: Put everything together. The entire procedure (training, evaluation and uncertainty computation) can be done in just five lines
Step37: Visualize the class probability (left) and the predictive uncertainty (right) of the SNGP model.
Step38: Remember that in the class probability plot (left), the yellow and purple are class probabilities. When close to the training data domain, SNGP correctly classifies the examples with high confidence (i.e., assigning near 0 or 1 probability). When far away from the training data, SNGP gradually becomes less confident, and its predictive probability becomes close to 0.5 while the (normalized) model uncertainty rises to 1.
Compare this to the uncertainty surface of the deterministic model
Step39: As mentioned earlier, a deterministic model is not distance-aware. Its uncertainty is defined by the distance of the test example from the decision boundary. This leads the model to produce overconfident predictions for the out-of-domain examples (red).
Comparison with other uncertainty approaches
This section compares the uncertainty of SNGP with Monte Carlo dropout and Deep ensemble.
Both of these methods are based on Monte Carlo averaging of multiple forward passes of deterministic models. First set the ensemble size $M$.
Step40: Monte Carlo dropout
Given a trained neural network with Dropout layers, Monte Carlo dropout computes the mean predictive probability
$$E(p(x)) = \frac{1}{M}\sum_{m=1}^M softmax(logit_m(x))$$
by averaging over multiple Dropout-enabled forward passes ${logit_m(x)}_{m=1}^M$.
Step41: Deep ensemble
Deep ensemble is a state-of-the-art (but expensive) method for deep learning uncertainty. To train a Deep ensemble, first train $M$ ensemble members.
Step42: Collect logits and compute the mean predictive probability $E(p(x)) = \frac{1}{M}\sum_{m=1}^M softmax(logit_m(x))$. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
!pip install --use-deprecated=legacy-resolver tf-models-official
# refresh pkg_resources so it takes the changes into account.
import pkg_resources
import importlib
importlib.reload(pkg_resources)
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import sklearn.datasets
import numpy as np
import tensorflow as tf
import official.nlp.modeling.layers as nlp_layers
Explanation: Uncertainty-aware Deep Learning with SNGP
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/understanding/sngp"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/understanding/sngp.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/understanding/sngp.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/understanding/sngp.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In AI applications that are safety-critical (e.g., medical decision making and autonomous driving) or where the data is inherently noisy (e.g., natural language understanding), it is important for a deep classifier to reliably quantify its uncertainty. The deep classifier should be able to be aware of its own limitations and when it should hand control over to the human experts. This tutorial shows how to improve a deep classifier's ability in quantifying uncertainty using a technique called Spectral-normalized Neural Gaussian Process (SNGP).
The core idea of SNGP is to improve a deep classifier's distance awareness by applying simple modifications to the network. A model's distance awareness is a measure of how its predictive probability reflects the distance between the test example and the training data. This is a desirable property that is common for gold-standard probabilistic models (e.g., the Gaussian process with RBF kernels) but is lacking in models with deep neural networks. SNGP provides a simple way to inject this Gaussian-process behavior into a deep classifier while maintaining its predictive accuracy.
This tutorial implements a deep residual network (ResNet)-based SNGP model on the two moons dataset, and compares its uncertainty surface with that of two other popular uncertainty approaches - Monte Carlo dropout and Deep ensemble).
This tutorial illustrates the SNGP model on a toy 2D dataset. For an example of applying SNGP to a real-world natural language understanding task using BERT-base, check out the SNGP-BERT tutorial. For high-quality implementations of an SNGP model (and many other uncertainty methods) on a wide variety of benchmark datasets (such as CIFAR-100, ImageNet, Jigsaw toxicity detection, etc), refer to the Uncertainty Baselines benchmark.
About SNGP
Spectral-normalized Neural Gaussian Process (SNGP) is a simple approach to improve a deep classifier's uncertainty quality while maintaining a similar level of accuracy and latency. Given a deep residual network, SNGP makes two simple changes to the model:
It applies spectral normalization to the hidden residual layers.
It replaces the Dense output layer with a Gaussian process layer.
Compared to other uncertainty approaches (e.g., Monte Carlo dropout or Deep ensemble), SNGP has several advantages:
It works for a wide range of state-of-the-art residual-based architectures (e.g., (Wide) ResNet, DenseNet, BERT, etc).
It is a single-model method (i.e., does not rely on ensemble averaging). Therefore SNGP has a similar level of latency as a single deterministic network, and can be scaled easily to large datasets like ImageNet and Jigsaw Toxic Comments classification.
It has strong out-of-domain detection performance due to the distance-awareness property.
The downsides of this method are:
The predictive uncertainty of a SNGP is computed using the Laplace approximation. Therefore theoretically, the posterior uncertainty of SNGP is different from that of an exact Gaussian process.
SNGP training needs a covariance reset step at the beginning of a new epoch. This can add a tiny amount of extra complexity to a training pipeline. This tutorial shows a simple way to implement this using Keras callbacks.
Setup
End of explanation
plt.rcParams['figure.dpi'] = 140
DEFAULT_X_RANGE = (-3.5, 3.5)
DEFAULT_Y_RANGE = (-2.5, 2.5)
DEFAULT_CMAP = colors.ListedColormap(["#377eb8", "#ff7f00"])
DEFAULT_NORM = colors.Normalize(vmin=0, vmax=1,)
DEFAULT_N_GRID = 100
Explanation: Define visualization macros
End of explanation
def make_training_data(sample_size=500):
Create two moon training dataset.
train_examples, train_labels = sklearn.datasets.make_moons(
n_samples=2 * sample_size, noise=0.1)
# Adjust data position slightly.
train_examples[train_labels == 0] += [-0.1, 0.2]
train_examples[train_labels == 1] += [0.1, -0.2]
return train_examples, train_labels
Explanation: The two moon dataset
Create the training and evaluation datasets from the two moon dataset.
End of explanation
def make_testing_data(x_range=DEFAULT_X_RANGE, y_range=DEFAULT_Y_RANGE, n_grid=DEFAULT_N_GRID):
Create a mesh grid in 2D space.
# testing data (mesh grid over data space)
x = np.linspace(x_range[0], x_range[1], n_grid)
y = np.linspace(y_range[0], y_range[1], n_grid)
xv, yv = np.meshgrid(x, y)
return np.stack([xv.flatten(), yv.flatten()], axis=-1)
Explanation: Evaluate the model's predictive behavior over the entire 2D input space.
End of explanation
def make_ood_data(sample_size=500, means=(2.5, -1.75), vars=(0.01, 0.01)):
return np.random.multivariate_normal(
means, cov=np.diag(vars), size=sample_size)
# Load the train, test and OOD datasets.
train_examples, train_labels = make_training_data(
sample_size=500)
test_examples = make_testing_data()
ood_examples = make_ood_data(sample_size=500)
# Visualize
pos_examples = train_examples[train_labels == 0]
neg_examples = train_examples[train_labels == 1]
plt.figure(figsize=(7, 5.5))
plt.scatter(pos_examples[:, 0], pos_examples[:, 1], c="#377eb8", alpha=0.5)
plt.scatter(neg_examples[:, 0], neg_examples[:, 1], c="#ff7f00", alpha=0.5)
plt.scatter(ood_examples[:, 0], ood_examples[:, 1], c="red", alpha=0.1)
plt.legend(["Positive", "Negative", "Out-of-Domain"])
plt.ylim(DEFAULT_Y_RANGE)
plt.xlim(DEFAULT_X_RANGE)
plt.show()
Explanation: To evaluate model uncertainty, add an out-of-domain (OOD) dataset that belongs to a third class. The model never sees these OOD examples during training.
End of explanation
#@title
class DeepResNet(tf.keras.Model):
Defines a multi-layer residual network.
def __init__(self, num_classes, num_layers=3, num_hidden=128,
dropout_rate=0.1, **classifier_kwargs):
super().__init__()
# Defines class meta data.
self.num_hidden = num_hidden
self.num_layers = num_layers
self.dropout_rate = dropout_rate
self.classifier_kwargs = classifier_kwargs
# Defines the hidden layers.
self.input_layer = tf.keras.layers.Dense(self.num_hidden, trainable=False)
self.dense_layers = [self.make_dense_layer() for _ in range(num_layers)]
# Defines the output layer.
self.classifier = self.make_output_layer(num_classes)
def call(self, inputs):
# Projects the 2d input data to high dimension.
hidden = self.input_layer(inputs)
# Computes the resnet hidden representations.
for i in range(self.num_layers):
resid = self.dense_layers[i](hidden)
resid = tf.keras.layers.Dropout(self.dropout_rate)(resid)
hidden += resid
return self.classifier(hidden)
def make_dense_layer(self):
Uses the Dense layer as the hidden layer.
return tf.keras.layers.Dense(self.num_hidden, activation="relu")
def make_output_layer(self, num_classes):
Uses the Dense layer as the output layer.
return tf.keras.layers.Dense(
num_classes, **self.classifier_kwargs)
Explanation: Here the blue and orange represent the positive and negative classes, and the red represents the OOD data. A model that quantifies the uncertainty well is expected to be confident when close to training data (i.e., $p(x_{test})$ close to 0 or 1), and be uncertain when far away from the training data regions (i.e., $p(x_{test})$ close to 0.5).
The deterministic model
Define model
Start from the (baseline) deterministic model: a multi-layer residual network (ResNet) with dropout regularization.
End of explanation
resnet_config = dict(num_classes=2, num_layers=6, num_hidden=128)
resnet_model = DeepResNet(**resnet_config)
resnet_model.build((None, 2))
resnet_model.summary()
Explanation: This tutorial uses a 6-layer ResNet with 128 hidden units.
End of explanation
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metrics = tf.keras.metrics.SparseCategoricalAccuracy(),
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
train_config = dict(loss=loss, metrics=metrics, optimizer=optimizer)
Explanation: Train model
Configure the training parameters to use SparseCategoricalCrossentropy as the loss function and the Adam optimizer.
End of explanation
fit_config = dict(batch_size=128, epochs=100)
resnet_model.compile(**train_config)
resnet_model.fit(train_examples, train_labels, **fit_config)
Explanation: Train the model for 100 epochs with batch size 128.
End of explanation
#@title
def plot_uncertainty_surface(test_uncertainty, ax, cmap=None):
Visualizes the 2D uncertainty surface.
For simplicity, assume these objects already exist in the memory:
test_examples: Array of test examples, shape (num_test, 2).
train_labels: Array of train labels, shape (num_train, ).
train_examples: Array of train examples, shape (num_train, 2).
Arguments:
test_uncertainty: Array of uncertainty scores, shape (num_test,).
ax: A matplotlib Axes object that specifies a matplotlib figure.
cmap: A matplotlib colormap object specifying the palette of the
predictive surface.
Returns:
pcm: A matplotlib PathCollection object that contains the palette
information of the uncertainty plot.
# Normalize uncertainty for better visualization.
test_uncertainty = test_uncertainty / np.max(test_uncertainty)
# Set view limits.
ax.set_ylim(DEFAULT_Y_RANGE)
ax.set_xlim(DEFAULT_X_RANGE)
# Plot normalized uncertainty surface.
pcm = ax.imshow(
np.reshape(test_uncertainty, [DEFAULT_N_GRID, DEFAULT_N_GRID]),
cmap=cmap,
origin="lower",
extent=DEFAULT_X_RANGE + DEFAULT_Y_RANGE,
vmin=DEFAULT_NORM.vmin,
vmax=DEFAULT_NORM.vmax,
interpolation='bicubic',
aspect='auto')
# Plot training data.
ax.scatter(train_examples[:, 0], train_examples[:, 1],
c=train_labels, cmap=DEFAULT_CMAP, alpha=0.5)
ax.scatter(ood_examples[:, 0], ood_examples[:, 1], c="red", alpha=0.1)
return pcm
Explanation: Visualize uncertainty
End of explanation
resnet_logits = resnet_model(test_examples)
resnet_probs = tf.nn.softmax(resnet_logits, axis=-1)[:, 0] # Take the probability for class 0.
_, ax = plt.subplots(figsize=(7, 5.5))
pcm = plot_uncertainty_surface(resnet_probs, ax=ax)
plt.colorbar(pcm, ax=ax)
plt.title("Class Probability, Deterministic Model")
plt.show()
Explanation: Now visualize the predictions of the deterministic model. First plot the class probability:
$$p(x) = softmax(logit(x))$$
End of explanation
resnet_uncertainty = resnet_probs * (1 - resnet_probs)
_, ax = plt.subplots(figsize=(7, 5.5))
pcm = plot_uncertainty_surface(resnet_uncertainty, ax=ax)
plt.colorbar(pcm, ax=ax)
plt.title("Predictive Uncertainty, Deterministic Model")
plt.show()
Explanation: In this plot, the yellow and purple are the predictive probabilities for the two classes. The deterministic model did a good job in classifying the two known classes (blue and orange) with a nonlinear decision boundary. However, it is not distance-aware, and classified the never-seen red out-of-domain (OOD) examples confidently as the orange class.
Visualize the model uncertainty by computing the predictive variance:
$$var(x) = p(x) * (1 - p(x))$$
End of explanation
dense = tf.keras.layers.Dense(units=10)
dense = nlp_layers.SpectralNormalization(dense, norm_multiplier=0.9)
Explanation: In this plot, the yellow indicates high uncertainty, and the purple indicates low uncertainty. A deterministic ResNet's uncertainty depends only on the test examples' distance from the decision boundary. This leads the model to be over-confident when out of the training domain. The next section shows how SNGP behaves differently on this dataset.
The SNGP model
Define SNGP model
Let's now implement the SNGP model. Both the SNGP components, SpectralNormalization and RandomFeatureGaussianProcess, are available at the tensorflow_model's built-in layers.
Let's look at these two components in more detail. (You can also jump to the The SNGP model section to see how the full model is implemented.)
Spectral Normalization wrapper
SpectralNormalization is a Keras layer wrapper. It can be applied to an existing Dense layer like this:
End of explanation
batch_size = 32
input_dim = 1024
num_classes = 10
gp_layer = nlp_layers.RandomFeatureGaussianProcess(units=num_classes,
num_inducing=1024,
normalize_input=False,
scale_random_features=True,
gp_cov_momentum=-1)
Explanation: Spectral normalization regularizes the hidden weight $W$ by gradually guiding its spectral norm (i.e., the largest eigenvalue of $W$) toward the target value norm_multiplier.
Note: Usually it is preferable to set norm_multiplier to a value smaller than 1. However in practice, it can also be relaxed to a larger value to ensure the deep network has enough expressive power.
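A quick way to sanity-check this behavior is to call the wrapped layer a few times in training mode and inspect the largest singular value of its kernel (a hedged sketch; it assumes the wrapper stores the wrapped layer as .layer and applies one normalization update per training-mode call):
python
sn_dense = nlp_layers.SpectralNormalization(
    tf.keras.layers.Dense(units=10), norm_multiplier=0.9)

x = tf.random.normal(shape=(8, 16))
for _ in range(10):
    _ = sn_dense(x, training=True)  # each training call runs one power-iteration update

# The largest singular value of the normalized kernel should approach 0.9.
s = tf.linalg.svd(sn_dense.layer.kernel, compute_uv=False)
print('largest singular value:', float(s[0]))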
The Gaussian Process (GP) layer
RandomFeatureGaussianProcess implements a random-feature based approximation to a Gaussian process model that is end-to-end trainable with a deep neural network. Under the hood, the Gaussian process layer implements a two-layer network:
$$logits(x) = \Phi(x) \beta, \quad \Phi(x)=\sqrt{\frac{2}{M}} * cos(Wx + b)$$
Here $x$ is the input, and $W$ and $b$ are frozen weights initialized randomly from Gaussian and uniform distributions, respectively. (Therefore $\Phi(x)$ are called "random features".) $\beta$ is the learnable kernel weight similar to that of a Dense layer.
End of explanation
embedding = tf.random.normal(shape=(batch_size, input_dim))
logits, covmat = gp_layer(embedding)
Explanation: The main parameters of the GP layers are:
units: The dimension of the output logits.
num_inducing: The dimension $M$ of the hidden weight $W$. Default to 1024.
normalize_input: Whether to apply layer normalization to the input $x$.
scale_random_features: Whether to apply the scale $\sqrt{2/M}$ to the hidden output.
Note: For a deep neural network that is sensitive to the learning rate (e.g., ResNet-50 and ResNet-110), it is generally recommended to set normalize_input=True to stabilize training, and set scale_random_features=False to prevent the learning rate from being modified in unexpected ways when passing through the GP layer.
gp_cov_momentum controls how the model covariance is computed. If set to a positive value (e.g., 0.999), the covariance matrix is computed using the momentum-based moving average update (similar to batch normalization). If set to -1, the covariance matrix is updated without momentum.
Note: The momentum-based update method can be sensitive to batch size. Therefore it is generally recommended to set gp_cov_momentum=-1 to compute the covariance exactly. For this to work properly, the covariance matrix estimator needs to be reset at the beginning of a new epoch in order to avoid counting the same data twice. For RandomFeatureGaussianProcess, this can be done by calling its reset_covariance_matrix(). The next section shows an easy implementation of this using Keras' built-in API.
Given a batch input with shape (batch_size, input_dim), the GP layer returns a logits tensor (shape (batch_size, num_classes)) for prediction, and also covmat tensor (shape (batch_size, batch_size)) which is the posterior covariance matrix of the batch logits.
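The diagonal of covmat is the per-example predictive variance, which is exactly what the uncertainty computation later in this tutorial extracts. Once the cell above has run, a quick shape check looks like this:
python
print('logits shape: ', logits.shape)   # (batch_size, num_classes)
print('covmat shape: ', covmat.shape)   # (batch_size, batch_size)

# Per-example variance of the predictive logits.
variance = tf.linalg.diag_part(covmat)
print('variance shape:', variance.shape)  # (batch_size,)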
End of explanation
class DeepResNetSNGP(DeepResNet):
def __init__(self, spec_norm_bound=0.9, **kwargs):
self.spec_norm_bound = spec_norm_bound
super().__init__(**kwargs)
def make_dense_layer(self):
Applies spectral normalization to the hidden layer.
dense_layer = super().make_dense_layer()
return nlp_layers.SpectralNormalization(
dense_layer, norm_multiplier=self.spec_norm_bound)
def make_output_layer(self, num_classes):
Uses Gaussian process as the output layer.
return nlp_layers.RandomFeatureGaussianProcess(
num_classes,
gp_cov_momentum=-1,
**self.classifier_kwargs)
def call(self, inputs, training=False, return_covmat=False):
# Gets logits and covariance matrix from GP layer.
logits, covmat = super().call(inputs)
# Returns only logits during training.
if not training and return_covmat:
return logits, covmat
return logits
Explanation: Note: Notice that under this implementation of the SNGP model, the predictive logits $logit(x_{test})$ for all classes share the same covariance matrix $var(x_{test})$, which describes the distance between $x_{test}$ from the training data.
Theoretically, it is possible to extend the algorithm to compute different variance values for different classes (as introduced in the original SNGP paper). However, this is difficult to scale to problems with large output spaces (e.g., ImageNet or language modeling).
<a name="full-sngp-model"></a>
The full SNGP model
Given the base class DeepResNet, the SNGP model can be implemented easily by modifying the residual network's hidden and output layers. For compatibility with Keras model.fit() API, also modify the model's call() method so it only outputs logits during training.
End of explanation
resnet_config
sngp_model = DeepResNetSNGP(**resnet_config)
sngp_model.build((None, 2))
sngp_model.summary()
Explanation: Use the same architecture as the deterministic model.
End of explanation
class ResetCovarianceCallback(tf.keras.callbacks.Callback):
def on_epoch_begin(self, epoch, logs=None):
Resets covariance matrix at the beginning of the epoch.
if epoch > 0:
self.model.classifier.reset_covariance_matrix()
Explanation: <a name="covariance-reset-callback"></a>
Implement a Keras callback to reset the covariance matrix at the beginning of a new epoch.
End of explanation
class DeepResNetSNGPWithCovReset(DeepResNetSNGP):
def fit(self, *args, **kwargs):
Adds ResetCovarianceCallback to model callbacks.
kwargs["callbacks"] = list(kwargs.get("callbacks", []))
kwargs["callbacks"].append(ResetCovarianceCallback())
return super().fit(*args, **kwargs)
Explanation: Add this callback to the DeepResNetSNGP model class.
End of explanation
sngp_model = DeepResNetSNGPWithCovReset(**resnet_config)
sngp_model.compile(**train_config)
sngp_model.fit(train_examples, train_labels, **fit_config)
Explanation: Train model
Use tf.keras.model.fit to train the model.
End of explanation
sngp_logits, sngp_covmat = sngp_model(test_examples, return_covmat=True)
sngp_variance = tf.linalg.diag_part(sngp_covmat)[:, None]
Explanation: Visualize uncertainty
First compute the predictive logits and variances.
End of explanation
sngp_logits_adjusted = sngp_logits / tf.sqrt(1. + (np.pi / 8.) * sngp_variance)
sngp_probs = tf.nn.softmax(sngp_logits_adjusted, axis=-1)[:, 0]
Explanation: <a name="mean-field-logits"></a>
Now compute the posterior predictive probability. The classic method for computing the predictive probability of a probabilistic model is to use Monte Carlo sampling, i.e.,
$$E(p(x)) = \frac{1}{M} \sum_{m=1}^M softmax(logit_m(x)), $$
where $M$ is the sample size, and $logit_m(x)$ are random samples from the SNGP posterior $MultivariateNormal$(sngp_logits, sngp_covmat). However, this approach can be slow for latency-sensitive applications such as autonomous driving or real-time bidding. Instead, we can approximate $E(p(x))$ using the mean-field method:
$$E(p(x)) \approx softmax(\frac{logit(x)}{\sqrt{1+ \lambda * \sigma^2(x)}})$$
where $\sigma^2(x)$ is the SNGP variance, and $\lambda$ is often chosen as $\pi/8$ or $3/\pi^2$.
End of explanation
def compute_posterior_mean_probability(logits, covmat, lambda_param=np.pi / 8.):
# Computes uncertainty-adjusted logits using the built-in method.
logits_adjusted = nlp_layers.gaussian_process.mean_field_logits(
logits, covmat, mean_field_factor=lambda_param)
return tf.nn.softmax(logits_adjusted, axis=-1)[:, 0]
sngp_logits, sngp_covmat = sngp_model(test_examples, return_covmat=True)
sngp_probs = compute_posterior_mean_probability(sngp_logits, sngp_covmat)
Explanation: Note: Instead of fixing $\lambda$ to a fixed value, you can also treat it as a hyperparameter, and tune it to optimize the model's calibration performance. This is known as temperature scaling in the deep learning uncertainty literature.
This mean-field method is implemented as a built-in function layers.gaussian_process.mean_field_logits:
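As the note above suggests, the mean-field factor can also be tuned like a temperature. A hedged sketch of that tuning loop (val_logits, val_covmat, and val_labels are assumed held-out quantities you would compute yourself):
python
lambda_grid = [1e-3, 3. / np.pi**2, np.pi / 8., 1.0]
nll_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

def heldout_nll(lam):
    # Adjust held-out logits with the candidate factor and score by NLL.
    logits_adj = nlp_layers.gaussian_process.mean_field_logits(
        val_logits, val_covmat, mean_field_factor=lam)
    probs = tf.nn.softmax(logits_adj, axis=-1)
    return float(nll_fn(val_labels, probs))

best_lambda = min(lambda_grid, key=heldout_nll)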
End of explanation
#@title
def plot_predictions(pred_probs, model_name=""):
Plot normalized class probabilities and predictive uncertainties.
# Compute predictive uncertainty.
uncertainty = pred_probs * (1. - pred_probs)
# Initialize the plot axes.
fig, axs = plt.subplots(1, 2, figsize=(14, 5))
# Plots the class probability.
pcm_0 = plot_uncertainty_surface(pred_probs, ax=axs[0])
# Plots the predictive uncertainty.
pcm_1 = plot_uncertainty_surface(uncertainty, ax=axs[1])
# Adds color bars and titles.
fig.colorbar(pcm_0, ax=axs[0])
fig.colorbar(pcm_1, ax=axs[1])
axs[0].set_title(f"Class Probability, {model_name}")
axs[1].set_title(f"(Normalized) Predictive Uncertainty, {model_name}")
plt.show()
Explanation: SNGP Summary
End of explanation
def train_and_test_sngp(train_examples, test_examples):
sngp_model = DeepResNetSNGPWithCovReset(**resnet_config)
sngp_model.compile(**train_config)
sngp_model.fit(train_examples, train_labels, verbose=0, **fit_config)
sngp_logits, sngp_covmat = sngp_model(test_examples, return_covmat=True)
sngp_probs = compute_posterior_mean_probability(sngp_logits, sngp_covmat)
return sngp_probs
sngp_probs = train_and_test_sngp(train_examples, test_examples)
Explanation: Put everything together. The entire procedure (training, evaluation and uncertainty computation) can be done in just five lines:
End of explanation
plot_predictions(sngp_probs, model_name="SNGP")
Explanation: Visualize the class probability (left) and the predictive uncertainty (right) of the SNGP model.
End of explanation
plot_predictions(resnet_probs, model_name="Deterministic")
Explanation: Remember that in the class probability plot (left), the yellow and purple are class probabilities. When close to the training data domain, SNGP correctly classifies the examples with high confidence (i.e., assigning near 0 or 1 probability). When far away from the training data, SNGP gradually becomes less confident, and its predictive probability becomes close to 0.5 while the (normalized) model uncertainty rises to 1.
Compare this to the uncertainty surface of the deterministic model:
End of explanation
num_ensemble = 10
Explanation: As mentioned earlier, a deterministic model is not distance-aware. Its uncertainty is defined by the distance of the test example from the decision boundary. This leads the model to produce overconfident predictions for the out-of-domain examples (red).
Comparison with other uncertainty approaches
This section compares the uncertainty of SNGP with Monte Carlo dropout and Deep ensemble.
Both of these methods are based on Monte Carlo averaging of multiple forward passes of deterministic models. First set the ensemble size $M$.
End of explanation
def mc_dropout_sampling(test_examples):
# Enable dropout during inference.
return resnet_model(test_examples, training=True)
# Monte Carlo dropout inference.
dropout_logit_samples = [mc_dropout_sampling(test_examples) for _ in range(num_ensemble)]
dropout_prob_samples = [tf.nn.softmax(dropout_logits, axis=-1)[:, 0] for dropout_logits in dropout_logit_samples]
dropout_probs = tf.reduce_mean(dropout_prob_samples, axis=0)
plot_predictions(dropout_probs, model_name="MC Dropout")
Explanation: Monte Carlo dropout
Given a trained neural network with Dropout layers, Monte Carlo dropout computes the mean predictive probability
$$E(p(x)) = \frac{1}{M}\sum_{m=1}^M softmax(logit_m(x))$$
by averaging over multiple Dropout-enabled forward passes ${logit_m(x)}_{m=1}^M$.
End of explanation
# Deep ensemble training
resnet_ensemble = []
for _ in range(num_ensemble):
resnet_model = DeepResNet(**resnet_config)
resnet_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
resnet_model.fit(train_examples, train_labels, verbose=0, **fit_config)
resnet_ensemble.append(resnet_model)
Explanation: Deep ensemble
Deep ensemble is a state-of-the-art (but expensive) method for deep learning uncertainty. To train a Deep ensemble, first train $M$ ensemble members.
End of explanation
# Deep ensemble inference
ensemble_logit_samples = [model(test_examples) for model in resnet_ensemble]
ensemble_prob_samples = [tf.nn.softmax(logits, axis=-1)[:, 0] for logits in ensemble_logit_samples]
ensemble_probs = tf.reduce_mean(ensemble_prob_samples, axis=0)
plot_predictions(ensemble_probs, model_name="Deep ensemble")
Explanation: Collect logits and compute the mean predictive probability $E(p(x)) = \frac{1}{M}\sum_{m=1}^M softmax(logit_m(x))$.
End of explanation |
8,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coronagraph Wedge Masks
The notebook builds on the concepts introduced in Coronagraph_Basics.ipynb. Specifically, we concentrate on the complexities involved in simulating the wedge coronagraphs.
Step1: We will start by first importing pynrc along with the obs_hci (High Contrast Imaging) class, which lives in the pynrc.obs_nircam module.
Step2: Source Definitions
In the previous notebook, we simply used the stellar_spectrum functions to create sources normalized at their observed K-Band magnitude. This time, we will utilize the source_spectrum class to generate a model fit to the known spectrophotometry. The user can find the relevant photometric data at http://vizier.u-strasbg.fr/vizier/sed/ and click download data as a VOTable.
Step3: Initialize Observation
Now we will initialize the high-contrast imaging class pynrc.obs_hci using the spectral objects and various other settings. The obs_hci object is a subclass of the more generalized NIRCam class. It implements new settings and functions specific to high-contrast imaging observations for coronagraphy and direct imaging.
For this tutorial, we want to observe these targets using the MASKLWB coronagraph in the F460M filter. All wedge coronagraphic masks such as the MASKLWB (B=bar) should be paired with the WEDGELYOT pupil element. Observations in the LW channel are most commonly observed in WINDOW mode with a 320x320 detector subarray size. Full detector sizes are also available.
The wedge coronagraphs have an additional option to specify the location along the wedge to place your point source via the bar_offset keyword. If not specified, the location is automatically chosen based on the filter. A positive value will move the source to the right when viewing in 'sci' coordinate convention. Specifying this location is a non-standard mode.
In this case, we're going to place our PSF at the narrow end of the LW bar, located at bar_offset=8 arcsec from the bar center.
Step4: Just as a reminder, information for the reference observation is stored in the attribute obs.Detector_ref, which is simply its own isolated DetectorOps class. The bar_offset value is initialized to be the same as the science observation.
Exposure Settings
Optimization of exposure settings is demonstrated in another tutorial, so we will not repeat that process here. We can assume that process was performed elsewhere to choose the BRIGHT2 pattern with 10 groups and 40 total integrations for each roll position of the science observation; the reference observation uses its own settings (4 groups and 90 integrations, following GTO Proposal 1194).
Step5: Add Planets
There are four known giant planets orbiting HR 8799. Ideally, we would like to position them at their predicted locations on the anticipated observation date. For this case, we choose a plausible observation date of November 1, 2022. To convert between $(x,y)$ and $(r,\theta)$, use the nrc_utils.xy_to_rtheta and nrc_utils.rtheta_to_xy functions.
When adding the planets, it doesn't matter too much which exoplanet model spectrum we decide to use since the spectra are still fairly unconstrained at these wavelengths. We do know roughly the planets' luminosities, so we can simply choose some reasonable model and renormalize it to the appropriate filter brightness.
There are a few exoplanet models available to pynrc (SB12, BEX, COND). Let's choose those from Spiegel & Burrows (2012).
Step6: As we can see, even with "perfect PSF subtraction" and no noise, it's difficult to make out planet e. This is primarily due to its location relative to the occulting mask reducing throughput combined with confusion of bright diffraction spots from nearby sources.
Estimated Performance
Now we are ready to determine contrast performance and sensitivities as a function of distance from the star.
Roll-Subtracted Images
First, we will create a quick simulated roll-subtracted image using the gen_roll_image method. For the selected observation date of 11/1/2022, APT shows a PA range of 84$^{\circ}$ to 96$^{\circ}$. So, we'll assume Roll 1 has PA1=85, while Roll 2 has PA2=95. In this case, "roll subtraction" simply creates two science observations at two different parallactic angles and subtracts the same reference observation from each. The two results are then de-rotated to a common PA=0 and averaged.
There is also the option to create ADI images, where the other roll position becomes the reference star by setting no_ref=True.
Contrast Curves
Next, we will cycle through a few WFE drift values to get an idea of potential predicted sensitivity curves. The calc_contrast method returns a tuple of three arrays | Python Code:
# Import the usual libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# Enable inline plotting at lower left
%matplotlib inline
Explanation: Coronagraph Wedge Masks
The notebook builds on the concepts introduced in Coronagraph_Basics.ipynb. Specifically, we concentrate on the complexities involved in simulating the wedge coronagraphs.
End of explanation
import pynrc
from pynrc import nrc_utils # Variety of useful functions and classes
from pynrc.obs_nircam import obs_hci # High-contrast imaging observation class
# Progress bar
from tqdm.auto import tqdm, trange
# Disable informational messages and only include warnings and higher
pynrc.setup_logging(level='WARN')
Explanation: We will start by first importing pynrc along with the obs_hci (High Contrast Imaging) class, which lives in the pynrc.obs_nircam module.
End of explanation
# Define 2MASS Ks bandpass and source information
bp_k = pynrc.bp_2mass('k')
# Science source, dist, age, sptype, Teff, [Fe/H], log_g, mag, band
args_sources = [('HR 8799', 39.0, 30, 'F0V', 7430, -0.47, 4.35, 5.24, bp_k)]
# References source, sptype, Teff, [Fe/H], log_g, mag, band
ref_sources = [('HD 220657', 'F8III', 5888, -0.01, 3.22, 3.04, bp_k)]
# Directory housing VOTables
# http://vizier.u-strasbg.fr/vizier/sed/
votdir = 'votables/'
# Fit spectrum to SED photometry
i=0
name_sci, dist_sci, age, spt_sci, Teff_sci, feh_sci, logg_sci, mag_sci, bp_sci = args_sources[i]
vot = votdir + name_sci.replace(' ' ,'') + '.vot'
args = (name_sci, spt_sci, mag_sci, bp_sci, vot)
kwargs = {'Teff':Teff_sci, 'metallicity':feh_sci, 'log_g':logg_sci}
src = nrc_utils.source_spectrum(*args, **kwargs)
src.fit_SED(use_err=False, robust=False, wlim=[1,5])
# Final source spectrum
sp_sci = src.sp_model
# Do the same for the reference source
name_ref, spt_ref, Teff_ref, feh_ref, logg_ref, mag_ref, bp_ref = ref_sources[i]
vot = votdir + name_ref.replace(' ' ,'') + '.vot'
args = (name_ref, spt_ref, mag_ref, bp_ref, vot)
kwargs = {'Teff':Teff_ref, 'metallicity':feh_ref, 'log_g':logg_ref}
ref = nrc_utils.source_spectrum(*args, **kwargs)
ref.fit_SED(use_err=False, robust=False, wlim=[0.5,10])
# Final reference spectrum
sp_ref = ref.sp_model
# Plot spectra
fig, axes = plt.subplots(1,2, figsize=(13,4.5))
src.plot_SED(xr=[0.3,10], ax=axes[0])
ref.plot_SED(xr=[0.3,10], ax=axes[1])
axes[0].set_title('Science Spectra -- {} ({})'.format(src.name, spt_sci))
axes[1].set_title('Reference Spectra -- {} ({})'.format(ref.name, spt_ref))
fig.tight_layout()
# Plot the two spectra
fig, ax = plt.subplots(1,1, figsize=(8,5))
xr = [2.5,5.5]
for sp in [sp_sci, sp_ref]:
w = sp.wave / 1e4
ind = (w>=xr[0]) & (w<=xr[1])
sp.convert('Jy')
f = sp.flux / np.interp(4.0, w, sp.flux)
ax.semilogy(w[ind], f[ind], lw=1.5, label=sp.name)
ax.set_ylabel('Flux (Jy) normalized at 4 $\mu m$')
sp.convert('flam')
ax.set_xlim(xr)
ax.set_xlabel(r'Wavelength ($\mu m$)')
ax.set_title('Spectral Sources')
# Overplot Filter Bandpass
bp = pynrc.read_filter('F460M', 'WEDGELYOT', 'MASKLWB')
ax2 = ax.twinx()
ax2.plot(bp.wave/1e4, bp.throughput, color='C2', label=bp.name+' Bandpass')
ax2.set_ylim([0,0.8])
ax2.set_xlim(xr)
ax2.set_ylabel('Bandpass Throughput')
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
fig.tight_layout()
Explanation: Source Definitions
In the previous notebook, we simply used the stellar_spectrum functions to create sources normalized at their observed K-Band magnitude. This time, we will utilize the source_spectrum class to generate a model fit to the known spectrophotometry. The user can find the relevant photometric data at http://vizier.u-strasbg.fr/vizier/sed/ and click download data as a VOTable.
End of explanation
filt, mask, pupil = ('F460M', 'MASKLWB', 'WEDGELYOT')
wind_mode, subsize = ('WINDOW', 320)
fov_pix, oversample = (321, 2)
obs = pynrc.obs_hci(sp_sci, dist_sci, sp_ref=sp_ref, bar_offset=8, use_ap_info=False,
filter=filt, image_mask=mask, pupil_mask=pupil,
wind_mode=wind_mode, xpix=subsize, ypix=subsize,
fov_pix=fov_pix, oversample=oversample, large_grid=True)
Explanation: Initialize Observation
Now we will initialize the high-contrast imaging class pynrc.obs_hci using the spectral objects and various other settings. The obs_hci object is a subclass of the more generalized NIRCam class. It implements new settings and functions specific to high-contrast imaging observations for coronagraphy and direct imaging.
For this tutorial, we want to observe these targets using the MASKLWB coronagraph in the F460M filter. All wedge coronagraphic masks such as the MASKLWB (B=bar) should be paired with the WEDGELYOT pupil element. Observations in the LW channel are most commonly observed in WINDOW mode with a 320x320 detector subarray size. Full detector sizes are also available.
The wedge coronagraphs have an additional option to specify the location along the wedge to place your point source via the bar_offset keyword. If not specified, the location is automatically chosen based on the filter. A positive value will move the source to the right when viewing in 'sci' coordinate convention. Specifying this location is a non-standard mode.
In this case, we're going to place our PSF at the narrow end of the LW bar, located at bar_offset=8 arcsec from the bar center.
End of explanation
# Update both the science and reference observations
# These numbers come from GTO Proposal 1194
obs.update_detectors(read_mode='BRIGHT2', ngroup=10, nint=40, verbose=True)
obs.gen_ref_det(read_mode='BRIGHT2', ngroup=4, nint=90)
Explanation: Just as a reminder, information for the reference observation is stored in the attribute obs.Detector_ref, which is simply its own isolated DetectorOps class. The bar_offset value is initialized to be the same as the science observation.
Exposure Settings
Optimization of exposure settings is demonstrated in another tutorial, so we will not repeat that process here. We can assume that process was performed elsewhere to choose the BRIGHT2 pattern with 10 groups and 40 total integrations for each roll position of the science observation; the reference observation uses its own settings (4 groups and 90 integrations, following GTO Proposal 1194).
End of explanation
# Projected locations for date 11/01/2022
# These are preliminary positions, but within constrained orbital parameters
loc_list = [(-1.625, 0.564), (0.319, 0.886), (0.588, -0.384), (0.249, 0.294)]
# Estimated magnitudes within F444W filter
pmags = [16.0, 15.0, 14.6, 14.7]
# Add planet information to observation class.
# These are stored in obs.planets.
# Can be cleared using obs.delete_planets().
obs.delete_planets()
for i, loc in enumerate(loc_list):
obs.add_planet(model='SB12', mass=10, entropy=13, age=age, xy=loc, runits='arcsec',
renorm_args=(pmags[i], 'vegamag', obs.bandpass))
# Generate and plot a noiseless slope image to verify orientation
PA1 = 85 # Telescope V3 PA
PA_offset = -1*PA1 # Image field is rotated opposite direction
im_planets = obs.gen_planets_image(PA_offset=PA_offset, return_oversample=False)
from matplotlib.patches import Circle
from pynrc.nrc_utils import plotAxes
from pynrc.obs_nircam import get_cen_offsets
fig, ax = plt.subplots(figsize=(6,6))
xasec = obs.det_info['xpix'] * obs.pixelscale
yasec = obs.det_info['ypix'] * obs.pixelscale
extent = [-xasec/2, xasec/2, -yasec/2, yasec/2]
xylim = 3
vmin = 0
vmax = 0.5*im_planets.max()
ax.imshow(im_planets, extent=extent, vmin=vmin, vmax=vmax)
# Overlay the coronagraphic mask
detid = obs.Detector.detid
im_mask = obs.mask_images['DETSAMP']
# Do some masked transparency overlays
masked = np.ma.masked_where(im_mask>0.95*im_mask.max(), im_mask)
ax.imshow(1-masked, extent=extent, alpha=0.3, cmap='Greys_r', vmin=-0.5)
for loc in loc_list:
xc, yc = get_cen_offsets(obs, idl_offset=loc, PA_offset=PA_offset)
circle = Circle((xc,yc), radius=xylim/15., alpha=0.7, lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
xlim = ylim = np.array([-1,1])*xylim
xlim = xlim + obs.bar_offset
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel('Arcsec')
ax.set_ylabel('Arcsec')
ax.set_title('{} planets -- {} {}'.format(sp_sci.name, obs.filter, obs.image_mask))
color = 'grey'
ax.tick_params(axis='both', color=color, which='both')
for k in ax.spines.keys():
ax.spines[k].set_color(color)
plotAxes(ax, width=1, headwidth=5, alength=0.15, angle=PA_offset,
position=(0.1,0.1), label1='E', label2='N')
fig.tight_layout()
Explanation: Add Planets
There are four known giant planets orbiting HR 8799. Ideally, we would like to position them at their predicted locations on the anticipated observation date. For this case, we choose a plausible observation date of November 1, 2022. To convert between $(x,y)$ and $(r,\theta)$, use the nrc_utils.xy_to_rtheta and nrc_utils.rtheta_to_xy functions.
When adding the planets, it doesn't matter too much which exoplanet model spectrum we decide to use since the spectra are still fairly unconstrained at these wavelengths. We do know roughly the planets' luminosities, so we can simply choose some reasonable model and renormalize it to the appropriate filter brightness.
There are a few exoplanet models available to pynrc (SB12, BEX, COND). Let's choose those from Spiegel & Burrows (2012).
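As a quick illustration of the coordinate helpers mentioned above, the companion offsets can be converted to separations and position angles (a hedged sketch; it assumes nrc_utils.xy_to_rtheta(x, y) returns (r, theta) with theta in degrees):
python
for (x, y), mag in zip(loc_list, pmags):
    # Assumed return order: (separation, position angle).
    r, th = nrc_utils.xy_to_rtheta(x, y)
    print('r = {:.3f} arcsec, theta = {:+7.1f} deg, m_F444W = {:.1f}'.format(r, th, mag))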
End of explanation
# Cycle through a few WFE drift values
wfe_list = [0,5,10]
# PA values for each roll
PA1, PA2 = (85,95)
# A dictionary of HDULists
hdul_dict = {}
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
# Assume perfect pointing (i.e., xyoff_*** = (0,0))
# to approximate results of advanced post-processing
hdulist = obs.gen_roll_image(PA1=PA1, PA2=PA2,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift,
xyoff_roll1=(0,0), xyoff_roll2=(0,0), xyoff_ref=(0,0))
hdul_dict[wfe_drift] = hdulist
from pynrc.nb_funcs import plot_hdulist
from matplotlib.patches import Circle
fig, axes = plt.subplots(1,3, figsize=(14,4.3))
xylim = 2.5
xlim = ylim = np.array([-1,1])*xylim
for j, wfe_drift in enumerate(wfe_list):
ax = axes[j]
hdul = hdul_dict[wfe_drift]
plot_hdulist(hdul, xr=xlim, yr=ylim, ax=ax, vmin=0, vmax=2)
# Location of planet
for loc in loc_list:
circle = Circle(loc, radius=xylim/15., lw=1, edgecolor='red', facecolor='none')
ax.add_artist(circle)
ax.set_title('$\Delta$WFE = {:.0f} nm'.format(wfe_drift))
nrc_utils.plotAxes(ax, width=1, headwidth=5, alength=0.15, position=(0.9,0.1), label1='E', label2='N')
fig.suptitle('{} -- {} {}'.format(name_sci, obs.filter, obs.image_mask), fontsize=14)
fig.tight_layout()
fig.subplots_adjust(top=0.85)
nsig = 5
roll_angle = np.abs(PA2 - PA1)
curves = []
for wfe_drift in tqdm(wfe_list):
# Assume drift between Roll1 and Roll2 is 2 nm WFE
wfe_roll_drift = 0 if wfe_drift<2 else 2
# Generate contrast curves
result = obs.calc_contrast(roll_angle=roll_angle, nsig=nsig,
wfe_ref_drift=wfe_drift, wfe_roll_drift=wfe_roll_drift,
xyoff_roll1=(0,0), xyoff_roll2=(0,0), xyoff_ref=(0,0))
curves.append(result)
from pynrc.nb_funcs import plot_contrasts, plot_planet_patches, plot_contrasts_mjup, update_yscale
import matplotlib.patches as mpatches
# fig, ax = plt.subplots(figsize=(8,5))
fig, axes = plt.subplots(1,2, figsize=(14,4.5))
xr=[0,5]
yr=[24,8]
# 1a. Plot contrast curves and set x/y limits
ax = axes[0]
ax, ax2, ax3 = plot_contrasts(curves, nsig, wfe_list, obs=obs,
xr=xr, yr=yr, ax=ax, return_axes=True)
# 1b. Plot the locations of exoplanet companions
label = 'Companions ({})'.format(filt)
planet_dist = [np.sqrt(x**2+y**2) for x,y in loc_list]
ax.plot(planet_dist, pmags, marker='o', ls='None', label=label, color='k', zorder=10)
# 1c. Plot Spiegel & Burrows (2012) exoplanet fluxes (Hot Start)
plot_planet_patches(ax, obs, age=age, entropy=13, av_vals=None)
ax.legend(ncol=2)
# 2. Plot in terms of MJup using COND models
ax = axes[1]
ax1, ax2, ax3 = plot_contrasts_mjup(curves, nsig, wfe_list, obs=obs, age=age,
ax=ax, twin_ax=True, xr=xr, yr=None, return_axes=True)
yr = [0.03,100]
for xval in planet_dist:
ax.plot([xval,xval],yr, lw=1, ls='--', color='k', alpha=0.7)
update_yscale(ax1, 'log', ylim=yr)
yr_temp = np.array(ax1.get_ylim()) * 318.0
update_yscale(ax2, 'log', ylim=yr_temp)
ax.legend(loc='upper right', title='BEX ({:.0f} Myr)'.format(age))
fig.suptitle('{} ({} + {})'.format(name_sci, obs.filter, obs.image_mask), fontsize=16)
fig.tight_layout()
fig.subplots_adjust(top=0.85, bottom=0.1 , left=0.05, right=0.97)
Explanation: As we can see, even with "perfect PSF subtraction" and no noise, it's difficult to make out planet e. This is primarily due to its location relative to the occulting mask reducing throughput combined with confusion of bright diffraction spots from nearby sources.
Estimated Performance
Now we are ready to determine contrast performance and sensitivities as a function of distance from the star.
Roll-Subtracted Images
First, we will create a quick simulated roll-subtracted image using the gen_roll_image method. For the selected observation date of 11/1/2022, APT shows a PA range of 84$^{\circ}$ to 96$^{\circ}$. So, we'll assume Roll 1 has PA1=85, while Roll 2 has PA2=95. In this case, "roll subtraction" simply creates two science observations at two different parallactic angles and subtracts the same reference observation from each. The two results are then de-rotated to a common PA=0 and averaged.
There is also the option to create ADI images, where the other roll position becomes the reference star by setting no_ref=True.
Contrast Curves
Next, we will cycle through a few WFE drift values to get an idea of potential predicted sensitivity curves. The calc_contrast method returns a tuple of three arrays:
1. The radius in arcsec.
2. The n-sigma contrast.
3. The n-sigma magnitude sensitivity limit (vega mag).
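Each element of the curves list built in the cell above is one such three-array tuple, so a single curve can be unpacked and plotted directly:
python
rr, contrast, sens_mag = curves[0]  # result for the first WFE drift value (0 nm)

fig, ax = plt.subplots(figsize=(6,4))
ax.semilogy(rr, contrast)
ax.set_xlabel('Separation (arcsec)')
ax.set_ylabel('{}-sigma contrast'.format(nsig))
ax.set_title('Contrast curve, $\Delta$WFE = 0 nm')
fig.tight_layout()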
End of explanation |
8,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
Step1: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE
Step2: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A
Step3: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
Step4: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
Step5: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
Step7: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks
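For orientation, a hedged sketch of what such an implementation could look like, assuming model.forward returns (scores, cache) and model.backward returns (dX, grads) as described above:
python
def compute_saliency_maps(X, y, model):
    # Forward pass in test mode to get unnormalized class scores.
    scores, cache = model.forward(X, mode='test')

    # Upstream gradient: 1.0 at each example's correct-class score.
    dscores = np.zeros_like(scores)
    dscores[np.arange(X.shape[0]), y] = 1.0

    # Backpropagate to the image and collapse the channel axis.
    dX, _ = model.backward(dscores, cache)
    return np.max(np.abs(dX), axis=1)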
Step8: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
Step10: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
Step11: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image. | Python Code:
# As usual, a bit of setup
import time, os, json
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
from cs231n.classifiers.pretrained_cnn import PretrainedCNN
from cs231n.data_utils import load_tiny_imagenet
from cs231n.image_utils import blur_image, deprocess_image
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
End of explanation
data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True)
Explanation: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.
End of explanation
for i, names in enumerate(data['class_names']):
    print i, ' '.join('"%s"' % name for name in names)
Explanation: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:
End of explanation
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
    train_idxs, = np.nonzero(data['y_train'] == class_idx)
    train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
    for j, train_idx in enumerate(train_idxs):
        img = deprocess_image(data['X_train'][train_idx], data['mean_image'])
        plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
        if j == 0:
            plt.title(data['class_names'][class_idx][0])
        plt.imshow(img)
        plt.gca().axis('off')
plt.show()
Explanation: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
End of explanation
model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')
Explanation: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
End of explanation
batch_size = 100
# Test the model on training data
mask = np.random.randint(data['X_train'].shape[0], size=batch_size)
X, y = data['X_train'][mask], data['y_train'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Training accuracy: ', (y_pred == y).mean()
# Test the model on validation data
mask = np.random.randint(data['X_val'].shape[0], size=batch_size)
X, y = data['X_val'][mask], data['y_val'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Validation accuracy: ', (y_pred == y).mean()
Explanation: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
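If you want a less noisy estimate than a single random batch, a minimal sketch (assuming the same data and model variables as the cell above) accumulates accuracy over the whole validation split:

```python
# a sketch, not part of the assignment: full validation accuracy in
# minibatches of 100, reusing the model.loss API shown above
num_correct = 0
N = data['X_val'].shape[0]
for start in xrange(0, N, 100):
    X_batch = data['X_val'][start:start+100]
    y_batch = data['y_val'][start:start+100]
    num_correct += (model.loss(X_batch).argmax(axis=1) == y_batch).sum()
print 'Full validation accuracy: ', float(num_correct) / N
```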
End of explanation
def compute_saliency_maps(X, y, model):
    """
    Compute a class saliency map using the model for images X and labels y.

    Input:
    - X: Input images, of shape (N, 3, H, W)
    - y: Labels for X, of shape (N,)
    - model: A PretrainedCNN that will be used to compute the saliency map.

    Returns:
    - saliency: An array of shape (N, H, W) giving the saliency maps for the
      input images.
    """
    saliency = None
    ##############################################################################
    # TODO: Implement this function. You should use the forward and backward    #
    # methods of the PretrainedCNN class, and compute gradients with respect to #
    # the unnormalized class score of the ground-truth classes in y.            #
    ##############################################################################
    N, C, H, W = X.shape
    # forward pass in test mode to get unnormalized class scores
    scores, cache = model.forward(X, mode='test')
    # backprop a one-hot upstream gradient on the ground-truth class scores
    dscores = np.zeros_like(scores)
    dscores[np.arange(N), y] = 1.0
    dX, grads = model.backward(dscores, cache)
    # saliency = np.abs(np.sum(dX, axis=1))
    saliency = np.amax(np.abs(dX), axis=1)
    ##############################################################################
    #                             END OF YOUR CODE                              #
    ##############################################################################
    return saliency
Explanation: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
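The image gradient dX has one value per color channel; the implementation above collapses it to a single (H, W) map per image by taking the maximum absolute value over channels. A tiny self-contained numpy sketch of just that reduction:

```python
import numpy as np
dX = np.random.randn(2, 3, 4, 4)        # stand-in for a real image gradient
saliency = np.amax(np.abs(dX), axis=1)  # max |gradient| over color channels
print saliency.shape                    # (2, 4, 4)
```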
End of explanation
def show_saliency_maps(mask):
    mask = np.asarray(mask)
    X = data['X_val'][mask]
    y = data['y_val'][mask]
    saliency = compute_saliency_maps(X, y, model)
    for i in xrange(mask.size):
        # top row: the original (deprocessed) images
        plt.subplot(2, mask.size, i + 1)
        plt.imshow(deprocess_image(X[i], data['mean_image']))
        plt.axis('off')
        plt.title(data['class_names'][y[i]][0])
        # bottom row: the corresponding saliency maps
        plt.subplot(2, mask.size, mask.size + i + 1)
        plt.title(mask[i])
        plt.imshow(saliency[i])
        plt.axis('off')
    plt.gcf().set_size_inches(10, 4)
    plt.show()
# Show some random images
mask = np.random.randint(data['X_val'].shape[0], size=5)
show_saliency_maps(mask)
# These are some cherry-picked images that should give good results
show_saliency_maps([128, 3225, 2417, 1640, 4619])
Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
End of explanation
def make_fooling_image(X, target_y, model):
    """
    Generate a fooling image that is close to X, but that the model classifies
    as target_y.

    Inputs:
    - X: Input image, of shape (1, 3, 64, 64)
    - target_y: An integer in the range [0, 100)
    - model: A PretrainedCNN

    Returns:
    - X_fooling: An image that is close to X, but that is classified as
      target_y by the model.
    """
    X_fooling = X.copy()
    ##############################################################################
    # TODO: Generate a fooling image X_fooling that the model will classify as  #
    # the class target_y. Use gradient ascent on the target class score, using  #
    # the model.forward method to compute scores and the model.backward method  #
    # to compute image gradients.                                               #
    #                                                                           #
    # HINT: For most examples, you should be able to generate a fooling image   #
    # in fewer than 100 iterations of gradient ascent.                          #
    ##############################################################################
    # Ref: https://github.com/cthorey/CS231
    (N, C, H, W) = X_fooling.shape
    for i in range(200):
        scores, cache = model.forward(X_fooling, mode='test')
        # backprop a one-hot upstream gradient on the target class score
        dscores = np.zeros_like(scores)
        dscores[np.arange(N), target_y] = 1.0
        dX, grads = model.backward(dscores, cache)
        # Perform gradient ascent over the image
        X_fooling += 100 * dX
        y_pred = model.loss(X_fooling).argmax(axis=1)
        if y_pred == target_y:
            print 'Done in %d iterations' % (i + 1)
            break
    ##############################################################################
    #                             END OF YOUR CODE                              #
    ##############################################################################
    return X_fooling
Explanation: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
End of explanation
# Find a correctly classified validation image
while True:
    i = np.random.randint(data['X_val'].shape[0])
    X = data['X_val'][i:i+1]
    y = data['y_val'][i:i+1]
    y_pred = model.loss(X)[0].argmax()
    if y_pred == y: break
target_y = 67
X_fooling = make_fooling_image(X, target_y, model)
# Make sure that X_fooling is classified as target_y
scores = model.loss(X_fooling)
assert scores[0].argmax() == target_y, 'The network is not fooled!'
# Show original image, fooling image, and difference
plt.subplot(1, 3, 1)
plt.imshow(deprocess_image(X, data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y][0])
plt.subplot(1, 3, 2)
plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))
plt.title(data['class_names'][target_y][0])
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
plt.imshow(deprocess_image(X - X_fooling, data['mean_image']))
plt.axis('off')
plt.show()
Explanation: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.
End of explanation |
8,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple Routes Analysis
In this section, we are trying to answer a very interesting question
Step1: Generate Random Routes
In order to accomplish this goal, we need to have a function that generates random points within some geospatial region. Hence the function get_random_point_in_polygon is created
Step2: Now we simply copy the text above and go to https
Step4: Use the web UI
Copy the above result and paste it in the web UI at /#/Main/RouteLab
Then, select a time interval and a date interval, and click query data. You may go to the RethinkDB Web UI to make sure your query is actually getting executed.
When it's done, simply click the COPY TO CLIPBOARD button to copy the result.
Then create a test.json file and store it in the same directory as this file.
Analyzing data
First load the json, and build an appropriate dataframe for it.
Step5: Be Bold and try 100 routes | Python Code:
## import system module
import json
import rethinkdb as r
import time
import datetime as dt
import asyncio
from shapely.geometry import Point, Polygon
import random
import pandas as pd
import os
import matplotlib.pyplot as plt
## import custom module
from streettraffic.server import TrafficServer
from streettraffic.predefined.cities import San_Francisco_polygon
settings = {
'app_id': 'F8aPRXcW3MmyUvQ8Z3J9', # this is where you put your App ID from here.com
'app_code' : 'IVp1_zoGHdLdz0GvD_Eqsw', # this is where you put your App Code from here.com
'map_tile_base_url': 'https://1.traffic.maps.cit.api.here.com/maptile/2.1/traffictile/newest/normal.day/',
'json_tile_base_url': 'https://traffic.cit.api.here.com/traffic/6.2/flow.json?'
}
## initialize traffic server
server = TrafficServer(settings)
Explanation: Multiple Routes Analysis
In this section, we are trying to answer a very interesting question: within a city, do different routes experience the same traffic pattern?
First, let's import our modules.
End of explanation
def get_random_point_in_polygon(poly):
    # rejection sampling: draw uniform points in the bounding box
    # until one falls inside the polygon
    (minx, miny, maxx, maxy) = poly.bounds
    while True:
        p = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
        if poly.contains(p):
            return p
atlanta_polygon = Polygon([[33.658529, -84.471782], [33.667928, -84.351730], [33.883809, -84.347570], [33.855681, -84.469405]])
sample_points = []
for i in range(100):
    point_in_poly = get_random_point_in_polygon(atlanta_polygon)
    sample_points += [[point_in_poly.x, point_in_poly.y]]
print(server.traffic_data.format_list_points_for_display(sample_points))
Explanation: Generate Random Routes
In order to accomplish this goal, we need to have a function that generates random points within some geospatial region. Hence the function get_random_point_in_polygon is created
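The function above is a rejection sampler: it draws uniform points in the polygon's bounding box and keeps only those that fall inside the polygon. As a rough sanity check, the expected acceptance rate equals the ratio of polygon area to bounding-box area; a minimal sketch with a made-up triangle:

```python
# hypothetical example polygon, just to illustrate the acceptance rate
from shapely.geometry import Polygon
poly = Polygon([[0, 0], [2, 0], [1, 2]])   # a triangle
(minx, miny, maxx, maxy) = poly.bounds
box_area = (maxx - minx) * (maxy - miny)
print(poly.area / box_area)                # ~0.5 of the draws are accepted
```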
End of explanation
sample_route_count = 2
route_obj_collection = []
for i in range(sample_route_count):
    point_in_poly1 = get_random_point_in_polygon(atlanta_polygon)
    point_in_poly2 = get_random_point_in_polygon(atlanta_polygon)
    route_obj_collection += [[
        {
            "lat": point_in_poly1.x,
            "lng": point_in_poly1.y
        },
        {
            "lat": point_in_poly2.x,
            "lng": point_in_poly2.y
        }
    ]]
route_obj_collection_json = json.dumps(route_obj_collection)
print(route_obj_collection_json)
Explanation: Now we simply copy the text above and go to https://www.darrinward.com/lat-long/ for plotting. The result would look like the following picture.
Now that we know we can generate random points, let's generate random routes. Let sample_route_count = 2 (as set in the cell above), and we can create 2 random routes.
End of explanation
# load the test.json
with open('test.json') as f:
    route_traffic_pattern_collection = json.load(f)
# create a function that takes an overview_path and generates its total distance
def overview_path_distance(overview_path):
    """sum the pairwise distances between consecutive points of an overview_path"""
    distance = 0
    for i in range(len(overview_path)-1):
        point1 = overview_path[i]
        point2 = overview_path[i+1]
        distance += server.util.get_distance([point1['lat'], point1['lng']], [point2['lat'], point2['lng']])
    return distance
# now we build the dataframe
df = pd.DataFrame(index = [json.dumps(item['origin_destination']) for item in route_traffic_pattern_collection])
df['distance (in meters)'] = [overview_path_distance(item['route']['routes'][0]['overview_path']) for item in route_traffic_pattern_collection]
for i in range(len(route_traffic_pattern_collection[0]['chartLabel'])):
    df[route_traffic_pattern_collection[0]['chartLabel'][i]] = [item['chartData'][i] for item in route_traffic_pattern_collection]
df.sort_values(by='distance (in meters)')
df
# remove the 'distance (in meters)' column and then we can do analysis
del df['distance (in meters)']
df
# Now we can do all sorts of fun things with it.
# feel free to uncomment the following statements and see various possibilities
#print(df.mean(axis=1))
#print(df.std())
#print(df.median())
# for each route, give me the mean Jamming Factor of all the instants (2:00:00 PM, 2:30:00 PM, ..., 4:30:00 PM)
df.mean(axis=1)
Explanation: Use the web UI
Copy the above result and paste it in the web UI at /#/Main/RouteLab
Then, select a time interval and a date interval, and click query data. You may go to the RethinkDB Web UI to make sure your query is actually getting executed.
When it's done, simply click the COPY TO CLIPBOARD button to copy the result.
Then create a test.json file and store it in the same directory as this file.
Analyzing data
First load the json, and build an appropriate dataframe for it.
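If you just want to exercise the parsing code without running the web UI, you can write a tiny stand-in test.json by hand. The field names below are inferred from how the dataframe is built in the cell above; they are assumptions, not an official schema:

```python
# a minimal, hand-written stand-in for test.json (all values made up)
import json
fake = [{
    "origin_destination": [{"lat": 33.75, "lng": -84.39},
                           {"lat": 33.80, "lng": -84.40}],
    "route": {"routes": [{"overview_path": [{"lat": 33.75, "lng": -84.39},
                                            {"lat": 33.80, "lng": -84.40}]}]},
    "chartLabel": ["2:00:00 PM", "2:30:00 PM"],
    "chartData": [1.2, 3.4]
}]
with open("test.json", "w") as f:
    json.dump(fake, f)
```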
End of explanation
## For reproducibility, we executed the following script and store
## route_obj_collection_json
# sample_route_count = 100
# route_obj_collection = []
# for i in range(sample_route_count):
# point_in_poly1 = get_random_point_in_polygon(atlanta_polygon)
# point_in_poly2 = get_random_point_in_polygon(atlanta_polygon)
# route_obj_collection += [[
# {
# "lat": point_in_poly1.x,
# "lng": point_in_poly1.y
# },
# {
# "lat": point_in_poly2.x,
# "lng": point_in_poly2.y
# }
# ]]
# route_obj_collection_json = json.dumps(route_obj_collection)
with open('route_obj_collection_json.json') as f:
    route_obj_collection_json = json.load(f)
## after copying and pasting route_obj_collection_json into WEB UI, getting results and load
## it in route_traffic_pattern_collection, we get this:
with open('route_traffic_pattern_collection.json') as f:
    route_traffic_pattern_collection = json.load(f)
df = pd.DataFrame(index = [json.dumps(item['origin_destination']) for item in route_traffic_pattern_collection])
df['distance (in meters)'] = [overview_path_distance(item['route']['routes'][0]['overview_path']) for item in route_traffic_pattern_collection]
for i in range(len(route_traffic_pattern_collection[0]['chartLabel'])):
    df[route_traffic_pattern_collection[0]['chartLabel'][i]] = [item['chartData'][i] for item in route_traffic_pattern_collection]
df2 = df.sort_values(by='distance (in meters)')
df2
# The following graph shows, on average, the Jamming Factor throughout 24 hours for the 20 longest routes.
df3 = df2[-20:]
del df3['distance (in meters)']
df3.mean(axis=1).plot()
plt.show()
# The following graph extracts the worst Jamming Factor of each route
df4 = df2[-20:]
del df4['distance (in meters)']
df4.max(axis=1).plot()
plt.show()
Explanation: Be Bold and try 100 routes
End of explanation |
8,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img style="float
Step1: Select the water Station
For our example, we will query a water station called Bristol Avon Little Avon Axe and North Somerset St. This station has the station number 531118. It is possible to select another station by changing the station_number; a list of 50 other possible stations can be found following this link.
Step2: Tool and Stream
First we will create a Stream to store the data and an instance of the new tool.
Step3: Execute the tool
Now we will specify an interval of time for which we want the water levels. In this particular case we will ask for the last 7 days. Then, we can execute the tool for the specified interval of time. The result will be stored in the specified Stream.
Step4: Visualization
Now we can visualize all the data stored in the stream | Python Code:
%load_ext watermark
import sys
from datetime import datetime
from datetime import datetime, timedelta
sys.path.append("../") # Add parent dir in the Path
from hyperstream import HyperStream, StreamId
from hyperstream import TimeInterval
from hyperstream.utils import UTC
from utils import plot_high_chart
%watermark -v -m -p hyperstream -g
Explanation: <img style="float: right;" src="images/hyperstream.svg">
HyperStream Tutorial 4: Real-time streams
In this tutorial, we show how to create a new plugin that collects real-time data using a publicly available API. In this case, we use the Environment Agency flood-monitoring API.
Creating a plugin tool to use the API
1. Create a folder in plugins
First of all, we need to create a new folder to contain the new tool. The new folder needs to be in the folder plugins, in this example plugins/example/tools/environment_data_gov_uk/. Also, we need to create an __init__.py file in every subfolder.
plugins/
|- __init__.py
|- example/
   |- __init__.py
   |- tools/
      |- __init__.py
      |- environment_data_gov_uk
         |- __init__.py
         |- 2017-06-21_v0.0.1.py
2. Write the plugin in Python
As we have seen in a previous tutorial, we can create a new plugin in Python. In this case, the code of the plugin ./plugins/example/tools/environment_data_gov_uk/2017-06-21_v0.0.1.py uses the API to query the water readings of a single station for the specified interval of time:
```Python
from datetime import datetime, timedelta
from hyperstream import Tool, StreamInstance, StreamInstanceCollection
from hyperstream.utils import check_input_stream_count
from hyperstream.utils import UTC
from dateutil.parser import parse
import urllib
import urllib2
import json

"""
this uses Environment Agency flood and river level data from the real-time
data API (Beta).
For questions on the APIs please contact data.info@environment-agency.gov.uk;
a forum for announcements and discussion is under consideration.
"""

class EnvironmentDataGovUk(Tool):
    def __init__(self, station):
        self.station = station
        super(EnvironmentDataGovUk, self).__init__()

    @check_input_stream_count(0)
    def _execute(self, sources, alignment_stream, interval):
        startdate = interval[0].strftime("%Y-%m-%d")
        enddate = interval[1].strftime("%Y-%m-%d")
        url = "https://environment.data.gov.uk/flood-monitoring/id/stations/{}/readings".format(self.station)
        values = {'startdate': startdate,
                  'enddate': enddate}
        url_parameters = urllib.urlencode(values)
        full_url = url + '?' + url_parameters
        response = urllib2.urlopen(full_url)
        data = json.load(response)
        for item in data['items']:
            dt = parse(item.get('dateTime'))
            if dt in interval:
                value = float(item.get('value'))
                yield StreamInstance(dt, value)
```
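To see the raw data the tool consumes, the same endpoint can be queried directly. A hedged standalone sketch of the request built in _execute (the station number and dates are just examples):

```python
import urllib
import urllib2
import json

url = "https://environment.data.gov.uk/flood-monitoring/id/stations/531118/readings"
params = urllib.urlencode({'startdate': '2017-06-14', 'enddate': '2017-06-21'})
data = json.load(urllib2.urlopen(url + '?' + params))
print len(data['items'])
```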
3. Add HyperStream configuration
Now, it is necessary to add information about this plugin into the hyperstream_config.json. In particular, we need to add the following information in the plugin section:
channel_id_prefix: This is to create Channels (explained in another tutorial).
channel_names: A list of available Channels
path: path to the new plugin
has_tools: If the new plugin has tools
has_assets: If it contains folders or files that are needed by the plugin
Next, we have an example of a configuration file with the new plugin:
```json
{
"mongo": {
"host": "localhost",
"port": 27017,
"tz_aware": true,
"db": "hyperstream"
},
"plugins": [{
"channel_id_prefix": "example",
"channel_names": [],
"path": "plugins/example",
"has_tools": true,
"has_assets": false
}],
"online_engine": {
"interval": {
"start": -60,
"end": -10
},
"sleep": 5,
"iterations": 100
}
}
```
Acknowledgement
This uses Environment Agency flood and river level data from the real-time data API (Beta).
End of explanation
station_number = "531118"
station_name = "Bristol Avon Little Avon Axe and North Somerset St"
Explanation: Select the water Station
For our example, we will query a water station called Bristol Avon Little Avon Axe and North Somerset St. This station has the station number 531118. It is possible to select another station by changing the station_number; a list of 50 other possible stations can be found following this link.
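If you want to discover stations programmatically instead, the API also exposes a station listing endpoint. This is a sketch only: the endpoint and its _limit parameter come from the API's public documentation, so treat them as assumptions:

```python
import json
import urllib2

resp = urllib2.urlopen("https://environment.data.gov.uk/flood-monitoring/id/stations?_limit=5")
for item in json.load(resp)['items']:
    print item.get('stationReference'), item.get('label')
```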
End of explanation
hs = HyperStream(loglevel=20)
print hs
environment_stream = hs.channel_manager.memory.get_or_create_stream("environment")
environment_tool = hs.plugins.example.tools.environment_data_gov_uk(station=station_number)
Explanation: Tool and Stream
First we will create a Stream to store the data and an instance of the new tool.
End of explanation
now = datetime.utcnow().replace(tzinfo=UTC)
before = (now - timedelta(weeks=1)).replace(tzinfo=UTC)
ti = TimeInterval(before, now)
environment_tool.execute(sources=[], sink=environment_stream, interval=ti)
Explanation: Execute the tool
Now we will specify an interval of time for which we want the water levels. In this particular case we will ask for the last 7 days. Then, we can execute the tool for the specified interval of time. The result will be stored in the specified Stream.
End of explanation
my_time, my_data = zip(*[(key.__str__(), value) for key, value in environment_stream.window().items()])
plot_high_chart(my_time, my_data, type="high_stock", title=station_name, yax='meters')
Explanation: Visualization
Now we can visualize all the data stored in the stream
End of explanation |
8,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RADseq data simulations
I simulated two trees to work with. One that is completely imbalanced (ladder-like) and one that is balanced (an equal number of tips descended from each node). I'm using the Python package ete2 for most of the tree manipulations. This notebook was run in Python 2.7. You will also need the package rpy2 installed, as well as a working version of R with the package 'ape' to make tree plots later in the notebook.
Step1: Simulation software
I wrote a program to simulate RAD-seq-like sequence data, which uses the Python package egglib for coalescent simulations. The code below will check that you have the relevant software installed. See here for simrrls installation
Step2: Generate trees for simulations
Make a balanced tree with 64 tips and tree length = 6
Step3: String representation
Step4: Make an imbalanced tree of same treelength with 64 tips
Step5: Or copy the following string to a file
Step6: Check that the trees are the same length (close enough).
Step7: An ultrametric topology of Viburnum w/ 64 tips
This tree is inferred in notebook 3, and here it is scaled with penalized likelihood to be ultrametric.
Step8: Simulate sequence data on each tree
Here I use the simrrls program to simulate RADseq data on each input topology with locus dropout occurring with respect to phylogenetic distances. Find simrrls in my github profile.
Comparing tree shapes and sources of missing data
Step9: Show simrrls options
Step10: Simulate RAD data on different trees and with sampling
Here I simulate 1000 loci on each tree. For each tree, data are simulated in five ways: with and without data loss from mutation-disruption or low sequencing coverage, and as a rad data type (one cutter) and ddrad (two cutters). This will probably take about 10 minutes to run.
Step11: Assemble data sets in pyRAD
Step14: Visualize data sharing on these trees
Step15: The hierarchical distribution of informative sites
First we re-calculate the pair-wise data sharing matrices for all species in each data set.
Step17: A function to count loci for each bipartition (quartet-style)
Step18: A function to write data to file for plotting
Here I iterate over each node and apply count_inf4, which returns the number of loci that are informative for the subtending bipartition, and count_snps, which counts SNPs segregating at that bipartition. This takes a few minutes to run.
Step19: Make data files
Step20: Plot the hierarchical distribution with the trees
Step21: Load in the data to R
Step22: Plots
Step23: Empirical data (full & half depth)
Here I am grabbing the assembled empirical data from notebook_1 (Viburnum) to compare the effect of sequencing coverage with the results we see when simulating data on that tree.
Step24: Plot
Step25: Plot nodes on tree
Step26: Data sharing by sub-sampling
How much data are shared by a random N samples, and how much data are shared across the deepest bipartition for 2+N samples. Also how many SNPs? | Python Code:
## standard Python imports
import glob
import itertools
from collections import OrderedDict, Counter
## extra Python imports
import rpy2 ## required for tree plotting
import ete2 ## used for tree manipulation
import egglib ## used for coalescent simulations
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
## print versions
for pkg in [matplotlib, np, pd, ete2, rpy2]:
    print "{:<10}\t{:<10}".\
        format(pkg.__name__, pkg.__version__)
Explanation: RADseq data simulations
I simulated two trees to work with. One that is completely imbalanced (ladder-like) and one that is balanced (an equal number of tips descended from each node). I'm using the Python package ete2 for most of the tree manipulations. This notebook was run in Python 2.7. You will also need the package rpy2 installed, as well as a working version of R with the package 'ape' to make tree plots later in the notebook.
End of explanation
## check simrrls package and requirements
import egglib
import simrrls
## print versions
print 'egglib', egglib.version
print 'simrrls', simrrls.__version__
Explanation: Simulation software
I wrote a program to simulate RAD-seq-like sequence data, which uses the Python package egglib for coalescent simulations. The code below will check that you have the relevant software installed. See here for simrrls installation: https://github.com/dereneaton/simrrls
End of explanation
## base tree
Tbal = ete2.Tree()
## branch lengths
bls = 1.
## namer
n = iter(('s'+str(i) for i in xrange(1,1500)))
## first nodes
n1 = Tbal.add_child(name=n.next(), dist=bls)
n2 = Tbal.add_child(name=n.next(), dist=bls)
## make balanced tree
while len(Tbal.get_leaves()) < 64:
    thisrep = Tbal.get_descendants()
    for node in thisrep:
        if len(node.get_children()) < 1:
            node.add_child(name=n.next(), dist=bls)
            node.add_child(name=n.next(), dist=bls)
## Save newick string to file
Tbal.write(outfile="Tbal.tre", format=3)
Explanation: Generate trees for simulations
Make a balanced tree with 64 tips and tree length = 6
End of explanation
## newick string
! cat Tbal.tre
## show tree, remove node circles
#for node in Tbal.traverse():
# node.img_style["size"] = 0
#Tbal.render("%%inline", h=500)
Explanation: String representation
End of explanation
## base tree
Timb = ete2.Tree()
## namer
n = iter(('s'+str(i) for i in range(1,5000)))
## scale branches to match balanced treelength
brlen = (bls*6.)/63
## first nodes
n1 = Timb.add_child(name=n.next(), dist=brlen)
n2 = Timb.add_child(name=n.next(), dist=brlen)
while len(Timb.get_leaves()) < 64:
    ## extend others
    for tip in Timb.get_leaves()[:-1]:
        tip.dist += brlen
    ## extend the last node
    Timb.get_leaves()[-1].add_child(name=n.next(), dist=brlen)
    Timb.get_leaves()[-1].add_sister(name=n.next(), dist=brlen)
## write to file
Timb.write(outfile="Timb.tre", format=3)
Explanation: Make an imbalanced tree of same treelength with 64 tips
End of explanation
! cat Timb.tre
## show tree, remove node circles
#for node in Timb.traverse():
# node.img_style["size"] = 0
#Timb.render("%%inline", h=500)
Explanation: Or copy the following string to a file:
End of explanation
print set([i.get_distance(Tbal) for i in Tbal]), 'treelength'
print len(Tbal), 'tips'
print set([i.get_distance(Timb) for i in Timb]), 'treelength'
print len(Timb), 'tips'
Explanation: Check that the trees are the same length (close enough).
End of explanation
%load_ext rpy2.ipython
%%R -w 400 -h 600
library(ape)
## make tree ultrametric using penalized likelihood
Vtree <- read.tree("~/Dropbox/RAxML_bestTree.VIB_small_c85d6m4p99")
Utree <- drop.tip(Vtree, "clemensiae_DRY6_PWS_2135")
Utree <- ladderize(chronopl(Utree, 0.5))
## multiply bls so tree length=6 after dropping outgroup
Utree$edge.length <- Utree$edge.length*6
## save the new tree
write.tree(Utree, "Tvib.tre")
plot(Utree, cex=0.7, edge.width=2)
add.scale.bar()
#edgelabels(round(Utree$edge.length,3))
#### load TVib tree into Python and print newick string
Tvib = ete2.Tree("Tvib.tre")
! cat Tvib.tre
Explanation: An ultrametric topology of Viburnum w/ 64 tips
This tree is inferred in notebook 3, and here it is scaled with penalized likelihood to be ultrametric.
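Since chronopl returns an ultrametric tree, rescaled here to total length 6, every root-to-tip distance should be (nearly) identical. A quick check back in Python:

```python
## sanity check: root-to-tip distances of the ultrametric tree are all ~6
dists = [Tvib.get_distance(leaf) for leaf in Tvib]
print min(dists), max(dists)
```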
End of explanation
%%bash
## balanced tree
mkdir -p Tbal_rad_drop/
mkdir -p Tbal_ddrad_drop/
mkdir -p Tbal_rad_covfull/
mkdir -p Tbal_rad_covlow/
mkdir -p Tbal_rad_covmed/
## imbalanced tree
mkdir -p Timb_rad_drop/
mkdir -p Timb_ddrad_drop/
mkdir -p Timb_rad_covfull/
mkdir -p Timb_rad_covlow/
mkdir -p Timb_rad_covmed/
## sims on empirical Viburnum topo
mkdir -p Tvib_rad_drop/
mkdir -p Tvib_ddrad_drop/
mkdir -p Tvib_rad_covfull/
mkdir -p Tvib_rad_covlow/
mkdir -p Tvib_rad_covmed
Explanation: Simulate sequence data on each tree
Here I use the simrrls program to simulate RADseq data on each input topology with locus dropout occurring with respect to phylogenetic distances. Find simrrls in my github profile.
Comparing tree shapes and sources of missing data
End of explanation
%%bash
simrrls -h
Explanation: Show simrrls options
End of explanation
%%bash
for tree in Tbal Timb Tvib;
do
simrrls -mc 1 -ms 1 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f rad -c1 CTGCAG \
-o $tree\_rad_drop/$tree
simrrls -mc 1 -ms 1 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f ddrad -c1 CTGCAG -c2 AATT \
-o $tree\_ddrad_drop/$tree
simrrls -mc 0 -ms 0 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f rad -c1 CTGCAG \
-o $tree\_rad_covfull/$tree
simrrls -mc 0 -ms 0 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f rad -c1 CTGCAG \
-dm 5 -ds 5 \
-o $tree\_rad_covmed/$tree
simrrls -mc 0 -ms 0 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f rad -c1 CTGCAG \
-dm 2 -ds 5 \
-o $tree\_rad_covlow/$tree
done
Explanation: Simulate RAD data on different trees and with sampling
Here I simulate 1000 loci on each tree. For each tree, data are simulated in five ways: with and without data loss from mutation-disruption or low sequencing coverage, and as a rad data type (one cutter) and ddrad (two cutters). This will probably take about 10 minutes to run.
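A quick way to confirm the simulations wrote data is to count reads in one of the gzipped fastq files. A sketch (the exact file names produced by simrrls may differ, hence the glob):

```python
## sketch: count reads in one simulated fastq (4 lines per read)
import glob
import gzip
fname = glob.glob("Tbal_rad_drop/*.gz")[0]
nreads = sum(1 for line in gzip.open(fname)) / 4
print fname, nreads
```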
End of explanation
%%bash
pyrad --version
%%bash
## new params file (remove existing file if present)
rm params.txt
pyrad -n
%%bash
## enter parameters into params file using sed
sed -i '/## 1. /c\Tbal_rad_drop ## 1. working dir ' params.txt
sed -i '/## 2. /c\Tbal_rad_drop/*.gz ## 2. data loc ' params.txt
sed -i '/## 3. /c\Tbal_rad_drop/*barcodes.txt ## 3. Bcode ' params.txt
sed -i '/## 6. /c\TGCAG,AATT ## 6. cutters ' params.txt
sed -i '/## 7. /c\20 ## 7. Nproc ' params.txt
sed -i '/## 10. /c\.82 ## 10. clust thresh' params.txt
sed -i '/## 11. /c\rad ## 11. datatype ' params.txt
sed -i '/## 12. /c\2 ## 12. minCov ' params.txt
sed -i '/## 13. /c\10 ## 13. maxSH' params.txt
sed -i '/## 14. /c\Tbal ## 14. outname' params.txt
sed -i '/## 24./c\99 ## 24. maxH' params.txt
sed -i '/## 30./c\n,p,s ## 30. out format' params.txt
## IPython code to iterate over trees and coverages and run pyrad
## sometimes this freezes when run in a jupyter notebook due
## to problems with multiprocessing in notebooks. This is why my new
## work with ipyrad uses ipyparallel instead of multiprocessing.
for tree in ['Tbal', 'Timb', 'Tvib']:
    for dtype in ['rad', 'ddrad']:
        with open('params.txt', 'rb') as params:
            pp = params.readlines()
        pp[1] = "{}_{}_drop ## 1. \n".format(tree, dtype)
        pp[2] = "{}_{}_drop/*.gz ## 2. \n".format(tree, dtype)
        pp[3] = "{}_{}_drop/*barcodes.txt ## 3. \n".format(tree, dtype)
        pp[14] = "{} ## 14. \n".format(tree)
        with open('params.txt', 'wb') as params:
            params.write("".join(pp))
        ## this calls pyrad as a bash script
        ! pyrad -p params.txt >> log.txt 2>&1
    for cov in ['full', 'med', 'low']:
        with open('params.txt', 'rb') as params:
            pp = params.readlines()
        pp[1] = "{}_rad_cov{} ## 1. \n".format(tree, cov)
        pp[2] = "{}_rad_cov{}/*.gz ## 2. \n".format(tree, cov)
        pp[3] = "{}_rad_cov{}/*barcodes.txt ## 3. \n".format(tree, cov)
        pp[14] = "{} ## 14. \n".format(tree)
        with open('params.txt', 'wb') as params:
            params.write("".join(pp))
        ## this calls pyrad as a bash script
        ! pyrad -p params.txt >> log.txt 2>&1
Explanation: Assemble data sets in pyRAD
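Because the loop above edits params.txt in place by line index, it is worth confirming that the expected lines were rewritten; a small sketch:

```python
## sketch: inspect the params.txt lines that were edited by index above
with open('params.txt') as params:
    lines = params.readlines()
for idx in [1, 3, 14]:
    print lines[idx].strip()
```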
End of explanation
def getarray(locifile, tree):
    """
    parse the loci list and return a
    presence/absence matrix ordered by
    the tips on the tree
    """
    ## parse the loci file
    loci = open(locifile).read().split("\n//")[:-1]
    ## order (ladderize) the tree
    tree.ladderize()
    ## get tip names
    names = tree.get_leaf_names()
    ## make empty matrix
    lxs = np.zeros((len(names), len(loci)))
    ## fill the matrix
    for loc in xrange(len(loci)):
        for seq in loci[loc].split("\n"):
            if ">" in seq:
                lxs[names.index(seq.split()[0][1:].rsplit("_", 1)[0]), loc] += 1
    return lxs
def countmatrix(lxsabove, lxsbelow, maxval=0):
    """
    fill a matrix with pairwise data sharing
    between each pair of samples. You could put
    in two different 'share' matrices to have
    different results above and below the diagonal.
    Can enter a maxval to limit fill along the diagonal.
    """
    share = np.zeros((lxsabove.shape[0],
                      lxsbelow.shape[0]))
    names = range(lxsabove.shape[0])
    ## fill above the diagonal (one pass over the sample pairs;
    ## the original looped over rows redundantly)
    for samp1, samp2 in itertools.combinations(names, 2):
        shared = lxsabove[samp1, lxsabove[samp2] > 0].sum()
        share[samp1, samp2] = shared
    ## fill below the diagonal (uses lxsbelow, as the docstring intends;
    ## the original reused lxsabove here)
    for samp2, samp1 in itertools.combinations(names, 2):
        shared = lxsbelow[samp1, lxsbelow[samp2] > 0].sum()
        share[samp1, samp2] = shared
    ## fill the diagonal
    if not maxval:
        for row in range(len(names)):
            share[row, row] = lxsabove[row].sum()
    else:
        for row in range(len(names)):
            share[row, row] = maxval
    return share
def plotSVGmatrix(share, outname):
    surf = plt.pcolormesh(share, cmap="gist_yarg")
    dims = plt.axis('image')
    surf.axes.get_xaxis().set_ticklabels([])
    surf.axes.get_xaxis().set_ticks([])
    surf.axes.get_yaxis().set_ticklabels([])
    surf.axes.get_yaxis().set_ticks([])
    ax = plt.gca()
    ax.invert_yaxis()
    plt.colorbar(surf, aspect=15)
    if outname:
        plt.savefig(outname+".svg")
def fullplot(locifile, tree, outname=None):
    lxsB = getarray(locifile, tree)
    share = countmatrix(lxsB, lxsB)
    plotSVGmatrix(share, outname)
fullplot('Tbal_rad_drop/outfiles/Tbal.loci', Tbal, 'Tbal_drop')
fullplot('Tbal_rad_covlow/outfiles/Tbal.loci', Tbal, 'Tbal_covlow')
fullplot('Tbal_rad_covmed/outfiles/Tbal.loci', Tbal, 'Tbal_covmed')
fullplot('Tbal_rad_covfull/outfiles/Tbal.loci', Tbal, 'Tbal_covfull')
fullplot('Timb_rad_drop/outfiles/Timb.loci', Timb, 'Timb_drop')
fullplot('Tvib_rad_drop/outfiles/Tvib.loci', Tvib, 'Tvib_drop')
fullplot('Tvib_rad_covlow/outfiles/Tvib.loci', Tvib, 'Tvib_covlow')
Explanation: Visualize data sharing on these trees
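In these sharing matrices the diagonal holds the total loci recovered per sample, and off-diagonal cells hold pairwise sharing. A small sketch pulling those numbers out directly with the functions defined above:

```python
## sketch: per-sample locus totals and one pairwise sharing count
lxs = getarray('Tbal_rad_drop/outfiles/Tbal.loci', Tbal)
print lxs.sum(axis=1)[:5]           ## loci recovered by the first five samples
print lxs[0, lxs[1] > 0].sum()      ## loci shared by samples 0 and 1
```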
End of explanation
lxs_Tbal_droprad = getarray("Tbal_rad_drop/outfiles/Tbal.loci", Tbal)
lxs_Tbal_dropddrad = getarray("Tbal_ddrad_drop/outfiles/Tbal.loci", Tbal)
lxs_Tbal_covlow = getarray("Tbal_rad_covlow/outfiles/Tbal.loci", Tbal)
lxs_Tbal_covmed = getarray("Tbal_rad_covmed/outfiles/Tbal.loci", Tbal)
lxs_Tbal_covfull = getarray("Tbal_rad_covfull/outfiles/Tbal.loci", Tbal)
lxs_Timb_droprad = getarray("Timb_rad_drop/outfiles/Timb.loci", Timb)
lxs_Timb_dropddrad = getarray("Timb_ddrad_drop/outfiles/Timb.loci", Timb)
lxs_Timb_covlow = getarray("Timb_rad_covlow/outfiles/Timb.loci", Timb)
lxs_Timb_covmed = getarray("Timb_rad_covmed/outfiles/Timb.loci", Timb)
lxs_Timb_covfull = getarray("Timb_rad_covfull/outfiles/Timb.loci", Timb)
lxs_Tvib_droprad = getarray("Tvib_rad_drop/outfiles/Tvib.loci", Tvib)
lxs_Tvib_dropddrad = getarray("Tvib_ddrad_drop/outfiles/Tvib.loci", Tvib)
lxs_Tvib_covlow = getarray("Tvib_rad_covlow/outfiles/Tvib.loci", Tvib)
lxs_Tvib_covmed = getarray("Tvib_rad_covmed/outfiles/Tvib.loci", Tvib)
lxs_Tvib_covfull = getarray("Tvib_rad_covfull/outfiles/Tvib.loci", Tvib)
Explanation: The hierarchical distribution of informative sites
First we re-calculate the pair-wise data sharing matrices for all species in each data set.
End of explanation
def count_inf4(tree, matrix, node):
    """
    count the number of loci with data spanning
    a given node in the tree
    """
    ## get children of selected node
    a, b = node.get_children()
    ## get tip descendants of a and b
    tips_a = set(a.get_leaf_names())
    tips_b = set(b.get_leaf_names())
    ## get every other tip (outgroups)
    upone = node.up
    if upone.is_root():
        ch = upone.children
        sis = [i for i in ch if i != node][0]
        if sis.children:
            tips_c = sis.children[0].get_leaf_names()
            tips_d = sis.children[1].get_leaf_names()
        else:
            return 0
    else:
        upone = set(node.up.get_leaf_names())
        tips_c = upone - tips_a - tips_b
        tips_all = set(tree.get_leaf_names())
        tips_d = tips_all - tips_a - tips_b - tips_c
    ## get indices in matrix for leaf tips
    names = tree.get_leaf_names()
    index_a = [names.index(i) for i in tips_a]
    index_b = [names.index(i) for i in tips_b]
    index_c = [names.index(i) for i in tips_c]
    index_d = [names.index(i) for i in tips_d]
    ## how many loci are "informative" (data in all four partitions)
    inf = 0
    for col in matrix.T:
        hits_a = sum([col[i] for i in index_a])
        hits_b = sum([col[i] for i in index_b])
        hits_c = sum([col[i] for i in index_c])
        hits_d = sum([col[i] for i in index_d])
        if all([hits_a, hits_b, hits_c, hits_d]):
            inf += 1
    return inf
Explanation: A function to count loci for each bipartition (quartet-style)
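As a usage sketch, count_inf4 can be applied to a single internal node; a locus counts as informative for that node's bipartition only if at least one sample in each of the four surrounding clades has data:

```python
## sketch: informative loci spanning one internal node of the balanced tree
node = Tbal.children[0].children[0]    ## an internal node below the root
print count_inf4(Tbal, lxs_Tbal_droprad, node)
```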
End of explanation
def nodes_dat(tree, lxs, datfilename):
    """record node depth and informative-locus count for every internal node"""
    dat = []
    for node in tree.traverse():
        if not (node.is_leaf() or node.is_root()):
            loci = count_inf4(tree, lxs, node)
            dist = round(tree.get_distance(node), 2)
            dat.append([dist, loci])
            node.name = "%d" % loci
    ## print tree with bls & node labels
    tree.write(format=3, outfile=datfilename+".tre")
    ## print data to file
    with open(datfilename, 'w') as outfile:
        np.savetxt(outfile, np.array(dat), fmt="%.2f")
Explanation: A function to write data to file for plotting
Here I iterate over each node and apply count_inf4, which returns the number of loci that are informative for the subtending bipartition, and count_snps, which counts SNPs segregating at that bipartition. This takes a few minutes to run.
End of explanation
%%bash
## a new directory to store the data in
mkdir -p analysis_counts2
nodes_dat(Tbal, lxs_Tbal_droprad,
"analysis_counts2/Tbal_droprad.dat3")
nodes_dat(Tbal, lxs_Tbal_dropddrad,
"analysis_counts2/Tbal_dropddrad.dat3")
nodes_dat(Tbal, lxs_Tbal_covlow,
"analysis_counts2/Tbal_covlow.dat3")
nodes_dat(Tbal, lxs_Tbal_covmed,
"analysis_counts2/Tbal_covmed.dat3")
nodes_dat(Tbal, lxs_Tbal_covfull,
"analysis_counts2/Tbal_covfull.dat3")
nodes_dat(Timb, lxs_Timb_droprad,
"analysis_counts2/Timb_droprad.dat3")
nodes_dat(Timb, lxs_Timb_dropddrad,
"analysis_counts2/Timb_dropddrad.dat3")
nodes_dat(Timb, lxs_Timb_covlow,
"analysis_counts2/Timb_covlow.dat3")
nodes_dat(Timb, lxs_Timb_covmed,
"analysis_counts2/Timb_covmed.dat3")
nodes_dat(Timb, lxs_Timb_covfull,
"analysis_counts2/Timb_covfull.dat3")
nodes_dat(Tvib, lxs_Tvib_droprad,
"analysis_counts2/Tvib_droprad.dat3")
nodes_dat(Tvib, lxs_Tvib_dropddrad,
"analysis_counts2/Tvib_dropddrad.dat3")
nodes_dat(Tvib, lxs_Tvib_covlow,
"analysis_counts2/Tvib_covlow.dat3")
nodes_dat(Tvib, lxs_Tvib_covmed,
"analysis_counts2/Tvib_covmed.dat3")
nodes_dat(Tvib, lxs_Tvib_covfull,
"analysis_counts2/Tvib_covfull.dat3")
Explanation: Make data files
End of explanation
%load_ext rpy2.ipython
%%R
library(ape)
Explanation: Plot the hierarchical distribution with the trees
End of explanation
%%R
## read in the data and factor results
dat <- read.table("analysis_counts2/Tbal_droprad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_droprad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tbal_dropddrad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_dropddrad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tbal_covlow.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_covlow_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tbal_covmed.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_covmed_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tbal_covfull.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_covfull_Lme <- with(dat, tapply(loci, depth, mean))
%%R
## read in the data and factor results
dat <- read.table("analysis_counts2/Timb_droprad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_droprad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Timb_dropddrad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_dropddrad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Timb_covlow.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_covlow_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Timb_covmed.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_covmed_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Timb_covfull.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_covfull_Lme <- with(dat, tapply(loci, depth, mean))
%%R
## read in the data and factor results
dat <- read.table("analysis_counts2/Tvib_droprad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_droprad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tvib_dropddrad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_dropddrad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tvib_covlow.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_covlow_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tvib_covmed.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_covmed_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tvib_covfull.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_covfull_Lme <- with(dat, tapply(loci, depth, mean))
Explanation: Load in the data to R
End of explanation
%%R -w 400 -h 400
#svg("box1.svg", width=4, height=5)
L = Tbal_droprad_Lme
plot(L, xlim=c(0,6), ylim=c(575,1025),
cex.axis=1.25, type='n', xaxt="n")
#abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tbal_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tbal_droprad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#6E6E6E", bg="#6E6E6E", lwd=0.75)
#L = Tbal_dropddrad_Lme
#df1 = data.frame(as.numeric(names(L)),as.numeric(L))
#points(df1, cex=3.5, pch=21, col="#6E6E6E", bg="#D3D3D3", lwd=0.75)
box()
axis(side=1, at=seq(0,6,0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#dev.off()
%%R -w 400 -h 400
#svg("box2.svg", width=4, height=5)
L = Tbal_covlow_Lme
plot(L, xlim=c(0,6), ylim=c(575,1025),
cex.axis=1.25, type='n', xaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tbal_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tbal_covmed_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
#L = Tbal_covlow_Lme
#df1 = data.frame(as.numeric(names(L)),as.numeric(L))
#points(df1, cex=3.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
axis(side=1, at=seq(0,6,0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#dev.off()
%%R -w 400 -h 400
#svg("box3.svg", width=4, height=5)
## samples every 6th to make plot more readable
L = Timb_droprad_Lme[c(3:62)]
plot(L, xlim=c(0,6), ylim=c(575, 1025),
cex.axis=1.25, type='n', xaxt="n")#, yaxt="n")
#abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Timb_covfull_Lme[seq(3, 65, 6)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Timb_droprad_Lme[seq(3, 65, 6)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
box()
axis(side=1, at=seq(0,6, 0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#dev.off()
%%R -w 400 -h 400
## samples every 6th to make plot more readable
#svg("box4.svg", width=4, height=5)
L = Timb_droprad_Lme[c(3:62)]
plot(L, xlim=c(0,6), ylim=c(575, 1025),
cex.axis=1.25, type='n', xaxt="n")#, yaxt="n")
#abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Timb_covfull_Lme[seq(3, 65, 6)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Timb_covmed_Lme[seq(3, 65, 6)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
box()
axis(side=1, at=seq(0,6, 0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#dev.off()
%%R -w 400 -h 500
#svg("Sourcemissing.svg", height=10, width=8)
#svg("Sourcemissing.svg", height=7, width=5.33)
mat2 = matrix(c(1,1,1,4,4,4,7,7,7,
1,1,1,4,4,4,7,7,7,
2,2,2,5,5,5,8,8,8,
3,3,3,6,6,6,9,9,9),
4,9, byrow=TRUE)
layout(mat2)
par(mar=c(1,1,0,1),
oma=c(2,2,1,0))
#########################################################
tre <- read.tree("Tbal.tre")
plot(tre, show.tip.label=F,
edge.width=2.5, type='p',
x.lim=c(0,6))
####
L = Tbal_droprad_Lme
plot(L, xlim=c(0,6), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tbal_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tbal_droprad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Tbal_dropddrad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
####
L = Tbal_covlow_Lme
plot(L, xlim=c(0,6), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tbal_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tbal_covmed_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Tbal_covlow_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
axis(side=1, at=seq(0,6,0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
box()
##########################################################
tre <- read.tree("Timb.tre")
plot(tre, show.tip.label=F,
edge.width=2.5, type='p',
x.lim=c(0,6))
####
L = Timb_droprad_Lme[c(3:62)]
plot(L, xlim=c(0,6), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n", yaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Timb_covfull_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Timb_droprad_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Timb_dropddrad_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
####
L = Timb_covlow_Lme[c(3:62)]
plot(L, xlim=c(0,6), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n", yaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Timb_covfull_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Timb_covmed_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Timb_covlow_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
##
axis(side=1, at=seq(0,6,0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#########################################################
tre <- read.tree("Tvib.tre")
plot(tre, show.tip.label=F,
edge.width=2.5, type='p',
x.lim=c(0,6))
####
L = Tvib_droprad_Lme
plot(L, xlim=c(0,6.25), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n", yaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tvib_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tvib_droprad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Tvib_dropddrad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
####
plot(L, xlim=c(0,6.25), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n", yaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tvib_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tvib_covmed_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Tvib_covlow_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
##
axis(side=1, at=seq(0,6,1),
labels=as.character(seq(6,0,by=-1)), cex.axis=1.25)
#dev.off()
Explanation: Plots
End of explanation
Tvib2 = Tvib.copy()
for node in Tvib2:
    node.name = node.name + "_0"
## full size data
lxs_EmpVib_full = getarray("/home/deren/Dropbox/RADexplore/EmpVib/vib_full_64tip_c85d6m4p99.loci", Tvib)#, dropind=1)
lxs_EmpVib_half = getarray("/home/deren/Dropbox/RADexplore/EmpVib/vib_half_64tip_c85d6m4p99.loci", Tvib)#, dropind=1)
share_full = countmatrix(lxs_EmpVib_full,lxs_EmpVib_full)
plotSVGmatrix(share_full, "EmpVib_full")
share_half = countmatrix(lxs_EmpVib_half,lxs_EmpVib_half)
plotSVGmatrix(share_half, "EmpVib_half")
nodes_dat(Tvib, lxs_EmpVib_half,
"analysis_counts/Tvib_Emp_half.dat3")
nodes_dat(Tvib, lxs_EmpVib_full,
"analysis_counts/Tvib_Emp_full.dat3")
%%R
## read in the data and factor results
dat <- read.table("analysis_counts/Tvib_Emp_full.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
EmpVib_full_Lme <- with(dat, tapply(loci, depth, mean))
## read in the data and factor results
dat <- read.table("analysis_counts/Tvib_Emp_half.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
EmpVib_half_Lme <- with(dat, tapply(loci, depth, mean))
%%R
EmpVib_half_Lme
%%R -w 200 -h 400
#svg("EmpVib_fullvhalf3.svg", height=6, width=2.5)
mat2 <- matrix(c(1,1,1,2),byrow=TRUE)
layout(mat2)
par(mar=c(1,1,0,1),
oma=c(2,2,1,0))
#########################################################
#tre <- read.tree("Tvib.tre")
#plot(tre, show.tip.label=F,
# edge.width=2.5, type='p',
# x.lim=c(0,2.25))
Vtre <- read.tree("analysis_counts/EmpVib_full.dat3.tre")
plot(Vtre, cex=0.6, adj=0.05, x.lim=c(0,2.25),
edge.width=2.5, type='p',show.tip.label=FALSE)
nodelabels(pch=20, col="black",
cex=as.integer(Vtre$node.label)/7500)
####
L = EmpVib_half_Lme
plot(L, xlim=c(0,2.25), ylim=c(-25,50200),
cex.axis=1.25, type='n', xaxt="n")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="red", bg="red")
####
L = EmpVib_full_Lme
#plot(L, xlim=c(0,2.25), ylim=c(-25,50200),
# cex.axis=1.25, type='n', xaxt="n")
abline(h=0, lwd=2, col="gray", lty="dotted")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="#262626", bg="#262626")
#dev.off()
%%R -w 200 -h 400
#svg("fullvhalf3.svg", height=4.5, width=4)
####
L = EmpVib_full_Lme
plot(L, xlim=c(0,2.25), ylim=c(-25,50200),
cex.axis=1.25, type='n', xaxt="n")
abline(h=0, lwd=2, col="gray", lty="dotted")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="#262626", bg="#262626")
dev.off()
svg("fullonly.svg", height=4.5, width=4)
####
L = EmpVib_half_Lme
plot(L, xlim=c(0,2.25), ylim=c(-25,50200),
cex.axis=1.25, type='n', xaxt="n")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="red", bg="red")
####
L = EmpVib_full_Lme
abline(h=0, lwd=2, col="gray", lty="dotted")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="#262626", bg="#262626")
#dev.off()
%%R
data.frame(cbind(median(EmpVib_full_Lme),
median(EmpVib_half_Lme)),
cbind(mean(EmpVib_full_Lme),
mean(EmpVib_half_Lme)),
col.names=c("full","half"))
%%R
svg('hist.svg', width=4.25, height=4)
hist(rnorm(10000), col="grey")
dev.off()
%%R -w 300 -h 600
Vtre <- read.tree("analysis_counts/EmpVib_full.dat3.tre")
svg("EmpVib_full_nodes.svg", height=6, width=3)
plot(Vtre, cex=0.6, adj=0.05,
edge.width=3, show.tip.label=FALSE)
nodelabels(pch=20, col="black",
cex=as.integer(Vtre$node.label)/10000)
dev.off()
Explanation: Empirical data (full & half depth)
Here I am grabbing the assembled empirical data from notebook_1 (Viburnum) to compare the effect of sequencing coverage with the results we see when simulating data on that tree.
End of explanation
%%R -w 300 -h 600
#svg("locisnpsdepth.svg", height=8, width=4)
#pdf("locisnpsdepth.pdf", height=8, width=4)
mat2 <- matrix(c(1,1,1,5,5,5,
1,1,1,5,5,5,
2,2,2,6,6,6,
3,3,3,7,7,7,
4,4,4,8,8,8),
5,6, byrow=TRUE)
layout(mat2)
par(mar=c(1,1,0,1),
oma=c(2,2,1,0))
tre <- read.tree("Tbal.tre")
plot(tre, show.tip.label=F,
edge.width=2.5, type='p',
x.lim=c(-0.25,2.75))
##-------------------------------
## Plot full data locus sharing
x = seq(1.5,5.5)
y = Tbal_full_Lme#[1:6]
s = Tbal_full_Lsd#[1:6]
plot(x, y, xlim=c(1,7.2), ylim=c(-25,3100),
cex.axis=1.25, type='n', xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1.25, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
x = seq(1.5,5.5)
y = Tbal_full_Sme#[2:6]
s = Tbal_full_Ssd#[2:6]
lines(x, y, lwd=2, col="darkgrey")
points(x, y, cex=1.25, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
##--------------------------------
## Plot drop data locus sharing
x = seq(1.5,5.5)
y = Tbal_drop_Lme
s = Tbal_drop_Lsd
plot(x, y, xlim=c(1,7.2), ylim=c(-25,3100),
cex.axis=1.25, type='n', xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1.25, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
x = seq(1.5,5.5)
y = Tbal_drop_Sme
s = Tbal_drop_Ssd
lines(x, y, lwd=2, col="darkgrey")
points(x, y, cex=1.25, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
##------------------------------
## Plot cov data locus sharing
x = seq(1.5,5.5)
y = Tbal_cov_Lme
s = Tbal_cov_Lsd
plot(x, y, xlim=c(1,7.2), ylim=c(-25,3100),
cex.axis=1.25, type='n', xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1.25, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
x = seq(1.5,5.5)
y = Tbal_cov_Sme
s = Tbal_cov_Ssd
lines(x, y, lwd=2, col="darkgrey")
points(x, y, cex=1.25, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
axis(side=1, at=seq(1.1,8.1,1),
labels=as.character(seq(3,-0.5,by=-0.5)), cex.axis=1.25)
###########################################
###########################################
tre <- read.tree("Timb.tre")
plot(tre, show.tip.label=F,
edge.width=2, type='p')
##------------------------------------
## Plot full data locus sharing
x = seq(2,62)
y = Timb_full_Lme[2:62]
s = Timb_full_Lsd[2:62]
plot(x, y, xlim=c(1,65), ylim=c(-25,3100),
cex.axis=1.25, type='n', yaxt="n", xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
y = Timb_full_Sme[2:62]
s = Timb_full_Ssd[2:62]
lines(x, y, lwd=2, col="darkgrey")
#points(x, y, cex=1, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
##----------------------------------
## Plot drop data locus sharing
x = seq(2,62)
y = Timb_drop_Lme[2:62]
s = Timb_drop_Lsd[2:62]
plot(x, y, xlim=c(1,65), ylim=c(-25,3100),
cex.axis=1.25, type='n', yaxt="n", xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
y = Timb_drop_Sme[2:62]
s = Timb_drop_Ssd[2:62]
lines(x, y, lwd=2, col="darkgrey")
#points(x, y, cex=1, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s)
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
##-----------------------------------
## Plot cov data locus sharing
x = seq(2,62)
y = Timb_cov_Lme[2:62]
s = Timb_cov_Lsd[2:62]
plot(x, y,
xlim=c(1,65), ylim=c(-20,3100),
cex=1, cex.axis=1.25,
pch=21, bg="#262626", xaxt="n", yaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
y = Timb_cov_Sme[2:62]
s = Timb_cov_Ssd[2:62]
lines(x, y, lwd=2, col="darkgrey")
#points(x, y, cex=1, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
axis(side=1, at=seq(2.1,72,10),
labels=as.character(seq(3,0,by=-0.5)), cex.axis=1.25)
#dev.off()
Explanation: Plot
End of explanation
lxs_EmpVib_full
def write_nodes_to_tree(tree,lxs,treefilename):
for node in tree.traverse():
if not node.is_leaf():
inf = count_inf4(tree, lxs, node)
node.name = "%d" % inf
## print tree with bls & node labels
tree.write(format=3,outfile=treefilename)
write_nodes_to_tree(Tvib, lxs_EmpVib_full, "Tvib_full_nodes.tre")
%%R -w 400 -h 500
tre <- read.tree("loci_Tbal_cov")
plot(tre)#, show.tip.label=F, edge.width=2.5)
#nodelabels(pch=21,
# bg="#262626",
# cex=as.integer(tre$node.label)/150)
nodelabels(tre$node.label, bg='grey', cex=1.5)
Explanation: Plot nodes on tree
End of explanation
def counts(lxs, minr, maxr, maxi):
## data store
data = np.zeros((maxr+1-minr,maxi))
for sample in range(minr, maxr+1):
g = itertools.combinations(range(maxr), sample)
i = 0
while i<maxi:
try:
gsamp = next(g)
except StopIteration:
break
shared = sum(lxs[gsamp,:].sum(axis=0) == len(gsamp))
data[sample-minr,i] = shared
i += 1
return data
Dbal = counts(lxs_Tbal_drop, 4, 32, 1000)
Dimb = counts(lxs_Timb_drop, 4, 32, 1000)
Dbal
def counts2(lxs, minr, maxr, maxi):
## data store
data = np.zeros(((maxr+1-minr)*maxi, 3))
count = 0
for sample in range(minr, maxr+1):
g = itertools.combinations(range(maxr), sample)
i = 0
while i<maxi:
try:
gsamp = next(g)
except StopIteration:
break
shared = sum(lxs[gsamp,:].sum(axis=0) == len(gsamp))
datum = [sample, float(shared), i+1]
data[count] = datum
i += 1
count += 1
return data
Dimb1 = counts2(lxs_Timb_full, 4, 32, 100)
Dimb2 = counts2(lxs_Timb_drop, 4, 32, 100)
Dimb3 = counts2(lxs_Timb_cov, 4, 32, 100)
Dbal1 = counts2(lxs_Tbal_full, 4, 32, 100)
Dbal2 = counts2(lxs_Tbal_drop, 4, 32, 100)
Dbal3 = counts2(lxs_Tbal_cov, 4, 32, 100)
def saveto(D, outname, treename):
    ## store subsampled sharing results with the tree they came from;
    ## treename is passed explicitly so Tbal runs are not mislabeled as Timb
    dd = pd.DataFrame({"time": [i[0] for i in D],
                       "loci": [i[1] for i in D],
                       "idx": [i[2] for i in D],
                       "tree": [treename for _ in D]})
    dd.to_csv(outname)
saveto(Dimb1, "Dimb1.dat", "Timb")
saveto(Dimb2, "Dimb2.dat", "Timb")
saveto(Dimb3, "Dimb3.dat", "Timb")
saveto(Dbal1, "Dbal1.dat", "Tbal")
saveto(Dbal2, "Dbal2.dat", "Tbal")
saveto(Dbal3, "Dbal3.dat", "Tbal")
Explanation: Data sharing by sub-sampling
How much data is shared among a random sample of N taxa, and how much is shared across the deepest bipartition for 2+N samples? Also, how many SNPs?
End of explanation |
8,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 6
Use this notebook to work on your answers and check solutions. You can then submit your functions using "hw6_submission.ipynb" or directly write your functions in a file named "hw6_answers.py". Note that "hw6_answers.py" will be the only file collected and graded for this assignment.
For questions 1-3, you will use the APD dataset that we have been working with in class.
For questions 4-5, you will use data from https
Step1: Question 1
Write a function called "variable_helper" which takes one argument
Step2: Sample output
Step3: Sample output
Step4: Sample output
Step5: Question 4
Write a function called "rating_confusion" which takes one argument
Step6: Sample output | Python Code:
# Loading python packages and APD data file (this step does not have to be included in hw6_answers.py)
import pandas as pd
import numpy as np
df = pd.read_csv('/home/data/APD/COBRA-YTD2017.csv.gz')
Explanation: Homework 6
Use this notebook to work on your answers and check solutions. You can then submit your functions using "hw6_submission.ipynb" or directly write your functions in a file named "hw6_answers.py". Note that "hw6_answers.py" will be the only file collected and graded for this assignment.
For questions 1-3, you will use the APD dataset that we have been working with in class.
For questions 4-5, you will use data from https://perso.telecom-paristech.fr/eagan/class/igr204/datasets.
End of explanation
#### play with code here #####
Explanation: Question 1
Write a function called "variable_helper" which takes one argument:
df, which is a pandas data frame
and returns:
d, a dictionary where keys are the column names of df and values are one of "numeric", "categorical", "ordinal", "date/time", or "text", corresponding to the feature type of each column.
End of explanation
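One possible sketch (not the official solution, so don't submit it blindly): combine a hand-maintained lookup for APD columns whose meaning we know from the sample output with a dtype-based fallback. The lookup entries are assumptions about the column semantics.
import numpy as np
def variable_helper(df):
    # known feature types inferred from the sample output below (an assumption)
    known = {'offense_id': 'ordinal', 'beat': 'categorical',
             'x': 'numeric', 'y': 'numeric', 'UC2 Literal': 'categorical'}
    d = {}
    for col in df.columns:
        if col in known:
            d[col] = known[col]
        elif np.issubdtype(df[col].dtype, np.number):
            d[col] = 'numeric'   # fallback: numeric dtypes are numeric features
        else:
            d[col] = 'text'      # fallback: everything else defaults to text
    return d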
#### play with code here #####
Explanation: Sample output:
In [1]: variable_helper(df[['offense_id','beat','x','y']])
Out[1]: {'beat': 'categorical',
'offense_id': 'ordinal',
'x': 'numeric',
'y': 'numeric'}
Short explanation: offense_id is a number assigned to each offense. There is a natural ordering implied in the id number (based on order of occurrence). Because of this, offense_id is an ordinal feature. The beat uses a numeric label, but refers to a geographic location. There is no natural ordering, so beat is a categorical feature. The location variables (x and y) are numeric position coordinates.
Question 2
Write a function called "get_categories" which takes one argument:
df, which is a pandas data frame
and returns:
cat, a dictionary where keys are names of columns of df corresponding to categorical features, and values are arrays of all the unique values that the feature can take.
End of explanation
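A hedged sketch of one approach: treat object-dtype columns and low-cardinality numeric columns (like beat) as categorical. The max_levels cutoff is an assumption, not part of the assignment spec.
import numpy as np
def get_categories(df, max_levels=100):
    cat = {}
    for col in df.columns:
        vals = df[col].dropna().unique()
        # heuristic: strings, or numeric codes with few distinct values, are categorical
        if df[col].dtype == object or len(vals) <= max_levels:
            cat[col] = np.sort(vals)
    return cat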
#### play with code here #####
Explanation: Sample output:
In [1]: get_categories(df[['offense_id','beat','UC2 Literal']])
Out[1]: {'UC2 Literal': array(['AGG ASSAULT', 'AUTO THEFT', 'BURGLARY-NONRES',
'BURGLARY-RESIDENCE', 'HOMICIDE', 'LARCENY-FROM VEHICLE',
'LARCENY-NON VEHICLE', 'RAPE', 'ROBBERY-COMMERCIAL',
'ROBBERY-PEDESTRIAN', 'ROBBERY-RESIDENCE'], dtype=object),
'beat': array([101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113,
114, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212,
213, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312,
313, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412,
413, 414, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511,
512, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612,
701, 702, 703, 704, 705, 706, 707, 708, 710])}
Short explanation: UC2 Literal and beat are the only categorical variables in the data frame df[['offense_id','beat','UC2 Literal']].
Question 3
Write a function called "code_shift" which takes one argument:
df, which is a pandas data frame
and returns:
a pandas data frame with columns "offense_id", "Shift", "ShiftID", where ShiftID is 0 if "Shift" is "Unk", 1 if "Morn", 2 if "Day", and 3 if "Eve".
End of explanation
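A minimal sketch using a mapping dict; the spec above fixes the codes, so only the structure here is a stylistic choice.
def code_shift(df):
    shift_map = {'Unk': 0, 'Morn': 1, 'Day': 2, 'Eve': 3}
    out = df[['offense_id', 'Shift']].copy()   # copy to avoid mutating the input
    out['ShiftID'] = out['Shift'].map(shift_map)
    return out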
%%sh
## RUN BUT DO NOT EDIT THIS CELL
## run this cell to download the cereal dataset into your current directory
wget https://perso.telecom-paristech.fr/eagan/class/igr204/datasets/cereal.csv
## RUN BUT DO NOT EDIT THIS CELL
# load the data, define ratingID
cer = pd.read_csv('cereal.csv', skiprows=[1], delimiter=';')
cer['ratingID'] = cer['rating'].apply(lambda x: 0 if x<60 else 1)
# define predicted ratingID
np.random.seed(12345)
cer['predicted_ratingID'] = (cer['rating']+20*np.random.randn(len(cer))).apply(lambda x: 0 if x<60 else 1)
Explanation: Sample output:
In [1]: code_shift(df[:5])
Out[1]: offense_id Shift ShiftID
0 172490115 Morn 1
1 172490265 Eve 3
2 172490322 Morn 1
3 172490390 Morn 1
4 172490401 Morn 1
For the last 2 questions, you will use the cereal data file available from https://perso.telecom-paristech.fr/eagan/class/igr204/datasets. Execute the download and loading instructions below.
End of explanation
#### play with code here #####
# Hint: look up pandas "crosstab"
Explanation: Question 4
Write a function called "rating_confusion" which takes one argument:
cer, which is a pandas data frame
and returns:
cf, a confusion matrix where the rows correspond to predicted_ratingID and the columns correspond to ratingID.
End of explanation
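A sketch following the crosstab hint above: rows are the predictions, columns are the true labels, as the spec requires.
import pandas as pd
def rating_confusion(cer):
    # rows = predicted_ratingID, columns = ratingID
    return pd.crosstab(cer['predicted_ratingID'], cer['ratingID'])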
#### play with code here #####
Explanation: Sample output:
In [1]: rating_confusion(cer[:20])
Out[1]: ratingID 0 1
predicted_ratingID
0 15 0
1 3 2
Question 5
Write a function called "prediction_metrics" which takes one argument:
cer, which is a pandas data frame
and returns:
metrics_dict, a python dictionary object where the keys are 'precision', 'recall', 'F1' and the values are the numeric values for precision, recall, and F1 score, where ratingID is the prediction target and predicted_ratingID is a model output.
End of explanation |
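A sketch that delegates to scikit-learn's metric functions, with ratingID as the target and predicted_ratingID as the model output, per the spec.
from sklearn.metrics import precision_score, recall_score, f1_score
def prediction_metrics(cer):
    y_true = cer['ratingID']
    y_pred = cer['predicted_ratingID']
    return {'precision': precision_score(y_true, y_pred),
            'recall': recall_score(y_true, y_pred),
            'F1': f1_score(y_true, y_pred)}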
8,062 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification problems are a broad category of machine learning problems that involve the prediction of values taken from a discrete, finite number of cases.
In this example, we'll build a classifier to predict which species a flower belongs to.
Reading data
Step1: Visualizing data
Step2: Classifying species
We'll use scikit-learn's LogisticRegression to build our classifier.
Step3: Inspecting classification results
Scores like the one calculated above are usually not what we want to assess: score() only returns the mean accuracy of the predictions against the actual classes in the dataset.
Consider what happens, for instance, when you're training a model to classify if someone has a disease or not and 99% of the people don't have that disease. What can go wrong if you use a score like the one above to evaluate your model? Hint
Step4: Another useful technique to inspect the results given by a classification model is to take a look at its confusion matrix. This is a K x K matrix (where K is the number of distinct classes identified by the classifier) that gives us, in the position (i, j), how many examples belonging to class i were classified as belonging to class j.
That can give us insights on which classes may require more attention. | Python Code:
import pandas as pd
iris = pd.read_csv('../datasets/iris.csv')
# Print some info about the dataset
iris.info()
iris['Class'].unique()
iris.describe()
Explanation: Classification problems are a broad category of machine learning problems that involve the prediction of values taken from a discrete, finite number of cases.
In this example, we'll build a classifier to predict which species a flower belongs to.
Reading data
End of explanation
# Create a scatterplot for sepal length and sepal width
import matplotlib.pyplot as plt
%matplotlib inline
sl = iris['Sepal_length']
sw = iris['Sepal_width']
# Create a scatterplot of these two properties using plt.scatter()
# Assign different colors to each data point according to the class it belongs to
plt.scatter(sl[iris['Class'] == 'Iris-setosa'], sw[iris['Class'] == 'Iris-setosa'], color='red')
plt.scatter(sl[iris['Class'] == 'Iris-versicolor'], sw[iris['Class'] == 'Iris-versicolor'], color='green')
plt.scatter(sl[iris['Class'] == 'Iris-virginica'], sw[iris['Class'] == 'Iris-virginica'], color='blue')
# Specify labels for the X and Y axis
plt.xlabel('Sepal Length')
plt.ylabel('Sepal Width')
# Show graph
plt.show()
# Create a scatterplot for petal length and petal width
pl = iris['Petal_length']
pw = iris['Petal_width']
# Create a scatterplot of these two properties using plt.scatter()
# Assign different colors to each data point according to the class it belongs to
plt.scatter(pl[iris['Class'] == 'Iris-setosa'], pw[iris['Class'] == 'Iris-setosa'], color='red')
plt.scatter(pl[iris['Class'] == 'Iris-versicolor'], pw[iris['Class'] == 'Iris-versicolor'], color='green')
plt.scatter(pl[iris['Class'] == 'Iris-virginica'], pw[iris['Class'] == 'Iris-virginica'], color='blue')
# Specify labels for the X and Y axis
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')
# Show graph
plt.show()
Explanation: Visualizing data
End of explanation
X = iris.drop('Class', axis=1)
t = iris['Class'].values
RANDOM_STATE = 4321
# Use sklearn's train_test_split() method to split our data into two sets.
from sklearn.model_selection import train_test_split
Xtr, Xts, ytr, yts = train_test_split(X, t, random_state=RANDOM_STATE)
# Use the training set to build a LogisticRegression model
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression().fit(Xtr, ytr) # Fit a logistic regression model
# Use the LogisticRegression's score() method to assess the model accuracy in the training set
lr.score(Xtr, ytr)
# Use the LogisticRegression's score() method to assess the model accuracy in the test set
lr.score(Xts, yts)
Explanation: Classifying species
We'll use scikit-learn's LogisticRegression to build our classifier.
End of explanation
# scikit-learn provides a function called "classification_report" that summarizes the three metrics above
# for a given classification model on a dataset.
from sklearn.metrics import classification_report
# Use this function to print a classification metrics report for the trained classifier.
# See http://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html
print(classification_report(yts, lr.predict(Xts)))
Explanation: Inspecting classification results
Scores like the one calculated above are usually not what we want to assess: score() only returns the mean accuracy of the predictions against the actual classes in the dataset.
Consider what happens, for instance, when you're training a model to classify if someone has a disease or not and 99% of the people don't have that disease. What can go wrong if you use a score like the one above to evaluate your model? Hint: What would be the score of a classifier that always returns zero (i.e. it always says that the person doesn't have the disease) in this case?
Simple score metrics are usually not recommended for classification problems. There are at least three different metrics that are commonly used depending on the context:
* Precision: This is the fraction of the classifier's positive predictions that are correct - in the example of the disease classifier, this metric says how many of the people it flagged as having the disease actually have it;
* Recall: This is the number of true positives that are found by the classifier - in the same example, this metric would tell us how many of the people who actually have the disease were found by the classifier;
* F1-Score: This is the harmonic mean of precision and recall - it's not easy to interpret its value intuitively, but the idea is that the f1-score represents a compromise between precision and recall;
<img src='../images/Precisionrecall.svg'></img>
Source: https://en.wikipedia.org/wiki/Precision_and_recall
Some other common evaluation methods for classification models include ROC chart analysis and the related concept of Area Under Curve (AUC).
What metric would you prioritise in the case of the disease classifier described before? What are the costs of false positives and false negatives in this case?
End of explanation
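To make the definitions above concrete, here is a tiny hand-worked example; the counts are hypothetical, not from the iris data.
# Hypothetical counts: 80 true positives, 20 false positives, 10 false negatives
tp, fp, fn = 80, 20, 10
precision = tp / (tp + fp)                           # 0.8: of flagged cases, how many are real
recall = tp / (tp + fn)                              # ~0.89: of real cases, how many were found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two
print(precision, recall, f1)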
from sklearn.metrics import confusion_matrix
# Use scikit-learn's confusion_matrix to understand which classes were misclassified.
# See http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html
confusion_matrix(yts, lr.predict(Xts))
Explanation: Another useful technique to inspect the results given by a classification model is to take a look at its confusion matrix. This is a K x K matrix (where K is the number of distinct classes identified by the classifier) that gives us, in the position (i, j), how many examples belonging to class i were classified as belonging to class j.
That can give us insights on which classes may require more attention.
End of explanation |
8,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas is a Python Data Analysis Library. It allows you to play around with data and perform powerful data analysis.
In this example I will show you how to read data from CSV and Excel files in Pandas. You can then store what you read in a Pandas dataframe. The sample data used in the below exercise was generated by https
Step1: Preview the first 5 lines of the data with .head() to ensure that it loaded.
Step2: You will need to pip install xlrd if you haven't already. In order to import data from Excel. | Python Code:
import pandas as pd
csv_data_df = pd.read_csv('data/MOCK_DATA.csv')
Explanation: Pandas is a Python Data Analysis Library. It allows you to play around with data and perform powerful data analysis.
In this example I will show you how to read data from CSV and Excel files in Pandas. You can then store what you read in a Pandas dataframe. The sample data used in the below exercise was generated by https://mockaroo.com/.
End of explanation
csv_data_df.head()
Explanation: Preview the first 5 lines of the data with .head() to ensure that it loaded.
End of explanation
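Beyond .head(), a couple of quick optional sanity checks on the loaded frame (standard pandas attributes):
# Optional sanity checks: dimensions and column dtypes of the loaded data
csv_data_df.shape
csv_data_df.dtypes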
import xlrd
excel_data_df = pd.read_excel('data/MOCK_DATA.xlsx')
excel_data_df.head()
Explanation: You will need to pip install xlrd if you haven't already. In order to import data from Excel.
End of explanation |
8,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: Lists have a number of useful properties and methods available to them.
Here we'll take a quick look at some of the more common and useful ones
Step2: In addition, there are many more built-in list methods; they are well-covered in Python's online documentation.
While we've been demonstrating lists containing values of a single type, one of the powerful features of Python's compound objects is that they can contain objects of any type, or even a mix of types. For example
Step3: This flexibility is a consequence of Python's dynamic type system.
Creating such a mixed sequence in a statically-typed language like C can be much more of a headache!
We see that lists can even contain other lists as elements.
Such type flexibility is an essential piece of what makes Python code relatively quick and easy to write.
So far we've been considering manipulations of lists as a whole; another essential piece is the accessing of individual elements.
This is done in Python via indexing and slicing, which we'll explore next.
List indexing and slicing
Python provides access to elements in compound types through indexing for single elements, and slicing for multiple elements.
As we'll see, both are indicated by a square-bracket syntax.
Suppose we return to our list of the first several primes
Step4: Python uses zero-based indexing, so we can access the first and second element in using the following syntax
Step5: Elements at the end of the list can be accessed with negative numbers, starting from -1
Step6: You can visualize this indexing scheme this way
Step7: Notice where 0 and 3 lie in the preceding diagram, and how the slice takes just the values between the indices.
If we leave out the first index, 0 is assumed, so we can equivalently write
Step8: Similarly, if we leave out the last index, it defaults to the length of the list.
Thus, the last three elements can be accessed as follows
Step9: Finally, it is possible to specify a third integer that represents the step size; for example, to select every second element of the list, we can write
Step10: A particularly useful version of this is to specify a negative step, which will reverse the array
Step11: Both indexing and slicing can be used to set elements as well as access them.
The syntax is as you would expect
Step12: A very similar slicing syntax is also used in many data science-oriented packages, including NumPy and Pandas (mentioned in the introduction).
Now that we have seen Python lists and how to access elements in ordered compound types, let's take a look at the other three standard compound data types mentioned earlier.
Tuples
Tuples are in many ways similar to lists, but they are defined with parentheses rather than square brackets
Step13: They can also be defined without any brackets at all
Step14: Like the lists discussed before, tuples have a length, and individual elements can be extracted using square-bracket indexing
Step15: The main distinguishing feature of tuples is that they are immutable
Step16: Tuples are often used in a Python program; a particularly common case is in functions that have multiple return values.
For example, the as_integer_ratio() method of floating-point objects returns a numerator and a denominator; this dual return value comes in the form of a tuple
Step17: These multiple return values can be individually assigned as follows
Step18: The indexing and slicing logic covered earlier for lists works for tuples as well, along with a host of other methods.
Refer to the online Python documentation for a more complete list of these.
Dictionaries
Dictionaries are extremely flexible mappings of keys to values, and form the basis of much of Python's internal implementation.
They can be created via a comma-separated list of key
Step19: Items are accessed and set via the indexing syntax used for lists and tuples, except here the index is not a zero-based order but valid key in the dictionary
Step20: New items can be added to the dictionary using indexing as well
Step21: Keep in mind that dictionaries do not maintain any sense of order for the input parameters; this is by design.
This lack of ordering allows dictionaries to be implemented very efficiently, so that random element access is very fast, regardless of the size of the dictionary (if you're curious how this works, read about the concept of a hash table).
The python documentation has a complete list of the methods available for dictionaries.
Sets
The fourth basic collection is the set, which contains unordered collections of unique items.
They are defined much like lists and tuples, except they use the curly brackets of dictionaries
Step22: If you're familiar with the mathematics of sets, you'll be familiar with operations like the union, intersection, difference, symmetric difference, and others.
Python's sets have all of these operations built-in, via methods or operators.
For each, we'll show the two equivalent methods | Python Code:
L = [2, 3, 5, 7]
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="fig/cover-small.jpg">
This notebook contains an excerpt from the Whirlwind Tour of Python by Jake VanderPlas; the content is available on GitHub.
The text and code are released under the CC0 license; see also the companion project, the Python Data Science Handbook.
<!--NAVIGATION-->
< Built-In Types: Simple Values | Contents | Control Flow >
Built-In Data Structures
We have seen Python's simple types: int, float, complex, bool, str, and so on.
Python also has several built-in compound types, which act as containers for other types.
These compound types are:
| Type Name | Example |Description |
|-----------|---------------------------|---------------------------------------|
| list | [1, 2, 3] | Ordered collection |
| tuple | (1, 2, 3) | Immutable ordered collection |
| dict | {'a':1, 'b':2, 'c':3} | Unordered (key,value) mapping |
| set | {1, 2, 3} | Unordered collection of unique values |
As you can see, round, square, and curly brackets have distinct meanings when it comes to the type of collection produced.
We'll take a quick tour of these data structures here.
Lists
Lists are the basic ordered and mutable data collection type in Python.
They can be defined with comma-separated values between square brackets; for example, here is a list of the first several prime numbers:
End of explanation
# Length of a list
len(L)
# Append a value to the end
L.append(11)
L
# Addition concatenates lists
L + [13, 17, 19]
# sort() method sorts in-place
L = [2, 5, 1, 6, 3, 4]
L.sort()
L
Explanation: Lists have a number of useful properties and methods available to them.
Here we'll take a quick look at some of the more common and useful ones:
End of explanation
L = [1, 'two', 3.14, [0, 3, 5]]
Explanation: In addition, there are many more built-in list methods; they are well-covered in Python's online documentation.
While we've been demonstrating lists containing values of a single type, one of the powerful features of Python's compound objects is that they can contain objects of any type, or even a mix of types. For example:
End of explanation
L = [2, 3, 5, 7, 11]
Explanation: This flexibility is a consequence of Python's dynamic type system.
Creating such a mixed sequence in a statically-typed language like C can be much more of a headache!
We see that lists can even contain other lists as elements.
Such type flexibility is an essential piece of what makes Python code relatively quick and easy to write.
So far we've been considering manipulations of lists as a whole; another essential piece is the accessing of individual elements.
This is done in Python via indexing and slicing, which we'll explore next.
List indexing and slicing
Python provides access to elements in compound types through indexing for single elements, and slicing for multiple elements.
As we'll see, both are indicated by a square-bracket syntax.
Suppose we return to our list of the first several primes:
End of explanation
L[0]
L[1]
Explanation: Python uses zero-based indexing, so we can access the first and second elements using the following syntax:
End of explanation
L[-1]
L[-2]
Explanation: Elements at the end of the list can be accessed with negative numbers, starting from -1:
End of explanation
L[0:3]
Explanation: You can visualize this indexing scheme this way:
Here values in the list are represented by large numbers in the squares; list indices are represented by small numbers above and below.
In this case, L[2] returns 5, because that is the next value at index 2.
Where indexing is a means of fetching a single value from the list, slicing is a means of accessing multiple values in sub-lists.
It uses a colon to indicate the start point (inclusive) and end point (non-inclusive) of the sub-array.
For example, to get the first three elements of the list, we can write:
End of explanation
L[:3]
Explanation: Notice where 0 and 3 lie in the preceding diagram, and how the slice takes just the values between the indices.
If we leave out the first index, 0 is assumed, so we can equivalently write:
End of explanation
L[-3:]
Explanation: Similarly, if we leave out the last index, it defaults to the length of the list.
Thus, the last three elements can be accessed as follows:
End of explanation
L[::2] # equivalent to L[0:len(L):2]
Explanation: Finally, it is possible to specify a third integer that represents the step size; for example, to select every second element of the list, we can write:
End of explanation
L[::-1]
Explanation: A particularly useful version of this is to specify a negative step, which will reverse the array:
End of explanation
L[0] = 100
print(L)
L[1:3] = [55, 56]
print(L)
Explanation: Both indexing and slicing can be used to set elements as well as access them.
The syntax is as you would expect:
End of explanation
t = (1, 2, 3)
Explanation: A very similar slicing syntax is also used in many data science-oriented packages, including NumPy and Pandas (mentioned in the introduction).
Now that we have seen Python lists and how to access elements in ordered compound types, let's take a look at the other three standard compound data types mentioned earlier.
Tuples
Tuples are in many ways similar to lists, but they are defined with parentheses rather than square brackets:
End of explanation
t = 1, 2, 3
print(t)
Explanation: They can also be defined without any brackets at all:
End of explanation
len(t)
t[0]
Explanation: Like the lists discussed before, tuples have a length, and individual elements can be extracted using square-bracket indexing:
End of explanation
t[1] = 4
t.append(4)
Explanation: The main distinguishing feature of tuples is that they are immutable: this means that once they are created, their size and contents cannot be changed:
End of explanation
x = 0.125
x.as_integer_ratio()
Explanation: Tuples are often used in a Python program; a particularly common case is in functions that have multiple return values.
For example, the as_integer_ratio() method of floating-point objects returns a numerator and a denominator; this dual return value comes in the form of a tuple:
End of explanation
numerator, denominator = x.as_integer_ratio()
print(numerator / denominator)
Explanation: These multiple return values can be individually assigned as follows:
End of explanation
numbers = {'one':1, 'two':2, 'three':3}
Explanation: The indexing and slicing logic covered earlier for lists works for tuples as well, along with a host of other methods.
Refer to the online Python documentation for a more complete list of these.
Dictionaries
Dictionaries are extremely flexible mappings of keys to values, and form the basis of much of Python's internal implementation.
They can be created via a comma-separated list of key:value pairs within curly braces:
End of explanation
# Access a value via the key
numbers['two']
Explanation: Items are accessed and set via the indexing syntax used for lists and tuples, except here the index is not a zero-based order but a valid key in the dictionary:
End of explanation
# Set a new key:value pair
numbers['ninety'] = 90
print(numbers)
Explanation: New items can be added to the dictionary using indexing as well:
End of explanation
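Before moving on, a brief aside demonstrating a few of the common built-in dictionary methods mentioned in the text (all standard Python, shown here as a small supplement):
# A few more common dictionary operations, using the numbers dict from above
'one' in numbers          # membership tests check the keys
numbers.get('four', 0)    # .get() returns a default instead of raising KeyError
list(numbers.keys())      # .values() and .items() provide the analogous views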
primes = {2, 3, 5, 7}
odds = {1, 3, 5, 7, 9}
Explanation: Keep in mind that dictionaries do not maintain any sense of order for the input parameters; this is by design.
This lack of ordering allows dictionaries to be implemented very efficiently, so that random element access is very fast, regardless of the size of the dictionary (if you're curious how this works, read about the concept of a hash table).
The python documentation has a complete list of the methods available for dictionaries.
Sets
The fourth basic collection is the set, which contains unordered collections of unique items.
They are defined much like lists and tuples, except they use the curly brackets of dictionaries:
End of explanation
# union: items appearing in either
primes | odds # with an operator
primes.union(odds) # equivalently with a method
# intersection: items appearing in both
primes & odds # with an operator
primes.intersection(odds) # equivalently with a method
# difference: items in primes but not in odds
primes - odds # with an operator
primes.difference(odds) # equivalently with a method
# symmetric difference: items appearing in only one set
primes ^ odds # with an operator
primes.symmetric_difference(odds) # equivalently with a method
Explanation: If you're familiar with the mathematics of sets, you'll be familiar with operations like the union, intersection, difference, symmetric difference, and others.
Python's sets have all of these operations built-in, via methods or operators.
For each, we'll show the two equivalent methods:
End of explanation |
8,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
sequana_coverage test case example (fungus)
This notebook creates the BED file S_pombe.filtered.bed provided in
- https
Step1: Download FastQ files (1.6Gb)
Step2: Download reference and annotation files
Step3: The reference FASTA must be altered to rename its headers so that they
agree with the GenBank files
Step4: Sequana_coverage analysis (using the library)
Step5: Chromosome selection
This data set contains several chromosomes. We can select them one by one and analyse them as follows
Step6: Sequana_coverage analysis (using the standalone) | Python Code:
%pylab inline
matplotlib.rcParams['figure.figsize'] = [10,7]
Explanation: sequana_coverage test case example (fungus)
This notebook creates the BED file S_pombe.filtered.bed provided in
- https://github.com/sequana/resources/tree/master/coverage and
- https://www.synapse.org/#!Synapse:syn10638358/wiki/465309
genome length: 5.5Mb
It also shows the ability of the sequana_coverage tool to handle multi-chromosome input data sets
WARNING: To create the input BED file, you will need an account on synapse to get the FastQ files.
You can skip the steps that build the BED file and download it directly from github:
wget https://github.com/sequana/resources/raw/master/coverage/S_pombe.filtered.bed.bz2
bunzip2 S_pombe.filtered.bed.bz2
and jump to the Sequana_coverage analysis (using the library) section directly.
Otherwise, we first download 2 FastQ files from Synapse, its reference genome and its genbank annotation. Then, we use BWA to map reads into a BAM file. The BAM file itself is converted to a BED, which is going to be one input file to our analysis. Finally, we use the coverage tool from Sequana project (i) with the standalone (sequana_coverage) and (ii) the Python library to analyse the BED file.
Versions used:
- sequana 0.7.0
- bwa mem 0.7.15
- bedtools 2.26.0
- samtools 1.5
- synapseclient 1.7.2
End of explanation
import synapseclient
l = synapseclient.login()
l.get("syn10641621", downloadLocation=".", ifcollision="overwrite.local")
l.get("syn10641896", downloadLocation=".", ifcollision="overwrite.local")
Explanation: Download FastQ files (1.6Gb)
End of explanation
!sequana_coverage --download-reference CU329670
!sequana_coverage --download-reference CU329671
!sequana_coverage --download-reference CU329672
!sequana_coverage --download-reference X54421
!sequana_coverage --download-genbank CU329670
!sequana_coverage --download-genbank CU329671
!sequana_coverage --download-genbank CU329672
!sequana_coverage --download-genbank X54421
!cat CU*gbk X*gbk > S_pombe.gbk
!cat CU*.fa X*.fa > S_pombe.fa
Explanation: Download reference and annotation files
End of explanation
files = ['CU329670.fa', "CU329671.fa", "CU329672.fa", "X54421.fa"]
with open("S_pombe.fa", "w") as fout:
for filename in files:
with open(filename, "r") as fin :
for line in fin.readlines():
if line.startswith(">"):
start, end = line.split(None, 1)
accession = start[1:].rsplit("|", 1)[1]
line = ">" + accession + " " + end
fout.write(line)
# The mapping to obtain the sorted BAM file (uses BWA behind the scenes)
!time sequana_mapping \
--file1 M14-19_J29_01_TAAGGCGA-TATCCTCT_L002_R1_001.fastq.gz \
--file2 M14-19_J29_01_TAAGGCGA-TATCCTCT_L002_R2_001.fastq.gz \
--reference S_pombe.fa --thread 4
# Build the BED file (unfiltered)
! time samtools depth -d 30000 S_pombe.fa.sorted.bam -aa > S_pombe.bed
Explanation: The reference FASTA must be altered to rename its headers so that they
agree with the GenBank files
End of explanation
%%time
from sequana import GenomeCov
# If chromosome length is >5Mb, we split the data. Here we know it is 5.5Mb, so let us
# slightly increase the chunksize.
b = GenomeCov("S_pombe.bed", "S_pombe.gbk", chunksize=6000000, low_threshold=-4, high_threshold=4)
b.compute_gc_content("S_pombe.fa")
Explanation: Sequana_coverage analysis (using the library)
End of explanation
chrom = b.chr_list[0]
chrom.run(20001, circular=True)
chrom.plot_coverage()
_ = ylim([0, 400])
chrom.plot_rois(0,1000000)
chrom.plot_gc_vs_coverage(bins=[80, 60], Nlevels=6)
Explanation: Chromosome selection
This data set contains several chromosomes. We can select them one by one and analyse them as follows:
End of explanation
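And a hedged sketch for processing every chromosome in one pass; it reuses only the per-chromosome methods demonstrated above and assumes each element of b.chr_list supports the same calls:
# Sketch: repeat the same analysis for every chromosome in the BED file
for chrom in b.chr_list:
    chrom.run(20001, circular=True)   # same window size as the single-chromosome run
    chrom.plot_coverage()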
!sequana_coverage --input S_pombe.filtered.bed --genbank S_pombe.gbk --reference S_pombe.fa
Explanation: Sequana_coverage analysis (using the standalone)
End of explanation |
8,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tom Augspurger Dplyr/Pandas comparison (copy of 2016-01-01)
See the result here:
http
Step1: using an internet download to get flights.csv
Step2: Data
Step3: Single table verbs
dplyr has a small set of nicely defined verbs. I've listed their closest pandas verbs.
<table>
<tr>
<td><b>dplyr</b></td>
<td><b>pandas</b></td>
</tr>
<tr>
<td><code>filter()</code> (and <code>slice()</code>)</td>
<td><code>query()</code> (and <code>loc[]</code>, <code>iloc[]</code>)</td>
</tr>
<tr>
<td><code>arrange()</code></td>
<td><code>sort_values</code> and <code>sort_index()</code></td>
</tr>
<tr>
<td><code>select() </code>(and <code>rename()</code>)</td>
<td><code>__getitem__ </code> (and <code>rename()</code>)</td>
</tr>
<tr>
<td><code>distinct()</code></td>
<td><code>drop_duplicates()</code></td>
</tr>
<tr>
<td><code>mutate()</code> (and <code>transmute()</code>)</td>
<td>assign</td>
</tr>
<tr>
<td>summarise()</td>
<td>None</td>
</tr>
<tr>
<td>sample_n() and sample_frac()</td>
<td><code>sample</code></td>
</tr>
<tr>
<td><code>%>%</code></td>
<td><code>pipe</code></td>
</tr>
</table>
Some of the "missing" verbs in pandas are because there are other, different ways of achieving the same goal. For example summarise is spread across mean, std, etc. It's closest analog is actually the .agg method on a GroupBy object, as it reduces a DataFrame to a single row (per group). This isn't quite what .describe does.
I've also included the pipe operator from R (%>%), the pipe method from pandas, even though it isn't quite a verb.
Filter rows with filter(), query()
Step4: We see the first big language difference between R and python.
Many python programmers will shun the R code as too magical.
How is the programmer supposed to know that month and day are supposed to represent columns in the DataFrame?
On the other hand, to emulate this very convenient feature of R, python has to write the expression as a string, and evaluate the string in the context of the DataFrame.
The more verbose version
Step5: Arrange rows with arrange(), sort()
Step6: It's worth mentioning the other common sorting method for pandas DataFrames, sort_index. Pandas puts much more emphasis on indices (or row labels) than R.
This is a design decision that has positives and negatives, which we won't go into here. Suffice to say that when you need to sort a DataFrame by the index, use DataFrame.sort_index.
Select columns with select(), []
Step7: But like Hadley mentions, not that useful since it only returns the one column. dplyr and pandas compare well here.
Step8: Pandas is more verbose, but the argument to columns can be any mapping. So it's often used with a function to perform a common task, say df.rename(columns=lambda x
Step9: FYI this returns a numpy array instead of a Series.
Step10: OK, so dplyr wins there from a consistency point of view. unique is only defined on Series, not DataFrames.
Add new columns with mutate()
We at pandas shamelessly stole this for v0.16.0.
Step11: The first example is pretty much identical (aside from the names, mutate vs. assign).
The second example just comes down to language differences. In R, it's possible to implement a function like mutate where you can refer to gain in the line calculating gain_per_hour, even though gain hasn't actually been calculated yet.
In Python, you can have arbitrary keyword arguments to functions (which we needed for .assign), but the order of the arguments is arbitrary since dicts are unsorted and **kwargs is a dict. So you can't have something like df.assign(x=df.a / df.b, y=x ** 2), because you don't know whether x or y will come first (you'd also get an error saying x is undefined).
To work around that with pandas, you'll need to split up the assigns, and pass in a callable to the second assign. The callable receives the intermediate DataFrame and looks up the gain column there. Since the line above returns a DataFrame with the gain column added, the pipeline goes through just fine.
Step12: Summarise values with summarise()
Step13: This is only roughly equivalent.
summarise takes a callable (e.g. mean, sum) and evaluates that on the DataFrame. In pandas these are spread across pd.DataFrame.mean, pd.DataFrame.sum. This will come up again when we look at groupby.
Randomly sample rows with sample_n() and sample_frac()
Step14: Grouped operations
Step15: For me, dplyr's n() looked a bit strange at first, but it's already growing on me.
I think pandas is more difficult for this particular example.
There isn't as natural a way to mix column-agnostic aggregations (like count) with column-specific aggregations like the other two. You end up writing code like .agg({'year'
Step16: Or using statsmodels directly for more control over the lowess, with an extremely lazy
"confidence interval".
Step17: There's a little-known feature to groupby.agg
Step18: The result is a MultiIndex in the columns which can be a bit awkward to work with (you can drop a level with r.columns.droplevel()). Also the syntax going into the .agg may not be the clearest.
Similar to how dplyr provides optimized C++ versions of most of the summarise functions, pandas uses cython optimized versions for most of the agg methods.
Step19: I'm not sure how dplyr is handling the other columns, like year, in the last example. With pandas, it's clear that we're grouping by them since they're included in the groupby. For the last example, we didn't group by anything, so they aren't included in the result.
Chaining
Any follower of Hadley's twitter account will know how much R users love the %>% (pipe) operator. And for good reason! | Python Code:
#%load_ext rpy2.ipython
#%R install.packages("nycflights13", repos='http://cran.us.r-project.org')
#%R library(nycflights13)
#%R write.csv(flights, "flights.csv")
Explanation: Tom Augspurger Dplyr/Pandas comparison (copy of 2016-01-01)
See the result here:
http://nbviewer.ipython.org/urls/gist.githubusercontent.com/TomAugspurger/6e052140eaa5fdb6e8c0/raw/627b77addb4bcfc39ab6be6d85cb461e956fb3a3/dplyr_pandas.ipynb
to reproduce on your WinPython you'll need to get flights.csv in this directory
This notebook compares pandas
and dplyr.
The comparison is just on syntax (verbiage), not performance. Whether you're an R user looking to switch to pandas (or the other way around), I hope this guide will help ease the transition.
We'll work through the introductory dplyr vignette to analyze some flight data.
I'm working on a better layout to show the two packages side by side.
But for now I'm just putting the dplyr code in a comment above each python call.
using R steps to get flights.csv
un-comment the next cell only if you have installed R and want to regenerate flights.csv from the source package
to install R on your Winpython:
how to install R
End of explanation
# Downloading and unzipping a file, without using R:
# source= http://stackoverflow.com/a/34863053/3140336
import io
from zipfile import ZipFile
import requests
def get_zip(file_url):
url = requests.get(file_url)
zipfile = ZipFile(io.BytesIO(url.content))
zip_names = zipfile.namelist()
if len(zip_names) == 1:
file_name = zip_names.pop()
extracted_file = zipfile.open(file_name)
return extracted_file
url=r'https://github.com/winpython/winpython_afterdoc/raw/master/examples/nycflights13_datas/flights.zip'
with io.open("flights.csv", 'wb') as f:
f.write(get_zip(url).read())
# Some prep work to get the data from R and into pandas
%matplotlib inline
import matplotlib.pyplot as plt
#%load_ext rpy2.ipython
import pandas as pd
import seaborn as sns
pd.set_option("display.max_rows", 5)
Explanation: using an internet download to get flights.csv
End of explanation
flights = pd.read_csv("flights.csv", index_col=0)
# dim(flights) <--- The R code
flights.shape # <--- The python code
# head(flights)
flights.head()
Explanation: Data: nycflights13
End of explanation
# filter(flights, month == 1, day == 1)
flights.query("month == 1 & day == 1")
Explanation: Single table verbs
dplyr has a small set of nicely defined verbs. I've listed their closest pandas verbs.
<table>
<tr>
<td><b>dplyr</b></td>
<td><b>pandas</b></td>
</tr>
<tr>
<td><code>filter()</code> (and <code>slice()</code>)</td>
<td><code>query()</code> (and <code>loc[]</code>, <code>iloc[]</code>)</td>
</tr>
<tr>
<td><code>arrange()</code></td>
<td><code>sort_values</code> and <code>sort_index()</code></td>
</tr>
<tr>
<td><code>select() </code>(and <code>rename()</code>)</td>
<td><code>__getitem__ </code> (and <code>rename()</code>)</td>
</tr>
<tr>
<td><code>distinct()</code></td>
<td><code>drop_duplicates()</code></td>
</tr>
<tr>
<td><code>mutate()</code> (and <code>transmute()</code>)</td>
<td>assign</td>
</tr>
<tr>
<td>summarise()</td>
<td>None</td>
</tr>
<tr>
<td>sample_n() and sample_frac()</td>
<td><code>sample</code></td>
</tr>
<tr>
<td><code>%>%</code></td>
<td><code>pipe</code></td>
</tr>
</table>
Some of the "missing" verbs in pandas are because there are other, different ways of achieving the same goal. For example summarise is spread across mean, std, etc. It's closest analog is actually the .agg method on a GroupBy object, as it reduces a DataFrame to a single row (per group). This isn't quite what .describe does.
I've also included the pipe operator from R (%>%), the pipe method from pandas, even though it isn't quite a verb.
Filter rows with filter(), query()
End of explanation
# flights[flights$month == 1 & flights$day == 1, ]
flights[(flights.month == 1) & (flights.day == 1)]
# slice(flights, 1:10)
flights.iloc[:9]
Explanation: We see the first big language difference between R and python.
Many python programmers will shun the R code as too magical.
How is the programmer supposed to know that month and day are supposed to represent columns in the DataFrame?
On the other hand, to emulate this very convenient feature of R, python has to write the expression as a string, and evaluate the string in the context of the DataFrame.
The more verbose version:
End of explanation
# arrange(flights, year, month, day)
flights.sort_values(['year', 'month', 'day'])
# arrange(flights, desc(arr_delay))
flights.sort_values('arr_delay', ascending=False)
Explanation: Arrange rows with arrange(), sort()
End of explanation
# select(flights, year, month, day)
flights[['year', 'month', 'day']]
# select(flights, year:day)
flights.loc[:, 'year':'day']
# select(flights, -(year:day))
# No direct equivalent here. I would typically use
# flights.drop(cols_to_drop, axis=1)
# or fligths[flights.columns.difference(pd.Index(cols_to_drop))]
# point to dplyr!
# select(flights, tail_num = tailnum)
flights.rename(columns={'tailnum': 'tail_num'})['tail_num']
Explanation: It's worth mentioning the other common sorting method for pandas DataFrames, sort_index. Pandas puts much more emphasis on indices (or row labels) than R.
This is a design decision that has positives and negatives, which we won't go into here. Suffice to say that when you need to sort a DataFrame by the index, use DataFrame.sort_index.
Select columns with select(), []
End of explanation
# rename(flights, tail_num = tailnum)
flights.rename(columns={'tailnum': 'tail_num'})
Explanation: But like Hadley mentions, not that useful since it only returns the one column. dplyr and pandas compare well here.
End of explanation
# distinct(select(flights, tailnum))
flights.tailnum.unique()
Explanation: Pandas is more verbose, but the argument to columns can be any mapping. So it's often used with a function to perform a common task, say df.rename(columns=lambda x: x.replace('-', '_')) to replace any dashes with underscores. Also, rename (the pandas version) can be applied to the Index.
One more note on the differences here.
Pandas could easily include a .select method.
xray, a library that builds on top of NumPy and pandas to offer labeled N-dimensional arrays (along with many other things) does just that.
Pandas chooses the .loc and .iloc accessors because any valid selection is also a valid assignment. This makes it easier to modify the data.
python
flights.loc[:, 'year':'day'] = data
where data is an object that is, or can be broadcast to, the correct shape.
Extract distinct (unique) rows
End of explanation
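Before moving on, a quick runnable illustration of that selection-as-assignment point, on a hypothetical toy frame (not part of the original notebook):
# Toy illustration: a valid .loc selection is also a valid assignment target
toy = pd.DataFrame({'year': [2013, 2013], 'month': [1, 1], 'day': [1, 2]},
                   columns=['year', 'month', 'day'])
toy.loc[:, 'year':'day'] = 0   # a scalar broadcasts across the whole selection
toy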
# distinct(select(flights, origin, dest))
flights[['origin', 'dest']].drop_duplicates()
Explanation: FYI this returns a numpy array instead of a Series.
End of explanation
# mutate(flights,
# gain = arr_delay - dep_delay,
# speed = distance / air_time * 60)
flights.assign(gain=flights.arr_delay - flights.dep_delay,
speed=flights.distance / flights.air_time * 60)
# mutate(flights,
# gain = arr_delay - dep_delay,
# gain_per_hour = gain / (air_time / 60)
# )
(flights.assign(gain=flights.arr_delay - flights.dep_delay)
.assign(gain_per_hour = lambda df: df.gain / (df.air_time / 60)))
Explanation: OK, so dplyr wins there from a consistency point of view. unique is only defined on Series, not DataFrames.
Add new columns with mutate()
We at pandas shamelessly stole this for v0.16.0.
End of explanation
# transmute(flights,
# gain = arr_delay - dep_delay,
# gain_per_hour = gain / (air_time / 60)
# )
(flights.assign(gain=flights.arr_delay - flights.dep_delay)
.assign(gain_per_hour = lambda df: df.gain / (df.air_time / 60))
[['gain', 'gain_per_hour']])
Explanation: The first example is pretty much identical (aside from the names, mutate vs. assign).
The second example just comes down to language differences. In R, it's possible to implement a function like mutate where you can refer to gain in the line calculating gain_per_hour, even though gain hasn't actually been calculated yet.
In Python, you can have arbitrary keyword arguments to functions (which we needed for .assign), but the order of the arguments is arbitrary since dicts are unsorted and **kwargs is a dict. So you can't have something like df.assign(x=df.a / df.b, y=x ** 2), because you don't know whether x or y will come first (you'd also get an error saying x is undefined).
To work around that with pandas, you'll need to split up the assigns, and pass in a callable to the second assign. The callable receives the intermediate DataFrame and looks up the gain column there. Since the line above returns a DataFrame with the gain column added, the pipeline goes through just fine.
End of explanation
# summarise(flights,
# delay = mean(dep_delay, na.rm = TRUE))
flights.dep_delay.mean()
Explanation: Summarise values with summarise()
End of explanation
# sample_n(flights, 10)
flights.sample(n=10)
# sample_frac(flights, 0.01)
flights.sample(frac=.01)
Explanation: This is only roughly equivalent.
summarise takes a callable (e.g. mean, sum) and evaluates that on the DataFrame. In pandas these are spread across pd.DataFrame.mean, pd.DataFrame.sum. This will come up again when we look at groupby.
Randomly sample rows with sample_n() and sample_frac()
End of explanation
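As a hedged aside (assuming a pandas version with DataFrame.agg, 0.20+): .agg on a plain DataFrame is the closest one-call analog of summarise when you want several reductions at once.
# DataFrame.agg as a rough summarise analog (pandas >= 0.20)
flights.agg({'dep_delay': 'mean', 'arr_delay': 'mean'})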
# planes <- group_by(flights, tailnum)
# delay <- summarise(planes,
# count = n(),
# dist = mean(distance, na.rm = TRUE),
# delay = mean(arr_delay, na.rm = TRUE))
# delay <- filter(delay, count > 20, dist < 2000)
planes = flights.groupby("tailnum")
delay = (planes.agg({"year": "count",
"distance": "mean",
"arr_delay": "mean"})
.rename(columns={"distance": "dist",
"arr_delay": "delay",
"year": "count"})
.query("count > 20 & dist < 2000"))
delay
Explanation: Grouped operations
End of explanation
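A hedged aside on the rename step above: newer pandas (0.25+) adds "named aggregation", which assigns output names directly and removes the follow-up rename. This is a later alternative, not what the original notebook used.
# Named aggregation (pandas >= 0.25): names come from the keyword arguments
delay2 = (flights.groupby("tailnum")
                 .agg(count=("year", "count"),
                      dist=("distance", "mean"),
                      delay=("arr_delay", "mean"))
                 .query("count > 20 & dist < 2000"))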
fig, ax = plt.subplots(figsize=(12, 6))
sns.regplot("dist", "delay", data=delay, lowess=True, ax=ax,
scatter_kws={'color': 'k', 'alpha': .5, 's': delay['count'] / 10}, ci=90,
line_kws={'linewidth': 3});
Explanation: For me, dplyr's n() looked a bit strange at first, but it's already growing on me.
I think pandas is more difficult for this particular example.
There isn't as natural a way to mix column-agnostic aggregations (like count) with column-specific aggregations like the other two. You end up writing code like .agg({'year': 'count'}) which reads, "I want the count of year", even though you don't care about year specifically. You could just as easily have said .agg({'distance': 'count'}).
Additionally, assigning names can't be done as cleanly in pandas; you have to follow it up with a rename, as before.
We may as well reproduce the graph. It looks like ggplot's geom_smooth is some kind of lowess smoother. We can either use seaborn:
End of explanation
import statsmodels.api as sm
smooth = sm.nonparametric.lowess(delay.delay, delay.dist, frac=1/8)
ax = delay.plot(kind='scatter', x='dist', y = 'delay', figsize=(12, 6),
color='k', alpha=.5, s=delay['count'] / 10)
ax.plot(smooth[:, 0], smooth[:, 1], linewidth=3);
std = smooth[:, 1].std()
ax.fill_between(smooth[:, 0], smooth[:, 1] - std, smooth[:, 1] + std, alpha=.25);
# destinations <- group_by(flights, dest)
# summarise(destinations,
# planes = n_distinct(tailnum),
# flights = n()
# )
destinations = flights.groupby('dest')
destinations.agg({
'tailnum': lambda x: len(x.unique()),
'year': 'count'
}).rename(columns={'tailnum': 'planes',
'year': 'flights'})
Explanation: Or using statsmodels directly for more control over the lowess, with an extremely lazy
"confidence interval".
End of explanation
destinations = flights.groupby('dest')
r = destinations.agg({'tailnum': {'planes': lambda x: len(x.unique())},
'year': {'flights': 'count'}})
r
Explanation: There's a little-known feature of groupby.agg: it accepts a dict of dicts mapping
columns to {name: aggfunc} pairs. Here's the result:
End of explanation
# daily <- group_by(flights, year, month, day)
# (per_day <- summarise(daily, flights = n()))
daily = flights.groupby(['year', 'month', 'day'])
per_day = daily['distance'].count()
per_day
# (per_month <- summarise(per_day, flights = sum(flights)))
per_month = per_day.groupby(level=['year', 'month']).sum()
per_month
# (per_year <- summarise(per_month, flights = sum(flights)))
per_year = per_month.sum()
per_year
Explanation: The result is a MultiIndex in the columns, which can be a bit awkward to work with (you can drop a level with r.columns.droplevel()). Also, the syntax going into the .agg may not be the clearest.
Similar to how dplyr provides optimized C++ versions of most of the summarise functions, pandas uses cython optimized versions for most of the agg methods.
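To flatten those MultiIndex columns (a sketch continuing from the r defined above):
r.columns = r.columns.droplevel(0)   # keep only the inner ('planes', 'flights') level
r.head()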
End of explanation
# flights %>%
# group_by(year, month, day) %>%
# select(arr_delay, dep_delay) %>%
# summarise(
# arr = mean(arr_delay, na.rm = TRUE),
# dep = mean(dep_delay, na.rm = TRUE)
# ) %>%
# filter(arr > 30 | dep > 30)
(
flights.groupby(['year', 'month', 'day'])
[['arr_delay', 'dep_delay']]
.mean()
.query('arr_delay > 30 | dep_delay > 30')
)
Explanation: I'm not sure how dplyr is handling the other columns, like year, in the last example. With pandas, it's clear that we're grouping by them since they're included in the groupby. For the last example, we didn't group by anything, so they aren't included in the result.
Chaining
Any follower of Hadley's twitter account will know how much R users love the %>% (pipe) operator. And for good reason!
End of explanation |
8,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KK's Rscript to Pyscript
Somayaji made a script in R which I am dying to reproduce in python using pandas and other great frameworks. His source file is
Step1: And the Resource for this learning is
Data Analysis with Python Pandas
Source
And, my Notebook which is a copy of the same tutorial in a notebook, whose file name is pandas_tutorial_3mar15.ipynb and can be found in SageMath Cloud.
Step2: <hr>
</hr>
!
Now, we will use the Crime Data csv file for this exercise and it is here in this location
Step3: I would like to change the datatype of the Date column as KK did in his Rscript. For that, I found this link useful.
Step4: I do not know what happened by the above converting command, but I can see 2 changes, while one of them seems positive.
The dates got a - symbol in place /. (Positive)
The IUCR column seems to be totally different from the source csv file (?) Why is it showing NaT instead of an alphanumeric as in the csv file.
The second point may not be due to this command, perhaps it has been this way from when python started reading the file. But I am not sure how to correct this now.
!
Now, I would like to save this newly created dataframe as a new csv file using pandas. This is how I should do it as per the tutorial.
Step5: !
Next is to use PyMySQLdb module and do some data manipulation as Somayaji did in the Rscript. | Python Code:
ls -l *.R
Explanation: KK's Rscript to Pyscript
Somayaji made a script in R which I am dying to reproduce in python using pandas and other great frameworks. His source file is:
End of explanation
from pandas import DataFrame, read_csv
import matplotlib.pyplot as plt
import pandas as pd
import sys
%matplotlib inline
print 'Python Version ' + sys.version
print 'Pandas Version ' + pd.__version__
Explanation: And the Resource for this learning is
Data Analysis with Python Pandas
Source
And, my Notebook which is a copy of the same tutorial in a notebook, whose file name is pandas_tutorial_3mar15.ipynb and can be found in SageMath Cloud.
End of explanation
ls -l *.csv
df = pd.read_csv('Crimes_-_2001_to_present.csv')
df.dtypes # Tells us what data type each column is!
df.ID # Shows us the elements inside the column named, "ID"
# You can also check its data type separately
df.ID.dtype
df.Date.dtype
Explanation: <hr>
Now, we will use the Crime Data csv file for this exercise and it is here in this location:
End of explanation
df = df.convert_objects(convert_dates='coerce', convert_numeric=True)
df.Date.dtype
Explanation: I would like to change the datatype of the Date column as KK did in his Rscript. For that, I found this link useful.
End of explanation
# Here is the command to do it.
# But the parameter `index=False` will prevent
# the index column from being exported to the csv file.
df.to_csv('Crimes_-_2001_to_present_v2.csv', index=False)
ls -l *.csv
Explanation: I do not know what happened with the above converting command, but I can see 2 changes, while one of them seems positive.
The dates got a - symbol in place of /. (Positive)
The IUCR column seems to be totally different from the source csv file (?) Why is it showing NaT instead of an alphanumeric as in the csv file?
The second point may not be due to this command, perhaps it has been this way from when python started reading the file. But I am not sure how to correct this now.
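A likely cause (my assumption, not stated in the tutorial): convert_objects returns a new DataFrame rather than modifying df in place, so the result has to be assigned back. A sketch of both options:
df = df.convert_objects(convert_dates='coerce', convert_numeric=True)
# or parse just the Date column explicitly (errors='coerce' turns unparseable values into NaT):
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')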
Now, I would like to save this newly created dataframe as a new csv file using pandas. This is how I should do it as per the tutorial.
End of explanation
from pandasql import sqldf
from pandasql import load_meat, load_births
meat = load_meat()
births = load_births()
print meat.head()
q = "SELECT * FROM meat LIMIT 10;"
print sqldf(q, locals())
Explanation: Next is to use the PyMySQLdb module and do some data manipulation as Somayaji did in the Rscript.
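As a sketch of where this is heading, pandasql can query the crime DataFrame directly (the `Primary Type` column name is my assumption about the crime CSV's schema):
q2 = "SELECT `Primary Type`, COUNT(*) AS n FROM df GROUP BY `Primary Type` ORDER BY n DESC LIMIT 5;"
print(sqldf(q2, locals()))  # 'Primary Type' is assumed, not verified against the file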
End of explanation |
8,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lexical Analysis of Wikipedia Abstracts
In this notebook we preprocess the biography overviews from the English DBpedia. We train a model to detect bi-grams in text, and we generate a vocabulary where we count in how many biographies each uni-/bi-gram appears.
By Eduardo Graells-Garrido.
Step1: First, we load person data to process only biographies present in our dataset.
Step2: Here we read the biography overviews to train our gensim co-llocations model. Note that you need NLTK to parse sentences.
Step3: Now that we have trained our model, we can identify bi-grams in biographies. Now, we will construct a vocabulary dictionary
Step4: And we save it in a structure to be used in the following notebooks. | Python Code:
from __future__ import print_function, unicode_literals
from dbpedia_utils import iter_entities_from
from collections import defaultdict, Counter
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import gensim
import json
import gzip
import nltk
import dbpedia_config
source_folder = dbpedia_config.DATA_FOLDER
target_folder = dbpedia_config.TARGET_FOLDER
abstracts_file = '{0}/long_abstracts_{1}.nt.bz2'.format(source_folder, dbpedia_config.MAIN_LANGUAGE)
text_language = 'english'
Explanation: Lexical Analysis of Wikipedia Abstracts
In this notebook we preprocess the biography overviews from the English DBpedia. We train a model to detect bi-grams in text, and we generate a vocabulary where we count in how many biographies each uni-/bi-gram appears.
By Eduardo Graells-Garrido.
End of explanation
person_data = pd.read_csv('{0}/person_data_en.csv.gz'.format(target_folder), encoding='utf-8', index_col='uri')
person_data.head()
Explanation: First, we load person data to process only biographies present in our dataset.
End of explanation
def sentences():
for i, entity in enumerate(iter_entities_from(abstracts_file)):
resource = entity['resource']
if resource in person_data.index:
try:
abstract = entity['abstract'].pop()
if abstract:
for sentence in nltk.sent_tokenize(abstract, language=text_language):
yield list(gensim.utils.tokenize(sentence, deacc=True, lowercase=True))
except KeyError:
continue
bigrams = gensim.models.Phrases(sentences())
bigrams.save('{0}/biography_overviews_bigrams.gensim'.format(target_folder))
Explanation: Here we read the biography overviews to train our gensim co-llocations model. Note that you need NLTK to parse sentences.
End of explanation
vocabulary = defaultdict(Counter)
for i, entity in enumerate(iter_entities_from(abstracts_file)):
resource = entity['resource']
if resource in person_data.index:
try:
abstract = entity['abstract'].pop()
if not abstract:
#some biographies have an empty abstract.
continue
gender = person_data.loc[resource].gender
for sentence in nltk.sent_tokenize(abstract, language=text_language):
n_grams = bigrams[list(gensim.utils.tokenize(sentence, deacc=True, lowercase=True))]
vocabulary[gender].update(set(n_grams))
except KeyError:
# some biographies do not have an abstract.
continue
Explanation: Now that we have trained our model, we can identify bi-grams in biographies. Now, we will construct a vocabulary dictionary:
{gender => {word => # of biographies}}
End of explanation
with gzip.open('{0}/vocabulary.json.gz'.format(target_folder), 'wb') as f:
json.dump(vocabulary, f)
Explanation: And we save it in a structure to be used in the following notebooks.
End of explanation |
8,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The most characteristic words in pro- and anti-feminist tweets
Attitude analysis by machine learning as an alternative to sentiment analysis in order to classify tweets as pro- or anti-feminist, and determine the differences in their vocabularies
This project won the Data Science prize andthe NLP prize at the Montreal Big Data Week Hackathon 2015
By David Taylor, Zafarali Ahmed, Jerome Boisvert-Chouinard, Dave Gurnsey, Nancy Lin, and Reda Lotfi
For a blog post about the results
Step1: Download data
The data set was too huge to host on GitHub, so we hosted it on David Taylor's professional website, dtdata.io, along with the manually curated dataset. The full dataset is 248 Mb. It will not be downloaded if it already exists in your working directory, unless you change overwrite to True.
Step2: See whether sentiment analysis can be used as a predicter of attitude.
Sentiment analysis relies on identifying 'positive' or 'negative' words (e.g. 'love', 'awesome'; 'hate', 'sucks'). This does not translate into attitude analysis simply because support for or opposition to a position can be done with both positive- and negative-sentiment words. For example
Step3: Unfortunately, the Twitter Search app and python script we used did not parse unicode characters correctly (it renders unicode character \u2018 as those six characters instead of an opening smart single quote, for example), so we'll change some of the more common ones (smart single and double quotes, en dash, and an emoji) to proper unicode.
Step4: Now we'll test automated sentiment analysis, which is based on a corpus of movie reviews. It gives a score of -1 to +1 for polarity and 0 to 1 for subjectivity, so we'll add those two columns to df_curated and plot them on a graph to see if they are separable.
Step5: There are not many obvious ways from the above graph to predict attitude based on polarity and subjectivity; none of the classes is very separable from the others.
A random forest classifier based on polarity and subjectivity does not do a great job at predicting classes -1 and +1, although it does better predicting class 0 -- the one we are not interested in. Look at the accuracy below; random choice with three categories would have a success rate of 33%.
We also calculate the 'precision' and 'recall' of each class, which are, respectively, the % of those predicted to be in the class (true positives + false positives) that are actually in the class (true positives), and the % of all of those that are actually in the class (true positives + false negatives) that were predicted to be in the class (true positives). (Precision and recall can be a little mindbending at first, but eventually you get used to them).
Step6: Load full tweet dataset and eliminate retweets and duplicates
Step7: Let's have a look at the dataset.
Step8: Description of features
Step9: An example of the tokenizer in action
Step10: Tokenize, vectorize, and classify
Tokenize and vectorize curated tweets (for 'bag-of-words' features) and train a Naive Bayes classifier to classify the remaining 390,000 tweets as class +1, 0, or -1.
Step11: Our test set accuracy is comparable to the Random Forest based on sentiment, but it is much better at predicting classes -1 and +1, at the expense of class 0, which we don't care about. In addition, our precision is way up, which we care about more than recall (i.e. we want the true posives to be correctly classified, we don't care as much whether the correctly classified are true positives... this actually makes sense, we promise!) We're overfitting, given the huge gap between test set and training set accuracy -- the obvious solution for a future implementation is to manually curate some more tweets, or do dimensionality reduction on the bag-of-words vectors
Step13: As you can see, we're overfitting; a lot of tweets are being classified as anti-feminist. However, once we do the log-likelihood method, words that truly do not differ between the two classes should cancel out; we should be safe as long as we draw inferences only from words with very high or low log-likelihood values.
Calculate token frequencies for each class
Step14: We go through all of the approximately 300,000 tweets classified as pro- or anti-feminist, and calculate the frequency of tokens in each class.
Step15: For fun, let's look at the top 10 tokens in each class, although raw numbers aren't that informative; we'll see a lot of stopwords. The log-likelihood results will be much more interesting
Step17: Calculate log-likelihood of each token
Now we calculate the log-likelihood of each word being characteristic of one class or another. Log-likelihood is a measure of significance, so it's essentially a transformed p-value. A good way to think of it is, you've got words in class -1 and in class +1. The 'significance' is a measure of how 'surprised' would you be to have at least as unequal a distribution of that word between the classes if the classes had been randomly assigned.
The nice thing about log-likelihood is that it takes into account both the ratio and the absolute values of the frequencies. In other words, if "goldilocks" (to pick a word totally at random) appeared 10 times in the anti-feminist tweets and 20 times in the pro-feminist tweets, it would have a lower log-likelihood than if it appeared 100 times and 200 times, respectively, even though the ratio between them is the same. How much lower depends on the total size of the dataset; we're more 'surprised' to find differences in randomly-classified words in large datasets than in small ones.
Step18: Here are the top 50 most characteristic tokens in anti-feminist tweets...
Note that the search terms, 'feminist', 'feminism' and 'feminists', will have exaggerated log-likelihoods because of selection bias.
The untranslated unicode character u0001f602 is the
Step19: ... and the top 50 most characteristic tokens in pro-feminist tweets.
Step20: A function to give examples of tokens in tweets and their classes
Obviously, we'll see some misclassified tweets, but hopefully in the minority. | Python Code:
import csv
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import requests
import seaborn as sns
import shutil
import time
import urllib.request
from collections import Counter
from textblob import TextBlob
%matplotlib inline
Explanation: The most characteristic words in pro- and anti-feminist tweets
Attitude analysis by machine learning as an alternative to sentiment analysis in order to classify tweets as pro- or anti-feminist, and determine the differences in their vocabularies
This project won the Data Science prize and the NLP prize at the Montreal Big Data Week Hackathon 2015
By David Taylor, Zafarali Ahmed, Jerome Boisvert-Chouinard, Dave Gurnsey, Nancy Lin, and Reda Lotfi
For a blog post about the results: http://www.prooffreader.com
For a blog post about the methodology: http://prooffreaderplus.blogspot.ca
OUTLINE:
We used the Twitter Search API from January to April, 2015 to search twitter periodically (at random intervals as short as 15 minutes) for the most recent tweets containing the terms 'feminist', 'feminists' and 'feminism' (usually the last 100 tweets, but at random intervals once every few days, the last 1500 tweets).
A CSV file was made of the search results resulting in about 988,000 tweets. Retweets and duplicate tweets (i.e. coming from a 'share this' button on a website) were removed, with the justification that we are studying the words humans choose to actively employ in tweets; simply clicking a button to retweet or share does not involve authorial word choice on the part of the tweeter.
We manually curated approximately 1,000 randomly chosen tweets into the following three categories: 'pro-feminist' (1), 'anti-feminist' (-1), 'neither' (0). This involved some reflection, to try to intuit the underlying attitude of the tweet's author. When in doubt, for example if the tweet was news reporting or if it may have been sarcastic, we defaulted to class 0.
We tokenized the tweets (broke them up into separate words and symbols) with almost no stopwords, with the justification that in only 140 characters, every token matters.
We verified that sentiment analysis was not up to the task of predicting our attitude classes, so we used a Naive Bayes classifier to predict the attitudes of the other 390,000 tweets.
We used the log-likelihood method to determine which words or tokens were most characteristic ('key') to each of the pro- or anti-feminist (+1 or -1) attitude classes.
Import libraries
End of explanation
overwrite = False # change to true to overwrite existing files
for url in ['http://www.dtdata.io/femtwitr/twitter_feminism_201501_201504.csv',
'http://www.dtdata.io/femtwitr/curated_feminism_tweets.csv']:
file_name = os.path.split(url)[1]
if (not os.path.isfile(file_name) or overwrite == True):
with urllib.request.urlopen(url) as response, open(file_name, 'wb+') as out_file:
shutil.copyfileobj(response, out_file)
Explanation: Download data
The data set was too large to host on GitHub, so we hosted it on David Taylor's professional website, dtdata.io, along with the manually curated dataset. The full dataset is 248 Mb. It will not be downloaded if it already exists in your working directory, unless you change overwrite to True.
End of explanation
# load curated tweets into dataframe
df_curated = pd.read_csv('curated_feminism_tweets.csv', encoding='latin-1', index_col=None)
# ensure all tweets are classified
assert len(df_curated) == len(df_curated[df_curated['class'].isin([0,1,-1])])
# get rid of any non-information-carrying columns (e.g. 'index' created from parsing)
df_curated = df_curated[['id', 'tweet', 'class']]
plt.figure(1, figsize=(6,6))
ax = plt.axes([0.1, 0.1, 0.8, 0.8])
labels = '-1: Anti-feminist', '0: Neither pro nor anti', '1: Pro-feminist'
fracs = [len(df_curated[df_curated['class'] == -1]) / len(df_curated),
len(df_curated[df_curated['class'] == 0]) / len(df_curated),
len(df_curated[df_curated['class'] == 1]) / len(df_curated)]
plt.pie(fracs, labels=labels,
colors = ['#cc6666', '#999999', '#66cc66'],
autopct='%1.1f%%', startangle=90)
plt.title('Percentage of classes in curated tweets', fontsize=14)
plt.show()
Explanation: See whether sentiment analysis can be used as a predictor of attitude.
Sentiment analysis relies on identifying 'positive' or 'negative' words (e.g. 'love', 'awesome'; 'hate', 'sucks'). This does not translate into attitude analysis simply because support for or opposition to a position can be done with both positive- and negative-sentiment words. For example:
"I don't like feminists" sentiment=negative attitude=anti-feminist
"I don't like people questioning my feminism" sentiment=negative attitude=pro-feminist
A well-known out-of-the-box sentiment analyzer is that of the pattern library, in this case wrapped in the textblob library.
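A quick check of those two examples with TextBlob (exact polarity values depend on the library version):
from textblob import TextBlob
print(TextBlob("I don't like feminists").polarity)                        # negative
print(TextBlob("I don't like people questioning my feminism").polarity)  # also negative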
First, let's load the curated tweets and see how many there are in each category. We'll use a pie chart, even though it's frowned upon in many circles, because it's universally understood. Since the tweets to curate were randomly chosen, the relative sizes of these categories should reflect those in the entire dataset.
End of explanation
unicode_replacements = (('\\u2018', '\u2018'), ('\\u2019', '\u2019'),
('\\u201c', '\u201c'), ('\\u201d', '\u201d'),
('\\U0001f602', 'u0001f602'), ('\\u2013', '\u2013'),
('\\xe9', 'é'), ('\\x93', ' '), ('\\x94', ' '),
('\\ufe0f', ' '), ('|', ' '))
def fix_unicode(cell):
fixed = cell
for before, after in unicode_replacements:
fixed = fixed.replace(before, after)
fixed = re.sub('http://t.co[^ ]+', 'http://t.co/etc.', fixed)
return fixed
df_curated['tweet_after'] = df_curated.tweet.apply(fix_unicode)
print('{} out of {} tweets now have improperly escaped unicode characters.'.format(
len(df_curated[df_curated.tweet.str.contains('\\\\u')]), len(df_curated)))
_ = len(df_curated)
df_curated.drop_duplicates(subset=['tweet'], inplace=True)
print('{} duplicate tweets dropped'.format(_ - len(df_curated)))
Explanation: Unfortunately, the Twitter Search app and python script we used did not parse unicode characters correctly (it renders unicode character \u2018 as those six characters instead of an opening smart single quote, for example), so we'll change some of the more common ones (smart single and double quotes, en dash, and an emoji) to proper unicode.
End of explanation
def assign_polarity(cell):
return TextBlob(cell).polarity
df_curated['polarity'] = df_curated.tweet.apply(assign_polarity)
def assign_subjectivity(cell):
return TextBlob(cell).subjectivity
df_curated['subjectivity'] = df_curated.tweet.apply(assign_subjectivity)
plt.figure(figsize=(10,10))
#plt.subplot(3,1,1)
plt.xlim(-1.1, 1.1)
plt.ylim(-0.1, 1.1)
plt.xlabel('polarity')
plt.ylabel('subjectivity')
_1 = df_curated[df_curated['class'] == -1]
plt.plot(_1.polarity, _1.subjectivity, 'ro', alpha = 0.5, label='anti-feminist')
#plt.subplot(3,2,1)
_2 = df_curated[df_curated['class'] == 0]
plt.plot(_2.polarity, _2.subjectivity, 'ko', alpha = 0.5, label='neither')
#plt.subplot(3,3,1)
_3 = df_curated[df_curated['class'] == 1]
plt.plot(_3.polarity, _3.subjectivity, 'go', alpha = 0.5, label='pro-feminist')
plt.legend(loc=4)
plt.title('Sentiment analysis cannot separate attitudes', fontsize=15)
plt.show()
# df_ = df_curated[df_curated['class'] == 0]
# g = sns.jointplot(df_.polarity, df_.subjectivity, kind="kde", size=7, space=0)
Explanation: Now we'll test automated sentiment analysis, which is based on a corpus of movie reviews. It gives a score of -1 to +1 for polarity and 0 to 1 for subjectivity, so we'll add those two columns to df_curated and plot them on a graph to see if they are separable.
End of explanation
from sklearn.ensemble import RandomForestClassifier
df_curated['is_train'] = np.random.uniform(0, 1, len(df_curated)) <= .75 # randomly assign training and testing set
train, test = df_curated[df_curated['is_train']==True], df_curated[df_curated['is_train']==False]
features = ['polarity', 'subjectivity']
y = np.array(train['class'])
clf = RandomForestClassifier(n_jobs=2)
clf = clf.fit(train[features], y)
predicted = clf.predict(test[features])
train_predicted = clf.predict(train[features])
test_result = pd.crosstab(test['class'], predicted,
rownames=['actual'], colnames=['predicted'])
total_correct = test_result.loc[-1,-1] + test_result.loc[0,0] + test_result.loc[1,1]
train_result = pd.crosstab(train['class'], train_predicted,
rownames=['actual'], colnames=['predicted'])
train_total_correct = train_result.loc[-1,-1] + train_result.loc[0,0] + train_result.loc[1,1]
print("Confusion matrix:")
print("")
print("Classes:")
print(" predicted")
print(test_result)
print('')
print('Total test set accuracy is {}+{}+{}={} / {} = {}%'.format(
test_result.loc[-1,-1], test_result.loc[0,0], test_result.loc[1,1],
total_correct, len(test), round(100*total_correct/len(test), 1)))
print('Total training set accuracy is {}+{}+{}={} / {} = {}%'.format(
train_result.loc[-1,-1], train_result.loc[0,0], train_result.loc[1,1],
train_total_correct, len(train), round(100*train_total_correct/len(train), 1)))
print('')
print('Class Precision Recall')
for i in [-1, -0, 1]:
print('{:2d} {:2.1f}% {:2.1f}%'.format(i,
100*test_result.loc[i,i] / test_result.loc[i,:].sum(),
100*test_result.loc[i,i] / test_result.loc[:,i].sum()))
Explanation: There are not many obvious ways from the above graph to predict attitude based on polarity and subjectivity; none of the classes is very separable from the others.
A random forest classifier based on polarity and subjectivity does not do a great job at predicting classes -1 and +1, although it does better predicting class 0 -- the one we are not interested in. Look at the accuracy below; random choice with three categories would have a success rate of 33%.
We also calculate the 'precision' and 'recall' of each class, which are, respectively, the % of those predicted to be in the class (true positives + false positives) that are actually in the class (true positives), and the % of all of those that are actually in the class (true positives + false negatives) that were predicted to be in the class (true positives). (Precision and recall can be a little mindbending at first, but eventually you get used to them).
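In terms of the crosstab printed above (rows are actual, columns are predicted), the arithmetic for a class i is, as a sketch:
# precision_i = result.loc[i, i] / result.loc[:, i].sum()   # of those predicted i, the fraction actually i
# recall_i    = result.loc[i, i] / result.loc[i, :].sum()   # of those actually i, the fraction predicted i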
End of explanation
df = pd.read_csv('twitter_feminism_201501_201504.csv', encoding='latin-1')
df.drop('Unnamed: 0', axis=1, inplace=True) # drop garbage duplicate index column
len_1 = len(df)
df = df[~(df.tweet.str.contains('^RT '))]
len_2 = len(df)
df['tweet'] = df.tweet.apply(fix_unicode) # apply the same unicode fixer as above
df.drop_duplicates(subset='tweet', inplace=True)
print('{} tweets loaded\n{} tweets after eliminating retweets\n{} tweets after eliminating duplicate tweets'.format(
len_1, len_2, len(df)))
Explanation: Load full tweet dataset and eliminate retweets and duplicates
End of explanation
df.tail()
Explanation: Let's have a look at the dataset.
End of explanation
## words
regex_str = [
r'<[^>]+>', # HTML tags
r'(?:@[\w_]+)', # @-mentions
r"(?:\#+[\w_]+[\w\'_\-]*[\w_]+)", # hash-tags
r'http[s]?://(?:[a-z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-f][0-9a-f]))+', # URLs
r'(?:(?:\d+,?)+(?:\.?\d+)?)', # numbers
r"(?:[a-z][a-z'\u2019_]+[a-z])", # words with -, ' or curly apostrophe
r'(?:[\w_]+)', # other words
r'(?:\S)' # anything else
]
## compile regex
tokens_re = re.compile(r'('+'|'.join(regex_str)+')', re.VERBOSE | re.IGNORECASE)
def tokenize(string):
return tokens_re.findall(string)
def removable(token):
return token in [',', '.', ':', ';']
def pre_process(string, lowercase=True):
tokens = tokenize(string)
if lowercase:
tokens = [ token.lower() for token in tokens if not removable(token)]
else:
tokens = [ token for token in tokens if not removable(token)]
return tokens
Explanation: Description of features:
term The search term, 'feminism', 'feminist' or 'feminists'
id Unique ID for the tweet, e.g.
tweet TEXT OF THE TWEET
return_num Position during the search, 1, 2, 3... the larger this number,
the older the tweet was
total_num Total number of records retrieved during that search; usually 100,
but every now and then at random intervals I returned as many records
as possible during the search; because I was lazy, I didn't record
the actual number (about 16,000) but put 99999.
db_entered_time string, e.g. '2015-01-16 20:53:13'; all times are UTC
tweeted_time string, e.g. '2015-01-16 20:52:46'
followers how many followers that tweeter has
reply_id If this tweet is a reply to a previous tweet, the ID of that tweet
retweet_count If this is a retweet, how many times this tweet was retweeted
before this retweet
retweeted How many times this tweet was retweeted between its creation and
its search in the database. For this dataset, it is always zero
(i.e. we're always searching the tweet too soon after its creation
to pick up retweets. For rarer search terms, this can be non-zero).
user_location_text A user-entered string describing their location, usually of limited
utility due to entries like "A galaxy far, far away"
statuses_count How many statuses (e.g. not retweets) this tweeter made before
this tweet.
utc_offset How many hours away from UTC the user is; a useful measure of
longitude. e.g. -18000
account_created string, 2009-12-29 21:51:10
Custom tweet tokenizer, adapted from http://marcobonzanini.com/2015/03/09/mining-twitter-data-with-python-part-2/
End of explanation
example = ( "\"Don't\" with straight quotes and apostrophe, \u201cdon\u2019t\u201d with curly quotes and apostrophe. "
"\\u2013 unrecognized unicode; hyphenated-words; @mention: #hashtag -- http://a.url words>separated?by.things, "
"3.1415 numbers alone" )
print(pre_process(example))
Explanation: An example of the tokenizer in action
End of explanation
#randomize order of rows
df_curated.reset_index(drop=True, inplace=True)
df_curated = df_curated.reindex(np.random.permutation(df_curated.index))
#take a 75% training set
train_size = int(len(df_curated) * .75)
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer(tokenizer=pre_process)
bagofwords = vec.fit_transform(df_curated.tweet)
bagofwords = bagofwords.toarray()
train = bagofwords[:train_size,:]
test = bagofwords[train_size:,:]
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB(alpha=0.7).fit(train, df_curated[:train_size]['class'])
predicted = clf.predict(test)
train_predicted = clf.predict(train)
test_result = pd.crosstab(df_curated[train_size:]['class'], predicted,
rownames=['actual'], colnames=['predicted'])
total_correct = test_result.loc[-1,-1] + test_result.loc[0,0] + test_result.loc[1,1]
train_result = pd.crosstab(df_curated[:train_size]['class'], train_predicted,
rownames=['actual'], colnames=['predicted'])
train_total_correct = train_result.loc[-1,-1] + train_result.loc[0,0] + train_result.loc[1,1]
print("Confusion matrix:")
print("")
print("Classes:")
print(" predicted")
print(test_result)
print('')
print('Total test set accuracy is {}+{}+{}={} / {} = {}%'.format(
test_result.loc[-1,-1], test_result.loc[0,0], test_result.loc[1,1],
total_correct, len(test), round(100*total_correct/len(test), 1)))
print('Total train set accuracy is {}+{}+{}={} / {} = {}%'.format(
train_result.loc[-1,-1], train_result.loc[0,0], train_result.loc[1,1],
train_total_correct, len(train), round(100*train_total_correct/len(train), 1)))
print('')
print('Class Precision Recall')
for i in [-1, -0, 1]:
print('{:2d} {:2.1f}% {:2.1f}%'.format(i,
100*test_result.loc[i,i] / test_result.loc[i,:].sum(),
100*test_result.loc[i,i] / test_result.loc[:,i].sum()))
Explanation: Tokenize, vectorize, and classify
Tokenize and vectorize curated tweets (for 'bag-of-words' features) and train a Naive Bayes classifier to classify the remaining 390,000 tweets as class +1, 0, or -1.
End of explanation
%%time
# add the predicted class to the 390,000-tweet dataframe
bagofwords = vec.fit(df_curated.tweet)
df_vector = bagofwords.transform(df.tweet)
df['class'] = clf.predict(df_vector)
plt.figure(1, figsize=(6,6))
ax = plt.axes([0.1, 0.1, 0.8, 0.8])
labels = '-1: Anti-feminist', '0: Neither pro nor anti', '1: Pro-feminist'
fracs = [len(df[df['class'] == -1]) / len(df),
len(df[df['class'] == 0]) / len(df),
len(df[df['class'] == 1]) / len(df)]
plt.pie(fracs, labels=labels,
colors = ['#cc6666', '#999999', '#66cc66'],
autopct='%1.1f%%', startangle=90)
plt.title('Percentage of predicted classes in all tweets', fontsize=14)
plt.show()
Explanation: Our test set accuracy is comparable to the Random Forest based on sentiment, but it is much better at predicting classes -1 and +1, at the expense of class 0, which we don't care about. In addition, our precision is way up, which we care about more than recall (i.e. we want the true positives to be correctly classified, we don't care as much whether the correctly classified are true positives... this actually makes sense, we promise!) We're overfitting, given the huge gap between test set and training set accuracy -- the obvious solution for a future implementation is to manually curate some more tweets, or do dimensionality reduction on the bag-of-words vectors.
End of explanation
# This is a rudimentary progress 'bar' (more of a counter) ... it's nice
# when sharing notebooks after they're run, because you never know how
# widget-based graphic bars will display, and this shows you that the
# function was indeed run all the way and how long it took.
class ProgressBar:
Init with loop_length, i.e. number of events that add up to 100%, then use methods
.increment() for each event, and .finish() when complete.
def __init__(self, loop_length):
import time
self.start = time.time()
self.increment_size = 100.0/loop_length
self.curr_count = 0
self.curr_pct = 0
self.overflow = False
print('% complete:', end=' ')
def increment(self):
self.curr_count += self.increment_size
if int(self.curr_count) > self.curr_pct:
self.curr_pct = int(self.curr_count)
if self.curr_pct <= 100:
print (self.curr_pct, end=' ')
elif self.overflow == False:
print("\n*!* Count has gone over 100%; likely either due to:\n*!* - an error in the loop_length specified when " + \
"progress_bar was instantiated\n*!* - an error in the placement of the increment() function")
print('*!* Elapsed time when progress bar full: %0.1f seconds.' % (time.time() - self.start))
self.overflow = True
def finish(self):
if self.curr_pct == 99:
print("100"), # this is a cheat, because rounding sometimes makes the maximum count 99. One day I'll fix this bug.
if self.overflow == True:
print('*!* Elapsed time after end of loop: %0.1f seconds.\n' % (time.time() - self.start))
else:
print('\nElapsed time: %0.1f seconds.\n' % (time.time() - self.start))
Explanation: As you can see, we're overfitting; a lot of tweets are being classified as anti-feminist. However, once we do the log-likelihood method, words that truly do not differ between the two classes should cancel out; we should be safe as long as we draw inferences only from words with very high or low log-likelihood values.
Calculate token frequencies for each class
End of explanation
pbar = ProgressBar(len(df[df['class'] != 0]))
token_collector = []
frequency_collector = []
class_collector = []
for class_ in [-1, 1]:
df_ = df[df['class'] == class_]
c = Counter()
for key, row in df_.iterrows():
terms = [ term for term in pre_process( row['tweet'] ) ]
c.update(terms)
pbar.increment()
for token, frequency in c.items():
token_collector.append(token)
frequency_collector.append(frequency)
class_collector.append(class_)
pbar.finish()
df_freqs = pd.DataFrame({'token':token_collector, 'freq': frequency_collector, 'class': class_collector })
Explanation: We go through all of the approximately 300,000 tweets classified as pro- or anti-feminist, and calculate the frequency of tokens in each class.
End of explanation
print('pro-feminist:', [x for x in list(df_freqs[df_freqs['class'] == 1].sort('freq', ascending=False).token[:10])])
print('anti-feminist:', [x for x in list(df_freqs[df_freqs['class'] == -1].sort('freq', ascending=False).token[:10])])
Explanation: For fun, let's look at the top 10 tokens in each class, although raw numbers aren't that informative; we'll see a lot of stopwords. The log-likelihood results will be much more interesting
End of explanation
def calc_loglikelihood(n1, t1, n2, t2):
Calculates Dunning log likelihood of an observation of
frequency n1 in a corpus of size t1, compared to a frequency n2
in a corpus of size t2. If result is positive, it is more
likely to occur in corpus 1, otherwise in corpus 2.
from numpy import log
e1 = t1*1.0*(n1+n2)/(t1+t2) # expected values
e2 = t2*1.0*(n1+n2)/(t1+t2)
result = 2 * ((n1 * log(n1/e1)) + n2 * (log(n2/e2)))
if n2*1.0/t2 > n1*1.0/t1:
result = -result
return result
cutoff = 10 # to save time and since low-frequency words have low log-likelihoods anyway,
# we specify that a word has to appear this many times in both classes.
t1 = df_freqs[df_freqs['class'] == 1].freq.sum()
t2 = df_freqs[df_freqs['class'] == -1].freq.sum()
token_collector = []
loglikelihood_collector = []
df_freqs = df_freqs[df_freqs.freq >= cutoff]
pbar = ProgressBar(len(df_freqs.token.unique()))
for token in df_freqs.token.unique():
pbar.increment()
try:
n1 = df_freqs[(df_freqs['class'] == 1) & (df_freqs.token == token)].freq.iloc[0]
except:
n1 = 0
try:
n2 = df_freqs[(df_freqs['class'] == -1) & (df_freqs.token == token)].freq.iloc[0]
except:
n2 = 0
if n1 > cutoff and n2 > cutoff:
token_collector.append(token)
loglikelihood_collector.append(calc_loglikelihood(n1, t1, n2, t2))
pbar.finish()
df_result = pd.DataFrame({'token': token_collector, 'loglikelihood': loglikelihood_collector})
df_result.sort('loglikelihood', inplace=True)
df_result.to_csv('loglikelihoods.csv')
Explanation: Calculate log-likelihood of each token
Now we calculate the log-likelihood of each word being characteristic of one class or another. Log-likelihood is a measure of significance, so it's essentially a transformed p-value. A good way to think of it is, you've got words in class -1 and in class +1. The 'significance' is a measure of how 'surprised' you would be to have at least as unequal a distribution of that word between the classes if the classes had been randomly assigned.
The nice thing about log-likelihood is that it takes into account both the ratio and the absolute values of the frequencies. In other words, if "goldilocks" (to pick a word totally at random) appeared 10 times in the anti-feminist tweets and 20 times in the pro-feminist tweets, it would have a lower log-likelihood than if it appeared 100 times and 200 times, respectively, even though the ratio between them is the same. How much lower depends on the total size of the dataset; we're more 'surprised' to find differences in randomly-classified words in large datasets than in small ones.
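A worked check with the calc_loglikelihood function defined above (corpus sizes are illustrative):
print(calc_loglikelihood(10, 10000, 20, 10000))     # same 1:2 ratio, small counts -> magnitude ~3.4
print(calc_loglikelihood(100, 10000, 200, 10000))   # same ratio, 10x the counts -> magnitude ~34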
End of explanation
print(df_result.head(50))
Explanation: Here are the top 50 most characteristic tokens in anti-feminist tweets...
Note that the search terms, 'feminist', 'feminism' and 'feminists', will have exaggerated log-likelihoods because of selection bias.
The untranslated unicode character u0001f602 is the :joy: emoji, laughing and crying at the same time.
End of explanation
print(df_result.sort('loglikelihood', ascending=False).head(50))
Explanation: ... and the top 50 most characteristic tokens in pro-feminist tweets.
End of explanation
%%time
# Build a smaller dataframe with a tokenized column to search through faster
df_2search = df[['id', 'class', 'tweet']]
def re_tokenize(row):
lst = pre_process(row.tweet)
return(' '.join(lst))
df_2search['tokens'] = df_2search.apply(re_tokenize, axis=1)
# This function prints 10 randomly chosen tweets containing a specified token, and their classes.
def search_tweets(token):
df1_ = df_2search[df_2search.tokens.str.contains('^'+token+' ')]
df2_ = df_2search[df_2search.tokens.str.contains(' '+token+' ')]
df3_ = df_2search[df_2search.tokens.str.contains(' '+token+'$')]
df_ = pd.concat([df1_,df2_,df3_])
df_.reset_index(drop=True, inplace=True)
df_ = df_.reindex(np.random.permutation(df_.index))
for i in range(10):
print('{:2} {}'.format(df_['class'].iloc[i], df_.tweet.iloc[i]))
#first a pro-feminist word:
search_tweets('equality')
#now an anti-feminist word:
search_tweets('can')
# now let's find a term that is not characteristic of either class...
df_result[df_result.loglikelihood > 0].iloc[0]
# ... and search for it. These kinds of words might have a higher rate of misclassification.
search_tweets('add')
Explanation: A function to give examples of tokens in tweets and their classes
Obviously, we'll see some misclassified tweets, but hopefully in the minority.
End of explanation |
8,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: This function will plot a cubic function and the parameter values obtained via Gradient Descent.
Step2: This function will plot a 4th order function and the parameter values obtained via Gradient Descent. You can also add Gaussian noise with a standard deviation determined by the parameter <code>std</code>.
Step3: This is a custom module. It will behave like a single parameter value. We do it this way so we can use PyTorch's built-in optimizers.
Step4: We create an object <code>w</code>; when we call the object with an input of one, it will behave like an individual parameter value, i.e. <code>w(1)</code> is analogous to $w$
Step5: <!--Empty Space for separating topics-->
<h2 id="Saddle">Saddle Points</h2>
Let's create a cubic function with saddle points.
Step6: We create an optimizer with no momentum term
Step7: We run several iterations of stochastic gradient descent and plot the results. We see the parameter values get stuck in the saddle point.
Step8: we create an optimizer with momentum term of 0.9
Step9: We run several iterations of stochastic gradient descent with momentum and plot the results. We see the parameter values do not get stuck in the saddle point.
Step10: <!--Empty Space for separating topics-->
<h2 id="Minima">Local Minima</h2>
In this section, we will create a fourth order polynomial with a local minimum at <i>4</i> and a global minimum at <i>-2</i>. We will then see how the momentum parameter affects convergence to a global minimum. The fourth order polynomial is given by
Step11: We create an optimizer with no momentum term. We run several iterations of stochastic gradient descent and plot the results. We see the parameter values get stuck in the local minimum.
Step12: We create an optimizer with a momentum term of 0.9. We run several iterations of stochastic gradient descent and plot the results. We see the parameter values reach a global minimum.
Step13: <!--Empty Space for separating topics-->
<h2 id="Noise">Noise</h2>
In this section, we will create a fourth order polynomial with a local minimum at 4 and a global minimum at -2, but we will add noise to the function when the gradient is calculated. We will then see how the momentum parameter affects convergence to a global minimum.
with no momentum, we get stuck in a local minimum
Step14: with momentum, we get to the global minimum
Step15: <!--Empty Space for separating topics-->
<h3>Practice</h3>
Create two <code> SGD</code> objects with a learning rate of <code> 0.001</code>. Use the default momentum parameter value for one and a value of <code> 0.9</code> for the second. Use the function <code>plot_fourth_order</code> with an <code>std=100</code>, to plot the different steps of each. Make sure you run the function on two independent cells. | Python Code:
# These are the libraries that will be used for this lab.
import torch
import torch.nn as nn
import matplotlib.pylab as plt
import numpy as np
torch.manual_seed(0)
Explanation: <a href="http://cocl.us/pytorch_link_top">
<img src="https://cocl.us/Pytorch_top" width="750" alt="IBM 10TB Storage" />
</a>
<img src="https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width="200" alt="cognitiveclass.ai logo" />
<h1>Momentum</h1>
<h2>Table of Contents</h2>
<p>In this lab, you will deal with several problems associated with optimization and see how momentum can improve your results.</p>
<ul>
<li><a href="#Saddle">Saddle Points</a></li>
<li><a href="#Minima">Local Minima</a></li>
<li><a href="#Noise"> Noise </a></li>
</ul>
<p>Estimated Time Needed: <b>25 min</b></p>
<hr>
<h2>Preparation</h2>
Import the following libraries that you'll use for this lab:
End of explanation
# Plot the cubic
def plot_cubic(w, optimizer):
LOSS = []
W = torch.arange(-4, 4, 0.1)
for w.state_dict()['linear.weight'][0] in W:
LOSS.append(cubic(w(torch.tensor([[1.0]]))).item())
w.state_dict()['linear.weight'][0] = 4.0
n_epochs = 10
parameter = []
loss_list = []
# n_epochs
# Use PyTorch custom module to implement a polynomial function
for n in range(n_epochs):
optimizer.zero_grad()
loss = cubic(w(torch.tensor([[1.0]])))
loss_list.append(loss)
parameter.append(w.state_dict()['linear.weight'][0].detach().data.item())
loss.backward()
optimizer.step()
plt.plot(parameter, loss_list, 'ro', label = 'parameter values')
plt.plot(W.numpy(), LOSS, label = 'objective function')
plt.xlabel('w')
plt.ylabel('l(w)')
plt.legend()
Explanation: This function will plot a cubic function and the parameter values obtained via Gradient Descent.
End of explanation
# Plot the fourth order function and the parameter values
def plot_fourth_order(w, optimizer, std = 0, color = 'r', paramlabel = 'parameter values', objfun = True):
W = torch.arange(-4, 6, 0.1)
LOSS = []
for w.state_dict()['linear.weight'][0] in W:
LOSS.append(fourth_order(w(torch.tensor([[1.0]]))).item())
w.state_dict()['linear.weight'][0] = 6
n_epochs = 100
parameter = []
loss_list = []
#n_epochs
for n in range(n_epochs):
optimizer.zero_grad()
loss = fourth_order(w(torch.tensor([[1.0]]))) + std * torch.randn(1, 1)
loss_list.append(loss)
parameter.append(w.state_dict()['linear.weight'][0].detach().data.item())
loss.backward()
optimizer.step()
# Plotting
if objfun:
plt.plot(W.numpy(), LOSS, label = 'objective function')
plt.plot(parameter, loss_list, 'ro',label = paramlabel, color = color)
plt.xlabel('w')
plt.ylabel('l(w)')
plt.legend()
Explanation: This function will plot a 4th order function and the parameter values obtained via Gradient Descent. You can also add Gaussian noise with a standard deviation determined by the parameter <code>std</code>.
End of explanation
# Create a linear model
class one_param(nn.Module):
# Constructor
def __init__(self, input_size, output_size):
super(one_param, self).__init__()
self.linear = nn.Linear(input_size, output_size, bias = False)
# Prediction
def forward(self, x):
yhat = self.linear(x)
return yhat
Explanation: This is a custom module. It will behave like a single parameter value. We do it this way so we can use PyTorch's built-in optimizers.
End of explanation
# Create a one_param object
w = one_param(1, 1)
Explanation: We create an object <code>w</code>; when we call the object with an input of one, it will behave like an individual parameter value, i.e. <code>w(1)</code> is analogous to $w$
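For instance (the weight is randomly initialized, so the exact value will differ):
print(w(torch.tensor([[1.0]])))   # behaves like the single parameter value w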
End of explanation
# Define a function to output a cubic
def cubic(yhat):
out = yhat ** 3
return out
Explanation: <!--Empty Space for separating topics-->
<h2 id="Saddle">Saddle Points</h2>
Let's create a cubic function with saddle points.
End of explanation
# Create an optimizer without momentum
optimizer = torch.optim.SGD(w.parameters(), lr = 0.01, momentum = 0)
Explanation: We create an optimizer with no momentum term
End of explanation
# Plot the model
plot_cubic(w, optimizer)
Explanation: We run several iterations of stochastic gradient descent and plot the results. We see the parameter values get stuck in the saddle point.
End of explanation
# Create an optimizer with momentum
optimizer = torch.optim.SGD(w.parameters(), lr = 0.01, momentum = 0.90)
Explanation: We create an optimizer with a momentum term of 0.9
End of explanation
# Plot the model
plot_cubic(w, optimizer)
Explanation: We run several iterations of stochastic gradient descent with momentum and plot the results. We see the parameter values do not get stuck in the saddle point.
End of explanation
# Create a function to calculate the fourth order polynomial
def fourth_order(yhat):
out = torch.mean(2 * (yhat ** 4) - 9 * (yhat ** 3) - 21 * (yhat ** 2) + 88 * yhat + 48)
return out
Explanation: <!--Empty Space for separating topics-->
<h2 id="Minima">Local Minima</h2>
In this section, we will create a fourth order polynomial with a local minimum at <i>4</i> and a global minimum at <i>-2</i>. We will then see how the momentum parameter affects convergence to a global minimum. The fourth order polynomial is given by $l(w) = 2w^4 - 9w^3 - 21w^2 + 88w + 48$.
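A quick numeric check of the two minima (values computed from the polynomial above):
print(fourth_order(torch.tensor(-2.0)))   # global minimum, l(-2) = -108
print(fourth_order(torch.tensor(4.0)))    # local minimum, l(4) = 0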
End of explanation
# Make the prediction without momentum
optimizer = torch.optim.SGD(w.parameters(), lr = 0.001)
plot_fourth_order(w, optimizer)
Explanation: We create an optimizer with no momentum term. We run several iterations of stochastic gradient descent and plot the results. We see the parameter values get stuck in the local minimum.
End of explanation
# Make the prediction with momentum
optimizer = torch.optim.SGD(w.parameters(), lr = 0.001, momentum = 0.9)
plot_fourth_order(w, optimizer)
Explanation: We create an optimizer with a momentum term of 0.9. We run several iterations of stochastic gradient descent and plot the results. We see the parameter values reach a global minimum.
End of explanation
# Make the prediction without momentum when there is noise
optimizer = torch.optim.SGD(w.parameters(), lr = 0.001)
plot_fourth_order(w, optimizer, std = 10)
Explanation: <!--Empty Space for separating topics-->
<h2 id="Noise">Noise</h2>
In this section, we will create a fourth order polynomial with a local minimum at 4 and a global minimum at -2, but we will add noise to the function when the gradient is calculated. We will then see how the momentum parameter affects convergence to a global minimum.
with no momentum, we get stuck in a local minimum
End of explanation
# Make the prediction with momentum when there is noise
optimizer = torch.optim.SGD(w.parameters(), lr = 0.001,momentum = 0.9)
plot_fourth_order(w, optimizer, std = 10)
Explanation: with momentum, we get to the global minimum
End of explanation
# Practice: Create two SGD optimizer with lr = 0.001, and one without momentum and the other with momentum = 0.9. Plot the result out.
# Type your code here
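# One possible solution sketch (run each pair in its own cell, as the exercise asks):
optimizer1 = torch.optim.SGD(w.parameters(), lr=0.001)
plot_fourth_order(w, optimizer1, std=100)
optimizer2 = torch.optim.SGD(w.parameters(), lr=0.001, momentum=0.9)
plot_fourth_order(w, optimizer2, std=100)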
Explanation: <!--Empty Space for separating topics-->
<h3>Practice</h3>
Create two <code> SGD</code> objects with a learning rate of <code> 0.001</code>. Use the default momentum parameter value for one and a value of <code> 0.9</code> for the second. Use the function <code>plot_fourth_order</code> with an <code>std=100</code>, to plot the different steps of each. Make sure you run the function on two independent cells.
End of explanation |
8,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling 2
Step1: Fit an emission line in a stellar spectrum
M dwarfs are low mass stars (less than half of the mass of the sun). Currently we do not understand completely the physics inside low mass stars because they do not behave the same way higher mass stars do. For example, they stay magnetically active longer than higher mass stars. One way to measure magnetic activity is the height of the $H\alpha$ emission line. It is located at $6563$ Angstroms at the spectrum.
Let's search for a spectrum of an M dwarf in the Sloan Digital Sky Survey (SDSS). First, we are going to look for the spectrum in the SDSS database. SDSS has a particular way to identify the stars it observes
Step2: Now that we have the spectrum...
One way to check what is inside the fits file spectrum is the following
Step3: To plot the spectrum we need the flux as a function of wavelength (usually called lambda or $\lambda$). Note that the wavelength is in log scale
Step4: To find the units for flux and wavelength, we look in fitsfile[0].header.
FITS standard requires that the header keyword 'bunit' or 'BUNIT' contains the physical units of the array values. That's where we'll find the flux units.
Step5: Different sources will definite wavelength information differently, so we need to check the documentation. For example, this SDSS tutorial tells us what header keyword to look at.
Step6: We are going to select only the characters of the unit we care about
Step7: Now we are ready to plot the spectrum with all the information.
Step8: We just plotted our spectrum! Check different ranges of wavelength to see how the full spectrum looks like in comparison to the one we saw before.
Fit an Emission Line with a Gaussian Model
The blue dashed line marks the $H\alpha$ emission line. We can tell this is an active star because it has a strong emission line.
Now, we would like to measure the height of this line. Let's use astropy.modeling to fit a gaussian to the $H\alpha$ line. We are going to initialize a gaussian model at the position of the $H\alpha$ line. The idea is that the gaussian amplitude will tell us the height of the line.
Step9: Let's plot the results.
Step10: We can see the fit is not doing a good job. Let's print the parameters of this fit
Step11: Exercise
Go back to the previous plot and try to make the fit work. Note
Step12: After this point, we fit the data in exactly the same way as before, except we use a compound model instead of the gaussian model.
Step13: It works! Let's take a look to the fit we just made.
Step14: Let's print all the parameters in a fancy way
Step15: We can see that the result includes all the fit parameters from the gaussian (mean, std and amplitude) and the two coefficients from the polynomial of degree 1. So now if we want to see just the amplitude
Step16: Conclusions
Step17: Now let's use this new model with a fixed parameter to fit the data the same way we did before.
Step18: We can see in the plot that the height of the fit does not match the $H\alpha$ line height. What happend here is that we were too strict with the mean value, so we did not get a good fit. But the mean value is where we want it! Let's loosen this condition a little. Another thing we can do is to define a minimum and maximum value for the mean.
Step19: Better! By loosening the condition we added to the mean value, we got a better fit and the mean of the gaussian is closer to where we want it.
Exercise
Modify the value of delta to change the minimum and maximum values for the mean of the gaussian. Look for
Step20: We can define a simple custom model by specifying which parameters we want to fit.
Step21: Now we have one more available model to use in the same way we fit data with astropy.modeling.
Step22: The fit looks good in the plot. Let's check the parameters and the Reduced Chi Square value, which will give us information about the goodness of the fit.
Step23: The Reduced Chi Square value is close to 1. Great! This means our fit is good, and we can corroborate it by comparing the values we got for the parameters and the ones we used to simulate the data.
Note
Step24: For the full custom model we can easily set the derivative of the function, which is used by different fitters, for example the LevMarLSQFitter().
Step25: Note Defining default values for the fit parameters allows to define a model as model=SineNew()
We are going to fit the data with our new model. Once more, the fit is very sensitive to the initial conditions due to the non-linearity of the parameters.
Step26: The Reduced Chi Squared value is showing the same as the plot | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.modeling import models, fitting
from astropy.modeling.models import custom_model
from astropy.modeling import Fittable1DModel, Parameter
from astroquery.sdss import SDSS
Explanation: Modeling 2: Create a User Defined Model using astropy.modeling
Authors
Rocio Kiman, Lia Corrales, Zé Vinícius, Stephanie T. Douglas
Learning Goals
Define a new model with astropy
Identify cases where a user-defined model could be useful
Define models in two different ways:
Compound models
Custom models
This tutorial assumes the student knows how to fit data using astropy.modeling. This topic is covered in the Models-Quick-Fit tutorial.
Keywords
modeling, FITS, astrostatistics, matplotlib, model fitting, error bars, scatter plots
Summary
In this tutorial, we will learn how to define a new model in two ways: with a compound model and with a custom model.
<div class="alert alert-info">
**Note:** This tutorial assumes you have already gone through
[Modeling 1](http://www.astropy.org/astropy-tutorials/rst-tutorials/Models-Quick-Fit.html),
which provides an introduction to `astropy.modeling`
</div>
Imports
End of explanation
spectrum = SDSS.get_spectra(plate=1349, fiberID=216, mjd=52797)[0]
Explanation: Fit an emission line in a stellar spectrum
M dwarfs are low mass stars (less than half of the mass of the sun). We do not yet completely understand the physics inside low mass stars, because they do not behave the same way higher mass stars do. For example, they stay magnetically active longer than higher mass stars. One way to measure magnetic activity is the height of the $H\alpha$ emission line. It is located at $6563$ Angstroms in the spectrum.
Let's search for a spectrum of an M dwarf in the Sloan Digital Sky Survey (SDSS). First, we are going to look for the spectrum in the SDSS database. SDSS has a particular way to identify the stars it observes: it uses three numbers: Plate, Fiber and MJD (Modified Julian Date). The star we are going to use has:
* Plate: 1349
* Fiber: 216
* MJD: 52797
So go ahead, put these numbers in the website and click on Plot to visualize the spectrum. Try to localize the $H\alpha$ line.
We could download the spectrum by hand from this website, but we are going to import it using the SDSSClass from astroquery.sdss. We can get the spectrum using the plate, fiber and mjd in the following way:
End of explanation
spectrum[1].columns
Explanation: Now that we have the spectrum...
One way to check what is inside the fits file spectrum is the following:
End of explanation
flux = spectrum[1].data['flux']
lam = 10**(spectrum[1].data['loglam'])
Explanation: To plot the spectrum we need the flux as a function of wavelength (usually called lambda or $\lambda$). Note that the wavelength is in log scale: loglam, so we calculate $10^\lambda$ to remove this scale.
End of explanation
#Units of the flux
units_flux = spectrum[0].header['bunit']
print(units_flux)
Explanation: To find the units for flux and wavelength, we look in fitsfile[0].header.
FITS standard requires that the header keyword 'bunit' or 'BUNIT' contains the physical units of the array values. That's where we'll find the flux units.
End of explanation
#Units of the wavelegth
units_wavelength_full = spectrum[0].header['WAT1_001']
print(units_wavelength_full)
Explanation: Different sources will define wavelength information differently, so we need to check the documentation. For example, this SDSS tutorial tells us what header keyword to look at.
End of explanation
units_wavelength = units_wavelength_full[36:]
print(units_wavelength)
Explanation: We are going to select only the characters of the unit we care about: Angstroms
End of explanation
plt.plot(lam, flux, color='k')
plt.xlim(6300,6700)
plt.axvline(x=6563, linestyle='--')
plt.xlabel('Wavelength ({})'.format(units_wavelength))
plt.ylabel('Flux ({})'.format(units_flux))
plt.show()
Explanation: Now we are ready to plot the spectrum with all the information.
End of explanation
gaussian_model = models.Gaussian1D(1, 6563, 10)
fitter = fitting.LevMarLSQFitter()
gaussian_fit = fitter(gaussian_model, lam, flux)
Explanation: We just plotted our spectrum! Check different ranges of wavelength to see what the full spectrum looks like in comparison to the region we saw before.
Fit an Emission Line with a Gaussian Model
The blue dashed line marks the $H\alpha$ emission line. We can tell this is an active star because it has a strong emission line.
Now, we would like to measure the height of this line. Let's use astropy.modeling to fit a gaussian to the $H\alpha$ line. We are going to initialize a gaussian model at the position of the $H\alpha$ line. The idea is that the gaussian amplitude will tell us the height of the line.
End of explanation
plt.figure(figsize=(8,5))
plt.plot(lam, flux, color='k')
plt.plot(lam, gaussian_fit(lam), color='darkorange')
plt.xlim(6300,6700)
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux ({})'.format(units_flux))
plt.show()
Explanation: Let's plot the results.
End of explanation
print(gaussian_fit)
Explanation: We can see the fit is not doing a good job. Let's print the parameters of this fit:
End of explanation
compound_model = models.Gaussian1D(1, 6563, 10) + models.Polynomial1D(degree=1)
Explanation: Exercise
Go back to the previous plot and try to make the fit work. Note: Do not spend more than 10 minutes on this exercise. A couple of ideas to try:
* Is it not working because of the model we chose to fit? You can find more models to use here.
* Is it not working because of the fitter we chose?
* Is it not working because of the range of data we are fitting?
* Is it not working because how we are plotting the data?
Compound models
One model is not enough to make this fit work. We need to combine a couple of models to make a compound model in astropy. The idea is that we can add, divide or multiply models that already exist in astropy.modeling and fit the compound model to our data.
For our problem we are going to combine the gaussian with a polynomial of degree 1 to account for the background spectrum close to the $H\alpha$ line. Take a look at the plot we made before to convince yourself that this is the case.
Now let's make our compound model!
End of explanation
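Before fitting, note that model arithmetic is not limited to addition; models can also be multiplied (or divided, or composed). A minimal sketch, not used in this fit:
scaled_gauss = models.Gaussian1D(1, 6563, 10) * models.Const1D(amplitude=2.)
print(scaled_gauss(6563))  # the product evaluates to 2.0 at the Gaussian mean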
fitter = fitting.LevMarLSQFitter()
compound_fit = fitter(compound_model, lam, flux)
plt.figure(figsize=(8,5))
plt.plot(lam, flux, color='k')
plt.plot(lam, compound_fit(lam), color='darkorange')
plt.xlim(6300,6700)
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux ({})'.format(units_flux))
plt.show()
Explanation: After this point, we fit the data in exactly the same way as before, except we use a compound model instead of the gaussian model.
End of explanation
print(compound_fit)
Explanation: It works! Let's take a look to the fit we just made.
End of explanation
for x,y in zip(compound_fit.param_names, compound_fit.parameters):
print(x,y)
Explanation: Let's print all the parameters in a fancy way:
End of explanation
compound_fit.amplitude_0
Explanation: We can see that the result includes all the fit parameters from the gaussian (mean, std and amplitude) and the two coefficients from the polynomial of degree 1. So now if we want to see just the amplitude:
End of explanation
compound_model_fixed = models.Gaussian1D(1, 6563, 10) + models.Polynomial1D(degree=1)
compound_model_fixed.mean_0.fixed = True
Explanation: Conclusions: What was the difference between the first simple Gaussian and the compound model? The linear model that we added to the gaussian model allowed the base of the Gaussian fit to have a slope and a background level. Normal Gaussians go to zero at $\pm\infty$; this one doesn't.
Fixed or bounded model parameters
The mean value of the gaussian from our previous model indicates where the $H\alpha$ line is. In our fit result, we can tell that it is a little off from $6563$ Angstroms. One way to fix this is to fix some of the parameters of the model. In astropy.modeling these are called fixed parameters.
End of explanation
fitter = fitting.LevMarLSQFitter()
compound_fit_fixed = fitter(compound_model_fixed, lam, flux)
plt.figure(figsize=(8,5))
plt.plot(lam, flux, color='k')
plt.plot(lam, compound_fit_fixed(lam), color='darkorange')
plt.xlim(6300,6700)
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux ({})'.format(units_flux))
plt.show()
print(compound_fit_fixed)
Explanation: Now let's use this new model with a fixed parameter to fit the data the same way we did before.
End of explanation
compound_model_bounded = models.Gaussian1D(1, 6563, 10) + models.Polynomial1D(degree=1)
delta = 0.5
compound_model_bounded.mean_0.max = 6563 + delta
compound_model_bounded.mean_0.min = 6563 - delta
fitter = fitting.LevMarLSQFitter()
compound_fit_bounded = fitter(compound_model_bounded, lam, flux)
plt.figure(figsize=(8,5))
plt.plot(lam, flux, color='k')
plt.plot(lam, compound_fit_bounded(lam), color='darkorange')
plt.xlim(6300,6700)
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux ({})'.format(units_flux))
plt.show()
print(compound_fit_bounded)
Explanation: We can see in the plot that the height of the fit does not match the $H\alpha$ line height. What happened here is that we were too strict with the mean value, so we did not get a good fit. But the mean value is where we want it! Let's loosen this condition a little. Another thing we can do is to define a minimum and maximum value for the mean.
End of explanation
x1 = np.linspace(0,10,100)
a = 3
b = -2
c = 0
y1 = a*np.exp(b*x1+c)
y1 += np.random.normal(0., 0.2, x1.shape)
y1_err = np.ones(x1.shape)*0.2
plt.errorbar(x1 , y1, yerr=y1_err, fmt='.')
plt.show()
Explanation: Better! By loosening the condition we added to the mean value, we got a better fit and the mean of the gaussian is closer to where we want it.
Exercise
Modify the value of delta to change the minimum and maximum values for the mean of the gaussian. Look for:
* A better delta, for which the mean is closer to the real value of the $H\alpha$ line.
* What is the minimum delta for which the fit is still good according to the plot?
Custom model
What should you do if you need a model that astropy.modeling doesn't provide? To solve that problem, Astropy has another tool called custom model. Using this tool, we can create any model we want.
We will describe two ways to create a custom model:
* basic
* full
We use the basic custom model when we need a simple function to fit and the full custom model when we need a more complex function. Let's use an example to understand each one of the custom models.
Basic custom model
An Exponential Model is not provided by Astropy models. Let's see an example of a basic custom model for this case. First, let's simulate a dataset that follows an exponential:
End of explanation
@custom_model
def exponential(x, a=1., b=1., c=1.):
'''
f(x)=a*exp(b*x + c)
'''
return a*np.exp(b*x+c)
Explanation: We can define a simple custom model by specifying which parameters we want to fit.
End of explanation
exp_model = exponential(1.,-1.,1.)
fitter = fitting.LevMarLSQFitter()
exp_fit = fitter(exp_model, x1, y1, weights = 1.0/y1_err**2)
plt.errorbar(x1 , y1, yerr=y1_err, fmt='.')
plt.plot(x1, exp_fit(x1))
plt.show()
print(exp_fit)
Explanation: Now we have one more available model to use in the same way we fit data with astropy.modeling.
End of explanation
def calc_reduced_chi_square(fit, x, y, yerr, N, n_free):
'''
fit (array) values for the fit
x,y,yerr (arrays) data
N total number of points
n_free number of parameters we are fitting
'''
return 1.0/(N-n_free)*sum(((fit - y)/yerr)**2)
calc_reduced_chi_square(exp_fit(x1), x1, y1, y1_err, len(x1), 3)
Explanation: The fit looks good in the plot. Let's check the parameters and the Reduced Chi Square value, which will give us information about the goodness of the fit.
End of explanation
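For reference, the statistic computed by this helper is the usual reduced chi-square,
$$\chi^2_\nu = \frac{1}{N - n_{\rm free}} \sum_{i=1}^{N} \left( \frac{f(x_i) - y_i}{\sigma_i} \right)^2,$$
which should come out close to 1 when the model describes the data within the quoted errors.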
x2 = np.linspace(0,10,100)
a = 3
b = 2
c = 4
d = 1
y2 = a*np.sin(b*x2+c)+d
y2 += np.random.normal(0., 0.5, x2.shape)
y2_err = np.ones(x2.shape)*0.3
plt.errorbar(x2, y2, yerr=y2_err, fmt='.')
plt.show()
Explanation: The Reduced Chi Square value is close to 1. Great! This means our fit is good, and we can corroborate it by comparing the values we got for the parameters and the ones we used to simulate the data.
Note: Fits of non-linear parameters (like in our example) are extremely dependent on initial conditions. Pay attention to the initial conditions you select.
Exercise
Modify the initial conditions of the fit and check for yourself the relation between the best-fit parameters and the initial conditions for the previous example. You can check it by looking at the Reduced Chi Square value: if it gets closer to 1 the fit is better, and vice versa. To compare the quality of the fits you can take note of the Reduced Chi Square value you get for each initial condition.
Full custom model
What if we want to use a model from astropy.modeling, but with a different set of parameters? One example is the Sine Model. It has a very particular definition of the frequency and phase. Let's define a new Sine function with a full custom model. Again, first let's create a simulated dataset.
End of explanation
class SineNew(Fittable1DModel):
a = Parameter(default=1.)
b = Parameter(default=1.)
c = Parameter(default=1.)
d = Parameter(default=1.)
@staticmethod
def evaluate(x, a, b, c, d):
return a*np.sin(b*x+c)+d
@staticmethod
def fit_deriv(x, a, b, c, d):
d_a = np.sin(b*x+c)  # df/da (the constant offset d drops out of this derivative)
d_b = a*np.cos(b*x+c)*x  # df/db
d_c = a*np.cos(b*x+c)  # df/dc (cosine, not sine)
d_d = np.ones(x.shape)
return [d_a, d_b, d_c, d_d]
Explanation: For the full custom model we can easily set the derivative of the function, which is used by different fitters, for example the LevMarLSQFitter().
End of explanation
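As a quick sanity check of the analytic derivatives above (an addition of this write-up, not part of the original tutorial), we can compare fit_deriv against central finite differences:
x_chk = np.linspace(0, 10, 5)
pars = {'a': 3., 'b': 2., 'c': 4., 'd': 1.}
analytic = SineNew.fit_deriv(x_chk, **pars)
eps = 1e-6
for i, name in enumerate(['a', 'b', 'c', 'd']):
    up, down = dict(pars), dict(pars)
    up[name] += eps
    down[name] -= eps
    numeric = (SineNew.evaluate(x_chk, **up) - SineNew.evaluate(x_chk, **down)) / (2*eps)
    print(name, np.allclose(analytic[i], numeric, atol=1e-4))  # expect True for all four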
sine_model = SineNew(a=4.,b=2.,c=4.,d=0.)
fitter = fitting.LevMarLSQFitter()
sine_fit = fitter(sine_model, x2, y2, weights = 1.0/y2_err**2)
plt.errorbar(x2, y2, yerr=y2_err, fmt='.')
plt.plot(x2,sine_fit(x2))
plt.show()
print(sine_fit)
calc_reduced_chi_square(sine_fit(x2), x2, y2, y2_err, len(x2), 3)
Explanation: Note: Defining default values for the fit parameters allows us to instantiate the model with no arguments, as model=SineNew()
We are going to fit the data with our new model. Once more, the fit is very sensitive to the initial conditions due to the non-linearity of the parameters.
End of explanation
x3 = np.linspace(-2,3,100)
y3 = x3**2* np.exp(-0.5 * (x3)**3 / 2**2)
y3 += np.random.normal(0., 0.5, x3.shape)
y3_err = np.ones(x3.shape)*0.5
plt.errorbar(x3,y3,yerr=y3_err,fmt='.')
plt.show()
Explanation: The Reduced Chi Squared value tells the same story as the plot: this fit could be improved. The Reduced Chi Squared is not close to 1 and the fit is off by a small phase.
Exercise
Play with the initial values for the last fit and improve the Reduced Chi Squared value.
Note: A fancy way of doing this would be to code a function which iterates over different initial conditions, optimizing the Reduced Chi Squared value. No need to do it here, but feel free to try.
Exercise
Custom models are also useful when we want to fit an unusual function to our data. As an example, create a full custom model to fit the following data.
End of explanation |
8,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The path below is a shared directory; swap in your own.
Step1: Replication of 'csv_to_hdf5.py'
The original repo used a bizarre tuple-based method of reading in data to save in an HDF5 file using fuel. The following reproduces that module's approach, only using pandas and saving in a bcolz format (w/ training data as example)
Step2: The array of long/lat coordinates per trip (row) is read in as a string. The function ast.literal_eval(x) evaluates the string into the expression it represents (safely). This happens below
Step3: Split into latitude/longitude
Step4: Further Feature Engineering
After converting 'csv_to_hdf5.py' functionality to pandas, I saved that array and then simply constructed the rest of the features as specified in the paper using pandas. I didn't bother seeing how the author did it as it was extremely obtuse and involved the fuel module.
Step5: The paper discusses how many categorical variables there are per category. The following all check out
Step6: Self-explanatory
Step7: Quarter hour of the day, i.e. 1 of the 4*24 = 96 quarter hours of the day
Step8: Self-explanatory
Step9: Target coords are the last in the sequence (final position). If there are no positions, or only 1, then mark as invalid w/ nan in order to drop later
Step10: This function creates the continuous inputs, which are the concatened k first and k last coords in a sequence, as discussed in the paper.
If there aren't at least 2* k coords excluding the target, then the k first and k last overlap. In this case the sequence (excluding target) is padded at the end with the last coord in the sequence. The paper mentioned they padded front and back but didn't specify in what manner.
Also marks any invalid w/ na's
Step11: Drop na's
Step12: End to end feature transformation
Step13: Pre-calculated below on train set
Step14: MEANSHIFT
Meanshift clustering as performed in the paper
Step15: Clustering performed on the targets
Step16: Can use the commented out code for a estimate of bandwidth, which causes clustering to converge much quicker.
This is not mentioned in the paper but is included in the code. In order to get results similar to the paper's,
they manually chose the uncommented bandwidth
Step17: This takes some time
Step18: This is very close to the number of clusters mentioned in the paper
Step19: Formatting Features for Bcolz iterator / garbage
Step20: MODEL
Load training data and cluster centers
Step21: Validation cuts
Step22: The equirectangular loss function mentioned in the paper.
Note
Step23: The following returns a fully-connected model as mentioned in the paper. Takes as input k as defined before, and the cluster centers.
Inputs
Step24: As mentioned, construction of repeated cluster longs/lats for input
Iterator for in memory train pandas dataframe. I did this as opposed to bcolz iterator due to the pre-processing
Step25: Of course, k in the model needs to match k from feature construction. We again use 5 as they did in the paper
Step26: Paper used SGD opt w/ following paramerters
Step27: original
Step28: new valid
Step29: It works, but it seems to converge unrealistically quickly and the loss values are not the same. The paper does not mention what it's using as "error" in its results. I assume the same equirectangular? Not very clear. The difference in values could be due to how the Earth-radius factor is applied
Kaggle Entry
Step30: To-do
Step31: HDF5 files | Python Code:
data_path = "data/taxi/"
Explanation: The path below is a shared directory; swap in your own.
End of explanation
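The cells below assume imports along these lines (a sketch; utils2 is this repo's utility module providing the bcolz save_array/load_array helpers, and the exact Keras import paths depend on your Keras version):
import ast
import datetime
import numpy as np
import pandas as pd
import utils2
from sklearn.model_selection import train_test_split
from keras import backend as K
from keras.models import Model, load_model
from keras.layers import Input, Embedding, Flatten, Dense, concatenate, dot
from keras.regularizers import l2
from keras.optimizers import SGD, Adam
from keras.callbacks import ModelCheckpoint
from keras_tqdm import TQDMNotebookCallback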
meta = pd.read_csv(data_path+'metaData_taxistandsID_name_GPSlocation.csv', header=0)
meta.head()
train = pd.read_csv(data_path+'train/train.csv', header=0)
train.head()
train['ORIGIN_CALL'] = pd.Series(pd.factorize(train['ORIGIN_CALL'])[0]) + 1
train['ORIGIN_STAND']=pd.Series([0 if pd.isnull(x) or x=='' else int(x) for x in train["ORIGIN_STAND"]])
train['TAXI_ID'] = pd.Series(pd.factorize(train['TAXI_ID'])[0]) + 1
train['DAY_TYPE'] = pd.Series([(ord(x[0]) - ord('A')) for x in train['DAY_TYPE']]) # - correct
Explanation: Replication of 'csv_to_hdf5.py'
The original repo used a bizarre tuple-based method of reading in data to save in an HDF5 file using fuel. The following reproduces that module's approach, only using pandas and saving in a bcolz format (w/ training data as example)
End of explanation
polyline = pd.Series([ast.literal_eval(x) for x in train['POLYLINE']])
Explanation: The array of long/lat coordinates per trip (row) is read in as a string. The function ast.literal_eval(x) evaluates the string into the expression it represents (safely). This happens below
End of explanation
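A tiny illustration with a made-up polyline string (not a row from the dataset):
import ast
print(ast.literal_eval("[[-8.618643, 41.141412], [-8.618499, 41.141376]]"))
# -> a Python list of [longitude, latitude] pairs instead of one long string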
train['LATITUDE'] = pd.Series([np.array([point[1] for point in poly],dtype=np.float32) for poly in polyline])
train['LONGITUDE'] = pd.Series([np.array([point[0] for point in poly],dtype=np.float32) for poly in polyline])
utils2.save_array(data_path+'train/train.bc', train.as_matrix())
utils2.save_array(data_path+'train/meta_train.bc', meta.as_matrix())
Explanation: Split into latitude/longitude
End of explanation
train = pd.DataFrame(utils2.load_array(data_path+'train/train.bc'), columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE'])
train.head()
Explanation: Further Feature Engineering
After converting 'csv_to_hdf5.py' functionality to pandas, I saved that array and then simply constructed the rest of the features as specified in the paper using pandas. I didn't bother seeing how the author did it as it was extremely obtuse and involved the fuel module.
End of explanation
train['ORIGIN_CALL'].max()
train['ORIGIN_STAND'].max()
train['TAXI_ID'].max()
Explanation: The paper discusses how many categorical variables there are per category. The following all check out
End of explanation
train['DAY_OF_WEEK'] = pd.Series([datetime.datetime.fromtimestamp(t).weekday() for t in train['TIMESTAMP']])
Explanation: Self-explanatory
End of explanation
train['QUARTER_HOUR'] = pd.Series([int((datetime.datetime.fromtimestamp(t).hour*60 + datetime.datetime.fromtimestamp(t).minute)/15)
for t in train['TIMESTAMP']])
Explanation: Quarter hour of the day, i.e. 1 of the 4*24 = 96 quarter hours of the day
End of explanation
train['WEEK_OF_YEAR'] = pd.Series([datetime.datetime.fromtimestamp(t).isocalendar()[1] for t in train['TIMESTAMP']])
Explanation: Self-explanatory
End of explanation
train['TARGET'] = pd.Series([[l[1][0][-1], l[1][1][-1]] if len(l[1][0]) > 1 else np.nan for l in train[['LONGITUDE','LATITUDE']].iterrows()])
Explanation: Target coords are the last in the sequence (final position). If there are no positions, or only 1, then mark as invalid w/ nan in order to drop later
End of explanation
def start_stop_inputs(k):
result = []
for l in train[['LONGITUDE','LATITUDE']].iterrows():
if len(l[1][0]) < 2 or len(l[1][1]) < 2:
result.append(np.nan)
elif len(l[1][0][:-1]) >= 2*k:
result.append(np.concatenate([l[1][0][0:k],l[1][0][-(k+1):-1],l[1][1][0:k],l[1][1][-(k+1):-1]]).flatten())
else:
l1 = np.lib.pad(l[1][0][:-1], (0,4*k-len(l[1][0][:-1])), mode='edge')  # 4*k instead of the hardcoded 20, which silently assumed k=5
l2 = np.lib.pad(l[1][1][:-1], (0,4*k-len(l[1][1][:-1])), mode='edge')
result.append(np.concatenate([l1[0:k],l1[-k:],l2[0:k],l2[-k:]]).flatten())
return pd.Series(result)
train['COORD_FEATURES'] = start_stop_inputs(5)
train.shape
train.dropna().shape
Explanation: This function creates the continuous inputs, which are the concatened k first and k last coords in a sequence, as discussed in the paper.
If there aren't at least 2* k coords excluding the target, then the k first and k last overlap. In this case the sequence (excluding target) is padded at the end with the last coord in the sequence. The paper mentioned they padded front and back but didn't specify in what manner.
Also marks any invalid w/ na's
End of explanation
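To see what the edge padding does for a short trip, here is a toy example (made-up coordinates, k = 5):
k = 5
short_trip = np.array([1., 2., 3.])  # only 3 coords left after dropping the target
padded = np.lib.pad(short_trip, (0, 4*k - len(short_trip)), mode='edge')
print(np.concatenate([padded[0:k], padded[-k:]]))
# -> [ 1.  2.  3.  3.  3.  3.  3.  3.  3.  3.]: the last coordinate is repeated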
train = train.dropna()
utils2.save_array(data_path+'train/train_features.bc', train.as_matrix())
Explanation: Drop na's
End of explanation
train = pd.read_csv(data_path+'train/train.csv', header=0)
test = pd.read_csv(data_path+'test/test.csv', header=0)
def start_stop_inputs(k, data, test):
result = []
for l in data[['LONGITUDE','LATITUDE']].iterrows():
if not test:
if len(l[1][0]) < 2 or len(l[1][1]) < 2:
result.append(np.nan)
elif len(l[1][0][:-1]) >= 2*k:
result.append(np.concatenate([l[1][0][0:k],l[1][0][-(k+1):-1],l[1][1][0:k],l[1][1][-(k+1):-1]]).flatten())
else:
l1 = np.lib.pad(l[1][0][:-1], (0,4*k-len(l[1][0][:-1])), mode='edge')
l2 = np.lib.pad(l[1][1][:-1], (0,4*k-len(l[1][1][:-1])), mode='edge')
result.append(np.concatenate([l1[0:k],l1[-k:],l2[0:k],l2[-k:]]).flatten())
else:
if len(l[1][0]) < 1 or len(l[1][1]) < 1:
result.append(np.nan)
elif len(l[1][0]) >= 2*k:
result.append(np.concatenate([l[1][0][0:k],l[1][0][-k:],l[1][1][0:k],l[1][1][-k:]]).flatten())
else:
l1 = np.lib.pad(l[1][0], (0,4*k-len(l[1][0])), mode='edge')
l2 = np.lib.pad(l[1][1], (0,4*k-len(l[1][1])), mode='edge')
result.append(np.concatenate([l1[0:k],l1[-k:],l2[0:k],l2[-k:]]).flatten())
return pd.Series(result)
Explanation: End to end feature transformation
End of explanation
lat_mean = 41.15731
lat_std = 0.074120656
long_mean = -8.6161413
long_std = 0.057200309
def feature_ext(data, test=False):
data['ORIGIN_CALL'] = pd.Series(pd.factorize(data['ORIGIN_CALL'])[0]) + 1
data['ORIGIN_STAND']=pd.Series([0 if pd.isnull(x) or x=='' else int(x) for x in data["ORIGIN_STAND"]])
data['TAXI_ID'] = pd.Series(pd.factorize(data['TAXI_ID'])[0]) + 1
data['DAY_TYPE'] = pd.Series([ord(x[0]) - ord('A') for x in data['DAY_TYPE']])
polyline = pd.Series([ast.literal_eval(x) for x in data['POLYLINE']])
data['LATITUDE'] = pd.Series([np.array([point[1] for point in poly],dtype=np.float32) for poly in polyline])
data['LONGITUDE'] = pd.Series([np.array([point[0] for point in poly],dtype=np.float32) for poly in polyline])
if not test:
data['TARGET'] = pd.Series([[l[1][0][-1], l[1][1][-1]] if len(l[1][0]) > 1 else np.nan for l in data[['LONGITUDE','LATITUDE']].iterrows()])
data['LATITUDE'] = pd.Series([(t-lat_mean)/lat_std for t in data['LATITUDE']])
data['LONGITUDE'] = pd.Series([(t-long_mean)/long_std for t in data['LONGITUDE']])
data['COORD_FEATURES'] = start_stop_inputs(5, data, test)
data['DAY_OF_WEEK'] = pd.Series([datetime.datetime.fromtimestamp(t).weekday() for t in data['TIMESTAMP']])
data['QUARTER_HOUR'] = pd.Series([int((datetime.datetime.fromtimestamp(t).hour*60 + datetime.datetime.fromtimestamp(t).minute)/15)
for t in data['TIMESTAMP']])
data['WEEK_OF_YEAR'] = pd.Series([datetime.datetime.fromtimestamp(t).isocalendar()[1] for t in data['TIMESTAMP']])
data = data.dropna()
return data
train = feature_ext(train)
# train["TARGET"]
train.head()
test = feature_ext(test, test=True)
test.head()
utils2.save_array(data_path+'train/train_features.bc', train.as_matrix())
utils2.save_array(data_path+'test/test_features.bc', test.as_matrix())
train.head()
Explanation: Pre-calculated below on train set
End of explanation
# train = pd.DataFrame(utils2.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
# 'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'DAY_OF_WEEK',
# 'QUARTER_HOUR', "WEEK_OF_YEAR", "TARGET", "COORD_FEATURES"])
# - Correct column order to load the Bcolz array that was saved above
train = pd.DataFrame(utils2.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET', 'COORD_FEATURES', 'DAY_OF_WEEK',
'QUARTER_HOUR', 'WEEK_OF_YEAR'])
Explanation: MEANSHIFT
Meanshift clustering as performed in the paper
End of explanation
y_targ = np.vstack(train["TARGET"].as_matrix())
from sklearn.cluster import MeanShift, estimate_bandwidth
Explanation: Clustering performed on the targets
End of explanation
#bw = estimate_bandwidth(y_targ, quantile=.1, n_samples=1000)
bw = 0.001
Explanation: Can use the commented out code for a estimate of bandwidth, which causes clustering to converge much quicker.
This is not mentioned in the paper but is included in the code. In order to get results similar to the paper's,
they manually chose the uncommented bandwidth
End of explanation
ms = MeanShift(bandwidth=bw, bin_seeding=True, min_bin_freq=5)
ms.fit(y_targ)
cluster_centers = ms.cluster_centers_
Explanation: This takes some time
End of explanation
cluster_centers.shape
utils2.save_array(data_path+"cluster_centers_bw_001.bc", cluster_centers)
Explanation: This is very close to the number of clusters mentioned in the paper
End of explanation
train = pd.DataFrame(utils2.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET',
'COORD_FEATURES', 'DAY_OF_WEEK', "QUARTER_HOUR", "WEEK_OF_YEAR"])
cluster_centers = utils2.load_array(data_path+"cluster_centers_bw_001.bc")
long = np.array([c[0] for c in cluster_centers])
lat = np.array([c[1] for c in cluster_centers])
X_train, X_val = train_test_split(train, test_size=0.2, random_state=42)
def get_features(data):
return [np.vstack(data['COORD_FEATURES'].as_matrix()), np.vstack(data['ORIGIN_CALL'].as_matrix()),
np.vstack(data['TAXI_ID'].as_matrix()), np.vstack(data['ORIGIN_STAND'].as_matrix()),
np.vstack(data['QUARTER_HOUR'].as_matrix()), np.vstack(data['DAY_OF_WEEK'].as_matrix()),
np.vstack(data['WEEK_OF_YEAR'].as_matrix()), np.array([long for i in range(0,data.shape[0])]),
np.array([lat for i in range(0,data.shape[0])])]
def get_target(data):
return np.vstack(data["TARGET"].as_matrix())
X_train_features = get_features(X_train)
X_train_target = get_target(X_train)
# utils2.save_array(data_path+'train/X_train_features.bc', get_features(X_train)) # - doesn't work - needs an array, not a list
Explanation: Formatting Features for Bcolz iterator / garbage
End of explanation
train = pd.DataFrame(utils2.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET',
'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"])
Explanation: MODEL
Load training data and cluster centers
End of explanation
cuts = [
1376503200, # 2013-08-14 18:00
1380616200, # 2013-10-01 08:30
1381167900, # 2013-10-07 17:45
1383364800, # 2013-11-02 04:00
1387722600 # 2013-12-22 14:30
]
print(datetime.datetime.fromtimestamp(1376503200))
train.shape
val_indices = []
index = 0
for index, row in train.iterrows():
time = row['TIMESTAMP']
latitude = row['LATITUDE']
for ts in cuts:
if time <= ts and time + 15 * (len(latitude) - 1) >= ts:
val_indices.append(index)
break
index += 1
X_valid = train.iloc[val_indices]
X_valid.head()
for d in X_valid['TIMESTAMP']:
print(datetime.datetime.fromtimestamp(d))
X_train = train.drop(train.index[[val_indices]])
cluster_centers = utils2.load_array(data_path+"cluster_centers_bw_001.bc")
long = np.array([c[0] for c in cluster_centers])
lat = np.array([c[1] for c in cluster_centers])
utils2.save_array(data_path+'train/X_train.bc', X_train.as_matrix())
utils2.save_array(data_path+'valid/X_val.bc', X_valid.as_matrix())
X_train = pd.DataFrame(utils2.load_array(data_path+'train/X_train.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET',
'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"])
X_valid = pd.DataFrame(utils2.load_array(data_path+'valid/X_val.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET',
'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"])
Explanation: Validation cuts
End of explanation
def equirectangular_loss(y_true, y_pred):
deg2rad = 3.141592653589793 / 180
long_1 = y_true[:,0]*deg2rad
long_2 = y_pred[:,0]*deg2rad
lat_1 = y_true[:,1]*deg2rad
lat_2 = y_pred[:,1]*deg2rad
return 6371*K.sqrt(K.square((long_1 - long_2)*K.cos((lat_1 + lat_2)/2.))
+K.square(lat_1 - lat_2))
def embedding_input(name, n_in, n_out, reg):
inp = Input(shape=(1,), dtype='int64', name=name)
return inp, Embedding(n_in, n_out, input_length=1, embeddings_regularizer=l2(reg))(inp) # Keras 2
Explanation: The equirectangular loss function mentioned in the paper.
Note: Very important that y[0] is longitude and y[1] is latitude.
The Earth-radius constant (6371 km) is included as an overall scale so the loss reads in kilometers; it does not affect the location of the minimum.
End of explanation
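For spot-checking loss values outside the Keras graph, here is a NumPy twin of the same formula (an addition of this write-up, not from the paper's code):
def equirectangular_np(y_true, y_pred):
    deg2rad = np.pi / 180
    dlong = (y_true[:, 0] - y_pred[:, 0]) * deg2rad
    lat1 = y_true[:, 1] * deg2rad
    lat2 = y_pred[:, 1] * deg2rad
    return 6371 * np.sqrt((dlong * np.cos((lat1 + lat2) / 2.))**2 + (lat1 - lat2)**2)

print(equirectangular_np(np.array([[-8.61, 41.15]]), np.array([[-8.62, 41.16]])))  # ~1.4 km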
def taxi_mlp(k, cluster_centers):
shp = cluster_centers.shape[0]
nums = Input(shape=(4*k,))
center_longs = Input(shape=(shp,))
center_lats = Input(shape=(shp,))
emb_names = ['client_ID', 'taxi_ID', "stand_ID", "quarter_hour", "day_of_week", "week_of_year"]
emb_ins = [57106, 448, 64, 96, 7, 52]
emb_outs = [10 for i in range(0,6)]
regs = [0 for i in range(0,6)]
embs = [embedding_input(e[0], e[1]+1, e[2], e[3]) for e in zip(emb_names, emb_ins, emb_outs, regs)]
x = concatenate([nums] + [Flatten()(e[1]) for e in embs]) # Keras 2
x = Dense(500, activation='relu')(x)
x = Dense(shp, activation='softmax')(x)
y = concatenate([dot([x, center_longs], axes=1), dot([x, center_lats], axes=1)]) # Keras 2
return Model(inputs = [nums]+[e[0] for e in embs] + [center_longs, center_lats], outputs = y) # Keras 2
Explanation: The following returns a fully-connected model as mentioned in the paper. Takes as input k as defined before, and the cluster centers.
Inputs: Embeddings for each category, concatenated w/ the 4*k continous variable representing the first/last k coords as mentioned above.
Embeddings have no regularization, as it was not mentioned in paper, though are easily equipped to include.
Paper mentions global normalization. Didn't specify exactly how they did that, whether thay did it sequentially or whatnot. I just included a batchnorm layer for the continuous inputs.
After concatenation, 1 hidden layer of 500 neurons as called for in paper.
Finally, output layer has as many outputs as there are cluster centers, w/ a softmax activation. Call this output P.
The prediction is the weighted sum of each cluster center c_i w/ corresponding predicted prob P_i.
To facilitate this, dotted output w/ cluster latitudes and longitudes separately. (this happens at variable y), then concatenated
into single tensor.
NOTE!!: You will see that I have the cluster center coords as inputs. Ideally, This function should store the cluster longs/lats as a constant to be used in the model, but I could not figure out. As a consequence, I pass them in as a repeated input.
End of explanation
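A tiny illustration (made-up numbers) of that weighted sum: the predicted destination is the softmax probabilities dotted with the cluster centers, per coordinate.
probs = np.array([0.7, 0.2, 0.1])  # softmax output over 3 hypothetical clusters
centers = np.array([[-8.61, 41.15], [-8.60, 41.16], [-8.65, 41.10]])
print(probs.dot(centers))  # [predicted longitude, predicted latitude]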
def data_iter(data, batch_size, cluster_centers):
long = [c[0] for c in cluster_centers]
lat = [c[1] for c in cluster_centers]
i = 0
N = data.shape[0]
while True:
yield ([np.vstack(data['COORD_FEATURES'][i:i+batch_size].as_matrix()), np.vstack(data['ORIGIN_CALL'][i:i+batch_size].as_matrix()),
np.vstack(data['TAXI_ID'][i:i+batch_size].as_matrix()), np.vstack(data['ORIGIN_STAND'][i:i+batch_size].as_matrix()),
np.vstack(data['QUARTER_HOUR'][i:i+batch_size].as_matrix()), np.vstack(data['DAY_OF_WEEK'][i:i+batch_size].as_matrix()),
np.vstack(data['WEEK_OF_YEAR'][i:i+batch_size].as_matrix()), np.array([long for _ in range(0,batch_size)]),
np.array([lat for _ in range(0,batch_size)])], np.vstack(data["TARGET"][i:i+batch_size].as_matrix()))
i += batch_size
if i + batch_size > N:  # wrap around so the generator keeps yielding full batches
    i = 0
Explanation: As mentioned, construction of repeated cluster longs/lats for input
Iterator for in memory train pandas dataframe. I did this as opposed to bcolz iterator due to the pre-processing
End of explanation
del model
model = taxi_mlp(5, cluster_centers)
Explanation: Of course, k in the model needs to match k from feature construction. We again use 5 as they did in the paper
End of explanation
# Reduced the initial 0.001 learning rate to avoid NaN's
model.compile(optimizer=SGD(1e-6, momentum=0.9), loss=equirectangular_loss, metrics=['mse'])
# - Try also Adam optimizer
# optim = Adam(lr=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
# model.compile(optimizer=optim, loss=equirectangular_loss, metrics=['mse'])
X_train_feat = get_features(X_train)
X_train_target = get_target(X_train)
X_val_feat = get_features(X_valid)
X_val_target = get_target(X_valid)
tqdm = TQDMNotebookCallback()
# - Added verbose=1 to track improvement through epochs
checkpoint = ModelCheckpoint(verbose=1, filepath=data_path+'models/weights.{epoch:03d}.{val_loss:.8f}.hdf5', save_best_only=True)
batch_size=256
Explanation: Paper used SGD opt w/ following paramerters
End of explanation
model.fit(X_train_feat, X_train_target, epochs=1, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
model.fit(X_train_feat, X_train_target, epochs=30, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
# - Load the saved best model, otherwise the training would go on from the current model
# - which is not guaranteed to be the best one
# - (check the actual file name)
model = load_model(data_path+'models/weights.028.4.29282813.hdf5', custom_objects={'equirectangular_loss':equirectangular_loss})
# - trying also learning rate annealing
K.set_value(model.optimizer.lr, 5e-4)
model.fit(X_train_feat, X_train_target, epochs=100, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
model.save(data_path+'models/current_model.hdf5')
Explanation: original
End of explanation
model.fit(X_train_feat, X_train_target, epochs=1, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
# - Load again the saved best model, otherwise the training would go on from the current model
# - which is not guaranteed to be the best one
# - (check the actual file name)
model = load_model(data_path+'models/weights.000.0.73703137.hdf5', custom_objects={'equirectangular_loss':equirectangular_loss})
model.fit(X_train_feat, X_train_target, epochs=400, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
model.save(data_path+'models/current_model.hdf5')
len(X_val_feat[0])
Explanation: new valid
End of explanation
# - Use the filename of the best model
best_model = load_model(data_path+'models/weights.308.0.03373993.hdf5', custom_objects={'equirectangular_loss':equirectangular_loss})
best_model.evaluate(X_val_feat, X_val_target)
test = pd.DataFrame(utils2.load_array(data_path+'test/test_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE',
'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"])
# test['ORIGIN_CALL'] = pd.read_csv(data_path+'real_origin_call.csv', header=None) # - file not available
# test['TAXI_ID'] = pd.read_csv(data_path+'real_taxi_id.csv',header=None) # # - file not available
X_test = get_features(test)
b = np.sort(X_test[1],axis=None)
test_preds = np.round(best_model.predict(X_test), decimals=6)
d = {0:test['TRIP_ID'], 1:test_preds[:,1], 2:test_preds[:,0]}
kaggle_out = pd.DataFrame(data=d)
kaggle_out.to_csv(data_path+'submission.csv', header=['TRIP_ID','LATITUDE', 'LONGITUDE'], index=False)
def hdist(a, b):
deg2rad = 3.141592653589793 / 180
lat1 = a[:, 1] * deg2rad
lon1 = a[:, 0] * deg2rad
lat2 = b[:, 1] * deg2rad
lon2 = b[:, 0] * deg2rad
dlat = abs(lat1-lat2)
dlon = abs(lon1-lon2)
al = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * (np.sin(dlon/2)**2)
d = np.arctan2(np.sqrt(al), np.sqrt(1-al))
hd = 2 * 6371 * d
return hd
val_preds = best_model.predict(X_val_feat)
trn_preds = model.predict(X_train_feat)
er = hdist(val_preds, X_val_target)
er.mean()
Explanation: It works, but it seems to converge unrealistically quickly and the loss values are not the same. The paper does not mention what it's using as "error" in its results. I assume the same equirectangular? Not very clear. The difference in values could be due to how the Earth-radius factor is applied
Kaggle Entry
End of explanation
cuts = [
1376503200, # 2013-08-14 18:00
1380616200, # 2013-10-01 08:30
1381167900, # 2013-10-07 17:45
1383364800, # 2013-11-02 04:00
1387722600 # 2013-12-22 14:30
]
np.any([train['TIMESTAMP'].map(lambda x: x in cuts)])
train['TIMESTAMP']
np.any(train['TIMESTAMP']==1381167900)
times = train['TIMESTAMP'].as_matrix()
X_train.columns
times
count = 0
for index, row in X_val.iterrows():
for ts in cuts:
time = row['TIMESTAMP']
latitude = row['LATITUDE']
if time <= ts and time + 15 * (len(latitude) - 1) >= ts:
count += 1
one = count
count + one
import h5py
h = h5py.File(data_path+'original/data.hdf5', 'r')
evrData=h['/Configure:0000/Run:0000/CalibCycle:0000/EvrData::DataV3/NoDetector.0:Evr.0/data']
c = np.load(data_path+'original/arrival-clusters.pkl')
Explanation: To-do: simple to extend to validation data
Uh oh... training data not representative of test
End of explanation
from fuel.utils import find_in_data_path
from fuel.datasets import H5PYDataset
original_path = '/data/bckenstler/data/taxi/original/'
train_set = H5PYDataset(original_path+'data.hdf5', which_sets=('train',),load_in_memory=True)
valid_set = H5PYDataset(original_path+'valid.hdf5', which_sets=('cuts/test_times_0',),load_in_memory=True)
print(train_set.num_examples)
print(valid_set.num_examples)
data = train_set.data_sources
data[0]
valid_data = valid_set.data_sources
valid_data[4][0]
stamps = valid_data[-3]
stamps[0]
for i in range(0,304):
print(np.any([t==int(stamps[i]) for t in X_val['TIMESTAMP']]))
type(X_train['TIMESTAMP'][0])
type(stamps[0])
check = [s in stamps for s in X_val['TIMESTAMP']]
for s in X_val['TIMESTAMP']:
print(datetime.datetime.fromtimestamp(s))
for s in stamps:
print(datetime.datetime.fromtimestamp(s))
ids = valid_data[-1]
type(ids[0])
ids
X_val
Explanation: HDF5 files
End of explanation |
8,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Final SpIES High-z Quasar Selection
Notebook performing selection of $3.5<z<5$ quasars from SDSS+SpIES data.
Largely the same as SpIESHighzQuasars notebook except using the algoirthm(s) from
SpIESHighzCandidateSelection2. See notes below for creating a version of the
test set that includes i-band mag and extinctu. (This wasn't easy.)
First load the training data, then instantiate and train the algorithm; see https
Step1: Second, load the test data
Test Data
Test set data set was made as follows (see 18 April 2016 README entry)
Step2: I had some problems with GTR-ADM-QSO-ir_good_test_2016n.fits because it thought that there were blank entries among the attributes. There actually weren't (as far as I could tell), but I found that I could use filled to fix the problem. However, that just caused problems later!
Step3: Taking too long to do all the objects, so just do Stripe 82, which is all that we really care about anyway.
Step4: Quasar Candidates
Finally, do the classification and output the test file, including the predicted labels.
Step5: Now write results to output file. Didn't do bagging b/c takes too long. See SpIESHighzQuasarsS82all.py which I ran on dirac. | Python Code:
%matplotlib inline
from astropy.table import Table
import numpy as np
import matplotlib.pyplot as plt
data = Table.read('GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean.fits')
# X is in the format need for all of the sklearn tools, it just has the colors
# X = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], data['zs1'], data['s1s2'], data['imag'], data['extinctu']]).T
# Don't use imag and extinctu since they don't contribute much to the accuracy and they add a lot to the data volume.
X = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], data['zs1'], data['s1s2'] ]).T
y = np.array(data['labels'])
# For algorithms that need scaled data:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X) # Use the full training set now
XStrain = scaler.transform(X)
# SVM
from sklearn.svm import SVC
svm = SVC(random_state=42)
svm.fit(XStrain,y)
# Bagging
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
bag = BaggingClassifier(KNeighborsClassifier(n_neighbors=7), max_samples=0.5, max_features=1.0, random_state=42)
bag.fit(XStrain, y)
Explanation: Final SpIES High-z Quasar Selection
Notebook performing selection of $3.5<z<5$ quasars from SDSS+SpIES data.
Largely the same as the SpIESHighzQuasars notebook except using the algorithm(s) from
SpIESHighzCandidateSelection2. See notes below for creating a version of the
test set that includes i-band mag and extinctu. (This wasn't easy.)
First load the training data, then instantiate and train the algorithm; see https://github.com/gtrichards/QuasarSelection/blob/master/SpIESHighzCandidateSelection2.ipynb
End of explanation
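As an optional sanity check on the two classifiers (not part of the original selection pipeline; the import path assumes scikit-learn >= 0.18, older releases expose this in sklearn.cross_validation):
from sklearn.model_selection import cross_val_score
print(cross_val_score(svm, XStrain, y, cv=3).mean())
print(cross_val_score(bag, XStrain, y, cv=3).mean())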
#data2 = Table.read('GTR-ADM-QSO-ir_good_test_2016n.fits')
data2 = Table.read('GTR-ADM-QSO-ir_good_test_2016.fits')
print data2.keys()
Explanation: Second, load the test data
Test Data
Test set data set was made as follows (see 18 April 2016 README entry):
maketest_2016.py
Output is:
classifiers_out = open('GTR-ADM-QSO-ir_classifiers_good_test_2016.dat','w')
others_out= open('GTR-ADM-QSO-ir_others_good_test_2016.dat','w')
czr_out = open('GTR-ADM-QSO-ir_photoz_in7_good_test_2016.dat','w')
Really need the first two files combined (so that we have both RA/Dec and colors in one place).
But couldn't merge them with TOPCAT or STILTS. So had to break them into 3 pieces (with TOPCAT),
then used combine_test_files_STILTS.py to merge them together (just changing the input/output file names by hand).
Actually ran this on dirac so that I'd have more memory than on quasar. Copied the output files back to quasar and merged them together with TOPCAT.
So<br>
GTR-ADM-QSO-ir_others_good_test_2016a.dat + GTR-ADM-QSO-ir_classifiers_good_test_2016a.dat<br>
gives<br>
GTR-ADM-QSO-ir_good_test_2016a.dat<br>
(and so on for "b" and "c").
Then<br>
GTR-ADM-QSO-ir_good_test_2016a.dat + GTR-ADM-QSO-ir_good_test_2016b.dat + GTR-ADM-QSO-ir_good_test_2016c.dat<br>
gives<br>
GTR-ADM-QSO-ir_good_test_2016.dat<br>
and similarly for the fits output file.
Since I wanted to use the imag and extinctu, then I also had to make a version of the test file with combine_test_files_STILTSn.py (on quasar). This was fairly involved because of memory issues. The new output file is GTR-ADM-QSO-ir_good_test_2016n.dat. In the end, I ended up not using that and this is more of an exploration of SVM and bagging as alternatives to RF.
Now read in the test file and convert it to an appropriate array format for sklearn.
End of explanation
# Not sure why I need to do this because there don't appear to be any unfilled columns
# but the code segment below won't run without it.
# Only need to do for the file with imag and extinctu
# data2 = data2.filled()
Explanation: I had some problems with GTR-ADM-QSO-ir_good_test_2016n.fits because it thought that there were blank entries among the attributes. There actually weren't (as far as I could tell), but I found that I could use filled to fix the problem. However, that just caused problems later!
End of explanation
ramask = ( ( (data2['ra']>=300.0) & (data2['ra']<=360.0) ) | ( (data2['ra']>=0.0) & (data2['ra']<=60.0) ) )
decmask = ((data2['dec']>=-1.5) & (data2['dec']<=1.5))
dataS82 = data2[ramask & decmask]
print len(dataS82)
#Xtest = np.vstack([dataS82['ug'], dataS82['gr'], dataS82['ri'], dataS82['iz'], dataS82['zs1'], dataS82['s1s2'], dataS82['i'], data2['extinctu']]).T
Xtest = np.vstack([dataS82['ug'], dataS82['gr'], dataS82['ri'], dataS82['iz'], dataS82['zs1'], dataS82['s1s2'] ]).T
XStest = scaler.transform(Xtest)
Explanation: Taking too long to do all the objects, so just do Stripe 82, which is all that we really care about anyway.
End of explanation
from dask import compute, delayed
def processSVM(Xin):
return svm.predict(Xin)
# Create dask objects
# Reshape is necessary because the format of x as drawm from Xtest
# is not what sklearn wants.
dobjsSVM = [delayed(processSVM)(x.reshape(1,-1)) for x in XStest]
import dask.threaded
ypredSVM = compute(*dobjsSVM, get=dask.threaded.get)
ypredSVM = np.array(ypredSVM).reshape(1,-1)[0]
from dask import compute, delayed
def processBAG(Xin):
return bag.predict(Xin)
# Create dask objects
# Reshape is necessary because the format of x as drawm from Xtest
# is not what sklearn wants.
dobjsBAG = [delayed(processBAG)(x.reshape(1,-1)) for x in XStest]
import dask.threaded
ypredBAG = compute(*dobjsBAG, get=dask.threaded.get)
ypredBAG = np.array(ypredBAG).reshape(1,-1)[0]
Explanation: Quasar Candidates
Finally, do the classification and output the test file, including the predicted labels.
End of explanation
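Per-row delayed calls carry a lot of scheduler overhead; since scikit-learn's predict is vectorized over samples, a chunked variant (a sketch using the same dask pattern as above) should be much faster:
chunks = np.array_split(XStest, 100)
dobjs_chunked = [delayed(svm.predict)(c) for c in chunks]
ypred_chunked = np.concatenate(compute(*dobjs_chunked, get=dask.threaded.get))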
dataS82['ypredSVM'] = ypredSVM
dataS82['ypredBAG'] = ypredBAG
#dataS82.write('GTR-ADM-QSO-ir_good_test_2016_Stripe82svm.fits', format='fits')
Explanation: Now write results to output file. Didn't do bagging b/c takes too long. See SpIESHighzQuasarsS82all.py which I ran on dirac.
End of explanation |
8,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The role of dipole orientations in distributed source localization
When performing source localization in a distributed manner (MNE/dSPM/sLORETA),
the source space is defined as a grid of dipoles that spans a large portion of
the cortex. These dipoles have both a position and an orientation. In this
tutorial, we will look at the various options available to restrict the
orientation of the dipoles and the impact on the resulting source estimate.
Loading data
Load everything we need to perform source localization on the sample dataset.
Step1: The source space
Let's start by examining the source space as constructed by the
Step2: Fixed dipole orientations
While the source space defines the position of the dipoles, the inverse
operator defines the possible orientations of them. One of the options is to
assign a fixed orientation. Since the neural currents from which MEG and EEG
signals originate flow mostly perpendicular to the cortex [1]_, restricting
the orientation of the dipoles accordingly places a useful restriction on the
source estimate.
By specifying fixed=True when calling
Step3: Restricting the dipole orientations in this manner leads to the following
source estimate for the sample data
Step4: The direction of the estimated current is now restricted to two directions
Step5: When computing the source estimate, the activity at each of the three dipoles
is collapsed into the XYZ components of a single vector, which leads to the
following source estimate for the sample data
Step6: Limiting orientations, but not fixing them
Often, the best results will be obtained by allowing the dipoles to have
somewhat free orientation, but not stray too far from a orientation that is
perpendicular to the cortex. The loose parameter of the
Step7: Discarding dipole orientation information
Often, further analysis of the data does not need information about the
orientation of the dipoles, but rather their magnitudes. The pick_ori
parameter of the | Python Code:
from mayavi import mlab
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif',
surf_ori=True)
noise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')
subjects_dir = data_path + '/subjects'
Explanation: The role of dipole orientations in distributed source localization
When performing source localization in a distributed manner (MNE/dSPM/sLORETA),
the source space is defined as a grid of dipoles that spans a large portion of
the cortex. These dipoles have both a position and an orientation. In this
tutorial, we will look at the various options available to restrict the
orientation of the dipoles and the impact on the resulting source estimate.
Loading data
Load everything we need to perform source localization on the sample dataset.
End of explanation
lh = fwd['src'][0] # Visualize the left hemisphere
verts = lh['rr'] # The vertices of the source space
tris = lh['tris'] # Groups of three vertices that form triangles
dip_pos = lh['rr'][lh['vertno']] # The position of the dipoles
white = (1.0, 1.0, 1.0) # RGB values for a white color
gray = (0.5, 0.5, 0.5) # RGB values for a gray color
red = (1.0, 0.0, 0.0) # RGB valued for a red color
mlab.figure(size=(600, 400), bgcolor=white)
# Plot the cortex
mlab.triangular_mesh(verts[:, 0], verts[:, 1], verts[:, 2], tris, color=gray)
# Mark the position of the dipoles with small red dots
mlab.points3d(dip_pos[:, 0], dip_pos[:, 1], dip_pos[:, 2], color=red,
scale_factor=1E-3)
mlab.view(azimuth=180, distance=0.25)
Explanation: The source space
Let's start by examining the source space as constructed by the
:func:mne.setup_source_space function. Dipoles are placed along fixed
intervals on the cortex, determined by the spacing parameter. The source
space does not define the orientation for these dipoles.
End of explanation
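For reference, a source space like this one is typically built as follows (a sketch only; the forward solution loaded above already contains its source space in fwd['src'], so this is not needed here):
src = mne.setup_source_space('sample', spacing='oct6',
                             subjects_dir=subjects_dir, add_dist=False)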
mlab.figure(size=(600, 400), bgcolor=white)
# Plot the cortex
mlab.triangular_mesh(verts[:, 0], verts[:, 1], verts[:, 2], tris, color=gray)
# Show the dipoles as arrows pointing along the surface normal
normals = lh['nn'][lh['vertno']]
mlab.quiver3d(dip_pos[:, 0], dip_pos[:, 1], dip_pos[:, 2],
normals[:, 0], normals[:, 1], normals[:, 2],
color=red, scale_factor=1E-3)
mlab.view(azimuth=180, distance=0.1)
Explanation: Fixed dipole orientations
While the source space defines the position of the dipoles, the inverse
operator defines the possible orientations of them. One of the options is to
assign a fixed orientation. Since the neural currents from which MEG and EEG
signals originate flow mostly perpendicular to the cortex [1]_, restricting
the orientation of the dipoles accordingly places a useful restriction on the
source estimate.
By specifying fixed=True when calling
:func:mne.minimum_norm.make_inverse_operator, the dipole orientations are
fixed to be orthogonal to the surface of the cortex, pointing outwards. Let's
visualize this:
End of explanation
# Compute the source estimate for the 'left - auditory' condition in the sample
# dataset.
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
Explanation: Restricting the dipole orientations in this manner leads to the following
source estimate for the sample data:
End of explanation
mlab.figure(size=(600, 400), bgcolor=white)
# Define some more colors
green = (0.0, 1.0, 0.0)
blue = (0.0, 0.0, 1.0)
# Plot the cortex
mlab.triangular_mesh(verts[:, 0], verts[:, 1], verts[:, 2], tris, color=gray)
# Make an inverse operator with loose dipole orientations
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=1.0)
# Show the three dipoles defined at each location in the source space
dip_dir = inv['source_nn'].reshape(-1, 3, 3)
dip_dir = dip_dir[:len(dip_pos)] # Only select left hemisphere
for ori, color in zip((0, 1, 2), (red, green, blue)):
mlab.quiver3d(dip_pos[:, 0], dip_pos[:, 1], dip_pos[:, 2],
dip_dir[:, ori, 0], dip_dir[:, ori, 1], dip_dir[:, ori, 2],
color=color, scale_factor=1E-3)
mlab.view(azimuth=180, distance=0.1)
Explanation: The direction of the estimated current is now restricted to two directions:
inward and outward. In the plot, blue areas indicate current flowing inwards
and red areas indicate current flowing outwards. Given the curvature of the
cortex, groups of dipoles tend to point in the same direction: the direction
of the electromagnetic field picked up by the sensors.
Loose dipole orientations
Forcing the source dipoles to be strictly orthogonal to the cortex makes the
source estimate sensitive to the spacing of the dipoles along the cortex,
since the curvature of the cortex changes within each ~10 square mm patch.
Furthermore, misalignment of the MEG/EEG and MRI coordinate frames is more
critical when the source dipole orientations are strictly constrained [2]_.
To lift the restriction on the orientation of the dipoles, the inverse
operator has the ability to place not one, but three dipoles at each
location defined by the source space. These three dipoles are placed
orthogonally to form a Cartesian coordinate system. Let's visualize this:
End of explanation
# Compute the source estimate, indicate that we want a vector solution
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
Explanation: When computing the source estimate, the activity at each of the three dipoles
is collapsed into the XYZ components of a single vector, which leads to the
following source estimate for the sample data:
End of explanation
# Set loose to 0.2, the default value
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=0.2)
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
Explanation: Limiting orientations, but not fixing them
Often, the best results will be obtained by allowing the dipoles to have
somewhat free orientation, but not to stray too far from an orientation that is
perpendicular to the cortex. The loose parameter of the
:func:mne.minimum_norm.make_inverse_operator allows you to specify a value
between 0 (fixed) and 1 (unrestricted or "free") to indicate the amount the
orientation is allowed to deviate from the surface normal.
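For instance (a sketch reusing the objects above), operators with intermediate amounts of looseness can be built the same way and compared:
# loose close to 0 approaches fixed orientations, loose=1 is fully free
for loose_val in (0.2, 0.6, 1.0):
    inv_tmp = make_inverse_operator(left_auditory.info, fwd, noise_cov,
                                    fixed=False, loose=loose_val)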
End of explanation
# Only retain vector magnitudes
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
Explanation: Discarding dipole orientation information
Often, further analysis of the data does not need information about the
orientation of the dipoles, but rather their magnitudes. The pick_ori
parameter of the :func:mne.minimum_norm.apply_inverse function allows you
to specify whether to return the full vector solution ('vector') or
rather the magnitude of the vectors (None, the default) or only the
activity in the direction perpendicular to the cortex ('normal').
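A short sketch (using the loose inverse operator from above) makes the difference concrete; only the vector solution keeps the extra orientation axis:
for ori in (None, 'normal', 'vector'):
    stc_tmp = apply_inverse(left_auditory, inv, pick_ori=ori)
    print(ori, type(stc_tmp).__name__, stc_tmp.data.shape)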
End of explanation |
8,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
syncID
Step1: Open a GeoTIFF with GDAL
Let's look at the SERC Canopy Height Model (CHM) to start. We can open and read this in Python using the gdal.Open function
Step2: Read GeoTIFF Tags
The GeoTIFF file format comes with associated metadata containing information about the location and coordinate system/projection. Once we have read in the dataset, we can access this information with the following commands
Step3: Use GetProjection
We can use the gdal GetProjection method to display information about the coordinate system and EPSG code.
Step4: Use GetGeoTransform
The geotransform contains information about the origin (upper-left corner) of the raster, the pixel size, and the rotation angle of the data. All NEON data in the latest format have zero rotation. In this example, the values correspond to
Step5: In this case, the geotransform values correspond to
Step6: Use GetRasterBand
We can read in a single raster band with GetRasterBand and access information about this raster band such as the No Data Value, Scale Factor, and Statistics as follows
Step7: Use ReadAsArray
Finally we can convert the raster to an array using the ReadAsArray method. Cast the array to a floating point value using astype(float). Once we generate the array, we want to set No Data Values to NaN, and apply the scale factor
Step8: Plot Canopy Height Data
To get a better idea of the dataset, we can use a similar function to plot_aop_refl that we used in the NEON AOP reflectance tutorials
Step9: Histogram of Data
As we did with the reflectance tile, it is often useful to plot a histogram of the geotiff data in order to get a sense of the range and distribution of values. First we'll make a copy of the array and remove the nan values.
Step10: On your own, adjust the number of bins, and range of the y-axis to get a good idea of the distribution of the canopy height values. We can see that most of the values are zero. In SERC, many of the zero CHM values correspond to bodies of water as well as regions of land without trees. Let's look at a histogram and plot the data without zero values
Step11: Note that it appears that the trees don't have a smooth or normal distribution, but instead appear blocked off in chunks. This is an artifact of the Canopy Height Model algorithm, which bins the trees into 5m increments (this is done to avoid another artifact of "pits"; Khosravipour et al., 2014).
From the histogram we can see that the majority of the trees are < 30m. We can re-plot the CHM array, this time adjusting the color bar limits to better visualize the variation in canopy height. We will plot the non-zero array so that CHM=0 appears white.
Step12: Threshold Based Raster Classification
Next, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based on boolean classifications. Let's classify the canopy height into five groups
Step13: We can define our own colormap to plot these discrete classifications, and create a custom legend to label the classes | Python Code:
import numpy as np
import gdal, copy
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
Explanation: syncID: b0860577d1994b6e8abd23a6edf9e005
title: "Classify a Raster Using Threshold Values in Python - 2018"
description: "Learn how to read NEON lidar raster GeoTIFFs (e.g., CHM, slope, aspect) into Python numpy arrays with gdal and create a classified raster object."
dateCreated: 2018-07-04
authors: Bridget Hass
contributors: Donal O'Leary, Max Burner
estimatedTime: 1 hour
packagesLibraries: numpy, gdal, matplotlib, matplotlib.pyplot, os
topics: lidar, raster, remote-sensing
languagesTool: python
dataProduct: DP1.30003, DP3.30015, DP3.30024, DP3.30025
code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-2018-py/classify_raster_with_threshold-2018-py.ipynb
tutorialSeries: intro-lidar-py-series
urlTitle: classify-raster-thresholds-2018-py
In this tutorial, we will learn how to read NEON lidar raster GeoTIFFs
(e.g., CHM, slope, aspect) into Python numpy arrays with gdal and create a
classified raster object.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Read NEON lidar raster GeoTIFFs (e.g., CHM, slope, aspect) into Python numpy arrays with gdal.
* Create a classified raster object using thresholds.
### Install Python Packages
* **numpy**
* **gdal**
* **matplotlib**
### Download Data
For this lesson, we will be using a 1km tile of a Canopy Height Model derived from lidar data collected at the Smithsonian Environmental Research Center (SERC) NEON site. <a href="https://ndownloader.figshare.com/files/25787420">Download Data Here</a>.
<a href="https://ndownloader.figshare.com/files/25787420" class="link--button link--arrow">
Download Dataset</a>
</div>
In this tutorial, we will work with the NEON AOP L3 LiDAR ecosystem structure (Canopy Height Model) data product. For more information about NEON data products and the CHM product DP3.30015.001, see the <a href="http://data.neonscience.org/data-products/DP3.30015.001" target="_blank">NEON Data Product Catalog</a>.
First, let's import the required packages and set our plot display to be in-line:
End of explanation
# Note that you will need to update the filepath below according to your local machine
chm_filename = '/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_CHM.tif'
chm_dataset = gdal.Open(chm_filename)
Explanation: Open a GeoTIFF with GDAL
Let's look at the SERC Canopy Height Model (CHM) to start. We can open and read this in Python using the gdal.Open function:
End of explanation
#Display the dataset dimensions, number of bands, driver, and geotransform
cols = chm_dataset.RasterXSize; print('# of columns:',cols)
rows = chm_dataset.RasterYSize; print('# of rows:',rows)
print('# of bands:',chm_dataset.RasterCount)
print('driver:',chm_dataset.GetDriver().LongName)
Explanation: Read GeoTIFF Tags
The GeoTIFF file format comes with associated metadata containing information about the location and coordinate system/projection. Once we have read in the dataset, we can access this information with the following commands:
End of explanation
print('projection:',chm_dataset.GetProjection())
Explanation: Use GetProjection
We can use the gdal GetProjection method to display information about the coordinate system and EPSG code.
End of explanation
print('geotransform:',chm_dataset.GetGeoTransform())
Explanation: Use GetGeoTransform
The geotransform contains information about the origin (upper-left corner) of the raster, the pixel size, and the rotation angle of the data. All NEON data in the latest format have zero rotation. In this example, the values correspond to:
End of explanation
chm_mapinfo = chm_dataset.GetGeoTransform()
xMin = chm_mapinfo[0]
yMax = chm_mapinfo[3]
xMax = xMin + chm_dataset.RasterXSize*chm_mapinfo[1] #number of columns times pixel width
yMin = yMax + chm_dataset.RasterYSize*chm_mapinfo[5] #number of rows times pixel height (negative, so this subtracts)
chm_ext = (xMin,xMax,yMin,yMax)
print('chm raster extent:',chm_ext)
Explanation: In this case, the geotransform values correspond to:
Left-Most X Coordinate = 367000.0
W-E Pixel Resolution = 1.0
Rotation (0 if Image is North-Up) = 0.0
Upper Y Coordinate = 4307000.0
Rotation (0 if Image is North-Up) = 0.0
N-S Pixel Resolution = -1.0
The negative value for the N-S Pixel resolution reflects that the origin of the image is the upper left corner. We can convert this geotransform information into a spatial extent (xMin, xMax, yMin, yMax) by combining information about the origin, number of columns & rows, and pixel size, as follows:
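The same six numbers also let us convert any pixel (row, col) into map coordinates with the standard GDAL affine formula; a small sketch using chm_mapinfo from the cell above:
def pixel_to_map(row, col, gt=chm_mapinfo):
    x = gt[0] + col*gt[1] + row*gt[2]
    y = gt[3] + col*gt[4] + row*gt[5]
    return x, y
print(pixel_to_map(0, 0))  # upper-left corner: (367000.0, 4307000.0)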
End of explanation
chm_raster = chm_dataset.GetRasterBand(1)
noDataVal = chm_raster.GetNoDataValue(); print('no data value:',noDataVal)
scaleFactor = chm_raster.GetScale(); print('scale factor:',scaleFactor)
chm_stats = chm_raster.GetStatistics(True,True)
print('SERC CHM Statistics: Minimum=%.2f, Maximum=%.2f, Mean=%.3f, StDev=%.3f' %
(chm_stats[0], chm_stats[1], chm_stats[2], chm_stats[3]))
Explanation: Use GetRasterBand
We can read in a single raster band with GetRasterBand and access information about this raster band such as the No Data Value, Scale Factor, and Statistics as follows:
End of explanation
chm_array = chm_dataset.GetRasterBand(1).ReadAsArray(0,0,cols,rows).astype(float)
chm_array[chm_array==int(noDataVal)]=np.nan #Assign CHM No Data Values to NaN
chm_array=chm_array/scaleFactor
print('SERC CHM Array:\n',chm_array) #display array values
chm_array.shape
# Calculate the % of pixels that are NaN and non-zero:
pct_nan = np.count_nonzero(np.isnan(chm_array))/(rows*cols)
print('% NaN:',round(pct_nan*100,2))
print('% non-zero:',round(100*np.count_nonzero(chm_array)/(rows*cols),2))
Explanation: Use ReadAsArray
Finally we can convert the raster to an array using the ReadAsArray method. Cast the array to a floating point value using astype(float). Once we generate the array, we want to set No Data Values to NaN, and apply the scale factor:
End of explanation
def plot_band_array(band_array,refl_extent,colorlimit,ax=None,title='',cmap_title='',colormap=''):
    plot = plt.imshow(band_array,extent=refl_extent,clim=colorlimit);
    cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap);
    cbar.set_label(cmap_title,rotation=90,labelpad=20);
    plt.title(title); ax = ax if ax is not None else plt.gca(); #default to the current axes at call time
    ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation
    rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees
Explanation: Plot Canopy Height Data
To get a better idea of the dataset, we can use a similar function to plot_aop_refl that we used in the NEON AOP reflectance tutorials:
End of explanation
import copy
chm_nonan_array = copy.copy(chm_array)
chm_nonan_array = chm_nonan_array[~np.isnan(chm_array)]
plt.hist(chm_nonan_array,weights=np.zeros_like(chm_nonan_array)+1./
(chm_array.shape[0]*chm_array.shape[1]),bins=50);
plt.title('Distribution of SERC Canopy Height')
plt.xlabel('Tree Height (m)'); plt.ylabel('Relative Frequency')
Explanation: Histogram of Data
As we did with the reflectance tile, it is often useful to plot a histogram of the geotiff data in order to get a sense of the range and distribution of values. First we'll make a copy of the array and remove the nan values.
End of explanation
chm_nonzero_array = copy.copy(chm_array)
chm_nonzero_array[chm_array==0]=np.nan
chm_nonzero_nonan_array = chm_nonzero_array[~np.isnan(chm_nonzero_array)]
# Use weighting to plot relative frequency
plt.hist(chm_nonzero_nonan_array,
         weights=np.ones_like(chm_nonzero_nonan_array)/float(len(chm_nonzero_nonan_array)),
         bins=50);
plt.title('Distribution of SERC Non-Zero Canopy Height')
plt.xlabel('Tree Height (m)'); plt.ylabel('Relative Frequency')
Explanation: On your own, adjust the number of bins, and range of the y-axis to get a good idea of the distribution of the canopy height values. We can see that most of the values are zero. In SERC, many of the zero CHM values correspond to bodies of water as well as regions of land without trees. Let's look at a histogram and plot the data without zero values:
End of explanation
plot_band_array(chm_array,
chm_ext,
(0,35),
title='SERC Canopy Height',
cmap_title='Canopy Height, m',
colormap='BuGn')
Explanation: Note that it appears that the trees don't have a smooth or normal distribution, but instead appear blocked off in chunks. This is an artifact of the Canopy Height Model algorithm, which bins the trees into 5m increments (this is done to avoid another artifact of "pits"; Khosravipour et al., 2014).
From the histogram we can see that the majority of the trees are < 30m. We can re-plot the CHM array, this time adjusting the color bar limits to better visualize the variation in canopy height. We will plot the non-zero array so that CHM=0 appears white.
End of explanation
chm_reclass = copy.copy(chm_array)
chm_reclass[np.where(chm_array==0)] = 1 # CHM = 0 : Class 1
chm_reclass[np.where((chm_array>0) & (chm_array<=10))] = 2 # 0m < CHM <= 10m - Class 2
chm_reclass[np.where((chm_array>10) & (chm_array<=20))] = 3 # 10m < CHM <= 20m - Class 3
chm_reclass[np.where((chm_array>20) & (chm_array<=30))] = 4 # 20m < CHM <= 30m - Class 4
chm_reclass[np.where(chm_array>30)] = 5 # CHM > 30m - Class 5
Explanation: Threshold Based Raster Classification
Next, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based on boolean classifications. Let's classify the canopy height into five groups:
- Class 1: CHM = 0 m
- Class 2: 0m < CHM <= 10m
- Class 3: 10m < CHM <= 20m
- Class 4: 20m < CHM <= 30m
- Class 5: CHM > 30m
We can use np.where to find the indices where a boolean criterion is met.
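An alternative to the chain of np.where calls (a sketch) is np.digitize, which assigns every pixel its bin index in one vectorised call; note that NaNs still need masking afterwards:
breaks = [0, 10, 20, 30]  # class boundaries
chm_digitized = np.digitize(chm_array, breaks, right=True) + 1  # classes 1-5
chm_digitized = np.where(np.isnan(chm_array), np.nan, chm_digitized)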
End of explanation
import matplotlib.colors as colors
plt.figure();
cmapCHM = colors.ListedColormap(['lightblue','yellow','orange','green','red'])
plt.imshow(chm_reclass,extent=chm_ext,cmap=cmapCHM)
plt.title('SERC CHM Classification')
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
# Create custom legend to label the four canopy height classes:
import matplotlib.patches as mpatches
class1_box = mpatches.Patch(color='lightblue', label='CHM = 0m')
class2_box = mpatches.Patch(color='yellow', label='0m < CHM <= 10m')
class3_box = mpatches.Patch(color='orange', label='10m < CHM <= 20m')
class4_box = mpatches.Patch(color='green', label='20m < CHM <= 30m')
class5_box = mpatches.Patch(color='red', label='CHM > 30m')
ax.legend(handles=[class1_box,class2_box,class3_box,class4_box,class5_box],
handlelength=0.7,bbox_to_anchor=(1.05, 0.4),loc='lower left',borderaxespad=0.)
Explanation: We can define our own colormap to plot these discrete classifications, and create a custom legend to label the classes:
End of explanation |
8,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Transport Problem
Summary
The goal of the Transport Problem is to select the quantities of a homogeneous good, produced at several plants and sold at several punctiform markets, so as to minimise the transportation costs.
It is the default tutorial for the GAMS language, and the GAMS equivalent code is inserted as single-dash comments. The original GAMS code needs slightly different ordering of the commands and it's available at http://www.gams.com/mccarl/trnsport.gms.
Step1: Set Definitions
Sets are created as attribute objects of the main model object and all the information is given as a parameter in the constructor function. Specifically, we are passing to the constructor the initial elements of the set and a documentation string to keep track of what our set represents
Step2: Parameters
Parameter objects are created specifying the sets over which they are defined and are initialised with either a python dictionary or a scalar
Step3: A third, powerful way to initialize a parameter is using a user-defined function.
This function will be automatically called by pyomo with any possible (i,j) set. In this case pyomo will actually call c_init() six times in order to initialize the model.c parameter.
Step4: Variables
Similar to parameters, variables are created specifying their domain(s). For variables we can also specify the upper/lower bounds in the constructor.
Differently from GAMS, we don't need to define the variable that is on the left hand side of the objective function.
Step5: Constraints
At this point, it should not be a surprise that constraints are again defined as model objects with the required information passed as parameters in the constructor function.
Step6: The above code takes advantage of list comprehensions, a powerful feature of the python language that provides a concise way to loop over a list. If we take the supply_rule as an example, this is actually called two times by pyomo (once for each of the elements of i). Without list comprehensions we would have had to write our function using a for loop, like
Step7: Using list comprehension is however quicker to code and more readable.
Objective and Solving
The definition of the objective is similar to that of the constraints, except that most solvers require a scalar objective function, hence a unique function, and we can specify the sense (direction) of the optimisation.
Step8: As we are here looping over two distinct sets, we can see how list comprehension really simplifies the code. The objective function could have been written without list comprehension as
Step9: Retrieving the Output
We use the pyomo_postprocess() function to retrieve the output and do something with it. For example, we could display solution values (see below), plot a graph with matplotlib or save it in a csv file.
This function is called by pyomo after the solver has finished.
Step10: We can print model structure information with model.pprint() (“pprint” stands for “pretty print”).
Results are also by default saved in a results.json file or, if PyYAML is installed in the system, in results.yml.
Editing and Running the Script
Differently from GAMS, you can use whatever editor environment you wish to code a pyomo script. If you don't need debugging features, a simple text editor like Notepad++ (in windows), gedit or kate (in Linux) will suffice. They already have syntax highlight for python.
If you want advanced features and debugging capabilities you can use a dedicated Python IDE, like e.g. Spyder.
You will normally run the script as pyomo solve --solver=glpk transport.py. You can stream solver-specific output by adding the option --stream-output. If you want to run the script as python transport.py add the following lines at the end
Step11: Finally, if you are very lazy and want to run the script with just ./transport.py (and you are in Linux) add the following lines at the top
Step12: Complete script
Here is the complete script
Step13: Solutions
Running the model leads to the following output
Step14: By default, the optimization results are stored in the file results.yml | Python Code:
# Import of the pyomo module
from pyomo.environ import *
# Creation of a Concrete Model
model = ConcreteModel()
Explanation: The Transport Problem
Summary
The goal of the Transport Problem is to select the quantities of a homogeneous good, produced at several plants and sold at several punctiform markets, so as to minimise the transportation costs.
It is the default tutorial for the GAMS language, and the GAMS equivalent code is inserted as single-dash comments. The original GAMS code needs slightly different ordering of the commands and it's available at http://www.gams.com/mccarl/trnsport.gms.
Problem Statement
The Transport Problem can be formulated mathematically as a linear programming problem using the following model.
Sets
$I$ = set of canning plants
$J$ = set of markets
Parameters
$a_i$ = capacity of plant $i$ in cases, $\forall i \in I$ <br />
$b_j$ = demand at market $j$ in cases, $\forall j \in J$ <br />
$d_{i,j}$ = distance in thousands of miles, $\forall i \in I, \forall j \in J$ <br />
$f$ = freight in dollars per case per thousand miles <br />
$c_{i,j}$ = transport cost in thousands of dollars per case
$c_{i,j}$ is obtained exogenously to the optimisation problem as $c_{i,j} = f \cdot d_{i,j}$, $\forall i \in I, \forall j \in J$
Variables
$x_{i,j}$ = shipment quantities in cases <br />
$z$ = total transportation costs in thousands of dollars
Objective
Minimize the total cost of the shipments: <br />
$\min_{x} z = \sum_{i \in I} \sum_{j \in J} c_{i,j} x_{i,j}$
Constraints
Observe supply limit at plant i: <br />
$\sum_{j \in J} x_{i,j} \leq a_{i}$, $\forall i \in I$
Satisfy demand at market j: <br />
$\sum_{i \in I} x_{i,j} \geq b_{j}$, $\forall j \in J$
Non-negative transportation quantities: <br />
$x_{i,j} \geq 0$, $\forall i \in I, \forall j \in J$
Pyomo Formulation
Creation of the Model
In pyomo everything is an object. The various components of the model (sets, parameters, variables, constraints, objective..) are all attributes of the main model object while being objects themselves.
There are two types of models in pyomo: a ConcreteModel is one where all the data is defined at model creation. We are going to use this type of model in this tutorial. Pyomo however also supports an AbstractModel, where the model structure is generated first and then particular instances of the model are generated with a particular set of data.
The first thing to do in the script is to load the pyomo library and create a new ConcreteModel object. We have little imagination here, and we call our model "model". You can give it whatever name you want. However, if you give your model another name, you also need to create a model object at the end of your script:
End of explanation
## Define sets ##
# Sets
# i canning plants / seattle, san-diego /
# j markets / new-york, chicago, topeka / ;
model.i = Set(initialize=['seattle','san-diego'], doc='Canning plants')
model.j = Set(initialize=['new-york','chicago', 'topeka'], doc='Markets')
Explanation: Set Definitions
Sets are created as attribute objects of the main model object and all the information is given as a parameter in the constructor function. Specifically, we are passing to the constructor the initial elements of the set and a documentation string to keep track of what our set represents:
End of explanation
## Define parameters ##
# Parameters
# a(i) capacity of plant i in cases
# / seattle 350
# san-diego 600 /
# b(j) demand at market j in cases
# / new-york 325
# chicago 300
# topeka 275 / ;
model.a = Param(model.i, initialize={'seattle':350,'san-diego':600}, doc='Capacity of plant i in cases')
model.b = Param(model.j, initialize={'new-york':325,'chicago':300,'topeka':275}, doc='Demand at market j in cases')
# Table d(i,j) distance in thousands of miles
# new-york chicago topeka
# seattle 2.5 1.7 1.8
# san-diego 2.5 1.8 1.4 ;
dtab = {
('seattle', 'new-york') : 2.5,
('seattle', 'chicago') : 1.7,
('seattle', 'topeka') : 1.8,
('san-diego','new-york') : 2.5,
('san-diego','chicago') : 1.8,
('san-diego','topeka') : 1.4,
}
model.d = Param(model.i, model.j, initialize=dtab, doc='Distance in thousands of miles')
# Scalar f freight in dollars per case per thousand miles /90/ ;
model.f = Param(initialize=90, doc='Freight in dollars per case per thousand miles')
Explanation: Parameters
Parameter objects are created specifying the sets over which they are defined and are initialised with either a python dictionary or a scalar:
End of explanation
# Parameter c(i,j) transport cost in thousands of dollars per case ;
# c(i,j) = f * d(i,j) / 1000 ;
def c_init(model, i, j):
return model.f * model.d[i,j] / 1000
model.c = Param(model.i, model.j, initialize=c_init, doc='Transport cost in thousands of dollars per case')
Explanation: A third, powerful way to initialize a parameter is using a user-defined function.
This function will be automatically called by pyomo with any possible (i,j) set. In this case pyomo will actually call c_init() six times in order to initialize the model.c parameter.
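Once the cell above has run, a quick sanity check (a sketch) is to print the computed costs straight from the parameter:
for i in model.i:
    for j in model.j:
        print(i, j, model.c[i,j])  # e.g. seattle new-york 0.225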
End of explanation
## Define variables ##
# Variables
# x(i,j) shipment quantities in cases
# z total transportation costs in thousands of dollars ;
# Positive Variable x ;
model.x = Var(model.i, model.j, bounds=(0.0,None), doc='Shipment quantities in case')
Explanation: Variables
Similar to parameters, variables are created specifying their domain(s). For variables we can also specify the upper/lower bounds in the constructor.
Differently from GAMS, we don't need to define the variable that is on the left hand side of the objective function.
End of explanation
## Define constraints ##
# supply(i) observe supply limit at plant i
# supply(i) .. sum (j, x(i,j)) =l= a(i)
def supply_rule(model, i):
return sum(model.x[i,j] for j in model.j) <= model.a[i]
model.supply = Constraint(model.i, rule=supply_rule, doc='Observe supply limit at plant i')
# demand(j) satisfy demand at market j ;
# demand(j) .. sum(i, x(i,j)) =g= b(j);
def demand_rule(model, j):
return sum(model.x[i,j] for i in model.i) >= model.b[j]
model.demand = Constraint(model.j, rule=demand_rule, doc='Satisfy demand at market j')
Explanation: Constraints
At this point, it should not be a surprise that constraints are again defined as model objects with the required information passed as parameters in the constructor function.
End of explanation
def supply_rule(model, i):
supply = 0.0
for j in model.j:
supply += model.x[i,j]
return supply <= model.a[i]
Explanation: The above code takes advantage of list comprehensions, a powerful feature of the python language that provides a concise way to loop over a list. If we take the supply_rule as an example, this is actually called two times by pyomo (once for each of the elements of i). Without list comprehensions we would have had to write our function using a for loop, like:
End of explanation
## Define Objective and solve ##
# cost define objective function
# cost .. z =e= sum((i,j), c(i,j)*x(i,j)) ;
# Model transport /all/ ;
# Solve transport using lp minimizing z ;
def objective_rule(model):
return sum(model.c[i,j]*model.x[i,j] for i in model.i for j in model.j)
model.objective = Objective(rule=objective_rule, sense=minimize, doc='Define objective function')
Explanation: Using list comprehension is however quicker to code and more readable.
Objective and Solving
The definition of the objective is similar to that of the constraints, except that most solvers require a scalar objective function, hence a unique function, and we can specify the sense (direction) of the optimisation.
End of explanation
def objective_rule(model):
obj = 0.0
for ki in model.i:
for kj in model.j:
obj += model.c[ki,kj]*model.x[ki,kj]
return obj
Explanation: As we are here looping over two distinct sets, we can see how list comprehension really simplifies the code. The objective function could have been written without list comprehension as:
End of explanation
## Display of the output ##
# Display x.l, x.m ;
def pyomo_postprocess(options=None, instance=None, results=None):
model.x.display()
Explanation: Retrieving the Output
We use the pyomo_postprocess() function to retrieve the output and do something with it. For example, we could display solution values (see below), plot a graph with matplotlib or save it in a csv file.
This function is called by pyomo after the solver has finished.
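Saving to csv, for example, could be as simple as the following sketch (the filename and column layout here are just an illustration; assumes the model has been solved):
def save_shipments_csv(model, path='shipments.csv'):
    # one row per (plant, market) pair with the optimal shipment
    with open(path, 'w') as f:
        f.write('plant,market,cases\n')
        for i in model.i:
            for j in model.j:
                f.write('%s,%s,%s\n' % (i, j, model.x[i,j].value))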
End of explanation
# This is an optional code path that allows the script to be run outside of
# pyomo command-line. For example: python transport.py
if __name__ == '__main__':
# This emulates what the pyomo command-line tools does
from pyomo.opt import SolverFactory
import pyomo.environ
opt = SolverFactory("glpk")
results = opt.solve(model)
#sends results to stdout
results.write()
print("\nDisplaying Solution\n" + '-'*60)
pyomo_postprocess(None, model, results)
Explanation: We can print model structure information with model.pprint() (“pprint” stands for “pretty print”).
Results are also by default saved in a results.json file or, if PyYAML is installed in the system, in results.yml.
Editing and Running the Script
Differently from GAMS, you can use whatever editor environment you wish to code a pyomo script. If you don't need debugging features, a simple text editor like Notepad++ (in windows), gedit or kate (in Linux) will suffice. They already have syntax highlight for python.
If you want advanced features and debugging capabilities you can use a dedicated Python IDE, like e.g. Spyder.
You will normally run the script as pyomo solve --solver=glpk transport.py. You can stream solver-specific output by adding the option --stream-output. If you want to run the script as python transport.py add the following lines at the end:
End of explanation
#!/usr/bin/env python
# -*- coding: utf-8 -*-
Explanation: Finally, if you are very lazy and want to run the script with just ./transport.py (and you are in Linux) add the following lines at the top:
End of explanation
!cat transport.py
Explanation: Complete script
Here is the complete script:
End of explanation
!pyomo solve --solver=glpk transport.py
Explanation: Solutions
Running the model leads to the following output:
End of explanation
!cat results.yml
Explanation: By default, the optimization results are stored in the file results.yml:
End of explanation |
8,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trying out the Transformer dataset class from Pylearn2 with our current dataset class as raw; we should be able to make a Block to apply to it, using one of our processing functions that produces a random combination of its augmentations.
Setting up
Loading the data and the model, the classic loosely AlexNet based model we've been using for a while.
Step1: Making a Block
Pylearn2 uses Blocks to apply the processing functions to the raw data. The Transformer class appears to be able to also take model pickle files as transforms. Hopefully, we can just make an object that inherits the Block base class and supply it with a function to transform an image and that might work. The documentation doesn't really say one way or the other.
Step2: So it flips images like it's supposed to. Now we can try to make a TransformerDataset using it
Step3: Should be possible to hack together from here. Making a transformer dataset that takes a stochastic processing function in a block; sampling from a set of possible augmentations and applying them to the image.
Stupid Transformer
The stupid transformer takes a dataset after preprocessing, i.e. after the dataset has been resized and normalised into a homogeneous numpy array. It then applies its processing function to each of the examples in the array when a batch is requested.
This is pretty easy to make; in fact we've pretty much done it above. All we need is a stochastic augmentation function that will apply a random augmentation to the images supplied each time. Then, we'll have a potentially massive dataset.
Step4: Had to make some modifications to the Pylearn2 code to make this work
Step5: Iterator gets called during the SGD train loop, specifically on lines 445-464 in pylearn2/training_algorithms/sgd.py
Step6: So the iterators can't actually produce examples? In that case, what is it actually training on? Maybe it's failing over to the raw iterator silently? Would explain the lack of difference in actual performance.
Step13: Smart Transformer
The big problem with the dataset before is that these transformations have to occur after resizing, normalisation and loading all the images into this big numpy array. We might be able to hack our way round this by loading the images unprocessed into a very large numpy array and padding the spare area around most of the images with an indicator number; then shaving this off before augmentation and homogenisation back down to whatever size we're aiming for.
It would be much better if the transformer dataset had a stochastic function which it applied whenever it needed a batch to a set of raw images held in memory. To make this, first going to try to create a dummy raw dataset that simply loads the raw images as a list of numpy arrays and supports the expected interface that the Transformer class will be looking for. Then, we just need to initialise our Block class with a processing function that can support processing from raw images.
Step14: OK, so we've written a dataset that has an iterator that follows the standard Python conventions. Now, all we need to do is get Pylearn2 to accept this dataset. Easiest way to do this is to write it into a YAML file and run a training script. Writing the above into modules in our codebase and using the following YAML file
Step15: Ran the model (many times) updating the code until it would work. Now the above YAML will train.
Bugs?
We might have bugs in the ListDataset as, when running it with the same augmentations as used in the traditional methods, we don't see anywhere near the same performance increases. In fact, it was barely able to learn at all, so it's probably broken somehow. As it can't learn at all, it might be garbling the images, producing batches that don't correspond to the targets.
The following code checks that we don't have problems with our random number generators
Step16: Checking Against Dense
The DensePNGDataset doesn't appear to have the problems we're seeing with the ListDataset. If we load the exact same processing in both and iterate over the minibatches sequentially we should see exactly the same minibatches being produced.
Step17: Recent tests indicate it is working after all, so I'm neglecting these tests.
Hierarchical Models
We want to be able to represent some of the taxonomic tree information in the labels, in order to hopefully propagate some more useful information from these additional labels. This amounts to having multiple softmax layers in our output layer. These are wrapped in a FlattenerLayer, so expect to see a big n-of-k encoded vector indicating the class and superclasses as targets for every data point.
Unfortunately, we've got some bugs with how this is working, so going to look at how to debug these.
Step18: Check for Heisenbugs | Python Code:
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import holoviews as hl
%load_ext holoviews.ipython
import sklearn.metrics
cd ..
settings = neukrill_net.utils.Settings("settings.json")
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/alexnet_based.json", settings, force=True)
# loading the model
model = pylearn2.utils.serial.load(run_settings['pickle abspath'])
# loading the data
dataset = neukrill_net.dense_dataset.DensePNGDataset(settings_path=run_settings['settings_path'],
run_settings=run_settings['run_settings_path'],
train_or_predict='train',
training_set_mode='validation', force=True)
Explanation: Trying out the Transformer dataset class from Pylearn2 with our current dataset class as raw; we should be able to make a Block to apply to it, using one of our processing functions that produces a random combination of its augmentations.
Setting up
Loading the data and the model, the classic loosely AlexNet based model we've been using for a while.
End of explanation
import pylearn2.blocks
import neukrill_net.image_processing
b = pylearn2.blocks.Block()
b.fn = lambda x: neukrill_net.image_processing.flip_image(x,flip_x=True)
t = dataset.get_topological_view(dataset.X[:1,:])
t.shape
%opts Image style(cmap='gray')
i = hl.Image(t.reshape(t.shape[1:3]))
i
import pdb
class SampleAugment(pylearn2.blocks.Block):
def __init__(self,fn,target_shape):
self._fn = fn
self.cpu_only=False
self.target_shape = target_shape
def __call__(self,inputs):
return self.fn(inputs)
def fn(self,inputs):
# prepare empty array same size as inputs
req = inputs.shape
sh = [inputs.shape[0]] + list(self.target_shape)
inputs = inputs.reshape(sh)
processed = np.zeros(sh)
# hand each image as a 2D array
for i in range(inputs.shape[0]):
processed[i] = self._fn(inputs[i].reshape(self.target_shape))
processed = processed.reshape(req)
return processed
b = SampleAugment(lambda x: neukrill_net.image_processing.flip_image(x,flip_x=True),(48,48))
hl.Image(b(t).reshape(t.shape[1:3]))
Explanation: Making a Block
Pylearn2 uses Blocks to apply the processing functions to the raw data. The Transformer class appears to be able to also take model pickle files as transforms. Hopefully, we can just make an object that inherits the Block base class and supply it with a function to transform an image and that might work. The documentation doesn't really say one way or the other.
End of explanation
# want to make sure the processing is obvious
b = SampleAugment(lambda x: np.zeros(x.shape),(48,48))
import pylearn2.datasets.transformer_dataset
tdataset = pylearn2.datasets.transformer_dataset.TransformerDataset(dataset,b,
space_preserving=True)
hl.Image(dataset.get_batch_topo(1).reshape(t.shape[1:3]))
hl.Image(tdataset.get_batch_topo(1).reshape(t.shape[1:3]))
Explanation: So it flips images like it's supposed to. Now we can try to make a TransformerDataset using it:
End of explanation
import neukrill_net.augment
reload(neukrill_net.augment)
import neukrill_net.blocks
reload(neukrill_net.blocks)
fn = neukrill_net.augment.RandomAugment(**{"units":"float64",
"rotate":-1,
"flip":1,
"rotate_is_resizable":0,
"shear":[0,np.pi/4,np.pi/2],
"crop":[0.05,0.1,0.2],
"noise":0.001,
"scale":[0.9,1.0,1.1,1.5],
"resize":(48,48)
})
t.squeeze().shape
hl.Image(t.squeeze())
hl.Image(fn(t.squeeze()))
b = neukrill_net.blocks.SampleAugment(lambda x: fn(x),(48,48),(48,48))
tdataset = pylearn2.datasets.transformer_dataset.TransformerDataset(raw=dataset,transformer=b,
space_preserving=True)
reload(neukrill_net.image_processing)
tdataset.get_batch_topo(2).shape
hl.Image(tdataset.get_batch_topo(1).reshape((48,48)))
tdataset.get_num_examples()
batch_size = 128
num_batches = int(tdataset.get_num_examples()/batch_size)
Explanation: Should be possible to hack together from here. Making a transformer dataset that takes a stochastic processing function in a block; sampling from a set of possible augmentations and applying them to the image.
Stupid Transformer
The stupid transformer takes a dataset after preprocessing, i.e. after the dataset has been resized and normalised into a homogeneous numpy array. It then applies its processing function to each of the examples in the array when a batch is requested.
This is pretty easy to make; in fact we've pretty much done it above. All we need is a stochastic augmentation function that will apply a random augmentation to the images supplied each time. Then, we'll have a potentially massive dataset.
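A minimal version of such a function could look like this (a sketch with arbitrary choices; the RandomAugment class used later does this properly):
aug_rng = np.random.RandomState(42)
def stochastic_augment(image):
    # pick one augmentation at random on each call
    choice = aug_rng.randint(3)
    if choice == 0:
        return neukrill_net.image_processing.flip_image(image, flip_x=True)
    elif choice == 1:
        return np.rot90(image)
    return image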
End of explanation
import pylearn2.utils.iteration
reload(pylearn2.utils.iteration)
Explanation: Had to make some modifications to the Pylearn2 code to make this work:
End of explanation
from pylearn2.space import CompositeSpace
from pylearn2.utils.data_specs import DataSpecsMapping
data_specs = (model.get_input_space(),model.get_input_source())
mapping = DataSpecsMapping(data_specs)
space_tuple = mapping.flatten(data_specs[0], return_tuple=True)
source_tuple = mapping.flatten(data_specs[1], return_tuple=True)
flat_data_specs = (CompositeSpace(space_tuple), source_tuple)
iterator = tdataset.iterator(mode='random_slice', data_specs=flat_data_specs,
batch_size=batch_size,num_batches=num_batches)
%pdb
iterator.next().shape
Explanation: Iterator gets called during the SGD train loop, specifically on lines 445-464 in pylearn2/training_algorithms/sgd.py:
```python
iterator = dataset.iterator(mode=self.train_iteration_mode,
batch_size=self.batch_size,
data_specs=flat_data_specs,
return_tuple=True, rng=rng,
num_batches=self.batches_per_iter)
on_load_batch = self.on_load_batch
for batch in iterator:
for callback in on_load_batch:
callback(*batch)
self.sgd_update(*batch)
# iterator might return a smaller batch if dataset size
# isn't divisible by batch_size
# Note: if data_specs[0] is a NullSpace, there is no way to know
# how many examples would actually have been in the batch,
# since it was empty, so actual_batch_size would be reported as 0.
actual_batch_size = flat_data_specs[0].np_batch_size(batch)
self.monitor.report_batch(actual_batch_size)
for callback in self.update_callbacks:
callback(self)
```
So we have to call the iterator the same way, specifically getting the flat_data_specs right.
End of explanation
iterator.raw_iterator.next().shape
iterator.num_examples
Explanation: So the iterators can't actually produce examples? In that case, what is it actually training on? Maybe it's failing over to the raw iterator silently? Would explain the lack of difference in actual performance.
End of explanation
import pylearn2.datasets
# don't have to think too hard about how to write this:
# https://stackoverflow.com/questions/19151/build-a-basic-python-iterator
class FlyIterator(object):
    """
    Simple iterator class to take a dataset and iterate over
    its contents applying a processing function. Assumes
    the dataset has a processing function to apply.
    It may have an issue of there being some leftover examples
    that will never be shown on any epoch. Can avoid this by
    seeding with sampled numbers from the dataset's own rng.
    """
def __init__(self, dataset, batch_size, num_batches,
final_shape, seed=42):
self.dataset = dataset
self.batch_size = batch_size
self.num_batches = num_batches
self.final_shape = final_shape
# initialise rng
self.rng = np.random.RandomState(seed=seed)
# shuffle indices of size equal to number of examples
# in dataset
N = self.dataset.get_num_examples()
self.indices = range(N)
self.rng.shuffle(self.indices)
def __iter__(self):
return self
def next(self):
# return one batch
        if len(self.indices) < self.batch_size:
            raise StopIteration
        batch_indices = [self.indices.pop() for i in range(self.batch_size)]
        # preallocate array
        if len(self.final_shape) == 2:
            batch = np.zeros([self.batch_size]+list(self.final_shape)+[1])
        elif len(self.final_shape) == 3:
            batch = np.zeros([self.batch_size]+list(self.final_shape))
# iterate over indices, applying the dataset's processing function
for i,j in enumerate(batch_indices):
batch[i] = self.dataset.fn(self.dataset.X[j]).reshape(batch.shape[1:])
return batch
class ListDataset(pylearn2.datasets.dataset.Dataset):
    """
    Loads images as raw numpy arrays in a list, tries
    its best to respect the interface expected of a
    Pylearn2 Dataset.
    """
def __init__(self, transformer, settings_path="settings.json",
run_settings_path="run_settings/alexnet_based.json",
verbose=False, force=False, seed=42):
        """
        Loads the images as a list of differently shaped
        numpy arrays and loads the labels as a vector of
        integers, mapped deterministically.
        """
self.fn = transformer
# load settings
self.settings = neukrill_net.utils.Settings(settings_path)
self.run_settings = neukrill_net.utils.load_run_settings(run_settings_path,
self.settings,
force=force)
self.X, labels = neukrill_net.utils.load_rawdata(self.settings.image_fnames,
classes=self.settings.classes,
verbose=verbose)
# transform labels from strings to integers
class_dictionary = {}
for i,c in enumerate(self.settings.classes):
class_dictionary[c] = i
self.y = np.array(map(lambda c: class_dictionary[c],labels))
# set up the random state
self.rng = np.random.RandomState(seed)
# shuffle a list of image indices
self.N = len(self.X)
self.indices = range(self.N)
self.rng.shuffle(self.indices)
def iterator(self, mode=None, batch_size=None, num_batches=None, rng=None,
data_specs=None, return_tuple=False):
        """
        Returns iterator object with standard Pythonic interface; iterates
        over the dataset over batches, popping off batches from a shuffled
        list of indices.
        """
        if not num_batches:
            # guess that we want to use all of them
            num_batches = int(len(self.X)/batch_size)
        iterator = FlyIterator(dataset=self, batch_size=batch_size,
                               num_batches=num_batches,
                               final_shape=self.run_settings["final_shape"],
                               seed=self.rng.random_integers(low=0, high=256))
return iterator
    def adjust_to_be_viewed_with(self):
raise NotImplementedError("Didn't think this was important, so didn't write it.")
def get_batch_design(self, batch_size, include_labels=False):
        """
        Will return a list of the size batch_size of carefully raveled arrays.
        Optionally, will also include labels (using include_labels).
        """
        selection = self.rng.random_integers(0,high=self.N-1,size=batch_size) # N-1: random_integers is inclusive
batch = [self.X[s].ravel() for s in selection]
return batch
def get_batch_topo(self, batch_size, include_labels=False):
        """
        Will return a list of the size batch_size of raw, unfiltered, artisan
        numpy arrays. Optionally, will also include labels (using include_labels).
        Strongly discouraged to use this method for learning code, so I guess
        this isn't so important?
        """
        selection = self.rng.random_integers(0,high=self.N-1,size=batch_size) # N-1: random_integers is inclusive
batch = [self.X[s] for s in selection]
return batch
def get_num_examples(self):
return self.N
    def get_topological_view(self):
raise NotImplementedError("Not written yet, not sure we need it")
    def get_weights_view(self):
raise NotImplementedError("Not written yet, didn't think it was important")
def has_targets(self):
        if self.y is not None:
return True
else:
return False
lset = ListDataset(fn,force=True)
i = lset.iterator(batch_size=128)
for b in i:
print(b.shape)
t = b
break
hl.Image(t[1,:].squeeze())
Explanation: Smart Transformer
The big problem with the dataset before is that these transformations have to occur after resizing, normalisation and loading all the images into this big numpy array. We might be able to hack our way round this by loading the images unprocessed into a very large numpy array and padding the spare area around most of the images with an indicator number; then shaving this off before augmentation and homogenisation back down to whatever size we're aiming for.
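The padding trick could look something like this (a sketch; using -1 as the indicator value is an assumption):
def pad_to(image, shape, fill=-1):
    # embed a small image in a fixed-size array, marking unused area
    padded = np.full(shape, fill, dtype=np.float64)
    padded[:image.shape[0], :image.shape[1]] = image
    return padded
def unpad(padded, fill=-1):
    # shave the indicator border back off before augmentation
    mask = padded != fill
    return padded[mask.any(axis=1)][:, mask.any(axis=0)]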
It would be much better if the transformer dataset had a stochastic function which it applied whenever it needed a batch to a set of raw images held in memory. To make this, first going to try to create a dummy raw dataset that simply loads the raw images as a list of numpy arrays and supports the expected interface that the Transformer class will be looking for. Then, we just need to initialise our Block class with a processing function that can support processing from raw images.
End of explanation
!cat yaml_templates/alexnet_based_listdataset.yaml
Explanation: OK, so we've written a dataset that has an iterator that follows the standard Python conventions. Now, all we need to do is get Pylearn2 to accept this dataset. Easiest way to do this is to write it into a YAML file and run a training script. Writing the above into modules in our codebase and using the following YAML file:
End of explanation
import neukrill_net.image_directory_dataset
fn = neukrill_net.augment.RandomAugment(**{"units":"float64",
"rotate":[0,90,180,270],
"flip":1,
"rotate_is_resizable":0,
"normalise":{"global_or_pixel":"global",
"mu":0.95727,"sigma":0.1423},
"resize":(48,48)
})
fn2 = neukrill_net.augment.RandomAugment(**{"units":"float64",
"rotate":[0,90,180,270],
"flip":1,
"rotate_is_resizable":0,
"normalise":{"global_or_pixel":"global",
"mu":0.95727,"sigma":0.1423},
"resize":(48,48)
})
dataset = neukrill_net.image_directory_dataset.ListDataset(fn, force=True)
dataset2 = neukrill_net.image_directory_dataset.ListDataset(fn2, force=True)
iterator = dataset.iterator(batch_size=128)
iterator2 = dataset2.iterator(batch_size=128)
for a,b in zip(iterator,iterator2):
if (not np.allclose(a[0],b[0]) or not np.allclose(a[1],b[1])):
print("Shit.")
not np.allclose(a[0],b[0])
not np.allclose(a[0],b[0]) or not np.allclose(a[1],b[1])
Explanation: Ran the model (many times) updating the code until it would work. Now the above YAML will train.
Bugs?
We might have bugs in the ListDataset as, when running it with the same augmentations as used in the traditional methods, we don't see anywhere near the same performance increases. In fact, it was barely able to learn at all, so it's probably broken somehow. As it can't learn at all, it might be garbling the images, producing batches that don't correspond to the targets.
The following code checks that we don't have problems with our random number generators:
End of explanation
reload(neukrill_net.image_directory_dataset)
dense = neukrill_net.dense_dataset.DensePNGDataset(
run_settings=run_settings['run_settings_path'],
force=True, verbose=True)
run_settings['run_settings_path']
fn = neukrill_net.augment.RandomAugment(**{"units":"float64",
"normalise":{"global_or_pixel":"global",
"mu":0.95727,"sigma":0.1423},
"resize":(48,48)})
lists = neukrill_net.image_directory_dataset.ListDataset(fn,
run_settings_path=run_settings['run_settings_path'], force=True)
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/alexnet_based_runtest.json",settings,
force=True)
ystring = neukrill_net.utils.format_yaml(run_settings,settings)
train = pylearn2.config.yaml_parse.load(ystring)
data_specs = train.algorithm.cost.get_data_specs(train.model)
mapping = DataSpecsMapping(data_specs)
space_tuple = mapping.flatten(data_specs[0], return_tuple=True)
source_tuple = mapping.flatten(data_specs[1], return_tuple=True)
flat_data_specs = (CompositeSpace(space_tuple), source_tuple)
dense_iterator = dense.iterator(mode='even_sequential',batch_size=128,
data_specs=flat_data_specs,return_tuple=True)
list_iterator = lists.iterator(mode='sequential',batch_size=128)
a = dense_iterator.next()
b = list_iterator.next()
print(a[0].shape,a[1].shape)
print(b[0].shape,b[1].shape)
Explanation: Checking Against Dense
The DensePNGDataset doesn't appear to have the problems we're seeing with the ListDataset. If we load the exact same processing in both and iterate over the minibatches sequentially we should see exactly the same minibatches being produced.
End of explanation
import neukrill_net.image_directory_dataset
dataset = neukrill_net.image_directory_dataset.ListDataset(transformer=fn,
run_settings_path="run_settings/alexnet_based_extra_convlayer_with_superclasses.json",
force=True)
import neukrill_net.encoding
hier = neukrill_net.encoding.get_hierarchy()
hier
l = sum([1 for a in hier for b in a])
l
sum([len(a) for a in hier])
x = settings.classes[1]
class_dictionary = {}
for i,c in enumerate(settings.classes):
class_dictionary[c] = i
conflicted = []
for x in settings.classes:
v1 = np.array(neukrill_net.encoding.get_encoding(x,hier)[0])
v2 = np.zeros(len(settings.classes))
v2[class_dictionary[x]] = 1
if not np.allclose(v1,v2):
print(x)
conflicted.append(x)
hier[0] = [str(c) for c in settings.classes]
for x in settings.classes:
v1 = np.array(neukrill_net.encoding.get_encoding(x,hier)[0])
v2 = np.zeros(len(settings.classes))
v2[class_dictionary[x]] = 1
if not np.allclose(v1,v2):
print(x)
[np.where(np.array(a)==1)[0][0] for a in neukrill_net.encoding.get_encoding(x,hier)]
class_dictionary = {}
for c in hier[0]:
class_dictionary[c] = np.where(np.array([a
for l in neukrill_net.encoding.get_encoding(c,hier)
for a in l])==1)[0]
class_dictionary
y = np.zeros((len(settings.classes),188))
for i,j in enumerate(map(lambda c: class_dictionary[c],settings.classes)):
y[i,j] = 1
print(i,j)
plt.imshow(y,cmap='Greys')
hier = neukrill_net.encoding.get_hierarchy()
class_dictionary2 = {}
for c in hier[0]:
class_dictionary2[c] = np.where(np.array([a
for l in neukrill_net.encoding.get_encoding(c,hier)
for a in l])==1)[0]
yb = np.zeros((len(settings.classes),188))
for i,j in enumerate(map(lambda c: class_dictionary2[c],settings.classes)):
yb[i,j] = 1
plt.imshow(yb,cmap='Greys')
np.allclose(y[121:],yb[121:])
for c in conflicted:
print(c,class_dictionary[c],class_dictionary2[c])
oldy = y[:]
Explanation: Recent tests indicate it is working after all, so I'm neglecting these tests.
Hierarchical Models
We want to be able to represent some of taxonomic tree information in the labels, in order to hopefully propagate some more useful information from these additional labels. This amounts to having multiple softmax layers in our output layer. These are wrapped in a FlattenerLayer so expect to see a big n-of-k encoded vector indicator the class and superclasses as targets for every data point.
Unfortunately, we've got some bugs with how this is working, so going to look at how to debug these.
End of explanation
for _ in range(100):
y = np.zeros((len(settings.classes),188))
for i,j in enumerate(map(lambda c: class_dictionary[c],settings.classes)):
y[i,j] = 1
if not np.allclose(y,oldy):
print("Arrays do not match.")
oldy = y[:]
reload(neukrill_net.encoding)
hierarchy = neukrill_net.encoding.get_hierarchy(settings)
for i,j in zip(settings.classes,hierarchy[0]):
if i != j:
print(i,j)
class_dictionary = neukrill_net.encoding.make_class_dictionary(settings.classes,hierarchy)
y = np.zeros((len(settings.classes),188))
for i,j in enumerate(map(lambda c: class_dictionary[c],hierarchy[0])):
y[i,j] = 1
plt.imshow(y,cmap='Greys')
Explanation: Check for Heisenbugs:
End of explanation |
8,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optically pumped magnetometer (OPM) data
In this dataset, electrical median nerve stimulation was delivered to the
left wrist of the subject. Somatosensory evoked fields were measured using
nine QuSpin SERF OPMs placed over the right-hand side somatomotor area.
Here we demonstrate how to localize these custom OPM data in MNE.
Step1: Prepare data for localization
First we filter and epoch the data
Step2: Examine our coordinate alignment for source localization and compute a
forward operator
Step3: Perform dipole fitting
Step4: Perform minimum-norm localization
Due to the small number of sensors, there will be some leakage of activity
to areas with low/no sensitivity. Constraining the source space to
areas we are sensitive to might be a good idea. | Python Code:
import os.path as op
import numpy as np
import mne
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_sample-fwd.fif')
coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
Explanation: Optically pumped magnetometer (OPM) data
In this dataset, electrical median nerve stimulation was delivered to the
left wrist of the subject. Somatosensory evoked fields were measured using
nine QuSpin SERF OPMs placed over the right-hand side somatomotor area.
Here we demonstrate how to localize these custom OPM data in MNE.
End of explanation
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(None, 90, h_trans_bandwidth=10.)
raw.notch_filter(50., notch_widths=1)
# Set epoch rejection threshold a bit larger than for SQUIDs
reject = dict(mag=2e-10)
tmin, tmax = -0.5, 1
# Find Median nerve stimulator trigger
event_id = dict(Median=257)
events = mne.find_events(raw, stim_channel='STI101', mask=257, mask_type='and')
picks = mne.pick_types(raw.info, meg=True, eeg=False)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
reject=reject, picks=picks, proj=False, decim=4)
evoked = epochs.average()
evoked.plot()
cov = mne.compute_covariance(epochs, tmax=0.)
Explanation: Prepare data for localization
First we filter and epoch the data:
End of explanation
bem = mne.read_bem_solution(bem_fname)
trans = None
# To compute the forward solution, we must
# provide our temporary/custom coil definitions, which can be done as::
#
# with mne.use_coil_def(coil_def_fname):
# fwd = mne.make_forward_solution(
# raw.info, trans, src, bem, eeg=False, mindist=5.0,
# n_jobs=1, verbose=True)
fwd = mne.read_forward_solution(fwd_fname)
with mne.use_coil_def(coil_def_fname):
fig = mne.viz.plot_alignment(
raw.info, trans, subject, subjects_dir, ('head', 'pial'), bem=bem)
mne.viz.set_3d_view(figure=fig, azimuth=45, elevation=60, distance=0.4,
focalpoint=(0.02, 0, 0.04))
Explanation: Examine our coordinate alignment for source localization and compute a
forward operator:
<div class="alert alert-info"><h4>Note</h4><p>The Head<->MRI transform is an identity matrix, as the
co-registration method used equates the two coordinate
systems. This mis-defines the head coordinate system
(which should be based on the LPA, Nasion, and RPA)
but should be fine for these analyses.</p></div>
End of explanation
# Fit dipoles on a subset of time points
with mne.use_coil_def(coil_def_fname):
dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.015, 0.080),
cov, bem, trans, verbose=True)
idx = np.argmax(dip_opm.gof)
print('Best dipole at t=%0.1f ms with %0.1f%% GOF'
% (1000 * dip_opm.times[idx], dip_opm.gof[idx]))
# Plot N20m dipole as an example
dip_opm.plot_locations(trans, subject, subjects_dir,
mode='orthoview', idx=idx)
Explanation: Perform dipole fitting
End of explanation
inverse_operator = mne.minimum_norm.make_inverse_operator(
evoked.info, fwd, cov)
method = "MNE"
snr = 3.
lambda2 = 1. / snr ** 2
stc = mne.minimum_norm.apply_inverse(
evoked, inverse_operator, lambda2, method=method,
pick_ori=None, verbose=True)
# Plot source estimate at time of best dipole fit
brain = stc.plot(hemi='rh', views='lat', subjects_dir=subjects_dir,
initial_time=dip_opm.times[idx],
clim=dict(kind='percent', lims=[99, 99.9, 99.99]))
Explanation: Perform minimum-norm localization
Due to the small number of sensors, there will be some leakage of activity
to areas with low/no sensitivity. Constraining the source space to
areas we are sensitive to might be a good idea.
End of explanation |
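# Optional sketch (not part of the original tutorial): one way to act on the
# note above is to mask the estimate with a sensitivity map of the OPM array;
# the 0.3 threshold below is an arbitrary illustrative choice.
sens_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')
stc_masked = stc.copy()
stc_masked.data[sens_map.data[:, 0] < 0.3] = 0.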
8,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook will investigate instances where the river is reversed, and sewage is dumped into the lake. We will take a look at rainfall before these events, to see if there is a correlation
Step1: Now we will look at any n-year storms that occurred during the 10 days prior to the reversal | Python Code:
# Get River reversals
reversals = pd.read_csv('data/lake_michigan_reversals.csv')
reversals['start_date'] = pd.to_datetime(reversals['start_date'])
reversals.head()
# Create rainfall dataframe. Create a series that has hourly precipitation
rain_df = pd.read_csv('data/ohare_hourly_20160929.csv')
rain_df['datetime'] = pd.to_datetime(rain_df['datetime'])
rain_df = rain_df.set_index(pd.DatetimeIndex(rain_df['datetime']))
rain_df = rain_df['19700101':]
chi_rain_series = rain_df['HOURLYPrecip'].resample('1H', label='right').max()
chi_rain_series.head()
# Find the rainfall 'hours' hours before the timestamp
def cum_rain(timestamp, hours):
end_of_day = (timestamp + timedelta(days=1)).replace(hour=0, minute=0)
start_time = end_of_day - timedelta(hours=(hours-1))
return chi_rain_series[start_time:end_of_day].sum()
t = pd.to_datetime('2015-06-15')
cum_rain(t, 240)
# Set the ten_day_rain field in reversals to the amount of rain that fell the previous 10 days (including the day that
# the lock was opened)
# TODO: Is there a more Pandaic way to do this?
for index, reversal in reversals.iterrows():
reversals.loc[index,'ten_day_rain'] = cum_rain(reversal['start_date'], 240)
reversals
# Information about the 10 days that preceed these overflows
reversals['ten_day_rain'].describe(percentiles=[.25, .5, .75])
Explanation: This notebook will investigate instances where the river is reversed, and sewage is dumped into the lake. We will take a look at rainfall before these events, to see if there is a correlation
End of explanation
# N-Year Storm stuff
n_year_threshes = pd.read_csv('../../n-year/notebooks/data/n_year_definitions.csv')
n_year_threshes = n_year_threshes.set_index('Duration')
dur_str_to_hours = {
'5-min':5/60.0,
'10-min':10/60.0,
'15-min':15/60.0,
'30-min':0.5,
'1-hr':1.0,
'2-hr':2.0,
'3-hr':3.0,
'6-hr':6.0,
'12-hr':12.0,
'18-hr':18.0,
'24-hr':24.0,
'48-hr':48.0,
'72-hr':72.0,
'5-day':5*24.0,
'10-day':10*24.0
}
n_s = [int(x.replace('-year','')) for x in reversed(list(n_year_threshes.columns.values))]
duration_strs = sorted(dur_str_to_hours.items(), key=operator.itemgetter(1), reverse=False)
n_year_threshes
# This method returns the first n-year storm found in a given interval. It starts at the 100-year storm and decriments, so
# will return the highest n-year storm found
def find_n_year_storm(start_time, end_time):
for n in n_s:
n_index = n_s.index(n)
next_n = n_s[n_index-1] if n_index != 0 else None
for duration_tuple in reversed(duration_strs):
duration_str = duration_tuple[0]
low_thresh = n_year_threshes.loc[duration_str, str(n) + '-year']
high_thresh = n_year_threshes.loc[duration_str, str(next_n) + '-year'] if next_n is not None else None
duration = int(dur_str_to_hours[duration_str])
sub_series = chi_rain_series[start_time: end_time]
rolling = sub_series.rolling(window=int(duration), min_periods=0).sum()
if high_thresh is not None:
event_endtimes = rolling[(rolling >= low_thresh) & (rolling < high_thresh)].sort_values(ascending=False)
else:
event_endtimes = rolling[(rolling >= low_thresh)].sort_values(ascending=False)
if len(event_endtimes) > 0:
return {'inches': event_endtimes[0], 'n': n, 'end_time': event_endtimes.index[0], 'hours': duration}
return None
start_time = pd.to_datetime('2008-09-04 01:00:00')
end_time = pd.to_datetime('2008-09-14 20:00:00')
find_n_year_storm(start_time, end_time)
# Add a column to the reversals data frame to show n-year storms that occurred before the reversal
# TODO: Is there a more Pandaic way to do this?
for index, reversal in reversals.iterrows():
end_of_day = (reversal['start_date'] + timedelta(days=1)).replace(hour=0, minute=0)
start_time = end_of_day - timedelta(days=10)
reversals.loc[index,'find_n_year_storm'] = str(find_n_year_storm(start_time, end_of_day))
reversals
no_n_year = reversals.loc[reversals['find_n_year_storm'] == 'None']
print("There are %s reversals without an n-year event" % len(no_n_year))
no_n_year
reversals.loc[reversals['year'] == 1997]
reversals.sort_values('crcw', ascending=False)
Explanation: Now we will look at any n-year storms that occurred during the 10 days prior to the reversal
End of explanation |
8,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Maren Equations Summary
This notebook is just pulling out the important figures and tables for the manuscript. For more detailed explanations and exploring see other notebooks.
Step1: Import clean data set
This data set was created by
Step2: Additional cleaning
For the maren equations, I am also going to drop exonic regions with less than 10 genotypes. The maren equations make some assumptions about the population level sums. Obvisouly the more genotypes that are present for each fusions the better, but I am comfortable with as few as 10 genotypes.
Step3: Raw Counts
Step4: Plot Distribution of cis- and trans-effects
Step8: Plot cis- and trans-effects vs Allelic Proportion
Step9: Plot cis- and trans-effects vs Allelic Proportion for Specific Exonic Regions
Step10: F10317_SI
This fusion has a weaker cis-line effects but trans-line effects look more linear.
Step12: F10482_SI
This fusion has a weaker cis-line effects but trans-line effects look more linear. | Python Code:
# Set-up default environment
%run '../ipython_startup.py'
# Import additional libraries
import sas7bdat as sas
import cPickle as pickle
import statsmodels.formula.api as smf
from ase_cisEq import marenEq
from ase_cisEq import marenPrintTable
from ase_normalization import meanCenter
from ase_normalization import q3Norm
from ase_normalization import meanStd
pjoin = os.path.join
Explanation: Maren Equations Summary
This notebook is just pulling out the important figures and tables for the manuscript. For more detailed explanations and exploring see other notebooks.
End of explanation
# Import clean dataset
with sas.SAS7BDAT(pjoin(PROJ, 'sas_data/clean_ase_stack.sas7bdat')) as FH:
df = FH.to_data_frame()
dfClean = df[['line', 'mating_status', 'fusion_id', 'flag_AI_combined', 'q5_mean_theta', 'sum_both', 'sum_line', 'sum_tester', 'sum_total', 'mean_apn']]
print 'Rows ' + str(dfClean.shape[0])
print 'Columns ' + str(dfClean.shape[1])
print 'Number of Genotypes ' + str(len(set(dfClean['line'].tolist())))
print 'Number of Exonic Regions ' + str(len(set(dfClean['fusion_id'].tolist())))
Explanation: Import clean data set
This data set was created by: ase_summarize_ase_filters.sas
The data has had the following dropped:
* regions that were always biased in the 100-genome simulation
* regions with APN $\le 25$
* regions not in at least 10% of genotypes
* regions not in mated and virgin
* genotypes with extreme bias in median(q5_mean_theta)
* genotypes with $\le500$ regions
End of explanation
# Drop groups with less than 10 lines per fusion
grp = dfClean.groupby(['mating_status', 'fusion_id'])
dfGt10 = grp.filter(lambda x: x['line'].count() >= 10).copy()
print 'Rows ' + str(dfGt10.shape[0])
print 'Columns ' + str(dfGt10.shape[1])
print 'Number of Genotypes ' + str(len(set(dfGt10['line'].tolist())))
print 'Number of Exonic Regions ' + str(len(set(dfGt10['fusion_id'].tolist())))
Explanation: Additional cleaning
For the maren equations, I am also going to drop exonic regions with less than 10 genotypes. The maren equations make some assumptions about the population-level sums. Obviously, the more genotypes present for each fusion the better, but I am comfortable with as few as 10 genotypes.
End of explanation
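# Added sanity check: confirm that every remaining group really has >= 10 genotypes.
minN = dfGt10.groupby(['mating_status', 'fusion_id'])['line'].count().min()
print 'Smallest group size ' + str(minN)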
# Calculate Maren TIG equations by mating status and exonic region
marenRawCounts = marenEq(dfGt10, Eii='sum_line', Eti='sum_tester', group=['mating_status', 'fusion_id'])
marenRawCounts['mag_cis'] = abs(marenRawCounts['cis_line'])
marenRawCounts.columns
marenRawCounts
Explanation: Raw Counts
End of explanation
# Plot densities
def panelKde(df, **kwargs):
options = {'subplots': True,
'layout': (7, 7),
'figsize': (20, 20),
'xlim': (-500, 500),
'legend': False,
'color': 'k'}
options.update(kwargs)
# Make plot
axes = df.plot(kind='kde', **options)
# Add titles instead of legends
try:
for ax in axes.ravel():
h, l = ax.get_legend_handles_labels()
ax.set_title(l[0])
ax.get_yaxis().set_visible(False)
ax.axvline(0, lw=1)
except:
ax = axes
ax.get_yaxis().set_visible(False)
ax.axvline(0, lw=1)
return plt.gcf()
def linePanel(df, value='cis_line', index='fusion_id', columns='line'):
mymap = {
'cis_line': 'cis-Line Effects',
'trans_line': 'trans-Line Effects',
'line': 'genotype',
'fusion_id': 'exonic_region'
}
# Iterate over mated and virgin
for k, v in {'M': 'Mated', 'V': 'Virgin'}.iteritems():
# Pivot data frame so that the thing you want to make panels by is in columns.
dfPiv = pd.pivot_table(df[df['mating_status'] == k],
values=value, index=index, columns=columns)
# Generate panel plot with at most 49 panels
if value == 'cis_line':
xlim = (-500, 500)
else:
# trans-effects appear to be larger in magnitude
xlim = (-1500, 1500)
fig = panelKde(dfPiv.iloc[:, :49], xlim=xlim)
title = '{}\n{}'.format(mymap[value], v)
fig.suptitle(title, fontsize=18, fontweight='bold')
fname = pjoin(PROJ, 'pipeline_output/cis_effects/density_plot_by_{}_{}_{}.png'.format(mymap[columns], value, v.lower()))
plt.savefig(fname, bbox_inches='tight')
print("Saved figure to: " + fname)
plt.close()
def testerPanel(df, value='cis_tester'):
mymap = {
'cis_tester': 'cis-Tester Effects',
'trans_tester': 'trans-Tester Effects'
}
# Iterate over mated and virgin
for k, v in {'M': 'Mated', 'V': 'Virgin'}.iteritems():
# Split table by mating status and drop duplicates, because
# there is only one tester value for each exonic region
dfSub = df.ix[df['mating_status'] == k,['fusion_id', value]].drop_duplicates()
# Generate Panel Plot
fig = panelKde(dfSub, subplots=False)
title = '{}\n{}'.format(mymap[value], v)
fig.suptitle(title, fontsize=18, fontweight='bold')
fname = pjoin(PROJ, 'pipeline_output/cis_effects/density_plot_{}_{}.png'.format(value, v.lower()))
plt.savefig(fname, bbox_inches='tight')
print("Saved figure to: " + fname)
plt.close()
# Cis and trans line effects by genotype
linePanel(marenRawCounts, value='cis_line', index='fusion_id', columns='line')
linePanel(marenRawCounts, value='trans_line', index='fusion_id', columns='line')
# Cis and trans line effects by exonic region
linePanel(marenRawCounts, value='cis_line', index='line', columns='fusion_id')
linePanel(marenRawCounts, value='trans_line', index='line', columns='fusion_id')
# Cis and trans tester effects
testerPanel(marenRawCounts, value='cis_tester')
testerPanel(marenRawCounts, value='trans_tester')
Explanation: Plot Distribution of cis- and trans-effects
End of explanation
# Set Globals
SHAPES = {'M': 'o', 'V': '^'}
CMAP='jet'
# Add color column to color by genotype
colors = {}
cnt = 0
genos = set(dfGt10['line'].tolist())
for l in genos:
colors[l] = cnt
cnt += 1
marenRawCounts['color'] = marenRawCounts['line'].map(colors)
# Plotting scatter
def getR2(df, x, y):
"""Calculate the R-squared using OLS with an intercept"""
formula = '{} ~ {} + 1'.format(y, x)
return smf.ols(formula, df).fit().rsquared
def scatPlt(df, x, y, c=None, cmap='jet', s=50, marker='o', ax=None, title=None, xlab=None, ylab=None, diag='pos'):
"""Make a scatter plot using some default options"""
ax = df.plot(x, y, kind='scatter', ax=ax, c=c, cmap=cmap, s=s, marker=marker, title=title, colorbar=False)
# Add a diag line
if diag == 'neg':
# draw a diag line with negative slope
ax.plot([0, 1], [1, 0], transform=ax.transAxes)
elif diag == 'pos':
# draw a diag line with positive slope
ax.plot([0, 1], [0, 1], transform=ax.transAxes)
ax.set_xlabel(xlab)
ax.set_ylabel(ylab)
return ax
def scatPltPanel(df, line='sum_line', tester='sum_tester', x='cis_line', y='prop', cmap='jet', s=60, panel_title=None, diag='pos'):
"""Make a panel of scatter plots using pandas"""
# Plot the cis-line effects x proportion Line by fusion
df['prop'] = 1 - df[tester] / (df[line] + df[tester])
# Create 5x5 panel plot
fig, axes = plt.subplots(5, 5, figsize=(20, 20))
fig.suptitle(panel_title, fontsize=12, fontweight='bold')
axes = axes.ravel()
# Group by fusion_id
for i, (n, gdf) in enumerate(df.groupby('fusion_id')):
ax = axes[i]
# Calculate R-squared value
r2 = getR2(gdf, x, y)
# Make new title with R-squared in it
t = '{}\nR^2: {}'.format(n, round(r2, 3))
# Change marker style based on mating status and plot
for ms, msdf in gdf.groupby('mating_status'):
scatPlt(msdf, x, y, c='color', cmap=cmap, s=s, marker=SHAPES[ms], ax=ax, title=t, xlab=x, ylab=y, diag=diag)
# only plot 25 fusions
if i == 24:
break
fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_by_exonic_region_{}_v_{}.png'.format(x, y))
plt.savefig(fname, bbox_inches='tight')
print("Saved figure to: " + fname)
plt.close()
# Plot the cis-line effects x proportion by fusion
scatPltPanel(marenRawCounts, line='sum_line', tester='sum_tester', cmap=CMAP, panel_title='Raw Counts: cis-line')
# Plot the trans-line effects x proportion by fusion
scatPltPanel(marenRawCounts, line='sum_line', tester='sum_tester', x='trans_line', cmap=CMAP, panel_title='Raw Counts: trans-line', diag='neg')
Explanation: Plot cis- and trans-effects vs Allelic Proportion
End of explanation
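# Added sketch: the relationship in the panels above summarized numerically,
# one R^2 per exonic region (reuses getR2 defined above; 'prop' matches the
# allelic proportion computed inside scatPltPanel).
marenRawCounts['prop'] = 1 - marenRawCounts['sum_tester'] / (marenRawCounts['sum_line'] + marenRawCounts['sum_tester'])
r2byFus = marenRawCounts.groupby('fusion_id').apply(lambda g: getR2(g, 'cis_line', 'prop'))
r2byFus.describe()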
# Plot F10005_SI
FUSION='F10005_SI'
dfFus = marenRawCounts[marenRawCounts['fusion_id'] == FUSION].copy()
dfFus['prop'] = 1 - dfFus['sum_tester'] / (dfFus['sum_line'] + dfFus['sum_tester'])
# Generate 3 panel plot
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
fig.suptitle(FUSION, fontsize=14, fontweight='bold')
for n, mdf in dfFus.groupby('mating_status'):
# Plot the cis-line effects x proportion by fusion
scatPlt(mdf, x='cis_line', y='prop', ax=axes[0], c='color', cmap=CMAP, marker=SHAPES[n], title='cis-line', xlab='cis-line', ylab='prop')
# Plot the trans-line effects x proportion by fusion
scatPlt(mdf, x='trans_line', y='prop', ax=axes[1], c='color', cmap=CMAP, marker=SHAPES[n], title='trans-line', xlab='trans-line', ylab='prop', diag='neg')
# Plot the Tester effects x proportion by fusion
scatPlt(mdf, x='cis_tester', y='prop', ax=axes[2], c='color', cmap=CMAP, marker=SHAPES[n], title='Tester', xlab='cis-tester', ylab='prop', diag=None)
fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_{}_effects_v_prop.png'.format(FUSION))
plt.savefig(fname, bbox_inches='tight')
print("Saved figure to: " + fname)
plt.close()
Explanation: Plot cis- and trans-effects vs Allelic Proportion for Specific Exonic Regions
End of explanation
# Plot F10317_SI
FUSION='F10317_SI'
dfFus = marenRawCounts[marenRawCounts['fusion_id'] == FUSION].copy()
dfFus['prop'] = 1 - dfFus['sum_tester'] / (dfFus['sum_line'] + dfFus['sum_tester'])
# Generate 3 panel plot
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
fig.suptitle(FUSION, fontsize=14, fontweight='bold')
for n, mdf in dfFus.groupby('mating_status'):
# Plot the cis-line effects x proportion by fusion
scatPlt(mdf, x='cis_line', y='prop', ax=axes[0], c='color', cmap=CMAP, marker=SHAPES[n], title='cis-line', xlab='cis-line', ylab='prop')
# Plot the trans-line effects x proportion by fusion
scatPlt(mdf, x='trans_line', y='prop', ax=axes[1], c='color', cmap=CMAP, marker=SHAPES[n], title='trans-line', xlab='trans-line', ylab='prop', diag='neg')
# Plot the Tester effects x proportion by fusion
scatPlt(mdf, x='cis_tester', y='prop', ax=axes[2], c='color', cmap=CMAP, marker=SHAPES[n], title='Tester', xlab='cis-tester', ylab='prop', diag=None)
fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_{}_effects_v_prop.png'.format(FUSION))
plt.savefig(fname, bbox_inches='tight')
print("Saved figure to: " + fname)
plt.close()
Explanation: F10317_SI
This fusion has weaker cis-line effects, but its trans-line effects look more linear.
End of explanation
# Plot F10482_SI
FUSION='F10482_SI'
dfFus = marenRawCounts[marenRawCounts['fusion_id'] == FUSION].copy()
dfFus['prop'] = 1 - dfFus['sum_tester'] / (dfFus['sum_line'] + dfFus['sum_tester'])
# Generate 3 panel plot
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
fig.suptitle(FUSION, fontsize=14, fontweight='bold')
for n, mdf in dfFus.groupby('mating_status'):
# Plot the cis-line effects x proportion by fusion
scatPlt(mdf, x='cis_line', y='prop', ax=axes[0], c='color', cmap=CMAP, marker=SHAPES[n], title='cis-line', xlab='cis-line', ylab='prop')
# Plot the trans-line effects x proportion by fusion
scatPlt(mdf, x='trans_line', y='prop', ax=axes[1], c='color', cmap=CMAP, marker=SHAPES[n], title='trans-line', xlab='trans-line', ylab='prop', diag='neg')
# Plot the Tester effects x proportion by fusion
scatPlt(mdf, x='cis_tester', y='prop', ax=axes[2], c='color', cmap=CMAP, marker=SHAPES[n], title='Tester', xlab='cis-tester', ylab='prop', diag=None)
fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_{}_effects_v_prop.png'.format(FUSION))
plt.savefig(fname, bbox_inches='tight')
print("Saved figure to: " + fname)
plt.close()
meanByMsLine = marenRawCounts[['mean_apn', 'cis_line', 'mating_status', 'line']].groupby(['mating_status', 'line']).mean()
meanByMsLine.columns
meanByMsLine.plot(kind='scatter', x='mean_apn', y='cis_line')
def cisAPN(df, fusion, value='cis_line', xcutoff='>=150', ycutoff='<=-180'):
"""Plot effects vs mean apn"""
# Pull out fusion of interest
dfSub = marenRawCounts[marenRawCounts['fusion_id'] == fusion]
# Make scatter plot
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
dfSub.plot(kind='scatter', x='mean_apn', y='cis_line', ax=ax, title=fusion)
# Annotate outliers
# Apply the cutoff strings (e.g. '>=150', '<=-180') with DataFrame.eval
filt = dfSub.loc[dfSub.eval(value + ycutoff) | dfSub.eval('mean_apn' + xcutoff), ['line', 'mating_status', 'mean_apn', 'cis_line']]
for row in filt.values:
line, ms, apn, cis = row
ax.annotate(line + '_' + ms, xy=(apn, cis))
fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_{}_{}_v_meanApn.png'.format(fusion, value))
plt.savefig(fname, bbox_inches='tight')
marenRawCounts.columns
mated = marenRawCounts[marenRawCounts['mating_status'] == 'M']
genos = set(mated['line'].tolist())
mated.head()
grp = mated.groupby('line')
sub = grp.get_group('r324')
fig, ax = plt.subplots(1, 1, figsize=(10, 6), dpi=300)
ax.axvline(0, c='k', lw=1)
sub['cis_line'].plot(kind='kde', xlim=(-600, 600), ax=ax, label='all', style='b-', lw=3)
sub.ix[sub['flag_AI_combined'] == 1, 'cis_line'].plot(kind='kde', xlim=(-600, 600), ax=ax, label='AI', style='r--')
sub.ix[sub['flag_AI_combined'] == 0, 'cis_line'].plot(kind='kde', xlim=(-600, 600), ax=ax, label='no AI', style='m-.')
plt.legend()
plt.title('r324')
Explanation: F10482_SI
This fusion has weaker cis-line effects, but its trans-line effects look more linear.
End of explanation |
8,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implement estimators of large-scale sparse Gaussian densities
by Soumyajit De (email
Step1: First, to keep the notion of Krylov subspace, we view the matrix as a linear operator that applies on a vector, resulting a new vector. We use <a href="http
Step2: Next, we use <a href="http
Step3: <p>This corresponds to averaging over 13 source vectors rather than one (but has much lower variance as using 13 Gaussian source vectors). A comparison between the convergence behavior of using probing sampler and Gaussian sampler is presented later.</p>
<p>Then we define <a href="http
Step4: Finally, we use the <a href="http
Step5: To verify the accuracy of the estimate, we compute exact log-determinant of A using Cholesky factorization using <a href="http
Step6: <h2>Statistics</h2>
We use a smaller sparse-matrix, <a href="http
Step7: <h2>A motivational example - likelihood of the Ozone dataset</h2>
<p>In <a href="http
Step8: <h2>Useful components</h2>
<p>As a part of the implementation of log-determinant estimator, a number of classes have been developed, which may come useful for several other occassions as well.
<h3>1. <a href="http
Step9: <h3>2. <a href="http
Step10: <h4><a href="http
Step11: <h4><a href="http
Step12: Apart from iterative solvers, a few more triangular solvers are added.
<h4><a href="http
Step13: <h4><a href="http | Python Code:
%matplotlib inline
from scipy.sparse import eye
from scipy.io import mmread
from matplotlib import pyplot as plt
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
matFile=os.path.join(SHOGUN_DATA_DIR, 'logdet/apache2.mtx.gz')
M = mmread(matFile)
rows = M.shape[0]
cols = M.shape[1]
A = M + eye(rows, cols) * 10000.0
plt.title("A")
plt.spy(A, precision = 1e-2, marker = '.', markersize = 0.01)
plt.show()
Explanation: Implement estimators of large-scale sparse Gaussian densities
by Soumyajit De (email: heavensdevil6909@gmail.com, soumyajitde@cse.iitb.ac.in. Github: <a href="https://github.com/lambday">lambday</a>)<br/> Many many thanks to my mentor Heiko Strathmann, Sergey Lisitsyn, Sören Sonnenburg, Viktor Gal
This notebook illustrates large-scale sparse Gaussian density likelihood estimation. It first introduces the reader to the mathematical background and then shows how one can do the estimation with Shogun on a number of real-world data sets.
<h2>Theoretical introduction</h2>
<p><i>Multivariate Gaussian distributions</i>, i.e. some random vector $\mathbf{x}\in\mathbb{R}^n$ having probability density function
$$p(\mathbf{x}|\boldsymbol\mu, \boldsymbol\Sigma)=(2\pi)^{-n/2}\text{det}(\boldsymbol\Sigma)^{-1/2} \exp\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol\mu)^{T}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\right)$$
$\boldsymbol\mu$ being the mean vector and $\boldsymbol\Sigma$ being the covariance matrix, arise on numerous occasions involving large datasets. Computing the <i>log-likelihood</i> in these requires computation of the log-determinant of the covariance matrix
$$\mathcal{L}(\mathbf{x}|\boldsymbol\mu,\boldsymbol\Sigma)=-\frac{n}{2}\log(2\pi)-\frac{1}{2}\log(\text{det}(\boldsymbol\Sigma))-\frac{1}{2}(\mathbf{x}-\boldsymbol\mu)^{T}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)$$
The covariance matrix and its inverse are symmetric positive definite (spd) and are often sparse, e.g. due to conditional independence properties of Gaussian Markov Random Fields (GMRF). Therefore they can be stored efficiently even for large dimension $n$.</p>
<p>The usual technique for computing the log-determinant term in the likelihood expression relies on <i><a href="http://en.wikipedia.org/wiki/Cholesky_factorization">Cholesky factorization</a></i> of the matrix, i.e. $\boldsymbol\Sigma=\mathbf{LL}^{T}$, ($\mathbf{L}$ is the lower triangular Cholesky factor) and then using the diagonal entries of the factor to compute $\log(\text{det}(\boldsymbol\Sigma))=2\sum_{i=1}^{n}\log(\mathbf{L}_{ii})$. However, for sparse matrices, as covariance matrices usually are, the Cholesky factors often suffer from <i>fill-in</i> phenomena - they turn out to be not so sparse themselves. Therefore, for large dimensions this technique becomes infeasible because of a massive memory requirement for storing all these irrelevant non-diagonal co-efficients of the factor. While ordering techniques have been developed to permute the rows and columns beforehand in order to reduce fill-in, e.g. <i><a href="http://en.wikipedia.org/wiki/Minimum_degree_algorithm">approximate minimum degree</a></i> (AMD) reordering, these techniques depend largely on the sparsity pattern and are therefore not guaranteed to give a better result.</p>
<p>Recent research shows that using a number of techniques from complex analysis, numerical linear algebra and greedy graph coloring, we can, however, approximate the log-determinant up to an arbitrary precision [<a href="http://link.springer.com/article/10.1007%2Fs11222-012-9368-y">Aune et. al., 2012</a>]. The main trick lies within the observation that we can write $\log(\text{det}(\boldsymbol\Sigma))$ as $\text{trace}(\log(\boldsymbol\Sigma))$, where $\log(\boldsymbol\Sigma)$ is the matrix-logarithm. Computing the log-determinant then requires extracting the trace of the matrix-logarithm as
$$\text{trace}(\log(\boldsymbol\Sigma))=\sum_{j=1}^{n}\mathbf{e}^{T}_{j}\log(\boldsymbol\Sigma)\mathbf{e}_{j}$$
where each $\mathbf{e}_{j}$ is a unit basis vector having a 1 in its $j^{\text{th}}$ position while rest are zeros and we assume that we can compute $\log(\boldsymbol\Sigma)\mathbf{e}_{j}$ (explained later). For large dimension $n$, this approach is still costly, so one needs to rely on sampling the trace. For example, using stochastic vectors we can obtain a <i><a href="http://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo estimator</a></i> for the trace -
$$\text{trace}(\log(\boldsymbol\Sigma))=\mathbb{E}_{\mathbf{v}}(\mathbf{v}^{T}\log(\boldsymbol\Sigma)\mathbf{v})\approx \sum_{j=1}^{k}\mathbf{s}^{T}_{j}\log(\boldsymbol\Sigma)\mathbf{s}_{j}$$
where the source vectors ($\mathbf{s}_{j}$) have zero mean and unit variance (e.g. $\mathbf{s}_{j}\sim\mathcal{N}(\mathbf{0}, \mathbf{I}), \forall j\in[1\cdots k]$). But since this is a Monte Carlo method, we need many many samples to get sufficiently accurate approximation. However, by a method suggested in Aune et. al., we can reduce the number of samples required drastically by using <i>probing-vectors</i> that are obtained from <a href="http://en.wikipedia.org/wiki/Graph_coloring">coloring of the adjacency graph</a> represented by the power of the sparse-matrix, $\boldsymbol\Sigma^{p}$, i.e. we can obtain -
$$\mathbb{E}_{\mathbf{v}}(\mathbf{v}^{T}\log(\boldsymbol\Sigma)\mathbf{v})\approx \sum_{j=1}^{m}\mathbf{w}^{T}_{j}\log(\boldsymbol\Sigma)\mathbf{w}_{j}$$
with $m\ll n$, where $m$ is the number of colors used in the graph coloring. For a particular color $j$, the probing vector $\mathbb{w}_{j}$ is obtained by filling with $+1$ or $-1$ uniformly randomly for entries corresponding to nodes of the graph colored with $j$, keeping the rest of the entries as zeros. Since the matrix is sparse, the number of colors used is usually very small compared to the dimension $n$, promising the advantage of this approach.</p>
<p>There are two main issues in this technique. First, computing $\boldsymbol\Sigma^{p}$ is computationally costly, but experiments show that directly applying a <i>d-distance</i> coloring algorithm on the sparse matrix itself also results in a pretty good approximation. Second, computing the exact matrix-logarithm is often infeasible because its is not guaranteed to be sparse. Aune et. al. suggested that we can rely on rational approximation of the matrix-logarithm times vector using an approach described in <a href="http://eprints.ma.man.ac.uk/1136/01/covered/MIMS_ep2007_103.pdf">Hale et. al [2008]</a>, i.e. writing $\log(\boldsymbol\Sigma)\mathbf{w}_{j}$ in our desired expression using <i><a href="http://en.wikipedia.org/wiki/Cauchy's_integral_formula">Cauchy's integral formula</a></i> as -
$$\log(\boldsymbol\Sigma)\mathbf{w}_{j}=\frac{1}{2\pi i}\oint_{\Gamma}\log(z)(z\mathbf{I}-\boldsymbol\Sigma)^{-1}\mathbf{w}_{j}dz\approx \frac{-8K(\lambda_{m}\lambda_{M})^{\frac{1}{4}}}{k\pi N} \boldsymbol\Sigma\Im\left(-\sum_{l=1}^{N}\alpha_{l}(\boldsymbol\Sigma-\sigma_{l}\mathbf{I})^{-1}\mathbf{w}_{j}\right)$$
$K$, $k \in \mathbb{R}$, $\alpha_{l}$, $\sigma_{l} \in \mathbb{C}$ are coming from <i><a href="http://en.wikipedia.org/wiki/Jacobi_elliptic_functions">Jacobi elliptic functions</a></i>, $\lambda_{m}$ and $\lambda_{M}$ are the minimum/maximum eigenvalues of $\boldsymbol\Sigma$ (they have to be real-positive), respectively, $N$ is the number of contour points in the quadrature rule of the above integral and $\Im(\mathbf{x})$ represents the imaginary part of $\mathbf{x}\in\mathbb{C}^{n}$.</p>
<p>The problem then finally boils down to solving the shifted family of linear systems $(\boldsymbol\Sigma-\sigma_{l}\mathbf{I})\mathbf{x}_{j}=\mathbf{w}_{j}$. Since $\boldsymbol\Sigma$ is sparse, matrix-vector products are not very costly and therefore these systems can be solved with a low memory requirement using <i>Krylov subspace iterative solvers</i> like <i><a href="http://en.wikipedia.org/wiki/Conjugate_gradient_method">Conjugate Gradient</a></i> (CG). Since the shifted matrices have complex entries along their diagonal, the appropriate method to choose is <i>Conjugate Orthogonal Conjugate Gradient</i> (COCG) [<a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=106415&tag=1">H.A. van der Vorst et. al., 1990.</a>]. Alternatively, these systems can be solved at once using the <i>CG-M</i> [<a href="http://arxiv.org/abs/hep-lat/9612014">Jegerlehner, 1996.</a>] solver, which solves $(\mathbf{A}+\sigma\mathbf{I})\mathbf{x}=\mathbf{b}$ for all values of $\sigma$ using as many matrix-vector products in the CG iterations as are required to solve one single shifted system. This algorithm shows reliable convergence behavior for systems with reasonable condition number.</p>
<p>One interesting property of this approach is that once the graph coloring information and shifts/weights are known, all the computation components - solving the linear systems and computing the final vector-vector products - are independently computable. Therefore, the computation can be sped up using parallel computation of these. To use this, a computation framework for Shogun has been developed, and the whole log-det computation works on top of it.</p>
<h2>An example of using this approach in Shogun</h2>
<p>We demonstrate the usage of this technique to estimate the log-determinant of a real-valued spd sparse matrix with dimension $715,176\times 715,176$ with $4,817,870$ non-zero entries, <a href="http://www.cise.ufl.edu/research/sparse/matrices/GHS_psdef/apache2.html">apache2</a>, which is obtained from <a href="http://www.cise.ufl.edu/research/sparse/matrices/">The University of Florida Sparse Matrix Collection</a>. Cholesky factorization with AMD for this sparse matrix gives rise to factors with $353,843,716$ non-zero entries (from source). We use the CG-M solver to solve the shifted systems. Since the original matrix is badly conditioned, here we added a ridge along its diagonal to reduce the condition number so that the CG-M solver converges within reasonable time. Please note that for a high condition number, the iteration limit has to be set very high.
End of explanation
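# Added illustrative sketch (plain NumPy/SciPy, not part of the original
# notebook): a Hutchinson-style Monte Carlo estimate of
# log(det(C)) = trace(log(C)) on a tiny dense spd matrix, using Rademacher
# source vectors. scipy.linalg.logm is only feasible because C is tiny.
import numpy as np
from scipy.linalg import logm
rng = np.random.RandomState(0)
C = np.diag([1., 2., 3.]) + 0.1 * np.ones((3, 3))
Lg = np.real(logm(C))
samples = [s.dot(Lg).dot(s) for s in rng.choice([-1., 1.], size=(5000, 3))]
print('MC estimate:', np.mean(samples), 'exact:', np.linalg.slogdet(C)[1])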
from shogun import RealSparseMatrixOperator, LanczosEigenSolver
op = RealSparseMatrixOperator(A.tocsc())
# Lanczos iterative Eigensolver to compute the min/max Eigenvalues which is required to compute the shifts
eigen_solver = LanczosEigenSolver(op)
# we set the iteration limit high to compute the eigenvalues more accurately, default iteration limit is 1000
eigen_solver.set_max_iteration_limit(2000)
# computing the eigenvalues
eigen_solver.compute()
print('Minimum Eigenvalue:', eigen_solver.get_min_eigenvalue())
print('Maximum Eigenvalue:', eigen_solver.get_max_eigenvalue())
Explanation: First, to keep the notion of Krylov subspace, we view the matrix as a linear operator that applies on a vector, resulting in a new vector. We use <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1SparseMatrixOperator.html">RealSparseMatrixOperator</a>, which is suitable for this example. All the solvers work with <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LinearOperator.html">LinearOperator</a> type objects. For computing the eigenvalues, we use the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LanczosEigenSolver.html">LanczosEigenSolver</a> class. Although computation of the Eigenvalues is done internally within the log-determinant estimator itself (see below), here we explicitly precompute them.
End of explanation
# We can specify the power of the sparse-matrix that is to be used for coloring, default values will apply a
# 2-distance greedy graph coloring algorithm on the sparse-matrix itself. Matrix-power, if specified, is computed in O(lg p)
from shogun import ProbingSampler
trace_sampler = ProbingSampler(op)
# apply the graph coloring algorithm and generate the number of colors, i.e. number of trace samples
trace_sampler.precompute()
print('Number of colors used:', trace_sampler.get_num_samples())
Explanation: Next, we use <a href="http://www.shogun-toolbox.org/doc/en/latest/ProbingSampler_8h_source.html">ProbingSampler</a> class which uses an external library <a href="http://www.cscapes.org/coloringpage/">ColPack</a>. Again, the number of colors used is precomputed for demonstration purpose, although computed internally inside the log-determinant estimator.
End of explanation
from shogun import CGMShiftedFamilySolver, LogRationalApproximationCGM
cgm = CGMShiftedFamilySolver()
# setting the iteration limit (set this to higher value for higher condition number)
cgm.set_iteration_limit(100)
# accuracy determines the number of contour points in the rational approximation (i.e. number of shifts in the systems)
accuracy = 1E-15
# we create a operator-log-function using the sparse matrix operator that uses CG-M to solve the shifted systems
op_func = LogRationalApproximationCGM(op, eigen_solver, cgm, accuracy)
op_func.precompute()
print('Number of shifts:', op_func.get_num_shifts())
Explanation: <p>This corresponds to averaging over 13 source vectors rather than one (but has much lower variance as using 13 Gaussian source vectors). A comparison between the convergence behavior of using probing sampler and Gaussian sampler is presented later.</p>
<p>Then we define the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLogRationalApproximationCGM.html">LogRationalApproximationCGM</a> operator function class, which internally uses the Eigensolver to compute the Eigenvalues, uses <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CJacobiEllipticFunctions.html">JacobiEllipticFunctions</a> to compute the complex shifts, weights and the constant multiplier in the rational approximation expression, takes the probing vector generated by the trace sampler and then uses the CG-M solver (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMShiftedFamilySolver.html">CGMShiftedFamilySolver</a>) to solve the shifted systems. Precomputing is not necessary here either.</p>
End of explanation
import numpy as np
from shogun import LogDetEstimator
# number of log-det samples (use a higher number to get better estimates)
# (this is 5 times number of colors estimate in practice, so usually 1 probing estimate is enough)
num_samples = 5
log_det_estimator = LogDetEstimator(trace_sampler, op_func)
estimates = log_det_estimator.sample(num_samples)
estimated_logdet = np.mean(estimates)
print('Estimated log(det(A)):', estimated_logdet)
Explanation: Finally, we use the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LogDetEstimator.html">LogDetEstimator</a> class to sample the log-determinant of the matrix.
End of explanation
# the following method requires massive amount of memory, for demonstration purpose
# the following code is commented out and direct value obtained from running it once is used
# from shogun import Statistics
# actual_logdet = Statistics.log_det(A)
actual_logdet = 7120357.73878
print('Actual log(det(A)):', actual_logdet)
plt.hist(estimates)
plt.plot([actual_logdet, actual_logdet], [0,len(estimates)], linewidth=3)
plt.show()
Explanation: To verify the accuracy of the estimate, we compute exact log-determinant of A using Cholesky factorization using <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Statistics.html#a9931a4ea72310b239efdc05503442525">Statistics::log_det</a> method.
End of explanation
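# Added quick check: relative error of the sampled estimate against the
# precomputed exact value.
print('Relative error:', abs(estimated_logdet - actual_logdet) / actual_logdet)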
from scipy.sparse import csc_matrix
from scipy.sparse import identity
m = mmread(os.path.join(SHOGUN_DATA_DIR, 'logdet/west0479.mtx'))
# computing a spd with added ridge
B = csc_matrix(m.transpose() * m + identity(m.shape[0]) * 1000.0)
fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(1,2,1)
ax.set_title('B')
ax.spy(B, precision = 1e-5, marker = '.', markersize = 2.0)
ax = fig.add_subplot(1,2,2)
ax.set_title('lower Cholesky factor')
dense_matrix = B.todense()
L = np.linalg.cholesky(dense_matrix)
ax.spy(csc_matrix(L), precision = 1e-5, marker = '.', markersize = 2.0)
plt.show()
op = RealSparseMatrixOperator(B)
eigen_solver = LanczosEigenSolver(op)
# computing log-det estimates using probing sampler
probing_sampler = ProbingSampler(op)
cgm.set_iteration_limit(500)
op_func = LogRationalApproximationCGM(op, eigen_solver, cgm, 1E-5)
log_det_estimator = LogDetEstimator(probing_sampler, op_func)
num_probing_estimates = 100
probing_estimates = log_det_estimator.sample(num_probing_estimates)
# computing log-det estimates using Gaussian sampler
from shogun import NormalSampler, Statistics
num_colors = probing_sampler.get_num_samples()
normal_sampler = NormalSampler(op.get_dimension())
log_det_estimator = LogDetEstimator(normal_sampler, op_func)
num_normal_estimates = num_probing_estimates * num_colors
normal_estimates = log_det_estimator.sample(num_normal_estimates)
# average in groups of n_effective_samples
effective_estimates_normal = np.zeros(num_probing_estimates)
for i in range(num_probing_estimates):
idx = i * num_colors
effective_estimates_normal[i] = np.mean(normal_estimates[idx:(idx + num_colors)])
actual_logdet = Statistics.log_det(B)
print('Actual log(det(B)):', actual_logdet)
print('Estimated log(det(B)) using probing sampler:', np.mean(probing_estimates))
print('Estimated log(det(B)) using Gaussian sampler:', np.mean(effective_estimates_normal))
print('Variance using probing sampler:', np.var(probing_estimates))
print('Variance using Gaussian sampler:', np.var(effective_estimates_normal))
fig = plt.figure(figsize=(15, 4))
ax = fig.add_subplot(1,3,1)
ax.set_title('Probing sampler')
ax.plot(np.cumsum(probing_estimates)/(np.arange(len(probing_estimates))+1))
ax.plot([0,len(probing_estimates)], [actual_logdet, actual_logdet])
ax.legend(["Probing", "True"])
ax = fig.add_subplot(1,3,2)
ax.set_title('Gaussian sampler')
ax.plot(np.cumsum(effective_estimates_normal)/(np.arange(len(effective_estimates_normal))+1))
ax.plot([0,len(probing_estimates)], [actual_logdet, actual_logdet])
ax.legend(["Gaussian", "True"])
ax = fig.add_subplot(1,3,3)
ax.hist(probing_estimates)
ax.hist(effective_estimates_normal)
ax.plot([actual_logdet, actual_logdet], [0,len(probing_estimates)], linewidth=3)
plt.show()
Explanation: <h2>Statistics</h2>
We use a smaller sparse-matrix, <a href="http://www.cise.ufl.edu/research/sparse/matrices/HB/west0479.html">'west0479'</a> in this section to demonstrate the benefits of using probing vectors over standard Gaussian vectors to sample the trace of matrix-logarithm. In the following we can easily observe the fill-in phenomena described earlier. Again, a ridge has been added to reduce the runtime for demonstration purpose.
End of explanation
from scipy.io import loadmat
def get_Q_y_A(kappa):
# read the ozone data and create the matrix Q
ozone = loadmat(os.path.join(SHOGUN_DATA_DIR, 'logdet/ozone_data.mat'))
GiCG = ozone["GiCG"]
G = ozone["G"]
C0 = ozone["C0"]
kappa = 13.1
Q = GiCG + 2 * (kappa ** 2) * G + (kappa ** 4) * C0
# also, added a ridge here
Q = Q + eye(Q.shape[0], Q.shape[1]) * 10000.0
plt.spy(Q, precision = 1e-5, marker = '.', markersize = 1.0)
plt.show()
# read y and A
y = ozone["y_ozone"]
A = ozone["A"]
return Q, y, A
def log_det(A):
op = RealSparseMatrixOperator(A)
eigen_solver = LanczosEigenSolver(op)
probing_sampler = ProbingSampler(op)
cgm = CGMShiftedFamilySolver()
cgm.set_iteration_limit(100)
op_func = LogRationalApproximationCGM(op, eigen_solver, cgm, 1E-5)
log_det_estimator = LogDetEstimator(probing_sampler, op_func)
num_estimates = 1
return np.mean(log_det_estimator.sample(num_estimates))
def log_likelihood(tau, kappa):
Q, y, A = get_Q_y_A(kappa)
n = len(y);
AtA = A.T.dot(A)
M = Q + tau * AtA;
    # Computing log-determinants
logdet1 = log_det(Q)
logdet2 = log_det(M)
first = 0.5 * logdet1 + 0.5 * n * np.log(tau) - 0.5 * logdet2
# computing the rest of the likelihood
second_a = -0.5 * tau * (y.T.dot(y))
second_b = np.array(A.T.dot(y))
from scipy.sparse.linalg import spsolve
second_b = spsolve(M, second_b)
second_b = A.dot(second_b)
second_b = y.T.dot(second_b)
second_b = 0.5 * (tau ** 2) * second_b
log_det_part = first
quadratic_part = second_a + second_b
const_part = -0.5 * n * np.log(2 * np.pi)
    log_marginal_lik = const_part + log_det_part + quadratic_part
    return log_marginal_lik
L = log_likelihood(1.0, 15.0)
print('Log-likelihood estimate:', L)
Explanation: <h2>A motivational example - likelihood of the Ozone dataset</h2>
<p>In <a href="http://arxiv.org/abs/1306.4032">Lyne et. al. (2013)</a>, an interesting scenario is discussed where the log-likelihood of a model involving a large spatial dataset is considered. The data, collected by a satellite, consists of $N=173,405$ ozone measurements around the globe. The data is modelled in a three-stage hierarchical way -
$$y_{i}|\mathbf{x},\kappa,\tau\sim\mathcal{N}(\mathbf{Ax},\tau^{-1}\mathbf{I})$$
$$\mathbf{x}|\kappa\sim\mathcal{N}(\mathbf{0}, \mathbf{Q}(\kappa))$$
$$\kappa\sim\log_{2}\mathcal{N}(0, 100), \tau\sim\log_{2}\mathcal{N}(0, 100)$$
Where the precision matrix, $\mathbf{Q}$, of a Matern SPDE model, defined on a fixed triangulation of the globe, is sparse, and the parameter $\kappa$ controls the range at which correlations in the field are effectively zero (see Girolami et. al. for details). The log-likelihood estimate of the posterior using this model is
$$2\mathcal{L}=2\log \pi(\mathbf{y}|\kappa,\tau)=C+\log(\text{det}(\mathbf{Q}(\kappa)))+N\log(\tau)-\log(\text{det}(\mathbf{Q}(\kappa)+\tau \mathbf{A}^{T}\mathbf{A}))-\tau\mathbf{y}^{T}\mathbf{y}+\tau^{2}\mathbf{y}^{T}\mathbf{A}(\mathbf{Q}(\kappa)+\tau\mathbf{A}^{T}\mathbf{A})^{-1}\mathbf{A}^{T}\mathbf{y}$$
In the expression, we have two terms involving log-determinant of large sparse matrices. The rational approximation approach described in the previous section can readily be applicable to estimate the log-likelihood. The following computation shows the usage of Shogun's log-determinant estimator for estimating this likelihood (code has been adapted from an open source library, <a href="https://github.com/karlnapf/ozone-roulette.git">ozone-roulette</a>, written by Heiko Strathmann, one of the authors of the original paper).
<b>Please note that we again added a ridge along the diagonal for faster execution of this example. Since the original matrix is badly conditioned, one needs to set the iteration limits very high for both the Eigen solver and the linear solver in the absence of preconditioning.</b>
End of explanation
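# Added sketch: scan the likelihood over a tiny grid of tau values, reusing
# log_likelihood defined above (each call re-estimates two log-determinants,
# so this is slow).
for tau in [0.5, 1.0, 2.0]:
    print('tau = %.1f -> log-likelihood = %.2f' % (tau, log_likelihood(tau, 15.0)))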
from shogun import RealSparseMatrixOperator, ComplexDenseMatrixOperator
dim = 5
np.random.seed(10)
# create a random valued sparse matrix linear operator
A = csc_matrix(np.random.randn(dim, dim))
op = RealSparseMatrixOperator(A)
# creating a random vector
np.random.seed(1)
b = np.array(np.random.randn(dim))
v = op.apply(b)
print('A.apply(b)=',v)
# create a dense matrix linear operator
B = np.array(np.random.randn(dim, dim)).astype(complex)
op = ComplexDenseMatrixOperator(B)
print('Dimension:', op.get_dimension())
Explanation: <h2>Useful components</h2>
<p>As a part of the implementation of the log-determinant estimator, a number of classes have been developed, which may come in useful on several other occasions as well.
<h3>1. <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LinearOperator.html">Linear Operators</a></h3>
All the linear solvers and Eigen solvers work with linear operators. Both real valued and complex valued operators are supported for dense/sparse matrix linear operators.
End of explanation
from scipy.sparse import csc_matrix
from scipy.sparse import identity
from shogun import ConjugateGradientSolver
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = csc_matrix(np.random.randn(dim, dim))
a = m.transpose() * m + csc_matrix(np.identity(dim))
Q = RealSparseMatrixOperator(a)
# creating a random vector
y = np.array(np.random.randn(dim))
# solve the system Qx=y
# the argument is set as True to gather convergence statistics (default is False)
cg = ConjugateGradientSolver(True)
cg.set_iteration_limit(20)
x = cg.solve(Q,y)
print('x:',x)
# verifying the result
print('y:', y)
print('Qx:', Q.apply(x))
residuals = cg.get_residuals()
plt.plot(residuals)
plt.show()
Explanation: <h3>2. <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LinearSolver.html">Linear Solvers</a></h3>
<p>Conjugate Gradient based iterative solvers, which construct the Krylov subspace in their iterations by computing matrix-vector products, are most useful for solving sparse linear systems. Here is an overview of the CG based solvers that are currently available in Shogun.</p>
<h4> <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CConjugateGradientSolver.html">Conjugate Gradient Solver</a></h4>
This solver solves for system $\mathbf{Qx}=\mathbf{y}$, where $\mathbf{Q}$ is real-valued spd linear operator (e.g. dense/sparse matrix operator), and $\mathbf{y}$ is real vector.
End of explanation
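# Added cross-check (uses scipy, not part of the original notebook): scipy's
# CG on the same system should agree with the Shogun solution above.
from scipy.sparse.linalg import cg as scipy_cg
x_scipy, info = scipy_cg(a, y)
print('scipy x:', x_scipy, '(converged)' if info == 0 else '(not converged)')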
from shogun import ComplexSparseMatrixOperator
from shogun import ConjugateOrthogonalCGSolver
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = csc_matrix(np.random.randn(dim, dim))
a = m.transpose() * m + csc_matrix(np.identity(dim))
a = a.astype(complex)
# adding a complex entry along the diagonal
for i in range(0, dim):
a[i,i] += complex(np.random.randn(), np.random.randn())
Q = ComplexSparseMatrixOperator(a)
z = np.array(np.random.randn(dim))
# solve for the system Qx=z
cocg = ConjugateOrthogonalCGSolver(True)
cocg.set_iteration_limit(20)
x = cocg.solve(Q, z)
print('x:',x)
# verifying the result
print('z:',z)
print('Qx:',np.real(Q.apply(x)))
residuals = cocg.get_residuals()
plt.plot(residuals)
plt.show()
Explanation: <h4><a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1ConjugateOrthogonalCGSolver.html">Conjugate Orthogonal CG Solver</a></h4>
Solves for systems $\mathbf{Qx}=\mathbf{z}$, where $\mathbf{Q}$ is symmetric but non-Hermitian (i.e. having complex entries in its diagonal) and $\mathbf{z}$ is real valued vector.
End of explanation
from shogun import CGMShiftedFamilySolver
cgm = CGMShiftedFamilySolver()
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = csc_matrix(np.random.randn(dim, dim))
a = m.transpose() * m + csc_matrix(np.identity(dim))
Q = RealSparseMatrixOperator(a)
# creating a random vector
v = np.array(np.random.randn(dim))
# number of shifts (will be equal to the number of contour points)
num_shifts = 3;
# generating some random shifts
shifts = []
for i in range(0, num_shifts):
shifts.append(complex(np.random.randn(), np.random.randn()))
sigma = np.array(shifts)
print('Shifts:', sigma)
# generating some random weights
weights = []
for i in range(0, num_shifts):
weights.append(complex(np.random.randn(), np.random.randn()))
alpha = np.array(weights)
print('Weights:',alpha)
# solve for the systems
cgm = CGMShiftedFamilySolver(True)
cgm.set_iteration_limit(20)
x = cgm.solve_shifted_weighted(Q, v, sigma, alpha)
print('x:',x)
residuals = cgm.get_residuals()
plt.plot(residuals)
plt.show()
# verifying the result with cocg
x_s = np.array([0+0j] * dim)
for i in range(0, num_shifts):
a_s = a.astype(complex)
for j in range(0, dim):
# moving the complex shift inside the operator
a_s[j,j] += sigma[i]
Q_s = ComplexSparseMatrixOperator(a_s)
# multiplying the result with weight
x_s += alpha[i] * cocg.solve(Q_s, v)
print('x\':', x_s)
Explanation: <h4><a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMShiftedFamilySolver.html">CG-M Shifted Family Solver</a></h4>
Solves for systems with real-valued spd matrices with complex shifts. For using it with log-det, an option to specify the weight of each solution is also there. The solve_shifted_weighted method returns $\sum_{l}\alpha_{l}\mathbf{x}_{l}$ where $\mathbf{x}_{l}=(\mathbf{A}+\sigma_{l}\mathbf{I})^{-1}\mathbf{y}$, $\sigma,\alpha\in\mathbb{C}$, $\mathbf{y}\in\mathbb{R}^{n}$.
End of explanation
from shogun import DirectSparseLinearSolver
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = csc_matrix(np.random.randn(dim, dim))
a = m.transpose() * m + csc_matrix(np.identity(dim))
Q = RealSparseMatrixOperator(a)
# creating a random vector
y = np.array(np.random.randn(dim))
# solve the system Qx=y
chol = DirectSparseLinearSolver()
x = chol.solve(Q,y)
print('x:',x)
# verifying the result
print('y:', y)
print('Qx:', Q.apply(x))
Explanation: Apart from iterative solvers, a few more triangular solvers are added.
<h4><a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDirectSparseLinearSolver.html">Direct Sparse Linear Solver</a></h4>
This uses sparse Cholesky to solve for linear systems $\mathbf{Qx}=\mathbf{y}$, where $\mathbf{Q}$ is real-valued spd linear operator (e.g. dense/sparse matrix operator), and $\mathbf{y}$ is real vector.
End of explanation
from shogun import DirectLinearSolverComplex
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = np.array(np.random.randn(dim, dim))
a = m.transpose() * m + csc_matrix(np.identity(dim))
a = a.astype(complex)
# adding a complex entry along the diagonal
for i in range(0, dim):
a[i,i] += complex(np.random.randn(), np.random.randn())
Q = ComplexDenseMatrixOperator(a)
z = np.array(np.random.randn(dim))
# solve for the system Qx=z
solver = DirectLinearSolverComplex()
x = solver.solve(Q, z)
print('x:',x)
# verifying the result
print('z:',z)
print('Qx:',np.real(Q.apply(x)))
Explanation: <h4><a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDirectLinearSolverComplex.html">Direct Linear Solver for Complex</a></h4>
This solves linear systems $\mathbf{Qx}=\mathbf{y}$, where $\mathbf{Q}$ is complex-valued dense matrix linear operator, and $\mathbf{y}$ is real vector.
End of explanation |
8,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Evaluation using mir_eval
mir_eval (documentation, paper) is a Python library containing evaluation functions for a variety of common audio and music processing tasks.
mir_eval was primarily created by Colin Raffel. This notebook was created by Brian McFee and edited by Steve Tjoa.
Why mir_eval?
Most tasks in MIR are complicated. Evaluation is also complicated!
Any given task has many ways to evaluate a system. There is no one right away.
For example, here are issues to consider when choosing an evaluation method
Step1: mir_eval finds the largest feasible set of matches using the Hopcroft-Karp algorithm.
Example
Step2: Example
Step3: Hidden benefits
Input validation! Many errors can be traced back to ill-formatted data.
Standardized behavior, full test coverage.
More than metrics
mir_eval has tools for display and sonification.
Step4: Common plots
Step5: Example | Python Code:
import numpy
import librosa
import mir_eval
y, sr = librosa.load('audio/simple_piano.wav')
# Estimate onsets.
est_onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')
est_onsets
# Load the reference annotation.
ref_onsets = numpy.array([0.1, 0.21, 0.3])
mir_eval.onset.evaluate(ref_onsets, est_onsets)
Explanation: ← Back to Index
Evaluation using mir_eval
mir_eval (documentation, paper) is a Python library containing evaluation functions for a variety of common audio and music processing tasks.
mir_eval was primarily created by Colin Raffel. This notebook was created by Brian McFee and edited by Steve Tjoa.
Why mir_eval?
Most tasks in MIR are complicated. Evaluation is also complicated!
Any given task has many ways to evaluate a system. There is no one right way.
For example, here are issues to consider when choosing an evaluation method:
event matching
time padding
tolerance windows
vocabulary alignment
mir_eval tasks and submodules
onset, tempo, beat
chord, key
melody, multipitch
transcription
segment, hierarchy, pattern
separation (like bss_eval in Matlab)
Install mir_eval
pip install mir_eval
If that doesn't work:
pip install --no-deps mir_eval
Example: Onset Detection
End of explanation
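# The default onset matching tolerance is +/- 50 ms; the window keyword lets
# us re-run the same evaluation with a stricter 20 ms tolerance:
mir_eval.onset.evaluate(ref_onsets, est_onsets, window=0.02)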
est_tempo, est_beats = librosa.beat.beat_track(y=y, sr=sr)
est_beats = librosa.frames_to_time(est_beats, sr=sr)
est_beats
# Load the reference annotation.
ref_beats = numpy.array([0.53, 1.02])
mir_eval.beat.evaluate(ref_beats, est_beats)
Explanation: mir_eval finds the largest feasible set of matches using the Hopcroft-Karp algorithm.
Example: Beat Tracking
End of explanation
# The original cell was left unfinished; a minimal sketch with hypothetical
# reference/estimated annotations (intervals in seconds, mir_eval chord labels):
ref_int = numpy.array([[0., 1.], [1., 2.]])
est_int = numpy.array([[0., 1.], [1., 2.]])
mir_eval.chord.evaluate(ref_int, ['C:maj', 'G:maj'], est_int, ['C:maj', 'G:min'])
Explanation: Example: Chord Estimation
End of explanation
import librosa.display
import mir_eval.display
Explanation: Hidden benefits
Input validation! Many errors can be traced back to ill-formatted data.
Standardized behavior, full test coverage.
More than metrics
mir_eval has tools for display and sonification.
End of explanation
# 'S' was assumed by the original cell; compute a (log-)mel spectrogram here
# so the plot is reproducible.
S = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr), ref=numpy.max)
librosa.display.specshow(S, x_axis='time', y_axis='mel')
mir_eval.display.events(ref_beats, color='w', alpha=0.8, linewidth=3)
mir_eval.display.events(est_beats, color='c', alpha=0.8, linewidth=3, linestyle='--')
Explanation: Common plots: events, labeled_intervals
pitch, multipitch, piano_roll
segments, hierarchy,
separation
Example: Events
End of explanation
y_harm, y_perc = librosa.effects.hpss(y, margin=8)
import matplotlib.pyplot as plt
plt.figure(figsize=(12, 4))
mir_eval.display.separation([y_perc, y_harm], sr, labels=['percussive', 'harmonic'])
plt.legend()
# The sonification call below was left unfinished in the original; a minimal
# working sketch instead renders the reference beats as clicks over the audio:
from IPython.display import Audio
clicks = mir_eval.sonify.clicks(ref_beats, sr, length=len(y))
Audio(data=y + clicks, rate=sr)
Explanation: Example: Labeled Intervals
Example: Source Separation
End of explanation |
8,083 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
how to implement the mean squared loss in TensorFlow
| Python Code::
import tensorflow as tf
from tensorflow.keras.losses import MeanSquaredError
y_true = [1., 0.]
y_pred = [2., 3.]
mse_loss = MeanSquaredError()
loss = mse_loss(y_true, y_pred).numpy()
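# The result is the mean of (1-2)^2 and (0-3)^2, i.e. 5.0
print(loss)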
|
8,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
用Python 3开发网络爬虫
By Terrill Yang (Github
Step1: 请求响应的类型是 requests.models.Response
Step2: 状态码是200
Step3: 响应体的类型是字符串str
Step4: 可以得到响应的 HTTP HEADER
Step5: 响应体内容
Step6: Cookies的类型是RequestsCookieJar
Step7: 我们可以发现,使用了 requests.get(url) 方法就成功实现了一个 GET 请求。这倒不算什么,更方便的在于其他的请求类型依然可以用一句话来完成。
下面介绍一个很棒的demo网站,httpbin(1)
Step8: 其实这只是 requests 库的冰山一角,更多的还在后面呢。
GET请求
HTTP 中最常见的请求之一就是 GET 请求,我们首先来详细了解下利用 requests 来构建 GET 请求的方法以及相关属性方法操作。
首先让我们来构建一个最简单的GET请求,请求 httpbin.org/get ,它会判断如果你是 GET 请求的话,会返回响应的请求信息。
Step9: 可以发现我们成功发起了get请求,请求的链接和头信息都有相应的返回。
那么 GET 请求,如果要附加额外的信息一般是怎样来添加?没错,那就是直接当做参数添加到url后面。
比如现在我想添加两个参数,名字name是germey,年龄age是22。构造这个请求链接是不是我们要直接写成
r = requests.get("http
Step10: 通过返回信息我们可以判断,请求的链接自动被构造成了 http
Step11: 但注意,如果返回结果不是Json格式,便会出现解析错误,抛出 json.decoder.JSONDecodeError 的异常。
抓取网页
如上的请求链接返回的是Json形式的字符串,那么如果我们请求普通的网页,那么肯定就能获得相应的内容了。
下面我们以知乎-发现页面为例来体验一下:
Step12: 如上代码,我们请求了知乎-发现页面 https
Step13: 在这里打印了 response 的两个属性,一个是 text,另一个是 content 。前两行便是r.text的结果,最后一行是r.content的结果。
可以注意到,前者出现了乱码,后者结果前面带有一个b,代表这是bytes类型的数据。由于图片是二进制数据,所以前者在打印时转化为str类型,也就是图片直接转化为字符串,理所当然会出现乱码。
两个属性有什么区别?前者返回的是字符串类型,如果返回结果是文本文件,那么用这种方式直接获取其内容即可。如果返回结果是图片、音频、视频等文件,requests会为我们自动解码成bytes类型,即获取字节流数据。
进一步地,我们可以将刚才提取到的图片保存下来。
Step14: 在这里用了open() 函数,第一个参数是文件名称,第二个参数代表以二进制写的形式打开,可以向文件里写入二进制数据,然后保存。
运行结束之后,可以发现在data文件夹中出现了名为favicon.ico的图标。
添加头信息
如urllib.request一样,我们也可以通过headers参数来传递头信息。你可以在headers这个数组中任意添加其他的头信息。
比如上面的知乎的例子,如果不传递头信息,就不能正常请求:
Step15: Basic POST requests
Earlier we covered the most basic GET request. Another fairly common request type is POST, which, like simulating a form submission, sends data to a URL.
Making a POST request with requests is just as simple. The example below requests httpbin.org/post, which detects a POST request and echoes the relevant request information.
Step16: We can see that the response was received successfully; the form part of the result is the submitted data, which proves the POST request was sent successfully.
Responses
After sending a request, what we get back is naturally a response. In the examples above we used text and content to get the response body, but many other attributes and methods expose additional information, such as the status code, response headers, and cookies.
Let's get a feel for it with an example:
Step17: Here we print the type and value of the response status code (status_code), response headers (headers), cookies, request URL, and request history. Note that headers and cookies are both special data structures; opening a browser shows the same response header information.
Status codes
The status code is commonly used to tell whether a request succeeded, and requests also provides a built-in status-code lookup object, requests.codes. For example, you can use if r.status_code == requests.codes.ok to check whether a request succeeded.
Step18: Here, by checking that the returned code matches the built-in success code, we make sure the request got a normal response and print a success message; otherwise the program exits.
Surely ok cannot be the only condition code; are there others? Definitely. The return codes and their corresponding lookup names are listed below:
```bash
# Informational.
100
Step19: The site returns a response containing a files field while form is empty, which shows that uploaded files are identified separately under files.
Handling cookies
Earlier we used urllib, and getting it to handle cookies is genuinely cumbersome. With requests, getting and sending cookies takes only one step. Let's first get a feel for retrieving cookies with an example:
Step20: As you can see, we first print the cookies and find they are a RequestsCookieJar. We then use the items() method to convert them into a list of tuples and iterate to print each cookie's name and value.
Of course, we can also use cookies directly to keep a login session alive. ~~For example, taking Zhihu, we can maintain the login state directly via cookies.~~
~~First log in to Zhihu and copy the Cookie from the request headers.~~
(Zhihu has since changed its login flow; logging in with cookies no longer works.)
Step21: Session persistence
In requests, using methods like requests.get() or requests.post() directly does simulate page requests, but each call effectively belongs to a different session, as if you opened different pages in two separate browsers.
Imagine this scenario: your first request logs in to a site with requests.post(), and for the second you want to fetch your own profile after the successful login, so you call requests.get() again. In effect, this is like opening two browsers: two completely unrelated sessions. Can you successfully fetch the profile? Of course not.
Some will say: can't I just set the same cookies on both requests? You can, but isn't that tedious, doing it every single time? I couldn't stand it. The real solution is to maintain the same session, the equivalent of opening a new browser tab rather than a whole new browser. But if I don't want to set cookies each time, what then? This is where the new tool, Session, comes in. With it we can conveniently maintain a session without worrying about cookies; it handles them for us automatically.
In the example below we request a test URL, http
Step22: It doesn't work. Now let's remember the Session we just mentioned and try this instead:
Step23: Success! So with Session we can simulate the same session without worrying about cookies, typically used to carry out follow-up actions after simulating a successful login.
Sessions are used very widely in practice, e.g., to simulate opening different pages of the same site within a single browser.
SSL certificate verification
requests provides certificate verification. When a request uses https, it checks the SSL certificate, which can be controlled with the verify parameter. (In fact, if you omit it the default is True, and verification happens automatically.)
Now take 12306 ( https
Step24: An error called SSLError is raised: a certificate verification error.
So if we request an https site whose certificate fails verification, this error is reported. How do we avoid it? Very simple: just set the verify parameter to False.
Change the code as follows:
Step25: However, a warning is reported, suggesting that we should specify a certificate.
~~Of course you can choose to suppress the warning:~~ (the warning is still reported)
Step26: But this is not the best approach; what is the point of an https request if certificate verification is skipped entirely?
Of course you can also specify a local certificate as the client certificate: either a single file (containing key and certificate) or a tuple of two file paths.
python
response = requests.get('https
Step27: In this way we set the timeout to 1 second; if no response arrives within 1 second, an exception is raised.
A request actually has two phases: connect and read. The timeout value set above is used for both the connect and read timeouts. To specify them separately, pass a tuple:
Step28: To wait forever, set timeout to None directly, or simply leave it out, since the default is None. In that case, if the server is still running but responding very slowly, just wait patiently; it will never raise a timeout error.
Usage is as follows:
Step29: Or simply omit the parameter:
Step30: Authentication
When visiting a site, we may run into an authentication page like this:
For site verification like this, we can use the authentication support built into requests.
Step31: If the username and password are correct, authentication succeeds and the run returns 200; if authentication fails, a 401 status code is returned.
Of course, passing an HTTPBasicAuth instance for the parameter is rather verbose, so requests offers a shorthand: pass a tuple directly, and it will use the HTTPBasicAuth class for authentication by default.
So the code above can be shortened directly as follows: | Python Code:
import requests
r = requests.get('https://www.baidu.com/')
Explanation: Developing Web Crawlers with Python 3
By Terrill Yang (Github: https://github.com/yttty)
Adapted from the Zhihu column post "You Need These: A Curated Collection of Python 3.x Crawler Learning Resources".
This installment covers the basics of using requests.
Developing Web Crawlers with Python 3 - Chapter 04
Using the requests library
In the urllib library there is the urllib.request.urlopen(url) method, which in fact requests a web page with the GET method.
In requests, the corresponding method is requests.get(url); doesn't that read more clearly?
Let's get a feel for it with an example:
Import and request https://www.baidu.com/
End of explanation
print(type(r))
Explanation: The type of the request response is requests.models.Response
End of explanation
print(r.status_code)
Explanation: The status code is 200
End of explanation
print(type(r.text))
Explanation: The response body type is str
End of explanation
print(r.headers)
Explanation: We can get the response HTTP headers
End of explanation
print(r.text)
Explanation: The response body content
End of explanation
print(r.cookies)
Explanation: The type of the cookies is RequestsCookieJar
End of explanation
r = requests.post('http://httpbin.org/post')
print('----POST----\n', r.text)
r = requests.put('http://httpbin.org/put')
print('----PUT----\n', r.text)
r = requests.delete('http://httpbin.org/delete')
print('----DELETE----\n', r.text)
r = requests.head('http://httpbin.org/get')
print('----HEAD----\n', r.text)
r = requests.options('http://httpbin.org/get')
print('----OPTIONS----\n', r.text)
Explanation: As we can see, calling the requests.get(url) method successfully performs a GET request. That alone is nothing special; what is more convenient is that the other request types can likewise be done in a single line.
Next we introduce a great demo site, httpbin(1): HTTP Request & Response Service (http://httpbin.org). Its self-description goes roughly like this
```
Testing an HTTP Library can become difficult sometimes. RequestBin is fantastic for testing POST requests, but doesn't let you control the response. This exists to cover all kinds of HTTP scenarios. Additional endpoints are being considered.
All endpoint responses are JSON-encoded.
```
Simply put, you can send all kinds of HTTP requests to this site to test your request messages.
The following example gives a taste of how powerful the requests library is:
End of explanation
r = requests.get('http://httpbin.org/get')
print(r.text)
Explanation: This is really just the tip of the requests iceberg; there is much more to come.
GET requests
One of the most common HTTP requests is the GET request. Let's first look in detail at how to build GET requests with requests, along with the related attributes and methods.
First, let's build the simplest possible GET request to httpbin.org/get, which detects that the request is a GET and echoes back the request information.
End of explanation
data = {
'name': 'germey',
'age': 22
}
r = requests.get("http://httpbin.org/get", params=data)
print(r.text)
Explanation: We can see that the GET request succeeded, and the request URL and headers are echoed back accordingly.
So how do we usually attach extra information to a GET request? Right: append it directly to the URL as parameters.
For example, suppose I want to add two parameters: name is germey and age is 22. To construct this request URL, do we have to write it directly as
r = requests.get("http://httpbin.org/get?name=germey&age=22")
You could, but doesn't that feel rather clumsy? Data like this is usually stored in a dict, so how do we build the URL from it?
{"name": "germey", "age": 22}
Just as simple: use the params parameter.
Here is an example:
End of explanation
r = requests.get("http://httpbin.org/get")
print('type(r.text) : ', type(r.text), '\n')
print('r.json() : ', r.json(), '\n')
print('type(r.json()) : ', type(r.json()), '\n')
Explanation: From the returned information we can tell that the request URL was automatically built as http://httpbin.org/get?age=22&name=germey. Convenient, isn't it?
Also, the page's return type is actually str, but it is special: JSON formatted. So if we want to parse the result directly into a dict, we can simply call the json() method.
Let's try it with an example:
End of explanation
import requests
import re
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
}
r = requests.get("https://www.zhihu.com/explore", headers=headers)
pattern = re.compile('explore-feed.*?question_link.*?>(.*?)</a>', re.S)
titles = re.findall(pattern, r.text)
print(titles)
Explanation: Note, however, that if the response is not in JSON format, parsing fails and a json.decoder.JSONDecodeError exception is raised.
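A small sketch of guarding that call (here requesting a plain HTML page, which is not JSON; json.decoder.JSONDecodeError is a subclass of ValueError):
```python
r = requests.get("https://www.baidu.com")
try:
    data = r.json()
except ValueError:
    data = None
```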
Scraping web pages
The URL above returns a JSON-formatted string; if we request an ordinary web page instead, we can certainly retrieve its content as well.
Let's try it out with the Zhihu Explore page as an example:
End of explanation
r = requests.get("https://github.com/favicon.ico")
print(r.text)
print(r.content)
Explanation: In the code above we requested the Zhihu Explore page https://www.zhihu.com/explore, adding headers that include the User-Agent, i.e., the browser identification mentioned in the previous section. Without it, Zhihu blocks scraping. As you can see, we successfully extracted all the question titles.
Fetching binary data
In the example above we scraped a Zhihu page, which actually returns an HTML document. What should we do if we want to fetch images, audio, video, and other files?
As we all know, images, audio, and video files are essentially composed of binary data; it is only thanks to specific storage formats and corresponding parsers that we can view all this varied multimedia. So to fetch them, we need to get their binary bytes.
Let's take GitHub's site icon, the small icon shown on every browser tab, as an example:
End of explanation
with open('data/favicon.ico', 'wb') as f:
f.write(r.content)
f.close()
Explanation: Here we print two attributes of the response: one is text, the other is content. The first two lines are the result of r.text, and the last line is the result of r.content.
Notice that the former comes out garbled, while the latter is prefixed with a b, indicating bytes data. Since an image is binary data, the former is converted to str when printed (the image is turned directly into a string), so garbled output is to be expected.
What is the difference between the two attributes? The former returns a string; if the response is a text file, this is the way to get its content directly. If the response is an image, audio, video, or similar file, requests decodes it for us as bytes, i.e., the raw byte stream.
Going further, we can save the image we just fetched.
End of explanation
r = requests.get("https://www.zhihu.com/explore")
print(r.text)
Explanation: Here we use the open() function: the first argument is the file name, and the second means open in binary write mode, so we can write binary data to the file and then save it.
After it finishes running, an icon named favicon.ico appears in the data folder.
Adding headers
Just as with urllib.request, we can pass header information via the headers parameter. You can freely add any other header fields to this headers dictionary.
For example, in the Zhihu case above, the request fails if no headers are passed:
End of explanation
import requests
data = {'name': 'germey', 'age': '22'}
r = requests.post("http://httpbin.org/post", data=data)
print(r.text)
Explanation: Basic POST requests
Earlier we covered the most basic GET request. Another fairly common request type is POST, which, like simulating a form submission, sends data to a URL.
Making a POST request with requests is just as simple. The example below requests httpbin.org/post, which detects a POST request and echoes the relevant request information.
End of explanation
r = requests.get('http://www.jianshu.com')
print(type(r.status_code), r.status_code)
print(type(r.headers), r.headers)
print(type(r.cookies), r.cookies)
print(type(r.url), r.url)
print(type(r.history), r.history)
Explanation: We can see that the response was received successfully; the form part of the result is the submitted data, which proves the POST request was sent successfully.
Responses
After sending a request, what we get back is naturally a response. In the examples above we used text and content to get the response body, but many other attributes and methods expose additional information, such as the status code, response headers, and cookies.
Let's get a feel for it with an example:
End of explanation
r = requests.get('http://www.jianshu.com')
exit() if not r.status_code == requests.codes.ok else print('Request Successfully')
Explanation: Here we print the type and value of the response status code (status_code), response headers (headers), cookies, request URL, and request history. Note that headers and cookies are both special data structures; opening a browser shows the same response header information.
Status codes
The status code is commonly used to tell whether a request succeeded, and requests also provides a built-in status-code lookup object, requests.codes. For example, you can use if r.status_code == requests.codes.ok to check whether a request succeeded.
End of explanation
files = {'file': open('data/favicon.ico', 'rb')}
r = requests.post("http://httpbin.org/post", files=files)
print(r.text)
Explanation: Here, by checking that the returned code matches the built-in success code, we make sure the request got a normal response and print a success message; otherwise the program exits.
Surely ok cannot be the only condition code; are there others? Definitely. The return codes and their corresponding lookup names are listed below:
```bash
# Informational.
100: ('continue',),
101: ('switching_protocols',),
102: ('processing',),
103: ('checkpoint',),
122: ('uri_too_long', 'request_uri_too_long'),
200: ('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\o/', '✓'),
201: ('created',),
202: ('accepted',),
203: ('non_authoritative_info', 'non_authoritative_information'),
204: ('no_content',),
205: ('reset_content', 'reset'),
206: ('partial_content', 'partial'),
207: ('multi_status', 'multiple_status', 'multi_stati', 'multiple_stati'),
208: ('already_reported',),
226: ('im_used',),
# Redirection.
300: ('multiple_choices',),
301: ('moved_permanently', 'moved', '\\o-'),
302: ('found',),
303: ('see_other', 'other'),
304: ('not_modified',),
305: ('use_proxy',),
306: ('switch_proxy',),
307: ('temporary_redirect', 'temporary_moved', 'temporary'),
308: ('permanent_redirect',
'resume_incomplete', 'resume',), # These 2 to be removed in 3.0
# Client Error.
400: ('bad_request', 'bad'),
401: ('unauthorized',),
402: ('payment_required', 'payment'),
403: ('forbidden',),
404: ('not_found', '-o-'),
405: ('method_not_allowed', 'not_allowed'),
406: ('not_acceptable',),
407: ('proxy_authentication_required', 'proxy_auth', 'proxy_authentication'),
408: ('request_timeout', 'timeout'),
409: ('conflict',),
410: ('gone',),
411: ('length_required',),
412: ('precondition_failed', 'precondition'),
413: ('request_entity_too_large',),
414: ('request_uri_too_large',),
415: ('unsupported_media_type', 'unsupported_media', 'media_type'),
416: ('requested_range_not_satisfiable', 'requested_range', 'range_not_satisfiable'),
417: ('expectation_failed',),
418: ('im_a_teapot', 'teapot', 'i_am_a_teapot'),
421: ('misdirected_request',),
422: ('unprocessable_entity', 'unprocessable'),
423: ('locked',),
424: ('failed_dependency', 'dependency'),
425: ('unordered_collection', 'unordered'),
426: ('upgrade_required', 'upgrade'),
428: ('precondition_required', 'precondition'),
429: ('too_many_requests', 'too_many'),
431: ('header_fields_too_large', 'fields_too_large'),
444: ('no_response', 'none'),
449: ('retry_with', 'retry'),
450: ('blocked_by_windows_parental_controls', 'parental_controls'),
451: ('unavailable_for_legal_reasons', 'legal_reasons'),
499: ('client_closed_request',),
# Server Error.
500: ('internal_server_error', 'server_error', '/o\\', '✗'),
501: ('not_implemented',),
502: ('bad_gateway',),
503: ('service_unavailable', 'unavailable'),
504: ('gateway_timeout',),
505: ('http_version_not_supported', 'http_version'),
506: ('variant_also_negotiates',),
507: ('insufficient_storage',),
509: ('bandwidth_limit_exceeded', 'bandwidth'),
510: ('not_extended',),
511: ('network_authentication_required', 'network_auth', 'network_authentication'),
```
比如如果你想判断结果是不是404状态,你可以用requests.codes.not_found来比对。
响应头
如果想得到响应头信息,可以使用headers属性。它其实本质上也是一个字典形式。可以通过数组索引或者get()方法来获取某一条头信息内容。
比如获取Content-Type可以用r.headers['Content-Type'],也可以用r.headers.get('content-type')。
文件上传
我们知道reqeuests可以模拟提交一些数据,假如有的网站需要我们上传文件,我们同样可以利用它来上传,实现非常简单。
在上面一节中我们下载保存了一个文件叫做favicon.ico,这次我们用它为例来模拟文件上传的过程。
End of explanation
import requests
r = requests.get("https://www.baidu.com")
print(r.cookies)
for key, value in r.cookies.items():
print(key + '=' + value)
Explanation: The site returns a response containing a files field while form is empty, which shows that uploaded files are identified separately under files.
Handling cookies
Earlier we used urllib, and getting it to handle cookies is genuinely cumbersome. With requests, getting and sending cookies takes only one step. Let's first get a feel for retrieving cookies with an example:
End of explanation
headers = {
'Cookie': '_za=dc07d0bb-599c-46e9-8906-f6dd252910b4; d_c0="AIAABz1hvQmPTup3qtT92xkZN2UQoova_cc=|1460107885"; _zap=3c9b8860-2a38-4532-b476-c3617ff3fb0d; _ga=GA1.2.1945944829.1442545470; aliyungf_tc=AQAAANoM4BIxkAMAj67seLZyZFoQgALT; q_c1=a366104786da4623b23af9cd321e4d38|1484577173000|1468254315000; _xsrf=b9564d003dd4bdb7b6f9e6185b8a0b78; l_cap_id="MTZlMjVmODgwYTVlNGEyYzg1ZDBkNzhkMzNjYjMxYmI=|1484577173|1c7374494b0e1a7ef526ea93c7065b2a8b9736eb"; cap_id="MzU3OGUzZjYwYjIwNGNmNWFhMTk2OGU2NjRjOWE3MDk=|1484577173|b5723f8256e35bd66f86d1e070c3a3d4324c655a"; r_cap_id="YmQwYmEwNDIxMDYxNDBkZmI2NzAzNjI2ZjJkMzNmOGQ=|1484577175|01cabbd0c7a481a7e4a907ae60ba3e2c64d07ddc"; login="NWVhMDVlNzZjMzI0NGE3ZGIyMmExODFhNzEzNGVhNmY=|1484577185|45503ef9b8dc4c3468f776c202fbc7fe442f8521"; n_c=1; z_c0=Mi4wQUFDQXhFMGpBQUFBZ0FBSFBXRzlDUmNBQUFCaEFsVk5vV2FrV0FBQkU5RmpvbXljZE15a2FZNU8zQVRjS0g1Qm1n|1484577189|3dc63f1a6bd389c32162f4d901ebf10e6750dffc; nweb_qa=heifetz; __utma=51854390.1945944829.1442545470.1484577176.1484577176.1; __utmb=51854390.0.10.1484577176; __utmc=51854390; __utmz=51854390.1484577176.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmv=51854390.100-1|2=registration_date=20140101=1^3=entry_date=20140101=1',
'Host': 'www.zhihu.com',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36',
}
r = requests.get("http://www.zhihu.com", headers=headers)
print(r.text)
Explanation: As you can see, we first print the cookies and find they are a RequestsCookieJar. We then use the items() method to convert them into a list of tuples and iterate to print each cookie's name and value.
Of course, we can also use cookies directly to keep a login session alive. ~~For example, taking Zhihu, we can maintain the login state directly via cookies.~~
~~First log in to Zhihu and copy the Cookie from the request headers.~~
(Zhihu has since changed its login flow; logging in with cookies no longer works.)
End of explanation
requests.get('http://httpbin.org/cookies/set/number/123456789')
r = requests.get('http://httpbin.org/cookies')
print(r.text)
Explanation: Session persistence
In requests, using methods like requests.get() or requests.post() directly does simulate page requests, but each call effectively belongs to a different session, as if you opened different pages in two separate browsers.
Imagine this scenario: your first request logs in to a site with requests.post(), and for the second you want to fetch your own profile after the successful login, so you call requests.get() again. In effect, this is like opening two browsers: two completely unrelated sessions. Can you successfully fetch the profile? Of course not.
Some will say: can't I just set the same cookies on both requests? You can, but isn't that tedious, doing it every single time? I couldn't stand it. The real solution is to maintain the same session, the equivalent of opening a new browser tab rather than a whole new browser. But if I don't want to set cookies each time, what then? This is where the new tool, Session, comes in. With it we can conveniently maintain a session without worrying about cookies; it handles them for us automatically.
In the example below we request a test URL, http://httpbin.org/cookies/set/number/123456789, which sets a cookie named number with value 123456789; the follow-up URL http://httpbin.org/cookies returns the current cookies.
Do you think this retrieves the cookie we just set? The output is as follows:
End of explanation
s = requests.Session()
s.get('http://httpbin.org/cookies/set/number/123456789')
r = s.get('http://httpbin.org/cookies')
print(r.text)
Explanation: It doesn't work. Now let's remember the Session we just mentioned and try this instead:
End of explanation
import requests
response = requests.get('https://www.12306.cn')
print(response.status_code)
Explanation: Success! So with Session we can simulate the same session without worrying about cookies, typically used to carry out follow-up actions after simulating a successful login.
Sessions are used very widely in practice, e.g., to simulate opening different pages of the same site within a single browser.
SSL certificate verification
requests provides certificate verification. When a request uses https, it checks the SSL certificate, which can be controlled with the verify parameter. (In fact, if you omit it the default is True, and verification happens automatically.)
Now take 12306 (https://www.12306.cn) as an example of how this works. Visiting it at the moment, we see a certificate-problem page, as shown below:
Now let's test it with requests:
End of explanation
response = requests.get('https://www.12306.cn', verify=False)
print(response.status_code)
Explanation: An error called SSLError is raised: a certificate verification error.
So if we request an https site whose certificate fails verification, this error is reported. How do we avoid it? Very simple: just set the verify parameter to False.
Change the code as follows:
End of explanation
import requests
from requests.packages import urllib3
urllib3.disable_warnings()
response = requests.get('https://www.12306.cn', verify=False)
print(response.status_code)
Explanation: However, a warning is reported, suggesting that we should specify a certificate.
~~Of course you can choose to suppress the warning:~~ (the warning is still reported)
End of explanation
import requests
r = requests.get("https://www.taobao.com", timeout = 1)
print(r.status_code)
Explanation: But this is not the best approach; what is the point of an https request if certificate verification is skipped entirely?
Of course you can also specify a local certificate as the client certificate: either a single file (containing key and certificate) or a tuple of two file paths.
python
response = requests.get('https://www.12306.cn', cert=('/path/server.crt', '/path/key'))
print(response.status_code)
The code above is an example; you need the crt and key files and must point to their paths. Note that the local private key must be decrypted; an encrypted key is not supported.
Proxy settings
For some sites, a few test requests fetch content just fine, but once large-scale crawling begins, heavy and frequent requests may trigger login checks or CAPTCHAs, or even get the IP banned outright. To prevent this from happening, we need to set up a proxy, using the proxies parameter.
```python
import requests
proxies = {
"http": "http://10.10.1.10:3128",
"https": "http://10.10.1.10:1080",
}
requests.get("https://www.taobao.com", proxies=proxies)
```
Of course, running this example as-is may not work, since this proxy may be invalid; swap in your own working proxy to try it out.
If your proxy requires HTTP Basic Auth, use syntax like http://user:password@host/.
```python
import requests
proxies = {
"http": "http://user:password@10.10.1.10:3128/",
}
requests.get("https://www.taobao.com", proxies=proxies)
```
Timeout settings
When the local network is poor or the server is responding slowly or not at all, we might wait a very long time for a response, or in the end get none and hit an error. To guard against a server that cannot respond in time, we should set a timeout: if no response arrives within that time, an error is raised.
Timeouts are set with the timeout parameter; the time is measured from sending the request to the server returning a response.
Let's see it with an example:
End of explanation
r = requests.get('https://www.taobao.com', timeout=(5, 30))
print(r.status_code)
Explanation: In this way we set the timeout to 1 second; if no response arrives within 1 second, an exception is raised.
A request actually has two phases: connect and read. The timeout value set above is used for both the connect and read timeouts. To specify them separately, pass a tuple:
End of explanation
r = requests.get('https://www.taobao.com', timeout=None)
print(r.status_code)
Explanation: To wait forever, set timeout to None directly, or simply leave it out, since the default is None. In that case, if the server is still running but responding very slowly, just wait patiently; it will never raise a timeout error.
Usage is as follows:
End of explanation
r = requests.get('https://www.taobao.com')
print(r.status_code)
Explanation: Or simply omit the parameter:
End of explanation
import requests
from requests.auth import HTTPBasicAuth
r = requests.get('http://120.27.34.24:9001', auth=HTTPBasicAuth('user', '123'))
print(r.status_code)
Explanation: Authentication
When visiting a site, we may run into an authentication page like this:
For site verification like this, we can use the authentication support built into requests.
End of explanation
import requests
r = requests.get('http://120.27.34.24:9001', auth=('user', '123'))
print(r.status_code)
Explanation: If the username and password are correct, authentication succeeds and the run returns 200; if authentication fails, a 401 status code is returned.
Of course, passing an HTTPBasicAuth instance for the parameter is rather verbose, so requests offers a shorthand: pass a tuple directly, and it will use the HTTPBasicAuth class for authentication by default.
So the code above can be shortened directly as follows:
End of explanation |
8,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 3 - Multi Layer Perceptron with MNIST
This lab corresponds to Module 3 of the "Deep Learning Explained" course. We assume that you have successfully completed Lab 1 (Downloading the MNIST data).
In this lab, we train a multi-layer perceptron on MNIST data. This notebook provides the recipe using Python APIs.
Introduction
Problem
We will continue to work on the same problem of recognizing digits in MNIST data. The MNIST data comprises of hand-written digits with little background noise.
Step1: Goal
Step2: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
Step3: Data reading
There are different ways one can read data into CNTK. The easiest way is to load the data in memory using NumPy / SciPy / Pandas readers. However, this can be done only for small data sets. Since deep learning requires large amount of data we have chosen in this course to show how to leverage built-in distributed readers that can scale to terrabytes of data with little extra effort.
We are using the MNIST data you have downloaded using Lab 1 DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset.
In Lab 1, the data was downloaded and written to 2 CTF (CNTK Text Format) files, 1 for training, and 1 for testing. Each line of these text files takes the form
Step4: <a id='#Model Creation'></a>
Model Creation
Our multi-layer perceptron will be relatively simple with 2 hidden layers (num_hidden_layers). The number of nodes in the hidden layer being a parameter specified by hidden_layers_dim. The figure below illustrates the entire model we will use in this tutorial in the context of MNIST data.
If you are not familiar with the terms hidden_layer and number of hidden layers, please review the module 3 course videos.
Each Dense layer (as illustrated below) shows the input dimensions, output dimensions and activation function that layer uses. Specifically, the layer below shows
Step5: Network input and output
Step6: Multi-layer Perceptron setup
The code below is a direct translation of the model shown above.
Step7: z will be used to represent the output of a network.
We introduced sigmoid function in CNTK 102, in this tutorial you should try different activation functions in the hidden layer. You may choose to do this right away and take a peek into the performance later in the tutorial or run the preset tutorial and then choose to perform the suggested exploration.
Suggested Exploration
- Record the training error you get with sigmoid as the activation function
- Now change to relu as the activation function and see if you can improve your training error
Knowledge Check
Step8: Training
Below, we define the Loss function, which is used to guide weight changes during training.
As explained in the lectures, we use the softmax function to map the accumulated evidences or activations to a probability distribution over the classes (Details of the softmax function and other activation functions).
We minimize the cross-entropy between the label and predicted probability by the network.
Step9: Evaluation
Below, we define the Evaluation (or metric) function that is used to report a measurement of how well our model is performing.
For this problem, we choose the classification_error() function as our metric, which returns the average error over the associated samples (treating a match as "1", where the model's prediction matches the "ground truth" label, and a non-match as "0").
Step10: Configure training
The trainer strives to reduce the loss function by different optimization approaches, Stochastic Gradient Descent (sgd) being a basic one. Typically, one would start with random initialization of the model parameters. The sgd optimizer would calculate the loss or error between the predicted label and the corresponding ground-truth label and, using gradient descent, generate a new set of model parameters in a single iteration.
The aforementioned model parameter update using a single observation at a time is attractive since it does not require the entire data set (all observations) to be loaded in memory and also requires gradient computation over fewer datapoints, thus allowing for training on large data sets. However, the updates generated using a single observation sample at a time can vary wildly between iterations. An intermediate ground is to load a small set of observations and use an average of the loss or error from that set to update the model parameters. This subset is called a minibatch.
With minibatches we often sample observations from the larger training dataset. We repeat the process of model parameter updates using different combinations of training samples and over a period of time minimize the loss (and the error). When the incremental error rates are no longer changing significantly, or after a preset maximum number of minibatches to train, we claim that our model is trained.
One of the key parameters for optimization is called the learning_rate. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration. We will be covering more details in a later tutorial.
With this information, we are ready to create our trainer.
Step11: First let us create some helper functions that will be needed to visualize different functions associated with training.
Step12: <a id='#Run the trainer'></a>
Run the trainer
We are now ready to train our fully connected neural net. We want to decide what data we need to feed into the training engine.
In this example, each iteration of the optimizer will work on minibatch_size sized samples. We would like to train on all 60000 observations. Additionally we will make multiple passes through the data specified by the variable num_sweeps_to_train_with. With these parameters we can proceed with training our simple multi-layer perceptron network.
Step13: Let us plot the errors over the different training minibatches. Note that as we iterate the training loss decreases though we do see some intermediate bumps.
Step14: Evaluation / Testing
Now that we have trained the network, let us evaluate the trained network on the test data. This is done using trainer.test_minibatch.
Step15: Note, this error is very comparable to our training error indicating that our model has good "out of sample" error a.k.a. generalization error. This implies that our model can very effectively deal with previously unseen observations (during the training process). This is key to avoid the phenomenon of overfitting.
This is a huge reduction in error compared to multi-class LR (from Lab 02).
We have so far been dealing with aggregate measures of error. Let us now get the probabilities associated with individual data points. For each observation, the eval function returns the probability distribution across all the classes. The classifier is trained to recognize digits, hence has 10 classes. First let us route the network output through a softmax function. This maps the aggregated activations across the network to probabilities across the 10 classes.
Step16: Let us test a small minibatch sample from the test data.
Step17: As you can see above, our model is much better. Do you see any mismatches?
Let us visualize one of the test images and its associated label. Do they match?
Step18: Suggested Explorations
- Try exploring how the classifier behaves with different parameters - suggest changing the minibatch_size parameter from 25 to say 64 or 128. What happens to the error rate? How does the error compare to the logistic regression classifier?
- Try increasing the number of sweeps
- Can you change the network to reduce the training error rate? When do you see overfitting happening? | Python Code:
# Figure 1
Image(url= "http://3.bp.blogspot.com/_UpN7DfJA0j4/TJtUBWPk0SI/AAAAAAAAABY/oWPMtmqJn3k/s1600/mnist_originals.png", width=200, height=200)
Explanation: Lab 3 - Multi Layer Perceptron with MNIST
This lab corresponds to Module 3 of the "Deep Learning Explained" course. We assume that you have successfully completed Lab 1 (Downloading the MNIST data).
In this lab, we train a multi-layer perceptron on MNIST data. This notebook provides the recipe using Python APIs.
Introduction
Problem
We will continue to work on the same problem of recognizing digits in MNIST data. The MNIST data comprises hand-written digits with little background noise.
End of explanation
from __future__ import print_function # Use a function definition from future version (say 3.x from 2.7 interpreter)
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import cntk as C
%matplotlib inline
Explanation: Goal:
Our goal is to train a classifier that will identify the digits in the MNIST dataset. Additionally, we aspire to achieve lower error rate with Multi-layer perceptron compared to Multi-class logistic regression.
Approach:
There are 4 stages in this lab:
- Data reading: We will use the CNTK Text reader.
- Data preprocessing: Covered in part A (suggested extension section).
- Model creation: Multi-Layer Perceptron model.
- Train-Test-Predict: This is the same workflow introduced in the lectures
End of explanation
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
# Test for CNTK version
if not C.__version__ == "2.0":
raise Exception("this lab is designed to work with 2.0. Current Version: " + C.__version__)
# Ensure we always get the same amount of randomness
np.random.seed(0)
C.cntk_py.set_fixed_random_seed(1)
C.cntk_py.force_deterministic_algorithms()
# Define the data dimensions
input_dim = 784
num_output_classes = 10
Explanation: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
End of explanation
# Read a CTF formatted text (as mentioned above) using the CTF deserializer from a file
def create_reader(path, is_training, input_dim, num_label_classes):
return C.io.MinibatchSource(C.io.CTFDeserializer(path, C.io.StreamDefs(
labels = C.io.StreamDef(field='labels', shape=num_label_classes, is_sparse=False),
features = C.io.StreamDef(field='features', shape=input_dim, is_sparse=False)
)), randomize = is_training, max_sweeps = C.io.INFINITELY_REPEAT if is_training else 1)
# Ensure the training and test data is generated and available for this tutorial.
# We search in two locations in the toolkit for the cached MNIST data set.
data_found = False
for data_dir in [os.path.join("..", "Examples", "Image", "DataSets", "MNIST"),
os.path.join("data", "MNIST")]:
train_file = os.path.join(data_dir, "Train-28x28_cntk_text.txt")
test_file = os.path.join(data_dir, "Test-28x28_cntk_text.txt")
if os.path.isfile(train_file) and os.path.isfile(test_file):
data_found = True
break
if not data_found:
raise ValueError("Please generate the data by completing Lab1_MNIST_DataLoader")
print("Data directory is {0}".format(data_dir))
Explanation: Data reading
There are different ways one can read data into CNTK. The easiest way is to load the data in memory using NumPy / SciPy / Pandas readers. However, this can be done only for small data sets. Since deep learning requires large amounts of data, we have chosen in this course to show how to leverage built-in distributed readers that can scale to terabytes of data with little extra effort.
We are using the MNIST data you have downloaded using Lab 1 DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset.
In Lab 1, the data was downloaded and written to 2 CTF (CNTK Text Format) files, 1 for training, and 1 for testing. Each line of these text files takes the form:
|labels 0 0 0 1 0 0 0 0 0 0 |features 0 0 0 0 ...
(784 integers each representing a pixel)
We are going to use the image pixels corresponding the integer stream named "features". We define a create_reader function to read the training and test data using the CTF deserializer. The labels are 1-hot encoded. Refer to Lab 1 for data format visualizations.
End of explanation
num_hidden_layers = 2
hidden_layers_dim = 400
#hidden_layers_dim = 50
Explanation: <a id='#Model Creation'></a>
Model Creation
Our multi-layer perceptron will be relatively simple with 2 hidden layers (num_hidden_layers). The number of nodes in the hidden layer being a parameter specified by hidden_layers_dim. The figure below illustrates the entire model we will use in this tutorial in the context of MNIST data.
If you are not familiar with the terms hidden_layer and number of hidden layers, please review the module 3 course videos.
Each Dense layer (as illustrated below) shows the input dimensions, output dimensions and activation function that layer uses. Specifically, the layer below shows: input dimension = 784 (1 dimension for each input pixel), output dimension = 400 (number of hidden nodes, a parameter specified by the user) and activation function being relu.
In this model we have 2 dense layer called the hidden layers each with an activation function of relu. These are followed by the dense output layer with no activation.
The output dimension (a.k.a. number of hidden nodes) in the 2 hidden layers is set to 400. The number of hidden layers is 2.
The final output layer emits a vector of 10 values. Since we will be using softmax to normalize the output of the model we do not use an activation function in this layer. The softmax operation comes bundled with the loss function we will be using later in this tutorial.
End of explanation
input = C.input_variable(input_dim)
label = C.input_variable(num_output_classes)
Explanation: Network input and output:
- input variable (a key CNTK concept):
An input variable is a container in which we fill different observations in this case image pixels during model learning (a.k.a.training) and model evaluation (a.k.a. testing). Thus, the shape of the input must match the shape of the data that will be provided. For example, when data are images each of height 10 pixels and width 5 pixels, the input feature dimension will be 50 (representing the total number of image pixels). More on data and their dimensions to appear in separate tutorials.
Knowledge Check What is the input dimension of your chosen model? This is fundamental to our understanding of variables in a network or model representation in CNTK.
End of explanation
def create_model(features):
with C.layers.default_options(init = C.layers.glorot_uniform(), activation = C.ops.relu):
#with C.layers.default_options(init = C.layers.glorot_uniform(), activation = C.ops.sigmoid):
h = features
for _ in range(num_hidden_layers):
h = C.layers.Dense(hidden_layers_dim)(h)
r = C.layers.Dense(num_output_classes, activation = None)(h)
#r = C.layers.Dense(num_output_classes, activation = C.ops.sigmoid)(h)
return r
z = create_model(input)
Explanation: Multi-layer Perceptron setup
The code below is a direct translation of the model shown above.
End of explanation
# Scale the input to 0-1 range by dividing each pixel by 255.
z = create_model(input/255.0)
Explanation: z will be used to represent the output of a network.
We introduced sigmoid function in CNTK 102, in this tutorial you should try different activation functions in the hidden layer. You may choose to do this right away and take a peek into the performance later in the tutorial or run the preset tutorial and then choose to perform the suggested exploration.
Suggested Exploration
- Record the training error you get with sigmoid as the activation function
- Now change to relu as the activation function and see if you can improve your training error
Knowledge Check: Name some of the different supported activation functions. Which activation function gives the least training error?
End of explanation
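For the knowledge check above, a few of the activation functions exposed in cntk.ops (a partial list, from memory of the CNTK 2.0 API; any of them can be swapped into default_options(activation=...) in create_model):
# Some cntk.ops activations to try (not exhaustive):
# C.ops.relu, C.ops.sigmoid, C.ops.tanh, C.ops.softplus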
loss = C.cross_entropy_with_softmax(z, label)
Explanation: Training
Below, we define the Loss function, which is used to guide weight changes during training.
As explained in the lectures, we use the softmax function to map the accumulated evidences or activations to a probability distribution over the classes (Details of the softmax function and other activation functions).
We minimize the cross-entropy between the label and predicted probability by the network.
End of explanation
label_error = C.classification_error(z, label)
Explanation: Evaluation
Below, we define the Evaluation (or metric) function that is used to report a measurement of how well our model is performing.
For this problem, we choose the classification_error() function as our metric, which returns the average error over the associated samples (treating a match as "1", where the model's prediction matches the "ground truth" label, and a non-match as "0").
End of explanation
# Instantiate the trainer object to drive the model training
learning_rate = 0.2
lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, label_error), [learner])
Explanation: Configure training
The trainer strives to reduce the loss function by different optimization approaches, Stochastic Gradient Descent (sgd) being a basic one. Typically, one would start with random initialization of the model parameters. The sgd optimizer would calculate the loss or error between the predicted label and the corresponding ground-truth label and, using gradient descent, generate a new set of model parameters in a single iteration.
The aforementioned model parameter update using a single observation at a time is attractive since it does not require the entire data set (all observations) to be loaded in memory and also requires gradient computation over fewer datapoints, thus allowing for training on large data sets. However, the updates generated using a single observation sample at a time can vary wildly between iterations. An intermediate ground is to load a small set of observations and use an average of the loss or error from that set to update the model parameters. This subset is called a minibatch.
With minibatches we often sample observations from the larger training dataset. We repeat the process of model parameter updates using different combinations of training samples and over a period of time minimize the loss (and the error). When the incremental error rates are no longer changing significantly, or after a preset maximum number of minibatches to train, we claim that our model is trained.
One of the key parameters for optimization is called the learning_rate. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration. We will be covering more details in a later tutorial.
With this information, we are ready to create our trainer.
End of explanation
# Define a utility function to compute the moving average sum.
# A more efficient implementation is possible with np.cumsum() function
def moving_average(a, w=5):
if len(a) < w:
return a[:] # Need to send a copy of the array
return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]
# Defines a utility that prints the training progress
def print_training_progress(trainer, mb, frequency, verbose=1):
training_loss = "NA"
eval_error = "NA"
if mb%frequency == 0:
training_loss = trainer.previous_minibatch_loss_average
eval_error = trainer.previous_minibatch_evaluation_average
if verbose:
print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}%".format(mb, training_loss, eval_error*100))
return mb, training_loss, eval_error
# Initialize the parameters for the trainer
minibatch_size = 64
#minibatch_size = 512
num_samples_per_sweep = 60000
num_sweeps_to_train_with = 10
num_minibatches_to_train = (num_samples_per_sweep * num_sweeps_to_train_with) / minibatch_size
Explanation: First let us create some helper functions that will be needed to visualize different functions associated with training.
End of explanation
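As the comment inside moving_average notes, np.cumsum allows a vectorized version. A sketch of one (my own, written to match the original's behavior of passing values through for the first w points):
def moving_average_cumsum(a, w=5):
    if len(a) < w:
        return a[:]
    # Padded cumulative sum: c[k] == sum(a[:k])
    c = np.cumsum(np.insert(np.asarray(a, dtype=float), 0, 0.0))
    tail = (c[w:len(a)] - c[:len(a)-w]) / w
    return list(a[:w]) + tail.tolist()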
# Create the reader to training data set
reader_train = create_reader(train_file, True, input_dim, num_output_classes)
# Map the data streams to the input and labels.
input_map = {
label : reader_train.streams.labels,
input : reader_train.streams.features
}
# Run the trainer on and perform model training
training_progress_output_freq = 500
plotdata = {"batchsize":[], "loss":[], "error":[]}
for i in range(0, int(num_minibatches_to_train)):
# Read a mini batch from the training data file
data = reader_train.next_minibatch(minibatch_size, input_map = input_map)
trainer.train_minibatch(data)
batchsize, loss, error = print_training_progress(trainer, i, training_progress_output_freq, verbose=1)
if not (loss == "NA" or error == "NA"):
plotdata["batchsize"].append(batchsize)
plotdata["loss"].append(loss)
plotdata["error"].append(error)
Explanation: <a id='#Run the trainer'></a>
Run the trainer
We are now ready to train our fully connected neural net. We want to decide what data we need to feed into the training engine.
In this example, each iteration of the optimizer will work on minibatch_size sized samples. We would like to train on all 60000 observations. Additionally we will make multiple passes through the data specified by the variable num_sweeps_to_train_with. With these parameters we can proceed with training our simple multi-layer perceptron network.
End of explanation
# Compute the moving average loss to smooth out the noise in SGD
plotdata["avgloss"] = moving_average(plotdata["loss"])
plotdata["avgerror"] = moving_average(plotdata["error"])
# Plot the training loss and the training error
import matplotlib.pyplot as plt
plt.figure(1)
plt.subplot(211)
plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--')
plt.xlabel('Minibatch number')
plt.ylabel('Loss')
plt.title('Minibatch run vs. Training loss')
plt.show()
plt.subplot(212)
plt.plot(plotdata["batchsize"], plotdata["avgerror"], 'r--')
plt.xlabel('Minibatch number')
plt.ylabel('Label Prediction Error')
plt.title('Minibatch run vs. Label Prediction Error')
plt.show()
Explanation: Let us plot the errors over the different training minibatches. Note that as we iterate the training loss decreases though we do see some intermediate bumps.
End of explanation
# Read the training data
reader_test = create_reader(test_file, False, input_dim, num_output_classes)
test_input_map = {
label : reader_test.streams.labels,
input : reader_test.streams.features,
}
# Test data for trained model
test_minibatch_size = 512
num_samples = 10000
num_minibatches_to_test = num_samples // test_minibatch_size
test_result = 0.0
for i in range(num_minibatches_to_test):
# We are loading test data in batches specified by test_minibatch_size
# Each data point in the minibatch is a MNIST digit image of 784 dimensions
# with one pixel per dimension that we will encode / decode with the
# trained model.
data = reader_test.next_minibatch(test_minibatch_size,
input_map = test_input_map)
eval_error = trainer.test_minibatch(data)
test_result = test_result + eval_error
# Average of evaluation errors of all test minibatches
print("Average test error: {0:.2f}%".format(test_result*100 / num_minibatches_to_test))
Explanation: Evaluation / Testing
Now that we have trained the network, let us evaluate the trained network on the test data. This is done using trainer.test_minibatch.
End of explanation
out = C.softmax(z)
Explanation: Note, this error is very comparable to our training error indicating that our model has good "out of sample" error a.k.a. generalization error. This implies that our model can very effectively deal with previously unseen observations (during the training process). This is key to avoid the phenomenon of overfitting.
This is a huge reduction in error compared to multi-class LR (from Lab 02).
We have so far been dealing with aggregate measures of error. Let us now get the probabilities associated with individual data points. For each observation, the eval function returns the probability distribution across all the classes. The classifier is trained to recognize digits, hence has 10 classes. First let us route the network output through a softmax function. This maps the aggregated activations across the network to probabilities across the 10 classes.
End of explanation
# Read the data for evaluation
reader_eval = create_reader(test_file, False, input_dim, num_output_classes)
eval_minibatch_size = 25
eval_input_map = {input: reader_eval.streams.features}
data = reader_test.next_minibatch(eval_minibatch_size, input_map = test_input_map)
img_label = data[label].asarray()
img_data = data[input].asarray()
predicted_label_prob = [out.eval(img_data[i]) for i in range(len(img_data))]
predicted_label_prob[0], np.argmax(predicted_label_prob[0])
# Find the index with the maximum value for both predicted as well as the ground truth
pred = [np.argmax(predicted_label_prob[i]) for i in range(len(predicted_label_prob))]
gtlabel = [np.argmax(img_label[i]) for i in range(len(img_label))]
print("Label :", gtlabel[:25])
print("Predicted:", pred)
Explanation: Let us test a small minibatch sample from the test data.
End of explanation
# Plot a random image
sample_number = 5
plt.imshow(img_data[sample_number].reshape(28,28), cmap="gray_r")
plt.axis('off')
img_gt, img_pred = gtlabel[sample_number], pred[sample_number]
print("Image Label: ", img_pred)
Explanation: As you can see above, our model is much better. Do you see any mismatches?
Let us visualize one of the test images and its associated label. Do they match?
End of explanation
final_test_img = Image(url= "MysteryNumberD.bmp")
final_test_img
type(img_data[i])
out.eval(img_data[i])
import numpy
from PIL import Image
img = Image.open("MysteryNumberD.bmp").convert("F")
img
imgarr = numpy.asarray(img)
imgarr[0]
imgarr[27]
imgarr.flatten()
out.eval(imgarr.flatten())
np.argmax(out.eval(imgarr.flatten()))
Explanation: Suggested Explorations
- Try exploring how the classifier behaves with different parameters - suggest changing the minibatch_size parameter from 25 to say 64 or 128. What happens to the error rate? How does the error compare to the logistic regression classifier?
- Try increasing the number of sweeps
- Can you change the network to reduce the training error rate? When do you see overfitting happening?
End of explanation |
8,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
%run ../stack/stack.py
%load ../stack/stack.py
class MyStack(Stack):
def sort(self):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Sort a stack. You can use another stack as a buffer.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
When sorted, should the largest element be at the top or bottom?
Top
Can you have duplicate values like 5, 5?
Yes
Can we assume we already have a stack class that can be used for this problem?
Yes
Test Cases
Empty stack -> None
One element stack
Two or more element stack (general case)
Already sorted stack
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
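If you want a hint before opening the solution notebook, here is one possible approach (a sketch, assuming the Stack base class from stack.py provides push, pop, peek, and is_empty): repeatedly pop from the source stack and insert each item into its sorted position in a buffer stack.
class MyStackSketch(Stack):

    def sort(self):
        buff = MyStackSketch()
        while not self.is_empty():
            item = self.pop()
            # Move larger items back to the source until `item` fits on top
            while not buff.is_empty() and buff.peek() > item:
                self.push(buff.pop())
            buff.push(item)
        return buff  # largest element ends up on top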
# %load test_sort_stack.py
from random import randint
from nose.tools import assert_equal
class TestSortStack(object):
def get_sorted_stack(self, numbers):
stack = MyStack()
for x in numbers:
stack.push(x)
sorted_stack = stack.sort()
return sorted_stack
def test_sort_stack(self):
print('Test: Empty stack')
sorted_stack = self.get_sorted_stack([])
assert_equal(sorted_stack.pop(), None)
print('Test: One element stack')
sorted_stack = self.get_sorted_stack([1])
assert_equal(sorted_stack.pop(), 1)
print('Test: Two or more element stack (general case)')
num_items = 10
numbers = [randint(0, 10) for x in range(num_items)]
sorted_stack = self.get_sorted_stack(numbers)
sorted_numbers = []
for _ in range(num_items):
sorted_numbers.append(sorted_stack.pop())
assert_equal(sorted_numbers, sorted(numbers, reverse=True))
print('Success: test_sort_stack')
def main():
test = TestSortStack()
test.test_sort_stack()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
8,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
mocsy examples using mocsy from IOOS channel
12/12 and 11/9/2015. Emilio Mayorga. Reproduce and extend Examples 1-3 from the mocsy Python documentation page.
- mocsy Python source documentation
Step1: Example 1, Simple scalar variables
Functions return 1-dimensional numpy arrays. Scalar inputs return length-1 arrays.
For DATA input
Step2: Compute the CONSTANTS with the same scalar input for S, T, and P (as above) but change to the newer options published since the best-practices guide
Step3: For MODEL input
Step4: Example 2, Simple arrays (numpy)
Also demonstrate import into Pandas DataFrame and plotting.
Step5: Import into Pandas DataFrame
Step6: Create simple, easily generated figure using Pandas plot
Step7: Create a customized property vs depth plot showing two variables using matplotlib
Step8: Note that pCO2 here is calculated for "in-situ" conditions, a capability that apparently is fairly unique to mocsy. See
Orr, J. C. and Epitalon, J.-M.
Step9: Notes for the above calculation
Step10: This output is a good match to the output csv file on the mocsy repository. | Python Code:
import mocsy
Explanation: mocsy examples using mocsy from IOOS channel
12/12 and 11/9/2015. Emilio Mayorga. Reproduce and extend Examples 1-3 from the mocsy Python documentation page.
- mocsy Python source documentation: http://ocmip5.ipsl.jussieu.fr/mocsy/pyth.html
- See ioos-channel implementation discussions at https://github.com/ioos/conda-recipes/pull/563
- Include brief text describing how to add the IOOS channel and create a bare minimum conda env with mocsy (and pandas, etc)?
End of explanation
pH,pco2,fco2,co2,hco3,co3,OmegaA,OmegaC,BetaD,DENis,p,Tis = \
mocsy.mvars(temp=18, sal=35, alk=2300.e-6, dic=2000.e-6, sil=0, phos=0,
patm=1, depth=100, lat=0,
optcon='mol/kg', optt='Tinsitu', optp='db',
optb="u74", optk1k2='l', optkf="dg", optgas='Pinsitu')
print(pH,pco2,fco2,co2,hco3,co3,OmegaA,OmegaC,BetaD,DENis,p,Tis)
Explanation: Example 1, Simple scalar variables
Functions return 1-dimensional numpy arrays. Scalar inputs return length-1 arrays.
For DATA input: DIC and Alk in mol/kg, in situ temperature, pressure.
End of explanation
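As a quick check of the statement above (a sketch; pH comes from the mvars call in the previous cell):
print(type(pH), pH.shape)  # a 1-dimensional numpy array of length 1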
Kh,K1,K2,Kb,Kw,Ks,Kf,Kspc,Kspa,K1p,K2p,K3p,Ksi,St,Ft,Bt = \
mocsy.mconstants(temp=18, sal=35, patm=1, depth=0,lat=0,
optt='Tinsitu', optp='m',
optb="l10", optk1k2='m10', optkf="dg", optgas='Ppot')
print(Kh,K1,K2,Kb,Kw,Ks,Kf,Kspc,Kspa,K1p,K2p,K3p,Ksi,St,Ft,Bt)
Explanation: Compute the CONSTANTS with the same scalar input for S, T, and P (as above) but change to the newer options published since the best-practices guide: Lee et al. (2010) for total boron and Millero (2010) for K<sub>1</sub> and K<sub>2</sub>:
End of explanation
pH,pco2,fco2,co2,hco3,co3,OmegaA,OmegaC,BetaD,DENis,p,Tis = \
mocsy.mvars(temp=18, sal=35, alk=2300*1028e-6, dic=2000*1028e-6,
sil=0, phos=0, patm=1, depth=100, lat=0,
optcon='mol/m3', optt='Tpot', optp='m')
print(pH,pco2,fco2,co2,hco3,co3,OmegaA,OmegaC,BetaD,DENis,p,Tis)
Explanation: For MODEL input: DIC and Alk in mol/m3, potential temperature, and depth. Also do NOT specify last 4 options, thus using defaults (optb='l10', optk1k2='l', optkf='pf', optgas='Pinsitu').
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Make dummy array with 11 members
one = np.ones(11, dtype='float32')
sal = 35*one
temp = 2*one
patm = 1*one
depth = np.arange(0, 11000, 1000, dtype='float32') # units in 'm'
# Compute in situ density (3 steps using 2 mocsy routines):
# a) Specify latitude = 60°S for depth to pressure conversion formula
lat = -60*one
# b) Compute pressure (db) from depth (m) and latitude following Saunders (1981)
pfd = mocsy.mdepth2press(depth,lat) # units in 'db'
# c) Compute in situ density (kg/m3) from salinity (psu), in-situ temp (C), and pressure (db)
rhois = mocsy.mrhoinsitu(sal, temp, pfd)
# Specify surface concentrations typical of 60°S (from GLODAP and WOA2009)
alk = 2295*one # umol/kg
dic = 2154*one # umol/kg
sio2 = 50.*one # umol/L
po4 = 1.8*one # umol/L
# Convert nutrient units(mol/L) to model units (mol/m3)
sio2 = sio2 * 1.0e-3
po4 = po4 * 1.0e-3
# Convert Alk and DIC units (umol/kg) to model units (mol/m3)
dic = dic * rhois * 1e-6
alk = alk * rhois * 1e-6
# Compute carbonate system variables, using 'model' options.
# Note that original example code had an error:
# patm argument was passed as scalar rather than array shaped like one
response_tup = mocsy.mvars(temp=temp, sal=sal, alk=alk, dic=dic,
sil=sio2, phos=po4, patm=patm, depth=depth, lat=lat,
optcon='mol/m3', optt='Tpot', optp='m')
pH,pco2,fco2,co2,hco3,co3,OmegaA,OmegaC,BetaD,DENis,p,Tis = response_tup
depth, pH, OmegaA
Explanation: Example 2, Simple arrays (numpy)
Also demonstrate import into Pandas DataFrame and plotting.
End of explanation
data_for_pd = dict(depth=depth,
temp=temp, sal=sal, alk=alk, dic=dic, sil=sio2, phos=po4,
pH=pH, pCO2=pco2, fCO2=fco2, CO3=co3, OmegaA=OmegaA, OmegaC=OmegaC
)
data_df = pd.DataFrame(data_for_pd,
columns=['depth', 'temp', 'sal', 'alk', 'dic', 'sil', 'phos',
'pH', 'pCO2', 'fCO2', 'CO3', 'OmegaA', 'OmegaC']
)
data_df.set_index('depth', drop=False, inplace=True)
data_df
Explanation: Import into Pandas DataFrame
End of explanation
data_df['OmegaA'].plot(title='Omega Aragonite (OmegaA) vs depth');
Explanation: Create simple, easily generated figure using Pandas plot
End of explanation
fig, ax1 = plt.subplots(figsize=(5,7))
ax1.plot(data_df['OmegaA'], data_df['depth'], lw=2, color="blue")
ax1.set_xlabel(r"$\Omega_A (fraction)$", fontsize=16, color="blue")
for label in ax1.get_xticklabels():
label.set_color("blue")
ax2 = ax1.twiny()
ax2.plot(data_df['pCO2'], data_df['depth'], lw=2, color="red")
ax2.set_xlabel(r"$pCO_2 (\mu atm)$", fontsize=16, color="red")
for label in ax2.get_xticklabels():
label.set_color("red")
ax1.invert_yaxis()
ax1.set_ylabel(r"$depth (m)$", fontsize=16);
Explanation: Create a customized property vs depth plot showing two variables using matplotlib
End of explanation
# Extra Packages used.
# import numpy as np # Already imported above
# import pandas as pd # Already imported above
import urllib
import StringIO
# Read input .csv file from mocsy github repository
infileurl = "https://raw.githubusercontent.com/jamesorr/mocsy/master/DIC-Alk-P_vary_input.csv"
readfileurl = urllib.urlopen(infileurl).read()
infile = StringIO.StringIO(readfileurl)
# Read .csv input file with 7 columns: flag, Alk(mol/kg), DIC(mol/kg), salinity(psu),
# temp(C), press(db), PO43-(mol/kg), SiO2(mol/kg)
# infile = "DIC-Alk-P_vary_input.csv"
indata = np.loadtxt(infile, delimiter=',', dtype='float32', skiprows=1)
n = len(indata)
alk = indata[:,1]
dic = indata[:,2]
sal = indata[:,3]
temp = indata[:,4]
depth = indata[:,5] * 10 #Convert from bars to decibars
phos = indata[:,6]
sil = indata[:,7]
# Specify that latitude = 45°N for all samples (for depth to pressure conversion)
# note: "lat" is required input for mocsy, but results are only weakly sensitive
lat = np.full_like(depth, 45)
# Specify the atmospheric pressure for all samples (1 atm)
patm = np.full_like(depth, 1)
# Recover computed carbonate system variables
response_tup = mocsy.mvars(temp=temp, sal=sal, alk=alk, dic=dic,
sil=sil, phos=phos, patm=patm, depth=depth, lat=lat,
optcon='mol/kg', optt='Tinsitu', optp='db',
optb="u74", optk1k2='l', optkf="dg", optgas='Pzero')
pH, pco2, fco2, co2, hco3, co3, OmegaA, OmegaC, BetaD, rhoSW, p, tempis = response_tup
Explanation: Note that pCO2 here is calculated for "in-situ" conditions, a capability that apparently is fairly unique to mocsy. See
Orr, J. C. and Epitalon, J.-M.: Improved routines to model the ocean carbonate system: mocsy 2.0, Geosci. Model Dev., 8, 485-499, doi:10.5194/gmd-8-485-2015, 2015
Example 3, Read input from a .csv (spreadsheet) file
Also demonstrate import into Pandas DataFrame and plotting.
End of explanation
data_for_pd = dict(depth=depth,
temp=temp, sal=sal, alk=alk, dic=dic, sil=sil, phos=phos,
pH=pH, pCO2=pco2, fCO2=fco2, CO3=co3, OmegaA=OmegaA, OmegaC=OmegaC
)
data_df = pd.DataFrame(data_for_pd,
columns=['depth', 'temp', 'sal', 'alk', 'dic', 'sil', 'phos',
'pH', 'pCO2', 'fCO2', 'CO3', 'OmegaA', 'OmegaC']
)
data_df.set_index('depth', drop=False, inplace=True)
data_df
Explanation: Notes for the above calculation:
1. Original example code had an error: patm argument was missing
2. Changed the original code to use optgas='Pzero', to expose the optgas option and match the pre-existing output on https://github.com/jamesorr/mocsy/blob/master/DIC-Alk-P-vary-from-mocsy.csv. From the mvars module in the Fortran code, regarding Pzero: "'zero order' fCO2 and pCO2 (typical approach, which is flawed) considers in situ T & only atm pressure (hydrostatic=0)"
Import into Pandas DataFrame
End of explanation
fig, ax1 = plt.subplots(figsize=(5,7))
ax1.plot(data_df['OmegaA'], data_df['depth'], lw=2, color="blue")
ax1.set_xlabel(r"$\Omega_A (fraction)$", fontsize=16, color="blue")
for label in ax1.get_xticklabels():
label.set_color("blue")
ax2 = ax1.twiny()
ax2.plot(data_df['pH'], data_df['depth'], lw=2, color="red")
ax2.set_xlabel(r"$pH (total scale)$", fontsize=16, color="red")
for label in ax2.get_xticklabels():
label.set_color("red")
ax1.invert_yaxis()
ax1.set_ylabel(r"$depth (m)$", fontsize=16);
Explanation: This output is a good match to the output csv file on the mocsy repository.
End of explanation |
8,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook 1
Step1: Download the sequence data
Sequence data for this study is archived on the NCBI sequence read archive (SRA). The data were run in two separate Illumina runs, but are combined under a single project id number.
Project SRA
Step2: Create a set with reads concatenated from both technical replicates of each sample
Step3: Make a params file
Step4: Assemble the full data set
Added the 'a' option to output formats to build an ".alleles" file which will be used later for mrbayes/bucky analyses.
Step5: Results
We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples.
Raw data amounts (1 sequence lane)
The average number of raw reads per sample is 1.37M.
Step6: Raw data amounts (2 sequence lanes)
The average nreads now is 2.74M
Step7: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std here is the std in means across samples. The std of depths within individuals is much higher.
Step8: get average number of loci per sample
Step9: get average number of samples with data for a locus
Step10: Infer an ML phylogeny
Step11: Plot the tree in R using ape
Step14: BUCKY -- write mrbayes nexus blocks for each locus
The functions nexmake and subsample are used to split the .loci file into individual nexus files for each locus within a new directory. Each nexus file is given a mrbayes command to run. Then we run the bucky tool mbsum to summarize the mrbayes output, and finally run bucky to infer concordance trees from the posterior distributions of trees across all loci.
Loci are selected on the basis that they have coverage across all tips of the selected subtree and that they contain at least 1 SNP.
Step15: Modify line endings of loci string for easier parsing
Step16: Make nexus files
Step17: Summarize posteriors with mbsum
Step19: Run Bucky to infer concordance factors
Step20: Subtree 1 (Oreinodentinus) (full data set)
Step21: Subtree 1 (Oreinodentinus) (half data set)
Step22: Subtree 2 (Urceolata) (full data set)
Step23: Subtree 2 (Urceolata) (half data set)
Step24: Run mrbayes on all nex files
Step25: Run mbsum to summarize the results
Step26: Run Bucky
Step27: Cleanup
Step28: check out the results
Step29: FINAL BUCKY RESULTS (DEEP_SCALE)
Step30: Get missing data percentage for m2 data sets
For this I start raxml to get the info and then quit. Kind of lazy but simpler than calculating it myself.
Step31: Get average phylo dist (GTRgamma dist) | Python Code:
### Notebook 1
### Data set 1 (Viburnum)
### Language: Bash
### Data Location: NCBI SRA PRJNA299402 & PRJNA299407
%%bash
## make a new directory for this analysis
mkdir -p empirical_1/
mkdir -p empirical_1/halfrun
mkdir -p empirical_1/fullrun
## import Python libraries
import pandas as pd
import numpy as np
import ipyparallel
import urllib2
import glob
import os
Explanation: Notebook 1:
End of explanation
%%bash
## get the data from the NCBI SRA ftp server
for run in $(seq 24 28);
do
wget -q -r -nH --cut-dirs=9 \
ftp://ftp-trace.ncbi.nlm.nih.gov/\
sra/sra-instant/reads/ByRun/sra/SRR/\
SRR191/SRR19155$run/SRR19155$run".sra";
done
%%bash
## convert sra files to fastq using fastq-dump tool
fastq-dump *.sra
## IPython code
## This reads in a table mapping sample names to SRA numbers
## that is hosted on github
## open table from github url
url = "https://raw.githubusercontent.com/"+\
"dereneaton/virentes/master/SraRunTable.txt"
intable = urllib2.urlopen(url)
## make name xfer dictionary
DF = pd.read_table(intable, sep="\t")
D = {DF.Run_s[i]:DF.Library_Name_s[i] for i in DF.index}
## change file names and move to fastq dir/
for fname in glob.glob("*.fastq"):
os.rename(fname, "analysis_pyrad/fastq/"+\
                 D[fname.replace(".fastq", "")]+".fastq")
Explanation: Download the sequence data
Sequence data for this study is archived on the NCBI sequence read archive (SRA). The data were run in two separate Illumina runs, but are combined under a single project id number.
Project SRA: SRP055977
Project number: PRJNA277574
Biosample numbers: SAMN03394519 -- SAMN03394561
Runs: SRR1915524 -- SRR1915566
The barcodes file is in the github repository for this project.
The library contains 95 samples. We uploaded the two demultiplexed samples for each individual separately, so each sample has 2 files. Below we examine just the first library (the "half" data set) and then both libraries combined (the "full" data set). We analyze on 64 samples since the remaining samples are replicate individuals within species that are part of a separate project.
You can download the data set using the script below:
End of explanation
%%bash
mkdir -p fastq_combined
## IPython code that makes a bash call w/ (!)
## get all the data from the two libraries and concatenate it
lib1tax = glob.glob("/home/deren/Documents/Vib_Lib1/fastq_Lib1/*.gz")
lib2tax = glob.glob("/home/deren/Documents/Vib_Lib1/fastq_Lib2/*.gz")
## names had to be modified to match
taxa = [i.split("/")[-1].split("_", 1)[1] for i in lib1tax]
for tax in taxa:
! cat /home/deren/Documents/Vib_Lib1/fastq_Lib1/Lib1_$tax \
/home/deren/Documents/Vib_Lib1/fastq_Lib2/Lib2_$tax \
> /home/deren/Documents/Vib_Lib1/fastq_combined/$tax
Explanation: Create a set with reads concatenated from both technical replicates of each sample
End of explanation
%%bash
pyrad --version
%%bash
## create a new default params file
rm params.txt
pyrad -n
%%bash
## substitute new parameters into file
sed -i '/## 1. /c\empirical_1/halfrun ## 1. working directory ' params.txt
sed -i '/## 6. /c\TGCAG ## 6. cutters ' params.txt
sed -i '/## 7. /c\30 ## 7. N processors ' params.txt
sed -i '/## 9. /c\6 ## 9. NQual ' params.txt
sed -i '/## 10./c\.85 ## 10. clust threshold ' params.txt
sed -i '/## 12./c\4 ## 12. MinCov ' params.txt
sed -i '/## 13./c\10 ## 13. maxSH ' params.txt
sed -i '/## 14./c\empirical_1_half_m4 ## 14. output name ' params.txt
sed -i '/## 18./c\/home/deren/Documents/Vib_Lib1/fastq_Lib1/*.fastq ## 18. data location ' params.txt
sed -i '/## 29./c\2,2 ## 29. trim overhang ' params.txt
sed -i '/## 30./c\p,n,s,a ## 30. output formats ' params.txt
cat params.txt
%%bash
pyrad -p params.txt -s 234567 >> log.txt 2>&1
%%bash
sed -i '/## 12./c\2 ## 12. MinCov ' params.txt
sed -i '/## 14./c\empirical_1_half_m2 ## 14. output name ' params.txt
%%bash
pyrad -p params.txt -s 7 >> log.txt 2>&1
Explanation: Make a params file
End of explanation
%%bash
## substitute new parameters into file
sed -i '/## 1. /c\empirical_1/fullrun ## 1. working directory ' params.txt
sed -i '/## 6. /c\TGCAG ## 6. cutters ' params.txt
sed -i '/## 7. /c\30 ## 7. N processors ' params.txt
sed -i '/## 9. /c\6 ## 9. NQual ' params.txt
sed -i '/## 10./c\.85 ## 10. clust threshold ' params.txt
sed -i '/## 12./c\4 ## 12. MinCov ' params.txt
sed -i '/## 13./c\10 ## 13. maxSH ' params.txt
sed -i '/## 14./c\empirical_1_full_m4 ## 14. output name ' params.txt
sed -i '/## 18./c\/home/deren/Documents/Vib_Lib1/fastq_combined/*.fastq ## 18. data location ' params.txt
sed -i '/## 29./c\2,2 ## 29. trim overhang ' params.txt
sed -i '/## 30./c\p,n,s,a ## 30. output formats ' params.txt
%%bash
pyrad -p params.txt -s 234567 >> log.txt 2>&1
%%bash
sed -i '/## 12./c\2 ## 12. MinCov ' params.txt
sed -i '/## 14./c\empirical_1_full_m2 ## 14. output name ' params.txt
%%bash
pyrad -p params.txt -s 7 >> log.txt 2>&1
Explanation: Assemble the full data set
Added the 'a' option to output formats to build an ".alleles" file which will be used later for mrbayes/bucky analyses.
End of explanation
## read in the data
s2dat = pd.read_table("empirical_1/halfrun/stats/s2.rawedit.txt", header=0, nrows=66)
## print summary stats
print s2dat["passed.total"].describe()
## find which sample has the most raw data
maxraw = s2dat["passed.total"].max()
print "\nmost raw data in sample:"
print s2dat['sample '][s2dat['passed.total']==maxraw]
Explanation: Results
We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples.
Raw data amounts (1 sequence lane)
The average number of raw reads per sample is 1.37M.
End of explanation
## read in the data
s2dat = pd.read_table("empirical_1/fullrun/stats/s2.rawedit.txt", header=0, nrows=66)
## print summary stats
print s2dat["passed.total"].describe()
## find which sample has the most raw data
maxraw = s2dat["passed.total"].max()
print "\nmost raw data in sample:"
print s2dat['sample '][s2dat['passed.total']==maxraw]
Explanation: Raw data amounts (2 sequence lanes)
The average number of reads is now 2.74M
End of explanation
## read in the s3 results
s3dat = pd.read_table("empirical_1/halfrun/stats/s3.clusters.txt", header=0, nrows=66)
## print summary stats
print "summary of means\n=================="
print s3dat['dpt.me'].describe()
## print summary stats
print "\nsummary of std\n=================="
print s3dat['dpt.sd'].describe()
## print summary stats
print "\nsummary of proportion lowdepth\n=================="
print pd.Series(1-s3dat['d>5.tot']/s3dat["total"]).describe()
## find which sample has the greatest depth of retained loci
maxdepth = s3dat["d>5.tot"].max()
print "\nhighest coverage in sample:"
print s3dat['taxa'][s3dat['d>5.tot']==maxdepth]
## read in the s3 results
s3dat = pd.read_table("empirical_1/fullrun/stats/s3.clusters.txt", header=0, nrows=66)
## print summary stats
print "summary of means\n=================="
print s3dat['dpt.me'].describe()
## print summary stats
print "\nsummary of std\n=================="
print s3dat['dpt.sd'].describe()
## print summary stats
print "\nsummary of proportion hidepth\n=================="
print pd.Series(1-s3dat['d>5.tot']/s3dat["total"]).describe()
## find which sample has the greatest depth of retained loci
max_hiprop = (s3dat["d>5.tot"]/s3dat["total"]).max()
print "\nhighest coverage in sample:"
print s3dat['taxa'][s3dat['d>5.tot']/s3dat["total"]==max_hiprop]
## print mean and std of coverage for the highest coverage sample
with open("empirical_1/fullrun/clust.85/lantanoides_D15_Beartown_2.depths", 'rb') as indat:
depths = np.array(indat.read().strip().split(","), dtype=int)
print depths.mean(), depths.std()
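## Added sketch (not in the original run): within-individual depth std for a
## few samples, to contrast with the across-sample std of means shown above.
## Assumes each sample has a <name>.depths file as written by pyrad v.3.0.63.
import glob
for dfile in sorted(glob.glob("empirical_1/fullrun/clust.85/*.depths"))[:5]:
    with open(dfile, 'rb') as indat:
        d = np.array(indat.read().strip().split(","), dtype=int)
    print dfile.split("/")[-1], d.mean(), d.std()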
import toyplot
import toyplot.svg
import numpy as np
## read in the depth information for this sample
with open("empirical_1/fullrun/clust.85/lantanoides_D15_Beartown_2.depths", 'rb') as indat:
depths = np.array(indat.read().strip().split(","), dtype=int)
## make a barplot in Toyplot
canvas = toyplot.Canvas(width=350, height=300)
axes = canvas.axes(xlabel="Depth of coverage (N reads)",
ylabel="N loci",
label="dataset1/sample=sulcatum_D9_MEX_003")
## select the loci with depth > 5 (kept)
keeps = depths[depths>5]
## plot kept and discarded loci
edat = np.histogram(depths, range(30)) # density=True)
kdat = np.histogram(keeps, range(30)) #, density=True)
axes.bars(edat)
axes.bars(kdat)
#toyplot.svg.render(canvas, "empirical_1_full_depthplot.svg")
cat empirical_1/halfrun/stats/empirical_1_half_m4.stats
Explanation: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std here is the std in means across samples. The std of depths within individuals is much higher.
End of explanation
import numpy as np
indat = open("empirical_1/halfrun/stats/empirical_1_half_m4.stats").readlines()
counts = [int(i.strip().split("\t")[1]) for i in indat[8:73]]
print np.mean(counts)
print np.std(counts)
Explanation: get average number of loci per sample
End of explanation
import numpy as np
import itertools
indat = open("empirical_1/halfrun/stats/empirical_1_half_m4.stats").readlines()
counts = [i.strip().split("\t") for i in indat[81:142]]
#print counts
ntax = [int(i[0]) for i in counts]
ncounts = [int(i[1]) for i in counts]
tots = list(itertools.chain(*[[i]*n for i,n in zip(ntax, ncounts)]))
print np.mean(tots)
print np.std(tots)
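## Equivalent one-liner (added sketch): the same weighted mean without
## expanding the histogram into a full list.
print np.average(ntax, weights=ncounts)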
cat empirical_1/fullrun/stats/empirical_1_full_m4.stats
import numpy as np
indat = open("empirical_1/fullrun/stats/empirical_1_full_m4.stats").readlines()
counts = [int(i.strip().split("\t")[1]) for i in indat[8:73]]
print np.mean(counts)
print np.std(counts)
import numpy as np
import itertools
indat = open("empirical_1/fullrun/stats/empirical_1_full_m4.stats").readlines()
counts = [i.strip().split("\t") for i in indat[81:140]]
#print counts
ntax = [int(i[0]) for i in counts]
ncounts = [int(i[1]) for i in counts]
tots = list(itertools.chain(*[[i]*n for i,n in zip(ntax, ncounts)]))
print np.mean(tots)
print np.std(tots)
Explanation: get average number of samples with data for a locus
End of explanation
%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 35 \
-w /home/deren/Documents/RADmissing/empirical_1/halfrun \
-n empirical_1_halfrun -s empirical_1/halfrun/outfiles/empirical_1_half_m4.phy \
-o "Lib1_clemensiae_DRY6_PWS_2135"
%%bash
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 35 \
-w /home/deren/Documents/RADmissing/empirical_1/fullrun \
-n empirical_1_fullrun -s empirical_1/fullrun/outfiles/empirical_1_full_m4.phy \
-o "clemensiae_DRY6_PWS_2135"
%%bash
head -n 20 empirical_1/halfrun/RAxML_info.empirical_1_halfrun
%%bash
head -n 20 empirical_1/fullrun/RAxML_info.empirical_1_fullrun
Explanation: Infer an ML phylogeny
End of explanation
%load_ext rpy2.ipython
%%R -w 600 -h 1000
library(ape)
tre_half <- read.tree("empirical_1/halfrun/RAxML_bipartitions.empirical_1_halfrun")
#rtre <- root(tre, "Lib1_clemensiae_DRY6_PWS_2135", resolve.root=T)
#rtre <- root(rtre, "Lib1_clemensiae_DRY6_PWS_2135", resolve.root=T)
ltre_half <- ladderize(tre_half)
plot(ltre_half, cex=0.8, edge.width=2)
nodelabels(ltre_half$node.label)
%%R -w 600 -h 1000
library(ape)
svg("outtree.svg", height=11, width=8)
tre_full <- read.tree("empirical_1/fullrun/RAxML_bipartitions.empirical_1_fullrun")
#rtre <- root(tre, "Lib1_clemensiae_DRY6_PWS_2135", resolve.root=T)
#rtre <- root(rtre, "Lib1_clemensiae_DRY6_PWS_2135", resolve.root=T)
ltre_full <- ladderize(tre_full)
plot(ltre_full, cex=0.8, edge.width=3)
#nodelabels(ltre_full$node.label)
dev.off()
Explanation: Plot the tree in R using ape
End of explanation
def nexmake(taxadict, loc, nexdir, trim):
outloc = open(nexdir+"/"+str(loc)+".nex", 'w')
    header = """
#NEXUS
begin data;
dimensions ntax={} nchar={};
format datatype=dna interleave=yes missing=N gap=-;
matrix
""".format(len(taxadict), len(taxadict.values()[0]))
outloc.write(header)
for tax, seq in taxadict.items():
outloc.write("{}{}{}\n"\
.format(tax[trim:trim+9],
" "*(10-len(tax[0:9])),
"".join(seq)))
    mbstring = """
    ;
end;
begin mrbayes;
set autoclose=yes nowarn=yes;
lset nst=6 rates=gamma;
mcmc ngen=2200000 samplefreq=2000;
sump burnin=200000;
sumt burnin=200000;
end;
"""
outloc.write(mbstring)
outloc.close()
def unstruct(amb):
" returns bases from ambiguity code"
D = {"R":["G","A"],
"K":["G","T"],
"S":["G","C"],
"Y":["T","C"],
"W":["T","A"],
"M":["C","A"]}
if amb in D:
return D.get(amb)
else:
return [amb,amb]
def resolveambig(subseq):
N = []
for col in subseq:
N.append([unstruct(i)[np.random.binomial(1, 0.5)] for i in col])
return np.array(N)
def newPIS(seqsamp):
counts = [Counter(col) for col in seqsamp.T if not ("-" in col or "N" in col)]
pis = [i.most_common(2)[1][1] > 1 for i in counts if len(i.most_common(2))>1]
if sum(pis) >= 2:
return sum(pis)
else:
return 0
def parseloci(iloci, taxadict, nexdir, trim=0):
nloc = 0
## create subsampled data set
for loc in iloci:
## if all tip samples have data in this locus
names = [line.split()[0] for line in loc.split("\n")[:-1]]
## check that locus has required samples for each subtree
if all([i in names for i in taxadict.values()]):
seqs = np.array([list(line.split()[1]) for line in loc.split("\n")[:-1]])
seqsamp = seqs[[names.index(tax) for tax in taxadict.values()]]
seqsamp = resolveambig(seqsamp)
pis = newPIS(seqsamp)
if pis:
nloc += 1
## remove invariable columns given this subsampling
keep = []
for n, col in enumerate(seqsamp.T):
if all([i not in ["N","-"] for i in col]):
keep.append(n)
subseq = seqsamp.T[keep].T
## write to a nexus file
nexdict = dict(zip(taxadict.keys(), [i.tostring() for i in subseq]))
nexmake(nexdict, nloc, nexdir, trim)
print nloc, 'loci kept'
Explanation: BUCKY -- write mrbayes nexus blocks for each locus
The functions nexmake and subsample are used to split the .loci file into individual nexus files for each locus within a new directory. Each nexus file is given a mrbayes command to run. Then we run the bucky tool mbsum to summarize the mrbayes output, and finally run bucky to infer concordance trees from the posterior distributions of trees across all loci.
Loci are selected on the basis that they have coverage across all tips of the selected subtree and that they contain at least 1 SNP.
End of explanation
def getloci(locifile):
## parse the loci file by new line characters
locifile = open(locifile)
lines = locifile.readlines()
## add "|" to end of lines that contain "|"
for idx in range(len(lines)):
if "|" in lines[idx]:
lines[idx] = lines[idx].strip()+"|\n"
## join lines back together into one large string
locistr = "".join(lines)
## break string into loci at the "|\n" character
loci = locistr.split("|\n")[:-1]
## how many loci?
print len(loci), "loci"
return loci
## run on both files
loci_full = getloci("empirical_1/fullrun/outfiles/empirical_1_full_m4.loci")
loci_half = getloci("empirical_1/halfrun/outfiles/empirical_1_half_m4.loci")
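## Quick sanity check (added sketch, not from the original run): show the
## first 60 characters of the first locus chunk to confirm the "|\n" split
## behaved as expected.
print repr(loci_full[0][:60])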
Explanation: Modify line endings of loci string for easier parsing
End of explanation
parseloci(loci_full[:], deep_dict_f, "deep_dict_full", 0)
parseloci(loci_half[:], deep_dict_h, "deep_dict_half", 0)
#parseloci(loci[:], shallow_dict, "shallow_dict", 0)
## create a parallel client
ipclient = ipyparallel.Client()
lview = ipclient.load_balanced_view()
## call function across all engines
def mrbayes(infile):
import subprocess
cmd = "mb %s" % infile
subprocess.check_call(cmd, shell=True)
## submit all nexus files to run mb
allnex = glob.glob("deep_dict_full/*.nex")
for nex in allnex:
lview.apply(mrbayes, nex)
ipclient.wait_interactive()
Explanation: Make nexus files
End of explanation
def mbsum(nexdir, nloci):
import subprocess
## combine trees from the two replicate runs
for n in range(1, nloci+1):
cmd = "mbsum -n 101 -o {}{}.in {}{}.nex.run1.t {}{}.nex.run2.t".\
format(nexdir, n, nexdir, n, nexdir, n)
subprocess.check_call(cmd, shell=True)
Explanation: Summarize posteriors with mbsum
End of explanation
import os
import numpy as np
from collections import Counter
def subsample(infile, requires, outgroup, nexdir, trim):
    """sample n taxa from infile to create nex file"""
## counter
loc = 0
## create output directory
if not os.path.exists(nexdir):
os.mkdir(nexdir)
## input .alleles file
loci = open(infile, 'r').read().strip().split("//")
## create a dictionary of {names:seqs}
for locus in xrange(len(loci)):
locnex = [""]*len(requires)
for line in loci[locus].strip().split("\n"):
tax = line.split()[0]
seq = line.split()[-1]
if ">" in tax:
if tax in requires:
locnex[requires.index(tax)] = seq
## if all tips
if len([i for i in locnex if i]) == len(requires):
## if locus is variable
## count how many times each char occurs in each site
ccs = [Counter(i) for i in np.array([list(i) for i in locnex]).T]
## remove N and - characters and the first most occurring base
for i in ccs:
del i['-']
del i['N']
if i:
del i[i.most_common()[0][0]]
## is anything left occuring more than once (minor allele=ma)?
ma = max([max(i.values()) if i else 0 for i in ccs])
if ma > 1:
nexmake(requires, locnex, loc, outgroup, nexdir, trim)
loc += 1
return loc
Explanation: Run Bucky to infer concordance factors
End of explanation
## inputs
requires = [">triphyllum_D13_PWS_1783_0",
">jamesonii_D12_PWS_1636_0",
">sulcatum_D9_MEX_003_0",
">acutifolium_DRY3_MEX_006_0",
">dentatum_ELS4_0",
">recognitum_AA_1471_83B_0"]
outgroup = ""
infile = "empirical_1/fullrun/outfiles/empirical_1_full_m4.alleles"
nexdir = "nex_files1"
## run function
nloci = subsample(infile, requires, outgroup, nexdir, trim=1)
print nloci
Explanation: Subtree 1 (Oreinodentinus) (full data set)
End of explanation
## inputs
requires = [">Lib1_triphyllum_D13_PWS_1783_0",
">Lib1_jamesonii_D12_PWS_1636_0",
">Lib1_sulcatum_D9_MEX_003_0",
">Lib1_acutifolium_DRY3_MEX_006_0",
">Lib1_dentatum_ELS4_0",
">Lib1_recognitum_AA_1471_83B_0"]
outgroup = ""
infile = "empirical_1/halfrun/outfiles/empirical_1_half_m4.alleles"
nexdir = "nex_files2"
## run function
nloci = subsample(infile, requires, outgroup, nexdir, trim=6)
print nloci
Explanation: Subtree 1 (Oreinodentinus) (half data set)
End of explanation
## inputs
requires = [">clemensiae_DRY6_PWS_2135_0",
">tinus_D33_WC_277_0",
">taiwanianum_TW1_KFC_1952_0",
">lantanoides_D15_Beartown_2_0",
">amplificatum_D3_SAN_156003_0",
">lutescens_D35_PWS_2077_0",
">lentago_ELS85_0",
">dentatum_ELS4_0"]
outgroup = ""
infile = "empirical_1/fullrun/outfiles/empirical_1_full_m4.alleles"
nexdir = "nex_files5"
## run function
nloci = subsample(infile, requires, outgroup, nexdir, trim=1)
print nloci
Explanation: Subtree 2 (Urceolata) (full data set)
End of explanation
## inputs
requires = [">Lib1_clemensiae_DRY6_PWS_2135_0",
">Lib1_tinus_D33_WC_277_0",
">Lib1_taiwanianum_TW1_KFC_1952_0",
">Lib1_lantanoides_D15_Beartown_2_0",
">Lib1_amplificatum_D3_SAN_156003_0",
">Lib1_lutescens_D35_PWS_2077_0",
">Lib1_lentago_ELS85_0",
">Lib1_dentatum_ELS4_0"]
outgroup = ""
infile = "empirical_1/halfrun/outfiles/empirical_1_half_m4.alleles"
nexdir = "nex_files6"
## run function
nloci = subsample(infile, requires, outgroup, nexdir, trim=6)
print nloci
Explanation: Subtree 2 (Urceolata) (half data set)
End of explanation
import ipyparallel
import subprocess
import glob
## create a parallel client
ipclient = ipyparallel.Client()
lview = ipclient.load_balanced_view()
## call function across all engines
def mrbayes(infile):
import subprocess
cmd = "mb %s" % infile
subprocess.check_call(cmd, shell=True)
## run on the full data set
res = lview.map_async(mrbayes, glob.glob("nex_files1/*"))
_ = res.get()
## run on the half data set
res = lview.map_async(mrbayes, glob.glob("nex_files2/*"))
_ = res.get()
## run on the half data set
res = lview.map_async(mrbayes, glob.glob("nex_files3/*"))
_ = res.get()
## run on the half data set
res = lview.map_async(mrbayes, glob.glob("nex_files4/*"))
_ = res.get()
## run on the half data set
res = lview.map_async(mrbayes, glob.glob("nex_files5/*"))
_ = res.get()
## run on the half data set
res = lview.map_async(mrbayes, glob.glob("nex_files6/*"))
_ = res.get()
Explanation: Run mrbayes on all nex files
End of explanation
import os
import subprocess
def mbsum(nexdir, nloci):
## create dir for bucky input files
insdir = os.path.join(nexdir, "ins")
if not os.path.exists(insdir):
os.mkdir(insdir)
## combine trees from the two replicate runs
for n in range(nloci):
cmd = "mbsum -n 101 -o {}/{}.in {}{}.nex.run1.t {}{}.nex.run2.t".\
format(insdir, n, nexdir, n, nexdir, n)
subprocess.check_call(cmd, shell=True)
#mbsum("nex_files1/", 3300)
#mbsum("nex_files2/", 364)
#mbsum("nex_files3/", 1692)
#mbsum("nex_files4/", 169)
mbsum("nex_files5/", 1203)
mbsum("nex_files6/", 106)
Explanation: Run mbsum to summarize the results
End of explanation
args = []
for insdir in ["nex_files5/ins", "nex_files6/ins"]:
## independence test
args.append("bucky --use-independence-prior -k 4 -n 500000 \
-o {}/BUCKY.ind {}/*.in".format(insdir, insdir))
## alpha at three levels
for alpha in [0.1, 1, 10]:
args.append("bucky -a {} -k 4 -n 500000 -c 4 -o {}/BUCKY.{} {}/*.in".\
format(alpha, insdir, alpha, insdir))
def bucky(arg):
import subprocess
subprocess.check_call(arg, shell=True)
return arg
res = lview.map_async(bucky, args)
res.get()
Explanation: Run Bucky
End of explanation
del lview
ipclient.close()
Explanation: Cleanup
End of explanation
%%bash
head -n 40 nex_files1/ins/BUCKY.0.1.concordance
%%bash
head -n 40 nex_files1/ins/BUCKY.1.concordance
%%bash
head -n 40 nex_files2/ins/BUCKY.1.concordance
! head -n 45 nex_files3/ins/BUCKY.0.1.concordance
Explanation: check out the results
End of explanation
! head -n 45 nex_files4/ins/BUCKY.0.1.concordance
! head -n 45 nex_files5/ins/BUCKY.0.1.concordance
! head -n 45 nex_files6/ins/BUCKY.0.1.concordance
Explanation: FINAL BUCKY RESULTS (DEEP_SCALE)
End of explanation
%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_1/fullrun \
    -n empirical_1_full_m2 -s empirical_1/fullrun/outfiles/empirical_1_full_m2.phy
%%bash
head -n 20 empirical_1/fullrun/RAxML_info.empirical_1_full_m2
Explanation: Get missing data percentage for m2 data sets
For this I start raxml to get the info and then quit. Kind of lazy but simpler than calculating it myself.
End of explanation
%%R
mean(cophenetic.phylo(ltre_full))  # mean pairwise distance on the full-data tree
Explanation: Get average phylo dist (GTRgamma dist)
End of explanation |
8,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Orientation density functions
Step1: In this Python Notebook we will show how to properly run a simulation of a composite material, providing the ODF (orientation density function) of the reinforcments.
Such identification procedure require
Step2: In the previous graph we can see a multi-peak ODF (peaks are modeled using PEARSONVII functions). It actually represent quite well the microstructure of injected plates.
The next step is to discretize the ODF into phases.
The file containing the initial 2-phase microstructure contains the following informations | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from simmit import smartplus as sim
from simmit import identify as iden
import os
dir = os.path.dirname(os.path.realpath('__file__'))
Explanation: Orientation density functions
End of explanation
x = np.arange(0,182,2)
path_data = dir + '/data/'
peak_file = 'Npeaks0.dat'
y = sim.get_densities(x, path_data, peak_file, False)
fig = plt.figure()
plt.grid(True)
plt.plot(x,y, c='black')
Explanation: In this Python Notebook we will show how to properly run a simulation of a composite material, providing the ODF (orientation density function) of the reinforcments.
Such identification procedure require:
1. Proper ODF peak data
1. Proper composite properties
2. A proper numerical model (here a composite model for laminate constitutive model)
End of explanation
NPhases_file = dir + '/data/Nellipsoids0.dat'
NPhases = pd.read_csv(NPhases_file, delimiter=r'\s+', index_col=False, engine='python')
NPhases[::]
umat_name = 'MIMTN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcments
nstatev = 0
rho = 1.12 #The density of the material (overall)
c_p = 1.64 #The specific heat capacity (overall)
nphases = 2 #The number of phases
num_file = 0 #The num of the file that contains the subphases
int1 = 20
int2 = 20
psi_rve = 0.
theta_rve = 0.
phi_rve = 0.
props = np.array([nphases, num_file, int1, int2, 0])
path_data = 'data'
path_results = 'results'
Nfile_init = 'Nellipsoids0.dat'
Nfile_disc = 'Nellipsoids1.dat'
nphases_rve = 36
num_phase_disc = 1
sim.ODF_discretization(nphases_rve, num_phase_disc, 0., 180., umat_name, props, path_data, peak_file, Nfile_init, Nfile_disc, 1)
NPhases_file = dir + '/data/Nellipsoids1.dat'
NPhases = pd.read_csv(NPhases_file, delimiter=r'\s+', index_col=False, engine='python')
#We plot here the five first phases
NPhases[:5]
#Plot the concentration and the angle
c, angle = np.loadtxt(NPhases_file, usecols=(4,5), skiprows=2, unpack=True)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
# the histogram of the data
xs = np.arange(0,180,5)
rects1 = ax1.bar(xs, c, width=5, color='r', align='center')
ax2.plot(x, y, 'b-')
ax1.set_xlabel('X data')
ax1.set_ylabel('Y1 data', color='g')
ax2.set_ylabel('Y2 data', color='b')
ax1.set_ylim([0,0.025])
ax2.set_ylim([0,0.25])
plt.show()
#plt.grid(True)
#plt.plot(angle,c, c='black')
plt.show()
#Run the simulation
pathfile = 'path.txt'
nphases = 37 #The number of phases
num_file = 1 #The num of the file that contains the subphases
props = np.array([nphases, num_file, int1, int2])
outputfile = 'results_MTN.txt'
sim.solver(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data, path_results, pathfile, outputfile)
fig = plt.figure()
outputfile_macro = dir + '/' + path_results + '/results_MTN_global-0.txt'
e11, e22, e33, e12, e13, e23, s11, s22, s33, s12, s13, s23 = np.loadtxt(outputfile_macro, usecols=(8,9,10,11,12,13,14,15,16,17,18,19), unpack=True)
plt.grid(True)
plt.plot(e11,s11, c='black')
for i in range(8,12):
outputfile_micro = dir + '/' + path_results + '/results_MTN_global-0-' + str(i) + '.txt'
e11, e22, e33, e12, e13, e23, s11, s22, s33, s12, s13, s23 = np.loadtxt(outputfile_micro, usecols=(8,9,10,11,12,13,14,15,16,17,18,19), unpack=True)
plt.grid(True)
plt.plot(e11,s11, c='red')
plt.xlabel('Strain')
plt.ylabel('Stress (MPa)')
plt.show()
Explanation: In the previous graph we can see a multi-peak ODF (peaks are modeled using PEARSONVII functions). It actually represent quite well the microstructure of injected plates.
The next step is to discretize the ODF into phases.
The file containing the initial 2-phase microstructure contains the following informations
End of explanation |
8,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-3', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: TEST-INSTITUTE-2
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
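# Illustrative example (added) of filling a multi-valued (cardinality 1.N)
# property; the choices below are placeholders, not a statement about any
# real model, and are left commented so the template stays unmodified:
# DOC.set_value("troposhere")
# DOC.set_value("stratosphere")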
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
* Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
8,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spectral Temperature Estimation
Problem Statement
A spectral radiometer is used to determine the surface temperature of a hot object exposed to sunlight. The surface normal vector is pointing directly towards the sun.
The temperature and emissivity of the object surface, as well as the temperature of the sun are unknown.
The object is considerably hotter than the environment, hence atmospheric path radiance and reflected ambient flux can be ignored. Ignore atmospheric transmittance between the sun and the object and between the object and the sensor. The measurement results are corrupted by noise. The sun and object both radiate as Planck radiators. Emissivity is constant at all wavelengths.
Three different objects were characterised, each with different sun temperature, object temperature and object surface emissivity values. In other words, these are three completely independent cases. The measured spectral radiance for all three cases is contained in the file
https://raw.githubusercontent.com/NelisW/pyradi/master/pyradi/data/EOSystemAnalysisDesign-Data/twoSourceSpectrum.txt
Step2: Prepare the data file
This section is not part of the solution; it is used to prepare the data for the problem statement.
Calculate the spectral radiance curves from first principles.
Step3: Plot the data for visual check. Plot on log scale to capture the wide range of values.
Step4: Solution
Recognise that the signature comprises two components
Step6: Analysing the graphs above, rough estimates of the temperatures can be formed. The object temperatures can be estimated relatively easily from the bottom-left curve
Step8: The following function does the hard work of finding the maxima. It first finds all the zero crossings in the data, but there are many such crossings (at least three actual ones) because of the noise in the signal: near each actual zero crossing there is a multitude of noisy crossings, and we cannot tell the actual crossing from the noise-caused ones. The function deals with this by (1) selecting a spectral range according to the expected temperature (by the Wien law), (2) finding all the zero crossings in that spectral range, (3) using the wavelengths where the zero crossings occur to determine the temperatures of those crossings, and (4) taking the mean of the temperatures associated with all the zero crossings in the filtered spectral range.
The emissivity is determined by solving $L_{\rm tot} = \epsilon \,L_{\rm bb}(T_{\rm obj}) + (1\,-\,\epsilon)\,\psi\, L_{\rm bb}(T_{\rm sun})$ for $\epsilon$, having determined the sun and object temperatures.
Finally, print the estimated temperature and the errors relative to the original values used to construct the problem.
Step9: From the above results, it is evident that the filtered data set did not really provide a better answer
Step10: This method is simple and very accurate, but it does require confirmation that the measured scenarios correspond with the model used in the data analysis. It is not a general solution, but specifically tuned to the equation used.
Python and module versions, and dates | Python Code:
from IPython.display import display
from IPython.display import Image
from IPython.display import HTML
%matplotlib inline
import numpy as np
from scipy.optimize import curve_fit
import pyradi.ryutils as ryutils
import pyradi.ryplot as ryplot
import pyradi.ryplanck as ryplanck
#make pngs at required dpi
import matplotlib as mpl
mpl.rc("savefig", dpi=75)
mpl.rc('figure', figsize=(10,8))
Explanation: Spectral Temperature Estimation
Problem Statement
A spectral radiometer is used to determine the surface temperature of a hot object exposed to sunlight. The surface normal vector is pointing directly towards the sun.
The temperature and emissivity of the object surface, as well as the temperature of the sun are unknown.
The object is considerably hotter than the environment, hence atmospheric path radiance and reflected ambient flux can be ignored. Ignore atmospheric transmittance between the sun and the object and between the object and the sensor. The measurement results are corrupted by noise. The sun and object both radiate as Planck radiators. Emissivity is constant at all wavelengths.
Three different objects were characterised, each with different sun temperature, object temperature and object surface emissivity values. In other words, these are three completely independent cases. The measured spectral radiance for all three cases is contained in the file
https://raw.githubusercontent.com/NelisW/pyradi/master/pyradi/data/EOSystemAnalysisDesign-Data/twoSourceSpectrum.txt
The first column in the file is wavelength in $\mu$m, and the remaining three columns are spectral radiance data for the three cases in W/(m$^2$.sr.$\mu$m).
Develop a model for the measurement setup; complete with mathematical description and a diagram of the setup. [5]
Develop two different techniques to solve the model parameters (sun temperature, object temperature and object emissivity) for the given spectral data. [13]
Evaluate the two techniques in terms of accuracy and risk in finding a stable and true solution. [2]
[20]
End of explanation
filename = 'twoSourceSpectrum.txt'
def bbfunc(wl,tsun,tobj,emis):
Given the two temperatures and emissivity, calculate the spectrum.
lSun = 2.175e-5 * ryplanck.planck(wl,tsun, 'el') / np.pi # W/(m2.sr.um)
lObj = ryplanck.planck(wl,tobj, 'el') / np.pi # W/(m2.sr.um)
return lSun * (1 - emis) + lObj * emis
def percent(val1, val2):
return 100. * (val1 -val2) / val2
Explanation: Prepare the data file
This section is not part of the solution; it is used to prepare the data for the problem statement.
Calculate the spectral radiance curves from first principles.
End of explanation
wl = np.linspace(0.2, 14, 1000).reshape(-1,1) # wavelength
tempsuns = [5715, 5855, 6179]
tempobjs = [503, 998, 1420]
emiss = [0.89, 0.51, 0.2]
plotmins = [1e-0, 1e1, 1e1]
plotmaxs = [1e3, 1e4, 1e5]
noise = [.1, .5 , 10]
radiancesun = np.zeros((wl.shape[0], len(emiss)))
radianceobj = np.zeros((wl.shape[0], len(emiss)))
radiancesum = np.zeros((wl.shape[0], len(emiss)))
outarr = wl
p = ryplot.Plotter(2,3,1,figsize=(12,10))
for i,(tempsun, tempobj,emis, plotmin, plotmax) in enumerate(zip(tempsuns, tempobjs,emiss, plotmins, plotmaxs)):
radiancesum[:,i] = bbfunc(wl,tempsun,tempobj,emis) + noise[i] * np.random.normal(size=wl.shape[0])
radiancesun[:,i] = bbfunc(wl,tempsun,tempobj,0.0)
radianceobj[:,i] = bbfunc(wl,tempsun,tempobj,1.0)
outarr = np.hstack((outarr, radiancesum[:,i].reshape(-1,1)))
p.logLog(1+i,wl, radiancesun[:,i],
'Self exitance plus sun reflection, case {}'.format(i+1),
'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)',
maxNX=4, pltaxis=[0.2, 10, plotmin, plotmax])
p.logLog(1+i,wl, radianceobj[:,i],
'Self exitance plus sun reflection, case {}'.format(i+1),
'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)',
maxNX=4, pltaxis=[0.2, 10, plotmin, plotmax])
p.logLog(1+i,wl, radiancesum[:,i],
'Self exitance plus sun reflection, case {}'.format(i+1),
'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)',
maxNX=4, pltaxis=[0.2, 10, plotmin, plotmax])
#data not to be written out again, already committed to the pyradi web site.
if False:
with open(filename, 'wt') as fout:
fout.write(('{:25s}' * 4 + '\n').format('wavelength-um', 'Radiance-case1', 'Radiance-case2', 'Radiance-case3'))
np.savetxt(fout, outarr)
Explanation: Plot the data for visual check. Plot on log scale to capture the wide range of values.
End of explanation
radIn = np.loadtxt(filename, skiprows=1)
wl = radIn[:,0]
q = ryplot.Plotter(1,2,2,figsize=(12,8))
for i in range(0,len(emiss)):
tPeak = 2898. / wl
q.logLog(1,wl, radIn[:,i+1],
'Input data', 'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)',
label=['Case {}'.format(i+1)], maxNX=4, pltaxis=[0.2, 10, 1e0, 1e4])
q.semilogY(2,wl, radIn[:,i+1],
'Input data', 'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)',
label=['Case {}'.format(i+1)], maxNX=4, pltaxis=[0.2, 10, 1e0, 1e4])
q.semilogY(3,tPeak, radIn[:,i+1],
'Input data', 'Temperature K', 'Radiance W/(m$^2$.sr.$\mu$m)',
label=['Case {}'.format(i+1)], maxNX=4, pltaxis=[0., 2000., 1e0, 1e4])
q.semilogY(4,tPeak, radIn[:,i+1],
'Input data', 'Temperature K', 'Radiance W/(m$^2$.sr.$\mu$m)',
label=['Case {}'.format(i+1)], maxNX=4, pltaxis=[4000., 8000., 4e1, 1e3])
Explanation: Solution
Recognise that the signature comprises two components: reflected sunlight and self emittance.
The total signature is given by
$L_{\rm tot} = \epsilon \,L_{\rm bb}(T_{\rm obj}) + (1\,-\,\epsilon)\, \psi L_{\rm bb}(T_{\rm sun})$,
where
$\psi=A_{\rm sun}/(\pi R_{\rm se}^2) =2.1757\times10^{-5}$ [sr/sr] (with $R_{\rm se}$ the sun-earth distance) follows from the geometry between the earth and the sun,
$\epsilon$ is the object surface emissivity,
$T_{\rm obj}$ the object temperature, and
$T_{\rm sun}$ is the sun surface temperature.
The three variables to be solved are
$\epsilon$,
$T_{\rm obj}$, and
$T_{\rm sun}$.
The problem is coded in mathematical form as
def bbfunc(wl,tsun,tobj,emis):
Given the two temperatures and emissivity, calculate the spectrum.
lSun = 2.175e-5 * ryplanck.planck(wl,tsun, 'el') / np.pi # W/(m2.sr.um)
lObj = ryplanck.planck(wl,tobj, 'el') / np.pi # W/(m2.sr.um)
return lSun * (1 - emis) + lObj * emis
The first step in the analysis is to plot the supplied data. Sometimes it helps to plot the data in different plotting scales. In this case the sun and object are Planck radiators (given in the problem statement), where the wavelength of peak radiance is related to the temperature by Wien's displacement law $T = 2898/\lambda_{p}$. Exploiting this fact, the radiance curves are plotted on a Wien-law scale where wavelength is converted to temperature. For an ideal Planck-law radiator the radiance peak can be used to read off the temperature on the temperature scale.
End of explanation
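As a quick illustrative sanity check of the Wien estimates (rounded values, not part of the supplied data):
print(2897.77 / 0.5)  # a peak near 0.5 um implies ~5800 K, i.e. a sun-like source
print(2897.77 / 2.9)  # a peak near 2.9 um implies ~1000 K, i.e. a hot object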
# temperatures read off from the graphs above
tLo = np.asarray([[400, 600],[900,1100],[1400,1600]])
tHi = np.asarray([[5000, 6500],[5000,6500],[5000,6500]])
#get peak wavelengths associated with these temperatures
wLo = 2897.77 / tLo
wHi = 2897.77 / tHi
print(tLo)
print(tHi)
print(wLo)
print(wHi)
Explanation: Analysing the graphs above, rough estimates of the temperatures can be formed. The object temperatures can be estimated relatively easily from the bottom-left curve: approximately 500 K, 1000 K and 1500 K. The sun temperature is not so easily estimated: 5500 K, 5900 K and 6000 K. These estimates in themselves are inaccurate for final results, but can guide subsequent analysis.
Method 1: Differentiating Spectral Radiance
The first method is a refinement of the peak-detection method used above for the first-order estimates. It relies on the mathematical fact that the locations of peaks and minima can be determined from the zero crossings of the first derivative. The derivative is calculated using the Numpy diff method. The problem with this technique is that the measurement is somewhat noisy, and the differentiation operator amplifies the noise. So we are making a noisy signal noisier and then looking for zero crossings - a method that holds tremendous promise to fail. The noisy zero crossings make it very difficult to determine the true zero crossing accurately. We will use the estimates determined above to limit the search range.
The data is noisy, so a filter was used to improve the signal to noise somewhat. There is however the risk that the filtering process may interfere with the true nature of the data, so the degree of filtering is limited. The Savitzky-Golay filter is used here. This filter is commonly used in the spectroscopy community.
In the code below the first derivative is calculated using numpy.diff and then normalised (because the absolute scale is not important). The differentiated signal is plotted for inspection. Note the noise in the signal - filtering seems to help quite a lot (but not sufficiently to count zero crossings naively).
The analysis below uses privileged problem data, but only to determine the error between the answers and the true temperatures.
First determine the approximate wavelength ranges where the estimated zero crossings occur, based on the temperatures read off from the graphs above.
End of explanation
def processTempEmis(i, data, datadiff, dataName, tempsuns, tempobjs, emiss, wldcen, wLo, wHi):
data is the radiance spectrum
datadiff is the radiance spectrum differential
dataName is the text name to be used in printout
tempsuns, tempobjs, emiss are the reference data to determine errors
wldcen is wavelength values at the differentiation samples
wLo is the low temperature spectral range containing zero crossing
wHi is the high temperature spectral range containing zero crossing
#find all zero crossings (all wavelengths) to determine inflection wavelengths
zero_crossings = np.where(np.diff(np.sign(datadiff)),1,0)
# set up spectral filters according to the estimated temperature ranges
wLofilter = np.where((wldcen >= wLo[i,1]) & (wldcen <= wLo[i,0]),1., 0.)[1:]
wHifilter = np.where((wldcen >= wHi[i,1]) & (wldcen <= wHi[i,0]),1., 0.)[1:]
#find indexes of zero crossings within the spectral filters
zcHi = np.where(zero_crossings*wHifilter)
zcLo = np.where(zero_crossings*wLofilter)
#use Wien law to determine temperature
# but this could be many samples if more than one zero crossing, take mean
bbtempHi = np.mean(2897.77 / wldcen[zcHi])
bbtempLo = np.mean(2897.77 / wldcen[zcLo])
#calculate emissivity, given the two above temperatures
lsun = 2.15e-5 * ryplanck.planck(wldcen,bbtempHi, 'el') / np.pi # W/(m2.sr.um)
lobj = ryplanck.planck(wldcen, bbtempLo, 'el') / np.pi # W/(m2.sr.um)
estEmis = np.average((data[1:] - lsun)/(lobj - lsun))
print('\nCase {} {}:'.format(i+1, dataName))
print('Estimated Tsun={:.2f} Tobj={:.2f} emis={:.4f}'.format(bbtempHi,bbtempLo,estEmis ))
print('True Tsun={:.2f} Tobj={:.2f} emis={:.4f}'.format(tempsuns[i], tempobjs[i], emiss[i]))
print('Error Tsun={:.2f}% Tobj={:.2f}% emis={:.4f}%'.format(
percent(bbtempHi, tempsuns[i]),percent(bbtempLo, tempobjs[i]),percent(estEmis, emiss[i])))
return bbtempHi, bbtempLo, estEmis
radiancesumdif = np.zeros((wl.shape[0]-1, len(emiss)))
wldiff = np.diff(radIn[:,0],axis=0).reshape(-1,1)
wldcen = (radIn[:-1,0] + radIn[1:,0]) / 2.
p = ryplot.Plotter(2,3,1,figsize=(10,10))
for i in range(0,len(emiss)):
#take derivative and normalise because scale is not important
radiancesumdif[:,i] = (np.diff(radIn[:,i+1],axis=0).reshape(-1,1)/ wldiff).reshape(-1,)
radiancesumdif[:,i] /= np.max(radiancesumdif[:,i],axis=0)
#the derivative is too noisy, so filter the signal to suppress noise
sgfiltered = ryutils.savitzkyGolay1D(radIn[:,i+1], window_size=11, order=3, deriv=0, rate=1)
sgfiltereddif = (np.diff(sgfiltered,axis=0).reshape(-1,1)/ wldiff).reshape(-1,)
sgfiltereddif /= np.max(sgfiltereddif,axis=0)
#plot
p.semilogX(1+i,wldcen, radiancesumdif[:,i],'Differentiated signal case {}'.format(i+1),
'Wavelength $\mu$m','Radiance W/(m$^2$.sr.$\mu$m)', plotCol=['r'],maxNX=4,
pltaxis=[0.2, 10, -0.5, 1.0],label=['Raw'])
p.semilogX(1+i,wldcen, sgfiltereddif,'Differentiated signal case {}'.format(i+1),
'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)',plotCol=['b'], maxNX=4,
pltaxis=[0.2, 10, -0.5, 1.0],label=['Filtered'])
#process to find the zero crossings
processTempEmis(i, radIn[:,i+1], radiancesumdif[:,i], 'Raw', tempsuns, tempobjs, emiss, wldcen, wLo, wHi)
processTempEmis(i, radIn[:,i+1], sgfiltereddif, 'Filtered', tempsuns, tempobjs, emiss, wldcen, wLo, wHi)
print(30*'-')
Explanation: The following function does the hard work of finding the maxima. It first finds all the zero crossings in the data, but there are many such crossings (at least three actual ones) because of the noise in the signal: near each actual zero crossing there is a multitude of noisy crossings, and we cannot tell the actual crossing from the noise-caused ones. The function deals with this by (1) selecting a spectral range according to the expected temperature (by the Wien law), (2) finding all the zero crossings in that spectral range, (3) using the wavelengths where the zero crossings occur to determine the temperatures of those crossings, and (4) taking the mean of the temperatures associated with all the zero crossings in the filtered spectral range.
The emissivity is determined by solving $L_{\rm tot} = \epsilon \,L_{\rm bb}(T_{\rm obj}) + (1\,-\,\epsilon)\,\psi\, L_{\rm bb}(T_{\rm sun})$ for $\epsilon$, having determined the sun and object temperatures.
Finally, print the estimated temperature and the errors relative to the original values used to construct the problem.
End of explanation
radIn = np.loadtxt(filename, skiprows=1)
for i in range(0,len(emiss)):
popt, pcov = curve_fit(bbfunc,radIn[:,0], radIn[:,i+1], p0=(6000., 1000., .5) )
print('\nCase {}:'.format(i+1))
print('Estimated Tsun={:.2f} Tobj={:.2f} emis={:.4f}'.format(popt[0],popt[1],popt[2]))
print('True Tsun={:.2f} Tobj={:.2f} emis={:.4f}'.format(tempsuns[i], tempobjs[i], emiss[i]))
print('Error Tsun={:.2f}% Tobj={:.2f}% emis={:.4f}%'.format(
percent(popt[0], tempsuns[i]),percent(popt[1], tempobjs[i]),percent(popt[2], emiss[i])))
Explanation: From the above results, it is evident that the filtered data set did not really provide a better answer: proof that filtering affects the data and subsequent results.
It can be argued that the use of a spectral filtering/selection range for each of the three cases already predetermines the final solution. Note however that the ranges were selected on the basis of the peak evident in the input data. Taking the average of the calculated zero crossings' temperatures is somewhat brute force, but appears to work reasonably well.
The method is complex and requires some fine tuning and careful algorithmic design. It is however a 'general' solution in the sense that it will work for any scenario.
Method 2: Curve fitting to a known equation
The second method is far simpler and yields better answers. The problem can be stated in the following mathematical form:
$L_{\rm tot} = \epsilon \,L_{\rm bb}(T_{\rm obj}) + (1\,-\,\epsilon)\,\psi\, L_{\rm bb}(T_{\rm sun})$. This function is implemented in the function bbfunc(). Finding the solution is then simply a matter of using the scipy.optimize.curve_fit function to find the sun temperature, object temperature and emissivity that provide the best fit. This approach is motivated by the premise that the problem statement supports a direct and accurate mathematical formulation, which can then be used for curve fitting the given data.
End of explanation
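As a small extension not in the original analysis, the second return value of curve_fit, pcov, can be used to gauge the fit quality; the square roots of its diagonal are approximate one-sigma uncertainties of the fitted parameters:
perr = np.sqrt(np.diag(pcov))  # approx. std dev of (Tsun, Tobj, emis) for the last case fitted
print(perr)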
# you only need to do this once
#!pip install --upgrade version_information
%load_ext version_information
%version_information numpy, scipy, matplotlib, pyradi
Explanation: This method is simple and very accurate, but it does require confirmation that the measured scenarios correspond with the model used in the data analysis. It is not a general solution, but specifically tuned to the equation used.
Python and module versions, and dates
End of explanation |
8,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TF-Agents Authors.
Step1: Checkpointer and PolicySaver
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: DQN agent
We are going to set up a DQN agent, just like in the previous colab. The details are hidden by default as they are not a core part of this colab, but you can click on 'SHOW CODE' to see the details.
Hyperparameters
Step3: Environment
Step4: Agent
Step5: Data Collection
Step6: Train the agent
Step8: Video Generation
Step9: Generate a video
Check the performance of the policy by generating a video.
Step10: Setup Checkpointer and PolicySaver
Now we are ready to use Checkpointer and PolicySaver.
Checkpointer
Step11: Policy Saver
Step12: Train one iteration
Step13: Save to checkpoint
Step14: Restore checkpoint
For this to work, the whole set of objects should be recreated the same way as when the checkpoint was created.
Step15: Also save policy and export to a location
Step16: The policy can be loaded without having any knowledge of what agent or network was used to create it. This makes deployment of the policy much easier.
Load the saved policy and check how it performs
Step17: Export and import
The rest of the colab will help you export / import checkpointer and policy directories such that you can continue training at a later point and deploy the model without having to train again.
Now you can go back to 'Train one iteration' and train a few more times such that you can understand the difference later on. Once you start to see slightly better results, continue below.
Step18: Create a zipped file from the checkpoint directory.
Step19: Download the zip file.
Step20: After training for some time (10-15 times), download the checkpoint zip file,
and go to "Runtime > Restart and run all" to reset the training,
and come back to this cell. Now you can upload the downloaded zip file,
and continue the training.
Step21: Once you have uploaded the checkpoint directory, go back to 'Train one iteration' to continue training, or go back to 'Generate a video' to check the performance of the loaded policy.
Alternatively, you can save the policy (model) and restore it.
Unlike checkpointer, you cannot continue with the training, but you can still deploy the model. Note that the downloaded file is much smaller than that of the checkpointer.
Step22: Upload the downloaded policy directory (exported_policy.zip) and check how the saved policy performs.
Step23: SavedModelPyTFEagerPolicy
If you don't want to use TF policy, then you can also use the saved_model directly with the Python env through the use of py_tf_eager_policy.SavedModelPyTFEagerPolicy.
Note that this only works when eager mode is enabled.
Step24: Convert policy to TFLite
See TensorFlow Lite converter for more details.
Step25: Run inference on TFLite model
See TensorFlow Lite Inference for more details. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
#@test {"skip": true}
!sudo apt-get update
!sudo apt-get install -y xvfb ffmpeg python-opengl
!pip install pyglet
!pip install 'imageio==2.4.0'
!pip install 'xvfbwrapper==0.2.9'
!pip install tf-agents[reverb]
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import io
import matplotlib
import matplotlib.pyplot as plt
import os
import shutil
import tempfile
import tensorflow as tf
import zipfile
import IPython
try:
from google.colab import files
except ImportError:
files = None
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import policy_saver
from tf_agents.policies import py_tf_eager_policy
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
tempdir = os.getenv("TEST_TMPDIR", tempfile.gettempdir())
#@test {"skip": true}
# Set up a virtual display for rendering OpenAI gym environments.
import xvfbwrapper
xvfbwrapper.Xvfb(1400, 900, 24).start()
Explanation: Checkpointer and PolicySaver
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/10_checkpointer_policysaver_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/10_checkpointer_policysaver_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/10_checkpointer_policysaver_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/10_checkpointer_policysaver_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Introduction
tf_agents.utils.common.Checkpointer is a utility to save/load the training state, policy state, and replay_buffer state to/from a local storage.
tf_agents.policies.policy_saver.PolicySaver is a tool to save/load only the policy, and is lighter than Checkpointer. You can use PolicySaver to deploy the model as well without any knowledge of the code that created the policy.
In this tutorial, we will use DQN to train a model, then use Checkpointer and PolicySaver to show how we can store and load the states and model in an interactive way. Note that we will use TF2.0's new saved_model tooling and format for PolicySaver.
Setup
If you haven't installed the following dependencies, run:
End of explanation
env_name = "CartPole-v1"
collect_steps_per_iteration = 100
replay_buffer_capacity = 100000
fc_layer_params = (100,)
batch_size = 64
learning_rate = 1e-3
log_interval = 5
num_eval_episodes = 10
eval_interval = 1000
Explanation: DQN agent
We are going to set up DQN agent, just like in the previous colab. The details are hidden by default as they are not core part of this colab, but you can click on 'SHOW CODE' to see the details.
Hyperparameters
End of explanation
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
Explanation: Environment
End of explanation
#@title
q_net = q_network.QNetwork(
train_env.observation_spec(),
train_env.action_spec(),
fc_layer_params=fc_layer_params)
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
train_env.time_step_spec(),
train_env.action_spec(),
q_network=q_net,
optimizer=optimizer,
td_errors_loss_fn=common.element_wise_squared_loss,
train_step_counter=global_step)
agent.initialize()
Explanation: Agent
End of explanation
#@title
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_capacity)
collect_driver = dynamic_step_driver.DynamicStepDriver(
train_env,
agent.collect_policy,
observers=[replay_buffer.add_batch],
num_steps=collect_steps_per_iteration)
# Initial data collection
collect_driver.run()
# Dataset generates trajectories with shape [BxTx...] where
# T = n_step_update + 1.
dataset = replay_buffer.as_dataset(
num_parallel_calls=3, sample_batch_size=batch_size,
num_steps=2).prefetch(3)
iterator = iter(dataset)
Explanation: Data Collection
End of explanation
#@title
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
def train_one_iteration():
# Collect a few steps using collect_policy and save to the replay buffer.
collect_driver.run()
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience)
iteration = agent.train_step_counter.numpy()
print ('iteration: {0} loss: {1}'.format(iteration, train_loss.loss))
Explanation: Train the agent
End of explanation
#@title
def embed_gif(gif_buffer):
Embeds a gif file in the notebook.
tag = '<img src="data:image/gif;base64,{0}"/>'.format(base64.b64encode(gif_buffer).decode())
return IPython.display.HTML(tag)
def run_episodes_and_create_video(policy, eval_tf_env, eval_py_env):
num_episodes = 3
frames = []
for _ in range(num_episodes):
time_step = eval_tf_env.reset()
frames.append(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_tf_env.step(action_step.action)
frames.append(eval_py_env.render())
gif_file = io.BytesIO()
imageio.mimsave(gif_file, frames, format='gif', fps=60)
IPython.display.display(embed_gif(gif_file.getvalue()))
Explanation: Video Generation
End of explanation
print ('global_step:')
print (global_step)
run_episodes_and_create_video(agent.policy, eval_env, eval_py_env)
Explanation: Generate a video
Check the performance of the policy by generating a video.
End of explanation
checkpoint_dir = os.path.join(tempdir, 'checkpoint')
train_checkpointer = common.Checkpointer(
ckpt_dir=checkpoint_dir,
max_to_keep=1,
agent=agent,
policy=agent.policy,
replay_buffer=replay_buffer,
global_step=global_step
)
Explanation: Setup Checkpointer and PolicySaver
Now we are ready to use Checkpointer and PolicySaver.
Checkpointer
End of explanation
policy_dir = os.path.join(tempdir, 'policy')
tf_policy_saver = policy_saver.PolicySaver(agent.policy)
Explanation: Policy Saver
End of explanation
#@test {"skip": true}
print('Training one iteration....')
train_one_iteration()
Explanation: Train one iteration
End of explanation
train_checkpointer.save(global_step)
Explanation: Save to checkpoint
End of explanation
train_checkpointer.initialize_or_restore()
global_step = tf.compat.v1.train.get_global_step()
Explanation: Restore checkpoint
For this to work, the whole set of objects should be recreated the same way as when the checkpoint was created.
End of explanation
tf_policy_saver.save(policy_dir)
Explanation: Also save policy and export to a location
End of explanation
saved_policy = tf.saved_model.load(policy_dir)
run_episodes_and_create_video(saved_policy, eval_env, eval_py_env)
Explanation: The policy can be loaded without having any knowledge of what agent or network was used to create it. This makes deployment of the policy much easier.
Load the saved policy and check how it performs
End of explanation
#@title Create zip file and upload zip file (double-click to see the code)
def create_zip_file(dirname, base_filename):
return shutil.make_archive(base_filename, 'zip', dirname)
def upload_and_unzip_file_to(dirname):
if files is None:
return
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
shutil.rmtree(dirname)
zip_files = zipfile.ZipFile(io.BytesIO(uploaded[fn]), 'r')
zip_files.extractall(dirname)
zip_files.close()
Explanation: Export and import
The rest of the colab will help you export / import checkpointer and policy directories such that you can continue training at a later point and deploy the model without having to train again.
Now you can go back to 'Train one iteration' and train a few more times such that you can understand the difference later on. Once you start to see slightly better results, continue below.
End of explanation
train_checkpointer.save(global_step)
checkpoint_zip_filename = create_zip_file(checkpoint_dir, os.path.join(tempdir, 'exported_cp'))
Explanation: Create a zipped file from the checkpoint directory.
End of explanation
#@test {"skip": true}
if files is not None:
files.download(checkpoint_zip_filename) # try again if this fails: https://github.com/googlecolab/colabtools/issues/469
Explanation: Download the zip file.
End of explanation
#@test {"skip": true}
upload_and_unzip_file_to(checkpoint_dir)
train_checkpointer.initialize_or_restore()
global_step = tf.compat.v1.train.get_global_step()
Explanation: After training for some time (10-15 times), download the checkpoint zip file,
and go to "Runtime > Restart and run all" to reset the training,
and come back to this cell. Now you can upload the downloaded zip file,
and continue the training.
End of explanation
tf_policy_saver.save(policy_dir)
policy_zip_filename = create_zip_file(policy_dir, os.path.join(tempdir, 'exported_policy'))
#@test {"skip": true}
if files is not None:
files.download(policy_zip_filename) # try again if this fails: https://github.com/googlecolab/colabtools/issues/469
Explanation: Once you have uploaded the checkpoint directory, go back to 'Train one iteration' to continue training, or go back to 'Generate a video' to check the performance of the loaded policy.
Alternatively, you can save the policy (model) and restore it.
Unlike checkpointer, you cannot continue with the training, but you can still deploy the model. Note that the downloaded file is much smaller than that of the checkpointer.
End of explanation
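As a quick illustrative check (not in the original notebook), the two exported archives can be compared on disk to confirm the size difference:
print(os.path.getsize(checkpoint_zip_filename))  # checkpoint archive, in bytes
print(os.path.getsize(policy_zip_filename))      # policy archive -- typically much smaller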
#@test {"skip": true}
upload_and_unzip_file_to(policy_dir)
saved_policy = tf.saved_model.load(policy_dir)
run_episodes_and_create_video(saved_policy, eval_env, eval_py_env)
Explanation: Upload the downloaded policy directory (exported_policy.zip) and check how the saved policy performs.
End of explanation
eager_py_policy = py_tf_eager_policy.SavedModelPyTFEagerPolicy(
policy_dir, eval_py_env.time_step_spec(), eval_py_env.action_spec())
# Note that we're passing eval_py_env not eval_env.
run_episodes_and_create_video(eager_py_policy, eval_py_env, eval_py_env)
Explanation: SavedModelPyTFEagerPolicy
If you don't want to use TF policy, then you can also use the saved_model directly with the Python env through the use of py_tf_eager_policy.SavedModelPyTFEagerPolicy.
Note that this only works when eager mode is enabled.
End of explanation
converter = tf.lite.TFLiteConverter.from_saved_model(policy_dir, signature_keys=["action"])
tflite_policy = converter.convert()
with open(os.path.join(tempdir, 'policy.tflite'), 'wb') as f:
f.write(tflite_policy)
Explanation: Convert policy to TFLite
See TensorFlow Lite converter for more details.
End of explanation
import numpy as np
interpreter = tf.lite.Interpreter(os.path.join(tempdir, 'policy.tflite'))
policy_runner = interpreter.get_signature_runner()
print(policy_runner._inputs)
policy_runner(**{
'0/discount':tf.constant(0.0),
'0/observation':tf.zeros([1,4]),
'0/reward':tf.constant(0.0),
'0/step_type':tf.constant(0)})
Explanation: Run inference on TFLite model
See TensorFlow Lite Inference for more details.
End of explanation |
8,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Making Inferences
Step1: Star Schema (facts vs. dimensions)
In our case, the individual review events are the facts and listings themselves are the dimensions.
Step2:
Step3: Pandas Resample String convention
Interesting things of note
Step4: Correlation vs. Regression
Actually pretty nearly similar mathematically...
At least inferentially
Step5: R-squared
Step6: Making Predictions | Python Code:
import pandas as pd
import matplotlib as plt
# draw plots in notebook
%matplotlib inline
# make plots SVG (higher quality)
%config InlineBackend.figure_format = 'svg'
# more time/compute intensive to parse dates. but we know we definitely have/need them
df = pd.read_csv('data/sf_listings.csv', parse_dates=['last_review'], infer_datetime_format=True)
df_reviews = pd.read_csv('data/reviews.csv', parse_dates=['date'], infer_datetime_format=True)
df_reviews.date[0]
df.head()
# display general diagnostic info
df.info()
df_reviews.head()
# index DataFrame on listing_id in order to join datasets
reindexed_df = df_reviews.set_index('listing_id')
reindexed_df.head()
# remember the original id in a column to group on
df['listing_id'] = df['id']
df_listing = df.set_index('id')
df_listing.head()
Explanation: Making Inferences: Do AirBnBs Cause Rents to Increase?
Time Series in pandas
End of explanation
# join the listing information with the review information
review_timeseries = df_listing.join(reindexed_df)
print review_timeseries.columns
review_timeseries.head()
# nothing new/interesting here...
review_timeseries.groupby('listing_id').count()['name'].hist(bins=100, figsize=(12,6));
# causes Python to crash; let's see if there is a better way
# review_timeseries.groupby(['neighbourhood','date']).count()
# let's try a pivot table...
reviews_over_time = pd.crosstab(review_timeseries.date, review_timeseries.neighbourhood)
reviews_over_time.head()
Explanation: Star Schema (facts vs. dimensions)
In our case, the individual review events are the facts and listings themselves are the dimensions.
End of explanation
# let's look at some particular neighborhoods
neighborhoods = df.neighbourhood.unique()
print neighborhoods
# a little noisy
reviews_over_time[['Mission', 'South of Market', 'Noe Valley']].plot(figsize=(12,6))
# smooth by resampling by month
reviews_over_time.resample('M').mean()[['Mission', 'South of Market', 'Noe Valley']].plot(figsize=(12,6))
Explanation:
End of explanation
# Exercise 1 Solution
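# One possible sketch of a solution (not from the original notebook; assumes
# seaborn is installed and that these numeric columns exist in sf_listings.csv):
import seaborn as sns
numeric_cols = ['price', 'minimum_nights', 'number_of_reviews',
                'reviews_per_month', 'calculated_host_listings_count',
                'availability_365']
sns.heatmap(df[numeric_cols].corr(), annot=True, cmap='coolwarm');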
Explanation: Pandas Resample String convention
Interesting things of note:
Each neighborhood has an activity spike in Fall 2014 and Summer 2015.
Likely a late summer vacation surge (since that is when SF has nicest weather :)
It is periodic and the magnitude of the increase is itself increasing (good news for AirBnB!)...
Exercise 1: Spotting Trends
Using the following functions, find which columns correlate with increased activity (# of reviews and reviews per month):
* pandas correlation function
* Seaborn Heatmaps
End of explanation
from sklearn import linear_model
features = df[['host_name', 'neighbourhood', 'room_type', 'minimum_nights','number_of_reviews', \
'calculated_host_listings_count', 'availability_365']]
labels = df['price']
# no price!
features.head()
# Categorical -> One Hot Encoding
# http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-categorical-features
dummies = pd.get_dummies(features)
# sklearn likes matrices
feature_matrix = dummies.as_matrix()
labels.as_matrix()
feature_matrix
# Initialize and Fit sklearn model
model = linear_model.LinearRegression()
clf = model.fit(feature_matrix, labels.as_matrix())
# How well did we do?
clf.score(feature_matrix, labels.as_matrix())
print "There are {0} features...".format(len(clf.coef_))
clf.coef_
Explanation: Correlation vs. Regression
Actually pretty nearly similar mathematically...
At least inferentially: http://stats.stackexchange.com/questions/2125/whats-the-difference-between-correlation-and-simple-linear-regression
Introduction to Machine Learning
slideshow!
Model Evaluation: Finding Under (or over) valued Listings
End of explanation
# Remove the name column, we are probably overfitting...
no_name = features.copy()
no_name.pop('host_name')
no_names_feature_m = pd.get_dummies(no_name).as_matrix()
model = linear_model.LinearRegression(normalize=True)
clf = model.fit(no_names_feature_m, labels.as_matrix())
# Turns out the name feature is highly predictive...
# but not very useful: https://www.kaggle.com/wiki/Leakage
clf.score(no_names_feature_m, labels.as_matrix())
len(clf.coef_)
# We need more and better features
df2 = pd.read_csv('data/listings_full.csv')
df2.columns
df2.head()
# get a snapshot of some of the columns in the center of the matrix
df2.iloc[1:5, 40:60]
# optimistically, let's just use a few key features to start. Remember Occam's razor...
select_features = df2[['host_has_profile_pic' ,'host_identity_verified', 'host_listings_count','host_response_time', 'host_acceptance_rate', 'host_is_superhost', 'transit', 'neighbourhood_cleansed','is_location_exact', 'property_type', 'room_type', 'accommodates','bathrooms','bedrooms','beds']]
select_features.head()
# more feature engineering: fill in missing data since it will break our model
select_features = select_features.fillna({'host_response_time': 'NA', 'host_acceptance_rate': '-1%'})
select_features.info()
# convert the percentage as a string into a float
select_features.host_acceptance_rate = select_features.host_acceptance_rate.str.strip('%').astype(float) / 100
# Binarize transit column... the listing is either near transit or it isn't
select_features.transit = select_features.transit.isnull()
select_features.transit
# One last fill in case we missed any nulls
dummies = pd.get_dummies(select_features).fillna(0)
feature_matrix = dummies.as_matrix()
# Price as a currency string -> price as a float
labels = df2.price.str.strip('$').str.replace(',', '').astype(float)
# initialize model again
model = linear_model.LinearRegression(normalize=True)
clf = model.fit(feature_matrix, labels)
# much better!
clf.score(feature_matrix, labels)
# a sweet spot in between over and under fitting
len(clf.coef_)
Explanation: R-squared: https://en.wikipedia.org/wiki/Coefficient_of_determination
End of explanation
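# What clf.score() is computing, spelled out by hand (a sketch):
# R^2 = 1 - SS_res / SS_tot
import numpy as np
predictions = clf.predict(feature_matrix)
ss_res = np.sum((labels - predictions) ** 2)    # residual sum of squares
ss_tot = np.sum((labels - labels.mean()) ** 2)  # total sum of squares
print(1 - ss_res / ss_tot)  # should match clf.score(feature_matrix, labels)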
# Predict what we should price listing #1100 at given its features
clf.predict(feature_matrix[1100])
# Looks like it is overpriced...
df2.iloc[1100].price
# And it shows... there are only 2 reviews per month
df2.iloc[1100]
# Whereas the top listings have 10+ reviews per month
df2.sort_values('reviews_per_month', ascending=False).head()
# Zip together our column names with our beta coefficients
coefficients = list(zip(dummies.columns, clf.coef_))  # list() so it can be sorted twice below
# Most significant
sorted(coefficients, key=lambda coef: coef[1], reverse=True)[:10]
# Least significant
sorted(coefficients, key=lambda coef: coef[1])[:10]
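# A possible extension (a sketch, not from the original notebook): rank the
# listings by residual. A strongly negative residual means the asking price is
# far below what the model predicts, i.e. a candidate undervalued listing.
df2['predicted_price'] = clf.predict(feature_matrix)
df2['residual'] = labels - df2['predicted_price']
df2.sort_values('residual')[['price', 'predicted_price', 'residual']].head()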
Explanation: Making Predictions: How should I price my Listing?!?
End of explanation |
8,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review
Step1: To access elements of a list we use [indice], where indice is an integer giving the position of the element.
Note
Step2: We can also use negative indices to access list elements, counting from the end backwards
Step3: We can also access a slice of the list by specifying ranges of values in the following format
Step4: To change the content of a list element we simply use the = operator
Step5: To build a list we can use the for statement together with the append method
Step6: We can combine append with mathematical expressions or functions to build lists such as
Step7: We can also build a list using a notation closer to mathematics
Step8: And we can add conditionals to express more interesting functions
Step9: which is equivalent to
Step10: Exercise 01
Step11: If we want to compute the sum of a list we need to follow these steps
Step12: Exercise 04
Step13: The dict type
A dictionary, or dict type, in Python represents a map of associations between two concepts.
Say we want to store information about the Brazilian states. For that we want to keep the associations Full name => Abbreviation and Abbreviation => Number of cities in the state.
In other words, we want to store
Step14: To access a value in a dictionary we use the syntax
Step15: We can insert a new association or change an existing one
Step16: If we try to access an element that does not exist, Python raises an error
Step17: To check whether a key exists in a dictionary we use the in operator
Step18: For the cases where we want to fall back to a default value when the key does not exist, we can use the .get() method
Step19: Exercise 05
Step20: Exercise 06
Step21: Exercise 07
Step22: To iterate over all keys and values of a dictionary we use the .items() method
Step23: The numpy.array type
The numpy.array type, from the numpy library, provides a vector and matrix structure for vector, matrix and linear algebra computations.
It essentially behaves like a list, except that the mathematical operators behave the way one expects vectors and matrices to behave in mathematics.
Step24: Try to guess what the following operations are doing
Step25: Accessing the elements of a vector is similar to lists
Step26: It is also possible to access ranges of values using comparisons
Step27: Exercise 08
Step28: Matrices can be created with the same syntax
Step29: numpy contains all the functions of the math library and a few more, and lets them be applied element-wise to a vector or matrix
Step30: There are also operators specific to linear algebra and vector calculus
Step31: To access a matrix element we simply give the coordinates separated by a comma, in the following format
Step32: Exercise 10 | Python Code:
lista1 = [1, 2, 3, 4, 5]
lista2 = ['um', 'dois', 'três', 'quatro', 'cinco']
lista3 = [1, 'dois', 3.0, 4, 'cinco']
print('lista1 is a list containing only int values: ', lista1)
print('lista2 is a list containing only str values: ', lista2)
print('lista3 is a list mixing values of types int, str and float: ', lista3)
Explanation: Review: Lists, Dictionaries, Arrays
This notebook reviews the basic concepts of the list, dict and numpy.array types of the Python programming language, and highlights the main differences between them.
The list type:
The list type is defined as a container of values of several types; it can hold any Python data type, including a mixture of types:
End of explanation
print('The first element of lista1 is: ', lista1[0])
print('The fourth element of lista1 is: ', lista1[3])
Explanation: To access elements of a list we use [indice], where indice is an integer giving the position of the element.
Note: the first index of a list is 0.
End of explanation
print('The last element of lista1 is: ', lista1[-1])
print('The second-to-last element of lista1 is: ', lista1[-2])
Explanation: We can also use negative indices to access list elements, counting from the end backwards:
End of explanation
print('From the second to the fourth element, step 1: ', lista1[1:4:1])
print('From the second element to the end, step 2: ', lista1[1::2])
print('From the first element to the end, step 3: ', lista1[::3])
print('From the first element to the end, backwards: ', lista1[::-1])
Explanation: We can also access a slice of the list by specifying ranges of values in the following format:
Python
lista[ pos_inicial : pos_final : passo ]
where pos_inicial is the first position of the range, pos_final is one element past the last position of the range, and passo (the step) tells how far apart the picked elements are.
If we omit one of these values, the defaults are, respectively, the first element, the last element, and step 1.
End of explanation
lista1[1] = 5
print('The new value of the second position is: ', lista1[1])
lista1[1] = lista1[1]*2
print('The second position had its value doubled: ', lista1[1])
lista1[1] = lista1[0] + lista1[2]
print('The second position is now the sum of the previous and next positions: ', lista1[1])
Explanation: To change the content of a list element we simply use the = operator:
Python
lista[ indice ] = novo_valor
where indice is the position we want to change and novo_valor is the new value to be assigned to that position of the list.
We can also compute the new value from the original one:
Python
lista[ indice ] = lista[ indice ] + var
lista[ indice ] = lista[ indice ] * var
where var can be a numeric value or a variable holding a numeric value.
End of explanation
lista = []
for i in range(1,101):
lista.append(i)
print(lista)
Explanation: To build a list we can use the for statement together with the append method:
Python
lista = []
for i in range(1,101):
    lista.append(i)
read as: start an empty list; for i in the range from $1$ to $101$ (exclusive), append the value of i to the end of the list.
End of explanation
def f(x):
return x**2 - 3*x
lista1 = []
lista2 = []
for x in range(1,101):
    lista1.append( x**2 ) # lista1 consists of x squared
    lista2.append( f(x) ) # lista2 consists of f(x) for every x in the range
print(lista1)
print()
print(lista2)
Explanation: We can combine append with mathematical expressions or functions to build lists such as:
$${ x^2 \mid \forall x \in [1,101[ }$$
$${ f(x) \mid \forall x \in [1,101[ }$$
End of explanation
lista = [f(x) for x in range(1,101)]
print(lista)
Explanation: We can also build a list using a notation closer to mathematics:
python
lista = [ f(x) for x in range(1,101) ]
read as: lista consists of the values returned by the function f() with parameter x, where x takes values from 1 up to 101 (exclusive).
End of explanation
lista = [ 2*x if x%2==0 else x**2 for x in range(1,101)]
print(lista)
Explanation: And we can add conditionals to express more interesting functions:
$$
f(x) =
\begin{cases}
2*x, & \text{if } x \text{ is even} \\
x^2, & \text{otherwise}
\end{cases}
$$
$$ lista = {f(x) \mid \forall x \in [1,101[ } $$
End of explanation
lista = []
for x in range(1,101):
if x%2==0:
lista.append(2*x)
else:
lista.append(x**2)
print(lista)
Explanation: which is equivalent to:
End of explanation
lista = [1,2,3]
for x in lista:
print(x)
Explanation: Exercise 01: Write a function GeraLista(var) that returns the following list:
$$ lista = { x^3 - 3x \mid \forall x \in [1,var] }, $$
where $var$ is a variable to be set by the user.
The for statement lets us traverse a list, assigning each value to a variable:
python
lista = [1,2,3]
for x in lista:
print(x)
End of explanation
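# One possible solution for Exercise 01 (a sketch, not the original answer key):
def GeraLista(var):
    return [x**3 - 3*x for x in range(1, var + 1)]  # x in [1, var], inclusive
print(GeraLista(5))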
def Fibonacci(n):
F = [1,1]
for i in range(2,n+1):
F.append(F[i-1] + F[i-2])
return F
print(Fibonacci(10))
Explanation: If we want to compute the sum of a list we need to follow these steps:
Create an accumulator that will hold the final result; when the list has no elements the result should be 0
Traverse every element of the list, adding that element to the accumulator
Exercise 02: Now write a function that returns the sum of the elements of a list.
With these two exercises you could solve Ex. A3 of the practical exam simply by writing:
python
Soma(GeraLista(varA))
Exercise 03: Fibonacci sequence.
Knowing that the $n$-th term of the Fibonacci sequence is defined by:
$$ F_n = F_{n-1} + F_{n-2} $$
and knowing that:
$$ F_0 = F_1 = 1 $$
Write a function, using lists, that generates the Fibonacci sequence up to term $n$.
Hint: you can start with a list containing the first two terms. Then simply append the next values of the sequence to the list, based on the previous values.
End of explanation
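# A possible solution for Exercise 02 (a sketch), following the two steps above
# and reusing the GeraLista sketch from before:
def Soma(lista):
    acumulador = 0          # an empty list sums to 0
    for x in lista:
        acumulador = acumulador + x
    return acumulador
print(Soma(GeraLista(5)))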
def MaiorMenor(lista):
maior = ???
menor = ???
for x in lista:
if x > maior:
maior = ???
if ???:
menor = x
return maior, menor
Explanation: Exercise 04: Complete the MaiorMenor function, replacing the '???', so that it returns the largest and the smallest value of a list:
End of explanation
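# One way to fill in the '???' above (a sketch): start both trackers at the
# first element, then update them while traversing the list.
def MaiorMenor(lista):
    maior = lista[0]
    menor = lista[0]
    for x in lista:
        if x > maior:
            maior = x
        if x < menor:
            menor = x
    return maior, menor
print(MaiorMenor([3, 1, 4, 1, 5, 9, 2, 6]))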
estados = {'São Paulo' : 'SP', 'Rio de Janeiro' : 'RJ', 'Minas Gerais' : 'MG'}
cidades = {'SP' : 645, 'RJ' : 92, 'MG' : 853}
Explanation: The dict type
A dictionary, or dict type, in Python represents a map of associations between two concepts.
Say we want to store information about the Brazilian states. For that we want to keep the associations Full name => Abbreviation and Abbreviation => Number of cities in the state.
In other words, we want to store:
'São Paulo' => 'SP'
'Rio de Janeiro' => 'RJ'
'SP' => 645
...
For that we use Python dictionaries, following the syntax:
python
dicionario = { chave1 : valor, chave2 : valor, ... }
where chave1, chave2, etc. are the keys of the dictionary and valor is a value associated with that key.
End of explanation
print('The abbreviation of "São Paulo" is', estados['São Paulo'])
print('The abbreviation of "Minas Gerais" is', estados['Minas Gerais'])
print('Rio de Janeiro has', cidades[estados['Rio de Janeiro']], 'cities')
Explanation: To access a value in a dictionary we use the syntax:
python
dicionario[ chave ]
End of explanation
cidades['RJ'] = cidades['RJ'] - 1
print('Rio de Janeiro lost one city and now it has: ', cidades['RJ'], 'cities')
Explanation: We can insert a new association or change an existing one:
python
dicionario[ chave ] = valor
or
python
dicionario[ chave ] = dicionario[chave] + valor
End of explanation
print(cidades['RS'])
Explanation: If we try to access an element that does not exist, Python raises an error:
End of explanation
if 'RS' in cidades:
print(cidades['RS'])
else:
    print('No such state!')
Explanation: To check whether a key exists in a dictionary we use the in operator:
End of explanation
print('Rio Grande do Sul has', cidades.get('RS', 0), 'cities')
Explanation: For the cases where we want to fall back to a default value when the key does not exist, we can use the .get() method:
python
dicionario.get( chave, valor_padrao )
End of explanation
def ContaPalavras(lista):
frequencia = {}
for palavra in lista:
frequencia[palavra] = ???
return frequencia
print(ContaPalavras(['eu', 'vou', 'tirar', 'dez', 'nessa', 'prova', 'ou', 'eu', 'reprovo']))
Explanation: Exercise 05: Complete the ContaPalavras function to count the frequency of the words in the list passed as a parameter, using the .get() method.
End of explanation
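# A possible completion (a sketch): .get(palavra, 0) supplies 0 the first time
# a word is seen, so the count can always be incremented.
def ContaPalavras(lista):
    frequencia = {}
    for palavra in lista:
        frequencia[palavra] = frequencia.get(palavra, 0) + 1
    return frequencia
print(ContaPalavras(['eu', 'vou', 'tirar', 'dez', 'nessa', 'prova', 'ou', 'eu', 'reprovo']))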
import string
def GeraCifra(n):
letras = string.ascii_lowercase
cifra = {}
for i in range(len(letras)):
cifra[letras[i]] = ???
return cifra
print(GeraCifra(2))
Explanation: Exercise 06: Complete the GeraCifra(n) function so that it builds a dictionary mapping each letter of the alphabet to the letter n positions ahead. E.g.:
n = 2
'a' => 'c'
'b' => 'd'
...
'x' => 'z'
'y' => 'a'
'z' => 'b'
Knowing that the variable letras holds all letters of the alphabet in order, and that letras[0] == 'a' and letras[1] == 'b'.
End of explanation
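# A possible completion (a sketch): the modulo wraps the end of the alphabet
# back to the start, so 'y' and 'z' map to 'a' and 'b' when n == 2.
import string
def GeraCifra(n):
    letras = string.ascii_lowercase
    cifra = {}
    for i in range(len(letras)):
        cifra[letras[i]] = letras[(i + n) % len(letras)]
    return cifra
print(GeraCifra(2))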
def Codifica(frase, n):
cifra = GeraCifra(n)
codificado = ''
for i in range(len(frase)):
codificado = codificado + ???
return codificado
code = Codifica('eu vou tirar dez', 4)
print('The encoded sentence is:', code)
print('Why can I decode it with this command?', Codifica(code, -4))
Explanation: Exercise 07: Complete the Codifica(frase, n) function so that it replaces each letter of the string frase with the corresponding letter from the map generated by GeraCifra(n).
End of explanation
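# A possible completion (a sketch). Note the use of .get() with a default so
# that characters absent from the cipher (like spaces) pass through unchanged;
# decoding with -n works because (i + n - n) % 26 == i.
def Codifica(frase, n):
    cifra = GeraCifra(n)
    codificado = ''
    for i in range(len(frase)):
        codificado = codificado + cifra.get(frase[i], frase[i])
    return codificado
code = Codifica('eu vou tirar dez', 4)
print(code, '->', Codifica(code, -4))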
dicionario = { 'a':1, 'b':2, 'c':3 }
for chave, valor in dicionario.items():
print(chave, '=>', valor)
Explanation: To iterate over all keys and values of a dictionary we use the .items() method:
End of explanation
import numpy as np
v1 = np.array( range(20) )
v2 = np.array( range(100,120) )
print(v1)
print(v2)
Explanation: The numpy.array type
The numpy.array type, from the numpy library, provides a vector and matrix structure for vector, matrix and linear algebra computations.
It essentially behaves like a list, except that the mathematical operators behave the way one expects vectors and matrices to behave in mathematics.
End of explanation
print( v1 + 1 )
print()
print( v1 * 10 )
print()
print( v1 + v2 )
print()
print( v1 * v2 )
Explanation: Try to guess what the following operations are doing:
End of explanation
print('The first element of v1 is', v1[0])
print('The last element of v1 is', v1[-1])
print('The elements from 2 up to 10, step 2, of v1 are', v1[2:11:2])
Explanation: Accessing the elements of a vector is similar to lists:
End of explanation
v1 = np.array([1,10,100,1000,10000,5000,500,50,5])
v2 = np.array([1,0,1,0,1,0,1,0,1])
print( v1[ v2==1 ] )
Explanation: It is also possible to access ranges of values using comparisons:
python
vetor[ vetor > 10 ] # only the values greater than 10
vetor1[ vetor2 == 1 ] # only the positions where vetor2 holds the value 1
End of explanation
def VerificaPrimo(x):
for i in range(2,x):
        if x%i == 0: # if we find an i that divides x, return False
return False
    return True # if no divisor was found, return True
def Primos(n):
    primos = np.ones(n+1) # vector of size n+1 with all values equal to 1
numeros = np.arange(n+1)
primos[???] = 0
for atual in range(2,n+1):
if primos[???] == 1:
if VerificaPrimo(???):
primos[???] = 0
return numeros[???]
print(Primos(25))
Explanation: Exercise 08: Prime numbers with numpy.
Given the function VerificaPrimo(x), which returns True if the number x is prime and False otherwise.
Write a function Primos(n) that returns the prime numbers between 1 and n. To do so, implement the following algorithm:
Create a numpy vector of size n+1 with all values equal to 1
This vector will indicate which elements are prime (1) and which are not (0).
Assign the value 0 to positions 0 and 1, indicating that they are not prime
For every number from 2 up to n:
If the number is still marked as prime
If it really is prime (check with the VerificaPrimo function)
mark all of its multiples as not prime
Return the numbers still marked as prime
End of explanation
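# A possible completion of Primos (a sketch), following the algorithm above;
# the '???' template earlier is intentionally not runnable as-is:
def Primos(n):
    primos = np.ones(n+1)        # 1 = "still considered prime"
    numeros = np.arange(n+1)
    primos[:2] = 0               # 0 and 1 are not prime
    for atual in range(2, n+1):
        if primos[atual] == 1:
            if VerificaPrimo(atual):
                primos[2*atual::atual] = 0   # mark the multiples, not atual itself
    return numeros[primos == 1]
print(Primos(25))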
matriz = np.array( [[1, 2, 3], [4, 5, 6], [7, 8, 9]] )
print(matriz)
print(matriz+1)
print()
print(matriz*2)
Explanation: Matrices can be created with the same syntax:
python
matriz = np.array( [[a11, a12, a13], [a21, a22, a23]] )
End of explanation
matriz2 = np.power(matriz, 2)
print(matriz2)
print()
matriz3 = np.sqrt(matriz)
print(matriz3)
print(matriz + matriz2)
print()
print(matriz * matriz2)
Explanation: numpy contains all the functions of the math library and a few more, and lets them be applied element-wise to a vector or matrix:
End of explanation
print(np.dot(v1, v2)) # inner (dot) product
print()
print(np.outer(v1[:3], v2[:3])) # outer product of the first 3 elements of each vector
print(matriz @ matriz2) # matrix multiplication
print()
print(np.linalg.inv(matriz)) # inverse of the matrix
print()
print(matriz.T) # transpose of the matrix
Explanation: There are also operators specific to linear algebra and vector calculus:
End of explanation
def GeraID(nomes):
id = 0
mapa = {}
for nome in nomes:
mapa[???] = ???
id = ???
return mapa
Explanation: To access a matrix element we simply give the coordinates separated by a comma, in the following format:
python
matriz[ i,j ]
Exercise 09: Let's build our own Facebook. We first create a dictionary holding the relation between Users and Identification Numbers. The identification number is a sequential number starting at zero.
Write a function that receives a list of names and returns a dictionary where the key is a name and the value is an identification number.
End of explanation
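# A possible completion (a sketch): sequential ids starting at zero.
def GeraID(nomes):
    id = 0
    mapa = {}
    for nome in nomes:
        mapa[nome] = id
        id = id + 1
    return mapa
print(GeraID(['Jane', 'Ben', 'Tom']))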
def MatrizAmizade( relacoes ):
n = len(relacoes)
Amizades = np.zeros( (n,n) )
for relacao in relacoes:
Amizades[ ??? ] = 1
return Amizades
Explanation: Exercise 10: Now let's write a function that receives a list of friendship relations in the following format:
[ (nome1, nome2), (nome3, nome4), ... ], meaning that nome1 and nome2 are friends
and the function must return an n x n matrix in which element i,j holds the value 1 whenever the relation (nome1, nome2) appears in the list.
End of explanation |
8,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Data Algorithms Quick Reference
Table Of Contents
<a href="#1.-Manually-Consuming-an-Iterator">Manually Consuming an Iterator</a>
<a href="#2.-Delegating-Iterator">Delegating Iterator</a>
<a href="#3.-Map">Map</a>
<a href="#4.-Filter">Filter</a>
<a href="#5.-Named-Slices">Named Slices</a>
<a href="#6.-zip">zip</a>
<a href="#7.-itemgetter">itemgetter</a>
<a href="#8.-attrgetter">attrgetter</a>
<a href="#9.-groupby">groupby</a>
<a href="#10.-Generator-Expressions">Generator Expressions</a>
<a href="#11.-compress">compress</a>
<a href="#12.-reversed">reversed</a>
<a href="#13.-Generators-with-State">Generators with State</a>
<a href="#14.-slice-and-dropwhile">slice and dropwhile</a>
<a href="#15.-Permutations-and-Combinations-of-Elements">Permutations and Combinations of Elements</a>
<a href="#16.-Iterating-with-Indexes">Iterating with Indexes</a>
<a href="#17.-chain">chain</a>
<a href="#18.-Flatten-a-Nested-Sequence">Flatten a Nested Sequence</a>
<a href="#19.-Merging-Presorted-Iterables">Merging Presorted Iterables</a>
1. Manually Consuming an Iterator
Step1: 2. Delegating Iterator
Step2: 3. Map
map applies a function to every element of a sequence and returns an iterator of elements
Step3: 4. Filter
filter returns an iterator containing the elements from a sequence for which a condition is True
Step4: 5. Named Slices
Step5: 6. zip
Step6: zip can only be iterated over once!
Step7: 7. itemgetter
Step8: 8. attrgetter
Step9: 9. groupby
The groupby() function works by scanning a sequence and finding sequential “runs”
of identical values (or values returned by the given key function). On each iteration, it
returns the value along with an iterator that produces all of the items in a group with
the same value.
Step10: 10. Generator Expressions
Step11: 11. compress
itertools.compress() takes an iterable and an accompanying Boolean selector sequence as input. As output, it gives you all of the items in the iterable where the corresponding element in the selector is True.
Step12: 12. reversed
Step13: 13. Generators with State
Step14: 14. islice and dropwhile
Step15: 15. Permutations and Combinations of Elements
Step16: 16. Iterating with Indexes
Step17: 17. chain
Step18: 18. Flatten a Nested Sequence
Step19: 19. Merging Presorted Iterables | Python Code:
items = [1, 2, 3]
# Get the iterator
it = iter(items) # Invokes items.__iter__()
# Run the iterator
next(it) # Invokes it.__next__()
next(it)
next(it)
# if you uncommented this line it would raise a StopIteration exception
# next(it)
Explanation: Python Data Algorithms Quick Reference
Table Of Contents
<a href="#1.-Manually-Consuming-an-Iterator">Manually Consuming an Iterator</a>
<a href="#2.-Delegating-Iterator">Delegating Iterator</a>
<a href="#3.-Map">Map</a>
<a href="#4.-Filter">Filter</a>
<a href="#5.-Named-Slices">Named Slices</a>
<a href="#6.-zip">zip</a>
<a href="#7.-itemgetter">itemgetter</a>
<a href="#8.-attrgetter">attrgetter</a>
<a href="#9.-groupby">groupby</a>
<a href="#10.-Generator-Expressions">Generator Expressions</a>
<a href="#11.-compress">compress</a>
<a href="#12.-reversed">reversed</a>
<a href="#13.-Generators-with-State">Generators with State</a>
<a href="#14.-slice-and-dropwhile">slice and dropwhile</a>
<a href="#15.-Permutations-and-Combinations-of-Elements">Permutations and Combinations of Elements</a>
<a href="#16.-Iterating-with-Indexes">Iterating with Indexes</a>
<a href="#17.-chain">chain</a>
<a href="#18.-Flatten-a-Nested-Sequence">Flatten a Nested Sequence</a>
<a href="#19.-Merging-Presorted-Iterables">Merging Presorted Iterables</a>
1. Manually Consuming an Iterator
End of explanation
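# A small addition (a sketch): next() also accepts a default value, which
# avoids having to handle StopIteration at the end of the iterator.
it = iter([1, 2, 3])
print(next(it, None), next(it, None), next(it, None), next(it, None))
# the fourth call yields None instead of raising StopIteration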
# if you write a container class, and want to expose an iterator over an internal collection use the __iter()__ method
class Node:
def __init__(self):
self._children = [1,2,3]
def __iter__(self):
return iter(self._children)
root = Node()
for x in root:
print(x)
Explanation: 2. Delegating Iterator
End of explanation
simpsons = ['homer', 'marge', 'bart']
map(len, simpsons) # returns an iterator over [5, 5, 4]
# equivalent list comprehension
[len(word) for word in simpsons]
map(lambda word: word[-1], simpsons) # returns an iterator over ['r', 'e', 't']
# equivalent list comprehension
[word[-1] for word in simpsons]
Explanation: 3. Map
map applies a function to every element of a sequence and returns an iterator of elements
End of explanation
nums = range(5)
filter(lambda x: x % 2 == 0, nums) # returns an iterator over [0, 2, 4]
# equivalent list comprehension
[num for num in nums if num % 2 == 0]
Explanation: 4. Filter
filter returns an iterator containing the elements from a sequence for which a condition is True:
End of explanation
###### 0123456789012345678901234567890123456789012345678901234567890'
record = '....................100 .......513.25 ..........'
SHARES = slice(20,32)
PRICE = slice(40,48)
cost = int(record[SHARES]) * float(record[PRICE])
cost
Explanation: 5. Named Slices
End of explanation
# zip() allows you to create an iterable view over a tuple created out of two separate iterable views
prices = { 'ACME' : 45.23, 'AAPL': 612.78, 'IBM': 205.55, 'HPQ' : 37.20, 'FB' : 10.75 }
min_price = min(zip(prices.values(), prices.keys())) #(10.75, 'FB')
max((zip(prices.values(), prices.keys())))
Explanation: 6. zip
End of explanation
prices_and_names = zip(prices.values(), prices.keys())
print(min(prices_and_names))
# running the following code would fail
#print(min(prices_and_names))
# zip stops as soon as any individual iterator ends (it iterates only until the end of the shortest sequence)
a = [1, 2, 3]
b = ['w', 'x', 'y', 'z']
for i in zip(a,b):
print(i)
# use zip_longest to keep iterating through longer sequences
from itertools import zip_longest
for i in zip_longest(a,b):
print(i)
# zip can run over more then 2 sequences
c = ['aaa', 'bbb', 'ccc']
for i in zip(a,b,c):
print(i)
Explanation: zip can only be iterated over once!
End of explanation
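# Sketch: materializing the zip into a list gives a reusable sequence
# (itertools.tee is an alternative when the data is too large to store).
prices_and_names = list(zip(prices.values(), prices.keys()))
print(min(prices_and_names))
print(max(prices_and_names))  # works a second time, unlike the bare zip object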
from operator import itemgetter
rows = [
{'fname': 'Brian', 'lname': 'Jones', 'uid': 1003},
{'fname': 'David', 'lname': 'Beazley', 'uid': 1002},
{'fname': 'John', 'lname': 'Cleese', 'uid': 1001},
{'fname': 'Big', 'lname': 'Jones', 'uid': 1004}
]
rows_by_fname = sorted(rows, key=itemgetter('fname'))
rows_by_fname
rows_by_uid = sorted(rows, key=itemgetter('uid'))
rows_by_uid
# itemgetter() function can also accept multiple keys
rows_by_lfname = sorted(rows, key=itemgetter('lname','fname'))
rows_by_lfname
Explanation: 7. itemgetter
End of explanation
from operator import attrgetter
# used to sort objects that don't natively support comparison
class User:
def __init__(self, user_id):
self.user_id = user_id
def __repr__(self):
return 'User({})'.format(self.user_id)
users = [User(23), User(3), User(99)]
users
sorted(users, key=attrgetter('user_id'))
min(users, key=attrgetter('user_id'))
Explanation: 8. attrgetter
End of explanation
from operator import itemgetter
from itertools import groupby
rows = [
{'address': '5412 N CLARK', 'date': '07/01/2012'},
{'address': '5148 N CLARK', 'date': '07/04/2012'},
{'address': '5800 E 58TH', 'date': '07/02/2012'},
{'address': '2122 N CLARK', 'date': '07/03/2012'},
{'address': '5645 N RAVENSWOOD', 'date': '07/02/2012'},
{'address': '1060 W ADDISON', 'date': '07/02/2012'},
{'address': '4801 N BROADWAY', 'date': '07/01/2012'},
{'address': '1039 W GRANVILLE', 'date': '07/04/2012'},
]
# important! must sort data on key field first!
rows.sort(key=itemgetter('date'))
#iterate in groups
for date, items in groupby(rows, key=itemgetter('date')):
print(date)
for i in items:
print(' %s' % i)
Explanation: 9. groupby
The groupby() function works by scanning a sequence and finding sequential “runs”
of identical values (or values returned by the given key function). On each iteration, it
returns the value along with an iterator that produces all of the items in a group with
the same value.
End of explanation
mylist = [1, 4, -5, 10, -7, 2, 3, -1]
positives = (n for n in mylist if n > 0)
positives
for x in positives:
print(x)
nums = [1, 2, 3, 4, 5]
sum(x * x for x in nums)
# Output a tuple as CSV
s = ('ACME', 50, 123.45)
','.join(str(x) for x in s)
# Determine if any .py files exist in a directory
import os
files = os.listdir('.')
if any(name.endswith('.py') for name in files):
print('There be python!')
else:
print('Sorry, no python.')
# Data reduction across fields of a data structure
portfolio = [
{'name':'GOOG', 'shares': 50},
{'name':'YHOO', 'shares': 75},
{'name':'AOL', 'shares': 20},
{'name':'SCOX', 'shares': 65}
]
min(s['shares'] for s in portfolio)
s = sum((x * x for x in nums)) # Pass generator-expr as argument
s = sum(x * x for x in nums) # More elegant syntax
s
Explanation: 10. Generator Expressions
End of explanation
from itertools import compress
addresses = [
'5412 N CLARK',
'5148 N CLARK',
'5800 E 58TH',
'2122 N CLARK'
'5645 N RAVENSWOOD',
'1060 W ADDISON',
'4801 N BROADWAY',
'1039 W GRANVILLE',
]
counts = [ 0, 3, 10, 4, 1, 7, 6, 1]
more5 = [n > 5 for n in counts]
more5
list(compress(addresses, more5))
Explanation: 11. compress
itertools.compress() takes an iterable and an accompanying Boolean selector sequence as input. As output, it gives you all of the items in the iterable where the corresponding element in the selector is True.
End of explanation
#iterates in reverse
a = [1, 2, 3, 4]
for x in reversed(a):
print(x)
#you can customize the behavior of reversed for your class by implementing __reversed()__ method
class Counter:
def __init__(self, start):
self.start = start
# Forward iterator
def __iter__(self):
n = 1
while n <= self.start:
yield n
n += 1
# Reverse iterator
def __reversed__(self):
n = self.start
while n > 0:
yield n
n -= 1
foo = Counter(5)
for x in reversed(foo):
print(x)
Explanation: 12. reversed
End of explanation
# To expose state available at each step of iteration, use a classs that implements __iter__()
class countingiterator:
def __init__(self, items):
self.items=items
def __iter__(self):
self.clear_count()
for item in self.items:
self.count+=1
yield item
def clear_count(self):
self.count=0
foo = countingiterator(["aaa","bbb","ccc"])
for i in foo:
print("{}:{}".format(foo.count, i))
Explanation: 13. Generators with State
End of explanation
# itertools.islice allows slicing of iterators
def count(n):
while True:
yield n
n += 1
c=count(0)
#the next line would fail
# c[10:20]
import itertools
for x in itertools.islice(c,10,15):
print(x)
c=count(0)
for x in itertools.islice(c, 10, 15, 2):
print(x)
# if you don't know how many to skip, but can define a skip condition, use dropwhile()
from itertools import dropwhile
foo = ['#','#','#','#','aaa','bbb','#','ccc']
def getstrings(f):
for i in f:
yield i
for ch in dropwhile(lambda ch: ch.startswith('#'), getstrings(foo)):
print(ch)
Explanation: 14. islice and dropwhile
End of explanation
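# Companion sketch: takewhile is the mirror image of dropwhile -- it yields
# items only while the condition holds, then stops at the first failure.
from itertools import takewhile
print(list(takewhile(lambda ch: ch.startswith('#'), foo)))  # leading '#' items only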
from itertools import permutations
items = ['a', 'b', 'c']
for p in permutations(items):
print(p)
# for smaller subset permutations
for p in permutations(items,2):
print(p)
# itertools.combinations ignores element order in creating unique sets
from itertools import combinations
for c in combinations(items, 3):
print(c)
for c in combinations(items, 2):
print(c)
# itertools.combinations_with_replacement() will not remove an item from the list of possible candidates after it is chosen
# in other words, the same value can occur more then once
from itertools import combinations_with_replacement
for c in combinations_with_replacement(items, 3):
print(c)
Explanation: 15. Permutations and Combinations of Elements
End of explanation
# enumerate returns the iterated item and an index
my_list = ['a', 'b', 'c']
for idx, val in enumerate(my_list):
print(idx, val)
# pass a starting index to enumerate
for idx, val in enumerate(my_list, 7):
print(idx, val)
Explanation: 16. Iterating with Indexes
End of explanation
# chain iterates over several sequences, one after the other
# making them look like one long sequence
from itertools import chain
a = [1, 2]
b = ['x', 'y', 'z']
for x in chain(a, b):
print(x)
Explanation: 17. chain
End of explanation
# you want to traverse a sequence with nested sub sequences as one big sequence
from collections.abc import Iterable  # Iterable lives in collections.abc on Python 3 (required since 3.10)
def flatten(items, ignore_types=(str, bytes)):
for x in items:
if isinstance(x, Iterable) and not isinstance(x, ignore_types): # ignore types treats iterable string/bytes as simple values
yield from flatten(x)
else:
yield x
items = [1, 2, [3, 4, [5, 6], 7], 8]
for x in flatten(items):
print(x)
Explanation: 18. Flatten a Nested Sequence
End of explanation
import heapq
a = [1, 4, 7]
b = [2, 5, 6]
for c in heapq.merge(a, b):
print(c)
Explanation: 19. Merging Presorted Iterables
End of explanation |
8,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous Target Decoding with SPoC
Source Power Comodulation (SPoC)
Step1: Plot the contributions to the detected components (i.e., the forward model) | Python Code:
# Author: Alexandre Barachant <alexandre.barachant@gmail.com>
# Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne import Epochs
from mne.decoding import SPoC
from mne.datasets.fieldtrip_cmc import data_path
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
# Define parameters
fname = data_path() / 'SubjectCMC.ds'
raw = mne.io.read_raw_ctf(fname)
raw.crop(50., 200.) # crop for memory purposes
# Filter muscular activity to only keep high frequencies
emg = raw.copy().pick_channels(['EMGlft']).load_data()
emg.filter(20., None)
# Filter MEG data to focus on beta band
raw.pick_types(meg=True, ref_meg=True, eeg=False, eog=False).load_data()
raw.filter(15., 30.)
# Build epochs as sliding windows over the continuous raw file
events = mne.make_fixed_length_events(raw, id=1, duration=0.75)
# Epoch length is 1.5 second
meg_epochs = Epochs(raw, events, tmin=0., tmax=1.5, baseline=None,
detrend=1, decim=12)
emg_epochs = Epochs(emg, events, tmin=0., tmax=1.5, baseline=None)
# Prepare classification
X = meg_epochs.get_data()
y = emg_epochs.get_data().var(axis=2)[:, 0] # target is EMG power
# Classification pipeline with SPoC spatial filtering and Ridge Regression
spoc = SPoC(n_components=2, log=True, reg='oas', rank='full')
clf = make_pipeline(spoc, Ridge())
# Define a two fold cross-validation
cv = KFold(n_splits=2, shuffle=False)
# Run cross-validation
y_preds = cross_val_predict(clf, X, y, cv=cv)
# Plot the True EMG power and the EMG power predicted from MEG data
fig, ax = plt.subplots(1, 1, figsize=[10, 4])
times = raw.times[meg_epochs.events[:, 0] - raw.first_samp]
ax.plot(times, y_preds, color='b', label='Predicted EMG')
ax.plot(times, y, color='r', label='True EMG')
ax.set_xlabel('Time (s)')
ax.set_ylabel('EMG Power')
ax.set_title('SPoC MEG Predictions')
plt.legend()
mne.viz.tight_layout()
plt.show()
Explanation: Continuous Target Decoding with SPoC
Source Power Comodulation (SPoC) :footcite:DahneEtAl2014 makes it possible to
identify the composition of orthogonal spatial filters that maximally
correlate with a continuous target.
SPoC can be seen as an extension of the CSP for continuous variables.
Here, SPoC is applied to decode the (continuous) fluctuation of an
electromyogram from MEG beta activity, using data from the
Cortico-Muscular Coherence example of FieldTrip
End of explanation
spoc.fit(X, y)
spoc.plot_patterns(meg_epochs.info)
Explanation: Plot the contributions to the detected components (i.e., the forward model)
End of explanation |
8,097 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FDR correction on T-test on sensor data
One tests if the evoked response significantly deviates from 0.
Multiple comparison problem is addressed with
False Discovery Rate (FDR) correction.
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Compute statistic
Step4: Plot | Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.stats import bonferroni_correction, fdr_correction
print(__doc__)
Explanation: FDR correction on T-test on sensor data
One tests if the evoked response significantly deviates from 0.
Multiple comparison problem is addressed with
False Discovery Rate (FDR) correction.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)[:30]
channel = 'MEG 1332' # include only this channel in analysis
include = [channel]
Explanation: Set parameters
End of explanation
picks = mne.pick_types(raw.info, meg=False, eog=True, include=include,
exclude='bads')
event_id = 1
reject = dict(grad=4000e-13, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
X = epochs.get_data() # as 3D matrix
X = X[:, 0, :] # take only one channel to get a 2D array
Explanation: Read epochs for the channel of interest
End of explanation
T, pval = stats.ttest_1samp(X, 0)
alpha = 0.05
n_samples, n_tests = X.shape
threshold_uncorrected = stats.t.ppf(1.0 - alpha, n_samples - 1)
reject_bonferroni, pval_bonferroni = bonferroni_correction(pval, alpha=alpha)
threshold_bonferroni = stats.t.ppf(1.0 - alpha / n_tests, n_samples - 1)
reject_fdr, pval_fdr = fdr_correction(pval, alpha=alpha, method='indep')
threshold_fdr = np.min(np.abs(T)[reject_fdr])
Explanation: Compute statistic
End of explanation
times = 1e3 * epochs.times
plt.close('all')
plt.plot(times, T, 'k', label='T-stat')
xmin, xmax = plt.xlim()
plt.hlines(threshold_uncorrected, xmin, xmax, linestyle='--', colors='k',
label='p=0.05 (uncorrected)', linewidth=2)
plt.hlines(threshold_bonferroni, xmin, xmax, linestyle='--', colors='r',
label='p=0.05 (Bonferroni)', linewidth=2)
plt.hlines(threshold_fdr, xmin, xmax, linestyle='--', colors='b',
label='p=0.05 (FDR)', linewidth=2)
plt.legend()
plt.xlabel("Time (ms)")
plt.ylabel("T-stat")
plt.show()
Explanation: Plot
End of explanation |
8,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ml - 2016 - ENSAE Competition - First models
A competition was run as part of the ENSAE course "Python pour un Data Scientist". This notebook helps you get started with the data and walks through a first logit model.
Step1: The first step is to reshape the data so that the statsmodels and scikit-learn functions behave the way we want
Step2: Preparing the data for the regression
To analyse the data in the dataset properly, a few manipulations are needed.
For instance, the SEX, EDUCATION and MARRIAGE variables must be turned into dummy (indicator) variables so that they are not treated as continuous variables in the model.
Step3: Logistic regression with statsmodels
For our first model, let's see how the statsmodels module works
By default, the logit regression in statsmodels has no beta zero (intercept)
Step4: Prediction on the test set
We will predict the probabilities of payment default on our test set.
We first have to transform the test set the same way we transformed the training set.
Step5: Now that the test set is transformed as well, we apply the results of our model to this table using the predict function
Step6: We find a mean default rate of 22%, very close to the rate observed in the training set
Step7: Logistic regression with scikit-learn
We now use the scikit-learn module to estimate the same model as before and compare the results.
Here there is no need to add a variable equal to 1 (the intercept) because scikit-learn includes an intercept by default.
Step8: Unlike statsmodels, scikit-learn does not provide a nice results table | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.ml - 2016 - ENSAE Competition - First models
A competition was run as part of the ENSAE course "Python pour un Data Scientist". This notebook helps you get started with the data and walks through a first logit model.
End of explanation
from pyensae.datasource import download_data
download_data("ensae_competition_2016.zip",
url="https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/competitions/2016_ENSAE_2A/")
import pandas as pd
import statsmodels.api as sm
import pylab as pl
import numpy as np
fichier_train = "./ensae_competition_train.txt"
fichier_test = "./ensae_competition_test_X.txt"
df = pd.read_csv(fichier_train, header=[0,1], sep="\t", index_col=0)
Explanation: The first step is to reshape the data so that the statsmodels and scikit-learn functions behave the way we want
End of explanation
#### Gender dummies
df['X2'] = df['X2'].applymap(str)
gender_dummies = pd.get_dummies(df['X2'] )
### education dummies
df['X3'] = df['X3'].applymap(str)
educ_dummies = pd.get_dummies(df['X3'] )
#### marriage dummies
df['X4'] = df['X4'].applymap(str)
mariage_dummies = pd.get_dummies(df['X4'] )
### We also drop the multi-level column index from the table
df.columns = df.columns.droplevel(0)
#### then we join the 3 tables together
data = df.join(gender_dummies).join(educ_dummies).join(mariage_dummies)
data.head(n=2)
Explanation: Preparing the data for the regression
To analyse the data in the dataset properly, a few manipulations are needed.
For instance, the SEX, EDUCATION and MARRIAGE variables must be turned into dummy (indicator) variables so that they are not treated as continuous variables in the model.
End of explanation
# first step for this module: the beta zero (the intercept) has to be added by hand
data['intercept'] = 1.0
data.rename(columns = {'default payment next month' : "Y"}, inplace = True)
data.columns
# variable = ['AGE', 'BILL_AMT1', 'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4',
# 'BILL_AMT5', 'BILL_AMT6', 'LIMIT_BAL', 'PAY_0',
# 'PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6', 'PAY_AMT1', 'PAY_AMT2',
# 'PAY_AMT3', 'PAY_AMT4', 'PAY_AMT5', 'PAY_AMT6', 'SEX_1',
# 'EDUCATION_0', 'EDUCATION_1', 'EDUCATION_2', 'EDUCATION_3',
# 'EDUCATION_4', 'EDUCATION_5', 'MARRIAGE_0', 'MARRIAGE_1',
# 'MARRIAGE_2', 'intercept']
train_cols = ["SEX_1", "AGE", "MARRIAGE_0", 'PAY_0','intercept']
# This cell is only needed if you use scipy 1.0 together with statsmodels 0.8.
from pymyinstall.fix import fix_scipy10_for_statsmodels08
fix_scipy10_for_statsmodels08()
logit = sm.Logit(data['Y'], data[train_cols].astype(float))
# fit the model
result = logit.fit()
print(result.summary())
Explanation: Logistic regression with statsmodels
For our first model, let's see how the statsmodels module works.
By default, the logit regression in statsmodels has no beta zero (intercept): we therefore have to add it ourselves
End of explanation
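# A possible follow-up (a sketch, not part of the original notebook): turn the
# estimated coefficients into odds ratios with their confidence intervals.
odds = np.exp(result.params)
conf = np.exp(result.conf_int())
conf['odds ratio'] = odds
print(conf)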
data_test = pd.read_csv(fichier_test, header=[0,1], sep="\t", index_col=0)
#### Gender dummies
data_test['X2'] = data_test['X2'].applymap(str)
gender_dummies_test = pd.get_dummies(data_test['X2'] )
### education dummies
data_test['X3'] = data_test['X3'].applymap(str)
educ_dummies_test = pd.get_dummies(data_test['X3'] )
#### marriage dummies
data_test['X4'] = data_test['X4'].applymap(str)
mariage_dummies_test = pd.get_dummies(data_test['X4'] )
### We also drop the multi-level column index from the table
data_test.columns = data_test.columns.droplevel(0)
#### then we join the 3 tables together
data_test = data_test.join(gender_dummies_test).join(educ_dummies_test).join(mariage_dummies_test)
data_test['intercept'] = 1.0
data_test[train_cols].head()
Explanation: Prediction on the test set
We will predict the probabilities of payment default on our test set.
We first have to transform the test set the same way we transformed the training set.
End of explanation
data_test['prediction_statsmodel'] = result.predict(data_test[train_cols])
data_test['prediction_statsmodel'].describe()
Explanation: Now that the test set is transformed as well, we apply the results of our model to this table using the predict function
End of explanation
# then we export it
data_test['prediction_statsmodel'].to_csv("./answer.csv", index=False)
Explanation: We find a mean default rate of 22%, very close to the rate observed in the training set
End of explanation
from sklearn import linear_model
logistic = linear_model.LogisticRegression()
print("l'estimation des coefficients", logistic.fit(data[train_cols], data['Y']).coef_, "\n")
print("l'intercept : ", logistic.fit(data[train_cols], data['Y']).intercept_)
Explanation: Logistic regression with scikit-learn
We now use the scikit-learn module to estimate the same model as before and compare the results.
Here there is no need to add a variable equal to 1 (the intercept) because scikit-learn includes an intercept by default.
End of explanation
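# Sketch (an addition, not in the original notebook): put the two sets of
# coefficients side by side. They will be close but not identical, because
# scikit-learn applies L2 regularization by default and because the constant
# 'intercept' column is passed to it here as an ordinary feature.
comparison = pd.DataFrame({'statsmodels': result.params.values,
                           'sklearn': logistic.coef_[0]}, index=train_cols)
print(comparison)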
logistic.predict_proba(data_test[train_cols])
# to compute the mean predicted default rate
logistic.predict_proba(data_test[train_cols]).mean(axis=0)
# we find 22% again
Explanation: Unlike statsmodels, scikit-learn does not provide a nice results table: only the arrays holding the coefficients.
For the detail of the p-values and confidence intervals, you have to recompute them by hand.
On the other hand, the prediction function is there, and it returns the probability of Y = 0 and of Y = 1.
End of explanation |
8,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graphs and visualization
A lot of the joy of digital humanities comes in handling our material in new ways, so that we see things we wouldn't have seen before. Quite literally.
Some of the most useful tools for DH work are graphing tools! Today we will look at the basics of what a graph is and how you might build one, both manually and programmatically, and then name the tools to look into if you want to know more.
Tools you'll need
Graphviz
Step1: Then you do this every time you want to make a graph. The -f svg says that it should make an SVG image, which is what I recommend.
Step2: Now maybe Tom has a friend too
Step3: ...And so on. But what if we do want a nice symmetrical undirected graph? That is even simpler. Instead of digraph we say graph, and instead of describing the connections with -> we use -- instead. If we have a model like Facebook where friendship is always two-way, we can do this
Step4: You can change the font, the shape of the nodes, the colors of the links, and so on by setting attributes in square brackets. Attributes can be set for the graph, for all nodes, for all edges, and for individual nodes and edges. All the different attribute options can be read about here.
Step5: Of course, this would hardly be fun if we couldn't do it programmatically!
Building graphs with Graphviz + Python
Now we are going to make a few graphs, not by writing out dot, but by making a graph object that holds our nodes and edges. We do this with the graphviz module.
Step6: We make a new directed graph with graphviz.Digraph(), and a new undirected graph with graphviz.Graph().
Step7: Let's make a social network graph of five friends, all of whom like each other. But instead of typing out all those
Anna -> Ben
sorts of lines, we will let the program do that for us.
Step8: And here is a little iPython magic function so that we can actually make the graph display right here in the notebook. This means that, instead of copy-pasting what you see above into a new cell, you can just ask IPython to do the copy-pasting for you!
Don't worry too much about understanding this (unless you want to!) but we will use it a little farther down.
Step9: Basic usage for the Graphviz python library
So here is a short summary of what we did above that you will want to remember
Step10: Labels and IDs
When you are making a graph, it is important that every node be unique - if you have two people named Tom, then the graph program will have no idea which Tom is friends with Anna. So how do you handle having two people named Tom, without resorting to last names or AHV numbers or something like that?
You use attributes in the graph, and specifically the label attribute. It looks something like this
Step11: Notice, in this, that Anna still popped into existence when we referred to her in a relationship. But in the real world, we will probably want to declare our nodes with (for example) student numbers as the unique identifier, and names for display in the graph.
Styling the graph
We can also set attributes on a graph using Python. Imagine that we want a graph that displays the relationships between members of a family and where they go each day.
Here's how we can do that in python, and what we get.
Step12: All done! There are a huge number of styling attributes - ways to control line thickness, color, shape, graph direction, and so on. They are documented in eye-watering detail here
Step13: So now we have all our items - let's see who the authors, editors, etc. are! We can start by looking at one of the records.
Step14: So we get their names, and we get the info of whether they are authors or editors or translators or what. Let's make a graph and see who publishes about digital humanities!
First, just to make sure we have the hang of this, let's list all the creators we find.
Step15: Ooh huh, that's an ugly error. The KeyError means that we found a record that doesn't have a firstName. We could use a try/except block like we did above, but we could also just check whether each person has a first name before we try to list them. Let's see what's going on with these records that don't, by just printing out the whole structure when we encounter one.
Step16: So some of our creators just have a name, instead of having it subdivided into first and last. We can handle that easily enough.
Step17: But we still have an error - at least one record doesn't have any 'creators' field at all. What's going on with that? Let's look at the JSON.
Step18: Sure enough, there is some form of record that doesn't have any creator information associated with it. Since we are interested in co-authorship, we can simply skip these, but we need to make sure that our code knows to do that. We will modify our code one more time, to make sure we can get through the list without error
Step19: There we are! We can list our creators, which means we can graph them! We are going to use our good old graphviz Python library to create the graph. We will use an undirected graph, since there is no particular hierarchy in the collaborations we find here.
Step20: and we can put in the data.
Here we want to have an edge between two authors whenever they worked together on a publication, whether as author or editor or contributor or what have you. So for each publication, we will make a list of their names (using the code we wrote above), and then make graph edges between each pair of names in that list.
Step21: So what did we get? Let's list it out.
Step22: We can also experiment with the other layout styles like this | Python Code:
# This is how you get the %%dot and %dotstr command that we use below.
%load_ext hierarchymagic
Explanation: Graphs and visualization
A lot of the joy of digital humanities comes in handling our material in new ways, so that we see things we wouldn't have seen before. Quite literally.
Some of the most useful tools for DH work are graphing tools! Today we will look at the basics of what a graph is and how you might build one, both manually and programmatically, and then name the tools to look into if you want to know more.
Tools you'll need
Graphviz: http://www.graphviz.org
The graphviz and sphinx modules for Python
So what can you do with graphs?
<img src="https://i.embed.ly/1/display/resize?key=1e6a1a1efdb011df84894040444cdc60&url=http%3A%2F%2Fpbs.twimg.com%2Fmedia%2FBslvzfjIcAAW6Ro.png">
You can visualize relationships, networks, you name it.
http://ckcc.huygens.knaw.nl/epistolarium/#
The DOT graph language
It's pretty easy to start building a graph, if you have the tools and a plain text editor. First you have to decide whether you want a directed or an undirected graph. If all the relationships you want to chart are symmetric and two-way (e.g. "these words appear together" or "these people corresponded", then it can be undirected. But if there is any asymmetry (e.g. in social networks - just because Tom is friends with Jane doesn't mean that Jane is friends with Tom!) then you want a directed graph.
If you want to make a directed graph, it looks like this:
digraph "My graph" {
[... graph data goes here ...]
}
and if you want to make an undirected graph, it looks like this.
graph "My graph" {
[... graph data goes here ...]
}
Let's say we want to make that little two-person social network. In graph terms, you have nodes and edges. The edges are the relationships, and the nodes are the things (people, places, dogs, cats, whatever) that are related. The easiest way to express that is like this:
digraph "My graph" {
Tom -> Jane
}
which says "The node Tom is connected to the node Jane, in that direction." We plug that into Graphviz, and what do we get? Let's use a little iPython magic to find out.
We are going to use an extension called 'hierarchymagic', which gives us the special %%dot command. You can install the extension like this. <br>You only have to do this once!
You will get a warning that "install_ext" is deprecated; there isn't much we can do about this ourselves, so don't worry unduly about it.
Now anytime you want to use graphs in IPython, this is how you do it.
End of explanation
%%dot -f svg
digraph "My graph" {
Tom -> Jane
}
# dot draws directed graphs
Explanation: Then you do this every time you want to make a graph. The -f svg says that it should make an SVG image, which is what I recommend.
End of explanation
%%dot -f svg
digraph "My graph" {
Tom -> Jane
Ben -> Tom
Tom -> Ben
}
Explanation: Now maybe Tom has a friend too:
digraph "My graph" {
Tom -> Jane
Ben -> Tom
Tom -> Ben
}
End of explanation
%%dot -f svg
graph "My graph" {
Tom -- Jane
Ben -- Tom
# Tom -- Ben
}
# adding the line Tom -- Ben would simply create a double line between the nodes "Ben" and "Tom"
%%dot -f svg
graph "My graph" {
layout=fdp
Tom -- Jane
Ben -- Tom
}
Explanation: ...And so on. But what if we do want a nice symmetrical undirected graph? That is even simpler. Instead of digraph we say graph, and instead of describing the connections with -> we use -- instead. If we have a model like Facebook where friendship is always two-way, we can do this:
graph "My graph" {
Tom -- Jane
Ben -- Tom
}
Note that we don't need the third line (Tom -- Ben) because it is now the same as saying Ben -- Tom.
Since this is an undirected graph, we want it to be laid out a little differently (not just straight up-and-down.) For this we can specify a different program with this -- -K flag. The options are dot (the default), neato, twopi, circo, fdp, and sfdp; they all take different approaches and you are welcome to play around with each one.
End of explanation
%%dot -f svg
graph "My pretty graph" {
# attributes are placed in [] and are key:value pairs
# you can set attributes on all nodes, all edges, or on individual nodes or edges
graph [layout=neato, bgcolor=black]
node [shape=plain, fontcolor=white, fontname=Helvetica]
Jane [fontcolor=red]
Tom -- Jane [color=green]
Jane -- Ben [color=green]
}
Explanation: You can change the font, the shape of the nodes, the colors of the links, and so on by setting attributes in square brackets. Attributes can be set for the graph, for all nodes, for all edges, and for individual nodes and edges. All the different attribute options can be read about here.
End of explanation
import graphviz # Use the Python graphviz library
Explanation: Of course, this would hardly be fun if we couldn't do it programmatically!
Building graphs with Graphviz + Python
Now we are going to make a few graphs, not by writing out dot, but by making a graph object that holds our nodes and edges. We do this with the graphviz module.
End of explanation
my_graph = graphviz.Digraph()
Explanation: We make a new directed graph with graphviz.Digraph(), and a new undirected graph with graphviz.Graph().
End of explanation
# Our list of friends
all_friends = [ 'Jane', 'Ben', 'Tom', 'Anna', 'Charlotte' ]
# Make them all friends with each other.
# As long as there are at least two people left in the list of friends...
while len( all_friends ) > 1:
this_friend = all_friends.pop() # Remove the last name from the list
for friend in all_friends: # Cycle through whoever is left and make them friends with each other
my_graph.edge( this_friend, friend ) # I like you
my_graph.edge( friend, this_friend ) # You like me
# Spit out the graph in its DOT format
# the .source attribute of a graph is always in DOT format
print(my_graph.source)
Explanation: Let's make a social network graph of five friends, all of whom like each other. But instead of typing out all those
Anna -> Ben
sorts of lines, we will let the program do that for us.
End of explanation
%dotstr -f svg
# %dotstr expects the next line to hold the graph variable; it works here because str() on a graphviz object returns its DOT source
my_graph
Explanation: And here is a little IPython magic function so that we can actually make the graph display right here in the notebook. This means that, instead of copy-pasting what you see above into a new cell, you can just ask IPython to do the copy-pasting for you!
Don't worry too much about understanding this (unless you want to!) but we will use it a little farther down.
End of explanation
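(If you don't have that magic handy, the graphviz library can also write the image to disk itself. A sketch, assuming the Graphviz binaries are installed on your machine; in many Jupyter setups, simply putting my_graph on the last line of a cell will display it too.)
my_graph.format = 'svg'
my_graph.render('my_graph', view=False)  # writes my_graph and my_graph.svg next to the notebook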
# import graphviz (already imported above)
this_graph = graphviz.Digraph() # start your directed graph
this_undirected = graphviz.Graph() # ...or your undirected graph
this_graph.edge( "me", "you" ) # Add a relationship between me and you
this_undirected.edge( "me", "you" )
print(this_graph.source) # Print out the dot.
print(this_undirected.source)
Explanation: Basic usage for the Graphviz python library
So here is a short summary of what we did above that you will want to remember:
End of explanation
lg = graphviz.Graph() # Make this one undirected
lg.graph_attr['layout'] = 'neato'
# to give a node a label attribute, the node needs to be declared explicitly
lg.node( "Tom1", label="Tom" )
lg.node( "Tom2", label="Tom" )
# the Anna node does not have to be declared, because there is only one Anna
# edges can have labels too
lg.edge( "Tom1", "Anna", label="siblings" )
lg.edge( "Tom1", "Tom2", label="friends" )
%dotstr -f svg
lg
Explanation: Labels and IDs
When you are making a graph, it is important that every node be unique - if you have two people named Tom, then the graph program will have no idea which Tom is friends with Anna. So how do you handle having two people named Tom, without resorting to last names or AHV numbers or something like that?
You use attributes in the graph, and specifically the label attribute. It looks something like this:
graph G {
Tom1 [ label="Tom" ]
Tom2 [ label="Tom" ]
Tom1 -- Anna
Tom1 -- Tom2
}
Before this, we only named our nodes when we needed them to define a relationship (an edge). But if we need to give any extra information about a node, such as a label, then we have to list it first, on its own line, with the extra information between the square brackets.
There are a whole lot of options for things you might want to define! Most of them have to do with how the graph should look, and we will look at them in a minute. For now, this is what we get for this graph:
End of explanation
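In the real world those unique IDs might be student numbers. A minimal sketch, with made-up numbers:
roster = graphviz.Graph()
roster.node('s1001', label='Tom')
roster.node('s1002', label='Tom')
roster.edge('s1001', 's1002', label='friends')
print(roster.source)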
copying_relations = [
("O", "a"),
("O", "b"),
("a", "Au318"),
("a", "Go325"),
("a", "Gr314"),
("a", "f"),
("a", "g"),
("b", "Au318"),
("b", "Ba96"),
("b", "Go325"),
("b", "Gr314"),
("b", "Sg524"),
("b", "c"),
("c", "An74"),
("c", "MuU151"),
("c", "d"),
("d", "Mu11475"),
("d", "e"),
("e", "Er16"),
("e", "Mu28315"),
("f", "Krems299"),
("f", "h"),
("g", "Mu22405"),
("g", "Wi3181"),
("h", "Kf133"),
("h", "Krems185"),
("Krems185", "b"),
("Krems299", "Mu22405")]
# Make a set of our witnesses so we can list them out with their attributes
witnesses = set()
for source, target in copying_relations:
witnesses.add(source)
witnesses.add(target)
# Make our stemma graph
stemma = graphviz.Digraph()
stemma.node_attr["fillcolor"] = "white"
stemma.node_attr["color"] = "white"  # white outlines hide the node shapes, leaving just the labels
stemma.edge_attr['arrowhead'] = "none"  # draw the edges without arrowheads
# Add our nodes
for witness in witnesses:
if len(witness) == 1: # It is a hypothetical / reconstructed witness
stemma.node(witness, fontcolor="grey", fontsize="11")
# else:  # real witnesses just get the default node attributes
#     stemma.node(witness)
# Add our edges
for source, target in copying_relations:
stemma.edge(source, target)
# To render the graph in the notebook, uncomment these two lines:
# %dotstr -f svg
# stemma
# To make the image outside the Jupyter Notebook:
# take your DOT file:
print(stemma.source)
# paste it into a plain text editor
# save it as a .gv file
# go to the command line and transform your file into an SVG:
# >>> dot -Tsvg [filename].gv > [filename].svg
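# Or let the Python library run dot for you (a sketch; this assumes the
# Graphviz binaries are installed and on your PATH):
# stemma.format = "svg"
# stemma.render("stemma", cleanup=True)  # writes stemma.svg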
family_members = ["Tara", "Mike", "Sophie"]
places = ["work", "school"]
# Make the graph
family = graphviz.Digraph()
# Set some defaults
family.graph_attr = {"bgcolor": "black"}
family.node_attr = {'fontcolor': "red" }
family.edge_attr = {'fontcolor': "white", 'color': "white"}
# Add the family members
for member in family_members:
family.node(member)
# Add the places they go
for place in places:
family.node(place, shape="house", color="blue", fontcolor="white")
# Set up the relationships
family.edge( "Tara", "Sophie", label="is mother", color="green", fontcolor="green" )
family.edge( "Mike", "Sophie", label="is father", color="green", fontcolor="green" )
family.edge( "Tara", "work", label="goes to" )
family.edge( "Mike", "work", label="goes to" )
family.edge( "Sophie", "school", label="goes to" )
# Make the cell with the dot
%dotstr -f svg
family
Explanation: Notice, in this, that Anna still popped into existence when we referred to her in a relationship. But in the real world, we will probably want to declare our nodes with (for example) student numbers as the unique identifier, and names for display in the graph.
Styling the graph
We can also set attributes on a graph using Python. Imagine that we want a graph that displays the relationships between members of a family and where they go each day.
Here's how we can do that in python, and what we get.
End of explanation
from pyzotero import zotero
import json
zotero_group = zotero.Zotero( 30, "group", 'SsbeUu6kJbK4w723P7GklmNb' )
our_items = zotero_group.top(limit=100)
for i in range(int(zotero_group.num_items() / 100)):  # one more request per remaining batch of up to 100
    our_items.extend(zotero_group.follow())  # follow() fetches the next page of results
len(our_items)
Explanation: All done! There are a huge number of styling attributes - ways to control line thickness, color, shape, graph direction, and so on. They are documented in eye-watering detail here:
http://www.graphviz.org/content/attrs
So now let's return to what we were doing with Zotero a few weeks ago...
End of explanation
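By the way, pyzotero can also handle the pagination for us: its everything() helper keeps following the 'next' links until the list is complete. A one-line sketch that should be equivalent to the loop above:
our_items = zotero_group.everything(zotero_group.top())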
our_items[3]
Explanation: So now we have all our items - let's see who the authors, editors, etc. are! We can start by looking at one of the records.
End of explanation
for item in our_items:
for creator in item['data']['creators']:
print("%s %s was a(n) %s" %
( creator['firstName'],
creator['lastName'],
creator['creatorType'] ))
Explanation: So we get their names, and we get the info of whether they are authors or editors or translators or what. Let's make a graph and see who publishes about digital humanities!
First, just to make sure we have the hang of this, let's list all the creators we find.
End of explanation
for item in our_items:
for creator in item['data']['creators']:
if 'firstName' in creator: ## NEW LINE
print("%s %s was a(n) %s" % (
creator['firstName'],
creator['lastName'],
creator['creatorType'] ))
else: ## NEW LINE
print(json.dumps(creator)) ## NEW LINE
Explanation: Ooh huh, that's an ugly error. The KeyError means that we found a record that doesn't have a firstName. We could use a try/except block like we did above, but we could also just check whether each person has a first name before we try to list them. Let's see what's going on with these records that don't, by just printing out the whole structure when we encounter one.
End of explanation
for item in our_items:
for creator in item['data']['creators']:
if 'firstName' in creator:
print("%s %s was a(n) %s" % (
creator['firstName'],
creator['lastName'],
creator['creatorType'] ))
else:
print("%s was a(n) %s" % ( # NEW LINE
creator['name'], # NEW LINE
creator['creatorType'])) # NEW LINE
Explanation: So some of our creators just have a name, instead of having it subdivided into first and last. We can handle that easily enough.
End of explanation
for item in our_items:
if 'creators' not in item['data']:
print(item['data'])
Explanation: But we still have an error - at least one record doesn't have any 'creators' field at all. What's going on with that? Let's look at the JSON.
End of explanation
for item in our_items:
if 'creators' not in item['data']: # NEW LINE
continue # NEW LINE
for creator in item['data']['creators']:
if 'firstName' in creator:
print("%s %s was a(n) %s" % (
creator['firstName'],
creator['lastName'],
creator['creatorType'] ))
else:
print("%s was a(n) %s" % (
creator['name'],
creator['creatorType']))
Explanation: Sure enough, there is some form of record that doesn't have any creator information associated with it. Since we are interested in co-authorship, we can simply skip these, but we need to make sure that our code knows to do that. We will modify our code one more time, to make sure we can get through the list without error:
End of explanation
author_graph = graphviz.Graph()
Explanation: There we are! We can list our creators, which means we can graph them! We are going to use our good old graphviz Python library to create the graph. We will use an undirected graph, since there is no particular hierarchy in the collaborations we find here.
End of explanation
for item in our_items:
if( 'creators' not in item['data'] ):
continue
# First, make a list of all the collaborators for this item.
item_collaborators = []
for creator in item['data']['creators']:
full_name = ''
if( 'firstName' in creator ):
full_name = creator['firstName'] + ' ' + creator['lastName']
else:
full_name = creator['name']
item_collaborators.append( full_name )
# Second, add each pair of collaborators to the graph as an edge.
while( len( item_collaborators ) > 1 ):
me = item_collaborators.pop()
for you in item_collaborators:
author_graph.edge( me, you )
Explanation: and we can put in the data.
Here we want to have an edge between two authors whenever they worked together on a publication, whether as author or editor or contributor or what have you. So for each publication, we will make a list of their names (using the code we wrote above), and then make graph edges between each pair of names in that list.
End of explanation
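One thing to note: because we add an edge every time a pair appears together, repeat collaborations produce parallel lines between the same two people. If you would rather have a single line per pair, you could deduplicate first. A sketch (dedup_graph and seen_pairs are new names introduced here) reusing the name-building logic from above:
dedup_graph = graphviz.Graph()
seen_pairs = set()
for item in our_items:
    if 'creators' not in item['data']:
        continue
    names = []
    for creator in item['data']['creators']:
        if 'firstName' in creator:
            names.append(creator['firstName'] + ' ' + creator['lastName'])
        else:
            names.append(creator['name'])
    while len(names) > 1:
        me = names.pop()
        for you in names:
            seen_pairs.add(tuple(sorted((me, you))))  # order-independent pair
for me, you in sorted(seen_pairs):
    dedup_graph.edge(me, you)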
%dotstr -f svg
author_graph
Explanation: So what did we get? Let's draw it and see.
End of explanation
author_graph.graph_attr = {"layout": "fdp"}
%dotstr -f svg
author_graph
Explanation: We can also experiment with the other layout styles like this:
End of explanation |
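You could even cycle through all the engines programmatically. A sketch, assuming you are in the notebook with IPython's display tools available (sfdp may not be present in every Graphviz install):
from IPython.display import SVG, display
for engine in ('dot', 'neato', 'twopi', 'circo', 'fdp', 'sfdp'):
    author_graph.graph_attr['layout'] = engine
    print(engine)
    display(SVG(author_graph.pipe(format='svg')))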