repo_name | path | license | content |
|---|---|---|---|
cogeorg/black_rhino | examples/degroot/Analyse_deGroot.ipynb | gpl-3.0 | import pandas as pd
import matplotlib.pyplot as plt

de_groot_data = pd.read_csv('measurements/Measurement_degroot_new.csv', index_col=0)
"""
Explanation: Analyse deGroot
The notebook can be used to analyse the output of the deGroot model.
End of explanation
"""
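For context, the opinion trajectories analysed below follow deGroot averaging: each agent repeatedly replaces its opinion with a trust-weighted average of all opinions. A minimal sketch of the dynamics (the trust matrix and initial opinions here are illustrative assumptions, not the parameters used by the black_rhino example):

```python
import numpy as np

# Hypothetical 3-agent trust matrix and initial opinions (illustrative
# assumptions only, not the black_rhino model's actual parameters).
W = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])   # rows sum to 1 (row-stochastic)
opinions = np.array([1.0, 0.0, 0.5])

history = [opinions.copy()]
for _ in range(50):
    opinions = W.dot(opinions)   # each agent takes a trust-weighted average
    history.append(opinions)

# All three trajectories converge to a common consensus value, which is
# what the measurement columns plotted below show over time.
```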
de_groot_data.head(3)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10,6))
ax.plot(de_groot_data.index, de_groot_data['opinion_agent_one'], label='agent-one')
ax.plot(de_groot_data.index, de_groot_data['opinion_agent_two'], label='agent-two')
ax.plot(de_groot_data.index, de_groot_data['opinion_agent_three'], label='agent-three')
ax.legend(loc='best', fontsize='14')
ax.set_ylabel('Opinions', fontsize='14')
ax.set_xlabel('Time', fontsize='14')
fig.savefig('deGrootOpinions.png')
"""
Explanation: The evolution of the model
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/a9e07affc8c71aa96bb4ffe855ff552c/morph_surface_stc.ipynb | bsd-3-clause | # Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD (3-clause)
import os
import os.path as op
import mne
from mne.datasets import sample
print(__doc__)
"""
Explanation: Morph surface source estimate
This example demonstrates how to morph an individual subject's
:class:mne.SourceEstimate to a common reference space. We achieve this using
:class:mne.SourceMorph. Pre-computed data will be morphed based on
a spherical representation of the cortex computed using the spherical
registration of FreeSurfer <tut-freesurfer-mne>
(https://surfer.nmr.mgh.harvard.edu/fswiki/SurfaceRegAndTemplates)
:footcite:GreveEtAl2013. This
transform will be used to morph the surface vertices of the subject towards the
reference vertices. Here we will use 'fsaverage' as a reference space (see
https://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage).
The transformation will be applied to the surface source estimate. A plot
depicting the successful morph will be created for the spherical and inflated
surface representation of 'fsaverage', overlaid with the morphed surface
source estimate.
<div class="alert alert-info"><h4>Note</h4><p>For background information about morphing see `ch_morph`.</p></div>
End of explanation
"""
data_path = sample.data_path()
sample_dir = op.join(data_path, 'MEG', 'sample')
subjects_dir = op.join(data_path, 'subjects')
fname_src = op.join(subjects_dir, 'sample', 'bem', 'sample-oct-6-src.fif')
fname_fwd = op.join(sample_dir, 'sample_audvis-meg-oct-6-fwd.fif')
fname_fsaverage_src = op.join(subjects_dir, 'fsaverage', 'bem',
                              'fsaverage-ico-5-src.fif')
fname_stc = op.join(sample_dir, 'sample_audvis-meg')
"""
Explanation: Setup paths
End of explanation
"""
# Read stc from file
stc = mne.read_source_estimate(fname_stc, subject='sample')
"""
Explanation: Load example data
End of explanation
"""
src_orig = mne.read_source_spaces(fname_src)
print(src_orig) # n_used=4098, 4098
fwd = mne.read_forward_solution(fname_fwd)
print(fwd['src']) # n_used=3732, 3766
print([len(v) for v in stc.vertices])
"""
Explanation: Setting up SourceMorph for SourceEstimate
In MNE, surface source estimates represent the source space simply as
lists of vertices (see tut-source-estimate-class).
This list can either be obtained from :class:mne.SourceSpaces (src) or from
the stc itself. If you use the source space, be sure to use the
source space from the forward or inverse operator, because vertices
can be excluded during forward computation due to proximity to the BEM
inner skull surface:
End of explanation
"""
src_to = mne.read_source_spaces(fname_fsaverage_src)
print(src_to[0]['vertno']) # special, np.arange(10242)
morph = mne.compute_source_morph(stc, subject_from='sample',
subject_to='fsaverage', src_to=src_to,
subjects_dir=subjects_dir)
"""
Explanation: We also need to specify the set of vertices to morph to. This can be done
using the spacing parameter, but for consistency it's better to pass the
src_to parameter.
<div class="alert alert-info"><h4>Note</h4><p>Since the default values of :func:`mne.compute_source_morph` are
``spacing=5, subject_to='fsaverage'``, in this example
we could actually omit the ``src_to`` and ``subject_to`` arguments
below. The ico-5 ``fsaverage`` source space contains the
special values ``[np.arange(10242)] * 2``, but in general this will
not be true for other spacings or other subjects. Thus it is recommended
to always pass the destination ``src`` for consistency.</p></div>
Initialize SourceMorph for SourceEstimate
End of explanation
"""
stc_fsaverage = morph.apply(stc)
"""
Explanation: Apply morph to (Vector) SourceEstimate
The morph is applied to the source estimate data by passing the source
estimate as the first argument to the apply method of the morph computed above.
End of explanation
"""
# Define plotting parameters
surfer_kwargs = dict(
hemi='lh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=0.09, time_unit='s', size=(800, 800),
smoothing_steps=5)
# As spherical surface
brain = stc_fsaverage.plot(surface='sphere', **surfer_kwargs)
# Add title
brain.add_text(0.1, 0.9, 'Morphed to fsaverage (spherical)', 'title',
font_size=16)
"""
Explanation: Plot results
End of explanation
"""
brain_inf = stc_fsaverage.plot(surface='inflated', **surfer_kwargs)
# Add title
brain_inf.add_text(0.1, 0.9, 'Morphed to fsaverage (inflated)', 'title',
font_size=16)
"""
Explanation: As inflated surface
End of explanation
"""
stc_fsaverage = mne.compute_source_morph(stc,
subjects_dir=subjects_dir).apply(stc)
"""
Explanation: Reading and writing SourceMorph from and to disk
An instance of SourceMorph can be saved, by calling
:meth:morph.save <mne.SourceMorph.save>.
This method allows for specification of a filename under which the morph
will be saved in ".h5" format. If no file extension is provided, "-morph.h5"
will be appended to the provided filename::
>>> morph.save('my-file-name')
Reading a saved source morph can be achieved by using
:func:mne.read_source_morph::
>>> morph = mne.read_source_morph('my-file-name-morph.h5')
Once the environment is set up correctly, no information such as
subject_from or subjects_dir needs to be provided, since it can be
inferred from the data, and morphing defaults to 'fsaverage'. SourceMorph
can further be used without creating an instance and assigning it to a
variable. Instead :func:mne.compute_source_morph and
:meth:mne.SourceMorph.apply can be
easily chained into a handy one-liner. Taken together, the shortest
possible way to morph data directly would be:
End of explanation
"""
|
jinntrance/MOOC | coursera/ml-regression/assignments/week-6-local-regression-assignment-blank.ipynb | cc0-1.0 | import graphlab
"""
Explanation: Predicting house prices using k-nearest neighbors regression
In this notebook, you will implement k-nearest neighbors regression. You will:
* Find the k-nearest neighbors of a given query input
* Predict the output for the query input using the k-nearest neighbors
* Choose the best value of k using a validation set
Fire up GraphLab Create
End of explanation
"""
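Before building each piece step by step, the three bullet points above can be condensed into a toy end-to-end sketch (illustrative only; the toy data and helper below are not part of the assignment code):

```python
import numpy as np

def knn_predict(X_train, y_train, query, k):
    """Predict the output for `query` as the mean of its k nearest neighbours."""
    dists = np.sqrt(np.sum((X_train - query) ** 2, axis=1))
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

# Toy data: one feature, output equal to the feature value.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])
pred = knn_predict(X, y, np.array([1.1]), k=2)  # neighbours 1.0 and 2.0 -> 1.5
```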
sales = graphlab.SFrame('kc_house_data_small.gl/')
"""
Explanation: Load in house sales data
For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
End of explanation
"""
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
"""
Explanation: Import useful functions from previous notebooks
To efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
End of explanation
"""
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis = 0)
normalized_features = feature_matrix / norms
return (normalized_features, norms)
"""
Explanation: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
End of explanation
"""
(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets
"""
Explanation: Split data into training, test, and validation sets
End of explanation
"""
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train, feature_list, 'price')
features_test, output_test = get_numpy_data(test, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation, feature_list, 'price')
"""
Explanation: Extract features and normalize
Using all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays:
End of explanation
"""
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
"""
Explanation: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
IMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently.
End of explanation
"""
features_test[0]
"""
Explanation: Compute a single distance
To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.
To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.
End of explanation
"""
features_train[9]
"""
Explanation: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
End of explanation
"""
def euclidean_distance(x, y):
return np.sqrt(np.sum((x-y) ** 2))
euclidean_distance(features_test[0], features_train[9])
"""
Explanation: QUIZ QUESTION
What is the Euclidean distance between the query house and the 10th house of the training set?
Note: Do not use the np.linalg.norm function; use np.sqrt, np.sum, and the power operator (**) instead. The latter approach is more easily adapted to computing multiple distances at once.
End of explanation
"""
import sys
min_dis = sys.maxint
min_idx = -1
for i in xrange(10):
    d = euclidean_distance(features_test[0], features_train[i])
    if d < min_dis:
        min_idx = i
        min_dis = d
print min_idx, min_dis
"""
Explanation: Compute multiple distances
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. By restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working.
Write a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set.
End of explanation
"""
print min_idx
"""
Explanation: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
End of explanation
"""
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
"""
Explanation: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]):
End of explanation
"""
print features_train[0:3] - features_test[0]
"""
Explanation: The subtraction operator (-) in Numpy is vectorized as follows:
End of explanation
"""
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
"""
Explanation: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:
End of explanation
"""
diff = features_train - features_test[0]
"""
Explanation: Aside: it is a good idea to write tests like this cell whenever you are vectorizing a complicated operation.
Perform 1-nearest neighbor regression
Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable diff such that diff[i] gives the element-wise difference between the features of the query house and the i-th training house.
End of explanation
"""
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
"""
Explanation: To test the code above, run the following cell, which should output a value -0.0934339605842:
End of explanation
"""
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
"""
Explanation: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specify the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.
Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.
End of explanation
"""
distances = np.sqrt(np.sum(diff**2, axis=1))
"""
Explanation: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Hint: Do not forget to take the square root of the sum of squares.
End of explanation
"""
print distances[100] # Euclidean distance between the query house and the 101st training house
# should print 0.0237082324496
"""
Explanation: To test the code above, run the following cell, which should output a value 0.0237082324496:
End of explanation
"""
def compute_distance(train, query):
diff = train - query
return np.sqrt(np.sum(diff**2, axis=1))
"""
Explanation: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.
End of explanation
"""
distances_3 = compute_distance(features_train, features_test[2])
min_val = sys.maxint
min_idx = -1
for i in xrange(distances_3.size):
if distances_3[i] < min_val:
min_idx = i
min_val = distances_3[i]
print min_idx, min_val
print output_train[min_idx]
"""
Explanation: QUIZ QUESTIONS
Take the query house to be the third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?
What is the predicted value of the query house based on 1-nearest neighbor regression?
End of explanation
"""
def knn(train, query, k):
distances = compute_distance(train, query)
indices = np.argsort(distances)
return indices[:k]
"""
Explanation: Perform k-nearest neighbor regression
For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.
Fetch k-nearest neighbors
Using the functions above, implement a function that takes in
* the value of k;
* the feature matrix for the training houses; and
* the feature vector of the query house
and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
Hint: Look at the documentation for np.argsort.
End of explanation
"""
idx = knn(features_train, features_test[2], 4)
print idx
"""
Explanation: QUIZ QUESTION
Take the query house to be third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
End of explanation
"""
def knn_value(train, query, k, y):
idx = knn(train, query, k)
return np.average(y[idx])
"""
Explanation: Make a single prediction by averaging k nearest neighbor outputs
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature vector of the query house, whose price we are predicting.
The function should return a predicted value of the query house.
Hint: You can extract multiple items from a Numpy array using a list of indices. For instance, output_train[[6, 10]] returns the prices of the 7th and 11th training houses.
End of explanation
"""
knn_value(features_train, features_test[2], 4, output_train)
"""
Explanation: QUIZ QUESTION
Again taking the query house to be the third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
End of explanation
"""
def knn_values(train, queries, k, y):
return [knn_value(train, query, k, y) for query in queries]
"""
Explanation: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Make multiple predictions
Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature matrix for the query set.
The function should return a set of predicted values, one for each house in the query set.
Hint: To get the number of houses in the query set, use the .shape field of the query features matrix. See the documentation.
End of explanation
"""
knn_values(features_train, features_test[:10], 10, output_train)
"""
Explanation: QUIZ QUESTION
Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10.
What is the index of the house in this query set that has the lowest predicted value?
What is the predicted value of this house?
End of explanation
"""
rss_all = []
rss_min = sys.maxint
rss_k = sys.maxint
for k in xrange(1, 16):
values_valid = knn_values(features_train, features_valid, k, output_train)
rss = np.sum((values_valid - output_valid)**2)
rss_all.append(rss)
if rss < rss_min:
rss_min = rss
rss_k = k
"""
Explanation: Choosing the best value of k using a validation set
There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following:
For k in [1, 2, ..., 15]:
Makes predictions for each house in the VALIDATION set using the k-nearest neighbors from the TRAINING set.
Computes the RSS for these predictions on the VALIDATION set
Stores the RSS computed above in rss_all
Report which k produced the lowest RSS on VALIDATION set.
(Depending on your computing environment, this computation may take 10-15 minutes.)
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
kvals = range(1, 16)
plt.plot(kvals, rss_all,'bo-')
"""
Explanation: To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value:
End of explanation
"""
print rss_k, rss_min
values_test = knn_values(features_train, features_test, rss_k, output_train)
rss = np.sum((values_test - output_test)**2)
print rss
"""
Explanation: QUIZ QUESTION
What is the RSS on the TEST data using the value of k found above? To be clear, sum over all houses in the TEST set.
End of explanation
"""
|
shaypal5/rotten_needles | notebooks/Stats.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

imdb = pd.read_csv("C:\\Users\\Adam\\Google Drive\\School\\ComputerScience\\intro to data science\\rotten_needles\\data\\datasets\\movies_dataset.csv")
#imdb = imdb.dropna()
imdb = imdb.assign(rating10=(imdb['rating']*10))
imdb = imdb.assign(metascore10=(imdb['metascore']/10))
"""
Explanation: import the data (NA dropping is left commented out) and
compute the rescaled columns rating*10 and metascore/10
End of explanation
"""
imdb = imdb.assign(score1=100*(imdb.gross_income-imdb.budget)/imdb.budget)
imdb = imdb.assign(score2=(imdb['gross_income']-imdb['budget'])) # best score measure
imdb = imdb.assign(score3=np.log(imdb['gross_income'])/np.log(imdb['budget']))
# imdb[['score2', 'name','rating','metascore']].sort_values('score2',ascending=0)
"""
Explanation: create movie profit score column
End of explanation
"""
plt.figure()
imdb_temp = imdb
imdb_temp['scaled_gross_income'] = np.log(imdb['gross_income']) # / 1000000
sns.regplot(x = imdb['rating']*10, y = 'scaled_gross_income', data = imdb_temp, color = 'yellow')
sns.regplot(x = imdb['metascore'], y = 'scaled_gross_income', data = imdb_temp, color = 'Green')
sns.plt.title("Gross Income against MetaScore \ IMDB Rating - Scatter")
sns.plt.xlabel("IMDB Rating, Metascore")
sns.plt.ylabel("Log of Gross Income")
# legend_patches = matplotlib.patches.Patch(color='green', label='label')
# Plot the legend
sns.plt.legend(['IMDB Ratings', 'Metascore'])
# imdb.isnull().sum()
"""
Explanation: Figure shows scatter of gross income against meta score and imdb rating
End of explanation
"""
plt.figure()
sns.countplot(x = 'rating', data = imdb)
plt.xticks(rotation=60)
sns.plt.title("Distribution of Movie Ratings")
sns.plt.xlabel("Movie Rating")
sns.plt.ylabel("Count of Ratings")
"""
Explanation: Figure shows distribution of Movie Ratings
End of explanation
"""
temp = pd.DataFrame(
    data={
        'type':
            [i for i in range(1, 11) for genre in imdb.columns if 'genre' in genre],
        'votes':
            [imdb[imdb[genre] == 1]['rating_freq.%d' % i].mean()
             for i in range(1, 11)
             for genre in imdb.columns if 'genre' in genre]
    },
    index=
        [genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]*10
)
plt.figure()
sns.barplot(x = temp.index , y = 'votes',hue = 'type', data = temp)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Distribution of Ratings by Genres")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Number of Votes")
"""
Explanation: Distribution of ratings by Genres
End of explanation
"""
# plt.figure()
# plt.ylim([0,10])
# plt.xlim([0,10])
# sns.regplot(x ='avg_rating_per_demo.aged_under_18', y = 'avg_rating_per_demo.aged_45+', data = imdb, color = 'red')
# plt.figure()
# plt.ylim([0,10])
# plt.xlim([0,10])
# sns.regplot(x ='avg_rating_per_demo.aged_18-29', y = 'avg_rating_per_demo.aged_45+', data = imdb, color = 'green')
# imdb.plot(kind='scatter', x='rating', y='avg_rating_per_demo.us_users');
"""
Explanation: scattering stuff
End of explanation
"""
plt.figure()
sns.regplot(x = 'opening_weekend_income', y = 'gross_income', data=imdb, color='seagreen')
sns.plt.title("Opening Weekend Income vs Total Income")
sns.plt.xlabel("Opening Weekend")
sns.plt.ylabel("Total")
"""
Explanation: Figure shows the high correlation between opening weekend income and total gross income
End of explanation
"""
# imdb[['metascore','critic_review_count','rating','rating_count','gross_income','rating_freq.3','rating_freq.4','rating_freq.5','rating_freq.6',
# 'rating_freq.7','rating_freq.8','rating_freq.9','score2']].corr()
# imdb[['avg_rating_per_demo.males','avg_rating_per_demo.females']].corr()
"""
Explanation: correlations
End of explanation
"""
from pandas.tools.plotting import scatter_matrix
temp = imdb[['avg_rating_per_demo.aged_under_18','avg_rating_per_demo.aged_18-29',
'avg_rating_per_demo.aged_30-44','avg_rating_per_demo.aged_45+']]
temp.columns = ['-18','18-29','30-44','45+']
scatter_matrix(temp, alpha=0.2,figsize=(6,6))
plt.suptitle('Rating Scatter over Different Age Groups')
"""
Explanation: figure shows that different age groups tend to vote similarly; the diagonal shows the rating distribution of each age group
End of explanation
"""
plt.figure()
sns.regplot(x = 'rating_count', y = 'rating', data=imdb, color='seagreen')
sns.plt.title("IMDB Rating vs Number of Votes")
sns.plt.xlabel("Number of Votes")
sns.plt.ylabel("IMDB Rating")
"""
Explanation: figure shows that above 400K voters, the average rating is always greater than 7: people tend to rate movies they like
End of explanation
"""
temp = pd.DataFrame(
data={
'sex':
['Male' for genre in imdb.columns if 'genre' in genre]
+
['Female' for genre in imdb.columns if 'genre' in genre],
'score':
[
imdb[imdb[genre] == 1]['votes_per_demo.males'].mean()
for genre in imdb.columns if 'genre' in genre
]
+
[
imdb[imdb[genre] == 1]['votes_per_demo.females'].mean()
for genre in imdb.columns if 'genre' in genre
]
},
index=
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
+
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
)
plt.figure()
sns.barplot(x = temp.index , y = 'score',hue = 'sex', data = temp)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Number of Votes, Difference between Male and Female")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Number of Votes")
"""
Explanation: figure shows the difference between males' and females' numbers of votes across genres
End of explanation
"""
temp1 = pd.DataFrame(
data={
'sex':
['Male' for genre in imdb.columns if 'genre' in genre]
+
['Female' for genre in imdb.columns if 'genre' in genre],
'score':
[
imdb[imdb[genre] == 1]['avg_rating_per_demo.males'].mean()
for genre in imdb.columns if 'genre' in genre
]
+
[
imdb[imdb[genre] == 1]['avg_rating_per_demo.females'].mean()
for genre in imdb.columns if 'genre' in genre
]
},
index=
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
+
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
)
plt.figure()
sns.barplot(x = temp1.index , y = 'score',hue = 'sex', data = temp1)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Average Ratings, Difference between Male and Female")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Average Rating")
# plt.figure()
# plt.ylim([0,10])
# plt.xlim([0,10])
# sns.regplot(x ='avg_rating_per_demo.males', y = 'avg_rating_per_demo.females', data = imdb, color = 'red')
"""
Explanation: figure shows the similarity of males' and females' average scores across genres; women are slightly more generous raters!
End of explanation
"""
temp2 = pd.DataFrame(
data={
'score':
[
imdb[imdb[genre] == 1]['score1'].mean()
for genre in imdb.columns if 'genre' in genre
]
},
index=
[genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
)
plt.figure()
sns.barplot(x = temp2.index , y = 'score', data = temp2)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Return on Investment by Genre")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Roi %")
"""
Explanation: figure shows return on investment by genre (net profit as a percentage of budget)
End of explanation
"""
|
mabevillar/rmtk | rmtk/vulnerability/derivation_fragility/NLTHA_on_SDOF/MSA_on_SDOF.ipynb | agpl-3.0 | import MSA_on_SDOF
from rmtk.vulnerability.common import utils
import numpy as np
%matplotlib inline
"""
Explanation: Multiple Stripe Analysis (MSA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis using a suite of ground motion records scaled to multple stripes of intensity measure. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates the results of a Multiple Stripe Analysis, from which the fragility function is built.
<img src="../../../../figures/MSA_example.jpg" width="500" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
"""
capacity_curves_file = "../../../../../rmtk_data/capacity_curves_Sd-Sa.csv"
sdof_hysteresis = "Default"
#sdof_hysteresis = "../../../../../rmtk_data/pinching_parameters.csv"
from read_pinching_parameters import read_parameters
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
"""
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one capacity curve (or a group of capacity curves), defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the user wants to specify the cyclic hysteretic behaviour of the SDOF system, please provide the path to the file containing the hysteretic parameters using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. To assume the default parameters instead, set the sdof_hysteresis variable to "Default"
End of explanation
"""
gmrs_folder = "../../../../../rmtk_data/accelerograms"
minT, maxT = 0.1, 2.0
no_bins = 4
no_rec_bin = 4
record_scaled_folder = "../../../../../rmtk_data/Scaled_trial"
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
"""
Explanation: Load ground motion records
Regarding the ground motions to be used in the Multiple Stripe Analysis, the following inputs are required:
1. gmrs_folder: path to the folder containing the ground motion records to be used in the analysis. Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
2. record_scaled_folder. In this folder there should be a csv file for each Intensity Measure bin selected for the MSA, containing the names of the records that should be scaled to that IM bin, and the corresponding scaling factors. An example of this type of file is provided in the RMTK manual.
3. no_bins: number of Intensity Measure bins.
4. no_rec_bin: number of records per bin
If the user wants to plot acceleration, displacement and velocity response spectra, the function utils.plot_response_spectra(gmrs, minT, maxT) should be un-commented. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
"""
damage_model_file = "../../../../../rmtk_data/damage_model_Sd.csv"
damage_model = utils.read_damage_model(damage_model_file)
"""
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide a spectral displacement, capacity curve dependent, or interstorey drift damage model type.
If the damage model type is interstorey drift, the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system; otherwise a linear relationship is assumed.
End of explanation
"""
damping_ratio = 0.05
degradation = False
msa = {}; msa['n. bins']=no_bins; msa['records per bin']=no_rec_bin; msa['input folder']=record_scaled_folder
PDM, Sds, IML_info = MSA_on_SDOF.calculate_fragility(capacity_curves, hysteresis, msa, gmrs,
damage_model, damping_ratio, degradation)
"""
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_ratio: This parameter defines the damping ratio for the structure.
2. degradation: This boolean parameter should be set to True or False to specify whether structural degradation should be considered in the analysis or not.
End of explanation
"""
import MSA_post_processing
IMT = "Sa"
T = 0.466
regression_method = "max likelihood"
fragility_model = MSA_post_processing.calculate_fragility_model(PDM,gmrs,IML_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
"""
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa","Sd" and "HI" (Housner Intensity).
2. period: This parameter defines the period for which a spectral intensity measure should be computed. If Housner Intensity is selected as intensity measure a range of periods should be defined instead (for example T=np.arange(0.3,3.61,0.01)).
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
"""
minIML, maxIML = 0.01, 4
utils.plot_fragility_model(fragility_model, minIML, maxIML)
"""
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
"""
taxonomy = "RC"
minIML, maxIML = 0.01, 3.00
output_type = "csv"
output_path = "../../../../../phd_thesis/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
"""
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
cons_model_file = "../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00,
2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
utils.plot_vulnerability_model(vulnerability_model)
"""
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
"""
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
"""
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function.
2. output_type: This parameter specifies the file format to be used for saving the function. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
|
jacobdein/alpine-soundscapes | examples/Playing with rasterio and fiona.ipynb | mit | sample_points_filepath = ""
DEM_filepath = ""
elevation_filepath = ""
"""
Explanation: Playing with rasterio and fiona
Variable declarations
sample_points_filepath – path to sample points shapefile <br />
DEM_filepath – path to DEM raster <br />
elevation_filepath – path to export excel file containing elevation values for each sample site
End of explanation
"""
import rasterio
import fiona
import pandas
import numpy
from pyproj import Proj, transform
from fiona.crs import from_epsg
with fiona.open(sample_points_filepath, 'r') as source_points:
points = [f['geometry']['coordinates'] for f in source_points]
original = Proj(source_points.crs)
destination = Proj(from_epsg(4326))
#destination = Proj(' +proj=latlong +ellps=bessel')
with rasterio.drivers():
with rasterio.open(DEM_filepath) as source_dem:
s = source_dem.sample(points)
elevs = numpy.array([n[0] for n in s])
source_dem.close
source_points.close
"""
Explanation: Import statements
End of explanation
"""
points_projected = []
for p in points:
x, y = p
    # pyproj's transform returns (x, y) = (longitude, latitude)
    long, lat = transform(original, destination, x, y)
    points_projected.append((lat, long))
points_projected_pd = pandas.DataFrame(points_projected, columns=["lat", "long"])
with fiona.open(sample_points_filepath, 'r') as source_points:
names = numpy.array([p['properties']['NAME'] for p in source_points])
IDs = numpy.array([p['properties']['ID'] for p in source_points])
source_points.close
elevs_names = [{"ID":IDs[i],"elevation":elevs[i], "name":names[i], "latitude":points_projected[i][0], "longitude":points_projected[i][1]} for i in range(len(elevs))]
elevs_pd = pandas.DataFrame(elevs_names)
elevs_pd
elevs_pd.to_excel(elevation_filepath)
"""
Explanation: Transform points
End of explanation
"""
|
thomasmeagher/DS-501 | lectures/06 Machine Learning Part 1 and Midterm Review/2_ML.ipynb | mit | # Old libraries that we know and love.
import numpy as np
import matplotlib.pylab as py
import pandas as pa
%matplotlib inline
# Our new libraries.
from sklearn import datasets
from mpl_toolkits.mplot3d import Axes3D
import mayavi.mlab as mlab
iris = datasets.load_iris()
"""
Explanation: Loading in the libraries.
End of explanation
"""
print iris['DESCR']
iris
X = iris['data']
y = iris['target']
py.plot(X[y==0,0],X[y==0,1],'r.')
py.plot(X[y==1,0],X[y==1,1],'g.')
py.plot(X[y==2,0],X[y==2,1],'b.')
py.plot(X[y==0,2],X[y==0,3],'r.')
py.plot(X[y==1,2],X[y==1,3],'g.')
py.plot(X[y==2,2],X[y==2,3],'b.')
fig = py.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(X[y==0, 0], X[y==0, 1], X[y==0, 2], c='r')
ax.scatter(X[y==1, 0], X[y==1, 1], X[y==1, 2], c='g')
ax.scatter(X[y==2, 0], X[y==2, 1], X[y==2, 2], c='b')
py.show()
mlab.clf()
mlab.points3d(X[y==0, 0], X[y==0, 1], X[y==0, 2],color=(1,0,0))
mlab.points3d(X[y==1, 0], X[y==1, 1], X[y==1, 2],color=(0,1,0))
mlab.points3d(X[y==2, 0], X[y==2, 1], X[y==2, 2],color=(0,0,1))
mlab.axes()
mlab.show()
"""
Explanation: Looking at the data
End of explanation
"""
mu1 = np.array([0,0,0])
mu2 = np.array([6,0,0])
np.random.seed(123)
Sigma = np.matrix(np.random.normal(size=[3,3]))
# U,E,VT = np.linalg.svd(Sigma)
# E[0] = 1
# E[1] = 1
# E[2] = 1
# Sigma = U*np.diag(E)*VT
Xrandom1 = np.random.multivariate_normal(mu1,np.array(Sigma*Sigma.T),size=500)
Xrandom2 = np.random.multivariate_normal(mu2,np.array(Sigma*Sigma.T),size=500)
"""
Explanation: Is there a more principled way to look at the data? Yes! <b>Let's go back to the notes.</b>
More principled ways to look at the data, Principle Component Analysis (PCA)!
Some sample data to demonstrate PCA on.
End of explanation
"""
mlab.clf()
mlab.points3d(Xrandom1[:,0], Xrandom1[:,1], Xrandom1[:,2],color=(1,0,0))
mlab.points3d(Xrandom2[:,0], Xrandom2[:,1], Xrandom2[:,2],color=(0,1,0))
mlab.axes()
mlab.show()
"""
Explanation: Plot the data so that it is "spread out" as much as possible.
End of explanation
"""
from sklearn.decomposition import PCA
X2D = PCA(n_components=2).fit_transform(X)
py.plot(X2D[y==0,0],X2D[y==0,1],'r.')
py.plot(X2D[y==1,0],X2D[y==1,1],'g.')
py.plot(X2D[y==2,0],X2D[y==2,1],'b.')
"""
Explanation: Can do the same thing with our classification data.
End of explanation
"""
fig = py.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
X3D = PCA(n_components=3).fit_transform(X)
ax.scatter(X3D[y==0, 0], X3D[y==0, 1], X3D[y==0, 2], c='r')
ax.scatter(X3D[y==1, 0], X3D[y==1, 1], X3D[y==1, 2], c='g')
ax.scatter(X3D[y==2, 0], X3D[y==2, 1], X3D[y==2, 2], c='b')
py.show()
"""
Explanation: Just as one can project from a high dimensional space to a two-dimensional space, one can also do the same thing to project to a three-dimensional space.
End of explanation
"""
mlab.clf()
mlab.points3d(X3D[y==0, 0], X3D[y==0, 1], X3D[y==0, 2],color=(1,0,0))
mlab.points3d(X3D[y==1, 0], X3D[y==1, 1], X3D[y==1, 2],color=(0,1,0))
mlab.points3d(X3D[y==2, 0], X3D[y==2, 1], X3D[y==2, 2],color=(0,0,1))
mlab.axes()
mlab.show()
"""
Explanation: And do the same with Mayavi.
End of explanation
"""
# Load in the support vector machine (SVM) library
from sklearn import svm
# If there is one thing that I want to harp on, it is the difference
# between testing and training errors! So, here we create a training
# set on which we computer the parameters of our algorithm, and a
# testing set for seeing how well we generalize (and work on real
# world problems).
np.random.seed(1236)
perm = np.random.permutation(len(y))
trainSize = 100
Xtrain = X[perm[:trainSize],0:2]
Xtest = X[perm[trainSize:],0:2]
yHat = np.zeros([len(y)])
# Exists a separator
#yHat[np.logical_or(y==1,y==2)] = 1
# No perfect separator
#yHat[np.logical_or(y==1,y==0)] = 1
# All the data
yHat = y
yHattrain = yHat[perm[:trainSize]]
yHattest = yHat[perm[trainSize:]]
"""
Explanation: <b>Let's go back to the notes for our first algorithm.</b>
Our first classification tool, Linear Support Vector Machines.
End of explanation
"""
# Some parameters we can get to play with
# If there is no perfect separator then how much do you penalize points
# that lay on the wrong side?
C = 100.
# The shape of the loss function for points that lay on the wrong side.
loss = 'l2'
# Run the calculation!
clf = svm.LinearSVC(loss=loss,C=C)
clf.fit(Xtrain, yHattrain)
# Make some plots, inspired by scikit-learn tutorial
from matplotlib.colors import ListedColormap
# step size in the mesh for plotting the decision boundary.
h = .02
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
py.figure(1, figsize=(8, 6))
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
py.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
py.scatter(Xtrain[:, 0], Xtrain[:, 1], c=yHattrain, cmap=cmap_bold,marker='o')
py.scatter(Xtest[:, 0], Xtest[:, 1], c=yHattest, cmap=cmap_bold,marker='+')
py.xlim(xx.min(), xx.max())
py.ylim(yy.min(), yy.max())
py.show()
# Print out some metrics
print 'training score',clf.score(Xtrain,yHattrain)
print 'testing score',clf.score(Xtest,yHattest)
"""
Explanation: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ <p><font color="red">But why do you do this? See the notes.</font>
End of explanation
"""
# Import the K-NN solver
from sklearn import neighbors
# If there is one thing that I want to harp on, it is the difference
# between testing and training errors! So, here we create a training
# set on which we compute the parameters of our algorithm, and a
# testing set for seeing how well we generalize (and work on real
# world problems).
np.random.seed(123)
perm = np.random.permutation(len(y))
trainSize = 50
Xtrain = X[perm[:trainSize],0:2]
Xtest = X[perm[trainSize:],0:2]
ytrain = y[perm[:trainSize]]
ytest = y[perm[trainSize:]]
# Some parameters to play around with
# The number of neighbors to use.
n_neighbors = 7
#weights = 'distance'
weights = 'uniform'
# Run the calculation
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(Xtrain, ytrain)
# Make some plots, inspired by the scikit-learn tutorial
# step size in the mesh for plotting the decision boundary.
h = .02
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
py.figure(1, figsize=(8, 6))
py.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
py.scatter(Xtrain[:, 0], Xtrain[:, 1], c=ytrain, cmap=cmap_bold,marker='o')
py.scatter(Xtest[:, 0], Xtest[:, 1], c=ytest, cmap=cmap_bold,marker='+')
py.xlim(xx.min(), xx.max())
py.ylim(yy.min(), yy.max())
py.show()
# Print out some scores.
print 'training score',clf.score(Xtrain,ytrain)
print 'testing score',clf.score(Xtest,ytest)
"""
Explanation: <b>Back to the notes to define our next method.</b>
Our second classification tool, K-nearest neighbors.
End of explanation
"""
# Old libraries that we know and love.
import numpy as np
import matplotlib.pylab as py
import pandas as pa
# Our new libraries.
from sklearn import cross_validation, linear_model, feature_selection, metrics
import mayavi.mlab as mlab
"""
Explanation: <b>Back to the notes.</b>
Loading in the libraries for regression.
End of explanation
"""
# Read in the data using
Xy = pa.read_csv('Advertising.csv')
# Take a look at the contents.
Xy
# Normalize data
# We do this to make plotting and processing easier. Many Sklearn functions do this
# for you behind the scenes, but we do it explicitly.
# Note that this is a cousin of the physics idea of nondimensionalization. Think
# about the case where TV was measured in millions, while Radio was measured in
# thousands. One could imagine TV totally washing out the effect of Radio.
# In effect, after normalization, each predictor now stands on an "even footing".
#
# Is this always a good idea?
Xy = (Xy-Xy.min())/(Xy.max()-Xy.min())
Xy
# Select out our predictor columns and our response columns
X = Xy.ix[:,['TV']]
y = Xy.ix[:,['Sales']]
# Last time we did this by hand, now we are smarter and use the sklearn
# routine. This routine splits data into training and testing subsets.
cross_validation.train_test_split([1,2,3,4,5],
[6,7,8,9,10],
test_size=0.4,
random_state=5)
# Now we do it for the real data.
X_train,X_test,y_train,y_test = cross_validation.train_test_split(X,
y,
test_size=0.8)
# Let's take a quick look at the data.
X_train
# Run the solver
reg = linear_model.LinearRegression(fit_intercept=True)
reg.fit(X_train,y_train)
# There are the slope and intercept of the line we computed.
# Beta_0
print reg.intercept_
# Beta_1
print reg.coef_
# Do a plot
plotX = np.linspace(0,1,100)
plotY = reg.predict(np.matrix(plotX).T)
py.plot(X_train,y_train,'ro')
py.plot(X_test,y_test,'go')
py.plot(plotX,plotY,'b-')
# Use the metrics package to print our errors. See discussion on slides.
print 'training error'
print metrics.mean_squared_error(y_train,reg.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,reg.predict(X_test))
"""
Explanation: Supervised Regression
Linear Regression
End of explanation
"""
# Select out our predictor columns and our response columns
X = Xy.ix[:,['TV','Radio']]
y = Xy.ix[:,['Sales']]
# Select subsets for training and testing
X_train,X_test,y_train,y_test = cross_validation.train_test_split(X,
y,
test_size=0.8,
random_state=123)
# Plot the data to get a feel for it.
mlab.clf()
mlab.points3d(X_train.ix[:,0]/X.ix[:,0].std(),
X_train.ix[:,1]/X.ix[:,1].std(),
y_train.ix[:,0]/y.ix[:,0].std(),
color=(1,0,0), scale_factor=0.2)
mlab.points3d(X_test.ix[:,0]/X.ix[:,0].std(),
X_test.ix[:,1]/X.ix[:,1].std(),
y_test.ix[:,0]/y.ix[:,0].std(),
color=(0,1,0), scale_factor=0.2)
mlab.axes()
mlab.show()
# Run the solver
reg = linear_model.LinearRegression(fit_intercept=True)
reg.fit(X_train,y_train)
# Create data for plotting
size=10
xPlot,yPlot = np.meshgrid(np.linspace(0,1,size),
np.linspace(0,1,size))
np.array([xPlot.flatten(),yPlot.flatten()])
zPlot = reg.predict(np.transpose(np.array([xPlot.flatten(),
yPlot.flatten()])))
zPlot = zPlot.reshape([size,size])
# Since we will be plotting many times, we define a helper function.
def myPlot(reg,X_train,y_train,X_test,y_test,xPlot,yPlot,zPlot,size=10,scale_factor=0.05):
mlab.clf()
mlab.points3d(X_train.ix[:,0],
X_train.ix[:,1],
y_train.ix[:,0],
color=(1,0,0), scale_factor=scale_factor)
mlab.points3d(X_test.ix[:,0],
X_test.ix[:,1],
y_test.ix[:,0],
color=(0,1,0), scale_factor=scale_factor)
mlab.mesh(xPlot,yPlot,zPlot,color=(0,0,1))
mlab.axes()
mlab.show()
myPlot(reg,X_train,y_train,X_test,y_test,xPlot,yPlot,zPlot)
# Use the metrics package to print our errors
print 'training error'
print metrics.mean_squared_error(y_train,reg.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,reg.predict(X_test))
"""
Explanation: <b>Back to slides.</b>
Multi-dimensional regression
End of explanation
"""
# Now we try non-linear fittng. See notes for details.
# Note that we add a new column which is a *non-linear* function
# of the original data!
XyNonlinear = Xy.copy()
XyNonlinear['TV*Radio'] = Xy['TV']*Xy['Radio']
# Select out our predictor columns and our response columns
X = XyNonlinear.ix[:,['TV','Radio','TV*Radio']]
y = XyNonlinear.ix[:,['Sales']]
# Select subsets for training and testing
X_train,X_test,y_train,y_test = cross_validation.train_test_split(X,
y,
test_size=0.8,
random_state=123)
# Run the solver
reg = linear_model.LinearRegression(fit_intercept=True)
reg.fit(X_train,y_train)
# Create data for plotting
size = 10
xPlot,yPlot = np.meshgrid(np.linspace(0,1,size),
np.linspace(0,1,size))
zPlot = reg.predict(np.transpose(np.array([xPlot.flatten(),
yPlot.flatten(),
(xPlot*yPlot).flatten()])))
zPlot = zPlot.reshape([size,size])
myPlot(reg,X_train,y_train,X_test,y_test,xPlot,yPlot,zPlot)
# Use the metrics package to print our errors
print 'training error'
print metrics.mean_squared_error(y_train,reg.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,reg.predict(X_test))
"""
Explanation: <b>Back to the notes.</b>
Non-linear fitting
End of explanation
"""
# What about adding many non-linear combinations! See notes for details.
degree=5
XCrazy = np.zeros([Xy.shape[0],degree**2])
for i in range(degree):
for j in range(degree):
XCrazy[:,i*degree + j] = (Xy['TV']**i)*(Xy['Radio']**j)
# Select subsets for training and testing
X_train,X_test,y_train,y_test = cross_validation.train_test_split(XCrazy,
y,
test_size=0.8,
random_state=123)
# Run the solver
regOver = linear_model.LinearRegression(fit_intercept=True)
regOver.fit(X_train,y_train)
print regOver.intercept_
print regOver.coef_
# Create data for plotting
size = 10
xPlot,yPlot = np.meshgrid(np.linspace(0,1,size),
np.linspace(0,1,size))
tmp = []
for i in range(degree):
for j in range(degree):
tmp.append( ( (xPlot**i)*(yPlot**j) ).flatten() )
zPlot = regOver.predict(np.transpose(np.array(tmp)))
zPlot = zPlot.reshape([size,size])
# Plot the data
# Select subsets for training and testing
X_train_plot,X_test_plot = cross_validation.train_test_split(Xy.ix[:,['TV','Radio']],
test_size=0.8,
random_state=123)
myPlot(reg,X_train_plot,y_train,X_test_plot,y_test,xPlot,yPlot,zPlot)
# Use the metrics package to print our errors
print 'training error'
print metrics.mean_squared_error(y_train,regOver.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,regOver.predict(X_test))
"""
Explanation: <b>Back to the notes.</b>
Too much of a good thing...
End of explanation
"""
# Fortunately, there is a *lot* that one can do to help. It is possible to have
# many predictors but still get good answers. See notes for details...
degree=5
XCrazy = np.zeros([Xy.shape[0],degree**2])
names = []
for i in range(degree):
for j in range(degree):
XCrazy[:,i*degree + j] = (Xy['TV']**i)*(Xy['Radio']**j)
names.append('TV**%d*Radio**%d'%(i,j))
# Select subsets for training and testing
X_train,X_test,y_train,y_test = cross_validation.train_test_split(XCrazy,
y,
test_size=0.8,
random_state=123)
# We can try None and 3 to see what we get.
selector = feature_selection.RFE(regOver,n_features_to_select=3)
selector.fit(X_train,y_train)
# Print out the predictors we use. These are the predictors selection by the RFE algorithm
# as the most important.
for i in range(len(names)):
print names[i],
print selector.get_support()[i]
# Create data for plotting
size = 10
xPlot,yPlot = np.meshgrid(np.linspace(0,1,size),
np.linspace(0,1,size))
tmp = []
for i in range(degree):
for j in range(degree):
tmp.append( ( (xPlot**i)*(yPlot**j) ).flatten() )
zPlot = selector.predict(np.transpose(np.array(tmp)))
zPlot = zPlot.reshape([size,size])
# Plot the data
# Select subsets for training and testing
X_train_plot,X_test_plot = cross_validation.train_test_split(Xy.ix[:,['TV','Radio']],
test_size=0.8,
random_state=123)
myPlot(reg,X_train_plot,y_train,X_test_plot,y_test,xPlot,yPlot,zPlot)
# Use the metrics package to print our errors
print 'training error'
print metrics.mean_squared_error(y_train,selector.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,selector.predict(X_test))
"""
Explanation: <b>Back to notes.</b>
Model Selection
End of explanation
"""
# Lasso regression is another method for doing feature selection.
# It is, by far, by favorite it is a close cousin of my personal
# research topic. See notes for details...
degree=5
XCrazy = np.zeros([Xy.shape[0],degree**2])
names = []
for i in range(degree):
for j in range(degree):
XCrazy[:,i*degree + j] = (Xy['TV']**i)*(Xy['Radio']**j)
names.append('TV**%d*Radio**%d'%(i,j))
# Select subsets for training and testing
X_train,X_test,y_train,y_test = cross_validation.train_test_split(XCrazy,
y,
test_size=0.8,
random_state=123)
# Run the solver
regLasso = linear_model.Lasso(alpha=0.002,fit_intercept=True,normalize=True)
regLasso.fit(X_train,y_train)
# Print out the predictors we use. These betas with non-zero weights are those
# selected by the Lasso algorithm as being the most important. What do you notice?
print regLasso.intercept_
for i in range(len(regLasso.coef_)):
print names[i],regLasso.coef_[i]
# Create data for plotting
size = 10
xPlot,yPlot = np.meshgrid(np.linspace(0,1,size),
np.linspace(0,1,size))
tmp = []
for i in range(degree):
for j in range(degree):
tmp.append( ( (xPlot**i)*(yPlot**j) ).flatten() )
zPlot = regLasso.predict(np.transpose(np.array(tmp)))
zPlot = zPlot.reshape([size,size])
# Plot the data
# Select subsets for training and testing
X_train_plot,X_test_plot = cross_validation.train_test_split(Xy.ix[:,['TV','Radio']],
test_size=0.8,
random_state=123)
myPlot(reg,X_train_plot,y_train,X_test_plot,y_test,xPlot,yPlot,zPlot)
# Use the metrics package to print our errors
print 'training error'
print metrics.mean_squared_error(y_train,regLasso.predict(X_train))
print 'testing error'
print metrics.mean_squared_error(y_test,regLasso.predict(X_test))
"""
Explanation: <b>Back to notes.</b>
Lasso!
End of explanation
"""
|
particle-physics-playground/playground | activities/activity01_cms_dimuons.ipynb | mit | import numpy as np
import matplotlib.pylab as plt
%matplotlib notebook
import h5hep
import pps_tools as hep
from file_download_tools import download_file
infile = "../data/dimuons_1000_collisions.hdf5"
print("Reading in the data....")
collisions = hep.get_collisions(infile,experiment='CMS',verbose=False)
print(len(collisions))
"""
Explanation: Looking at the dimuon spectrum over a wide energy range
<h3>Learning goals</h3>
<ul>
<li>Relativistic kinematics.
<li>Mesons.
</ul>
<b>Background</b>
To determine the mass ($m$) of a particle you need to know the 4-momenta of the particles ($\mathbf{P}$) that are detected after the collision: the energy ($E$), the momentum in the x direction ($p_x$), the momentum in the y direction ($p_y$), the momentum in the z direction ($p_z$).
$$\mathbf{P} = (E,p_x,p_y,p_z)$$
\begin{equation} m = \sqrt{E^2-(p_x^2+p_y^2 + p_z^2)} \end{equation}
Some particles are very unstable and decay (turn into) two or more other particles. In fact, they can decay so quickly that they never interact with your detector! Yikes!
However, we can reconstruct the parent particle (sometimes referred to as <b>the initial state particle</b>) and its 4-momentum by adding the 4-momenta of the child particles (sometimes referred to as <b>the decay products</b>).
$$\mathbf{P_{\rm parent}} = \mathbf{P_{\rm child 0}} + \mathbf{P_{\rm child 1}} + \mathbf{P_{\rm child 2}} + ...$$
which breaks down into...
$$E_{\rm parent} = E_{\rm child 0} + E_{\rm child 1} + E_{\rm child 2} + ...$$
$$p_{\rm x parent} = p_{\rm x child 0} + p_{\rm x child 1} + p_{\rm x child 2} + ...$$
$$p_{\rm y parent} = p_{\rm y child 0} + p_{\rm y child 1} + p_{\rm y child 2} + ...$$
$$p_{\rm z parent} = p_{\rm z child 0} + p_{\rm z child 1} + p_{\rm z child 2} + ...$$
<b>Let's code!</b>
Here is some very, very basic starter code. It reads in data from the CMS experiment.
If you haven't already, you will want to go through the Data Interfacing model (also included when you cloned this directory) exercise so you know how to pull out the relevant information.
In order to see the full physics of the dimuon system, we need a larger data file than the one used for the previous activity (this one has 100,000 collisions rather than 1,000). The code for doing so is shown below, but for more details on how to download other files, see the download more data exercise, also included in this repository.
End of explanation
"""
from IPython.display import Image
Image(filename='images/dimuons_sketch.jpeg')
#your code here
"""
Explanation: <h2><font color="red">Challenge!</font></h2>
Use the sample code to find the mass of the particle that the two muons came from (parent particle).
To do this, you will need to loop over all pairs of muons for each collision, sum their 4-momenta (energy, px, py, and pz) and then use that to calculate the invariant mass.
Do this for all possible pairs and in addition, break it down so that you calculate the invariant mass for the cases where:
* Both muons are positively charged.
* Both muons are negatively charged.
* The muons have opposite charges.
Be careful. Some collisions may have more than 2 muons, so write your code such that it calculates all possible pairs of muons in a given collisions. For example, if there are 3 muons in a collision, there are 3 possible pairs that you can make.
<i>Hint!</i>
A peak in the data most likely corresponds to the mass of a particle, although this is not always true.
You can use the approximate mass of each peak to figure out which particle
is found in the data.
Your histogram should look something like the following sketch. The value of the peaks should be the mass of a particle. You should be able to find two particles in their ground state. <a href="http://en.wikipedia.org/wiki/J/psi_meson">Check your answer for the first particle!</a> <a href="http://en.wikipedia.org/wiki/Upsilon_meson">Check your answer for the second particle!</a>
End of explanation
"""
|
giraph/data-sci | poker/Poker Odds.ipynb | unlicense | KNOWN = 5
UNKNOWN = 47
def card_odds(outs):
duds = UNKNOWN - outs
#return '%d:%d' % (duds, outs)
return '%d:%d' % (round(duds/outs), 1)
print(card_odds(1))
print(card_odds(6))
print(card_odds(11))
print(card_odds(16))
print(card_odds(21))
print(card_odds(26))
print(card_odds(31))
print(card_odds(36))
"""
Explanation: Pot odds simply involves using the odds or likelihood of winning when on a drawing hand to decide whether or not to call a bet or a raise.
Ratio method
1. Calculate the card odds
End of explanation
"""
def pot_odds(pot, bet):
#return '%d:%d' % (pot, bet)
return '%d:%d' % (round(pot/bet), 1)
print(pot_odds(100,20))
print(pot_odds(100,40))
print(pot_odds(100,60))
print(pot_odds(100,80))
def call_or_fold_rat(outs, pot, bet):
    #return ((UNKNOWN - outs)/outs) > pot/bet
return 'call' if (UNKNOWN - outs)/outs < pot/bet else 'fold'
call_or_fold_rat(9,100,20)
call_or_fold_rat(9,100,50)
"""
Explanation: 2. Compare with pot odds
End of explanation
"""
def card_equity(outs):
return 2 * outs + 1
card_equity(9)
"""
Explanation: Percentage Method
1. Calculate the card equity
End of explanation
"""
def pot_equity(pot, bet):
return round(bet / (pot + bet) * 100)
pot_equity(100,20)
def call_or_fold_pct(outs, pot, bet):
return 'call' if (2 * outs + 1) > (bet / (pot + bet) * 100) else 'fold'
call_or_fold_pct(9,100,20)
call_or_fold_pct(9,100,50)
"""
Explanation: 2. Calculate the pot equity and compare
End of explanation
"""
def call_or_fold_24(outs, pot, bet):
return 'call' if (2 * outs) > (bet / (pot + bet) * 100) else 'fold'
call_or_fold_24(9,100,20)
call_or_fold_24(9,100,50)
def stella(outs, pot, bet):
return 'call' if (2 * outs) > (bet / (pot + bet) * 100) else 'fold'
stella(9,100,20)
"""
Explanation: |outs|held | $\to$ |desired |
|:--:|:----------------------|:------|:------------------|
| 2 | pair | $\to$ | trips |
| 4 | two pair | $\to$ | full house |
| 4 | gut shot | $\to$ | straight |
| 6 | overcards | $\to$ | pair |
| 8 | open-ended straight | $\to$ | straight |
| 9 | four flush | $\to$ | flush |
| 15 | straight & flush draw | $\to$ | straight or flush |
Rule of 2/4
Multiply your outs by 2 when you are on the flop waiting for the turn.
Multiply your outs by 2 when you are on the turn waiting for the river.
Multiply your outs by 4 when you are on the flop waiting for the river (opponent is all-in).
Call if the card equity is greater than the pot equity (percentage method).
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/thu/cmip6/models/sandbox-3/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'sandbox-3', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: THU
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework in the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# read data
df = pd.read_csv('HR_comma_sep.csv')
# print first rows
df.head()
# print info, we have no nulls
df.info()
# describe numeric columns
# satisfaction_level and last_evaluation seem to be percentages
# work_accident, left and promotion are booleans
df.describe()
# describe object columns and print unique values
print(df[['sales', 'salary']].describe())
print(df.sales.unique())
print(df.salary.unique())
"""
Explanation: HR Dataset - Statistics Review
Explore the data
The data set we will use for this exercise comes from a Kaggle challenge and is often used for predictive analytics, namely to predict why the best and most experienced employees tend to leave the company. We won't be using it for any predictive purposes here, but will instead use this data set to review many of the concepts explored in the Statistical Inference lectures.
This data contains fields for various measures of employee performance and reported satisfaction levels, as well as categorical variables for events and salary level. For now, just explore the data a bit to get a general idea of what is going on.
End of explanation
"""
n_employees = len(df)
left = df.left.sum()
accident = df.Work_accident.sum()
accident_left = len(df[(df['Work_accident'] == 1) & (df['left'] == 1)])
# probability that a randomly selected employee left the company
print(left/n_employees)
# probability that experienced a work accident
print(accident/n_employees)
# probability that a randomly selected employee left the company and experienced a work accident
print(accident_left/n_employees)
# Creating two dataframes, one for employees who left and one for those who stayed
df_left = df[df['left'] == 1]
df_stayed = df[df['left'] == 0]
# Compute the 25th, 50th, and 90th percentiles for the satisfaction level score for all employees that left the company.
print('Employees who left 25th, 50th and 90th percentile: {}, {}, {}'
.format(df_left.satisfaction_level.quantile(0.25),
df_left.satisfaction_level.quantile(0.5),
df_left.satisfaction_level.quantile(0.9)))
# Compare these results to the same percentiles for those that did not leave. What can you say about the results?
print('Employees who stayed 25th, 50th and 90th percentile: {}, {}, {}'
.format(df_stayed.satisfaction_level.quantile(0.25),
df_stayed.satisfaction_level.quantile(0.5),
df_stayed.satisfaction_level.quantile(0.9)))
"""
Explanation: Probability, Expectation Values, and Variance
The concepts of probability, expectation values, and variance are the bedrock of statistical inference. Let's begin by employing some of these concepts to see if we can find some interesting paths to go down which may provide some insight into the inner workings of this company.
What is the probability that a randomly selected employee left the company? What about experienced a work accident? Also compute the probability that a randomly selected employee left the company and experienced a work accident.
Compute the 25th, 50th, and 90th percentiles for the satisfaction level score for all employees that left the company. Compare these results to the same percentiles for those that did not leave. What can you say about the results?
Compute the variance and standard deviation of hours worked.
Compare the variance between the satisfaction levels of employees who left versus those who stayed. Which is larger? What does this mean?
Compute the mean satisfaction level for each salary category. Comment on your results.
Given an employee's salary level (low, medium, or high), calculate the probability that they worked more than two standard deviations above the mean monthly hours across all groups. In other words, compute
$$P(hours > 2\sigma \vert salary ) = \dfrac{P(salary \vert hours > 2\sigma) P(hours > 2\sigma)}{P(salary)}$$
What can you say about your results in part 6?
Repeat parts 6 and 7 for
$$P(left \vert salary ) = \dfrac{P(salary \vert left) P(left)}{P(salary)}$$
What is the odds ratio of an employee with a high salary getting a promotion within the past five years versus a low salary employee? Comment on your results.
Suppose we were to pull a random sample of size 50 of employee satisfaction levels. What would approximately be the mean of this sample? What would be the mean of, say, 10 sets of random samples? Demonstrate your assertions by writing some python code to do just that.
End of explanation
"""
# Compute the variance and standard deviation of hours worked.
print(df.average_montly_hours.var())
print(df.average_montly_hours.std())
# Compare the variance between the satisfaction levels of employees who left versus those who stayed.
# Which is larger? What does this mean?
print(df_left.satisfaction_level.var())
print(df_stayed.satisfaction_level.var())
"""
Explanation: There seems to be a difference but before we draw any conclusion we would need to perform a hypothesis test:
End of explanation
"""
# Compute the mean satisfaction level for each salary category. Comment on your results.
df.groupby('salary').satisfaction_level.mean()
"""
Explanation: The variance in the satisfaction levels is larger for employees who left, so the satisfaction level for this employees is more spread out around the mean. This may indicate that the employees leaving the company have a level of satisfaction more variable than those who stay.
End of explanation
"""
# Given an employee's salary level (low, medium, or high), calculate the probability that
# they worked more than two standard deviations above the mean monthly hours across all groups.
# In other words, compute P(hours > 2*sigma|salary) = P(salary|hours > 2*sigma)*P(hours > 2*sigma)/P(salary)
# Creating a dataset for each salary level
df_low = df[df['salary'] == 'low']
df_medium = df[df['salary'] == 'medium']
df_high = df[df['salary'] == 'high']
# And one for employees who have worked more than two std above
sigma_hours = df.average_montly_hours.mean() + 2*df.average_montly_hours.std()
df_above = df[df['average_montly_hours'] > sigma_hours]
P_hours = len(df_above) / n_employees
for d, level in zip([df_low, df_medium, df_high], ['low', 'medium', 'high']):
P_salary = len(d) / n_employees
P_salary_hours = len(df_above[df_above['salary'] == level]) / len(df_above)
P = (P_salary_hours * P_hours) / P_salary
P_hours_salary = len(d[d['average_montly_hours'] > sigma_hours]) / len(d)
print('{} salary level probability: {:.5f}, {:.5f}'.format(level, P, P_hours_salary))
"""
Explanation: The satisfaction level increases with the salary, as expected. There seems, though, to be a more marked difference between low and medium than between medium and high salaries, but again we would need a test to see whether the difference is significant.
End of explanation
"""
# Repeat parts 6 and 7 for P(left|salary) = P(salary|left)*P(left)/P(salary)
P_left = len(df_left) / n_employees
for d, level in zip([df_low, df_medium, df_high], ['low', 'medium', 'high']):
P_salary = len(d) / n_employees
P_salary_left = len(df_left[df_left['salary'] == level]) / len(df_left)
P = (P_salary_left * P_left) / P_salary
P_left_salary = len(d[d['left'] == 1]) / len(d)
print('{} salary level probability: {:.5f}, {:.5f}'.format(level, P, P_left_salary))
"""
Explanation: What can you say about your results in part 6?
There seems to be a clear trend: an employee who worked more hours is more likely to be found in the low and medium salary levels than in the high one. This difference, though it seems quite marked, should be verified through a test.
End of explanation
"""
# What is the odds ratio of an employee with a high salary getting a promotion
# within the past five years versus a low salary employee? Comment on your results.
p_high = df_high.promotion_last_5years.value_counts() / len(df_high)
p_low = df_low.promotion_last_5years.value_counts() / len(df_low)
print(p_high)
print(p_low)
print((p_high[1] / p_high[0]) / (p_low[1] / p_low[0]))
"""
Explanation: As above, the probability that an employee left the company given a certain salary level is higher for the low and medium salary levels than for the high one.
End of explanation
"""
import random
# Demonstrate your assertions by writing some python code to do just that.
random.seed(7)
size = 50
s = random.sample(range(0, n_employees), size)
print('Dataset mean: {}\nSample mean: {}'.format(df.satisfaction_level.mean(), df.iloc[s].satisfaction_level.mean()))
random.seed(7)
size = 50
n_samples = 10
mean = 0
for i in range(n_samples):
s = random.sample(range(0, n_employees), size)
mean += df.iloc[s].satisfaction_level.mean()
mean = mean / n_samples
print('Dataset mean: {}\nMean of sample means: {}'.format(df.satisfaction_level.mean(), mean))
"""
Explanation: The probability is noticeably higher for the high salary level than for the low one.
I think this is partly explained by the fact that to reach a high salary level you had to be promoted, whereas if you are in a low level there is some chance that you are a newly arrived employee who couldn't have been promoted yet.
Suppose we were to pull a random sample of size 50 of employee satisfaction levels.
What would approximately be the mean of this sample? Somewhere near the mean of the entire dataset, but not that near, because the sample size isn't that big.
What would be the mean of, say, 10 sets of random samples? Closer to the mean of the dataset, because we have taken more samples even though the size of each sample hasn't changed.
End of explanation
"""
from scipy import stats
"""
Explanation: Distributions and The Central Limit Theorem
The Bernoulli Distribution
Bernoulli distributions are the result of a random variable with a binary outcome, like a coin flip or medical test giving a positive or negative result. Typically we represent the outcomes of a Bernoulli Random variable $X$ of only taking values of 0 or 1, with probabilities $p$ and $1 - p$ respectively, mean $p$, variance $p(1 - p)$, and PMF given by
$$ P(X = x) = p^x (1 - p)^{1 - x} $$
Where $x$ is the outcome and $p$ is the probability of the positive outcome (1).
Bernoulli random variables crop up very often in statistical analysis — most often in the form of Binomial trials, or, as a sum of independent Bernoulli variables with PMF given by
$$ P(X = x) = {n \choose x} p^x (1 - p)^{n - x} $$
where
$$ {n \choose x} = \frac{n!}{x!(n - x)!} $$
In this exercise you'll take a look at the HR data and apply these concepts to gain some insight.
Using the HR data, answer the following.
1. Which variables in the HR data can be said to be Bernoulli random variables?
2. For the variables you identified in part 1, compute the probabilities $p_k$, of each having a positive $(x = 1)$ result, where $k$ is a placeholder for each variable.
3. Compute the variance of each of the variables in part 2 using $p_k$ as described above.
4. For each of the k variables, compute the probability of randomly selecting 3500 employees with a positive result. Comment on your answer.
5. For each of the k variables, compute the probability of randomly selecting 3500 or less with a positive result. Comment on your answer.
6. Now plot both the PMF and CDF as a function of the number of drawn samples for each of the k variables. Comment on your results.
End of explanation
"""
# For the variables you identified in part 1, compute the probabilities p_k, of each having a positive (x = 1) result,
# where k is a placeholder for each variable.
bernoulli = ['Work_accident', 'left', 'promotion_last_5years']
for var in bernoulli:
print('probability for {}: {:.5f}'.format(var, df[var].sum() / n_employees))
# Compute the variance of each of the variables in part 2 using p_k as described above.
for var in bernoulli:
p = df[var].sum() / n_employees
print('variance for {}: {:.5f}'.format(var, p*(1-p)))
# For each of the k variables, compute the probability of randomly selecting 3500 employees with a positive result.
# Comment on your answer.
x = 3500
for var in bernoulli:
p = df[var].sum() / n_employees
print('PMF for {}: {:.5f}'.format(var, stats.binom.pmf(x, n_employees, p)))
# For each of the k variables, compute the probability of randomly selecting 3500 or less with a positive result.
# Comment on your answer.
x = 3500
for var in bernoulli:
p = df[var].sum() / n_employees
print('CDF for {}: {:.5f}'.format(var, stats.binom.cdf(x, n_employees, p)))
# Now plot both the PMF and CDF as a function of the number of drawn samples for each of the k variables.
# Comment on your results.
x = np.arange(0, n_employees)
fig, ax = plt.subplots(3, 2, figsize=(16, 15))
i = j = 0
for var in bernoulli:
p = df[var].sum() / n_employees
ax[i, 0].plot(x, stats.binom.pmf(x, n_employees, p))
ax[i, 0].set_title(var +' PMF')
ax[i, 1].plot(x, stats.binom.cdf(x, n_employees, p))
ax[i, 1].set_title(var + ' CDF')
i += 1
"""
Explanation: Which variables in the HR data can be said to be Bernoulli random variables?
I think these variables are good candidates:
Work_accident
left
promotion_last_5years
End of explanation
"""
# For the variables in part 1, plot some histograms.
normal = ['satisfaction_level', 'last_evaluation', 'number_project', 'average_montly_hours']
fig, ax = plt.subplots(2, 2, figsize=(16, 10))
i = j = 0
for var in normal:
if j == 2:
i += 1
j = 0
ax[i, j].hist(df[var], bins=50)
ax[i, j].set_title(var)
j += 1
# Compute the mean and variance for each of the variables used in parts 1 and 2.
for var in normal:
print('{}\n\tmean = {:.5f}\n\tvariance = {:.5f}'.format(var, df[var].mean(), df[var].var()))
# Using the mean and variance in part 3, construct normal distributions for each
# and overlay them on top of the histograms you made in part one.
# Are they well approximated by normals?
fig, ax = plt.subplots(2, 2, figsize=(16, 10))
i = j = 0
for var in normal:
x = np.linspace(min(df[var]), max(df[var]), 1000)
Z = stats.norm.pdf(x, loc=df[var].mean(), scale=df[var].std())
if j == 2:
i += 1
j = 0
    ax[i, j].hist(df[var], bins=50, density=True)  # 'normed' was removed in newer matplotlib
ax[i, j].set_title(var)
ax[i, j].plot(x, Z)
j += 1
"""
Explanation: The Normal Distribution
The Normal distribution (or sometimes called the Bell Curve or Gaussian) is by far the most prevalent and useful distribution in any field that utilizes statistical techniques. In fact, it can be shown that the means of random variables sampled repeatedly from any distribution eventually form a normal given a sufficiently large sample size.
A normal distribution is characterized by the PDF given by
$$p(x|\mu,\sigma) = \frac{1}{\sqrt{(2\pi\sigma^2)}}e^{-\frac{(x - \mu)^2}{2\sigma^2}} $$
where $\mu$ is the mean and $\sigma^2$ is the variance, thus the distribution is characterized by mean and variance alone. In this exercise, you'll examine some of the variables in the HR dataset and construct some normal distributions approximating them.
Using the HR data, answer the following
Which variables may be approximately normal?
For the variables in part 1, plot some histograms.
Compute the mean and variance for each of the variables used in parts 1 and 2.
Using the mean and variance in part 3, construct normal distributions for each and overlay them on top of the histograms you made in part one. Are they well approximated by normals?
Which variables may be approximately normal?
satisfaction_level
last_evaluation
number_project
average_montly_hours
End of explanation
"""
# For each variable in part 1, divide each by salary and fit a Poisson distribution to each.
poisson = ['time_spend_company']
for var in poisson:
for level in ['low', 'medium', 'high']:
mu = df[df['salary'] == level][var].mean()
fit = stats.poisson(mu)
print('{} {} mean and Poisson fit mean: {:.5f}, {:.5f}'.format(var, level, mu, fit.mean()))
x = np.arange(0, 20)
fig, ax = plt.subplots(3, 1, figsize=(16, 15))
i = 0
for var in poisson:
for level in ['low', 'medium', 'high']:
mu = df[df['salary'] == level][var].mean()
ax[i].set_title(var + ' ' + level + ' salary')
ax[i].plot(x, stats.poisson.pmf(x, mu))
        ax[i].hist(df[df['salary'] == level][var], bins=20, density=True)  # 'normed' was removed in newer matplotlib
i += 1
# For each salary level, compute the probability of obtaining at least the mean of each variable
# regardless of salary level by using the Poisson distributions you constructed in part 2.
# Comment on your results.
for var in poisson:
global_mean = df[var].mean()
# print(global_mean)
for level in ['low', 'medium', 'high']:
mu = df[df['salary'] == level][var].mean()
# print(mu, df[df['salary'] == level][var].median())
print('{} {} salary level probability is {:.5f}'.format(var, level, stats.poisson.sf(global_mean, mu)))
"""
Explanation: The Poisson Distribution
The Poisson distribution is very versatile but is typically used to model counts per unit time or space, such as the number of ad clicks or arriving flights, each per unit time. It has a PDF given by
$$ P(X = x, \lambda) = \frac{\lambda^x e^{-\lambda}}{x!} $$
where the mean and variance are both equal to $\lambda$
Using the HR data, answer the following.
What variables would be good candidates for modeling with a Poisson distribution?
For each variable in part 1, divide each by salary and fit a Poisson distribution to each.
For each salary level, compute the probability of obtaining at least the mean of each variable — regardless of salary level — by using the Poisson distributions you constructed in part 2. Comment on your results.
What variables would be good candidates for modeling with a Poisson distribution?
time_spend_company
End of explanation
"""
# Choose two variables which may be good candidates to test this theorem.
central = ['average_montly_hours', 'last_evaluation']
# Using the variables chosen in part 1, randomly select a set of n = 10 samples and take the mean.
# Repeat this 1000 times for each variable.
def make_samples(n_samples, size, seed):
res = {}
for var in central:
random.seed(seed)
mean = []
for i in range(n_samples):
s = random.sample(range(0, n_employees), size)
mean.append(df.iloc[s][var].mean())
# from solution using a list comprehension:
# mean = [df[var].sample(size).mean() for i in range(n_samples)]
res[var] = mean
return res
sampsize10 = make_samples(1000, 10, 7)
# Plot a histogram for each variable used in part 2. Comment on your results.
def plot_samples(samples):
fig, ax = plt.subplots(1, 2, figsize=(16, 5))
i = 0
for var in central:
ax[i].set_title(var)
ax[i].hist(samples[var], bins=50)
i += 1
plot_samples(sampsize10)
# Repeat parts 2-3 for n = 100, n = 500, and n = 1000. Comment on your results.
sampsize100 = make_samples(1000, 100, 7)
plot_samples(sampsize100)
sampsize500 = make_samples(1000, 500, 7)
plot_samples(sampsize500)
sampsize1000 = make_samples(1000, 1000, 7)
plot_samples(sampsize1000)
# Overlay a normal curve on your n = 1000 plots, using the mean and variance computed from the data.
# Comment on your results.
fig, ax = plt.subplots(1, 2, figsize=(16, 5))
i = 0
for var in central:
x = np.linspace(min(sampsize1000[var]), max(sampsize1000[var]), 1000)
ax[i].set_title(var)
    ax[i].hist(sampsize1000[var], bins=50, density=True)  # 'normed' was removed in newer matplotlib
    # note: the std of the collection of sample means already estimates sigma/sqrt(n),
    # so it should not be divided by sqrt(1000) again as the solutions suggest
ax[i].plot(x, stats.norm.pdf(x, loc=pd.Series(sampsize1000[var]).mean(), scale=pd.Series(sampsize1000[var]).std()), color='red')
i += 1
"""
Explanation: The Central Limit Theorem
The Central Limit Theorem is perhaps one of the most remarkable results in statistics and mathematics in general. In short, it says that the distribution of means of independent random variables, sampled from any distribution, tends to approach a normal distribution as the sample size increases.
An example of this would be taking a pair of dice, rolling them, and recording the mean of each result. The Central Limit Theorem states, that after enough rolls, the distribution of the means will be approximately normal. Stated formally, the result is
$$ \bar{X_n} \sim N(\mu, \sigma^2/n), \quad p(\bar{X_n}) = \frac{\sqrt{n}}{\sigma \sqrt{2\pi}}e^{-n(\bar{X_n} - \mu)^2/2\sigma^2}$$
In this exercise, you'll conduct some simulation experiments to explore this idea.
Using the HR data, answer the following.
1. Choose two variables which may be good candidates to test this theorem.
2. Using the variables chosen in part 1, randomly select a set of n = 10 samples and take the mean. Repeat this 1000 times for each variable.
3. Plot a histogram for each variable used in part 2. Comment on your results.
4. Repeat parts 2-3 for n = 100, n = 500, and n = 1000. Comment on your results.
5. Overlay a normal curve on your n = 1000 plots, using the mean and variance computed from the data. Comment on your results.
End of explanation
"""
# Compute a confidence interval for satisfaction levels, at the 95% confidence level,
# of employees who left the company and those who didn't.
# Do this using both a t distribution and a normal. Comment on your results.
import statsmodels.stats.api as sm
# checking mean and variance
print('Employees who left mean and variance are {:.5f} and {:.5f}'.
format(df_left.satisfaction_level.mean(),
df_left.satisfaction_level.var()))
print('Employees who stayed mean and variance are {:.5f} and {:.5f}'.
format(df_stayed.satisfaction_level.mean(),
df_stayed.satisfaction_level.var()))
# using normal distribution
print('\nNormal Distribution\n')
# df_left_norm_confidence = df_left.satisfaction_level.mean() + df_left.satisfaction_level.std() / np.sqrt(len(df_left)) * np.array(stats.norm.ppf([0.025, 0.975]))
# print('Employees who left 95% confidence interval: {}'.format(df_left_norm_confidence))
# print('Employees who left 95% confidence interval: {}'.
# format(stats.norm.interval(0.95,
# df_left.satisfaction_level.mean(),
# df_left.satisfaction_level.std()/np.sqrt(len(df_left)))))
print('Employees who left 95% confidence interval: {}'.format(sm.DescrStatsW(df_left.satisfaction_level).zconfint_mean(alpha=0.05)))
# df_stayed_norm_confidence = df_stayed.satisfaction_level.mean() + df_stayed.satisfaction_level.std() / np.sqrt(len(df_stayed)) * np.array(stats.norm.interval(0.95))
# print('Employees who stayed 95% confidence interval: {}'.format(df_stayed_norm_confidence))
# print('Employees who stayed 95% confidence interval: {}'.
# format(stats.norm.interval(0.95,
# df_stayed.satisfaction_level.mean(),
# df_stayed.satisfaction_level.std()/np.sqrt(len(df_stayed)))))
print('Employees who stayed 95% confidence interval: {}'.format(sm.DescrStatsW(df_stayed.satisfaction_level).zconfint_mean(alpha=0.05)))
# using t distribution
print('\nT Distribution with n={}\n'.format(n_employees))
# df_left_t_confidence = df_left.satisfaction_level.mean() + df_left.satisfaction_level.std() / np.sqrt(len(df_left)) * np.array(stats.t.interval(0.95, n_employees))
# print('Employees who left 95% confidence interval: {}'.format(df_left_t_confidence))
# print('Employees who left 95% confidence interval: {}'.
# format(stats.t.interval(0.95,
# n_employees,
# df_left.satisfaction_level.mean(),
# df_left.satisfaction_level.std()/np.sqrt(len(df_left)))))
print('Employees who left 95% confidence interval: {}'.format(sm.DescrStatsW(df_left.satisfaction_level).tconfint_mean(alpha=0.05)))
# df_stayed_t_confidence = df_stayed.satisfaction_level.mean() + df_stayed.satisfaction_level.std() / np.sqrt(len(df_stayed)) * np.array(stats.t.interval(0.95, n_employees))
# print('Employees who stayed 95% confidence interval: {}'.format(df_stayed_t_confidence))
# print('Employees who stayed 95% confidence interval: {}'.
# format(stats.t.interval(0.95,
# n_employees,
# df_stayed.satisfaction_level.mean(),
# df_stayed.satisfaction_level.std()/np.sqrt(len(df_stayed)))))
print('Employees who stayed 95% confidence interval: {}'.format(sm.DescrStatsW(df_stayed.satisfaction_level).tconfint_mean(alpha=0.05)))
"""
Explanation: Hypothesis Testing
Hypothesis testing is essentially using the data to answer questions of interest. For example, does a new medication provide any benefit over placebo? Or is a subset of the population disproportionately more susceptible to a particular disease? Or is the difference between two companies' profits significant or due to chance alone?
Before doing some hypothesis testing on the HR data, recall that hypotheses typically come in pairs of the form $H_0$, called the null hypothesis, versus $H_a$, called the alternative hypothesis. The null hypothesis represents the "default" assumption -- that a medication has no effect, for example -- while the alternative hypothesis represents what exactly we are looking to discover, in the medication case, whether it provides a significant benefit. Another common case is testing the difference between two means. Here, the null hypothesis is that there is no difference between two population means, whereas the alternative hypothesis is that there is a difference. Stated more precisely
$$H_0: \mu_1 - \mu_2 = 0$$
$$H_a: \mu_1 - \mu_2 \ne 0$$
Hypotheses are usually tested by constructing a confidence interval around the test statistic and selecting a "cut-off" significance level denoted $\alpha$. A typical significance level is $\alpha = 0.05$; the probability computed from the test statistic is called a "p-value". If a test produces a p-value of $\alpha$ or below, then the null hypothesis can be rejected, strengthening the case for the alternative hypothesis. It is very important to remember that hypothesis testing can only tell you if your result is statistically significant -- this does not mean that it is scientifically significant, which requires much more evidence.
In this exercise you'll explore the HR data more and test some hypotheses.
Using the HR data, answer the following.
Compute a confidence interval for satisfaction levels, at the 95% confidence level, of employees who left the company and those who didn't. Do this using both a t distribution and a normal. Comment on your results.
Use a t-test to test the hypothesis that employees who left the company had lower satisfaction levels than those who did not. If significant, what is the mean difference? Comment on your results. (Hint: Do the two populations have equal variance?)
Fit a normal curve to each group in part 2 and put them on the same plot next to each other. Comment on your results.
Test the hypothesis that the satisfaction level of each salary group, denoted k, differs significantly from the overall mean. Namely
$H_0: \mu - \mu_k = 0$
$H_a: \mu - \mu_k \ne 0$
How would you interpret your results in part 4?
Generate plots for part 4 as you did in part 3. What conclusions can you draw from the plot?
Repeat parts 4-6 on a hypothesis of your choosing.
Recall that power is the probability of correctly rejecting the null hypothesis when it is false (thus more power is good). Compute the power for the hypothesis that the satisfaction level of high paid employees is different than that of medium paid employees using a t distribution.
End of explanation
"""
# Use a t-test to test the hypothesis that employees who left the company
# had lower satisfaction levels than those who did not. If significant, what is the mean difference?
# Comment on your results. (Hint: Do the two populations have equal variance?)
print('Test assuming equal variance: {}'.format(stats.ttest_ind(df_left.satisfaction_level,
df_stayed.satisfaction_level,
equal_var=True)))
print('Test assuming different variance: {}'.format(stats.ttest_ind(df_left.satisfaction_level,
df_stayed.satisfaction_level,
equal_var=False)))
"""
Explanation: The results are almost the same because for $n \rightarrow \infty$ the t distribution tends to the normal.
End of explanation
"""
# Fit a normal curve to each group in part 2 and put them on the same plot next to each other. Comment on your results.
fig = plt.figure(figsize=(15, 10))
ax = plt.axes()
x1 = np.linspace(df_left.satisfaction_level.min(), df_left.satisfaction_level.max(), 1000)
ax.plot(x1, stats.norm.pdf(x1, df_left.satisfaction_level.mean(), df_left.satisfaction_level.std() / np.sqrt(len(df_left))), label='left')
x2 = np.linspace(df_stayed.satisfaction_level.min(), df_stayed.satisfaction_level.max(), 1000)
ax.plot(x2, stats.norm.pdf(x2, df_stayed.satisfaction_level.mean(), df_stayed.satisfaction_level.std() / np.sqrt(len(df_stayed))), label='stayed')
ax.legend();
"""
Explanation: The difference is significant, and since the mean of employees who left is lower we can say that the hypothesis that they had a lower satisfaction level is statistically significant (we have a two-tailed test but the p-value is very small in either case).
Also, the test is significant whether or not we assume the populations have equal variance.
End of explanation
"""
# Test the hypothesis that the satisfaction level of each salary group, denoted k,
# differs significantly from the mean. Namely
# H0: mu - mu_k = 0
# Ha: mu - mu_k != 0
# print(df.satisfaction_level.mean())
# print(df_high.satisfaction_level.mean())
# print(df_medium.satisfaction_level.mean())
# print(df_low.satisfaction_level.mean())
# print(df.satisfaction_level.var())
# print(df_high.satisfaction_level.var())
# print(df_medium.satisfaction_level.var())
# print(df_low.satisfaction_level.var())
print('High level test: {}'.format(stats.ttest_1samp(df_high.satisfaction_level, df.satisfaction_level.mean())))
print('Medium level test: {}'.format(stats.ttest_1samp(df_medium.satisfaction_level, df.satisfaction_level.mean())))
print('Low level test: {}'.format(stats.ttest_1samp(df_low.satisfaction_level, df.satisfaction_level.mean())))
"""
Explanation: From the plots we can see that the peaks of the normal curves are far apart, thus backing up our test results.
End of explanation
"""
# Generate plots for part 4 as you did in part 3. What conclusions can you draw from the plot?
fig = plt.figure(figsize=(15, 10))
ax = plt.axes()
# x1 = np.linspace(df.satisfaction_level.min(), df.satisfaction_level.max(), 1000)
# ax.plot(x1, stats.norm.pdf(x1, df.satisfaction_level.mean(), df.satisfaction_level.std() / np.sqrt(len(df))), label='all')
x2 = np.linspace(df_low.satisfaction_level.min(), df_low.satisfaction_level.max(), 1000)
ax.plot(x2, stats.norm.pdf(x2, df_low.satisfaction_level.mean(), df_low.satisfaction_level.std() / np.sqrt(len(df_low))), label='low')
x3 = np.linspace(df_medium.satisfaction_level.min(), df_medium.satisfaction_level.max(), 1000)
ax.plot(x3, stats.norm.pdf(x3, df_medium.satisfaction_level.mean(), df_medium.satisfaction_level.std() / np.sqrt(len(df_medium))), label='medium')
x4 = np.linspace(df_high.satisfaction_level.min(), df_high.satisfaction_level.max(), 1000)
ax.plot(x4, stats.norm.pdf(x4, df_high.satisfaction_level.mean(), df_high.satisfaction_level.std() / np.sqrt(len(df_high))), label='high')
ax.legend();
"""
Explanation: How would you interpret your results in part 4?
We can reject the null hypothesis for all three salary levels with $\alpha = 0.01$.
End of explanation
"""
# Repeat parts 4-6 on a hypothesis of your choosing.
# Last evaluation mean differs between people who left and people who stayed
print('Last evaluation mean and variance for employees who left: {}, {}'.format(df_left.last_evaluation.mean(),
df_left.last_evaluation.var()))
print('Last evaluation mean and variance for employees who stayed: {}, {}'.format(df_stayed.last_evaluation.mean(),
df_stayed.last_evaluation.var()))
print('Test assuming different variance: {}'.format(stats.ttest_ind(df_left.last_evaluation,
df_stayed.last_evaluation,
equal_var=False)))
"""
Explanation: In this case the curves are not as far apart as in the previous case, but they still seem reasonably distant.
End of explanation
"""
fig = plt.figure(figsize=(15, 10))
ax = plt.axes()
x1 = np.linspace(df_left.last_evaluation.min(), df_left.last_evaluation.max(), 1000)
ax.plot(x1, stats.norm.pdf(x1, df_left.last_evaluation.mean(), df_left.last_evaluation.std() / np.sqrt(len(df_left))), label='left')
x2 = np.linspace(df_stayed.last_evaluation.min(), df_stayed.last_evaluation.max(), 1000)
ax.plot(x2, stats.norm.pdf(x2, df_stayed.last_evaluation.mean(), df_stayed.last_evaluation.std() / np.sqrt(len(df_stayed))), label='stayed')
ax.legend();
"""
Explanation: We can't reject the null hypothesis, let's plot the data:
End of explanation
"""
# Power is the probability of rejecting the null hypothesis when it is false
# (thus more power is good). Compute the power for the hypothesis that the
# satisfaction level of high paid employees is different than that of medium
# paid employees, using a t distribution.
import statsmodels.stats.power as smp
# From solution the size effect to use is high-medium divided by std of all the data:
effect_size = (df_high.satisfaction_level.mean() - df_medium.satisfaction_level.mean()) / df.satisfaction_level.std()
print(smp.TTestIndPower().solve_power(effect_size,
nobs1=len(df_high),
ratio=len(df_high)/len(df_medium),
alpha=0.05,
alternative='two-sided'))
# This is used in the solution but I don't understand the use of the number of employees who stayed as nobs1...
print(smp.TTestIndPower().power(effect_size, nobs1=len(df_stayed), ratio=len(df_high)/len(df_medium), alpha=0.05))
"""
Explanation: Indeed, the means seem very close!
End of explanation
"""
def bootstrap(n, b, var):
statistics = []
for i in range(b):
sample = pd.Series(np.random.choice(np.array(df[var]), size=n))
statistics.append(sample.median())
return pd.Series(statistics)
n = 100
b = 100
var = 'satisfaction_level'
sat_lvl_bootstrapped = bootstrap(n, b, var)
print('Bootstrapped medians: mean {:.5f}, standard deviation {:.5f}'.format(sat_lvl_bootstrapped.mean(), sat_lvl_bootstrapped.std()))
print('True median: {:.5f}'.format(df[var].median()))
"""
Explanation: Bootstrapping
Bootstrapping is an immensely useful technique in practice. Very often you may find yourself in a situation where you want to compute some statistic, but lack sufficient data to do so. Bootstrapping works as a remedy to this problem.
Recall that the bootstrapping algorithm breaks down as follows:
1. Sample n observations with replacement from the observed data resulting in one simulated complete data set.
1. Take the statistic of the simulated data set
1. Repeat these two steps B times, resulting in B simulated statistics
1. These statistics are approximately drawn from the sampling distribution of the statistic of n observations
- This is a lot like what you did when drawing many sample means
In this exercise you will implement this algorithm on the HR data.
Write a function that can perform bootstrapping for the median of a set of n samples in the HR data set. Test this function on the satisfaction_level with n = 100 and b = 100 and compare your results to the true median. Also compute the standard deviation of the bootstrapped median.
End of explanation
"""
|
BL-Labs/poetryhunt | Clustering running notepad.ipynb | mit | %matplotlib inline
import mpld3
mpld3.enable_notebook()
# Get the dataset:
from clustering import create_cluster_dataset, NewspaperArchive
DBFILE = "1749_1750_no_drift.db"
n = NewspaperArchive()
ds = create_cluster_dataset(n, daterange = [1749, 1750], dbfile = DBFILE)
"""
Explanation: Clustering experiments
I hope that by interrogating various ways of looking at the newspaper text placement and the way it is aligned on a page, some sort of grouping might surface. From the selection of poetry, it seems that a poem is likely to have an aligned left edge to the text, but a more wildly varying right edge.
'clustering.py' can create a database of vectors for a given date range slice of the (readable) Burney newspaper archive. These vectors can then be used to investigate various correlations to see if, in fact, it is possible to cluster the text columns in such a way that poems are very likely to be found near each other.
Further to this, once we have a means of creating interesting clusters of text, we can ask it about other data and find out into which cluster it would put the new data. If we find a cluster that is by majority poetry, then if it puts new data into this cluster, we can have a level of confidence that the new data is also like these and a poem.
Plan:
Iterate through the following steps:
Pull or derive a set of interesting types of numbers from the dataset. Each block of text will have a set of these numbers (a 'vector').
Create a suitable number of clusters using two (though hopefully more) of these types to test.
Check to see if these clusters are sensible and are not arbitrary in nature subjectively.
Given the set of found poems, see into which clusters the poems get assigned.
If a high % of the poems get assigned to a single cluster -> Success! Focus on this!
Otherwise, try again from the top.
End of explanation
"""
data, transform, id_list = ds
print(data.toarray())
print(transform.get_feature_names())
"""
Explanation: What do these 'vectors' look like? What do the columns refer to?
End of explanation
"""
from clustering import ClusterDB
db = ClusterDB(DBFILE)
print(dict(db.vecidtoitem(id_list[-1])))
print(data.toarray()[-1])
from burney_data import BurneyDB
bdb = BurneyDB("burney.db")
bdb.get_title_row(titleAbbreviation="B0574REMEMBRA")
"""
Explanation: Going from a vector back to the metadata reference:
By keeping an 'id_list', we can look up the identifier for any vector in the list from the database we've made for this clustering attemp. This lets us look up what the reference for that is, and where we can find it:
End of explanation
"""
from scipy import cluster
from matplotlib import pyplot as plt
import numpy as np
# Where is the K-means 'elbow'?
# Try between 1 and 10
# use only the x1 and x2 variances
vset = [cluster.vq.kmeans(data.toarray()[:, [3,6]], i) for i in range(1,10)]
plt.plot([v for (c,v) in vset])
plt.show()
"""
Explanation: Initial data woes
There was a considerable discrepancy between the x1 average indent and the column "box" left edge. Looking at the data, the presence of a few outliers can really affect this value. Omitting the 2 smallest and largest x values might be enough to avoid this biasing the sample too badly.
Also, the initial 'drift correction' (adjustments made to correct warped or curved columns) seemed to add more issues than it solved, so the dataset was remade without it.
End of explanation
"""
# Mask off leaving just the front and end variance columns
npdata = data.toarray()
mask = np.ones((8), dtype=bool)
mask[[0,1,2,4,5,7]] = False
marray = npdata[:,mask]
"""
Explanation: Seems the elbow is quite wide and not sharply defined, based on just the line variances. Let's see what it looks like in general.
End of explanation
"""
plt.scatter(marray[:,0], marray[:,1])
plt.show()
"""
Explanation: x1 vs x2 variance?
What is the rough shape of this data? The variances of x1 and x2 measure how much the left and right alignment of the text varies in a given block of text.
End of explanation
"""
#trying a different KMeans
from sklearn.cluster import KMeans
estimators = {'k_means_3': KMeans(n_clusters=3),
'k_means_5': KMeans(n_clusters=5),
'k_means_8': KMeans(n_clusters=8),}
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(8, 8))
plt.clf()
plt.cla()
est.fit(marray)
labels = est.labels_
plt.scatter(marray[:,0], marray[:,1], c=labels.astype(np.float))
fignum = fignum + 1
plt.show()
"""
Explanation: Attempting K-Means
What sort of clustering algorithm to employ is actually a good question. K-means can give fairly meaningless responses when the data does not form compact, well-separated groups. Generally, it can be useful but cannot be used blindly.
Given the data above, it might be a good start however.
End of explanation
"""
mpld3.disable_notebook() # switch off the interactive graph functionality which doesn't work well with the 3D library
from mpl_toolkits.mplot3d import Axes3D
X = npdata[:, [3,5,6]]
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(8, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=5, azim=30)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))
    ax.set_xlabel('x1 variance')
    ax.set_ylabel('x2 variance')
ax.set_zlabel('Average indent')
fignum = fignum + 1
plt.show()
"""
Explanation: Interesting!
The lack of really well defined clusters bolstered the "elbow" test above. K-means is likely not put to good use here, with just these two variables.
The left edge of the scatterplot is a region that contains those blocks of text with lines aligned to the left edge of the paper's column, but have some considerable variation to the length of the line.
For example, I'd expect text looking like the following:
Qui quis at ex voluptatibus cupiditate quod quia.
Quas fuga quasi sit mollitia quos atque. Saepe atque officia sed dolorem.
Numquam quas aperiam eaque nam sunt itaque est. Sed expedita
maxime fugiat mollitia error necessitatibus quam soluta. Amet laborum eius
sequi quae sit sit.
This is promising (as long as the data is realistic and there isn't a bug in generating that...)
Now, I wonder if including the "margin" (x1ave-ledge: average x1 coordinate minus the leftmost edge) might help find or distinguish these further?
End of explanation
"""
X = npdata[:, [3,0,6]]
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(8, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=40)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))
    ax.set_xlabel('x1 variance')
    ax.set_ylabel('x2 variance')
ax.set_zlabel('Density')
fignum = fignum + 1
plt.show()
"""
Explanation: How about the area density? In other words, what does it look like if the total area of the block is compared to the area taken up by just the words themselves?
End of explanation
"""
mask = npdata[:,1] > 40 * 5 # mask based on the ltcount value
print(mask)
print("Amount of vectors: {0}, Vectors with ltcount < 50: {1}".format(len(npdata), sum([1 for item in mask if item == False])))
m_npdata = npdata[mask, :]
X = m_npdata[:, [3,0,6]]
# Let's just plot one graph to see:
est = estimators['k_means_8']
fig = plt.figure(fignum, figsize=(8, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=40)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))
ax.set_xlabel('x1 variance')
ax.set_ylabel('x2 variance')
ax.set_zlabel('Density')
plt.show()
"""
Explanation: More outliers skewing the results. This time for blocks with nearly zero variance at either end, but a huge amount of letter area attributed to them by the OCR while sweeping out a very small overall area. Perhaps mask out the columns which aren't actually columns but dividers mistaken for text, i.e. skip all blocks that are narrower than 100px perhaps. Another way might be to ignore blocks which are under approximately 40 words (40 words * 5 characters).
End of explanation
"""
|
UCBerkeleySETI/breakthrough | SDR/stations/sdr_stations.ipynb | gpl-3.0 | def calc_time_diff(time1, time2):
"""returns difference in seconds between time 1 and time 2
time1, time2: strings in format hh:mm:ss
"""
str_to_sec = lambda time: sum([60**p[0]*int(p[1]) for p in enumerate(time.split(":")[::-1])])
return str_to_sec(time1) - str_to_sec(time2)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import csv
import pandas as pd
import datetime
from collections import OrderedDict
from FMstations import *
df = pd.read_csv("logfile.csv", header=None)
MINFREQ = min(df[2])
MAXFREQ = max(df[3])
FREQ_BIN = df[4][0]
LINES_PER_TIME = len(df[2].unique())
INTERVAL = calc_time_diff(df[1][LINES_PER_TIME], df[1][0])
TOTAL_TIME = INTERVAL * len(df) / LINES_PER_TIME
raw_data = df.values[:, 6:].astype("float") # raw_data is a 2d numpy array of only the power data
"""
Explanation: Radio Frequency Interference: Identifying Nearby Sources
In the world of radio astronomy, observations of signals of extraterrestrial origin are often compounded by human-made sources. Many electronic sources, such as radio stations, cell phone towers, WiFi, even GPS satellites and your personal microwave oven, can interfere with radio astronomy. This tutorial combines usage of hardware with some code to identify nearby interference sources. This analysis is part of the larger practice of filtering out human-made and Earth-based radio signals to better identify potential signals of interest from space.
For my experiment, I used a Realtek RTL2838 dongle as my hardware receiver. While I specifically looked for nearby radio stations within the limited scope of the hardware, the code can be applied with little modification, depending on the hardware used, to processing and identifying a wide range of other sources.
Ultimately, this script produces two outputs:
1. A plot showing signal sources as peaks, which are marked according to a given dictionary of the sources' names. The plot is a spectrum of signal power against frequency
2. A dictionary matching the peaking frequencies with their sources
Familiarizing yourself with the properties of human-made sources can be convenient for manually dismissing uninteresting signals.
From the Command Line: rtl_power
Before diving into Python code, we first need to generate a file to store the data from the dongle. The command to do that is rtl_power:
rtl_power -f min:max:bin filename.ext
min: initial frequency
max: terminal frequency
bin: frequency interval
The file extension I used is csv. There are additional parameters for further options; example:
rtl_power -f 87M:108M:1k -g 20 -i 10 -e 5m logfile.csv
Here, I set the gain to 20 and the runtime to 5 minutes, taking data in 10-second intervals. All the data is stored in a csv file, logfile.csv. With our data in hand, open a Python file with your favorite editor and begin the processing.
Modules to Import and Global Variables
Several modules are important for the basic functioning of this script:
1. Numpy: essential for computing and number crunching
2. Matplotlib.pyplot: useful for plotting
3. csv: used to generate the raw data, which is stored as a csv file, into a workable text file. The appropriate modules, depending on the format of the raw data file and the format of the desired data conversion, should be used here
4. OrderedDict from collections: outputs any dictionary with the entries in the order that you entered them. This will be useful later for ordering peaks, but is useful in general when dealing with dictionaries
I've written a short helper function that allows you to calculate differences between times since python works primarily in date times. This is just a kludge and will not work if you take measurements that cross into different days. In that case I recommend you just manually enter the information associated with time.
End of explanation
"""
df
"""
Explanation: Next block visualizes the data
End of explanation
"""
def total_freq(num_samples_per_time):
"""total frequency band of the data
num_samples_per_time: number of data samples over entire frequency spectrum per timestep
"""
return np.linspace(MINFREQ, MAXFREQ, num_samples_per_time)
TotalBand = total_freq(len(raw_data[0]) * LINES_PER_TIME)
"""
Explanation: FMstations is a separate .py file storing a dictionary to be used in a later function. While dictionaries can be included in the executing script without functional issues, their potential large size warrants storage in a separate file.
logfile.csv is the file containing the raw data, an array of lines. Each line is an array itself, consisting of a number of power values corresponding to the sampling rate. Specifically for my experiment, the sampling rate is 2.5 MHz and every eight lines constitute one scan of the bandwidth of interest, the 21 MHz band between 87 and 108 MHz. U.S. radio stations are allocated broadcast frequencies in that range, and these values should be changed according to the experiment at hand. Data is collected every 10 seconds, one power value for every 610.35 Hz step (the frequency bin, df[4][0]) in that range, over a total of 300 seconds.
The raw data file consists of 240 lines because the scan is performed 30 times, the total number of seconds divided by the scan interval. Each line unfortunately comes with an initial six outputs, such as the date when the data was taken, that we have no interest in for purposes of data processing and plotting. Those outputs have to be truncated.
Processing Raw Data
First, we have to identify the frequency range we are interested in and have it be consistent with the type and number of data points in the file for plotting, by generating a list of frequencies between our min and max frequencies that corresponds to the number of samples we have for each time step.
End of explanation
"""
def power_total(data):
"""returns an array of arrays that is flattened from the original array such that each row consists of the
entire frequency range instead of chunks
data: input raw data; an array of arrays
"""
flattened = []
cur = []
for i in range(len(data)):
        cur += list(data[i])
if (i+1)%LINES_PER_TIME == 0:
flattened.append(cur)
cur = []
return np.array(flattened)
total_data = power_total(raw_data)
"""
Explanation: There are many ways to go about processing the raw data values to correspond with our frequency range. Since my data came in the form of an array of arrays, that's how I initially kept it, though with the first several unneeded elements truncated from each line. The following code does just that.
End of explanation
"""
avg_data = np.mean(total_data, axis=0)
"""
Explanation: Unlike the raw data, however, each inner array now consists of a full scan of the relevant band (as opposed to each line in the raw data corresponding to only an eighth of the band). For my data, there's now only 30 inner arrays, each a scan at a specific time, down from 240 lines in the raw data.
Having this result is useful for, as an example, plotting multiple graphs to see the time-evolution of the signal. For our immediate purposes, we look at the time-averaged values to produce a single plot.
We now take the average across time of all these arrays
End of explanation
"""
plt.figure(figsize=(15,5))
plt.title('Time-Averaged Radio Spectrum over 30 Scans')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
plt.plot(TotalBand, avg_data)
plt.show()
"""
Explanation: We now have a single array of values ripe for plotting.
Spectrum Improvement and Plotting
Let's see a visual of our plot of the average power values against our frequency band.
End of explanation
"""
def Reduce(spec, n):
"""takes the median of a set of points, removing noise-produced peaks and much noise
n: half the number of a set of points that the median is taken over
"""
R = []
i = 0
for x in spec:
if i < n:
R.append(np.median(spec[0 : i + n + 1]))
i += 1
        elif i >= len(spec) - n:  # near the end: take the median of the remaining points
R.append(np.median(spec[i - n :]))
i += 1
else:
R.append(np.median(spec[i - n : i + n + 1]))
i += 1
return R
ReduSpec = Reduce(avg_data, 10)
plt.figure(figsize=(15,5))
plt.title('Noise-Reduced, Time-Averaged Radio Spectrum')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
plt.plot(TotalBand, ReduSpec)
plt.show()
"""
Explanation: We have a functional plot, but it's riddled with issues associated with signal processing. We need to remove the noisy peaks and smooth out the graph so that we can systematically identify the peaks, which correspond to frequencies at which signals are received. Power is in units of dBm, decibels referenced to one milliwatt. Online resources have more in-depth explanations of the bel unit and power representation in signal processing.
End of explanation
"""
plt.figure(figsize=(15,5))
plt.title('Highest Peak Zoomed-In')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
highest_peak_loc = np.argmax(ReduSpec)
plt.plot(TotalBand[highest_peak_loc-100:highest_peak_loc+100], ReduSpec[highest_peak_loc-100:highest_peak_loc+100])
plt.show()
"""
Explanation: This is seemingly our desired plot, but upon closer inspection...
End of explanation
"""
def smooth(spec, win_len, window, beta = 20):
"""smooths a signal with kernel window of win_len number of inputs
signal: Input data spectrum to be smoothed
win_len: the size of the kernel window used to smooth the input
window: type of kernel; e.g. 'blackman'
"""
if window == 'kaiser':
w = eval('np.'+window+'(win_len, beta)')
elif window == 'flat':
        w = np.ones(win_len, 'd')
else:
w = eval('np.'+window+'(win_len)')
s = np.r_[spec[win_len-1 : 0 : -1], spec, spec[-1 : -win_len : -1]]
y = np.convolve(w/w.sum(), s, mode='valid')
return y[(int(win_len / 2) - 1) : (int(-win_len / 2))]
"""
Explanation: Despite the plot looking great from afar, it's still rather rugged even at the peaks upon closer inspection. The roughness can make it difficult to mathematically identify peaks. What we need is smoothing. The process of smoothing one function with another (convolution) is rooted in mathematical rigor, so we do not go into depth here. Essentially, a smoothing function, traditionally called the kernel, is 'applied' and moved along our spectrum. There are various ways to implement convolution, and thankfully NumPy has built-in functions to make the job simpler.
End of explanation
"""
SmooSpec = smooth(ReduSpec, 150, 'kaiser')
plt.figure(figsize=(15,5))
plt.title('Smoothed, Time-Averaged Radio Spectrum')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
plt.plot(TotalBand, SmooSpec)
plt.show()
plt.figure(figsize=(15,5))
plt.title('Second Highest Peak Zoomed-In')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
highest_peak_loc = np.argmax(SmooSpec)
plt.plot(TotalBand[highest_peak_loc-100:highest_peak_loc+100], SmooSpec[highest_peak_loc-100:highest_peak_loc+100])
plt.show()
"""
Explanation: The code evaluates based on what type of smoothing we're using. NumPy has a number of built-in window functions. For my plots, I used the Kaiser window, which is derived from Bessel functions. If flat smoothing is chosen, then our plot is smoothed by a simple moving average. The result is a much smoother graph, with the side effect of reduced power at every point.
End of explanation
"""
def get_peaks(spec, threshold, xreducer, rounder=2):
"""identifies the peaks of a plot. Returns an array of 2 lists:
1. the indices of the frequencies corresponding to the peaks;
2. said frequencies, divided by xreducer for simpler units and rounded to rounder decimals
spec: input data spectrum
threshold: only data above which are taken into account to ignore the noise level
"""
Peaks = []
spec = spec.tolist()
    for i in np.arange(1, len(spec) - 1):  # skip the endpoints to avoid index errors
if spec[i] > threshold and spec[i] > spec[i-1] and spec[i] > spec[i+1]:
Peaks.append(spec[i])
else:
continue
Ordered_Indices = []
while True:
if np.array(Peaks).tolist() == []:
Ordered_Freq = [(x * FREQ_BIN + MINFREQ) for x in Ordered_Indices]
Reduced_Freq = np.around((np.array(Ordered_Freq) / xreducer), rounder)
return [Ordered_Indices, Reduced_Freq.tolist()]
elif len(Peaks) == 1:
Ordered_Indices.append(spec.index(Peaks[0]))
Peaks = np.delete(Peaks, 0)
else:
Ordered_Indices.append(spec.index(np.amax(Peaks)))
Peaks = np.delete(Peaks, np.array(Peaks).tolist().index(np.amax(Peaks)))
"""
Explanation: We finally have our smoothed plot and can systematically identify characteristics with numpy without dealing with much of the interference due to rough data patterns.
Identifying and Marking Peaks and their Frequencies
We can isolate the peak frequencies with simple methods.
End of explanation
"""
def mark_peaks(src_dict, spec, threshold, line, title, xreducer, error=.01, bound1='left', bound2='bottom', rot=90):
"""returns both a plot and a dictionary
plot: shows the stations next to the marked peaks
dictionary: matches the relevant peak frequencies with the corresponding station(s)
src_dict: input dictionary of frequencies and stations from which the results are selected from
spec: input spectrum data
    threshold: only data above which are taken into account to ignore the noise level
title: title for the plot
xreducer: the values of the x-axis divided by which to simpler units
error: within which the obtained frequencies are acceptable as equivalent to that of a station
remaining parameters: used the adjust the markings and labels of the plots
"""
stations = []
peakfreqs = []
stations_i = []
peaker = get_peaks(spec, threshold, xreducer)
p0 = peaker[0]
p1 = peaker[1]
for i in np.arange(len(p1)):
if p1[i] in src_dict.keys():
stations.append(src_dict[p1[i]])
peakfreqs.append(p1[i])
stations_i.append(p0[i])
else:
for x in np.arange(p1[i]-error, p1[i]+error, error):
if x in src_dict.keys():
stations.append(src_dict[x])
peakfreqs.append(p1[i])
stations_i.append(p0[i])
else:
continue
peaks = [spec[y] for y in stations_i]
plt.figure(figsize=(15,5))
plt.title(title)
plt.xlabel('Frequency (MHz)')
plt.ylabel('Reduced Power (dBm)')
yoffset = (np.amax(spec) - np.amin(spec)) / 4
plt.ylim(np.amin(spec) - yoffset, np.amax(spec) + yoffset)
plt.plot(TotalBand / 1000000, spec)
plt.scatter(peakfreqs, peaks, marker = 'o', color = 'r', s = 40)
text_bounds = {'ha':bound1, 'va':bound2}
for i in np.arange(len(peakfreqs)):
plt.text(peakfreqs[i], peaks[i] + (yoffset / 10), stations[i], text_bounds, rotation=rot)
plt.savefig('stations_peaks.pdf')
plt.show()
stations_dict = OrderedDict()
for i in np.arange(len(stations)):
stations_dict[peakfreqs[i]] = stations[i]
return stations_dict
"""
Explanation: The function iterates over all the values in the spectrum and stores any frequency (and its associated index) of the peaks (values above a certain threshold that are local maxima) in arrays. The indices will be useful for matching the frequencies with their peak locations. The majority of the function is ordering the frequencies in descending order of peak intensity by iterating over the array of peaks and determining each one's corresponding peak value. The ordered list is then rounded (to two decimals by default) and is in units of MHz, or in a desired unit as determined by xreducer.
With a list of peaks, we can use them to physically mark their sources on our plot, provided a dictionary of the sources.
End of explanation
"""
print(mark_peaks(BAFMRS, SmooSpec, -54, 8, 'Bay Area FM Radio Stations', 1000000))
"""
Explanation: It's a rather long function, but a lot of it is making adjustments to the final plot. The function requires an input Python dictionary, so that dictionary either has to be from an established source online or be manually made. For my code, I made a dictionary of all Bay Area FM Radio Stations (BAFMRS) with their corresponding frequencies as the dictionary keys. For example, an entry in the dictionary is '88.5: KQED', with 88.5 MHz being the transmitting frequency and KQED being the name of the station.
A radio station signal comes in the form of a primary analog signal (almost all the peaks you see on the plot) and two digital signals on the side; only the stronger signals, such as the one corresponding to the leftmost peak, visibly show the digital parts. This function naturally filters out almost all digital peaks because their frequencies do not correspond to any of that in a dictionary of radio station signals. Unfortunately, it's not easy to feasibly filter multiple stations broadcasting the same frequency with this level of coding. Naturally, however, there won't be many of these cases at any given location.
The final plot, all processed and labeled:
End of explanation
"""
def waterfall(data, title, axesrange, gridshape='auto'):
"""returns a waterfall grid consisting off all the spectra from the input
data: an array of arrays
title: title of grid
axesrange: boundaries of the values of the grid
"""
fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(111)
ax.set_title(title)
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('Time (s)')
plt.imshow(data, extent=axesrange)
ax.set_aspect(gridshape)
plt.colorbar(orientation='vertical')
plt.show()
waterfall(total_data, 'Radio Spectra Waterfall', [MINFREQ*1e-6, MAXFREQ*1e-6, 0, TOTAL_TIME])
"""
Explanation: We got our final result. This particular approach is useful for identifying nearby signal sources for any purpose. Another approach uses not the average power values, but all the power values from all the scans. Visualizing the time-evolution can be useful for other purposes, such as noting the fluctuations of the peaks' strengths or finding sudden peaks arising at certain moments. The "waterfall" plot displays, in this case, intensity (power) with color codes and has time instead on the y-axis.
End of explanation
"""
|
mmaelicke/scikit-gstat | tutorials/06_gstools.ipynb | mit | # import
import skgstat as skg
import gstools as gs
import numpy as np
import matplotlib.pyplot as plt
import plotly.offline as pyo
import warnings
pyo.init_notebook_mode()
warnings.filterwarnings('ignore')
# use the example from gstools
# generate a synthetic field with an exponential model
x = np.random.RandomState(19970221).rand(1000) * 100.
y = np.random.RandomState(20011012).rand(1000) * 100.
model = gs.Exponential(dim=2, var=2, len_scale=8)
srf = gs.SRF(model, mean=0, seed=19970221)
field = srf((x, y))
# combine x and y for use in skgstat
coords = np.column_stack((x, y))
"""
Explanation: 6 - GSTools
With version 0.5 scikit-gstat offers an interface to the awesome gstools library. This way, you can use a Variogram estimated with scikit-gstat in gstools to perform random field generation, kriging and much, much more.
For a Variogram instance, there are three possibilities to export into gstools:
Variogram.get_empirical(bin_center=True) returns a pair of distance lag bins and experimental semi-variance values, like gstools.variogram.vario_estimate.
Variogram.to_gstools returns a parameterized CovModel derived from the Variogram.
Variogram.to_gs_krige returns a GSTools Krige instance based on the variogram
6.1 get_empirical
6.1.1 Reproducing the gstools example
You can reproduce the Getting Started example for variogram estimation from GSTools docs with scikit-gstat, and replace the calculation of the empirical variogram with skg.Variogram.
Note: This only makes sense if you want to use a distance metric, binning procedure, or semi-variance estimator that is not included in gstools, or if you are bound to scikit-gstat for any other reason. Variogram will always perform a full model fitting cycle on instantiation, which could lead to some substantial overhead here.
This behavior might change in a future version of scikit-gstat.
End of explanation
"""
V = skg.Variogram(coords, field, n_lags=21, estimator='matheron', maxlag=45, bin_func='even')
bin_center, gamma = V.get_empirical(bin_center=True)
"""
Explanation: In the example, gstools.variogram.vario_estimate is used to estimate the empirical variogram:
```Python
# estimate the variogram of the field
bin_center, gamma = gs.vario_estimate((x, y), field)
```
Here, we can use skg.Variogram instead. Of the arguments shown, estimator and bin_func use their default values:
End of explanation
"""
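The default Matheron estimator used above can be sketched in plain Python for a regular 1-D series. This is an illustration only, not skgstat's implementation, which additionally handles irregular coordinates and lag binning:

```python
# Matheron's classical estimator for a regular 1-D series:
# gamma(h) = 1 / (2 * N(h)) * sum_i (z_i - z_{i+h})**2
def matheron(values, lag):
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2.0 * len(pairs))

gamma_1 = matheron([1.0, 3.0, 2.0, 5.0, 4.0], 1)
```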
%matplotlib inline
# fit the variogram with a stable model. (no nugget fitted)
fit_model = gs.Stable(dim=2)
fit_model.fit_variogram(bin_center, gamma, nugget=False)
# output
ax = fit_model.plot(x_max=max(bin_center))
ax.scatter(bin_center, gamma)
print(fit_model)
"""
Explanation: And finally, the exact same code from the GSTools docs can be called:
End of explanation
"""
bin_edges, _ = V.get_empirical(bin_center=False)
# fit the variogram with a stable model. (no nugget fitted)
edge_model = gs.Stable(dim=2)
_ = edge_model.fit_variogram(bin_edges, gamma, nugget=False)
fig, axes = plt.subplots(1,2, figsize=(12,4))
# plot first
fit_model.plot(ax=axes[1], label='center=True')
# plot second
edge_model.plot(ax=axes[1], label='center=False')
# bins
axes[0].scatter(bin_center, gamma, label='center=True')
axes[0].scatter(bin_edges, gamma, label='center=False')
axes[0].set_title('Empirical Variogram')
axes[1].set_title('Variogram Model')
axes[0].legend(loc='lower right')
print(fit_model)
print(edge_model)
"""
Explanation: 6.1.2 bin_center=False
It is important to understand that gstools and skgstat handle lag bins differently: skgstat uses the upper limit of each bin, while gstools assumes the bin center. This has implications when a model is fitted. Consider the example below, in which only the bin_center setting differs.
End of explanation
"""
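The conversion between the two conventions is simple; a minimal sketch, assuming evenly spaced classes whose first bin starts at zero (skgstat reports upper edges):

```python
# skgstat reports the upper edge of each lag class; gstools expects centers.
# Assumes the first class starts at zero.
def edges_to_centers(upper_edges):
    lower = [0.0] + list(upper_edges[:-1])
    return [(lo + hi) / 2.0 for lo, hi in zip(lower, upper_edges)]

centers = edges_to_centers([10.0, 20.0, 30.0])
```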
V = skg.Variogram(coords, field, n_lags=15, estimator='dowd', maxlag=45, bin_func='uniform', dist_func='cityblock')
bin_center, gamma = V.get_empirical(bin_center=True)
# fit the variogram with a stable model. (no nugget fitted)
fit_model = gs.Stable(dim=2)
fit_model.fit_variogram(bin_center, gamma, nugget=True)
# output
ax = fit_model.plot(x_max=max(bin_center))
ax.scatter(bin_center, gamma)
print(fit_model)
"""
Explanation: Notice the considerable gap between the two model functions. This can already lead to serious differences, e.g. in kriging.
6.1.3 Using other arguments
Now, with the example from the GSTools docs working, we can start changing the arguments to create quite different empirical variograms.
Note: This should just illustrate the available possibilities; the result by no means produces a better estimate of the initially created Gaussian random field.
In this example, three things are changed:
Use only 15 lag classes, but distribute the point pairs equally. Note the differing widths of the classes. (bin_func='uniform')
The Dowd estimator is used. (estimator='dowd')
The Taxicab metric (aka. Manhattan or cityblock metric) is used instead of the Euclidean metric, for no obvious reason. (dist_func='cityblock')
End of explanation
"""
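For reference, the two metrics compared here can be sketched in plain Python (standard definitions, independent of either library):

```python
import math

def cityblock(p, q):
    # Taxicab / Manhattan distance: sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(p, q))

def euclidean(p, q):
    # straight-line distance
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

d_c = cityblock((0.0, 0.0), (3.0, 4.0))
d_e = euclidean((0.0, 0.0), (3.0, 4.0))
```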
skg.plotting.backend('plotly')
V = skg.Variogram(coords, field, n_lags=21, estimator='matheron', model='exponential', maxlag=45, bin_func='even')
fig = V.plot(show=False)
pyo.iplot(fig)
"""
Explanation: If you fit the gs.Stable with a nugget, it fits quite well. But keep in mind that this does not necessarily describe the original field very well and was just fitted for demonstration.
6.2 to_gstools
The second possible interface to gstools is the Variogram.to_gstools function. This will return one of the classes listed in the gstools documentation. The variogram parameters are extracted and passed to gstools. You should be able to use it, just like any other CovModel.
However, there are a few things to consider:
skgstat can only export isotropic models.
The 'harmonize' model cannot be exported
6.2.1 exporting Variogram
In this example, the same Variogram from above is estimated, but we use the 'exponential' model. An exponential covariance function was used in the first place to create the field that was sampled.
End of explanation
"""
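A common parameterization of the exponential variogram can be sketched as follows. Note this is an illustration with hypothetical sill and length values; skgstat parameterizes its 'exponential' model via the effective range, roughly three times gstools' len_scale:

```python
import math

# gamma(h) = sill * (1 - exp(-h / len_scale))
def exponential_gamma(h, sill=2.0, len_scale=8.0):
    return sill * (1.0 - math.exp(-h / len_scale))

g0 = exponential_gamma(0.0)      # no semi-variance at zero lag
g_eff = exponential_gamma(24.0)  # ~95% of the sill at the effective range
```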
exp_model = V.to_gstools()
print(exp_model)
# get the empirical for the plot as well
bins, gamma = V.get_empirical(bin_center=True)
ax = exp_model.plot(x_max=45)
ax.scatter(bins, gamma)
"""
Explanation: Now export the model to gstools:
End of explanation
"""
x = y = range(100)
new_field = gs.SRF(exp_model, seed=13062018)
new_field.structured([x, y])
new_field.plot()
"""
Explanation: Note: It is important to understand that skgstat and gstools handle coordinates slightly differently. If you export the Variogram to a CovModel and you want to use the Variogram.coordinates, you must transpose them.
```Python
# variogram is a skgstat.Variogram instance
model = variogram.to_gstools()
cond_pos = variogram.coordinates.T
# use e.g. in kriging
krige = gs.krige.Ordinary(model, cond_pos, variogram.values)
```
```
6.2.2 Spatial Random Field Generation
With a CovModel, we can use any of the great tools implemented in gstools. First, let's create another random field with the exponential model that we exported in the last section:
End of explanation
"""
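The transpose mentioned in the note above can be illustrated without either library; a minimal sketch with three hypothetical 2-D points:

```python
# skgstat stores coordinates as (n_points, dim); gstools wants (dim, n_points).
coords = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]   # three hypothetical 2-D points
cond_pos = list(zip(*coords))                    # transpose without numpy
```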
malformed = gs.SRF(fit_model, seed=24092013)
malformed.structured([x,y])
malformed.plot()
"""
Explanation: Keep in mind that we did not run a kriging procedure, but generated another random field.
Of course, we can do the same thing with the more customized model created in 6.1.3:
End of explanation
"""
# export
krige = V.to_gs_krige(unbiased=True) # will result in ordinary kriging
print(krige)
# create a regular grid
x = y = range(100)
# interpolate
result, sigma = krige.structured((x, y))
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
# plot
axes[0].imshow(result, origin='lower')
axes[1].imshow(sigma, origin='lower', cmap='RdYlGn_r')
# label
axes[0].set_title('Kriging')
axes[1].set_title('Error Variance')
plt.tight_layout()
"""
Explanation: Notice how the spatial properties as well as the value range have changed. That is why it is important to estimate the Variogram or CovModel carefully, and not let a GIS do it for you somewhere hidden in the dark.
6.3 to_gs_krige
Finally, after carefully estimating and fitting a variogram using SciKit-GStat, you can also export it directly into a GSTools Krige instance. We use the same variogram as in the previous sections:
End of explanation
"""
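To make the kriging step less of a black box, here is a tiny pure-Python sketch of the ordinary kriging system (two 1-D samples, one target) that such a Krige instance solves internally. This is an illustration under hypothetical variogram parameters, not gstools' implementation:

```python
import math

def gamma(h, sill=2.0, rng=8.0):
    # exponential variogram; sill and range are hypothetical values
    return sill * (1.0 - math.exp(-h / rng))

def det3(m):
    # determinant of a 3x3 matrix (cofactor expansion)
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def ok_weights(x1, x2, x0):
    # ordinary kriging system for two samples and one target x0:
    # two weights plus a Lagrange multiplier enforcing sum(w) == 1
    A = [[gamma(0.0), gamma(abs(x1 - x2)), 1.0],
         [gamma(abs(x2 - x1)), gamma(0.0), 1.0],
         [1.0, 1.0, 0.0]]
    b = [gamma(abs(x1 - x0)), gamma(abs(x2 - x0)), 1.0]
    d = det3(A)
    sol = []
    for i in range(3):            # Cramer's rule, column by column
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        sol.append(det3(Ai) / d)
    return sol[0], sol[1]         # drop the Lagrange multiplier

w1, w2 = ok_weights(0.0, 10.0, 5.0)   # target midway between the samples
```

By symmetry, a target midway between the two samples receives equal weights that sum to one.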
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/mpiesm-1-2-ham/ocean.ipynb | gpl-3.0
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'mpiesm-1-2-ham', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: MPIESM-1-2-HAM
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
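As an illustration of the simplest listed choice, a linear EOS can be sketched as follows (coefficient values are hypothetical, for demonstration only, not any model's tuning):

```python
# Linear equation of state:
# rho = rho0 * (1 - alpha * (T - T0) + beta * (S - S0))
# alpha: thermal expansion, beta: haline contraction (illustrative values)
def linear_eos(T, S, rho0=1025.0, T0=10.0, S0=35.0,
               alpha=2.0e-4, beta=7.6e-4):
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0))

rho_ref = linear_eos(10.0, 35.0)   # equals rho0 at the reference state
```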
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
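For illustration, one common choice besides TEOS-2010 is the EOS-80 freezing-point formula (Millero, 1978); this is a sketch of that standard formula, not necessarily the one used by this particular model:

```python
# EOS-80 freezing point of seawater, valid for 4 <= S <= 40:
# S is practical salinity, p is pressure in decibars; result in deg C.
def freezing_point(S, p=0.0):
    return (-0.0575 * S
            + 1.710523e-3 * S ** 1.5
            - 2.154996e-4 * S ** 2
            - 7.53e-4 * p)

tf = freezing_point(35.0)   # about -1.92 deg C at the surface
```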
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
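As an example of how this constant is used, the heat content of a water column per unit area follows from rho0 * cp; a sketch with typical (hypothetical) values, not this model's tuned constants:

```python
# Heat content of a 1-D ocean column per unit area:
# OHC = rho0 * cp * sum(T_k * dz_k)   [J / m^2]
def heat_content(temps_degC, dz_m, rho0=1025.0, cp=3994.0):
    return rho0 * cp * sum(t * dz for t, dz in zip(temps_degC, dz_m))

q = heat_content([10.0, 8.0], [10.0, 10.0])   # two 10 m layers
```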
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
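The motivation for splitting can be sketched with the CFL limit imposed by external gravity waves (an illustrative 1-D estimate, not any model's actual stability criterion):

```python
import math

# External gravity waves travel at c = sqrt(g * H); resolving them explicitly
# forces a much smaller time step than the baroclinic dynamics need,
# which is what split-explicit (or implicit) barotropic schemes avoid.
def max_timestep(dx_m, depth_m, g=9.81):
    c = math.sqrt(g * depth_m)   # external gravity wave speed [m/s]
    return dx_m / c              # 1-D CFL limit [s]

dt = max_timestep(10_000.0, 4000.0)   # roughly 50 s for a 10 km grid, 4 km depth
```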
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
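For Cardinality 0.N properties such as this one, several choices can be recorded. The sketch below uses a stub standing in for DOC; the one-call-per-selection convention is an assumption for illustration, not confirmed by the template:

```python
class StubDoc:
    """Stand-in for the notebook's DOC object, for illustration only."""
    def __init__(self):
        self.values = []

    def set_value(self, value):
        # Assumption: each call records one additional selection.
        self.values.append(value)

doc = StubDoc()
for tracer in ["CFC 11", "SF6"]:  # picks from the Valid Choices list above
    doc.set_value(tracer)
print(doc.values)  # → ['CFC 11', 'SF6']
```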
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from active ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
dtamayo/reboundx | ipython_examples/TidesConstantTimeLag.ipynb | gpl-3.0 | import rebound
import reboundx
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
def getsim():
sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.add(m=0.86) # post-MS Sun
sim.add(m=3.e-6, a=1., e=0.03) # Earth
sim.move_to_com()
rebx = reboundx.Extras(sim)
tides = rebx.load_force("tides_constant_time_lag")
rebx.add_force(tides)
return sim, rebx, tides
"""
Explanation: Drag from Tides
This adds a constant time lag model (Hut 1981) for tides raised on the primary and/or the orbiting bodies.
As an example, we'll add the tides raised on a post-main sequence Sun near its tip-RGB phase by the Earth.
For a more advanced example implementation that includes stellar evolution using "Parameter Interpolation" and extended integration of terrestrial-like planets, see §4.2 and Fig. 5 of Baronett et al. (2021) and https://github.com/sabaronett/REBOUNDxPaper.
End of explanation
"""
sim, rebx, tides = getsim()
ps = sim.particles
ps[0].r = 0.85 # AU
ps[0].params["tctl_k2"] = 0.03
"""
Explanation: We specify the primary and secondaries' equilibrium gravitational response to the tidal field acting on them through the tctl_k2 potential Love number of degree 2. If we additionally give the primary a physical radius, then any (massive) orbiting body will raise equilibrium tides on the primary. Similarly, if we add a physical radius and tctl_k2 to any of the orbiting bodies, the primary will raise tides on that particle (but note that orbiting bodies will not raise tides on one another):
End of explanation
"""
H0 = sim.calculate_energy() + rebx.tides_constant_time_lag_potential(tides)
tmax = 5000
Nout=1000
pomega, Eerr = np.zeros(Nout), np.zeros(Nout)
times = np.linspace(0,tmax,Nout)
for i, time in enumerate(times):
sim.integrate(time)
pomega[i] = ps[1].pomega
H = sim.calculate_energy() + rebx.tides_constant_time_lag_potential(tides)
Eerr[i] = abs((H-H0)/H0)
%matplotlib inline
import matplotlib.pyplot as plt
fig, axarr = plt.subplots(nrows=2, figsize=(12,8))
axarr[0].plot(times, pomega)
axarr[0].set_ylabel("Pericenter", fontsize='xx-large')
axarr[1].plot(times, Eerr, '.')
axarr[1].set_xscale('log')
axarr[1].set_yscale('log')
axarr[1].set_xlabel('Time', fontsize='xx-large')
axarr[1].set_ylabel('Energy Error', fontsize='xx-large')
"""
Explanation: If we stop here and don't add a time lag, we will get the instantaneous equilibrium tide, which provides a conservative, radial non-Keplerian potential. The total energy will be conserved, but the pericenter will precess.
End of explanation
"""
sim, rebx, tides = getsim()
ps = sim.particles
ps[0].r = 0.85 # AU
ps[0].params["tctl_k2"] = 0.03
ps[0].params["tctl_tau"] = 0.04
ps[0].params["Omega"] = 0 # explicitly set to 0 (would be 0 by default)
"""
Explanation: Constant Time Lag
If we additionally set the tctl_tau constant time lag parameter, this delayed response introduces dissipation, which will typically cause eccentricity damping, and will migrate the orbiting bodies either inward or outward depending on whether they orbit faster or slower than the spin of the tidally deformed body. We set the spin rate of each body with the Omega parameter. If it is not set, Omega is assumed to be zero.
We note that this implementation assumes bodies' spins are fixed, so consider whether more angular momentum is being changed in the system than is available in the spins! We additionally assume that bodies' spins are aligned with the reference z axis.
As an example, for a highly-evolved RGB Sun, tidal friction in the outer convective envelope will retard tidal bulges on the solar photosphere (Schröder & Smith 2008), resulting in a non-zero constant time lag.
From Eq. 8 of Baronett et al. (2021),
$$
\tau = \dfrac{2R^3}{GMt_f},
$$
where $\tau$ is the constant time lag parameter (tctl_tau),
$R$ and $M$ are the physical radius and mass of the tidally deformed body respectively,
$G$ is the gravitational constant, and
$t_f(t) = (M(t)R(t)^2/L(t))^{1/3} \approx \mathcal{O}(1\,\textrm{yr})$ is the convective friction time (Zahn 1989, Eq. 7).
For this simulation's values (i.e., $R = 0.85\,\text{au}$, $G = 4\pi^2\,\text{au}^3\cdot\text{yr}^{-2}\cdot M_\odot^{-1}$, $M = 0.86\,M_\odot$, and $t_f = 1\,\text{yr}$),
$$
\tau \approx 0.04\,\text{yr}.
$$
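As a quick numerical sanity check (a sketch using only the values quoted above), Eq. 8 can be evaluated directly:

```python
import math

# Values quoted above, in (AU, yr, Msun) units
G = 4 * math.pi**2   # au^3 yr^-2 Msun^-1
R = 0.85             # au, physical radius of the tip-RGB Sun
M = 0.86             # Msun
t_f = 1.0            # yr, convective friction time

tau = 2 * R**3 / (G * M * t_f)   # Eq. 8 of Baronett et al. (2021)
print(tau)                        # ~0.036 yr, i.e. tau ≈ 0.04 yr
```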
End of explanation
"""
import numpy as np
tmax = 2.5e5
Nout=1000
a, e = np.zeros(Nout), np.zeros(Nout)
times = np.linspace(0,tmax,Nout)
# to plot physical radius of the Sun
R0 = 0*times + ps[0].r
q = (ps[1].m/ps[0].m)
T = ps[0].r**3/sim.G/ps[0].m/ps[0].params["tctl_tau"]
apred = ps[0].r*((ps[1].a/ps[0].r)**8 - 48.*ps[0].params["tctl_k2"]*q*(1+q)*times/T)**(1./8.)
%%time
for i, time in enumerate(times):
sim.integrate(time)
a[i] = ps[1].a
e[i] = ps[1].e
fig, ax = plt.subplots(figsize=(12,4))
ax.plot(times/1e3, a, label='$a_{\oplus}$')
ax.plot(times/1e3, R0, label='$R_{\odot}$')
ax.plot(times/1e3, apred, '--', label='$a_{\oplus}$ predicted')
ax.set_xlabel('$t$ / kyr', fontsize='xx-large')
ax.set_ylabel('(AU)', fontsize='xx-large')
ax.legend(fontsize='xx-large', loc='best')
"""
Explanation: We can compare our numerical integration to the theoretical prediction assuming a circular orbit (see Baronett et al. 2021, Eq. 7). We'll integrate for 250 kyr and store the Earth's semi-major axis and eccentricity.
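Written out, the circular-orbit prediction as coded in the `apred` expression is

$$
a(t) = R\left[\left(\frac{a_0}{R}\right)^{8} - 48\, k_2\, q\,(1+q)\,\frac{t}{T}\right]^{1/8},
\qquad T = \frac{R^{3}}{G M \tau},
$$

where $R$, $M$, and $\tau$ refer to the Sun, $q = m_\oplus/M$ is the mass ratio, and $a_0$ is the Earth's initial semi-major axis.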
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,4))
ax.plot(times/1e3, e, label='$e_{\oplus}$')
ax.set_xlabel('$t$ / kyr', fontsize='xx-large')
ax.set_ylabel('e', fontsize='xx-large')
ax.legend(fontsize='xx-large', loc='best')
"""
Explanation: Note that the small eccentricity we originally initialized for the Earth causes our numerical result to diverge only slightly from the circular, theoretical prediction.
In fact, we can also check that the eccentricity decays:
End of explanation
"""
|
unpingco/Python-for-Probability-Statistics-and-Machine-Learning | chapters/machine_learning/notebooks/clustering.ipynb | mit | from IPython.display import Image
Image('https://github.com/unpingco/Python-for-Probability-Statistics-and-Machine-Learning/raw/master/python_for_probability_statistics_and_machine_learning.jpg')
%matplotlib inline
from matplotlib.pylab import subplots
import numpy as np
from sklearn.datasets import make_blobs
"""
Explanation: Clustering
End of explanation
"""
from sklearn.datasets import make_blobs
fig,ax=subplots()
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
_=ax.scatter(X[:,0],X[:,1],c=y,s=50,cmap='gray');
ax.tick_params(labelsize='x-large')
ax.set_aspect(1/1.6)
"""
Explanation: Clustering is the simplest member of a family of machine learning methods that
do not require supervision to learn from data. Unsupervised methods
have training sets that do not have a target variable. These unsupervised
learning
methods rely upon a meaningful metric to group data into
clusters. This makes it an excellent exploratory data analysis
method because there are very few assumptions built into the method itself.
In this section, we focus on the popular K-means clustering method that is
available in Scikit-learn.
Let's manufacture some data to get going with make_blobs from Scikit-learn.
Figure shows some example clusters in two dimensions.
Clustering methods work by minimizing the following objective function,
End of explanation
"""
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4)
kmeans.fit(X)
"""
Explanation: <!-- dom:FIGURE: [fig-machine_learning/clustering_001.png, width=500 frac=0.85]
The four clusters are pretty easy to see in this example and we want clustering
methods to determine the extent and number of such clusters automatically. <div
id="fig:clustering_001"></div> -->
<!-- begin figure -->
<div id="fig:clustering_001"></div>
<p>The four clusters are pretty easy to see in this example and we want
clustering methods to determine the extent and number of such clusters
automatically.</p>
<img src="fig-machine_learning/clustering_001.png" width=500>
<!-- end figure -->
$$
J = \sum_k \sum_i \Vert \mathbf{x}_i-\mathbf{\mu}_k \Vert^2
$$
The distortion for the $k^{th}$ cluster is the summand,
$$
\Vert \mathbf{x}_i - \mathbf{\mu}_k \Vert^2
$$
Thus, clustering algorithms work to minimize this by adjusting the
centers of the individual clusters, $\mu_k$. Intuitively, each $\mu_k$ is the
center of mass of the points in the cloud. The Euclidean distance is
the typical metric used for this,
$$
\Vert \mathbf{x} \Vert^2 = \sum_i x_i^2
$$
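Before handing this to Scikit-learn, the objective above can be minimized with a minimal hand-rolled sketch of Lloyd's algorithm (illustrative only; the synthetic two-blob data and deterministic initialization are assumptions for this example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
X[30:] += 5.0                       # two well-separated blobs
mu = X[[0, 30]].copy()              # one initial center taken from each blob

for _ in range(10):
    # assignment step: label each point with its nearest center
    d = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # update step: move each center to the mean of its assigned points
    mu = np.array([X[labels == k].mean(axis=0) for k in range(2)])

# objective J: summed squared distances of points to their cluster centers
J = sum(((X[labels == k] - mu[k]) ** 2).sum() for k in range(2))
```

Each pass can only decrease $J$, which is why the iteration converges to a (local) minimum.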
There are many clever algorithms that can solve this problem for
the best $\mu_k$ cluster-centers. The K-means algorithm starts with a
user-specified number of $K$ clusters to optimize over. This is implemented in
Scikit-learn with the KMeans object that follows the usual fitting
conventions in Scikit-learn,
End of explanation
"""
from scipy.spatial.distance import cdist
m_distortions=[]
for k in range(1,7):
kmeans = KMeans(n_clusters=k)
_=kmeans.fit(X)
tmp=cdist(X,kmeans.cluster_centers_,'euclidean')
m_distortions.append(sum(np.min(tmp,axis=1))/X.shape[0])
fig,ax=subplots()
fig.set_size_inches((8,5))
_=ax.plot(m_distortions,'-o',ms=10,color='gray')
_=ax.set_xlabel('K',fontsize=16)
_=ax.set_ylabel('Mean Distortion',fontsize=16)
ax.tick_params(labelsize='x-large')
# ax.set_aspect(1/1.6)
"""
Explanation: where we have chosen $K=4$. How do we choose the value of
$K$? This is the eternal question of generalization versus
approximation --- too many clusters provide great approximation but
bad generalization. One way to approach this problem is to compute the
mean distortion for increasingly larger values of $K$ until it no
longer makes sense. To do this, we want to take every data point and
compare it to the centers of all the clusters. Then, take the
smallest value of this across all clusters and average those. This
gives us an idea of the overall mean performance for the $K$ clusters.
The following code computes this explicitly.
Programming Tip.
The cdist function from Scipy computes all the pairwise
differences between the two input collections according to the
specified metric.
End of explanation
"""
from sklearn.metrics import silhouette_score
def scatter_fit(X,y,ax):
_=kmeans.fit(X)
_=ax.scatter(X[:,0],X[:,1],c=y,s=50,cmap='gray',marker='.')
_=ax.set_title('silhouette={:.3f}'.format(silhouette_score(X,kmeans.labels_)))
fig,axs = subplots(2,2,sharex=True,sharey=True)
np.random.seed(12)
ax=axs[0,0]
X,y=make_blobs(centers=[[0,0],[3,0]],n_samples=100)
scatter_fit(X,y,ax)
ax=axs[0,1]
X,y=make_blobs(centers=[[0,0],[10,0]],n_samples=100)
scatter_fit(X,y,ax)
ax=axs[1,0]
X,y=make_blobs(centers=[[0,0],[3,0]],n_samples=100,cluster_std=[.5,.5])
scatter_fit(X,y,ax)
ax=axs[1,1]
X,y=make_blobs(centers=[[0,0],[10,0]],n_samples=100,cluster_std=[.5,.5])
scatter_fit(X,y,ax)
"""
Explanation: <!-- dom:FIGURE: [fig-machine_learning/clustering_002.png, width=500 frac=0.75]
The Mean Distortion shows that there is a diminishing value in using more
clusters. <div id="fig:clustering_002"></div> -->
<!-- begin figure -->
<div id="fig:clustering_002"></div>
<p>The Mean Distortion shows that there is a diminishing value in using more
clusters.</p>
<img src="fig-machine_learning/clustering_002.png" width=500>
<!-- end figure -->
Note that the code above uses the cluster_centers_ attribute,
which is estimated by the K-means algorithm. The resulting Figure
shows the point of diminishing returns for additional clusters.
Another figure-of-merit is the silhouette coefficient, which measures
how compact and separated the individual clusters are. To compute the
silhouette coefficient, we need to compute the mean intra-cluster
distance for each sample ($a_i$) and the mean distance to the next
nearest cluster ($b_i$). Then, the silhouette coefficient for the
$i^{th}$ sample is
$$
\texttt{sc}_i = \frac{b_i-a_i}{\max(a_i,b_i)}
$$
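A from-scratch sketch of this computation on a two-cluster toy set (an assumption for illustration; with only two clusters, $b_i$ reduces to the mean distance to the other cluster):

```python
import numpy as np

rng = np.random.default_rng(1)
# two compact, well-separated toy clusters
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
labels = np.repeat([0, 1], 20)

def silhouette_sample(i):
    d = np.linalg.norm(X - X[i], axis=1)
    same = labels == labels[i]
    a = d[same & (np.arange(len(X)) != i)].mean()  # mean intra-cluster distance
    b = d[~same].mean()  # mean distance to the (only) other cluster
    return (b - a) / max(a, b)

sc = np.array([silhouette_sample(i) for i in range(len(X))])
print(sc.mean())  # near one for compact, well-separated clusters
```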
The mean silhouette coefficient is just the mean of all these values
over all the samples. The best value is one and the worst is negative one,
with values near zero indicating overlapping clusters and negative values
showing that samples have been incorrectly assigned to the wrong cluster. This
figure-of-merit is implemented in Scikit-learn as in the following,
End of explanation
"""
X,y = make_blobs(centers=[[0,0],[5,0]],random_state=100,n_samples=200)
Xx,yx=make_blobs(centers=[[20,0]],random_state=100,n_samples=3)
X=np.vstack([X,Xx])
y=np.hstack([y,yx+2])
fig,axs=subplots(2,1,sharex=True,sharey=True)
ax=axs[0]
_=ax.scatter(X[:,0],X[:,1],c=y,s=50,cmap='gray',marker='.',alpha=.3);
_=kmeans = KMeans(n_clusters=2,random_state=123,init='random')
_=kmeans.fit(X)
_=ax.set_aspect(1)
_=ax.plot(kmeans.cluster_centers_[:,0],kmeans.cluster_centers_[:,1],'o',color='gray',ms=15,alpha=.5)
X,y = make_blobs(centers=[[0,0],[5,0]],random_state=100,n_samples=200)
Xx,yx=make_blobs(centers=[[20,0]],random_state=100,n_samples=10)
X=np.vstack([X,Xx])
y=np.hstack([y,yx+2])
ax=axs[1]
_=ax.scatter(X[:,0],X[:,1],c=y,s=50,cmap='gray',marker='.',alpha=.3);
kmeans = KMeans(n_clusters=2,random_state=123,init='random')
_=kmeans.fit(X)
_=ax.set_aspect(1)
_=ax.plot(kmeans.cluster_centers_[:,0],kmeans.cluster_centers_[:,1],'o',color='gray',ms=15,alpha=.8)
"""
Explanation: Figure shows how the silhouette coefficient
varies
as the clusters become more dispersed and/or closer together.
<!-- dom:FIGURE: [fig-machine_learning/clustering_003.png, width=500 frac=0.85]
This shows how the silhouette coefficient varies as the clusters move closer and
become more compact. <div id="fig:clustering_003"></div> -->
<!-- begin figure -->
<div id="fig:clustering_003"></div>
<p>This shows how the silhouette coefficient varies as the clusters move closer
and become more compact.</p>
<img src="fig-machine_learning/clustering_003.png" width=500>
<!-- end figure -->
K-means is easy to understand and to implement, but can be sensitive
to the initial choice of cluster-centers. The default initialization
method in Scikit-learn uses a very effective and clever randomization
to come up with the initial cluster-centers. Nonetheless, to see why
initialization can cause instability with K-means, consider the
following Figure. In the Figure, there
are two large clusters on the left and
a very sparse cluster on the far right. The large circles at the
centers are the cluster-centers that K-means found. Given $K=2$, how
should the cluster-centers be chosen? Intuitively, the first two
clusters should have their own cluster-center somewhere between them
and the sparse cluster on the right should have its own cluster-center
[^kmeans].
Why isn't this happening?
[^kmeans]: Note that we are using the init=random keyword argument for this
example in order to illustrate this.
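Scikit-learn's default k-means++ initialization mitigates this by spreading the initial centers apart. The sketch below illustrates the underlying $D^2$ idea with a deterministic farthest-point variant (an assumption for illustration — real k-means++ samples each new center with probability proportional to its squared distance to the nearest chosen center, rather than taking the argmax):

```python
import numpy as np

rng = np.random.default_rng(2)
# two dense blobs plus a tiny far-away cluster, as in the figure above
X = np.vstack([rng.normal(0, 0.5, (100, 2)),
               rng.normal(5, 0.5, (100, 2)),
               rng.normal([20, 0], 0.5, (3, 2))])

def farthest_point_init(X, k):
    """Deterministic sketch of D^2 seeding: greedily pick the point
    that maximizes the squared distance to its nearest chosen center."""
    centers = [X[0]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(d2.argmax())])
    return np.array(centers)

centers = farthest_point_init(X, 3)   # one center lands near each cluster
```

Seeding this way keeps the sparse right-hand cluster from being absorbed by the initial centers of the two dense blobs.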
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/dwd/cmip6/models/sandbox-1/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-1', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
PyDataTokyo/pydata-tokyo-tutorial-1 | pydatatokyo_tutorial_dh.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
np.seterr(invalid='ignore') # Workaround
df = pd.read_csv("data/train.csv")
df[df.Age == 65][["Name", "Age"]]
"""
Explanation: 1. Tutorial Part One: "Data Handling"
Goals of Part One
Learn how to use IPython.
Learn about the tutorial data that will be used in Part Two.
Learn how to preprocess data with Pandas.
Learn how to visualize data with matplotlib.
Packages used
Python 3.4.2
Pandas 0.15.2
matplotlib 1.4.3
IPython[notebook] 3.0.0
About the instructor
PyData.Tokyo organizer Takahiro Ikeuchi (@iktakahiro)
eurie Inc. https://eurie.co.jp CEO & Founder
Twitter: @iktakahiro https://twitter.com/iktakahiro
Python, Go lang, Amazon Web Service
Agenda
Loading data
Aggregation and statistical analysis
Data preprocessing
Data visualization
1-1. Loading data
Pandas ships with functions for reading many kinds of data, CSV included. The data used here is also CSV, so executing the single line below loads it into a DataFrame.
What is a DataFrame?
It is the equivalent of a table in a database, or of a sheet in Excel. It stores related data of different types (numeric vectors, string vectors, and so on) together row by row. Each row and column carries a label, and the data can be manipulated by those labels. DataFrames are also well known from their use in R.
End of explanation
"""
df.head(2)
"""
Explanation: Let's take a look at the data we just loaded. Each column has a label, and the following fields are present.
PassengerId: passenger ID
Survived: 1 = survived, 0 = died
Pclass: ticket class
Name: name
Sex: sex
Age: age
Parch: number of parents/children aboard
Ticket: ticket number
Fare: fare
Cabin: cabin number
Embarked: port of embarkation
and so on
Taking a look at the data
As shown at the end of the output, this file contains 11 kinds of data (columns) for 891 passengers (rows).
Each row holds values such as sex, age, and class (called "features" in analysis).
End of explanation
"""
df.tail()
"""
Explanation: The head() function selects the given number of rows from the top of the data.
End of explanation
"""
df[['Name', 'Age', 'Sex']].head(3)
"""
Explanation: The tail() function selects the given number of rows from the end of the data. For both head() and tail(), five rows are selected when the row count is omitted.
You can extract data by naming specific columns.
Multiple columns can be specified as well.
End of explanation
"""
df.describe()
"""
Explanation: 1-2. Aggregation
The describe() function gives you a quick overview of a DataFrame.
count: the number of records.
mean: the mean value.
std: the standard deviation.
min: the minimum value.
25%, 50%, 75%: the first quartile, median, and third quartile.
max: the maximum value.
End of explanation
"""
max_age = df['Age'].max()
print('Maximum age: {0}'.format(max_age))
mean_age = df['Age'].mean()
print('Mean age: {0}'.format(mean_age))
"""
Explanation: Let's look at each column a little more carefully.
End of explanation
"""
df[df.Sex=='female'][['Name', 'Sex', 'Age']].sort_values(by='Age', ascending=False).head(10)
"""
Explanation: You can run aggregations such as max() and mean() against a specific column of a DataFrame.
Let's check the ten oldest passengers.
The top ten all appear to be men. Let's restrict the data to women.
End of explanation
"""
df['Cabin'].isnull().sum()
"""
Explanation: If you want to learn more about Pandas, Python for Data Analysis by Wes McKinney, the author of Pandas, is recommended. There are also online tutorials that cover the basics.
1-3. Data preprocessing
Dropping unneeded columns
Fields such as Cabin (cabin number) contain a lot of missing data.
End of explanation
"""
df[['Name', 'Ticket']].head()
"""
Explanation: Ticket (ticket number) is unlikely to be useful for this analysis.
End of explanation
"""
df.drop(['Ticket', 'Cabin'], axis=1, inplace=True)
df.head()
"""
Explanation: Let's just drop the Cabin and Ticket columns.
End of explanation
"""
df.loc[4:10]
"""
Explanation: Filling in missing values
The DataFrame contains values shown as NaN. These are our missing values.
End of explanation
"""
df.loc[4:6][['Name', 'Age']].interpolate()
"""
Explanation: Pandas provides the interpolate() function for filling in missing data.
End of explanation
"""
female_age_mean = round(df[df.Sex=='female']['Age'].mean())
male_age_mean = round(df[df.Sex=='male']['Age'].mean())
print('The mean age is {0} for women and {1} for men. We fill the missing ages with these means.'.format(female_age_mean, male_age_mean))
round(df[df.Sex=='male']['Age'].mean())
df[df.PassengerId==6][['PassengerId', 'Name', 'Sex', 'Age']]
df_female = df[df.Sex=='female'].fillna({'Age': female_age_mean})
df_male = df[df.Sex=='male'].fillna({'Age': male_age_mean})
filled_df = df_female.append(df_male)
filled_df[filled_df.PassengerId==6][['PassengerId', 'Name', 'Sex', 'Age']]
"""
Explanation: Several interpolation methods are implemented, and by default the data is filled by linear interpolation. For data like ours, however, where row order carries no meaning, this kind of interpolation is not really valid; it was shown purely to introduce the function.
Next, let's fill the missing ages with the mean age of each sex.
End of explanation
"""
def classification_age(age):
if age <= 19:
return '1'
elif age <= 34:
return '2'
elif age <= 49:
return '3'
elif age >= 50:
return '4'
else:
return '0'
filled_df['AgeClass'] = filled_df.Age.map(classification_age)
filled_df.head()
"""
Explanation: Adding a column
Let's add a column to the DataFrame.
We classify passengers by age and assign a number to each age band.
End of explanation
"""
filled_df['Survived'].plot(alpha=0.6, kind='hist', bins=2)
plt.xlabel('Survived')
plt.ylabel('N')
"""
Explanation: 1-4. Data visualization
Let's visualize the loaded data and take a rough look at what kind of logic might separate the survivors. For the visualization we use Pandas' plotting functions; there are many examples beyond the ones introduced here.
Pandas - plotting
First, to investigate how survival probability varies with age, sex, and so on, let's visualize the data.
We look at the data along two values: 0 = died, 1 = survived.
End of explanation
"""
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
for i, sex in enumerate(['male', 'female']):
filled_df['Survived'][filled_df.Sex==sex].hist(alpha=0.5, bins=2, ax=axes[i])
axes[i].set_title(sex)
fig.subplots_adjust(hspace=0.3)
fig.tight_layout()
"""
Explanation: Let's add the male / female dimension and look at the data.
End of explanation
"""
plt.hist([filled_df[(filled_df.Survived==0) & (filled_df.Sex=='male')]['Age'], filled_df[(filled_df.Survived==1) & (filled_df.Sex=='male')]['Age']],
alpha=0.6, range=(1,80), bins=10, stacked=True,
label=('Died', 'Survived'))
plt.legend()
plt.xlabel('Age')
plt.ylabel('N')
plt.title('male')
plt.hist([filled_df[(filled_df.Survived==0) & (filled_df.Sex=='female')]['Age'],
filled_df[(filled_df.Survived==1) & (filled_df.Sex=='female')]['Age']],
alpha=0.6, range=(1,80), bins=10, stacked=True,
label=('Died', 'Survived'))
plt.legend()
plt.xlabel('Age')
plt.ylabel('N')
plt.title('female')
fig = plt.figure(figsize=[15, 5])
ax1 = fig.add_subplot(121)
plt.hist([filled_df[(filled_df.Survived==0) & (filled_df.Sex=='female')]['Age'],
filled_df[(filled_df.Survived==1) & (filled_df.Sex=='female')]['Age']],
alpha=0.6, range=(1,80), bins=10, stacked=True,
label=('Died', 'Survived'))
plt.xlabel('Age')
plt.yticks([0, 40, 80, 120])
plt.ylabel('N')
plt.title('female')
plt.legend()
ax2 = fig.add_subplot(122)
plt.hist([filled_df[(filled_df.Survived==0) & (filled_df.Sex=='male')]['Age'],
filled_df[(filled_df.Survived==1) & (filled_df.Sex=='male')]['Age']],
alpha=0.6, range=(1,80), bins=10, stacked=True,
label=('Died', 'Survived'))
plt.xlabel('Age')
plt.yticks([0, 40, 80, 120])
plt.ylabel('N')
plt.title('male')
plt.legend()
plt.show()
"""
Explanation: We can see that women had a higher survival rate than men. Now let's add age as another dimension.
End of explanation
"""
mean_age = df['Age'].mean()
for pclass in [1, 2, 3]:
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=[10, 10])
sex_n=0
for sex in ['male', 'female']:
for survived in [0, 1]:
fig = filled_df[((filled_df.Survived==survived) & (filled_df.Sex==sex) & (filled_df.Pclass==pclass) )].Age.hist(alpha=0.6, bins=10, ax=axes[sex_n][survived])
fig.set_xlabel("Age")
fig.set_ylabel('N ('+sex+str(survived)+' )')
axes[sex_n][survived].set_ylim(0,70)
fig.set_title('Pclass = {0} / mean_age = {1}'.format(pclass, round(mean_age)))
sex_n += 1
plt.subplots_adjust(hspace=0.5)
plt.show()
"""
Explanation: Finally, let's add Pclass (ticket class) as a dimension and visualize the data.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.16/_downloads/plot_compute_raw_data_spectrum.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io, read_proj, read_selection
from mne.datasets import sample
from mne.time_frequency import psd_multitaper
print(__doc__)
"""
Explanation: Compute the power spectral density of raw data
This script shows how to compute the power spectral density (PSD)
of measurements on a raw dataset. It also shows the effect of applying SSP
to the data to reduce ECG and EOG artifacts.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_eog-proj.fif'
tmin, tmax = 0, 60 # use the first 60s of data
# Setup for reading the raw data (to save memory, crop before loading)
raw = io.read_raw_fif(raw_fname).crop(tmin, tmax).load_data()
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Add SSP projection vectors to reduce EOG and ECG artifacts
projs = read_proj(proj_fname)
raw.add_proj(projs, remove_existing=True)
fmin, fmax = 2, 300 # look at frequencies between 2 and 300Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
"""
Explanation: Load data
We'll load a sample MEG dataset, along with SSP projections that will
allow us to reduce EOG and ECG artifacts. For more information about
reducing artifacts, see the preprocessing section in the documentation.
End of explanation
"""
raw.plot_psd(area_mode='range', tmax=10.0, show=False, average=True)
"""
Explanation: Plot the raw PSD
First we'll visualize the raw PSD of our data. We'll do this on all of the
channels first. Note that there are several parameters to the
:meth:mne.io.Raw.plot_psd method, some of which will be explained below.
End of explanation
"""
# Pick MEG magnetometers in the Left-temporal region
selection = read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads', selection=selection)
# Let's just look at the first few channels for demonstration purposes
picks = picks[:4]
plt.figure()
ax = plt.axes()
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=False, ax=ax, color=(0, 0, 1), picks=picks,
show=False, average=True)
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=True, ax=ax, color=(0, 1, 0), picks=picks,
show=False, average=True)
# And now do the same with SSP + notch filtering
# Pick all channels for notch since the SSP projection mixes channels together
raw.notch_filter(np.arange(60, 241, 60), n_jobs=1, fir_design='firwin')
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=True, ax=ax, color=(1, 0, 0), picks=picks,
show=False, average=True)
ax.set_title('Four left-temporal magnetometers')
plt.legend(ax.lines[::3], ['Without SSP', 'With SSP', 'SSP + Notch'])
"""
Explanation: Plot a cleaned PSD
Next we'll focus the visualization on a subset of channels.
This can be useful for identifying particularly noisy channels or
investigating how the power spectrum changes across channels.
We'll visualize how this PSD changes after applying some standard
filtering techniques. We'll first apply the SSP projections, which is
accomplished with the proj=True kwarg. We'll then perform a notch filter
to remove particular frequency bands.
End of explanation
"""
f, ax = plt.subplots()
psds, freqs = psd_multitaper(raw, low_bias=True, tmin=tmin, tmax=tmax,
fmin=fmin, fmax=fmax, proj=True, picks=picks,
n_jobs=1)
psds = 10 * np.log10(psds)
psds_mean = psds.mean(0)
psds_std = psds.std(0)
ax.plot(freqs, psds_mean, color='k')
ax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,
color='k', alpha=.5)
ax.set(title='Multitaper PSD', xlabel='Frequency',
ylabel='Power Spectral Density (dB)')
plt.show()
"""
Explanation: Alternative functions for PSDs
There are also several functions in MNE that create a PSD using a Raw
object. These are in the :mod:mne.time_frequency module and begin with
psd_*. For example, we'll use a multitaper method to compute the PSD
below.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nuist/cmip6/models/sandbox-1/landice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-1', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NUIST
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
|
tensorflow/docs-l10n | site/ko/lattice/tutorials/keras_layers.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
#@test {"skip": true}
!pip install tensorflow-lattice pydot
"""
Explanation: Creating Keras Models with TFL Layers
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/keras_layers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/keras_layers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/keras_layers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/keras_layers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
You can use TFL Keras layers to construct Keras models with monotonicity and other shape constraints. This example builds and trains a calibrated lattice model on the UCI heart dataset using TFL layers.
In a calibrated lattice model, each feature is transformed by a tfl.layers.PWLCalibration or tfl.layers.CategoricalCalibration layer, and the results are nonlinearly fused using a tfl.layers.Lattice layer.
Setup
Installing the TF Lattice package
End of explanation
"""
import tensorflow as tf
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
from tensorflow import feature_column as fc
logging.disable(sys.maxsize)
"""
Explanation: Importing required packages
End of explanation
"""
# UCI Statlog (Heart) dataset.
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
training_data_df = pd.read_csv(csv_file).sample(
frac=1.0, random_state=41).reset_index(drop=True)
training_data_df.head()
"""
Explanation: Downloading the UCI Statlog (Heart) dataset
End of explanation
"""
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 100
"""
Explanation: Setting the default values used for training in this guide
End of explanation
"""
# Lattice layer expects input[i] to be within [0, lattice_sizes[i] - 1.0], so
lattice_sizes = [3, 2, 2, 2, 2, 2, 2]
"""
Explanation: Sequential Keras model
This example creates a sequential Keras model using only TFL layers.
The lattice layer expects input[i] to be within [0, lattice_sizes[i] - 1.0], so the lattice sizes need to be defined ahead of the calibration layers in order to correctly specify the output range of the calibration layers.
End of explanation
"""
combined_calibrators = tfl.layers.ParallelCombination()
"""
Explanation: Use a tfl.layers.ParallelCombination layer to group the calibration layers that have to run in parallel in order to create a sequential model.
End of explanation
"""
# ############### age ###############
calibrator = tfl.layers.PWLCalibration(
# Every PWLCalibration layer must have keypoints of piecewise linear
# function specified. Easiest way to specify them is to uniformly cover
# entire input range by using numpy.linspace().
input_keypoints=np.linspace(
training_data_df['age'].min(), training_data_df['age'].max(), num=5),
# You need to ensure that input keypoints have same dtype as layer input.
# You can do it by setting dtype here or by providing keypoints in such
# format which will be converted to desired tf.dtype by default.
dtype=tf.float32,
# Output range must correspond to expected lattice input range.
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
)
combined_calibrators.append(calibrator)
# ############### sex ###############
# For boolean features simply specify CategoricalCalibration layer with 2
# buckets.
calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
# Initializes all outputs to (output_min + output_max) / 2.0.
kernel_initializer='constant')
combined_calibrators.append(calibrator)
# ############### cp ###############
calibrator = tfl.layers.PWLCalibration(
# Here instead of specifying dtype of layer we convert keypoints into
# np.float32.
input_keypoints=np.linspace(1, 4, num=4, dtype=np.float32),
output_min=0.0,
output_max=lattice_sizes[2] - 1.0,
monotonicity='increasing',
# You can specify TFL regularizers as a tuple ('regularizer name', l1, l2).
kernel_regularizer=('hessian', 0.0, 1e-4))
combined_calibrators.append(calibrator)
# ############### trestbps ###############
calibrator = tfl.layers.PWLCalibration(
# Alternatively, you might want to use quantiles as keypoints instead of
# uniform keypoints
input_keypoints=np.quantile(training_data_df['trestbps'],
np.linspace(0.0, 1.0, num=5)),
dtype=tf.float32,
# Together with quantile keypoints you might want to initialize piecewise
# linear function to have 'equal_slopes' in order for output of layer
# after initialization to preserve original distribution.
kernel_initializer='equal_slopes',
output_min=0.0,
output_max=lattice_sizes[3] - 1.0,
# You might consider clamping extreme inputs of the calibrator to output
# bounds.
clamp_min=True,
clamp_max=True,
monotonicity='increasing')
combined_calibrators.append(calibrator)
# ############### chol ###############
calibrator = tfl.layers.PWLCalibration(
# Explicit input keypoint initialization.
input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
dtype=tf.float32,
output_min=0.0,
output_max=lattice_sizes[4] - 1.0,
# Monotonicity of calibrator can be decreasing. Note that corresponding
# lattice dimension must have INCREASING monotonicity regardless of
# monotonicity direction of calibrator.
monotonicity='decreasing',
# Convexity together with decreasing monotonicity result in diminishing
# return constraint.
convexity='convex',
# You can specify list of regularizers. You are not limited to TFL
# regularizrs. Feel free to use any :)
kernel_regularizer=[('laplacian', 0.0, 1e-4),
tf.keras.regularizers.l1_l2(l1=0.001)])
combined_calibrators.append(calibrator)
# ############### fbs ###############
calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[5] - 1.0,
# For categorical calibration layer monotonicity is specified for pairs
# of indices of categories. Output for first category in pair will be
# smaller than output for second category.
#
# Don't forget to set monotonicity of corresponding dimension of Lattice
# layer to '1'.
monotonicities=[(0, 1)],
    # This initializer is identical to the default one ('uniform'), but has a fixed
# seed in order to simplify experimentation.
kernel_initializer=tf.keras.initializers.RandomUniform(
minval=0.0, maxval=lattice_sizes[5] - 1.0, seed=1))
combined_calibrators.append(calibrator)
# ############### restecg ###############
calibrator = tfl.layers.CategoricalCalibration(
num_buckets=3,
output_min=0.0,
output_max=lattice_sizes[6] - 1.0,
# Categorical monotonicity can be partial order.
monotonicities=[(0, 1), (0, 2)],
# Categorical calibration layer supports standard Keras regularizers.
kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.001),
kernel_initializer='constant')
combined_calibrators.append(calibrator)
"""
Explanation: Create a calibration layer for each feature and add it to the parallel combination layer. Use tfl.layers.PWLCalibration for numeric features and tfl.layers.CategoricalCalibration for categorical features.
End of explanation
"""
lattice = tfl.layers.Lattice(
lattice_sizes=lattice_sizes,
monotonicities=[
'increasing', 'none', 'increasing', 'increasing', 'increasing',
'increasing', 'increasing'
],
output_min=0.0,
output_max=1.0)
"""
Explanation: Next, create a lattice layer to nonlinearly fuse the outputs of the calibrators.
Note that we need to specify increasing monotonicity of the lattice for the required dimensions. Composition with the direction of the monotonicity in the calibration results in the correct end-to-end direction of monotonicity. This includes the partial monotonicity of the CategoricalCalibration layers.
End of explanation
"""
model = tf.keras.models.Sequential()
model.add(combined_calibrators)
model.add(lattice)
"""
Explanation: We can then create a sequential model using the combined calibrators and the lattice layer.
End of explanation
"""
features = training_data_df[[
'age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg'
]].values.astype(np.float32)
target = training_data_df[['target']].values.astype(np.float32)
model.compile(
loss=tf.keras.losses.mean_squared_error,
optimizer=tf.keras.optimizers.Adagrad(learning_rate=LEARNING_RATE))
model.fit(
features,
target,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_split=0.2,
shuffle=False,
verbose=0)
model.evaluate(features, target)
"""
Explanation: Training works the same as any other Keras model.
End of explanation
"""
# We are going to have 2-d embedding as one of lattice inputs.
lattice_sizes = [3, 2, 2, 3, 3, 2, 2]
"""
Explanation: Functional Keras model
This example uses the functional API for Keras model construction.
As mentioned in the previous section, the lattice layer expects input[i] to be within [0, lattice_sizes[i] - 1.0], so the lattice sizes need to be defined ahead of the calibration layers in order to properly specify the output range of the calibration layers.
End of explanation
"""
model_inputs = []
lattice_inputs = []
# ############### age ###############
age_input = tf.keras.layers.Input(shape=[1], name='age')
model_inputs.append(age_input)
age_calibrator = tfl.layers.PWLCalibration(
# Every PWLCalibration layer must have keypoints of piecewise linear
# function specified. Easiest way to specify them is to uniformly cover
# entire input range by using numpy.linspace().
input_keypoints=np.linspace(
training_data_df['age'].min(), training_data_df['age'].max(), num=5),
# You need to ensure that input keypoints have same dtype as layer input.
# You can do it by setting dtype here or by providing keypoints in such
# format which will be converted to desired tf.dtype by default.
dtype=tf.float32,
# Output range must correspond to expected lattice input range.
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
monotonicity='increasing',
name='age_calib',
)(
age_input)
lattice_inputs.append(age_calibrator)
# ############### sex ###############
# For boolean features simply specify CategoricalCalibration layer with 2
# buckets.
sex_input = tf.keras.layers.Input(shape=[1], name='sex')
model_inputs.append(sex_input)
sex_calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
# Initializes all outputs to (output_min + output_max) / 2.0.
kernel_initializer='constant',
name='sex_calib',
)(
sex_input)
lattice_inputs.append(sex_calibrator)
# ############### cp ###############
cp_input = tf.keras.layers.Input(shape=[1], name='cp')
model_inputs.append(cp_input)
cp_calibrator = tfl.layers.PWLCalibration(
# Here instead of specifying dtype of layer we convert keypoints into
# np.float32.
input_keypoints=np.linspace(1, 4, num=4, dtype=np.float32),
output_min=0.0,
output_max=lattice_sizes[2] - 1.0,
monotonicity='increasing',
    # You can specify TFL regularizers as a tuple ('regularizer name', l1, l2).
kernel_regularizer=('hessian', 0.0, 1e-4),
name='cp_calib',
)(
cp_input)
lattice_inputs.append(cp_calibrator)
# ############### trestbps ###############
trestbps_input = tf.keras.layers.Input(shape=[1], name='trestbps')
model_inputs.append(trestbps_input)
trestbps_calibrator = tfl.layers.PWLCalibration(
# Alternatively, you might want to use quantiles as keypoints instead of
# uniform keypoints
input_keypoints=np.quantile(training_data_df['trestbps'],
np.linspace(0.0, 1.0, num=5)),
dtype=tf.float32,
# Together with quantile keypoints you might want to initialize piecewise
# linear function to have 'equal_slopes' in order for output of layer
# after initialization to preserve original distribution.
kernel_initializer='equal_slopes',
output_min=0.0,
output_max=lattice_sizes[3] - 1.0,
# You might consider clamping extreme inputs of the calibrator to output
# bounds.
clamp_min=True,
clamp_max=True,
monotonicity='increasing',
name='trestbps_calib',
)(
trestbps_input)
lattice_inputs.append(trestbps_calibrator)
# ############### chol ###############
chol_input = tf.keras.layers.Input(shape=[1], name='chol')
model_inputs.append(chol_input)
chol_calibrator = tfl.layers.PWLCalibration(
# Explicit input keypoint initialization.
input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
output_min=0.0,
output_max=lattice_sizes[4] - 1.0,
# Monotonicity of calibrator can be decreasing. Note that corresponding
# lattice dimension must have INCREASING monotonicity regardless of
# monotonicity direction of calibrator.
monotonicity='decreasing',
# Convexity together with decreasing monotonicity result in diminishing
# return constraint.
convexity='convex',
    # You can specify a list of regularizers. You are not limited to TFL
    # regularizers. Feel free to use any :)
kernel_regularizer=[('laplacian', 0.0, 1e-4),
tf.keras.regularizers.l1_l2(l1=0.001)],
name='chol_calib',
)(
chol_input)
lattice_inputs.append(chol_calibrator)
# ############### fbs ###############
fbs_input = tf.keras.layers.Input(shape=[1], name='fbs')
model_inputs.append(fbs_input)
fbs_calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[5] - 1.0,
# For categorical calibration layer monotonicity is specified for pairs
# of indices of categories. Output for first category in pair will be
# smaller than output for second category.
#
# Don't forget to set monotonicity of corresponding dimension of Lattice
# layer to '1'.
monotonicities=[(0, 1)],
    # This initializer is identical to the default one ('uniform'), but has a fixed
# seed in order to simplify experimentation.
kernel_initializer=tf.keras.initializers.RandomUniform(
minval=0.0, maxval=lattice_sizes[5] - 1.0, seed=1),
name='fbs_calib',
)(
fbs_input)
lattice_inputs.append(fbs_calibrator)
# ############### restecg ###############
restecg_input = tf.keras.layers.Input(shape=[1], name='restecg')
model_inputs.append(restecg_input)
restecg_calibrator = tfl.layers.CategoricalCalibration(
num_buckets=3,
output_min=0.0,
output_max=lattice_sizes[6] - 1.0,
# Categorical monotonicity can be partial order.
monotonicities=[(0, 1), (0, 2)],
# Categorical calibration layer supports standard Keras regularizers.
kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.001),
kernel_initializer='constant',
name='restecg_calib',
)(
restecg_input)
lattice_inputs.append(restecg_calibrator)
"""
Explanation: We need to create an input layer and a calibration layer for each feature. Use tfl.layers.PWLCalibration for numeric features and tfl.layers.CategoricalCalibration for categorical features.
End of explanation
"""
lattice = tfl.layers.Lattice(
lattice_sizes=lattice_sizes,
monotonicities=[
'increasing', 'none', 'increasing', 'increasing', 'increasing',
'increasing', 'increasing'
],
output_min=0.0,
output_max=1.0,
name='lattice',
)(
lattice_inputs)
"""
Explanation: Next, create a lattice layer to nonlinearly fuse the outputs of the calibrators.
Note that we need to specify increasing monotonicity of the lattice for the required dimensions. Composition with the direction of the monotonicity in the calibration results in the correct end-to-end direction of monotonicity. This includes the partial monotonicity of the tfl.layers.CategoricalCalibration layers.
End of explanation
"""
model_output = tfl.layers.PWLCalibration(
input_keypoints=np.linspace(0.0, 1.0, 5),
name='output_calib',
)(
lattice)
"""
Explanation: To give the model more flexibility, we add an output calibration layer.
End of explanation
"""
model = tf.keras.models.Model(
inputs=model_inputs,
outputs=model_output)
tf.keras.utils.plot_model(model, rankdir='LR')
"""
Explanation: Now we can create a model using the inputs and outputs.
End of explanation
"""
feature_names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg']
features = np.split(
training_data_df[feature_names].values.astype(np.float32),
indices_or_sections=len(feature_names),
axis=1)
target = training_data_df[['target']].values.astype(np.float32)
model.compile(
loss=tf.keras.losses.mean_squared_error,
optimizer=tf.keras.optimizers.Adagrad(LEARNING_RATE))
model.fit(
features,
target,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_split=0.2,
shuffle=False,
verbose=0)
model.evaluate(features, target)
"""
Explanation: Training behaves the same as for any other Keras model. With our setup, the input features are passed as separate tensors.
End of explanation
"""
|
tuanavu/python-cookbook-3rd | notebooks/ch01/16_filtering_list_elements.ipynb | mit | mylist = [1, 4, -5, 10, -7, 2, 3, -1]
# All positive values
pos = [n for n in mylist if n > 0]
print(pos)
# All negative values
neg = [n for n in mylist if n < 0]
print(neg)
# Negative values clipped to 0
neg_clip = [n if n > 0 else 0 for n in mylist]
print(neg_clip)
# Positive values clipped to 0
pos_clip = [n if n < 0 else 0 for n in mylist]
print(pos_clip)
"""
Explanation: Filtering Sequence Elements
Problem
You have data inside of a sequence, and need to extract values or reduce the sequence using some criteria.
Solution
The easiest way to filter sequence data is often to use a list comprehension.
End of explanation
"""
pos = (n for n in mylist if n > 0)
pos
for x in pos:
print(x)
"""
Explanation: You can use generator expressions to produce the filtered values iteratively.
End of explanation
"""
values = ['1', '2', '-3', '-', '4', 'N/A', '5']
def is_int(val):
try:
x = int(val)
return True
except ValueError:
return False
ivals = list(filter(is_int, values))
print(ivals)
# Outputs ['1', '2', '-3', '4', '5']
"""
Explanation: Discussion
Filtering
Sometimes the filtering criteria can't be easily expressed in a list comprehension or generator expression. For example, suppose that the filtering process involves exception handling or some other complicated detail. In that case, put the filtering code into its own function and use the built-in filter() function. Note that filter() returns an iterator, so wrap it in list() if you want a list of results.
End of explanation
"""
addresses = [
'5412 N CLARK',
'5148 N CLARK',
'5800 E 58TH',
'2122 N CLARK',
'5645 N RAVENSWOOD',
'1060 W ADDISON',
'4801 N BROADWAY',
'1039 W GRANVILLE',
]
counts = [ 0, 3, 10, 4, 1, 7, 6, 1]
from itertools import compress
more5 = [ n > 5 for n in counts ]
a = list(compress(addresses, more5))
print(a)
"""
Explanation: Using itertools
Another notable filtering tool is itertools.compress(), which takes an iterable and an accompanying Boolean selector sequence as input. As output, it gives you all of the items in the iterable where the corresponding element in the selector is True. This can be useful if you’re trying to apply the results of filtering one sequence to another related sequence.
End of explanation
"""
|
google/applied-machine-learning-intensive | content/04_classification/01_binary_classification/colab.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/04_classification/01_binary_classification/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
"""
!KAGGLE_CONFIG_DIR=`pwd` kaggle datasets download joshmcadams/oranges-vs-grapefruit
!ls
"""
Explanation: Binary Classification
In this unit we will explore binary classification using logistic regression.
Some of these terms might be new, so let's explore them a bit more.
Classification is the process of mapping a set of data points to a finite set of labels. From our regression labs, you likely remember that regression models such as linear regression map input variables to a range of continuous values. In the domain of machine learning, models that predict continuous values are considered regression models. Models that predict a known finite set of values are considered classification models.
So what does binary mean?
Binary means there are only two values to predict. Binary classification is used to predict one of two values. These can be true/false, malignant/benign, yes/no, or any possible this-or-that options. For simplicity, these options are usually encoded as 1 and 0.
And what about logistic regression?
You've already seen linear regression, which attempts to fit a line to a set of data in order to predict continuous values. Logistic regression similarly fits a curve to the data, but that curve is the logistic (sigmoid) function. Instead of predicting a continuous value, the model uses the logistic curve to split the data into two classes: one class falls on one side of the resulting decision boundary, and the other class falls on the other side.
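As a concrete illustration, here is a minimal sketch (using NumPy, with made-up coefficients; the weight and bias below are hypothetical, not fitted to any data) of how a logistic model turns a linear score into a probability and then into a 0/1 class label:

```python
import numpy as np

def sigmoid(z):
    # The logistic function squashes any real-valued score into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients for a single feature (illustration only).
w, b = 2.0, -1.0
x = np.array([-2.0, 0.0, 0.5, 3.0])

probs = sigmoid(w * x + b)           # predicted probability of class 1
labels = (probs >= 0.5).astype(int)  # conventional 0.5 threshold
print(labels)  # [0 0 1 1]
```

Points whose scores land on one side of the threshold go into one class, and points on the other side go into the other.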
Framing the Problem
Cindy's Produce For Good has a problem. Its business model revolves around collecting unsold fruit and vegetables from local growers and distributing them to families in need so that they can consume it or resell it at local farmer's markets and roadside stands.
Quite a few complaints have come in lately from families and customers who have had a bitter surprise. They've peeled what they thought was an orange only to bite in and find out that they are eating a grapefruit!
Cindy's growers give her truckloads of mixed citrus: lemons, limes, oranges, and grapefruit. A volunteer crew sorts the fruit. They are really good at sorting lemons and limes, but they falsely identify grapefruit as oranges about 5% of the time.
In order to ensure customers get the oranges they expect, Cindy has created a machine that measures the weight, color, and largest diameter of fruit. She wants to create some software that can use this information and tell her workers if the fruit is an orange or not.
She put a few thousand pieces of orange-looking fruit from one of her shipments through the sensors and manually labelled them as oranges or grapefruit. Looking at the data, she couldn't see an obvious pattern. Her best performance was about 90% accuracy. Her human sorters can do at least 95%. She's requested our help to see if we can solve the orange vs. grapefruit problem.
In this lab we'll examine Cindy's citrus data and try to build a model to help her reliably sort her fruit as well as or better than human sorters.
Exercise 1: Thinking About the Data
Before we dive in to looking closely at the data, let's think about the problem space and the dataset. Consider the questions below.
Question 1
Is this problem actually a good fit for machine learning? Why or why not?
Student Solution
Please Put Your Answer Here
Question 2
If we do build Cindy a machine learning model, what biases might exist in the data? Is there anything that might cause her model to have trouble generalizing to other data? If so, how might she make the model more resilient?
Student Solution
Please Put Your Answer Here
Question 3
We've been asked to create a system that determines if a piece of fruit is an orange or not an orange. But aside from that, we haven't gotten much information about how the system would work as a whole.
Describe how you would design the system from end-to-end. Things to consider:
Would the input fruit be all of the fruit that Cindy receives? Only the fruit suspected of being an orange? Only questionable fruit? Anything suspected of being an orange or a grapefruit?
What happens to fruit classified as "not orange". Is it automatically considered a grapefruit? Is it thrown away? Put in a mixed fruit bag?
Justify the inputs and the output actions for the system. What are the trade-offs?
Student Solution
Please Put Your Answer Here
Exploratory Data Analysis
Acquire the Data
We have some idea about the problem that we are trying to solve, so let's take a look at what has been collected. The data is hosted on Kaggle. You can download the dataset and then upload it to this lab or use the code blocks below to fetch the data directly.
Direct Kaggle Download
Follow the API Credentials instructions and get a kaggle.json file (if you don't already have one), and upload it to this lab.
Then run the code block below to download the oranges vs. grapefruit dataset.
End of explanation
"""
!unzip -o oranges-vs-grapefruit.zip
!ls
"""
Explanation: There should now be an oranges-vs-grapefruit.zip file in the virtual machine for this lab. Let's unzip it so we can access the data.
End of explanation
"""
import pandas as pd
citrus_df = pd.read_csv('citrus.csv', header=0)
citrus_df.sample(10, random_state=2020)
"""
Explanation: There is now a citrus.csv file in our virtual machine. Let's start digging into the data next.
Basic Analysis
First and foremost, we need to load the data. For that we'll rely on Pandas and use the read_csv function since the data was provided to us as a CSV file.
After we load the data, let's sample it to get an idea of what we are working with.
End of explanation
"""
# Your Solution Goes Here
"""
Explanation: It looks like we have a mixed bag of fruit containing oranges and grapefruit, just as expected.
How many do we have of each?
Exercise 2: Basic Statistics
Let's take a moment to determine the distribution of fruit in our dataset. Use pyplot to create a histogram of the values in the name column of our DataFrame.
Student Solution
End of explanation
"""
citrus_df.describe()
"""
Explanation: Interpreting Our Histogram
The histogram shows the data evenly distributed across different types of fruit. This distribution makes the dataset very balanced for building a model for our classifier.
Describing Our Dataset
Next let's do a simple describe of our dataset to get some more detailed information.
End of explanation
"""
# Your Solution Goes Here
"""
Explanation: Since every count is 10,000, we don't seem to have missing values.
Also, every min value is a positive number. This is good since it would be really odd to have negative diameters, weights, or colors.
Do the values themselves look reasonable? The diameter is measured in centimeters. Is a 2 cm piece of fruit believable? What about a 16 cm piece of fruit?
Similarly, do the weights seem within ranges that we'd expect?
It is actually difficult to tell since we have different kinds of fruit in this bag. It would be easier to inspect summary statistics for each type of fruit.
Exercise 3: More Focused Description
We have used describe() to get statistics about the entire dataset, but there isn't a lot of information in the data. Write Python code to print describe() statistics for each type of fruit in the dataset. Use the percentiles argument to the describe method to not print the 25th and 75th percentile.
Your output should look similar to:
```
orange
diameter weight red green blue
count 5000.000000 5000.000000 5000.000000 5000.000000 5000.000000
mean 8.474424 152.804920 156.832800 81.988200 7.115200
std 1.260665 18.669021 9.890258 10.090789 6.493779
min 2.960000 86.760000 123.000000 49.000000 2.000000
50% 8.470000 152.665000 157.000000 82.000000 4.000000
max 12.870000 231.090000 192.000000 116.000000 38.000000
grapefruit
diameter weight red green blue
count 5000.000000 5000.000000 5000.000000 5000.000000 5000.000000
mean 11.476946 197.296664 150.862800 70.033000 15.611200
std 1.221148 19.193190 10.103148 10.044924 9.271592
min 7.630000 126.790000 115.000000 31.000000 2.000000
50% 11.450000 197.430000 151.000000 70.000000 15.000000
max 16.450000 261.510000 187.000000 103.000000 56.000000
```
Student Solution
End of explanation
"""
import altair as alt
"""
Explanation: Visualizing With Box Plots
Now that we've sanity checked our data, let's visualize it to see if we can gather more insight. Above we gathered the min, max, mean, etc. for each numeric column for each type of fruit in a tabular form. Let's now visualize that data using a box plot and the Altair visualization library.
To start using Altair, we simply import it.
End of explanation
"""
citrus_df_sample = citrus_df.sample(n=5000, random_state=2020)
alt.Chart(citrus_df_sample, width=400).mark_boxplot().encode(
x='name',
y='diameter'
)
"""
Explanation: Next we will use the mark_boxplot method of the Chart class to create our boxplot.
Let's start by plotting the diameter by name.
To do this we must first sample a subset of our data. We have 10,000 rows of data, and Altair cannot visualize that much data in a boxplot. The row limit is 5,000 rows, so we'll create a 5,000 row sample and then pass that sample to Altair.
End of explanation
"""
alt.Chart(citrus_df_sample, width=400).mark_boxplot().encode(
x='name',
y='diameter'
) | alt.Chart(citrus_df_sample, width=400).mark_boxplot().encode(
x='name',
y='weight'
)
"""
Explanation: What insights can we glean from this graphic?
As expected, the diameter of a grapefruit trends larger than that of an orange, but there is some overlap.
Let's now add in weight to our boxplot.
End of explanation
"""
# Your Solution Goes Here
"""
Explanation: Correlation
Notice that relative weight and diameter seem pretty similar. These two columns might be closely correlated enough that we only need to use one of them. Let's check the correlation coefficient.
Exercise 4: Correlation Coefficient
Based on our visualization above, we suspect that diameter and weight are highly correlated. Write code to find the correlation coefficient between the diameter and weight columns in our DataFrame.
Hint: Check out the corr documentation.
Student Solution
End of explanation
"""
alt.Chart(citrus_df_sample).mark_circle().encode(
x='diameter',
y='weight',
color='name'
)
"""
Explanation: Understanding the Correlation
The correlation between diameter and weight is over 99%. That is a very high value.
This shouldn't come as a big surprise. We should expect that the weight of a piece of fruit grows as its diameter grows.
For now we can leave the data as is, but remember this correlation. We might be able to use it to remove a column from our training data without negatively affecting our model.
Let's take another look at height and weight. They are definitely correlated, but how do they relate to each other for each fruit type?
One way to see this is to use a scatter plot chart to plot the diameter versus the weight, segmented by fruit type.
We'll use Altair to do this.
End of explanation
"""
alt.Chart(citrus_df_sample, width=400).mark_boxplot().encode(
x='name',
y='red'
) | alt.Chart(citrus_df_sample, width=400).mark_boxplot().encode(
x='name',
y='green'
) | alt.Chart(citrus_df_sample, width=400).mark_boxplot().encode(
x='name',
y='blue'
)
"""
Explanation: We can see that oranges and grapefruit have very similar rates of weight gain as their diameter increases. This shouldn't be too surprising since they are very similar fruits.
In this chart we can also see that there are some fruits that are clearly oranges because of their small size and weight, as well as some that are clearly grapefruit due to their large size and weight. However, we have a large number of fruits that will be difficult to classify using diameter and weight alone.
Checking Color Values
We've looked pretty closely at the diameter and weight values, but we haven't done much with the color (RGB) values.
Let's first see if boxplots are helpful.
End of explanation
"""
citrus_df.groupby('name')['name'].count()
"""
Explanation: There doesn't seem to be a lot of value there, at least examining each element of color separately. There is quite a bit of overlap between each color element, with grapefruit displaying a little less red and green typically.
It would also be nice to "sanity check" the color values, similar to how we checked to make sure that our diameters and weights were within reason. We could see if the values fall within a reasonable range, but then we'd need to know reasonable RGB values for oranges and grapefruit.
Since we are dealing with color data, we can just create an image for each piece of fruit that contains a sampling (or all) of the colors that we have and we can see if it looks reasonable.
First, let's get an exact count of the number of samples of each fruit type.
End of explanation
"""
from PIL import Image
from matplotlib.pyplot import imshow
import numpy as np
height, width = 50, 100
img = Image.new('RGB', (width, height), color=(255, 255, 255))
pixels = img.load()
row_i, col_i = 0, 0
for _, fruit in citrus_df[citrus_df['name'] == 'orange'].iterrows():
pixels[col_i, row_i] = (fruit['red'], fruit['green'], fruit['blue'])
col_i += 1
if col_i >= width:
col_i = 0
row_i += 1
imshow(img)
"""
Explanation: As expected, we have 5000 samples each. We can create a 100x50 image for each fruit type and visualize the data.
We'll use PIL's Image class to create a white 100x50 image. Then we'll get the editable pixel map from the image and assign the color value for each orange in our data to a different pixel in the image.
Once we have the image filled out with color, we'll use PyPlot to display the image.
End of explanation
"""
# Your Solution Goes Here
"""
Explanation: That looks like a pretty reasonable orange color. What about the grapefruit?
Exercise 5: Create a Color Map Image
We only visualized data from oranges. We'd really like to see the colors of all of the fruit. Create and show a 100x100 image that contains the colors for all of the oranges in the first 100x50 block. This should be followed with the colors for all of the grapefruit in the next 100x50 block. Visually inspect your image to see if the colors are believable as oranges and grapefruit.
Student Solution
End of explanation
"""
citrus_df.columns
"""
Explanation: Data Analysis Summary
We've done a lot of data analysis and have a pretty good feel for our data. We have:
Examined the distribution of our dataset and seen we have an equal distribution of fruit types
Determined that no data is missing
Determined that our weight, diameter, and color values are all within reason
Found a strong correlation between weight and diameter
Let's see if we can build a model to classify our oranges!
Simple Logistic Model
It is now time to build and iterate on a model. We'll start with a simple logistic regression model using scikit-learn's sklearn.linear_model.LogisticRegression class and the feature columns already in our training data.
Let's first remind ourselves of the columns we have at our disposal.
End of explanation
"""
# Your Solution Goes Here
"""
Explanation: We'll use 'diameter', 'weight', 'red', 'green', and 'blue' as feature columns. Using 'name' for our target column is tempting, but remember that it contains fruit names for values, and for this exercise, we are only interested in determining if a piece of fruit is an orange or not an orange. Let's create a new column called 'is_orange' that contains the value True if the datum is an orange and False otherwise.
Exercise 6: Is Orange?
Create a new column in citrus_df called is_orange. The column should be a boolean column and should contain the value True if a given row is labeled as an orange and False otherwise.
Student Solution
End of explanation
"""
citrus_df.groupby('is_orange')['is_orange'].count()
"""
Explanation: Examining Our New Target Column
Now that we've created a new target column, we should do some checking to make sure that it was created correctly.
First we'll simply see the count per value.
End of explanation
"""
citrus_df[citrus_df['is_orange']]['name'].unique()
"""
Explanation: There should be 5,000 True values and 5,000 False values.
Now check to see that all 5,000 of the True values have the name "orange".
End of explanation
"""
target_column = 'is_orange'
feature_columns = ['diameter', 'weight', 'red', 'green', 'blue']
target_column, feature_columns
"""
Explanation: We should only see a single value in the unique list ('orange') since all rows with 'is_orange' set to True should have a 'name' of 'orange'.
Train/Test Split
We can now split our data for training and testing. First we will create variables to hold our training and target column names.
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
citrus_df[feature_columns],
citrus_df[target_column],
test_size=0.2,
random_state=180,
shuffle=True)
y_train.groupby(y_train).count()
"""
Explanation: We need to split the data into a training and testing set. In this case we'll split 20% of the data off for testing and train on the other 80%. We can use scikit-learn's train_test_split function to do this. It is also a really good idea to shuffle our data, and train_test_split allows us to do this too.
After we make the split, we can see how many data points we will train off of for each class.
End of explanation
"""
y_test.groupby(y_test).count()
"""
Explanation: Hmm. It looks like our training set has become a little uneven. Ideally, we would maintain the same ratio of oranges to non-oranges in our training and testing groups as the ratio in the whole set (50/50). But after splitting the data, we've ended up with a training set that skews towards non-oranges, and a test set that skews the opposite way, towards oranges.
End of explanation
"""
# Your Code Goes Here
"""
Explanation: Luckily, there's a solution for this problem: stratified sampling. Stratifying our data ensures that the ratio of distinct values in the given column remains the same in our training and test sets as it is in the whole set (half orange and half non-orange, in this case).
Exercise 7: Stratified Train Test Split
Look at the documentation for train_test_split and find the argument that can be used to stratify the data. Rewrite the split above to create a stratified split. When you are done, there should be 4,000 True values and 4,000 False values in the training data and 1,000 of each in the testing data. Print the counts to verify.
Student Solution
End of explanation
"""
X_train.shape, y_train.shape
"""
Explanation: Examining The Split Data
We can now verify that we have 80% of the data in training...
End of explanation
"""
X_test.shape, y_test.shape
"""
Explanation: And 20% in testing.
End of explanation
"""
y_train.describe()
"""
Explanation: Let's look at the training data and see if it stratified correctly.
End of explanation
"""
y_test.describe()
"""
Explanation: From this output we can see that there are 8,000 pieces of data with 2 unique values. The top value is True, and it occurs 4,000 times. That would leave us with 4,000 other values that are False.
We can do the same for the y_test data.
End of explanation
"""
y_test.groupby(by=y_test).count()
"""
Explanation: Another alternative is to use groupby on the series. Notice that the by argument contains the series once again and not a column name.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(random_state=2020)
model.fit(X_train, y_train)
"""
Explanation: Create and Train the Model
It is finally time to build and train our model. As a reminder, we are using sklearn.linear_model.LogisticRegression.
First, we'll build a baseline model with default arguments, and see how well it does. To build the model we import LogisticRegression, create a class instance, and then fit the model.
End of explanation
"""
predictions = model.predict(X_test)
"""
Explanation: Measure Model Performance
We now have a model ready to use to make predictions. Let's first make predictions on the test data that we held out of our training set and see how well we did.
The first step is to actually make the predictions.
End of explanation
"""
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: ', round(accuracy_score(predictions, y_test), 3))
print('Precision: ', round(precision_score(predictions, y_test), 3))
print('Recall: ', round(recall_score(predictions, y_test), 3))
print('F1: ', round(f1_score(predictions, y_test), 3))
"""
Explanation: Now we can use metrics functions from scikit-learn to see how well our model performed. We'll check the accuracy, precision, recall, and F1 scores.
End of explanation
"""
from sklearn.metrics import confusion_matrix
tn, fp, fn, tp = confusion_matrix(y_test, predictions).ravel()
print(f'True Positive: {tp}\nTrue Negative: {tn}\nFalse Positive: {fp}\nFalse Negative: {fn}')
"""
Explanation: Numbers for most of the metrics are above 90%, which is better than Cindy was sorting!
Let's see how this looks in a confusion matrix.
End of explanation
"""
import matplotlib.pyplot as plt

from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
scores = model.decision_function(X_test)
fpr, tpr, _ = roc_curve(y_test, scores, pos_label=True)
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.plot(fpr, tpr)
plt.show()
"""
Explanation: We have just under 100 falsely identified fruit. There are about twice as many false negatives as there are false positives. Let's take a few minutes to think about what this confusion matrix means.
Exercise 8: Interpreting a Confusion Matrix
In the text cell below, explain what a false positive and false negative represent in our dataset: which is an orange classified as a grapefruit and which is a grapefruit classified as an orange?
Student Solution
Please Put Your Answer Here
ROC Curve
We can visualize this in another way using the Receiver-Operator Curve. This graph plots the true positive rate on the y-axis against the false positive rate on the x-axis.
End of explanation
"""
import matplotlib.pyplot as plt

from sklearn.metrics import precision_recall_curve
scores = model.decision_function(X_test)
precision, recall, _ = precision_recall_curve(y_test, scores)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.plot(recall, precision)
plt.show()
"""
Explanation: We can see that there is a steep increase in false positives as the true positive rate crosses into the 90% range.
Precision Recall Curve
We can also get a feel for how precision and recall relate for this model by plotting the precision recall curve.
End of explanation
"""
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split, GridSearchCV
citrus_df = pd.read_csv('citrus.csv', header=0)
citrus_df['is_orange'] = citrus_df['name'].apply(lambda name: name == 'orange')
target_column = 'is_orange'
feature_columns = ['diameter', 'weight', 'red', 'green', 'blue']
X_train, X_validate, y_train, y_validate = train_test_split(
citrus_df[feature_columns],
citrus_df[target_column],
test_size=0.2,
random_state=42,
shuffle=True,
stratify=citrus_df[target_column])
model = LogisticRegression(
random_state=2020,
)
search = GridSearchCV(model, {
# Your Solution Goes Here
})
search.fit(X_train, y_train)
print(search.best_estimator_)
"""
Explanation: This shows the balance between precision and recall as the model adjusts classification thresholds.
Improving Our Model
Our initial model was actually pretty good. But can it be even better?
In the next exercise we'll attempt to improve our model by exploring hyperparameters and manipulating features.
Exercise 9: Using GridSearchCV
We will now experiment with different hyperparameters to see if we can tune the model to increase our scores. To do this we will use the GridSearchCV class to tune hyperparameters of the scikit-learn LogisticRegressor.
GridSearchCV is a class used to test different hyperparameter combinations for a model. The search accepts a dictionary whose keys are model parameter names and whose values are lists of candidate settings to try; to hold a parameter constant during the search, give it a one-element list.
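For instance, a grid might look like the sketch below. The estimator, the toy data, and the candidate values here are arbitrary stand-ins chosen to illustrate the dictionary's shape, not a solution to the exercise:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy data, just to make the sketch runnable.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

search = GridSearchCV(
    LogisticRegression(max_iter=500),
    {
        'C': [0.1, 1.0, 10.0],     # three candidates: searched over
        'fit_intercept': [True],   # one-element list: held constant
    },
    cv=3)
search.fit(X, y)
print(search.best_params_)
```

GridSearchCV fits the model once per combination per cross-validation fold, then exposes the winning settings via best_params_ and the refit model via best_estimator_.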
Question 1: Performing the Search
Below is some code that imports the necessary functions and classes and sets up a logistic regression model for grid search. Add code to the grid search to test different hyperparameters such as tol, C, solver, and max_iter.
The best estimator will be displayed after running the code block.
Student Solution
End of explanation
"""
# Your Solution Goes Here
"""
Explanation: Question 2: Validate the Model
Now that we have found a model that scored the highest in a cross-validation grid search, let's validate the model to see if it generalizes well on our validation data.
We held out validation data in the X_validate and y_validate variables. Use that data to calculate the accuracy, precision, recall, and F1 scores for the model.
Student Solution
End of explanation
"""
# Your code goes here
"""
Explanation: Question 3: Relative Model Quality
Now that we have the scores for our model on our validation set, is the version found by grid search notably better? Discuss the difference in scores, if any, between our base model and the model selected by grid search.
Student Solution
Please Put Your Answer Here
Exercise 10: Final Model Assessment
Given our model performance, is this machine learning model a good fit for the problem?
Student Solution
Please Put Your Answer Here
Challenge
Question 1
Normalization and standardization of data are not strictly required for performing logistic regression. They are, however, suggested in some cases. Research reasons why you might want (or not want) to normalize or standardize your input data to a logistic regression.
Explain your findings and link to any relevant articles.
Student Solution
Please Put Your Answer Here
Question 2
Use the sklearn.preprocessing.StandardScaler to scale the feature data before training a logistic model on our oranges dataset. Use sklearn.model_selection.GridSearchCV to iterate through hyperparameters to find an optimal model.
Student Solution
End of explanation
"""
|
KIPAC/StatisticalMethods | tutorials/agn_photometry_metro.ipynb | gpl-2.0 | exec(open('tbc.py').read()) # define TBC and TBC_above
import astropy.io.fits as pyfits
import numpy as np
import matplotlib.pyplot as plt
from io import StringIO # StringIO behaves like a file object
import scipy.stats as st
%matplotlib inline
import corner
import incredible as cr
"""
Explanation: Tutorial: AGN Photometry with Metropolis Sampling
Note: this tutorial follows AGN Photometry on a Grid. Along with AGN Photometry with Gibbs Sampling, it should be done before MCMC Diagnostics.
Having laboriously done Baysian inference on a grid to fit an AGN source to X-ray data, we will now turn to solving the same problem using the Metropolis sampling method of MCMC.
End of explanation
"""
# get the Image object defined in the "X-ray image introduction" notebook, for convenience
from xray_image import Image
TBC() # datadir = '../ignore/' # or whatever - path to where you put the downloaded files
imagefile = datadir + 'P0098010101M2U009IMAGE_3000.FTZ'
expmapfile = datadir + 'P0098010101M2U009EXPMAP3000.FTZ'
imfits = pyfits.open(imagefile)
exfits = pyfits.open(expmapfile)
im = imfits[0].data
ex = exfits[0].data
orig = Image(im, ex)
x0 = 417
y0 = 209
stampwid = 25
stamp = orig.cutout(x0-stampwid, x0+stampwid, y0-stampwid, y0+stampwid)
plt.rcParams['figure.figsize'] = (10.0, 10.0)
stamp.display(log_image=False)
"""
Explanation: Once again, we will read in the X-ray image data, and extract a small image around an AGN that we wish to study.
End of explanation
"""
params = {'x0':x0, 'y0':y0, 'lnF0':-5.0, 'b':1e-6, 'sigma':1.25}
"""
Explanation: Model implementation
This time we're going to do a fit using the Metropolis algorithm. This doesn't require any of the restrictive assumptions that were needed to demonstrate conjugate Gibbs sampling, so let's go straight to fitting the whole model from the Grid notebook on this subimage.
End of explanation
"""
def log_prior(x0, y0, lnF0, b, sigma):
TBC()
TBC_above()
"""
Explanation: As before, we will assume uniform priors on all parameters as defined below, for simplicity. Since we're not confined to a grid, also include reasonable, wide boundaries (beyond the physical requirements $b \geq 0$ and $\sigma > 0$). You can define a hyperparams dictionary to hold the endpoints, or simply include them directly in the log_prior function, below.
End of explanation
"""
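As an aside, a minimal sketch of what the filled-in log_prior could look like. The specific endpoints below are illustrative guesses loosely centered on the cutout, not values prescribed by this notebook; you should choose your own.

```python
import numpy as np

# Sketch only: uniform priors inside a wide box. The bounds are assumptions,
# not values from this notebook.
def log_prior(x0, y0, lnF0, b, sigma):
    if (392.0 < x0 < 442.0 and 184.0 < y0 < 234.0 and
            -10.0 < lnF0 < 0.0 and 0.0 <= b < 1e-4 and 0.0 < sigma < 10.0):
        return 0.0   # constant log-density inside the box (up to normalization)
    return -np.inf   # zero probability outside
```

Returning 0.0 rather than the properly normalized constant is fine here, since Metropolis only ever uses differences of log-posterior values.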
def log_likelihood(data, x0, y0, lnF0, b, sigma):
TBC()
TBC_above()
"""
Explanation: The log_likelihood function can very likely be recycled from your agn_photometry_grid notebook without changes.
End of explanation
"""
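For reference, a hedged sketch of the likelihood for the Gaussian-source-plus-flat-background model. It assumes the Image object exposes integer counts (im), an exposure map (ex), and pixel-coordinate grids (imx, imy), as in the Grid notebook; those attribute names are assumptions here.

```python
import numpy as np
import scipy.stats as st

# Sketch only: `data.im`, `data.ex`, `data.imx`, `data.imy` are assumed
# attribute names for counts, exposure, and pixel-coordinate grids.
def log_likelihood(data, x0, y0, lnF0, b, sigma):
    F0 = np.exp(lnF0)
    rsq = (data.imx - x0)**2 + (data.imy - y0)**2
    # expected counts: flat background plus a symmetric Gaussian source
    mu = data.ex * (b + F0 * np.exp(-0.5 * rsq / sigma**2) / (2.0 * np.pi * sigma**2))
    return np.sum(st.poisson.logpmf(data.im, mu))
```

The Poisson logpmf handles the counting statistics of the detector, so no Gaussian approximation is needed even at low counts.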
def log_posterior(data, **params):
lnp = log_prior(**params)
if np.isfinite(lnp):
lnp += log_likelihood(data, **params)
return lnp
"""
Explanation: And here is the free log-posterior function again:
End of explanation
"""
print( log_prior(**params) )
print( log_likelihood(stamp, **params) )
print( log_posterior(stamp, **params) )
"""
Explanation: As always, let's check that they return finite values without obvious bugs.
End of explanation
"""
TBC()
# proposal_distribution = {'x0':st.norm(... ,
# 'y0':st.norm(... ,
# 'lnF0':st.norm(... ,
# 'b':st.norm(... ,
# 'sigma':st.norm(... )}
"""
Explanation: Sampler implementation
Next, we need a proposal distribution. I'll use a multivariate Gaussian centered on the current position. This is translationally invariant, so later on we can use the simple Metropolis acceptance rule instead of the slightly more complex Metropolis-Hastings rule. A Gaussian isn't necessarily the best choice in general, since the most likely proposals are very small steps, but it will do for the moment.
To further keep it simple, let's make the proposal independent in each parameter (a diagonal covariance matrix for the 5-dimensional Gaussian). Similarly to the grid method, you'll want to guess the appropriate order of magnitude for steps in each parameter, which is the same order as the width of the posterior, and you may need to return to this point to adjust them after seeing the performance.
Since we're assuming a diagonal covariance, let's go ahead and just represent the proposal distribution as 5 univariate Gaussians, as below.
End of explanation
"""
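One possible starting point for the proposal widths is sketched below. These numbers are order-of-magnitude guesses for the posterior scale of each parameter, not tuned values; expect to revise them after a test run.

```python
import scipy.stats as st

# Illustrative, zero-mean Gaussian step distributions; the scales are guesses.
proposal_distribution = {'x0': st.norm(scale=0.1),
                         'y0': st.norm(scale=0.1),
                         'lnF0': st.norm(scale=0.05),
                         'b': st.norm(scale=1e-7),
                         'sigma': st.norm(scale=0.05)}
```

Steps that are too small make the chain crawl; steps that are too large make almost every proposal get rejected. Either way, the trace plots will tell you.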
def propose(current_params, dists):
"""
current_params: dictionary holding current position in parameter space
dists: dictionary of proposal distributions
Return value: a new dictionary holding the proposed destination in parameter space
"""
TBC()
TBC_above()
"""
Explanation: Next, define a function that returns a proposed point in parameter space, given the current location and the above dictionary of proposal distributions.
Technical note: Remember that, in Python, b = a does not make a copy of a if a is a dictionary (or a numpy array for that matter). Both b and a would point to the same data in memory. The safest/quickest way to get a new dictionary with the same structure as a, whose values can then be overwritten, is b = a.copy().
End of explanation
"""
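A minimal sketch of one way propose could look, assuming each entry of dists is a frozen, zero-mean distribution with an rvs() method (as in the proposal dictionary above):

```python
# Sketch only: draw an independent random step for each parameter.
def propose(current_params, dists):
    trial_params = current_params.copy()  # never mutate the current position
    for name, dist in dists.items():
        trial_params[name] = current_params[name] + dist.rvs()
    return trial_params
```

Note the copy() on the first line, exactly as the technical note above warns.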
propose(params, proposal_distribution)
"""
Explanation: See if it works:
End of explanation
"""
def step(data, current_params, current_lnP, proposal_dists):
"""
data: Image object
current_params: dictionary of parameter values
current_lnP: log-posterior density corresponding to current_params
proposal_dists: dictionary of proposal distributions
Return value: a tuple holding the next parameter dictionary and corresponding log-posterior density
"""
TBC()
# trial_params = ...
# trial_lnP = ...
# if [accept/reject condition]:
# return (trial_params, trial_lnP)
# else:
# return (current_params, current_lnP)
TBC_above()
"""
Explanation: Finally, the sampler itself. Write a function that takes the current parameter values and log-posterior value as input (along with the data and proposal distributions), and returns the next set of parameters values and corresponding log-posterior. These can be identical to the inputs, if the proposal is rejected.
End of explanation
"""
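The heart of step is the Metropolis acceptance rule. Here is a self-contained sketch of just that rule; in the notebook's step function you would first generate the trial position with propose and evaluate its log-posterior before applying it.

```python
import numpy as np

# Metropolis acceptance rule: always accept uphill moves; accept downhill
# moves with probability exp(trial_lnP - current_lnP).
def metropolis_accept(current_params, current_lnP, trial_params, trial_lnP):
    if np.log(np.random.rand()) < trial_lnP - current_lnP:
        return trial_params, trial_lnP
    return current_params, current_lnP
```

Comparing logs of the posterior density rather than the densities themselves avoids numerical underflow when the log-posterior values are very negative.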
step(stamp, params, log_posterior(stamp, **params), proposal_distribution)
"""
Explanation: And, again, make sure it works without crashing:
End of explanation
"""
%%time
nsamples = 10000
current_lnP = log_posterior(stamp, **params)
samples = np.zeros((nsamples, len(params)))
for i in range(samples.shape[0]):
params, current_lnP = step(stamp, params, current_lnP, proposal_distribution)
samples[i,:] = [params['x0'], params['y0'], params['lnF0'], params['b'], params['sigma']]
"""
Explanation: Results
Assuming everything above looks ok, let's run a test chain.
End of explanation
"""
param_labels = [r'$x_0$', r'$y_0$', r'$\ln F_0$', r'$b$', r'$\sigma$']
plt.rcParams['figure.figsize'] = (16.0, 12.0)
fig, ax = plt.subplots(len(param_labels), 1);
cr.plot_traces(samples, ax, labels=param_labels)
"""
Explanation: Let's do the most basic (yet still extremely important) visual check to see how our sampler performed, looking at traces of the Markov chain for each parameter. (It's ok if you haven't read the notes on MCMC Diagnostics yet; we will go more in-depth later.) These trace plots show the value of each parameter as a function of iteration.
End of explanation
"""
corner.corner(samples, labels=param_labels);
"""
Explanation: Exactly what you see here will depend on the width of your proposal distributions. But, hopefully, you can see the sequence finding its way to a part of parameter space that it likes, and then continuing to jump around there.
We can use corner to quickly visualize the posterior. This package shows us all the 1D marginalized posteriors (as histograms) and every pair of 2D marginalized posteriors (as a contour plot) in a triangular grid. Note that we should really remove the burn-in phase before doing this, but this is just for a quick look.
End of explanation
"""
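A burn-in cut before calling corner could look like the sketch below. The cut of 1000 steps is an assumed value; choose it by inspecting the trace plots above.

```python
import numpy as np

samples = np.random.rand(10000, 5)  # stand-in for the chain produced above
burn = 1000  # assumed burn-in length; check the trace plots before trusting it
post_burn = samples[burn:, :]
print(post_burn.shape)
```

You would then pass post_burn (instead of samples) to corner.corner.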
%%time
chains = [np.zeros((nsamples, len(params))) for j in range(4)]
for samples in chains:
params = {'x0':st.uniform.rvs()*20.0 + 410.0,
'y0':st.uniform.rvs()*20.0 + 200.0,
'lnF0':st.uniform.rvs()*3.0 -8.0,
'b':st.uniform.rvs()*5e-6 + 3e-6,
'sigma':st.uniform.rvs()*4.9 + 0.1}
current_lnP = log_posterior(stamp, **params)
for i in range(samples.shape[0]):
params, current_lnP = step(stamp, params, current_lnP, proposal_distribution)
samples[i,:] = [params['x0'], params['y0'], params['lnF0'], params['b'], params['sigma']]
"""
Explanation: If you've already worked through the Gibbs sampling notebook, how do these traces compare qualitatively? What role do you think the proposal distribution width plays? Feel free to tweak those widths and re-run (from the same starting point) to see how things change. You can also change the chain length if you think that will help explore the parameter space better.
We weren't overly concerned with the starting point for the test chain above. But, for later notebooks, we'll want to see how multiple, independent chains with different starting points behave when using this method. The cell below will take care of running 4 chains, started at random positions broadly in the vicinity of the input values.
End of explanation
"""
plt.rcParams['figure.figsize'] = (16.0, 12.0)
fig, ax = plt.subplots(len(param_labels), 1);
cr.plot_traces(chains, ax, labels=param_labels, Line2D_kwargs={'markersize':1.0})
"""
Explanation: Now we can look at a more colorful version of the trace plots, showing all of the chains simultaneously:
End of explanation
"""
TBC() # change path below, if desired
#for i,samples in enumerate(chains):
# np.savetxt('../ignore/agn_metro_chain_'+str(i)+'.txt', samples, header='x0 y0 lnF0 b sigma')
"""
Explanation: Save them for later, and we're done!
End of explanation
"""
|
OpenPIV/openpiv-python | openpiv/docs/src/piv_basics.ipynb | gpl-3.0 | # import the standard numerical and plotting packages
import matplotlib.pyplot as plt
import numpy as np
from skimage.io import imread
"""
Explanation: Basics of Particle Image Velocimetry (PIV)
Using open source PIV software, OpenPIV (http://www.openpiv.net) written with the great help of Python, Numpy, Scipy (http://www.scipy.org) and runs online thanks to the great MyBinder project.
What is it about? Particle Image Velocimetry (PIV)
From Wikipedia: "Particle image velocimetry (PIV) is an optical method of flow visualization used in education and research. It is used to obtain instantaneous velocity measurements and related properties in fluids. The fluid is seeded with tracer particles which, for sufficiently small particles, are assumed to faithfully follow the flow dynamics (the degree to which the particles faithfully follow the flow is represented by the Stokes number). The fluid with entrained particles is illuminated so that particles are visible. The motion of the seeding particles is used to calculate speed and direction (the velocity field) of the flow being studied." Read more at http://en.wikipedia.org/wiki/Particle_image_velocimetry.
Particle Image Velocimetry (PIV) is a non-intrusive state-of-the-art technique for flow measurements (e.g. Raffel et al., 2007; Adrian, 1991). The PIV technique is based on image recording of the illuminated flow field using seeding particles. The particles in a plane are illuminated by forming a coherent light sheet. The light scattered by the particles is recorded on a sequence of image frames. The displacement of the particle images between two consecutive light pulses is determined through evaluation of the PIV recordings and by applying a spatial cross-correlation function as implemented by OpenPIV, resulting in a two-dimensional, two-component velocity field.
In practice, small tracer particles, commonly sized on the order of 10-100 microns, are introduced to the flow. The flow is illuminated twice by means of a laser light sheet forming a plane where the camera is focused on. The time delay between the pulses depends on the mean velocity and the image magnification. It is assumed that the tracer particles follow the local flow velocity between the two consecutive illuminations. The light scattered from the tracer particles is imaged via an optical lens on a digital camera.
The images, acquired as pairs corresponding to the two laser pulses, are then correlated using a cross-correlation function and image processing tools in order to provide the velocity field.
The effectiveness of the measurement results strongly depends on a large number of parameters such as particle concentration, size distribution and shape, illumination source, recording device, and synchronization between the illumination, acquisition and recording systems (Huang et al., 1997). An appropriate choice of the different parameters of the cross-correlation analysis (e.g., interrogation area, time between pulses, scaling) will influence the accuracy of the results. Read more about PIV in the following chapters: Gurka and Kit, in Handbook of Environmental Fluid Mechanics, CRC Press, 2014 http://www.crcnetbase.com/doi/abs/10.1201/b13691-39 or Taylor, Gurka and Liberzon "Particle Image Velocimetry for Biological Mechanics" in the Handbook of Imaging in Biological Mechanics, CRC Press, 2015, http://www.crcpress.com/product/isbn/9781466588134.
Open source software to learn the basics
In principle, velocimetry is a method to find out the velocity field of the moving fluid. "Particle" image velocimetry is the way to get the velocity field from images of small particles, called tracers. The basic principle is to use two images of the same particles with a small time delay between them. For that purpose, typically two laser shots are created and two images are taken.
This tutorial will follow the simplest analysis path from the two images to the velocity field and some post-analysis. We will use one of many open source packages, the open source particle image velocimetry http://www.openpiv.net
End of explanation
"""
# load the images
a = imread("../images/B005_1.tif")
b = imread("../images/B005_2.tif")
fig, axs = plt.subplots(1, 2, figsize=(9, 4))
axs[0].imshow(a, cmap=plt.cm.gray)
axs[1].imshow(b, cmap=plt.cm.gray)
plt.show()
"""
Explanation: We have downloaded some sample images from PIV challenge,
see http://www.pivchallenge.org/pub/#b or another standard PIV images project: http://www.piv.jp/down/image05e.html
End of explanation
"""
win_size = 32
a_win = a[:win_size, :win_size].copy()
b_win = b[:win_size, :win_size].copy()
fig, axs = plt.subplots(1, 2, figsize=(9, 4))
axs[0].imshow(a_win, cmap=plt.cm.gray)
axs[1].imshow(b_win, cmap=plt.cm.gray)
plt.show()
"""
Explanation: The two images show the positions of the particles at two different times. We can analyze small regions of interest, called interrogation windows. Typically we can start with a size of 32 x 32 pixels or smaller. Until recently, the fast algorithms used powers of 2, so the historical sizes are always powers of 2: 8, 16, 32, 64, 128, ...
Let's take the first top left windows from each image.
End of explanation
"""
fig = plt.imshow(b_win - a_win, cmap=plt.cm.gray)
plt.title("Without shift")
plt.show()
plt.imshow(b_win - np.roll(a_win, (1, 0), axis=(0, 1)), cmap=plt.cm.gray)
plt.title("Difference when A has been shifted by 1 pixel")
plt.show()
"""
Explanation: We can see that the bright pixels moved between the two frames. We can find out the distance that all the particles moved between frame A and frame B using the principles of least squares or correlations, but let's first try to get it manually.
If we shift the window IA by some pixels to the right and subtract from IB the shifted IA, we shall see how well the shift predicts the real displacement between the two.
End of explanation
"""
def match_template(img, template, maxroll=8):
best_dist = np.inf
best_shift = (-1, -1)
for y in range(maxroll):
for x in range(maxroll):
# calculate Euclidean distance
dist = np.sqrt(np.sum((img - np.roll(template, (y, x), axis=(0, 1))) ** 2))
if dist < best_dist:
best_dist = dist
best_shift = (y, x)
return (best_dist, best_shift)
# let's test that it works by manually rolling (shifting circularly) the same
# image
match_template(np.roll(a_win, (2, 0), axis=(0, 1)), a_win)
# indeed, when we find the correct shift, we get zero distance. It's not so in real images:
best_dist, best_shift = match_template(b_win, a_win)
print(f"{best_dist=}")
print(f"{best_shift=}")
"""
Explanation: Let's try to find the best shift algorithmically: shift one window, calculate the sum of squared differences, and take the shift that gives the minimum.
End of explanation
"""
fig, axs = plt.subplots(1, 2, figsize=(9, 4))
axs[0].imshow(np.roll(a_win, best_shift, axis=(0, 1)), cmap=plt.cm.gray)
axs[1].imshow(b_win, cmap=plt.cm.gray)
plt.show()
"""
Explanation: We can draw this as a vector of velocity
$$
u = \frac{\Delta x \text{ pixels}}{\Delta t} ,\qquad v = \frac{\Delta y \text{ pixels}}{\Delta t}
$$
where $\Delta t$ is the time interval (delay) between the two images (or two laser pulses).
End of explanation
"""
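As a quick numerical illustration of the formula above, here is the conversion from a pixel shift to a physical velocity. The pulse separation and pixel size below are made-up values for illustration, not parameters of this dataset.

```python
dy_pix, dx_pix = 1, 0   # e.g. the best shift found above, in pixels
dt = 1.0e-3             # assumed time between the two frames, in seconds
pixel_size = 1.0e-5     # assumed physical size of one pixel, in metres
u = dx_pix * pixel_size / dt   # horizontal velocity, m/s
v = dy_pix * pixel_size / dt   # vertical velocity, m/s
print(u, v)
```

In a real experiment both dt and the magnification (metres per pixel) come from the acquisition setup and calibration target.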
from scipy.signal import correlate
cross_corr = correlate(b_win - b_win.mean(), a_win - a_win.mean(), method="fft")
# Note that it's approximately twice as large than the original windows, as we
# can shift a_win by a maximum of it's size - 1 horizontally and vertically
# while still maintaining some overlap between the two windows.
print("Size of the correlation map: %d x %d" % cross_corr.shape)
# let's see what the cross-correlation looks like
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
Y, X = np.meshgrid(np.arange(cross_corr.shape[0]), np.arange(cross_corr.shape[1]))
ax.plot_surface(Y, X, cross_corr, cmap=plt.cm.jet, linewidth=0.2)
plt.title("Correlation map — peak is the most probable shift")
plt.show()
# let's see the same correlation map, from above
plt.imshow(cross_corr, cmap=plt.cm.gray)
y, x = np.unravel_index(cross_corr.argmax(), cross_corr.shape)
print(f"{y=}, {x=}")
plt.plot(x, y, "ro")
plt.show()
"""
Explanation: Well, maybe it's not the best match, but it is already better than nothing.
The problem now is that manually shifting each image and repeating the loop many times is impractical. However, based on the same principle of finding the right shift, one can compute it using a different template-matching approach based on the property called cross-correlation (cross because we use two different images). In short, this is an efficient computational algorithm to find the right shift. You can see more details here: http://paulbourke.net/miscellaneous/correlate/.
End of explanation
"""
dy, dx = y - 31, x - 31
print(f"{dy=}, {dx=}")
"""
Explanation: The image of the correlation map shows the same result that we got by manually looping. We need to shift a_win to give the best possible correlation between the two windows. If the best correlation came from no shift, the peak would be at (31, 31), the center of symmetry.
End of explanation
"""
def vel_field(curr_frame, next_frame, win_size):
ys = np.arange(0, curr_frame.shape[0], win_size)
xs = np.arange(0, curr_frame.shape[1], win_size)
dys = np.zeros((len(ys), len(xs)))
dxs = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
for ix, x in enumerate(xs):
int_win = curr_frame[y : y + win_size, x : x + win_size]
search_win = next_frame[y : y + win_size, x : x + win_size]
cross_corr = correlate(
search_win - search_win.mean(), int_win - int_win.mean(), method="fft"
)
dys[iy, ix], dxs[iy, ix] = (
np.unravel_index(np.argmax(cross_corr), cross_corr.shape)
- np.array([win_size, win_size])
+ 1
)
# draw velocity vectors from the center of each window
ys = ys + win_size / 2
xs = xs + win_size / 2
return xs, ys, dxs, dys
xs, ys, dxs, dys = vel_field(a, b, 32)
norm_drs = np.sqrt(dxs ** 2 + dys ** 2)
fig, ax = plt.subplots(figsize=(6, 6))
# we need these flips on y since quiver uses a bottom-left origin, while our
# arrays use a top-right origin
ax.quiver(
xs,
ys[::-1],
dxs,
-dys,
norm_drs,
cmap=plt.cm.plasma,
angles="xy",
scale_units="xy",
scale=0.25,
)
ax.set_aspect("equal")
plt.show()
"""
Explanation: We can get the first velocity field by repeating this analysis for all small windows. Let's take 32 x 32 pixel windows from each image and do the loop:
End of explanation
"""
def vel_field_asymmetric_wins(
curr_frame, next_frame, half_int_win_size, half_search_win_size
):
ys = np.arange(half_int_win_size[0], curr_frame.shape[0], 2 * half_int_win_size[0])
xs = np.arange(half_int_win_size[1], curr_frame.shape[1], 2 * half_int_win_size[1])
dys = np.zeros((len(ys), len(xs)))
dxs = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
for ix, x in enumerate(xs):
int_win = curr_frame[
y - half_int_win_size[0] : y + half_int_win_size[0],
x - half_int_win_size[1] : x + half_int_win_size[1],
]
search_win_y_min = y - half_search_win_size[0]
search_win_y_max = y + half_search_win_size[0]
search_win_x_min = x - half_search_win_size[1]
search_win_x_max = x + half_search_win_size[1]
truncated_search_win = next_frame[
max(0, search_win_y_min) : min(b.shape[0], search_win_y_max),
max(0, search_win_x_min) : min(b.shape[1], search_win_x_max),
]
cross_corr = correlate(
truncated_search_win - np.mean(truncated_search_win),
int_win - np.mean(int_win),
mode="valid",
method="fft",
)
dy, dx = np.unravel_index(np.argmax(cross_corr), cross_corr.shape)
# if the top of the search window got truncated, shift the origin
# up to the top edge of the (non-truncated) search window
if search_win_y_min < 0:
dy += -search_win_y_min
# if the left of the search window got truncated, shift the origin
# over to the left edge of the (non-truncated) search window
if search_win_x_min < 0:
dx += -search_win_x_min
# shift origin to the center of the search window
dy -= half_search_win_size[0] - half_int_win_size[0]
dx -= half_search_win_size[1] - half_int_win_size[1]
dys[iy, ix] = dy
dxs[iy, ix] = dx
return xs, ys, dxs, dys
int_win_size = np.array([32, 32])
print(f"{int_win_size=}")
assert np.all(np.array(a.shape) % int_win_size == 0)
assert np.all(int_win_size % 2 == 0)
half_int_win_size = int_win_size // 2
search_win_size = int_win_size * 2
print(f"{search_win_size=}")
assert np.all(search_win_size % 2 == 0)
half_search_win_size = search_win_size // 2
assert np.all(search_win_size > int_win_size)
print(
"max velocity that can be detected with these window sizes: "
+ f"{half_search_win_size - half_int_win_size}"
)
"""
Explanation: If you've followed along this far, great! Now you understand the basics.
We can also try out a variant of this that uses a search window larger than the interrogation window instead of relying on zero-padding. By avoiding zero-padding around the search window, movement detection should theoretically be a bit better, assuming that the window sizes are chosen well.
End of explanation
"""
xs_asym, ys_asym, dxs_asym, dys_asym = vel_field_asymmetric_wins(
a, b, half_int_win_size, half_search_win_size
)
norm_drs_asym = np.sqrt(dxs_asym ** 2 + dys_asym ** 2)
fig, axs = plt.subplots(1, 2, figsize=(12, 6))
axs[0].quiver(
xs,
ys[::-1],
dxs,
-dys,
norm_drs,
cmap=plt.cm.plasma,
angles="xy",
scale_units="xy",
scale=0.25,
)
axs[1].quiver(
xs_asym,
ys_asym[::-1],
dxs_asym,
-dys_asym,
norm_drs_asym,
cmap=plt.cm.plasma,
angles="xy",
scale_units="xy",
scale=0.25,
)
axs[0].set_title(
f"{win_size} x {win_size} int. win. + "
f"{win_size} x {win_size} 0-padded search win."
)
axs[1].set_title(
f"{int_win_size[0]} x {int_win_size[1]} int. win. + "
f"{search_win_size[0]} x {search_win_size[0]} unpadded search win."
)
ax.set_aspect("equal")
plt.show()
"""
Explanation: Making the search window larger compared to the interrogation window would allow for larger velocities to be detected.
End of explanation
"""
|
massimo-nocentini/simulation-methods | notes/matrices-functions/riordan-arrays-ctors-thesis.ipynb | mit | from sympy import *
from sympy.abc import n, i, N, x, lamda, phi, z, j, r, k, a, t, alpha
from sequences import *
init_printing()
m = 5
d_fn, h_fn = Function('d'), Function('h')
d, h = IndexedBase('d'), IndexedBase('h')
"""
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
Massimo Nocentini<br>
<small>
<br>March 2, 2018: compositional inverse
<br>February 26, 2018: splitting from generic nb
</small>
</div>
</p>
<br>
<br>
<div align="center">
<b>Abstract</b><br>
Ctors for (symbolic) Riordan arrays.
</div>
End of explanation
"""
rows, cols = 5, 5
ctor = lambda i,j: d[i,j]
Matrix(rows, cols, ctor)
"""
Explanation:
End of explanation
"""
d_series = Eq(d_fn(t), 1+sum(d[i]*t**i for i in range(1,m)))
h_series = Eq(h_fn(t), t*(1+sum(h[i]*t**i for i in range(1,m-1)))).expand()
d_series, h_series
R = Matrix(m, m, riordan_matrix_by_convolution(m, d_series, h_series))
R
production_matrix(R) # too verbose to show
d_series = Eq(d_fn(t), 1/(1-t))
h_series = Eq(h_fn(t), t*d_series.rhs)
d_series, h_series
R = Matrix(10, 10, riordan_matrix_by_convolution(10, d_series, h_series))
R
"""
Explanation: By series convolution
End of explanation
"""
dim = 5
a, b, b_bar, c = symbols(r'a b \bar{b} c')
M = Matrix(dim, dim,
riordan_matrix_by_recurrence(
dim, lambda n, k: {(n-1, k-1):a,
(n-1, k): b if k else b_bar,
(n-1, k+1):c}))
M
production_matrix(M)
Msubs = M.subs({a:1, b_bar:b})
Msubs, production_matrix(Msubs)
"""
Explanation: By recurrence relation
End of explanation
"""
A, Z = Function('A'), Function('Z')
A_eq = Eq(A(t), 1 + t)
Z_eq = Eq(Z(t),1)
A_eq, Z_eq
R = Matrix(10, 10, riordan_matrix_by_AZ_sequences(10, (Z_eq, A_eq)))
R, production_matrix(R)
"""
Explanation: By $A, Z$ sequences
$\mathcal{P}$
End of explanation
"""
A = Function('A')
A_ones = Eq(A(t), 1/(1-t))
R = Matrix(10, 10, riordan_matrix_by_AZ_sequences(10, (A_ones, A_ones)))
R, production_matrix(R)
"""
Explanation: $\mathcal{C}$
End of explanation
"""
dim = 5
A = Function('A')
a = IndexedBase('a')
A_gen = Eq(A(t), sum((a[j] if j else 1)*t**j for j in range(dim)))
R = Matrix(dim, dim, riordan_matrix_by_AZ_sequences(dim, (A_gen, A_gen)))
R
z = IndexedBase('z')
A_gen = Eq(A(t), sum((a[j] if j else 1)*t**j for j in range(dim)))
Z_gen = Eq(Z(t), sum((z[j] if j else 1)*t**j for j in range(dim)))
Raz = Matrix(dim, dim, riordan_matrix_by_AZ_sequences(dim, (Z_gen, A_gen)))
Raz
production_matrix(R), production_matrix(Raz)
"""
Explanation: $\mathcal{R}$
End of explanation
"""
H = Function('h')
C_eq = Eq(H(t), (1-sqrt(1-4*t))/2)
C_eq, compositional_inverse(C_eq)
P_eq = Eq(H(t), t/(1-t))
(P_eq,
compositional_inverse(P_eq),
compositional_inverse(compositional_inverse(P_eq), y=t))
"""
Explanation: Compositional inverse
End of explanation
"""
d_series = Eq(d_fn(t), 1/(1-t))
h_series = Eq(h_fn(t), t/(1-t))
P_inverse = group_inverse(d_series, h_series)
P_inverse
R = Matrix(10, 10, riordan_matrix_by_convolution(10, *P_inverse))
R, production_matrix(R)
catalan_term = (1-sqrt(1-4*t))/(2*t)
d_series = Eq(d_fn(t), catalan_term)
h_series = Eq(h_fn(t), t*catalan_term)
C_inverse = group_inverse(d_series, h_series, post=radsimp)
C_inverse
R = Matrix(10, 10, riordan_matrix_by_convolution(10, C_inverse[0], C_inverse[1]))
R
"""
Explanation: Group inverse
End of explanation
"""
d_series = Eq(d_fn(t), 1)
h_series = Eq(h_fn(t), exp(t)-1)
d_series, h_series
R = Matrix(10, 10, riordan_matrix_exponential(
riordan_matrix_by_convolution(10, d_series, h_series)))
R
production_matrix(R), production_matrix(R, exp=True)
inspect(R)
"""
Explanation: Exponential RA
build the triangle of Stirling numbers of the II kind
End of explanation
"""
d_series = Eq(d_fn(t), 1/(1-t))
h_series = Eq(h_fn(t), t/(1-t))
d_series, h_series
R = Matrix(10, 10, riordan_matrix_exponential(
riordan_matrix_by_convolution(10, d_series, h_series)))
R
production_matrix(R), production_matrix(R, exp=True)
inspect(R)
"""
Explanation: https://oeis.org/A021009
End of explanation
"""
|
c22n/ion-channel-ABC | docs/examples/human-atrial/standardised_ina.ipynb | gpl-3.0 | import os, tempfile
import logging
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from ionchannelABC import theoretical_population_size
from ionchannelABC import IonChannelDistance, EfficientMultivariateNormalTransition, IonChannelAcceptor
from ionchannelABC.experiment import setup
from ionchannelABC.visualization import plot_sim_results, plot_kde_matrix_custom
import myokit
from pyabc import Distribution, RV, History, ABCSMC
from pyabc.epsilon import MedianEpsilon
from pyabc.sampler import MulticoreEvalParallelSampler, SingleCoreSampler
from pyabc.populationstrategy import ConstantPopulationSize
"""
Explanation: ABC calibration of $I_\text{Na}$ in standardised model to unified dataset.
End of explanation
"""
from experiments.ina_sakakibara import (sakakibara_act_nyg_adjust,
sakakibara_inact_nyg_adjust,
sakakibara_inact_kin_nyg_adjust,
sakakibara_rec_nyg_adjust)
from experiments.ina_schneider import schneider_taum_nyg_adjust
modelfile = 'models/standardised_ina.mmt'
"""
Explanation: Initial set-up
Load experiments used for unified dataset calibration:
- Steady-state activation [Sakakibara1992]
- Activation time constant [Schneider1994]
- Steady-state inactivation [Sakakibara1992]
- Inactivation time constant (fast+slow) [Sakakibara1992]
- Recovery time constant (fast+slow) [Sakakibara1992]
End of explanation
"""
observations, model, summary_statistics = setup(modelfile,
sakakibara_act_nyg_adjust,
sakakibara_inact_nyg_adjust,
schneider_taum_nyg_adjust,
sakakibara_inact_kin_nyg_adjust,
sakakibara_rec_nyg_adjust)
assert len(observations)==len(summary_statistics(model({})))
g = plot_sim_results(modelfile,
sakakibara_act_nyg_adjust,
schneider_taum_nyg_adjust,
sakakibara_inact_nyg_adjust,
sakakibara_inact_kin_nyg_adjust,
sakakibara_rec_nyg_adjust)
"""
Explanation: Combine model and experiments to produce:
- observations dataframe
- model function to run experiments and return traces
- summary statistics function to accept traces
End of explanation
"""
limits = {'log_ina.A': (0., 1.),
'log_ina.p_1': (1., 5.),
'ina.p_2': (1e-7, 0.2),
'log_ina.p_3': (-3., 1.),
'ina.p_4': (1e-7, 0.4),
'log_ina.p_5': (-1., 3.),
'ina.p_6': (1e-7, 0.2),
'log_ina.p_7': (-4., 0.),
'ina.p_8': (1e-7, 0.2)}
prior = Distribution(**{key: RV("uniform", a, b - a)
for key, (a,b) in limits.items()})
# Test this works correctly with set-up functions
assert len(observations) == len(summary_statistics(model(prior.rvs())))
"""
Explanation: Set up prior ranges for each parameter in the model.
See the modelfile for further information on specific parameters. Prepending `log_' has the effect of setting the parameter in log space.
End of explanation
"""
db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "standardised_ina_unified.db"))
logging.basicConfig()
abc_logger = logging.getLogger('ABC')
abc_logger.setLevel(logging.DEBUG)
eps_logger = logging.getLogger('Epsilon')
eps_logger.setLevel(logging.DEBUG)
pop_size = theoretical_population_size(2, len(limits))
print("Theoretical minimum population size is {} particles".format(pop_size))
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=IonChannelDistance(
exp_id=list(observations.exp_id),
variance=list(observations.variance),
delta=0.05),
population_size=ConstantPopulationSize(1000),
summary_statistics=summary_statistics,
transitions=EfficientMultivariateNormalTransition(),
eps=MedianEpsilon(initial_epsilon=100),
sampler=MulticoreEvalParallelSampler(n_procs=16),
acceptor=IonChannelAcceptor())
obs = observations.to_dict()['y']
obs = {str(k): v for k, v in obs.items()}
abc_id = abc.new(db_path, obs)
history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)
history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)
"""
Explanation: Run ABC calibration
End of explanation
"""
history = History(db_path)
history.all_runs() # most recent is relevant
df, w = history.get_distribution( m=0)
df.describe()
sns.set_context('poster')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14
g = plot_sim_results(modelfile,
sakakibara_act_nyg_adjust,
schneider_taum_nyg_adjust,
sakakibara_inact_nyg_adjust,
sakakibara_inact_kin_nyg_adjust,
sakakibara_rec_nyg_adjust,
df=df, w=w)
plt.tight_layout()
m,_,_ = myokit.load(modelfile)
sns.set_context('paper')
g = plot_kde_matrix_custom(df, w, limits=limits)
plt.tight_layout()
"""
Explanation: Results analysis
End of explanation
"""
|
Neuroglycerin/neukrill-net-work | notebooks/model_run_and_result_analyses/Interactive Pylearn2 - Integrating OpenCV features.ipynb | mit | import pylearn2.space
final_shape = (48,48)
vector_size = 100
input_space = pylearn2.space.CompositeSpace([
pylearn2.space.Conv2DSpace(shape=final_shape,num_channels=1,axes=['b',0,1,'c']),
pylearn2.space.VectorSpace(vector_size)
])
"""
Explanation: Building the model
This time we want to use a CompositeSpace again, but one of the inputs will just be a vector of pre-computed OpenCV features calculated straight from the raw images.
End of explanation
"""
import pylearn2.models.mlp
import pylearn2.blocks
"""
Explanation: Composite Layers
Up until we reach the fully connected layers we want to have a convolutional pipeline and a spacetransformer pipeline that just passes the vector inputs to the MLP layer. To do this, we have to define two of these pipelines inside a CompositeLayer.
End of explanation
"""
convlayer = pylearn2.models.mlp.MLP(
layer_name="convlayer",
batch_size=128,
layers=[pylearn2.models.mlp.ConvRectifiedLinear(
layer_name='h1',
output_channels=48,
irange=0.025,
init_bias=0,
kernel_shape=[8,8],
pool_shape=[2,2],
pool_stride=[2,2],
max_kernel_norm=1.9365
),
pylearn2.models.mlp.ConvRectifiedLinear(
layer_name='h2',
output_channels=96,
irange=0.025,
init_bias=0,
kernel_shape=[5,5],
pool_shape=[2,2],
pool_stride=[2,2],
max_kernel_norm=1.9365
),
pylearn2.models.mlp.ConvRectifiedLinear(
layer_name='h3',
output_channels=128,
irange=0.025,
init_bias=0,
kernel_shape=[3,3],
pool_shape=[2,2],
pool_stride=[2,2],
max_kernel_norm=1.9365
),
pylearn2.models.mlp.ConvRectifiedLinear(
layer_name='h4',
output_channels=128,
irange=0.025,
init_bias=0,
kernel_shape=[3,3],
pool_shape=[2,2],
pool_stride=[2,2],
max_kernel_norm=1.9365
)
]
)
"""
Explanation: First, we have to instantiate the above layers as their own MLP objects. Originally, I thought these should have an input_source to specify the inputs they take, but it turns out nested MLPs do not have input or target sources. Instantiating each:
End of explanation
"""
passthrough = pylearn2.models.mlp.MLP(
layer_name="passthrough",
batch_size=128,
layers=[pylearn2.models.mlp.RectifiedLinear(
dim=256,
max_col_norm=1.9,
layer_name='h1p5',
istdev=0.05,
W_lr_scale=0.25,
b_lr_scale=0.25)])
"""
Explanation: I can't figure out what the layer is called that simply acts as a passthrough dummy, so I'm putting in a single MLP layer in the meantime. This could cause us problems.
End of explanation
"""
inputs_to_layers = {0:[0],1:[1]}
compositelayer = pylearn2.models.mlp.CompositeLayer(
layer_name="parallel_conv",
layers=[convlayer,passthrough],
inputs_to_layers=inputs_to_layers)
"""
Explanation: Then we can initialise our CompositeLayer with the convolutional stack and the passthrough pipeline. We have to define a dictionary mapping each input in the supplied composite space to the component layer that consumes it.
End of explanation
"""
flattened = pylearn2.models.mlp.FlattenerLayer(raw_layer=compositelayer)
"""
Explanation: Unfortunately, it turns out we also have to put a FlattenerLayer around this so that the output of this layer will play nicely with the fully connected layer following this:
End of explanation
"""
n_classes=121
main_mlp = None
main_mlp = pylearn2.models.mlp.MLP(
batch_size=128,
input_space=input_space,
input_source=['img_1','img_2'],
layers=[
flattened,
pylearn2.models.mlp.RectifiedLinear(
dim=1024,
max_col_norm=1.9,
layer_name='h5',
istdev=0.05,
W_lr_scale=0.25,
b_lr_scale=0.25),
pylearn2.models.mlp.Softmax(
n_classes=121,
max_col_norm=1.9365,
layer_name='y',
istdev=0.05,
W_lr_scale=0.25,
b_lr_scale=0.25
)
]
)
"""
Explanation: Now we need to connect this composite layer to the rest of the network, which is a single fully connected layer and the softmax output layer. To do this, we instantiate another MLP object, in which the first layer is this composite layer. This also when we use the composite input space we defined above.
End of explanation
"""
import neukrill_net.image_directory_dataset
import copy
reload(neukrill_net.image_directory_dataset)
class PassthroughIterator(object):
def __init__(self, *args, **keyargs):
keyargs['rng'] = np.random.RandomState(42)
self.iterator_1 = neukrill_net.image_directory_dataset.FlyIterator(*args,**keyargs)
self.cached = np.zeros((keyargs['num_batches']*keyargs['batch_size'],vector_size))
self.cached = self.cached.astype(np.float32)
self.batch_size = keyargs['batch_size']
self.stochastic=False
self.num_examples = self.iterator_1.num_examples
self.index = 0
def __iter__(self):
return self
def next(self):
# get a batch from both iterators:
Xbatch1,ybatch1 = self.iterator_1.next()
vectorbatch = self.cached[self.index*self.batch_size:(self.index+1)*self.batch_size,:]
self.index += 1
return Xbatch1,vectorbatch,ybatch1
class PassthroughDataset(neukrill_net.image_directory_dataset.ListDataset):
def iterator(self, mode=None, batch_size=None, num_batches=None, rng=None,
data_specs=None, return_tuple=False):
if not num_batches:
num_batches = int(len(self.X)/batch_size)
iterator = PassthroughIterator(dataset=self, batch_size=batch_size,
num_batches=num_batches,
final_shape=self.run_settings["final_shape"],
rng=None,mode=mode)
return iterator
import neukrill_net.augment
import os
dataset = PassthroughDataset(
transformer=neukrill_net.augment.RandomAugment(
units='float',
rotate=[0,90,180,270],
rotate_is_resizable=0,
flip=1,
resize=final_shape,
normalise={'global_or_pixel':'global',
'mu': 0.957,
'sigma': 0.142}
),
settings_path=os.path.abspath("settings.json"),
run_settings_path=os.path.abspath("run_settings/replicate_8aug.json"),
force=True
)
"""
Explanation: Creating the dataset
To test this model we need a dataset that's going to supply the input data in the correct format. This should be a tuple of 4D arrays returned by the iterator, containing the input and target batches. We can create this pretty easily by making a Dataset that inherits our old ListDataset and creates an iterator that pairs a FlyIterator with the cached feature matrix.
End of explanation
"""
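The pairing idea can be sketched independently of pylearn2; `PairedIterator` and the toy batch source below are hypothetical stand-ins, not the project's classes:

```python
import numpy as np

class PairedIterator(object):
    """Yield (image_batch, vector_batch, target_batch) tuples by pairing an
    image-batch iterator with rows sliced from a cached feature matrix."""
    def __init__(self, image_batches, cached_vectors, batch_size):
        self.image_batches = iter(image_batches)
        self.cached = cached_vectors
        self.batch_size = batch_size
        self.index = 0

    def __iter__(self):
        return self

    def __next__(self):
        X, y = next(self.image_batches)
        lo = self.index * self.batch_size
        v = self.cached[lo:lo + self.batch_size]
        self.index += 1
        return X, v, y

    next = __next__  # Python 2 style alias, matching the notebook's .next() calls

# Toy usage: two image batches of 4 examples each, 5 cached features per example
batches = [(np.zeros((4, 8, 8, 1)), np.zeros(4)) for _ in range(2)]
cached = np.arange(8 * 5).reshape(8, 5).astype(np.float32)
it = PairedIterator(batches, cached, batch_size=4)
X, v, y = next(it)
print(v.shape)  # (4, 5)
```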
iterator = dataset.iterator(mode='even_shuffled_sequential',batch_size=128)
X1,v,y = iterator.next()
"""
Explanation: Testing this new dataset iterator:
End of explanation
"""
print(v)
print(v.shape)
"""
Explanation: Checking that the vector being produced is the right size:
End of explanation
"""
hl.Image(X1[0].squeeze())
"""
Explanation: Looks good. Image double check:
End of explanation
"""
import pylearn2.training_algorithms.sgd
import pylearn2.costs.mlp.dropout
import pylearn2.costs.cost
import pylearn2.termination_criteria
algorithm = pylearn2.training_algorithms.sgd.SGD(
train_iteration_mode='even_shuffled_sequential',
monitor_iteration_mode='even_sequential',
batch_size=128,
learning_rate=0.1,
learning_rule= pylearn2.training_algorithms.learning_rule.Momentum(
init_momentum=0.5
),
monitoring_dataset={
'train':dataset,
'valid':PassthroughDataset(
transformer=neukrill_net.augment.RandomAugment(
units='float',
rotate=[0,90,180,270],
rotate_is_resizable=0,
flip=1,
resize=final_shape,
normalise={'global_or_pixel':'global',
'mu': 0.957,
'sigma': 0.142}
),
settings_path=os.path.abspath("settings.json"),
run_settings_path=os.path.abspath("run_settings/replicate_8aug.json"),
force=True, training_set_mode='validation'
)
},
cost=pylearn2.costs.cost.SumOfCosts(
costs=[
pylearn2.costs.mlp.dropout.Dropout(
input_include_probs={'h5':0.5},
input_scales={'h5':2.0}),
pylearn2.costs.mlp.WeightDecay(coeffs={'parallel_conv':0.00005,
'h5':0.00005})
]
),
termination_criterion=pylearn2.termination_criteria.EpochCounter(max_epochs=500)
)
import pylearn2.train_extensions
import pylearn2.train_extensions.best_params
extensions = [
pylearn2.training_algorithms.learning_rule.MomentumAdjustor(
start=1,
saturate=200,
final_momentum=0.95
),
pylearn2.training_algorithms.sgd.LinearDecayOverEpoch(
start=1,
saturate=200,
decay_factor=0.025
),
pylearn2.train_extensions.best_params.MonitorBasedSaveBest(
channel_name='valid_y_nll',
save_path='/disk/scratch/neuroglycerin/models/parallel_interactive_opencv.pkl'
),
pylearn2.training_algorithms.sgd.MonitorBasedLRAdjuster(
high_trigger=1.0,
low_trigger=0.999,
grow_amt=1.012,
shrink_amt=0.986,
max_lr=0.4,
min_lr=0.00005,
channel_name='valid_y_nll'
)
]
"""
Explanation: Creating the rest
The rest of the train object stays the same, apart from the save path and the fact that the algorithm will have to load one of these new PassthroughDataset objects for its validation set. So, we're missing:
algorithm - contains validation set, which must be set up as a parallel dataset.
extensions - keeping these the same but changing save paths
It's worth noting that when we define the cost and the weight decay we have to address the new convolutional layers inside the composite layer.
End of explanation
"""
import pylearn2.train
train = pylearn2.train.Train(
dataset=dataset,
model=main_mlp,
algorithm=algorithm,
extensions=extensions,
save_path='/disk/scratch/neuroglycerin/models/parallel_interactive_opencv_recent.pkl',
save_freq=1
)
"""
Explanation: Assembling the full train object
We now have everything we need to make up our train object, so we can put it together and see how well it runs.
End of explanation
"""
train.main_loop()
import pickle
with open('/disk/scratch/s1145806/cached_hlf_train_data.pkl','rb') as f:
cached = pickle.load(f)
cached.shape
cached = np.load('/disk/scratch/s1145806/cached_hlf_train_data.pkl')
cached.shape
cached
import sklearn.externals.joblib
cached=sklearn.externals.joblib.load('/disk/scratch/s1145806/cached_hlf_train_data.pkl')
cached.shape
cached.squeeze().shape
"""
Explanation: We can live with that warning.
Now, attempting to run the model:
End of explanation
"""
|
jacobdein/alpine-soundscapes | Compute location distance error.ipynb | mit | from geo.models import SampleLocation
from database.models import Site
from shapely.geometry import shape, MultiPoint
import geopandas
import pandas
import numpy
from django.db import connection
"""
Explanation: Compute location distance error
This notebook computes the average distance between the generated recording locations and the actual recording locations used in the analysis.
import statements
End of explanation
"""
def get_geodataframe(queryset, modification=None, crs={'+init':'epsg:31254'}):
query = queryset.query.sql_with_params()
if modification:
query = (modification, query[1])
return geopandas.read_postgis(query[0], connection,
geom_col='geometry',
params=query[1],
index_col='id',
crs=crs)
"""
Explanation: function declarations
End of explanation
"""
generated = get_geodataframe(SampleLocation.objects.all())
actual = get_geodataframe(Site.objects.filter(id__lte=30))
"""
Explanation: load locations from database
End of explanation
"""
distance_array = numpy.zeros(30)
distances = pandas.DataFrame({'id': generated.index, 'name': actual.sort_index().name, 'distance': distance_array}).set_index('id')
for i in range(1, 31):
x1 = generated[generated.index == i].geometry.as_matrix()[0].coords.xy[0][0]
x2 = actual[actual.index == i].geometry.as_matrix()[0].coords.xy[0][0]
y1 = generated[generated.index == i].geometry.as_matrix()[0].coords.xy[1][0]
y2 = actual[actual.index == i].geometry.as_matrix()[0].coords.xy[1][0]
distance_array[i - 1] = numpy.sqrt((x2 - x1)**2 + (y2 - y1)**2)
distances['distance'] = distance_array
distances
"""
Explanation: loop through locations and compute distance
End of explanation
"""
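The per-point loop above can also be vectorized. A NumPy sketch with hypothetical coordinate arrays:

```python
import numpy as np

# Hypothetical coordinates of generated vs. actual locations
x1 = np.array([0.0, 3.0, 1.0])
y1 = np.array([0.0, 4.0, 1.0])
x2 = np.array([3.0, 0.0, 1.0])
y2 = np.array([0.0, 0.0, 2.0])

# Element-wise Euclidean distance, replacing the explicit loop
dist = np.hypot(x2 - x1, y2 - y1)
print(dist)         # [3. 5. 1.]
print(dist.mean())  # 3.0
```

geopandas also offers an element-wise, index-aligned `GeoSeries.distance`, so `generated.geometry.distance(actual.geometry)` should give the same numbers directly from the geometries.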
distances.distance.mean().round(0)
distances.distance.std().round(0)
"""
Explanation: compute the distance mean and standard deviation
End of explanation
"""
|
SylvainCorlay/bqplot | examples/Marks/Pyplot/GridHeatMap.ipynb | apache-2.0 | np.random.seed(0)
data = np.random.randn(10, 10)
"""
Explanation: Get Data
End of explanation
"""
from ipywidgets import *
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data)
fig
grid_map.display_format = '.2f'
grid_map.font_style = {'font-size': '16px', 'fill':'blue', 'font-weight': 'bold'}
"""
Explanation: Basic Heat map
End of explanation
"""
axes_options = {'column': {'visible': False}, 'row': {'visible': False}, 'color': {'visible': False}}
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, axes_options=axes_options)
fig
"""
Explanation: Hide tick_labels and color axis using 'axes_options'
End of explanation
"""
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(), 'y': LinearScale(reverse=True)})
## The data along the rows is not uniform. Hence the 5th row(from top) of the map
## is twice the height of the remaining rows.
row_data = np.arange(10)
row_data[5:] = np.arange(6, 11)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data)
fig
print(row_data.shape)
print(column_data.shape)
print(data.shape)
"""
Explanation: Non Uniform Heat map
End of explanation
"""
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(), 'y': LinearScale(reverse=True)})
row_data = np.arange(11)
column_data = np.arange(10, 21)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data)
fig
"""
Explanation: Alignment of the data with respect to the grid
For an N-by-N matrix, N+1 points along the row or the column are assumed to be end points.
End of explanation
"""
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(),
'y': LinearScale(reverse=True, max=15)})
row_data = np.arange(10)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data)
fig
"""
Explanation: By default, for N points along any dimension, data aligns to the start of the rectangles in the grid.
The grid extends infinitely in the other direction; by default, towards the bottom and the right.
End of explanation
"""
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(),
'y': LinearScale(reverse=True, min=-5, max=15)})
row_data = np.arange(10)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data, row_align='end')
fig
"""
Explanation: By changing the row_align and column_align properties, the grid can extend in the opposite direction
End of explanation
"""
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(),
'y': LinearScale(reverse=True, min=-5, max=15)})
row_data = np.arange(9)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data, row_align='end')
fig
"""
Explanation: For N+1 points along any direction, the grid extends infinitely in both directions
End of explanation
"""
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, opacity=0.3, stroke='white', axes_options=axes_options)
fig
"""
Explanation: Changing opacity and stroke
End of explanation
"""
data = np.random.randn(10, 10)
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, interactions={'click':'select'},
selected_style={'stroke': 'blue', 'stroke-width': 3},
axes_options=axes_options)
fig
"""
Explanation: Selections on the grid map
Selection on the GridHeatMap works similar to excel. Clicking on a cell selects the cell, and deselects the previous selection. Using the Ctrl key allows multiple cells to be selected, while the Shift key selects the range from the last cell in the selection to the current cell.
End of explanation
"""
grid_map.selected
"""
Explanation: The selected trait of a GridHeatMap contains a list of lists, with each sub-list containing the row and column index of a selected cell.
End of explanation
"""
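These indices can be used directly to pull the corresponding values out of the data matrix; the selection below is a hypothetical example of what `grid_map.selected` might report:

```python
import numpy as np

data = np.arange(100).reshape(10, 10)
# Hypothetical selection: cells (2, 3) and (5, 1), in the list-of-lists
# format that GridHeatMap.selected uses
selected = [[2, 3], [5, 1]]

rows, cols = np.array(selected).T
values = data[rows, cols]
print(values)  # [23 51]
```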
import numpy as np
from IPython.display import display
np.random.seed(0)
data = np.random.randn(10, 10)
figure = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, interactions={'click': 'select'},
selected_style={'stroke': 'blue', 'stroke-width': 3})
from ipywidgets import Output
out = Output()
@out.capture()
def print_event(self, target):
print(target)
# test
print_event(1, 'test output')
grid_map.on_element_click(print_event)
display(figure)
display(out)
"""
Explanation: Registering on_element_click event handler
End of explanation
"""
|
robertclf/FAFT | FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb | bsd-3-clause | import numpy as np
import ctypes
from ctypes import *
import pycuda.gpuarray as gpuarray
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import math
%matplotlib inline
"""
Explanation: 2D Fast Accurate Fourier Transform
with an extra GPU array for the 33rd set of complex values
End of explanation
"""
gridDIM = 64
size = gridDIM*gridDIM
axes0 = 0
axes1 = 1
makeC2C = 0
makeR2C = 1
makeC2R = 1
axesSplit_0 = 0
axesSplit_1 = 1
m = size
segment_axes0 = 0
segment_axes1 = 0
DIR_BASE = "/home/robert/Documents/new1/FFT/mycode/"
# FAFT
_faft128_2D = ctypes.cdll.LoadLibrary( DIR_BASE+'FAFT128_2D_R2C.so' )
_faft128_2D.FAFT128_2D_R2C.restype = int
_faft128_2D.FAFT128_2D_R2C.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
ctypes.c_float, ctypes.c_float, ctypes.c_int,
ctypes.c_int, ctypes.c_int, ctypes.c_int]
cuda_faft = _faft128_2D.FAFT128_2D_R2C
# Inv FAFT
_ifaft128_2D = ctypes.cdll.LoadLibrary( DIR_BASE+'IFAFT128_2D_C2R.so' )
_ifaft128_2D.IFAFT128_2D_C2R.restype = int
_ifaft128_2D.IFAFT128_2D_C2R.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
ctypes.c_float, ctypes.c_float, ctypes.c_int,
ctypes.c_int, ctypes.c_int, ctypes.c_int]
cuda_ifaft = _ifaft128_2D.IFAFT128_2D_C2R
def fftGaussian(p,sigma):
return np.exp( - p**2*sigma**2/2. )
"""
Explanation: Loading FFT routines
End of explanation
"""
def Gaussian(x,mu,sigma):
return np.exp( - (x-mu)**2/sigma**2/2. )/(sigma*np.sqrt( 2*np.pi ))
def fftGaussian(p,mu,sigma):
return np.exp(-1j*mu*p)*np.exp( - p**2*sigma**2/2. )
# Gaussian parameters
mu_x = 1.5
sigma_x = 1.
mu_y = 1.5
sigma_y = 1.
# Grid parameters
x_amplitude = 7.
p_amplitude = 5. # With the traditional method p amplitude is fixed to: 2 * np.pi /( 2*x_amplitude )
dx = 2*x_amplitude/float(gridDIM) # This is dx in Bailey's paper
dp = 2*p_amplitude/float(gridDIM) # This is gamma in Bailey's paper
delta = dx*dp/(2*np.pi)
x_range = np.linspace( -x_amplitude, x_amplitude-dx, gridDIM)
p_range = np.linspace( -p_amplitude, p_amplitude-dp, gridDIM)
x = x_range[ np.newaxis, : ]
y = x_range[ :, np.newaxis ]
f = Gaussian(x,mu_x,sigma_x)*Gaussian(y,mu_y,sigma_y)
plt.imshow( f, extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] , origin='lower')
axis_font = {'size':'24'}
plt.text( 0., 7.1, '$W$' , **axis_font)
plt.colorbar()
#plt.ylim(0,0.44)
print ' Amplitude x = ',x_amplitude
print ' Amplitude p = ',p_amplitude
print ' '
print 'mu_x = ', mu_x
print 'mu_y = ', mu_y
print 'sigma_x = ', sigma_x
print 'sigma_y = ', sigma_y
print ' '
print 'n = ', x.size
print 'dx = ', dx
print 'dp = ', dp
print ' standard fft dp = ',2 * np.pi /( 2*x_amplitude ) , ' '
print ' '
print 'delta = ', delta
print ' '
print 'The Gaussian extends to the numerical error in single precision:'
print ' min = ', np.min(f)
"""
Explanation: Initializing Data
Gaussian
End of explanation
"""
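As a sanity check that is independent of the CUDA routines, the analytic Gaussian transform pair assumed above can be verified with NumPy's standard FFT; the 1D sketch below uses its own illustrative grid parameters:

```python
import numpy as np

N = 256
L = 10.0                      # half-width of the spatial grid
dx = 2 * L / N
x = -L + dx * np.arange(N)    # symmetric grid from -L to L-dx
sigma = 1.0
f = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Discrete approximation of F(p) = \int f(x) e^{-ipx} dx = exp(-p^2 sigma^2 / 2)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
F_num = dx * np.fft.fft(np.fft.ifftshift(f))
F_exact = np.exp(-p**2 * sigma**2 / 2)
print(np.max(np.abs(F_num - F_exact)))  # near machine precision for this grid
```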
f33 = np.zeros( [1 ,64], dtype = np.complex64 )
# One gpu array.
f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )
f33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )
"""
Explanation: $W$ TRANSFORM FROM AXES-0
After the transform, f_gpu[:32, :] contains real values and f_gpu[32:, :] contains imaginary values. f33_gpu contains the 33rd row of complex values
End of explanation
"""
# Executing FFT
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeR2C, axesSplit_0 )
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeC2C, axesSplit_0 )
plt.imshow(
f_gpu.get()
)
plt.plot( f33_gpu.get().real.reshape(64) )
def ReconstructFFT2D_axesSplit_0(f,f65):
    # use the array passed in, rather than the global f_gpu
    n = f.shape[0]
    freal_half = f[:n/2,:]
    freal = np.append( freal_half , f65.real.reshape(1,f65.size) , axis=0)
    freal = np.append( freal , freal_half[:0:-1,:] ,axis=0)
    fimag_half = f[n/2:,:]
    fimag = np.append( fimag_half , f65.imag.reshape(1,f65.size) ,axis=0)
    fimag = np.append( fimag , -fimag_half[:0:-1,:] ,axis=0)
    return freal + 1j*fimag
plt.imshow(
ReconstructFFT2D_axesSplit_0( f_gpu.get() , f33_gpu.get() ).real/float(size),
extent=[-p_amplitude , p_amplitude-dp, -p_amplitude , p_amplitude-dp] , origin='lower')
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -2, 6.2, '$Re \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(-p_amplitude , p_amplitude-dp)
plt.xlabel('$p_x$',**axis_font)
plt.ylabel('$p_y$',**axis_font)
plt.imshow(
ReconstructFFT2D_axesSplit_0( f_gpu.get() , f33_gpu.get() ).imag/float(size),
extent=[-p_amplitude , p_amplitude-dp, -p_amplitude , p_amplitude-dp] , origin='lower')
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -2, 6.2, '$Imag\, \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(-p_amplitude , p_amplitude-dp)
plt.xlabel('$p_x$',**axis_font)
plt.ylabel('$p_y$',**axis_font)
"""
Explanation: Forward Transform
End of explanation
"""
plt.figure(figsize=(10,10))
plt.plot( p_range,
ReconstructFFT2D_axesSplit_0( f_gpu.get() , f33_gpu.get() )[32,:].real/float(size),
'o-' , label='Real')
plt.plot( p_range,
ReconstructFFT2D_axesSplit_0( f_gpu.get() , f33_gpu.get() )[32,:].imag/float(size),
'ro-' , label='Imag')
plt.xlabel('$p_x$',**axis_font)
plt.plot( p_range , 4*fftGaussian(p_range,mu_x,sigma_x).real ,'bx');
plt.plot( p_range , 4*fftGaussian(p_range,mu_x,sigma_x).imag ,'rx');
plt.legend(loc='upper left')
"""
Explanation: Central Section: $p_y =0$
End of explanation
"""
# Executing iFFT
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2C, axesSplit_0 )
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2R, axesSplit_0 )
plt.imshow( f_gpu.get()/(float(size*size)) ,
extent=[-x_amplitude , x_amplitude-dx, -x_amplitude, x_amplitude-dx], origin='lower' )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -1, 7.2, '$W$', **axis_font )
plt.xlim(-x_amplitude , x_amplitude-dx)
plt.ylim(-x_amplitude , x_amplitude-dx)
plt.xlabel('$x$',**axis_font)
plt.ylabel('$y$',**axis_font)
"""
Explanation: Inverse Transform
End of explanation
"""
f = Gaussian(x,mu_x,sigma_x)*Gaussian(y,mu_y,sigma_y)
plt.imshow( f, extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] , origin='lower')
f33 = np.zeros( [64, 1], dtype = np.complex64 )
# One gpu array.
f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )
f33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )
"""
Explanation: $W$ TRANSFORM FROM AXES-1
After the transform, f_gpu[:, :32] contains real values and f_gpu[:, 32:] contains imaginary values. f33_gpu contains the 33rd column of complex values
End of explanation
"""
# Executing FFT
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeR2C, axesSplit_1 )
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeC2C, axesSplit_1 )
plt.imshow(
f_gpu.get()
)
plt.plot( f33_gpu.get().real.reshape(64) )
def ReconstructFFT2D_axesSplit_1(f,f65):
    # use the array passed in, rather than the global f_gpu,
    # and count columns since the split is along axis 1
    n = f.shape[1]
    freal_half = f[:,:n/2]
    freal = np.append( freal_half , f65.real.reshape(f65.size,1) , axis=1)
    freal = np.append( freal , freal_half[:,:0:-1] , axis=1)
    fimag_half = f[:,n/2:]
    fimag = np.append( fimag_half , f65.imag.reshape(f65.size,1) ,axis=1)
    fimag = np.append( fimag , -fimag_half[:,:0:-1] ,axis=1)
    return freal + 1j*fimag
ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() ).shape
plt.imshow( ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() ).real/float(size),
extent=[-p_amplitude , p_amplitude-dp, -p_amplitude, p_amplitude-dp] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -3.0, 6.2, '$Re \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(-p_amplitude , p_amplitude-dp)
plt.xlabel('$p_x$',**axis_font)
plt.ylabel('$p_y$',**axis_font)
plt.imshow( ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() ).imag/float(size),
extent=[-p_amplitude , p_amplitude-dp, -p_amplitude, p_amplitude-dp] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -3.0, 6.2, '$Imag \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(-p_amplitude , p_amplitude-dp)
plt.xlabel('$p_x$',**axis_font)
plt.ylabel('$p_y$',**axis_font)
plt.figure(figsize=(10,10))
plt.plot( p_range,
ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() )[32,:].real/float(size),
'o-' , label='Real')
plt.plot( p_range,
ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() )[32,:].imag/float(size),
'ro-' , label='Imag')
plt.xlabel('$p_x$',**axis_font)
plt.plot( p_range , 4*fftGaussian(p_range,mu_x,sigma_x).real ,'bx');
plt.plot( p_range , 4*fftGaussian(p_range,mu_x,sigma_x).imag ,'rx');
plt.legend(loc='upper left')
"""
Explanation: Forward Transform
End of explanation
"""
# Executing iFFT
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2C, axesSplit_1 )
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2R, axesSplit_1 )
plt.imshow( f_gpu.get()/float(size)**2 ,
extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] , origin='lower')
axis_font = {'size':'24'}
plt.text( 0., 7.1, '$W$' , **axis_font)
plt.colorbar()
"""
Explanation: Inverse Transform
End of explanation
"""
|
zczapran/datascienceintensive | linear_regression/Mini_Project_Linear_Regression.ipynb | mit | # special IPython command to prepare the notebook for matplotlib and other libraries
%pylab inline
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import sklearn
import seaborn as sns
# special matplotlib argument for improved plots
from matplotlib import rcParams
sns.set_style("whitegrid")
sns.set_context("poster")
"""
Explanation: Regression in Python
This is a very quick run-through of some basic statistical concepts, adapted from Lab 4 in Harvard's CS109 course. Please feel free to try the original lab if you're feeling ambitious :-) The CS109 git repository also has the solutions if you're stuck.
Linear Regression Models
Prediction using linear regression
Some re-sampling methods
Train-Test splits
Cross Validation
Linear regression is used to model and predict continuous outcomes while logistic regression is used to model binary outcomes. We'll see some examples of linear regression as well as Train-test splits.
The packages we'll cover are: statsmodels, seaborn, and scikit-learn. While we don't explicitly teach statsmodels and seaborn in the Springboard workshop, those are great libraries to know.
<img width=600 height=300 src="https://imgs.xkcd.com/comics/sustainable.png"/>
End of explanation
"""
from sklearn.datasets import load_boston
boston = load_boston()
boston.keys()
boston.data.shape
# Print column names
print(boston.feature_names)
# Print description of Boston housing data set
print(boston.DESCR)
"""
Explanation: Part 1: Linear Regression
Purpose of linear regression
<div class="span5 alert alert-info">
<p> Given a dataset $X$ and $Y$, linear regression can be used to: </p>
<ul>
<li> Build a <b>predictive model</b> to predict future values of $X_i$ without a $Y$ value. </li>
<li> Model the <b>strength of the relationship</b> between each dependent variable $X_i$ and $Y$</li>
<ul>
<li> Sometimes not all $X_i$ will have a relationship with $Y$</li>
<li> Need to figure out which $X_i$ contributes most information to determine $Y$ </li>
</ul>
<li>Linear regression is used in so many applications that I won't warrant this with examples. It is in many cases, the first pass prediction algorithm for continuous outcomes. </li>
</ul>
</div>
A brief recap (feel free to skip if you don't care about the math)
Linear Regression is a method to model the relationship between a set of independent variables $X$ (also knowns as explanatory variables, features, predictors) and a dependent variable $Y$. This method assumes the relationship between each predictor $X$ is linearly related to the dependent variable $Y$.
$$ Y = \beta_0 + \beta_1 X + \epsilon$$
where $\epsilon$ is considered as an unobservable random variable that adds noise to the linear relationship. This is the simplest form of linear regression (one variable), we'll call this the simple model.
$\beta_0$ is the intercept of the linear model
Multiple linear regression is when you have more than one independent variable
$X_1$, $X_2$, $X_3$, $\ldots$
$$ Y = \beta_0 + \beta_1 X_1 + \ldots + \beta_p X_p + \epsilon$$
Back to the simple model. The model in linear regression is that the conditional mean of $Y$ given the values of $X$ is expressed as a linear function.
$$ y = f(x) = E(Y | X = x)$$
http://www.learner.org/courses/againstallodds/about/glossary.html
The goal is to estimate the coefficients (e.g. $\beta_0$ and $\beta_1$). We represent the estimates of the coefficients with a "hat" on top of the letter.
$$ \hat{\beta}_0, \hat{\beta}_1 $$
Once you estimate the coefficients $\hat{\beta}_0$ and $\hat{\beta}_1$, you can use these to predict new values of $Y$
$$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1$$
How do you estimate the coefficients?
There are many ways to fit a linear regression model
The method called least squares is one of the most common methods
We will discuss least squares today
Estimating $\hat\beta$: Least squares
Least squares is a method that can estimate the coefficients of a linear model by minimizing the difference between the following:
$$ S = \sum_{i=1}^N r_i^2 = \sum_{i=1}^N (y_i - (\beta_0 + \beta_1 x_i))^2 $$
where $N$ is the number of observations.
We will not go into the mathematical details, but the least squares estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ minimize the sum of the squared residuals $r_i = y_i - (\beta_0 + \beta_1 x_i)$ in the model (i.e. makes the difference between the observed $y_i$ and linear model $\beta_0 + \beta_1 x_i$ as small as possible).
The solution can be written in compact matrix notation as
$$\hat\beta = (X^T X)^{-1}X^T Y$$
We wanted to show you this in case you remember linear algebra, in order for this solution to exist we need $X^T X$ to be invertible. Of course this requires a few extra assumptions, $X$ must be full rank so that $X^T X$ is invertible, etc. This is important for us because this means that having redundant features in our regression models will lead to poorly fitting (and unstable) models. We'll see an implementation of this in the extra linear regression example.
Note: The "hat" means it is an estimate of the coefficient.
Part 2: Boston Housing Data Set
The Boston Housing data set contains information about the housing values in suburbs of Boston. This dataset was originally taken from the StatLib library which is maintained at Carnegie Mellon University and is now available on the UCI Machine Learning Repository.
Load the Boston Housing data set from sklearn
This data set is available in the sklearn python module which is how we will access it today.
End of explanation
"""
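Before loading the data, the closed-form solution above can be sketched directly with NumPy on synthetic data (illustrative only; the notebook itself uses statsmodels and scikit-learn):

```python
import numpy as np

# Synthetic data: y = 2 + 3x + small noise
rng = np.random.RandomState(0)
x = rng.rand(50)
y = 2.0 + 3.0 * x + 0.01 * rng.randn(50)

# Normal equations: beta = (X^T X)^{-1} X^T y,
# solved with np.linalg.solve rather than an explicit inverse
X = np.column_stack([np.ones_like(x), x])   # intercept column + predictor
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)  # approximately [2, 3]
```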
bos = pd.DataFrame(boston.data)
bos.head()
"""
Explanation: Now let's explore the data set itself.
End of explanation
"""
bos.columns = boston.feature_names
bos.head()
"""
Explanation: There are no column names in the DataFrame. Let's add those.
End of explanation
"""
print(boston.target.shape)
bos['PRICE'] = boston.target
bos.head()
"""
Explanation: Now we have a pandas DataFrame called bos containing all the data we want to use to predict Boston Housing prices. Let's create a variable called PRICE which will contain the prices. This information is contained in the target data.
End of explanation
"""
bos.describe()
"""
Explanation: EDA and Summary Statistics
Let's explore this data set. First we use describe() to get basic summary statistics for each of the columns.
End of explanation
"""
plt.scatter(bos.CRIM, bos.PRICE)
plt.xlabel("Per capita crime rate by town (CRIM)")
plt.ylabel("Housing Price")
plt.title("Relationship between CRIM and Price")
"""
Explanation: Scatter plots
Let's look at some scatter plots for three variables: 'CRIM', 'RM' and 'PTRATIO'.
What kind of relationship do you see? e.g. positive, negative? linear? non-linear?
End of explanation
"""
#your turn: scatter plot between *RM* and *PRICE*
plt.scatter(bos.RM, bos.PRICE)
plt.xlabel("Number of rooms per dwelling (RM)")
plt.ylabel("Housing Price")
plt.title("Relationship between RM and Price")
#your turn: scatter plot between *PTRATIO* and *PRICE*
plt.scatter(bos.PTRATIO, bos.PRICE)
plt.xlabel("Pupil-teacher ratio (PTRATIO)")
plt.ylabel("Housing Price")
plt.title("Relationship between PTRATIO and Price")
"""
Explanation: Your turn: Create scatter plots between RM and PRICE, and PTRATIO and PRICE. What do you notice?
End of explanation
"""
#your turn: create some other scatter plots
plt.scatter(bos.DIS, bos.PRICE)
plt.xlabel("Weighted distance to employment centres (DIS)")
plt.ylabel("Housing Price")
plt.title("Relationship between DIS and Price")
"""
Explanation: Your turn: What are some other numeric variables of interest? Plot scatter plots with these variables and PRICE.
End of explanation
"""
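One quick way to shortlist candidate predictors is to rank columns by their absolute correlation with PRICE. The sketch below uses a small synthetic frame so it runs standalone; on the real data, bos.corr()['PRICE'] should give the analogous ranking:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame({'RM': rng.rand(100), 'NOISE': rng.rand(100)})
df['PRICE'] = 3 * df['RM'] + 0.1 * rng.randn(100)

# Absolute correlation of each feature with the target, strongest first
corr = df.corr()['PRICE'].drop('PRICE').abs().sort_values(ascending=False)
print(corr.index[0])  # RM dominates the noise column
```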
sns.regplot(y="PRICE", x="RM", data=bos, fit_reg = True)
"""
Explanation: Scatter Plots using Seaborn
Seaborn is a cool Python plotting library built on top of matplotlib. It provides convenient syntax and shortcuts for many common types of plots, along with better-looking defaults.
We can also use seaborn regplot for the scatterplot above. This provides automatic linear regression fits (useful for data exploration later on). Here's one example below.
End of explanation
"""
plt.hist(bos.CRIM, bins=50)
plt.title("CRIM")
plt.xlabel("Crime rate per capita")
plt.ylabel("Frequency")
plt.show()
"""
Explanation: Histograms
Histograms are a useful way to visually summarize the statistical properties of numeric variables. They can give you an idea of the mean and the spread of the variables as well as outliers.
End of explanation
"""
#your turn
plt.hist(bos.RM, bins=40)
plt.title("RM")
plt.xlabel("Average number of rooms")
plt.ylabel("Frequency")
plt.show()
plt.hist(bos.PTRATIO, bins=20)
plt.title("PTRATIO")
plt.xlabel("Pupil-teacher ratio")
plt.ylabel("Frequency")
plt.show()
"""
Explanation: Your turn: Plot separate histograms and one for RM, one for PTRATIO. Any interesting observations?
End of explanation
"""
# Import regression modules
# ols - stands for Ordinary least squares, we'll use this
import statsmodels.api as sm
from statsmodels.formula.api import ols
# statsmodels works nicely with pandas dataframes
# The thing inside the "quotes" is called a formula, a bit on that below
m = ols('PRICE ~ RM',bos).fit()
print(m.summary())
"""
Explanation: Linear regression with Boston housing data example
Here,
$Y$ = boston housing prices (also called "target" data in python)
and
$X$ = all the other features (or independent variables)
which we will use to fit a linear regression model and predict Boston housing prices. We will use the least squares method as the way to estimate the coefficients.
We'll use two ways of fitting a linear regression. We recommend the first but the second is also powerful in its features.
Fitting Linear Regression using statsmodels
Statsmodels is a great Python library for a lot of basic and inferential statistics. It also provides basic regression functions using an R-like syntax, so it's commonly used by statisticians. While we don't cover statsmodels officially in the Data Science Intensive, it's a good library to have in your toolbox. Here's a quick example of what you could do with it.
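Under the hood, the least-squares estimates that ols reports can be reproduced directly from the normal equations, $\hat\beta = (X^TX)^{-1}X^Ty$; a minimal numpy sketch on synthetic data (not the Boston set):

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(100)
y = 2.0 + 3.0 * x + 0.01 * rng.randn(100)   # true intercept 2, slope 3

# Design matrix with an intercept column, then solve (X'X) beta = X'y
X = np.column_stack([np.ones_like(x), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
# beta_hat is very close to [2, 3]
```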
End of explanation
"""
# your turn
plt.scatter(m.fittedvalues, bos.PRICE)
plt.xlabel("Predicted Price")
plt.ylabel("Housing Price")
plt.title("Relationship between Predicted and Actual Price")
"""
Explanation: Interpreting coefficients
There is a ton of information in this output. But we'll concentrate on the coefficient table (middle table). We can interpret the RM coefficient (9.1021) by first noticing that the p-value (under P>|t|) is so small, basically zero. We can interpret the coefficient as follows: if we compare two groups of towns, one where the average number of rooms is say $5$ and the other group is the same except that they all have $6$ rooms, then the average difference in house prices between the two groups is about $9.1$ (in thousands), so about a $\$9,100$ difference. The confidence interval gives us a range of plausible values for this difference, about ($\$8,279, \$9,925$), definitely not chump change.
statsmodels formulas
This formula notation will seem familiar to R users, but will take some getting used to for people coming from other languages or who are new to statistics.
The formula specifies the general structure of a regression call. For statsmodels (ols or logit) calls you need to have a Pandas dataframe with column names that you will add to your formula. In the example below you need a pandas data frame that includes the columns named (Outcome, X1, X2, ...), but you don't need to build a new dataframe for every regression. Use the same dataframe with all these things in it. The structure is very simple:
Outcome ~ X1
But of course we want to be able to handle more complex models; for example, multiple regression is done like this:
Outcome ~ X1 + X2 + X3
This is the very basic structure but it should be enough to get you through the homework. Things can get much more complex, for a quick run-down of further uses see the statsmodels help page.
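To make the multiple-regression formula concrete, here is a hedged sketch on a small synthetic dataframe (the column names Outcome, X1, X2 and the coefficients are made up purely for illustration, assuming statsmodels is installed):

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.RandomState(0)
df = pd.DataFrame({'X1': rng.rand(200), 'X2': rng.rand(200)})
df['Outcome'] = 1.0 + 2.0 * df['X1'] - 3.0 * df['X2'] + 0.01 * rng.randn(200)

# Same formula syntax as above, just with more terms on the right-hand side
fit = ols('Outcome ~ X1 + X2', df).fit()
# fit.params is close to Intercept=1, X1=2, X2=-3
```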
Let's see how our model actually fit our data. We can see below that there is a ceiling effect; we should probably look into that. Also, for large values of $Y$ we get underpredictions; most predictions are below the 45-degree gridlines.
Your turn: Create a scatterplot between the predicted prices, available in m.fittedvalues, and the original prices. How does the plot look?
End of explanation
"""
from sklearn.linear_model import LinearRegression
X = bos.drop('PRICE', axis = 1)
# This creates a LinearRegression object
lm = LinearRegression()
lm
"""
Explanation: Fitting Linear Regression using sklearn
End of explanation
"""
# Look inside lm object
# lm.fit(X=X, y=bos.PRICE)
"""
Explanation: What can you do with a LinearRegression object?
Check out the scikit-learn docs here. We have listed the main functions here.
Main functions | Description
--- | ---
lm.fit() | Fit a linear model
lm.predict() | Predict Y using the linear model with estimated coefficients
lm.score() | Returns the coefficient of determination (R^2). A measure of how well observed outcomes are replicated by the model, as the proportion of total variation of outcomes explained by the model
What output can you get?
End of explanation
"""
# Use all 13 predictors to fit linear regression model
lm.fit(X, bos.PRICE)
"""
Explanation: Output | Description
--- | ---
lm.coef_ | Estimated coefficients
lm.intercept_ | Estimated intercept
Fit a linear model
The lm.fit() function estimates the coefficients of the linear regression using least squares.
End of explanation
"""
print('Estimated intercept coefficient:', lm.intercept_)
print('Number of coefficients:', len(lm.coef_))
# The coefficients
pd.DataFrame(list(zip(X.columns, lm.coef_)), columns = ['features', 'estimatedCoefficients'])
"""
Explanation: Your turn: How would you change the model to not fit an intercept term? Would you recommend not having an intercept?
Estimated intercept and coefficients
Let's look at the estimated coefficients from the linear model using lm.intercept_ and lm.coef_.
After we have fit our linear regression model using the least squares method, we want to see what are the estimates of our coefficients $\beta_0$, $\beta_1$, ..., $\beta_{13}$:
$$ \hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_{13} $$
End of explanation
"""
# first five predicted prices
lm.predict(X)[0:5]
"""
Explanation: Predict Prices
We can calculate the predicted prices ($\hat{Y}_i$) using lm.predict.
$$ \hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_1 + \ldots + \hat{\beta}_{13} X_{13} $$
End of explanation
"""
# your turn
plt.hist(lm.predict(X))
plt.title("Predicted Prices")
plt.xlabel("Predicted Prices")
plt.ylabel("Frequency")
plt.show()
plt.scatter(lm.predict(X), bos.PRICE)
plt.xlabel("Predicted Price")
plt.ylabel("Housing Price")
plt.title("Relationship between Predicted and Actual Price")
"""
Explanation: Your turn:
Histogram: Plot a histogram of all the predicted prices
Scatter Plot: Let's plot the true prices compared to the predicted prices to see where they disagree (we did this with statsmodels before).
End of explanation
"""
print(np.sum((bos.PRICE - lm.predict(X)) ** 2))
"""
Explanation: Residual sum of squares
Let's calculate the residual sum of squares
$$ S = \sum_{i=1}^N r_i^2 = \sum_{i=1}^N (y_i - (\beta_0 + \beta_1 x_i))^2 $$
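Numerically, the residual sum of squares is just a couple of numpy operations; a toy sketch with made-up values:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 8.0])

residuals = y_true - y_pred
rss = np.sum(residuals ** 2)   # 0.25 + 0 + 1 = 1.25
```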
End of explanation
"""
#your turn
mse = ((bos.PRICE - lm.predict(X)) ** 2).mean()
print(mse)
"""
Explanation: Mean squared error
This is simply the mean of the residual sum of squares.
Your turn: Calculate the mean squared error and print it.
End of explanation
"""
lm = LinearRegression()
lm.fit(X[['PTRATIO']], bos.PRICE)
msePTRATIO = np.mean((bos.PRICE - lm.predict(X[['PTRATIO']])) ** 2)
print(msePTRATIO)
"""
Explanation: Relationship between PTRATIO and housing price
Try fitting a linear regression model using only the 'PTRATIO' (pupil-teacher ratio by town)
Calculate the mean squared error.
End of explanation
"""
plt.scatter(bos.PTRATIO, bos.PRICE)
plt.xlabel("Pupil-to-Teacher Ratio (PTRATIO)")
plt.ylabel("Housing Price")
plt.title("Relationship between PTRATIO and Price")
plt.plot(bos.PTRATIO, lm.predict(X[['PTRATIO']]), color='blue', linewidth=3)
plt.show()
"""
Explanation: We can also plot the fitted linear regression line.
End of explanation
"""
lm = LinearRegression()
lm.fit(X[['CRIM', 'RM', 'PTRATIO']], bos.PRICE)
mse2 = np.mean((bos.PRICE - lm.predict(X[['CRIM', 'RM', 'PTRATIO']])) ** 2)
print(mse2)
"""
Explanation: Your turn
Try fitting a linear regression model using three independent variables
'CRIM' (per capita crime rate by town)
'RM' (average number of rooms per dwelling)
'PTRATIO' (pupil-teacher ratio by town)
Calculate the mean squared error.
End of explanation
"""
X_train = X[:-50]
X_test = X[-50:]
Y_train = bos.PRICE[:-50]
Y_test = bos.PRICE[-50:]
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
"""
Explanation: Other important things to think about when fitting a linear regression model
<div class="span5 alert alert-danger">
<ul>
<li>**Linearity**. The dependent variable $Y$ is a linear combination of the regression coefficients and the independent variables $X$. </li>
<li>**Constant standard deviation**. The SD of the dependent variable $Y$ should be constant for different values of X.
<ul>
<li>e.g. PTRATIO
</ul>
</li>
<li> **Normal distribution for errors**. The $\epsilon$ term we discussed at the beginning are assumed to be normally distributed.
$$ \epsilon_i \sim N(0, \sigma^2)$$
Sometimes the distributions of responses $Y$ may not be normally distributed at any given value of $X$. e.g. skewed positively or negatively. </li>
<li> **Independent errors**. The observations are assumed to be obtained independently.
<ul>
<li>e.g. Observations across time may be correlated
</ul>
</li>
</ul>
</div>
Part 3: Training and Test Data sets
Purpose of splitting data into Training/testing sets
<div class="span5 alert alert-info">
<p> Let's stick to the linear regression example: </p>
<ul>
<li> We built our model with the requirement that the model fit the data well. </li>
<li> As a side-effect, the model will fit <b>THIS</b> dataset well. What about new data? </li>
<ul>
<li> We wanted the model for predictions, right?</li>
</ul>
<li> One simple solution, leave out some data (for <b>testing</b>) and <b>train</b> the model on the rest </li>
<li> This also leads directly to the idea of cross-validation, next section. </li>
</ul>
</div>
One way of doing this is to create training and testing data sets manually.
End of explanation
"""
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(
X, bos.PRICE, test_size=0.33, random_state = 5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
"""
Explanation: Another way is to split the data into random train and test subsets using the function train_test_split in sklearn.model_selection. Here's the documentation.
End of explanation
"""
lm = LinearRegression()
lm.fit(X_train, Y_train)# your turn
lm.predict(X_test)
"""
Explanation: Your turn: Let's build a linear regression model using our new training data sets.
Fit a linear regression model to the training set
Predict the output on the test set
End of explanation
"""
print(np.mean((Y_train - lm.predict(X_train)) ** 2))
print(np.mean((Y_test - lm.predict(X_test)) ** 2))
"""
Explanation: Your turn:
Calculate the mean squared error
using just the test data
using just the training data
Are they pretty similar or very different? What does that mean?
End of explanation
"""
plt.scatter(lm.predict(X_train), lm.predict(X_train) - Y_train, c='b', s=40, alpha=0.5)
plt.scatter(lm.predict(X_test), lm.predict(X_test) - Y_test, c='g', s=40)
plt.hlines(y = 0, xmin=0, xmax = 50)
plt.title('Residual Plot using training (blue) and test (green) data')
plt.ylabel('Residuals')
"""
Explanation: Residual plots
End of explanation
"""
jhjungCode/pytorch-tutorial | 05_MNIST.ipynb | mit
%matplotlib inline
"""
Explanation: MNIST Example
Let's walk through the MNIST example. It is in fact almost identical to the basic neural network covered in Chapter 3.
The only differences are that an input DataLoader is used to feed the MNIST dataset, and that training takes noticeably longer because there is much more data.
Since using an input DataLoader was briefly covered in Chapter 4, here we add CUDA GPU support to cut down the training time.
All that is needed is to move the input tensors and the network's parameters to the GPU via the cuda() function:
python
is_cuda = torch.cuda.is_available()
if is_cuda : model.cuda()
if is_cuda : data, target = data.cuda(), target.cuda()
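As a side note, newer PyTorch releases (0.4+) provide a device-agnostic variant of this pattern via torch.device; a small sketch that runs with or without a GPU:

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Tensors (and modules) are moved with .to(device) instead of .cuda()
x = torch.zeros(2, 3).to(device)
```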
End of explanation
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
import numpy as np
is_cuda = torch.cuda.is_available() # cuda사 사용가능시, True
batch_size = 50
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, download=True, transform=transforms.ToTensor()),
batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False, transform=transforms.ToTensor()),
batch_size=1000)
"""
Explanation: 1. Setting up the input DataLoaders
Define a loader for the training data (dataset: MNIST, batch size 50, with shuffling)
Define a loader for the test data (dataset: MNIST, batch size 1000)
End of explanation
"""
class MnistModel(nn.Module):
def __init__(self):
super(MnistModel, self).__init__()
# input is 28x28
# padding=2 for same padding
self.conv1 = nn.Conv2d(1, 32, 5, padding=2)
# feature map size is 14*14 by pooling
# padding=2 for same padding
self.conv2 = nn.Conv2d(32, 64, 5, padding=2)
# feature map size is 7*7 by pooling
self.fc1 = nn.Linear(64*7*7, 1024)
self.fc2 = nn.Linear(1024, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), 2)
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, 64*7*7) # reshape Variable
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x)
model = MnistModel()
if is_cuda : model.cuda()
loss_fn = nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
"""
Explanation: 2. Preliminary setup
* model
* loss
* optimizer
End of explanation
"""
# trainning
model.train()
train_loss = []
train_accu = []
for epoch in range(3):
for i, (image, target) in enumerate(train_loader):
if is_cuda : image, target = image.cuda(), target.cuda()
        image, target = Variable(image), Variable(target) # set input image and target
output = model(image) # model 생성
loss = loss_fn(output, target) #loss 생성
optimizer.zero_grad() # zero_grad
loss.backward() # calc backward grad
optimizer.step() # update parameter
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
if i % 300 == 0:
print(i, loss.data[0])
plt.plot(train_accu)
plt.plot(train_loss)
"""
Explanation: 3. Training loop
 * (create inputs)
 * run the model
 * compute the loss
 * zero the gradients
 * backpropagation
 * optimizer step (update model parameters)
Adding and printing summaries
To keep things simple, we will append the loss and accuracy to lists and plot them later with matplotlib.
All this takes is appending the loss and accuracy to two lists named train_loss and train_accu.
A dedicated tool such as TensorBoard would of course be more convenient, but for a quick check this approach is perfectly adequate.
First declare the empty lists,
python
train_loss = []
train_accu = []
then, in the training or test loop, append the loss and accuracy:
```python
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
```
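The argmax-and-compare logic of those accuracy lines can be illustrated in plain numpy (illustrative values only, not actual model output):

```python
import numpy as np

scores = np.array([[0.1, 2.0, -1.0],
                   [3.0, -1.0, 0.5]])   # a batch of 2 examples, 3 classes
targets = np.array([1, 0])

preds = scores.argmax(axis=1)           # analogous to output.data.max(1)[1]
accuracy = (preds == targets).mean()    # fraction of correct predictions
```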
End of explanation
"""
model.eval()
correct = 0
for image, target in test_loader:
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
prediction = output.data.max(1)[1]
correct += prediction.eq(target.data).sum()
print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / len(test_loader.dataset)))
"""
Explanation: 4. Predict & Evaluate
End of explanation
"""
checkpoint_filename = 'minist.ckpt'
torch.save(model.state_dict(), checkpoint_filename)
"""
Explanation: 5. Save model parameters
Save the trained model's parameters to a file. In the next chapter we will restore the saved parameters.
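For illustration, a minimal save-and-restore round trip of a state_dict, sketched with a throwaway linear layer (the temporary file path is an arbitrary choice, not part of the notebook):

```python
import os
import tempfile

import torch
import torch.nn as nn

layer = nn.Linear(4, 2)
path = os.path.join(tempfile.gettempdir(), 'tiny_layer.ckpt')  # hypothetical path
torch.save(layer.state_dict(), path)

# A fresh module with the same architecture can load the saved parameters
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(path))
```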
End of explanation
"""
model.eval()
image, target = next(iter(test_loader)) # fetch a single batch from test_loader
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
## Convert the images, true labels, and predictions to numpy arrays
images = image.data.cpu().numpy()
cls_true = target.data.cpu().numpy().squeeze()
prediction = output.data.max(1)[1].cpu().numpy().squeeze()
# Find where the predictions differ from the true labels
incorrect = (prediction != cls_true)
# Keep only the misclassified examples
images = images[incorrect]
cls_true = cls_true[incorrect]
prediction = prediction[incorrect]
# Print the error rate
print('error : {:.1%}, number ={:}'.format(incorrect.sum()/len(incorrect), incorrect.sum()))
# Display the images of the misclassified examples
tensorImg = torch.Tensor(images)
plt.imshow(torchvision.utils.make_grid(tensorImg).numpy().transpose((1,2,0)))
plt.show()
# Display the predictions for the misclassified examples
print('prediction :')
pred_resized = np.pad(prediction, (0, 8 - len(prediction)%8), 'constant', constant_values=(0, 0))
print(pred_resized.reshape(-1,8))
print('\n')
# Display the true labels for the misclassified examples
print('True :')
true_resized = np.pad(cls_true, (0, 8 - len(cls_true)%8), 'constant', constant_values=(0, 0))
print(true_resized.reshape(-1,8))
"""
Explanation: 6. Plot images which failed to predict
To display multiple images we could write our own helper function, but here we use the utility that PyTorch provides:
python
torchvision.utils.make_grid(tensor, nrow=8, padding=2)
One caveat with plt.imshow: the array is laid out as color depth x height x width, but imshow expects height x width x color depth, so we apply transpose(1, 2, 0).
End of explanation
"""
PMEAL/OpenPNM | examples/simulations/transient/transient_advection_diffusion.ipynb | mit
from scipy import special
from scipy.optimize import curve_fit
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
import numpy as np
np.random.seed(0)
import matplotlib.pyplot as plt
%matplotlib inline
np.set_printoptions(precision=3)
"""
Explanation: Transient Advection-Diffusion
This example will show how to perform a transient advection-diffusion simulation on a 2D Cubic network.
End of explanation
"""
shape = [40, 40, 1]
pn = op.network.Cubic(shape=shape, spacing=1e-4)
geo = op.geometry.SpheresAndCylinders(network=pn, pores=pn.Ps, throats=pn.Ts)
water = op.phases.Water(network=pn)
phys = op.physics.Standard(network=pn, phase=water, geometry=geo)
"""
Explanation: Generating Network
A 2D 40 X 40 Cubic network is generated with a spacing of $10^{-4}$m, but a 3D network would work as well. The geometry, phase, and physics are also defined as follows.
End of explanation
"""
def effective_pore_volume(target, throat_volume='throat.volume', pore_volume='pore.volume'):
Pvol = geo['pore.volume']
Tvol = geo['throat.volume']
Vtot = Pvol.sum() + Tvol.sum()
np.add.at(Pvol, pn.conns[:, 0], geo['throat.volume']/2)
np.add.at(Pvol, pn.conns[:, 1], geo['throat.volume']/2)
assert np.isclose(Pvol.sum(), Vtot) # Ensure total volume has been added to Pvol
return Pvol
geo.add_model(propname='pore.effective_volume', model=effective_pore_volume)
"""
Explanation: Defining Effective Pore Volume
The accumulation of mass in the network occurs only in the pores, where the concentration is solved. In order for mass to accumulate properly, it is necessary to distribute each throat's volume between its neighboring pores. This creates an effective pore volume. We can define this in a custom pore-scale model, making use of the numpy.add.at function, to add 1/2 the volume of each throat to its neighboring pores.
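The key ingredient is numpy.add.at, which accumulates correctly even when the same pore index appears multiple times in the connection list; a standalone toy illustration (three pores, two throats, made-up volumes, not the network above):

```python
import numpy as np

pore_vol = np.array([1.0, 1.0, 1.0])
throat_vol = np.array([0.2, 0.4])
conns = np.array([[0, 1], [1, 2]])   # throat 0 joins pores 0-1, throat 1 joins pores 1-2

# Add half of each throat's volume to each of its two pores
np.add.at(pore_vol, conns[:, 0], throat_vol / 2)
np.add.at(pore_vol, conns[:, 1], throat_vol / 2)
# pore 1 touches both throats, so it receives 0.1 + 0.2
```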
End of explanation
"""
sf = op.algorithms.StokesFlow(network=pn, phase=water,)
sf.set_value_BC(pores=pn.pores('back'), values=50.0)
sf.set_value_BC(pores=pn.pores('front'), values=0)
sf.run();
"""
Explanation: Perform Stokes flow
The advection diffusion algorithm assumes a velocity field. Therefore, Stokes flow in the pore network is solved. The StokesFlow algorithm is run prior to the AdvectionDiffusion algorithm. For more information there is a separate tutorial on Stokes Flow.
End of explanation
"""
water.update(sf.results())
"""
Explanation: The results obtained from the StokesFlow algorthim must be attached to the water phase.
End of explanation
"""
mod = op.models.physics.ad_dif_conductance.ad_dif
phys.add_model(propname='throat.ad_dif_conductance', model=mod, s_scheme='powerlaw')
"""
Explanation: Add Diffusive Conductance Model
End of explanation
"""
ad = op.algorithms.TransientAdvectionDiffusion(network=pn, phase=water)
"""
Explanation: Define Transient Advection Diffusion
An algorithm for transient advection diffusion is defined here. It is assigned to the network and the phase, and will be able to retrieve all the information that it needs.
End of explanation
"""
inlet = pn.pores('back')
outlet = pn.pores('front')
ad.set_value_BC(pores=inlet, values=1.0)
ad.set_outflow_BC(pores=outlet)
"""
Explanation: The Dirichlet boundary conditions and the inital conditions are next defined as follows. If the inital condition is not defined then it is assumed to be zero, so it is redundunt in this case. The boundary conditions can be defined as value, outflow, and rate.
End of explanation
"""
tspan = (0, 100)
saveat = 5
"""
Explanation: Setup the Transient Algorithm
The settings of the transient algorithm can be updated here. We first define the time span:
End of explanation
"""
ad.settings['pore_volume'] = 'pore.effective_volume'
"""
Explanation: We must also tell the algorithm to use the effective pore volume rather than the default which is just 'pore.volume'
End of explanation
"""
soln = ad.run(x0=0, tspan=tspan, saveat=saveat)
"""
Explanation: The algorthim than can be run, but we must pass the initial conditions (could be a scalar or an array), time span, and optionally the intervals at which the solution is desired to be stored.
End of explanation
"""
print(ad.settings)
"""
Explanation: We can print the algorthims settings as follows:
End of explanation
"""
print(ad)
"""
Explanation: The solution at eveery time step is stored in the algorthim, and can be printed as follows.
End of explanation
"""
fig, ax = plt.subplots(nrows=1, ncols=4, figsize=(12, 4))
t = [10, 20, 30, 40]
for axi, ti in zip(ax, t):
axi.imshow(soln(ti).reshape(shape))
axi.set_title(f"t = {ti}")
"""
Explanation: Visualization using Matplotlib
The pore concentration can be visualized using a 2D heatmap using matplotlib.
End of explanation
"""
q_throat = sf.rate(throats=pn.Ts, mode='single')
L = pn['throat.length']
A = np.pi/4 * pn['throat.diameter']**2
Pe = q_throat * L / (A * water['throat.diffusivity'])
n, bins, patches = plt.hist(Pe, bins=40, edgecolor='k')
plt.xlabel('Peclet Number')
plt.ylabel('Number of Throats')
plt.title(r'Histogram of Peclet Numbers:')
plt.show()
Pe_avg = Pe.mean()
print(f"Average Peclet Number = {Pe_avg:.2f}")
"""
Explanation: Peclet Number
The Peclet number is a dimensionless number defined as the ratio of the rate of advective transport to the rate of diffusive transport. This is often a useful number to know when analyzing advection diffusion problems. It can be calculated using the following equation:
$$Pe_{throat} = \frac{q_{throat}L}{AD_{Ae}}$$
Where $q_{throat}$ is the volumetric flow rate through the throat, $L$ is the length of the throat, $A$ is the cross-sectional area of the throat, and $D_{Ae}$ is the diffusion coefficient. A histogram of the Peclet numbers of all throats is presented below as well.
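For a single throat the formula is a one-liner; a sketch with made-up (but plausible) values, not taken from the network above:

```python
import numpy as np

q = 1e-12   # volumetric flow rate through the throat [m^3/s] (assumed value)
L = 1e-4    # throat length [m]
d = 2e-5    # throat diameter [m]
D = 1e-9    # diffusion coefficient [m^2/s]

A = np.pi / 4 * d ** 2   # circular cross-sectional area
Pe = q * L / (A * D)     # roughly 318 for these values: strongly advective
```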
End of explanation
"""
Ps_front = pn.pores(['front'])
Ts_front = pn.find_neighbor_throats(pores=Ps_front, mode='xor')
steps = tspan[1]/saveat + 1
count = 0
c_avg = []
for ti in soln.t:
c_front = soln(ti)[Ps_front]
q_front = sf.rate(throats=pn.Ts,mode='single')[Ts_front]
c_avg.append((q_front*c_front).sum() / q_front.sum())
fig, ax = plt.subplots()
ax.plot(soln.t, c_avg, "o-")
ax.legend(('simulation', 'fitted'))
ax.set_xlabel('time (s)')
ax.set_ylabel('concentration');
"""
Explanation: Elution Curve
End of explanation
"""
q_throat = sf.rate(throats=pn.Ts, mode='single')
A_throat = pn['throat.cross_sectional_area']
v_throat = q_throat/A_throat
v_pred = sum(q_throat*v_throat)/sum(q_throat)
def elution(step,v,DL):
x = 40*1e-4
el1 = 0.5*(special.erfc((x-step*v)/(2*(DL*step)**(1/2))))
el2 = 0.5*np.exp(v*x/DL)
el3 = special.erfc((x+step*v)/(2*(DL*step)**(1/2)))
return el1+el2*el3
g = [v_pred, 1e-3]
xdata = [float(x) for x in soln.t]
ydata = c_avg
popt, pcov = curve_fit(elution, xdata, ydata, p0=g)
disp_coeff = popt[1]
v_fit = popt[0]
print('Dispersion Coefficient = ', "{0:.4E}".format(disp_coeff), ' m^2/s')
print('v_pred = ', "{0:.4E}".format(v_pred), ' m/s')
print('v_fitted = ', "{0:.4E}".format(v_fit), ' m/s')
el = np.zeros(len(ydata))
for i in range(len(ydata)):
el[i] = elution(xdata[i], popt[0], popt[1])
fig, ax = plt.subplots()
ax.plot(xdata, ydata, label="simulation")
ax.plot(xdata, el, ".", label="fitted")
ax.legend()
ax.set_xlabel('time (s)')
ax.set_ylabel('concentration');
"""
Explanation: Solving for the Dispersion Coefficient
The following equation, given by Fried (1971), is used to solve for the longitudinal dispersion coefficient:
$$\frac{C}{C_{0}} = \frac{1}{2}erfc\Bigl(\frac{x-Ut}{2(D_{L}t)^{\frac{1}{2}}}\Bigr)+\frac{1}{2}exp\Bigl(\frac{Ux}{D_{L}}\Bigl)erfc\Bigr(\frac{x+Ut}{2(D_{L}t)^{\frac{1}{2}}}\Bigr)$$
Where $x$ is the length between the inlet and the outlet, $t$ is the time, $D_{L}$ is the longitudinal dispersion coefficient, $U$ is the average pore velocity, $C_{0}$ is the inlet concentration, and $C$ is the concentration at the given time. Since we defined the inlet concentration as being equal to 1, solving for $C$ is effectively equal to solving for $\frac{C}{C_{0}}$. erfc is the complementary error function, which is imported from scipy.
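The expression is straightforward to evaluate; a sketch with illustrative parameter values (not the fitted ones below):

```python
import numpy as np
from scipy import special

def elution_conc(t, x, v, DL):
    """C/C0 at distance x and time t, for velocity v and dispersion DL."""
    a = 0.5 * special.erfc((x - v * t) / (2.0 * np.sqrt(DL * t)))
    b = 0.5 * np.exp(v * x / DL) * special.erfc((x + v * t) / (2.0 * np.sqrt(DL * t)))
    return a + b

c = elution_conc(t=50.0, x=4e-3, v=1e-4, DL=1e-6)   # a value between 0 and 1
```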
End of explanation
"""
nehal96/Deep-Learning-ND-Exercises | Recurrent Neural Networks/Anna KaRNNa.ipynb | mit
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
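The same encode/decode round trip on a toy string, independent of the book text (a minimal sketch; here the vocabulary is sorted for reproducibility, while the notebook's set ordering is arbitrary):

```python
s = "hello"
vocab = sorted(set(s))                             # ['e', 'h', 'l', 'o']
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))

encoded = [vocab_to_int[c] for c in s]             # [1, 0, 2, 2, 3]
decoded = ''.join(int_to_vocab[i] for i in encoded)
```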
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
chars[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
np.max(chars)+1
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
"""
Explanation: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
train_x, train_y, val_x, val_y = split_data(chars, 10, 50)
train_x.shape
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
train_x[:,:50]
"""
Explanation: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
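How the window slides along the batch matrix, shown on a tiny array (toy sizes, not the training data):

```python
import numpy as np

arr = np.arange(12).reshape(2, 6)   # batch_size=2, slice_size=6
num_steps = 3

batches = [arr[:, b * num_steps:(b + 1) * num_steps]
           for b in range(arr.shape[1] // num_steps)]
# two windows of shape (2, 3): columns 0-2, then columns 3-5
```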
End of explanation
"""
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# One-hot encoding the input and target characters
x_one_hot = tf.one_hot(inputs, num_classes)
y_one_hot = tf.one_hot(targets, num_classes)
### Build the RNN layers
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
### Run the data through the RNN layers
    # This makes a list where each element is one step in the sequence
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one output row for each step for each batch
seq_output = tf.concat(outputs, axis=1)
output = tf.reshape(seq_output, [-1, lstm_size])
# Now connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(num_classes))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and batch
logits = tf.matmul(output, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
preds = tf.nn.softmax(logits, name='predictions')
# Reshape the targets to match the logits
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
cost = tf.reduce_mean(loss)
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
# NOTE: I'm using a namedtuple here because I think they are cool
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: Building the model
Below is a function where I build the graph for the network.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
keep_prob = 0.5
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
                  batch_size=batch_size,
                  num_steps=num_steps,
                  learning_rate=learning_rate,
                  lstm_size=lstm_size,
                  num_layers=num_layers)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')

    n_batches = int(train_x.shape[1]/num_steps)
    iterations = n_batches * epochs
    for e in range(epochs):

        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
            iteration = e*n_batches + b
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
                                                feed_dict=feed)
            loss += batch_loss

            end = time.time()
            print('Epoch {}/{} '.format(e+1, epochs),
                  'Iteration {}/{}'.format(iteration, iterations),
                  'Training loss: {:.4f}'.format(loss/b),
                  '{:.4f} sec/batch'.format((end-start)))

            if (iteration % save_every_n == 0) or (iteration == iterations):
                # Check performance, notice dropout has been set to 1
                val_loss = []
                new_state = sess.run(model.initial_state)
                for x, y in get_batch([val_x, val_y], num_steps):
                    feed = {model.inputs: x,
                            model.targets: y,
                            model.keep_prob: 1.,
                            model.initial_state: new_state}
                    batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
                    val_loss.append(batch_loss)

                print('Validation loss:', np.mean(val_loss),
                      'Saving checkpoint!')
                saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
"""
Explanation: Training
Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.preds, model.final_state],
                                        feed_dict=feed)

        c = pick_top_n(preds, len(vocab))
        samples.append(int_to_vocab[c])

        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.preds, model.final_state],
                                        feed_dict=feed)

            c = pick_top_n(preds, len(vocab))
            samples.append(int_to_vocab[c])

    return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network predicts the next one. We then feed that new character back in to predict the one after it, and keep going to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
checkpoint = "checkpoints/i3560_l512_v1.124.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/fast-and-lean-data-science/fairing_train.ipynb | apache-2.0 | BUCKET = "gs://" # your bucket here
assert re.search(r'gs://.+', BUCKET), 'A GCS bucket is required to store your results.'
"""
Explanation: Authenticate with the docker registry first
bash
gcloud auth configure-docker
If using TPUs please also authorize Cloud TPU to access your project as described here.
Set up your output bucket
End of explanation
"""
!cat Dockerfile
!docker build . -t {base_image}
!docker push {base_image}
"""
Explanation: Build a base image to work with fairing
End of explanation
"""
additional_files = '' # If your code requires additional files, you can specify them here (or include everything in the current folder with glob.glob('./**', recursive=True))
# If your code does not require any dependencies or config changes, you can directly start from an official Tensorflow docker image
#fairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image='gcr.io/deeplearning-platform-release/tf-gpu.1-13')
# base image
fairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image=base_image)
# AI Platform job hardware config
fairing.config.set_deployer('gcp', job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}})
# input and output notebooks
fairing.config.set_preprocessor('full_notebook',
notebook_file="05K_MNIST_TF20Keras_Tensorboard_playground.ipynb",
input_files=additional_files,
output_file=os.path.join(BUCKET, 'fairing-output', 'mnist-001.ipynb'))
# GPU settings for single K80, single p100 respectively
# job_config={'trainingInput': {'scaleTier': 'BASIC_GPU'}}
# job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}}
# These job_config settings for TPUv2
#job_config={'trainingInput': {'scaleTier': 'BASIC_GPU'}}
#job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'n1-standard-8', 'workerType': 'cloud_tpu', 'workerCount': 1,
# 'workerConfig': {'accelerator_config': {'type': 'TPU_V2','count': 8}}}})
# On AI Platform, TPUv3 support is alpha and available to whitelisted customers only
fairing.config.run()
"""
Explanation: Start an AI Platform job
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ncc/cmip6/models/noresm2-mh/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mh', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MH
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
elliotk/twitter_eda | develop/20171010_fastforwardlabs_tweet_counts.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
import tweepy
import numpy as np
import pandas as pd
from collections import Counter
from datetime import datetime
# Turn on retina mode for high-quality inline plot resolution
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('retina')
# Version of Python
import platform
platform.python_version()
# Import Twitter API keys
from credentials import *
# Helper function to connect to Twitter API
def twitter_setup():
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
    api = tweepy.API(auth)
    return api
# Extract Twitter data
extractor = twitter_setup()
# Twitter user
twitter_handle = 'fastforwardlabs'
# Get most recent two hundred tweets
tweets = extractor.user_timeline(screen_name=twitter_handle, count=200)
print('Number of tweets extracted: {}.\n'.format(len(tweets)))
"""
Explanation: Setup
End of explanation
"""
# Inspect attributes of tweepy object
print(dir(tweets[0])) # look at the first element/record
"""
Explanation: Tweet activity
Let's explore counts by hour, day of the week, and weekday versus weekend hourly trends.
End of explanation
"""
# What format is it in? answer: GMT, according to Twitter API
print(tweets[0].created_at)
# Create datetime index: convert to GMT then to Eastern daylight time EDT
tweet_dates = pd.DatetimeIndex([tweet.created_at for tweet in tweets], tz='GMT').tz_convert('US/Eastern')
"""
Explanation: Hmmm, what's this created_at attribute?
End of explanation
"""
# Count the number of tweets per hour
num_per_hour = pd.DataFrame( { 'counts': Counter(tweet_dates.hour) })
# Create hours data frame
hours = pd.DataFrame({'hours': np.arange(24)})
"""
Explanation: Hourly counts:
End of explanation
"""
# Merge data frame objects on common index, peform left outer join and fill NaN with zero-values
hour_counts = pd.merge(hours, num_per_hour, left_index=True, right_index=True, how='left').fillna(0)
hour_counts
"""
Explanation: Because there are hours of the day where there are no tweets, one must explicitly add any zero-count hours to the index.
End of explanation
"""
# Count the number of tweets by day of the week
num_per_day = pd.DataFrame( { 'counts': Counter(tweet_dates.weekday) })
# Create days data frame
days = pd.DataFrame({'day': np.arange(7)})
# Merge data frame objects on common index, perform left outer join and fill NaN with zero-values
daily_counts = pd.merge(days, num_per_day, left_index=True, right_index=True, how='left').fillna(0)
"""
Explanation: Day of the week counts:
End of explanation
"""
# Flag the weekend from weekday tweets
weekend = np.where(tweet_dates.weekday < 5, 'weekday', 'weekend')
# Construct multiply-indexed DataFrame obj indexed by weekday/weekend and by hour
by_time = pd.DataFrame([tweet.created_at for tweet in tweets],
columns=['counts'],
index=tweet_dates).groupby([weekend, tweet_dates.hour]).count()
# Optionally, set the names attribute of the index
by_time.index.names=['daytype', 'hour']
# Show two-dimensional view of multiply-indexed DataFrame
by_time.unstack()
# Merge DataFrame on common index, perform left outer join and fill NaN with zero-values
by_time = pd.merge(hours, by_time.unstack(level=0), left_index=True, right_index=True, how='left').fillna(0)
# Show last five records
by_time.tail()
"""
Explanation: Weekday vs weekend hourly counts:
End of explanation
"""
# Optional: Create xtick labels in Standard am/pm time format
xticks = pd.date_range('00:00', '23:00', freq='H', tz='US/Eastern').map(lambda x: x.strftime('%I %p'))
"""
Explanation: Visualize tweet counts
By hour:
End of explanation
"""
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
# Plot
ax = hour_counts.plot(x='hours', y='counts', kind='line', figsize=(12, 8))
ax.set_xticks(np.arange(24))
#ax.set_xticklabels(xticks, rotation=50)
#ax.set_title('Number of Tweets per hour')
#ax.set_xlabel('Hour')
#ax.set_ylabel('No. of Tweets')
#ax.set_yticklabels(labels=['0 ', '5 ', '10 ', '15 ', '20 ', '25 ', '30 ', '35 ', '40 '])
ax.tick_params(axis='both', which='major', labelsize=14)
ax.axhline(y=0, color='black', linewidth=1.3, alpha=0.7)
ax.set_xlim(left=-1, right=24)
ax.xaxis.label.set_visible(False)
now = datetime.strftime(datetime.now(), '%a, %Y-%b-%d at %I:%M %p EDT')
ax.text(x=-2.25, y=-5.5,
s = u"\u00A9" + 'THE_KLEI {} Source: Twitter, Inc. '.format(now),
fontsize=14, color='#f0f0f0', backgroundcolor='grey')
ax.text(x=-2.35, y=44, s="When does @{} tweet? - time of the day".format(twitter_handle),
fontsize=26, weight='bold', alpha=0.75)
ax.text(x=-2.35, y=42,
s='Number of Tweets per hour based-on 200 most-recent tweets as of {}'.format(datetime.strftime(datetime.now(), '%b %d, %Y')),
fontsize=19, alpha=0.85)
plt.show()
"""
Explanation: Let's see if we can fancy it up a bit by styling the plot after the FiveThirtyEight (538) blog. Note: The following cell disables notebook autoscrolling for long outputs; otherwise, the notebook embeds the plot inside a scrollable cell, which makes it harder to read.
End of explanation
"""
# Plot
daily_counts.index = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun']
daily_counts['counts'].plot(title='Daily tweet counts', figsize=(12, 8), legend=True)
plt.show()
"""
Explanation: By day of the week:
End of explanation
"""
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
# Plot
fig, ax = plt.subplots(2, 1, figsize=(14, 12))
# weekdays
by_time.loc[:, [('counts', 'weekday')]].plot(ax=ax[0], title='Weekdays', kind='line')
# weekends
by_time.loc[:, [('counts', 'weekend')]].plot(ax=ax[1], title='Weekend', kind='line')
ax[0].set_xticks(np.arange(24))
#ax[0].set_xticklabels(xticks, rotation=50)
ax[1].set_xticks(np.arange(24))
#ax[1].set_xticklabels(xticks, rotation=50)
plt.show()
"""
Explanation: By weekday and weekend:
End of explanation
"""
|
davidruffner/cv-people-detector | testWalkerDetection.ipynb | mit | video_capture = cv2.VideoCapture('resources/TestWalker.mp4')
# From https://www.learnopencv.com/how-to-find-frame-rate-or-frames-per-second-fps-in-opencv-python-cpp/
# Find OpenCV version
(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
print(major_ver, minor_ver, subminor_ver)
# With webcam get(CV_CAP_PROP_FPS) does not work.
# Let's see for ourselves.
if int(major_ver) < 3:
    fps = video_capture.get(cv2.cv.CV_CAP_PROP_FPS)
    print("Frames per second using video.get(cv2.cv.CV_CAP_PROP_FPS): {0}".format(fps))
else:
    fps = video_capture.get(cv2.CAP_PROP_FPS)
    print("Frames per second using video.get(cv2.CAP_PROP_FPS) : {0}".format(fps))
# Number of frames to capture
num_frames = 120
print("Capturing {0} frames".format(num_frames))
# Start time
start = time.time()
# Grab a few frames
for i in range(0, num_frames):
    ret, frame = video_capture.read()
# End time
end = time.time()
# Time elapsed
seconds = end - start
print "Time taken : {0} seconds".format(seconds)
# Calculate frames per second
fps = num_frames / seconds;
print "Estimated frames per second : {0}".format(fps);
# cProfile.runctx('video_capture.read()', globals(), locals(), 'profile.prof')
# use snakeviz to read the output of the profiling
"""
Explanation: Walker detection with OpenCV
Open video and get video info
End of explanation
"""
def getSmallGrayFrame(video):
    ret, frame = video.read()
    if not ret:
        return ret, frame
    # Downsample by 4 in each dimension; the negative column step also mirrors the frame
    frameSmall = frame[::4, ::-4]
    gray = cv2.cvtColor(frameSmall, cv2.COLOR_BGR2GRAY)
    return ret, gray
#cv2.startWindowThread()
count = 0
for x in range(200):
    count = count + 1
    print(count)
    ret1, gray1 = getSmallGrayFrame(video_capture)
    ret2, gray2 = getSmallGrayFrame(video_capture)
    diff = cv2.absdiff(gray1, gray2)
    print(np.amax(diff), np.amin(diff))
    print()
    diffThresh = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    erosion = cv2.erode(diffThresh[1], kernel, iterations=1)
    dilation = cv2.dilate(erosion, kernel, iterations=1)
    color1 = cv2.cvtColor(gray1, cv2.COLOR_GRAY2RGB)
    colorDil = cv2.cvtColor(dilation, cv2.COLOR_GRAY2RGB)
    colorDil[:, :, 1:2] = colorDil[:, :, 1:2] * 0  # zero channel 1 (green) so the motion mask tints the overlay
    total = cv2.add(color1, colorDil)
    if not ret1 or not ret2:
        break
    cv2.imshow('Video', total)
    cv2.imwrite('resources/frame{}.png'.format(x), total)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # Need the cv2.waitKey to update the plot
        break
# To close the windows: http://stackoverflow.com/questions/6116564/destroywindow-does-not-close-window-on-mac-using-python-and-opencv#15058451
cv2.waitKey(1000)
cv2.waitKey(1)
cv2.destroyAllWindows()
cv2.waitKey(1)
"""
Explanation: Track walker using difference between frames
Following http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms
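The motion-detection stage above rests on two primitives, cv2.absdiff and a binary threshold (pixels strictly above the threshold become the max value, the rest become 0). A minimal pure-Python sketch of the same operations on small grayscale grids, no OpenCV required:

```python
def absdiff(a, b):
    """Per-pixel absolute difference of two equally sized grayscale images."""
    return [[abs(x - y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def binary_threshold(img, thresh, maxval=255):
    """Pixels strictly above `thresh` become `maxval`, the rest become 0."""
    return [[maxval if px > thresh else 0 for px in row] for row in img]

frame1 = [[10, 200], [30, 40]]
frame2 = [[12, 100], [30, 90]]
mask = binary_threshold(absdiff(frame1, frame2), 15)
```

In the pipeline above, the erosion and dilation steps then remove isolated noisy pixels from this mask.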
End of explanation
"""
|
huggingface/pytorch-transformers | notebooks/02-transformers.ipynb | apache-2.0 | # !pip install transformers
import torch
from transformers import AutoModel, AutoTokenizer, BertTokenizer
torch.set_grad_enabled(False)
# Store the model we want to use
MODEL_NAME = "bert-base-cased"
# We need to create the model and tokenizer
model = AutoModel.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
"""
Explanation: Introduction
The transformers library is an open-source, community-based repository to train, use and share models based on
the Transformer architecture (Vaswani & al., 2017) such as Bert (Devlin & al., 2018),
Roberta (Liu & al., 2019), GPT2 (Radford & al., 2019),
XLNet (Yang & al., 2019), etc.
Along with the models, the library contains multiple variations of each of them for a large variety of
downstream-tasks like Named Entity Recognition (NER), Sentiment Analysis,
Language Modeling, Question Answering and so on.
Before Transformer
Back in 2017, most people using Neural Networks for Natural Language Processing relied on
sequential processing of the input through Recurrent Neural Networks (RNN).
RNNs performed well on a large variety of tasks involving sequential dependency over the input sequence.
However, this sequentially-dependent process had issues modeling very long-range dependencies and
was not well suited for the kind of hardware we're currently leveraging, due to poor parallelization capabilities.
Some extensions were provided by the academic community, such as the Bidirectional RNN (Schuster & Paliwal., 1997, Graves & al., 2005),
which can be seen as a concatenation of two sequential processes, one going forward, the other going backward over the input sequence.
Another was the Attention mechanism, which introduced a good improvement over "raw" RNNs by giving
a learned, weighted importance to each element in the sequence, allowing the model to focus on important elements.
Then comes the Transformer
The Transformers era started with the work of (Vaswani & al., 2017), who
demonstrated the architecture's superiority over Recurrent Neural Networks (RNN)
on translation tasks, but it quickly extended to almost all the tasks where RNNs were State-of-the-Art at that time.
One advantage of the Transformer over its RNN counterpart was its non-sequential attention model. Remember, the RNNs had to
iterate over each element of the input sequence one-by-one and carry an "updatable-state" between each hop. With Transformer, the model is able to look at every position in the sequence, at the same time, in one operation.
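To make the attention idea concrete, here is a minimal, framework-free sketch of scaled dot-product attention (the building block used by the Transformer): each query attends to every key at once, and the output is a weighted sum of the values.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # learned, weighted importance of each position
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Note that all positions are processed in one pass, with no recurrent state carried between steps.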
For a deep-dive into the Transformer architecture, The Annotated Transformer
will drive you along all the details of the paper.
Getting started with transformers
For the rest of this notebook, we will use the BERT (Devlin & al., 2018) architecture, as it's the most simple and there are plenty of content about it
over the internet, it will be easy to dig more over this architecture if you want to.
The transformers library allows you to benefit from large, pretrained language models without requiring a huge and costly computational
infrastructure. Most of the State-of-the-Art models are provided directly by their authors and made available in the library
in PyTorch and TensorFlow in a transparent and interchangeable way.
If you're executing this notebook in Colab, you will need to install the transformers library. You can do so with this command:
End of explanation
"""
tokens_pt = tokenizer("This is an input example", return_tensors="pt")
for key, value in tokens_pt.items():
print("{}:\n\t{}".format(key, value))
"""
Explanation: With only the above two lines of code, you're ready to use a BERT pre-trained model.
The tokenizers map a raw textual input to a sequence of integers
that the model can manipulate. Since we will be using a PyTorch model, we ask the tokenizer to return PyTorch tensors.
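As a toy illustration of the id-mapping step (real BERT tokenizers use a learned WordPiece subword vocabulary, not whitespace splitting, and the vocabulary below is entirely hypothetical):

```python
def encode(text, vocab, unk_id=0):
    """Map each whitespace-separated token to an integer id."""
    return [vocab.get(token, unk_id) for token in text.lower().split()]

toy_vocab = {"this": 1, "is": 2, "an": 3, "input": 4, "example": 5}  # hypothetical vocab
ids = encode("This is an input example", toy_vocab)
```

Real tokenizers also insert special tokens such as [CLS] and [SEP], as the cells that follow show.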
End of explanation
"""
outputs = model(**tokens_pt)
last_hidden_state = outputs.last_hidden_state
pooler_output = outputs.pooler_output
print("Token wise output: {}, Pooled output: {}".format(last_hidden_state.shape, pooler_output.shape))
"""
Explanation: The tokenizer automatically converted our input to all the inputs expected by the model. It generated some additional tensors on top of the IDs:
token_type_ids: This tensor maps every token to its corresponding segment (see below).
attention_mask: This tensor is used to "mask" padded values in a batch of sequence with different lengths (see below).
You can check our glossary for more information about each of those keys.
We can just feed this directly into our model:
End of explanation
"""
# Single segment input
single_seg_input = tokenizer("This is a sample input")
# Multiple segment input
multi_seg_input = tokenizer("This is segment A", "This is segment B")
print("Single segment token (str): {}".format(tokenizer.convert_ids_to_tokens(single_seg_input['input_ids'])))
print("Single segment token (int): {}".format(single_seg_input['input_ids']))
print("Single segment type : {}".format(single_seg_input['token_type_ids']))
# Segments are concatenated in the input to the model.
print()
print("Multi segment token (str): {}".format(tokenizer.convert_ids_to_tokens(multi_seg_input['input_ids'])))
print("Multi segment token (int): {}".format(multi_seg_input['input_ids']))
print("Multi segment type : {}".format(multi_seg_input['token_type_ids']))
# Padding highlight
tokens = tokenizer(
["This is a sample", "This is another longer sample text"],
padding=True # First sentence will have some PADDED tokens to match second sequence length
)
for i in range(2):
print("Tokens (int) : {}".format(tokens['input_ids'][i]))
print("Tokens (str) : {}".format([tokenizer.convert_ids_to_tokens(s) for s in tokens['input_ids'][i]]))
print("Tokens (attn_mask): {}".format(tokens['attention_mask'][i]))
print()
"""
Explanation: As you can see, BERT outputs two tensors:
- One with the generated representation for every token in the input (1, NB_TOKENS, REPRESENTATION_SIZE)
- One with an aggregated representation for the whole input (1, REPRESENTATION_SIZE)
The first, token-based, representation can be leveraged if your task requires to keep the sequence representation and you
want to operate at a token-level. This is particularly useful for Named Entity Recognition and Question-Answering.
The second, aggregated, representation is especially useful if you need to extract the overall context of the sequence and don't
require a fine-grained, token-level representation. This is the case for Sentiment Analysis of the sequence or Information Retrieval.
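When the built-in pooled output does not fit your task, a common alternative is to pool the token-wise output yourself, for example with an attention-mask-aware mean over the real (non-padded) tokens. A framework-agnostic sketch of that idea:

```python
def masked_mean_pool(token_embeddings, attention_mask):
    """Average token vectors where the attention mask is 1 (skip padding)."""
    dim = len(token_embeddings[0])
    totals = [0.0] * dim
    count = 0
    for emb, keep in zip(token_embeddings, attention_mask):
        if keep:
            count += 1
            for j in range(dim):
                totals[j] += emb[j]
    return [t / count for t in totals]

pooled = masked_mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0])
```

This kind of mask-aware mean pooling is a common way to build sentence-level embeddings from token-wise outputs.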
End of explanation
"""
from transformers import TFBertModel, BertModel
# Let's load a BERT model for TensorFlow and PyTorch
model_tf = TFBertModel.from_pretrained('bert-base-cased')
model_pt = BertModel.from_pretrained('bert-base-cased')
# transformers generates a ready to use dictionary with all the required parameters for the specific framework.
input_tf = tokenizer("This is a sample input", return_tensors="tf")
input_pt = tokenizer("This is a sample input", return_tensors="pt")
# Let's compare the outputs
output_tf, output_pt = model_tf(input_tf), model_pt(**input_pt)
# Models outputs 2 values (The value for each tokens, the pooled representation of the input sentence)
# Here we compare the output differences between PyTorch and TensorFlow.
for name in ["last_hidden_state", "pooler_output"]:
print("{} differences: {:.5}".format(name, (output_tf[name].numpy() - output_pt[name].numpy()).sum()))
"""
Explanation: Frameworks interoperability
One of the most powerful features of transformers is its ability to move seamlessly between PyTorch and TensorFlow
without pain for the user.
We provide convenient methods to load TensorFlow pretrained weights inside a PyTorch model and vice versa.
End of explanation
"""
from transformers import DistilBertModel
bert_distil = DistilBertModel.from_pretrained('distilbert-base-cased')
input_pt = tokenizer(
'This is a sample input to demonstrate performance of distiled models especially inference time',
return_tensors="pt"
)
%time _ = bert_distil(input_pt['input_ids'])
%time _ = model_pt(input_pt['input_ids'])
"""
Explanation: Want it lighter? Faster? Let's talk distillation!
One of the main concerns when using these Transformer-based models is the computational power they require. Throughout this notebook we use the BERT model, as it can run on common machines, but that's not the case for all models.
For example, Google recently released T5, an Encoder/Decoder architecture based on the Transformer and available in transformers, with no fewer than 11 billion parameters. Microsoft also entered the game with Turing-NLG, using 17 billion parameters. Models of this kind require tens of gigabytes just to store the weights, and a tremendous compute infrastructure to run, which makes them impractical for most people!
With the goal of making Transformer-based NLP accessible to everyone, we @huggingface developed models that take advantage of a training process called Distillation, which lets us drastically reduce the resources needed to run such models with almost no drop in performance.
Going over the whole Distillation process is out of the scope of this notebook, but if you want more information on the subject you may refer to this Medium article written by my colleague Victor Sanh, author of the DistilBERT paper; you might also want to have a look at the paper directly (Sanh & al., 2019).
Of course, in transformers we have distilled some models and made them available directly in the library!
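As a rough sketch of the core idea (the details differ across papers, and DistilBERT in particular combines several losses), the central distillation term softens both the teacher's and the student's output distributions with a temperature T and pushes the student toward the teacher:

```python
import math

def softmax_t(logits, temperature):
    """Temperature-softened softmax."""
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions, scaled by T^2."""
    p = softmax_t(teacher_logits, temperature)
    q = softmax_t(student_logits, temperature)
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student already matches the teacher and grows as their softened distributions diverge.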
End of explanation
"""
# Let's load German BERT from the Bavarian State Library
de_bert = BertModel.from_pretrained("dbmdz/bert-base-german-cased")
de_tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
de_input = de_tokenizer(
"Hugging Face ist eine französische Firma mit Sitz in New-York.",
return_tensors="pt"
)
print("Tokens (int) : {}".format(de_input['input_ids'].tolist()[0]))
print("Tokens (str) : {}".format([de_tokenizer.convert_ids_to_tokens(s) for s in de_input['input_ids'].tolist()[0]]))
print("Tokens (attn_mask): {}".format(de_input['attention_mask'].tolist()[0]))
print()
outputs_de = de_bert(**de_input)
last_hidden_state_de = outputs_de.last_hidden_state
pooler_output_de = outputs_de.pooler_output
print("Token wise output: {}, Pooled output: {}".format(last_hidden_state_de.shape, pooler_output_de.shape))
"""
Explanation: Community provided models
Last but not least, earlier in this notebook we introduced Hugging Face transformers as a repository for the NLP community to exchange pretrained models. We wanted to highlight this features and all the possibilities it offers for the end-user.
To leverage community pretrained models, just provide the organisation name and the name of the model to from_pretrained and it will do all the magic for you!
We currently have more than 50 models provided by the community, and more are added every day; don't hesitate to give it a try!
End of explanation
"""
|
deepmind/deepmind-research | enformer/enformer-training.ipynb | apache-2.0 | !pip install dm-sonnet tqdm
# Get enformer source code
!wget -q https://raw.githubusercontent.com/deepmind/deepmind-research/master/enformer/attention_module.py
!wget -q https://raw.githubusercontent.com/deepmind/deepmind-research/master/enformer/enformer.py
"""
Explanation: Copyright 2021 DeepMind Technologies Limited
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This colab showcases training of the Enformer model published in
"Effective gene expression prediction from sequence by integrating long-range interactions"
Žiga Avsec, Vikram Agarwal, Daniel Visentin, Joseph R. Ledsam, Agnieszka Grabska-Barwinska, Kyle R. Taylor, Yannis Assael, John Jumper, Pushmeet Kohli, David R. Kelley
Steps
Setup tf.data.Dataset by directly accessing the Basenji2 data on GCS: gs://basenji_barnyard/data
Train the model for a few steps, alternating training on human and mouse data batches
Evaluate the model on human and mouse genomes
Setup
Start the colab kernel with GPU: Runtime -> Change runtime type -> GPU
Install dependencies
End of explanation
"""
import tensorflow as tf
# Make sure the GPU is enabled
assert tf.config.list_physical_devices('GPU'), 'Start the colab kernel with GPU: Runtime -> Change runtime type -> GPU'
# Easier debugging of OOM
%env TF_ENABLE_GPU_GARBAGE_COLLECTION=false
import sonnet as snt
from tqdm import tqdm
from IPython.display import clear_output
import numpy as np
import pandas as pd
import time
import os
assert snt.__version__.startswith('2.0')
tf.__version__
# GPU colab has T4 with 16 GiB of memory
!nvidia-smi
"""
Explanation: Import
End of explanation
"""
import enformer
# @title `get_targets(organism)`
def get_targets(organism):
targets_txt = f'https://raw.githubusercontent.com/calico/basenji/master/manuscripts/cross2020/targets_{organism}.txt'
return pd.read_csv(targets_txt, sep='\t')
# @title `get_dataset(organism, subset, num_threads=8)`
import glob
import json
import functools
def organism_path(organism):
return os.path.join('gs://basenji_barnyard/data', organism)
def get_dataset(organism, subset, num_threads=8):
metadata = get_metadata(organism)
dataset = tf.data.TFRecordDataset(tfrecord_files(organism, subset),
compression_type='ZLIB',
num_parallel_reads=num_threads)
dataset = dataset.map(functools.partial(deserialize, metadata=metadata),
num_parallel_calls=num_threads)
return dataset
def get_metadata(organism):
# Keys:
# num_targets, train_seqs, valid_seqs, test_seqs, seq_length,
# pool_width, crop_bp, target_length
path = os.path.join(organism_path(organism), 'statistics.json')
with tf.io.gfile.GFile(path, 'r') as f:
return json.load(f)
def tfrecord_files(organism, subset):
# Sort the values by int(*).
return sorted(tf.io.gfile.glob(os.path.join(
organism_path(organism), 'tfrecords', f'{subset}-*.tfr'
)), key=lambda x: int(x.split('-')[-1].split('.')[0]))
def deserialize(serialized_example, metadata):
"""Deserialize bytes stored in TFRecordFile."""
feature_map = {
'sequence': tf.io.FixedLenFeature([], tf.string),
'target': tf.io.FixedLenFeature([], tf.string),
}
example = tf.io.parse_example(serialized_example, feature_map)
sequence = tf.io.decode_raw(example['sequence'], tf.bool)
sequence = tf.reshape(sequence, (metadata['seq_length'], 4))
sequence = tf.cast(sequence, tf.float32)
target = tf.io.decode_raw(example['target'], tf.float16)
target = tf.reshape(target,
(metadata['target_length'], metadata['num_targets']))
target = tf.cast(target, tf.float32)
return {'sequence': sequence,
'target': target}
"""
Explanation: Code
End of explanation
"""
df_targets_human = get_targets('human')
df_targets_human.head()
human_dataset = get_dataset('human', 'train').batch(1).repeat()
mouse_dataset = get_dataset('mouse', 'train').batch(1).repeat()
human_mouse_dataset = tf.data.Dataset.zip((human_dataset, mouse_dataset)).prefetch(2)
it = iter(mouse_dataset)
example = next(it)
# Example input
it = iter(human_mouse_dataset)
example = next(it)
for i in range(len(example)):
print(['human', 'mouse'][i])
print({k: (v.shape, v.dtype) for k,v in example[i].items()})
"""
Explanation: Load dataset
End of explanation
"""
def create_step_function(model, optimizer):
@tf.function
def train_step(batch, head, optimizer_clip_norm_global=0.2):
with tf.GradientTape() as tape:
outputs = model(batch['sequence'], is_training=True)[head]
loss = tf.reduce_mean(
tf.keras.losses.poisson(batch['target'], outputs))
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply(gradients, model.trainable_variables)
return loss
return train_step
learning_rate = tf.Variable(0., trainable=False, name='learning_rate')
optimizer = snt.optimizers.Adam(learning_rate=learning_rate)
num_warmup_steps = 5000
target_learning_rate = 0.0005
model = enformer.Enformer(channels=1536 // 4, # Use 4x fewer channels to train faster.
num_heads=8,
num_transformer_layers=11,
pooling_type='max')
train_step = create_step_function(model, optimizer)
# Train the model
steps_per_epoch = 20
num_epochs = 5
data_it = iter(human_mouse_dataset)
global_step = 0
for epoch_i in range(num_epochs):
for i in tqdm(range(steps_per_epoch)):
global_step += 1
if global_step > 1:
learning_rate_frac = tf.math.minimum(
1.0, global_step / tf.math.maximum(1.0, num_warmup_steps))
learning_rate.assign(target_learning_rate * learning_rate_frac)
batch_human, batch_mouse = next(data_it)
loss_human = train_step(batch=batch_human, head='human')
loss_mouse = train_step(batch=batch_mouse, head='mouse')
# End of epoch.
print('')
print('loss_human', loss_human.numpy(),
'loss_mouse', loss_mouse.numpy(),
'learning_rate', optimizer.learning_rate.numpy()
)
"""
Explanation: Model training
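The learning-rate warmup used in the loop above reduces to a simple linear ramp toward the target rate; a sketch of the schedule on its own:

```python
def warmup_lr(global_step, target_lr=0.0005, num_warmup_steps=5000):
    """Linear warmup: ramp from 0 to target_lr over num_warmup_steps, then hold."""
    frac = min(1.0, global_step / max(1.0, num_warmup_steps))
    return target_lr * frac
```

With only 100 training steps in this demo, the loop never leaves the warmup phase, which is why the printed learning rate stays well below the target.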
End of explanation
"""
# @title `PearsonR` and `R2` metrics
def _reduced_shape(shape, axis):
if axis is None:
return tf.TensorShape([])
return tf.TensorShape([d for i, d in enumerate(shape) if i not in axis])
class CorrelationStats(tf.keras.metrics.Metric):
"""Contains shared code for PearsonR and R2."""
def __init__(self, reduce_axis=None, name='pearsonr'):
"""Pearson correlation coefficient.
Args:
reduce_axis: Specifies over which axis to compute the correlation (say
(0, 1). If not specified, it will compute the correlation across the
whole tensor.
name: Metric name.
"""
super(CorrelationStats, self).__init__(name=name)
self._reduce_axis = reduce_axis
self._shape = None # Specified in _initialize.
def _initialize(self, input_shape):
# Remaining dimensions after reducing over self._reduce_axis.
self._shape = _reduced_shape(input_shape, self._reduce_axis)
weight_kwargs = dict(shape=self._shape, initializer='zeros')
self._count = self.add_weight(name='count', **weight_kwargs)
self._product_sum = self.add_weight(name='product_sum', **weight_kwargs)
self._true_sum = self.add_weight(name='true_sum', **weight_kwargs)
self._true_squared_sum = self.add_weight(name='true_squared_sum',
**weight_kwargs)
self._pred_sum = self.add_weight(name='pred_sum', **weight_kwargs)
self._pred_squared_sum = self.add_weight(name='pred_squared_sum',
**weight_kwargs)
def update_state(self, y_true, y_pred, sample_weight=None):
"""Update the metric state.
Args:
y_true: Multi-dimensional float tensor [batch, ...] containing the ground
truth values.
y_pred: float tensor with the same shape as y_true containing predicted
values.
sample_weight: 1D tensor aligned with y_true batch dimension specifying
the weight of individual observations.
"""
if self._shape is None:
# Explicit initialization check.
self._initialize(y_true.shape)
y_true.shape.assert_is_compatible_with(y_pred.shape)
y_true = tf.cast(y_true, 'float32')
y_pred = tf.cast(y_pred, 'float32')
self._product_sum.assign_add(
tf.reduce_sum(y_true * y_pred, axis=self._reduce_axis))
self._true_sum.assign_add(
tf.reduce_sum(y_true, axis=self._reduce_axis))
self._true_squared_sum.assign_add(
tf.reduce_sum(tf.math.square(y_true), axis=self._reduce_axis))
self._pred_sum.assign_add(
tf.reduce_sum(y_pred, axis=self._reduce_axis))
self._pred_squared_sum.assign_add(
tf.reduce_sum(tf.math.square(y_pred), axis=self._reduce_axis))
self._count.assign_add(
tf.reduce_sum(tf.ones_like(y_true), axis=self._reduce_axis))
def result(self):
raise NotImplementedError('Must be implemented in subclasses.')
def reset_states(self):
if self._shape is not None:
tf.keras.backend.batch_set_value([(v, np.zeros(self._shape))
for v in self.variables])
class PearsonR(CorrelationStats):
"""Pearson correlation coefficient.
Computed as:
  E[(x - x_avg) * (y - y_avg)] / sqrt(Var[x] * Var[y])
"""
def __init__(self, reduce_axis=(0,), name='pearsonr'):
"""Pearson correlation coefficient.
Args:
reduce_axis: Specifies over which axis to compute the correlation.
name: Metric name.
"""
super(PearsonR, self).__init__(reduce_axis=reduce_axis,
name=name)
def result(self):
true_mean = self._true_sum / self._count
pred_mean = self._pred_sum / self._count
covariance = (self._product_sum
- true_mean * self._pred_sum
- pred_mean * self._true_sum
+ self._count * true_mean * pred_mean)
true_var = self._true_squared_sum - self._count * tf.math.square(true_mean)
pred_var = self._pred_squared_sum - self._count * tf.math.square(pred_mean)
tp_var = tf.math.sqrt(true_var) * tf.math.sqrt(pred_var)
correlation = covariance / tp_var
return correlation
class R2(CorrelationStats):
"""R-squared (fraction of explained variance)."""
def __init__(self, reduce_axis=None, name='R2'):
"""R-squared metric.
Args:
reduce_axis: Specifies over which axis to compute the correlation.
name: Metric name.
"""
super(R2, self).__init__(reduce_axis=reduce_axis,
name=name)
def result(self):
true_mean = self._true_sum / self._count
total = self._true_squared_sum - self._count * tf.math.square(true_mean)
residuals = (self._pred_squared_sum - 2 * self._product_sum
+ self._true_squared_sum)
return tf.ones_like(residuals) - residuals / total
class MetricDict:
def __init__(self, metrics):
self._metrics = metrics
def update_state(self, y_true, y_pred):
for k, metric in self._metrics.items():
metric.update_state(y_true, y_pred)
def result(self):
return {k: metric.result() for k, metric in self._metrics.items()}
def evaluate_model(model, dataset, head, max_steps=None):
metric = MetricDict({'PearsonR': PearsonR(reduce_axis=(0,1))})
@tf.function
def predict(x):
return model(x, is_training=False)[head]
for i, batch in tqdm(enumerate(dataset)):
if max_steps is not None and i > max_steps:
break
metric.update_state(batch['target'], predict(batch['sequence']))
return metric.result()
metrics_human = evaluate_model(model,
dataset=get_dataset('human', 'valid').batch(1).prefetch(2),
head='human',
max_steps=100)
print('')
print({k: v.numpy().mean() for k, v in metrics_human.items()})
metrics_mouse = evaluate_model(model,
dataset=get_dataset('mouse', 'valid').batch(1).prefetch(2),
head='mouse',
max_steps=100)
print('')
print({k: v.numpy().mean() for k, v in metrics_mouse.items()})
"""
Explanation: Evaluate
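The PearsonR metric defined above never stores the full series: it accumulates sufficient statistics (the count, the sums, and the sums of squares and of products) across batches and evaluates a closed form at the end. The same closed form in plain Python:

```python
import math

def pearson_from_sums(n, sx, sy, sxy, sxx, syy):
    """Pearson r from streaming sufficient statistics."""
    cov = sxy - sx * sy / n
    var_x = sxx - sx * sx / n
    var_y = syy - sy * sy / n
    return cov / (math.sqrt(var_x) * math.sqrt(var_y))

x, y = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
r = pearson_from_sums(
    n=len(x),
    sx=sum(x), sy=sum(y),
    sxy=sum(a * b for a, b in zip(x, y)),
    sxx=sum(a * a for a in x), syy=sum(b * b for b in y),
)
```

Because only these running sums are kept, the metric can be updated batch by batch without ever holding all predictions in memory.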
End of explanation
"""
np.random.seed(42)
EXTENDED_SEQ_LENGTH = 393_216
SEQ_LENGTH = 196_608
inputs = np.array(np.random.random((1, EXTENDED_SEQ_LENGTH, 4)), dtype=np.float32)
inputs_cropped = enformer.TargetLengthCrop1D(SEQ_LENGTH)(inputs)
checkpoint_gs_path = 'gs://dm-enformer/models/enformer/sonnet_weights/*'
checkpoint_path = '/tmp/enformer_checkpoint'
!mkdir /tmp/enformer_checkpoint
# Copy checkpoints from GCS to temporary directory.
# This will take a while as the checkpoint is ~ 1GB.
for file_path in tf.io.gfile.glob(checkpoint_gs_path):
print(file_path)
file_name = os.path.basename(file_path)
tf.io.gfile.copy(file_path, f'{checkpoint_path}/{file_name}', overwrite=True)
!ls -lh /tmp/enformer_checkpoint
enformer_model = enformer.Enformer()
checkpoint = tf.train.Checkpoint(module=enformer_model)
latest = tf.train.latest_checkpoint(checkpoint_path)
print(latest)
status = checkpoint.restore(latest)
# Using `is_training=False` to match TF-hub predict_on_batch function.
restored_predictions = enformer_model(inputs_cropped, is_training=False)
import tensorflow_hub as hub
enformer_tf_hub_model = hub.load("https://tfhub.dev/deepmind/enformer/1").model
hub_predictions = enformer_tf_hub_model.predict_on_batch(inputs)
np.allclose(hub_predictions['human'], restored_predictions['human'], atol=1e-5)
# Can run with 'is_training=True' but note that this will
# change the predictions as the batch statistics will be updated
# and the outputs will likley not match the TF-hub model.
# enformer(inputs_cropped, is_training=True)
"""
Explanation: Restore Checkpoint
Note: For the TF-Hub Enformer model, the required input sequence length is 393,216, which actually gets cropped within the model to 196,608. The open source module does not internally crop the sequence. Therefore, the code below crops the central 196,608 bp of the longer sequence to reproduce the output of the TF-Hub model from the reloaded checkpoint.
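The central crop applied via enformer.TargetLengthCrop1D above amounts to trimming an equal number of positions from both ends of the sequence axis; a schematic sketch of that semantics:

```python
def center_crop(sequence, target_len):
    """Keep the central `target_len` positions of a longer sequence."""
    trim = (len(sequence) - target_len) // 2
    return sequence[trim:trim + target_len]

cropped = center_crop(list(range(10)), 4)
```

For this notebook's lengths, 393,216 minus 196,608 leaves 98,304 positions trimmed from each side.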
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/272b39eb7cbe2bfe1e8c768341ec7c56/time_frequency_simulated.ipynb | bsd-3-clause | # Authors: Hari Bharadwaj <hari@nmr.mgh.harvard.edu>
# Denis Engemann <denis.engemann@gmail.com>
# Chris Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD-3-Clause
import numpy as np
from matplotlib import pyplot as plt
from mne import create_info, EpochsArray
from mne.baseline import rescale
from mne.time_frequency import (tfr_multitaper, tfr_stockwell, tfr_morlet,
tfr_array_morlet)
from mne.viz import centers_to_edges
print(__doc__)
"""
Explanation: Time-frequency on simulated data (Multitaper vs. Morlet vs. Stockwell)
This example demonstrates the different time-frequency estimation methods
on simulated data. It shows the time-frequency resolution trade-off
and the problem of estimation variance. In addition it highlights
alternative functions for generating TFRs without averaging across
trials, or by operating on numpy arrays.
End of explanation
"""
sfreq = 1000.0
ch_names = ['SIM0001', 'SIM0002']
ch_types = ['grad', 'grad']
info = create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)
n_times = 1024 # Just over 1 second epochs
n_epochs = 40
seed = 42
rng = np.random.RandomState(seed)
noise = rng.randn(n_epochs, len(ch_names), n_times)
# Add a 50 Hz sinusoidal burst to the noise and ramp it.
t = np.arange(n_times, dtype=np.float64) / sfreq
signal = np.sin(np.pi * 2. * 50. * t) # 50 Hz sinusoid signal
signal[np.logical_or(t < 0.45, t > 0.55)] = 0. # Hard windowing
on_time = np.logical_and(t >= 0.45, t <= 0.55)
signal[on_time] *= np.hanning(on_time.sum()) # Ramping
data = noise + signal
reject = dict(grad=4000)
events = np.empty((n_epochs, 3), dtype=int)
first_event_sample = 100
event_id = dict(sin50hz=1)
for k in range(n_epochs):
events[k, :] = first_event_sample + k * n_times, 0, event_id['sin50hz']
epochs = EpochsArray(data=data, info=info, events=events, event_id=event_id,
reject=reject)
epochs.average().plot()
"""
Explanation: Simulate data
We'll simulate data with a known spectro-temporal structure.
End of explanation
"""
freqs = np.arange(5., 100., 3.)
vmin, vmax = -3., 3. # Define our color limits.
"""
Explanation: Calculate a time-frequency representation (TFR)
Below we'll demonstrate the output of several TFR functions in MNE:
:func:mne.time_frequency.tfr_multitaper
:func:mne.time_frequency.tfr_stockwell
:func:mne.time_frequency.tfr_morlet
Multitaper transform
First we'll use the multitaper method for calculating the TFR.
This creates several orthogonal tapering windows in the TFR estimation,
which reduces variance. We'll also show some of the parameters that can be
tweaked (e.g., time_bandwidth) that will result in different multitaper
properties, and thus a different TFR. You can trade time resolution or
frequency resolution or both in order to get a reduction in variance.
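As a concrete rule of thumb used in the cells that follow (their comments note 1, 3 and 7 tapers), the number of retained low-leakage DPSS tapers grows with the time_bandwidth product, roughly as floor(time_bandwidth - 1):

```python
import math

def n_good_tapers(time_bandwidth):
    """Approximate number of DPSS tapers with good leakage properties."""
    return int(math.floor(time_bandwidth - 1))

taper_counts = {tb: n_good_tapers(tb) for tb in (2.0, 4.0, 8.0)}
```

More tapers mean more averaging and thus lower variance, at the cost of more smoothing in time, frequency, or both.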
End of explanation
"""
n_cycles = freqs / 2.
time_bandwidth = 2.0 # Least possible frequency-smoothing (1 taper)
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Least smoothing, most variance')
"""
Explanation: (1) Least smoothing (most variance/background fluctuations).
End of explanation
"""
n_cycles = freqs # Increase time-window length to 1 second.
time_bandwidth = 4.0 # Same frequency-smoothing as (1) 3 tapers.
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Less frequency smoothing, more time smoothing')
"""
Explanation: (2) Less frequency smoothing, more time smoothing.
End of explanation
"""
n_cycles = freqs / 2.
time_bandwidth = 8.0 # Same time-smoothing as (1), 7 tapers.
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Less time smoothing, more frequency smoothing')
"""
Explanation: (3) Less time smoothing, more frequency smoothing.
End of explanation
"""
fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
fmin, fmax = freqs[[0, -1]]
for width, ax in zip((0.2, .7, 3.0), axs):
power = tfr_stockwell(epochs, fmin=fmin, fmax=fmax, width=width)
power.plot([0], baseline=(0., 0.1), mode='mean', axes=ax, show=False,
colorbar=False)
ax.set_title('Sim: Using S transform, width = {:0.1f}'.format(width))
plt.tight_layout()
"""
Explanation: Stockwell (S) transform
Stockwell uses a Gaussian window to balance temporal and spectral resolution.
Importantly, frequency bands are phase-normalized, hence strictly comparable
with regard to timing, and the input signal can be recovered from the
transform in a lossless way if we disregard numerical errors. In this case,
we control the spectral / temporal resolution by specifying different widths
of the Gaussian window using the width parameter.
End of explanation
"""
fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
all_n_cycles = [1, 3, freqs / 2.]
for n_cycles, ax in zip(all_n_cycles, axs):
power = tfr_morlet(epochs, freqs=freqs,
n_cycles=n_cycles, return_itc=False)
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
axes=ax, show=False, colorbar=False)
n_cycles = 'scaled by freqs' if not isinstance(n_cycles, int) else n_cycles
ax.set_title('Sim: Using Morlet wavelet, n_cycles = %s' % n_cycles)
plt.tight_layout()
"""
Explanation: Morlet Wavelets
Finally, we show the TFR using Morlet wavelets, which are sinusoids
with a Gaussian envelope. We can control the balance between spectral and
temporal resolution with the n_cycles parameter, which defines the
number of cycles to include in the window.
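Because each Morlet wavelet has a Gaussian envelope, the trade-off can be quantified with the standard Gaussian-envelope relations: with n_cycles cycles at frequency f, the temporal standard deviation is sigma_t = n_cycles / (2 * pi * f) and the spectral one is sigma_f = 1 / (2 * pi * sigma_t) = f / n_cycles. A quick sketch:

```python
import math

def morlet_resolution(freq_hz, n_cycles):
    """Temporal and spectral standard deviations of a Morlet wavelet."""
    sigma_t = n_cycles / (2.0 * math.pi * freq_hz)
    sigma_f = 1.0 / (2.0 * math.pi * sigma_t)  # equals freq_hz / n_cycles
    return sigma_t, sigma_f

sigma_t, sigma_f = morlet_resolution(50.0, 25.0)  # the 50 Hz burst with n_cycles = freq / 2
```

Raising n_cycles widens the window in time (larger sigma_t) and sharpens it in frequency (smaller sigma_f), which is exactly the behavior the three panels below illustrate.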
End of explanation
"""
n_cycles = freqs / 2.
power = tfr_morlet(epochs, freqs=freqs,
n_cycles=n_cycles, return_itc=False, average=False)
print(type(power))
avgpower = power.average()
avgpower.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Using Morlet wavelets and EpochsTFR', show=False)
"""
Explanation: Calculating a TFR without averaging over epochs
It is also possible to calculate a TFR without averaging across trials.
We can do this by using average=False. In this case, an instance of
:class:mne.time_frequency.EpochsTFR is returned.
End of explanation
"""
power = tfr_array_morlet(epochs.get_data(), sfreq=epochs.info['sfreq'],
freqs=freqs, n_cycles=n_cycles,
output='avg_power')
# Baseline the output
rescale(power, epochs.times, (0., 0.1), mode='mean', copy=False)
fig, ax = plt.subplots()
x, y = centers_to_edges(epochs.times * 1000, freqs)
mesh = ax.pcolormesh(x, y, power[0], cmap='RdBu_r', vmin=vmin, vmax=vmax)
ax.set_title('TFR calculated on a numpy array')
ax.set(ylim=freqs[[0, -1]], xlabel='Time (ms)')
fig.colorbar(mesh)
plt.tight_layout()
plt.show()
"""
Explanation: Operating on arrays
MNE also has versions of the functions above which operate on numpy arrays
instead of MNE objects. They expect inputs of the shape
(n_epochs, n_channels, n_times). They will also return a numpy array
of shape (n_epochs, n_channels, n_freqs, n_times).
End of explanation
"""
|
mattssilva/UW-Machine-Learning-Specialization | Week 1/Getting started with iPython Notebook.ipynb | mit | print ('Hello World!')
"""
Explanation: Getting started with Python
End of explanation
"""
i = 4 # int
type(i)
f = 4.1 # float
type(f)
b = True # boolean variable
s = "This is a string!"
print(s)
"""
Explanation: Create some variables in Python
End of explanation
"""
l = [3,1,2] # list
print(l)
d = {'foo':1, 'bar':2.3, 's':'my first dictionary'} # dictionary
print(d)
print(d['foo']) # element of a dictionary
n = None # Python's null type
type(n)
"""
Explanation: Advanced python types
End of explanation
"""
print "Our float value is %s. Our int value is %s." % (f,i) # Python is pretty good with strings
"""
Explanation: Advanced printing
End of explanation
"""
if i == 1 and f > 4:
print "The value of i is 1 and f is greater than 4."
elif i > 4 or f > 4:
print "i or f are both greater than 4."
else:
print "both i and f are less than or equal to 4"
"""
Explanation: Conditional statements in python
End of explanation
"""
print(l)
for e in l:
    print(e)
"""
Explanation: Conditional loops
End of explanation
"""
counter = 6
while counter < 10:
    print(counter)
counter += 1
"""
Explanation: Note that in Python, we don't use {} or other markers to indicate the part of the loop that gets iterated. Instead, we just indent and align each of the iterated statements with spaces or tabs. (You can use as many as you want, as long as the lines are aligned.)
End of explanation
"""
def add2(x):
y = x + 2
return y
i = 5
add2(i)
"""
Explanation: Creating functions in Python
Again, we don't use {}, but just indent the lines that are part of the function.
End of explanation
"""
square = lambda x: x*x
square(add2(i))
"""
Explanation: We can also define simple functions with lambdas:
End of explanation
"""
|
Aniruddha-Tapas/Applied-Machine-Learning | Ensemble Learning/Classifying Default of Credit Card Clients.ipynb | mit | import os
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from time import time
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score , classification_report
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
# read the provided .xls dataset
xls_filename="default of credit card clients.xls"
# df=pd.read_csv(csv_filename,index_col=0)
df=pd.read_excel(xls_filename, skiprows=1)
df.head()
df.columns
features=list(df.columns[1:-1])
X=df[features]
y = df['default payment next month']
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
print X_train.shape, y_train.shape
"""
Explanation: Classifying Default of Credit Card Clients
<hr>
The dataset can be downloaded from: https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients
This dataset contains information on default payments, demographic factors, credit data, history of payment, and bill statements of credit card clients in Taiwan from April 2005 to September 2005.
Data Set Information:
The research that produced this dataset examined customers' default payments in Taiwan and compared the predictive accuracy of the estimated probability of default across six data mining methods. From a risk-management perspective, an accurate estimate of the probability of default is more valuable than a binary credible/not-credible classification.
Attribute Information:
The dataset contains a binary variable, default payment (Yes = 1, No = 0), as the response variable. This study reviewed the literature and used the following 23 variables as explanatory variables:
<pre>
X1: Amount of the given credit (NT dollar): it includes both the individual consumer credit and his/her family (supplementary) credit.
X2: Gender (1 = male; 2 = female).
X3: Education (1 = graduate school; 2 = university; 3 = high school; 4 = others).
X4: Marital status (1 = married; 2 = single; 3 = others).
X5: Age (year).
X6 - X11: History of past payment. We tracked the past monthly payment records (from April to September, 2005) as follows: X6 = the repayment status in September, 2005; X7 = the repayment status in August, 2005; . . .;X11 = the repayment status in April, 2005. The measurement scale for the repayment status is: -1 = pay duly; 1 = payment delay for one month; 2 = payment delay for two months; . . .; 8 = payment delay for eight months; 9 = payment delay for nine months and above.
X12-X17: Amount of bill statement (NT dollar). X12 = amount of bill statement in September, 2005; X13 = amount of bill statement in August, 2005; . . .; X17 = amount of bill statement in April, 2005.
X18-X23: Amount of previous payment (NT dollar). X18 = amount paid in September, 2005; X19 = amount paid in August, 2005; . . .;X23 = amount paid in April, 2005.
</pre>
Optimization
Ensemble Learning
End of explanation
"""
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
# Build a forest on the credit-default features and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d - %s (%f) " % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(num=None, figsize=(14, 10), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
importances[indices[:5]]
for f in range(5):
print("%d. feature %d - %s (%f)" % (f + 1, indices[f], features[indices[f]] ,importances[indices[f]]))
best_features = []
for i in indices[:5]:
best_features.append(features[i])
# Plot the top 5 feature importances of the forest
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(5), importances[indices][:5],
color="r", yerr=std[indices][:5], align="center")
plt.xticks(range(5), best_features)
plt.xlim([-1, 5])
plt.show()
"""
Explanation: Feature importances with forests of trees
This example shows the use of forests of trees to evaluate the importance of features on the classification task. The red bars are the feature importances of the forest, along with their inter-tree variability.
End of explanation
"""
t0=time()
print "DecisionTree"
#dt = DecisionTreeClassifier(min_samples_split=1,random_state=99)
dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99)
clf_dt=dt.fit(X_train,y_train)
print "Acurracy: ", clf_dt.score(X_test,y_test)
t1=time()
print "time elapsed: ", t1-t0
"""
Explanation: Decision Tree accuracy and time elapsed calculation
End of explanation
"""
tt0=time()
print "cross result========"
scores = cross_validation.cross_val_score(dt, X,y, cv=5)
print scores
print scores.mean()
tt1=time()
print "time elapsed: ", tt1-tt0
print "\n"
"""
Explanation: cross validation for DT
End of explanation
"""
from sklearn.metrics import classification_report
pipeline = Pipeline([
('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
'clf__max_depth': (5, 25 , 50),
'clf__min_samples_split': (1, 5, 10),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
grid_search.fit(X_train, y_train)
print 'Best score: %0.3f' % grid_search.best_score_
print 'Best parameters set:'
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print '\t%s: %r' % (param_name, best_parameters[param_name])
predictions = grid_search.predict(X_test)
print classification_report(y_test, predictions)
"""
Explanation: Tuning our hyperparameters using GridSearch
End of explanation
"""
t2=time()
print "RandomForest"
rf = RandomForestClassifier(n_estimators=100,n_jobs=-1)
clf_rf = rf.fit(X_train,y_train)
print "Acurracy: ", clf_rf.score(X_test,y_test)
t3=time()
print "time elapsed: ", t3-t2
"""
Explanation: Random Forest accuracy and time elapsed calculation
End of explanation
"""
tt2=time()
print "cross result========"
scores = cross_validation.cross_val_score(rf, X,y, cv=5)
print scores
print scores.mean()
tt3=time()
print "time elapsed: ", tt3-tt2
print "\n"
"""
Explanation: cross validation for RF
End of explanation
"""
roc_auc_score(y_test,rf.predict(X_test))
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
predictions = rf.predict_proba(X_test)
false_positive_rate, recall, thresholds = roc_curve(y_test, predictions[:, 1])
roc_auc = auc(false_positive_rate, recall)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, recall, 'b', label='AUC = %0.2f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.ylabel('Recall')
plt.xlabel('Fall-out')
plt.show()
"""
Explanation: Receiver Operating Characteristic (ROC) curve
End of explanation
"""
pipeline2 = Pipeline([
('clf', RandomForestClassifier(criterion='entropy'))
])
parameters = {
'clf__n_estimators': (25, 50, 100),
'clf__max_depth': (5, 25 , 50),
'clf__min_samples_split': (1, 5, 10),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline2, parameters, n_jobs=-1, verbose=1, scoring='accuracy', cv=3)
grid_search.fit(X_train, y_train)
print 'Best score: %0.3f' % grid_search.best_score_
print 'Best parameters set:'
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print '\t%s: %r' % (param_name, best_parameters[param_name])
predictions = grid_search.predict(X_test)
print 'Accuracy:', accuracy_score(y_test, predictions)
print classification_report(y_test, predictions)
"""
Explanation: Tuning Models using GridSearch
End of explanation
"""
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
# NOTE: Setting the `warm_start` construction parameter to `True` disables
# support for parallelised ensembles but is necessary for tracking the OOB
# error trajectory during training.
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, oob_score=True,
max_features="sqrt",
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 15
max_estimators = 175
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(X, y)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
"""
Explanation: OOB Errors for Random Forests
End of explanation
"""
t4=time()
print "NaiveBayes"
nb = BernoulliNB()
clf_nb=nb.fit(X_train,y_train)
print "Acurracy: ", clf_nb.score(X_test,y_test)
t5=time()
print "time elapsed: ", t5-t4
"""
Explanation: Naive Bayes accuracy and time elapsed calculation
End of explanation
"""
tt4=time()
print "cross result========"
scores = cross_validation.cross_val_score(nb, X,y, cv=5)
print scores
print scores.mean()
tt5=time()
print "time elapsed: ", tt5-tt4
print "\n"
"""
Explanation: cross-validation for NB
End of explanation
"""
t6=time()
print "KNN"
# knn = KNeighborsClassifier(n_neighbors=3)
knn = KNeighborsClassifier()
clf_knn=knn.fit(X_train, y_train)
print "Acurracy: ", clf_knn.score(X_test,y_test)
t7=time()
print "time elapsed: ", t7-t6
"""
Explanation: KNN accuracy and time elapsed calculation
End of explanation
"""
tt6=time()
print "cross result========"
scores = cross_validation.cross_val_score(knn, X,y, cv=5)
print scores
print scores.mean()
tt7=time()
print "time elapsed: ", tt7-tt6
print "\n"
from sklearn.cross_validation import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn import grid_search
knn = KNeighborsClassifier()
parameters = {'n_neighbors': (10, 15, 25)}
grid = grid_search.GridSearchCV(knn, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid.fit(X_train, y_train)
print 'Best score: %0.3f' % grid.best_score_
print 'Best parameters set:'
best_parameters = grid.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print '\t%s: %r' % (param_name, best_parameters[param_name])
predictions = grid.predict(X_test)
print classification_report(y_test, predictions)
"""
Explanation: cross validation for KNN
End of explanation
"""
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
from sklearn.preprocessing import LabelEncoder
from sklearn.externals import six
from sklearn.base import clone
from sklearn.pipeline import _name_estimators
import numpy as np
import operator
class MajorityVoteClassifier(BaseEstimator,
ClassifierMixin):
""" A majority vote ensemble classifier
Parameters
----------
classifiers : array-like, shape = [n_classifiers]
Different classifiers for the ensemble
vote : str, {'classlabel', 'probability'} (default='classlabel')
If 'classlabel' the prediction is based on the argmax of
class labels. Else if 'probability', the argmax of
the sum of probabilities is used to predict the class label
(recommended for calibrated classifiers).
weights : array-like, shape = [n_classifiers], optional (default=None)
If a list of `int` or `float` values are provided, the classifiers
are weighted by importance; Uses uniform weights if `weights=None`.
"""
def __init__(self, classifiers, vote='classlabel', weights=None):
self.classifiers = classifiers
self.named_classifiers = {key: value for key, value
in _name_estimators(classifiers)}
self.vote = vote
self.weights = weights
def fit(self, X, y):
""" Fit classifiers.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Matrix of training samples.
y : array-like, shape = [n_samples]
Vector of target class labels.
Returns
-------
self : object
"""
if self.vote not in ('probability', 'classlabel'):
raise ValueError("vote must be 'probability' or 'classlabel'"
"; got (vote=%r)"
% self.vote)
if self.weights and len(self.weights) != len(self.classifiers):
raise ValueError('Number of classifiers and weights must be equal'
'; got %d weights, %d classifiers'
% (len(self.weights), len(self.classifiers)))
# Use LabelEncoder to ensure class labels start with 0, which
# is important for np.argmax call in self.predict
self.lablenc_ = LabelEncoder()
self.lablenc_.fit(y)
self.classes_ = self.lablenc_.classes_
self.classifiers_ = []
for clf in self.classifiers:
fitted_clf = clone(clf).fit(X, self.lablenc_.transform(y))
self.classifiers_.append(fitted_clf)
return self
def predict(self, X):
""" Predict class labels for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Matrix of training samples.
Returns
----------
maj_vote : array-like, shape = [n_samples]
Predicted class labels.
"""
if self.vote == 'probability':
maj_vote = np.argmax(self.predict_proba(X), axis=1)
else: # 'classlabel' vote
# Collect results from clf.predict calls
predictions = np.asarray([clf.predict(X)
for clf in self.classifiers_]).T
maj_vote = np.apply_along_axis(
lambda x:
np.argmax(np.bincount(x,
weights=self.weights)),
axis=1,
arr=predictions)
maj_vote = self.lablenc_.inverse_transform(maj_vote)
return maj_vote
def predict_proba(self, X):
""" Predict class probabilities for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
avg_proba : array-like, shape = [n_samples, n_classes]
Weighted average probability for each class per sample.
"""
probas = np.asarray([clf.predict_proba(X)
for clf in self.classifiers_])
avg_proba = np.average(probas, axis=0, weights=self.weights)
return avg_proba
def get_params(self, deep=True):
""" Get classifier parameter names for GridSearch"""
if not deep:
return super(MajorityVoteClassifier, self).get_params(deep=False)
else:
out = self.named_classifiers.copy()
for name, step in six.iteritems(self.named_classifiers):
for key, value in six.iteritems(step.get_params(deep=True)):
out['%s__%s' % (name, key)] = value
return out
"""
Explanation: Ensemble Learning
End of explanation
"""
from sklearn.cross_validation import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
import numpy as np
clf1 = LogisticRegression(penalty='l2',
C=0.001,
random_state=0)
clf2 = DecisionTreeClassifier(max_depth=5,
min_samples_leaf=5,
min_samples_split=1,
criterion='entropy',
random_state=0)
clf3 = KNeighborsClassifier(n_neighbors=1,
p=2,
metric='minkowski')
pipe1 = Pipeline([['sc', StandardScaler()],
['clf', clf1]])
pipe3 = Pipeline([['sc', StandardScaler()],
['clf', clf3]])
clf_labels = ['Logistic Regression', 'Decision Tree', 'KNN']
print('10-fold cross validation:\n')
for clf, label in zip([pipe1, clf2, pipe3], clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='roc_auc')
print("ROC AUC: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
"""
Explanation: Combining different algorithms for classification with majority vote
End of explanation
"""
# Majority Rule (hard) Voting
mv_clf = MajorityVoteClassifier(
classifiers=[pipe1, clf2,pipe3])
clf_labels = ['Logistic Regression', 'Decision Tree', 'K Nearest Neighbours', 'Majority Voting']
all_clf = [pipe1, clf2, pipe3, mv_clf]
for clf, label in zip(all_clf, clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='roc_auc')
print("ROC AUC: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
"""
Explanation: You may be wondering why we trained the logistic regression and k-nearest neighbors classifiers as part of a pipeline. The reason is that, unlike decision trees, both
logistic regression and k-nearest neighbors (using the Euclidean distance metric) are not scale-invariant, so their input features should be standardized first.
Now let's move on to the more exciting part and combine the individual classifiers for majority rule voting in our MajorityVoteClassifier:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
colors = ['black', 'orange', 'blue', 'green']
linestyles = [':', '--', '-.', '-']
for clf, label, clr, ls in zip(all_clf, clf_labels, colors, linestyles):
# assuming the label of the positive class is 1
y_pred = clf.fit(X_train,
y_train).predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_true=y_test,
y_score=y_pred)
roc_auc = auc(x=fpr, y=tpr)
plt.plot(fpr, tpr,
color=clr,
linestyle=ls,
label='%s (auc = %0.2f)' % (label, roc_auc))
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1],
linestyle='--',
color='gray',
linewidth=2)
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()
"""
Explanation: Evaluating and tuning the ensemble classifier
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
clf1 = LogisticRegression(random_state=123)
clf2 = RandomForestClassifier(random_state=123)
clf3 = GaussianNB()
eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
voting='soft',
weights=[1, 1, 5])
# predict class probabilities for all classifiers
probas = [c.fit(X, y).predict_proba(X) for c in (clf1, clf2, clf3, eclf)]
# get class probabilities for the first sample in the dataset
class1_1 = [pr[0, 0] for pr in probas]
class2_1 = [pr[0, 1] for pr in probas]
# plotting
N = 4 # number of groups
ind = np.arange(N) # group positions
width = 0.35 # bar width
fig, ax = plt.subplots()
# bars for classifier 1-3
p1 = ax.bar(ind, np.hstack(([class1_1[:-1], [0]])), width, color='green')
p2 = ax.bar(ind + width, np.hstack(([class2_1[:-1], [0]])), width, color='lightgreen')
# bars for VotingClassifier
p3 = ax.bar(ind, [0, 0, 0, class1_1[-1]], width, color='blue')
p4 = ax.bar(ind + width, [0, 0, 0, class2_1[-1]], width, color='steelblue')
# plot annotations
plt.axvline(2.8, color='k', linestyle='dashed')
ax.set_xticks(ind + width)
ax.set_xticklabels(['LogisticRegression\nweight 1',
                    'RandomForestClassifier\nweight 1',
                    'GaussianNB\nweight 5',
'VotingClassifier\n(average probabilities)'],
rotation=40,
ha='right')
plt.ylim([0, 1])
plt.title('Class probabilities for sample 1 by different classifiers')
plt.legend([p1[0], p2[0]], ['class 1', 'class 2'], loc='upper left')
plt.show()
"""
Explanation: Plot class probabilities calculated by the VotingClassifier
End of explanation
"""
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=None)
bag = BaggingClassifier(base_estimator=tree,
n_estimators=500,
max_samples=1.0,
max_features=1.0,
bootstrap=True,
bootstrap_features=False,
n_jobs=-1,
random_state=1)
from sklearn.metrics import accuracy_score
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f'
% (tree_train, tree_test))
bag = bag.fit(X_train, y_train)
y_train_pred = bag.predict(X_train)
y_test_pred = bag.predict(X_test)
bag_train = accuracy_score(y_train, y_train_pred)
bag_test = accuracy_score(y_test, y_test_pred)
print('Bagging train/test accuracies %.3f/%.3f'
% (bag_train, bag_test))
"""
Explanation: Bagging -- Building an ensemble of classifiers from bootstrap samples
Bagging is an ensemble learning technique that is closely related to the MajorityVoteClassifier. However, instead of using the same training set to fit the individual classifiers in the ensemble, we draw bootstrap samples (random samples with replacement) from the initial training set, which is why bagging is also known as bootstrap aggregating.
End of explanation
"""
from sklearn.ensemble import AdaBoostClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=1)
ada = AdaBoostClassifier(base_estimator=tree,
n_estimators=500,
learning_rate=0.1,
random_state=0)
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f'
% (tree_train, tree_test))
ada = ada.fit(X_train, y_train)
y_train_pred = ada.predict(X_train)
y_test_pred = ada.predict(X_test)
ada_train = accuracy_score(y_train, y_train_pred)
ada_test = accuracy_score(y_test, y_test_pred)
print('AdaBoost train/test accuracies %.3f/%.3f'
% (ada_train, ada_test))
"""
Explanation: Leveraging weak learners via adaptive boosting
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles
# Create and fit an AdaBoosted decision tree
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=5),
algorithm="SAMME",
n_estimators=200)
bdt.fit(X, y)
plot_colors = "br"
plot_step = 0.02
class_names = "AB"
plt.figure(figsize=(10, 5))
# Plot the decision boundaries
plt.subplot(121)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = bdt.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis("tight")
# Plot the training points
for i, n, c in zip(range(2), class_names, plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1],
c=c, cmap=plt.cm.Paired,
label="Class %s" % n)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc='upper right')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Decision Boundary')
# Plot the two-class decision scores
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
plt.subplot(122)
for i, n, c in zip(range(2), class_names, plot_colors):
plt.hist(twoclass_output[y == i],
bins=10,
range=plot_range,
facecolor=c,
label='Class %s' % n,
alpha=.5)
x1, x2, y1, y2 = plt.axis()
plt.axis((x1, x2, y1, y2 * 1.2))
plt.legend(loc='upper right')
plt.ylabel('Samples')
plt.xlabel('Score')
plt.title('Decision Scores')
plt.tight_layout()
plt.subplots_adjust(wspace=0.35)
plt.show()
"""
Explanation: Two-class AdaBoost
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import zero_one_loss
from sklearn.ensemble import AdaBoostClassifier
n_estimators = 400
# A learning rate of 1. may not be optimal for both SAMME and SAMME.R
learning_rate = 1.
dt_stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
dt_stump.fit(X_train, y_train)
dt_stump_err = 1.0 - dt_stump.score(X_test, y_test)
dt = DecisionTreeClassifier(max_depth=9, min_samples_leaf=1)
dt.fit(X_train, y_train)
dt_err = 1.0 - dt.score(X_test, y_test)
ada_discrete = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME")
ada_discrete.fit(X_train, y_train)
ada_real = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME.R")
ada_real.fit(X_train, y_train)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, n_estimators], [dt_stump_err] * 2, 'k-',
label='Decision Stump Error')
ax.plot([1, n_estimators], [dt_err] * 2, 'k--',
label='Decision Tree Error')
ada_discrete_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_test)):
ada_discrete_err[i] = zero_one_loss(y_pred, y_test)
ada_discrete_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_train)):
ada_discrete_err_train[i] = zero_one_loss(y_pred, y_train)
ada_real_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_test)):
ada_real_err[i] = zero_one_loss(y_pred, y_test)
ada_real_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_train)):
ada_real_err_train[i] = zero_one_loss(y_pred, y_train)
ax.plot(np.arange(n_estimators) + 1, ada_discrete_err,
label='Discrete AdaBoost Test Error',
color='red')
ax.plot(np.arange(n_estimators) + 1, ada_discrete_err_train,
label='Discrete AdaBoost Train Error',
color='blue')
ax.plot(np.arange(n_estimators) + 1, ada_real_err,
label='Real AdaBoost Test Error',
color='orange')
ax.plot(np.arange(n_estimators) + 1, ada_real_err_train,
label='Real AdaBoost Train Error',
color='green')
ax.set_ylim((0.0, 0.5))
ax.set_xlabel('n_estimators')
ax.set_ylabel('error rate')
leg = ax.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.7)
plt.show()
"""
Explanation: Discrete versus Real AdaBoost
End of explanation
"""
import numpy as np
np.random.seed(10)
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (RandomTreesEmbedding, RandomForestClassifier,
GradientBoostingClassifier)
from sklearn.preprocessing import OneHotEncoder
from sklearn.cross_validation import train_test_split
from sklearn.metrics import roc_curve
from sklearn.pipeline import make_pipeline
n_estimator = 10
# It is important to train the ensemble of trees on a different subset
# of the training data than the logistic regression model to avoid
# overfitting, in particular if the total number of leaves is
# similar to the number of training samples
X_train, X_train_lr, y_train, y_train_lr = train_test_split(X_train,
y_train,
test_size=0.5)
# Unsupervised transformation based on totally random trees
rt = RandomTreesEmbedding(max_depth=3, n_estimators=n_estimator,
random_state=0)
rt_lm = LogisticRegression()
pipeline = make_pipeline(rt, rt_lm)
pipeline.fit(X_train, y_train)
y_pred_rt = pipeline.predict_proba(X_test)[:, 1]
fpr_rt_lm, tpr_rt_lm, _ = roc_curve(y_test, y_pred_rt)
# Supervised transformation based on random forests
rf = RandomForestClassifier(max_depth=3, n_estimators=n_estimator)
rf_enc = OneHotEncoder()
rf_lm = LogisticRegression()
rf.fit(X_train, y_train)
rf_enc.fit(rf.apply(X_train))
rf_lm.fit(rf_enc.transform(rf.apply(X_train_lr)), y_train_lr)
y_pred_rf_lm = rf_lm.predict_proba(rf_enc.transform(rf.apply(X_test)))[:, 1]
fpr_rf_lm, tpr_rf_lm, _ = roc_curve(y_test, y_pred_rf_lm)
grd = GradientBoostingClassifier(n_estimators=n_estimator)
grd_enc = OneHotEncoder()
grd_lm = LogisticRegression()
grd.fit(X_train, y_train)
grd_enc.fit(grd.apply(X_train)[:, :, 0])
grd_lm.fit(grd_enc.transform(grd.apply(X_train_lr)[:, :, 0]), y_train_lr)
y_pred_grd_lm = grd_lm.predict_proba(
grd_enc.transform(grd.apply(X_test)[:, :, 0]))[:, 1]
fpr_grd_lm, tpr_grd_lm, _ = roc_curve(y_test, y_pred_grd_lm)
# The gradient boosted model by itself
y_pred_grd = grd.predict_proba(X_test)[:, 1]
fpr_grd, tpr_grd, _ = roc_curve(y_test, y_pred_grd)
# The random forest model by itself
y_pred_rf = rf.predict_proba(X_test)[:, 1]
fpr_rf, tpr_rf, _ = roc_curve(y_test, y_pred_rf)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_rt_lm, tpr_rt_lm, label='RT + LR')
plt.plot(fpr_rf, tpr_rf, label='RF')
plt.plot(fpr_rf_lm, tpr_rf_lm, label='RF + LR')
plt.plot(fpr_grd, tpr_grd, label='GBT')
plt.plot(fpr_grd_lm, tpr_grd_lm, label='GBT + LR')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
plt.figure(2)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_rt_lm, tpr_rt_lm, label='RT + LR')
plt.plot(fpr_rf, tpr_rf, label='RF')
plt.plot(fpr_rf_lm, tpr_rf_lm, label='RF + LR')
plt.plot(fpr_grd, tpr_grd, label='GBT')
plt.plot(fpr_grd_lm, tpr_grd_lm, label='GBT + LR')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve (zoomed in at top left)')
plt.legend(loc='best')
plt.show()
"""
Explanation: Feature transformations with ensembles of trees
End of explanation
"""
target_names = ['Shares > 1400' , 'Shares < 1400']
X.values
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
pca = PCA(n_components=2)
reduced_X = pca.fit_transform(X)
red_x, red_y = [], []
blue_x, blue_y = [], []
for i in range(len(reduced_X)):
    if y[i] == 0:
        red_x.append(reduced_X[i][0])
        red_y.append(reduced_X[i][1])
    elif y[i] == 1:
        blue_x.append(reduced_X[i][0])
        blue_y.append(reduced_X[i][1])
for a in [red_x, red_y, blue_x, blue_y]:
    print(len(a))
plt.scatter(red_x, red_y, c='r', marker='x')
plt.scatter(blue_x, blue_y, c='b', marker='.')
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
"""
pca = PCA(n_components=2)
X_r = pca.fit(X.values).transform(X.values)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X.values, y.values).transform(X.values)
"""
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
lda = LinearDiscriminantAnalysis(n_components=1)  # binary target: at most n_classes - 1 = 1 discriminant
X_r2 = lda.fit(X, y).transform(X)
# Percentage of variance explained for each component
print('explained variance ratio (first two components): %s'
% str(pca.explained_variance_ratio_))
plt.figure()
colors = ['blue', 'red']
for i in range(len(colors)):
    px = X_r[:, 0][y == i]
    py = X_r[:, 1][y == i]
    plt.scatter(px, py, c=colors[i])
plt.legend(target_names)
plt.title('PCA')
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
plt.figure()
colors = ['blue', 'red']
# With a binary target, LDA produces a single discriminant axis, so plot it
# against the first principal component to get a 2-D view.
for i in range(len(colors)):
    px = X_r2[:, 0][y == i]
    py = X_r[:, 0][y == i]
    plt.scatter(px, py, c=colors[i])
plt.legend(target_names)
plt.title('LDA')
plt.xlabel('Linear Discriminant')
plt.ylabel('First Principal Component')
plt.show()
"""
for c, i, target_name in zip("rb", [0, 1], target_names):
plt.scatter(X_r[y == i, 0], X_r[y == i, 1], c=c, label=target_name)
plt.legend()
plt.title('PCA')
plt.figure()
for c, i, target_name in zip("rb", [0, 1], target_names):
plt.scatter(X_r2[y == i, 0], X_r2[y == i, 1], c=c, label=target_name)
plt.legend()
plt.title('LDA')
plt.show()
"""
plt.figure()
def plot_pca_scatter():
    colors = ['blue', 'red']
    for i in range(len(colors)):
        px = X_pca[:, 0][y == i]
        py = X_pca[:, 1][y == i]
        plt.scatter(px, py, c=colors[i])
    plt.legend(target_names)
    plt.xlabel('First Principal Component')
    plt.ylabel('Second Principal Component')
from sklearn.decomposition import PCA
estimator = PCA(n_components=2)
X_pca = estimator.fit_transform(X.values)
plot_pca_scatter() # Note that we only plot the first and second principal component
# NOTE: this cell assumes `images`, `n_row` and `n_col` are already defined
# (e.g. PCA components reshaped to 8x8 thumbnails from a digits-style dataset).
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(comp.reshape((8, 8)), interpolation='nearest')
plt.text(0, -1, str(i + 1) + '-component')
plt.xticks(())
plt.yticks(())
"""
Explanation: PCA Decomposition
End of explanation
"""
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV  # formerly sklearn.grid_search
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
# This dataset is way too high-dimensional. Better do PCA:
pca = PCA(n_components=2)
# Maybe some original features where good, too?
selection = SelectKBest(k=1)
# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])
# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
dt = DecisionTreeClassifier(min_samples_split=2, max_depth=5, min_samples_leaf=5, random_state=99)
# Do grid search over k, n_components and max_depth:
pipeline = Pipeline([("features", combined_features), ("dt", dt)])
param_grid = dict(features__pca__n_components=[1, 2, 3],
features__univ_select__k=[1, 2],
dt__max_depth=[3, 5, 7])
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
print(grid_search.best_score_)
"""
Explanation: Optimization
Concatenating multiple feature extraction methods
End of explanation
"""
import numpy as np
np.random.seed(0)
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.calibration import calibration_curve
# Create classifiers
lr = LogisticRegression()
gnb = GaussianNB()
knn = KNeighborsClassifier(n_neighbors=25)
rfc = RandomForestClassifier(n_estimators=100)
###############################################################################
# Plot calibration plots
plt.figure(figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for clf, name in [(lr, 'Logistic'),
(gnb, 'Naive Bayes'),
(knn, 'K Neighbors Classifier'),
(rfc, 'Random Forest')]:
clf.fit(X_train, y_train)
if hasattr(clf, "predict_proba"):
prob_pos = clf.predict_proba(X_test)[:, 1]
else: # use decision function
prob_pos = clf.decision_function(X_test)
prob_pos = \
(prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test, prob_pos, n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s" % (name, ))
ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
plt.show()
"""
Explanation: Comparison of Calibration of Classifiers
End of explanation
"""
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (brier_score_loss, precision_score, recall_score,
f1_score)
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.model_selection import train_test_split  # formerly sklearn.cross_validation
def plot_calibration_curve(est, name, fig_index):
"""Plot calibration curve for est w/o and with calibration. """
# Calibrated with isotonic calibration
isotonic = CalibratedClassifierCV(est, cv=2, method='isotonic')
# Calibrated with sigmoid calibration
sigmoid = CalibratedClassifierCV(est, cv=2, method='sigmoid')
# Logistic regression with no calibration as baseline
lr = LogisticRegression(C=1., solver='lbfgs')
fig = plt.figure(fig_index, figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for clf, name in [(lr, 'Logistic'),
(est, name),
(isotonic, name + ' + Isotonic'),
(sigmoid, name + ' + Sigmoid')]:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
if hasattr(clf, "predict_proba"):
prob_pos = clf.predict_proba(X_test)[:, 1]
else: # use decision function
prob_pos = clf.decision_function(X_test)
prob_pos = \
(prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
clf_score = brier_score_loss(y_test, prob_pos, pos_label=y.max())
print("%s:" % name)
print("\tBrier: %1.3f" % (clf_score))
print("\tPrecision: %1.3f" % precision_score(y_test, y_pred))
print("\tRecall: %1.3f" % recall_score(y_test, y_pred))
print("\tF1: %1.3f\n" % f1_score(y_test, y_pred))
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test, prob_pos, n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s (%1.3f)" % (name, clf_score))
ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
# Plot calibration curve for Gaussian Naive Bayes
plot_calibration_curve(GaussianNB(), "Naive Bayes", 1)
# Plot calibration curve for Decision Tree
plot_calibration_curve(DecisionTreeClassifier(), "Decision Tree", 2)
plt.show()
"""
Explanation: Probability Calibration curves
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
#PCA
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
n_features = X_r.shape[1]
C = 1.0
# Create different classifiers. The logistic regression cannot do
# multiclass out of the box.
classifiers = {'L1 logistic': LogisticRegression(C=C, penalty='l1', solver='liblinear'),
               'L2 logistic': LogisticRegression(C=C, penalty='l2'),
               'Decision Tree': DecisionTreeClassifier(max_depth=5, min_samples_leaf=5, min_samples_split=2, random_state=99),
'K Nearest Neighbors': KNeighborsClassifier(n_neighbors=3),
'Random Forest' : RandomForestClassifier(max_depth=25,min_samples_leaf=2,
min_samples_split=10,n_estimators=100,n_jobs=-1)
}
n_classifiers = len(classifiers)
plt.figure(figsize=(2 * 2, n_classifiers * 2))
plt.subplots_adjust(bottom=.2, top=.95)
xx = np.linspace(3, 9, 100)
yy = np.linspace(1, 5, 100).T
xx, yy = np.meshgrid(xx, yy)
Xfull = np.c_[xx.ravel(), yy.ravel()]
for index, (name, classifier) in enumerate(classifiers.items()):
classifier.fit(X_r, y)
y_pred = classifier.predict(X_r)
classif_rate = np.mean(y_pred.ravel() == y.ravel()) * 100
print("classif_rate for %s : %f " % (name, classif_rate))
    # View probabilities:
probas = classifier.predict_proba(Xfull)
n_classes = np.unique(y_pred).size
for k in range(n_classes):
plt.subplot(n_classifiers, n_classes, index * n_classes + k + 1)
plt.title("Class %d" % k)
if k == 0:
plt.ylabel(name)
imshow_handle = plt.imshow(probas[:, k].reshape((100, 100)),
extent=(3, 9, 1, 5), origin='lower')
plt.xticks(())
plt.yticks(())
idx = (y_pred == k)
if idx.any():
plt.scatter(X_r[idx, 0], X_r[idx, 1], marker='o', c='k')
ax = plt.axes([0.15, 0.04, 0.7, 0.05])
plt.title("Probability")
plt.colorbar(imshow_handle, cax=ax, orientation='horizontal')
plt.show()
"""
Explanation: Plot classification probability
End of explanation
"""
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold  # formerly sklearn.cross_validation
from sklearn.feature_selection import RFECV
from sklearn.datasets import make_classification
# Create the RFE object and compute a cross-validated score.
rf = RandomForestClassifier(max_depth=25, min_samples_leaf=2, min_samples_split=10, n_estimators=100, n_jobs=-1)
# The "accuracy" scoring is proportional to the number of correct classifications
rfecv = RFECV(estimator=rf, step=1, cv=StratifiedKFold(2),
              scoring='accuracy')
rfecv.fit(X, y)
print("Optimal number of features : %d" % rfecv.n_features_)
# Plot number of features VS. cross-validation scores
plt.figure()
plt.xlabel("Number of features selected")
plt.ylabel("Cross validation score (nb of correct classifications)")
plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
plt.show()
"""
Explanation: Recursive feature elimination with cross-validation
End of explanation
"""
|
refugeehackathon/brain-backend | SpreadsheetConversion/convert_tables_for_db.ipynb | mit | import requests
from io import BytesIO
import pandas as pd
spreadsheet_url = "https://docs.google.com/spreadsheets/d/1WbYov7KrliIvh9Ei485zxPF27Wx7-CYFZliNj3hZ9WE"
stream = requests.get("{0}/export?format=xlsx".format(spreadsheet_url))
full_table = pd.read_excel(BytesIO(stream.content), sheet_name="Project-DB")
full_table.head()
curated = full_table[:38]
curated.columns
"""
Explanation: Table Conversions
End of explanation
"""
matching = {
"url": "web_url",
"title": "title",
"description": "description_content",
"logo": "logo_url",
"code_repository": "repository_url",
"organization_name": "organization_name",
"code_license": "code_license",
"releasedate": "publishedAt"
}
projects_columns = ["id", "user_id", "title", "description_tech", "description_content",
"code_license", "data_license", "logo_url", "web_url", "repository_url",
"organization_name", "dev_support_needed", "general_support_needed",
"publishedAt", "createddAt", "updatedAt"]
projects_export = curated[list(matching.keys())].copy()
projects_export.columns = [matching[key] for key in projects_export.columns]
projects_export["id"] = projects_export.index + 1
for key in projects_columns:
if key not in projects_export.columns:
projects_export[key] = None
projects_export.head()
projects_export.to_csv("projects.csv", index=False, header=False, columns=projects_columns)
"""
Explanation: The following columns are ignored for later processing:
* kind, is a category
* area, is geospatial information
* status, has categories
* hashtags, become project tags
* categories, need to be merged with our list
* orgacontact_*, will become part of contact information
* contact_*, will become part of contact information
* programming_languages, are categories
* languages, part of contact details
* organization_type, is a category
* entrydate, automatic DB entry
* software_development_needs, is now a Boolean 'dev_support_needed' combined with categories of platform, etc.
* random_generated_key, should be part of the user information who is entitled to modifying the DB
Create Primary 'projects' Table
End of explanation
"""
s = pd.DataFrame(curated["categories"].str.split(',').tolist()).stack()
print(len(s))
s.head()
s.index = s.index.droplevel(-1)
s.name = 'category'
s.head()
category_join = projects_export.join(s)
category_join.head()
"""
Explanation: Create 'contact_informations'
Create 'projects_categories' Table
End of explanation
"""
category_nodes_translations_columns = ["id", "category_node_id", "title", "locale"]
categories_german = pd.read_csv("category_nodes_translations.csv", header=None, names=category_nodes_translations_columns)
categories_german.head()
title2id = dict(categories_german[["title", "category_node_id"]].itertuples(index=False))
"""
Explanation: Load categories. Create dict to ID. Columns:
project_id
category_node_id
End of explanation
"""
category_join["category_node_id"] = [title2id.get(category) for category in category_join["category"]]
project_categories_export = category_join[["id", "category_node_id", "category"]].copy()
project_categories_export.rename(columns={"id": "project_id"}, inplace=True)
# dropna returns a new frame, so keep the result (the original call discarded it)
project_categories_export = project_categories_export.dropna(subset=["category_node_id"])
project_categories_export
"""
Explanation: Try to find categories from project table in the existing list. Will probably introduce a lot of missing values. A real string search (using different languages) would yield better results.
End of explanation
"""
|
enoordeh/StatisticalMethods | examples/XrayImage/Summarizing.ipynb | gpl-2.0 | from __future__ import print_function
import astropy.io.fits as pyfits
import numpy as np
import astropy.visualization as viz
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
targdir = 'a1835_xmm/'
imagefile = targdir+'P0098010101M2U009IMAGE_3000.FTZ'
expmapfile = targdir+'P0098010101M2U009EXPMAP3000.FTZ'
bkgmapfile = targdir+'P0098010101M2X000BKGMAP3000.FTZ'
!du -sch $targdir/*
"""
Explanation: Summarizing Images
Images are high dimensional objects: our XMM image contains 648*648 = 419,904 datapoints (the pixel values).
Visualizing the data is an extremely important first step: the next is summarizing, which can be thought of as dimensionality reduction.
Let's dust off some standard statistics and put them to good use in summarizing this X-ray image.
End of explanation
"""
imfits = pyfits.open(imagefile)
im = imfits[0].data
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
"""
Explanation: How Many Photons Came From the Cluster?
Let's estimate the total counts due to the cluster.
That means we need to somehow ignore
all the other objects in the field
the diffuse X-ray "background"
Let's start by masking various regions of the image to separate cluster from background.
End of explanation
"""
# First make some coordinate arrays, including polar r from the cluster center:
ny, nx = im.shape
centroid = np.unravel_index(im.argmax(), im.shape)
x = np.linspace(0, nx-1, nx)
y = np.linspace(0, ny-1, ny)
dx, dy = np.meshgrid(x,y)
dx = dx - centroid[1]
dy = dy - centroid[0]
r = np.sqrt(dx*dx + dy*dy)
# Now select an outer annulus, for the background,
# and an inner circle, for the cluster:
background = (r >= 100.0) & (r <= 150.0)
signal = (r < 100.0)
"""
Explanation: Estimating the background
Now let's look at the outer parts of the image, far from the cluster, and estimate the background level there.
End of explanation
"""
maskedimage = im.copy()
maskedimage[np.logical_not(background)] = -1
plt.imshow(viz.scale_image(maskedimage, scale='log', max_cut=40), cmap='gray', origin='lower')
"""
Explanation: First, let's visualize the background region by masking out everything else.
End of explanation
"""
meanbackground = np.mean(im[background])
medianbackground = np.median(im[background])
print("Mean background counts per pixel = ",meanbackground)
print("Median background counts per pixel = ",medianbackground)
"""
Explanation: Now let's look at the mean and median of the pixel values in the background annulus.
End of explanation
"""
plt.figure(figsize=(10,7))
n, bins, patches = plt.hist(im[background], bins=np.linspace(-3.5,29.5,34))
# plt.yscale('log', nonposy='clip')
plt.xlabel('Background annulus pixel value (counts)')
plt.ylabel('Frequency')
plt.axis([-3.0, 30.0, 0, 40000])
plt.grid(True)
plt.show()
stdevbackground = np.std(im[background])
print("Standard deviation: ",stdevbackground)
"""
Explanation: Q: Why do you think there's a difference?
Talk to your neighbor for a minute, and be ready to suggest an answer.
To understand the difference in these two estimates, let's look at a pixel histogram for this annulus.
End of explanation
"""
maskedimage = im.copy()
maskedimage[np.logical_not(signal)] = 0
plt.imshow(viz.scale_image(maskedimage, scale='log', max_cut=40), cmap='gray', origin='lower')
plt.figure(figsize=(10,7))
n, bins, patches = plt.hist(im[signal], bins=np.linspace(-3.5,29.5,34), color='red')
plt.yscale('log', nonposy='clip')
plt.xlabel('Signal region pixel value (counts)')
plt.ylabel('Frequency')
plt.axis([-3.0, 30.0, 0, 500000])
plt.grid(True)
plt.show()
"""
Explanation: Exercise:
"The background level in this image is approximately $0.09 \pm 0.66$ counts"
What's wrong with this statement?
Talk to your neighbor for a few minutes, and see if you can come up with a better version.
Estimating the Cluster Counts
Now let's summarize the circular region centered on the cluster, by making another masked image.
End of explanation
"""
# Total counts in signal region:
Ntotal = np.sum(im[signal])
# Background counts: the mean counts per pixel in the annulus,
# multiplied by the number of pixels in the signal region:
Nbackground = np.count_nonzero(signal)*meanbackground # Is this a good choice?
# Difference is the cluster counts:
Ncluster = Ntotal - Nbackground
print("Counts in signal region: ",Ntotal)
print("Approximate counts due to background: ",Nbackground)
print("Approximate counts due to cluster: ",Ncluster)
"""
Explanation: Now we can make our estimates:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nerc/cmip6/models/sandbox-2/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-2', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
khalido/nd101 | Handwritten Digit Recognition with TFLearn.ipynb | gpl-3.0 | # Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
"""
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications, including recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
"""
# mnist fails to load, so got this patch from the nd101 slack
def patched_read32(bytestream):
dt = np.dtype(np.uint32).newbyteorder('>')
return np.frombuffer(bytestream.read(4), dtype=dt)[0]
mnist._read32 = patched_read32
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
"""
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
"""
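As a small plain-NumPy illustration of the two ideas just described (one-hot labels and flattening):

```python
import numpy as np

# a one-hot vector for the digit 4: index 4 is 1, everything else is 0
label = np.zeros(10)
label[4] = 1
print(label.argmax())  # -> 4, recovering the digit from the one-hot vector

# flattening: a 28x28 image becomes a 784-element vector
image = np.zeros((28, 28))
flat = image.reshape(784)
print(flat.shape)  # -> (784,)
```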
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
"""
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
"""
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
#input layer
net = tflearn.input_data([None, 784])
#hidden layer 1
net = tflearn.fully_connected(net, 196, activation='ReLU')
# hidden layer 2
net = tflearn.fully_connected(net, 49, activation='ReLU')
# output layer
net = tflearn.fully_connected(net, 10, activation='softmax')
# how does it learn?
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.05, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
"""
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation
"""
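The argmax-based accuracy comparison used in the cell above can be checked on a tiny toy example:

```python
import numpy as np

# three prediction vectors (softmax-style network outputs) and their one-hot labels
preds = np.array([[0.1, 0.8, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.2, 0.2, 0.6]])
labels = np.array([[0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1]])

# argmax picks the most confident class per row; accuracy is the fraction that match
accuracy = np.mean(preds.argmax(axis=1) == labels.argmax(axis=1))
print(accuracy)  # -> 1.0, since every prediction matches its label
```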
|
AllenDowney/ModSimPy | examples/yoyo.ipynb | mit | # install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
"""
Explanation: Simulating a Yo-Yo
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
"""
from sympy import symbols, Eq, solve
T, a, alpha, I, m, g, r = symbols('T a alpha I m g r')
eq1 = Eq(a, -r * alpha)
eq1
eq2 = Eq(T - m * g, m * a)
eq2
eq3 = Eq(T * r, I * alpha)
eq3
soln = solve([eq1, eq2, eq3], [T, a, alpha])
soln[T]
soln[a]
soln[alpha]
"""
Explanation: Yo-yo
Suppose you are holding a yo-yo with a length of string wound around its axle, and you drop it while holding the end of the string stationary. As gravity accelerates the yo-yo downward, tension in the string exerts a force upward. Since this force acts on a point offset from the center of mass, it exerts a torque that causes the yo-yo to spin.
The following diagram shows the forces on the yo-yo and the resulting torque. The outer shaded area shows the body of the yo-yo. The inner shaded area shows the rolled up string, the radius of which changes as the yo-yo unrolls.
In this system, we can't figure out the linear and angular acceleration independently; we have to solve a system of equations:
$\sum F = m a $
$\sum \tau = I \alpha$
where the summations indicate that we are adding up forces and torques.
As in the previous examples, linear and angular velocity are related because of the way the string unrolls:
$\frac{dy}{dt} = -r \frac{d \theta}{dt} $
In this example, the linear and angular accelerations have opposite sign. As the yo-yo rotates counter-clockwise, $\theta$ increases and $y$, which is the length of the rolled part of the string, decreases.
Taking the derivative of both sides yields a similar relationship between linear and angular acceleration:
$\frac{d^2 y}{dt^2} = -r \frac{d^2 \theta}{dt^2} $
Which we can write more concisely:
$ a = -r \alpha $
This relationship is not a general law of nature; it is specific to scenarios like this where there is rolling without stretching or slipping.
Because of the way we've set up the problem, $y$ actually has two meanings: it represents the length of the rolled string and the height of the yo-yo, which decreases as the yo-yo falls. Similarly, $a$ represents acceleration in the length of the rolled string and the height of the yo-yo.
We can compute the acceleration of the yo-yo by adding up the linear forces:
$\sum F = T - mg = ma $
Where $T$ is positive because the tension force points up, and $mg$ is negative because gravity points down.
Because gravity acts on the center of mass, it creates no torque, so the only torque is due to tension:
$\sum \tau = T r = I \alpha $
Positive (upward) tension yields positive (counter-clockwise) angular acceleration.
Now we have three equations in three unknowns, $T$, $a$, and $\alpha$, with $I$, $m$, $g$, and $r$ as known parameters. We could solve these equations by hand, but we can also get SymPy to do it for us.
End of explanation
"""
Rmin = 8e-3 # m
Rmax = 16e-3 # m
Rout = 35e-3 # m
mass = 50e-3 # kg
L = 1 # m
g = 9.8 # m / s**2
"""
Explanation: The results are
$T = m g I / I^* $
$a = -m g r^2 / I^* $
$\alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
You can also see the derivation of these equations in this video.
We can use these equations for $a$ and $\alpha$ to write a slope function and simulate this system.
Exercise: Simulate the descent of a yo-yo. How long does it take to reach the end of the string?
Here are the system parameters:
End of explanation
"""
1 / (Rmax)
"""
Explanation: Rmin is the radius of the axle. Rmax is the radius of the axle plus rolled string.
Rout is the radius of the yo-yo body. mass is the total mass of the yo-yo, ignoring the string.
L is the length of the string.
g is the acceleration of gravity.
End of explanation
"""
I = mass * Rout**2 / 2
I
"""
Explanation: Based on these parameters, we can compute the moment of inertia for the yo-yo, modeling it as a solid cylinder with uniform density (see here).
In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
End of explanation
"""
k = (Rmax**2 - Rmin**2) / 2 / L
k
"""
Explanation: And we can compute k, which is the constant that determines how the radius of the spooled string decreases as it unwinds.
End of explanation
"""
init = State(theta=0, omega=0, y=L, v=0)
"""
Explanation: The state variables we'll use are angle, theta, angular velocity, omega, the length of the spooled string, y, and the linear velocity of the yo-yo, v.
Here is a State object with the the initial conditions.
End of explanation
"""
system = System(init=init, t_end=2)
"""
Explanation: And here's a System object with init and t_end (chosen to be longer than I expect for the yo-yo to drop 1 m).
End of explanation
"""
# Solution goes here
"""
Explanation: Write a slope function for this system, using these results from the book:
$ r = \sqrt{2 k y + R_{min}^2} $
$ T = m g I / I^* $
$ a = -m g r^2 / I^* $
$ \alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
End of explanation
"""
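One possible slope function, written here as a self-contained sketch (it repeats the parameter definitions from the cells above so it can run on its own, and assumes modsim's `slope_func(t, state, system)` calling convention):

```python
import numpy as np

# parameters from the cells above
Rmin = 8e-3   # m, radius of the axle
Rmax = 16e-3  # m, radius of axle plus rolled string
Rout = 35e-3  # m, radius of the yo-yo body
mass = 50e-3  # kg
L = 1         # m, length of the string
g = 9.8       # m / s**2
I = mass * Rout**2 / 2            # moment of inertia (solid cylinder)
k = (Rmax**2 - Rmin**2) / 2 / L   # how the spooled radius shrinks as y decreases

def slope_func(t, state, system):
    """Return (dtheta/dt, domega/dt, dy/dt, dv/dt) for the yo-yo."""
    theta, omega, y, v = state
    r = np.sqrt(2 * k * y + Rmin**2)  # current radius of the spooled string
    I_star = I + mass * r**2          # augmented moment of inertia
    a = -mass * g * r**2 / I_star
    alpha = mass * g * r / I_star
    return omega, alpha, v, a

print(slope_func(0, (0, 0, L, 0), None))  # approximately (0, 180.5, 0, -2.9)
```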
# Solution goes here
"""
Explanation: Test your slope function with the initial conditions.
The results should be approximately
0, 180.5, 0, -2.9
End of explanation
"""
# Solution goes here
"""
Explanation: Notice that the initial acceleration is substantially smaller than g because the yo-yo has to start spinning before it can fall.
Write an event function that will stop the simulation when y is 0.
End of explanation
"""
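One possible event function, using the same state layout as the slope function; with modsim's `run_solve_ivp`, the solver stops when the returned value crosses zero (so returning `y` stops the run when the string is fully unwound):

```python
def event_func(t, state, system):
    """Return the remaining length of spooled string; zero ends the simulation."""
    theta, omega, y, v = state
    return y
```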
# Solution goes here
"""
Explanation: Test your event function:
End of explanation
"""
# Solution goes here
"""
Explanation: Then run the simulation.
End of explanation
"""
# Solution goes here
"""
Explanation: Check the final state. If things have gone according to plan, the final value of y should be close to 0.
End of explanation
"""
results.theta.plot(color='C0', label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
"""
Explanation: How long does it take for the yo-yo to fall 1 m? Does the answer seem reasonable?
The following cells plot the results.
theta should increase and accelerate.
End of explanation
"""
results.y.plot(color='C1', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
"""
Explanation: y should decrease and accelerate down.
End of explanation
"""
results.v.plot(label='velocity', color='C3')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
"""
Explanation: Plot velocity as a function of time; is the acceleration constant?
End of explanation
"""
a = gradient(results.v)
a.plot(label='acceleration', color='C4')
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
"""
Explanation: We can use gradient to estimate the derivative of v. How does the acceleration of the yo-yo compare to g?
End of explanation
"""
r = np.sqrt(2*k*results.y + Rmin**2)
r.plot(label='radius')
decorate(xlabel='Time (s)',
ylabel='Radius of spooled thread (m)')
"""
Explanation: And we can use the formula for r to plot the radius of the spooled thread over time.
End of explanation
"""
|
robertoalotufo/ia898 | src/dftshift.ipynb | mit | import numpy as np
def dftshift(f):
import ia898.src as ia
return ia.ptrans(f, np.array(f.shape)//2)
"""
Explanation: Function dftshift
Synopse
Shifts zero-frequency component to center of spectrum.
g = dftshift(f)
OUTPUT
g: Image.
INPUT
f: Image. n-dimensional.
Description
The origin (0,0) of the DFT is normally at top-left corner of the image. For visualization
purposes, it is common to periodically translate the origin to the image center. This is
particularly interesting because of the complex conjugate symmetry of the DFT of a real function.
Note that as the image can have even or odd sizes, to translate the DFT back from the center to
the corner, there is another corresponding function: idftshift.
End of explanation
"""
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python dftshift.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
"""
Explanation: Examples
End of explanation
"""
if testing:
f = ia.circle([120,150],6,[60,75])
F = ia.dft(f)
Fs = ia.dftshift(F)
ia.adshow(ia.dftview(F))
ia.adshow(ia.dftview(Fs))
if testing:
F = np.array([[10+6j,20+5j,30+4j],
[40+3j,50+2j,60+1j]])
Fs = ia.dftshift(F)
print('Fs=\n',Fs)
"""
Explanation: Example 1
End of explanation
"""
if testing:
print('testing dftshift')
print(repr(ia.dftshift(np.array([[10+6j,20+5j,30+4j],
[40+3j,50+2j,60+1j]]))) ==
repr(np.array([[ 60.+1.j, 40.+3.j, 50.+2.j],
[ 30.+4.j, 10.+6.j, 20.+5.j]])))
"""
Explanation: Equation
$$ \begin{matrix}
HS &=& H_{x_o,y_o} \\
x_o &=& \lfloor W/2 \rfloor \\
y_o &=& \lfloor H/2 \rfloor
\end{matrix} $$
See Also
- iaptrans - Periodic translation
- iaifftshift - Undoes the translation of iafftshift
End of explanation
"""
|
reyadji/data-512-a1 | A1.ipynb | mit | import pprint
import requests
import json
# Global variables
pagecounts_url = 'https://wikimedia.org/api/rest_v1/metrics/legacy/{apiname}/aggregate/en.wikipedia.org/{access}/monthly/{start}/{end}'
pageviews_url = 'https://wikimedia.org/api/rest_v1/metrics/{apiname}/aggregate/en.wikipedia.org/{access}/{agent}/monthly/{start}/{end}'
my_github = 'reyadji'
my_email = 'adjir@uw.edu'
default_params = {
'apiname': 'pageviews',
'access': 'mobile-web',
'agent': 'spider',
'start': '2008010100',
'end': '2016060100'}
json_files = []
def wikipedia_query(params=default_params):
headers = {
'User-Agent': 'https://github.com/{}'.format(my_github),
'From': my_email}
url = ''
    if params['apiname'] == 'pageviews':
url = pageviews_url
params['start'] = '2015070100'
params['end'] = '2017100100'
    elif params['apiname'] == 'pagecounts':
url = pagecounts_url
params['start'] = '2008010100'
params['end'] = '2016080100'
r = requests.get(url.format(**params), headers=headers)
    # print('URL: {}'.format(r.url))
print('Response status code: {}'.format(r.status_code))
# print('Response JSON: {}'.format(pprint.pprint(r.json())))
return r
def store_to_json(params, r):
params['firstmonth'] = params['start'][:-2]
params['lastmonth'] = params['end'][:-2]
filename = '{apiname}_{access}_{firstmonth}-{lastmonth}.json'.format(**params)
with open(filename, 'w+') as f:
json.dump(r.json()['items'], f, indent=4)
json_files.append(filename)
def load_json(file):
with open(file, 'r') as f:
return json.load(f)
"""
Explanation: A1 Data Curation
The goal is to construct, analyze, and publish a dataset of monthly traffic on English Wikipedia from July 1 2008 - September 30 2017
The section below is to establish shared variables and library among different stages
End of explanation
"""
pc_desk_params = {
'apiname': 'pagecounts',
'access': 'desktop-site',
'agent': ''}
r = wikipedia_query(pc_desk_params)
store_to_json(pc_desk_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
pc_mob_params = {
'apiname': 'pagecounts',
'access': 'mobile-site',
'agent': ''}
r = wikipedia_query(pc_mob_params)
store_to_json(pc_mob_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
pv_desk_params = {
'apiname': 'pageviews',
'access': 'desktop',
'agent': 'user'}
r = wikipedia_query(pv_desk_params)
store_to_json(pv_desk_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
pv_mobapp_params = {
'apiname': 'pageviews',
'access': 'mobile-app',
'agent': 'user'}
r = wikipedia_query(pv_mobapp_params)
store_to_json(pv_mobapp_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
pv_mobweb_params = {
'apiname': 'pageviews',
'access': 'mobile-web',
'agent': 'user'}
r = wikipedia_query(pv_mobweb_params)
store_to_json(pv_mobweb_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
print('JSON files: P{}'.format(json_files))
"""
Explanation: Stage 1. Data Acquisition
Collect all months traffic data using two different Wikimedia REST API endpoints, Pagecounts and Pageviews, for both mobile and desktop (excluding spider bot).
Output is 5 JSON files:
1. pagecounts desktop
2. pagecounts mobile
3. pageview desktop
4. pageview mobile web
5. pageview mobile app
End of explanation
"""
import csv
import pandas as pd
csv_file = 'en-wikipedia_traffic_200801-201709.csv'
headers = [
'timestamp',
'year',
'month',
'pagecounts_all_views',
'pagecounts_desktop_views',
'pagecounts_mobile_views',
'pageviews_all_views',
'pageviews_desktop_views',
'pageviews_mobile_views']
def load_df(file):
apiname = file.split('_')[0]
accesstype = file.split('_')[1].split('-')[0]
column = apiname + '_' + accesstype + '_views'
with open(file, 'r') as f:
views = json.load(f)
data = pd.DataFrame.from_dict(views)
if apiname == 'pageviews':
data = data.drop(['access','agent','granularity','project'], axis=1)
data = data.rename(columns = {'views': column})
else:
data = data.drop(['access-site', 'granularity','project'], axis=1)
data = data.rename(columns = {'count': column})
return data
df = pd.DataFrame()
for i in json_files:
# Load json file to pandas dataframe
data = load_df(i)
if len(df) == 0:
df = data.copy(True)
else:
df = df.merge(data, on='timestamp', how='outer')
# Create year and month out of timestamp attribute
df = df.assign(year=df.timestamp.str[0:4])
df = df.assign(month=df.timestamp.str[4:6])
df.timestamp = df.timestamp.str[:-2]
# Combining two pageviews_mobile_views columns, one from mobile-app, and the other from mobile-web
df = df.assign(pageviews_mobile_views= lambda x: x.pageviews_mobile_views_x + x.pageviews_mobile_views_y)
df = df.drop(['pageviews_mobile_views_x', 'pageviews_mobile_views_y'], axis=1)
# Sum mobile and desktop to get all views
df = df.assign(pageviews_all_views= lambda x: x.pageviews_mobile_views + x.pageviews_desktop_views)
df = df.assign(pagecounts_all_views= lambda x: x.pagecounts_mobile_views + x.pagecounts_desktop_views)
df = df.fillna(value=0)
df.to_csv(csv_file, columns=headers, index = False)
"""
Explanation: The first step is to query each Wikipedia Rest API endpoint using $wikipedia_query$ function with different parameter combination. Each response then saved to a json file using $store_to_json$ function. Print statements are used to debug the number of items the response returns with.
Two JSON files from two extra queries (all views for each API) are also produced to help with data processing in the next stage.
Stage 2. Data Processing
Process these data files to prepare them for analysis by combining each JSON file into a single CSV file with these headers:
- year
- month
- pagecount_all_views
- pagecount_desktop_views
- pagecount_mobile_views
- pageview_all_views
- pageview_desktop_views
- pageview_mobile_views
End of explanation
"""
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
csv_file = 'en-wikipedia_traffic_200801-201709.csv'
# Read the CSV file to pandas dataframe in which the 'timestamp' column is the new index
df = pd.read_csv(csv_file, index_col=0, parse_dates=[0], infer_datetime_format=True)
# Drop year and month columns since it's not needed for plotting
df = df.drop(['year','month'], axis=1)
plt.figure()
df.plot()
plt.xlabel('datetime')
plt.ylabel('views (in 10 millions)')
plt.title('Wikipedia Traffic Data')
plt.legend()
plt.savefig('en-wikipedia_traffic_200801-201709.png')
plt.show()
"""
Explanation: There are two basic ways to combine the number of views from the pageviews and pagecounts APIs: using the pandas library or using Python's built-in libraries. I found it very difficult to pivot from the JSON files to year and month using only the built-in libraries. On the other hand, pandas provides a convenient way to munge the data. Each JSON file is loaded into a pandas dataframe, all of which are merged on timestamp into a single dataframe. Year and month are derived from the timestamp, and pageviews' mobile app and mobile web numbers are combined. Mobile and desktop views are summed to get pagecounts' and pageviews' all views. Finally, I replaced all non-existent values with 0 before saving the dataframe to a CSV file.
Stage 3. Analysis
Visualize the dataset as a time series graph. This will include mobile, desktop, and combined (mobile+desktop) traffic.
End of explanation
"""
|
StingraySoftware/notebooks | Modeling/ModelingExamples.ipynb | mit | %load_ext autoreload
%autoreload 2
# ignore warnings to make notebook easier to see online
# COMMENT OUT THESE LINES FOR ACTUAL ANALYSIS
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
import matplotlib.pyplot as plt
try:
import seaborn as sns
sns.set_palette("colorblind")
except ImportError:
print("Install seaborn. It help you make prettier figures!")
import numpy as np
from astropy.modeling import models
"""
Explanation: The Stingray Modeling API Explained
Some more in-depth explanations of how the Stingray modeling API works.
Who should be using this API?
Basically, anyone who wants to model power spectral products with parametric functions. The purpose of this API is two-fold:
(1) provide convenient methods and classes in order to model a large range of typical data representations implemented in Stingray
(2) provide a more general framework for users to build their own models
A note on terminology: in this tutorial, we largely use model to denote both the parametric model describing the underlying process that generated the data, and the statistical model used to account for uncertainties in the measurement process.
The modeling subpackage defines a wider range of classes for typical statistical models than most standard modelling packages in X-ray astronomy, including likelihoods for Gaussian-distributed uncertainties (what astronomers call the $\chi^2$ likelihood), Poisson-distributed data (e.g. light curves) and $\chi^2$-distributed data (confusingly, not what astronomers call the $\chi^2$ likelihood, but the likelihood of data with $\chi^2$-distributed uncertainties appropriate for power spectra). It also defines a superclass LogLikelihood that makes extending the framework to other types of data uncertainties straightforward. It supports Bayesian modelling via the Posterior class and its subclasses (for different types of data, equivalent to the likelihood classes) and provides support for defining priors.
The class ParameterEstimation and its data type-specific subclasses implement a range of operations usually done with power spectra and other products, including optimization (fitting), sampling (via Markov-Chain Monte Carlo), calibrating models comparison metrics (particularly likelihood ratio tests) and outlier statistics (for finding periodic signal candidates).
Overall, it is designed to be as modular as possible and extensible to new data types and problems in many places, though we do explicitly not aim to provide a fully general modelling framework (for example, at the moment, we have given no thought to modeling multi-variate data, though this may change in the future).
Some background
Modeling power spectra and light curves with parametric models is a fairly standard task. Stingray aims to make solving these problems as easy as possible.
We aim to integrate our existing code with astropy.modeling for maximum compatibility. Please note, however, that we are only using the models, not the fitting interface, which is too constrained for our purposes.
End of explanation
"""
g = models.Gaussian1D()
# Generate fake data
np.random.seed(0)
x = np.linspace(-5., 5., 200)
y = 3 * np.exp(-0.5 * (x - 1.3)**2 / 0.8**2)
y += np.random.normal(0., 0.2, x.shape)
yerr = 0.2
plt.figure(figsize=(8,5))
plt.errorbar(x, y, yerr=yerr, fmt='ko')
"""
Explanation: The models and API of astropy.modeling.models is explained in the astropy documentation in more detail.
Here's how you instantiate a simple 1-D Gaussian:
End of explanation
"""
# define power law component
pl = models.PowerLaw1D()
# fix x_0 of power law component
pl.x_0.fixed = True
# define constant
c = models.Const1D()
# make compound model
plc = pl + c
"""
Explanation: Likelihoods and Posteriors
In general, model fitting will happen either in a frequentist (Maximum Likelihood) or Bayesian framework. Stingray's strategy is to let the user define a posterior in both cases, but ignore the prior in the former case.
Let's first make some fake data:
End of explanation
"""
# parameters for fake data.
alpha = 2.0
amplitude = 5.0
white_noise = 2.0
"""
Explanation: We're going to pick some fairly standard parameters for our data:
End of explanation
"""
freq = np.linspace(0.01, 10.0, int(10.0/0.01))
"""
Explanation: And now a frequency array:
End of explanation
"""
from astropy.modeling.fitting import _fitter_to_model_params
_fitter_to_model_params(plc, [amplitude, alpha, white_noise])
psd_shape = plc(freq)
"""
Explanation: Now we can set the parameters in the model:
End of explanation
"""
powers = psd_shape*np.random.chisquare(2, size=psd_shape.shape[0])/2.0
"""
Explanation: As a last step, we need to add noise by picking from a chi-square distribution with 2 degrees of freedom:
End of explanation
"""
plt.figure(figsize=(12,7))
plt.loglog(freq, powers, ds="steps-mid", label="periodogram realization")
plt.loglog(freq, psd_shape, label="power spectrum")
plt.legend()
"""
Explanation: Let's plot the result:
End of explanation
"""
logmin = -1e16
class PSDLogLikelihood(object):
def __init__(self, freq, power, model, m=1):
"""
A Chi-square likelihood as appropriate for power spectral analysis.
Parameters
----------
freq : iterable
x-coordinate of the data
power : iterable
y-coordinte of the data
model: an Astropy Model instance
The model to use in the likelihood.
m : int
1/2 of the degrees of freedom, i.e. the number of powers
that were averaged to obtain the power spectrum input into
this routine.
"""
        self.x = freq # the x-coordinate of the data (frequency array)
        self.y = power # the y-coordinate of the data (powers)
self.model = model # an astropy.models instance
self.m = m
self.params = [k for k,l in self.model.fixed.items() if not l]
self.npar = len(self.params) # number of free parameters
def evaluate(self, pars, neg=False):
"""
Evaluate the log-likelihood.
Parameters
----------
pars : iterable
The list of parameters for which to evaluate the model.
neg : bool, default False
If True, compute the *negative* log-likelihood, otherwise
compute the *positive* log-likelihood.
Returns
-------
loglike : float
The log-likelihood of the model
"""
# raise an error if the length of the parameter array input into
# this method doesn't match the number of free parameters in the model
if np.size(pars) != self.npar:
raise Exception("Input parameters must" +
" match model parameters!")
# set parameters in self.model to the parameter set to be used for
# evaluation
_fitter_to_model_params(self.model, pars)
# compute the values of the model at the positions self.x
mean_model = self.model(self.x)
# if the power spectrum isn't averaged, compute simple exponential
# likelihood (chi-square likelihood for 2 degrees of freedom)
if self.m == 1:
loglike = -np.sum(np.log(mean_model)) - \
np.sum(self.y/mean_model)
# otherwise use chi-square distribution to compute likelihood
else:
loglike = -2.0*self.m*(np.sum(np.log(mean_model)) +
np.sum(self.y/mean_model) +
np.sum((2.0 / (2. * self.m) - 1.0) *
np.log(self.y)))
if not np.isfinite(loglike):
loglike = logmin
if neg:
return -loglike
else:
return loglike
def __call__(self, parameters, neg=False):
return self.evaluate(parameters, neg)
"""
Explanation: Maximum Likelihood Fitting
Let's assume we've observed this periodogram from our source. We would now like to estimate the parameters.
This requires the definition of a likelihood, which describes the probability of observing the data plotted above given some underlying model with a specific set of parameters. Put differently, the likelihood encodes what we know about the underlying model (here a power law and a constant) and the statistical properties of the data (power spectra generally follow a chi-square distribution), and allows us to compare data and model for various parameters given those statistical uncertainties.
In order to find the best parameter set, one generally maximizes the likelihood function using an optimization algorithm. Because optimization algorithms generally minimize functions, they effectively minimize the log-likelihood, which comes out to be the same as maximizing the likelihood itself.
Below is an implementation of the $\chi^2$ likelihood as appropriate for power spectral analysis, with comments for easier understanding. The same is also implemented in posterior.py in Stingray:
End of explanation
"""
from stingray import Powerspectrum
ps = Powerspectrum()
ps.freq = freq
ps.power = powers
ps.df = ps.freq[1] - ps.freq[0]
ps.m = 1
loglike = PSDLogLikelihood(ps.freq, ps.power, plc, m=ps.m)
test_pars = [1, 5, 100]
loglike(test_pars)
test_pars = [4.0, 10, 2.5]
loglike(test_pars)
test_pars = [2.0, 5.0, 2.0]
loglike(test_pars)
"""
Explanation: Let's make an object and see what it calculates if we put in different parameter sets. First, we have to make our sample PSD into an actual Powerspectrum object:
End of explanation
"""
from stingray.modeling import PSDLogLikelihood
loglike = PSDLogLikelihood(ps.freq, ps.power, plc, m=ps.m)
loglike(test_pars)
"""
Explanation: Something close to the parameters we put in should yield the largest log-likelihood. Feel free to play around with the test parameters to verify that this is true.
You can similarly import the PSDLogLikelihood class from stingray.modeling and do the same:
End of explanation
"""
from stingray.modeling import PSDParEst
parest = PSDParEst(ps, fitmethod="L-BFGS-B", max_post=False)
"""
Explanation: To estimate the parameters, we can use an optimization routine, such as those implemented in scipy.optimize.minimize.
We have wrapped some code around that, to make your lives easier. We will not reproduce the full code here, just demonstrate its functionality.
Now we can instantiate the PSDParEst (for PSD Parameter Estimation) object. This can do more than simply optimize a single model, but we'll get to that later.
The PSDParEst object allows one to specify the fit method to use (however, this must be one of the optimizers in scipy.optimize). The parameter max_post allows for doing maximum-a-posteriori fits on the Bayesian posterior rather than maximum likelihood fits (see below for more details). We'll set it to False for now, since we haven't defined any priors:
End of explanation
"""
loglike = PSDLogLikelihood(ps.freq, ps.power, plc, m=ps.m)
loglike.model.parameters
loglike.npar
starting_pars = [3.0, 1.0, 2.4]
res = parest.fit(loglike, starting_pars)
"""
Explanation: In order to fit a model, make an instance of the appropriate LogLikelihood or Posterior subclass, and simply call the fit method with that instance and the starting parameters you would like to fit.
End of explanation
"""
res.result
"""
Explanation: The result is an OptimizationResults object, which computes various summaries and useful quantities.
For example, here's the value of the likelihood function at the maximum the optimizer found:
End of explanation
"""
print(res.p_opt)
print(res.err)
"""
Explanation: Note: Optimizers routinely get stuck in local minima (corresponding to local maxima of the likelihood function). It is usually useful to run an optimizer several times with different starting parameters in order to get close to the global maximum.
Most useful are the estimates of the parameters at the maximum likelihood and their uncertainties:
End of explanation
"""
print("AIC: " + str(res.aic))
print("BIC: " + str(res.bic))
"""
Explanation: Note: uncertainties are estimated here via the covariance matrix between parameters, i.e. the inverse of the Hessian at the maximum. This only represents the true uncertainties for specific assumptions about the likelihood function (Gaussianity), so use with care!
It also computes Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) for model comparison purposes:
End of explanation
"""
plt.figure(figsize=(12,8))
plt.loglog(ps.freq, psd_shape, label="true power spectrum",lw=3)
plt.loglog(ps.freq, ps.power, label="simulated data")
plt.loglog(ps.freq, res.mfit, label="best fit", lw=3)
plt.legend()
"""
Explanation: Finally, it also produces the values of the mean function for the parameters at the maximum. Let's plot that and compare with the power spectrum we put in:
End of explanation
"""
res.print_summary(loglike)
"""
Explanation: That looks pretty good!
You can print a summary of the fitting results by calling print_summary:
End of explanation
"""
# broken power law model
bpl = models.BrokenPowerLaw1D()
# add constant
bplc = bpl + c
bplc.param_names
# define starting parameters
bplc_start_pars = [2.0, 1.0, 3.0, 1.0, 2.5]
loglike_bplc = PSDLogLikelihood(ps.freq, ps.power, bplc, m=ps.m)
lrt, plc_opt, bplc_opt = parest.compute_lrt(loglike, starting_pars, loglike_bplc, bplc_start_pars)
print("Likelihood Ratio: " + str(lrt))
"""
Explanation: Likelihood Ratios
The parameter estimation code does more than act as a simple wrapper around scipy.optimize. For example, it allows for easy computation of likelihood ratios. Likelihood ratios are a standard way to perform comparisons between two models (though they are not always statistically meaningful, and should be used with caution!).
To demonstrate that, let's make a broken power law model
End of explanation
"""
from stingray.modeling import PSDPosterior
lpost = PSDPosterior(ps.freq, ps.power, plc, m=ps.m)
"""
Explanation: Bayesian Parameter Estimation
For Bayesian parameter estimation, we require a prior along with the likelihood defined above. Together, they form the posterior, the probability of the parameters given the data, which is what we generally want to compute in science.
Since there are no universally accepted priors for a model (they depend on the problem at hand and your physical knowledge about the system), they cannot be easily hard-coded in stingray. Consequently, setting priors is slightly more complex.
Analogously to the LogLikelihood above, we can also define a Posterior object. Each posterior object has three methods: logprior, loglikelihood and logposterior.
We have pre-defined some Posterior objects in posterior.py for common problems, including power spectral analysis. We start by making a PSDPosterior object:
End of explanation
"""
import scipy.stats
# flat prior for the power law index
p_alpha = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
# flat prior for the power law amplitude
p_amplitude = lambda amplitude: ((0.01 <= amplitude) & (amplitude <= 10.0))
# normal prior for the white noise parameter
p_whitenoise = lambda white_noise: scipy.stats.norm(2.0, 0.1).pdf(white_noise)
priors = {}
priors["alpha_0"] = p_alpha
priors["amplitude_0"] = p_amplitude
priors["amplitude_1"] = p_whitenoise
"""
Explanation: The priors are set as a dictionary of functions:
End of explanation
"""
from stingray.modeling import set_logprior
lpost.logprior = set_logprior(lpost, priors)
"""
Explanation: There's a function set_logprior in stingray.modeling that sets the prior correctly:
End of explanation
"""
lpost = PSDPosterior(ps.freq, ps.power, plc, priors=priors, m=ps.m)
"""
Explanation: You can also set the priors when you instantiate the posterior object:
End of explanation
"""
test_pars = [1.0, 2.0, 4.0]
print("log-prior: " + str(lpost.logprior(test_pars)))
print("log-likelihood: " + str(lpost.loglikelihood(test_pars)))
print("log-posterior: " + str(lpost(test_pars)))
"""
Explanation: Much like before with the log-likelihood, we can now also compute the log-posterior for various test parameter sets:
End of explanation
"""
test_pars = [6, 6, 3.0]
print("log-prior: " + str(lpost.logprior(test_pars)))
print("log-likelihood: " + str(lpost.loglikelihood(test_pars)))
print("log-posterior: " + str(lpost(test_pars)))
test_pars = [5.0, 2.0, 2.0]
print("log-prior: " + str(lpost.logprior(test_pars)))
print("log-likelihood: " + str(lpost.loglikelihood(test_pars)))
print("log-posterior: " + str(lpost(test_pars)))
"""
Explanation: When the prior is zero (so the log-prior is -infinity), it automatically gets set to a very small value in order to avoid problems when doing the optimization:
End of explanation
"""
parest = PSDParEst(ps, fitmethod='BFGS', max_post=True)
res = parest.fit(lpost, starting_pars)
print("best-fit parameters:")
for p,e in zip(res.p_opt, res.err):
print("%.4f +/- %.4f"%(p,e))
"""
Explanation: We can do the same parameter estimation as above, except now it's called maximum-a-posteriori instead of maximum likelihood and includes the prior (notice we set max_post=True):
End of explanation
"""
res.print_summary(lpost)
"""
Explanation: The same outputs exist as for the Maximum Likelihood case:
End of explanation
"""
sample = parest.sample(lpost, res.p_opt, cov=res.cov, nwalkers=400,
niter=100, burnin=300, namestr="psd_modeling_test")
"""
Explanation: Unlike in the maximum likelihood case, we can also sample from the posterior probability distribution. The method sample uses the emcee package to do MCMC.
Important: Do not sample from the likelihood function. This is formally incorrect and can lead to incorrect inferences about the problem, because there is no guarantee that a posterior with improper (flat, infinite) priors will be bounded!
Important: emcee has had a major upgrade to version 3, which came with a number of API changes. To ensure compatibility with stingray, please update emcee to the latest version, if you haven't already.
Much like the optimizer, the sampling method requires a model and a set of starting parameters t0. Optionally, it can be useful to also input a covariance matrix, for example from the output of the optimizer.
Finally, the user should specify the number of walkers as well as the number of steps to use for both burn-in and sampling:
End of explanation
"""
sample.acceptance
"""
Explanation: The sampling method returns an object with various attributes that are useful for further analysis, for example the acceptance fraction:
End of explanation
"""
sample.mean
sample.ci
"""
Explanation: Or the mean and confidence intervals of the parameters:
End of explanation
"""
sample.print_results()
"""
Explanation: The method print_results prints the results:
End of explanation
"""
fig = sample.plot_results(nsamples=1000, fig=None, save_plot=True,
filename="modeling_tutorial_mcmc_corner.pdf")
"""
Explanation: Similarly, the method plot_results produces a bunch of plots:
End of explanation
"""
import copy
def _generate_model(lpost, pars):
"""
Helper function that generates a fake PSD similar to the
one in the data, but with different parameters.
Parameters
----------
lpost : instance of a Posterior or LogLikelihood subclass
The object containing the relevant information about the
data and the model
pars : iterable
        A list of parameters to be passed to lpost.model in order
to generate a model data set.
Returns:
--------
model_data : numpy.ndarray
An array of model values for each bin in lpost.x
"""
# get the model
m = lpost.model
# reset the parameters
_fitter_to_model_params(m, pars)
# make a model spectrum
model_data = lpost.model(lpost.x)
return model_data
def _generate_psd(ps, lpost, pars):
"""
Generate a fake power spectrum from a model.
Parameters:
----------
lpost : instance of a Posterior or LogLikelihood subclass
The object containing the relevant information about the
data and the model
pars : iterable
        A list of parameters to be passed to lpost.model in order
to generate a model data set.
Returns:
--------
sim_ps : stingray.Powerspectrum object
The simulated Powerspectrum object
"""
model_spectrum = _generate_model(lpost, pars)
# use chi-square distribution to get fake data
model_powers = model_spectrum*np.random.chisquare(2*ps.m,
size=model_spectrum.shape[0])/(2.*ps.m)
sim_ps = copy.copy(ps)
    sim_ps.power = model_powers
return sim_ps
def _compute_pvalue(obs_val, sim):
"""
Compute the p-value given an observed value of a test statistic
and some simulations of that same test statistic.
Parameters
----------
    obs_val : float
The observed value of the test statistic in question
sim: iterable
A list or array of simulated values for the test statistic
Returns
-------
pval : float [0, 1]
The p-value for the test statistic given the simulations.
"""
# cast the simulations as a numpy array
sim = np.array(sim)
# find all simulations that are larger than
# the observed value
ntail = sim[sim > obs_val].shape[0]
# divide by the total number of simulations
pval = ntail/sim.shape[0]
return pval
def calibrate_lrt(ps, lpost1, t1, lpost2, t2, sample=None, neg=True, max_post=False,
nsim=1000, niter=200, nwalker=500, burnin=200, namestr="test"):
# set up the ParameterEstimation object
parest = PSDParEst(ps, fitmethod="L-BFGS-B", max_post=False)
# compute the observed likelihood ratio
lrt_obs, res1, res2 = parest.compute_lrt(lpost1, t1,
lpost2, t2,
neg=neg,
max_post=max_post)
# simulate parameter sets from the simpler model
if not max_post:
# using Maximum Likelihood, so I'm going to simulate parameters
# from a multivariate Gaussian
# set up the distribution
mvn = scipy.stats.multivariate_normal(mean=res1.p_opt, cov=res1.cov)
# sample parameters
s_all = mvn.rvs(size=nsim)
else:
if sample is None:
# sample the posterior using MCMC
            sample = parest.sample(lpost1, res1.p_opt, cov=res1.cov,
                                   nwalkers=nwalker, niter=niter,
                                   burnin=burnin, namestr=namestr)
        # pick nsim samples out of the posterior sample
        # (parest.sample returns a SamplingResults object; the chain lives in .samples)
        s_all = sample.samples[np.random.choice(sample.samples.shape[0], nsim, replace=False)]
lrt_sim = np.zeros(nsim)
# now I can loop over all simulated parameter sets to generate a PSD
for i,s in enumerate(s_all):
# generate fake PSD
sim_ps = _generate_psd(ps, lpost1, s)
# make LogLikelihood objects for both:
if not max_post:
sim_lpost1 = PSDLogLikelihood(sim_ps.freq, sim_ps.power,
model=lpost1.model, m=sim_ps.m)
sim_lpost2 = PSDLogLikelihood(sim_ps.freq, sim_ps.power,
model=lpost2.model, m=sim_ps.m)
else:
# make a Posterior object
sim_lpost1 = PSDPosterior(sim_ps.freq, sim_ps.power,
lpost1.model, m=sim_ps.m)
sim_lpost1.logprior = lpost1.logprior
sim_lpost2 = PSDPosterior(sim_ps.freq, sim_ps.power,
lpost2.model, m=sim_ps.m)
sim_lpost2.logprior = lpost2.logprior
parest_sim = PSDParEst(sim_ps, max_post=max_post)
lrt_sim[i], _, _ = parest_sim.compute_lrt(sim_lpost1, t1,
sim_lpost2, t2,
neg=neg,
max_post=max_post)
# now I can compute the p-value:
pval = _compute_pvalue(lrt_obs, lrt_sim)
return pval
pval = calibrate_lrt(ps, loglike, starting_pars,
loglike_bplc, bplc_start_pars,
max_post=False, nsim=100)
print("The p-value for rejecting the simpler model is: " + str(pval))
"""
Explanation: Calibrating Likelihood Ratio Tests
In order to use likelihood ratio tests for model comparison, one must compute the p-value of obtaining a likelihood ratio at least as high as the observed one, given that the null hypothesis (the simpler model) is true. The distribution of likelihood ratios under that assumption follows a known analytical distribution only if
* the models are nested, i.e. the simpler model is a special case of the more complex model, and
* the parameter values that transform the complex model into the simple one do not lie on the boundary of parameter space.
Imagine e.g. a simple model without a QPO and a complex model with a QPO, where in order to recover the simpler model from the more complex one you would set the QPO amplitude to zero. However, the amplitude cannot go below zero, so the critical parameter value that transforms the complex model into the simple one lies on the boundary of parameter space.
If these two conditions are not met, the observed likelihood ratio must be calibrated via simulations of the simpler model. In general, one should not simulate from the best-fit model alone: this ignores the uncertainty in the model parameters, and may thus artificially inflate the significance of the result.
In the purely frequentist (maximum likelihood) case, one does not know the shape of the probability distribution for the parameters. A rough approximation can be obtained by assuming the likelihood surface to be a multivariate Gaussian, with covariances given by the inverse Fisher information. One may sample from that distribution, simulate fake data sets using the sampled parameters, and fit each simulated data set with both models to compute a likelihood ratio. The resulting distribution of likelihood ratios under the simpler model is then compared to the observed likelihood ratio.
In the Bayesian case, one may sample from the posterior for the parameters directly and then use these samples as above to create fake data sets in order to derive a posterior probability distribution for the likelihood ratios and thus a posterior predictive p-value.
For the statistical background of much of this, see Protassov et al, 2002.
Below, we set up code that will do exactly that, for both the frequentist and Bayesian case.
End of explanation
"""
import scipy.stats
# flat prior for the power law indices
p_alpha1 = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
p_alpha2 = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
# flat prior for the break frequency
p_x_break = lambda xbreak: ((0.01 <= xbreak) & (10.0 >= xbreak))
# flat prior for the power law amplitude
p_amplitude = lambda amplitude: ((0.01 <= amplitude) & (amplitude <= 10.0))
# normal prior for the white noise parameter
p_whitenoise = lambda white_noise: scipy.stats.norm(2.0, 0.1).pdf(white_noise)
priors = {}
priors["alpha_1_0"] = p_alpha
priors["alpha_2_0"] = p_alpha
priors["amplitude_0"] = p_amplitude
priors["amplitude_1"] = p_whitenoise
priors["x_break_0"] = p_x_break
"""
Explanation: As expected, the p-value for rejecting the powerlaw model is fairly large: since we simulated from that model, we would be surprised if it generated a small p-value, causing us to reject this model. (Note, however, that if the null hypothesis is true, the p-value is uniformly distributed between 0 and 1; by definition, then, you will get a p-value less than or equal to 0.01 in roughly one out of a hundred cases.)
We can do the same with the Bayesian model, in which case the result is called a posterior predictive p-value, which, in turn, is often used in posterior model checking (not yet implemented!).
We have not yet defined a PSDPosterior object for the bent power law model, so let's do that. First, let's define some priors:
End of explanation
"""
lpost_bplc = PSDPosterior(ps.freq, ps.power, bplc, priors=priors, m=ps.m)
lpost_bplc(bplc_start_pars)
"""
Explanation: Now we can set up the PSDPosterior object:
End of explanation
"""
pval = calibrate_lrt(ps, lpost, starting_pars,
lpost_bplc, bplc_start_pars,
sample=sample.samples,
max_post=True, nsim=100)
print("The posterior predictive p-value is: p = " + str(pval))
"""
Explanation: Now we can compute the posterior predictive p-value. Since we've already sampled from the simple model, we can pass that sample to the calibrate_lrt function in order to cut down on computation time (if the keyword sample is not given, it will automatically run MCMC):
End of explanation
"""
from stingray.modeling import PSDParEst
parest = PSDParEst(ps, fitmethod="BFGS")
pval = parest.calibrate_lrt(lpost, starting_pars, lpost_bplc, bplc_start_pars,
sample=sample.samples, nsim=100, max_post=True, seed=200)
print(pval)
"""
Explanation: Again, we find that the p-value does not suggest rejecting the powerlaw model.
Of course, a slightly modified version is implemented in stingray as a method of the PSDParEst class:
End of explanation
"""
# compute highest outlier in the data, and the frequency and index
# where that power occurs
max_power, max_freq, max_ind = parest._compute_highest_outlier(lpost, res)
max_power
pval = parest.calibrate_highest_outlier(lpost, starting_pars, sample=sample,
max_post=True,
nsim=100, niter=200, nwalkers=500,
burnin=200, namestr="test")
pval
"""
Explanation: Bayesian-ish QPO Searches
When searching for quasi-periodic oscillations (QPOs) in light curves that are not constant (for example because they are bursts or have other types of variability), one must take care that the variable background is accurately modelled (most standard tools assume that the light curve is constant).
In Vaughan et al, 2010, a method was introduced to search for QPOs in the presence of red noise (stochastic variability), and in Huppenkothen et al, 2013 it was extended to magnetar bursts, and in Inglis et al, 2015 and Inglis et al, 2016 a similar approach was used to find QPOs in solar flares.
Based on a model for the broadband spectral noise, the algorithm finds the highest outlier in a test statistic based on the data-model residuals (under the assumption that if the broadband model is correct, the test statistic $T_R = \max_j(2 D_j/m_j)$ for $j$ power spectral bins with powers $D_j$ and model powers $m_j$ will be distributed following a $\chi^2$ distribution with two degrees of freedom). The observed test statistic $T_R$ is then compared to a theoretical distribution based on simulated power spectra without an outlier in order to compute a posterior predictive p-value as above for the likelihood ratio.
Since the concept is very similar to that above, we do not show the full code here. Instead, the p-value can be calculated using the method calibrate_highest_outlier, which belongs to the PSDParEst class:
End of explanation
"""
from stingray import Powerspectrum
m = 1
nfreq = 100000
freq = np.linspace(1, 1000, nfreq)
np.random.seed(100) # set the seed for the random number generator
noise = np.random.exponential(size=nfreq)
model = models.PowerLaw1D() + models.Const1D()
model.x_0_0.fixed = True
alpha_0 = 2.0
amplitude_0 = 100.0
amplitude_1 = 2.0
model.alpha_0 = alpha_0
model.amplitude_0 = amplitude_0
model.amplitude_1 = amplitude_1
p = model(freq)
power = noise * p
ps = Powerspectrum()
ps.freq = freq
ps.power = power
ps.m = m
ps.df = freq[1] - freq[0]
ps.norm = "leahy"
"""
Explanation: Convenience Functions
For convenience, we have implemented some simple functions to reduce overhead with having to instantiate objects of the various classes.
Note that these convenience functions use similar approaches and guesses in all cases; this might work for some simple quicklook analysis, but when preparing publication-ready results, one should approach the analysis with more care and make sure the options chosen are appropriate for the problem at hand.
Fitting a power spectrum with some model
The code above allows for a lot of freedom in building an appropriate model for your application. However, in everyday life, one might occasionally want to do a quick fit for various applications, without having to go too much into details. Below is a convenience function written for exactly that purpose.
Please note that while this aims to use reasonable defaults, this is unlikely to produce publication-ready results!
So let's fit a power law and a constant to some data, which we'll create below:
End of explanation
"""
plt.figure()
plt.loglog(ps.freq, ps.power, ds="steps-mid", lw=2, color="black")
"""
Explanation: What does this data set look like?
End of explanation
"""
from stingray.modeling import PSDLogLikelihood, PSDPosterior, PSDParEst
def fit_powerspectrum(ps, model, starting_pars, max_post=False, priors=None,
fitmethod="L-BFGS-B"):
if priors:
        lpost = PSDPosterior(ps.freq, ps.power, model, priors=priors, m=ps.m)
else:
lpost = PSDLogLikelihood(ps.freq, ps.power, model, m=ps.m)
parest = PSDParEst(ps, fitmethod=fitmethod, max_post=max_post)
res = parest.fit(lpost, starting_pars, neg=True)
return parest, res
"""
Explanation: In order to fit this, we'll write a convenience function that can take the power spectrum, a model, some starting parameters and just run with it:
End of explanation
"""
model_to_test = models.PowerLaw1D() + models.Const1D()
model_to_test.x_0_0.fixed = True
"""
Explanation: Let's see if it works. We've already defined our model above, but to be explicit, let's define it again:
End of explanation
"""
t0 = [80, 1.5, 2.5]
parest, res = fit_powerspectrum(ps, model_to_test, t0)
res.p_opt
"""
Explanation: Now we just need some starting parameters:
End of explanation
"""
plt.figure()
plt.loglog(ps.freq, ps.power, ds="steps-mid", lw=2, color="black")
plt.plot(ps.freq, res.mfit, lw=3, color="red")
"""
Explanation: Looks like it worked! Let's plot the result, too:
End of explanation
"""
from stingray.modeling.scripts import fit_powerspectrum
parest, res = fit_powerspectrum(ps, model_to_test, t0)
res.p_opt
"""
Explanation: You can find the function in the scripts sub-module:
End of explanation
"""
l = models.Lorentz1D
l.param_names
def fit_lorentzians(ps, nlor, starting_pars, fit_whitenoise=True, max_post=False, priors=None,
fitmethod="L-BFGS-B"):
model = models.Lorentz1D()
if nlor > 1:
for i in range(nlor-1):
model += models.Lorentz1D()
if fit_whitenoise:
model += models.Const1D()
parest = PSDParEst(ps, fitmethod=fitmethod, max_post=max_post)
    if priors:
        lpost = PSDPosterior(ps.freq, ps.power, model, priors=priors, m=ps.m)
    else:
        lpost = PSDLogLikelihood(ps.freq, ps.power, model, m=ps.m)
res = parest.fit(lpost, starting_pars, neg=True)
return parest, res
"""
Explanation: Fitting Lorentzians
Fitting Lorentzians to power spectra is a routine task for most astronomers working with power spectra, hence there is a function that can produce either Maximum Likelihood or Maximum-A-Posteriori fits of the data.
End of explanation
"""
np.random.seed(400)
nlor = 3
x_0_0 = 0.5
x_0_1 = 2.0
x_0_2 = 7.5
amplitude_0 = 150.0
amplitude_1 = 50.0
amplitude_2 = 15.0
fwhm_0 = 0.1
fwhm_1 = 1.0
fwhm_2 = 0.5
whitenoise = 2.0
model = models.Lorentz1D(amplitude_0, x_0_0, fwhm_0) + \
models.Lorentz1D(amplitude_1, x_0_1, fwhm_1) + \
models.Lorentz1D(amplitude_2, x_0_2, fwhm_2) + \
models.Const1D(whitenoise)
p = model(ps.freq)
noise = np.random.exponential(size=len(ps.freq))
power = p*noise
plt.figure()
plt.loglog(ps.freq, power, lw=1, ds="steps-mid", c="black")
plt.loglog(ps.freq, p, lw=3, color="red")
"""
Explanation: Let's make a dataset so we can test it!
End of explanation
"""
import copy
ps_new = copy.copy(ps)
ps_new.power = power
"""
Explanation: Let's make this into a Powerspectrum object:
End of explanation
"""
t0 = [150, 0.4, 0.2, 50, 2.3, 0.6, 20, 8.0, 0.4, 2.1]
parest, res = fit_lorentzians(ps_new, nlor, t0)
"""
Explanation: So now we can fit this model with our new function, but first, we need to define the starting parameters for our fit. The starting parameters will be [amplitude, x_0, fwhm] for each component plus the white noise component at the end:
End of explanation
"""
res.p_opt
"""
Explanation: Let's look at the output:
End of explanation
"""
parest.plotfits(res, save_plot=False, namestr="lorentzian_test")
"""
Explanation: Cool, that seems to work! For convenience PSDParEst also has a plotting function:
End of explanation
"""
from stingray.modeling import fit_lorentzians
parest, res = fit_lorentzians(ps_new, nlor, t0)
res.p_opt
"""
Explanation: The function exists in the library as well for ease of use:
End of explanation
"""
|
materialsproject/mapidoc | example_notebooks/Programmatically Access Materials Project Electrolyte Genome Data.ipynb | bsd-3-clause |
urlpattern = {
"results": "https://materialsproject.org/molecules/results?query={spec}",
"mol_json": "https://materialsproject.org/molecules/{mol_id}/json",
"mol_svg": "https://materialsproject.org/molecules/{mol_id}/svg",
"mol_xyz": "https://materialsproject.org/molecules/{mol_id}/xyz",
}
"""
Explanation: Programmatically Access Materials Project Electrolyte Genome Data
Donny Winston and Xiaohui Qu<br>
Created: November 18, 2015<br>
Last Update: April 19, 2018
This notebook documents URL patterns to access Electrolyte Genome data and provides examples of access using the Python requests library.
If you have questions, please contact the Materials Project team. Contact information is available at https://materialsproject.org.
URL patterns
There is one way to query for results given search criteria (results), and there are a few ways to obtain data for individual molecules, either in full with metadata (json) or simply the structure for display (svg) or analysis (xyz). Below are the four corresponding URL patterns.
End of explanation
"""
import json
import os
import sys
if sys.version_info[0] == 2:
from urllib import quote_plus
else:
from urllib.parse import quote_plus
import requests
# Ensure you have an API key, which is located on your dashboard
# (https://materialsproject.org/dashboard).
MAPI_KEY = "fAkEaP1K4y" # <-- replace with your api key
# Please do NOT share a notebook with others with your API key hard-coded in it.
# One alternative: Load API key from a set environment variable, e.g.
#
# MAPI_KEY = os.environ['PMG_MAPI_KEY']
#
# Best alternative: Store and load API key using pymatgen, e.g.
### Do once, on command line (without "!" in front) or in notebook
# !pmg config --add PMG_MAPI_KEY "your_api_key_goes_here"
### Then, in notebook/script:
# from pymatgen import SETTINGS
# MAPI_KEY = SETTINGS.get("PMG_MAPI_KEY")
"""
Explanation: Setup
End of explanation
"""
# Here is a function we'll use to get results. We'll walk though some examples that use it.
def get_results(spec, fields=None):
"""Take a specification document (a `dict`), and return a list of matching molecules.
"""
# Stringify `spec`, ensure the string uses double quotes, and percent-encode it...
str_spec = quote_plus(str(spec).replace("'", '"'))
# ...because the spec is the value of a "query" key in the final URL.
url = urlpattern["results"].format(spec=str_spec)
return (requests.get(url, headers={'X-API-KEY': MAPI_KEY})).json()
# Find molecules containing oxygen and phosphorous,
# and collect the ionization energies (relative to a lithium electrode) of the results.
# Separate elements with a "-"
spec = {"elements": "O-P"}
results = get_results(spec)
# Not all molecules have data for all available properties
ionization_energies = [molecule["IE"] for molecule in results if "IE" in molecule]
# Molecules with ionization energies ("IE") will have oxidation potentials relative to metallic electrodes,
# available as "oxidation_<ELECTRODE>" keys. "IE" itself is relative to lithium.
# There is an analogous relationship between the presence of electron affinity ("EA") values
# and corresponding "reduction_<ELECTRODE>" keys for reduction potentials using a reference metal.
# `task_id` is the molecule's identifier, which we'll use later in this notebook.
# `MW` is molecular weight
# `smiles`: https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system
for key in results[0]:
print(key)
# A "silly" example specification that demonstrates many keys available to query, and
# the expected format of their value specifications.
#
# The "$"-prefixed keys are MongoDB syntax (https://docs.mongodb.org/manual/reference/operator/query/).
spec = {
"elements": "C-H-O-F",
"notelements": ["Al", "Br"], # a list (inconsistent for now with "elements" -- sorry)
"charge": {"$in": [0, -1]}, # {0, 1, -1}
"pointgroup": "C1",
"functional_groups": {"$in": ["-COOH"]},
"base_molecule": {"$in": ["s3"]},
"nelements": 4,
"EA": {"$gte": 0.4}, # >= 0.4
"IE": {"$lt": 5}, # < 5
"formula": "H11 C11 O4 F1", # "H11C11O4F" works too
}
results = get_results(spec)
"""
Explanation: Getting a set of molecules
End of explanation
"""
results = get_results({})
print("{} molecules in total right now".format(len(results)))
"""
Explanation: What if we just want "everything"? Let's use an empty spec.
End of explanation
"""
def get_molecule(mol_id, fmt='json'):
url = urlpattern["mol_" + fmt].format(mol_id=mol_id)
response = requests.get(url, headers={'X-API-KEY': MAPI_KEY})
if fmt == 'json':
return response.json()
else:
return response.content
first_result = results[0]
mol_id = first_result['task_id']
print("ID: {}".format(mol_id))
# Get all data by default
molecule = get_molecule(mol_id)
print("There are {} key/value pairs in molecule {}. Have a look around!".format(len(molecule), mol_id))
# The SVG format provides a two-dimensional "pretty picture" of the molecular structure.
svg_of_molecule = get_molecule(mol_id, fmt='svg')
with open('molecule.svg', 'wb') as f:  # response.content is bytes in Python 3
    f.write(svg_of_molecule)
print("scalable vector graphic saved")
# The XYZ representation provided is the optimized geometry of the molecule in a charge-neutral state.
xyz_of_molecule = get_molecule(mol_id, fmt='xyz')
with open('molecule.xyz', 'wb') as f:  # response.content is bytes in Python 3
    f.write(xyz_of_molecule)
print("XYZ file saved. Can load into molecule-viewer software.")
"""
Explanation: The above request might take some time, but hopefully not much more than a few seconds. Why do we allow this? Well, we don't return all the data for each molecule, and the total size of what we send right now is less than 10 MB.
As our collection of molecules grows in size, this policy may change. So, please use targeted query specifications to get the results you need, especially if you want to periodically check for new molecules that meet some specification.
Getting data for individual molecules
You can get all data for a molecule given its ID.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_resample.ipynb | bsd-3-clause |
# Authors: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD (3-clause)
from matplotlib import pyplot as plt
import mne
from mne.datasets import sample
"""
Explanation: Resampling data
When performing experiments where timing is critical, a signal with a high
sampling rate is desired. However, having a signal with a much higher sampling
rate than is necessary needlessly consumes memory and slows down computations
operating on the data.
This example downsamples from 600 Hz to 100 Hz. This achieves a 6-fold
reduction in data size, at the cost of an equal loss of temporal resolution.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(raw_fname).crop(120, 240).load_data()
"""
Explanation: Setting up data paths and loading raw data (skip some data for speed)
End of explanation
"""
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=2, tmin=-0.1, tmax=0.8, preload=True)
# Downsample to 100 Hz
print('Original sampling rate:', epochs.info['sfreq'], 'Hz')
epochs_resampled = epochs.copy().resample(100, npad='auto')
print('New sampling rate:', epochs_resampled.info['sfreq'], 'Hz')
# Plot a piece of data to see the effects of downsampling
plt.figure(figsize=(7, 3))
n_samples_to_plot = int(0.5 * epochs.info['sfreq']) # plot 0.5 seconds of data
plt.plot(epochs.times[:n_samples_to_plot],
epochs.get_data()[0, 0, :n_samples_to_plot], color='black')
n_samples_to_plot = int(0.5 * epochs_resampled.info['sfreq'])
plt.plot(epochs_resampled.times[:n_samples_to_plot],
epochs_resampled.get_data()[0, 0, :n_samples_to_plot],
'-o', color='red')
plt.xlabel('time (s)')
plt.legend(['original', 'downsampled'], loc='best')
plt.title('Effect of downsampling')
mne.viz.tight_layout()
"""
Explanation: Since downsampling reduces the timing precision of events, we recommend
first extracting epochs and downsampling the Epochs object:
End of explanation
"""
# Resample to 300 Hz
raw_resampled = raw.copy().resample(300, npad='auto')
"""
Explanation: When resampling epochs is unwanted or impossible, for example when the data
doesn't fit into memory or your analysis pipeline doesn't involve epochs at
all, the alternative approach is to resample the continuous data. This
can only be done on loaded or pre-loaded data.
End of explanation
"""
print('Number of events before resampling:', len(mne.find_events(raw)))
# Resample to 100 Hz (generates warning)
raw_resampled = raw.copy().resample(100, npad='auto')
print('Number of events after resampling:',
len(mne.find_events(raw_resampled)))
# To avoid losing events, jointly resample the data and event matrix
events = mne.find_events(raw)
raw_resampled, events_resampled = raw.copy().resample(
100, npad='auto', events=events)
print('Number of events after resampling:', len(events_resampled))
"""
Explanation: Because resampling also affects the stim channels, some trigger onsets might
be lost in this case. While MNE attempts to downsample the stim channels in
an intelligent manner to avoid this, the recommended approach is to find
events on the original data before downsampling.
End of explanation
"""
|
dvirsamuel/MachineLearningCourses | Visual Recognision - Stanford/assignment2/Dropout.ipynb | gpl-3.0 |
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
"""
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print 'Running tests with p = ', p
print 'Mean of input: ', x.mean()
print 'Mean of train-time output: ', out.mean()
print 'Mean of test-time output: ', out_test.mean()
print 'Fraction of train-time output set to zero: ', (out == 0).mean()
print 'Fraction of test-time output set to zero: ', (out_test == 0).mean()
print
# p = probability of dropping a neuron.
# so, bigger p -> more dropout, smaller p -> less dropout
"""
Explanation: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
End of explanation
"""
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print 'dx relative error: ', rel_error(dx, dx_num)
"""
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
End of explanation
"""
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print 'Running check with dropout = ', dropout
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
print
"""
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specificially, if the constructor the the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
"""
# Train two identical nets, one with dropout and one without
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print dropout
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
End of explanation
"""
|
smorton2/think-stats | code/chap07ex.ipynb | gpl-3.0 |
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import brfss
import thinkstats2
import thinkplot
"""
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
df = brfss.ReadBrfss(nrows=None)
"""
Explanation: Scatter plots
I'll start with the data from the BRFSS again.
End of explanation
"""
def SampleRows(df, nrows, replace=False):
indices = np.random.choice(df.index, nrows, replace=replace)
sample = df.loc[indices]
return sample
"""
Explanation: The following function selects a random subset of a DataFrame.
End of explanation
"""
sample = SampleRows(df, 5000)
heights, weights = sample.htm3, sample.wtkg2
"""
Explanation: I'll extract the height in cm and the weight in kg of the respondents in the sample.
End of explanation
"""
thinkplot.Scatter(heights, weights, alpha=1)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: Here's a simple scatter plot with alpha=1, so each data point is fully saturated.
End of explanation
"""
def Jitter(values, jitter=0.5):
n = len(values)
return np.random.normal(0, jitter, n) + values
"""
Explanation: The data fall in obvious columns because they were rounded off. We can reduce this visual artifact by adding some random noise to the data.
NOTE: The version of Jitter in the book uses noise with a uniform distribution. Here I am using a normal distribution. The normal distribution does a better job of blurring artifacts, but the uniform distribution might be more true to the data.
End of explanation
"""
heights = Jitter(heights, 1.4)
weights = Jitter(weights, 0.5)
"""
Explanation: Heights were probably rounded off to the nearest inch (2.54 cm), so I'll add normally distributed noise with a standard deviation of 1.4 cm.
End of explanation
"""
thinkplot.Scatter(heights, weights, alpha=1.0)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: And here's what the jittered data look like.
End of explanation
"""
thinkplot.Scatter(heights, weights, alpha=0.1, s=10)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: The columns are gone, but now we have a different problem: saturation. Where many points overlap, the plot is not as dark as it should be, so the outliers appear dark relative to the dense regions, giving the impression that the data are more scattered than they actually are.
This is a surprisingly common problem, even in papers published in peer-reviewed journals.
We can usually solve the saturation problem by adjusting alpha and the size of the markers, s.
End of explanation
"""
thinkplot.HexBin(heights, weights)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: That's better. This version of the figure shows the location and shape of the distribution most accurately. There are still some apparent columns and rows where, most likely, people reported their height and weight using rounded values. If that effect is important, this figure makes it apparent; if it is not important, we could use more aggressive jittering to minimize it.
An alternative to a scatter plot is something like a HexBin plot, which breaks the plane into bins, counts the number of respondents in each bin, and colors each bin in proportion to its count.
End of explanation
"""
# Solution goes here
"""
Explanation: In this case the binned plot does a pretty good job of showing the location and shape of the distribution. It obscures the row and column effects, which may or may not be a good thing.
Exercise: So far we have been working with a subset of only 5000 respondents. When we include the entire dataset, making an effective scatterplot can be tricky. As an exercise, experiment with Scatter and HexBin to make a plot that represents the entire dataset well.
End of explanation
"""
cleaned = df.dropna(subset=['htm3', 'wtkg2'])
"""
Explanation: Plotting percentiles
Sometimes a better way to get a sense of the relationship between variables is to divide the dataset into groups using one variable, and then plot percentiles of the other variable.
First I'll drop any rows that are missing height or weight.
End of explanation
"""
bins = np.arange(135, 210, 5)
indices = np.digitize(cleaned.htm3, bins)
groups = cleaned.groupby(indices)
"""
Explanation: Then I'll divide the dataset into groups by height.
End of explanation
"""
for i, group in groups:
print(i, len(group))
"""
Explanation: Here are the number of respondents in each group:
End of explanation
"""
mean_heights = [group.htm3.mean() for i, group in groups]
cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]
"""
Explanation: Now we can compute the CDF of weight within each group.
End of explanation
"""
for percent in [75, 50, 25]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(mean_heights, weight_percentiles, label=label)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: And then extract the 25th, 50th, and 75th percentile from each group.
End of explanation
"""
# Solution goes here
"""
Explanation: Exercise: Yet another option is to divide the dataset into groups and then plot the CDF for each group. As an exercise, divide the dataset into a smaller number of groups and plot the CDF for each group.
End of explanation
"""
def Cov(xs, ys, meanx=None, meany=None):
xs = np.asarray(xs)
ys = np.asarray(ys)
if meanx is None:
meanx = np.mean(xs)
if meany is None:
meany = np.mean(ys)
cov = np.dot(xs-meanx, ys-meany) / len(xs)
return cov
"""
Explanation: Correlation
The following function computes the covariance of two variables using NumPy's dot function.
End of explanation
"""
heights, weights = cleaned.htm3, cleaned.wtkg2
Cov(heights, weights)
"""
Explanation: And here's an example:
End of explanation
"""
def Corr(xs, ys):
xs = np.asarray(xs)
ys = np.asarray(ys)
meanx, varx = thinkstats2.MeanVar(xs)
meany, vary = thinkstats2.MeanVar(ys)
corr = Cov(xs, ys, meanx, meany) / np.sqrt(varx * vary)
return corr
"""
Explanation: Covariance is useful for some calculations, but it doesn't mean much by itself. The coefficient of correlation is a standardized version of covariance that is easier to interpret.
End of explanation
"""
Corr(heights, weights)
"""
Explanation: The correlation of height and weight is about 0.51, which is a moderately strong correlation.
End of explanation
"""
np.corrcoef(heights, weights)
"""
Explanation: NumPy provides a function that computes correlations, too:
End of explanation
"""
import pandas as pd
def SpearmanCorr(xs, ys):
xranks = pd.Series(xs).rank()
yranks = pd.Series(ys).rank()
return Corr(xranks, yranks)
"""
Explanation: The result is a matrix with self-correlations on the diagonal (which are always 1), and cross-correlations on the off-diagonals (which are always symmetric).
Pearson's correlation is not robust in the presence of outliers, and it tends to underestimate the strength of non-linear relationships.
Spearman's correlation is more robust, and it can handle non-linear relationships as long as they are monotonic. Here's a function that computes Spearman's correlation:
End of explanation
"""
SpearmanCorr(heights, weights)
"""
Explanation: For heights and weights, Spearman's correlation is a little higher:
End of explanation
"""
def SpearmanCorr(xs, ys):
xs = pd.Series(xs)
ys = pd.Series(ys)
return xs.corr(ys, method='spearman')
"""
Explanation: A Pandas Series provides a method that computes correlations, and it offers spearman as one of the options.
End of explanation
"""
SpearmanCorr(heights, weights)
"""
Explanation: The result is the same as the one from the function we wrote.

End of explanation
"""
Corr(cleaned.htm3, np.log(cleaned.wtkg2))
"""
Explanation: An alternative to Spearman's correlation is to transform one or both of the variables in a way that makes the relationship closer to linear, and then compute Pearson's correlation.
End of explanation
"""
import first
live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercises
Using data from the NSFG, make a scatter plot of birth weight versus mother’s age. Plot percentiles of birth weight versus mother’s age. Compute Pearson’s and Spearman’s correlations. How would you characterize the relationship between these variables?
End of explanation
"""
|
dynaryu/rmtk | rmtk/vulnerability/derivation_fragility/NLTHA_on_SDOF/MSA_on_SDOF.ipynb | agpl-3.0 | import MSA_on_SDOF
from rmtk.vulnerability.common import utils
import numpy as np
import MSA_utils
%matplotlib inline
"""
Explanation: Multiple Stripe Analysis (MSA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis using a suite of ground motion records scaled to multiple stripes of intensity measure. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates the results of a Multiple Stripe Analysis, from which the fragility function is built.
<img src="../../../../figures/MSA_example.jpg" width="500" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button in the toolbar above.
End of explanation
"""
capacity_curves_file = '/Users/chiaracasotto/GitHub/rmtk_data/capacity_curves_sdof_first_mode.csv'
sdof_hysteresis = "/Users/chiaracasotto/GitHub/rmtk_data/pinching_parameters.csv"
from read_pinching_parameters import read_parameters
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
"""
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the User wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If default parameters are to be assumed instead, please set the sdof_hysteresis variable to "Default"
End of explanation
"""
gmrs_folder = "../../../../../rmtk_data/MSA_records"
minT, maxT = 0.1, 2.0
no_bins = 2
no_rec_bin = 10
record_scaled_folder = "../../../../../rmtk_data/Scaling_factors"
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
"""
Explanation: Load ground motion records
For what concerns the ground motions to be used in th Multiple Stripe Analysis the following inputs are required:
1. gmrs_folder: path to the folder containing the ground motion records to be used in the analysis. Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
2. record_scaled_folder. In this folder there should be a csv file for each Intensity Measure bin selected for the MSA, containing the names of the records that should be scaled to that IM bin, and the corresponding scaling factors. An example of this type of file is provided in the RMTK manual.
3. no_bins: number of Intensity Measure bins.
4. no_rec_bin: number of records per bin
If the user wants to plot acceleration, displacement and velocity response spectra, the function utils.plot_response_spectra(gmrs, minT, maxT) should be uncommented. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
"""
damage_model_file = "../../../../../rmtk_data/damage_model_Sd.csv"
damage_model = utils.read_damage_model(damage_model_file)
"""
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide a spectral displacement, capacity curve dependent, or interstorey drift damage model type.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
End of explanation
"""
damping_ratio = 0.05
degradation = False
msa = {}; msa['n. bins']=no_bins; msa['records per bin']=no_rec_bin; msa['input folder']=record_scaled_folder
PDM, Sds, IML_info = MSA_on_SDOF.calculate_fragility(capacity_curves, hysteresis, msa, gmrs,
damage_model, damping_ratio, degradation)
"""
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_ratio: This parameter defines the damping ratio for the structure.
2. degradation: This boolean parameter should be set to True or False to specify whether structural degradation should be considered in the analysis or not.
End of explanation
"""
import MSA_post_processing
IMT = "Sa"
T = 0.47
#T = np.arange(0.4,1.91,0.01)
regression_method = "least squares"
fragility_model = MSA_utils.calculate_fragility_model(PDM,gmrs,IML_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
"""
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa","Sd" and "HI" (Housner Intensity).
2. period: This parameter defines the period for which a spectral intensity measure should be computed. If Housner Intensity is selected as intensity measure a range of periods should be defined instead (for example T=np.arange(0.3,3.61,0.01)).
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
"""
minIML, maxIML = 0.01, 4
utils.plot_fragility_model(fragility_model, minIML, maxIML)
print(fragility_model['damage_states'][0:])
"""
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
"""
taxonomy = "HI_Intact_v4_lq"
minIML, maxIML = 0.01, 3.00
output_type = "csv"
output_path = "../../../../../phd_thesis/results/damping_0.39/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
"""
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
cons_model_file = "../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00,
2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
utils.plot_vulnerability_model(vulnerability_model)
"""
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
"""
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
"""
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function.
2. output_type: This parameter specifies the file format to be used for saving the function. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
|
kaleoyster/nbi-data-science | Bridge Life-Cycle Models/CDF+Probability+Reconstruction+vs+Age+of+Bridges+in+the+Southwest+United+States.ipynb | gpl-2.0 | import pymongo
from pymongo import MongoClient
import time
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib.pyplot import *
import matplotlib.pyplot as plt
import folium
import datetime as dt
import random as rnd
import warnings
import datetime as dt
import csv
%matplotlib inline
"""
Explanation: Libraries and Packages
End of explanation
"""
warnings.filterwarnings(action="ignore")
Client = MongoClient("mongodb://bridges:readonly@nbi-mongo.admin/bridge")
db = Client.bridge
collection = db["bridges"]
"""
Explanation: Connecting to National Data Service: The Lab Benchwork's NBI - MongoDB instance
End of explanation
"""
def getData(state):
pipeline = [{"$match":{"$and":[{"year":{"$gt":1991, "$lt":2017}},{"stateCode":state}]}},
{"$project":{"_id":0,
"structureNumber":1,
"yearBuilt":1,
"yearReconstructed":1,
"deck":1, ## Rating of deck
"year":1,
'owner':1,
"countyCode":1,
"substructure":1, ## Rating of substructure
"superstructure":1, ## Rating of superstructure
"Structure Type":"$structureTypeMain.typeOfDesignConstruction",
"Type of Wearing Surface":"$wearingSurface/ProtectiveSystem.typeOfWearingSurface",
}}]
dec = collection.aggregate(pipeline)
conditionRatings = pd.DataFrame(list(dec))
## Creating new column: Age
conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt']
return conditionRatings
"""
Explanation: Extracting Data of Southwest States of the United States from 1992 - 2016.
The following query will extract data from the MongoDB instance and project only selected attributes such as structure number, yearBuilt, deck, year, superstructure, owner, countyCode, structure type, type of wearing surface, and substructure.
End of explanation
"""
## filter and convert them into interger
def filterConvert(conditionRatings):
before = len(conditionRatings)
print("Total Records before filteration: ",len(conditionRatings))
conditionRatings = conditionRatings.loc[~conditionRatings['deck'].isin(['N','NA'])]
conditionRatings = conditionRatings.loc[~conditionRatings['substructure'].isin(['N','NA'])]
conditionRatings = conditionRatings.loc[~conditionRatings['superstructure'].isin(['N','NA'])]
conditionRatings = conditionRatings.loc[~conditionRatings['Structure Type'].isin([19])]
conditionRatings = conditionRatings.loc[~conditionRatings['Type of Wearing Surface'].isin(['6'])]
after = len(conditionRatings)
print("Total Records after filteration: ",len(conditionRatings))
print("Difference: ", before - after)
return conditionRatings
"""
Explanation: Filtration of NBI Data
The following routine removes missing data such as 'N' and 'NA' from the deck, substructure, and superstructure ratings, and also removes records with structure type 19 and type of wearing surface 6.
End of explanation
"""
def findSurvivalProbablities(conditionRatings):
i = 1
j = 2
probabilities = []
while j < 121:
v = list(conditionRatings.loc[conditionRatings['Age'] == i]['deck'])
k = list(conditionRatings.loc[conditionRatings['Age'] == i]['structureNumber'])
Age1 = {key:int(value) for key, value in zip(k,v)}
#v = conditionRatings.loc[conditionRatings['Age'] == j]
v_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['deck'])
k_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['structureNumber'])
Age2 = {key:int(value) for key, value in zip(k_2,v_2)}
intersectedList = list(Age1.keys() & Age2.keys())
reconstructed = 0
for structureNumber in intersectedList:
if Age1[structureNumber] < Age2[structureNumber]:
if (Age1[structureNumber] - Age2[structureNumber]) < -1:
reconstructed = reconstructed + 1
try:
probability = reconstructed / len(intersectedList)
except ZeroDivisionError:
probability = 0
probabilities.append(probability*100)
i = i + 1
j = j + 1
return probabilities
"""
Explanation: When determining a deterioration model of bridges, a sudden increase in a bridge's condition rating over time is typically attributed to reconstruction. The NBI dataset contains an attribute to record such reconstructions. However, observing an increase in condition rating without any corresponding reconstruction record in the NBI dataset suggests that the dataset is not updated consistently. In order to have an accurate deterioration model, such unrecorded reconstruction activities must be accounted for in the deterioration model of the bridges.
End of explanation
"""
def plotCDF(cumsum_probabilities):
fig = plt.figure(figsize=(15,8))
ax = plt.axes()
    plt.title('CDF of Reconstruction vs Age')
    plt.xlabel('Age')
    plt.ylabel('CDF of Reconstruction')
plt.yticks([0,10,20,30,40,50,60,70,80,90,100])
plt.ylim(0,100)
x = [i for i in range(1,120)]
y = cumsum_probabilities
ax.plot(x,y)
return plt.show()
"""
Explanation: A utility function to plot the graphs.
End of explanation
"""
states = ['48','40','35','04']
# Mapping state code to state abbreviation
stateNameDict = {'25':'MA',
'04':'AZ',
'08':'CO',
'38':'ND',
'09':'CT',
'19':'IA',
'26':'MI',
'48':'TX',
'35':'NM',
'17':'IL',
'51':'VA',
'23':'ME',
'16':'ID',
'36':'NY',
'56':'WY',
'29':'MO',
'39':'OH',
'28':'MS',
'11':'DC',
'21':'KY',
'18':'IN',
'06':'CA',
'47':'TN',
'12':'FL',
'24':'MD',
'34':'NJ',
'46':'SD',
'13':'GA',
'55':'WI',
'30':'MT',
'54':'WV',
'15':'HI',
'32':'NV',
'37':'NC',
'10':'DE',
'33':'NH',
'44':'RI',
'50':'VT',
'42':'PA',
'05':'AR',
'20':'KS',
'45':'SC',
'22':'LA',
'40':'OK',
'72':'PR',
'41':'OR',
'27':'MN',
'53':'WA',
'01':'AL',
'31':'NE',
'02':'AK',
'49':'UT'
}
def getProbs(states, stateNameDict):
# Initializaing the dataframes for deck, superstructure and subtructure
df_prob_recon = pd.DataFrame({'Age':range(1,61)})
df_cumsum_prob_recon = pd.DataFrame({'Age':range(1,61)})
for state in states:
conditionRatings_state = getData(state)
stateName = stateNameDict[state]
print("STATE - ",stateName)
conditionRatings_state = filterConvert(conditionRatings_state)
print("\n")
probabilities_state = findSurvivalProbablities(conditionRatings_state)
cumsum_probabilities_state = np.cumsum(probabilities_state)
df_prob_recon[stateName] = probabilities_state[:60]
df_cumsum_prob_recon[stateName] = cumsum_probabilities_state[:60]
#df_prob_recon.set_index('Age', inplace = True)
#df_cumsum_prob_recon.set_index('Age', inplace = True)
return df_prob_recon, df_cumsum_prob_recon
df_prob_recon, df_cumsum_prob_recon = getProbs(states, stateNameDict)
df_prob_recon.to_csv('prsouthwest.csv')
df_cumsum_prob_recon.to_csv('cprsouthwest.csv')
"""
Explanation: The following script selects all the bridges in the Southwest United States and filters out missing and unneeded data. The script also reports how much of the data is being filtered out.
End of explanation
"""
plt.figure(figsize=(12,8))
plt.title("CDF Probability of Reconstruction vs Age")
palette = [
'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
linestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']
for num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):
plt.plot(df_cumsum_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)
plt.xlabel('Age'); plt.ylabel('Probability of Reconstruction');
plt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)
plt.ylim(1,100)
plt.show()
"""
Explanation: The following figure shows the cumulative distribution function of the probability of reconstruction over the lifespan of bridges in the Southwest United States: as the bridges grow older, the probability of reconstruction increases.
End of explanation
"""
plt.figure(figsize = (16,12))
plt.xlabel('Age')
plt.ylabel('Mean')
# Initialize the figure
plt.style.use('seaborn-darkgrid')
# create a color palette
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
# multiple line plot
num = 1
linestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']
for n, column in enumerate(df_cumsum_prob_recon.drop('Age', axis=1)):
# Find the right spot on the plot
plt.subplot(4,3, num)
# Plot the lineplot
plt.plot(df_cumsum_prob_recon['Age'], df_cumsum_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)
# Same limits for everybody!
plt.xlim(1,60)
plt.ylim(1,100)
# Add title
plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])
plt.text(30, -1, 'Age', ha='center', va='center')
plt.text(1, 50, 'Probability', ha='center', va='center', rotation='vertical')
num = num + 1
# general title
plt.suptitle("CDF Probability of Reconstruction vs Age", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)
"""
Explanation: The figure below presents the CDF probability of reconstruction for each state in the Southwest United States.
End of explanation
"""
plt.figure(figsize=(12,8))
plt.title("Probability of Reconstruction vs Age")
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
linestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']
for num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):
plt.plot(df_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)
plt.xlabel('Age'); plt.ylabel('Probability of Reconstruction');
plt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)
plt.ylim(1,25)
plt.show()
"""
Explanation: The following figures provide the probability of reconstruction at every age. Note that this is not a cumulative probability function. The roughly constant number of reconstructions per year can be explained by various factors;
one particularly interesting reason could be the funding provided to reconstruct bridges, which would explain why some of the states show an almost perfectly linear curve.
End of explanation
"""
plt.figure(figsize = (16,12))
plt.xlabel('Age')
plt.ylabel('Mean')
# Initialize the figure
plt.style.use('seaborn-darkgrid')
# create a color palette
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
# multiple line plot
num = 1
linestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']
for n, column in enumerate(df_prob_recon.drop('Age', axis=1)):
# Find the right spot on the plot
plt.subplot(4,3, num)
# Plot the lineplot
plt.plot(df_prob_recon['Age'], df_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)
# Same limits for everybody!
plt.xlim(1,60)
plt.ylim(1,25)
# Add title
plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])
plt.text(30, -1, 'Age', ha='center', va='center')
plt.text(1, 12.5, 'Probability', ha='center', va='center', rotation='vertical')
num = num + 1
# general title
plt.suptitle("Probability of Reconstruction vs Age", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)
"""
Explanation: A key observation in this investigation of several states is that a roughly constant number of bridges is reconstructed every year; this could be an effect of the fixed budget allocated for reconstruction by each state. It also highlights the fact that not all bridges that might require reconstruction are actually reconstructed.
To understand this phenomenon more clearly, the following figures present the probability of reconstruction vs age for each individual state in the Southwest United States.
End of explanation
"""
|
arcyfelix/Courses | 18-11-22-Deep-Learning-with-PyTorch/06-Sentiment Prediction with RNNs/Sentiment_analysis_with_RNNs.ipynb | apache-2.0 | import numpy as np
from tqdm import tqdm_notebook as tqdm
# read data from text files
with open('data/reviews.txt', 'r') as f:
reviews = f.read()
with open('data/labels.txt', 'r') as f:
labels = f.read()
print(reviews[:1000])
print()
print(labels[:20])
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis.
Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the sequence of words.
Here we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative.
<img src="images/reviews_ex.png" width=40%>
Network Architecture
The architecture for this network is shown below.
<img src="images/network_diagram.png" width=40%>
First, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.
After input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells. The LSTM cells will add recurrent connections to the network and give us the ability to include information about the sequence of words in the movie review data.
Finally, the LSTM outputs will go to a sigmoid output layer. We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted, sentiment values between 0-1.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg).
Load in and visualize the data
End of explanation
"""
from string import punctuation
# get rid of punctuation
reviews = reviews.lower() # lowercase, standardize
all_text = ''.join([c for c in reviews if c not in punctuation])
# split by new lines and spaces
reviews_split = all_text.split('\n')
all_text = ' '.join(reviews_split)
# create a list of words
words = all_text.split()
words[:30]
"""
Explanation: Data pre-processing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. Here are the processing steps, we'll want to take:
We'll want to get rid of periods and extraneous punctuation.
Also, you might notice that the reviews are delimited with newline characters \n. To deal with those, I'm going to split the text into each review using \n as the delimiter.
Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
# feel free to use this import
from collections import Counter
## Build a dictionary that maps words to integers
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
## use the dict to tokenize each review in reviews_split
## store the tokenized reviews in reviews_ints
reviews_ints = []
for review in reviews_split:
reviews_ints.append([vocab_to_int[word] for word in review.split()])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
# Stats about vocabulary
print('Unique words: ', len((vocab_to_int))) # should ~ 74000+
print()
# Print tokens in first review
print('Tokenized review: \n', reviews_ints[:1])
"""
Explanation: Test your code
As a test that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first tokenized review.
End of explanation
"""
# 1=positive, 0=negative label conversion
labels_split = labels.split('\n')
encoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively, and place those in a new list, encoded_labels.
End of explanation
"""
# Outlier review stats
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: Removing Outliers
As an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:
Getting rid of extremely long or short reviews; the outliers
Padding/truncating the remaining data so that we have reviews of the same length.
Before we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training.
End of explanation
"""
print('Number of reviews before removing outliers: ', len(reviews_ints))
## Remove any reviews/labels with zero length from the reviews_ints list.
# Get indices of any reviews with length 0
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
# Remove 0-length reviews and their labels
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
encoded_labels = np.array([encoded_labels[ii] for ii in non_zero_idx])
print('Number of reviews after removing outliers: ', len(reviews_ints))
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. This removes outliers and should allow our model to train more efficiently.
Exercise: First, remove any reviews with zero length from the reviews_ints list and their corresponding label in encoded_labels.
End of explanation
"""
def pad_features(reviews_ints, seq_length):
''' Return features of review_ints, where each review is padded with 0's
or truncated to the input seq_length.
'''
# Getting the correct rows x cols shape
features = np.zeros((len(reviews_ints), seq_length), dtype=int)
# For each review, I grab that review and
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_length]
return features
# Test your implementation!
seq_length = 200
features = pad_features(reviews_ints, seq_length=seq_length)
features = features.astype(int)
## Test statements - do not change - ##
assert len(features) == len(reviews_ints), "Your features should have as many rows as reviews."
assert len(features[0]) == seq_length, "Each feature row should contain seq_length values."
# Print the first 10 values of the first 30 rows
print(features[:30,:10])
"""
Explanation: Padding sequences
To deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some seq_length, we'll pad with 0s. For reviews longer than seq_length, we can truncate them to the first seq_length words. A good seq_length, in this case, is 200.
Exercise: Define a function that returns an array features that contains the padded data, of a standard size, that we'll pass to the network.
* The data should come from review_ints, since we want to feed integers to the network.
* Each row should be seq_length elements long.
* For reviews shorter than seq_length words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128].
* For reviews longer than seq_length, use only the first seq_length words as the feature vector.
As a small example, if the seq_length=10 and an input review is:
[117, 18, 128]
The resultant, padded sequence should be:
[0, 0, 0, 0, 0, 0, 0, 117, 18, 128]
Your final features array should be a 2D array, with as many rows as there are reviews, and as many columns as the specified seq_length.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
split_frac = 0.8
## Split data into training, validation, and test data (features and labels, x and y)
split_idx = int(len(features)*split_frac)
train_x, remaining_x = features[:split_idx], features[split_idx:]
train_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]
test_idx = int(len(remaining_x)*0.5)
val_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]
val_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]
## Print out the shapes of your resultant feature data
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets.
* You'll need to create sets for the features and the labels, train_x and train_y, for example.
* Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9.
* Whatever data is left will be split in half to create the validation and testing data.
End of explanation
"""
import torch
from torch.utils.data import TensorDataset, DataLoader
# Create Tensor datasets
train_data = TensorDataset(torch.from_numpy(train_x),
torch.from_numpy(train_y))
valid_data = TensorDataset(torch.from_numpy(val_x),
torch.from_numpy(val_y))
test_data = TensorDataset(torch.from_numpy(test_x),
torch.from_numpy(test_y))
# Dataloaders
batch_size = 50
# Make sure to SHUFFLE your training data
train_loader = DataLoader(dataset=train_data,
shuffle=True,
batch_size=batch_size)
valid_loader = DataLoader(dataset=valid_data,
shuffle=True,
batch_size=batch_size)
test_loader = DataLoader(dataset=test_data,
shuffle=True,
batch_size=batch_size)
# Obtain one batch of training data
dataiter = iter(train_loader)
sample_x, sample_y = next(dataiter)
# batch_size, seq_length
print('Sample input size: ', sample_x.size())
print('Sample input: \n', sample_x)
print()
# batch_size
print('Sample label size: ', sample_y.size())
print('Sample label: \n', sample_y)
"""
Explanation: Check your work
With train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final, feature data shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
DataLoaders and Batching
After creating training, test, and validation data, we can create DataLoaders for this data by following two steps:
1. Create a known format for accessing our data, using TensorDataset which takes in an input set of data and a target set of data with the same first dimension, and creates a dataset.
2. Create DataLoaders and batch our training, validation, and test Tensor datasets.
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
train_loader = DataLoader(train_data, batch_size=batch_size)
This is an alternative to creating a generator function for batching our data into full batches.
End of explanation
"""
# First checking if GPU is available
train_on_gpu=torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU.')
else:
print('No GPU available, training on CPU.')
train_on_gpu = False
import torch.nn as nn
class SentimentRNN(nn.Module):
"""
The RNN model that will be used to perform Sentiment analysis.
"""
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
"""
Initialize the model by setting up the layers.
"""
super(SentimentRNN, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# Embedding and LSTM layers
self.embedding = nn.Embedding(num_embeddings=vocab_size,
embedding_dim=embedding_dim)
self.lstm = nn.LSTM(input_size=embedding_dim,
hidden_size=hidden_dim,
num_layers=n_layers,
dropout=drop_prob,
batch_first=True)
# Dropout layer
self.dropout = nn.Dropout(p=0.3)
# Linear and sigmoid layers
self.fc = nn.Linear(in_features=hidden_dim,
out_features=output_size)
self.sig = nn.Sigmoid()
def forward(self, x, hidden):
"""
Perform a forward pass of our model on some input and hidden state.
"""
batch_size = x.size(0)
# Embeddings and lstm_out
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
# Stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# Dropout and fully-connected layer
out = self.dropout(lstm_out)
out = self.fc(out)
# Sigmoid function
sig_out = self.sig(out)
# Reshape to be batch_size first
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1] # get the output for the last time step
# Return last sigmoid output and hidden state
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
Explanation: Sentiment Network with PyTorch
Below is where you'll define the network.
<img src="assets/network_diagram.png" width=40%>
The layers are as follows:
1. An embedding layer that converts our word tokens (integers) into embeddings of a specific size.
2. An LSTM layer defined by a hidden_state size and number of layers
3. A fully-connected output layer that maps the LSTM layer outputs to a desired output_size
4. A sigmoid activation layer which turns all outputs into a value 0-1; return only the last sigmoid output as the output of this network.
The Embedding Layer
We need to add an embedding layer because there are 74000+ words in our vocabulary. It is massively inefficient to one-hot encode that many classes. So, instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using Word2Vec, then load it here. But, it's fine to just make a new layer, using it for only dimensionality reduction, and let the network learn the weights.
The LSTM Layer(s)
We'll create an LSTM to use in our recurrent network, which takes in an input_size, a hidden_dim, a number of layers, a dropout probability (for dropout between multiple layers), and a batch_first parameter.
Most of the time, your network will have better performance with more layers, typically between 2-3. Adding more layers allows the network to learn really complex relationships.
Exercise: Complete the __init__, forward, and init_hidden functions for the SentimentRNN model class.
Note: init_hidden should initialize the hidden and cell state of an lstm layer to all zeros, and move those state to GPU, if available.
End of explanation
"""
# Instantiate the model w/ hyperparams
vocab_size = len(vocab_to_int) + 1 # +1 for the 0 padding + our word tokens
output_size = 1
embedding_dim = 200
hidden_dim = 32
n_layers = 2
net = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
print(net)
"""
Explanation: Instantiate the network
Here, we'll instantiate the network. First up, defining the hyperparameters.
vocab_size: Size of our vocabulary or the range of values for our input, word tokens.
output_size: Size of our desired output; the number of class scores we want to output (pos/neg).
embedding_dim: Number of columns in the embedding lookup table; size of our embeddings.
hidden_dim: Number of units in the hidden layers of our LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
n_layers: Number of LSTM layers in the network. Typically between 1-3
Exercise: Define the model hyperparameters.
End of explanation
"""
# Loss and optimization functions
lr = 0.001
# Binary Cross Entropy Loss
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(params=net.parameters(),
lr=lr)
# Training params
epochs = 3
counter = 0
print_every = 10
# Gradient clipping
clip = 5
# Move model to GPU, if available
if(train_on_gpu):
net.cuda()
net.train()
# Train for some number of epochs
for e in range(epochs):
# Initialize hidden state
h = net.init_hidden(batch_size)
# Batch loop
for inputs, labels in tqdm(train_loader):
counter += 1
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# Zero accumulated gradients
net.zero_grad()
# Get the output from the model
output, h = net(inputs, h)
# Calculate the loss and perform backprop
loss = criterion(output.squeeze(), labels.float())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(parameters=net.parameters(),
max_norm=clip)
optimizer.step()
# Loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for inputs, labels in valid_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output.squeeze(), labels.float())
val_losses.append(val_loss.item())
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.6f}...".format(loss.item()),
"Val Loss: {:.6f}".format(np.mean(val_losses)))
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. You can also add code to save a model by name.
We'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. BCELoss, or Binary Cross Entropy Loss, applies cross entropy loss to a single value between 0 and 1.
We also have some data and training hyparameters:
lr: Learning rate for our optimizer.
epochs: Number of times to iterate through the training dataset.
clip: The maximum gradient value to clip at (to prevent exploding gradients).
End of explanation
"""
# Get test data loss and accuracy
test_losses = []
num_correct = 0
# Initialize hidden state
h = net.init_hidden(batch_size)
net.eval()
# Iterate over test data
for inputs, labels in test_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# Get predicted outputs
output, h = net(inputs, h)
# Calculate loss
test_loss = criterion(output.squeeze(), labels.float())
test_losses.append(test_loss.item())
# Convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze()) # rounds to the nearest integer
# Compare predictions to true label
correct_tensor = pred.eq(labels.float().view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
num_correct += np.sum(correct)
# -- stats! -- ##
# avg test loss
print("Test loss: {:.3f}".format(np.mean(test_losses)))
# accuracy over all test data
test_acc = num_correct/len(test_loader.dataset)
print("Test accuracy: {:.3f}".format(test_acc))
"""
Explanation: Testing
There are a few ways to test your network.
Test data performance: First, we'll see how our trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.
Inference on user-generated data: Second, we'll see if we can input just one example review at a time (without a label), and see what the trained model predicts. Looking at new, user input data like this, and predicting an output label, is called inference.
End of explanation
"""
# Negative test review
test_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. This movie had bad acting and the dialogue was slow.'
from string import punctuation
def tokenize_review(test_review):
test_review = test_review.lower() # lowercase
# get rid of punctuation
test_text = ''.join([c for c in test_review if c not in punctuation])
# splitting by spaces
test_words = test_text.split()
# tokens
test_ints = []
test_ints.append([vocab_to_int[word] for word in test_words])
return test_ints
# Test code and generate tokenized review
test_ints = tokenize_review(test_review_neg)
print(test_ints)
# Test sequence padding
seq_length=200
features = pad_features(test_ints, seq_length)
print(features)
# Test conversion to tensor and pass into your model
feature_tensor = torch.from_numpy(features)
print(feature_tensor.size())
def predict(net,
test_review,
sequence_length=200):
# Setting the evaluation mode
net.eval()
# Tokenize review
test_ints = tokenize_review(test_review)
# Pad tokenized sequence
seq_length=sequence_length
features = pad_features(test_ints, seq_length)
# Convert to tensor to pass into your model
feature_tensor = torch.from_numpy(features)
batch_size = feature_tensor.size(0)
# Initialize hidden state
h = net.init_hidden(batch_size)
if(train_on_gpu):
feature_tensor = feature_tensor.cuda()
# Get the output from the model
output, h = net(feature_tensor, h)
# Convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze())
# Printing output value, before rounding
print('Prediction value, pre-rounding: {:.6f}'.format(output.item()))
# Print custom response
if(pred.item()==1):
print("Positive review detected!")
else:
print("Negative review detected.")
# Positive test review
test_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'
# Call function
seq_length=200
predict(net, test_review_neg, seq_length)
"""
Explanation: Inference on a test review
You can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly!
Exercise: Write a predict function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement for a positive or negative review!
* You can use any functions that you've already defined or define any helper functions you want to complete predict, but it should just take in a trained net, a text review, and a sequence length.
End of explanation
"""
|
KMFleischer/PyEarthScience | Visualization/PyNGL/PyEarthScience_contour_unstructured_PyNGL.ipynb | mit | import numpy as np
import math, time
import Ngl,Nio
"""
Explanation: PyEarthScience: Python examples for Earth Scientists
contour plots
Using PyNGL
Contour plot with
- unstructured data (ICON)
- CellFill
- filled contour areas
- without contour line labels
- labelbar
- title
End of explanation
"""
t1 = time.time() #-- retrieve start time
print ""
"""
Explanation: Retrieve time for wallclock time computation.
End of explanation
"""
#-- define variables
diri = "/Users/k204045/NCL/PyNGL/User_Guide_examples/" #-- data path
fname = "ta_ps_850.nc" #-- data file
gname = "r2b4_amip.nc" #-- grid info file
#-- open file and read variables
f = Nio.open_file(diri + fname,"r") #-- add data file
g = Nio.open_file(diri + gname,"r") #-- add grid file (not contained in data file!!!)
#-- read a timestep of "ta"
var = f.variables["ta"][0,0,:] #-- first time step, lev, ncells
print "-----------------------"
print f.variables["ta"] #-- like printVarSummary
print "-----------------------"
"""
Explanation: Open and read variable and grid.
End of explanation
"""
title = "ICON: Surface temperature" #-- title string
varMin = 230 #-- data minimum
varMax = 310 #-- data maximum
varInt = 5 #-- data increment
levels = list(range(varMin,varMax,varInt)) #-- set levels array
"""
Explanation: Define title string, minimum and maximum contour values, interval and levels.
End of explanation
"""
rad2deg = 45./np.arctan(1.) #-- radians to degrees
x = g.variables["clon"][:] #-- read clon
y = g.variables["clat"][:] #-- read clat
vlon = g.variables["clon_vertices"][:] #-- read clon_vertices
vlat = g.variables["clat_vertices"][:] #-- read clat_vertices
ncells = vlon.shape[0] #-- number of cells
nv = vlon.shape[1] #-- number of edges
x = x * rad2deg #-- cell center, lon
y = y * rad2deg #-- cell center, lat
vlat = vlat * rad2deg #-- cell latitude vertices
vlon = vlon * rad2deg #-- cell longitude vertices
#-- longitude values -180. - 180.
for j in range(ncells): #-- start at 0 so the first cell is wrapped too
for i in range(nv):
if vlon[j,i] < -180. :
vlon[j,i] = vlon[j,i] + 360.
if vlon[j,i] > 180. :
vlon[j,i] = vlon[j,i] - 360.
#-- print some information
print ""
print "Cell points: ", nv
print "Cells: ", str(ncells)
print "Variable ta min/max: %.2f " % np.min(var) + "/" + " %.2f" % np.max(var)
print ""
"""
Explanation: Define the x-, y-values and the polygon points.
End of explanation
"""
#-- open a workstation
wks_type = "png" #-- graphics output type
wks = Ngl.open_wks(wks_type,"plot_contour_unstructured_PyNGL") #-- open a workstation
"""
Explanation: Open a workstation, here a PNG output file.
End of explanation
"""
res = Ngl.Resources() #-- plot mods desired.
res.cnFillOn = True #-- color plot desired
res.cnFillMode = "CellFill" #-- set fill mode
res.cnFillPalette = "BlueWhiteOrangeRed" #-- choose colormap
res.cnLinesOn = False #-- turn off contour lines
res.cnLineLabelsOn = False #-- turn off contour labels
res.cnLevelSelectionMode = "ExplicitLevels" #-- use explicit levels
res.cnLevels = levels #-- set levels
res.lbOrientation = "Horizontal" #-- vertical by default
res.lbBoxLinesOn = False #-- turn off labelbar boxes
res.lbLabelFontHeightF = 0.01 #-- labelbar label font size
res.mpFillOn = False #-- don't use filled map
res.mpGridAndLimbOn = False #-- don't draw grid lines
res.sfXArray = x #-- transform x to mesh scalar field
res.sfYArray = y #-- transform y to mesh scalar field
res.sfXCellBounds = vlon #-- needed if set cnFillMode = "CellFill"
res.sfYCellBounds = vlat #-- needed if set cnFillMode = "CellFill"
res.tiMainString = "Unstructured grid: ICON" #-- title string
res.tiMainOffsetYF = 0.03 #-- move main title towards plot
"""
Explanation: Set resources.
End of explanation
"""
#-- create the plot
plot = Ngl.contour_map(wks,var,res)
"""
Explanation: Draw the plot.
End of explanation
"""
t2 = time.time()
print "Wallclock time: %0.3f seconds" % (t2-t1)
print ""
Ngl.delete_wks(wks) #-- this need to be done to close the graphics output file
Ngl.end()
"""
Explanation: Compute the wallclock time
End of explanation
"""
from IPython.display import Image
Image(filename='plot_contour_unstructured_PyNGL.png')
"""
Explanation: Show the plot in this notebook.
End of explanation
"""
|
michael-isaev/cse6040_qna | PythonQnA_6_sorting.ipynb | apache-2.0 | a = [2, 6, 3, 4, 1, 9]
print ("List before sorting", a)
b = a.sort()
print ("That's what list.sort() returns:", b)
print ("List after sorting", a)
"""
Explanation: 6. Sorting Things out
Another topic that is surprisingly close to mutations is sorting. That relation comes because usually you need to sort a list. Sorting a list is extremely easy in python, as easy as using list.sort() method. And as confusing as everything we've discussed so far...
End of explanation
"""
a = [2, 6, 3, 4, 1, 9]
a_copy = a[:]
a_alias = a
print ("List before sorting", a)
print ("List alias before sorting", a_alias)
print ("List copy before sorting", a_copy)
print ("------------------------------------------")
a_copy.sort()
print ("List after sorting copy", a)
print ("List alias after sorting copy", a_alias)
print ("List copy after sorting copy", a_copy)
a = [2, 6, 3, 4, 1, 9]
a_copy = a[:]
a_alias = a
a.sort()
print ("List after sorting", a)
print ("List alias after sorting", a_alias)
print ("List copy after sorting", a_copy)
a = [2, 6, 3, 4, 1, 9]
a_copy = a[:]
a_alias = a
a_alias.sort()
print ("List after sorting alias", a)
print ("List alias after sorting alias", a_alias)
print ("List copy after sorting alias", a_copy)
"""
Explanation: Two things you may noticed:
1. The sort() method doesn't return anything.
2. After calling the sort() method, your list is sorted, and there is no way to get the previous order back if you ever need it.
That happens because the sort() method performs sorting as a side effect: it does not create a new list of sorted objects. As you might know by now, if you have an alias for a, it gets sorted (unlike a copy) after either of them calls the sort() method.
End of explanation
"""
a = [2, 6, 3, 4, 1, 9]
print ("List before sorting", a)
b = sorted(a)
print ("That's what 'sorted(list)' returns:", b)
print ("List after sorting", a)
"""
Explanation: But that's not the only way of sorting stuff in python. There is a built-in function, sorted(), that among other things can sort lists. In a slightly different manner:
End of explanation
"""
a = (2, 6, 3, 4, 1, 9)
print ("Tuple before sorting", a)
b = sorted(a)
print ("That's what 'sorted(tuple)' returns:", b)
print ("Tuple after sorting", a)
"""
Explanation: As you can see, sorted() does not sort the list in place; it returns a new list in which the elements are sorted, and the original list keeps its state. So here is a good rule of thumb: if you need to sort a list forever and you don't care about its previous state, use list.sort(). If you need a new list with sorted elements, use sorted().
sorted() is a very powerful function that can sort virtually anything, as long as it's iterable and comparable (or in simple English, if it can be sorted).
Among other things, you can sort tuples, because sorted() doesn't change them. But as a result you're going to see a list:
End of explanation
"""
from operator import *
a = ((2, 1), (6, 2), (3, 3), (4, 4), (1, 5), (9, 6))
b = sorted(a, key=itemgetter(1), reverse=True)
c = sorted(a, key=itemgetter(0), reverse=True)
print ("Tuple before sorting", a)
print ("Tuple after sorting by the second element of pair", b)
print ("Tuple before sorting by the first element of pair", c)
"""
Explanation: More than that, you can pass a special function (a key, or comparator) to specify how exactly you need your collection to be sorted, and even sort in the reverse order. For example, let's sort a tuple of pairs in reverse order by the second element of each pair, and then by the first:
End of explanation
"""
from collections import defaultdict
a = "how many characters are in this sentence, and which one is the most common one?"
def find_most_common_character(s):
chars = defaultdict(int)
words = s.split(" ")
for word in words:
for c in word:
chars[c] += 1
# Q: How about sorting dictionary by key?
return sorted(chars.items(), key=lambda item: item[1], reverse=True)[0]
find_most_common_character(a)
"""
Explanation: Exercise:
Now let's play around with this helpful function a little bit more: let's sort a dictionary. Can you figure out what each line of code is doing before running it?
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/recommendation_systems/solutions/deep_recommenders.ipynb | apache-2.0 | !pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
"""
Explanation: Building deep retrieval models
Learning Objectives
Converting raw input examples into feature embeddings.
Splitting the data into a training set and a testing set.
Configuring the deeper model with losses and metrics.
Introduction
In the featurization tutorial we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.
In general, deeper models are capable of learning more complex patterns than shallower models. For example, our user model incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.
Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.
Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful hyperparameter tuning. For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Preliminaries
We first import the necessary packages.
End of explanation
"""
!pip install tensorflow==2.5.0
"""
Explanation: NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.
End of explanation
"""
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
"""
Explanation: NOTE: Please ignore any incompatibility warnings and errors.
NOTE: Restart your kernel to use updated packages.
End of explanation
"""
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
"""
Explanation: This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
End of explanation
"""
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
"""
Explanation: In this tutorial we will use the models from the featurization tutorial to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
End of explanation
"""
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
"""
Explanation: We also do some housekeeping to prepare feature vocabularies.
End of explanation
"""
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
self.normalized_timestamp.adapt(timestamps)
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"]),
], axis=1)
"""
Explanation: Model definition
Query model
We start with the user model defined in the featurization tutorial as the first layer of our model, tasked with converting raw input examples into feature embeddings.
End of explanation
"""
class QueryModel(tf.keras.Model):
"""Model for encoding user queries."""
def __init__(self, layer_sizes):
"""Model for encoding user queries.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
# TODO 1a
# We first use the user model for generating embeddings.
self.embedding_model = UserModel()
# TODO 1b
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
"""
Explanation: Defining deeper models will require us to stack more layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:
+----------------------+
| 128 x 64 |
+----------------------+
| relu
+--------------------------+
| 256 x 128 |
+--------------------------+
| relu
+------------------------------+
| ... x 256 |
+------------------------------+
Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.
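The collapse of stacked linear layers into a single linear map is easy to verify numerically. A standalone numpy illustration (not part of the tutorial's model code):

```python
import numpy as np

np.random.seed(0)
W1 = np.random.randn(8, 4)   # weights of a first linear "layer"
W2 = np.random.randn(4, 2)   # weights of a second linear "layer"
x = np.random.randn(5, 8)    # a batch of five inputs

two_layers = x @ W1 @ W2     # deep linear model (no activations)
one_layer = x @ (W1 @ W2)    # single linear layer with merged weights
print(np.allclose(two_layers, one_layer))  # True
```

Because matrix multiplication is associative, the two-layer model is exactly one linear layer in disguise; the nonlinearity between layers is what buys extra expressive power.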
We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters.
End of explanation
"""
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles,mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
])
self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=max_tokens)
self.title_text_embedding = tf.keras.Sequential([
self.title_vectorizer,
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
tf.keras.layers.GlobalAveragePooling1D(),
])
self.title_vectorizer.adapt(movies)
def call(self, titles):
return tf.concat([
self.title_embedding(titles),
self.title_text_embedding(titles),
], axis=1)
"""
Explanation: The layer_sizes parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models.
Candidate model
We can adopt the same approach for the movie model. Again, we start with the MovieModel from the featurization tutorial:
End of explanation
"""
class CandidateModel(tf.keras.Model):
"""Model for encoding movies."""
def __init__(self, layer_sizes):
"""Model for encoding movies.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
self.embedding_model = MovieModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
"""
Explanation: And expand it with hidden layers:
End of explanation
"""
class MovielensModel(tfrs.models.Model):
def __init__(self, layer_sizes):
super().__init__()
self.query_model = QueryModel(layer_sizes)
self.candidate_model = CandidateModel(layer_sizes)
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.candidate_model),
),
)
def compute_loss(self, features, training=False):
# We only pass the user id and timestamp features into the query model. This
# is to ensure that the training inputs would have the same keys as the
# query inputs. Otherwise the discrepancy in input structure would cause an
# error when loading the query model after saving it.
query_embeddings = self.query_model({
"user_id": features["user_id"],
"timestamp": features["timestamp"],
})
movie_embeddings = self.candidate_model(features["movie_title"])
return self.task(
query_embeddings, movie_embeddings, compute_metrics=not training)
"""
Explanation: Combined model
With both QueryModel and CandidateModel defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
End of explanation
"""
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
# TODO 2a
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
cached_train = train.shuffle(100_000).batch(2048)
cached_test = test.batch(4096).cache()
"""
Explanation: Training the model
Prepare the data
We first split the data into a training set and a testing set.
End of explanation
"""
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
"""
Explanation: Shallow model
We're ready to try out our first, shallow, model!
NOTE: The below cell will take approximately 15~20 minutes to get executed completely.
End of explanation
"""
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
"""
Explanation: This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models.
Deeper model
What about a deeper model with two layers?
NOTE: The below cell will take approximately 15~20 minutes to get executed completely.
End of explanation
"""
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
"""
Explanation: The accuracy here is 0.29, quite a bit better than the shallow model.
We can plot the validation accuracy curves to illustrate this:
End of explanation
"""
# TODO 3a
model = MovielensModel([128, 64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
three_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
"""
Explanation: Even early on in the training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.
However, even deeper models are not necessarily better. The following model extends the depth to three layers:
NOTE: The below cell will take approximately 15~20 minutes to get executed completely.
End of explanation
"""
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
"""
Explanation: In fact, we don't see improvement over the shallow model:
End of explanation
"""
|
milroy/Spark-Meetup | exercises/01_introduction.ipynb | mit | def square(x):
return x*x
numbers = [1,2,3]
def map_squares(nums):
res = []
for x in nums:
res.append( square(x) )
return res
map_squares(numbers)
"""
Explanation: <img src='https://www.rc.colorado.edu/sites/all/themes/research/logo.png'>
Introduction to Spark
Many examples courtesy Monte Lunacek
Outline
Functional programming in Python
Spark's programming model
As many examples as we can get through!
Functional Python
<blockquote>
Python acquired lambda, reduce, filter and map, courtesy of a Lisp hacker who missed them and submitted working patches. -Guido van Rossum
</blockquote>
map
reduce
filter
lambda
And more: itertools, pytoolz
We will use these concepts (and more) in Spark
The map abstraction
For the category-theory inclined: map is a functor acting on functions (morphisms)! In plain terms, it applies a function to every element of a container.
End of explanation
"""
results = map(square, numbers)
results
"""
Explanation: or...
End of explanation
"""
from multiprocessing import Pool
pool = Pool(5)
results = pool.map(square, numbers)
results
"""
Explanation: For parallel computing in Python, map is a key abstraction.
End of explanation
"""
lambda_square = lambda x: x*x
map(lambda_square, range(10))
map(lambda x: x*x, range(10))
res = map(lambda x: x*x, range(10))
"""
Explanation: lambda
Anonymous function: a function without a name, like inlining
End of explanation
"""
def add_num(x1, x2):
return x1+x2
print reduce(add_num, res)
print reduce(lambda x,y: x+y, res)
"""
Explanation: reduce
Apply a function with two arguments cumulatively to the container.
End of explanation
"""
def less_than(x):
return x>10
filter(less_than, res)
filter(lambda x: x>10, res)
"""
Explanation: filter
Constructs a new list for items where the applied function is True.
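The three primitives compose naturally, with the output of one feeding the next. A small illustrative pipeline (with functools.reduce imported explicitly, which is a no-op on Python 2 but keeps the sketch portable to Python 3):

```python
from functools import reduce  # built in on Python 2; the import keeps this portable

# Hypothetical toy pipeline chaining the three primitives:
# the sum of the squares of 0..9 that exceed 10.
squares = map(lambda x: x*x, range(10))
big = filter(lambda x: x > 10, squares)
total = reduce(lambda x, y: x + y, big)
print(total)  # 271  (16 + 25 + 36 + 49 + 64 + 81)
```

This map-filter-reduce shape is exactly the pattern Spark's RDD API generalizes to distributed data.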
End of explanation
"""
import findspark
import os
findspark.init()  # you need this before importing pyspark in a Jupyter notebook
import pyspark
sc = pyspark.SparkContext()
"""
Explanation: Spark Programming Model
Everything starts with a SparkContext
End of explanation
"""
import numpy as np
rdd = sc.parallelize(np.arange(20), numSlices=5)
"""
Explanation: Create RDDs
RDD Documentation
The parallelize method is a utility for initializing RDDs.
NB: parallelized structure must fit in driver memory!
End of explanation
"""
for x in rdd.glom().collect():
print x
rdd = sc.parallelize(np.arange(20), numSlices=10)
for x in rdd.glom().collect():
print x
"""
Explanation: Transformations and Actions
Transformations return a new RDD (a new vertex in the DAG); they are lazily evaluated and can be narrow or wide
map, flatmap
reduceByKey
filter
glom
Actions return values - beware of memory limitations!
collect
reduce
take
count
What does this look like?
glom: Return an RDD created by coalescing all elements within each partition into a list.
collect: Returns a list from all elements of an RDD.
End of explanation
"""
rdd = sc.parallelize([ [2, 3, 4],[0, 1],[5, 6, 7, 8] ])
rdd.collect()
rdd.map(lambda x: range(len(x))).collect()
"""
Explanation: map and flatMap
Return a new RDD by first applying a function and then flattening the results.
End of explanation
"""
rdd.flatMap(lambda x: range(len(x))).collect()
"""
Explanation: Or I can flatten the results...
End of explanation
"""
rdd.flatMap(lambda x: x).collect()
"""
Explanation: Or flatten the original results
End of explanation
"""
rdd.flatMap(lambda x: x).reduce(lambda x,y: x+y)
rdd = sc.parallelize([("a", 1), ("b", 1), ("a", 2)])
rdd.collect()
rdd.reduceByKey(lambda x,y: x+y).collect()
rdd = sc.parallelize([("hamlet", 1), ("claudius", 1), ("hamlet", 1)])
rdd.countByKey()
"""
Explanation: Reduction
(Associative operation)
End of explanation
"""
import h5py
h5file_path='../data/hdf5_ex.h5'
def readchunk(v):
chunk = h5py.File(h5file_path, 'r')
return chunk['/chunked'][v,:]
chunked_array = sc.parallelize(range(0,10)).map(lambda v: readchunk(v))
chunked_array.take(3)
"""
Explanation: Reading HDF5 with PySpark
Example courtesy Freeman Lab: https://github.com/freeman-lab/hdf5-and-spark
End of explanation
"""
def toCSV(data):
return ','.join(str(d) for d in data)
lines = chunked_array.map(toCSV).repartition(1)
lines.saveAsTextFile('hdf5_ex.csv')
"""
Explanation: Now write it to a CSV (from stackoverflow user Daniel Darabos)
End of explanation
"""
|
karlstroetmann/Formal-Languages | Python/Parse-Table.ipynb | gpl-2.0 | r1 = ('E', ('E', '+', 'P'))
r2 = ('E', ('E', '-', 'P'))
r3 = ('E', ('P',))
r4 = ('P', ('P', '*', 'F'))
r5 = ('P', ('P', '/', 'F'))
r6 = ('P', ('F',))
r7 = ('F', ('(', 'E', ')'))
r8 = ('F', ('NUMBER',))
"""
Explanation: A Parse Table for a Shift-Reduce Parser
This notebook contains the parse table that is needed for a shift reduce parser that parses the following grammar:
$$
\begin{eqnarray*}
\mathrm{expr} & \rightarrow & \mathrm{expr}\;\;\texttt{'+'}\;\;\mathrm{product} \\
& \mid & \mathrm{expr}\;\;\texttt{'-'}\;\;\mathrm{product} \\
& \mid & \mathrm{product} \\[0.2cm]
\mathrm{product} & \rightarrow & \mathrm{product}\;\;\texttt{'*'}\;\;\mathrm{factor} \\
& \mid & \mathrm{product}\;\;\texttt{'/'}\;\;\mathrm{factor} \\
& \mid & \mathrm{factor} \\[0.2cm]
\mathrm{factor} & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'} \\
& \mid & \texttt{NUMBER}
\end{eqnarray*}
$$
Below, we define the grammar rules.
End of explanation
"""
actionTable = {}
actionTable['s0', '(' ] = ('shift', 's5')
actionTable['s0', 'NUMBER'] = ('shift', 's2')
actionTable['s1', 'EOF'] = ('reduce', r6)
actionTable['s1', '+' ] = ('reduce', r6)
actionTable['s1', '-' ] = ('reduce', r6)
actionTable['s1', '*' ] = ('reduce', r6)
actionTable['s1', '/' ] = ('reduce', r6)
actionTable['s1', ')' ] = ('reduce', r6)
actionTable['s2', 'EOF'] = ('reduce', r8)
actionTable['s2', '+' ] = ('reduce', r8)
actionTable['s2', '-' ] = ('reduce', r8)
actionTable['s2', '*' ] = ('reduce', r8)
actionTable['s2', '/' ] = ('reduce', r8)
actionTable['s2', ')' ] = ('reduce', r8)
actionTable['s3', 'EOF'] = ('reduce', r3)
actionTable['s3', '+' ] = ('reduce', r3)
actionTable['s3', '-' ] = ('reduce', r3)
actionTable['s3', '*' ] = ('shift', 's12')
actionTable['s3', '/' ] = ('shift', 's11')
actionTable['s3', ')' ] = ('reduce', r3)
actionTable['s4', 'EOF'] = 'accept'
actionTable['s4', '+' ] = ('shift', 's8')
actionTable['s4', '-' ] = ('shift', 's9')
actionTable['s5', '(' ] = ('shift', 's5')
actionTable['s5', 'NUMBER'] = ('shift', 's2')
actionTable['s6', '+' ] = ('shift', 's8')
actionTable['s6', '-' ] = ('shift', 's9')
actionTable['s6', ')' ] = ('shift', 's7')
actionTable['s7', 'EOF'] = ('reduce', r7)
actionTable['s7', '+' ] = ('reduce', r7)
actionTable['s7', '-' ] = ('reduce', r7)
actionTable['s7', '*' ] = ('reduce', r7)
actionTable['s7', '/' ] = ('reduce', r7)
actionTable['s7', ')' ] = ('reduce', r7)
actionTable['s8', '(' ] = ('shift', 's5')
actionTable['s8', 'NUMBER'] = ('shift', 's2')
actionTable['s9', '(' ] = ('shift', 's5')
actionTable['s9', 'NUMBER'] = ('shift', 's2')
actionTable['s10', 'EOF'] = ('reduce', r2)
actionTable['s10', '+' ] = ('reduce', r2)
actionTable['s10', '-' ] = ('reduce', r2)
actionTable['s10', '*' ] = ('shift', 's12')
actionTable['s10', '/' ] = ('shift', 's11')
actionTable['s10', ')' ] = ('reduce', r2)
actionTable['s11', '(' ] = ('shift', 's5')
actionTable['s11', 'NUMBER'] = ('shift', 's2')
actionTable['s12', '(' ] = ('shift', 's5')
actionTable['s12', 'NUMBER'] = ('shift', 's2')
actionTable['s13', 'EOF'] = ('reduce', r4)
actionTable['s13', '+' ] = ('reduce', r4)
actionTable['s13', '-' ] = ('reduce', r4)
actionTable['s13', '*' ] = ('reduce', r4)
actionTable['s13', '/' ] = ('reduce', r4)
actionTable['s13', ')' ] = ('reduce', r4)
actionTable['s14', 'EOF'] = ('reduce', r5)
actionTable['s14', '+' ] = ('reduce', r5)
actionTable['s14', '-' ] = ('reduce', r5)
actionTable['s14', '*' ] = ('reduce', r5)
actionTable['s14', '/' ] = ('reduce', r5)
actionTable['s14', ')' ] = ('reduce', r5)
actionTable['s15', 'EOF'] = ('reduce', r1)
actionTable['s15', '+' ] = ('reduce', r1)
actionTable['s15', '-' ] = ('reduce', r1)
actionTable['s15', '*' ] = ('shift', 's12')
actionTable['s15', '/' ] = ('shift', 's11')
actionTable['s15', ')' ] = ('reduce', r1)
"""
Explanation: Next, we define the action table as a dictionary.
End of explanation
"""
gotoTable = {}
gotoTable['s0', 'E'] = 's4'
gotoTable['s0', 'P'] = 's3'
gotoTable['s0', 'F'] = 's1'
gotoTable['s5', 'E'] = 's6'
gotoTable['s5', 'P'] = 's3'
gotoTable['s5', 'F'] = 's1'
gotoTable['s8', 'P'] = 's15'
gotoTable['s8', 'F'] = 's1'
gotoTable['s9', 'P'] = 's10'
gotoTable['s9', 'F'] = 's1'
gotoTable['s11', 'F'] = 's14'
gotoTable['s12', 'F'] = 's13'
"""
Explanation: Below is the definition of the goto table.
End of explanation
"""
stateTable = {}
stateTable['s0'] = { 'S -> • E',
'E -> • E "+" P', 'E -> • E "-" P', 'E -> • P',
'P -> • P "*" F', 'P -> • P "/" F', 'P -> • F',
'F -> • "(" E ")"', 'F -> • NUMBER'
}
stateTable['s1'] = { 'P -> F •' }
stateTable['s2'] = { 'F -> NUMBER •' }
stateTable['s3'] = { 'P -> P • "*" F', 'P -> P • "/" F', 'E -> P •' }
stateTable['s4'] = { 'S -> E •', 'E -> E • "+" P', 'E -> E • "-" P' }
stateTable['s5'] = { 'F -> "(" • E ")"',
'E -> • E "+" P', 'E -> • E "-" P', 'E -> • P',
'P -> • P "*" F', 'P -> • P "/" F', 'P -> • F',
'F -> • "(" E ")"', 'F -> • NUMBER'
}
stateTable['s6'] = { 'F -> "(" E • ")"', 'E -> E • "+" P', 'E -> E • "-" P' }
stateTable['s7'] = { 'F -> "(" E ")" •' }
stateTable['s8'] = { 'E -> E "+" • P',
'P -> • P "*" F', 'P -> • P "/" F', 'P -> • F',
'F -> • "(" E ")"', 'F -> • NUMBER'
}
stateTable['s9' ] = { 'E -> E "-" • P',
'P -> • P "*" F', 'P -> • P "/" F', 'P -> • F',
'F -> • "(" E ")"', 'F -> • NUMBER'
}
stateTable['s10'] = { 'E -> E "-" P •', 'P -> P • "*" F', 'P -> P • "/" F' }
stateTable['s11'] = { 'P -> P "/" • F', 'F -> • "(" E ")"', 'F -> • NUMBER' }
stateTable['s12'] = { 'P -> P "*" • F', 'F -> • "(" E ")"', 'F -> • NUMBER' }
stateTable['s13'] = { 'P -> P "*" F •' }
stateTable['s14'] = { 'P -> P "/" F •' }
stateTable['s15'] = { 'E -> E "+" P •', 'P -> P • "*" F', 'P -> P • "/" F' }
"""
Explanation: Finally, we define the state table. This table is only used for pretty printing. It gives us a clue about what information is actually stored in the different states.
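For completeness, here is a minimal sketch of the driver loop that such tables feed. This is an illustrative implementation, exercised on a hypothetical one-rule toy grammar (the toy names are made up for the demo):

```python
def shift_reduce_parse(tokens, actionTable, gotoTable):
    """Drive a shift-reduce parse; returns True on accept, False on error."""
    stack = ['s0']                 # the parse stack holds state names
    tokens = list(tokens) + ['EOF']
    pos = 0
    while True:
        entry = actionTable.get((stack[-1], tokens[pos]))
        if entry is None:
            return False                        # no table entry: syntax error
        if entry == 'accept':
            return True
        action, arg = entry
        if action == 'shift':
            stack.append(arg)                   # push the target state
            pos += 1
        else:                                   # reduce by rule arg = (head, body)
            head, body = arg
            del stack[len(stack) - len(body):]  # pop one state per body symbol
            stack.append(gotoTable[(stack[-1], head)])

# Hypothetical one-rule toy grammar S -> NUMBER, just to exercise the driver:
toy_rule = ('S', ('NUMBER',))
toy_action = {('s0', 'NUMBER'): ('shift', 's1'),
              ('s1', 'EOF'): ('reduce', toy_rule),
              ('s2', 'EOF'): 'accept'}
toy_goto = {('s0', 'S'): 's2'}
print(shift_reduce_parse(['NUMBER'], toy_action, toy_goto))  # True
print(shift_reduce_parse(['+'], toy_action, toy_goto))       # False
```

The same loop can drive the expression tables defined above, since they use the same (state, token) keys, ('shift', state)/('reduce', rule) entries, and 'accept' marker.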
End of explanation
"""
|
afeiguin/comp-phys | 09_02_random_distributions.ipynb | mit | %matplotlib inline
import numpy as np
from matplotlib import pyplot
N = 10000
r = np.random.random(N)
xlambda = 0.1
x = -np.log(r)/xlambda
binwidth=xlambda*5
pyplot.hist(x,bins=np.arange(0.,100., binwidth),density=True);
pyplot.plot(np.arange(0.,100.,binwidth),xlambda*np.exp(-xlambda*np.arange(0.,100.,binwidth)),ls='-',c='red',lw=3);
"""
Explanation: Non-uniform random distributions
In the previous section we learned how to generate random deviates with
a uniform probability distribution in an interval $[a,b]$. This
distribution is normalized, so that $$\int _a^b {P(x)dx}=1.$$ Hence,
$P(x)=1/(b-a)$.
Now, suppose that we generate a sequence ${x_i}$ and we take some
function of it to generate ${y(x_i)}={y_i}$. This new sequence is
going to be distributed according to some probability density $P(y)$,
such that $$P(y)dy=P(x)dx$$ or $$P(y)=P(x)\frac{dx}{dy}.$$
If we want to generate a desired normalized distribution $P(y)$, we need
to solve the differential equation: $$\frac{dx}{dy}=P(y).$$ But the
solution of this is $$x=\int _0^y {P(y')dy'}=F(y).$$ Therefore,
$$y(x)=F^{-1}(x),
$$ where $F^{-1}$ is the inverse of $F$.
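As a quick sketch of this recipe with an assumed density (not one from the text): for $P(y)=2y$ on $[0,1]$ we have $F(y)=y^2$, so $y=F^{-1}(x)=\sqrt{x}$:

```python
import numpy as np

# Inverse-transform sketch for an assumed density (not one from the text):
# P(y) = 2y on [0, 1], so F(y) = y**2 and y = F^{-1}(x) = sqrt(x).
np.random.seed(0)
x = np.random.random(100000)   # uniform deviates in [0, 1]
y = np.sqrt(x)                 # now distributed as P(y) = 2y
print(y.mean())                # close to the exact mean, int_0^1 2y^2 dy = 2/3
```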
Exponential distribution
As an example, let us take $y(x)=-\ln{(x)}$ with $P(x)$ representing a
uniform distribution in the interval $[0,1]$. Then
$$P(y)=\frac{dx}{dy}=e^{-y},$$ which is distributed exponentially. This
distribution occurs frequently in real problems such as the radioactive
decay of nuclei. You can also see that the quantity $y/\lambda$ has the
distribution $\lambda
e^{-\lambda y}$.
End of explanation
"""
N = 100000
xmax = 60
ymax = xlambda
rx = np.random.random(N)*xmax
ry = np.random.random(N)*ymax
values = []
Nin = 0
for i in range(N):
if(ry[i] <= xlambda*np.exp(-xlambda*rx[i])):
# Accept
values.append(rx[i])
Nin += 1
x = np.asarray(values)
print("Acceptance Ratio: ",Nin/float(N))
binwidth=xlambda*5
pyplot.hist(x,bins=np.arange(0.,100., binwidth),density=True);
pyplot.plot(np.arange(0.,100.,binwidth),xlambda*np.exp(-xlambda*np.arange(0.,100.,binwidth)),ls='-',c='red',lw=3);
"""
Explanation: von Neumann rejection
A simple and ingenious method for generating random points with a
probability distribution $P(x)$ was deduced by von Neumann. Draw a plot
with you probability distribution, and on the same graph, plot another
curve $f(x)$ which has finite area and lies everywhere above your
original distribution. We will call $f(x)$ the “comparison function”.
Generate random pairs $(x_i,y_i)$ with uniform distribution inside
$f(x)$. Whenever the point lies inside the area of the original
probability, we accept it, otherwise, we reject it. All the accepted
points will be uniformly distributed within the original area, and
therefore will have the desired distribution. The fraction of points
accepted/rejected will depend on the ratio between the two areas. The
closer the comparison function $f(x)$ resembles $P(x)$, the more points
will be accepted. Ideally, for $P(x)=f(x)$, all the points will be
accepted, and none rejected. However, in practice, this is not always
possible, but we can try to pick $f(x)$ such that we minimize the
fraction of rejected points.
It only remains to explain how to pick a number with probability $f(x)$. For this
purpose, we utilize the method shown in the previous section, using a
function whose indefinite integral is known analytically, and is also
analytically invertible. We then pick a random number $x$ and retrieve
the corresponding $y(x)$ according to ([random_invert]). Then, we
generate a second random number and we use the rejection criterion.
An equivalent procedure consists of picking the second number between 0
and 1 and accepting or rejecting according to whether it is respectively less
than or greater than the ratio $P(x)/f(x)$. Clearly, if $f(x)=P(x)$ all the points will be accepted.
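As an illustration of the procedure with an assumed target density (not one from the text): take $P(x)=3x^2$ on $[0,1]$ with the constant comparison function $f(x)=3$:

```python
import numpy as np

# von Neumann rejection sketch with an assumed target density (not from the
# text): P(x) = 3x^2 on [0, 1], under the constant comparison function f(x) = 3.
np.random.seed(2)
N = 200000
rx = np.random.random(N)            # x drawn uniformly
ry = np.random.random(N)*3.0        # y drawn uniformly in [0, f(x)]
accepted = rx[ry <= 3.0*rx**2]      # keep the points lying under P(x)

print(abs(accepted.mean() - 0.75) < 0.01)          # exact mean of P is 3/4
print(abs(len(accepted)/float(N) - 1/3.0) < 0.01)  # acceptance = area ratio = 1/3
```

Note the acceptance ratio equals the area under $P$ divided by the area under $f$, which is why a tighter comparison function wastes fewer points.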
End of explanation
"""
N = 100000
x = np.zeros(N)
delta = 2.
sigma = 20.
sigma2 = sigma**2
def metropolis(xold):
xtrial = np.random.random()
xtrial = xold+(2*xtrial-1)*delta
weight = np.exp(-0.5*(xtrial**2-xold**2)/sigma2)
# weight = np.exp(-0.5*(xtrial-xold)/sigma2)
# if(xtrial < 0):
# weight = 0
xnew = xold
if(weight >= 1): #Accept
xnew = xtrial
else:
r = np.random.random()
if(r <= weight): #Accept
xnew = xtrial
return xnew
xwalker = 20.
Nwarmup = 5
for i in range(Nwarmup):
xwalker = metropolis(xwalker)
x[0] = xwalker
tot = x[0]
for i in range(1,N):
x0 = x[i-1]
for j in range(10):
x0 = metropolis(x0)
x[i] = metropolis(x0)
binwidth=sigma/10
pyplot.hist(x,bins=np.arange(-50,50., binwidth),density=True);
norm = 1./(sigma*np.sqrt(2*np.pi))
pyplot.plot(np.arange(-50.,50.,binwidth),norm*np.exp(-0.5*np.arange(-50.,50.,binwidth)**2/sigma2),ls='-',c='red',lw=3);
"""
Explanation: Challenge 9.1:
Improve the acceptance ratio by using a linear function $f(x)=1-\alpha x$, with a ppropriate choice of $\alpha$
Random walk methods: the Metropolis algorithm
Suppose that we want to generate random variables according to an
arbitrary probability density $P(x)$. The Metropolis algorithm produces
a “random walk” of points ${x_i}$ whose asymptotic probability
approaches $P(x)$ after a large number of steps. The random walk is
defined by a “transition probability” $w(x_i \rightarrow x_j)$ for one
value $x_i$ to another $x_j$ in order that the distribution of points
$x_0$, $x_1$, $x_2$, ... converges to $P(x)$. In can be shown that it is
sufficient (but not necessary) to satisfy the “detailed balance”
condition $$p(x_i)w(x_i \rightarrow x_j) = p(x_j)w(x_j \rightarrow x_i).
$$ This relation dos not specify $w(x_i \rightarrow x_j)$
uniquely. A simple choice is
$$w(x_i \rightarrow x_j)=\min{\left[ 1,\frac{P(x_j)}{P(x_i)} \right] }.$$
This choice can be described by the following steps. Suppose that the
“random walker” is a position $x_n$. To generate $x_{n+1}$ we
choose a trial position $x_t=x_n+\delta _n$ , where the $\delta _n$
is a random number in the interval $[-\delta ,\delta]$.
Calculate $w=P(x_t)/P(x_n)$.
If $w \geq 1$ we accept the change and let $x_{n+1}=x_t$.
If $w < 1$, generate a random number $r$.
If $r \leq w$, accept the change and let $x_{n+1} = x_t$.
If the trial change is not accepted, then let $x_{n+1}=x_n$.
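The steps above can be sketched as a small generic update function (an illustrative helper with assumed parameter choices, not the notebook's own code):

```python
import numpy as np

# Generic one-step Metropolis update for an arbitrary (possibly unnormalized)
# target density p. Illustrative helper with assumed parameters.
def metropolis_step(p, x, delta):
    x_trial = x + (2.0*np.random.random() - 1.0)*delta  # uniform step in [-delta, delta]
    w = p(x_trial)/p(x)             # any normalization constant of p cancels here
    if w >= 1 or np.random.random() <= w:
        return x_trial              # accept the trial position
    return x                        # reject: keep x_{n+1} = x_n

# Drive it on the unnormalized density exp(-x^2/2), a Gaussian with sigma = 1.
np.random.seed(1)
x, samples = 0.0, []
for _ in range(20000):
    x = metropolis_step(lambda t: np.exp(-0.5*t*t), x, delta=1.0)
    samples.append(x)
print(np.mean(samples), np.var(samples))  # mean near 0, variance near sigma^2 = 1
```

Because only the ratio $p(x_t)/p(x_n)$ enters, the target density never needs to be normalized, which is the method's main practical advantage.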
It is necessary to sample a number of points of the random walk before
the asymptotic probability $P(x)$ is attained. How do we choose the
“step size” $\delta$? If $\delta$ is too large, only a small fraction of
changes will be accepted and the sampling will be inefficient. If
$\delta$ is too small, a large number will be accepted, but it would
take too long to sample $P(x)$ over the whole interval of interest.
Ideally, we want at least 1/3-1/2 of the trial steps to be accepted. We
also want to choose $x_0$ such that the distribution ${x_i}$ converges
to $P(x)$ as quickly as possible. An obvious choice is to begin the
random walk at the point where $P(x)$ is maximum.
Exercise 9.1: The Gaussian distribution
Use the Metropolis algorithm to generate a Gaussian distribution
$P(x)=A \exp{(-x^2/2\sigma ^2)}$. Is the numerical value of the
normalization constant $A$ relevant? Determine the qualitative
dependence of the acceptance ratio and the equilibrium time on the
maximum step size $\delta$. One possible criterion for equilibrium
is that $\langle x^2
\rangle \approx \sigma ^2$. For $\sigma = 1$, what is a reasonable
choice of $\delta$? (choose $x_0 = 0$.)
Plot the asymptotic probability distribution generated by the
Metropolis algorithm.
End of explanation
"""
|
Lstyle1/Deep_learning_projects | autoencoder/Simple_Autoencoder_Solution.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from the logits
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
maartenbreddels/ipyvolume | docs/source/examples/popup.ipynb | mit | import ipyvolume as ipv
import ipywidgets as widgets
f = ipv.figure()
scatter = ipv.examples.gaussian(show=False, description="Blob")
scatter.popup = widgets.IntText()
ipv.show()
"""
Explanation: Popups
Ipyvolume has the option to show a popup widget when hovering above a mark. When hovering, the widget will be shown near the mouse position, and its value attribute will be set to the index of the mark hovered above (e.g. when you have 12 points, value will be between 0 and 11). Also, the description will be set to the description of the scatter object. These two attributes exist on the ipywidget IntText, and thus it can be used as a popup widget:
End of explanation
"""
import ipyvolume as ipv
import ipywidgets as widgets
f = ipv.figure()
scatter = ipv.examples.gaussian(show=False,
description="Blob",
description_color="#CC0000",
icon='mdi-star-four-points')
scatter.popup = ipv.ui.Popup()
ipv.show()
"""
Explanation: While sufficient, ipyvolume also comes with a custom dedicated Popup widget, built using the ipyvuetify library. This popup will also show a nice icon (see https://materialdesignicons.com/) and the color used.
End of explanation
"""
widget_hovered = widgets.Valid(description="Hovering", readout="-")
widget_hovered_index = widgets.Text(description="Hovered on")
widgets.jsdlink((scatter, 'hovered'), (widget_hovered, 'value'))
widgets.jsdlink((scatter, 'hovered_index'), (widget_hovered_index, 'value'))
widgets.HBox([widget_hovered, widget_hovered_index])
# workaround for vaex, which has special behaviour on read the docs
import os
key = "READTHEDOCS"
if key in os.environ:
del os.environ[key]
import ipyvolume as ipv
import vaex.ml
df = vaex.ml.datasets.load_iris()
df
import ipywidgets as widgets
int_widget = widgets.IntText(description="index", value=2)
int_widget
import traitlets
# Custom popup showing a url to wikipedia
class MyPopup(ipv.ui.Popup):
# the event handler will fill this in
template_file = None # disable the loading from file
url = traitlets.Unicode('').tag(sync=True)
@traitlets.default("template")
def _default_template(self):
return """
<template>
<div>
<div :style="{padding: '4px', 'background-color': color, color: 'white'}">
<v-icon color="white">{{icon}}</v-icon>
Iris-{{description}}(#<i>{{value}}</i>) <span v-if="extra_html" v-html="extra_html"></span>
<p>
<a :href="url" target="_blank" style="color: white">Visit wikipedia</a>
</p>
More information:
<ul v-if="record" style="margin-top: 0">
<li v-for="(value, name) in record">{{name}}={{value}}</li>
</ul>
</div>
</div>
</template>
"""
popup = MyPopup()
classes = ["Setosa", "Versicolour", "Virginica"]
urls = {
"Setosa": "https://en.wikipedia.org/wiki/Iris_setosa",
"Versicolour": "https://en.wikipedia.org/wiki/Iris_versicolor",
"Virginica": "https://en.wikipedia.org/wiki/Iris_virginica"
}
colors = ["red", "green", "blue"]
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
x, y, z = features[:3]
ipv.figure()
for class_index, name in enumerate(classes):
dfc = df[df.class_==class_index]
color = colors[class_index]
s = ipv.scatter(dfc[x].to_numpy(), dfc[y].to_numpy(), dfc[z].to_numpy(),
color=color, description=name, marker='sphere')
s.popup = popup
def set_extra(index, class_index=class_index, name=name):
dfc = df[df.class_==class_index]
records = dfc[features].to_records()
popup.record = records[index]
popup.url = urls[name]
set_extra(0)
s.observe(set_extra, "hovered")
ipv.show()
# while debugging/developing .vue files in the ipyvolume/vue directory,
# execute this to get hot reloading
# ipv.ui.watch()
"""
Explanation: Note that while hovering, the scatter attributes hovered (a boolean indicating whether you are hovering above a mark) and hovered_index (the index of the mark you are hovering above) are set, and can be linked to other widgets.
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.2/tutorials/LC.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: 'lc' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc')
print(b.get_dataset(kind='lc', check_visible=False))
"""
Explanation: Dataset Parameters
Let's add a lightcurve dataset to the Bundle (see also the lc API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
End of explanation
"""
print(b.get_parameter(qualifier='times'))
"""
Explanation: times
End of explanation
"""
print(b.get_parameter(qualifier='fluxes'))
"""
Explanation: fluxes
End of explanation
"""
print(b.get_parameter(qualifier='sigmas'))
"""
Explanation: sigmas
End of explanation
"""
print(b.get_parameter(qualifier='compute_times'))
print(b.get_parameter(qualifier='compute_phases', context='dataset'))
print(b.get_parameter(qualifier='compute_phases_t0'))
"""
Explanation: compute_times / compute_phases
See the Compute Times & Phases tutorial.
End of explanation
"""
print(b.get_parameter(qualifier='ld_mode', component='primary'))
"""
Explanation: ld_mode
See the Limb Darkening tutorial
End of explanation
"""
b.set_value('ld_mode', component='primary', value='lookup')
print(b.get_parameter(qualifier='ld_func', component='primary'))
"""
Explanation: ld_func
ld_func will only be available if ld_mode is not 'interp', so let's set it to 'lookup'. See the limb darkening tutorial for more details.
End of explanation
"""
print(b.get_parameter(qualifier='ld_coeffs_source', component='primary'))
"""
Explanation: ld_coeffs_source
ld_coeffs_source will only be available if ld_mode is 'lookup'. See the limb darkening tutorial for more details.
End of explanation
"""
b.set_value('ld_mode', component='primary', value='manual')
print(b.get_parameter(qualifier='ld_coeffs', component='primary'))
"""
Explanation: ld_coeffs
ld_coeffs will only be available if ld_mode is set to 'manual'. See the limb darkening tutorial for more details.
End of explanation
"""
print(b.get_parameter(qualifier='passband'))
"""
Explanation: passband
See the Atmospheres & Passbands tutorial
End of explanation
"""
print(b.get_parameter(qualifier='intens_weighting'))
"""
Explanation: intens_weighting
See the Intensity Weighting tutorial
End of explanation
"""
print(b.get_parameter(qualifier='pblum_mode'))
"""
Explanation: pblum_mode
See the Passband Luminosity tutorial
End of explanation
"""
b.set_value('pblum_mode', value='component-coupled')
print(b.get_parameter(qualifier='pblum_component'))
"""
Explanation: pblum_component
pblum_component is only available if pblum_mode is set to 'component-coupled'. See the passband luminosity tutorial for more details.
End of explanation
"""
b.set_value('pblum_mode', value='dataset-coupled')
print(b.get_parameter(qualifier='pblum_dataset'))
"""
Explanation: pblum_dataset
pblum_dataset is only available if pblum_mode is set to 'dataset-coupled'. In this case we'll get a warning because there is only one dataset. See the passband luminosity tutorial for more details.
End of explanation
"""
b.set_value('pblum_mode', value='decoupled')
print(b.get_parameter(qualifier='pblum', component='primary'))
"""
Explanation: pblum
pblum is only available if pblum_mode is set to 'decoupled' (in which case there is a pblum entry per-star) or 'component-coupled' (in which case there is only an entry for the star chosen by pblum_component). See the passband luminosity tutorial for more details.
End of explanation
"""
print(b.get_parameter(qualifier='l3_mode'))
"""
Explanation: l3_mode
See the "Third" Light tutorial
End of explanation
"""
b.set_value('l3_mode', value='flux')
print(b.get_parameter(qualifier='l3'))
"""
Explanation: l3
l3 is only available if l3_mode is set to 'flux'. See the "Third" Light tutorial for more details.
End of explanation
"""
b.set_value('l3_mode', value='fraction')
print(b.get_parameter(qualifier='l3_frac'))
"""
Explanation: l3_frac
l3_frac is only available if l3_mode is set to 'fraction'. See the "Third" Light tutorial for more details.
End of explanation
"""
print(b.get_compute())
"""
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to computing fluxes and the LC dataset.
Other compute options are covered elsewhere:
* parameters related to dynamics are explained in the section on the orb dataset
* parameters related to meshing, eclipse detection, and subdivision are explained in the section on the mesh dataset
End of explanation
"""
print(b.get_parameter(qualifier='lc_method'))
"""
Explanation: lc_method
End of explanation
"""
print(b.get_parameter(qualifier='irrad_method'))
"""
Explanation: irrad_method
End of explanation
"""
print(b.get_parameter(qualifier='boosting_method'))
"""
Explanation: For more details on irradiation, see the Irradiation tutorial
boosting_method
End of explanation
"""
print(b.get_parameter(qualifier='atm', component='primary'))
"""
Explanation: For more details on boosting, see the Beaming and Boosting example script
atm
End of explanation
"""
b.set_value('times', phoebe.linspace(0,1,101))
b.run_compute()
print(b.filter(context='model').twigs)
print(b.get_parameter(qualifier='times', kind='lc', context='model'))
print(b.get_parameter(qualifier='fluxes', kind='lc', context='model'))
"""
Explanation: For more details on atmospheres, see the Atmospheres & Passbands tutorial
Synthetics
End of explanation
"""
afig, mplfig = b.plot(show=True)
"""
Explanation: Plotting
By default, LC datasets plot as flux vs time.
End of explanation
"""
afig, mplfig = b.plot(x='phases', show=True)
"""
Explanation: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
End of explanation
"""
print(b.filter(qualifier='period').components)
afig, mplfig = b.plot(x='phases:binary', show=True)
"""
Explanation: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01')
print(b.get_parameter(qualifier='columns').choices)
b.set_value('columns', value=['intensities@lc01',
'abs_intensities@lc01',
'normal_intensities@lc01',
'abs_normal_intensities@lc01',
'pblum_ext@lc01',
'boost_factors@lc01'])
b.run_compute()
print(b.get_model().datasets)
"""
Explanation: Mesh Fields
By adding a mesh dataset and setting the columns parameter, light-curve (i.e. passband-dependent) per-element quantities can be exposed and plotted.
Let's add a single mesh at the first time of the light-curve and re-call run_compute
End of explanation
"""
print(b.filter(dataset='lc01', kind='mesh', context='model').twigs)
"""
Explanation: These new columns are stored with the lc's dataset tag, but with the 'mesh' dataset-kind.
End of explanation
"""
afig, mplfig = b.filter(kind='mesh').plot(fc='intensities', ec='None', show=True)
"""
Explanation: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the mesh dataset).
End of explanation
"""
print(b.get_parameter(qualifier='pblum_ext',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: Now let's look at each of the available fields.
pblum
For more details, see the tutorial on Passband Luminosities
End of explanation
"""
print(b.get_parameter(qualifier='abs_normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: pblum_ext is the extrinsic passband luminosity of the entire star/mesh - this is a single value (unlike most of the parameters in the mesh) and does not have per-element values.
abs_normal_intensities
End of explanation
"""
print(b.get_parameter(qualifier='normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: abs_normal_intensities are the absolute normal intensities per-element.
normal_intensities
End of explanation
"""
print(b.get_parameter(qualifier='abs_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: normal_intensities are the relative normal intensities per-element.
abs_intensities
End of explanation
"""
print(b.get_parameter(qualifier='intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: abs_intensities are the projected absolute intensities (towards the observer) per-element.
intensities
End of explanation
"""
print(b.get_parameter(qualifier='boost_factors',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: intensities are the projected relative intensities (towards the observer) per-element.
boost_factors
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cnrm-cerfacs/cmip6/models/sandbox-3/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify the functions that snow albedo depends on*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
hanhanwu/Hanhan_Data_Science_Practice | sequencial_analysis/Time_Series_Movement_Prediction.ipynb | mit | from IPython.display import Image
import pandas as pd
import numpy as np
path="MovementAAL.jpg"
Image(path, width=600, height=400)
"""
Explanation: Time Series Movement Prediction
Using the data provided by the sensors, the aim is to predict whether the person moved or not.
Detailed data description & data download can be found here: https://archive.ics.uci.edu/ml/datasets/Indoor+User+Movement+Prediction+from+RSS+data
The paper: https://www.researchgate.net/publication/257435359_An_experimental_characterization_of_reservoir_computing_in_ambient_assisted_living_applications
End of explanation
"""
# The data has been collected from 4 sensors each second
sample_df = pd.read_csv('dataset/MovementAAL_RSS_1.csv')
sample_df.head()
# 1 means movement, -1 means no movement
## 314 records, each is for one MovementAAL_RSS file (314 files in total)
target = pd.read_csv('dataset/MovementAAL_target.csv')
print(target[' class_label'].value_counts())
target.head()
# the groups file separates all 314 files into 3 groups, based on the 6 types of movement paths
# We can use group 2 as training data, group 1 as validation data and group 3 as testing data
groups = pd.read_csv('groups/MovementAAL_DatasetGroup.csv')
print(groups[' dataset_ID'].value_counts())
groups.head()
print('6 Paths')
path="6paths.png"
Image(path, width=500, height=300)
print('3 Groups')
path="3groups.png"
Image(path, width=500, height=300)
"""
Explanation: Data Exploration
End of explanation
"""
# There are 314 time series data files in total, let's collect them in a list
file_lst = []
ts_folder = 'dataset/'
for i in range(314):
file_path = ts_folder + 'MovementAAL_RSS_'+str(i+1)+'.csv'
tmp_df = pd.read_csv(file_path)
file_lst.append(tmp_df.values) # append each file into the list
file_lst[0]
# the most annoying part - make each file the same length
## pad every file (by repeating its last row) to the max file length, then truncate all files to the 90th-percentile length
# find 90th percentile & max length
file_len_lst = [len(f) for f in file_lst]
print(pd.Series(file_len_lst).describe())
print(pd.Series(file_len_lst).quantile(0.9)) # 90th percentile length
# For each file, keep padding with its last row until it reaches the max length of 129
max_len = int(pd.Series(file_len_lst).describe()['max'])
print(max_len)
for i in range(len(file_lst)):
original_len = len(file_lst[i])
add_len = max_len - original_len
for j in range(add_len):
file_lst[i] = np.vstack((file_lst[i], file_lst[i][-1])) # pad the last row towards the max length
print(len(file_lst[0]), len(file_lst[-1]))
# Now truncate each sequence to 90th percentile length,
## so that we can keep a balance between losing data and adding too much data
from keras.preprocessing import sequence
seq_len = 60
final_seq = sequence.pad_sequences(file_lst, maxlen=seq_len, padding='post', dtype='float', truncating='post')
print(len(final_seq), len(final_seq[0]))
# get the labels and save as numpy array
label_df = pd.read_csv('dataset/MovementAAL_target.csv')
labels = label_df.values[:,1]
labels
# We can set group 2 as training, group 1 as validation set and group 3 as testing data
groups = pd.read_csv('groups/MovementAAL_DatasetGroup.csv')
print(groups[' dataset_ID'].value_counts())
groups.head()
groups.values[1]
train_data = [final_seq[i] for i in range(len(final_seq)) if groups.values[i][1] == 2]
val_data = [final_seq[i] for i in range(len(final_seq)) if groups.values[i][1] == 1]
test_data = [final_seq[i] for i in range(len(final_seq)) if groups.values[i][1] == 3]
train_labels = [labels[i] for i in range(len(final_seq)) if groups.values[i][1] == 2]
val_labels = [labels[i] for i in range(len(final_seq)) if groups.values[i][1] == 1]
test_labels = [labels[i] for i in range(len(final_seq)) if groups.values[i][1] == 3]
train = np.array(train_data)
val = np.array(val_data)
test = np.array(test_data)
print(train.shape) # 106 files, each file has length 60, each record contains 4 values
train[0][0]
train_target = np.array(train_labels)
val_target = np.array(val_labels)
test_target = np.array(test_labels)
print(train_target.shape)
# for target, have to convert to 0,1
train_target = (train_target + 1)/2
val_target = (val_target + 1)/2
test_target = (test_target + 1)/2
"""
Explanation: Data Preprocessing
End of explanation
"""
from keras.preprocessing import sequence
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.optimizers import Adam
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt
%matplotlib inline
model = Sequential()
model.add(LSTM(256, input_shape=(seq_len, 4)))
model.add(Dense(1, activation='sigmoid'))
model.summary()
adam = Adam(lr=0.001)
# ModelCheckpoint saves the weights whenever validation accuracy improves, so we can reload the best model later and reduce overfitting
chk = ModelCheckpoint('best_model.pkl', monitor='val_acc', save_best_only=True, mode='max', verbose=1)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
model.fit(train, train_target, epochs=200, batch_size=128, callbacks=[chk], validation_data=(val,val_target))
model = load_model('best_model.pkl')
from sklearn.metrics import accuracy_score
test_preds = model.predict_classes(test)
accuracy_score(test_target, test_preds)
"""
Explanation: Model Prediction
Use a sequence model (LSTM) to classify the sensor sequences.
End of explanation
"""
|
jgomezc1/medios | NOTEBOOKS/Ej1_EcPlano.ipynb | mit | from numpy import array, cross, dot
"""
Explanation: Example 1. Determine the equation of the plane passing through 3 points
It has the form:
$ax+by+cz=d$
End of explanation
"""
r1 = array([2,-1,1])
r2 = array([3,2,-1])
r3 = array([-1,3,2])
"""
Explanation: First, let us determine the position vectors $\vec{r_{1}}$, $\vec{r_{2}}$ and $\vec{r_{3}}$ of each point:
End of explanation
"""
A = r2 - r1
B = r3 - r1
print A, B
"""
Explanation: Next, a vector basis in the plane is determined by means of the vectors:
$\vec{A}=\vec{r_{2}}-\vec{r_{1}}$ and $\vec{B}=\vec{r_{3}}-\vec{r_{1}}$
End of explanation
"""
N = cross(A,B)
print N
"""
Explanation: The vector normal to the plane is obtained via the cross product
$$ \vec{N} = \vec{A} \times \vec{B} $$
End of explanation
"""
D = -dot(N,r1)
print D
"""
Explanation: With this vector, the general equation of the plane will be:
$$Ax + By + Cz + D = 0$$
where $A$, $B$, $C$ are the components of $\vec{N}$ and $D$ is a constant to be determined.
To find it, we substitute the coordinates of a point, for example $P_1$, into the plane equation. Thus the value of $D$ can be computed as the negative of the dot product between $\vec{N}$ and the vector $\vec{r_1}$.
End of explanation
"""
N = str(N[0]) + 'x + ' + str(N[1]) + 'y + ' + str(N[2]) + 'z = ' + str(-D)
print N
from IPython.core.display import HTML
def css_styling():
styles = open('./custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
"""
Explanation: In this way, the equation of the plane is:
End of explanation
"""
|
appleby/fastai-courses | deeplearning1/nbs/lesson3.ipynb | apache-2.0 | from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
batch_size=64
"""
Explanation: Training a better model
End of explanation
"""
model = vgg_ft(2)
"""
Explanation: Are we underfitting?
Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
How is this possible?
Is this desirable?
The answer to (1) is that this is happening because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability p (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.
The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.
So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!
(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.)
Removing dropout
Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (conv) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.
As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...
End of explanation
"""
model.load_weights(model_path+'finetune3.h5')
"""
Explanation: ...and load our fine-tuned weights.
End of explanation
"""
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:]
"""
Explanation: We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer:
End of explanation
"""
batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)
trn_features = load_array(model_path+'train_convlayer_features.bc')
val_features = load_array(model_path+'valid_convlayer_features.bc')
trn_features.shape
"""
Explanation: Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
End of explanation
"""
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)
def get_fc_model():
model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(2, activation='softmax')
])
for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc_model = get_fc_model()
"""
Explanation: For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
End of explanation
"""
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
"""
Explanation: And fit the model in the usual way:
End of explanation
"""
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
       height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
       channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
"""
Explanation: Reducing overfitting
Now that we've gotten the model to overfit, we can take a number of steps to reduce this.
Approaches to reducing overfitting
We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
Add more data
Use data augmentation
Use architectures that generalize well
Add regularization
Reduce architecture complexity.
We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.
Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)
We recommend always using at least some light data augmentation, unless you have so much data that your model will never see the same input twice.
About data augmentation
Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation:
End of explanation
"""
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
"""
Explanation: Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
End of explanation
"""
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
"""
Explanation: As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
End of explanation
"""
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
"""
Explanation: Adding data augmentation
Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it:
End of explanation
"""
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
"""
Explanation: When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model, after ensuring that the convolutional layers are not trainable:
End of explanation
"""
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
"""
Explanation: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
End of explanation
"""
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(4096, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(1000, activation='softmax')
]
def load_fc_weights_from_vgg16bn(model):
"Load weights for model from the dense layers of the Vgg16BN model."
# See imagenet_batchnorm.ipynb for info on how the weights for
# Vgg16BN can be generated from the standard Vgg16 weights.
from vgg16bn import Vgg16BN
vgg16_bn = Vgg16BN()
_, fc_layers = split_at(vgg16_bn.model, Convolution2D)
copy_weights(fc_layers, model.layers)
p=0.6
bn_model = Sequential(get_bn_layers(0.6))
load_fc_weights_from_vgg16bn(bn_model)
def proc_wgts(layer, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6))
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
for l1,l2 in zip(bn_model.layers, bn_layers):
l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(),
loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')
final_model.optimizer.lr=0.001
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final3.h5')
"""
Explanation: Batch normalization
About batch normalization
Batch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.
Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights.
Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that all modern networks should use batchnorm, or something equivalent. There are two reasons for this:
1. Adding batchnorm to a model can result in 10x or more improvements in training speed
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to reduce overfitting.
As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.
This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.
Adding batchnorm to the model
We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers):
End of explanation
"""
|
IBM/differential-privacy-library | notebooks/linear_regression.ipynb | mit | from sklearn.model_selection import train_test_split
from sklearn import datasets
dataset = datasets.load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(dataset.data[:, :2], dataset.target, test_size=0.2)
print("Train examples: %d, Test examples: %d" % (X_train.shape[0], X_test.shape[0]))
"""
Explanation: Linear Regression
We will follow the example given by scikit-learn, and use the diabetes dataset to train and test a linear regressor. We begin by loading the dataset (using only two features for this example) and splitting it into training and testing samples (an 80/20 split).
End of explanation
"""
from sklearn.linear_model import LinearRegression as sk_LinearRegression
regr = sk_LinearRegression()
regr.fit(X_train, y_train)
baseline = regr.score(X_test, y_test)
print("Non-private baseline R2 score: %.2f" % baseline)
"""
Explanation: Non-private baseline
We now use scikit-learn's native LinearRegression function to establish a non-private baseline for our experiments. We will use the r-squared score to evaluate the goodness-of-fit of the model, which is built into LinearRegression.
End of explanation
"""
from diffprivlib.models import LinearRegression
regr = LinearRegression()
regr.fit(X_train, y_train)
print("R2 score for epsilon=%.2f: %.2f" % (regr.epsilon, regr.score(X_test, y_test)))
"""
Explanation: Differentially private Linear Regression
Let's now train a differentially private linear regressor, where the trained model is differentially private with respect to the training data. We will pass additional hyperparameters to the regressor later to suppress the PrivacyLeakWarning.
End of explanation
"""
import numpy as np
epsilons = np.logspace(-1, 2, 100)
accuracy = []
for epsilon in epsilons:
regr = LinearRegression(epsilon=epsilon, bounds_X=(-0.138, 0.2), bounds_y=(25, 346))
regr.fit(X_train, y_train)
accuracy.append(regr.score(X_test, y_test))
"""
Explanation: Plotting r-squared versus epsilon
We want to evaluate the tradeoff between goodness-of-fit and privacy budget (epsilon), and plot the result using matplotlib. For this example, we evaluate the score for epsilon between 1e-1 and 1e2. To ensure no privacy leakage from the hyperparameters of the model, bounds_X and bounds_y should be set independently of the data, i.e. using domain knowledge.
End of explanation
"""
import matplotlib.pyplot as plt
plt.semilogx(epsilons, accuracy, label="Differentially private linear regression", zorder=10)
plt.semilogx(epsilons, baseline * np.ones_like(epsilons), dashes=[2,2], label="Non-private baseline", zorder=5)
plt.xlabel("epsilon")
plt.ylabel("r-squared score")
plt.ylim(-5, 1.5)
plt.xlim(epsilons[0], epsilons[-1])
plt.legend(loc=2)
"""
Explanation: And then plot the result in a semi-log plot.
End of explanation
"""
|
kpei/cs-rating | wl_model/wlbet_player_model.ipynb | gpl-3.0 | import pandas as pd
import numpy as np
import datetime as dt
from scipy.stats import norm, bernoulli
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from spcl_case import *
plt.style.use('fivethirtyeight')
"""
Explanation: Win/Loss Betting Model
End of explanation
"""
h_matches = pd.read_csv('hltv_csv/matchResults.csv').set_index('Match ID')
h_matches['Date'] = pd.to_datetime(h_matches['Date'])
h_teams = pd.read_csv('hltv_csv/teams_w_ranking.csv')
h_teams = fix_teams(h_teams.set_index('ID'))
h_players = pd.read_csv('hltv_csv/matchLineups.csv').set_index('Match ID')
h_player_names = pd.read_csv('hltv_csv/players.csv').set_index('ID')
MIN_DATE = dt.datetime(2017,1,1)
EVENT_SET = 'eslpl'
FILTER_TEAMS = {'eslpl': ['OpTic', 'SK', 'Cloud9', 'Liquid', 'Luminosity', 'Misfits', 'Renegades', 'Immortals',
'Splyce', 'compLexity', 'Rogue', 'Ghost', 'CLG', 'NRG', 'FaZe', 'North',
'BIG', 'LDLC', 'mousesports', 'EnVyUs', 'NiP', 'Virtus.pro',
'Astralis', 'G2', 'GODSENT', 'Heroic', 'fnatic', 'NiP', 'Heroic'],
'mdleu': ['Virtus.pro', 'FlipSid3', 'eXtatus', 'AGO', 'Fragsters', 'Gambit', 'PRIDE', '1337HUANIA',
'VITALIS', 'Epsilon', 'CHAOS', 'Crowns', 'MK', 'Japaleno', 'Not Academy', 'aAa', 'Space Soldiers',
'Singularity', 'Nexus', 'Invictus Aquilas', 'Spirit', 'Kinguin', 'Seed', 'Endpoint', 'iGame.com', 'TEAM5',
'ALTERNATE aTTaX'],
'mdlna': ['Gale Force', 'FRENCH CANADIANS', 'Mythic', 'GX', 'Beacon', 'Torqued', 'Rise Nation', 'Denial', 'subtLe',
'SoaR', 'Muffin Lightning', 'Iceberg', 'ex-Nitrious', 'Adaptation', 'Morior Invictus', 'Naventic', 'CheckSix', 'Good People'
, 'LFAO', 'CLG Academy', 'Ambition', 'Mostly Harmless', 'Gorilla Core', 'ex-Nitrious', 'ANTI ECO'],
'mdlau': ['Grayhound', 'Tainted Minds', 'Kings', 'Chiefs', 'Dark Sided', 'seadoggs', 'Athletico', 'Legacy',
'SIN', 'Noxide', 'Control', 'SYF', 'Corvidae', 'Funkd', 'Masterminds', 'Conspiracy', 'AVANT']
}
h_matches = h_matches[h_matches['Date'] >= MIN_DATE]
h_matches['winner'] = h_matches.apply(lambda x: x['Team 1 Score'] > x['Team 2 Score'], axis=1)
h_matches['score_diff'] = h_matches['Team 1 Score'] - h_matches['Team 2 Score']
h_matches = h_matches.join(h_players)
player_col_names = ['Team 1 Player 1', 'Team 1 Player 2', 'Team 1 Player 3', 'Team 1 Player 4', 'Team 1 Player 5',
                    'Team 2 Player 1', 'Team 2 Player 2', 'Team 2 Player 3', 'Team 2 Player 4', 'Team 2 Player 5']
player_plays = h_matches[['Map', 'score_diff', 'winner'] + player_col_names].melt(value_vars=player_col_names)
player_plays = player_plays['value'].value_counts()
player_plays.hist(bins=30)
print(np.mean(player_plays > 10))
filt_players = player_plays[player_plays > 10].index
h_matches = h_matches[h_matches[player_col_names].isin(filt_players).all(axis=1)]
print(len(filt_players))
player_col_names = ['Team 1 Player 1', 'Team 1 Player 2', 'Team 1 Player 3', 'Team 1 Player 4', 'Team 1 Player 5',
'Team 2 Player 1', 'Team 2 Player 2', 'Team 2 Player 3', 'Team 2 Player 4', 'Team 2 Player 5',]
obs = h_matches[['Map', 'score_diff', 'winner'] + player_col_names]
obs = obs[obs.Map != 'Default'].dropna(axis=0)
obs.head()
players = np.sort(np.unique(np.concatenate(obs[player_col_names].values)))
maps = obs.Map.unique()
tmap = {v:k for k,v in dict(enumerate(players)).items()}
mmap = {v:k for k,v in dict(enumerate(maps)).items()}
n_players = len(players)
n_maps = len(maps)
print('Number of Players: %i ' % n_players)
print('Number of Matches: %i ' % len(h_matches))
print('Number of Maps: %i '% n_maps)
"""
Explanation: Obtain results of teams within the past year
End of explanation
"""
import pymc3 as pm
import theano.tensor as tt
obs_map = obs['Map'].map(mmap).values
obs_team = obs.reset_index()[player_col_names].apply(lambda x: x.map(tmap).values, axis=1).values
obs_team_1 = obs_team[:, :5]
obs_team_2 = obs_team[:, 5:10]
with pm.Model() as rating_model:
omega = pm.HalfCauchy('omega', 0.5)
tau = pm.HalfCauchy('tau', 0.5)
rating = pm.Normal('rating', 0, omega, shape=n_players)
theta_tilde = pm.Normal('rate_t', mu=0, sd=1, shape=(n_maps, n_players))
rating_map = pm.Deterministic('rating | map', rating + tau * theta_tilde).flatten()
diff = tt.sum(rating_map[obs_map[:,np.newaxis]*n_players+obs_team_1], axis=1) - tt.sum(rating_map[obs_map[:,np.newaxis]*n_players+obs_team_2], axis=1)
#p = 0.5*tt.tanh(diff)+0.5
alpha = 0.5
sigma = pm.HalfCauchy('sigma', 0.5)
sc = pm.Normal('observed score diff', 16*tt.tanh(alpha*diff), sigma, observed=obs['score_diff'])
#wl = pm.Bernoulli('observed wl', p=p, observed=obs['winner'].values)
with rating_model:
approx = pm.fit(20000, method='advi')
ap_trace = approx.sample(1000)
with rating_model:
trace = pm.sample(1000, n_init=20000, init='jitter+adapt_diag', nuts_kwargs={'target_accept': 0.90, 'max_treedepth': 14}, tune=550) # tune=1000, nuts_kwargs={'target_accept': 0.95}
some_special_list = [3741, 4959, 8797, 9216, 9219, 1916, 317, 2553,8611]
filt = h_player_names.loc[some_special_list]
sns.set_palette('Paired', 10)
f, ax = plt.subplots(figsize=(16,10))
ax.set_ylim(0,4.0)
[sns.kdeplot(ap_trace['rating'][:,tmap[i]], shade=True, alpha=0.55, legend=True, ax=ax, label=v['Name']) for i,v in filt.iterrows()]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
{v['Name']: [ap_trace['rating'][:,tmap[i]].mean(), ap_trace['rating'][:,tmap[i]].std()] for i,v in filt.iterrows()}
"""
Explanation: Pymc Model
Determining Binary Win Loss: $wl_{m,i,j}$
$$
\omega, \tau, \sim HC(0.5) \
R_{k} \sim N(0, \omega^2) \
\tilde{\theta}{m,k} \sim N(0,1) \
R{m,k} = R_{k} + \tau\tilde{\theta} \
wl_{m,i,j} \sim B(p = \text{Sig}(R_{m,i}-R_{m,j})) \
$$
and score difference: $sc_{m,i,j}$
$$
\alpha \sim Gamma(10,5) \
\kappa_{m,i,j} = 32\text{Sig}(\alpha(R_{m,i}-R_{m,j}))-16 \
\sigma_{m} \sim HC(0.5) \
sc_{m,i,j} \sim N(\kappa, \sigma_{m}^2)
$$
End of explanation
"""
EVENT_SET = 'all_player_sc'
pm.backends.text.dump('saved_model/'+EVENT_SET+'/trace', trace)
np.save('saved_model/'+EVENT_SET+'/players.npy', players)
np.save('saved_model/'+EVENT_SET+'/maps.npy', maps)
"""
Explanation: Save Model
End of explanation
"""
with rating_model:
approx = pm.fit(15000)
ap_trace = approx.sample(5000)
print('Gelman Rubin: %s' % pm.diagnostics.gelman_rubin(trace))
print('Effective N: %s' % pm.diagnostics.effective_n(trace))
print('Accept Prob: %.4f' % trace.get_sampler_stats('mean_tree_accept').mean())
print('Percentage of Divergent %.5f' % (trace['diverging'].nonzero()[0].size/float(len(trace))))
pm.traceplot(trace, varnames=['sigma', 'omega', 'tau'])
rating_model.profile(pm.gradient(rating_model.logpt, rating_model.vars), n=100).summary()
rating_model.profile(rating_model.logpt, n=100).summary()
"""
Explanation: Diagnostics
End of explanation
"""
sns.set_palette('Paired', n_maps)
f, ax = plt.subplots(figsize=(16,10))
ax.set_ylim(0,2.0)
[sns.kdeplot(trace['sigma'][:,i], shade=True, alpha=0.55, legend=True, ax=ax, label=m) for i,m in enumerate(maps)]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
f, axes = plt.subplots(n_maps,1,figsize=(12,34), sharex=True)
for m, ax in enumerate(axes):
ax.set_title(dict(enumerate(maps))[m])
ax.set_ylim(0,2.0)
[sns.kdeplot(trace['rating | map'][:,m,tmap[i]], shade=True, alpha=0.55, legend=False ,
ax=ax, label=v['Name']) for i,v in filt.iterrows()]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
filt
i = np.where(teams==7880)
j = np.where(teams==7924)
diff = (trace['rating'][:,j] - trace['rating'][:,i]).flatten()
kappa = 32./(1+np.exp(-1.*trace['alpha']*diff))-16.
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(10,6))
sns.kdeplot(kappa, ax=ax2)
sns.kdeplot(diff, ax=ax1)
"""
Explanation: Moar Plots
End of explanation
"""
def vec2dict(s, n_teams):
return {
'mu': np.array(s[:n_teams]),
'sigma': np.array(s[n_teams:n_teams*2]),
'beta': s[-1],
}
def dict2vec(s):
return s['mu'] + s['sigma'] + [s['beta']]
skills_0 = dict2vec({
'mu': [1000]*n_teams,
'sigma': [300]*n_teams,
'beta': 50
})
from scipy.optimize import minimize
def loglike(y,p):
return -1.*(np.sum(y*np.log(p)+(1-y)*np.log(1.-p)))
def obj(skills):
s = vec2dict(skills, n_teams)
mean_diff = s['mu'][obs['Team 1 ID'].map(tmap).values] - s['mu'][obs['Team 2 ID'].map(tmap).values]
var_diff = s['sigma'][obs['Team 1 ID'].map(tmap).values]**2 + s['sigma'][obs['Team 2 ID'].map(tmap).values]**2 + skills[-1]**2
p = 1.-norm.cdf(0., loc=mean_diff, scale = np.sqrt(var_diff))
return loglike((obs['Team 1 ID'] == obs['winner']).values, p)
obj(skills_0)
g = minimize(obj, skills_0)  # run the optimizer to obtain fitted skills
opt_skill = g.x
print(opt_skill)
plots = norm.rvs(opt_skill[:5], opt_skill[5:-1], size=(2000,5))
f, ax = plt.subplots(figsize=(12,8))
[sns.kdeplot(plots[:,i], shade=True, alpha=0.55, legend=True, ax=ax, label=i) for i in range(5)]
"""
Explanation: Non-MCMC Model
End of explanation
"""
|
Centre-Alt-Rendiment-Esportiu/att | notebooks/Serial Ports.ipynb | gpl-3.0 | import sys
#sys.path.insert(0, '/home/asanso/workspace/att-spyder/att/src/python/')
sys.path.insert(0, 'i:/dev/workspaces/python/att-workspace/att/src/python/')
"""
Explanation: <h1>Serial Ports</h1>
<hr style="border: 1px solid #000;">
<span>
<h2>Serial Port abstraction for ATT.</h2>
</span>
<br>
<span>
This notebook shows the ATT Serial Port abstraction module.<br>
This module was created to enable testing in the ATT framework.
The Serial Port abstraction provides an abstract base class that can be extended to implement whatever kind of serial port we need.
We have used this class hierarchy to build some Mocks, in order to test the ATT framework.
</span>
Set modules path first:
End of explanation
"""
import hit.serial.serial_port
port=""
baud=0
dummySerialPort = hit.serial.serial_port.DummySerialPort(port, baud)
"""
Explanation: The main abstract base class is the following one:
class SerialPort:
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def isOpen(self):
pass
@abc.abstractmethod
def readline(self):
pass
@abc.abstractmethod
def close(self):
pass
@abc.abstractmethod
def get_port(self):
return ""
@abc.abstractmethod
def get_baudrate(self):
return 0
As an example, we can see a dummy implementation:
class DummySerialPort (SerialPort):
def __init__(self, port = None, baud = None):
pass
def isOpen(self):
return True
def close(self):
pass
def get_port(self):
return ""
def get_baudrate(self):
return 0
def readline(self):
time_delay = int(3*random.random())+1
time.sleep(time_delay)
return self.gen_random_line()
def gen_random_line(self):
return "Hee"
<h2>Building Serial Ports</h2>
<span>
In order to build an instance of a SerialPort class, we have 2 options:
<ul>
<li>Call the constructor directly</li>
<li>Use a Builder</li>
</ul>
</span>
<h3>Calling the constructor</h3>
End of explanation
"""
print dummySerialPort.readline()
"""
Explanation: <span>
The DummySerialPort is very simple. It just says "Hee" (after a few seconds) when its method "readline()" is called.<br>
Port and Baud are useless here.
</span>
End of explanation
"""
import hit.serial.serial_port
port=""
baud=0
emulatedSerialPort = hit.serial.serial_port.ATTEmulatedSerialPort(port, baud)
"""
Explanation: <span>
Let's create a more interesting SerialPort instance.
</span>
End of explanation
"""
print emulatedSerialPort.readline()
"""
Explanation: <span>
The ATTEmulatedSerialPort will emulate a real ATT serial port reading.<br>
Port and Baud are useless here.
</span>
End of explanation
"""
import hit.serial.serial_port_builder
builder = hit.serial.serial_port_builder.ATTEmulatedSerialPortBuilder()
port=""
baud=0
emulatedSerialPort1 = builder.build_serial_port(port, baud)
emulatedSerialPort2 = builder.build_serial_port(port, baud)
emulatedSerialPort3 = builder.build_serial_port(port, baud)
emulatedSerialPort4 = builder.build_serial_port(port, baud)
emulatedSerialPort5 = builder.build_serial_port(port, baud)
emulatedSerialPort6 = builder.build_serial_port(port, baud)
emulatedSerialPort7 = builder.build_serial_port(port, baud)
"""
Explanation: <h3>Using a Builder</h3>
<span>
Let's use a builder now.
</span>
<span>
We can choose the builder we want and build as many SerialPorts as we want.
</span>
End of explanation
"""
print emulatedSerialPort5.readline()
"""
Explanation: <span>
And call "readline()"
</span>
End of explanation
"""
!head -10 train_points_import_data/arduino_raw_data.txt
import hit.serial.serial_port_builder
builder = hit.serial.serial_port_builder.ATTHitsFromFilePortBuilder()
port="train_points_import_data/arduino_raw_data.txt"
baud=0
fileSerialPort = builder.build_serial_port(port, baud)
"""
Explanation: <span>
There is a special Serial port abstraction that is fed from a file.<br>
This is useful when we want to "mock" the serial port and give it previously stored readings.
</span>
<span>
This is interesting, for example, for reproducing or visualizing the repetition of an interesting set of hits in a game. Because the serial line is real-time, there are situations where we need to provide the ATT framework with a set of known hits, previously stored.
</span>
<span>
We can use the data used in "Train points importer".
</span>
End of explanation
"""
for i in range(20):
print fileSerialPort.readline()
"""
Explanation: <span>
And now we will read some lines:
</span>
End of explanation
"""
|
Pinafore/ds-hw | python-tutorials/defaultdict.ipynb | mit | data = [
('california', 1),
('california', 3),
('colorado', 0),
('colorado', 10),
('washington', 2),
('washington', 4)
]
"""
Explanation: Python default dictionary vs dictionary
This notebook motivates and explains why python has default dictionaries
Read more here: https://docs.python.org/3/library/collections.html#collections.defaultdict
Suppose you have a list of tuples where each one has a string key and integer value. Your task is to sum all the values which have the same key
End of explanation
"""
# This won't work because I haven't initialized keys
summed = dict()
for row in data:
key, value = row # destructure the tuple
summed[key] = summed[key] + value
"""
Explanation: With an ordinary dictionary, I would need to check if they key exists. If it doesn't I need to initialize it with a value. For instrutional purposes I will call the int() function which will return the default value for an integer which is 0.
End of explanation
"""
summed = dict()
for row in data:
key, value = row
if key not in summed:
summed[key] = int()
summed[key] = summed[key] + value
summed
"""
Explanation: As expected, the first time we try to set the value for california, it doesn't exist in the dictionary, so the right-hand side of the equals sign errors. That's easy to fix like this
End of explanation
"""
merged = dict()
for row in data:
key, value = row
if key not in merged:
merged[key] = list()
merged[key].append(value)
merged
"""
Explanation: Let's see one more example where, instead of summing the numbers, we want to collect everything into a list. So let's replace int() with list(), since we want to make an empty list. We also need to change the summing term to use append instead
End of explanation
"""
from collections import defaultdict
summed = defaultdict(int)
for row in data:
key, value = row
summed[key] = summed[key] + value
summed
merged = defaultdict(list)
for row in data:
key, value = row
merged[key].append(value)
merged
def myinit():
return -100
summed = defaultdict(myinit)
for row in data:
key, value = row
summed[key] += value
summed
"""
Explanation: It's inconvenient to do this check every time, so Python has a nice way to make this pattern simpler. This is what collections.defaultdict was designed for. It does the following:
Takes a single argument, which is a function we will call func
When a key is accessed (for example with merged[key]), check if it exists. If it doesn't, instead of erroring, initialize it to the return of func, then proceed as normal
Let's see both examples from above using this
End of explanation
"""
d = defaultdict(str)
# initially this is empty so all of these should be false
print('pedro in dictionary:', 'pedro' in d)
print('jordan in dictionary:', 'jordan' in d)
# Lets set something in the dictionary now and check that again
d['jordan'] = 'professor'
print('jordan is in dictionary:', 'jordan' in d)
print('pedro is in dictionary:', 'pedro' in d)
# Lets accidentally access 'pedro' before setting it then see what happens
pedro_job = d['pedro']
print('pedro is in dictionary:', 'pedro' in d)
print(d)
print('-->', d['pedro'], '<--', type(d['pedro']))
"""
Explanation: As expected, the results are exactly the same, driven by the initial method you pass in. This function is called a factory method, since each time a key needs to be initialized you can imagine the function acting as a factory which creates new values. Let's cover one of the common mistakes with default dictionaries before concluding. The source of this mistake is that any time a non-existent key is accessed, it's initialized.
End of explanation
"""
d['pedro'] = 'PhD Student'
print('pedro is in dictionary:', 'pedro' in d)
print(d)
print('-->', d['pedro'], '<--', type(d['pedro']))
"""
Explanation: So this is odd! You never set a key (only accessed it), but nonetheless pedro is in the dictionary. This is because when the 'pedro' key was accessed and not there, Python set it to the return of str, which returns an empty string. Let's set this to the real value and be done
End of explanation
"""
|
EstevaoVieira/udacity_projects | titanic_survival_exploration/.ipynb_checkpoints/titanic_survival_exploration-checkpoint.ipynb | mit | # Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
"""
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
"""
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
"""
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
"""
def accuracy_score(truth, pred):
""" Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
"""
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
"""
def predictions_0(data):
""" Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
"""
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
"""
vs.survival_stats(data, outcomes, 'Sex')
"""
Explanation: Answer: Predictions have an accuracy of 61.62%.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
"""
def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append(passenger.Sex=='female')
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
"""
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
"""
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
"""
Explanation: Answer: Predictions have an accuracy of 78.68%.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
"""
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
if passenger.Sex=='female':
predictions.append(1)
        elif passenger.Age < 10:  # past the first branch, the passenger is male (no need to check explicitly)
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
"""
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
"""
vs.survival_stats(data, outcomes, 'Embarked', [ "Sex == 'female'", 'Pclass == 3','Age < 20'])
"""
Explanation: Answer: Predictions have an accuracy of 79.24%.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to to examine various survival statistics.
Hint: To use mulitple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
"""
def predictions_3(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
if passenger.Pclass ==3:
if passenger.Sex=='female' and passenger.Age<20 and passenger.Embarked!='S':
predictions.append(1)
else:
predictions.append(0)
elif passenger.Sex=='female':
predictions.append(1)
elif passenger.Age <= 10:
if passenger.SibSp >= 3:
predictions.append(0)
else:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
"""
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation
"""
|
erickpeirson/statistical-computing | .ipynb_checkpoints/Hamiltonian MCMC (HMC)-checkpoint.ipynb | cc0-1.0 | import copy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy import array, exp
from scipy.misc import derivative
from scipy.stats import multivariate_normal, norm, uniform
dtarget = lambda x: multivariate_normal.pdf(x, mean=(3, 10), cov=[[1, 0], [0, 1]])
x1 = np.linspace(-6, 12, 101)
x2 = np.linspace(-11, 31, 101)
X, Y = np.meshgrid(x1, x2)
Z = np.array(map(dtarget, zip(X.flat, Y.flat))).reshape(101, 101)
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
plt.xlim(0, 6)
plt.ylim(7, 13)
plt.show()
"""
Explanation: Use the (local) shape of the distribution to make smarter proposals.
Hamiltonian: quantity that is conserved regardless of position in space.
Metaphor: hockey puck sliding on a (non-flat) surface. Want to be able to describe the state of the puck. The state has two quantities:
Current position, $q$
Momentum, $p$
Hamiltonian: $H(q, p) = U(q) + K(p)$
$U(q)$ -- potential energy
$K(p)$ -- kinetic energy
End of explanation
"""
def HMC_one_step(U, current_q, Eps, L, m=1):
"""
    One step of the Hamiltonian Monte Carlo sampler.
    Parameters
    ----------
    U : callable
        Potential energy function; takes a single argument, the position.
    current_q : array-like
        Current position.
    Eps : float
        The step size, epsilon.
    L : int
        Number of leapfrog steps.
    m : float
        Mass of the particle.
    Returns
    -------
    out : dict
        The leapfrog trajectory ('q' and 'p') and the accepted sample ('value').
"""
q = copy.copy(current_q)
Nq = len(q)
p = multivariate_normal.rvs([0. for i in xrange(Nq)])
current_p = copy.copy(p)
out = {}
out['p'] = np.zeros((L, Nq))
out['p'][0,:] = copy.copy(p)
out['q'] = np.zeros((L, Nq))
out['q'][0,:] = copy.copy(q)
for i in xrange(1, L):
p -= Eps*derivative(U, q, 0.01)/2.
q += (Eps/m)*p
out['q'][i, :] = copy.copy(q)
p -= Eps*derivative(U, q, 0.01)/2.
out['p'][i, :] = copy.copy(p)
current_U = U(current_q)
current_K = (current_p**2).sum()/2.
proposed_U = U(q)
proposed_K = (p**2).sum()/2.
if uniform.rvs() < exp(current_U - proposed_U + current_K - proposed_K):
out['value'] = q
else:
out['value'] = current_q
return out
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
U = lambda x: -1.*np.log(dtarget(x))
chain = HMC_one_step(U, np.array([4., 10.]), Eps=0.2, L=10, m=2)['q']
plt.plot(chain[:, 0], chain[:, 1], 'ro')
plt.plot(chain[:, 0], chain[:, 1], 'r-')
plt.plot(chain[0, 0], chain[0,1], 'bo')
plt.xlim(0, 6)
plt.ylim(7, 13)
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
def HMC(dtarget, start, Eps=0.2, L=10, m=2, N=1000, num_chains=4):
"""
Perform an HMC simulation.
Parameters
----------
dtarget : callable
Target PDF.
"""
# Invert the target PDF into a concave surface.
neg_log_dtarget = lambda x: -1.*np.log(dtarget(x))
# If only one starting position is provided, use it for all chains.
if len(start.shape) == 1:
start = np.array([np.array(start) for i in xrange(num_chains)])
chains = []
for j in xrange(num_chains):
chain = [start[j, :]]
for i in xrange(N):
proposal = HMC_one_step(neg_log_dtarget,
copy.copy(chain[-1]),
Eps, L, m)['value']
chain.append(proposal)
chains.append(np.array(chain))
return np.array(chains)
"""
Explanation: The surface of interest will be $U(q) = -\log{f(q)}$
$K(p) = \frac{p^T p}{2m}$, where $m$ = mass of the puck.
Position over time is a function of momentum:
$\frac{dq_i}{dt} = \frac{p_i}{m}$
Change in momentum over time is a function of surface gradient:
$\frac{dp_i}{dt} = -\frac{\delta U}{\delta q_i}$
Leap-frog algorithm
$ p_i(t + \frac{\epsilon}{2}) = p_i(t) - \frac{\epsilon}{2} \frac{\delta U}{\delta q_i}(q(t))$
$ q_i(t + \epsilon) = q_i(t) + \frac{\epsilon}{m}p_i(t+\frac{\epsilon}{2})$
$ p_i(t + \epsilon) = p_i(t + \frac{\epsilon}{2}) - \frac{\epsilon}{2} \frac{\delta U}{\delta q_i}(q(t+\epsilon))$
$\epsilon$ -- step size
End of explanation
"""
def Gelman(chains):
if len(chains.shape) == 3:
N_p = chains.shape[2]
else:
N_p = 1
generate = lambda ptn: np.array([np.array([np.array([ptn(p, i, c)
for p in xrange(N_p)
for i in xrange(chains.shape[1])])
for c in xrange(chains.shape[0])])])
params = generate(lambda p, i, c: 'x{0}'.format(p))
iters = generate(lambda p, i, c: i)
labels = generate(lambda p, i, c: c)
data = zip(chains.flat, params.flat, iters.flat, labels.flat)
dataframe = pd.DataFrame(data, columns=('Value', 'Parameter', 'Iteration', 'Chain'))
xbar = dataframe.groupby('Parameter').Value.mean()
m = chains.shape[0]
xbar_i = dataframe.groupby(('Parameter', 'Chain')).Value.mean()
s2_i = dataframe.groupby(('Parameter', 'Chain')).Value.var()
n = dataframe.groupby(('Parameter', 'Chain')).Value.count().mean()
W = s2_i.mean()
B = (n/(m-1.)) * ((xbar_i - xbar)**2).sum()
sigma2_hat = W*(n-1.)/n + B/n
R_hat = np.sqrt(sigma2_hat/W)
    n_eff = m*n*sigma2_hat/B  # effective number of independent draws
return R_hat, n_eff
chains = HMC(dtarget, array([4., 10.]), Eps=0.2, L=5, N=1000)
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
plt.plot(chains[0][:, 0], chains[0][:, 1], alpha=0.5)
plt.plot(chains[1][:, 0], chains[1][:, 1], alpha=0.5)
plt.plot(chains[2][:, 0], chains[2][:, 1], alpha=0.5)
plt.plot(chains[3][:, 0], chains[3][:, 1], alpha=0.5)
plt.xlim(0, 6)
plt.ylim(7, 13)
plt.show()
plt.subplot(211)
for i in xrange(chains.shape[0]):
plt.plot(chains[i,:,0])
plt.ylabel('x1')
plt.subplot(212)
for i in xrange(chains.shape[0]):
plt.plot(chains[i,:,1])
plt.ylabel('x2')
Gelman(chains)
"""
Explanation: Tuning parameters: step size, number of steps, and "mass"
HMC does not work with discrete parameters. Stan is all HMC.
Gelman metric still applies -- we just have a better way of proposing values.
End of explanation
"""
dtarget = lambda x: exp( (-x[0]**2)/200. - 0.5*(x[1]+(0.05*x[0]**2) - 100.*0.05)**2)
x1 = np.linspace(-20, 20, 101)
x2 = np.linspace(-15, 10, 101)
X, Y = np.meshgrid(x1, x2)
Z = np.array(map(dtarget, zip(X.flat, Y.flat))).reshape(101, 101)
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
plt.show()
start = np.array([[uniform.rvs(loc=-10., scale=15.),
uniform.rvs(loc=0., scale=10)]
for i in xrange(4)])
chains = HMC(dtarget, start, Eps=0.7, L=12, m=2, N=10000)
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
plt.plot(chains[0][:, 0], chains[0][:, 1], alpha=0.5)
plt.plot(chains[1][:, 0], chains[1][:, 1], alpha=0.5)
plt.plot(chains[2][:, 0], chains[2][:, 1], alpha=0.5)
plt.plot(chains[3][:, 0], chains[3][:, 1], alpha=0.5)
plt.show()
plt.subplot(211)
plt.title(Gelman(chains)[0])
for i in xrange(chains.shape[0]):
plt.plot(chains[i,:,0])
plt.ylabel('x1')
plt.subplot(212)
for i in xrange(chains.shape[0]):
plt.plot(chains[i,:,1])
plt.ylabel('x2')
plt.tight_layout()
plt.show()
"""
Explanation: Banana-shaped target distribution
End of explanation
"""
def Leapfrog(U, theta, r, Eps, m=1.):
"""
Slightly different update rules, since the negative log of the
target PDF is not used.
"""
gradient = lambda U, theta: derivative(U, theta, 0.01)
r += (Eps/2.)*gradient(U, theta)
theta += (Eps/m)*r
r += (Eps/2.)*gradient(U, theta)
return copy.copy(theta), copy.copy(r)
def BuildTree(U, theta, r, u, v, j, Eps, m=1., delta_max=1000):
"""
Recursive tree-building.
TODO: Make this less ugly.
"""
if j == 0:
# Take one leapfrog step in the direction v.
theta_p, r_p = Leapfrog(U, theta, r, v*Eps, m=m)
n_p = float(u <= exp(U(theta_p) - np.dot(0.5*r_p, r_p)))
s_p = float(u < exp(delta_max + U(theta_p) - np.dot(0.5*r_p, r_p)))
return theta_p, r_p, theta_p, r_p, theta_p, n_p, s_p
else:
# Recursion -- implicitly build the left and right subtrees.
rargs = (u, v, j-1., Eps)
rkwargs = {'m':m}
theta_n, r_n, theta_f, r_f, theta_p, n_p, s_p = BuildTree(U, theta, r, *rargs, **rkwargs)
if s_p == 1:
if v == -1:
theta_n, r_n, null, null, theta_dp, n_dp, s_dp = BuildTree(U, theta_n, r_n, *rargs, **rkwargs)
else:
null, null, theta_f, r_f, theta_dp, n_dp, s_dp = BuildTree(U, theta_f, r_f, *rargs, **rkwargs)
try:
if uniform.rvs() <= (n_dp/(n_p + n_dp)):
theta_p = copy.copy(theta_dp)
except ZeroDivisionError:
pass
s_p = s_p*s_dp*int(np.dot((theta_f - theta_n), r_n) >= 0)*int( np.dot((theta_f - theta_n), r_f) >= 0)
n_p += n_dp
return theta_n, r_n, theta_f, r_f, theta_p, n_p, s_p
def NUTS_one_step(U, theta_last, Eps, m=1.):
"""
TODO: clean up all the copies -- stop being so paranoid.
"""
r_not = norm.rvs(0, 1., size=len(theta_last))
u = uniform.rvs(0, exp(U(theta_last) - np.dot(0.5*r_not, r_not)))
# Initialize.
theta_m = copy.copy(theta_last)
theta_n, theta_f = copy.copy(theta_last), copy.copy(theta_last)
r_n, r_f = copy.copy(r_not), copy.copy(r_not)
j = 0.
s = 1.
n = 1.
while s == 1.:
v_j = np.random.choice(np.array([-1., 1.])) # Choose a direction.
if v_j == -1:
theta_n, r_n, null, null, theta_p, n_p, s_p = BuildTree(U, theta_n, r_n, u, v_j, j, Eps, m=m)
else:
null, null, theta_f, r_f, theta_p, n_p, s_p = BuildTree(U, theta_f, r_f, u, v_j, j, Eps, m=m)
if s_p == 1:
try:
if uniform.rvs() <= min(1., (n_p/n)):
theta_m = copy.copy(theta_p)
except ZeroDivisionError:
pass
s = s_p*int(np.dot((theta_f - theta_n), r_n) >= 0)*int( np.dot((theta_f - theta_n), r_f) >= 0)
j += 1.
return theta_m
NUTS_one_step(lambda x: np.log(dtarget(x)), np.array([3.2, 9.1]), 0.02)
def NUTS(dtarget, theta_not, Eps, num_iters=1000, delta_max=1000, m=1.):
U = lambda x: np.log(dtarget(x))
theta = [theta_not]
for i in xrange(num_iters):
theta_i = NUTS_one_step(U, theta[-1], Eps, m=m)
theta.append(theta_i)
return theta
"""
Explanation: NUTS Sampler
Toy implementation of No-U-Turn Sampler, described by Hoffman and Gelman (2011). Algorithm 3, page 14.
End of explanation
"""
start = np.array([[uniform.rvs(loc=-10., scale=15.),
uniform.rvs(loc=0., scale=10)]
for i in xrange(4)])
chains = np.array([ np.array(NUTS(dtarget, start[i, :], Eps=0.55, m=1.5, num_iters=10000)) for i in xrange(start.shape[0])])
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
for i in xrange(chains.shape[0]):
plt.scatter(chains[i, :, 0], chains[i, :, 1], alpha=0.5, s=0.02)
plt.show()
plt.subplot(211)
plt.title(Gelman(chains)[0])
for i in xrange(chains.shape[0]):
plt.plot(chains[i, :, 0])
plt.ylabel('x1')
plt.subplot(212)
for i in xrange(chains.shape[0]):
plt.plot(chains[i, :, 1])
plt.ylabel('x2')
plt.tight_layout()
plt.show()
plt.hist(chains[0,:,0])
"""
Explanation: Testing on the banana
End of explanation
"""
|
verdverm/pypge | notebooks/Dissertation/data_gen/explicit_problems_5d.ipynb | mit | from pypge.benchmarks import explicit
import numpy as np
# visualization libraries
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import gridspec
# plot the visuals in ipython
%matplotlib inline
"""
Explanation: Explicit 5D Benchmarks
This file demonstrates how to generate, plot, and output data for 5D benchmarks.
Choose from:
Korns_01
Korns_02
Korns_03
Korns_04
Korns_05
Korns_06
Korns_07
Korns_08
Korns_09
Korns_10
Korns_11
Korns_12
Korns_13
Korns_14
Korns_15
Imports
End of explanation
"""
# Set your output directories
img_dir = "../img/benchmarks/explicit/"
data_dir = "../data/benchmarks/explicit/"
# used for plotting
manual_scale = True
ymin = -2000
ymax = 2000
do_enable = False
xs_params = [
(-3.14,3.14),
(-3.14,3.14),
(0.001,1000),
(-3.14,3.14),
(-3.14,3.14)
]
# choose your problem here
prob = explicit.Korns_15(noise=1.0, npts=4000, xs_params=xs_params)
# you can also specify the following params as keyword arguments
#
# params = {
# 'name': "Koza_01",
# 'xs_str': ["x"],
# 'eqn_str': "x**4 + x**3 + x**2 + x",
# 'xs_params': [ (-4.0,4.0) ],
# 'npts': 200,
# 'noise': 1.0
# }
# or make your own with the following
#
# explicit.Explicit_1D(params):
"""
Explanation: Generate the data with noise
End of explanation
"""
print prob['name'], prob['eqn']
print prob['xpts'].shape
xs = prob['xpts'][0]
ys = prob['xpts'][1]
zs = prob['xpts'][2]
vs = prob['xpts'][3]
ws = prob['xpts'][4]
Ys = prob['ypure']
fig = plt.figure()
fig.set_size_inches(16, 20)
gs = gridspec.GridSpec(5, 2)
fig.suptitle(prob['name'] + " Clean", fontsize=36)
ax0 = fig.add_subplot(gs[0,:])
ax0.scatter(xs, Ys, marker='.')
ax0.set_xlabel('X')
ax0.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
ax1 = fig.add_subplot(gs[1,:])
ax1.scatter(ys, Ys, marker='.')
ax1.set_xlabel('Y')
ax1.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
ax2 = fig.add_subplot(gs[2,:])
ax2.scatter(zs, Ys, marker='.')
ax2.set_xlabel('Z')
ax2.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
ax3 = fig.add_subplot(gs[3,:])
ax3.scatter(vs, Ys, marker='.')
ax3.set_xlabel('V')
ax3.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
ax4 = fig.add_subplot(gs[4,:])
ax4.scatter(ws, Ys, marker='.')
ax4.set_xlabel('W')
ax4.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
plt.savefig(img_dir + prob['name'].lower() + "_clean.png", dpi=200)
plt.show()
Ys = prob['ypts']
fig = plt.figure()
fig.set_size_inches(16, 20)
gs = gridspec.GridSpec(5, 2)
fig.suptitle(prob['name'] + " Noisy", fontsize=36)
ax0 = fig.add_subplot(gs[0,:])
ax0.scatter(xs, Ys, marker='.')
ax0.set_xlabel('X')
ax0.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
ax1 = fig.add_subplot(gs[1,:])
ax1.scatter(ys, Ys, marker='.')
ax1.set_xlabel('Y')
ax1.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
ax2 = fig.add_subplot(gs[2,:])
ax2.scatter(zs, Ys, marker='.')
ax2.set_xlabel('Z')
ax2.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
ax3 = fig.add_subplot(gs[3,:])
ax3.scatter(vs, Ys, marker='.')
ax3.set_xlabel('V')
ax3.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
ax4 = fig.add_subplot(gs[4,:])
ax4.scatter(ws, Ys, marker='.')
ax4.set_xlabel('W')
ax4.set_ylabel('OUT')
if manual_scale:
plt.autoscale(enable=do_enable)
plt.ylim(ymin,ymax)
plt.savefig(img_dir + prob['name'].lower() + "_noisy.png", dpi=200)
plt.show()
"""
Explanation: Plot inline and save image
End of explanation
"""
data = np.array([prob['xpts'][0],prob['xpts'][1],prob['xpts'][2],prob['xpts'][3],prob['xpts'][4], prob['ypts']]).T
print data.shape
cols = [['x', 'y', 'z', 'v', 'w', 'out']]
out_data = cols + data.tolist()
import json
json_out = json.dumps( out_data, indent=4)
# print json_out
f_json = open(data_dir + prob['name'].lower() + ".json", 'w')
f_json.write(json_out)
f_json.close()
f_csv = open(data_dir + prob['name'].lower() + ".csv", 'w')
for row in out_data:
line = ", ".join([str(col) for col in row]) + "\n"
f_csv.write(line)
f_csv.close()
"""
Explanation: Output json and csv data
End of explanation
"""
data = np.array([prob['xpts'][0],prob['xpts'][1],prob['xpts'][2],prob['xpts'][3],prob['xpts'][4], prob['ypure']]).T
print data.shape
cols = [['x', 'y', 'z', 'v', 'w', 'out']]
out_data = cols + data.tolist()
import json
json_out = json.dumps( out_data, indent=4)
# print json_out
f_json = open(data_dir + prob['name'].lower() + "_clean.json", 'w')
f_json.write(json_out)
f_json.close()
f_csv = open(data_dir + prob['name'].lower() + "_clean.csv", 'w')
for row in out_data:
line = ", ".join([str(col) for col in row]) + "\n"
f_csv.write(line)
f_csv.close()
"""
Explanation: Output clean json and csv data
End of explanation
"""
|
TurkuNLP/BINF_Programming | lectures/week-3-uniprot.ipynb | gpl-2.0 | import requests as R
"""
Explanation: Sequence records, part 2
Instructions
This part of the course material does not rely on the Biopython tutorial. Rather, it shows how sequences can be searched and fetched from UniProt databases and how to use other online services.
Read the documentation for programmatic access to UniProt. It is also recommended to read the Proteins REST API, which is another way to access UniProt data.
Objectives
search and fetch sequence records from UniProt databases
use other online services programmatically
use named tuples to represent structural information
Summary
UniProt is a resource of protein sequences and their annotations. Biopython does not support the online access to UniProt but can parse the XML format used by UniProt. Fortunately, UniProt has a simple API to search and fetch data.
The requests module is a simple-to-use module for HTTP-based communication. With this module, the UniProt API can be used in a similar manner as EUtils can be used via Biopython, which makes the approaches used in the previous part of the course relevant. The requests module will also be used to communicate with other online services.
End of explanation
"""
# API address
# (%s are placeholders for user input)
base = "https://www.uniprot.org/uniprot/%s.%s"
# record ID
uid = 'P12345'
# output format
fmt = 'fasta'
# replace the first %s with ID and the second %s with format
url = base%(uid, fmt)
# send query and get response
r = R.get(url)
# web address that was used (you can try and see what it looks like in browser)
print(r.url)
# (the address 'https://www.uniprot.org/uniprot/P12345' would return HTML that you normally see in the browser)
# record in FASTA format as requested
print(r.text)
"""
Explanation: UniProt can supply records in various formats
The simplest query is to fetch a single record from UniProt. The web address specifies both the ID of the record and the format of the output.
End of explanation
"""
# API address
base = "https://www.uniprot.org/uniprot/%s.%s"
uid = 'P12345'
fmt = 'xml'
url = base%(uid, fmt)
r = R.get(url)
# save record to file
with open('P12345.xml', 'w') as f:
f.write(r.text)
"""
Explanation: Full records can be obtained in the XML format.
End of explanation
"""
import Bio.SeqIO as BSIO
# parse a single record in the UniProt XML format to a SeqRecord object
r = BSIO.read("P12345.xml", "uniprot-xml")
# SeqRecord object
print(r)
"""
Explanation: Biopython can parse UniProt XML
The UniProt XML can be parsed by the Bio.SeqIO module.
End of explanation
"""
# API address
url = "https://www.uniprot.org/uniprot"
# required parameters as dictionary
data = {
# query
# - reviewed:yes == manually annotated
# - name:p53 == proteins with 'p53' in their names
# - organism:"Homo sapiens (Human) [9606]" == proteins from human
'query': 'reviewed:yes AND name:p53 AND organism:"Homo sapiens (Human) [9606]"',
# output in FASTA format
'format': 'fasta',
# fetch the first three records
'limit': '3',
}
# send query and get response
r = R.get(url, params=data)
# save to file
with open('sequences.fasta', 'w') as f:
f.write(r.text)
"""
Explanation: UniProt can be queried for records satisfying specific conditions
Queries to UniProt can be made by sending the query (as you would write it into the search box in a browser) to the address https://www.uniprot.org/uniprot. The details of the query are supplied as parameters rather than as part of the address.
Take a look at the Query API for a list of parameters that can be used. Note particularly the limit and offset parameters, which correspond to the retstart and retmax arguments of EUtils.
End of explanation
"""
# the url of the query
# (try it in the browser after removing the "format=fasta" parameter)
print(r.url)
# show file content
with open('sequences.fasta') as f:
print(f.read())
"""
Explanation: The UniProt website uses the same API when it is accessed with a browser. It is therefore possible to first design a query in the browser and then implement it in the code.
End of explanation
"""
# API address
url = "https://www.uniprot.org/uniprot"
# required parameters as dictionary
data = {
'query': 'reviewed:yes AND name:p53 AND organism:"Homo sapiens (Human) [9606]"',
# output as list of IDs
'format': 'list',
# fetch the first ten records
'limit': '10',
}
# send query and get response
r = R.get(url, params=data)
# store data to variable
ids = r.text
# raw text output
print(ids)
"""
Explanation: If you only need the list of matching IDs, there is no need to fetch any sequences.
End of explanation
"""
# remove surrounding whitespace and split at newlines
ids = ids.strip().split("\n")
# Python list of ids
print(ids)
"""
Explanation: The output can be easily parsed into a Python list.
End of explanation
"""
# API address
url = "https://www.uniprot.org/uniprot"
# required parameters as dictionary
data = {
'query': 'reviewed:yes AND name:p53 AND organism:"Homo sapiens (Human) [9606]"',
# output as table
'format': 'tab',
# desired fields as comma-separated list
'columns': 'id,entry name,length',
# fetch the first ten records
'limit': '10',
}
# send query and get response
r = R.get(url, params=data)
# store data to variable
text = r.text
# raw text output
print(text)
"""
Explanation: It is also possible to fetch specific fields in a tabular format. The columns parameter speficies which fields to fetch. See the Query API for the details of fields.
End of explanation
"""
import collections as C
# data as tuple
record = ('P04637', 'P53_HUMAN', 393)
print(record)
# access via index
print(record[1])
# named tuple class that models some UniProt fields
# ('Entry' is the name of the class)
Entry = C.namedtuple('Entry', ['id', 'name', 'length'])
# data as named tuple
record = Entry('P04637', 'P53_HUMAN', 393)
print(record)
# access via index
print(record[1])
# access via name
print(record.name)
# data as tuple
data = ['P04637', 'P53_HUMAN', 393]
# data as named tuple
# (the * operator converts a list into positional arguments)
record = Entry(*data)
print(record)
"""
Explanation: Named tuples are convenient for simple data structures
The collections module of Python contains functions for handling and organising data. The namedtuple function can be used to create custom classes that are tuples but have named fields. They are convenient in storing simple structured data, like fields from UniProt records.
End of explanation
"""
# API address
url = 'https://www.uniprot.org/uploadlists/'
# required parameters as dictionary
data = {
# map from this database
'from': 'ACC',
# map to this database
'to': 'PIR',
# output format
'format': 'tab',
# space-separated list of IDs
'query': 'P12345 P12346'
}
# send query and get response
r = R.get(url, params=data)
# store data to variable
text = r.text
# raw text output
print(text)
"""
Explanation: UniProt provides a mapping service between database IDs
Each database has its own set of records, but much of the biological information is shared between databases. UniProt can map between database IDs such that a record in one database is paired with the same (or corresponding) record(s) in another database.
End of explanation
"""
# parse a single UniProt record
r = BSIO.read("P12345.xml", "uniprot-xml")
# the PIR ID 'B27103' is listed as a cross-reference for the UniProt record 'P12345'
for ref in r.dbxrefs:
if ref.startswith('PIR:'):
print(ref)
"""
Explanation: If you are mapping from a UniProt record, the cross-references could also be accessed via the dbxrefs attribute of the SeqRecord object. If you do not have the full record or if you need to map to UniProt from another database, the UniProt mapping service is useful.
End of explanation
"""
import Bio.Seq as BS
# sequences to use as input
# (since the service accepts FASTA format, we can use the file content as such)
with open('sequences.fasta') as f:
seqs = f.read()
# it seems that the SignalP service expects Windows newlines
# (i.e. replace \n with \r\n)
seqs = seqs.replace('\n', '\r\n')
# service address
url = 'http://www.cbs.dtu.dk/cgi-bin/webface2.fcgi'
# required parameters as dictionary
data = {
# hidden input
'configfile': '/usr/opt/www/pub/CBS/services/SecretomeP-2.0/SecretomeP.athena.cf',
# input from text area
'SEQPASTE': seqs,
# input from radio buttons
'orgtype': 'mam',
}
# send query as POST and get response
r = R.post(url, data=data)
# store response to variable
html = r.text
# raw response as HTML
print(html)
"""
Explanation: Other online services can be accessed by scraping HTML
You should primarily use APIs to access online services because they are intended for programmatic access. Always read the documentation of the service to see how the owners of the service ask you to use their service.
Some online services do not have any API. These services can still be used by communicating with the website like a browser would. In these cases, the services respond in the HTML format because that is what browsers expect. Since each website has its own HTML structure, there is no simple way to extract the desired information from the HTML response. One must look at the HTML of the website and implement your own solution.
SecretomeP service can be accessed via its website
The SecretomeP service by DTU Bioinformatics serves as an example in the course. Look at the source of the website to see how the user input is collected with the online form and send to the server.
To simulate the behaviour of the online form in a browser, it is good to first locate the form element and then match the visible form elements to their argument names and values. For example, the text area to supply the sequence is provided by
<textarea name="SEQPASTE" rows=3 cols=64>
and the "Organism group" radio buttons are provided by
<input name="orgtype" type="radio" value="gram-" ...>
<input name="orgtype" type="radio" value="gram+" ...>.
<input name="orgtype" type="radio" value="mam" ...>
The form also contains a hidden field
<input type=HIDDEN name=configfile value="/usr/opt/www/pub/CBS/services/SecretomeP-2.0/SecretomeP.athena.cf">,
the value of which is sent along the user input. The destination is defined in the element
<form ... action="/cgi-bin/webface2.fcgi" method="POST">,
which indicates that the data is sent to http://www.cbs.dtu.dk/cgi-bin/webface2.fcgi as a POST request. Note the relative address in the action field.
End of explanation
"""
import re as RE
# regular expression to match "jobid: X status:" where X is 24 non-whitespace characters
# (it will match for "jobid: XXXXXXXXXXXXXXXXXXXXXXXX status:" as required)
match = RE.search(r"jobid: (\S{24}) status:", html)
jobid = ""
# check if there is a match (there should be...)
if match:
# extract the section that was enclosed in the parenthesis
jobid = match.group(1)
# extracted job ID
print(jobid)
"""
Explanation: The SignalP service does not respond with the results but rather gives a job ID, which can be used to fetch the actual results as soon as they are ready. The extraction of the jobid value from the HTML response can be achieved with a regular expression, for example.
The job ID is mentioned several times in the HTML. The code below extracts it from the line that has the following format:
<!-- jobid: 56B2792F00004B9894F74F8C status: queued -->
End of explanation
"""
# service address
url = 'http://www.cbs.dtu.dk/cgi-bin/webface2.fcgi'
# request data as dictionary
data = {
# job ID
'jobid': jobid,
}
# fetch the actual results
r = R.get(url, params=data)
print(r.text)
"""
Explanation: Based on the first response, the results can be fetched from the same address to which the job was submitted, but this time the request should be a GET request with jobid as a parameter:
http://www.cbs.dtu.dk//cgi-bin/webface2.fcgi?jobid=X,
where X is the job ID. This is a GET request as indicated by the presence of ?.
End of explanation
"""
|
amueller/pydata-amsterdam-2016 | Cross-validation.ipynb | cc0-1.0 | from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
from sklearn.cross_validation import cross_val_score
from sklearn.svm import LinearSVC
cross_val_score(LinearSVC(), X, y, cv=5)
cross_val_score(LinearSVC(), X, y, cv=5, scoring="f1_macro")
"""
Explanation: Cross-Validation
End of explanation
"""
y % 2
cross_val_score(LinearSVC(), X, y % 2)
cross_val_score(LinearSVC(), X, y % 2, scoring="average_precision")
cross_val_score(LinearSVC(), X, y % 2, scoring="roc_auc")
from sklearn.metrics.scorer import SCORERS
print(SCORERS.keys())
"""
Explanation: Let's go to a binary task for a moment
End of explanation
"""
from sklearn.cross_validation import ShuffleSplit
shuffle_split = ShuffleSplit(len(X), 10, test_size=.4)
cross_val_score(LinearSVC(), X, y, cv=shuffle_split)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cross_validation import StratifiedKFold, KFold, ShuffleSplit
def plot_cv(cv, n_samples):
masks = []
for train, test in cv:
mask = np.zeros(n_samples, dtype=bool)
mask[test] = 1
masks.append(mask)
plt.figure(figsize=(10, 4))
plt.subplots_adjust(left=0, bottom=0, right=1, top=1)
plt.imshow(masks, interpolation='none')
plot_cv(StratifiedKFold(y, n_folds=5), len(y))
plot_cv(KFold(len(iris.target), n_folds=5), len(iris.target))
plot_cv(ShuffleSplit(len(iris.target), n_iter=20, test_size=.2),
len(iris.target))
"""
Explanation: There are other ways to do cross-validation
End of explanation
"""
# %load solutions/cross_validation_iris.py
"""
Explanation: Exercises
Use KFold cross validation and StratifiedKFold cross validation (3 or 5 folds) for LinearSVC on the iris dataset.
Why are the results so different? How could you get more similar results?
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/lattice/tutorials/shape_constraints.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
#@test {"skip": true}
!pip install tensorflow-lattice
"""
Explanation: Shape Constraints with TensorFlow Lattice
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/shape_constraints"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png"> Download notebook</a></td>
</table>
Overview
This tutorial is an overview of the constraints and regularizers provided by the TensorFlow Lattice (TFL) library. Here we use TFL canned estimators on synthetic datasets, but note that everything in this tutorial can also be done with models constructed from TFL Keras layers.
Before proceeding, make sure your runtime has all the required packages installed (as imported in the code cells below).
Setup
Install the TF Lattice package.
End of explanation
"""
import tensorflow as tf
from IPython.core.pylabtools import figsize
import itertools
import logging
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
"""
Explanation: Import the required packages.
End of explanation
"""
NUM_EPOCHS = 1000
BATCH_SIZE = 64
LEARNING_RATE=0.01
"""
Explanation: Default values used in this guide.
End of explanation
"""
def click_through_rate(avg_ratings, num_reviews, dollar_ratings):
dollar_rating_baseline = {"D": 3, "DD": 2, "DDD": 4, "DDDD": 4.5}
return 1 / (1 + np.exp(
np.array([dollar_rating_baseline[d] for d in dollar_ratings]) -
avg_ratings * np.log1p(num_reviews) / 4))
"""
Explanation: Training Dataset for Ranking Restaurants
Imagine a simplified scenario where we want to determine whether or not users will click on a restaurant search result. The task is to predict the clickthrough rate (CTR) given the following input features:
Average rating (avg_rating): a numeric feature with values in the range [1,5].
Number of reviews (num_reviews): a numeric feature with values capped at 200, which we use as a measure of trendiness.
Dollar rating (dollar_rating): a categorical feature with string values in the set {"D", "DD", "DDD", "DDDD"}.
Here we create a synthetic dataset where the true CTR is given by the formula $$ CTR = 1 / (1 + \exp\{\mbox{b(dollar_rating)}-\mbox{avg_rating}\times \log(\mbox{num_reviews}) /4 \}) $$ where $b(\cdot)$ translates each dollar_rating to a baseline value: $$ \mbox{D}\to 3,\ \mbox{DD}\to 2,\ \mbox{DDD}\to 4,\ \mbox{DDDD}\to 4.5. $$
This formula reflects typical user patterns, e.g. given everything else is fixed, users prefer restaurants with higher star ratings, and "$$" restaurants receive more clicks than "$", followed by "$$$" and "$$$$".
End of explanation
"""
def color_bar():
bar = matplotlib.cm.ScalarMappable(
norm=matplotlib.colors.Normalize(0, 1, True),
cmap="viridis",
)
bar.set_array([0, 1])
return bar
def plot_fns(fns, split_by_dollar=False, res=25):
"""Generates contour plots for a list of (name, fn) functions."""
num_reviews, avg_ratings = np.meshgrid(
np.linspace(0, 200, num=res),
np.linspace(1, 5, num=res),
)
if split_by_dollar:
dollar_rating_splits = ["D", "DD", "DDD", "DDDD"]
else:
dollar_rating_splits = [None]
if len(fns) == 1:
fig, axes = plt.subplots(2, 2, sharey=True, tight_layout=False)
else:
fig, axes = plt.subplots(
len(dollar_rating_splits), len(fns), sharey=True, tight_layout=False)
axes = axes.flatten()
axes_index = 0
for dollar_rating_split in dollar_rating_splits:
for title, fn in fns:
if dollar_rating_split is not None:
dollar_ratings = np.repeat(dollar_rating_split, res**2)
values = fn(avg_ratings.flatten(), num_reviews.flatten(),
dollar_ratings)
title = "{}: dollar_rating={}".format(title, dollar_rating_split)
else:
values = fn(avg_ratings.flatten(), num_reviews.flatten())
subplot = axes[axes_index]
axes_index += 1
subplot.contourf(
avg_ratings,
num_reviews,
np.reshape(values, (res, res)),
vmin=0,
vmax=1)
subplot.title.set_text(title)
subplot.set(xlabel="Average Rating")
subplot.set(ylabel="Number of Reviews")
subplot.set(xlim=(1, 5))
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
figsize(11, 11)
plot_fns([("CTR", click_through_rate)], split_by_dollar=True)
"""
Explanation: Let's take a look at the contour plot of this CTR function.
End of explanation
"""
def sample_restaurants(n):
avg_ratings = np.random.uniform(1.0, 5.0, n)
num_reviews = np.round(np.exp(np.random.uniform(0.0, np.log(200), n)))
dollar_ratings = np.random.choice(["D", "DD", "DDD", "DDDD"], n)
ctr_labels = click_through_rate(avg_ratings, num_reviews, dollar_ratings)
return avg_ratings, num_reviews, dollar_ratings, ctr_labels
np.random.seed(42)
avg_ratings, num_reviews, dollar_ratings, ctr_labels = sample_restaurants(2000)
figsize(5, 5)
fig, axs = plt.subplots(1, 1, sharey=False, tight_layout=False)
for rating, marker in [("D", "o"), ("DD", "^"), ("DDD", "+"), ("DDDD", "x")]:
plt.scatter(
x=avg_ratings[np.where(dollar_ratings == rating)],
y=num_reviews[np.where(dollar_ratings == rating)],
c=ctr_labels[np.where(dollar_ratings == rating)],
vmin=0,
vmax=1,
marker=marker,
label=rating)
plt.xlabel("Average Rating")
plt.ylabel("Number of Reviews")
plt.legend()
plt.xlim((1, 5))
plt.title("Distribution of restaurants")
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
"""
Explanation: Preparing the Data
We now need to create the synthetic dataset. We start by generating a simulated dataset of restaurants and their features.
End of explanation
"""
def sample_dataset(n, testing_set):
(avg_ratings, num_reviews, dollar_ratings, ctr_labels) = sample_restaurants(n)
if testing_set:
# Testing has a more uniform distribution over all restaurants.
num_views = np.random.poisson(lam=3, size=n)
else:
# Training/validation datasets have more views on popular restaurants.
num_views = np.random.poisson(lam=ctr_labels * num_reviews / 50.0, size=n)
return pd.DataFrame({
"avg_rating": np.repeat(avg_ratings, num_views),
"num_reviews": np.repeat(num_reviews, num_views),
"dollar_rating": np.repeat(dollar_ratings, num_views),
"clicked": np.random.binomial(n=1, p=np.repeat(ctr_labels, num_views))
})
# Generate datasets.
np.random.seed(42)
data_train = sample_dataset(500, testing_set=False)
data_val = sample_dataset(500, testing_set=False)
data_test = sample_dataset(500, testing_set=True)
# Plotting dataset densities.
figsize(12, 5)
fig, axs = plt.subplots(1, 2, sharey=False, tight_layout=False)
for ax, data, title in [(axs[0], data_train, "training"),
(axs[1], data_test, "testing")]:
_, _, _, density = ax.hist2d(
x=data["avg_rating"],
y=data["num_reviews"],
bins=(np.linspace(1, 5, num=21), np.linspace(0, 200, num=21)),
density=True,
cmap="Blues",
)
ax.set(xlim=(1, 5))
ax.set(ylim=(0, 200))
ax.set(xlabel="Average Rating")
ax.set(ylabel="Number of Reviews")
ax.title.set_text("Density of {} examples".format(title))
_ = fig.colorbar(density, ax=ax)
"""
Explanation: Let's generate the training, validation and testing datasets. When a restaurant is viewed in a search result, we can record the user's engagement (click or no click) as a sample point.
In practice, users often do not go through all search results. That is, users will likely only see restaurants already considered "good" by the ranking model currently in use, so "good" restaurants are viewed more frequently and are over-represented in the training dataset. When using more features, the training dataset can end up with large gaps in the "bad" parts of the feature space.
When the model is used for ranking, it is often evaluated on all relevant results with a more uniform distribution that is not well represented by the training dataset. A flexible and complicated model might fail in this case due to overfitting the over-represented data points, and thus lack generalizability. We handle this problem by applying domain knowledge to add shape constraints that guide the model to make reasonable predictions when it cannot pick them up from the training dataset.
In this example, the training dataset mostly consists of user interactions with good and popular restaurants, and the testing dataset has a uniform distribution to simulate the evaluation setting discussed above. Note that such a testing dataset will not be available in a real problem setting.
End of explanation
"""
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
shuffle=False,
)
# feature_analysis_input_fn is used for TF Lattice estimators.
feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
val_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_val,
y=data_val["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_test,
y=data_test["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
"""
Explanation: Defining the input_fns used for training and evaluation.
End of explanation
"""
def analyze_two_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def two_d_pred(avg_ratings, num_reviews):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
def two_d_click_through_rate(avg_ratings, num_reviews):
return np.mean([
click_through_rate(avg_ratings, num_reviews,
np.repeat(d, len(avg_ratings)))
for d in ["D", "DD", "DDD", "DDDD"]
],
axis=0)
figsize(11, 5)
plot_fns([("{} Estimated CTR".format(name), two_d_pred),
("CTR", two_d_click_through_rate)],
split_by_dollar=False)
"""
Explanation: Fitting Gradient Boosted Trees
Let's start with only two features: avg_rating and num_reviews.
We create a few auxiliary functions for plotting and calculating validation and test metrics.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
gbt_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
n_batches_per_layer=1,
max_depth=2,
n_trees=50,
learning_rate=0.05,
config=tf.estimator.RunConfig(tf_random_seed=42),
)
gbt_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(gbt_estimator, "GBT")
"""
Explanation: We can fit TensorFlow gradient boosted decision trees on the dataset.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
hidden_units=[16, 8, 8],
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
dnn_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(dnn_estimator, "DNN")
"""
Explanation: The model has captured the general shape of the true CTR and has decent validation metrics, but it shows counter-intuitive behavior in several parts of the input space: the estimated CTR decreases as the average rating or the number of reviews increases. This is due to a lack of sample points in areas not well covered by the training dataset. The model simply has no way to deduce the correct behavior solely from the data.
To solve this problem, we enforce the shape constraint that the model must output values monotonically increasing with respect to both the average rating and the number of reviews. We will later see how to implement this in TFL.
Fitting a DNN
We can repeat the same steps with a DNN classifier. A similar pattern appears: not having enough sample points with a small number of reviews results in nonsensical extrapolation. Note that even though the validation metric is better than the tree solution, the testing metric is much worse.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
"""
Explanation: Shape Constraints
TensorFlow Lattice (TFL) focuses on enforcing shape constraints to safeguard model behavior beyond the training data. These shape constraints are applied to TFL Keras layers; their details can be found in the TFL JMLR paper.
In this tutorial we use TF canned Estimators to cover various shape constraints, but note that all of these steps can be performed with models created from TFL Keras layers.
As with any other TensorFlow Estimator, TFL canned Estimators use feature columns to define the input format and a training input_fn to pass in the data. Using TFL canned Estimators also requires:
a model config: defining the model architecture and per-feature shape constraints and regularizers.
a feature analysis input_fn: a TF input_fn passing data for TFL initialization.
For a more thorough description, please refer to the canned Estimators tutorial or the API docs.
Monotonicity
We first address the monotonicity concerns by adding monotonicity shape constraints to both features.
To instruct TFL to enforce a shape constraint, we specify the constraint in the feature config. The following code shows how we can require the output to be monotonically increasing with respect to both num_reviews and avg_rating by setting monotonicity="increasing".
End of explanation
"""
def save_and_visualize_lattice(tfl_estimator):
saved_model_path = tfl_estimator.export_saved_model(
"/tmp/TensorFlow_Lattice_101/",
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec=tf.feature_column.make_parse_example_spec(
feature_columns)))
model_graph = tfl.estimators.get_model_graph(saved_model_path)
figsize(8, 8)
tfl.visualization.draw_model_graph(model_graph)
return model_graph
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: Using a CalibratedLatticeConfig creates a canned classifier that first applies a calibrator to each input (a piece-wise linear function for numeric features) followed by a lattice layer to non-linearly fuse the calibrated features. We can use tfl.visualization to visualize the model. In particular, the following plot shows the trained calibrators included in the canned classifier.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: With the constraints added, the estimated CTR will always increase as the average rating or the number of reviews increases. This is done by making sure that the calibrators and the lattice are monotonic.
Diminishing Returns
Diminishing returns means that the marginal gain of increasing a certain feature value decreases as we increase that value. In our case we expect the num_reviews feature to follow this pattern, so we can configure its calibrator accordingly. Notice that we can decompose diminishing returns into two sufficient conditions:
the calibrator is monotonically increasing, and
the calibrator is concave.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
# Larger num_reviews indicating more trust in avg_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
model_graph = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: Notice how the testing metric improves with the added concavity constraint. The prediction plot also better resembles the ground truth.
2D Shape Constraint: Trust
A 5-star rating for a restaurant with only one or two reviews is likely an unreliable rating (the restaurant might not actually be good), whereas a 4-star rating for a restaurant with hundreds of reviews is much more reliable (the restaurant is likely good in this case). We can see that the number of reviews of a restaurant changes how much trust we place in its average rating.
We can exercise TFL trust constraints to inform the model that the larger (or smaller) value of one feature indicates higher reliability or trust in another feature. This is done by setting the reflects_trust_in configuration in the feature config.
End of explanation
"""
lat_mesh_n = 12
lat_mesh_x, lat_mesh_y = tfl.test_utils.two_dim_mesh_grid(
lat_mesh_n**2, 0, 0, 1, 1)
lat_mesh_fn = tfl.test_utils.get_hypercube_interpolation_fn(
model_graph.output_node.weights.flatten())
lat_mesh_z = [
lat_mesh_fn([lat_mesh_x.flatten()[i],
lat_mesh_y.flatten()[i]]) for i in range(lat_mesh_n**2)
]
trust_plt = tfl.visualization.plot_outputs(
(lat_mesh_x, lat_mesh_y),
{"Lattice Lookup": lat_mesh_z},
figsize=(6, 6),
)
trust_plt.title("Trust")
trust_plt.xlabel("Calibrated avg_rating")
trust_plt.ylabel("Calibrated num_reviews")
trust_plt.show()
"""
Explanation: The following plot presents the trained lattice function. Due to the trust constraint, we expect larger values of the calibrated num_reviews to force a higher slope with respect to the calibrated avg_rating, resulting in a more significant move in the lattice output.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: Smoothing Calibrators
Let's now take a look at the calibrator of avg_rating. Though it is monotonically increasing, the changes in its slope are abrupt and hard to interpret. That suggests we might want to smooth this calibrator by setting up regularizers in regularizer_configs.
Here we apply a wrinkle regularizer to reduce changes in the curvature. You can also use the laplacian regularizer to flatten the calibrator and the hessian regularizer to make it more linear.
End of explanation
"""
def analyze_three_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def three_d_pred(avg_ratings, num_reviews, dollar_rating):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
"dollar_rating": dollar_rating,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
figsize(11, 22)
plot_fns([("{} Estimated CTR".format(name), three_d_pred),
("CTR", click_through_rate)],
split_by_dollar=True)
"""
Explanation: The calibrators are now smooth, and the overall estimated CTR better matches the ground truth. This is reflected both in the testing metric and in the contour plots.
Partial Monotonicity for Categorical Calibration
So far we have been using only two of the numeric features in the model. Here we add a third feature using a categorical calibration layer. Again we start by setting up helper functions for plotting and metric calculation.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
            # `D` restaurants have a smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: To involve the third feature, dollar_rating, recall that categorical features require a slightly different treatment in TFL, both as a feature column and as a feature config. Here we enforce the partial monotonicity constraint that, with all other inputs fixed, outputs for "DD" restaurants should be larger than for "D" restaurants. This is done using the monotonicity setting in the feature config.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
output_calibration=True,
output_calibration_num_keypoints=5,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="output_calib_wrinkle", l2=0.1),
],
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
            # `D` restaurants have a smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: This categorical calibrator shows the preference of the model output: DD > D > DDD > DDDD, which are constants in this setup. Notice that there is also a column for missing values. Although there are no missing features in the training and testing data of this tutorial, the model provides an imputation for missing values should they occur when the model is used downstream.
Here we also plot the predicted CTR of this model conditioned on dollar_rating. Notice that all the required constraints are fulfilled in each of the slices.
Output Calibration
For all the TFL models we have trained so far, the lattice layer (indicated as "Lattice" in the model graph) directly outputs the model prediction. Sometimes we are not sure whether the lattice output should be rescaled to emit model outputs:
the features are $log$ counts while the labels are counts.
the lattice is configured to have very few vertices, but the label distribution is relatively complicated.
In those cases we can add another calibrator between the lattice output and the model output to increase model flexibility. Here let's add a calibrator layer with 5 keypoints to the model we just created. We also add a regularizer for the output calibrator to keep the function smooth.
End of explanation
"""
|
xMyrst/BigData | python/howto/012_Módulo_NumPy_Procesado_Datos.ipynb | gpl-3.0 | # Import the numpy library
import numpy as np
# Create two arrays [start,stop,step]
x = np.arange(1,5)
y = np.arange(5,9)
# Create a mask array
cond = np.array([True, False, False, True])
x, y
"""
Explanation: NumPy MODULE
DATA PROCESSING WITH ARRAYS
Arrays manage memory much more efficiently than lists, which improves performance.
Many operations can be performed with expressions over arrays that would otherwise require multiple, costly loops. This is called vectorization.
NumPy functions execute as efficiently as they would in other languages such as Fortran, C and C++.
For cases where execution would be more efficient in another language, such as Fortran, there are tools like f2py that let us run code written in other languages from Python.
CONDITIONAL EXPRESSIONS
The where() function is the vectorized version of the ternary expression x if cond else y that we have already seen.
Suppose we have the three arrays:
End of explanation
"""
# First version, without the vectorized where() operation
# Done with a ternary expression inside a for loop
z1 = np.array([x if cond else y for x, y, cond in zip(x, y, cond)])
z1
# Second version - the vectorized where() operation
z2 = np.where(cond, x, y)
z2
"""
Explanation: Suppose we want to take the value of x whenever the condition in cond is met (and the value of y otherwise).
We therefore want to obtain the array [1,6,7,4].
End of explanation
"""
# Create an array of random values with the randn() function
a = np.random.randn(3,5)
a
# Using the where() function
# Where the value of 'a' is greater than or equal to 0, output 1, otherwise 0
r = np.where( a >= 0, 1, 0)
r
"""
Explanation: The last two arguments of the where operation do not have to be arrays; they can be scalars.
In data analysis, the where operation is widely used to create new arrays from the data in others. Suppose we have a two-dimensional array a and we want to build another array r such that:
$$r(x,y) = \begin{cases} 1 &\mbox{if } a(x,y) \ge 0 \\ 0 &\mbox{if } a(x,y) \lt 0 \end{cases}$$
End of explanation
"""
# Using the where() function
# Where the value of 'a' is greater than or equal to 0, multiply the value by 10, otherwise output 0
r = np.where( a >= 0, a * 10 , 0)
r
"""
Explanation: Now suppose that the array r is such that:
$$r(x,y) = \begin{cases} a(x,y)\times 10 &\mbox{if } a(x,y) \ge 0 \\
0 &\mbox{if } a(x,y) \lt 0 \end{cases}$$
End of explanation
"""
# Using the where() function
# Convert the array 'a' to one of type int32
a = a.astype(np.int32)
print(a)
# Where the value of 'a' is greater than 0, output 1
# Where the value of 'a' is less than 0, output -1
# Where the value of 'a' is equal to 0, output 0
r = np.where(a > 0, 1 , np.where( a < 0, -1 , 0))
r
"""
Explanation: But we can also have more complicated expressions. For example:
$$r(x,y) = \begin{cases} 1 &\mbox{if } a(x,y) \gt 0 \\
-1 &\mbox{if } a(x,y) \lt 0 \\
0 &\mbox{if } a(x,y) = 0 \end{cases}$$
End of explanation
"""
# 6 random values in the interval [0,1)
a = np.random.rand(6)
print("a: ",a)
print('Sum: ',a.sum())
print('Minimum value: ',a.min())
print('Maximum value: ',a.max())
"""
Explanation: <BR>
MATHEMATICAL AND STATISTICAL METHODS
The NumPy module provides methods for other operations, such as the minimum element of an array, the maximum, the mean of the elements of an array, etc.
sum
cumsum
cumprod
max
argmax
min
argmin
mean
var
std
The full list of functions can be found at SciPy.org
End of explanation
"""
# 6 values in the interval [0,6)
a = np.arange(6)
print("a: ",a)
# 'b' is defined from the values of 'a', arranging them into 2 rows, 3 columns
b = a.reshape(2,3)
print( "b: ")
print(b)
print ( b.sum(axis = 0) ) # sum over columns
print ( b.sum(axis = 1) ) # sum over rows
"""
Explanation: The previous operations were carried out over all the values of the array, regardless of its shape.
If we have a two-dimensional array, it is possible to compute the sum of the columns, or of the rows.
All we have to do is indicate it through the axis parameter of the method.
End of explanation
"""
# Random values with a Gaussian distribution, 4 rows, 5 columns
a = np.random.randn(4,5)
a
# Shows 'True' where the condition holds, otherwise 'False'
(a>0)
# Sum all the 'True' values (True = 1)
(a>0).sum()
"""
Explanation: <br>
LOGICAL OPERATIONS
Suppose we want to count the number of positive elements in a multidimensional array.
We can exploit the fact that True has the value 1 and False the value 0
End of explanation
"""
# Returns 'True' if all elements are 'True'
b = (a > 0)
b.all()
# Returns 'True' if any element is 'True'
b.any()
"""
Explanation: The all and any methods are useful for working with arrays of booleans.
End of explanation
"""
# Sort the values of array 'a' and remove repeated elements
a = np.array([1,5,6,4,1,4,5,3,1,1,4,4,4,3,2,2,2,2])
# The original array 'a' is not modified
np.unique(a)
"""
Explanation: <br>
SET OPERATIONS
The unique operation applied to an array A returns a sorted array of the values in A without repetition:
End of explanation
"""
# Reshape array 'b' as 6 rows, 3 columns
b = a.reshape(6,3)
b
np.in1d(b, [1,2,3])
"""
Explanation: The in1d function checks whether the values of one array are contained in another set of values. The returned value is an array of booleans.
End of explanation
"""
# Create a one-dimensional array
y = np.array([2.,4.,6.,8.])
print(y)
# Saved in binary format with the .npy extension
np.save('mi_array', y)
# To load the saved file
a = np.load('mi_array.npy')
a
"""
Explanation: <br>
READING AND WRITING ARRAYS TO FILES
Binary format
NumPy provides the save and load functions to store and load arrays on disk in binary format.
End of explanation
"""
# Create a 2x2 array from 'a'
b = a.reshape(2,2)
# Save it in txt format
np.savetxt("mi_otro_array.txt", b, fmt='%d', delimiter=',')
# Load it and display it
c = np.loadtxt('mi_otro_array.txt', delimiter=',')
c
"""
Explanation: <br>
txt format
The savetxt and loadtxt operations are the text-format equivalents of save and load.
End of explanation
"""
|
maubarsom/ORFan-proteins | phage_assembly/5_annotation/asm_v1.2/assembly_homologues/4_parse_blastn_results.ipynb | mit | #Load blastn hits
import pandas as pd
import requests
blastn_hits = pd.read_csv("blastn_hits.csv")
"""
Explanation: Load blast hits
End of explanation
"""
#List of sequences to extract
seqs_for_msa = blastn_hits[blastn_hits.db == "env_nt"].sort_values(by="ali_len",ascending=False).head(n=10)
#Export megahit ids to extract directly from fasta : Empty!
#seqs_for_msa[seqs_for_msa.db == "hmp_nuc"]["subject_id"].to_csv("d9539_hmp_homologs.txt",sep="\t",index=False,header=False)
"""
Explanation: 1. Analyze blastn hits
1.1 Extract best env_nt hits to perform a genome MSA
The main goal is to perform an MSA between our D9539 assembly and the similar seqs in the dbs
End of explanation
"""
#Use efetch to extract and save to a file the fasta with the sequences
gis_to_get = ",".join(set(str(int(x)) for x in seqs_for_msa[seqs_for_msa.db == "env_nt"]["gi"]))
print(gis_to_get)
r = requests.get("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=nuccore&id={}&rettype=fasta&retmode=text".format(gis_to_get))
with open("d9539_env_nt_homologs.fa","w") as env_nt_fh:
env_nt_fh.write(str(r.content.decode()))
#Remove whitespace between seqs
!sed "/^$/d" d9539_env_nt_homologs.fa > d9539_env_nt_homologs.fasta
!rm d9539_env_nt_homologs.fa
"""
Explanation: Obtain fastas for env_nt homologs from eutils
End of explanation
"""
|
yhilpisch/dx | 08_dx_fourier_pricing.ipynb | agpl-3.0 | import dx
import datetime as dt
"""
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Fourier-based Option Pricing
For several reasons, it is beneficial to have available alternative valuation and pricing approaches to the Monte Carlo simulation approach. One application area is to benchmark Monte Carlo-based valuation results against other (potentially more accurate) results. Another area is model calibration to liquidly traded vanilla instruments where generally faster numerial methods can be applied.
This part introduces Fourier-based valuation functions and benchmarks valuation results from the "standard", simulation-based DX Analytics modeling approach against the output of those functions.
End of explanation
"""
# constant short rate
r = dx.constant_short_rate('r', 0.01)
# geometric Brownian motion
me = dx.market_environment('me', dt.datetime(2015, 1, 1))
me.add_constant('initial_value', 100.)
me.add_constant('volatility', 0.2)
me.add_constant('final_date', dt.datetime(2015, 12, 31))
me.add_constant('currency', 'EUR')
# jump component
me.add_constant('lambda', 0.4)
me.add_constant('mu', -0.6)
me.add_constant('delta', 0.2)
# stochastic volatility component
me.add_constant('rho', -.5)
me.add_constant('kappa', 5.0)
me.add_constant('theta', 0.02)
me.add_constant('vol_vol', 0.3)
# valuation environment
val_env = dx.market_environment('val_env', dt.datetime(2015, 1, 1))
val_env.add_constant('paths', 55000)
# 55,000 paths
val_env.add_constant('frequency', 'D')
# daily frequency
val_env.add_curve('discount_curve', r)
val_env.add_constant('starting_date', dt.datetime(2015, 1, 1))
val_env.add_constant('final_date', dt.datetime(2015, 12, 31))
# add valuation environment to market environment
me.add_environment(val_env)
"""
Explanation: Risk Factors
The examples and benchmarks to follow rely on four different models:
geometric Brownian motion (Black-Scholes-Merton 1973)
jump diffusion (Merton 1976)
stochastic volatility (Heston 1993)
stochastic volatility jump diffusion (Bates 1996)
For details on these models and the Fourier-based option pricing approach refer to Hilpisch (2015) (cf. http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1119037999.html).
We first define the single market and valuation environments.
End of explanation
"""
gbm = dx.geometric_brownian_motion('gbm', me)
jd = dx.jump_diffusion('jd', me)
sv = dx.stochastic_volatility('sv', me)
svjd = dx.stoch_vol_jump_diffusion('svjd', me)
"""
Explanation: Equipped with the single market environments and the valuation environment, we can instantiate the simulation model objects.
End of explanation
"""
# market environment for the options
me_option = dx.market_environment('option', dt.datetime(2015, 1, 1))
me_option.add_constant('maturity', dt.datetime(2015, 12, 31))
me_option.add_constant('strike', 100.)
me_option.add_constant('currency', 'EUR')
me_option.add_environment(me)
me_option.add_environment(val_env)
euro_put_gbm = dx.valuation_mcs_european_single('euro_put', gbm, me_option,
'np.maximum(strike - maturity_value, 0)')
euro_call_gbm = dx.valuation_mcs_european_single('euro_call', gbm, me_option,
'np.maximum(maturity_value - strike, 0)')
euro_put_jd = dx.valuation_mcs_european_single('euro_put', jd, me_option,
'np.maximum(strike - maturity_value, 0)')
euro_call_jd = dx.valuation_mcs_european_single('euro_call', jd, me_option,
'np.maximum(maturity_value - strike, 0)')
euro_put_sv = dx.valuation_mcs_european_single('euro_put', sv, me_option,
'np.maximum(strike - maturity_value, 0)')
euro_call_sv = dx.valuation_mcs_european_single('euro_call', sv, me_option,
'np.maximum(maturity_value - strike, 0)')
euro_put_svjd = dx.valuation_mcs_european_single('euro_put', svjd, me_option,
'np.maximum(strike - maturity_value, 0)')
euro_call_svjd = dx.valuation_mcs_european_single('euro_call', svjd, me_option,
'np.maximum(maturity_value - strike, 0)')
"""
Explanation: Plain Vanilla Put and Call Options
Based on the risk factors just defined, we define eight different options: a European put and a European call option per risk factor.
End of explanation
"""
import numpy as np
import pandas as pd
"""
Explanation: Valuation Benchmarking
In this sub-section, we benchmark the Monte Carlo value estimates against the Fourier-based pricing results.
End of explanation
"""
freq = '2m' # used for maturity definitions
periods = 3 # number of intervals for maturity grid
strikes = 5 # number of strikes per maturity
initial_value = 100 # initial value for all risk factors
start = 0.8 # lowest strike in percent of spot
end = 1.2 # highest strike in percent of spot
start_date = '2015/3/1' # start date for simulation/pricing
"""
Explanation: We first define some parameters used throughout.
End of explanation
"""
euro_put_gbm.present_value()
# method call needed for initialization
"""
Explanation: Geometric Brownian Motion
We need to initialize the valuation object first.
End of explanation
"""
bsm_option = dx.BSM_european_option('bsm_opt', me_option)
"""
Explanation: There is a valuation class for European put and call options in the Black-Scholes-Merton model available called BSM_european_option. It is based on the analytical pricing formula for that model and is instantiated as follows:
End of explanation
"""
%%time
# European put
print('%4s | %7s | %7s | %7s | %7s | %7s' % ('T', 'strike', 'mcs', 'fou', 'dif', 'rel'))
for maturity in pd.date_range(start=start_date, freq=freq, periods=periods):
bsm_option.maturity = maturity
euro_put_gbm.update(maturity=maturity)
for strike in np.linspace(start, end, strikes) * initial_value:
T = (maturity - me_option.pricing_date).days / 365.
euro_put_gbm.update(strike=strike)
mcs = euro_put_gbm.present_value()
bsm_option.strike = strike
ana = bsm_option.put_value()
print('%4.3f | %7.3f | %7.4f | %7.4f | %7.4f | %7.2f '
% (T, strike, mcs, ana, mcs - ana, (mcs - ana) / ana * 100))
"""
Explanation: The following routine benchmarks the Monte Carlo value estimates for the European put option against the output from the valuation object based on the analytical pricing formula. The results are good, since this model can be discretized exactly and the Monte Carlo estimates therefore generally converge well.
End of explanation
"""
euro_call_gbm.present_value()
# method call needed for initialization
%%time
# European calls
print('%4s | %7s | %7s | %7s | %7s | %7s' % ('T', 'strike', 'mcs', 'fou', 'dif', 'rel'))
for maturity in pd.date_range(start=start_date, freq=freq, periods=periods):
euro_call_gbm.update(maturity=maturity)
for strike in np.linspace(start, end, strikes) * initial_value:
T = (maturity - me_option.pricing_date).days / 365.
euro_call_gbm.update(strike=strike)
mcs = euro_call_gbm.present_value()
bsm_option.strike = strike
bsm_option.maturity = maturity
ana = bsm_option.call_value()
print('%4.3f | %7.3f | %7.4f | %7.4f | %7.4f | %7.2f ' \
% (T, strike, mcs, ana, mcs - ana, (mcs - ana) / ana * 100))
"""
Explanation: The same now for the European call option.
End of explanation
"""
def valuation_benchmarking(valuation_object, fourier_function):
print('%4s | %7s | %7s | %7s | %7s | %7s' % ('T', 'strike', 'mcs', 'fou', 'dif', 'rel'))
for maturity in pd.date_range(start=start_date, freq=freq, periods=periods):
valuation_object.update(maturity=maturity)
me_option.add_constant('maturity', maturity)
for strike in np.linspace(start, end, strikes) * initial_value:
T = (maturity - me_option.pricing_date).days / 365.
valuation_object.update(strike=strike)
mcs = valuation_object.present_value()
me_option.add_constant('strike', strike)
fou = fourier_function(me_option)
print('%4.3f | %7.3f | %7.4f | %7.4f | %7.4f | %7.3f '
% (T, strike, mcs, fou, mcs - fou, (mcs - fou) / fou * 100))
"""
Explanation: Benchmarking Function
All other valuation benchmarks are generated with Fourier-based pricing functions for which the handling is identical. We therefore use the following function for the benchmarks from now on:
End of explanation
"""
euro_put_jd.present_value()
# method call needed for initialization
"""
Explanation: Jump Diffusion
The next model is the jump diffusion as proposed by Merton (1976).
End of explanation
"""
%time valuation_benchmarking(euro_put_jd, dx.M76_put_value)
"""
Explanation: There is a Fourier-based pricing function available which is called M76_put_value and which is used for the benchmarking for the European put options that follows.
End of explanation
"""
euro_call_jd.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_call_jd, dx.M76_call_value)
"""
Explanation: Accordingly, the benchmarking for the European call options based on the Fourier-based M76_call_value function.
End of explanation
"""
euro_put_sv.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_put_sv, dx.H93_put_value)
"""
Explanation: Stochastic Volatility
Stochastic volatility models like the one of Heston (1993) are popular to reproduce implied volatility smiles observed in markets. First, the benchmarking for the European put options using the Fourier-based H93_put_value function.
End of explanation
"""
euro_call_sv.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_call_sv, dx.H93_call_value)
"""
Explanation: Second, the benchmarking for the European call options based on the Fourier-based H93_call_value function.
End of explanation
"""
euro_put_svjd.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_put_svjd, dx.B96_put_value)
"""
Explanation: Stochastic Volatility Jump-Diffusion
Finally, we consider the combination of the stochastic volatility and jump diffusion models from before as proposed by Bates (1996). The Fourier-based pricing function for European put options is called B96_put_value.
End of explanation
"""
euro_call_svjd.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_call_svjd, dx.B96_call_value)
"""
Explanation: The Fourier-based counterpart function for European call options is called B96_call_value.
End of explanation
"""
|
cschnaars/intro-to-coding-in-python | notebooks/intro_to_coding_in_python_part_2_lists_and_dictionaries_with_code.ipynb | mit | my_friends = ['Aaron', 'Sue', 'Chris', 'Renee']
"""
Explanation: Introduction to Coding in Python, Part 2
Investigative Reporters and Editors Conference, New Orleans, June 2016<br />
By Aaron Kessler and Christopher Schnaars<br />
Lists
A list is a mutable (meaning it can be changed), ordered collection of objects. Everything in Python is an object, so a list can contain not only strings and numbers, but also functions and even other lists.
Let's make a list of new friends we've made at the IRE conference: Aaron, Sue, Chris and Renee. We'll call our list my_friends. To create a list, put a comma-separated list of strings (our friends' names) inside square brackets ([]). These brackets are how we tell Python we're building a list. See if you can figure out what to do in the box below. If you can't figure it out, don't sweat it, and read on for the answer:
End of explanation
"""
my_friends
"""
Explanation: Did you get it? The answer: my_friends = ['Aaron', 'Sue', 'Chris', 'Renee']
Type my_friends in the box below, and you'll see Python remembers the order of the names:
End of explanation
"""
my_friends.append('Cora')
my_friends
"""
Explanation: We met Cora at an awesome Python class we just attended, so let's add her to our list of friends. To do that, we're going to use a method called append. A method is a bit of code associated with a Python object (in this case, our list) to provide some built-in functionality. Every time you create a list in Python, you get the functionality of the append method (and a bunch of other methods, too) for free.
To use the append method, type the name of your list, followed by a period and the name of the method, and then put the string we want to add to our list ('Cora') in parentheses. Try it:
End of explanation
"""
my_friends[0]
"""
Explanation: In a list, you can retrieve a single item by index. To do this, type the name of your list, followed by the numeric position of the item you want inside square brackets. There's just one sticking point: Indices in Python are zero-based, which means the first item is at position 0, the second item is at position 1 and so on. If that sounds confusing, don't worry about it. There actually are very good, logical reasons for this behavior that we won't dive into here. For now, just accept our word that you'll get used to it and see if you can figure out what to type to get the first name in our list (Aaron):
End of explanation
"""
# The first two names in the list (['Aaron', 'Sue'])
my_friends[:2]
# The second and third names in the list (['Sue', 'Chris'])
my_friends[1:3]
# The second and fourth names in the list (['Sue', 'Renee'])
my_friends[1::2]
# The last three names in the list, in reverse order (['Cora', 'Renee', 'Chris'])
my_friends[4:1:-1]
"""
Explanation: You can retrieve a contiguous subset of names from your list, called a slice. To do this, type the name of your list and provide up to three parameters in square brackets, separated by colons. Just leave any parameter you don't need blank, and Python will use its default value. These parameters, in order, are:
<ul><li>The index of the first item you want. Default is 0.</li>
<li>The index of the first item you *don't* want. You can set this to a negative number to skip a specific number of items at the end of your `list`. For example, a value of -1 here would stop at the next-to-last item in your `list`. Default is the number of items in your `list` (also called the *length*).</li>
<li>The *step* value, which we can use to skip over names in our `list`. For example, if you want every other name, you could set the *step* to 2. If you want to go backwards through your `list`, set this to a negative number. Default is 1.</li></ul>
Use the boxes below to see if you can figure out how to retrieve these lists of names. There could be more than one way to answer each question:
<ol><li>The first two names in the list (`['Aaron', 'Sue']`)</li>
<li>The second and third names in the list (`['Sue', 'Chris']`)</li>
<li>The second and fourth names in the list (`['Sue', 'Renee']`)</li>
<li>The last three names in the list, in reverse order (`['Cora', 'Renee', 'Chris']`)</li></ol>
End of explanation
"""
my_friends = ['Aaron', 'Sue', 'Chris', 'Renee', 'Cora']
your_friends = my_friends
print('My friends are: ')
print(my_friends)
print('\nAnd your friends are: ') # \n is code for newline.
print(your_friends)
"""
Explanation: Mutable objects
In many programming languages, it's common to assign a value (such as an integer or string) to a variable. Python works a bit differently. In our example above, Python creates a list in memory to house the names of our friends and then creates the object my_friends to point to the location in memory where this list is located. Why is that important? Well, for one thing, it means that if we make a copy of a list, Python keeps only one list in memory and just creates a second pointer. While this is not a concern in our example code, it could save a lot of computer memory for a large list containing hundreds or even thousands of objects. Consider this code:
End of explanation
"""
your_friends.remove('Cora')
your_friends
"""
Explanation: Here's where mutability will bite you, if you're not careful. You haven't met Cora yet and don't know how nice she is, so you decide to remove her from your list of friends, at least for now. See if you can figure out what to type in the box below. You want to use the remove method to remove 'Cora' from your_friends. Use the second box below to verify Cora has been removed:
End of explanation
"""
my_friends
"""
Explanation: Perfect! Or is it? Let's take another look at my_friends:
End of explanation
"""
my_friends.append('Cora')
your_friends = my_friends.copy()
your_friends.remove('Cora')
my_friends
your_friends
"""
Explanation: Uh-oh! You've unfriended Cora for me too! Remember that my_friends and your_friends are just pointers to the same list, so when you change one, you're really changing both. If you want the two lists to be independent, you must explicitly make a copy using, you guessed it, the copy method. In the box below:
<ul><li>Add Cora back to `my_friends`.</li>
<li>Use the `copy` method to assign a copy of `my_friends` to `your_friends`.</li>
<li>Remove Cora from `your_friends`.</li></ul>
You can use the second and third boxes below to test whether your code is correct.
End of explanation
"""
friend = {'last_name': 'Schnaars', 'first_name': 'Christopher', 'works_for': 'USA Today', 'favorite_food': 'spam'}
"""
Explanation: Dictionaries
In Python, a dictionary is a mutable, unordered collection of key-value pairs. Consider:
End of explanation
"""
friend
"""
Explanation: Note that our data is enclosed in curly braces, which tell Python you are building a dictionary.<br />
<br />
Now notice what happens when we ask Python to spit this information back to us:
End of explanation
"""
friend['favorite_sketch'] = 'dead parrot'
friend
"""
Explanation: Notice that Python did not return the list of key-value pairs in the same order as we entered them. Remember that dictionaries are unordered collections. This might bother you, but it shouldn't. You'll find in practice it is not a problem. Because key order varies, you can't access a value by index as you might with a list, so something like friend[0] will not work.
You might notice that the keys are listed in alphabetical order. This is <u>not</u> always the case. You can't assume keys will be in any other particular order.
To add a new key-value pair, simply put the new key in brackets and assign the value with an = sign. Try to add the key favorite_sketch to our dictionary, and set its value to dead parrot:
End of explanation
"""
friend['first_name'] = 'Chris'
friend
"""
Explanation: To replace an existing value, simply re-assign it. Change first_name to Chris:
End of explanation
"""
|
palandatarxcom/sklearn_tutorial_cn | notebooks/04.3-Density-GMM.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
"""
Explanation: This notebook was compiled and edited by Jake Vanderplas. The source code and license file are on GitHub. The Chinese translation was completed by 派兰数据 on the 派兰 big data analysis platform; its source code is on GitHub.
Density Estimation: Gaussian Mixture Models
Here we will discuss Gaussian mixture models, an unsupervised algorithm for clustering and density estimation.
We begin with the basic setup:
End of explanation
"""
np.random.seed(2)
x = np.concatenate([np.random.normal(0, 2, 2000),
np.random.normal(5, 5, 2000),
np.random.normal(3, 0.5, 600)])
plt.hist(x, 80, normed=True)
plt.xlim(-10, 20);
"""
Explanation: Introduction to Gaussian Mixture Models
We previously saw the example of K-means, a clustering algorithm that uses the EM approach.
Here we consider a method that works both for clustering and for density estimation.
As an example, suppose we have some one-dimensional data drawn from a distribution:
End of explanation
"""
from sklearn.mixture import GMM
X = x[:, np.newaxis]
clf = GMM(4, n_iter=500, random_state=3).fit(X)
xpdf = np.linspace(-10, 20, 1000)
density = np.exp(clf.score(xpdf[:, np.newaxis]))
plt.hist(x, 80, normed=True, alpha=0.5)
plt.plot(xpdf, density, '-r')
plt.xlim(-10, 20);
"""
Explanation: A Gaussian mixture model lets us approximate the data density:
End of explanation
"""
clf.means_
clf.covars_
clf.weights_
plt.hist(x, 80, normed=True, alpha=0.3)
plt.plot(xpdf, density, '-r')
for i in range(clf.n_components):
pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],
np.sqrt(clf.covars_[i, 0])).pdf(xpdf)
plt.fill(xpdf, pdf, facecolor='gray',
edgecolor='none', alpha=0.3)
plt.xlim(-10, 20);
"""
Explanation: Note that this density was estimated by the Gaussian mixture algorithm; we can verify it by inspecting the fitted means_, covars_, and weights_ attributes:
End of explanation
"""
print(clf.bic(X))
print(clf.aic(X))
"""
Explanation: The individual Gaussians are estimated with the EM algorithm, much as in K-means, except that the cluster assignments are soft: the posterior probabilities are used to compute the weighted means and covariances.
Somewhat surprisingly, the algorithm is proven to converge to an optimum (although not necessarily the global one).
How many Gaussians?
Given a model, we can evaluate how well it fits the data in one of several ways.
For example, there are the Akaike information criterion (AIC) and the Bayesian information criterion (BIC):
End of explanation
"""
n_estimators = np.arange(1, 10)
clfs = [GMM(n, n_iter=1000).fit(X) for n in n_estimators]
bics = [clf.bic(X) for clf in clfs]
aics = [clf.aic(X) for clf in clfs]
plt.plot(n_estimators, bics, label='BIC')
plt.plot(n_estimators, aics, label='AIC')
plt.legend();
"""
Explanation: Let's look at how the model quality changes as the number of Gaussians varies:
End of explanation
"""
np.random.seed(0)
# Add 20 outliers
true_outliers = np.sort(np.random.randint(0, len(x), 20))
y = x.copy()
y[true_outliers] += 50 * np.random.randn(20)
clf = GMM(4, n_iter=500, random_state=0).fit(y[:, np.newaxis])
xpdf = np.linspace(-10, 20, 1000)
density_noise = np.exp(clf.score(xpdf[:, np.newaxis]))
plt.hist(y, 80, normed=True, alpha=0.5)
plt.plot(xpdf, density_noise, '-r')
plt.xlim(-15, 30);
"""
Explanation: The figure shows that, by both AIC and BIC, four Gaussians fit the data best.
Example: GMM for outlier detection
A GMM is a so-called generative model: a probabilistic model from which we can generate a dataset.
For outlier detection, this is useful as follows: we can simply evaluate the likelihood of each point under the generative model; points with a relatively low likelihood (where "relatively" is set by your preferred bias-variance trade-off) can be treated as outliers.
Let's build a dataset with some outliers to see this in action:
End of explanation
"""
log_likelihood = clf.score_samples(y[:, np.newaxis])[0]
plt.plot(y, log_likelihood, '.k');
detected_outliers = np.where(log_likelihood < -9)[0]
print("true outliers:")
print(true_outliers)
print("\ndetected outliers:")
print(detected_outliers)
"""
Explanation: Now let's evaluate the likelihood of each point under the generative model, plotting it as a function of y:
End of explanation
"""
set(true_outliers) - set(detected_outliers)
"""
Explanation: The algorithm misses some of the expected outliers (some of them even sit right in the middle of the distribution!).
The following outliers were overlooked:
End of explanation
"""
set(detected_outliers) - set(true_outliers)
"""
Explanation: And some points were incorrectly flagged as outliers:
End of explanation
"""
from sklearn.neighbors import KernelDensity
kde = KernelDensity(0.15).fit(x[:, None])
density_kde = np.exp(kde.score_samples(xpdf[:, None]))
plt.hist(x, 80, normed=True, alpha=0.5)
plt.plot(xpdf, density, '-b', label='GMM')
plt.plot(xpdf, density_kde, '-r', label='KDE')
plt.xlim(-10, 20)
plt.legend();
"""
Explanation: Finally, note that everything above was done for one-dimensional data. GMM generalizes to data in multiple dimensions, as we will see later.
Other density estimation algorithms
There are other popular density estimation algorithms as well, such as kernel density estimation, which you can find in sklearn.neighbors.KernelDensity. In a sense, this can be thought of as a variant of GMM with one Gaussian kernel placed at the location of every training point!
End of explanation
"""
|
huiyi1990/maths-with-python | 02-programs.ipynb | mit | import math
x = math.sin(1.2)
"""
Explanation: Programs
Using the Python console to type in commands works fine, but has serious drawbacks. It doesn't save the work for the future. It doesn't allow the work to be re-used. It's frustrating to edit when you make a mistake, or want to make a small change. Instead, we want to write a program.
A program is a text file containing Python commands. It can be written in any text editor. Something like the editor in spyder is ideal: it has additional features for helping you write code. However, any plain text editor will work. A program like Microsoft Word will not work, as it will try and save additional information with the file.
Let us use a simple pair of Python commands:
End of explanation
"""
x
"""
Explanation: Go to the editor in spyder and enter those commands in a file:
python
import math
x = math.sin(1.2)
Save this file in a suitable location and with a suitable name, such as lab1_basic.py (the rules and conventions for filenames are similar to those for variable names laid out above: descriptive, lower case names without spaces). The file extension should be .py: spyder should add this automatically.
To run this program, either
press the green "play" button in the toolbar;
press the function key F5;
select "Run" from the "Run" menu.
In the console you should see a line like
runfile('/Users/ih3/PythonLabs/lab1_basic.py', wdir='/Users/ih3/PythonLabs')
appear, and nothing else. To check that the program has worked, check the value of x. In the console just type x:
End of explanation
"""
from math import pi
theta_d = 30.0
theta_r = pi / 180.0 * theta_d
print(theta_r)
"""
Explanation: Also, in the top right of the spyder window, select the "Variable explorer" tab. It shows the variables that it currently knows, which should include x, its type (float) and its value.
If there are many variables known, you may worry that your earlier tests had already set the value for x and that the program did not actually do anything. To get back to a clean state, type %reset in the console to delete all variables - you will need to confirm that you want to do this. You can then re-run the program to test that it worked.
Using programs and modules
In previous sections we have imported and used standard Python libraries, packages or modules, such as math. This is one way of using a program, or code, that someone else has written. To do this for ourselves, we use exactly the same syntax.
Suppose we have the file lab1_basic.py exactly as above. Write a second file containing the lines
python
import lab1_basic
print(lab1_basic.x)
Save this file, in the same directory as lab1_basic.py, say as lab1_import.py. When we run this program, the console should show something like
runfile('/Users/ih3/PythonLabs/lab2_import.py', wdir='/Users/ih3/PythonLabs')
0.9320390859672263
This shows what the import statement is doing. All the library imports, definitions and operations in the imported program (lab1_basic) are performed. The results are then available to us, using the dot notation, via lab1_basic.<variable>, or lab1_basic.<function>.
To build up a program, we write Python commands into plain text files. When we want to use, or re-use, those definitions or results, we use import on the name of the file to recover their values.
Note
We saved both files - the original lab1_basic.py, and the program that imported lab1_basic.py, in the same directory. If they were in different directories then Python would not know where to find the file it was trying to import, and would give an error. The solution to this is to create a package, which is rather more work.
Functions
We have already seen and used some functions, such as the log and sin functions from the math package. However, in programming, a function is more general; it is any set of commands that acts on some input parameters and returns some output.
Functions are central to effective programming, as they stop you from having to repeat yourself and reduce the chances of making a mistake. Defining and using your own functions is the next step.
Let us write a function that converts angles from degrees to radians. The formula is
\begin{equation}
\theta_r = \frac{\pi}{180} \theta_d,
\end{equation}
where $\theta_r$ is the angle in radians, and $\theta_d$ is the angle in degrees. If we wanted to do this for, eg, $\theta_d = 30^{\circ}$, we could use the commands
End of explanation
"""
from math import pi
def degrees_to_radians(theta_d):
"""
Convert an angle from degrees to radians.
Parameters
----------
theta_d : float
The angle in degrees.
Returns
-------
theta_r : float
The angle in radians.
"""
theta_r = pi / 180.0 * theta_d
return theta_r
"""
Explanation: This is effective for a single angle. If we want to repeat this for many angles, we could copy and paste the code. However, this is dangerous. We could make a mistake in editing the code. We could find a mistake in our original code, and then have to remember to modify every location where we copied it to. Instead we want to have a single piece of code that performs an action, and use that piece of code without modification whenever needed.
This is summarized in the "DRY" principle: do not repeat yourself. Instead, convert the code into a function and use the function.
We will define the function and show that it works, then discuss how:
End of explanation
"""
print(degrees_to_radians(30.0))
print(degrees_to_radians(60.0))
print(degrees_to_radians(90.0))
"""
Explanation: We check that it works by printing the result for multiple angles:
End of explanation
"""
help(degrees_to_radians)
"""
Explanation: How does the function definition work?
First we need to use the def command:
python
def degrees_to_radians(theta_d):
This command effectively says "what follows is a function". The first word after def will be the name of the function, which can be used to call it later. This follows similar rules and conventions to variables and files (no spaces, lower case, words separated by underscores, etc.).
After the function name, inside brackets, is the list of input parameters. If there are no input parameters the brackets still need to be there. If there is more than one parameter, they should be separated by commas.
After the bracket there is a colon :. The use of colons to denote special "blocks" of code happens frequently in Python code, and we will see it again later.
After the colon, all the code is indented by four spaces or one tab. Most helpful text editors, such as the spyder editor, will automatically indent the code after a function is defined. If not, use the tab key to ensure the indentation is correct. In Python, whitespace and indentation is essential: it defines where blocks of code (such as functions) start and end. In other languages special keywords or characters may be used, but in Python the indentation of the code statements is the key.
The statement on the next few lines in the function documentation, or docstring.
```python
"""
Convert an angle from degrees to radians.
...
"""
```
This is optional: it's not needed to make the code run. However, documentation is extremely useful for the next user of the code. As the next user is likely to be you in a week (or a month), when you'll have forgotten the details of what you did, documentation helps you first.
The docstring can be any string within quotes. Using "triple quotes" allows the string to go across multiple lines. The docstring can be rapidly printed using the help function:
End of explanation
"""
help(math.sin)
"""
Explanation: This allows you to quickly use code correctly without having to look at the code. We can do the same with functions from packages, such as
End of explanation
"""
%reset
"""
Explanation: You can put whatever you like in the docstring. The format used above in the degrees_to_radians function follows the numpydoc convention, but there are other conventions that work well. One reason for following this convention can be seen in spyder. Copy the function degrees_to_radians into the console, if you have not done so already. Then, in the top right part of the window, select the "Object inspector" tab. Ensure that the "Source" is "Console". Type degrees_to_radians into the "Object" box. You should see the help above displayed, but nicely formatted.
Going back to the function itself. After the comment, the code to convert from degrees to radians starts. Compare it to the original code typed directly into the console. In the console we had
python
from math import pi
theta_d = 30.0
theta_r = pi / 180.0 * theta_d
In the function we have
python
theta_r = pi / 180.0 * theta_d
return theta_r
The line
python
from math import pi
is in the function file, but outside the definition of the function itself.
There are four differences.
The function code is indented by four spaces, or one tab.
The input parameter theta_d must be defined in the console, but not in the function. When the function is called the value of theta_d is given, but inside the function itself it is not: the function knows that the specific value of theta_d will be given as input.
The output of the function theta_r is explicitly returned, using the return statement.
The import statement is moved outside the function definition - this is the convention recommended by PEP8.
Aside from these points, the code is identical. A function, like a program, is a collection of Python statements exactly as you would type into a console. The first three differences above are the essential differences to keep in mind: the first is specific to Python (other programming languages have something similar), whilst the other differences are common to most programming languages.
Scope
Names used internally by the function are not visible externally. Also, the name used for the output of the function need not be used externally. To see an example of this, start with a clean slate by typing %reset into the console.
End of explanation
"""
from math import pi
def degrees_to_radians(theta_d):
"""
Convert an angle from degrees to radians.
Parameters
----------
theta_d : float
The angle in degrees.
Returns
-------
theta_r : float
The angle in radians.
"""
theta_r = pi / 180.0 * theta_d
return theta_r
"""
Explanation: Then copy and paste the function definition again:
End of explanation
"""
angle = degrees_to_radians(45.0)
print(angle)
"""
Explanation: (Alternatively you can use the history in the console by pressing the up arrow until the definition of the function you previously entered appears. Then click at the end of the function and press Return). Now call the function as
End of explanation
"""
theta_d
pi
"""
Explanation: But the variables used internally, theta_d and theta_r, and also pi, are not known outside the function:
End of explanation
"""
x1 = 1.1
def print_x1():
print(x1)
print(x1)
print_x1()
x2 = 1.2
def print_x2():
x2 = 2.3
print(x2)
print(x2)
print_x2()
"""
Explanation: This is an example of scope: the existence of variables, and their values, is restricted inside functions (and files).
You may note that above, we had a value of theta_d outside the function (from when we were working in the console), and a value of theta_d inside the function (as the input parameter). These do not have to match. If a variable is assigned a value inside the function then Python will take this "local" value. If not, Python will look outside the function. Two examples will illustrate this:
End of explanation
"""
x3 = 1.3
def print_x3():
print(x3)
x3 = 2.4
print(x3)
print_x3()
"""
Explanation: In the first (x1) example, the variable x1 was not defined within the function, but it was used. When x1 is printed, Python has to look for the definition outside of the scope of the function, which it does successfully.
In the second (x2) example, the variable x2 is defined within the function. The value of x2 does not match the value of the variable with the same name defined outside the function, but that does not matter: within the function, its local value is used. When printed outside the function, the value of x2 uses the external definition, as the value defined inside the function is not known (it is "not in scope").
Some care is needed with using scope in this way, as Python reads the whole function at the time it is defined when deciding scope. As an example:
End of explanation
"""
from math import sqrt
def drop_time(height, speed, gravity):
"""
Return how long it takes an object released from a height h,
in a gravitational field of strength g, with initial vertical speed v,
to hit the ground.
Parameters
----------
height : float
Initial height h
speed : float
Initial vertical speed v
gravity : float
        Gravitational field strength g
Returns
-------
t : float
Time the object hits the ground
"""
return (speed + sqrt(speed**2 + 2.0*height*gravity)) / gravity
"""
Explanation: The only significant change from the second example is the order of the print statement and the assignment to x3 inside the function. Because x3 is assigned inside the function, Python wants to use the local value within the function, and will ignore the value defined outside the function. However, the print function is called before x3 has been set within the function, leading to an error.
Keyword and default arguments
Our original function degrees_to_radians only had one argument, the angle to be converted theta_d. Many functions will take more than one argument, and sometimes the function will take arguments that we don't always want to set. Python can make life easier in these cases.
Suppose we wanted to know how long it takes an object released from a height $h$, in a gravitational field of strength $g$, with initial vertical speed $v$, to hit the ground. The answer is
\begin{equation}
t = \frac{1}{g} \left( v + \sqrt{v^2 + 2 h g} \right).
\end{equation}
We can write this as a function:
End of explanation
"""
print(drop_time(10.0, 0.0, 9.8))
print(drop_time(10.0, 1.0, 9.8))
print(drop_time(100.0, 9.8, 15.0))
"""
Explanation: But when we start using it, it can be a bit confusing:
End of explanation
"""
print(drop_time(height=10.0, speed=0.0, gravity=9.8))
"""
Explanation: Is that last case correct? Did we really want to change the gravitational field, whilst at the same time using an initial velocity of exactly the value we expect for $g$?
A far clearer use of the function comes from using keyword arguments. This is where we explicitly use the name of the function arguments. For example:
End of explanation
"""
print(drop_time(height=100.0, gravity=9.8, speed=15.0))
"""
Explanation: The result is exactly the same, but now it's explicitly clear what we're doing.
Even more useful: when using keyword arguments, we don't have to ensure that the order we use matches the order of the function definition:
End of explanation
"""
def drop_time(height, speed, gravity=9.8):
"""
Return how long it takes an object released from a height h,
in a gravitational field of strength g, with initial vertical speed v,
to hit the ground.
Parameters
----------
height : float
Initial height h
speed : float
Initial vertical speed v
gravity : float
Gravitational field strength g
Returns
-------
t : float
Time the object hits the ground
"""
return (speed + sqrt(speed**2 + 2.0*height*gravity)) / gravity
"""
Explanation: This is the same as the confusing case above, but now there is no ambiguity. Whilst it is good practice to match the order of the arguments to the function definition, it is only needed when you don't use the keywords. Using the keywords is always useful.
What if we said that we were going to assume that the gravitational field strength $g$ is nearly always going to be that of Earth, $9.8$ms${}^{-2}$? We can re-define our function using a default argument:
End of explanation
"""
print(drop_time(10.0, 0.0))
print(drop_time(height=50.0, speed=1.0))
print(drop_time(gravity=15.0, height=50.0, speed=1.0))
"""
Explanation: Note that there is only one difference here, in the very first line: we state that gravity=9.8. What this means is that if this function is called and the value of gravity is not specified, then it takes the value 9.8.
For example:
End of explanation
"""
import math
x = 1.2
name = "Alice"
print("Hello")
print(6)
print(name)
print(x)
print(math.pi)
print(math.sin(x))
print(math.sin)
print(math)
"""
Explanation: So, we can still give a specific value for gravity when we don't want to use the value 9.8, but it isn't needed if we're happy for it to take the default value of 9.8. This works both if we use keyword arguments and if not, with certain restrictions.
Some things to keep in mind.
Default arguments can only be used without specifying the keyword if they come after arguments without defaults. It is a very strong convention that arguments with a default come at the end of the argument list.
The value of default arguments can be pretty much anything, but care should be taken to get the behaviour you expect. In particular, it is strongly discouraged to allow the default value to be anything that might change, as this can lead to odd behaviour that is hard to find. In particular, allowing a default value to be a container such as a list (seen below) can lead to unexpected behaviour. See, for example, this discussion, pointing out why, and that the value of the default argument is fixed when the function is defined, not when it's called.
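As a minimal sketch of that gotcha (append_item is a hypothetical function, purely for illustration):

```python
def append_item(item, target=[]):
    # The default list is created ONCE, when the function is defined,
    # and then shared between every call that omits the argument.
    target.append(item)
    return target

print(append_item(1))      # [1]
print(append_item(2))      # [1, 2] -- the same list object was reused!
print(append_item(3, []))  # [3]    -- an explicitly passed list behaves as expected
```

A common workaround is to use None as the default and create a fresh list inside the function body.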
Printing and strings
We have already seen the print function used multiple times. It displays its argument(s) to the screen when called, either from the console or from within a program. It prints some representation of what it is given in the form of a string: it converts simple numbers and other objects to strings that can be shown on the screen. For example:
End of explanation
"""
print("Hello {}. We set x={}.".format(name, x))
"""
Explanation: We see that variables are converted to their values (such as name and math.pi) and functions are called to get values (such as math.sin(x)), which are then converted to strings displayed on screen. However, functions (math.sin) and modules (math) are also "printed", in that a string saying what they are, and where they come from, is displayed.
Often we want to display useful information to the screen, which means building a message that is readable and printing that. There are many ways of doing this: here we will just look at the format command. Here is an example:
End of explanation
"""
print("The function {} applied to x={} gives {}".format(math.sin, x, math.sin(x)))
"""
Explanation: The format command takes the string (here "Hello {}. We set x={}.") and replaces the {} with the values of the variables (here name and x in order).
We can use the format command in this way for anything that has a string representation. For example:
End of explanation
"""
name = "Alice"
number = "13"
sentence = " a b c d e "
print(name.upper())
print(name.lower())
print(name.isdigit())
print(number.isdigit())
print(sentence.strip())
print(sentence.split())
"""
Explanation: There are many more ways to use the format command which can be helpful.
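For instance, here are a few common format specifications (an illustrative sample, not an exhaustive list):

```python
# A few format specifications: precision, positional reordering, alignment.
print("{:.3f}".format(3.14159))                   # 3.142 (three decimal places)
print("{1}, then {0}".format("second", "first"))  # first, then second
print("{:>8}".format(42))                         # right-aligned in a field 8 characters wide
```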
We note that format is a function, but a function applied to the string before the dot. This type of function is called a method, and we shall return to them later.
Strings
We have just printed a lot of strings out, but it is useful to briefly talk about what a string is.
In Python a string is not just a sequence of characters. It is a Python object that contains additional information that "lives on it". If this information is a constant property it is called an attribute. If it is a function it is called a method. We can access this information to tell us things about the string, and to manipulate it.
Here are some basic string methods:
End of explanation
"""
print("Hello" + "Alice")
"""
Explanation: The use of the "dot" notation appears here. We saw this with accessing functions in modules and packages above; now we see it with accessing attributes and methods. It appears repeatedly in Python. The format method used above is particularly important for our purposes, but there are a lot of methods available.
There are other ways of manipulating strings.
We can join two strings using the + operator.
End of explanation
"""
print("Hello" * 3)
"""
Explanation: We can repeat strings using the * operator.
End of explanation
"""
print(str(3.4))
"""
Explanation: We can convert numbers to strings using the str function.
End of explanation
"""
print("Hello"[0])
print("Hello"[2])
print("Hello"[1:3])
"""
Explanation: We can also access individual characters (starting from 0!), or a range of characters:
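A couple of related indexing forms are also worth knowing; negative indices count from the end, and a third slice value sets the step:

```python
print("Hello"[-1])    # o    (last character)
print("Hello"[1:])    # ello (from index 1 to the end)
print("Hello"[::2])   # Hlo  (every second character)
```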
End of explanation
"""
|
Kismuz/btgym | examples/setting_up_environment_basic.ipynb | lgpl-3.0 | from btgym import BTgymEnv
# Handy function:
def under_the_hood(env):
"""Shows environment internals."""
for attr in ['dataset','strategy','engine','renderer','network_address']:
print('\nEnv.{}: {}'.format(attr, getattr(env, attr)))
for params_name, params_dict in env.params.items():
print('\nParameters [{}]: '.format(params_name))
for key, value in params_dict.items():
print('{} : {}'.format(key,value))
# Simplest trading environment,
# using year-long dataset of one minute bars for EUR/USD currency pair:
MyEnvironment = BTgymEnv(filename='./data/DAT_ASCII_EURUSD_M1_2016.csv',)
# Print environment configuration:
under_the_hood(MyEnvironment)
# Clean up:
MyEnvironment.close()
"""
Explanation: Basic settings and parameters
The BTgymEnv() class comes preconfigured for quick setup. Basically, one only needs to provide at least a data file keyword argument to set it up.
BTgym relies on the Backtrader framework for actual environment rendering. Environment customisation can be done either by setting a basic set of parameters, inherited from the Backtrader computational engine, or by passing the environment a complete engine subclass. This example covers the basic setting, while the latter option gives complete control over the backtesting logic and makes the environment as flexible as Backtrader itself.
Besides, there is another set of vital options related to the reinforcement learning setting: observation and action space parameters and episode settings.
One can eyeball the internal environment parameters by looking at the nested MyEnvironment.params dictionary, consisting of these subdictionaries:
- params['dataset'],
- params['engine'],
- params['strategy'],
- params['render'].
Look at the source files for exact parameter descriptions, since a complete documentation reference is yet to come.
Here all parameters are left at their default values:
End of explanation
"""
from gym import spaces
MyEnvironment = BTgymEnv(filename='../examples/data/DAT_ASCII_EURUSD_M1_2016.csv',
# Dataset and single random episode related parameters:
# We start trading on Mondays, Tuesdays and Wednesdays:
start_weekdays=[0, 1, 2],
# Want total episode duration to be no more than 1 day 23h 55min:
episode_duration={'days': 1, 'hours': 23, 'minutes': 55},
# Want to start every episode at the beginning of the day:
start_00=True,
# Broker and trade related parameters:
# Set initial capital:
start_cash=100,
# Set broker commission as 0.2% of operation value:
broker_commission=0.002,
# We use fixed stake of size 10:
fixed_stake=10,
# We want stop episode if 30% of initial capital is lost:
drawdown_call=30,
# RL environment related parameters:
# Set observation shape. By convention, first dimension
# is time embedding dimensionality;
# that basically means we get a sequence of the 30 last
# [o,h,l,c] candles as our one-step environment observation:
state_shape=dict(raw=spaces.Box(low=0,high=1,shape=(30,4))),
# BTgym uses multi-modal observation space which is basically dictionary
# consisting of simple gym spaces (Box, discrete, etc.)
# For the built-in `raw_state` setting high and low is dummy, because
# environment will infer values from entire dataset statistic.
# Other parameters:
# Network port to use; note that using multiple environments at once requires explicitly
# setting different ports to avoid messing things up. If your jupyter kernel suddenly dies
# when running new environment - that's may be because of port conflict,
# or 'previous' environment instance (client-side) is still running.
# Don't panic, just clear up and restart kernel,
# or use env.close() to shut down all the services.
port=5555,
# Data-server port to use; the same considerations as above apply:
#data_port=4600,
# Be chatty: setting this to 1 makes environment report what's going on;
# 2 is for debugging, dumps out a lot of data:
verbose=1,)
# Eyeball configuration:
under_the_hood(MyEnvironment)
# Clean up:
MyEnvironment.close()
"""
Explanation: More control:
One can tweak environment setup by passing set of kwargs:
End of explanation
"""
import gym
from gym import spaces
# Set single dictionary of parameters:
env_params = dict(filename='../examples/data/DAT_ASCII_EURUSD_M1_2016.csv',
start_weekdays=[0, 1, 2],
episode_duration={'days': 1, 'hours': 23, 'minutes': 55},
start_00=True,
start_cash=100,
broker_commission=0.002,
fixed_stake=10,
drawdown_call=30,
state_shape=dict(raw=spaces.Box(low=0,high=1,shape=(30,4))),
port=5002,
data_port=4800,
verbose=1,)
# Register with a unique name (watch out for OpenAI naming conventions):
gym.envs.register(id='backtrader-v46',
entry_point='btgym:BTgymEnv',
kwargs=env_params)
# Make environment:
MyEnvironment = gym.make('backtrader-v46')
# Clean up
MyEnvironment.close()
"""
Explanation: Registering environment:
The OpenAI way of making an environment is to register it with a specific set of parameters under some unique name and instantiate it by calling the make() method. This helps with standardization and correct evaluation of results uploaded to the Gym board.
That's how you do it (same parameters as above):
End of explanation
"""
import itertools
import random
# Will need those
# to display rendered images inline:
import IPython.display as Display
import PIL.Image as Image
# Some utility functions:
def to_string(dictionary):
"""Convert dictionary to block of text."""
text = ''
for k, v in dictionary.items():
if type(v) in [float]:
v = '{:.4f}'.format(v)
text += '{}: {}\n'.format(k, v)
return(text)
def show_rendered_image(rgb_array):
"""
Convert numpy array to RGB image using PILLOW and
show it inline using IPykernel.
This method doesn't requires matplotlib to be loaded.
"""
Display.display(Image.fromarray(rgb_array))
# Number episodes to run:
num_episodes = 2
# Render state every:
state_render=500
"""
Explanation: Running agent:
Just to give a sense of the environment's operation flow, our agent will be a mindless random picker; it performs no actual training. Run it for several episodes to see how fast all the money gets lost.
- we'll plot state observations every 500th step and at the final step, along with the episode summary and rendering;
- set verbose=0 to turn off excessive messaging.
End of explanation
"""
# Run it:
for episode in range(num_episodes):
# Calling reset() before every episode.
init_state = MyEnvironment.reset()
print('\nEPISODE [{}]:'.format(episode + 1))
# Render and show first step:
show_rendered_image(MyEnvironment.render('human'))
# Repeat until episode end:
for _ in itertools.count():
#Choose random action:
rnd_action = MyEnvironment.action_space.sample()
# Make a step in the environment:
obs, reward, done, info = MyEnvironment.step(rnd_action)
# Show state every 500th step
# and when episode is finished:
if info[-1]['step'] % state_render == 0 or done:
show_rendered_image(MyEnvironment.render('human'))
if done: break
# Print episode statistics (quite modest for now, since we didn't add any observers etc.)
print('SUMMARY:\n{}\nINFO [last observation]:\n{}'.
format(to_string(MyEnvironment.get_stat()), to_string(info[-1])))
# Render and show episode statisic:
print('BACKTRADER SUMMARY PLOT:')
show_rendered_image(MyEnvironment.render('episode'))
# Clean up:
MyEnvironment.close()
"""
Explanation: Pay attention to the log output: when called for the first time, env.reset() will start the server and request an episode; the server then samples episode data, checks it for consistency, starts backtesting and returns the initial state observation.
End of explanation
"""
|
rochelleterman/scrape-interwebz | 1_APIs/4_api_solutions.ipynb | mit | # Import required libraries
import requests
import json
from __future__ import division
import math
import csv
import matplotlib.pyplot as plt
"""
Explanation: Accessing Databases via Web APIs
End of explanation
"""
# set key
key="be8992a420bfd16cf65e8757f77a5403:8:44644296"
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
"""
Explanation: 1. Constructing API GET Request
In the first place, we know that every call will require us to provide:
a base URL for the API,
some authorization code or key, and
a format for the response.
So let's put store those in some variables.
Use the following demonstration keys for now, but in the future, get your own!
ef9055ba947dd842effe0ecf5e338af9:15:72340235
25e91a4f7ee4a54813dca78f474e45a0:15:73273810
e15cea455f73cc47d6d971667e09c31c:19:44644296
b931c838cdb745bbab0f213cfc16b7a5:12:44644296
1dc1475b6e7d5ff5a982804cc565cd0b:6:44644296
18046cd15e21e1b9996ddfb6dafbb578:4:44644296
be8992a420bfd16cf65e8757f77a5403:8:44644296
End of explanation
"""
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key}
"""
Explanation: You often want to send some sort of data in the URL’s query string. This data tells the API what information you want. In our case, we want articles about Duke Ellington. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. In addition to the search term q, we have to put in the api-key term.
End of explanation
"""
# make request
r = requests.get(base_url+response_format, params=search_params)
"""
Explanation: Now we're ready to make the request. We use the .get method from the requests library to make an HTTP GET Request.
End of explanation
"""
print(r.url)
"""
Explanation: Now, we have a response object called r. We can get all the information we need from this object. For instance, we can see that the URL has been correctly encoded by printing the URL. Click on the link to see what happens.
End of explanation
"""
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key,
"begin_date": "20150101", # date must be in YYYYMMDD format
"end_date": "20151231"}
# Uncomment to test
r = requests.get(base_url+response_format, params=search_params)
print(r.url)
"""
Explanation: Click on that link to see it returns!
Challenge 1: Adding a date range
What if we only want to search within a particular date range? The NYT Article Api allows us to specify start and end dates.
Alter the search_params code above so that the request only searches for articles in the year 2015.
You're gonna need to look at the documentation here to see how to do this.
End of explanation
"""
search_params["page"] = 0
# Uncomment to test
r = requests.get(base_url+response_format, params=search_params)
print(r.url)
"""
Explanation: Challenge 2: Specifying a results page
The above will return the first 10 results. To get the next ten, you need to add a "page" parameter. Change the search parameters above to get the second 10 results.
End of explanation
"""
# Inspect the content of the response, parsing the result as text
response_text = r.text
print(response_text[:1000])
"""
Explanation: 2. Parsing the response text
We can read the content of the server’s response using .text
End of explanation
"""
# Convert JSON response to a dictionary
data = json.loads(response_text)
# data
"""
Explanation: What you see here is JSON text, encoded as unicode text. JSON stands for "JavaScript Object Notation." It has a very similar structure to a Python dictionary -- both are built on key/value pairs. This makes it easy to convert a JSON response to a Python dictionary.
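As a tiny self-contained illustration (the literal string below is made up for demonstration, not an actual API response):

```python
import json

raw = '{"status": "OK", "hits": 93}'
data = json.loads(raw)   # parse the JSON text into a Python dictionary
print(data["hits"])      # 93
print(type(data))        # <class 'dict'>
```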
End of explanation
"""
print(data.keys())
# this is boring
data['status']
# so is this
data['copyright']
# this is what we want!
# data['response']
data['response'].keys()
data['response']['meta']['hits']
# data['response']['docs']
type(data['response']['docs'])
"""
Explanation: That looks intimidating! But it's really just a big dictionary. Let's see what keys we got in there.
End of explanation
"""
docs = data['response']['docs']
docs[0]
"""
Explanation: That looks what we want! Let's put that in it's own variable.
End of explanation
"""
# set key
key="ef9055ba947dd842effe0ecf5e338af9:15:72340235"
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key,
"begin_date":"20150101", # date must be in YYYYMMDD format
"end_date":"20151231"}
# make request
r = requests.get(base_url+response_format, params=search_params)
# convert to a dictionary
data=json.loads(r.text)
# get number of hits
hits = data['response']['meta']['hits']
print("number of hits: ", str(hits))
# get number of pages
pages = int(math.ceil(hits/10))
# make an empty list where we'll hold all of our docs for every page
all_docs = []
# now we're ready to loop through the pages
for i in range(pages):
print("collecting page", str(i))
# set the page parameter
search_params['page'] = i
# make request
r = requests.get(base_url+response_format, params=search_params)
# get text and convert to a dictionary
data=json.loads(r.text)
# get just the docs
docs = data['response']['docs']
# add those docs to the big list
all_docs = all_docs + docs
len(all_docs)
"""
Explanation: 3. Putting everything together to get all the articles.
That's great. But we only have 10 items. The original response said we had 93 hits! Which means we have to make 93/10, rounded up to 10 requests, to get them all. Sounds like a job for a loop!
But first, let's review what we've done so far.
End of explanation
"""
# DEFINE YOUR FUNCTION HERE
def get_api_data(term, year):
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
# set search parameters
search_params = {"q":term,
"api-key":key,
"begin_date": str(year) + "0101", # date must be in YYYYMMDD format
"end_date":str(year) + "1231"}
# make request
r = requests.get(base_url+response_format, params=search_params)
# convert to a dictionary
data=json.loads(r.text)
# get number of hits
hits = data['response']['meta']['hits']
print("number of hits:", str(hits))
# get number of pages
pages = int(math.ceil(hits/10))
# make an empty list where we'll hold all of our docs for every page
all_docs = []
# now we're ready to loop through the pages
for i in range(pages):
print("collecting page", str(i))
# set the page parameter
search_params['page'] = i
# make request
r = requests.get(base_url+response_format, params=search_params)
# get text and convert to a dictionary
data=json.loads(r.text)
# get just the docs
docs = data['response']['docs']
# add those docs to the big list
all_docs = all_docs + docs
return(all_docs)
# uncomment to test
# get_api_data("Duke Ellington", 2014)
"""
Explanation: Challenge 3: Make a function
Turn the code above into a function that takes a search term and a year as input, and returns all the documents containing that search term in that year.
End of explanation
"""
all_docs[0]
"""
Explanation: 4. Formatting
Let's take another look at one of these documents.
End of explanation
"""
def format_articles(unformatted_docs):
'''
This function takes in a list of documents returned by the NYT api
and parses the documents into a list of dictionaries,
with 'id', 'header', and 'date' keys
'''
formatted = []
for i in unformatted_docs:
dic = {}
dic['id'] = i['_id']
dic['headline'] = i['headline']['main']
dic['date'] = i['pub_date'][0:10] # cutting time of day.
formatted.append(dic)
return(formatted)
all_formatted = format_articles(all_docs)
all_formatted[:5]
"""
Explanation: This is all great, but it's pretty messy. What we’d really like to to have, eventually, is a CSV, with each row representing an article, and each column representing something about that article (header, date, etc). As we saw before, the best way to do this is to make a lsit of dictionaries, with each dictionary representing an article and each dictionary representing a field of metadata from that article (e.g. headline, date, etc.) We can do this with a custom function:
End of explanation
"""
def format_articles(unformatted_docs):
'''
This function takes in a list of documents returned by the NYT api
and parses the documents into a list of formated dictionaries,
with 'id', 'header', and 'date' keys
'''
formatted = []
for i in unformatted_docs:
dic = {}
dic['id'] = i['_id']
dic['headline'] = i['headline']['main']
dic['date'] = i['pub_date'][0:10] # cutting time of day.
if i['lead_paragraph']:
dic['lead_paragraph'] = i['lead_paragraph']
dic['word_count'] = i['word_count']
dic['keywords'] = [keyword['value'] for keyword in i['keywords']]
formatted.append(dic)
return(formatted)
# uncomment to test
all_formatted = format_articles(all_docs)
# all_formatted[:5]
"""
Explanation: Challenge 4 Collect more fields
Edit the function above so that we include the lead_paragraph and word_count fields.
HINT: Some articles may not contain a lead_paragraph, in which case it'll throw an error if you try to access this value (which doesn't exist). You need to add a conditional statement that takes this into consideration.
Advanced: Add another key that returns a list of keywords associated with the article.
End of explanation
"""
# get the CSV field names from one of the formatted articles
keys = all_formatted[1].keys()
# write the header and rows
with open('all-formated.csv', 'w') as output_file:
dict_writer = csv.DictWriter(output_file, keys)
dict_writer.writeheader()
dict_writer.writerows(all_formatted)
"""
Explanation: 5. Exporting
We can now export the data to a CSV.
End of explanation
"""
# for this challenge, we just need the number of hits.
def get_api_hits(term, year):
'''
returns an integer, the number of hits (or articles) mentioning the given term
in the given year
'''
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
# set search parameters
search_params = {"q":term,
"api-key":key,
"begin_date": str(year) + "0101", # date must be in YYYYMMDD format
"end_date":str(year) + "1231"}
# make request
r = requests.get(base_url+response_format, params=search_params)
# convert to a dictionary
data=json.loads(r.text)
# get number of hits
hits = data['response']['meta']['hits']
return(hits)
get_api_hits("Duke Ellington", 2014)
# collect data
years = range(2005, 2016)
years
all_duke = []
for i in years:
all_duke.append(get_api_hits("Duke Ellington", i))
all_duke
%matplotlib inline
plt.plot(years, all_duke)
plt.axis([2005, 2015, 0, 200])
"""
Explanation: Capstone Challenge
Using what you learned, tell me if Chris' claim (i.e. that Duke Ellington has gotten more popular lately) holds water.
End of explanation
"""
|
otavio-r-filho/AIND-Deep_Learning_Notebooks | intro-to-rnns/Anna_KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
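As a minimal sketch of the generator pattern itself (batch_starts is a hypothetical helper, not part of this notebook):

```python
def batch_starts(total_steps, n_steps):
    # Yield the starting index of each batch window, one at a time.
    for n in range(0, total_steps, n_steps):
        yield n

print(list(batch_starts(10, 5)))  # [0, 5]
```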
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
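A quick sketch of that reshape behaviour (the numbers here are arbitrary):

```python
import numpy as np

arr = np.arange(12)          # 12 elements in a flat array
arr = arr.reshape((2, -1))   # -1 lets numpy infer the second dimension
print(arr.shape)             # (2, 6)
```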
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
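As a quick sanity check of that reshaping logic, here's a toy NumPy example (not part of the network itself):

```python
import numpy as np

# N sequences, M steps, L hidden units -> (N*M) x L, one row per sequence step
N, M, L = 2, 3, 4
outputs = np.arange(N * M * L).reshape((N, M, L))
flat = outputs.reshape((-1, L))
assert flat.shape == (N * M, L)
# Rows are ordered sequence by sequence: all steps of sequence 0 come first
assert (flat[0] == outputs[0, 0]).all()
assert (flat[M] == outputs[1, 0]).all()
```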
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
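To make the loss concrete, here's a small NumPy sketch of what softmax cross-entropy computes for a single prediction (an illustration of the math, not the TensorFlow op itself):

```python
import numpy as np

def softmax_cross_entropy(logits, y_one_hot):
    # Numerically stable log-softmax followed by the cross-entropy sum
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(y_one_hot * log_probs).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
target = np.array([[1.0, 0.0, 0.0]])
loss = softmax_cross_entropy(logits, target)  # about 0.417 for this example
```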
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
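The clipping step can be illustrated with a small NumPy sketch of what tf.clip_by_global_norm does, i.e. rescaling all gradients jointly when their combined norm exceeds the threshold (a toy demonstration, not the TensorFlow op):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Joint (global) norm across all gradient arrays
    global_norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    # Rescale only when the global norm exceeds the threshold
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0])]          # global norm is 5
clipped, norm = clip_by_global_norm(grads, 1.0)
```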
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can then use that new character to predict the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
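Here's a small worked example of that top-N trick on a toy distribution (the same logic as pick_top_n above):

```python
import numpy as np

preds = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
top_n = 2
p = preds.copy()
p[np.argsort(p)[:-top_n]] = 0   # zero out all but the top_n largest probabilities
p = p / np.sum(p)               # renormalise so the survivors sum to one
```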
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
daviddesancho/MasterMSM | examples/alanine_dipeptide/ala_dipeptide.ipynb | gpl-2.0 | %load_ext autoreload
%autoreload 2
%matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="ticks", color_codes=True, font_scale=1.5)
sns.set_style({"xtick.direction": "in", "ytick.direction": "in"})
"""
Explanation: MSM of the alanine dipeptide
Here we run through most of the things that can be done with this package using a simple two-state model. There are more sophisticated examples that explore further possibilities.
The first thing one must do is download the data from the following link. Once this is done, we will import a number of libraries we will need as we run this example.
End of explanation
"""
import mdtraj as md
from mastermsm.trajectory import traj
tr = traj.TimeSeries(top='data/alaTB.gro', traj=['data/protein_only.xtc'])
print (tr.mdt)
"""
Explanation: Discretizing the trajectory
We start by loading the simulation data using the trajectory module. For this we use the external library MDtraj, which contains all sorts of methods for parsing and calculating interesting properties of our time-series data.
End of explanation
"""
phi = md.compute_phi(tr.mdt)
psi = md.compute_psi(tr.mdt)
res = [x for x in tr.mdt.topology.residues]
fig,ax = plt.subplots(figsize=(4,4))
ax.plot(180./math.pi*phi[1],180./math.pi*psi[1],'o', markersize=1)
ax.set_xlim(-180,180)
ax.set_ylim(-180,180)
ax.xaxis.set_ticks(range(-180,181,90))
ax.yaxis.set_ticks(range(-180,181,90))
ax.set_xlabel(r'$\phi$', fontsize=18)
ax.set_ylabel(r'$\psi$', fontsize=18)
"""
Explanation: So does what we have calculated look somewhat like a Ramachandran map?
End of explanation
"""
tr.discretize(states=['A', 'E'])
"""
Explanation: Next we proceed to discretize the trajectory based on the Ramachandran angles.
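The discretize method handles the state assignment internally; a toy version of the idea (purely illustrative thresholds, not MasterMSM's actual criterion) could look like this:

```python
import numpy as np

def toy_discretize(phi, psi):
    # Illustrative only: call frames with psi < 0 helical ('A'), else extended ('E')
    return ['A' if p < 0 else 'E' for p in psi]

states = toy_discretize(np.array([-1.2, -1.0]), np.array([-0.5, 2.0]))
```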
End of explanation
"""
y = [0 if x == 'A' else 1 for x in tr.distraj]
fig, (ax1, ax2) = plt.subplots(2,1, sharex=True)
ax1.plot(psi[1]*180/math.pi,'o', markersize=1)
ax2.plot(y)
ax1.set_ylabel(r'$\psi$', fontsize=14)
ax1.set_xlim(0,2000)
ax1.set_ylim(-180,180)
ax1.yaxis.set_ticks(range(-180,181,90))
ax2.set_ylabel('State')
ax2.set_xlim(0,2000)
ax2.set_ylim(-0.2,1.2)
ax2.yaxis.set_ticks([0,1])
labels = [item.get_text() for item in ax2.get_xticklabels()]
labels[0] = 'c'
labels[1] = 'h'
ax2.set_yticklabels(labels)
ax2.set_xlabel('Time [ps]')
"""
Explanation: For plotting, we map the discrete states onto the numerical values 0 and 1.
End of explanation
"""
tr.find_keys()
tr.keys
tr.file_name
"""
Explanation: In the plot we see how we go from the time series of continuous torsion angles converts into a time series of discrete states. We can obtain a list of states in the following way.
End of explanation
"""
from mastermsm.msm import msm
msm_alaTB = msm.SuperMSM([tr])
"""
Explanation: Building the master equation model
After having loaded our trajectory using the functionalities from the trajectory module we start building the master equation model. For this, we make use of the msm module. There are two steps corresponding to the two main classes within that module. First we create an instance of the SuperMSM, which can be used to direct the whole process of constructing and validating the MSM.
End of explanation
"""
lagt = 1
msm_alaTB.do_msm(lagt)
msm_alaTB.msms[lagt].do_trans()
msm_alaTB.msms[lagt].boots()
"""
Explanation: Then, using the do_msm method, we produce instances of the MSM class at a desired lag time, $\Delta t$. Each of these contains an MSM built at a specific lag time. These are stored as a dictionary in the msms attribute of the SuperMSM class.
End of explanation
"""
fig, ax = plt.subplots(1, 2, figsize=(5,2.5))
ax[0].errorbar([1], msm_alaTB.msms[lagt].tau_ave, msm_alaTB.msms[lagt].tau_std ,fmt='o-', markersize=10)
ax[1].errorbar([1,2], msm_alaTB.msms[lagt].peq_ave, msm_alaTB.msms[lagt].peq_std ,fmt='o-', markersize=10)
ax[0].set_ylabel(r'$\tau$ [ps]', fontsize=18)
ax[0].set_xlabel(r'$\lambda_1$', fontsize=18)
ax[1].set_ylabel(r'$P_{eq}$', fontsize=18)
ax[0].set_xticks([])
ax[1].set_xticks([1,2])
ax[1].set_xticklabels(labels[:2])
ax[1].set_xlim(0.5,2.5)
ax[0].set_ylim(0,50)
ax[1].set_ylim(0,1)
plt.tight_layout(w_pad=1)
"""
Explanation: The resulting model has a number of things we may be interested in, like its eigenvalue spectrum (in this case limited to a single relaxation time, corresponding to the exchange of helix and coil) or the equilibrium probabilities of the microstates.
End of explanation
"""
msm_alaTB.convergence_test(time=[1, 2, 5, 7, 10, 20, 50, 100], error=True)
tau_vs_lagt = np.array([[x,msm_alaTB.msms[x].tauT[0],msm_alaTB.msms[x].tau_std[0]] \
for x in sorted(msm_alaTB.msms.keys())])
fig, ax = plt.subplots()
ax.errorbar(tau_vs_lagt[:,0],tau_vs_lagt[:,1],fmt='o-', yerr=tau_vs_lagt[:,2], markersize=10)
#ax.plot(tau_vs_lagt[:,0],tau_vs_lagt[:,0])
ax.fill_between(10**np.arange(-0.2,3,0.2), 1e-1, 10**np.arange(-0.2,3,0.2), facecolor='lightgray', alpha=0.5)
ax.set_xlabel(r'$\Delta$t [ps]', fontsize=16)
ax.set_ylabel(r'$\tau$ [ps]', fontsize=16)
ax.set_xlim(0.8,200)
ax.set_ylim(0,70)
_ = ax.set_xscale('log')
#ax.set_yscale('log')
"""
Explanation: Validation
However, from simply calculating these quantities we do not know how informative they really are. In order to understand whether the values we calculate are really reflective of the properties of the underlying system we resort to validation of the MSM. The two-level structure that we have described, consisting of the SuperMSM and MSM classes, allows the user to test some global convergence properties first (at the level of the SuperMSM).
Convergence tests
For validating the model we first see at which point the relaxation times are sufficiently well converged.
End of explanation
"""
pMSM_E, pMD_E, epMD_E = msm_alaTB.ck_test(time=[1, 2, 5, 7, 10, 20, 50], init=['E'])
pMSM_A, pMD_A, epMD_A = msm_alaTB.ck_test(time=[1, 2, 5, 7, 10, 20, 50], init=['A'])
fig, ax = plt.subplots(1,2, figsize=(8,3.5), sharex=True, sharey=True)
ax[0].errorbar(pMD_E[:,0], pMD_E[:,1], epMD_E, fmt='o')
for p in pMSM_E:
ax[0].plot(p[0], p[1], label="$\Delta t$=%g"%p[0][0])
ax[0].legend(fontsize=10)
ax[1].errorbar(pMD_A[:,0], pMD_A[:,1], epMD_A, fmt='o')
for p in pMSM_A:
ax[1].plot(p[0], p[1])
ax[0].set_xscale('log')
ax[0].set_ylabel('P(t)')
ax[0].set_xlabel('Time (ps)')
ax[1].set_xlabel('Time (ps)')
plt.tight_layout()
"""
Explanation: Here we see that from the very beginning the relaxation times are independent of the lag time ($\Delta$t) used in the construction of the model. This convergence is a good indicator of the Markovianity of the model and is a result of the use of transition based assignment. The shaded area corresponds to the range of lag times where the information we obtain is largely unreliable, because the lag time itself is longer than the relaxation time.
Chapman-Kolmogorov test
Another important step in the validation is to carry out the so-called Chapman-Kolmogorov test. In this case, the predictions from the MSM are validated against the simulation data used for its construction.
End of explanation
"""
msm_alaTB.msms[2].do_trans(evecs=True)
acf = msm_alaTB.msms[2].acf_mode()
time = np.arange(len(acf[1]))*msm_alaTB.data[0].dt
fig, ax = plt.subplots()
ax.plot(time, acf[1], 'o')
ax.plot(time,np.exp(-time*1./msm_alaTB.msms[2].tauT[0]))
ax.set_xlim(0,200)
ax.set_ylim(0,1)
ax.set_xlabel('Time [ps]')
ax.set_ylabel('C$_{11}$(t)')
"""
Explanation: These plots show the decay of the population from a given initial condition. In this case, the left and right plots correspond to starting in the E and A basins respectively. In both cases we compare the calculation from the simulation data (as circles) and the propagation from MSMs calculated at different lag times (lines). The agreement between the simulation data and the model predictions confirms the result from the convergence analysis.
Autocorrelation functions
The MSM can also be validated against the autocorrelation function (ACF) of the eigenmodes. If the simulation data is projected in the eigenmodes, then the ACF for mode $n$ should decay with a timescale equal to $-1/\lambda_n$. In this case there is only one mode to reproduce.
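As a quick numerical check of that statement, we can recover the timescale from a synthetic exponential ACF (a toy example with an assumed tau, separate from the real data):

```python
import numpy as np

tau = 40.0                      # ps, an assumed relaxation time
t = np.arange(0.0, 200.0, 2.0)
acf = np.exp(-t / tau)          # C(t) = exp(-t/tau) for a single slow mode
# A log-linear fit recovers tau as -1/slope
slope = np.polyfit(t, np.log(acf), 1)[0]
fit_tau = -1.0 / slope
```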
End of explanation
"""
fig, ax = plt.subplots(1,2, figsize=(7.5,3.5))
for i in [1, 2, 5, 7, 10, 20]:
msm_alaTB.msms[i].do_rate()
ax[0].errorbar(msm_alaTB.msms[i].tauT, msm_alaTB.msms[i].tauK, fmt='o', xerr=msm_alaTB.msms[i].tau_std, markersize=10, label=str(i))
ax[1].errorbar(msm_alaTB.msms[i].peqT, msm_alaTB.msms[i].peqK, fmt='o', xerr=msm_alaTB.msms[i].peq_std, markersize=10, label=str(i))
ax[0].plot([0,100],[0,100],'--', color='lightgray')
ax[0].set_xlabel(r'$\tau_T$ [ps]', fontsize=20)
ax[0].set_ylabel(r'$\tau_K$ [ps]', fontsize=20)
ax[0].set_xlim(0,60)
ax[0].set_ylim(0,60)
ax[1].plot([0.1,1],[0.1,1],'--', color='lightgray')
ax[1].set_xlabel(r'$p_T$', fontsize=20)
ax[1].set_ylabel(r'$p_K$', fontsize=20)
ax[1].set_xlim(0.2,0.8)
ax[1].set_ylim(0.2,0.8)
ax[0].legend(fontsize=9, bbox_to_anchor=(1.0, 0.65))
plt.tight_layout(pad=0.4, w_pad=3)
"""
Explanation: Calculation of the rate matrix
From the transition matrix we can calculate the rate matrix. One possibility is to use an approximate method based simply on a Taylor expansion (De Sancho, Mittal and Best, JCTC, 2013). We can check whether our approximate method gives a good result. We use short times since we have checked that short times are sufficient in this case for obtaining converged relaxation times.
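The idea behind the Taylor-expansion estimate can be sketched in a few lines with toy numbers (do_rate implements the real thing):

```python
import numpy as np

dt = 2.0                          # ps, an assumed lag time
T = np.array([[0.95, 0.03],       # toy column-stochastic transition matrix
              [0.05, 0.97]])
# expm(K*dt) ~ I + K*dt for short lag times, so K ~ (T - I)/dt
K = (T - np.eye(2)) / dt
```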
End of explanation
"""
|
dostrebel/working_place_ds_17 | 03 python II/01 Python II .ipynb | mit | lst = [11,2,34,4,5,5111]
len([11,2,'sort',4,5,5111]) # counts the elements of a list
sorted(lst)
lst.sort()
min(lst)
max(lst)
str(1212)
sum([1,2,2])
lst.remove(4)
lst.append(4)
string = 'hello, wie geht Dir?'
string.split(',')
"""
Explanation: Python II
Review: the most important functions
Much more powerful functions: modules and libraries
Let's take a closer look at these simple functions
Let's build our own functions
Structure and troubleshooting
1 The most important functions
An overview of the 64 most important simple Python functions is listed here.
"""
import urllib
import requests
import glob
import pandas
import BeautifulSoup
import re
#etc. etc.
!pip install requests
!pip install glob  # note: glob is part of the standard library, so this install is unnecessary
"""
Explanation: 2 Much more powerful functions: modules and libraries
Modules & Libraries
End of explanation
"""
import os
# Unfortunately this does not work with all built-in functions
os.path.split??
# Example: sort
def sort(list):
for index in range(1,len(list)):
value = list[index]
i = index-1
while i>=0:
if value < list[i]:
list[i+1] = list[i]
list[i] = value
i -= 1
else:
break
return list
# A really complex one. If you could not work with the urllib module
# (urlretrieve), you would now have to type in all of this yourself.
def urlretrieve(url, filename=None, reporthook=None, data=None):
url_type, path = splittype(url)
with contextlib.closing(urlopen(url, data)) as fp:
headers = fp.info()
# Just return the local path and the "headers" for file://
# URLs. No sense in performing a copy unless requested.
if url_type == "file" and not filename:
return os.path.normpath(path), headers
# Handle temporary file setup.
if filename:
tfp = open(filename, 'wb')
else:
tfp = tempfile.NamedTemporaryFile(delete=False)
filename = tfp.name
_url_tempfiles.append(filename)
with tfp:
result = filename, headers
bs = 1024*8
size = -1
read = 0
blocknum = 0
if "content-length" in headers:
size = int(headers["Content-Length"])
if reporthook:
reporthook(blocknum, bs, size)
while True:
block = fp.read(bs)
if not block:
break
read += len(block)
tfp.write(block)
blocknum += 1
if reporthook:
reporthook(blocknum, bs, size)
if size >= 0 and read < size:
raise ContentTooShortError(
"retrieval incomplete: got only %i out of %i bytes"
% (read, size), result)
return result
"""
Explanation: 3 But how are functions, modules and libraries built?
End of explanation
"""
lst = ['ich', 'habe', 'ganz', 'kalt']
def join(mylist):  # I define a function named join(variable)
    long_str = ''  # I define an empty string
    for elem in mylist:
        long_str = long_str + ' ' + elem
    return long_str.strip()  # strip cuts off whitespace at both ends; return produces the output
"""
Explanation: 4 Let's build our own functions
Let's build whole sentences from lists of strings
End of explanation
"""
join(lst)
"""
Explanation: And to call it, I put my list in parentheses ()
End of explanation
"""
satz = "Die Unabhängigkeit der Notenbanken von der Politik gilt bisher als anerkannter Grundpfeiler der modernen Wirtschafts- und Geldpolitik in fortgeschrittenen Volkswirtschaften. Zu gross wäre sonst das Risiko, dass gewählte Politiker die Notenpresse anwerfen, wenn es ihren persönlichen Zielen gerade gelegen kommt, und dass dadurch die Stabilität des Geldes und das Vertrauen in das Zahlungsmittel untergraben wird."
def find(string):
    elem = input('Please enter the search term: ')
    if elem in string:
        return 'Match'
    else:
        return 'No match'
find(satz)
"""
Explanation: Let's build a simple search
End of explanation
"""
print('Always use this in your code to know exactly where the error happens.')
# Example: sort (with print() calls added for troubleshooting)
def sort(list):
for index in range(1,len(list)):
value = list[index]
print(value)
i = index-1
print(i)
while i>=0:
if value < list[i]:
list[i+1] = list[i]
list[i] = value
i -= 1
else:
break
return list
sort(lst)
lst
"""
Explanation: 5 Structure and troubleshooting
First the imports
Then your own functions
Now the actual code
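A minimal script skeleton following that structure (imports first, then your own functions, then the actual code), with a print() call for troubleshooting; the function and values are just an invented example:

```python
import math

def circle_area(r):
    print('computing area for r =', r)  # print() shows where we are while debugging
    return math.pi * r ** 2

areas = [circle_area(r) for r in [1, 2]]
```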
End of explanation
"""
|
gfeiden/Notebook | Daily/20150902_phoenix_bol_corrs.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate as scint
"""
Explanation: Phoenix BT-Settl Bolometric Corrections
Figuring out the best method of handling Phoenix bolometric correction files.
End of explanation
"""
cd /Users/grefe950/Projects/starspot/starspot/color/tab/phx/
"""
Explanation: Change to directory containing bolometric correction files.
End of explanation
"""
bc_table = np.genfromtxt('colmag.BT-Settl.server.JOHNSON.AB.bolcor', comments='!')
"""
Explanation: Load a bolometric correction table, say for the Cousins AB photometric system.
End of explanation
"""
test_surface = scint.LinearNDInterpolator(bc_table[:, :3], bc_table[:, 4:])
"""
Explanation: Now, the structure of the file is quite irregular. The grid is not rectangular, which is not an immediate problem. The table is strucutred such that column 0 contains Teff in increasing order, followed by logg in column 1 in increasing order. However, metallicities in column 2 appear to be in decreasing order, which may be a problem for simple interpolation routines. Alpha abundances follow and are in increasing order, but since this is a "standard" grid, whereby alpha enrichment is a function of metallicity, we can ignore it for the moment.
Let's take a first swing at the problem by using the LinearND Interpolator from SciPy.
End of explanation
"""
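LinearNDInterpolator triangulates the scattered input points, so the table needs neither a rectangular grid nor monotonically ordered columns. A tiny sketch with made-up points (not Phoenix data) illustrates this:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# deliberately unordered 2-D sample points of the linear function f(x, y) = x + 2*y
pts = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
vals = pts[:, 0] + 2.0 * pts[:, 1]

interp = LinearNDInterpolator(pts, vals)
print(interp(0.5, 0.5))  # 1.5, exact for a linear function
```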
test_surface(np.array([1500., 5.0, 0.0]))
"""
Explanation: The surface compiled, but that is not a guarantee that the interpolation will work successfully. Some tests are required to confirm this is the case. Let's try a few Teffs at logg = 5 with solar metallicity.
End of explanation
"""
test_surface(np.array([3000., 5.0, 0.0]))
"""
Explanation: This agrees with data in the bolometric correction table.
Teff logg [Fe/H] [a/Fe] B V R I
1500.00 5.00 0.00 0.00 -15.557 -16.084 -11.560 -9.291
Now, let's raise the temperature.
End of explanation
"""
test_surface(np.array([3000., 5.0, 0.1]))
"""
Explanation: Again, we have a good match to tabulated values,
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 0.00 0.00 -6.603 -5.641 -4.566 -3.273
However, since we are using a tabulated metallicity, the interpolation may proceed without too much trouble. If we select a metallicity between grid points, how do we fare?
End of explanation
"""
test_surface(np.array([3000., 5.0, -0.2]))
"""
Explanation: This appears consistent. What about progressing to lower metallicity values?
End of explanation
"""
iso = np.genfromtxt('/Users/grefe950/evolve/dmestar/iso/dmestar_00120.0myr_z+0.00_a+0.00_marcs.iso')
"""
Explanation: For reference, at [Fe/H] = $-0.5$ dex, we have
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 -0.50 0.20 -6.533 -5.496 -4.424 -3.154
The interpolation routine has seemingly handled the non-monotonic nature of the metallicity column, as all interpolated values lie between the values at the two respective nodes.
Now let's import an isochrone and calculate colors for stellar models for comparison against MARCS bolometric corrections.
End of explanation
"""
iso.shape
"""
Explanation: Make sure there are magnitudes and colors associated with this isochrone.
End of explanation
"""
test_bcs = test_surface(10**iso[:,1], iso[:, 2], 0.0)
test_bcs.shape
"""
Explanation: A standard isochrone would only have 6 columns, so 11 indicates this isochrone does have photometric magnitudes computed, likely BV(Ic) (JK)2MASS.
End of explanation
"""
bol_mags = 4.74 - 2.5*iso[:, 3]
for i in range(test_bcs.shape[1]):
bcs = -1.0*np.log10(10**iso[:, 1]/5777.) + test_bcs[:, i] - 5.0*iso[:, 4]
if i == 0:
test_mags = bol_mags - bcs
else:
test_mags = np.column_stack((test_mags, bol_mags - bcs))
iso[50, 0:4], iso[50, 6:], test_mags[50]
"""
Explanation: For each Teff and logg combination we now have BCs for BV(RI)c from BT-Settl models. Now we need to convert the bolometric corrections to absolute magnitudes.
End of explanation
"""
col_table = np.genfromtxt('colmag.BT-Settl.server.COUSINS.AB', comments='!')
"""
Explanation: Let's try something different: using the color tables provided by the Phoenix group, from which the bolometric corrections are calculated.
End of explanation
"""
col_surface = scint.LinearNDInterpolator(col_table[:, :3], col_table[:, 4:8])
"""
Explanation: Create an interpolation surface from the magnitude table.
End of explanation
"""
phx_mags = col_surface(10.0**iso[:, 1], iso[:, 2], 0.0)
"""
Explanation: Compute magnitudes for a Dartmouth isochrone.
End of explanation
"""
for i in range(phx_mags.shape[1]):
phx_mags[:, i] = phx_mags[:, i] - 5.0*np.log10(10**iso[:, 4]*6.956e10/3.086e18) + 5.0
"""
Explanation: Convert surface magnitudes to absolute magnitudes using the distance modulus and the radius of the star.
End of explanation
"""
iso[40, :5], iso[40, 6:], phx_mags[40]
"""
Explanation: Now compare against MARCS values.
End of explanation
"""
phx_iso = np.genfromtxt('/Users/grefe950/Notebook/Projects/ngc2516_spots/data/phx_isochrone_120myr.txt')
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharey=True)
ax[0].set_xlim(0.0, 2.0)
ax[1].set_xlim(0.0, 4.0)
ax[0].set_ylim(16, 2)
ax[0].plot(iso[:, 6] - iso[:, 7], iso[:, 7], lw=3, c="#b22222")
ax[0].plot(phx_mags[:, 0] - phx_mags[:, 1], phx_mags[:, 1], lw=3, c="#1e90ff")
ax[0].plot(phx_iso[:, 7] - phx_iso[:, 8], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555")
ax[1].plot(iso[:, 7] - iso[:, 8], iso[:, 7], lw=3, c="#b22222")
ax[1].plot(phx_mags[:, 1] - phx_mags[:, 3], phx_mags[:, 1], lw=3, c="#1e90ff")
ax[1].plot(phx_iso[:, 8] - phx_iso[:, 10], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555")
"""
Explanation: Load an isochrone from the Lyon-Phoenix series.
End of explanation
"""
new_isochrone = np.column_stack((iso[:, :6], phx_mags))
np.savetxt('/Users/grefe950/Notebook/Projects/pleiades_colors/data/dmestar_00120.0myr_z+0.00_a+0.00_mixed.iso',
new_isochrone, fmt='%16.8f')
"""
Explanation: Export a new isochrone with colors from AGSS09 (PHX)
End of explanation
"""
tmp = -10.*np.log10(3681./5777.) + test_surface(3681., 4.78, 0.0) #+ 5.0*np.log10(0.477)
tmp
4.74 - 2.5*(-1.44) - tmp
"""
Explanation: Separate Test Case
These are clearly not correct and are between 1 and 2 magnitudes off from expected values. Need to reproduce the Phoenix group's results, first.
End of explanation
"""
|
woutdenolf/spectrocrunch | doc/source/tutorials/xrfquant.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
from spectrocrunch.materials import xrfstandards
from spectrocrunch.detectors import xrf as xrfdetectors
from spectrocrunch.geometries import xrf as xrfgeometries
from spectrocrunch.sources import xray as xraysources
source = xraysources.factory("synchrotron")
detector = xrfdetectors.factory("leia")
geometry = xrfgeometries.factory("sxm120",detectorposition=-10,positionunits="mm",\
detector=detector,source=source)
addnoise = True # add noise to simulations done below
method = "fisx" # XRF simulation method
realsample = xrfstandards.factory("RF7-200-S2371-03",geometry=geometry,\
filmthickness=10e-7) # 10 nm
print(realsample)
"""
Explanation: XRF quantification
The fluorescence/scattered intensity $I_f$ due to irradiation with flux $I_s$
$$
\begin{equation}
I_f=I_s(Hz)\Delta t(s)\Omega(sr)\sum_{i,j}\epsilon_{i,j} c_{i,j}(sr^{-1})
\end{equation}
$$
where $I$ the detected intensity (sum of selected fluorescence and or scattering lines), $\Delta t$ the exposure time, $\Omega$ the solid angle of the detector, $\epsilon_{i,j}$ a product of filter transmission and detector absorbance and $c_{i,j}$ the rate of line $j$ with energy $E_j$ due to source line $i$ with energy $E_i$ (depending on sample composition).
As an example, the fluorescence rate of a flat-multilayer sample can be written as (only primary interactions)
$$
\begin{equation}
\begin{split}
c_{i,j}(sr^{-1})=&\frac{d\mu_{i,j}}{d\Omega}\sum_k w_{j,k}\rho_k t_k^\prime(E_i,E_j)\
\frac{d\mu_{i,j}^{fluo}}{d\Omega} =& \frac{\mu_j(E_i)}{4\pi}\
\frac{d\mu_{i,j}^R}{d\Omega} =& r_e^2 K_R(\phi,\theta) \frac{N_A}{M_j}f_j^2(E_i,\theta)\
\frac{d\mu_{i,j}^C}{d\Omega} =& r_e^2 K_C(\phi,\theta) \frac{N_A}{M_j}S_j(E_i,\theta)
\end{split}
\end{equation}
$$
where $k$ loops over the layers. Note that $j$ refers to a particular interaction type (fluorescence of element $Z$, elastic or inelastic scattering). See polarization for the definition of the differential scattering cross-sections (units $cm^2/g/sr$); $M_j$ is the molar mass of the atom ($g/mol$), $N_A$ the Avogadro constant ($1/mol$), $f$ the atomic form factor and $S$ the incoherent scattering function of the atom.
The corrected layer thickness $t_k^\prime$ takes attenuation of primary X-rays and fluorescence/scattering into account. For a single layer in reflection geometry it can be written as
$$
\begin{equation}
\begin{split}
t^\prime(E_i,E_j) =& \frac{e^{\chi(E_i,E_j) t}-1}{\chi(E_i,E_j)\cos\alpha_{in}}\
\chi(E_i,E_j) =& \rho\left(\frac{\mu(E_j)}{\cos\alpha_{out}}-\frac{\mu(E_i)}{\cos\alpha_{in}}\right)
\end{split}
\end{equation}
$$
where $\alpha$ is the angle between the sample surface normal (pointing away from the source) and the incident (in) or fluorescence/scattering (out) direction ($\alpha_{out}>90^\circ$ in reflection geometry). Note that $\lim_{\chi\to 0}t^\prime=\frac{t}{\cos\alpha_{in}}$, i.e. the correction reduces to the geometric path length when attenuation vanishes.
Geometry calibration
Solid-angle parameterization (without standard)
See the notebook on diodes on how $I_s$ is measured. We will assume the detector has a centric-cone geometry with solid angle
$$
\begin{equation}
\Omega=2\pi\left(1-\frac{x+d_0}{\sqrt{\frac{A}{\pi}+\left(x+d_0\right)^2}}\right)
\end{equation}
$$
where $A(mm^2)$ the active area of the detector, $x(mm)$ the position of the detector and $d_0(mm)$ the distance to the sample for $x=0$. To determine $A$ and $d_0$ we can measure the fluorescence of any sample as function of $x$:
$$
\begin{equation}
I_f(x,c,d_0,A)=c\Omega(x,d_0,A)
\end{equation}
$$
As an illustration we will define a detector geometry and multilayer sample. A thin-film standard is used here but any other material can be considered:
End of explanation
"""
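The centric-cone solid angle above is easy to evaluate directly — a minimal sketch in plain Python (the helper name is ours, not part of spectrocrunch):

```python
import math

def solidangle(x_mm, d0_mm, area_mm2):
    """Solid angle (sr) subtended by a circular detector of active area A (mm^2)
    at motor position x (mm), with zero-offset distance d0 (mm)."""
    d = x_mm + d0_mm  # sample-detector distance
    return 2.0 * math.pi * (1.0 - d / math.sqrt(area_mm2 / math.pi + d**2))

# Omega -> 2*pi as the detector approaches the sample, and -> 0 far away
print(solidangle(10.0, 50.0, 70.0))
```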
from spectrocrunch.utils import units
from spectrocrunch.math import noisepropagation
from spectrocrunch.materials import pymca
# Geometry at which the data is collected
geometry.zerodistance = units.Quantity(5.,"cm")
detector.activearea = units.Quantity(70.,"mm^2")
print("\nTheoretical geometry:")
print(" Zero-distance: {:~}".format(geometry.zerodistance.to("cm")))
print(" Active area: {:~}".format(detector.activearea.to("mm^2")))
# Simulate measurement at current distance
energy = 7.3
flux = 1e9
time = 5
pymcahandle = pymca.PymcaHandle(sample=realsample,energy=energy,flux=flux,time=time,\
linear=True,escape=False,continuum=False,scatter=False)
mcaref = pymcahandle.mca(histogram=True,scattering=False,method=method)
# Simulate detector scan
n = 100
x = units.Quantity(np.linspace(-20,60,n),"mm")
I0 = np.full(n,flux*time)
solidangle = geometry.detector.solidangle_calc(activearea=detector.activearea,distance=x+geometry.zerodistance)
fluo = mcaref.sum()/geometry.solidangle*solidangle
if addnoise:
I0 = np.random.poisson(np.round(I0).astype(int))
fluo = np.random.poisson(np.round(fluo).astype(int))
fig,axs = plt.subplots(1,2,figsize=(12,5))
u = x.units
plt.sca(axs[0])
plt.plot(x,fluo/I0.astype(float))
xref = geometry.detectorposition.to(u).magnitude
iref = mcaref.sum()/(flux*time)
lines = plt.plot([xref,xref,x[0].magnitude],[0,iref,iref])
color = lines[0].get_color()
plt.ylabel("Normalized fluorescence")
plt.xlabel("Motor position ({:~})".format(u))
plt.sca(axs[1])
plt.plot(mcaref,color=color)
plt.gca().set_yscale('log', base=10)
plt.xlim([0,len(mcaref)-1])
plt.ylim([1,np.max(mcaref)*1.1])
plt.ylabel("ph/channel")
plt.xlabel("MCA channels")
plt.title("\nSpectrum at x ={:~}:".format(geometry.detectorposition.to("mm")))
plt.show()
"""
Explanation: Simulate a detector scan $I_f(x,c,d_0,A)$:
End of explanation
"""
# Calibration resources
intensities = noisepropagation.poisson(fluo)/noisepropagation.poisson(I0)
calibrc = {"signal":noisepropagation.E(intensities),\
"var":noisepropagation.VAR(intensities),\
"detectorposition":x.magnitude,\
"positionunits":x.units}
# Calibrate the geometry (starting from wrong values)
geometry.zerodistance += units.Quantity(-5.,"cm")
detector.activearea += units.Quantity(10.,"mm^2")
print("\nInitial geometry:")
print("Zero-distance: {:~}".format(geometry.zerodistance.to("cm")))
print("Active area: {:f~}".format(detector.activearea.to("mm^2")))
geometry.calibrate(calibrc=calibrc,plot=True,fit=True,fixedactivearea=False)
plt.show()
print("Calibrated geometry:")
print("Zero-distance: {:~}".format(geometry.zerodistance_rv.to("cm")))
print("Active area: {:f~}".format(detector.activearea_rv.to("mm^2")))
"""
Explanation: Calibrate the geometry (starting from different values than the ones used to simulate the data):
End of explanation
"""
# Thin film standards have an unknown film thickness and density,
# only the areal densities of the different elements and the
# composition and thickness of the substrate are known.
thinfilmapprox = True
# Geometry at which the data is collected
geometry.zerodistance = units.Quantity(5.,"cm")
detector.activearea = units.Quantity(70.,"mm^2")
print("\nTheoretical geometry:")
print(" Zero-distance: {:~}".format(geometry.zerodistance_rv.to("cm")))
print(" Active area: {:~}".format(detector.activearea_rv.to("mm^2")))
# Simulate measurement (use the sample with known film thickness)
energy = 7.3
flux = 1e9
time = 1000 # the sum spectrum of a 2D map
pymcahandle = pymca.PymcaHandle(sample=realsample,energy=energy,flux=flux,time=time,\
linear=True,escape=False,continuum=False,scatter=False)
mca = pymcahandle.mca(histogram=True,scattering=False,method=method)
if method=="fisx":
mca1 = mca
mca2 = pymcahandle.mca(histogram=True,scattering=False,method="analytical")
if addnoise:
mca = np.random.poisson(np.round(mca).astype(int))
# Calibrate with unknown film thickness
if thinfilmapprox:
thinfilmsample = xrfstandards.factory("RF7-200-S2371-03",geometry=geometry)
pymcahandle.sample = thinfilmsample
# Initialize fit with the wrong geometry
pymcahandle.setdata(mca)
geometry.zerodistance += units.Quantity(5.,"cm")
detector.activearea += units.Quantity(-10.,"mm^2")
pymcahandle.addtopymca(fresh=True)
# Adapt config manually if needed:
#config = pymcahandle.mcafit.getConfiguration()
#config["fit"]["stripflag"] = 0
#...
#pymcahandle.mcafit.configure(config)
# Perform fit
fitresult = pymcahandle.fit()
# Print errors
def strwerror(e,wfrac,exwfrac):
error = (wfrac-exwfrac)/exwfrac
return " {}: {:6.02f} wt% (expected: {:6.02f} wt%, error: {:.02f}%)".\
format(e,wfrac*100,exwfrac*100,error*100)
def straderror(e,ad,exad):
error = (ad-exad)/exad
return " {}: {:6.02f} ng/mm^2 (expected: {:6.02f} ng/mm^2, error: {:.02f}%)".\
format(e,ad*1e7,exad*1e7,error*100)
def printerrors(fitresult,sample):
out = {}
if thinfilmapprox:
exarealdensities = sample.arealdensity()
rho = sample[0].density
t = sample[0].thickness
for k,wfrac in fitresult["massfractions"].items():
element = k.element
ad = wfrac*rho*t
if element in exarealdensities:
exad = exarealdensities[element]
exwfrac = exad/(rho*t)
out[element] = {"ad":straderror(element,ad,exad),\
"wfrac":strwerror(element,wfrac,exwfrac)}
else:
exarealdensities = sample.arealdensity()
arealdensities = {}
massfractions = {}
exmassfractions = {}
exarealdensities = {}
for layer,wfracs in zip(sample,fitresult["lmassfractions"]):
rho = layer.density
t = layer.thickness
exwfracs = layer.elemental_massfractions()
exad = layer.arealdensity()
for k,wfrac in wfracs.items():
if wfrac!=0:
element = k.element
arealdensities[k] = wfrac*rho*t
massfractions[k] = wfrac
exmassfractions[k] = exwfracs[element]
exarealdensities[k] = exad[element]
for k,wfrac in massfractions.items():
if k in exmassfractions:
element = k.element
exwfrac = exmassfractions[k]
exad = exarealdensities[k]
ad = arealdensities[k]
out[element] = {"ad":straderror(element,ad,exad),\
"wfrac":strwerror(element,wfrac,exwfrac)}
print(" Mass fractions and areal densities (within one layer):")
for k in out:
print(out[k]["wfrac"])
print(out[k]["ad"])
print("\nFitted vs. theory (before geometry calibration):")
printerrors(fitresult,pymcahandle.sample)
# Plot fit
def plotfit(fitresult):
plt.plot(fitresult["energy"],fitresult["y"],label='data')
plt.plot(fitresult["energy"],fitresult["yfit"],label='pymca fit')
backfunc = fitresult["interpol_energy"](fitresult["yback"])
plt.plot(fitresult["energy"],backfunc(fitresult["energy"]),label='background')
plt.gca().set_yscale('log', base=10)
plt.ylim([1,np.max(fitresult["y"])*1.1])
plt.ylabel("ph/channel")
plt.xlabel("Energy (keV)")
plotfit(fitresult)
plt.show()
if method=="fisx":
plt.plot(mca1,label="fisx")
plt.plot(mca2,label="xraylib")
plt.gca().set_yscale('log', base=10)
plt.ylim([1,np.max(mca)*1.1])
plt.ylabel("ph/ch")
plt.xlabel("Channels")
plt.legend()
plt.show()
"""
Explanation: The correlation between $c$, $A$ and $d_0$ is too high to provide a usable result (also when fixing the active area).
Solid-angle parameterization (with standard)
To solve the correlation issue, we determine $\Omega_{ref}$ at a particular motor position $x=x_{ref}$ by fitting the fluorescence spectrum of a standard measured with the detector at this position:
$$
\begin{equation}
\Omega_{ref}=\frac{I_{f}}{I_s(Hz)\Delta t(s)\sum_{i,j}\epsilon_{i,j} c_{i,j}(sr^{-1})}
\end{equation}
$$
where $I_s$, $\Delta t$, $\epsilon_{i,j}$ and $c_{i,j}$ are assumed to be known (flux measured by calibrated diodes, known sample and filter composition).
This provides a fixed relationship between $A$ and $d_0$ which can be substituted in the expression used for calibrating the geometry
$$
\begin{equation}
\begin{split}
I_f(x,c,d_0)=&c\Omega(x,d_0,A(d_0))\
A(d_0)=&\pi\left(\frac{\left(x_{ref}+d_0\right)^2}{\left(1-\frac{\Omega_{ref}}{2\pi}\right)^2}-\left(x_{ref}+d_0\right)^2\right)
\end{split}
\end{equation}
$$
When using a thin-film standard, the thickness and density of the film are unknown but the areal densities of the elements in the film are known. For elements only present in the film and assuming absorption and other secondary effects are negligible, we can write
$$
\begin{equation}
\begin{split}
c_{i,j}^{film}=&\frac{d\mu_{i,j}}{d\Omega} w_{j}^{film}\rho_{film} t_{film}^{\prime}(E_i,E_j)\
\approx&\frac{1}{\cos\alpha_{in}}\frac{d\mu_{i,j}}{d\Omega} w_{Z}^{film}\rho_{film} t_{film}\
=&\frac{1}{\cos\alpha_{in}}\frac{d\mu_{i,j}}{d\Omega}\rho_{Z,A}^{film}
\end{split}
\end{equation}
$$
Hence for elements in the thin-film, it is enough to know their areal densities. In practice however we use mass fractions calculated from the areal densities using the density and the thickness of the substrate. The mass fractions obtained are physically meaningless but valid for the purpose of calculating $\Omega_{ref}$.
For elements in the substrate, density and thickness need to be known if self-absorption is non-negligible:
$$
\begin{equation}
t_{subs}^{\prime}(E_i,E_j)\neq \frac{t_{subs}}{\cos\alpha_{in}}
\end{equation}
$$
Simulate and fit an XRF spectrum of a thin-film standard (simulation and fit are done with a different $d_0$ and $A$; scattering, escape and sum peaks are omitted):
End of explanation
"""
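The constraint $A(d_0)$ above can be checked numerically: substituting it back into the solid-angle formula must reproduce $\Omega_{ref}$ for any choice of $d_0$. A sketch (helper names are illustrative, not spectrocrunch API):

```python
import math

def solidangle(x_mm, d0_mm, area_mm2):
    # Omega = 2*pi*(1 - d / sqrt(A/pi + d^2)) with d = x + d0
    d = x_mm + d0_mm
    return 2.0 * math.pi * (1.0 - d / math.sqrt(area_mm2 / math.pi + d**2))

def area_from_zerodistance(d0_mm, xref_mm, omega_ref):
    """Active area A (mm^2) consistent with a measured solid angle omega_ref (sr)
    at reference motor position xref (mm)."""
    d = xref_mm + d0_mm
    c = 1.0 - omega_ref / (2.0 * math.pi)
    return math.pi * (d**2 / c**2 - d**2)

A = area_from_zerodistance(50.0, -10.0, 0.02)
print(solidangle(-10.0, 50.0, A))  # 0.02, i.e. omega_ref is recovered
```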
caliblines = ["Ca"]
useline = lambda k: any(str(k).startswith(e) for e in caliblines)
# rate = Ifluo/I0 with I0 = flux * time
Rfit = {k:v for k,v in fitresult["fitrates"].items() if useline(k)}
Rinit = {k:v for k,v in fitresult["rates"].items() if useline(k)}
if thinfilmapprox:
# for an element within the film:
# - pymca mass fraction = 1
# - substrate density and thicknes
rho = pymcahandle.sample[0].density
t = pymcahandle.sample[0].thickness
arealdensities = pymcahandle.sample.arealdensity()
substrate = pymcahandle.sample[0].elements
for k in Rinit:
el = k.element
if el not in substrate:
Rinit[k] *= arealdensities[el]/(rho*t)
solidangleref = geometry.solidangle * sum(Rfit.values())/sum(Rinit.values())
"""
Explanation: Determine $\Omega_{ref}$ by comparing the fitted and theoretical fluorescence intensities:
End of explanation
"""
geometry.calibrate(calibrc=calibrc,solidanglecalib=solidangleref,\
plot=True,fit=True,fixedactivearea=False)
# Force to real values for testing:
#geometry.zerodistance = units.Quantity(5,"cm")
#detector.activearea = units.Quantity(70,"mm^2")
print("\nCalibrate geometry using {}:".format(caliblines))
print(" Zero-distance: {:~}".format(geometry.zerodistance_rv.to("cm")))
print(" Active area: {:~}".format(detector.activearea_rv.to("mm^2")))
print("\nCurrent distance:")
print(geometry.detectorposition)
print(" Motor position = {:~}".format(geometry.detectorposition))
print(" Distance: {:~}".format(geometry.distance_rv.to("cm")))
"""
Explanation: Calibrate the geometry ($d_0$ and $A$) with a known $[\Omega_{ref},x_{ref}]$ pair as constraint:
End of explanation
"""
pymcahandle.addtopymca(fresh=False)
fitresult = pymcahandle.fit()
print("\nFitted vs. theory (after geometry calibration):")
printerrors(fitresult,pymcahandle.sample)
plt.figure(figsize=(12,5))
plotfit(fitresult)
spectrum = realsample.xrayspectrum(energy,emin=1,emax = energy+0.5,scattering=False,method=method)
matplotlib.rcParams.update({'font.size': 15})
spectrum.plot(histogram=True,decompose=True,fluxtime=pymcahandle.I0,\
legend=False,forcelines=True)
matplotlib.rcParams.update({'font.size': 14})
plt.show()
"""
Explanation: The correlation between the two unknowns $c$ and $d_0$ is low enough to provide estimates of $d_0$ and $A$ with acceptable uncertainty. Known and fitted areal densities should be the same:
End of explanation
"""
|
ueapy/ueapy.github.io | content/notebooks/2015-11-27-meeting-summary.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Today we discussed some of the basic matplotlib functions and also had a look at different ways of running Jupyter Notebooks.
Creating subplots in matplotlib
Back to basics
End of explanation
"""
plt.rcParams['figure.facecolor'] = '0.8' # grey background
"""
Explanation: To show figure area:
End of explanation
"""
x = np.random.rand(100)
"""
Explanation: Sample 1D array of size 100:
End of explanation
"""
fig = plt.figure(figsize=(5,5))
ax_a = fig.add_subplot(131)
ax_b = fig.add_subplot(132)
ax_c = fig.add_subplot(133)
for iax in [ax_a, ax_b, ax_c]:
iax.plot(x)
fig, (ax_a, ax_b, ax_c) = plt.subplots(ncols=3)
for iax in [ax_a, ax_b, ax_c]:
iax.plot(x)
"""
Explanation: The two simplest ways of creating a bunch of subplots within one figure are shown below.
End of explanation
"""
fig, axs = plt.subplots(nrows=3, ncols=2)
for iax in axs.flat:
iax.plot(x, color='r')
"""
Explanation: Grid of subplots with shared axes
Using the same handy subplots() command, we can create a $3\times 2$ grid of subplots.
End of explanation
"""
fig, axs = plt.subplots(nrows=3, ncols=2, sharex=True, sharey=True)
for iax in axs.flat:
iax.plot(x, color='r')
"""
Explanation: If the subplots share the same axes, it's natural to leave axis labels only on the outermost subplots. We can do this by setting the sharex and sharey keywords of the subplots() function to True.
End of explanation
"""
import cartopy
import cartopy.crs as ccrs
fig = plt.figure(figsize=(30, 10))
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
ax.coastlines()
gl = ax.gridlines(draw_labels=True)
gl.xlabels_top = gl.ylabels_right = False
gl.xformatter = cartopy.mpl.gridliner.LONGITUDE_FORMATTER
gl.yformatter = cartopy.mpl.gridliner.LATITUDE_FORMATTER
lon, lat = np.linspace(-100,50,10), np.linspace(10,60,20)
arr = np.random.rand(20,10)
c = ax.contourf(lon, lat, arr)
cb = plt.colorbar(c, ax=ax, orientation='horizontal',
fraction=0.046, pad=0.04) # magically sets colorbar size correctly...
fig.tight_layout(pad=0)
"""
Explanation: Gridspec
A more sophisticated grid of subplots can be created using matplotlib.gridspec submodule. The quickstart guide can be found on this page:
* Customizing Location of Subplot Using GridSpec
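A minimal GridSpec sketch (the layout values here are just illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example also runs without a display
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

fig = plt.figure(figsize=(6, 4))
gs = gridspec.GridSpec(2, 2, height_ratios=[2, 1])

ax_top = fig.add_subplot(gs[0, :])  # wide panel spanning the whole first row
ax_bl = fig.add_subplot(gs[1, 0])   # bottom-left panel
ax_br = fig.add_subplot(gs[1, 1])   # bottom-right panel
print(len(fig.axes))  # 3
```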
Removing empty space from matplotlib figures
The issue of getting rid of white space in figures was also mentioned during the meeting. Matplotlib has a feature called tight_layout() that in some cases is enough to fit all subplots in a figure.
Example
End of explanation
"""
type(ax)
"""
Explanation: And by the way, ax is not just a matplotlib.axes._subplots.AxesSubplot instance anymore, but has geoaxes attributes:
End of explanation
"""
HTML(html)
"""
Explanation: Jupyter Notebooks
Jupyter Notebooks can be launched not only on a local PC, but also on a remote host with the help of some web-services.
We took a glimpse at
* mybinder.org
* Example: test_binder (click 'launch binder')
* wakari.io
End of explanation
"""
|
opesci/devito | examples/userapi/01_dsl.ipynb | mit | from devito import *
"""
Explanation: The Devito domain specific language: an overview
This notebook presents an overview of the Devito symbolic language, used to express and discretise operators, in particular partial differential equations (PDEs).
For convenience, we import all Devito modules:
End of explanation
"""
grid = Grid(shape=(5, 6), extent=(1., 1.))
grid
"""
Explanation: From equations to code in a few lines of Python
The main objective of this tutorial is to demonstrate how Devito and its SymPy-powered symbolic API can be used to solve partial differential equations using the finite difference method with highly optimized stencils in a few lines of Python. We demonstrate how computational stencils can be derived directly from the equation in an automated fashion and how Devito can be used to generate and execute, at runtime, the desired numerical scheme in the form of optimized C code.
Defining the physical domain
Before we can begin creating finite-difference (FD) stencils we will need to give Devito a few details regarding the computational domain within which we wish to solve our problem. For this purpose we create a Grid object that stores the physical extent (the size) of our domain and knows how many points we want to use in each dimension to discretise our data.
<img src="figures/grid.png" style="width: 220px;"/>
End of explanation
"""
print(Function.__doc__)
"""
Explanation: Functions and data
To express our equation in symbolic form and discretise it using finite differences, Devito provides a set of Function types. A Function object:
* Behaves like a sympy.Function symbol
* Manages data associated with the symbol
To get more information on how to create and use a Function object, or any type provided by Devito, we can take a look at the documentation.
End of explanation
"""
f = Function(name='f', grid=grid)
f
f.data
"""
Explanation: Ok, let's create a function $f(x, y)$ and look at the data Devito has associated with it. Please note that it is important to use explicit keywords, such as name or grid when creating Function objects.
End of explanation
"""
g = TimeFunction(name='g', grid=grid)
g
"""
Explanation: By default, Devito Function objects use the spatial dimensions (x, y) for 2D grids and (x, y, z) for 3D grids. To solve a PDE over several timesteps a time dimension is also required by our symbolic function. For this Devito provides an additional function type, the TimeFunction, which incorporates the correct dimension along with some other intricacies needed to create a time stepping scheme.
End of explanation
"""
g.shape
"""
Explanation: Since the default time order of a TimeFunction is 1, the shape of g is (2, 5, 6), i.e. Devito has allocated two buffers to represent g(t, x, y) and g(t + dt, x, y):
End of explanation
"""
g.dt
"""
Explanation: Derivatives of symbolic functions
The functions we have created so far all act as sympy.Function objects, which means that we can form symbolic derivative expressions from them. Devito provides a set of shorthand expressions (implemented as Python properties) that allow us to generate finite differences in symbolic form. For example, the property f.dx denotes $\frac{\partial}{\partial x} f(x, y)$ - only that Devito has already discretised it with a finite difference expression. There are also a set of shorthand expressions for left (backward) and right (forward) derivatives:
| Derivative | Shorthand | Discretised | Stencil |
| ---------- |:---------:|:-----------:|:-------:|
| $\frac{\partial}{\partial x}f(x, y)$ (right) | f.dxr | $\frac{f(x+h_x,y)}{h_x} - \frac{f(x,y)}{h_x}$ | <img src="figures/stencil_forward.png" style="width: 180px;"/> |
| $\frac{\partial}{\partial x}f(x, y)$ (left) | f.dxl | $\frac{f(x,y)}{h_x} - \frac{f(x-h_x,y)}{h_x}$ | <img src="figures/stencil_backward.png" style="width: 180px;"/> |
A similar set of expressions exists for each spatial dimension defined on our grid, for example f.dy and f.dyl. Obviously, one can also take derivatives in time of TimeFunction objects. For example, to take the first derivative in time of g you can simply write:
End of explanation
"""
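The forward-difference formula in the table can be checked numerically without Devito — a quick sketch in plain NumPy:

```python
import numpy as np

h = 1e-4
x = np.linspace(0.0, 1.0, 101)
f = np.sin(2 * np.pi * x)

# right (forward) first derivative: (f(x + h) - f(x)) / h
dfdx = (np.sin(2 * np.pi * (x + h)) - f) / h

# compare with the analytical derivative; the truncation error is O(h)
exact = 2 * np.pi * np.cos(2 * np.pi * x)
print(np.max(np.abs(dfdx - exact)))
```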
g.dt.evaluate
"""
Explanation: We may also want to take a look at the stencil Devito will generate based on the chosen discretisation:
End of explanation
"""
g.forward
g.backward
"""
Explanation: There also exist convenient shortcuts to express the forward and backward stencil points, g(t+dt, x, y) and g(t-dt, x, y).
End of explanation
"""
g.forward.dt
g.forward.dy
"""
Explanation: And of course, there's nothing to stop us taking derivatives on these objects:
End of explanation
"""
from examples.cfd import init_smooth, plot_field
nt = 100 # Number of timesteps
dt = 0.2 * 2. / 80 # Timestep size (sigma=0.2)
c = 1 # Value for c
# Then we create a grid and our function
grid = Grid(shape=(81, 81), extent=(2., 2.))
u = TimeFunction(name='u', grid=grid)
# We can now set the initial condition and plot it
init_smooth(field=u.data[0], dx=grid.spacing[0], dy=grid.spacing[1])
init_smooth(field=u.data[1], dx=grid.spacing[0], dy=grid.spacing[1])
plot_field(u.data[0])
"""
Explanation: A linear convection operator
Note: The following example is derived from step 5 in the excellent tutorial series CFD Python: 12 steps to Navier-Stokes.
In this simple example we will show how to derive a very simple convection operator from a high-level description of the governing equation. We will go through the process of deriving a discretised finite difference formulation of the state update for the field variable $u$, before creating a callable Operator object. Luckily, the automation provided by SymPy makes the derivation very nice and easy.
The governing equation we want to implement is the linear convection equation:
$$\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} + c\frac{\partial u}{\partial y} = 0.$$
Before we begin, we must define some parameters including the grid, the number of timesteps and the timestep size. We will also initialize our velocity u with a smooth field:
End of explanation
"""
eq = Eq(u.dt + c * u.dxl + c * u.dyl)
eq
"""
Explanation: Next, we wish to discretise our governing equation so that a functional Operator can be created from it. We begin by simply writing out the equation as a symbolic expression, while using shorthand expressions for the derivatives provided by the Function object. This will create a symbolic object of the discretised equation.
Using the Devito shorthand notation, we can express the governing equations as:
End of explanation
"""
stencil = solve(eq, u.forward)
update = Eq(u.forward, stencil)
update
"""
Explanation: We now need to rearrange our equation so that the term $u(t+dt, x, y)$ is on the left-hand side, since it represents the next point in time for our state variable $u$. Devito provides a utility called solve, built on top of SymPy's solve, to rearrange our equation so that it represents a valid state update for $u$. Here, we use solve to create a valid stencil for our update to u(t+dt, x, y):
End of explanation
"""
op = Operator(update, opt='noop')
op(time=nt+1, dt=dt)
plot_field(u.data[0])
"""
Explanation: The right-hand side of the 'update' equation should be a stencil of the shape
<img src="figures/stencil_convection.png" style="width: 160px;"/>
Once we have created this 'update' expression, we can create a Devito Operator. This Operator will basically behave like a Python function that we can call to apply the created stencil over our associated data, as long as we provide all necessary unknowns. In this case we need to provide the number of timesteps to compute via the keyword time and the timestep size via dt (both have been defined above):
End of explanation
"""
print(op.ccode)
"""
Explanation: Note that the real power of Devito is hidden within Operator: it automatically generates and compiles optimized C code. We can inspect this code (inspection is not required to execute it) via:
End of explanation
"""
u = TimeFunction(name='u', grid=grid, space_order=2)
u.dx2
u.dx2.evaluate
"""
Explanation: Second derivatives and high-order stencils
In the above example only a combination of first derivatives was present in the governing equation. However, second (or higher) order derivatives are often present in scientific problems of interest, notably any PDE modeling diffusion. To generate second order derivatives we must give the devito.Function object another piece of information: the desired spatial discretisation order of the stencil(s).
First, lets define a simple second derivative in x, for which we need to give $u$ a space_order of (at least) 2. The shorthand for this second derivative is u.dx2.
End of explanation
"""
u = TimeFunction(name='u', grid=grid, space_order=4)
u.dx2
u.dx2.evaluate
"""
Explanation: We can increase the discretisation order arbitrarily if we wish to specify higher-order FD stencils:
End of explanation
"""
grid_3d = Grid(shape=(5, 6, 7), extent=(1., 1., 1.))
u = TimeFunction(name='u', grid=grid_3d, space_order=2)
u
"""
Explanation: To implement the diffusion or wave equations, we must take the Laplacian $\nabla^2 u$, which is the sum of the second derivatives in all spatial dimensions. For this, Devito also provides a shorthand expression, which means we do not have to hard-code the problem dimension (2D or 3D) in the code. To change the problem dimension we can create another Grid object and use this to re-define our Function's:
End of explanation
"""
u = TimeFunction(name='u', grid=grid_3d, space_order=12)
u.laplace
"""
Explanation: We can re-define our function u with a different space_order argument to change the discretisation order of the stencil expression created. For example, we can derive an expression of the 12th-order Laplacian $\nabla^2 u$:
End of explanation
"""
u.dx2 + u.dy2 + u.dz2
"""
Explanation: The same expression could also have been generated explicitly via:
End of explanation
"""
u = TimeFunction(name='u', grid=grid, space_order=2)
v = TimeFunction(name='v', grid=grid, space_order=2, time_order=2)
v.dt2 + u.laplace
(v.dt2 + u.laplace).dx2
"""
Explanation: Derivatives of composite expressions
Derivatives of any arbitrary expression can easily be generated:
End of explanation
"""
(v.dt2 + u.laplace).dx2.evaluate
"""
Explanation: Which can, depending on the chosen discretisation, lead to fairly complex stencils:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/sandbox-2/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-2', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Gas phase atmospheric chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
weichetaru/weichetaru.github.com | notebook/data-wrangling/numpy-the-basic.ipynb | mit | import numpy as np
"""
Explanation: Numpy - Get Started
What is Numpy?
NumPy is the fundamental package for scientific computing with Python.
It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more. It allows data scientists to easily translate mathematical concepts into code. Most importantly, NumPy arrays are fast and efficient in both CPU time and memory.
Below are notes on the basic usage of NumPy.
End of explanation
"""
# Python List
L = [1, 2, 3]
A = np.array([1, 2, 3])
# You can operate A with mathmatically operation. L cannot.
print(2*A)
print(A**2)
print(np.sqrt(A))
print(np.log(A))
"""
Explanation: Array vs List
As you can see below, you can apply mathematical operations to the array A, such as addition, subtraction, powers, square root, log, etc.
The Python list L cannot do this; it would need a loop over each element to achieve the same result, which is far slower.
End of explanation
"""
a = np.array([1, 2])
b = np.array([3, 4])
# dot product in different ways:
np.dot(a, b) # 11
np.inner(a, b) # 11. dot product is also inner product
a.dot(b) # 11
b.dot(a) # 11
(a*b).sum() # 11
# you can use a Python loop to achieve the same, but it will be extremely slow when the data is huge.
dot = 0
for i, j in zip(a, b):
dot += i*j
print(dot) # 11
"""
Explanation: Dot Product
An important operation on arrays is the dot product, as it is the basis of matrix operations. Below are various ways to compute it.
In the code below, the dot product of $a$ and $b$ is $a\cdot b = 1\times 3 + 2\times 4 = 11$.
End of explanation
"""
# Note: you can also use np.matrix, but np.array is officially recommended.
M = np.array([[1, 2], [3, 4]])
# extract element
M[0][0] # 1 ; this is the same as python list
M[0, 0] # 1
# matrix transpose
M.T
# get shape
M.shape  # (2, 2)
# matrix product
M.dot(M)
# note: for 2-D arrays np.inner(M, M) is NOT the matrix product;
# it contracts over the last axes, so it equals M.dot(M.T)
np.inner(M, M)
# inverse matrix
np.linalg.inv(M)
# determinant
np.linalg.det(M)
# diagonal element
np.diag(M) # [1, 4]
# note: given a 1-D array, this returns a diagonal matrix
np.diag([1, 4]) # [[1, 0], [0, 4]]
# trace
np.diag(M).sum()
np.trace(M)
# produce various 10x10 matrices
# all zero
Z = np.zeros((10, 10))
# all one
O = np.ones((10, 10))
# random from uniform distribution
R = np.random.random((10, 10))
# random from the standard normal distribution N(0, 1)
# Note: randn takes each dimension as a separate argument; the others take a tuple
N = np.random.randn(10, 10)
print(N.mean())
print(N.var())
"""
Explanation: Matrix
We can use np.array to create matrices as well. Below is code for various matrix operations.
End of explanation
"""
|