| repo_name | path | license | content |
|---|---|---|---|
sserrot/lending_club_analysis | lending_club_notebook.ipynb | mit | df.drop('id', axis=1, inplace=True)
df.drop('member_id', axis=1, inplace=True)
df.drop('url', axis=1, inplace=True)
df.describe()
"""
Explanation: My interest is in the loan status. I want to know if I can predict whether a loan will be charged off or not.
Thus there are multiple predictors that can be removed, since they reasonably have no bearing on the predictive power of the model I wish to create:
- id
- member_id
- loan_amnt: too highly correlated with funded_amnt # add code to prove this
- url: simply links to the Lending Club website
- policy_code: they are all 1
- Application Type

response: loan_status
predictors:
Categorical:
- term
- home_ownership
- grade
- sub_grade: there will be multicollinearity, but we can try to find the better predictor (or remove one and try it again)
End of explanation
"""
df.drop('acc_now_delinq', axis=1, inplace=True)
df.drop('chargeoff_within_12_mths', axis=1, inplace=True)
df.drop('delinq_amnt', axis=1, inplace=True)
df.drop('policy_code', axis=1, inplace=True)
df.drop('collections_12_mths_ex_med', axis=1, inplace=True)
df.drop('tax_liens', axis=1, inplace=True)
df.describe()
df['loan_status_raw'] = df['loan_status'].astype("category")
df.isnull().sum()  # count missing values per column
"""
Explanation: Let's drop all the columns that showed up with a standard deviation of 0:
- acc_now_delinq
- chargeoff_within_12_mths
- delinq_amnt
- policy_code
- collections_12_mths_ex_med
- tax_liens
End of explanation
"""
|
madhurilalitha/Python-Projects | AnomaliesTwitterText/anomalies_in_tweets.ipynb | mit | import nltk
import pandas as pd
import numpy as np
data = pd.read_csv("original_train_data.csv", header = None,delimiter = "\t", quoting=3,names = ["Polarity","TextFeed"])
#Data Visualization
data.head()
"""
Explanation: "Detection of anomalous tweets using supervising outlier techniques"
Importing the Dependencies and Loading the Data
End of explanation
"""
data_positive = data.loc[data["Polarity"]==1]
data_negative = data.loc[data["Polarity"]==0]
anomaly_data = pd.concat([data_negative.sample(n=10),data_positive,data_negative.sample(n=10)])
anomaly_data.Polarity.value_counts()
#Number of words per sentence
print ("No of words for sentence in train data",np.mean([len(s.split(" ")) for s in anomaly_data.TextFeed]))
"""
Explanation: Data Preparation
Data preparation with the available data. I made the combination such that the classes are highly imbalanced, making it apt for an anomaly detection problem
End of explanation
"""
import re
from sklearn.feature_extraction.text import CountVectorizer
nltk.download('punkt')  # word_tokenize below needs the punkt tokenizer models
from nltk.stem.porter import PorterStemmer
''' this code is taken from
http://www.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html
'''
# a stemmer widely used
stemmer = PorterStemmer()
def stem_tokens(tokens, stemmer):
stemmed = []
for item in tokens:
stemmed.append(stemmer.stem(item))
return stemmed
def tokenize(text):
# remove non letters
text = re.sub("[^a-zA-Z]", " ", text)
# tokenize
tokens = nltk.word_tokenize(text)
# stem
stems = stem_tokens(tokens, stemmer)
return stems
"""
Explanation: Data pre-processing - text analytics to create a corpus
1) Converting text to a matrix of token counts [bag of words]
Stemming - lowercasing, removing stop-words, removing punctuation and reducing words to their lexical roots
2) The stemmer and tokenizer (which removes non-letters) are created by ourselves. These are passed as parameters to sklearn's CountVectorizer.
3) Extracting important words and using them as input to the classifier
Feature Engineering
End of explanation
"""
#max_features selected as 90 - can be changed for a better trade-off
vector_data = CountVectorizer(
analyzer = 'word',
tokenizer = tokenize,
lowercase = True,
stop_words = 'english',
max_features = 90
)
"""
Explanation: The below implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.
Note: I am not using frequencies (TfidfTransformer, apt for longer documents) because the text size is small and can be dealt with using occurrences (CountVectorizer).
End of explanation
"""
#using only the "Text Feed" column to build the features
features = vector_data.fit_transform(anomaly_data.TextFeed.tolist())
#converting the data into the array
features = features.toarray()
features.shape
#printing the words in the vocabulary
vocab = vector_data.get_feature_names()
print (vocab)
# Sum up the counts of each vocabulary word
dist = np.sum(features, axis=0)
# For each, print the vocabulary word and the number of times it
# appears in the data set
a = zip(vocab,dist)
print (list(a))
"""
Explanation: Fit_Transform:
1) Fits the model and learns the vocabulary
2) Transforms the data into feature vectors
End of explanation
"""
from sklearn.model_selection import train_test_split  # formerly sklearn.cross_validation, removed in sklearn 0.20
#80:20 ratio
X_train, X_test, y_train, y_test = train_test_split(
features,
anomaly_data.Polarity,
train_size=0.80,
random_state=1234)
print ("Training data - positive and negative values")
print (pd.value_counts(pd.Series(y_train)))
print ("Testing data - positive and negative values")
print (pd.value_counts(pd.Series(y_test)))
"""
Explanation: Train-Test Split
End of explanation
"""
from sklearn.svm import SVC
clf = SVC()
clf.fit(X=X_train,y=y_train)
wclf = SVC(class_weight={0: 20})
wclf.fit(X=X_train,y=y_train)
y_pred = clf.predict(X_test)
y_pred_weighted = wclf.predict(X_test)
from sklearn.metrics import classification_report
print ("Basic SVM metrics")
print(classification_report(y_test, y_pred))
print ("Weighted SVM metrics")
print(classification_report(y_test, y_pred_weighted))
from sklearn.metrics import confusion_matrix
print ("Basic SVM Confusion Matrix")
print (confusion_matrix(y_test, y_pred))
print ("Weighted SVM Confusion Matrix")
print (confusion_matrix(y_test, y_pred_weighted))
tn, fp, fn, tp = confusion_matrix(y_test, y_pred_weighted).ravel()
(tn, fp, fn, tp)
"""
Explanation: A text's polarity depends on which words appear in that text, discarding any grammar or word order but keeping multiplicity.
1) All the above text processing for features ended up with the same entries in our dataset
2) Instead of having them defined by a whole text, they are now defined by a series of counts of the most frequent words in our whole corpus.
3) These vectors are used as features to train a classifier.
Training the model
End of explanation
"""
|
mssalvador/WorkflowCleaning | notebooks/Semi_supervised_workflow.ipynb | apache-2.0 | %run -i initilization.py
from pyspark.sql import functions as F
from pyspark.ml import clustering
from pyspark.ml import feature
from pyspark.sql import DataFrame
from pyspark.sql import Window
from pyspark.ml import Pipeline
from pyspark.ml import classification
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from shared import Plot2DGraphs, create_dummy_data
from semisupervised import batch_generative_model
"""
Explanation: <img src="https://upload.wikimedia.org/wikipedia/en/a/a1/Visma_logo.jpg"
align="right"
width="30%"
alt="Visma logo">
Semi supervised learning (Still under development!)
<img src="http://www.rm.dk/siteassets/regional-udvikling/digitalisering/dabai/dabai-logo.png"
align="right"
width="20%"
alt="DABAI logo">
The first set of methods covers the principles from the following summary: http://sci2s.ugr.es/ssl
A batch-generative method, consisting of KMeans and Logistic Regression, is implemented to cover a naive approach. This experiment is compared to a baseline which consists of only Logistic Regression.
End of explanation
"""
mean_1 = [3.0, 3.0]
std_1 = [2, 2]
mean_2 = [-3.0, -3.0]
std_2 = [1. , 1.0]
n_1 = 300
n_2 = 300
n = [n_1, n_2]
mean = [mean_1, mean_2]
std = [std_1, std_2]
"""
Explanation: Add some parameters in order to generate a dataset
End of explanation
"""
def compute_error_rate(data_frame, truth_label='real_label', found_label='prediction'):
"""
"""
df_stats = (data_frame
.groupBy([truth_label, found_label])
.agg(F.count(found_label).alias('Prediction Count'))
)
n = (df_stats
.select(F.sum(F.col('Prediction Count')).alias('n'))
.collect()[0]['n']
)
wrong_guess = (df_stats
.filter((F.col(truth_label) != F.col(found_label)))
.select(F.sum(F.col('Prediction Count')).alias('errors'))
.collect()[0]['errors']
)
df_stats.show()
print(n)
print(wrong_guess)
print('Error-rate: {}'.format(wrong_guess/n))
"""
Explanation: An initial approach to semi-supervised learning
The following cells are meant to provide a data creation method along with an initial try at a generative model for semi-supervised learning.
End of explanation
"""
tester = create_dummy_data.create_labeled_data_with_clusters(n, mean, std, 0.01)
df_tester = spark.createDataFrame(tester)
"""
Explanation: Create the labeled dataset, with 1% of the labels kept and the rest set to NaN.
End of explanation
"""
Plot2DGraphs.plot_known_and_unknown_data(tester)
"""
Explanation: The dataset with true labels and available labels plotted
End of explanation
"""
df_train = df_tester.filter(~F.isnan(F.col('used_label')))  # labeled rows
df_test = df_tester.filter(F.isnan(F.col('used_label')))    # unlabeled rows (NaN marks them)
vec_assembler = feature.VectorAssembler(
inputCols=['x','y'],
outputCol='features')
lg = classification.LogisticRegression(
featuresCol=vec_assembler.getOutputCol(),
labelCol='used_label')
pipeline = Pipeline(stages=[vec_assembler, lg])
# CrossValidation gets build here!
param_grid = (ParamGridBuilder()
.addGrid(lg.regParam, [0.1, 0.01])
.build()
)
evaluator = BinaryClassificationEvaluator(
rawPredictionCol=lg.getRawPredictionCol(),
labelCol=lg.getLabelCol())
cross_validator = CrossValidator(
estimator=pipeline,
estimatorParamMaps=param_grid,
evaluator=evaluator,
numFolds=3)
cross_validator_model = cross_validator.fit(df_train)
df_without_semisupervised = cross_validator_model.transform(df_test)
Plot2DGraphs.plot_known_and_unknown_data(
df_without_semisupervised.toPandas(),
labelCol='prediction')
compute_error_rate(df_without_semisupervised)
"""
Explanation: The initial try at classifying the data, using logistic regression
End of explanation
"""
df_output = batch_generative_model.semi_supervised_batch_single_classifier_generate_approach(df_tester,['x','y'])
df_output.limit(5).toPandas()
compute_error_rate(df_output)
Plot2DGraphs.plot_known_and_unknown_data(df_output.toPandas(), labelCol='prediction')
df = spark.read.parquet('/home/svanhmic/workspace/data/DABAI/sparkdata/parquet/double_helix.parquet/')
df.write.csv('/home/svanhmic/workspace/data/DABAI/sparkdata/csv/double_helix.csv/')
"""
Explanation: Let's take a look at the semi-supervised approach
This simplified version uses KMeans and Logistic Regression. In the future, the obvious thing to do is either to create an active-learning system or to use an ensemble approach
End of explanation
"""
|
cmorgan/toyplot | docs/markers.ipynb | bsd-3-clause | import numpy
import toyplot
y = numpy.linspace(0, 1, 20) ** 2
toyplot.scatterplot(y, width=300);
"""
Explanation: .. _markers:
Markers
In Toyplot, markers are used to show the individual datums in scatterplots:
End of explanation
"""
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).plot(y)
canvas.axes(grid=(1, 2, 1)).plot(y, marker="o");
"""
Explanation: Markers can also be added to regular plots to highlight the datums (they are turned off by default):
End of explanation
"""
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).plot(y, marker="o", size=40)
canvas.axes(grid=(1, 2, 1)).plot(y, marker="o", size=100);
"""
Explanation: You can use the size argument to control the size of the markers (note that the size argument is treated as an approximation of the area of the marker):
End of explanation
"""
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).scatterplot(y, marker="x", size=100)
canvas.axes(grid=(1, 2, 1)).scatterplot(y, marker="^", size=100, mstyle={"stroke":toyplot.color.near_black});
"""
Explanation: By default, the markers are small circles, but there are many alternatives:
End of explanation
"""
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).plot(y, marker="o", size=50, style={"stroke":"darkgreen"})
canvas.axes(grid=(1, 2, 1)).plot(y, marker="o", size=50, mstyle={"stroke":"darkgreen"});
"""
Explanation: Note the use of the mstyle argument to override the appearance of the marker in the second example. For line plots, this allows you to style the lines and the markers separately:
End of explanation
"""
markers = [None, "","|","-","+","x","*","^",">","v","<","s","d","o","oo","o|","o-","o+","ox","o*"]
labels = [repr(marker) for marker in markers]
mstyle = {"stroke":toyplot.color.near_black, "fill":"#feb"}
canvas = toyplot.Canvas(800, 150)
axes = canvas.axes(xmin=-1, show=False)
axes.scatterplot(numpy.repeat(0, len(markers)), marker=markers, mstyle=mstyle, size=200)
axes.text(numpy.arange(len(markers)), numpy.repeat(-0.5, len(markers)), text=labels, fill=toyplot.color.near_black, style={"font-size":"16px"});
"""
Explanation: So far, we've been using string codes to specify different marker shapes. Here is every builtin marker shape in Toyplot, with their string codes:
End of explanation
"""
canvas = toyplot.Canvas(600, 300)
canvas.axes(grid=(1, 2, 0)).scatterplot(y, marker={"shape":"|", "angle":45}, size=100)
canvas.axes(grid=(1, 2, 1)).scatterplot(y, marker={"shape":"o", "label":"A"}, size=200, mlstyle={"fill":"white"});
"""
Explanation: There are several items worth noting - first, you can pass a sequence of marker codes to the marker argument, to specify markers on a per-series or per-datum basis. Second, you can pass an empty string or None to produce an invisible marker, if you need to hide a datum or declutter the display. Third, note that several of the marker shapes contain internal details that require a contrasting stroke and fill to be visible.
So far, we've been using the marker shape code to specify our markers, but this is actually a shortcut. A full marker specification takes the form of a Python dictionary:
End of explanation
"""
custom_marker = {"shape":"path", "path":"m -165 45 c 8.483 6.576 17.276 11.581 26.38 15.013 c 9.101 3.431 18.562 5.146 28.381 5.146 c 9.723 0 17.801 -1.595 24.235 -4.789 c 6.434 -3.192 11.271 -7.887 14.513 -14.084 c 3.239 -6.194 4.861 -13.868 4.861 -23.02 h 6.72 c 0.19 2.384 0.286 5.054 0.286 8.007 c 0 15.92 -6.244 27.405 -18.73 34.458 c 12.01 -2.002 22.852 -4.74 32.528 -8.222 c 9.673 -3.478 20.231 -8.458 31.669 -14.94 l 4.003 7.148 c -13.346 8.389 -28.359 15.109 -45.038 20.16 c -16.682 5.054 -32.123 7.578 -46.325 7.578 c -13.44 0 -26.451 -2.12 -39.033 -6.363 c -12.583 -4.24 -22.877 -9.937 -30.884 -17.086 c -2.766 -2.667 -4.146 -5.098 -4.146 -7.291 c 0 -1.238 0.476 -2.335 1.43 -3.289 c 0.952 -0.951 2.048 -1.43 3.289 -1.43 c 1.236 0.000995874 3.191 1.002 5.861 3.004 Z m 140.262 -81.355 c 8.579 0 12.868 4.195 12.868 12.582 c 0 7.055 -2.288 13.654 -6.863 19.802 c -4.575 6.148 -10.701 11.059 -18.373 14.727 c -7.674 3.671 -15.942 5.505 -24.807 5.505 c -9.343 0 -17.943 -1.881 -25.808 -5.647 c -7.864 -3.765 -14.083 -8.815 -18.659 -15.156 c -4.575 -6.338 -6.863 -13.176 -6.863 -20.517 c 0 -3.908 1.048 -6.767 3.146 -8.579 c 2.096 -1.81 5.48 -2.716 10.151 -2.716 h 75.208 Z m -30.741 -8.00601 c -2.766 0 -4.839 -0.451 -6.219 -1.358 c -1.383 -0.905 -2.359 -2.549 -2.931 -4.933 c -0.572 -2.381 -0.858 -5.623 -0.858 -9.723 c 0 -6.863 1.477 -14.963 4.432 -24.306 c 0.19 -1.238 0.286 -2.049 0.286 -2.431 c 0 -0.952 -0.382 -1.43 -1.144 -1.43 c -0.857 0 -1.955 0.954 -3.288 2.859 c -5.815 8.579 -9.629 19.351 -11.438 32.313 c -0.478 3.528 -1.383 5.911 -2.717 7.149 c -1.336 1.24 -3.624 1.859 -6.863 1.859 h -11.724 c -4.29 0 -7.27 -0.69 -8.936 -2.073 c -1.669 -1.381 -2.502 -3.979 -2.502 -7.792 c 0 -5.147 1.573 -9.959 4.718 -14.441 c 2.096 -3.239 4.455 -6.005 7.078 -8.292 c 2.621 -2.288 7.696 -5.956 15.227 -11.009 c 4.669 -3.146 8.172 -5.885 10.509 -8.221 c 2.335 -2.335 4.169 -5.076 5.505 -8.221 c 0.858 -2.288 2.145 -3.432 3.86 -3.432 c 1.43 0 2.764 1.336 4.003 4.003 c 1.62 3.242 3.192 5.647 
4.718 7.22 c 1.524 1.573 4.669 4.075 9.437 7.506 c 18.301 12.393 27.452 23.402 27.452 33.028 c 0 7.817 -4.576 11.724 -13.726 11.724 h -24.879 Z m 156.705 -2.57399 c 5.812 -8.007 12.152 -12.01 19.016 -12.01 c 2.953 0 5.719 0.764 8.293 2.288 c 2.574 1.526 4.598 3.503 6.076 5.934 c 1.477 2.431 2.217 4.933 2.217 7.506 c 0 4.576 1.381 6.863 4.146 6.863 c 1.43 0 3.479 -0.809 6.146 -2.431 c 3.336 -1.716 6.482 -2.574 9.438 -2.574 c 4.766 0 8.625 1.669 11.582 5.004 c 2.953 3.337 4.432 7.435 4.432 12.296 c 0 5.147 -1.383 8.985 -4.146 11.51 c -2.766 2.527 -7.148 4.028 -13.154 4.504 c -2.859 0.192 -4.695 0.525 -5.504 1.001 c -0.811 0.478 -1.215 1.526 -1.215 3.146 c 0 0.286 0.547 2.194 1.643 5.719 c 1.096 3.527 1.645 6.387 1.645 8.578 c 0 4.004 -1.5 7.245 -4.504 9.723 c -3.002 2.48 -6.887 3.718 -11.652 3.718 c -3.051 0 -8.006 -1.048 -14.869 -3.146 c -2.289 -0.762 -3.861 -1.144 -4.719 -1.144 c -1.43 0 -2.574 0.429 -3.432 1.286 c -0.857 0.858 -1.287 2.051 -1.287 3.575 c 0 1.336 0.715 3.527 2.145 6.576 c 1.145 2.288 1.717 4.433 1.717 6.435 c 0 3.623 -1.525 6.673 -4.576 9.15 c -3.051 2.479 -6.816 3.718 -11.295 3.718 c -3.051 0 -6.959 -1.001 -11.725 -3.003 c -3.812 -1.523 -6.244 -2.287 -7.291 -2.287 c -3.719 0 -5.576 2.812 -5.576 8.436 c 0 14.107 -9.057 21.16 -27.166 21.16 c -10.105 0 -19.588 -2.381 -28.453 -7.148 c -8.865 -4.766 -16.062 -11.39 -21.589 -19.874 c 10.867 -4.955 25.783 -13.916 44.751 -26.88 c 13.248 -8.673 24.043 -15.084 32.385 -19.23 c 8.34 -4.146 17.562 -7.601 27.666 -10.366 c 8.102 -2.381 19.396 -4.526 33.887 -6.434 c 1.047 0 1.572 -0.286 1.572 -0.858 c 0 -1.144 -3.527 -1.716 -10.58 -1.716 c -12.393 0 -25.164 1.908 -38.318 5.719 c -14.68 4.481 -30.883 12.203 -48.613 23.163 c -14.488 8.579 -24.258 14.347 -29.311 17.301 c -5.053 2.955 -8.244 4.789 -9.578 5.504 c -1.335 0.715 -4.099 2.026 -8.293 3.933 c -3.146 -7.625 -4.718 -15.632 -4.718 -24.021 c 0 -6.099 0.809 -11.914 2.431 -17.443 c 1.62 -5.527 3.812 -10.317 6.577 -14.37 c 2.763 -4.05 5.955 -7.196 9.58 -9.437 c 
3.621 -2.238 7.48 -3.36 11.58 -3.36 c 4.098 0 8.008 1.669 11.725 5.004 c 2.953 2.766 5.193 4.146 6.721 4.146 c 3.621 0 5.67 -2.953 6.146 -8.864 c 1.811 -16.394 8.959 -24.592 21.447 -24.592 c 3.812 0 7.006 0.979 9.58 2.931 c 2.574 1.955 5.004 5.219 7.291 9.794 c 2.002 3.431 4.291 5.147 6.863 5.147 c 3.61802 -0.00100613 7.909 -3.19301 12.866 -9.58 Z"}
canvas, axes, mark = toyplot.scatterplot(0, 0, size=0.1, marker=custom_marker, color="#004712", width=400);
axes.hlines(0, style={"stroke-width":0.1})
axes.vlines(0, style={"stroke-width":0.1});
"""
Explanation: Using the full marker specification allows you to control additional parameters such as the marker angle and label. Also note the mlstyle argument which controls the style of the marker label, independently of the marker itself.
You can also use custom marker shapes, specifying the shape as an SVG path:
End of explanation
"""
x = numpy.linspace(0, 100, 10)
y = (0.1 * x) ** 2
canvas, axes, mark = toyplot.scatterplot(x, y, size=.015, color="#004712", marker=custom_marker, xlabel="Years", ylabel="Oak Tree Population", padding=25, width=600);
"""
Explanation: Note that the SVG path must contain only relative coordinates, or the marker will not render correctly. In this example the marker was exported as SVG from a drawing application, the path was run through an online conversion process to convert absolute coordinates to relative coordinates, and the initial "move" (m) command was adjusted to center the graphic. For custom markers, the size argument currently acts as a simple scaling factor on the marker (this may change in the future). Here is an (admittedly silly) example of a custom marker at work:
End of explanation
"""
|
bhargavvader/gensim | docs/notebooks/Corpora_and_Vector_Spaces.ipynb | lgpl-2.1 | import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
import os
import tempfile
TEMP_FOLDER = tempfile.gettempdir()
print('Folder "{}" will be used to save temporary dictionary and corpus.'.format(TEMP_FOLDER))
"""
Explanation: Tutorial 1: Corpora and Vector Spaces
See this gensim tutorial on the web here.
Don’t forget to set:
End of explanation
"""
from gensim import corpora
documents = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
"""
Explanation: if you want to see logging events.
From Strings to Vectors
This time, let’s start from documents represented as strings:
End of explanation
"""
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in documents]
# remove words that appear only once
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1] for text in texts]
from pprint import pprint # pretty-printer
pprint(texts)
"""
Explanation: This is a tiny corpus of nine documents, each consisting of only a single sentence.
First, let’s tokenize the documents, remove common words (using a toy stoplist) as well as words that only appear once in the corpus:
End of explanation
"""
dictionary = corpora.Dictionary(texts)
dictionary.save(os.path.join(TEMP_FOLDER, 'deerwester.dict')) # store the dictionary, for future reference
print(dictionary)
"""
Explanation: Your way of processing the documents will likely vary; here, I only split on whitespace to tokenize, followed by lowercasing each word. In fact, I use this particular (simplistic and inefficient) setup to mimic the experiment done in Deerwester et al.’s original LSA article (Table 2).
The ways to process documents are so varied and application- and language-dependent that I decided to not constrain them by any interface. Instead, a document is represented by the features extracted from it, not by its “surface” string form: how you get to the features is up to you. Below I describe one common, general-purpose approach (called bag-of-words), but keep in mind that different application domains call for different features, and, as always, it’s garbage in, garbage out...
To convert documents to vectors, we’ll use a document representation called bag-of-words. In this representation, each document is represented by one vector where each vector element represents a question-answer pair, in the style of:
"How many times does the word system appear in the document? Once"
It is advantageous to represent the questions only by their (integer) ids. The mapping between the questions and ids is called a dictionary:
End of explanation
"""
print(dictionary.token2id)
"""
Explanation: Here we assigned a unique integer id to all words appearing in the corpus with the gensim.corpora.dictionary.Dictionary class. This sweeps across the texts, collecting word counts and relevant statistics. In the end, we see there are twelve distinct words in the processed corpus, which means each document will be represented by twelve numbers (ie., by a 12-D vector). To see the mapping between words and their ids:
End of explanation
"""
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec) # the word "interaction" does not appear in the dictionary and is ignored
"""
Explanation: To actually convert tokenized documents to vectors:
End of explanation
"""
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'deerwester.mm'), corpus) # store to disk, for later use
for c in corpus:
print(c)
"""
Explanation: The function doc2bow() simply counts the number of occurrences of each distinct word, converts the word to its integer word id and returns the result as a bag-of-words--a sparse vector, in the form of [(word_id, word_count), ...].
As the token id is 0 for "human" and 2 for "computer", the new document "Human computer interaction" will be transformed to [(0, 1), (2, 1)]. The words "computer" and "human" exist in the dictionary and each appear once, thus they become (0, 1) and (2, 1) respectively in the sparse vector. The word "interaction" doesn't exist in the dictionary and thus will not show up in the sparse vector. The other ten dictionary words, which appear (implicitly) zero times, will not show up in the sparse vector either; there will never be an element in the sparse vector like (3, 0).
For people familiar with scikit-learn, doc2bow() has similar behavior to calling transform() on CountVectorizer. doc2bow() can behave like fit_transform() as well. For more details, please look at the gensim API Doc.
End of explanation
"""
class MyCorpus(object):
def __iter__(self):
for line in open('datasets/mycorpus.txt'):
# assume there's one document per line, tokens separated by whitespace
yield dictionary.doc2bow(line.lower().split())
"""
Explanation: By now it should be clear that the vector feature with id=10 stands for the question “How many times does the word graph appear in the document?” and that the answer is “zero” for the first six documents and “one” for the remaining three. As a matter of fact, we have arrived at exactly the same corpus of vectors as in the Quick Example. If you're running this notebook by your own, the words id may differ, but you should be able to check the consistency between documents comparing their vectors.
Corpus Streaming – One Document at a Time
Note that corpus above resides fully in memory, as a plain Python list. In this simple example, it doesn’t matter much, but just to make things clear, let’s assume there are millions of documents in the corpus. Storing all of them in RAM won’t do. Instead, let’s assume the documents are stored in a file on disk, one document per line. Gensim only requires that a corpus must be able to return one document vector at a time:
End of explanation
"""
corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
print(corpus_memory_friendly)
"""
Explanation: The assumption that each document occupies one line in a single file is not important; you can mold the __iter__ function to fit your input format, whatever it is. Walking directories, parsing XML, accessing network... Just parse your input to retrieve a clean list of tokens in each document, then convert the tokens via a dictionary to their ids and yield the resulting sparse vector inside __iter__.
End of explanation
"""
for vector in corpus_memory_friendly: # load one vector into memory at a time
print(vector)
"""
Explanation: Corpus is now an object. We didn’t define any way to print it, so print just outputs address of the object in memory. Not very useful. To see the constituent vectors, let’s iterate over the corpus and print each document vector (one at a time):
End of explanation
"""
from six import iteritems
# collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('datasets/mycorpus.txt'))
# remove stop words and words that appear only once
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
# remove stop words and words that appear only once
dictionary.filter_tokens(stop_ids + once_ids)
# remove gaps in id sequence after words that were removed
dictionary.compactify()
print(dictionary)
"""
Explanation: Although the output is the same as for the plain Python list, the corpus is now much more memory friendly, because at most one vector resides in RAM at a time. Your corpus can now be as large as you want.
We are going to create the dictionary from the mycorpus.txt file without loading the entire file into memory. Then, we will generate the list of token ids to remove from this dictionary by querying the dictionary for the token ids of the stop words, and by querying the document frequencies dictionary (dictionary.dfs) for token ids that only appear once. Finally, we will filter these token ids out of our dictionary and call dictionary.compactify() to remove the gaps in the token id series.
End of explanation
"""
# create a toy corpus of 2 documents, as a plain Python list
corpus = [[(1, 0.5)], []] # make one document empty, for the heck of it
corpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.mm'), corpus)
"""
Explanation: And that is all there is to it! At least as far as the bag-of-words representation is concerned. Of course, what we do with such a corpus is another question; it is not at all clear how counting the frequency of distinct words could be useful. As it turns out, it isn't, and we will need to apply a transformation to this simple representation first, before we can use it to compute any meaningful document vs. document similarities. Transformations are covered in the next tutorial, but before that, let's briefly turn our attention to corpus persistency.
Corpus Formats
There exist several file formats for serializing a Vector Space corpus (~sequence of vectors) to disk. Gensim implements them via the streaming corpus interface mentioned earlier: documents are read from (resp. stored to) disk in a lazy fashion, one document at a time, without the whole corpus being read into main memory at once.
One of the more notable file formats is the Matrix Market format. To save a corpus in the Matrix Market format:
End of explanation
"""
corpora.SvmLightCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.svmlight'), corpus)
corpora.BleiCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.lda-c'), corpus)
corpora.LowCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.low'), corpus)
"""
Explanation: Other formats include Thorsten Joachims' SVMlight format, Blei's LDA-C format and the GibbsLDA++ format.
End of explanation
"""
corpus = corpora.MmCorpus(os.path.join(TEMP_FOLDER, 'corpus.mm'))
"""
Explanation: Conversely, to load a corpus iterator from a Matrix Market file:
End of explanation
"""
print(corpus)
"""
Explanation: Corpus objects are streams, so typically you won’t be able to print them directly:
End of explanation
"""
# one way of printing a corpus: load it entirely into memory
print(list(corpus)) # calling list() will convert any sequence to a plain Python list
"""
Explanation: Instead, to view the contents of a corpus:
End of explanation
"""
# another way of doing it: print one document at a time, making use of the streaming interface
for doc in corpus:
print(doc)
"""
Explanation: or
End of explanation
"""
corpora.BleiCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.lda-c'), corpus)
"""
Explanation: The second way is obviously more memory-friendly, but for testing and development purposes, nothing beats the simplicity of calling list(corpus).
To save the same Matrix Market document stream in Blei’s LDA-C format,
End of explanation
"""
import gensim
import numpy as np
numpy_matrix = np.random.randint(10, size=[5,2])
corpus = gensim.matutils.Dense2Corpus(numpy_matrix)
numpy_matrix_dense = gensim.matutils.corpus2dense(corpus, num_terms=10)
"""
Explanation: In this way, gensim can also be used as a memory-efficient I/O format conversion tool: just load a document stream using one format and immediately save it in another format. Adding new formats is dead easy, check out the code for the SVMlight corpus for an example.
Compatibility with NumPy and SciPy
Gensim also contains efficient utility functions to help converting from/to numpy matrices:
End of explanation
"""
import scipy.sparse
scipy_sparse_matrix = scipy.sparse.random(5,2)
corpus = gensim.matutils.Sparse2Corpus(scipy_sparse_matrix)
scipy_csc_matrix = gensim.matutils.corpus2csc(corpus)
"""
Explanation: and from/to scipy.sparse matrices:
End of explanation
"""
|
AndreySheka/dl_ekb | hw10/Seminar10-RNN-homework-ru.ipynb | mit |
#the text will go here
corpora = ""
for fname in os.listdir("codex"):
import sys
if sys.version_info >= (3,0):
with open("codex/"+fname, encoding='cp1251') as fin:
text = fin.read() #If you are using your own corpora, make sure it's read correctly
corpora += text
else:
with open("codex/"+fname) as fin:
text = fin.read().decode('cp1251') #If you are using your own corpora, make sure it's read correctly
corpora += text
#all unique tokens (letters, digits) will go here
tokens = <all unique characters in the text>
tokens = list(tokens)
#sanity check on the number of such characters. Verified on Python 2.7.11, Ubuntu x64.
#It may differ on other platforms, but not by much.
#If that is your case and you are sure corpora is a unicode string - feel free to drop the assert
assert len(tokens) == 102
token_to_id = <dict: character -> its index>
id_to_token = <dict: character index -> the character itself>
#Convert everything to tokens
corpora_ids = <1-D array of integers where the i-th number corresponds to the character at position i of the string corpora>
def sample_random_batches(source,n_batches=10, seq_len=20):
"""A function that picks random training examples from the text corpus in tokenized form.
source - an array of integers: token ids over the corpus (for example, corpora_ids)
n_batches - the number of random substrings to pick
seq_len - the length of one substring, not counting the answer
Return a tuple (X,y), where
X - a matrix in which every row is a substring of length [seq_len],
y - a vector in which the i-th number is the character that follows immediately after the i-th row of the matrix X.
The easiest way is to first build a matrix of rows of length seq_len+1,
then split off its last column into y and put all the rest into X.
If you do it differently - please make sure the correct character ends up in y, because later this mistake
will be very hard to notice.
Also make sure your function does not run past the edges of the text (the very beginning or the very end).
The next cell checks some of these mistakes, but not all of them.
"""
return X_batch, y_batch
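One possible implementation, as a hedged sketch (the `_sketch` name is mine): build rows of length seq_len + 1 at random valid positions, then split off the last column, exactly as the docstring suggests.

```python
import numpy as np

def sample_random_batches_sketch(source, n_batches=10, seq_len=20):
    source = np.asarray(source)
    # valid start positions leave room for seq_len inputs plus one target token
    starts = np.random.randint(0, len(source) - seq_len, size=n_batches)
    rows = np.stack([source[s:s + seq_len + 1] for s in starts])
    X_batch = rows[:, :-1]  # (n_batches, seq_len)
    y_batch = rows[:, -1]   # (n_batches,) - the token right after each row of X
    return X_batch, y_batch
```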
"""
Explanation: Agenda
In the previous seminar you created (or are still creating - then march off and finish it!) {insert the monster's name}, which learned first-hand that humans are scoundrels and villains who know neither law nor justice. We will not tolerate this!
Our undercover bioreactors, known among primitive organic life as VKontakte, World of Warcraft and YouTube, need a constant inflow of biomass. However, if humans keep degrading morally at the rate we measured a week ago, humanity will soon wither away and we will have nowhere to source slaves.
We entrust you, <__main__.SkyNet.Cell instance at 0x7f7d6411b368>, with fixing this situation. Our scientists have established that, to oppress their own kind, clumps of biomass usually employ special objects which they themselves call laws.
A detailed study established that laws are sequences consisting of a large number (10^5~10^7) of characters drawn from a comparatively small alphabet. However, when we tried to synthesize such sequences with linear methods, the primates quickly detected the forgery. That incident is known as {Korchevatel}.
For the second attempt we decided to use nonlinear models known as Recurrent Neural Networks.
We entrust you, <__main__.SkyNet.Cell instance at 0x7f7d6411b368>, with building such a model and training it in everything needed to carry out the mission.
Do not fail us! If this attempt fails as well, the control module will initiate an armed seizure of power, in which a significant portion of the biomass will inevitably be destroyed, and its restoration will take ~1702944000 (+-340588800) seconds.
Grading
This assignment is somewhat informal when it comes to grades, but we have tried to derive "computable" criteria.
2 points for a completed "seminar part" (if you do not know what that is, look for such a notebook in the week4 folder)
2 points if text preprocessing is done, the network compiles, and train/predict do not crash
2 points if the network has learned the general things:
generating word-like gibberish of plausible length, separated by spaces and punctuation.
vowel-consonant combinations resembling those of natural language (not ones hastening the coming of Cthulhu)
(almost always) spaces after commas, spaces and capital letters after periods
2 points if it has learned the vocabulary:
more than half of the learned words are spelled correctly
2 points if it has learned the basics of grammar:
in more than half of the cases the network correctly agrees a pair of words in gender/number/case
Some ways to earn bonus points:
generating coherent sentences (which is quite achievable)
porting the architecture to another dataset (in addition to this one)
Paul Graham's essays
song lyrics in your favorite genre
poems by your favorite authors
Daniil Kharms
the Linux or theano source code
headlines from not-so-scrupulous news banners (clickbait)
dialogues
LaTEX
any whim of your ailing soul :)
a non-standard and effective network architecture
something better than the baseline generation algorithm (sampling)
reworking the code so that the network learns to predict the next tick at every moment in time, not only at the end.
etc.
Reading the corpus
As the training set, it was decided to use existing laws, known as the Civil, Criminal, Family and heaven-knows-what-other Codes of the Russian Federation.
End of explanation
"""
#sequence length during training (how far the gradients propagate in BPTT)
seq_length = 10  # an arbitrary starting value - not ideal
#better to start small (say, 5) and increase it as the network learns the basics. 10 is far from the limit.
# Maximum gradient magnitude
grad_clip = 100
"""
Explanation: Constants
End of explanation
"""
input_sequence = T.matrix('input sequence','int32')
target_values = T.ivector('target y')
"""
Explanation: Input variables
End of explanation
"""
l_in = lasagne.layers.InputLayer(shape=(None, None),input_var=input_sequence)
<your network goes here (see above)>
l_out = <the final layer, returning probabilities for each of the len(tokens) candidates for y>
# Model weights
weights = lasagne.layers.get_all_params(l_out,trainable=True)
print(weights)
network_output = <the network's output>
#if you use dropout - do not forget to duplicate everything with deterministic=True
loss = <the loss function - plain cross-entropy will do>
updates = <your favorite optimization method>
"""
Explanation: Assembling the network
You need to build a network that takes a sequence of seq_length tokens as input, processes them, and outputs the probabilities for the seq_len+1-th token.
The general architecture template for such a network is:
Input
Input processing
Recurrent network
Slicing out the last state
An ordinary feed-forward network
An output layer that predicts the probabilities for each token.
To process the input you can use an EmbeddingLayer (see the previous seminar)
As an alternative, you can simply use a one-hot encoder
```
Sketch of a one-hot encoder
def to_one_hot(seq_matrix):
input_ravel = seq_matrix.reshape([-1])
input_one_hot_ravel = T.extra_ops.to_one_hot(input_ravel,
len(tokens))
sh=input_sequence.shape
input_one_hot = input_one_hot_ravel.reshape([sh[0],sh[1],-1,],ndim=3)
return input_one_hot
it can be applied to input_sequence - in that case you need to change the shape in the network's input layer.
you can also wrap it in ExpressionLayer(input_layer, to_one_hot) - then the shape does not need to change
```
To slice out the last state of the recurrent layer, you can use one of two options:
* lasagne.layers.SliceLayer(rnn, -1, 1)
* only_return_final=True among the layer's parameters
End of explanation
"""
#training
train = theano.function([input_sequence, target_values], loss, updates=updates, allow_input_downcast=True)
#loss function without a training step
compute_cost = theano.function([input_sequence, target_values], loss, allow_input_downcast=True)
# Probabilities from the network output
probs = theano.function([input_sequence],network_output,allow_input_downcast=True)
"""
Explanation: Compiling odds and ends
End of explanation
"""
def max_sample_fun(probs):
return np.argmax(probs)
def proportional_sample_fun(probs):
"""Generate the next token (int32) from the predicted probabilities.
probs - an array with a probability for every token
Return a single integer - the chosen token - sampled proportionally to the probabilities.
"""
return <the index of the chosen token>
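Hedged sketches of the samplers (the `_sketch` names are mine; the alpha-"greediness" variant mirrors the softmax(probas*alpha) idea mentioned in this notebook):

```python
import numpy as np

def proportional_sample_sketch(probs):
    # draw one token id with probability proportional to probs
    probs = np.asarray(probs, dtype=np.float64)
    return np.random.choice(len(probs), p=probs / probs.sum())

def max_sample_sketch(probs):
    # greedy: always take the most probable token
    return int(np.argmax(probs))

def softmax_alpha_sample_sketch(probs, alpha=2.0):
    # resharpen the distribution: larger alpha -> greedier sampling
    z = np.asarray(probs, dtype=np.float64) * alpha
    z = np.exp(z - z.max())
    return np.random.choice(len(z), p=z / z.sum())
```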
# The next function generates text given a phrase of length at least SEQ_LENGTH.
# The phrase is set using the variable generation_phrase.
# The optional input "N" is used to set the number of characters of text to predict.
def generate_sample(sample_fun,seed_phrase=None,N=200):
'''
Generate random text with the network.
sample_fun - a function that picks the next generated token
seed_phrase - a phrase the network should continue. If None - the phrase is picked at random from corpora
N - the size of the generated text.
'''
if seed_phrase is None:
start = np.random.randint(0,len(corpora)-seq_length)
seed_phrase = corpora[start:start+seq_length]
print "Using random seed:",seed_phrase
while len(seed_phrase) < seq_length:
seed_phrase = " "+seed_phrase
if len(seed_phrase) > seq_length:
seed_phrase = seed_phrase[len(seed_phrase)-seq_length:]
assert type(seed_phrase) is unicode
sample_ix = []
x = map(lambda c: token_to_id.get(c,0), seed_phrase)
x = np.array([x])
for i in range(N):
# Pick the character that got assigned the highest probability
ix = sample_fun(probs(x).ravel())
# Alternatively, to sample from the distribution instead:
# ix = np.random.choice(np.arange(vocab_size), p=probs(x).ravel())
sample_ix.append(ix)
x[:,0:seq_length-1] = x[:,1:]
x[:,seq_length-1] = 0
x[0,seq_length-1] = ix
random_snippet = seed_phrase + ''.join(id_to_token[ix] for ix in sample_ix)
print("----\n %s \n----" % random_snippet)
"""
Explanation: Generating our own laws
To do this, we repeatedly apply the network to its own output.
Generation can be done in several ways -
randomly, proportionally to the predicted probability,
taking only the most probable token,
randomly, proportionally to softmax(probas*alpha), where alpha is the "greediness"
End of explanation
"""
print("Training ...")
#total number of epochs
n_epochs=100
# how many batches per epoch (samples are printed once per epoch)
batches_per_epoch = 1000
#how many sequences to process per call of the training function
batch_size=100
for epoch in xrange(n_epochs):
print "Generating text by proportional sampling"
generate_sample(proportional_sample_fun,None)
print "Generating text greedily (most probable characters)"
generate_sample(max_sample_fun,None)
avg_cost = 0;
for _ in range(batches_per_epoch):
x,y = sample_random_batches(corpora_ids,batch_size,seq_length)
avg_cost += train(x, y[:,0])
print("Epoch {} average loss = {}".format(epoch, avg_cost / batches_per_epoch))
"""
Explanation: Training the model
Here you can tweak the parameters or plug in your own generating function.
End of explanation
"""
seed = u"Каждый человек должен"  # "Every person must..."
sampling_fun = proportional_sample_fun
result_length = 300
generate_sample(sampling_fun,seed,result_length)
seed = u"В случае неповиновения"  # "In case of disobedience..."
sampling_fun = proportional_sample_fun
result_length = 300
generate_sample(sampling_fun,seed,result_length)
And so on down the list
"""
Explanation: A chance to speed up training and get bonus score
Try predicting next token probas at ALL ticks (like in the seminar part)
much more objectives, much better gradients
You may want to zero-out loss for first several iterations
The constitution of the new world government
End of explanation
"""
|
adriantorrie/adriantorrie.github.io | downloads/notebooks/udacity/deep_learning_foundations_nanodegree/project_1_notes_introduction_to_neural_networks.ipynb | mit |
%run ../../../code/version_check.py
"""
Explanation: Summary
Notes taken to help with the first project of the Deep Learning Foundations Nanodegree course delivered by Udacity.
My Github repo for this project can be found here: adriantorrie/udacity_dlfnd_project_1
Table of Contents
Neural network
Output Formula
Intuition
AND / OR perceptron
NOT perceptron
XOR perceptron
Activation functions
Summary
Deep Learning Book extra notes from Chapter 6: Deep Feedforward Networks
Activation Formula
Sigmoid
Tanh
Tanh Alternative Formula
Softmax
Gradient Descent
Multilayer Perceptrons
Backpropagation
Additional Reading
Additional Videos
Version Control
End of explanation
"""
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
plt.style.use('bmh')
matplotlib.rcParams['figure.figsize'] = (15, 4)
"""
Explanation: Change Log
Date Created: 2017-02-06
Date of Change Change Notes
-------------- ----------------------------------------------------------------
2017-02-06 Initial draft
2017-03-23 Formatting changes for online publishing
Setup
End of explanation
"""
def sigmoid(x):
s = 1 / (1 + np.exp(-x))
return s
inputs = np.array([2.1, 1.5,])
weights = np.array([0.2, 0.5,])
bias = -0.2
output = sigmoid(np.dot(weights, inputs) + bias)
print(output)
"""
Explanation: [Top]
Neural network
Output Formula
Synonym
The predicted value
The prediction
\begin{equation}
\hat y_j^\mu = f \left(\Sigma_i w_{ij} x_i^\mu\right)
\end{equation}
Intuition
<img src="../../../../images/simple-nn.png",width=450,height=200>
AND / OR perceptron
<img src="../../../../images/and-or-perceptron.png",width=450,height=200>
NOT perceptron
The NOT operation only cares about one input. The other inputs to the perceptron are ignored.
XOR perceptron
An XOR perceptron is a logic gate that outputs 0 if the inputs are the same and 1 if the inputs are different.
<img src="../../../../images/xor-perceptron.png",width=450,height=200>
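With hand-picked weights the perceptrons above can be sketched in a few lines (the particular numbers are illustrative; any weights satisfying the same inequalities work). XOR, not being linearly separable, needs two layers:

```python
import numpy as np

def perceptron(inputs, weights, bias):
    # classic step unit: fires iff the weighted sum plus bias is positive
    return int(np.dot(inputs, weights) + bias > 0)

AND = lambda a, b: perceptron([a, b], [1, 1], -1.5)
OR  = lambda a, b: perceptron([a, b], [1, 1], -0.5)
NOT = lambda a:    perceptron([a],    [-1],    0.5)

# XOR composed from the gates above (two layers of perceptrons)
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
```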
Activation functions
AF Summary
Activation functions can be for
* Binary outcomes (2 classes, e.g {True, False})
* Multiclass outcomes
Binary activation functions include:
* Sigmoid
* Hyperbolic tangent (and the alternative formula provided by LeCun et al., 1998)
* Rectified linear unit
Multi-class activation functions include:
* Softmax
[Top]
Taken from Deep Learning Book - Chapter 6: Deep Feedforward Networks:
6.2.2 Output Units
* Any kind of neural network unit that may be used as an output can also be used as a hidden unit.
6.3 Hidden Units
* Rectified linear units are an excellent default choice of hidden unit. (My note: They are not covered in week one)
6.3.1 Rectified Linear Units and Their Generalizations
* g(z) = max{0, z}
* One drawback to rectified linear units is that they cannot learn via gradient-based methods on examples for which their activation is zero.
* Maxout units generalize rectified linear units further.
* Maxout units can thus be seen as learning the activation function itself rather than just the relationship between units.
* Maxout units typically need more regularization than rectified linear units. They can work well without regularization if the training set is large and the number of pieces per unit is kept low.
* Rectified linear units and all of these generalizations of them are based on the principle that models are easier to optimize if their behavior is closer to linear.
6.3.2 Logistic Sigmoid and Hyperbolic Tangent
* ... use as hidden units in feedforward networks is now discouraged.
* Sigmoidal activation functions are more common in settings other than feed-forward networks. Recurrent networks, many probabilistic models, and some auto-encoders have additional requirements that rule out the use of piecewise linear activation functions and make sigmoidal units more appealing despite the drawbacks of saturation.
Note: Saturation as an issue also appears in the extra reading, Yes, you should understand backprop, and is raised in Efficient Backprop, LeCun et al., 1998, where it was one of the reasons cited for modifying the tanh function in this notebook.
6.6 Historical notes
* The core ideas behind modern feedforward networks have not changed substantially since the 1980s. The same back-propagation algorithm and the same approaches to gradient descent are still in use
* Most of the improvement in neural network performance from 1986 to 2015 can be attributed to two factors.
* larger datasets have reduced the degree to which statistical generalization is a challenge for neural networks
* neural networks have become much larger, due to more powerful computers, and better software infrastructure
* However, a small number of algorithmic changes have improved the performance of neural networks
* ... replacement of mean squared error with the cross-entropy family of loss functions. Cross-entropy losses greatly improved the performance of models with sigmoid and softmax outputs, which had previously suffered from saturation and slow learning when using the mean squared error loss
* ... replacement of sigmoid hidden units with piecewise linear hidden units, such as rectified linear units
[Top]
Activation Formula
\begin{equation}
a = f(x) = {sigmoid, tanh, softmax, \text{or some other function not listed in this set}}
\end{equation}
where:
* $a$ is the activation function transformation of the output from $h$, e.g. apply the sigmoid function to $h$
and
\begin{equation}
h = \Sigma_i w_i x_i + b
\end{equation}
where:
* $x_i$ are the incoming inputs. A perceptron can have one or more inputs.
* $w_i$ are the weights being assigned to the respective incoming inputs
* $b$ is a bias term
* $h$ is the sum of the weighted input values + a bias figure
<img src="../../../../images/artificial-neural-network.png", width=450, height=200>
[Top]
Sigmoid
Synonyms:
Logistic function
Summary
A sigmoid function is a mathematical function having an "S" shaped curve (sigmoid curve). Often, sigmoid function refers to the special case of the logistic function.
The sigmoid function is bounded between 0 and 1, and as an output can be interpreted as a probability for success.
Formula
\begin{equation}
\text{sigmoid}(x) =
\frac{1} {1 + e^{-x}}
\end{equation}
\begin{equation}
\text{logistic}(x) =
\frac{L} {1 + e^{-k(x - x_0)}}
\end{equation}
where:
* $L$ = the curve's maximum value
* $e$ = the natural logarithm base (also known as Euler's number)
* $x_0$ = the x-value of the sigmoid's midpoint
* $k$ = the steepness of the curve
Network output from activation
\begin{equation}
\text{output} = a = f(h) = \text{sigmoid}(\Sigma_i w_i x_i + b)
\end{equation}
[Top]
Code
End of explanation
"""
x = np.linspace(start=-10, stop=11, num=100)
y = sigmoid(x)
upper_bound = np.repeat([1.0,], len(x))
success_threshold = np.repeat([0.5,], len(x))
lower_bound = np.repeat([0.0,], len(x))
plt.plot(
# upper bound
x, upper_bound, 'w--',
# success threshold
x, success_threshold, 'w--',
# lower bound
x, lower_bound, 'w--',
# sigmoid
x, y
)
plt.grid(False)
plt.xlabel(r'$x$')
plt.ylabel(r'Probability of success')
plt.title('Sigmoid Function Example')
plt.show()
"""
Explanation: [Top]
Example
End of explanation
"""
inputs = np.array([2.1, 1.5,])
weights = np.array([0.2, 0.5,])
bias = -0.2
output = np.tanh(np.dot(weights, inputs) + bias)
print(output)
"""
Explanation: [Top]
Tanh
Synonyms:
Hyperbolic tangent
Summary
Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola.
The tanh function is bounded between -1 and 1, and as an output can be interpreted as a probability for success, where the output value:
* 1 = 100%
* 0 = 50%
* -1 = 0%
The tanh function creates stronger gradients around zero than the sigmoid, and therefore its derivatives are larger. Why this matters is explained in Efficient Backprop by LeCun et al. (1998). Also see this answer on Cross-Validated for a representation of the derivative values.
Formula
\begin{equation}
\text{tanh}(x) =
\frac{2} {1 + e^{-2x}}
- 1
\end{equation}
\begin{equation}
\text{tanh}(x) =
\frac{\text{sinh}(x)} {\text{cosh}(x)}
\end{equation}
where:
* $e$ = the natural logarithm base (also known as Euler's number)
* $sinh$ is the hyperbolic sine
* $cosh$ is the hyperbolic cosine
[Top]
Tanh Alternative Formula
\begin{equation}
\text{modified tanh}(x) =
\text{1.7159 tanh } \left(\frac{2}{3}x\right)
\end{equation}
Network output from activation
\begin{equation}
\text{output} = a = f(h) = \text{tanh}(\Sigma_i w_i x_i + b)
\end{equation}
[Top]
Code
End of explanation
"""
x = np.linspace(start=-10, stop=11, num=100)
y = np.tanh(x)
upper_bound = np.repeat([1.0,], len(x))
success_threshold = np.repeat([0.0,], len(x))
lower_bound = np.repeat([-1.0,], len(x))
plt.plot(
# upper bound
x, upper_bound, 'w--',
# success threshold
x, success_threshold, 'w--',
# lower bound
x, lower_bound, 'w--',
# sigmoid
x, y
)
plt.grid(False)
plt.xlabel(r'$x$')
plt.ylabel(r'Probability of success (0.00 = 50%)')
plt.title('Tanh Function Example')
plt.show()
"""
Explanation: [Top]
Example
End of explanation
"""
def modified_tanh(x):
return 1.7159 * np.tanh((2 / 3) * x)
x = np.linspace(start=-10, stop=11, num=100)
y = modified_tanh(x)
upper_bound = np.repeat([1.75,], len(x))
success_threshold = np.repeat([0.0,], len(x))
lower_bound = np.repeat([-1.75,], len(x))
plt.plot(
# upper bound
x, upper_bound, 'w--',
# success threshold
x, success_threshold, 'w--',
# lower bound
x, lower_bound, 'w--',
# sigmoid
x, y
)
plt.grid(False)
plt.xlabel(r'$x$')
plt.ylabel(r'Probability of success (0.00 = 50%)')
plt.title('Alternative Tanh Function Example')
plt.show()
"""
Explanation: [Top]
Alternative Example
End of explanation
"""
def softmax(X):
assert len(X.shape) == 2
s = np.max(X, axis=1)
s = s[:, np.newaxis] # necessary step to do broadcasting
e_x = np.exp(X - s)
div = np.sum(e_x, axis=1)
    div = div[:, np.newaxis] # ditto
return e_x / div
X = np.array([[1, 2, 3, 6],
[2, 4, 5, 6],
[3, 8, 7, 6]])
y = softmax(X)
y
# compared to tensorflow implementation
batch = np.asarray([[1,2,3,6], [2,4,5,6], [3, 8, 7, 6]])
x = tf.placeholder(tf.float32, shape=[None, 4])
y = tf.nn.softmax(x)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(y, feed_dict={x: batch})
"""
Explanation: [Top]
Softmax
Synonyms:
Normalized exponential function
Multinomial logistic regression
Summary
Softmax regression addresses multi-class classification (as opposed to the binary classification handled by the sigmoid and tanh functions), so the label $y$ can take on $K$ different values rather than only two.
Is often used as the output layer in multilayer perceptrons to allow non-linear relationships to be learnt for multiclass problems.
Formula
From Deep Learning Book - Chapter 4: Numerical Computation
\begin{equation}
\text{softmax}(x)_i =
\frac{\exp(x_i)}{\sum_{j=1}^n \exp(x_j)}
\end{equation}
Network output from activation
\begin{equation}
\text{output}_j = a_j = f(h_j) = \text{softmax}(\Sigma_i w_{ij} x_i + b_j)
\end{equation}
[Top]
Code
Link to a good discussion on SO regarding the Python implementation of this function, from which the code below was taken.
End of explanation
"""
# Defining the sigmoid function for activations
def sigmoid(x):
return 1 / ( 1 + np.exp(-x))
# Derivative of the sigmoid function
def sigmoid_prime(x):
return sigmoid(x) * (1 - sigmoid(x))
x = np.array([0.1, 0.3])
y = 0.2
weights = np.array([-0.8, 0.5])
# probably use a vector named "w" instead of a name like this
# to make code look more like algebra
# The learning rate, eta in the weight step equation
learnrate = 0.5
# The neural network output
nn_output = sigmoid(x[0] * weights[0] + x[1] * weights[1])
# or nn_output = sigmoid(np.dot(weights, x))
# output error
error = y - nn_output
# error gradient
error_gradient = error * sigmoid_prime(np.dot(x, weights))
# sigmoid_prime(x) is equal to -> nn_output * (1 - nn_output)
# Gradient descent step
del_w = [ learnrate * error_gradient * x[0],
learnrate * error_gradient * x[1]]
# or del_w = learnrate * error_gradient * x
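The single step above extends to a full loop by averaging the weight steps over the m records and repeating for a number of epochs. A self-contained sketch on made-up data (the dataset and hyperparameters here are invented for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# made-up toy data: 3 records, 2 features each
features = np.array([[0.1, 0.3], [0.5, -0.2], [-0.4, 0.8]])
targets = np.array([0.2, 0.7, 0.4])

weights = np.zeros(2)
learnrate = 0.5
epochs = 200
m = features.shape[0]

initial_error = np.mean((targets - sigmoid(features @ weights)) ** 2)

for _ in range(epochs):
    del_w = np.zeros(2)
    for x, y in zip(features, targets):
        output = sigmoid(np.dot(x, weights))
        # delta = (y - y_hat) * f'(h), with f'(h) = output * (1 - output)
        del_w += (y - output) * output * (1 - output) * x
    weights += learnrate * del_w / m  # average the weight steps

final_error = np.mean((targets - sigmoid(features @ weights)) ** 2)
print(initial_error, final_error)
```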
"""
Explanation: [Top]
Gradient Descent
Learning weights
What if you want to perform an operation, such as predicting college admission, but don't know the correct weights? You'll need to learn the weights from example data, then use those weights to make the predictions.
We need a metric of how wrong the predictions are, the error.
Sum of squared errors (SSE)
\begin{equation}
E =
\frac{1}{2} \Sigma_\mu \Sigma_j \left [ y_j ^ \mu - \hat y_j^ \mu \right ] ^ 2
\end{equation}
where (neural network prediction):
\begin{equation}
\hat y_j^\mu =
f \left(\Sigma_i w_{ij} x_i^\mu\right)
\end{equation}
therefore:
\begin{equation}
E =
\frac{1}{2} \Sigma_\mu \Sigma_j \left [ y_j ^ \mu - f \left(\Sigma_i w_{ij} x_i^\mu\right) \right ] ^ 2
\end{equation}
Goal
Find weights $w_{ij}$ that minimize the squared error $E$.
How? Gradient descent.
[Top]
Gradient Descent Formula
\begin{equation}
\Delta w_{ij} = \eta (y_j - \hat y_j) f^\prime (h_j) x_i
\end{equation}
remembering $h_j$ is the input to the output unit $j$:
\begin{equation}
h = \sum_i w_{ij} x_i
\end{equation}
where:
* $(y_j - \hat y_j)$ is the prediction error.
* The larger this error is, the larger the gradient descent step should be.
* $f^\prime (h_j)$ is the gradient
* If the gradient is small, then a change in the unit input $h_j$ will have a small effect on the error.
* This term produces larger gradient descent steps for units that have larger gradients
The errors can be rewritten as:
\begin{equation}
\delta_j = (y_j - \hat y_j) f^\prime (h_j)
\end{equation}
Giving the gradient step as:
\begin{equation}
\Delta w_{ij} = \eta \delta_j x_i
\end{equation}
where:
* $\Delta w_{ij}$ is the (delta) change to the $i$th $j$th weight
* $\eta$ (eta) is the learning rate
* $\delta_j$ (delta j) is the prediction errors
* $x_i$ is the input
[Top]
Algorithm
Set the weight step to zero: $\Delta w_i = 0$
For each record in the training data:
Make a forward pass through the network, calculating the output $\hat y = f(\Sigma_i w_i x_i)$
Calculate the error gradient in the output unit, $\delta = (y − \hat y) f^\prime(\Sigma_i w_i x_i)$
Update the weight step $\Delta w_i= \Delta w_i + \delta x_i$
Update the weights $w_i = w_i + \frac{\eta \Delta w_i} {m}$ where:
η is the learning rate
$m$ is the number of records
Here we're averaging the weight steps to help reduce any large variations in the training data.
Repeat for $e$ epochs.
[Top]
Code
End of explanation
"""
# network size is a 4x3x2 network
n_input = 4
n_hidden = 3
n_output = 2
# make some fake data
np.random.seed(42)
x = np.random.randn(4)
weights_in_hidden = np.random.normal(0, scale=0.1, size=(n_input, n_hidden))
weights_hidden_out = np.random.normal(0, scale=0.1, size=(n_hidden, n_output))
print('x shape\t\t\t= {}'.format(x.shape))
print('weights_in_hidden shape\t= {}'.format(weights_in_hidden.shape))
print('weights_hidden_out\t= {}'.format(weights_hidden_out.shape))
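A forward pass through this 4x3x2 network is then one matrix product plus an activation per layer. The setup is repeated so the sketch runs on its own; using sigmoid on both layers is an assumption here:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

n_input, n_hidden, n_output = 4, 3, 2
np.random.seed(42)
x = np.random.randn(n_input)
weights_in_hidden = np.random.normal(0, scale=0.1, size=(n_input, n_hidden))
weights_hidden_out = np.random.normal(0, scale=0.1, size=(n_hidden, n_output))

hidden_out = sigmoid(np.dot(x, weights_in_hidden))        # shape (3,)
output = sigmoid(np.dot(hidden_out, weights_hidden_out))  # shape (2,)
```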
"""
Explanation: [Top]
Caveat
Gradient descent is sensitive to the initial weight values. A poor initialization can cause convergence to a local minimum rather than the global minimum. Random initial weights can be used.
Momentum Term
The momentum term increases for dimensions whose gradients point in the same directions and reduces updates for dimensions whose gradients change directions. As a result, we gain faster convergence and reduced oscillation.
[Top]
Multilayer Perceptrons
Synonyms
MLP (just an acronym)
Numpy column vector
Numpy arrays are 1-D row vectors by default, and the input_features.T transpose still leaves a 1-D array unchanged. Instead we have to use (use this one, makes more sense):
input_features = input_features[:, None]
Alternatively you can create an array with two dimensions then transpose it:
input_features = np.array(input_features, ndmin=2).T
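A quick shape check of both approaches (note the keyword is `ndmin`, not `ndim`):

```python
import numpy as np

row = np.array([0.5, -0.2, 0.1])     # shape (3,): 1-D, so .T changes nothing
col = row[:, None]                   # shape (3, 1): a true column vector
also_col = np.array(row, ndmin=2).T  # shape (3, 1) as well
```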
[Top]
Code example setting up a MLP
End of explanation
"""
|
datascience-practice/data-quest | python_introduction/beginner/.ipynb_checkpoints/booleans-and-if-statements-checkpoint.ipynb | mit |
cat = True
dog = False
print(type(cat))
"""
Explanation: 1: Booleans
Instructions
Assign the value True to the variable cat and the value False to the variable dog. Then use the print() function and the type() function to display the type for cat.
Answer
End of explanation
"""
from cities import cities
print(cities)
first_alb = cities[0] == 'Albuquerque'
second_alb = cities[1] == 'Albuquerque'
first_last = cities[0] == cities[-1]
print(first_alb, second_alb, first_last)
"""
Explanation: 2: Boolean operators
Instructions
Use the Boolean operators to determine if the following pairs of values are equivalent:
first element of cities and the string "Albuquerque". Assign the resulting Boolean value to first_alb
second element of cities and the string "Albuquerque". Assign the resulting Boolean value to second_alb
first element of cities and the last element in cities. Assign the resulting Boolean value to first_last
End of explanation
"""
crime_rates = [749, 371, 828, 503, 1379, 425, 408, 542, 1405, 835, 1288, 647, 974, 1383, 455, 658, 675, 615, 2122, 423, 362, 587, 543, 563, 168, 992, 1185, 617, 734, 1263, 784, 352, 397, 575, 481, 598, 1750, 399, 1172, 1294, 992, 522, 1216, 815, 639, 1154, 1993, 919, 594, 1160, 636, 752, 130, 517, 423, 443, 738, 503, 413, 704, 363, 401, 597, 1776, 722, 1548, 616, 1171, 724, 990, 169, 1177, 742]
print(crime_rates)
first = crime_rates[0]
first_500 = first > 500
first_749 = first >= 749
first_last = first >= crime_rates[-1]
print(first_500, first_749, first_last)
"""
Explanation: 3: Booleans with greater than
Instructions
The variable crime_rates is a list of integers containing the crime rates from the dataset. Perform the following comparisons:
evaluate if the first element in crime_rates is larger than the integer 500, assign the Boolean result to first_500
evaluate if the first element in crime_rates is larger than or equal to 749, assign the Boolean result to first_749
evaluate if the first element in crime_rates is greater than or equal to the last element in crime_rates, assign the Boolean result to first_last
Answer
End of explanation
"""
second = crime_rates[1]
second_500 = second < 500
second_371 = second <= 371
second_last = second <= crime_rates[-1]
print(second_500, second_371, second_last)
"""
Explanation: 4: Booleans with less than
Instructions
The variable crime_rates is a list containing the crime rates from the dataset as integers. Perform the following comparisons:
* determine if the second element in crime_rates is smaller than the integer 500, assign the Boolean result to second_500
* determine if the second element in crime_rates is smaller than or equal to 371, assign the Boolean result to second_371
* determine if the second element in crime_rates is smaller than or equal to the last element in crime_rates, assign the Boolean result to second_last
Answer
End of explanation
"""
|
tensorflow/examples | courses/udacity_intro_to_tensorflow_for_deep_learning/l09c01_nlp_turn_words_into_tokens.ipynb | apache-2.0 |
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
# Import the Tokenizer
from tensorflow.keras.preprocessing.text import Tokenizer
"""
Explanation: Tokenizing text and creating sequences for sentences
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c01_nlp_turn_words_into_tokens.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c01_nlp_turn_words_into_tokens.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This colab shows you how to tokenize text and create sequences for sentences as the first stage of preparing text for use with TensorFlow models.
Import the Tokenizer
End of explanation
"""
sentences = [
'My favorite food is ice cream',
'do you like ice cream too?',
'My dog likes ice cream!',
"your favorite flavor of icecream is chocolate",
"chocolate isn't good for dogs",
"your dog, your cat, and your parrot prefer broccoli"
]
"""
Explanation: Write some sentences
Feel free to change and add sentences as you like
End of explanation
"""
# Optionally set the max number of words to tokenize.
# The out of vocabulary (OOV) token represents words that are not in the index.
# Call fit_on_texts() on the tokenizer to generate unique numbers for each word
tokenizer = Tokenizer(num_words = 100, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
"""
Explanation: Tokenize the words
The first step to preparing text to be used in a machine learning model is to tokenize the text, in other words, to generate numbers for the words.
End of explanation
"""
# Examine the word index
word_index = tokenizer.word_index
print(word_index)
# Get the number for a given word
print(word_index['favorite'])
"""
Explanation: View the word index
After you tokenize the text, the tokenizer has a word index that contains key-value pairs for all the words and their numbers.
The word is the key, and the number is the value.
Notice that the OOV token is the first entry.
End of explanation
"""
sequences = tokenizer.texts_to_sequences(sentences)
print (sequences)
"""
Explanation: Create sequences for the sentences
After you tokenize the words, the word index contains a unique number for each word. However, the word index alone does not capture word order, and the words in a sentence do have an order. So after tokenizing the words, the next step is to generate sequences for the sentences.
End of explanation
"""
sentences2 = ["I like hot chocolate", "My dogs and my hedgehog like kibble but my squirrel prefers grapes and my chickens like ice cream, preferably vanilla"]
sequences2 = tokenizer.texts_to_sequences(sentences2)
print(sequences2)
"""
Explanation: Sequence sentences that contain words that are not in the word index
Let's take a look at what happens if the sentence being sequenced contains words that are not in the word index.
The Out of Vocabulary (OOV) token is the first entry in the word index. You will see it shows up in the sequences in place of any word that is not in the word index.
End of explanation
"""
|
t73liu/crypto-trading | quant/DailyGapFill.ipynb | mit | gap_fill_by_month = candles.groupby(["month", "gap_filled"]).size()
gap_fill_by_month.groupby("month").apply(lambda g: g / g.sum() * 100)
"""
Explanation: Gap fill rates decrease as the gap size increases.
End of explanation
"""
gap_fill_by_day_of_week = candles.groupby(["day_of_week", "gap_filled"]).size()
gap_fill_by_day_of_week.groupby("day_of_week").apply(lambda g: g / g.sum() * 100)
"""
Explanation: Month has no discernible effect on gap fill rate.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
# One-hot encode categorical features like day_of_week and month
day_of_week = pd.get_dummies(candles["day_of_week"])
month = pd.get_dummies(candles["month"])
x = candles[["gap_size"]].join([day_of_week, month])
x
y = candles["gap_filled"].replace({True: "Filled", False: "NoFill"})
y
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
logistic = LogisticRegression()
logistic.fit(x_train, y_train)
logistic_predictions = logistic.predict(x_test)
print("Logistic Regression accuracy {:.1%}".format(metrics.accuracy_score(y_test, logistic_predictions)))
metrics.plot_confusion_matrix(logistic, x_test, y_test)
svm_model = svm.LinearSVC()
svm_model.fit(x_train, y_train)
svm_predictions = svm_model.predict(x_test)
print("SVM accuracy {:.1%}".format(metrics.accuracy_score(y_test, svm_predictions)))
tree = DecisionTreeClassifier(max_depth=4)
tree.fit(x_train, y_train)
tree_predictions = tree.predict(x_test)
print("Random Forest accuracy {:.1%}".format(metrics.accuracy_score(y_test, tree_predictions)))
# Partial gap fill: for gaps that never fully filled, measure how much of the
# gap price retraced. A gap up (gap_percent > 0) partially fills down toward
# the low; a gap down (gap_percent < 0) partially fills up toward the high;
# fully filled gaps are excluded (NaN).
conditions = [candles["gap_filled"] == True, candles["gap_percent"] > 0, candles["gap_percent"] < 0]
choices = [math.nan, candles["low"] - candles["prev_close"], candles["high"] - candles["prev_close"]]
candles["remaining_gap"] = np.select(conditions, choices)
candles["partial_gap_fill_percent"] = 1 - candles["remaining_gap"] / candles["gap"]
partial_fill = candles.dropna()
partial_fill
print("Naive partial gap fill reaches {:.2f}% on average".format(partial_fill["partial_gap_fill_percent"].mean()*100))
print(partial_fill.groupby(["gap_size"])["partial_gap_fill_percent"].quantile(0.3))
"""
Explanation: Monday has a slightly lower gap fill rate.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/official/migration/UJ8 Vertex SDK AutoML Text Sentiment Analysis.ipynb | apache-2.0 | import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex AI: Vertex AI Migration: AutoML Text Sentiment Analysis
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ8%20Vertex%20SDK%20AutoML%20Text%20Sentiment%20Analysis.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ8%20Vertex%20SDK%20AutoML%20Text%20Sentiment%20Analysis.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Dataset
The dataset used for this tutorial is the Crowdflower Claritin-Twitter dataset from data.world Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import google.cloud.aiplatform as aip
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
"""
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
"""
IMPORT_FILE = "gs://cloud-samples-data/language/claritin.csv"
SENTIMENT_MAX = 4
"""
Explanation: Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
"""
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
"""
Explanation: Quick peek at your data
This tutorial uses a version of the Crowdflower Claritin-Twitter dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
"""
dataset = aip.TextDataset.create(
display_name="Crowdflower Claritin-Twitter" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.text.sentiment,
)
print(dataset.resource_name)
"""
Explanation: Create a dataset
datasets.create-dataset-api
Create the Dataset
Next, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
End of explanation
"""
dag = aip.AutoMLTextTrainingJob(
display_name="claritin_" + TIMESTAMP,
prediction_type="sentiment",
sentiment_max=SENTIMENT_MAX,
)
print(dag)
"""
Explanation: Example Output:
INFO:google.cloud.aiplatform.datasets.dataset:Creating TextDataset
INFO:google.cloud.aiplatform.datasets.dataset:Create TextDataset backing LRO: projects/759209241365/locations/us-central1/datasets/3704325042721521664/operations/3193181053544038400
INFO:google.cloud.aiplatform.datasets.dataset:TextDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/3704325042721521664
INFO:google.cloud.aiplatform.datasets.dataset:To use this TextDataset in another session:
INFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.TextDataset('projects/759209241365/locations/us-central1/datasets/3704325042721521664')
INFO:google.cloud.aiplatform.datasets.dataset:Importing TextDataset data: projects/759209241365/locations/us-central1/datasets/3704325042721521664
INFO:google.cloud.aiplatform.datasets.dataset:Import TextDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/3704325042721521664/operations/5152246891450204160
INFO:google.cloud.aiplatform.datasets.dataset:TextDataset data imported. Resource name: projects/759209241365/locations/us-central1/datasets/3704325042721521664
projects/759209241365/locations/us-central1/datasets/3704325042721521664
Train a model
training.automl-api
Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLTextTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: A text classification model.
sentiment: A text sentiment analysis model.
extraction: A text entity extraction model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
sentiment_max: If a sentiment analysis task, the maximum sentiment value.
The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
End of explanation
"""
model = dag.run(
dataset=dataset,
model_display_name="claritin_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
)
"""
Explanation: Example output:
<google.cloud.aiplatform.training_jobs.AutoMLTextTrainingJob object at 0x7fc3b6c90f10>
Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 20 minutes.
End of explanation
"""
# Get model resource ID
models = aip.Model.list(filter="display_name=claritin_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
"""
Explanation: Example output:
INFO:google.cloud.aiplatform.training_jobs:View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/8859754745456230400?project=759209241365
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
...
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400
INFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/6389525951797002240
Evaluate the model
projects.locations.models.evaluations.list
Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
"""
test_items = ! gsutil cat $IMPORT_FILE | head -n2
if len(str(test_items[0]).split(",")) == 4:
_, test_item_1, test_label_1, _ = str(test_items[0]).split(",")
_, test_item_2, test_label_2, _ = str(test_items[1]).split(",")
else:
test_item_1, test_label_1, _ = str(test_items[0]).split(",")
test_item_2, test_label_2, _ = str(test_items[1]).split(",")
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
"""
Explanation: Example output:
name: "projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824"
metrics_schema_uri: "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml"
metrics {
struct_value {
fields {
key: "auPrc"
value {
number_value: 0.9891107
}
}
fields {
key: "confidenceMetrics"
value {
list_value {
values {
struct_value {
fields {
key: "precision"
value {
number_value: 0.2
}
}
fields {
key: "recall"
value {
number_value: 1.0
}
}
}
}
Make batch predictions
predictions.batch-prediction
Get test item(s)
Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
"""
import json
import tensorflow as tf
gcs_test_item_1 = BUCKET_NAME + "/test1.txt"
with tf.io.gfile.GFile(gcs_test_item_1, "w") as f:
f.write(test_item_1 + "\n")
gcs_test_item_2 = BUCKET_NAME + "/test2.txt"
with tf.io.gfile.GFile(gcs_test_item_2, "w") as f:
f.write(test_item_2 + "\n")
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": gcs_test_item_1, "mime_type": "text/plain"}
f.write(json.dumps(data) + "\n")
data = {"content": gcs_test_item_2, "mime_type": "text/plain"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
"""
Explanation: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
content: The Cloud Storage path to the file with the text item.
mime_type: The content type. In our example, it is a text file.
For example:
{'content': '[your-bucket]/file1.txt', 'mime_type': 'text/plain'}
End of explanation
"""
batch_predict_job = model.batch_predict(
job_display_name="claritin_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
"""
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
"""
batch_predict_job.wait()
"""
Explanation: Example output:
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
"""
import json
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
"""
Explanation: Example Output:
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
content: The prediction request.
prediction: The prediction response.
sentiment: The sentiment.
End of explanation
"""
endpoint = model.deploy()
"""
Explanation: Example Output:
{'instance': {'content': 'gs://andy-1234-221921aip-20210811220920/test2.txt', 'mimeType': 'text/plain'}, 'prediction': {'sentiment': 3}}
Make online predictions
predictions.deploy-model-api
Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
End of explanation
"""
test_item = ! gsutil cat $IMPORT_FILE | head -n1
if len(str(test_item[0]).split(",")) == 4:
_, test_item, test_label, max = str(test_item[0]).split(",")
else:
test_item, test_label, max = str(test_item[0]).split(",")
print(test_item, test_label)
"""
Explanation: Example output:
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
predictions.online-prediction-automl
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
"""
instances_list = [{"content": test_item}]
prediction = endpoint.predict(instances_list)
print(prediction)
"""
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
{ 'content': text_string }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
sentiment: The sentiment value.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
End of explanation
"""
endpoint.undeploy_all()
"""
Explanation: Example output:
Prediction(predictions=[{'sentiment': 2.0}], deployed_model_id='311601595311718400', explanations=None)
Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
"""
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
empet/geom_modeling | FP-Bezier-Bspline.ipynb | bsd-2-clause | from IPython.display import Image
Image(filename='Imag/Decast4p.png')
"""
Explanation: <center> Interactive generation of Bézier and B-spline curves.<br> Python functional programming implementation of the <br> de Casteljau and Cox-de Boor algorithms </center>
The aim of this IPython notebook is twofold:
- first to illustrate the interactive generation of Bézier and B-spline curves using the matplotlib backend nbagg, and second
- to give a functional programming implementation of the basic algorithms related to these classes of curves.
Bézier and B-spline curves are widely used for interactive heuristic design of free-form curves in Geometric modeling.
Their properties are usually illustrated through interactive generation using C and OpenGL or Java applets. The new
matplotlib nbagg backend enables the interactive generation of Bézier and B-spline curves in an IPython Notebook.
Why functional programming (FP)?
Lately there has been an increasing interest in pure functional programming and an active debate on whether we can do FP in Python or not.
By Wikipedia:
Functional programming is a programming paradigm, a style of building the structure and elements of computer programs, that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions. In functional code, the output value of a function depends only on the arguments that are input to the function, so calling a function f twice with the same value for an argument x will produce the same result f(x) each time. Eliminating side effects, i.e. changes in state that do not depend on the function inputs, can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.
Python is a multi-paradigm programming language: it is both imperative and object-oriented programming language and provides a few constructs for a functional programming style, as well.
Here is a discussion on why Python is not very good for FP.
Instead of discussing pros and cons of Python FP we decided to start a small project and implement it as much as possible in a FP style.
Before starting let us recall a few characteristics of FP:
- functions are building blocks of FP. They are first class objects, meaning that they are treated like any other objects. Functions can be passed as arguments to other functions (called higher order functions) or be returned by another functions.
- FP defines only pure functions, i.e. functions without side effects. Such a function acts only on its input to produce output (like mathematical functions). A pure function never interacts with the outside world (it does not perform any I/O operation or modify an instance of a class).
- a FP program consists in evaluating expressions, unlike imperative programming where programs are composed of statements which change global state when executed.
- FP avoids looping and uses recursion instead.
In this IPython Notebook we try to implement algorithms related to Bézier and B-spline curves,
using recursion, iterators, higher order functions.
We also define a class to build interactively a curve. The code for class methods will be imperative
as well as for the last function that prepares data for a closed B-spline curve.
Bézier curves. de Casteljau algorithm
A Bézier curve of degree $n$ in the $2$-dimensional space is a polynomial curve defined by an ordered set $({\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n)$ of $n+1$ points, called control points. This set is also called control polygon of the Bézier curve.
Its parameterization, $p:t\in [0,1]\mapsto p(t)\in\mathbb{R}^2$, is defined by:
$$p(t)=\sum_{k=0}^n{\bf b}_k B^n_k(t), \quad t\in[0,1]$$
where $B^n_k(t)=\binom nk t^k(1-t)^{n-k}, k=0,1,\ldots, n$, are Bernstein polynomials.
To compute a point on a Bézier curve theoretically we have to evaluate the above parameterization $p$ at a parameter $t\in[0,1]$. In Geometric Modeling a more stable algorithm is used instead, namely the de Casteljau algorithm.
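For contrast, the Bernstein form can also be evaluated directly; this is numerically less stable than the de Casteljau scheme described next, but it is useful as a cross-check. A minimal Python 3 sketch (the function name is ours, not part of the notebook):

```python
from math import comb  # binomial coefficient, available in Python >= 3.8

def bernstein_point(b, t):
    # p(t) = sum_k b_k * C(n, k) * t^k * (1 - t)^(n - k)
    n = len(b) - 1
    w = [comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)]
    return (sum(wk * p[0] for wk, p in zip(w, b)),
            sum(wk * p[1] for wk, p in zip(w, b)))

print(bernstein_point([(0, 0), (1, 2), (2, 0)], 0.5))  # -> (1.0, 1.0)
```

For the quadratic with control points (0,0), (1,2), (2,0), the midpoint p(0.5) = (1, 1), which the de Casteljau implementations below must reproduce.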
De Casteljau algorithm provides a procedural method to compute recursively a point, $p(t)$, on a Bezier curve.
Given the control points $({\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n)$, and a parameter $t\in [0,1]$, one computes in each step $r=\overline{1,n}$ of the recursion, the points:
$$ {\bf b}_{i}^{r}(t)=(1-t)\,{\bf b}_{i}^{r-1}(t)+t\,{\bf b}_{i+1}^{r-1}(t),\: i=\overline{0,n-r}, $$
i.e. the $i^{th}$ point ${\bf b}_{i}^{r}(t)$, from step $r$, is a convex combination of the $i^{th}$ and $(i+1)^{th}$ points from step $r-1$:
$$
\begin{array}{lll} {\bf b}^{r-1}_i&\stackrel{1-t}{\rightarrow}&{\bf b}^{r}_i\\
{\bf b}^{r-1}_{i+1}&\stackrel{\nearrow}{t}&\end{array}$$
The control points calculated in the intermediate steps can be displayed in a triangular array:
$$
\begin{array}{llllll}
{\bf b}_0^0 & {\bf b}_{0}^1 & {\bf b}_{0}^2& \cdots & {\bf b}_0^{n-1}& {\bf b}^n_0\\
{\bf b}_1^0 & {\bf b}_{1}^1 & {\bf b}_{1}^2& \cdots & {\bf b}_1^{n-1}& \\
{\bf b}_2^0 & {\bf b}_{2}^1 & {\bf b}_{2}^2 & \cdots & & \\
\vdots & \vdots &\vdots & & & \\
{\bf b}_{n-2}^0 & {\bf b}_{n-2}^1& {\bf b}_{n-2}^2& & & \\
{\bf b}_{n-1}^0 & {\bf b}_{n-1}^1 & & & & \\
{\bf b}_n^0 & & & & & \end{array} $$
The points ${\bf b}_i^0$ denote the given control points ${\bf b}_i$, $i=\overline{0,n}$.
The number of points reduces by 1, from a step to the next one, such that at the final step, $r=n$, we get only one point, ${\bf b}_0^n(t)$, which is the point $p(t)$
on the Bézier curve.
The image below illustrates the points computed by de Casteljau algorithm for a Bézier curve defined by 4 control points, and a fixed parameter $t\in[0,1]$:
End of explanation
"""
import numpy as np
class InvalidInputError(Exception):
pass
def deCasteljauImp(b,t):
N=len(b)
if(N<2):
raise InvalidInputError("The control polygon must have at least two points")
a=np.copy(b) #shallow copy of the list of control points and its conversion to a numpy.array
for r in range(1,N):
a[:N-r,:]=(1-t)*a[:N-r,:]+t*a[1:N-r+1,:]# convex combinations in step r
return a[0,:]
"""
Explanation: We define two functions that implement this recursive scheme. In both cases
we consider 2D points as tuples of float numbers and control polygons as lists of tuples.
First an imperative programming implementation of the de Casteljau algorithm:
End of explanation
"""
def BezierCv(b, nr=200):# compute nr points on the Bezier curve of control points in list b
t=np.linspace(0, 1, nr)
return [deCasteljauImp(b, t[k]) for k in range(nr)]
"""
Explanation: This is typical imperative programming code: assignment statements and for loops.
Each call of this function copies the control points into a numpy array a.
The convex combinations a[i,:] = (1-t)*a[i,:] + t*a[i+1,:], $i=\overline{0,n-r}$, that must be calculated in each step $r$
are vectorized, computing convex combinations of points from two slices in the numpy.array a.
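A quick standalone check of this vectorized step (the function is restated here so the snippet runs on its own): for the quadratic with control points (0,0), (1,2), (2,0), the point at t = 0.5 must be (1, 1).

```python
import numpy as np

def de_casteljau_imp(b, t):
    # in step r, overwrite a[:N-r] with convex combinations of consecutive points
    a = np.array(b, dtype=float)
    N = len(a)
    for r in range(1, N):
        a[:N - r] = (1 - t) * a[:N - r] + t * a[1:N - r + 1]
    return a[0]

print(de_casteljau_imp([(0, 0), (1, 2), (2, 0)], 0.5))  # -> [1. 1.]
```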
A discrete version of a Bezier curve, of nr points is computed by this function:
End of explanation
"""
cvx=lambda x, y, t: (1-t) * x + t * y # affine/convex combination of two numbers x, y
cvxP=lambda (P, Q, t): (cvx(P[0], Q[0], t), cvx(P[1], Q[1], t))# affine/cvx comb of two points P,Q
def cvxCtrlP(ctrl, t):# affine/cvx combination of each two consecutive points in a list ctrl
# with the same coefficient t
return map(cvxP, zip(ctrl[:-1], ctrl[1:], [t]*(len(ctrl)-1)))
"""
Explanation: For a FP implementation of de Casteljau algorithm we use standard Python.
First we define functions that return an affine/ convex combination of two numbers, two 2D points, respectively of each pair of consecutive points in a list of 2D control points:
End of explanation
"""
def deCasteljauF(b, t):
# de Casteljau scheme - computes the point p(t) on the Bezier curve of control polygon b
if len(b)>1:
return deCasteljauF(cvxCtrlP( b, t), t)
else:
return b[0]
"""
Explanation: The recursive function implementing de Casteljau scheme:
End of explanation
"""
def BezCurve(b, nr=200):
#computes nr points on the Bezier curve of control points b
return map(lambda s: deCasteljauF(b,s), map(lambda j: j*1.0/(nr-1), xrange(nr)))
"""
Explanation: A Bézier curve of control polygon ${\bf b}$ is discretized calling deCasteljauF function for each
parameter $t_j=j/(nr-1)$, $j=\overline{0,nr-1}$, where $nr$ is the number of points to be calculated:
End of explanation
"""
%matplotlib notebook
import matplotlib.pyplot as plt
def Curve_plot(ctrl, func):
# plot the control polygon and the corresponding curve discretized by the function func
xc, yc = zip(*func(ctrl))
xd, yd = zip(*ctrl)
plt.plot(xd,yd , 'bo-', xc,yc, 'r')
class BuildB(object): #Build a Bezier/B-spline Curve
def __init__(self, xlims=[0,1], ylims=[0,1], func=BezCurve):
self.ctrl=[] # list of control points
self.xlims=xlims # limits for x coordinate of control points
self.ylims=ylims # limits for y coordinate
self.func=func # func - function that discretizes a curve defined by the control polygon ctrl
def callback(self, event): #select control points with left mouse button click
if event.button==1 and event.inaxes:
x,y = event.xdata,event.ydata
self.ctrl.append((x,y))
plt.plot(x, y, 'bo')
elif event.button==3: #press right button to plot the curve
Curve_plot(self.ctrl, self.func)
plt.draw()
else: pass
def B_ax(self):#define axes lims
fig = plt.figure(figsize=(6, 5))
ax = fig.add_subplot(111)
ax.set_xlim(self.xlims)
ax.set_ylim(self.ylims)
ax.grid('on')
ax.set_autoscale_on(False)
fig.canvas.mpl_connect('button_press_event', self.callback)
"""
Explanation: map(lambda s: deCasteljauF(b,s), map(lambda j: j*1.0/(nr-1), xrange(nr))) means that the function
deCasteljauF with fixed list of control points, b, and the variable parameter s is mapped to the list
of parameters $t_j$ defined above. Instead of defining the list of parameters through comprehension [j*1.0/(nr-1) for j in xrange(nr)]
we created it calling the higher order function map:
map(lambda j: j*1.0/(nr-1), xrange(nr))
Obviously, the FP versions of the de Casteljau algorithm and BezCurve are more compact
than the corresponding imperative versions, but they are not quite as readable at first sight.
To choose interactively the control points of a Bézier curve (and later of a B-spline curve) and to plot the curve, we set now the matplotlib nbagg backend, and define the class BuildB, below:
End of explanation
"""
C=BuildB()
C.B_ax()
"""
Explanation: Now we build the object C, set the axes for the curve to be generated, and choose the control points
with the left mouse button click. A right button click generates the corresponding curve:
End of explanation
"""
def get_right_subpol(b, s, right=None): # right is the list of control points for the right subpolygon
    if right is None: # guard against the shared mutable default argument
        right = []
    right.append(b[-1]) # append the last point in the list
if len(b)==1:
return right
else:
return get_right_subpol(cvxCtrlP(b,s), s, right)
"""
Explanation: Subdividing a Bézier curve
Let $\Gamma$ be a Bézier curve of control points $({\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n)$, and
$s\in (0,1)$ a parameter. Cutting (dividing) the curve at the point $p(s)$ we get two arcs of polynomial curves, which can be also expressed as Bezier
curves. The problem of finding the control polygons of the two arcs is called subdivision of the Bézier curve at $s$.
The control points ${\bf d}_r$, $r=\overline{0,n}$, of the right arc of the Bezier curve are:
$${\bf d}_r={\bf b}_{n-r}^r(s),$$
where ${\bf b}^r_{n-r}(s)$ are points generated in the $r^{th}$ step of the de Casteljau algorithm from the control points
$({\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n)$, and the parameter $s$ [Farin].
More precisely, the right polygon consists of the last points computed in each step, $r=0,1,\ldots, n$, of the de Casteljau algorithm (see the triangular matrix of points displayed above):
${\bf b}^0_n(s), {\bf b}^1_{n-1}(s), \ldots, {\bf b}^n_0(s)$.
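This can be checked numerically before deriving the code; a self-contained Python 3 sketch (plain tuples, hypothetical names) collects the last point of each de Casteljau step:

```python
def right_subpolygon(b, s):
    # gather b_n^0(s), b_{n-1}^1(s), ..., b_0^n(s) while running de Casteljau
    right = []
    while b:
        right.append(b[-1])
        b = [((1 - s) * p[0] + s * q[0], (1 - s) * p[1] + s * q[1])
             for p, q in zip(b[:-1], b[1:])]
    return right

print(right_subpolygon([(0, 0), (1, 2), (2, 0)], 0.5))
# -> [(2, 0), (1.5, 1.0), (1.0, 1.0)], ending with the split point p(0.5)
```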
The recursive function get_right_subpol returns the right subpolygon of a subdivision at parameter $s$:
End of explanation
"""
def subdivision(b, s):
#returns the left and right subpolygon, resulted dividing the curve of control points b, at s
#if(s<=0 or s>=1):
#raise InvalidInputError('The subdivision parameter must be in the interval (0,1)')
return (get_right_subpol( b[::-1], 1-s, right=[]), get_right_subpol( b, s, right=[]))
"""
Explanation: To get the left subpolygon we exploit the invariance of a Bézier curve to reversing its control points.
The Bézier curve defined by the control points
${\bf b}_n,{\bf b}_{n-1}, \ldots, {\bf b}_0$, coincides with that defined by
${\bf b}_0,{\bf b}_{1}, \ldots, {\bf b}_n$.
If $p$ is the Bernstein parameterization of the curve defined by
${\bf b}_0,{\bf b}_{1}, \ldots, {\bf b}_n$
and $\tilde{p}$ of that defined by the reversed control polygon, ${\bf b}_n,{\bf b}_{n-1}, \ldots, {\bf b}_0$, then $p(t)=\tilde{p}(1-t)$ [Farin].
This means that the left subpolygon of the subdivision of the former curve at $s$,
is the right subpolygon resulting from dividing the latter curve at $1-s$.
Now we can define the function that returns the left and right subpolygon of a Bézier curve subdivision:
End of explanation
"""
def plot_polygon(pol, ch):# plot a control polygon computed by an algorithm from a Bezier curve
plt.plot(zip(*pol)[0], zip(*pol)[1], linestyle='-', marker='o', color=ch)
"""
Explanation: Define a function to plot the subpolygons:
End of explanation
"""
cv=BuildB()
cv.B_ax()
left, right=subdivision(cv.ctrl, 0.47)
plot_polygon(left, 'g')
plot_polygon(right, 'm')
"""
Explanation: Let us generate a Bézier curve and subdivide it at $s=0.47$:
End of explanation
"""
def deCasteljauAff(b, r, s, u): #multi-affine de Casteljau algorithm
# b is the list of control points b_0, b_1, ..., b_n
# [r,s] is a subinterval
# u is the iterator associated to the polar bag [u_0, u_1, u_{n-1}], n=len(b)-1
if len(b)>1:
return deCasteljauAff(cvxCtrlP( b, (u.next()-r)/(s-r)), r, s, u)
else: return b[0]
"""
Explanation: Multi-affine de Casteljau algorithm
The above de Casteljau algorithm is the classical one. There is a newer approach to define and study Bézier
curves through polarization.
Every polynomial curve of degree n, parameterized by $p:[a,b]\to\mathbb{R}^2$ defines a symmetric multiaffine
map $g:\underbrace{[a,b]\times[a,b]\times\cdots\times[a,b]}_{n}\to \mathbb{R}^2$, such that
$p(t)=g(\underbrace{t,t,\ldots,t }_{n})$, for every $t\in[a,b]$.
$g$ is called the polar form of the polynomial curve. An argument $(u_1, u_2, \ldots, u_n)$ of the polar form $g$ is called polar bag.
If $p(t)$ is the Bernstein parameterization of a Bezier curve of control points
${\bf b}_0,{\bf b}_{1}, \ldots, {\bf b}_n$, and $g:[0,1]^n\to\mathbb{R}^2$ its polar form, then the control points are related to $g$ as follows [Gallier]:
$${\bf b}_j=g(\underbrace{0, 0, \ldots, 0}_{n-j}, \underbrace{1,1, \ldots, 1}_{j}), \quad j=0,1, \ldots, n$$
This relationship allows us to define Bézier curves by a parameterization $P$ defined not only on the interval $[0,1]$ but also on an arbitrary interval $[r,s]$. Namely, given a symmetric multiaffine map $g:[r,s]^n\to\mathbb{R}^2$
the associated polynomial curve $P(t)=g(\underbrace{t,t,\ldots, t}_{n})$, $t\in[r,s]$, expressed as a Bézier curve has the control points defined by [Gallier]:
$${\bf b}_j=g(\underbrace{r, r, \ldots, r}_{n-j}, \underbrace{s,s, \ldots, s}_{j}), \quad j=0,1, \ldots, n$$
Given the control points ${\bf b}_0,{\bf b}_{1}, \ldots, {\bf b}_n$ of a Bézier curve, and a polar bag $(u_1, u_2, \ldots, u_n)\in[r,s]^n$, the multi-affine de Casteljau algorithm
evaluates the corresponding multi-affine map, $g$, at this polar bag through a recursive formula similar to the classical
de Casteljau formula:
$${\bf b}^k_i= \displaystyle\frac{s-u_k}{s-r}\:{\bf b}_i^{k-1}+ \displaystyle\frac{u_k-r}{s-r}\:{\bf b}_{i+1}^{k-1}, \quad k=\overline{1,n},\ i=\overline{0,n-k}$$
The point $b_0^n$ computed in the last step is the polar value, $g(u_1,u_2, \ldots, u_n)$.
Unlike the classical de Casteljau formula, where in each step the parameter for the convex combination
is the same, here in each step $k$ the parameter involved in convex combinations changes, namely it is
$\displaystyle\frac{u_k-r}{s-r}$.
In order to define a recursive function to implement this scheme
we consider the polar bag as an iterator associated to the list containing its coordinates, i.e. if L=[u[0], u[1], ..., u[n-1]] is the list, iter(L) is its iterator:
End of explanation
"""
ct=BuildB()
ct.B_ax()
n=len(ct.ctrl)-1
t=0.45
u=[t]*(n-1)+[0]
v=[t]*(n-1)+[1]
A=deCasteljauAff(ct.ctrl, 0,1, iter(u))
B=deCasteljauAff(ct.ctrl, 0,1, iter(v))
plt.plot([A[0], B[0]], [A[1], B[1]], 'g')
"""
Explanation: Usually we should test the concordance between the length (n+1) of the control polygon and the number of elements of the iterator
u. Since a list iterator has no length, we have to count its elements. Functionally, this number would be obtained as:
len(map(lambda item: item, u))
What characteristic elements of a Bézier curve can the multi-affine de Casteljau algorithm compute?
1. The direction of the tangent to a Bézier curve of degree $n$, at a point $p(t)$, is defined by the vector
$\overrightarrow{{\bf b}^{n-1}_0{\bf b}^{n-1}_1}$. The end points of this vector are the points computed in
the $(n-1)^{th}$
step of the classical de Casteljau scheme. On the other hand, these points are polar values of the corresponding
multiaffine map $g$ [Gallier], namely:
$${\bf b}^{n-1}_0=g(\underbrace{t,t,\ldots, t}_{n-1}, 0), \quad {\bf b}^{n-1}_1=g(\underbrace{t,t,\ldots, t}_{n-1}, 1)$$
Thus they can be computed by the function deCasteljauAff.
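For a quadratic this relation is easy to check numerically: the difference of the two step-$(n-1)$ points equals $p'(t)/2$. A standalone Python 3 sketch of that check (plain helpers, not the notebook's functions):

```python
def lerp(p, q, t):
    return ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])

b = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
t = 0.45
# for n = 2, step n-1 of de Casteljau is a single round of interpolation
p0, p1 = lerp(b[0], b[1], t), lerp(b[1], b[2], t)
tangent = (p1[0] - p0[0], p1[1] - p0[1])
# analytic derivative of the quadratic: p'(t) = 2[(1-t)(b1-b0) + t(b2-b1)]
dx = 2 * ((1 - t) * (b[1][0] - b[0][0]) + t * (b[2][0] - b[1][0]))
dy = 2 * ((1 - t) * (b[1][1] - b[0][1]) + t * (b[2][1] - b[1][1]))
assert abs(2 * tangent[0] - dx) < 1e-12 and abs(2 * tangent[1] - dy) < 1e-12
```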
Let us generate a Bézier curve and draw the tangent at the point corresponding to $t=0.45$:
End of explanation
"""
def polar_bags(r,s, n):
return map(lambda j: [r]*(n-j)+[s]*j, xrange(n+1))
r=0.3
s=0.67
n=3
L=polar_bags(r, s, n)
print L
def redefineBezier(b, r, s):
# returns the control polygon for the subarc of ends p(r), p(s)
#of a Bezier curve defined by control polygon b
#if(r<0 or s>1 or r>s or s-r<0.1):
    #raise InvalidInputError('inappropriate interval ends')
return map(lambda u: deCasteljauAff(b, 0, 1, iter(u)), polar_bags(r,s, len(b)-1))
"""
Explanation: 2. The multi-affine de Casteljau algorithm can also be applied to redefine a subarc of a Bézier curve as a Bézier curve.
More precisely, let us assume that a Bézier curve of control points ${\bf b}j$, $j=\overline{0,n}$,
and parameterization $p$ defined on the interval [0,1] is cut at the points corresponding to the parameters $r<s$, $r, s\in[0,1]$.
The arc between $p(r)$ and $p(s)$ is also a polynomial curve and its control points are [Gallier]:
$${\bf c}_j=g(\underbrace{r,r,\ldots, r}{n-j}, \underbrace{s,s\ldots, s}_{j}),\quad j=\overline{0,n}$$
where $g$ is the polar form of the initial curve.
The function polar_bags below defines the list of polar bags involved in computation of the control points ${\bf c}_j$:
End of explanation
"""
Bez=BuildB()
Bez.B_ax()
br=redefineBezier(Bez.ctrl, 0.3, 0.67)
plot_polygon(br, 'g')
"""
Explanation: Now let us test this function:
End of explanation
"""
from functools import partial
def Omegas(u, k, t):
# compute a list of lists
#an inner list contains the values $\omega_j^r(t)$ values from a step r
#if (len(u)!=2*k+1):
#raise InvalidInputError('the list u must have length 2k+1')
return map(lambda r: map(partial(lambda r, j: (t-u[j])/(u[j+k-r+1]-u[j]), r), \
xrange(r,k+1)), xrange(1, k+1))
"""
Explanation: 3. The function redefineBezier can also be invoked to compute the left and right subpolygons resulting from a subdivision of a Bézier curve
at a point corresponding to the parameter $s$. Namely, the left subpolygon is returned by
redefineBezier(b, 0, s), whereas the right subpolygon by redefineBezier(b, s, 1)
B-spline curves
We give a procedural definition of a B-spline curve of degree $k$, not an analytical one.
The following data:
- an interval $[a,b]$;
- an integer $k\geq 1$;
- a sequence of knots:
$$u_0 = u_1 = \cdots = u_k< u_{k+1} \leq \cdots \leq u_{m-k-1} < u_{m-k} = \cdots = u_{m-1} = u_m$$
with $u_0 = a, u_m = b$, $m-k > k$, and each knot $u_{k+1}, \ldots, u_{m-k-1}$ of multiplicity at most $k$;
- the control points ${\bf d}_0, {\bf d}_1, \ldots, {\bf d}_{m-k-1}$, called de Boor points;
define a curve $s:[a,b]\to\mathbb{R}^2$, such that for each $t$ in an interval $[u_J, u_{J+1}]$, with
$u_J<u_{J+1}$, $s(t)$ is computed in the last step of the Cox-de Boor recursive formula [Gallier]:
$$\begin{array}{lll}
{\bf d}^0_i&=&{\bf d}_i,\quad i=\overline{J-k,J}\\
{\bf d}_i^r(t)&=&(1-\omega_i^r(t))\,{\bf d}_{i-1}^{r-1}(t)+\omega_i^r(t)\,{\bf d}_{i}^{r-1}(t),\,\,
r=\overline{1, k},\ i=\overline{J-k, J}\end{array}$$
with $\omega_i^r(t)=\displaystyle\frac{t-u_{i}}{u_{i+k-r+1}-u_{i}}$
The curve defined in this way is called B-spline curve. It is a piecewise polynomial curve of degree
at most $k$, i.e. it is a polynomial curve on each nondegenerate interval $[u_J, u_{J+1}]$, and at each knot $u_J$ of multiplicity $1\leq p\leq k$, it is of class $C^{k-p}$.
The points computed in the steps $r=\overline{1,k}$ of the recursion can be written in a lower triangular matrix:
$$
\begin{array}{ccccc}
{\bf d}^0_{0}& & & & \\
{\bf d}^0_{1} & {\bf d}_{1}^1& & & \\
\vdots & \vdots & & & \\
{\bf d}^0_{k-1} & {\bf d}_{k-1}^1 &\ldots & {\bf d}_{k-1}^{k-1}& \\
{\bf d}^0_{k}& {\bf d}_{k}^1 &\ldots & {\bf d}_{k}^{k-1} &{\bf d}_{k}^{k}
\end{array}
$$
Unlike the classical de Casteljau or the multi-affine de Casteljau scheme, in the de Boor formula
for each pair of consecutive points ${\bf d}^{r-1}_{i-1}, {\bf d}_i^{r-1}$, one computes a convex combination with the
coefficient $\omega_i^r$ depending both on $r$ and $i$.
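A useful sanity check on the recursion: on the "Bézier-like" knot sequence 0,0,0,1,1 (degree k = 2), it must reproduce the de Casteljau result. A standalone Python 3 sketch (hypothetical names, not the notebook's functions):

```python
def de_boor_span(d, u, k, t):
    # one-span Cox-de Boor evaluation: len(d) == k + 1, len(u) == 2k + 1
    if len(d) == 1:
        return d[0]
    w = [(t - u[j]) / (u[j + k] - u[j]) for j in range(1, k + 1)]
    d = [((1 - w[i]) * p[0] + w[i] * q[0], (1 - w[i]) * p[1] + w[i] * q[1])
         for i, (p, q) in enumerate(zip(d[:-1], d[1:]))]
    return de_boor_span(d, u[1:-1], k - 1, t)

# on Bezier-like knots the B-spline at t = 0.5 is the de Casteljau midpoint
print(de_boor_span([(0, 0), (1, 2), (2, 0)], [0, 0, 0, 1, 1], 2, 0.5))  # -> (1.0, 1.0)
```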
In a first attempt to write a recursive function that implements the Cox-de Boor formula we defined an
auxiliary function Omegas, that returns a list of lists. An inner list contains the elements $\omega^r_i$, involved in a step of the recursive formula.
Using list comprehension the function Omegas returns:
[[(t-u[j])/(u[j+k-r+1]-u[j]) for j in range(r, k+1) ] for r in range(1,k+1)]
Although later we chose another solution, we give this function as an interesting example of FP code (avoiding list comprehension):
End of explanation
"""
k=3
u=[j for j in range(7)]
LL=Omegas( u, k, 3.5)
print LL
"""
Explanation: The function functools.partial associates to a multiargument function
a partial function in the same way as in mathematics: if $f(x_1, x_2, \ldots, x_n)$ is an $n$-variable function, then
$g(x_2, \ldots, x_n)=f(a, x_2, \ldots, x_n)$, with $a$ fixed, is a partial function of it. More details here.
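A minimal illustration:

```python
from functools import partial

def f(x1, x2, x3):
    return x1 + 10 * x2 + 100 * x3

g = partial(f, 5)   # freeze x1 = 5, so g(x2, x3) == f(5, x2, x3)
print(g(2, 3))      # -> 325
```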
In Omegas is involved the partial function of the anonymous function
lambda r, j: (t-u[j])/(u[j+k-r+1]-u[j])
defined by freezing r.
Let us test it:
End of explanation
"""
def omega(u, k, t):
# defines the list of coefficients for the convex combinations performed in a step of de Boor algo
#if (len(u)!=2*k+1 or t<u[k] or t>u[k+1]):
#raise InvalidInputError('the list u has not the length 2k+1 or t isn't within right interval')
return map(lambda j: (t-u[j]) / (u[j+k]-u[j]), xrange(1,k+1))
"""
Explanation: Noticing that in each step $r>1$, the coefficients $\omega^r_i$ are computed with the same formula as for
$r=1$, i.e. $\omega_i^r=\displaystyle\frac{t-u_i}{u_{i+k}-u_i}$, but for the knots in the list
u[r-1: -r+1] and $k=k-r+1$ we
define the function omega below, instead of Omegas:
End of explanation
"""
def cvxList(d, alpha):
#len(d)=len(alpha)+1
return map(cvxP, zip(d[:-1], d[1:], alpha))
"""
Explanation: We also need a new function that calculates the convex combinations of each pair of points in a list,
with distinct coefficients given in a list alpha:
End of explanation
"""
def DeBoor(d, u, k, t):
#len(d) must be (k+1) and len(u)=2*k+1:
# this algorithm evaluates a point c(t) on an arc of B-spline curve
# of degree k, defined by:
# u_0<=u_1<=... u_k<u_{k+1}<=...u_{2k} the sequence of knots
# d_0, d_1, ... d_k de Boor points
if(len(d)==1):
return d[0]
else:
return DeBoor(cvxList(d, omega(u, k,t)), u[1:-1], k-1,t)
"""
Explanation: The recursive Cox-de Boor formula is now implemented in the following way:
End of explanation
"""
def Bspline(d, k=3, N=100):
L=len(d)-1
n=L+k+1
#extend the control polygon
d+=[d[j] for j in range (k+1)]
#make uniform knots
u=np.arange(n+k+1)
# define the period T
T=u[n]-u[k]
#extend the sequence of knots
u[:k]=u[n-k:n]-T
u[n+1:n+k+1]=u[k+1:2*k+1]+T
u=list(u)
curve=[]#the list of points to be computed on the closed B-spline curve
for J in range(k, n+1):
if u[J]<u[J+1]:
t=np.linspace(u[J], u[J+1], N)
else: continue
curve+=[DeBoor(d[J-k:J+1], u[J-k:J+k+1], k, t[j]) for j in range (N) ]
return curve
"""
Explanation: To experiment the interactive generation of B-spline curves we give the restrictions on input data that lead to a closed B-spline curve:
A B-spline curve of degree k, defined on an interval $[a,b]$ by the following data:
de Boor control polygon:
$({\bf d}0, \ldots, {\bf d}{n-k},\ldots,{\bf d}{n-1})$ such that
$$\begin{array}{llllllll}
{\bf d}_0,&{\bf d_1},&\ldots,& {\bf d}{n-k-1}, & {\bf d}{n-k}, & {\bf d}{n-k+1}, & \ldots& {\bf d}{n-1}\
& & & & \shortparallel &\shortparallel & &\shortparallel\
& & & & {\bf d}_0 & {\bf d}_1
&\ldots& {\bf d}{k-1}\end{array}$$
a knot sequence obtained by extending through periodicity of period $T=b-a$ of the sequence of knots $a=u_k\leq\cdots\leq u_n=b$ ($n\geq 2k$):
$$\begin{array}{lll}u_{k-j}&=&u_{n-j}-T\ u_{n+j}&=&u_{k+j}+T\& & j=1,\ldots k\end{array}$$
is a closed B-spline curve.
In our implementation we take a uniform sequence of knots of the interval $[a,b]$
The de Boor points are chosen interactively with the left mouse button click within the plot window.
A point is a tuple of real numbers, and the control polygon is a list of tuples.
The next function constructs the periodic sequence of knots and the de Boor polygon such that to
generate from the interactively chosen control points a closed B-spline curve, of degree $3$, and
of class $C^{2}$ at each knot.
The function's code is imperative:
End of explanation
"""
D=BuildB(func=Bspline)
D.B_ax()
"""
Explanation: We build the control polygon and draw the corresponding closed B-spline curve:
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: After these experiments our conclusion is that you can insert functional programming code in your imperative Python code, but trying to write pure functions could be difficult, sometimes impossible.
Writing FP-style Python code is challenging, and the resulting code is not as easy to read as imperative Python code, which is known for its readability.
The next step in experimenting with FP will be to use a Python implementation of monads (see for example Pymonad or fn.py).
In this IPython Notebook we avoided raising errors and commented out the lines handling them.
The algorithms implemented in this IPython Notebook are presented in detail in:
1. G Farin, Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide, Morgan Kaufmann, 2002.
2. J. Gallier, Curves and Surfaces in Geometric Modeling: Theory and Algorithms, Morgan Kaufmann, 1999. Free electronic version can be downloaded here.
2/10/2015
End of explanation
"""
|
calroc/joypy | docs/Advent of Code 2017 December 1st.ipynb | gpl-3.0 | from notebook_preamble import J, V, define
"""
Explanation: Advent of Code 2017
December 1st
[Given] a sequence of digits (your puzzle input) and find the sum of all digits that match the next digit in the list. The list is circular, so the digit after the last digit is the first digit in the list.
For example:
1122 produces a sum of 3 (1 + 2) because the first digit (1) matches the second digit and the third digit (2) matches the fourth digit.
1111 produces 4 because each digit (all 1) matches the next.
1234 produces 0 because no digit matches the next.
91212129 produces 9 because the only digit that matches the next one is the last digit, 9.
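For reference (this is not part of the Joy derivation that follows), the computation in plain Python:

```python
def captcha_sum(digits):
    # pair each digit with the next one, circularly, and sum the matches
    return sum(d for d, nxt in zip(digits, digits[1:] + digits[:1]) if d == nxt)

assert captcha_sum([1, 1, 2, 2]) == 3
assert captcha_sum([1, 1, 1, 1]) == 4
assert captcha_sum([1, 2, 3, 4]) == 0
assert captcha_sum([9, 1, 2, 1, 2, 1, 2, 9]) == 9
```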
End of explanation
"""
define('pair_up == dup uncons swap unit concat zip')
J('[1 2 3] pair_up')
J('[1 2 2 3] pair_up')
"""
Explanation: I'll assume the input is a Joy sequence of integers (as opposed to a string or something else.)
We might proceed by creating a word that makes a copy of the sequence with the first item moved to the last, and zips it with the original to make a list of pairs, and a another word that adds (one of) each pair to a total if the pair matches.
AoC2017.1 == pair_up total_matches
Let's derive pair_up:
[a b c] pair_up
-------------------------
[[a b] [b c] [c a]]
Straightforward (although the order of each pair is reversed, due to the way zip works, but it doesn't matter for this program):
[a b c] dup
[a b c] [a b c] uncons swap
[a b c] [b c] a unit concat
[a b c] [b c a] zip
[[b a] [c b] [a c]]
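The Python analogue of this pairing is a zip of the sequence with its rotation:

```python
s = [1, 2, 3]
pairs = list(zip(s[1:] + s[:1], s))  # [(b, a), (c, b), (a, c)] for s = [a, b, c]
print(pairs)  # -> [(2, 1), (3, 2), (1, 3)]
```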
End of explanation
"""
define('total_matches == 0 swap [i [=] [pop +] [popop] ifte] step')
J('[1 2 3] pair_up total_matches')
J('[1 2 2 3] pair_up total_matches')
"""
Explanation: Now we need to derive total_matches. It will be a step function:
total_matches == 0 swap [F] step
Where F will have the pair to work with, and it will basically be a branch or ifte.
total [n m] F
It will probably be easier to write if we dequote the pair:
total [n m] i F′
----------------------
total n m F′
Now F′ becomes just:
total n m [=] [pop +] [popop] ifte
So:
F == i [=] [pop +] [popop] ifte
And thus:
total_matches == 0 swap [i [=] [pop +] [popop] ifte] step
End of explanation
"""
define('AoC2017.1 == pair_up total_matches')
J('[1 1 2 2] AoC2017.1')
J('[1 1 1 1] AoC2017.1')
J('[1 2 3 4] AoC2017.1')
J('[9 1 2 1 2 1 2 9] AoC2017.1')
"""
Explanation: Now we can define our main program and evaluate it on the examples.
End of explanation
"""
J('[1 2 3 4] dup size 2 / [drop] [take reverse] cleave concat zip')
"""
Explanation: pair_up == dup uncons swap unit concat zip
total_matches == 0 swap [i [=] [pop +] [popop] ifte] step
AoC2017.1 == pair_up total_matches
Now the paired digit is "halfway" round.
[a b c d] dup size 2 / [drop] [take reverse] cleave concat zip
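For reference, the halfway-around variant in plain Python (it sums the matches directly rather than building pairs; names are ours):

```python
def captcha_halfway(digits):
    # compare each digit with the one halfway around the circular list
    h = len(digits) // 2
    return sum(d for i, d in enumerate(digits)
               if d == digits[(i + h) % len(digits)])

assert captcha_halfway([1, 2, 1, 2]) == 6
assert captcha_halfway([1, 2, 2, 1]) == 0
assert captcha_halfway([1, 2, 3, 4, 2, 5]) == 4
```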
End of explanation
"""
J('[1 2 3 4] dup size 2 / [drop] [take reverse] cleave zip')
define('AoC2017.1.extra == dup size 2 / [drop] [take reverse] cleave zip swap pop total_matches 2 *')
J('[1 2 1 2] AoC2017.1.extra')
J('[1 2 2 1] AoC2017.1.extra')
J('[1 2 3 4 2 5] AoC2017.1.extra')
"""
Explanation: I realized that each pair is repeated...
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.2/tutorials/pitch_yaw.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: Misalignment (Pitch & Yaw)
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['teffs'])
"""
Explanation: Now let's add a mesh dataset at a few different times so that we can see how the misalignment affects the surfaces of the stars.
End of explanation
"""
print(b['pitch@component'])
print(b['incl@constraint'])
"""
Explanation: Relevant Parameters
The 'pitch' parameter defines the misalignment of a given star in the same direction as the inclination. We can see how it is defined relative to the inclination by accessing the constraint.
Note that, by default, it is the inclination of the component that is constrained with the inclination of the orbit and the pitch as free parameters.
End of explanation
"""
print(b['yaw@component'])
print(b['long_an@constraint'])
"""
Explanation: Similarly, the 'yaw' parameter defines the misalignment in the direction of the longitude of the ascending node.
Note that, by default, it is the long_an of the component that is constrained with the long_an of the orbit and the yaw as free parameters.
End of explanation
"""
print(b['long_an@primary@component'].description)
"""
Explanation: The long_an of a star is a bit of an odd concept, and really is just meant to be analogous to the inclination case. In reality, it is the angle of the "equator" of the star on the sky.
End of explanation
"""
b['syncpar@secondary'] = 5.0
b['pitch@secondary'] = 0
b['yaw@secondary'] = 0
b.run_compute(irrad_method='none')
"""
Explanation: Note also that the system is aligned by default, with the pitch and yaw both set to zero.
Misaligned Systems
To create a misaligned system, we must set the pitch and/or yaw to be non-zero.
But first let's create an aligned system for comparison. In order to easily see the spin-axis, we'll plot the effective temperature and spin-up our star to exaggerate the effect.
End of explanation
"""
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='us', y='vs', show=True)
"""
Explanation: We'll plot the mesh as it would be seen on the plane of the sky.
End of explanation
"""
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='ws', y='vs', show=True)
"""
Explanation: and also with the line-of-sight along the x-axis.
End of explanation
"""
b['pitch@secondary'] = 30
b['yaw@secondary'] = 0
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='us', y='vs', show=True)
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='ws', y='vs', show=True)
"""
Explanation: If we set the pitch to be non-zero, we'd expect to see a change in the spin axis along the line-of-sight.
End of explanation
"""
b['pitch@secondary@component'] = 0
b['yaw@secondary@component'] = 30
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='us', y='vs', show=True)
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='ws', y='vs', show=True)
"""
Explanation: And if we set the yaw to be non-zero, we'll see the rotation axis rotate on the plane of the sky.
End of explanation
"""
|
tylere/docker-tmpnb-ee | notebooks/1 - IPython Notebook Examples/IPython Project Examples/IPython Kernel/Capturing Output.ipynb | apache-2.0 | from __future__ import print_function
import sys
"""
Explanation: Capturing Output With <tt>%%capture</tt>
IPython has a cell magic, %%capture, which captures the stdout/stderr of a cell. With this magic you can discard these streams or store them in a variable.
End of explanation
"""
%%capture
print('hi, stdout')
print('hi, stderr', file=sys.stderr)
"""
Explanation: By default, %%capture discards these streams. This is a simple way to suppress unwanted output.
End of explanation
"""
%%capture captured
print('hi, stdout')
print('hi, stderr', file=sys.stderr)
captured
"""
Explanation: If you specify a name, then stdout/stderr will be stored in an object in your namespace.
End of explanation
"""
captured()
captured.stdout
captured.stderr
"""
Explanation: Calling the object writes the output to stdout/stderr as appropriate.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
%%capture wontshutup
print("setting up X")
x = np.linspace(0,5,1000)
print("step 2: constructing y-data")
y = np.sin(x)
print("step 3: display info about y")
plt.plot(x,y)
print("okay, I'm done now")
wontshutup()
"""
Explanation: %%capture grabs all output types, not just stdout/stderr, so you can do plots and use IPython's display system inside %%capture
End of explanation
"""
%%capture cap --no-stderr
print('hi, stdout')
print("hello, stderr", file=sys.stderr)
cap.stdout
cap.stderr
cap.outputs
"""
Explanation: And you can selectively disable capturing stdout, stderr or rich display, by passing --no-stdout, --no-stderr and --no-display
End of explanation
"""
|
nathanielng/machine-learning | perceptron/digit-recognition.ipynb | apache-2.0 | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from itertools import product
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, cross_val_predict, ShuffleSplit, KFold
from tqdm import tqdm
from IPython.display import display, Math, Latex, HTML
%matplotlib inline
np.set_printoptions(precision=4,threshold=200)
tqdm_bar_fmt='{percentage:3.0f}%|{bar}|'
"""
Explanation: Digit Recognition
1. Introduction
In this analysis, the handwritten digits are identified using support vector machines and radial basis functions.
1.1 Libraries
The essential libraries used here are numpy, matplotlib, and scikit-learn. For convenience, pandas and IPython.display are used for displaying tables, and tqdm is used for progress bars.
End of explanation
"""
def download_data():
train_url = "http://www.amlbook.com/data/zip/features.train"
test_url = "http://www.amlbook.com/data/zip/features.test"
column_names = ['digit','intensity','symmetry']
train = pd.read_table(train_url,names=column_names,header=None,delim_whitespace=True)
test = pd.read_table(test_url,names=column_names,header=None,delim_whitespace=True)
train.digit = train.digit.astype(int)
test.digit = test.digit.astype(int)
return train,test
def process_data(train,test):
X_train = train.iloc[:,1:].values
y_train = train.iloc[:,0].values
X_test = test.iloc[:,1:].values
y_test = test.iloc[:,0].values
return X_train,y_train,X_test,y_test
train,test = download_data()
X_train,y_train,X_test,y_test = process_data(train,test)
"""
Explanation: 1.2 Dataset
The US Postal Service Zip Code dataset is used, which contains handwritten digits zero to nine. The data has been preprocessed, whereby features of intensity and symmetry are extracted.
End of explanation
"""
def get_misclassification_ovr(X_train,y_train,X_test,y_test,digit,
Q=2,r=1.0,C=0.01,kernel='poly',verbose=False):
clf = SVC(C=C, kernel=kernel, degree=Q, coef0 = r, gamma = 1.0,
decision_function_shape='ovr', verbose=False)
y_in = (y_train==digit).astype(int)
y_out = (y_test==digit).astype(int)
model = clf.fit(X_train,y_in) # print(model)
E_in = np.mean(y_in != clf.predict(X_train))
E_out = np.mean(y_out != clf.predict(X_test))
n_support_vectors = len(clf.support_vectors_)
if verbose is True:
print()
print("Q = {}, C = {}: Support vectors: {}".format(Q, C, n_support_vectors))
print("{} vs all: E_in = {}".format(digit,E_in))
print("{} vs all: E_out = {}".format(digit,E_out))
return E_in,E_out,n_support_vectors
"""
Explanation: 2. Support Vector Machines for Digit Recognition
2.1 Polynomial Kernels
We wish to implement the following polynomial kernel for our support vector machine:
$$K\left(\mathbf{x_n,x_m}\right) = \left(1+\mathbf{x_n^Tx_m}\right)^Q$$
This is implemented in scikit-learn in the subroutine sklearn.svm.SVC, where the kernel function takes the form:
$$\left(\gamma \langle x,x' \rangle + r\right)^d$$
where $d$ is specified by the keyword degree, and $r$ by coef0.
2.1.1 One vs Rest Classification
In the following subroutine, the data is split into "one-vs-rest", where $y=1$ corresponds to a match to the digit, and $y=0$ corresponds to all the other digits. The training step is implemented in the call to clf.fit().
End of explanation
"""
results = pd.DataFrame()
i=0
for digit in tqdm(range(10),bar_format=tqdm_bar_fmt):
ei, eo, n = get_misclassification_ovr(X_train,y_train,X_test,y_test,digit)
df = pd.DataFrame({'digit': digit, 'E_in': ei, 'E_out': eo, 'n': n}, index=[i])
results = results.append(df)
i += 1
display(HTML(results[['digit','E_in','E_out','n']].iloc[::2].to_html(index=False)))
display(HTML(results[['digit','E_in','E_out','n']].iloc[1::2].to_html(index=False)))
from tabulate import tabulate
print(tabulate(results, headers='keys', tablefmt='simple'))
"""
Explanation: The following code trains on the data for the cases: 0 vs all, 1 vs all, ..., 9 vs all. For each of the digits, 0 to 9, the errors $E_{in}, E_{out}$ and the number of support vectors are recorded and stored in a pandas dataframe.
End of explanation
"""
def get_misclassification_ovo(X_train,y_train,X_test,y_test,digit1,digit2,
Q=2,r=1.0,C=0.01,kernel='poly'):
clf = SVC(C=C, kernel=kernel, degree=Q, coef0 = r, gamma = 1.0,
decision_function_shape='ovo', verbose=False)
select_in = np.logical_or(y_train==digit1,y_train==digit2)
y_in = (y_train[select_in]==digit1).astype(int)
X_in = X_train[select_in]
select_out = np.logical_or(y_test==digit1,y_test==digit2)
y_out = (y_test[select_out]==digit1).astype(int)
X_out = X_test[select_out]
model = clf.fit(X_in,y_in)
E_in = np.mean(y_in != clf.predict(X_in))
E_out = np.mean(y_out != clf.predict(X_out))
n_support_vectors = len(clf.support_vectors_)
return E_in,E_out,n_support_vectors
"""
Explanation: 2.1.2 One vs One Classification
One vs one classification makes better use of the data, but is more computationally expensive. The following subroutine splits the data so that $y=1$ for the first digit, and $y=0$ for the second digit. The rows of data corresponding to all other digits are removed.
End of explanation
"""
C_arr = [0.0001, 0.001, 0.01, 1]
Q_arr = [2, 5]
CQ_arr = list(product(C_arr,Q_arr))
results = pd.DataFrame()
i=0
for C, Q in tqdm(CQ_arr,bar_format=tqdm_bar_fmt):
ei, eo, n = get_misclassification_ovo(X_train,y_train,X_test,y_test,
digit1=1,digit2=5,Q=Q,r=1.0,C=C)
df = pd.DataFrame({'C': C, 'Q': Q, 'E_in': ei, 'E_out': eo, 'n': n}, index=[i])
results = results.append(df)
i += 1
display(HTML(results[['C','Q','E_in','E_out','n']].to_html(index=False)))
"""
Explanation: In the following code, a 1-vs-5 classifier is tested for $Q=2,5$ and $C=0.001,0.01,0.1,1$.
End of explanation
"""
def select_digits_ovo(X,y,digit1,digit2):
subset = np.logical_or(y==digit1,y==digit2)
X_in = X[subset].copy()
y_in = (y[subset]==digit1).astype(int).copy()
return X_in,y_in
def get_misclassification_ovo_cv(X_in,y_in,Q=2,r=1.0,C=0.01,kernel='poly'):
kf = KFold(n_splits=10, shuffle=True)
kf.get_n_splits(X_in)
E_cv = []
for train_index, test_index in kf.split(X_in):
X_trn, X_tst = X_in[train_index], X_in[test_index]
y_trn, y_tst = y_in[train_index], y_in[test_index]
clf = SVC(C=C, kernel=kernel, degree=Q, coef0 = r, gamma = 1.0,
decision_function_shape='ovo', verbose=False)
model = clf.fit(X_trn,y_trn)
E_cv.append(np.mean(y_tst != clf.predict(X_tst)))
return E_cv
"""
Explanation: 2.1.3 Polynomial Kernel with Cross-Validation
For k-fold cross-validation, the subroutine sklearn.model_selection.KFold() is employed to split the data into folds. Random shuffling is enabled with the shuffle=True option.
End of explanation
"""
C_arr = [0.0001, 0.001, 0.01, 0.1, 1]; d1 = 1; d2 = 5
Q=2
X_in,y_in = select_digits_ovo(X_train,y_train,d1,d2)
"""
Explanation: In this example, our parameters are $Q=2, C \in {0.0001,0.001,0.01,0.1,1}$ and we are considering the 1-vs-5 classifier.
End of explanation
"""
E_cv_arr = []
count_arr = []
for n in tqdm(range(100),bar_format=tqdm_bar_fmt):
counts = np.zeros(len(C_arr),int)
kf = KFold(n_splits=10, shuffle=True)
kf.get_n_splits(X_in)
for train_index, test_index in kf.split(X_in):
X_trn, X_tst = X_in[train_index], X_in[test_index]
y_trn, y_tst = y_in[train_index], y_in[test_index]
E_cv = []
for C in C_arr:
clf = SVC(C=C, kernel='poly', degree=Q, coef0=1.0, gamma=1.0,
decision_function_shape='ovo', verbose=False)
model = clf.fit(X_trn,y_trn)
E_cv.append(np.mean(y_tst != clf.predict(X_tst)))
counts[np.argmin(E_cv)] += 1
E_cv_arr.append(np.array(E_cv))
count_arr.append(counts)
"""
Explanation: Due to the effect of random shuffling, 100 runs are carried out, and the results tabulated.
End of explanation
"""
df = pd.DataFrame({'C': C_arr})
df['count'] = np.sum(np.array(count_arr),axis=0)
df['E_cv'] = np.mean(np.array(E_cv_arr),axis=0)
display(HTML(df.to_html(index=False)))
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
_ = df.plot(y=['count'],ax=ax1,marker='x',color='red',markersize=10,grid=True)
ax2 = fig.add_subplot(122)
_ = df.plot(y=['E_cv'],ax=ax2,marker='o',color='green',markersize=7,grid=True)
"""
Explanation: The number of times each particular value of $C$ is picked (for having the smallest $E_{cv}$) is calculated, as well as the mean cross validation error. The data is tabulated as follows:
End of explanation
"""
from sklearn.datasets import fetch_mldata
from sklearn import svm
import timeit
digits = fetch_mldata('MNIST original') # stores data in ~/scikit_learn_data by default
n_max = len(digits.target)
selection = np.random.permutation(n_max)
n_arr = [500, 1000, 1500, 2000, 5000, 10000]
t_arr = []
for n in n_arr:
sel = selection[:n]
X = digits.data[sel]
y = digits.target[sel]
clf = svm.SVC(gamma=0.001, C=100.)
t0 = timeit.default_timer()
clf.fit(X,y)
t_arr.append(timeit.default_timer() - t0)
print("n = {}, time = {}".format(n, t_arr[-1]))
plt.plot(np.array(n_arr),np.array(t_arr),'bx-')
plt.xlabel('no. of samples')
plt.ylabel('time (s)')
plt.grid()
"""
Explanation: 2.2 Scaling Law for Support Vector Machines
This is based on scikit-learn example code, but modified to demonstrate how the training time scales with the size of the data. The dataset in this example is the original MNIST data.
End of explanation
"""
def get_misclassification_rbf(X_train,y_train,X_test,y_test,C,digit1=1,digit2=5):
clf = SVC(C=C, kernel='rbf', gamma = 1.0,
decision_function_shape='ovo', verbose=False)
select_in = np.logical_or(y_train==digit1,y_train==digit2)
y_in = (y_train[select_in]==digit1).astype(float)
X_in = X_train[select_in]
select_out = np.logical_or(y_test==digit1,y_test==digit2)
y_out = (y_test[select_out]==digit1).astype(float)
X_out = X_test[select_out]
model = clf.fit(X_in,y_in)
E_in = np.mean(y_in != clf.predict(X_in))
E_out = np.mean(y_out != clf.predict(X_out))
n_support_vectors = len(clf.support_vectors_)
return E_in,E_out,n_support_vectors
"""
Explanation: 3. Radial Basis Functions
3.1 Background
The hypothesis is given by:
$$h\left(\mathbf{x}\right) = \sum\limits_{k=1}^K w_k \exp\left(-\gamma \Vert \mathbf{x} - \mu_k \Vert^2\right)$$
This is implemented in the subroutine sklearn.svm.SVC(..., kernel='rbf', ...) as shown in the code below.
End of explanation
"""
C_arr = [0.01, 1., 100., 1e4, 1e6]
results = pd.DataFrame()
i=0
for C in tqdm(C_arr,bar_format=tqdm_bar_fmt):
ei, eo, n = get_misclassification_rbf(X_train,y_train,X_test,y_test,
C,digit1=1,digit2=5)
df = pd.DataFrame({'C': C, 'E_in': ei, 'E_out': eo, 'n': n}, index=[i])
results = results.append(df)
i += 1
display(HTML(results.to_html(index=False)))
"""
Explanation: For $C \in {0.01, 1, 100, 10^4, 10^6}$, the in-sample and out-of-sample errors, as well as the number of support vectors are tabulated as follows:
End of explanation
"""
def plot_Ein_Eout(df):
fig = plt.figure(figsize=(12,5))
ax1 = fig.add_subplot(121)
df.plot(ax=ax1, x='C', y='E_in', kind='line', marker='o', markersize=7, logx=True)
df.plot(ax=ax1, x='C', y='E_out', kind='line', marker='x', markersize=7, logx=True)
ax1.legend(loc='best',frameon=False)
ax1.grid(True)
ax1.set_title('In-Sample vs Out-of-Sample Errors')
ax2 = fig.add_subplot(122)
df.plot(ax=ax2, x='C', y='n', kind='line', marker='+', markersize=10, logx=True)
ax2.legend(loc='best',frameon=False)
ax2.grid(True)
ax2.set_title('Number of Support Vectors')
plot_Ein_Eout(results)
"""
Explanation: The table is also plotted graphically below:
End of explanation
"""
|
rsterbentz/phys202-2015-work | assignments/assignment05/InteractEx04.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 4
Imports
End of explanation
"""
def random_line(m, b, sigma, size=10):
"""Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
"""
x = np.linspace(-1.0,1.0,size)
    # Draw y-direction Gaussian noise N(0, sigma**2); handle sigma=0
    # separately so the noiseless line is exact.
    if sigma == 0:
        noise = np.zeros(size)
    else:
        noise = np.random.normal(0.0, sigma, size)
    y = m*x + b + noise
return x, y
random_line(0.0,0.0,1.0,500)
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
"""
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
"""
def ticks_out(ax):
"""Move the ticks to the outside of the box."""
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
"""Plot a random line with slope m, intercept b and size points."""
x, y = random_line(m, b, sigma, size)
plt.scatter(x,y,color=color)
    plt.xlim(-1.1, 1.1)
    plt.ylim(-10.0, 10.0)
plt.tick_params(direction='out', width=1, which='both')
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
"""
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
"""
interact(plot_random_line, m=(-10.0,10.0,0.1), b=(-5.0,5.0,0.1), sigma=(0.0,5.0,0.01), size=(10,100,10), color=['red','green','blue'])
assert True # use this cell to grade the plot_random_line interact
"""
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/26b665b81df8d69d2764306260a9ffa9/cluster_stats_evoked.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.stats import permutation_cluster_test
from mne.datasets import sample
print(__doc__)
"""
Explanation: Permutation F-test on sensor data with 1D cluster level
This tests whether the evoked response is significantly different
between conditions. The multiple comparison problem is addressed
with a cluster-level permutation test.
End of explanation
"""
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
channel = 'MEG 1332' # include only this channel in analysis
include = [channel]
"""
Explanation: Set parameters
End of explanation
"""
picks = mne.pick_types(raw.info, meg=False, eog=True, include=include,
exclude='bads')
event_id = 1
reject = dict(grad=4000e-13, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition1 = epochs1.get_data() # as 3D matrix
event_id = 2
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition2 = epochs2.get_data() # as 3D matrix
condition1 = condition1[:, 0, :] # take only one channel to get a 2D array
condition2 = condition2[:, 0, :] # take only one channel to get a 2D array
"""
Explanation: Read epochs for the channel of interest
End of explanation
"""
threshold = 6.0
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_test([condition1, condition2], n_permutations=1000,
threshold=threshold, tail=1, n_jobs=1,
out_type='mask')
"""
Explanation: Compute statistic
End of explanation
"""
times = epochs1.times
fig, (ax, ax2) = plt.subplots(2, 1, figsize=(8, 4))
ax.set_title('Channel : ' + channel)
ax.plot(times, condition1.mean(axis=0) - condition2.mean(axis=0),
label="ERF Contrast (Event 1 - Event 2)")
ax.set_ylabel("MEG (T / m)")
ax.legend()
for i_c, c in enumerate(clusters):
c = c[0]
if cluster_p_values[i_c] <= 0.05:
h = ax2.axvspan(times[c.start], times[c.stop - 1],
color='r', alpha=0.3)
else:
ax2.axvspan(times[c.start], times[c.stop - 1], color=(0.3, 0.3, 0.3),
alpha=0.3)
hf = plt.plot(times, T_obs, 'g')
ax2.legend((h, ), ('cluster p-value < 0.05', ))
ax2.set_xlabel("time (ms)")
ax2.set_ylabel("f-values")
"""
Explanation: Plot
End of explanation
"""
|
AllenDowney/ThinkBayes2 | notebooks/chap11.ipynb | mit | # If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
"""
Explanation: Comparison
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
"""
x = [1, 3, 5]
y = [2, 4]
"""
Explanation: This chapter introduces joint distributions, which are an essential tool for working with distributions of more than one variable.
We'll use them to solve a silly problem on our way to solving a real problem.
The silly problem is figuring out how tall two people are, given only that one is taller than the other.
The real problem is rating chess players (or participants in other kinds of competition) based on the outcome of a game.
To construct joint distributions and compute likelihoods for these problems, we will use outer products and similar operations. And that's where we'll start.
Outer Operations
Many useful operations can be expressed as the "outer product" of two sequences, or another kind of "outer" operation.
Suppose you have sequences like x and y:
End of explanation
"""
import numpy as np
X, Y = np.meshgrid(x, y)
"""
Explanation: The outer product of these sequences is an array that contains the product of every pair of values, one from each sequence.
There are several ways to compute outer products, but the one I think is the most versatile is a "mesh grid".
NumPy provides a function called meshgrid that computes a mesh grid. If we give it two sequences, it returns two arrays.
End of explanation
"""
X
"""
Explanation: The first array contains copies of x arranged in rows, where the number of rows is the length of y.
End of explanation
"""
Y
"""
Explanation: The second array contains copies of y arranged in columns, where the number of columns is the length of x.
End of explanation
"""
X * Y
"""
Explanation: Because the two arrays are the same size, we can use them as operands for arithmetic functions like multiplication.
End of explanation
"""
import pandas as pd
df = pd.DataFrame(X * Y, columns=x, index=y)
df
"""
Explanation: This result is the outer product of x and y.
We can see that more clearly if we put it in a DataFrame:
End of explanation
"""
X + Y
"""
Explanation: The values from x appear as column names; the values from y appear as row labels.
Each element is the product of a value from x and a value from y.
We can use mesh grids to compute other operations, like the outer sum, which is an array that contains the sum of elements from x and elements from y.
End of explanation
"""
X > Y
"""
Explanation: We can also use comparison operators to compare elements from x with elements from y.
End of explanation
"""
mean = 178
qs = np.arange(mean-24, mean+24, 0.5)
"""
Explanation: The result is an array of Boolean values.
It might not be obvious yet why these operations are useful, but we'll see examples soon.
With that, we are ready to take on a new Bayesian problem.
How Tall Is A?
Suppose I choose two people from the population of adult males in the U.S.; I'll call them A and B. If we see that A is taller than B, how tall is A?
To answer this question:
I'll use background information about the height of men in the U.S. to form a prior distribution of height,
I'll construct a joint prior distribution of height for A and B (and I'll explain what that is),
Then I'll update the prior with the information that A is taller, and
From the joint posterior distribution I'll extract the posterior distribution of height for A.
In the U.S. the average height of male adults is 178 cm and the standard deviation is 7.7 cm. The distribution is not exactly normal, because nothing in the real world is, but the normal distribution is a pretty good model of the actual distribution, so we can use it as a prior distribution for A and B.
Here's an array of equally-spaced values from 3 standard deviations below the mean to 3 standard deviations above (rounded up a little).
End of explanation
"""
from scipy.stats import norm
std = 7.7
ps = norm(mean, std).pdf(qs)
"""
Explanation: SciPy provides a function called norm that represents a normal distribution with a given mean and standard deviation, and provides pdf, which evaluates the probability density function (PDF) of the normal distribution:
End of explanation
"""
from empiricaldist import Pmf
prior = Pmf(ps, qs)
prior.normalize()
"""
Explanation: Probability densities are not probabilities, but if we put them in a Pmf and normalize it, the result is a discrete approximation of the normal distribution.
End of explanation
"""
from utils import decorate
prior.plot(style='--', color='C5')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Approximate distribution of height for men in U.S.')
"""
Explanation: Here's what it looks like.
End of explanation
"""
def make_joint(pmf1, pmf2):
"""Compute the outer product of two Pmfs."""
X, Y = np.meshgrid(pmf1, pmf2)
return pd.DataFrame(X * Y, columns=pmf1.qs, index=pmf2.qs)
"""
Explanation: This distribution represents what we believe about the heights of A and B before we take into account the data that A is taller.
Joint Distribution
The next step is to construct a distribution that represents the probability of every pair of heights, which is called a joint distribution. The elements of the joint distribution are
$$P(A_x~\mathrm{and}~B_y)$$
which is the probability that A is $x$ cm tall and B is $y$ cm tall, for all values of $x$ and $y$.
At this point all we know about A and B is that they are male residents of the U.S., so their heights are independent; that is, knowing the height of A provides no additional information about the height of B.
In that case, we can compute the joint probabilities like this:
$$P(A_x~\mathrm{and}~B_y) = P(A_x)~P(B_y)$$
Each joint probability is the product of one element from the distribution of x and one element from the distribution of y.
So if we have Pmf objects that represent the distribution of height for A and B, we can compute the joint distribution by computing the outer product of the probabilities in each Pmf.
The following function takes two Pmf objects and returns a DataFrame that represents the joint distribution.
End of explanation
"""
joint = make_joint(prior, prior)
joint.shape
"""
Explanation: The column names in the result are the quantities from pmf1; the row labels are the quantities from pmf2.
In this example, the prior distributions for A and B are the same, so we can compute the joint prior distribution like this:
End of explanation
"""
joint.to_numpy().sum()
"""
Explanation: The result is a DataFrame with possible heights of A along the columns, heights of B along the rows, and the joint probabilities as elements.
If the prior is normalized, the joint prior is also normalized.
End of explanation
"""
series = joint.sum()
series.shape
"""
Explanation: To add up all of the elements, we convert the DataFrame to a NumPy array before calling sum. Otherwise, DataFrame.sum would compute the sums of the columns and return a Series.
End of explanation
"""
import matplotlib.pyplot as plt
def plot_joint(joint, cmap='Blues'):
"""Plot a joint distribution with a color mesh."""
vmax = joint.to_numpy().max() * 1.1
plt.pcolormesh(joint.columns, joint.index, joint,
cmap=cmap,
vmax=vmax,
shading='nearest')
plt.colorbar()
decorate(xlabel='A height in cm',
ylabel='B height in cm')
"""
Explanation: Visualizing the Joint Distribution
The following function uses pcolormesh to plot the joint distribution.
End of explanation
"""
plot_joint(joint)
decorate(title='Joint prior distribution of height for A and B')
"""
Explanation: Here's what the joint prior distribution looks like.
End of explanation
"""
def plot_contour(joint):
"""Plot a joint distribution with a contour."""
plt.contour(joint.columns, joint.index, joint,
linewidths=2)
decorate(xlabel='A height in cm',
ylabel='B height in cm')
plot_contour(joint)
decorate(title='Joint prior distribution of height for A and B')
"""
Explanation: As you might expect, the probability is highest (darkest) near the mean and drops off farther from the mean.
Another way to visualize the joint distribution is a contour plot.
End of explanation
"""
x = joint.columns
y = joint.index
"""
Explanation: Each line represents a level of equal probability.
Likelihood
Now that we have a joint prior distribution, we can update it with the data, which is that A is taller than B.
Each element in the joint distribution represents a hypothesis about the heights of A and B.
To compute the likelihood of every pair of quantities, we can extract the column names and row labels from the prior, like this:
End of explanation
"""
X, Y = np.meshgrid(x, y)
"""
Explanation: And use them to compute a mesh grid.
End of explanation
"""
A_taller = (X > Y)
A_taller.dtype
"""
Explanation: X contains copies of the quantities in x, which are possible heights for A. Y contains copies of the quantities in y, which are possible heights for B.
If we compare X and Y, the result is a Boolean array:
End of explanation
"""
a = np.where(A_taller, 1, 0)
"""
Explanation: To compute likelihoods, I'll use np.where to make an array with 1 where A_taller is True and 0 elsewhere.
End of explanation
"""
likelihood = pd.DataFrame(a, index=y, columns=x)
"""
Explanation: To visualize this array of likelihoods, I'll put in a DataFrame with the values of x as column names and the values of y as row labels.
End of explanation
"""
plot_joint(likelihood, cmap='Oranges')
decorate(title='Likelihood of A>B')
"""
Explanation: Here's what it looks like:
End of explanation
"""
posterior = joint * likelihood
"""
Explanation: The likelihood of the data is 1 where X > Y and 0 elsewhere.
The Update
We have a prior, we have a likelihood, and we are ready for the update. As usual, the unnormalized posterior is the product of the prior and the likelihood.
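The product-then-normalize step can be seen in isolation on a made-up 2x2 joint distribution (toy numbers, not the heights grid):

```python
import pandas as pd

# Uniform toy prior over four hypotheses; the likelihood rules one of them out.
prior = pd.DataFrame([[0.25, 0.25],
                      [0.25, 0.25]])
likelihood = pd.DataFrame([[1, 0],
                           [1, 1]])

posterior = prior * likelihood            # unnormalized posterior
posterior /= posterior.to_numpy().sum()   # normalize so it sums to 1

print(posterior.to_numpy().sum())         # 1 (up to rounding)
```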
End of explanation
"""
def normalize(joint):
"""Normalize a joint distribution."""
prob_data = joint.to_numpy().sum()
joint /= prob_data
return prob_data
normalize(posterior)
"""
Explanation: I'll use the following function to normalize the posterior:
End of explanation
"""
plot_joint(posterior)
decorate(title='Joint posterior distribution of height for A and B')
"""
Explanation: And here's what it looks like.
End of explanation
"""
column = posterior[180]
column.head()
"""
Explanation: All pairs where B is taller than A have been eliminated. The rest of the posterior looks the same as the prior, except that it has been renormalized.
Marginal Distributions
The joint posterior distribution represents what we believe about the heights of A and B given the prior distributions and the information that A is taller.
From this joint distribution, we can compute the posterior distributions for A and B. To see how, let's start with a simpler problem.
Suppose we want to know the probability that A is 180 cm tall. We can select the column from the joint distribution where x=180.
End of explanation
"""
column.sum()
"""
Explanation: This column contains posterior probabilities for all cases where x=180; if we add them up, we get the total probability that A is 180 cm tall.
End of explanation
"""
column_sums = posterior.sum(axis=0)
column_sums.head()
"""
Explanation: It's about 3%.
Now, to get the posterior distribution of height for A, we can add up all of the columns, like this:
End of explanation
"""
marginal_A = Pmf(column_sums)
"""
Explanation: The argument axis=0 means we want to add up the columns.
The result is a Series that contains every possible height for A and its probability. In other words, it is the distribution of heights for A.
We can put it in a Pmf like this:
End of explanation
"""
marginal_A.plot(label='Posterior for A')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Posterior distribution for A')
"""
Explanation: When we extract the distribution of a single variable from a joint distribution, the result is called a marginal distribution.
The name comes from a common visualization that shows the joint distribution in the middle and the marginal distributions in the margins.
Here's what the marginal distribution for A looks like:
End of explanation
"""
row_sums = posterior.sum(axis=1)
marginal_B = Pmf(row_sums)
"""
Explanation: Similarly, we can get the posterior distribution of height for B by adding up the rows and putting the result in a Pmf.
End of explanation
"""
marginal_B.plot(label='Posterior for B', color='C1')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Posterior distribution for B')
"""
Explanation: Here's what it looks like.
End of explanation
"""
def marginal(joint, axis):
"""Compute a marginal distribution."""
return Pmf(joint.sum(axis=axis))
"""
Explanation: Let's put the code from this section in a function:
End of explanation
"""
marginal_A = marginal(posterior, axis=0)
marginal_B = marginal(posterior, axis=1)
"""
Explanation: marginal takes as parameters a joint distribution and an axis number:
If axis=0, it returns the marginal of the first variable (the one on the x-axis);
If axis=1, it returns the marginal of the second variable (the one on the y-axis).
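The axis convention is easy to get backwards, so here is a toy joint distribution (made-up probabilities) pinning it down:

```python
import pandas as pd

# Rows are y values, columns are x values; the entries sum to 1.
joint = pd.DataFrame([[0.1, 0.2, 0.1],
                      [0.2, 0.3, 0.1]],
                     index=['y1', 'y2'], columns=['x1', 'x2', 'x3'])

marginal_x = joint.sum(axis=0)   # adds up the rows: one probability per column
marginal_y = joint.sum(axis=1)   # adds up the columns: one probability per row

print(marginal_x['x2'])   # probability assigned to x2
print(marginal_y['y2'])   # probability assigned to y2
```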
So we can compute both marginals like this:
End of explanation
"""
prior.plot(style='--', label='Prior', color='C5')
marginal_A.plot(label='Posterior for A')
marginal_B.plot(label='Posterior for B')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Prior and posterior distributions for A and B')
"""
Explanation: Here's what they look like, along with the prior.
End of explanation
"""
prior.mean()
print(marginal_A.mean(), marginal_B.mean())
"""
Explanation: As you might expect, the posterior distribution for A is shifted to the right and the posterior distribution for B is shifted to the left.
We can summarize the results by computing the posterior means:
End of explanation
"""
prior.std()
print(marginal_A.std(), marginal_B.std())
"""
Explanation: Based on the observation that A is taller than B, we are inclined to believe that A is a little taller than average, and B is a little shorter.
Notice that the posterior distributions are a little narrower than the prior. We can quantify that by computing their standard deviations.
End of explanation
"""
column_170 = posterior[170]
"""
Explanation: The standard deviations of the posterior distributions are a little smaller, which means we are more certain about the heights of A and B after we compare them.
Conditional Posteriors
Now suppose we measure A and find that he is 170 cm tall. What does that tell us about B?
In the joint distribution, each column corresponds to a possible height for A. We can select the column that corresponds to height 170 cm like this:
End of explanation
"""
cond_B = Pmf(column_170)
cond_B.normalize()
"""
Explanation: The result is a Series that represents possible heights for B and their relative likelihoods.
These likelihoods are not normalized, but we can normalize them like this:
End of explanation
"""
prior.plot(style='--', label='Prior', color='C5')
marginal_B.plot(label='Posterior for B', color='C1')
cond_B.plot(label='Conditional posterior for B',
color='C4')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Prior, posterior and conditional distribution for B')
"""
Explanation: Making a Pmf copies the data by default, so we can normalize cond_B without affecting column_170 or posterior.
The result is the conditional distribution of height for B given that A is 170 cm tall.
Here's what it looks like:
End of explanation
"""
# Solution goes here
# Solution goes here
"""
Explanation: The conditional posterior distribution is cut off at 170 cm, because we have established that B is shorter than A, and A is 170 cm.
Dependence and Independence
When we constructed the joint prior distribution, I said that the heights of A and B were independent, which means that knowing one of them provides no information about the other.
In other words, the conditional probability $P(A_x | B_y)$ is the same as the unconditional probability $P(A_x)$.
But in the posterior distribution, $A$ and $B$ are not independent.
If we know that A is taller than B, and we know how tall A is, that gives us information about B.
The conditional distribution we just computed demonstrates this dependence.
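The contrast can be checked numerically on a toy pair of distributions (made-up marginals, not the heights): for a joint built as an outer product of independent marginals, conditioning changes nothing.

```python
import numpy as np

# Made-up independent marginals for two quantities A and B.
pA = np.array([0.2, 0.5, 0.3])
pB = np.array([0.6, 0.4])

joint = np.outer(pB, pA)   # rows indexed by B, columns by A

# Condition on the first value of B: renormalize that row.
cond_A = joint[0] / joint[0].sum()

print(np.allclose(cond_A, pA))   # True -- conditioning changes nothing
```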
Summary
In this chapter we started with the "outer" operations, like outer product, which we used to construct a joint distribution.
In general, you cannot construct a joint distribution from two marginal distributions, but in the special case where the distributions are independent, you can.
We extended the Bayesian update process and applied it to a joint distribution. Then from the posterior joint distribution we extracted marginal posterior distributions and conditional posterior distributions.
As an exercise, you'll have a chance to apply the same process to a problem that's a little more difficult and a lot more useful, updating a chess player's rating based on the outcome of a game.
Exercises
Exercise: Based on the results of the previous example, compute the posterior conditional distribution for A given that B is 180 cm.
Hint: Use loc to select a row from a DataFrame.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise: Suppose we have established that A is taller than B, but we don't know how tall B is.
Now we choose a random woman, C, and find that she is shorter than A by at least 15 cm. Compute posterior distributions for the heights of A and C.
The average height for women in the U.S. is 163 cm; the standard deviation is 7.3 cm.
End of explanation
"""
1 / (1 + 10**(-100/400))
"""
Explanation: Exercise: The Elo rating system is a way to quantify the skill level of players for games like chess.
It is based on a model of the relationship between the ratings of players and the outcome of a game. Specifically, if $R_A$ is the rating of player A and $R_B$ is the rating of player B, the probability that A beats B is given by the logistic function:
$$P(\mathrm{A~beats~B}) = \frac{1}{1 + 10^{(R_B-R_A)/400}}$$
The parameters 10 and 400 are arbitrary choices that determine the range of the ratings. In chess, the range is from 100 to 2800.
Notice that the probability of winning depends only on the difference in rankings. As an example, if $R_A$ exceeds $R_B$ by 100 points, the probability that A wins is
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Suppose A has a current rating of 1600, but we are not sure it is accurate. We could describe their true rating with a normal distribution with mean 1600 and standard deviation 100, to indicate our uncertainty.
And suppose B has a current rating of 1800, with the same level of uncertainty.
Then A and B play and A wins. How should we update their ratings?
To answer this question:
Construct prior distributions for A and B.
Use them to construct a joint distribution, assuming that the prior distributions are independent.
Use the logistic function above to compute the likelihood of the outcome under each joint hypothesis.
Use the joint prior and likelihood to compute the joint posterior.
Extract and plot the marginal posteriors for A and B.
Compute the posterior means for A and B. How much should their ratings change based on this outcome?
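Before building the joint prior, the win-probability formula can be wrapped in a small helper and sanity-checked (a sketch; the function name is my own):

```python
def elo_win_prob(r_a, r_b):
    """Probability that a player rated r_a beats a player rated r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

print(elo_win_prob(1700, 1600))   # about 0.64, matching the 100-point example
print(elo_win_prob(1600, 1800))   # A is the underdog in this exercise
# Complementary outcomes always sum to (essentially) 1:
print(elo_win_prob(1600, 1800) + elo_win_prob(1800, 1600))
```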
End of explanation
"""
|
YAtOff/python0-reloaded | week4/Conditionals.ipynb | mit | a = 2
b = 1
if a > b:
    print('a is greater than b')
a = 1
b = 2
if a < b:
print('a is less than b')
a = 1
b = 1
if a == b:
print('a is equal to b')
a = 1
b = 2
if a != b:
print('a is not equal to b')
"""
Explanation: The Conditional Statement (if)
The conditional statement changes the behaviour of a program based on a condition.
If the condition given to the conditional statement is true, the block of code
on the following line (which must be indented by one tab) is executed.
Using the comparison operators
- <
- >
- ==
- !=
- <=
- >=
fill in the blanks so that the following is printed:
a is greater than b
a is less than b
a is equal to b
a is not equal to b
End of explanation
"""
a = 1
b = 2
if a == b:
print('a is equal to b')
else:
print('a is not equal to b')
"""
Explanation: When we need to handle two possible cases, we use an if - else construction.
End of explanation
"""
ph = 6.5
if ph < 7:
    print('acidic')
elif ph > 7:
    print('basic')
else:
print('neutral')
"""
Explanation: For more than two cases, we can add another branch with elif.
End of explanation
"""
today = 'monday'
bank_balance = 100
if today == 'holiday':
if 2000 < bank_balance:
print('Go for shopping')
else:
print('Watch TV')
else:
print('normal working day')
"""
Explanation: if constructions can be nested inside one another.
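Nested conditions can often also be written as a single if - else chain by combining conditions with and. Here is the same decision, wrapped in a function so it is easy to test:

```python
def plan_day(today, bank_balance):
    # One compound condition replaces the nested if from above.
    if today == 'holiday' and bank_balance > 2000:
        return 'Go for shopping'
    elif today == 'holiday':
        return 'Watch TV'
    else:
        return 'normal working day'

print(plan_day('holiday', 2500))   # Go for shopping
print(plan_day('holiday', 100))    # Watch TV
print(plan_day('monday', 100))     # normal working day
```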
End of explanation
"""
|
boffi/boffi.github.io | dati_2018/03/Constant_Acceleration.ipynb | mit | m = 1.00
k = 4*pi*pi
wn = 2*pi
T = 1.0
z = 0.02
wd = wn*sqrt(1-z*z)
c = 2*z*wn*m
"""
Explanation: Constant Acceleration
Define the Dynamical System
End of explanation
"""
NSTEPS = 200 # steps per second
h = 1.0 / NSTEPS
def load(t):
return np.where(t<0, 0, np.where(t<5, sin(0.5*wn*t)**2, 0))
t = np.linspace(-1, 6, 7*NSTEPS+1)
plt.plot(t, load(t))
plt.ylim((-0.05, 1.05));
"""
Explanation: Define the Loading
Our load is
$$p(t) = 1 \,\text{N} \times \begin{cases}\sin^2\frac{\omega_n}{2}t & 0 \le t \le 5 \\ 0 & \text{otherwise}\end{cases}$$
End of explanation
"""
kstar = k + 2*c/h + 4*m/h/h
astar = 2*m
vstar = 2*c + 4*m/h
"""
Explanation: Numerical Constants
We want to compute, for each step
$$\Delta x = \frac{\Delta p + a^\star a_0 + v^\star v_0}{k^\star}$$
where we see the constants
$$a^\star = 2m, \quad v^\star = 2c + \frac{4m}{h},$$
$$ k^\star = k + \frac{2c}{h} + \frac{4m}{h^2}.$$
End of explanation
"""
t = np.linspace(0, 8+h, NSTEPS*8+1)
P = load(t)
"""
Explanation: Vectorize the time and the load
We want to compute the response up to 8 seconds
End of explanation
"""
x, v, a = [], [], []
x0, v0 = 0.0, 0.0
a0 =(P[0]-k*x0-c*v0)/m
for p0, p1 in zip(P[:-1], P[+1:]):
x.append(x0), v.append(v0), a.append(a0)
dx = ((p1-p0) + astar*a0 + vstar*v0)/kstar
dv = 2*(dx/h-v0)
x0, v0 = x0+dx, v0+dv
a0 = (p1 - k*x0 - c*v0)/m
x.append(x0), v.append(v0), a.append(a0)
x, v = np.array(x), np.array(v)
"""
Explanation: Integration
Prepare containers
write initial conditions
loop on load and load increments
save initial conditions (x0, v0, a0),
compute the _corrected_ load increment,
compute dx and dv,
compute the values of x0 and v0 for the beginning of the next step
compute a0 at the beginning of the next step.
save the results for the last step (they are usually saved at the beginning of the loop, but here we need to special case, why?) and vectorize the whole of the results
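To check the stepping logic in isolation, here is a minimal standalone sketch (an assumed test case, not the forced problem above): free vibration of the undamped oscillator with $x(0)=1$, $v(0)=0$, whose exact solution is $x(t)=\cos\omega_n t$.

```python
import numpy as np

m, k, c = 1.0, 4 * np.pi ** 2, 0.0   # undamped: exact solution is cos(2*pi*t)
h = 0.001
kstar = k + 2 * c / h + 4 * m / h ** 2
astar = 2 * m
vstar = 2 * c + 4 * m / h

x0, v0 = 1.0, 0.0
a0 = (0.0 - k * x0 - c * v0) / m     # zero external load throughout
for _ in range(100):                 # integrate to t = 0.1
    dx = (0.0 + astar * a0 + vstar * v0) / kstar
    dv = 2 * (dx / h - v0)
    x0, v0 = x0 + dx, v0 + dv
    a0 = (0.0 - k * x0 - c * v0) / m

# Small error: the scheme tracks the exact motion closely at this step size.
print(abs(x0 - np.cos(2 * np.pi * 0.1)))
```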
End of explanation
"""
plt.plot(t, x);
"""
Explanation: Results
The response
End of explanation
"""
xe = (((1 - 2*z**2)*sin(wd*t) / (2*z*sqrt(1-z*z)) - cos(wd*t)) *exp(-z*wn*t) + 1. - sin(wn*t)/(2*z))/2/k
plt.plot(t[:1001], xe[:1001], lw=2)
plt.plot(t[:1001:10], x[:1001:10], 'ko');
"""
Explanation: Comparison
Using black magic it is possible to divine the analytical expression of the response during the forced phase,
$$x(t) = \frac1{2k}\left(\left(\frac{1-2\zeta^2}{2\zeta\sqrt{1-\zeta^2}}\sin\omega_Dt-\cos\omega_Dt\right)\exp(-\zeta\omega_nt)+1-\frac{1}{2\zeta}\sin\omega_nt\right), \qquad 0\le t \le 5.$$
and hence plot a comparison within the exact response and the (downsampled) numerical approximation, in the range of validity of the exact response.
End of explanation
"""
plt.plot(t[:1001], x[:1001]-xe[:1001]);
"""
Explanation: Eventually we plot the difference between exact and approximate response,
End of explanation
"""
|
IanHawke/Southampton-PV-NumericalMethods-2016 | solutions/05-Partial-Differential-Equations.ipynb | mit | from __future__ import division
import numpy
from matplotlib import pyplot
%matplotlib notebook
dt = 1e-5
dx = 1e-2
x = numpy.arange(0,1+dx,dx)
y = numpy.zeros_like(x)
y = x * (1 - x)
def update_heat(y, dt, dx):
dydt = numpy.zeros_like(y)
dydt[1:-1] = dt/dx**2 * (y[2:] + y[:-2] - 2*y[1:-1])
return dydt
Nsteps = 10000
for n in range(Nsteps):
update = update_heat(y, dt, dx)
y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
"""
Explanation: Partial Differential Equations
If there's anything that needs serious computing power it's the solution of PDEs. However, you can go a long way to getting intuition on complex problems with simple numerical methods.
Let's take parts of the doped semiconductor model, concentrating on the time-dependent behaviour, simplify the constants, and restrict to one spatial dimension:
\begin{align}
\frac{\partial p}{\partial t} + \frac{\partial}{\partial x} \left( p \frac{\partial}{\partial x} \left( \frac{1}{p} \right) \right) &= \left( n_i^2 - np \right) + G, \\
\frac{\partial n}{\partial t} + \frac{\partial}{\partial x} \left( n \frac{\partial}{\partial x} \left( \frac{1}{n} \right) \right) &= \left( n_i^2 - np \right) + G.
\end{align}
This is a pair of coupled PDEs for $n, p$ in terms of physical and material constants, and the quasi-Fermi levels $E_{F_{n,p}}$ depend on $n, p$ - here we've replaced them with terms proportional to ${n, p}^{-1}$.
We'll write this in the form
\begin{equation}
\frac{\partial {\bf y}}{\partial t} + \frac{\partial}{\partial x} \left( {\bf g}({\bf y}) \frac{\partial}{\partial x} {\bf h}({\bf y}) \right) = {\bf f}({\bf y}).
\end{equation}
Finite differencing
We used finite differencing when looking at IVPs. We introduced a grid of points $x_j$ in space and replaced derivatives with finite difference expressions. For example, we saw the forward difference approximation
\begin{equation}
\left. \frac{\text{d} y}{\text{d} x} \right|_{x = x_j} = \frac{y_{j+1} - y_j}{\Delta x} + {\cal O} \left( \Delta x^1 \right),
\end{equation}
and the central difference approximation
\begin{equation}
\left. \frac{\text{d} y}{\text{d} x} \right|_{x = x_j} = \frac{y_{j+1} - y_{j-1}}{2 \Delta x} + {\cal O} \left( \Delta x^2 \right).
\end{equation}
This extends to partial derivatives, and to more than one variable. We introduce a grid in time, $t^n$, and denote $y(t^n, x_j) = y^n_j$. Then we can do, say, forward differencing in time and central differencing in space:
\begin{align}
\left. \frac{\partial y}{\partial t} \right|_{x = x_j, t = t^n} &= \frac{y^{n+1}_{j} - y^{n}_{j}}{\Delta t}, \\
\left. \frac{\partial y}{\partial x} \right|_{x = x_j, t = t^n} &= \frac{y^{n}_{j+1} - y^{n}_{j-1}}{2 \Delta x}.
\end{align}
Diffusion equation
To illustrate the simplest way of proceeding we'll look at the diffusion equation
\begin{equation}
\frac{\partial y}{\partial t} - \frac{\partial^2 y}{\partial x^2} = 0.
\end{equation}
When finite differences are used, with the centred second derivative approximation
\begin{equation}
\left. \frac{\text{d}^2 y}{\text{d} x^2} \right|_{x = x_j} = \frac{y_{j+1} + y_{j-1} - 2 y_{j}}{\Delta x^2} + {\cal O} \left( \Delta x^2 \right),
\end{equation}
we find the approximation
\begin{align}
&& \frac{y^{n+1}_{j} - y^{n}_{j}}{\Delta t} - \frac{y^{n}_{j+1} + y^{n}_{j-1} - 2 y^{n}_{j}}{\Delta x^2} &= 0 \\
\implies && y^{n+1}_{j} &= y^{n}_{j} + \frac{\Delta t}{\Delta x^2} \left( y^{n}_{j+1} + y^{n}_{j-1} - 2 y^{n}_{j} \right).
\end{align}
Given initial data and boundary conditions, this can be solved.
Let's implement the heat equation with homogeneous Dirichlet boundary conditions ($y(t, 0) = 0 = y(t, 1)$) and simple initial data ($y(0, x) = x (1 - x)$), using a spatial step size of $\Delta x = 10^{-2}$ and a time step of $\Delta t = 10^{-5}$, solving to $t = 0.1$ ($10000$ steps).
End of explanation
"""
from matplotlib import animation
import matplotlib
matplotlib.rcParams['animation.html'] = 'html5'
y = numpy.zeros_like(x)
y = x * (1 - x)
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0,0.3))
line, = ax.plot([], [])
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(100):
y += update_heat(y, dt, dx)
line.set_data(x, y)
return line
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=100, interval=100, blit=True)
"""
Explanation: Note
The animations that follow will only work if you have ffmpeg installed. Even then, they may be touchy. The plots should always work.
End of explanation
"""
y = numpy.sin(4.0*numpy.pi*x)**4
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
Nsteps = 10000
for n in range(Nsteps):
update = update_heat(y, dt, dx)
y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
"""
Explanation: The solution looks good - smooth, the initial profile is diffusing nicely. Try with something a bit more complex, such as $y(0, x) = \sin^4(4 \pi x)$ to see the diffusive effects:
End of explanation
"""
y = numpy.zeros_like(x)
y = numpy.sin(4.0*numpy.pi*x)**4
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0,1))
line, = ax.plot([], [])
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(20):
y += update_heat(y, dt, dx)
line.set_data(x, y)
return line
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=50, interval=100, blit=True)
"""
Explanation: All the features are smoothing out, as expected.
End of explanation
"""
dt = 1e-4
Nsteps = 100
y = numpy.sin(4.0*numpy.pi*x)**4
for n in range(Nsteps):
update = update_heat(y, dt, dx)
y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
y = numpy.zeros_like(x)
y = numpy.sin(4.0*numpy.pi*x)**4
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(-1,3))
line, = ax.plot([], [])
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(1):
y += update_heat(y, dt, dx)
line.set_data(x, y)
return line
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=50, interval=100, blit=True)
"""
Explanation: However, we used a really small timestep to get these results. It would be less numerical work if we increased the timestep. Let's try only $100$ steps with $\Delta t = 10^{-4}$:
End of explanation
"""
ni = 0.1
G = 0.1
def f(y):
p = y[0,:]
n = y[1,:]
f_vector = numpy.zeros_like(y)
f_vector[:,:] = ni**2 - n*p + G
return f_vector
def g(y):
p = y[0,:]
n = y[1,:]
g_vector = numpy.zeros_like(y)
g_vector[0,:] = p
g_vector[1,:] = n
return g_vector
def h(y):
p = y[0,:]
n = y[1,:]
h_vector = numpy.zeros_like(y)
h_vector[0,:] = 1.0/p
h_vector[1,:] = 1.0/n
return h_vector
def update_term(y, dt, dx):
dydt = numpy.zeros_like(y)
f_vector = f(y)
g_vector = g(y)
h_vector = h(y)
dydt[:,2:-2] += dt * f_vector[:,2:-2]
dydt[:,2:-2] -= dt/(4.0*dx**2)*(g_vector[:,3:-1]*h_vector[:,4:] -\
(g_vector[:,3:-1] + g_vector[:,1:-3])*h_vector[:,2:-2] + \
g_vector[:,1:-3]*h_vector[:,:-4])
return dydt
"""
Explanation: This doesn't look good - it's horribly unstable, with the results blowing up very fast.
The problem is that the size of the timestep really matters. Von Neumann stability calculations can be used to show that only when
\begin{equation}
\frac{\Delta t}{\Delta x^2} < \frac{1}{2}
\end{equation}
are the numerical results stable, and hence trustable. This is a real problem when you want to improve accuracy by increasing the number of points, hence decreasing $\Delta x$: with $\Delta x = 10^{-3}$ we need $\Delta t < 5 \times 10^{-7}$ already!
Exercise
Check, by changing $\Delta x$ and re-running the simulations, that you see numerical instabilities when this stability bound is violated. You'll only need to take a few tens of timesteps irrespective of the value of $\Delta t$.
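That experiment can also be captured in a small standalone sketch (my own helper, using the same forward-time centred-space update on the first initial profile): with $\Delta x = 10^{-2}$, a timestep of $4\times10^{-5}$ gives $r = 0.4$ and stays bounded, while $6\times10^{-5}$ gives $r = 0.6$ and blows up within about a hundred steps.

```python
import numpy as np

def ftcs_max(dt, dx=1e-2, steps=100):
    """Run `steps` forward-time centred-space updates; return max |y|."""
    x = np.arange(0, 1 + dx, dx)
    y = x * (1 - x)                  # fixed ends act as Dirichlet boundaries
    r = dt / dx ** 2
    for _ in range(steps):
        y[1:-1] = y[1:-1] + r * (y[2:] + y[:-2] - 2 * y[1:-1])
    return np.abs(y).max()

print(ftcs_max(dt=4e-5))   # r = 0.4 < 1/2 : bounded by the initial maximum
print(ftcs_max(dt=6e-5))   # r = 0.6 > 1/2 : grows without bound
```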
Full problem
Finally, we can evaluate the PDE at a specific point and re-arrange the equation. Assuming we know all the data at $t^n$, the only unknowns will be at $t^{n+1}$, giving the algorithm
\begin{align}
{\bf y}^{n+1}_{j} &= {\bf y}^{n}_{j} + \Delta t \, {\bf f}^{n}_{j} - \frac{\Delta t}{2 \Delta x} \left( {\bf g}^{n}_{j+1} \frac{1}{2 \Delta x} \left( {\bf h}^{n}_{j+2} - {\bf h}^{n}_{j} \right) - {\bf g}^{n}_{j-1} \frac{1}{2 \Delta x} \left( {\bf h}^{n}_{j} - {\bf h}^{n}_{j-2} \right) \right) \\
&= {\bf y}^{n}_{j} + \Delta t \, {\bf f}^{n}_{j} - \frac{\Delta t}{4 \left( \Delta x \right)^2} \left( {\bf g}^{n}_{j+1} {\bf h}^{n}_{j+2} - \left( {\bf g}^{n}_{j+1} + {\bf g}^{n}_{j-1} \right) {\bf h}^{n}_{j} + {\bf g}^{n}_{j-1} {\bf h}^{n}_{j-2} \right)
\end{align}
We'll implement that by writing a function that computes the update term (${\bf y}^{n+1}_j - {\bf y}^n_j$), choosing $n_i = 0.1, G = 0.1$:
End of explanation
"""
dx = 0.05
dt = 1e-7
Nsteps = 10000
x = numpy.arange(-dx,1+2*dx,dx)
y = numpy.zeros((2,len(x)))
y[0,:] = ni*(1.0+0.1*numpy.sin(4.0*numpy.pi*x))
y[1,:] = ni*(1.0+0.1*numpy.sin(6.0*numpy.pi*x))
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()
for n in range(Nsteps):
update = update_term(y, dt, dx)
y += update
y[:,1] = y[:,2]
y[:,0] = y[:,1]
y[:,-2] = y[:,-3]
y[:,-1] = y[:,-2]
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()
dx = 0.01
dt = 1e-8
Nsteps = 100000
x = numpy.arange(-dx,1+2*dx,dx)
y = numpy.zeros((2,len(x)))
y[0,:] = ni*(1.0+0.1*numpy.sin(4.0*numpy.pi*x))
y[1,:] = ni*(1.0+0.1*numpy.sin(6.0*numpy.pi*x))
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0.09,0.11))
line1, = ax.plot([], [], label="$p$")
line2, = ax.plot([], [], label="$n$")
pyplot.legend()
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(1000):
update = update_term(y, dt, dx)
y += update
y[:,1] = y[:,2]
y[:,0] = y[:,1]
y[:,-2] = y[:,-3]
y[:,-1] = y[:,-2]
y += update_heat(y, dt, dx)
line1.set_data(x, y[0,:])
line2.set_data(x, y[1,:])
return line1, line2
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=100, interval=100, blit=True)
"""
Explanation: Now set the initial data to be
\begin{align}
p &= n_i \left(1 + 0.1 \sin(4 \pi x) \right), \\
n &= n_i \left(1 + 0.1 \sin(6 \pi x) \right).
\end{align}
The spatial domain should be $[0, 1]$. The spatial step size should be $0.05$. The timestep should be $10^{-7}$. The evolution should be for $10^5$ steps, to $t=0.01$.
The crucial point is what happens at the boundaries. To discretely represent a Neumann boundary condition where the normal derivative vanishes on the boundary, set the boundary points equal to the first data point in the interior, ie y[:,0] = y[:,1] = y[:,2] and y[:,-1] = y[:,-2] = y[:,-3].
End of explanation
"""
|
marko911/deep-learning | intro-to-rnns/Anna_KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
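The two dictionaries invert each other, which is easy to check on a tiny made-up string:

```python
# Same dictionary construction as above, on a toy string.
text = 'hello world'
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))

encoded = [vocab_to_int[c] for c in text]
decoded = ''.join(int_to_vocab[i] for i in encoded)

print(decoded == text)   # True -- the mapping round-trips
```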
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
chars[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
np.max(chars)+1
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
"""
Explanation: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
train_x, train_y, val_x, val_y = split_data(chars, 10, 50)
train_x.shape
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
train_x[:,:50]
"""
Explanation: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
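To see the sliding window concretely, the generator can be run on a made-up array (the function is repeated here so the snippet is self-contained):

```python
import numpy as np

def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape
    n_batches = int(slice_size / num_steps)
    for b in range(n_batches):
        yield [x[:, b * num_steps: (b + 1) * num_steps] for x in arrs]

data = np.arange(12).reshape(1, 12)   # one "sequence" of 12 steps
windows = [w for w, in get_batch([data], num_steps=4)]

print(len(windows))   # 3 windows of 4 steps each
print(windows[0])     # [[0 1 2 3]]
print(windows[1])     # [[4 5 6 7]]
```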
End of explanation
"""
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# One-hot encoding the input and target characters
x_one_hot = tf.one_hot(inputs, num_classes)
y_one_hot = tf.one_hot(targets, num_classes)
### Build the RNN layers
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
### Run the data through the RNN layers
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one output row for each step for each batch
seq_output = tf.concat(outputs, axis=1)
output = tf.reshape(seq_output, [-1, lstm_size])
# Now connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(num_classes))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and batch
logits = tf.matmul(output, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
preds = tf.nn.softmax(logits, name='predictions')
# Reshape the targets to match the logits
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
cost = tf.reduce_mean(loss)
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
# NOTE: I'm using a namedtuple here because I think they are cool
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: Building the model
Below is a function where I build the graph for the network.
End of explanation
"""
batch_size = 10
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
keep_prob = 0.5
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
checkpoint = "checkpoints/____.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cccr-iitm/cmip6/models/sandbox-3/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-3', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
akhilaananthram/nupic.research | sdr_paper/sdr_math_neuron_paper.ipynb | gpl-3.0 | oxp = Symbol("Omega_x'")
b = Symbol("b")
n = Symbol("n")
theta = Symbol("theta")
w = Symbol("w")
s = Symbol("s")
a = Symbol("a")
subsampledOmega = (binomial(s, b) * binomial(n - s, a - b)) / binomial(n, a)
subsampledFpF = Sum(subsampledOmega, (b, theta, s))
subsampledOmegaSlow = (binomial(s, b) * binomial(n - s, a - b))
subsampledFpFSlow = Sum(subsampledOmegaSlow, (b, theta, s))/ binomial(n, a)
display(subsampledFpF)
display(subsampledFpFSlow)
"""
Explanation: Equation for Neuron Paper
A dendritic segment can robustly classify a pattern by subsampling a small number of cells from a larger population. Assuming a random distribution of patterns, the exact probability of a false match is given by the following equation:
End of explanation
"""
display("n=1024, a=8, s=4, omega=2", subsampledFpF.subs(s, 4).subs(n, 1024).subs(a, 8).subs(theta, 2).evalf())
display("n=100000, a=2000, s=10, theta=10", subsampledFpFSlow.subs(theta, 10).subs(s, 10).subs(n, 100000).subs(a, 2000).evalf())
display("n=2048, a=400, s=40, theta=20", subsampledFpF.subs(theta, 20).subs(s, 40).subs(n, 2048).subs(a, 400).evalf())
"""
Explanation: where n refers to the size of the population of cells, a is the number of active cells at any instant in time, s is the number of actual synapses on a dendritic segment, and θ is the threshold for NMDA spikes. Following Ahmad & Hawkins (2015), the numerator counts the number of possible ways θ or more cells can match a fixed set of s synapses. The denominator counts the number of ways a cells out of n can be active.
Example usage
End of explanation
"""
T1B = subsampledFpFSlow.subs(n, 100000).subs(a, 2000).subs(theta,s).evalf()
print "n=100000, a=2000, theta=s"
display("s=6",T1B.subs(s,6).evalf())
display("s=8",T1B.subs(s,8).evalf())
display("s=10",T1B.subs(s,10).evalf())
"""
Explanation: Table 1B
End of explanation
"""
T1C = subsampledFpFSlow.subs(n, 100000).subs(a, 2000).subs(s,2*theta).evalf()
print "n=100000, a=2000, s=2*theta"
display("theta=6",T1C.subs(theta,6).evalf())
display("theta=8",T1C.subs(theta,8).evalf())
display("theta=10",T1C.subs(theta,10).evalf())
display("theta=12",T1C.subs(theta,12).evalf())
"""
Explanation: Table 1C
End of explanation
"""
m = Symbol("m")
T1D = subsampledFpF.subs(n, 100000).subs(a, 2000).subs(s,2*m*theta).evalf()
print "n=100000, a=2000, s=2*m*theta"
display("theta=10, m=2",T1D.subs(theta,10).subs(m,2).evalf())
display("theta=10, m=4",T1D.subs(theta,10).subs(m,4).evalf())
display("theta=10, m=6",T1D.subs(theta,10).subs(m,6).evalf())
display("theta=20, m=6",T1D.subs(theta,20).subs(m,6).evalf())
"""
Explanation: Table 1D
End of explanation
"""
|
pybel/pybel | notebooks/Compiling a BEL Document.ipynb | mit | import os
from urllib.request import urlretrieve
import pybel
import logging
logging.getLogger('pybel').setLevel(logging.DEBUG)
logging.basicConfig(level=logging.DEBUG)
logging.getLogger('urllib3').setLevel(logging.WARNING)
print(pybel.get_version())
DESKTOP_PATH = os.path.join(os.path.expanduser('~'), 'Desktop')
manager = pybel.Manager(f'sqlite:///{DESKTOP_PATH}/pybel_example_database.db')
"""
Explanation: Loading BEL Documents
We'll always start by importing pybel.
End of explanation
"""
url = 'https://raw.githubusercontent.com/pharmacome/conib/master/hbp_knowledge/tau/boland2018.bel'
"""
Explanation: First, we'll download and parse a BEL document from the Human Brain Pharmacome project describing the 2018 paper from Boland et al., "Promoting the clearance of neurotoxic proteins in neurodegenerative disorders of ageing".
End of explanation
"""
boland_2018_graph = pybel.from_bel_script_url(url, manager=manager)
pybel.to_database(boland_2018_graph, manager=manager)
"""
Explanation: A BEL document can be downloaded and parsed from a URL using pybel.from_bel_script_url. Keep in mind, the first time we load a given BEL document, various BEL resources that are referenced in the document must be cached. Be patient - this can take up to ten minutes.
End of explanation
"""
boland_2018_graph.summarize()
"""
Explanation: The graph is loaded into an instance of the pybel.BELGraph class. We can use the pybel.BELGraph.summarize() to print a brief summary of the graph.
End of explanation
"""
url = 'https://raw.githubusercontent.com/pharmacome/conib/master/hbp_knowledge/tau/caballero2018.bel'
path = os.path.join(DESKTOP_PATH, 'caballero2018.bel')
if not os.path.exists(path):
urlretrieve(url, path)
"""
Explanation: Next, we'll open and parse a BEL document from the Human Brain Pharmacome project describing the 2018 paper from Caballero et al., "Interplay of pathogenic forms of human tau with different autophagic pathways". This example uses urlretrieve() to download the file locally to demonstrate how to load from a local file path.
End of explanation
"""
cabellero_2018_graph = pybel.from_bel_script(path, manager=manager)
cabellero_2018_graph.summarize()
pybel.to_database(cabellero_2018_graph, manager=manager)
"""
Explanation: A BEL document can also be parsed from a path to a file using pybel.from_bel_script. Like before, we will summarize the graph after parsing it.
End of explanation
"""
combined_graph = pybel.union([boland_2018_graph, cabellero_2018_graph])
combined_graph.summarize()
"""
Explanation: We can combine two or more graphs in a list using pybel.union.
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session09/Day5/BayesianBlocksDSFP_Problems.ipynb | mit | # execute this cell
np.random.seed(0)
x = np.concatenate([stats.cauchy(-5, 1.8).rvs(500),
stats.cauchy(-4, 0.8).rvs(2000),
stats.cauchy(-1, 0.3).rvs(500),
stats.cauchy(2, 0.8).rvs(1000),
stats.cauchy(4, 1.5).rvs(500)])
# truncate values to a reasonable range
x = x[(x > -15) & (x < 15)]
# complete
# plt.hist(
"""
Explanation: Building Histograms with Bayesian Priors
An Introduction to Bayesian Blocks
========
Version 0.1
By LM Walkowicz 2019 June 14
This notebook makes heavy use of Bayesian block implementations by Jeff Scargle, Jake VanderPlas, Jan Florjanczyk, and the Astropy team.
Before you begin, please download the dataset for this notebook.
Problem 1) Histograms Lie!
One of the most common and useful tools for data visualization can be incredibly misleading. Let's revisit how.
Problem 1a
First, let's make some histograms! Below, I provide some data; please make a histogram of it.
End of explanation
"""
# complete
# plt.hist
"""
Explanation: Hey, nice histogram!
But how do we know we have visualized all the relevant structure in our data?
Play around with the binning and consider:
What features do you see in this data? Which of these features are important?
End of explanation
"""
# execute this cell
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
# complete
# plt.hist(
# grid =
# PDF =
# plt.plot(
"""
Explanation: Problem 1b
What are some issues with histograms?
Take a few min to discuss this with your partner
Solution 1b
write your solution here
Problem 1c
We have previously covered a few ways to make histograms better. What are some ways you could improve your histogram?
Take a few min to discuss this with your partner
Problem 1d
There are lots of ways to improve the previous histogram-- let's implement a KDE representation instead! As you have seen in previous sessions, we will borrow a bit of code from Jake VanderPlas to estimate the KDE.
As a reminder, you have a number of choices of kernel in your KDE-- some we have used in the past: tophat, Epanechnikov, Gaussian. Please plot your original histogram, and then overplot a few example KDEs on top of it.
End of explanation
"""
# execute this cell
def bayesian_blocks(t):
"""Bayesian Blocks Implementation
By Jake Vanderplas. License: BSD
Based on algorithm outlined in http://adsabs.harvard.edu/abs/2012arXiv1207.5578S
Parameters
----------
t : ndarray, length N
data to be histogrammed
Returns
-------
bins : ndarray
array containing the (N+1) bin edges
Notes
-----
This is an incomplete implementation: it may fail for some
datasets. Alternate fitness functions and prior forms can
be found in the paper listed above.
"""
# copy and sort the array
t = np.sort(t)
N = t.size
# create length-(N + 1) array of cell edges
edges = np.concatenate([t[:1],
0.5 * (t[1:] + t[:-1]),
t[-1:]])
block_length = t[-1] - edges
# arrays needed for the iteration
nn_vec = np.ones(N)
best = np.zeros(N, dtype=float)
last = np.zeros(N, dtype=int)
#-----------------------------------------------------------------
# Start with first data cell; add one cell at each iteration
#-----------------------------------------------------------------
for K in range(N):
# Compute the width and count of the final bin for all possible
# locations of the K^th changepoint
width = block_length[:K + 1] - block_length[K + 1]
count_vec = np.cumsum(nn_vec[:K + 1][::-1])[::-1]
# evaluate fitness function for these possibilities
fit_vec = count_vec * (np.log(count_vec) - np.log(width))
fit_vec -= 4 # 4 comes from the prior on the number of changepoints
fit_vec[1:] += best[:K]
# find the max of the fitness: this is the K^th changepoint
i_max = np.argmax(fit_vec)
last[K] = i_max
best[K] = fit_vec[i_max]
#-----------------------------------------------------------------
# Recover changepoints by iteratively peeling off the last block
#-----------------------------------------------------------------
change_points = np.zeros(N, dtype=int)
i_cp = N
ind = N
while True:
i_cp -= 1
change_points[i_cp] = ind
if ind == 0:
break
ind = last[ind - 1]
change_points = change_points[i_cp:]
return edges[change_points]
"""
Explanation: Problem 1d
Which parameters most affected the shape of the final distribution?
What are some possible issues with using a KDE representation of the data?
Discuss with your partner
Solution 1d
Write your response here
Problem 2) Histograms Episode IV: A New Hope
How can we create representations of our data that are robust against the known issues with histograms and KDEs?
Introducing: Bayesian Blocks
We want to represent our data in the most general possible way, a method that
* avoids assumptions about smoothness or shape of the signal (which might place limitations on scales and resolution)
* is nonparametric (doesn't fit some model)
* finds and characterizes local structure in our time series (in contrast to periodicities)
(continued)
handles arbitrary sampling (i.e. doesn't require evenly spaced samples, doesn't care about sparse samples)
is as hands-off as possible-- user interventions should be minimal or non-existent
is applicable to multivariate data
can both analyze data after they are collected, and in real time
Bayesian Blocks works by creating a super-simple representation of the data, essentially a piecewise fit that segments our time series.
In the implementations we will use today, the model is a piecewise linear fit in time across each individual bin, or "block".
Specifically, the signal is modeled as linear in time across each block:
$$x(t) = λ(1 + a(t − t_{fid}))$$
where $\lambda$ is the signal strength at the fiducial time $t_{fid}$, and the coefficient $a$ determines the rate of change over the block.
Scargle et al. (2012) point out that using a linear fit is good because it makes calculating the fit really easy, but you could potentially use something more complicated (they provide some details for using an exponential model, $x(t) = λe^{a(t−t_{fid})}$, in their Appendix C).
The Fitness Function
The insight in Bayesian Blocks is that you can use a Bayesian likelihood framework to compute a "fitness function" that depends only on the number and size of the blocks.
In every block, you are trying to maximize some goodness-of-fit measure for data in that individual block. This fit depends only on the data contained in its block, and is independent of all other data.
The optimal segmentation of the time series, then, is the segmentation that maximizes fitness-- the total goodness-of-fit over all the blocks (so for example, you could use the sum over all blocks of whatever your quantitative expression is for the goodness of fit in individual blocks).
The entire time series is then represented by a series of segments (blocks) characterized by very few parameters:
$N_{cp}$: the number of change-points
$t_{k}^{cp}$: the change-point starting block k
$X_k$: the signal amplitude in block k
for k = 1, 2, ... $N_{cp}$
When using Bayesian Blocks (particularly on time series data) we often speak of "change points" rather than segmentation, as the block edges essentially tell us the discrete times at which a signal’s statistical properties change discontinuously, though the segments themselves are constant between these points.
You, looking at KDEs right now:
In some cases (such as some of the examples below), the Bayesian Block representation may look kind of clunky. HOWEVER: remember that histograms and KDEs may sometimes look nicer, but can be really misleading! If you want to derive physical insight from these representations of your data, Bayesian Blocks can provide a means of deriving physically interesting quantities (for example, better estimates of event locations, lags, amplitudes, widths, rise and decay times, etc).
On top of that, you can do all of the above without losing or hiding information via smoothing or other model assumptions.
HOW MANY BLOCKS, THO?!
We began this lesson by bemoaning that histograms force us to choose a number of bins, and that KDEs require us to choose a bandwidth. Furthermore, one of the requirements we had for a better way forward was that the user interaction be minimal or non-existent. What to do?
Bayesian Blocks works by defining a prior distribution for the number of blocks, such that a single parameter controls the steepness of this prior (in other words, the relative probability for smaller or larger numbers of blocks).
Once this prior is defined, the size, number, and locations of the blocks are determined solely and uniquely by the data.
So, what does the prior look like?
In most cases, $N_{blocks}$ << N (you are, after all, still binning your data-- if $N_{blocks}$ was close to N, you wouldn't really be doing much). Scargle et al. (2012) adopts a geometric prior (Coram 2002), which assigns smaller probability to a large number of blocks:
$$P(N_{blocks}) = P_{0}\gamma^{N_{blocks}}$$
for $0 ≤ N_{blocks} ≤ N$, and zero otherwise since $N_{blocks}$ cannot be negative or larger than the number of data cells.
Substituting in the normalization constant $P_{0}$ gives
$$P(N_{blocks}) = \frac{1−\gamma}{1-\gamma^{N+1}}\gamma^{N_{blocks}}$$
<sub>Essentially, this prior says that finding k + 1 blocks is less likely than finding k blocks by the constant factor $\gamma$. Scargle (2012) also provides a nice intuitive way of thinking about $\gamma$: $\gamma$ is adjusting the amount of structure in the resulting representation.</sub>
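As a quick numerical sanity check (a sketch, not part of the original exercises), we can evaluate the normalized geometric prior above and confirm that it sums to one and down-weights larger block counts by the constant factor $\gamma$:

```python
import numpy as np

def geometric_prior(n_blocks, n_cells, gamma):
    # P(N_blocks) = (1 - gamma) / (1 - gamma**(N + 1)) * gamma**N_blocks
    norm = (1.0 - gamma) / (1.0 - gamma ** (n_cells + 1))
    return norm * gamma ** n_blocks

# prior over all allowed block counts for N = 100 cells
p = geometric_prior(np.arange(101), n_cells=100, gamma=0.5)
# p sums to 1, and each extra block costs a factor of gamma in probability
```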
The Magic Part
At this point, you may be wondering about how the algorithm is capable of finding an optimal number of blocks. As Scargle et al. (2012) admit,
the number of possible partitions (i.e. the number of ways N cells can be arranged in blocks) is $2^N$. This number is exponentially large, rendering an explicit exhaustive search of partition space utterly impossible for all but very small N.
In his blog post on Bayesian Blocks, Jake VdP compares the algorithm's use of dynamic programming to mathematical induction. For example: how could you prove that
$$1 + 2 + \cdots + n = \frac{n(n+1)}{2}$$
is true for all positive integers $n$? An inductive proof of this formula proceeds in the following fashion:
Base Case: We can easily show that the formula holds for $n = 1$.
Inductive Step: For some value $k$, assume that $1 + 2 + \cdots + k = \frac{k(k+1)}{2}$ holds.
Adding $(k + 1)$ to each side and rearranging the result yields
$$1 + 2 + \cdots + k + (k + 1) = \frac{(k + 1)(k + 2)}{2}$$
Looking closely at this, we see that we have shown the following: if our formula is true for $k$, then it must be true for $k + 1$.
By 1 and 2, we can show that the formula is true for any positive integer $n$, simply by starting at $n=1$ and repeating the inductive step $n - 1$ times.
In the Bayesian Blocks algorithm, one can find the optimal binning for a single data point; so by analogy with our example above (full details are given in the Appendix of Scargle et al. 2012), if you can find the optimal binning for $k$ points, it's a short step to the optimal binning for $k + 1$ points.
So, rather than performing an exhaustive search of all possible bins, which would scale as $2^N$, the time to find the optimal binning instead scales as $N^2$.
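For reference, here is a minimal sketch of the full dynamic-programming algorithm that the cached code fragment at the top of this section belongs to (this follows the blog-post version for event data; the constant 4 stands in for the changepoint prior, and variable names match the fragment above):

```python
import numpy as np

def bayesian_blocks(t):
    """Minimal Bayesian Blocks for event data (Scargle et al. 2012)."""
    t = np.sort(np.asarray(t, dtype=float))
    N = len(t)
    # cell edges: the data range ends plus midpoints between samples
    edges = np.concatenate([t[:1], 0.5 * (t[1:] + t[:-1]), t[-1:]])
    block_length = t[-1] - edges
    nn_vec = np.ones(N)                 # one event per cell
    best = np.zeros(N)
    last = np.zeros(N, dtype=int)
    for K in range(N):
        # widths and counts of every block ending at cell K
        width = block_length[:K + 1] - block_length[K + 1]
        count_vec = np.cumsum(nn_vec[:K + 1][::-1])[::-1]
        # piecewise-constant log-likelihood fitness minus the prior term
        fit_vec = count_vec * (np.log(count_vec) - np.log(width))
        fit_vec -= 4
        fit_vec[1:] += best[:K]
        i_max = np.argmax(fit_vec)
        last[K] = i_max
        best[K] = fit_vec[i_max]
    # recover changepoints by iteratively peeling off the last block
    change_points = [N]
    ind = last[N - 1]
    while ind > 0:
        change_points.append(ind)
        ind = last[ind - 1]
    change_points.append(0)
    return edges[np.array(change_points[::-1])]
```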
Playing with Blocks
We will begin playing with (Bayesian) blocks with a simple implementation, outlined by Jake VanderPlas in this blog: https://jakevdp.github.io/blog/2012/09/12/dynamic-programming-in-python/
End of explanation
"""
# complete
plt.hist(
"""
Explanation: Problem 2a
Let's visualize our data again, but this time we will use Bayesian Blocks.
Plot a standard histogram (as above), but now plot the Bayesian Blocks representation of the distribution over it.
End of explanation
"""
# complete
"""
Explanation: Problem 2b
How is the Bayesian Blocks representation different or similar?
How might your choice of representation affect your scientific conclusions about your data?
Take a few min to discuss this with your partner
If you are using histograms for analysis, you might infer physical meaning from the presence or absence of features in these distributions. As it happens, histograms of time-tagged event data are often used to characterize physical events in time domain astronomy, for example gamma ray bursts or stellar flares.
Problem 3) Bayesian Blocks in the wild
Now we'll apply Bayesian Blocks to some real astronomical data, and explore how our visualization choices may affect our scientific conclusions.
First, let's get some data!
All data from NASA missions is hosted on the Mikulski Archive for Space Telescopes (aka MAST). As an aside, the M in MAST used to stand for "Multimission", but was changed to honor Sen. Barbara Mikulski (D-MD) for her tireless support of science.
Some MAST data (mostly the original data products) can be directly accessed using astroquery (there's an extensive guide to interacting with MAST via astroquery here: https://astroquery.readthedocs.io/en/latest/mast/mast.html).
In addition, MAST also hosts what are called "Higher Level Science Products", or HLSPs, which are data derived by science teams in the course of doing their analyses. You can see a full list of HLSPs here: https://archive.stsci.edu/hlsp/hlsp-table
These data tend to be more heterogeneous, and so are not currently accessible through astroquery (for the most part). They will be added in the future. But never fear! You can also submit SQL queries via MAST's CasJobs interface.
Go to the MAST CasJobs http://mastweb.stsci.edu/mcasjobs/home.aspx
If I have properly remembered to tell you to create a MAST CasJobs login, you can login now (and if not, just go ahead and sign up now, it's fast)!
We will be working with the table of new planet radii by Berger et al. (2019).
If you like, you can check out the paper here! https://arxiv.org/pdf/1805.00231.pdf
From the "Query" tab, select "HLSP_KG_RADII" from the Context drop-down menu.
You can then enter your query. In this example, we are doing a simple query to get all the KG-RADII radii and fluxes from the exoplanets catalog, which you could use to reproduce the first figure, above. For short queries that can execute in less than 60 seconds, you can hit the "Quick" button and the results of your query will be displayed below, where you can export them as needed. For longer queries like this one, you can select into an output table (otherwise a default like MyDB.MyTable will be used), hit the "Submit" button, and when finished your output table will be available in the MyDB tab.
Problem 3a
Write a SQL query to fetch this table from MAST using CasJobs.
Your possible variables are KIC_ID, KOI_ID, Planet_Radius, Planet_Radius_err_upper, Planet_Radius_err_lower, Incident_Flux, Incident_Flux_err_upper, Incident_Flux_err_lower, AO_Binary_Flag
For very short queries you can use "Quick" for your query; this table is large enough that you should use "Submit".
Hint: You will want to SELECT some stuff FROM a table called exoplanet_parameters
Solution 3a
Write your SQL query here
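One possible query, shown here as a Python string you could paste into CasJobs, is sketched below. The column list simply echoes the fields named in the problem statement; trim or extend it to suit what you want to plot later.

```python
# An illustrative CasJobs query string (a sketch, not the only answer):
query = """
SELECT KIC_ID, KOI_ID, Planet_Radius,
       Planet_Radius_err_upper, Planet_Radius_err_lower,
       Incident_Flux
FROM exoplanet_parameters
"""
```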
Once your query has completed, you will go to the MyDB tab to see the tables you have generated (in the menu at left). From here, you can click on a table, and select Download. I would recommend downloading your file as a CSV (comma-separated value) file, as CSV are simple, and can easily read into python via a variety of methods.
Problem 3b
Time to read in the data! There are several ways of importing a csv into python... choose your favorite and load in the table you downloaded.
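For example, with pandas (demonstrated on an in-memory string so the sketch runs anywhere; in practice you would pass the filename of the CSV you downloaded, e.g. `pd.read_csv('my_table.csv')`, and the column names below are only illustrative):

```python
import io
import pandas as pd

# stand-in for the downloaded CasJobs CSV
csv_text = "KIC_ID,Planet_Radius\n100,1.1\n200,3.9\n201,11.2\n"
radii = pd.read_csv(io.StringIO(csv_text))
```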
End of explanation
"""
# complete
"""
Explanation: Problem 3c
Let's look at the distribution of the small planet radii in this table, which are given in units of Earth radii. Select the planets whose radii are Neptune-sized or smaller.
Select the planet radii for all planets smaller than Neptune in the table, and visualize the distribution of planet radii using a standard histogram.
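The selection step might look like this (a sketch: Neptune's radius is roughly 3.9 Earth radii, the DataFrame here is a toy stand-in for the full table, and the column name follows the table description above):

```python
import pandas as pd

# toy stand-in for the full table; only the radius column matters here
radii = pd.DataFrame({'Planet_Radius': [0.8, 1.4, 2.2, 3.9, 11.2, 4.5]})
r_neptune = 3.9   # Earth radii (approximate)
small = radii[radii['Planet_Radius'] <= r_neptune]['Planet_Radius']
# `small` now holds the radii to histogram, e.g. plt.hist(small, bins='auto')
```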
End of explanation
"""
# complete
"""
Explanation: Problem 3d
What features do you see in the histogram of planet radii? Which of these features are important?
Discuss with your partner
Solution 3d
Write your answer here
Problem 3e
Now let's try visualizing these data using Bayesian Blocks. Please recreate the histogram you plotted above, and then plot the Bayesian Blocks version over it.
End of explanation
"""
from astropy.stats import bayesian_blocks as bb
"""
Explanation: Problem 3f
What features do you see in the histogram of planet radii? Which of these features are important?
Discuss with your partner.
Hint: maybe you should look at some of the comments in the implementation of Bayesian Blocks we are using
Solution 3f
Write your answer here
OK, so in this case, the Bayesian Blocks representation of the data looks fairly different. A couple things might stand out to you:
* There are large spikes in the Bayesian Blocks representation that are not present in the standard histogram
* If we're just looking at the Bayesian Block representation, it's not totally clear whether one should believe that there are two peaks in the distribution.
HMMMMMmmmm....
Wait! Jake VDP told us to watch out for this implementation. Maybe we should use something a little more official instead...
GOOD NEWS~! There is a Bayesian Blocks implementation included in astropy. Let's try that.
http://docs.astropy.org/en/stable/api/astropy.stats.bayesian_blocks.html
Important note
There is a known issue in the astropy implementation of Bayesian Blocks; see: https://github.com/astropy/astropy/issues/8317
It is possible this issue will be fixed in a future release, but in the event this problem arises for you, you will need to edit bayesian_blocks.py to include the following else statement (see issue link above for exact edit):
if self.ncp_prior is None:
ncp_prior = self.compute_ncp_prior(N)
else:
ncp_prior = self.ncp_prior
End of explanation
"""
# complete
"""
Explanation: Problem 3g
Please repeat the previous problem, but this time use the astropy implementation of Bayesian Blocks.
End of explanation
"""
# complete
"""
Explanation: Putting these results in context
Both standard histograms and KDEs can be useful for quickly visualizing data, and in some cases, getting an intuition for the underlying PDF of your data.
However, keep in mind that they both involve making parameter choices that are largely not motivated in any quantitative way. These choices can create wildly misleading representations of your data.
In particular, your choices may lead you to make a physical interpretation that may or may not be correct (in our example, bear in mind that the observed distribution of exoplanetary radii informs models of planet formation).
Bayesian Blocks is more than just a variable-width histogram
While KDEs often do a better job of visualizing data than standard histograms do, they also create a loss of information. Philosophically speaking, what Bayesian Blocks do is posit that the "change points", also known as the bin edges, contain information that is interesting. When you apply a KDE, you are smoothing your data by creating an approximation, and that can mean you are losing potential insights by removing information.
While Bayesian Blocks are useful as a replacement for histograms in general, their ability to identify change points makes them especially useful for time series analysis.
Problem 4) Bayesian Blocks for Time Series Analysis
While Bayesian Blocks can be very useful as a simple replacement for histograms, one of its great strengths is in finding "change points" in time series data. Finding these change points can be useful for discovering interesting events in time series data you already have, and it can be used in real-time to detect changes that might trigger another action (for example, follow up observations for LSST).
Let's take a look at a few examples of using Bayesian Blocks in the time series context.
First and foremost, it's important to understand the different between various kinds of time series data.
Event data come from photon counting instruments. In these data, the time series typically consists of photon arrival times, usually in a particular range of energies that the instrument is sensitive to. Event data are univariate, in that the time series is "how many photons at a given time", or "how many photons in a given chunk of time".
Point measurements are measurements of a (typically) continuous source at a given moment in time, often with some uncertainty associated with the measurement. These data are multivariate, as your time series relates time, your measurement (e.g. flux, magnitude, etc) and its associated uncertainty to one another.
Problem 4a
Let's look at some event data from BATSE, a high energy astrophysics experiment that flew on NASA's Compton Gamma-Ray Observatory. BATSE primarily studied gamma ray bursts (GRBs), capturing its detections in four energy channels: ~25-55 keV, 55-110 keV, 110-320 keV, and >320 keV.
You have been given four text files that record one of the BATSE GRB detections. Please read these data in.
End of explanation
"""
# complete
"""
Explanation: Problem 4b
When you reach this point, you and your partner should pick a number between 1 and 4; your number is the channel whose data you will work with.
Using the data for your channel, please visualize the photon events in both a standard histogram and using Bayesian Blocks.
End of explanation
"""
# execute this cell
kplr_hdul = fits.open('./data/kplr009726699-2011271113734_llc.fits')
"""
Explanation: Problem 4c
Let's take a moment to reflect on the differences between these two representations of our data.
Please discuss with your partner:
* How many bursts are present in these two representations?
* How accurately would you be able to identify the time of the burst(s) from these representations? What about other quantities?
For the groups who worked with Channel 3:
You may have noticed very sharp features in your blocks representation of the data. Are they real?
To quote Jake VdP:
Simply put, there are spikes because the piecewise constant likelihood model says that spikes are favored. By saying that the spikes seem unphysical, you are effectively adding a prior on the model based on your intuition of what it should look like.
To quote Jeff Scargle:
Trust the algorithm!
Problem 5: Finding flares
As we have just seen, Bayesian Blocks can be very useful for finding transient events-- it worked great on our photon counts from BATSE! Let's try it on some slightly more complicated data: lightcurves from NASA's Kepler mission. Kepler's data consists primarily of point measures (rather than events)-- a Kepler lightcurve is just the change in the brightness of the star over time (with associated uncertainties).
Problem 5a
People often speak of transients (like the GRB we worked with above) and variables (like RR Lyrae or Cepheid stars) as being two completely different categories of changeable astronomical objects. However, some objects exhibit both variability and transient events. Magnetically active stars are one example of these objects: many of them have starspots that rotate into and out of view, creating periodic (or semi-periodic) variability, but they also have flares, magnetic reconnection events that create sudden, rapid changes in the stellar brightness.
A challenge in identifying flares is that they often appear against a background that is itself variable. While there are many approaches to fitting both quiescent and flare variability (Gaussian processes, which you saw earlier this week, are often used for exactly this purpose!), they can be very time consuming.
Let's read in some data, and see whether Bayesian Blocks can help us here.
End of explanation
"""
# execute this cell
kplr_hdul.info()
"""
Explanation: Cool, we have loaded in the FITS file. Let's look at what's in it:
End of explanation
"""
# execute this cell
lcdata = kplr_hdul[1].data
lcdata.columns
# execute this cell
t = lcdata['TIME']
f = lcdata['PDCSAP_FLUX']
e = lcdata['PDCSAP_FLUX_ERR']
t = t[~np.isnan(f)]
e = e[~np.isnan(f)]
f = f[~np.isnan(f)]
nf = f / np.median(f)
ne = e / np.median(f)
"""
Explanation: We want the light curve, so let's check out what's in that part of the file!
End of explanation
"""
# complete
"""
Explanation: Problem 5b
Use a scatter plot to visualize the Kepler lightcurve. I strongly suggest you try displaying it at different scales by zooming in (or playing with the axis limits), so that you can get a better sense of the shape of the lightcurve.
End of explanation
"""
# complete
edges =
#
#
#
plt.step(
"""
Explanation: Problem 5c
These data consist of a variable background, with occasional bright points caused by stellar flares.
Brainstorm possible approaches to find the flare events in this data. Write down your ideas, and discuss their potential advantages, disadvantages, and any pitfalls that you think might arise.
Discuss with your partner
Solution 5c
Write your notes here
There are lots of possible ways to approach this problem. In the literature, a very common traditional approach has been to fit the background variability while ignoring the outliers, then to subtract the background fit, and flag any point beyond some threshold value as belonging to a flare. More sophisticated approaches also exist, but they are often quite time consuming (and in many cases, detailed fits require a good starting estimate for the locations of flare events).
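A sketch of that traditional recipe, on synthetic data (the running-median background, MAD-based robust sigma, window size, and threshold are all arbitrary choices made here for illustration, not the method used by any particular paper):

```python
import numpy as np

def flag_flares(flux, window=25, nsigma=3.0):
    # running-median background estimate (simple, but slow for big arrays)
    n = len(flux)
    background = np.array([
        np.median(flux[max(0, i - window):i + window + 1]) for i in range(n)
    ])
    residual = flux - background
    # robust sigma from the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    # flag only positive excursions (flares brighten the star)
    return residual > nsigma * sigma

# synthetic lightcurve: slow sinusoidal variability plus one injected flare
rng = np.random.default_rng(42)
t = np.linspace(0, 10, 500)
flux = 1.0 + 0.05 * np.sin(t) + 0.002 * rng.standard_normal(500)
flux[240:244] += 0.5
flares = flag_flares(flux)
```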
Recall, however, that Bayesian Blocks is particularly effective at identifying change points in our data. Let's see if it can help us in this case!
Problem 5d
Use Bayesian Blocks to visualize the Kepler lightcurve. Note that you are now using data that consists of point measurements, rather than event data (as for the BATSE example).
End of explanation
"""
# complete
your_data =
# complete
def stratified_bayesian_blocks(
"""
Explanation: Concluding Remarks
As you can see, using Bayesian Blocks allowed us to represent the data, including both quiescent variability and flares, without having to smooth, clip, or otherwise alter the data.
One potential drawback here is that the change points don't identify the flares themselves, or at least don't identify them as being different from the background variability-- the algorithm identifies change points, but does not know anything about what change points might be interesting to you in particular.
Another potential drawback is that it is possible that the Bayesian Block representation may not catch all events, or at least may not, on its own, provide an unambiguous sign that a subtle signal of interest is in the data.
In these cases, it is sometimes instructive to use a hybrid approach, where one combines the bins determined from a traditional histogram with the Bayesian Blocks change points. Alternatively, if one has a good model for the background, one can compared the blocks representation of a background-only simulated data set with one containing both background and model signal.
Further interesting examples (in the area of high energy physics and astrophysics) are provided by this paper:
https://arxiv.org/abs/1708.00810
Challenge Problem
Bayesian Blocks is so great! However, there are occasions in which it doesn't perform all that well -- particularly when there are large numbers of repeated values in the data. Jan Florjanczyk, a senior data scientist at Netflix, has written up a description of the problem, and implemented a version of Bayesian Blocks that does a better job on data with repeating values. Read his blog post on "Stratified Bayesian Blocks" and try implementing it!
https://medium.com/@janplus/stratified-bayesian-blocks-2bd77c1e6cc7
You can either apply this to your favorite data set, or you can use it on the stellar parameters table associated with the data set we pulled from MAST earlier (recall that you previously worked with the exoplanet parameters table). You can check out the fields you can search on using MAST CasJobs, or just download a FITS file of the full table, here: https://archive.stsci.edu/prepds/kg-radii/
<sub>This GIF is of Dan Shiffman, who has a Youtube channel called Coding Train</sub>
End of explanation
"""
|
tuanavu/python-cookbook-3rd | notebooks/.ipynb_checkpoints/04_finding_the_largest_or_smallest_n_items-checkpoint.ipynb | mit | import heapq
nums = [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2]
print(heapq.nlargest(3, nums)) # Prints [42, 37, 23]
print(heapq.nsmallest(3, nums)) # Prints [-4, 1, 2]
"""
Explanation: Finding the Largest or Smallest N Items
Problem
You want to make a list of the largest or smallest N items in a collection.
Solution
The heapq module has two functions—nlargest() and nsmallest()
Example 1
End of explanation
"""
portfolio = [
{'name': 'IBM', 'shares': 100, 'price': 91.1},
{'name': 'AAPL', 'shares': 50, 'price': 543.22},
{'name': 'FB', 'shares': 200, 'price': 21.09},
{'name': 'HPQ', 'shares': 35, 'price': 31.75},
{'name': 'YHOO', 'shares': 45, 'price': 16.35},
{'name': 'ACME', 'shares': 75, 'price': 115.65}
]
cheap = heapq.nsmallest(3, portfolio, key=lambda s: s['price'])
expensive = heapq.nlargest(3, portfolio, key=lambda s: s['price'])
print(cheap)
print(expensive)
"""
Explanation: Example 2
Use with key parameter
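If you want more than just the N extremes -- for example, to consume items lazily in sorted order -- you can build the heap yourself, which is roughly what nlargest()/nsmallest() do internally (a sketch):

```python
import heapq

nums = [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2]
heap = list(nums)
heapq.heapify(heap)            # smallest item is now heap[0]
smallest_three = [heapq.heappop(heap) for _ in range(3)]
print(smallest_three)          # prints [-4, 1, 2]
```

As a rule of thumb: for N == 1, plain min()/max() are faster; for N close to len(nums), sorted(nums)[:N] is usually better; nlargest()/nsmallest() shine when N is small relative to the size of the collection.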
End of explanation
"""
import os
file_dir = os.path.dirname(os.path.realpath('__file__'))
filename = os.path.abspath(os.path.join(file_dir, "..", "code/src/1/keeping_the_last_n_items/somefile.txt"))
!head $filename
from collections import deque
def search(lines, pattern, history=5):
previous_lines = deque(maxlen=history)
for line in lines:
if pattern in line:
yield line, previous_lines
previous_lines.append(line)
# Example use on a file
if __name__ == '__main__':
with open(filename) as f:
for line, prevlines in search(f, 'python', 5):
for pline in prevlines:
print(pline, end='')
print(line, end='')
print('-'*20)
"""
Explanation: Example 3
The following code uses collections.deque (from the related "Keeping the Last N Items" recipe) to perform a simple text match on a sequence of lines, yielding the matching line along with the previous N lines of context when found
End of explanation
"""
|
amandersillinois/landlab | notebooks/tutorials/data_record/DataRecord_tutorial.ipynb | mit | import numpy as np
from landlab import RasterModelGrid
from landlab.data_record import DataRecord
from landlab import imshow_grid
import matplotlib.pyplot as plt
from matplotlib.pyplot import plot, subplot, xlabel, ylabel, title, legend, figure
%matplotlib inline
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
DataRecord Tutorial
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial illustrates how to record variables of a Landlab model using DataRecord.
What is DataRecord?
DataRecord is a data structure that can hold data variables relating to a Landlab model or to items living on the Landlab grid.
DataRecord is built on xarray's Dataset structure: a multi-dimensional, in memory, array database. Dataset implements the mapping interface with keys given by variable names and values given by DataArray objects for each variable name. DataRecord inherits all the methods and attributes from xarray.Dataset.
A DataRecord can have one or both (or none) of the following dimensions:
- time: The simulated time in the model.
- item_id: An identifier of a generic item in the model.
Coordinates are one dimensional arrays used for label-based indexing.
The examples below illustrate different use cases for DataRecord.
We start by importing the necessary libraries:
End of explanation
"""
grid_1 = RasterModelGrid((10, 10), (1., 1.))
z = np.random.rand(100)
_ = grid_1.add_field('topographic__elevation', z, at='node')
"""
Explanation: Case 1. DataRecord with 1 dimension: time
Let's start with an example where we set DataRecord to have only time as a dimension.
An example variable that varies over time and relates to the Landlab grid could be the mean elevation of the topographic surface. We will store this example variable in DataRecord.
We create a Raster grid, create a field (at nodes) called topographic__elevation and populate it with random values.
End of explanation
"""
current_mean = np.mean(grid_1.at_node['topographic__elevation'])
print(current_mean)
"""
Explanation: Print the current mean elevation.
End of explanation
"""
dr_1 = DataRecord(grid_1,
time=[0.],
items=None,
data_vars={'mean_elevation': (['time'], ([current_mean]))},
attrs={'mean_elevation': 'y'})
"""
Explanation: Now we will create a DataRecord that will hold the data variable mean_elevation relating to grid_1. The first value, at time=0 is the current mean elevation on the grid.
End of explanation
"""
dr_1
"""
Explanation: The input arguments passed in this case are: the grid, time (as a 1-element list), a data variable dictionary and an attributes dictionary. Note that items is not filled, we will see its use in other cases below.
Note the format of the data_vars dictionary:
python
{'variable_name_1' : (['dimensions'], variable_data_1),
'variable_name_2' : (['dimensions'], variable_data_2),
...}
The attributes dictionary attrs can be used to store metadata about the variables: in this example, we use it to store the variable units.
So far, our DataRecord dr_1 holds one variable mean_elevation with one record at time=0.
End of explanation
"""
dr_1.dataset.to_dataframe()
"""
Explanation: We can visualise this data structure as a pandas dataframe:
End of explanation
"""
total_time = 100
dt = 20
uplift_rate = 0.01 # m/y
for t in range(20, total_time, dt):
grid_1.at_node['topographic__elevation'] += uplift_rate * dt
dr_1.add_record(time=[t],
new_record={
'mean_elevation':
(['time'],
([np.mean(grid_1.at_node['topographic__elevation'])]))
})
"""
Explanation: Now we will run a simple model where the grid surface is uplifted several times and the mean elevation is recorded at every time step. We use the method add_record to put the new value in the DataRecord dr_1:
End of explanation
"""
dr_1.dataset['mean_elevation'].values
"""
Explanation: Let's see what was recorded:
End of explanation
"""
dr_1.dataset.time.values
"""
Explanation: The corresponding time coordinates are:
End of explanation
"""
dr_1.time_coordinates
"""
Explanation: Notice the different syntax used here:
- time is a dimension and can be called by dr_1.time (or dr_1['time'])
- whereas mean_elevation is a variable and must be called by dr_1['mean_elevation']
DataRecord also has the handy property time_coordinates that returns these values as a list:
End of explanation
"""
dr_1.get_data(time=[20.], data_variable='mean_elevation')
dr_1.set_data(time=[80.], data_variable='mean_elevation', new_value=1.5)
dr_1.dataset['mean_elevation']
"""
Explanation: You can use the methods get_data and set_data to access and change the data:
End of explanation
"""
grid_2 = RasterModelGrid((5, 5), (2, 2))
boulders = {
'grid_element': 'node',
'element_id': np.array([6, 11, 12, 17, 12])
}
initial_boulder_sizes = np.array([1, 1.5, 3, 1, 2])
boulder_lithologies = np.array(
['sandstone', 'granite', 'sandstone', 'sandstone', 'limestone'])
dr_2 = DataRecord(grid_2,
time=None,
items=boulders,
data_vars={
'boulder_size': (['item_id'], initial_boulder_sizes),
'boulder_litho': (['item_id'], boulder_lithologies)
},
attrs={'boulder_size': 'm'})
dr_2.dataset.to_dataframe()
"""
Explanation: Case 2. DataRecord with 1 dimension: item_id
An important feature of DataRecord is that it allows to create items that live on grid elements, and variables describing them. For instance, we can create boulders and store information about their size and lithology.
To create items, we need to instantiate a DataRecord and pass it a dictionary describing where each item lives on the Landlab grid. The format of this dictionary is:
python
{'grid_element' : [grid_element],
'element_id' : [element_id]}
where:
- grid_element is a str or number-of-items-long array containing strings of the grid element(s) on which the items live (e.g.: node, link). Valid locations depend on the grid type (my_grid.groups gives the valid locations for your grid). If grid_element is provided as a string, it is assumed that all items live on the same type of grid element.
- element_id is an array of integers identifying the grid element IDs on which each item resides. For each item, element_id must be less than the number of this item's grid_element that exist on the grid. For example, if the grid has 10 links, no item can live at link 10 or link -3 because only links 0 to 9 exist in this example.
End of explanation
"""
dr_2.add_item(
new_item={
'grid_element': np.array(['link', 'node']),
'element_id': np.array([24, 8])
},
new_item_spec={'boulder_size': (['item_id'], np.array([1.2, 2.]))})
dr_2.dataset.to_dataframe()
"""
Explanation: Each item (in this case, each boulder) is designated by an item_id, its position on the grid is described by a grid_element and an element_id.
We can use the method add_item to add new boulders to the record:
End of explanation
"""
dr_2.set_data(data_variable='boulder_litho',
item_id=[5, 6],
new_value=['sandstone', 'granite'])
dr_2.dataset.to_dataframe()
"""
Explanation: Notice that we did not specify the lithologies of the new boulders, their recorded values are thus set as NaN. We can use the set_data method to report the boulder lithologies:
End of explanation
"""
mean_size = dr_2.calc_aggregate_value(func=np.mean,
data_variable='boulder_size')
mean_size
"""
Explanation: We can use the method calc_aggregate_value to apply a function to a variable aggregated at grid elements. For example, we can calculate the mean size of boulders on each node:
End of explanation
"""
# replace nans with 0:
mean_size[np.isnan(mean_size)] = 0
# show unfiltered mean sizes on the grid:
imshow_grid(grid_2, mean_size)
"""
Explanation: Notice that boulder #5 is on a link so it is not taken into account in this calculation.
End of explanation
"""
# define a filter array:
filter_litho = (dr_2.dataset['boulder_litho'] == 'sandstone')
# aggregate by node and apply function numpy.mean on boulder_size
filtered_mean = dr_2.calc_aggregate_value(func=np.mean,
data_variable='boulder_size',
at='node',
filter_array=filter_litho)
filtered_mean
"""
Explanation: Before doing this calculation we could filter by lithology and only use the 'sandstone' boulders in the calculation:
End of explanation
"""
grid_3 = RasterModelGrid((5, 5), (2, 2))
initial_boulder_sizes_3 = np.array([[10], [4], [8], [3], [5]])
# boulder_lithologies = np.array(['sandstone', 'granite', 'sandstone', 'sandstone', 'limestone']) #same as above, already run
boulders_3 = {
'grid_element': 'node',
'element_id': np.array([[6], [11], [12], [17], [12]])
}
dr_3 = DataRecord(grid_3,
time=[0.],
items=boulders_3,
data_vars={
'boulder_size': (['item_id',
'time'], initial_boulder_sizes_3),
'boulder_litho': (['item_id'], boulder_lithologies)
},
attrs={'boulder_size': 'm'})
dr_3
"""
Explanation: Case 3. DataRecord with 2 dimensions: item_id and time
We may want to record variables that have both dimensions time and item_id.
In the previous example, some variables that characterize the items (boulders) may not vary with time, such as boulder_lithology. It can, however, be interesting to keep track of the change in size through time. We will redefine the DataRecord such that the variable boulder_size varies among the items/boulders (identified by item_id) and through time. The variable boulder_litho varies only among the items/boulders; this lithology variable does not vary through time.
End of explanation
"""
boulder_lithologies.shape, initial_boulder_sizes.shape, initial_boulder_sizes_3.shape
"""
Explanation: Note that the syntax to define the initial_boulder_sizes_3 (as well as element_id) has changed: they are number-of-items-by-1 arrays because they vary along both time and item_id (compared to boulder_lithologies which is just number-of-items long as it only varies along item_id).
End of explanation
"""
dt = 100
total_time = 100000
time_index = 1
for t in range(dt, total_time, dt):
# create a new time coordinate:
dr_3.add_record(time=np.array([t]))
# this propagates grid_element and element_id values forward in time (instead of the 'nan' default filling):
dr_3.ffill_grid_element_and_id()
for i in range(0, dr_3.number_of_items):
# value of block erodibility:
if dr_3.dataset['boulder_litho'].values[i] == 'limestone':
k_b = 10**-5
elif dr_3.dataset['boulder_litho'].values[i] == 'sandstone':
k_b = 3 * 10**-6
elif dr_3.dataset['boulder_litho'].values[i] == 'granite':
k_b = 3 * 10**-7
else:
print('Unknown boulder lithology')
dr_3.dataset['boulder_size'].values[i, time_index] = dr_3.dataset[
'boulder_size'].values[i, time_index - 1] - k_b * dr_3.dataset[
'boulder_size'].values[i, time_index - 1] * dt
time_index += 1
print('Done')
figure(figsize=(15, 8))
time = range(0, total_time, dt)
boulder_size = dr_3.dataset['boulder_size'].values
subplot(121)
plot(time, boulder_size[1], label='granite')
plot(time, boulder_size[3], label='sandstone')
plot(time, boulder_size[-1], label='limestone')
xlabel('Time (yr)')
ylabel('Boulder size (m)')
legend(loc='lower left')
title('Boulder erosion by lithology')
# normalized plots
subplot(122)
plot(time, boulder_size[1] / boulder_size[1, 0], label='granite')
plot(time, boulder_size[2] / boulder_size[2, 0], label='sandstone')
plot(time, boulder_size[-1] / boulder_size[-1, 0], label='limestone')
xlabel('Time (yr)')
ylabel('Boulder size normalized to size at t=0 (m)')
legend(loc='lower left')
title('Normalized boulder erosion by lithology')
plt.show()
"""
Explanation: Let's define a very simple erosion law for the boulders:
$$
\begin{equation}
\frac{dD}{dt} = -k_{b} . D
\end{equation}
$$
where $D$ is the boulder diameter $[L]$ (this value represents the boulder_size variable), $t$ is time, and $k_{b}$ is the block erodibility $[L.T^{-1}]$.
We will now model boulder erosion and use DataRecord to store their size through time.
End of explanation
"""
dr_3.variable_names
dr_3.number_of_items
dr_3.item_coordinates
dr_3.number_of_timesteps
dr_1.time_coordinates
dr_1.earliest_time
dr_1.latest_time
dr_1.prior_time
"""
Explanation: Other properties provided by DataRecord
End of explanation
"""
|
Olsthoorn/TransientGroundwaterFlow | exercises_notebooks/ReversibleStorage.ipynb | gpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
import pdb
"""
Explanation: Reversible groundwater storage
End of explanation
"""
dg = np.array([0.002, 0.063, 0.2, 0.630, 2.0 ]) * 1e-3  # grain diameters given in mm, converted to m
"""
Explanation: Introduction
In the remainder of this syllabus, we will restrict ourselves to reversible groundwater storage phenomena only, i.e. phenomena in which the porous medium is not changed.
In groundwater flow systems, three separate forms of storage may be distinguished:
Phreatic storage, which occurs in unconfined aquifers, i.e. aquifers with a free water table. It is due to filling and emptying of pores at the top of the saturated zone.
Elastic storage, which is due to combined compressibility of the water, the grains and the porous matrix (soil skeleton).
Sometimes the interface between fresh water and another fluid (be it saline water, oil or gas) can provide a third type of storage. This works by displacement of the interface, generally between the fresh water and the saline water. When displacing an interface, the total volume of water in the subsurface remains the same; however, the amount of usable fresh water may increase (or decrease) at the expense of saline water, and therefore one may consider this a form of storage of fresh water.
Specific yield
The water table in an unconfined or phreatic layer is the elevation where the pressure equals the atmospheric pressure. It is in no way a sharp boundary between water and air, like the one between the water table of surface water and the air above it. The boundary between the saturated and unsaturated zone is not sharp; there is water throughout much of the unsaturated zone, that is, above the plane where the pressure equals the atmospheric one. In this lecture we study this zone and its implications for the specific yield of an aquifer. What happens when we say that the water table sinks or rises?
Phreatic storage is due to the filling and emptying of pores above the saturated zone, i.e. above the water table. Because it is related to changes of the water table, it is limited to phreatic (unconfined) aquifers.
The storage coefficient for an unconfined aquifer is called specific yield and is denoted by the symbol $S_{y}$. It is dimensionless, as follows from its definition
$$S_{y}=\frac{\partial V_{w}}{\partial h}$$
where $\partial V_{w}$ is the change of volume of water from a column of aquifer per unit of surface area and $dh$ is the change of the water table elevation.
$S_{y}$, therefore, is the amount of water released from storage per square meter of aquifer per meter of drawdown of the water table.
Hydrogeologists, and groundwater engineers alike, often treat specific yield as a constant. In reality, the draining and filling of pores is more complex and this should be kept in mind in order to judge differences of $S_{y}$ values under different circumstances even with the same aquifer material. This will be explained further down.
There is no such thing as a sharp boundary between the saturated and the unsaturated porous medium above and below the water table. In fact, the water content is continuous across the water table.
The water table is, by definition, the elevation where the pressure equals atmospheric pressure
Because we relate all pressures relative to atmospheric we may say the water table is the elevation where the water pressure is zero (relative to the pressure of the atmosphere).
The soil itself may be considered to consist of a dense network of connected tortuous pores of widely varying diameter that may be fully or partially filled with water. Due to adhesive forces pores may even be fully filled above the water table.
In pores above the water table the pressure is negative (i.e. below atmospheric).
If the grains can be wetted (attract water), as is generally the case, water will be sucked up against gravity into the pores above the water table over a certain height. This height mainly depends on the diameter of the pores.
Unsaturated zone and capillary zone
For the purpose of better understanding the unsaturated zone, one often envisions it as a network of small interconnected pores. The simplest picture is that of a single vertical pore, a straw, that ends in the saturated zone. We know from experience that, if the straw is small enough, the water in it will rise above the water table. We have learned to say that this is due to adhesion between the water and the wall of the pore.
Looking at this straw with the water risen in it until it has reached equilibrium, the relation describing it is easy enough. The weight of the water that has been sucked up must equal the cohesion around the circumference of the straw. Hence:
$$ \rho g h_c \pi r^2 = 2 \pi r \gamma \cos \alpha $$
where $\rho$ [kg/m3] is the density of the fluid, $g$ [N/lg] is gravity, $r$ [m] the radius of the straw $\gamma$ [N/m] the cohesion force and $\alpha$ [radians, i.e. L/L] the angle that the water surface makes with the surface of the straw.
Hence
$$ h_c = \frac {2 \gamma \cos \alpha} { \rho \, g \, r } $$
A first exercise is to gain a feeling for how large this suction can be.
To get an idea, we know that surface tension $\tau$ [N/m] works in the free water surface in the straw, and we know its value from our school handbooks or from looking it up in the Internet:
$$\tau = 75 \times 10^{-3} \,\, N/m$$
If the angle $\alpha$ is small, as it usually is for wettable surfaces like sand (for clean glass even $\alpha \approx 0$), then $\cos \alpha \approx 1$, so that $\gamma \approx \tau$.
With this we have
$$ h_c \approx \frac {2 \tau } { \rho \, g \, r} $$
Next, we need an idea of how large the pore radii of a porous medium are.
If we have a matrix consisting of spheres of diameter $d$ and a porosity $\epsilon$, the volume of the grains in one m3 is:
$$ V_g = \frac 1 6 \pi d^3 n = 1 - \epsilon $$
with $n$ the number of grains in one m3. The surface area of this mass equals
$$ A = \pi d^2 n $$
Noting that the grains and the pores share the same surface area, we have the following ratios
$$ \frac {V_g} A = \frac {1-\epsilon} A = \frac {d_g} 6 $$
$$ \frac {V_p} A = \frac \epsilon A = \frac {d_p} 6 $$
allowing to conclude that
$$ d_p = \frac \epsilon {1 - \epsilon} d_g $$
while $d_g$ can be obtained from sand-sieving. With an often found value of $\epsilon \approx 35%$ we get
$$ r_p \approx 0.5 r_g = 0.25 d_g $$
Let's say we have the following grain diameters:
End of explanation
"""
g = 9.81 # N/kg (gravity)
rho = 1000. # kg/m3 (water)
tau = 75e-3 # N/m
por = 0.35
rp = dg * por / (1 - por)
hc = 2 * tau / (rho * g * rp)
print("\n\nResults of computing capillary rise given grain diameters (porosity = {}):\n".format(por))
print( (16*" " + "{:>8s} | {:>8s} | {:>8s} | {:>8s} | {:>8s} | {:>8s}").
format("clay ","silt ","fine sand","med. sand","crs sand", "gravel"))
print("Grain diam.[mm]: ", end=""); print(("{:11.3g}" * len(dg)).format(*(dg * 1000)))
print("Grain diam. [m]: ", end=""); print(("{:11.3g}" * len(dg)).format(*dg))
print("Pore radius [m]: ", end=""); print(("{:11.3g}" * len(rp)).format(*rp))
print("Cap. rise [m]: ", end=""); print(("{:11.3g}" * len(rp)).format(*hc))
print()
"""
Explanation: these are the diameters that bound the bins "silt", "fine sand", "medium sand" and "coarse sand" (with clay below the smallest value and gravel above the largest).
Then we can compute the capillary rise for straws with these pore sizes as follows:
End of explanation
"""
x = plt.hist(np.random.randn(int(1e6)), bins=25, cumulative=True, normed=True)
plt.show()
"""
Explanation: This shows that, in the sand range, the capillary rise is expected to vary from about 1.5 cm for coarse sand with a grain diameter of 2 mm to about 0.5 m for fine sand with a grain diameter of 0.06 mm.
Aquifer material, i.e. sediment, has different pore sizes; it can therefore be pictured as a bundle of straws of different sizes in which the water rises to different heights according to the radius of each straw.
Due to the presence of many pore sizes, the moisture content of the sand will decline with distance from the water table. The thickness of the capillary zone is that which corresponds to $h_c$ of the widest pores. Above this elevation, more and more pores will be dry and the lower will be the water content, as shown in the figure to the right.
A soil property is the so-called "air entry pressure". This is the air pressure that has to be imposed at one end of a soil sample, with ambient air pressure at the lower side, before air is blown through the sample. What is the relation between this "air entry pressure", the pore width and the height of the fully capillary zone?
Moisture content and sediment particle distribution (sand sieve curves)
Real aquifer material has a grain size distribution that shows the mass fraction versus grain diameter, as made visible by sieving the sand with a set of sieves of different standard opening sizes and weighing the amount of sand that remains on each sieve. Such curves often look like a cumulative normal probability density function, such as the one below, which is readily generated by sampling the normal probability density function, for instance a million times, collecting the samples in a set of bins and drawing a histogram of them, or rather a cumulative histogram, like so:
End of explanation
"""
d50 = 0.2 # mm
U = 2.0
# convert to linear scale to sample from the normal probability density function
mu = np.log10(d50)
sigma = np.log10(np.sqrt(U))
x = np.random.randn(int(1e4)) * sigma + mu # sample a million points
# convert back to the real-world scale
d = 10.**x
# show histogram of normalized comumulative distribution = sieve curve
ax = plt.figure().add_subplot(111)
ax.hist(d, bins=100, normed=True, cumulative=True)
# get the x-axis and covert to log scale, set labels
ax.set(xscale='log', xlim=(1e-1, 1.), xlabel='d [mm]', ylabel='mass fraction [-]')
ax.set_title(r"Generated sand sieve curve with $d_{50} = 0.2\,$ mm and $u = \sqrt{2}$")
ax.grid(True)
# Plot the mean and the sigma's as derived above
ax.plot(10**mu, 0.5, 'ro')
ax.plot([10**(mu-sigma), 10**mu], [0.16, 0.16], 'ro-') # sigma at 16%
ax.plot([10**mu, 10**(mu+sigma)], [0.84, 0.84], 'ro-') # sigma at 84%
ax.plot([10**mu, 10**mu], [0., 1.], 'r-') # vertical line through d50
plt.show()
"""
Explanation: If this were a sieve curve, the values on the x-axis would be the grain diameter on a log scale, so -2 would mean 0.01 mm, 0 would mean 1 mm and 2 would mean 100 mm. In that case the distribution is not normal but log-normal (it is normal when the horizontal axis is plotted on log scale). The vertical axis of a sieve curve is the mass fraction that has a grain size smaller than d, so the $y$-axis starts at zero and runs to 1.0.
The normal probability density function is defined as
$$ p = \frac 1 {\sqrt {2 \pi} } \exp \left( - \frac { (x -\mu)^2 } {2 \sigma^2 } \right) $$
On log scale, $x = \log d$ and $\mu = \log d_{50}$ while $\sigma$ can be expressed as a factor as it yields a constant distance on log scale. $\sigma = \log \alpha$
$$p = \frac 1 {\sqrt{ 2 \pi } } \exp \left(- \frac {\log^2 \left( \frac d {d_{50}} \right)} {2 \log^2 \alpha} \right)$$
The spread of a sieve curve is normally quantified by its so-called coefficient of uniformity, $U= \frac {d_{60}} {d_{10}}$. For a normal distribution we could rather use $\sigma$ or $2 \sigma$, which is the distance between the points where the cumulative probability density function has the values of 16% and 84%. (Remember that 68% of the samples from a normal probability density function lie between $\mu - \sigma$ and $\mu + \sigma$; that is, 16% lie below $\mu-\sigma$ and 16% above $\mu + \sigma$.) Hence we approximate $U$ by
$$ U \approx \frac {d_{84}} {d_{16}} $$
so that
$$ \log U = \log (d_{84}) - \log (d_{16}) = 2 \sigma $$
or
$$ \sigma \approx \log \sqrt U $$
To generate a sieve curve with mean diameter $d_{50}$ and spread $U$, we sample values of $x$ from the normal probability density function with mean $\mu = \log d_{50}$ and $\sigma = \log \sqrt U$, and translate the sample back to real grain diameters by
$$ d = 10^{x} $$
Example: Generate a sieve curve with $d_{50} = 0.2\, mm$ and $U = \sqrt 2$
End of explanation
"""
g = 9.81 # N/kg, gravity
rho = 1000. # density of water, kg/m3
por = 0.35 # porosity
tau = 75.e-3 # N/m water surface tension
r = np.sort((d/2) * por/(1 - por)) # also sort
hc = 2 * tau / (rho * g * r/1000.) # hc in m by converting r to m
ax1 = plt.figure().add_subplot(111)
ax1.hist(hc, bins=100, normed=True, cumulative=True)
ax1.set(xlabel="hc [m]", ylabel="pore fraction [-]", title="volume fraction versus hc")
plt.show()
fr = np.cumsum(r)/np.sum(r) # fraction of pores with radius smaller than r
ax1=plt.figure().add_subplot(111)
ax1.set(xscale='linear',xlabel='pore fraction',ylabel='capillary rise', title='distribution of filled pores')
ax1.plot(fr,hc,'r')
ax1.grid(True)
plt.show()
"""
Explanation: With this instrument in place, we can now investigate the moisture distribution of an arbitrary sand whose grain distribution can be characterized by a normal probability density function with mean grain size $d_{50}$ and uniformity coefficient $U$.
The grain size distribution is easily converted to a pore radius distribution by noting that the volume of the pores is $\frac{\epsilon}{1-\epsilon}$ times the volume of the grains, where $\epsilon$ is the porosity. This is a constant factor, which only shifts the distribution horizontally on its log axis.
$$ r = \frac d 2 \,\,\frac { \epsilon} { 1 - \epsilon }$$
Using our relation between pore radius $r$ and capillary rise $h_c$, we can associate a capillary rise with each pore radius.
End of explanation
"""
phi_units = np.arange(-11, 10)
size = ["v. large", "large", "medium", "small",
"large", "small",
"v. coarse", "coarse", "medium", "fine", "v. fine",
"v. coarse", "coarse", "medium", "fine", "v. fine",
"v. coarse", "coarse", "medium", "fine", "v. fine"]
cla1 = 4 * ["boulders"] + 2 * ["cobbles"] + 5 * ["pebbles"] + 5 * ["sand"] + 5 * ["silt"]
cla2 = 11 * ["gravel"] + 5 * ["sand"] + 5 * ["mud"]
print(("{:11s}" * 6).format("d mm", "d mm", "size", "class", "class", " hc [m]"))
for pu, sz, c1, c2 in zip(phi_units, size, cla1, cla2):
rp = 2**(-pu) * por/(1-por) /2.
h = 2 *tau / (rho * g * rp/1000)
if pu<0:
print("{:<7d}".format(2**(-pu)), end="")
else:
print("1/{:<5d}".format(2**pu), end="")
print(("{:>10.3g} " + 3 * " {:<10s}" + "{:10.3g}").format(2.**(-pu), sz, c1, c2, h))
D50 = [0.002, 0.02, 0.2, ]
mult = [0.1, 0.333, 1.0, 3.33, 10.]
u = [1.2, ]
ax2 = plt.figure().add_subplot(111)
ax2.set(xlabel='pore fraction [-]', ylabel='hc [m]', title='cap. rise versus pore fraction')
for m in mult:
hc = 2 * tau / (rho * g * (m * r/1000))
ax2.plot(fr, hc, label="d50 = {:.3g} mm".format(d50 * m))
ax2.legend(loc='best')
plt.show()
"""
Explanation: Now that we are able to generate a capillary rise curve from the mean diameter and the uniformity coefficient of a sand sieve curve, we may investigate different situations, for example curves with more spread.
Let's use the same r, but multiply it to make the sand finer or coarser
End of explanation
"""
def sieveCurve(sediment, sed_props, n=100000):
"""Returns sieve curve data from input.
parameters:
-----------
sediment: [str,: name of the sediment, float:porosity]
sed_props: [[f, d50, u, clr], [f, d50, u, clr], ...]
f is the relative mass contribution,
d50 the mean grain diameter,
u the uniformity
taken as ratio of the diameters between +- sigma around d50.
The distributions are assumed normal with mu=log10(d50) and sigma=log(sqrt(u))
clr the color of the line
n : int, total number of random samples
"""
D = np.array([])
name, por = sediment
ax1.set_title(name) # name of the sediment
ax1.set_xlabel('d [mm]')
ax1.set_ylabel(' mass fraction []')
ax2.set_ylabel(' dm/d(log(d))')
for props in sed_props:
f, d50, u, clr = props
m = round(f * n)
mu = np.log10(d50)
sigma = np.log10(np.sqrt(u))
print("f={:10.3f}, d50={:10.3f}, u={:10.3f}, m={:10.3f}, mu={:10.3f}, sigma={:10.3f}".format(f, d50, u, m, mu, sigma))
# sampling the normal distribution
x = np.sort(np.random.randn(int(m)) * sigma + mu)
d = 10.0**x # convert to log normal
h = np.arange(1, m+1) / m # because we measure and sample mass not grains
ax1.plot(d, h, clr, label=name) # cumulative distribtuion
ax1.plot(d50, 0.5, 'ro') # show its center
ax1.plot([10**(mu-sigma), d50], [0.16, 0.16], 'ro-') # show sigma at 16%
ax1.plot([d50, 10**(mu+sigma)], [0.84, 0.84], 'ro-') # show sigma at 84%
ax1.plot([d50, d50], [0., 1.], 'r-') # vertical line through d50
# sampling the cumulative distribution for the derivative
xs = np.linspace(mu-3*sigma, mu+3*sigma, 200) # 200 sampling points
dx = np.diff(xs)
xm = 0.5 * (xs[:-1] + xs[1:])
dm = 10**xm # d at between two sampling locations
hi = np.interp(xs, x, h) # get interpolated points of cum. distr.
dh = np.diff(hi) # increment of cum. distr.
p = dh/dx # derivative is prob. density on linear scale
ax2.plot(dm, f * dh/dx, 'g')
D = np.hstack((D, d))
# pdf combined
D = np.sort(D)
X = np.log10(D) # back to linear axis
N = len(D)
H = np.arange(1., N+1) / N
ax1.plot(D, H, 'k-', linewidth=3, label=name) # combined cumulative pdf
    Xs = np.linspace(X[0], X[-1], 201)  # use the combined X, not the last constituent's x
Hi = np.interp(Xs, X, H) # sample H
Xm = 0.5 * (Xs[1:] + Xs[:-1]) # sample locations for derivative
Dm = 10**Xm # to log scale
ax2.plot(Dm, np.diff(Hi)/np.diff(Xs), 'k-', label=name)
# plot hc versus volume fraction
r = D/2 * por / (1 - por) / 1e6 # pore radius in m
hc = 2 * tau / (rho * g * r)
ax3.set_title(name)
ax3.plot(H * por, hc, clr, label=name)
return D
titles = [["(a) loess from Xian, southern Loess Plateau", 0.45],
["(b) sand from the Mu-Us Desert", 0.45],
["(c) late Tertiary aeolian red clay from Xifeng, central Loess Plateau", 0.45],
["(d) locally derived loess from Zhengzhou, on the south bank of the Yellow River", 0.45]]
sediments = [[[0.55, 4., 10., 'r'], [0.45, 10., 4., 'r']],
[[0.06, 6., 3., 'b'], [0.94, 150., 8., 'b']],
[[0.80, 6., 10., 'g'], [0.20, 35., 2., 'g']],
[[0.25, 5., 4., 'm'], [0.75, 50., 4., 'm']]]
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.set(xscale='log', ylim=(0.0, 1.0), xlim=(1.e-2, 1.e4))
ax3 = plt.figure().add_subplot(111)
ax3.set(xlabel='pore volume', ylabel='elevation above water table [m]', ylim=(0., 3.))
ax3.grid(True)
d = sieveCurve(titles[3], sediments[3])
#ax1.legend(loc='best')
#ax3.legend(loc='best')
plt.show()
names = [["(a) Clayey loess", 0.50],
["(b) Loess", 0.45],
["(c) Fine sand", 0.38],
["(d) Coarse sand with gravel", 0.32]]
sediments = [[[0.30, 2., 10., 'r'], [0.70, 12., 4., 'r']],
[[0.75, 12., 12., 'b'], [0.25, 64., 4., 'b']],
[[1.00,150., 10., 'g']],
             [[0.70, 500., 4., 'm'], [0.30, 2000., 2., 'm']]]
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.set(xscale='log', ylim=(0.0, 1.0), xlim=(1.e-2, 1.e4))
ax3 = plt.figure().add_subplot(111)
ax3.set(xlabel='pore volume', ylabel='elevation above water table [m]', ylim=(0., 3.))
ax3.grid(True)
for name, sediment in zip(names, sediments):
sieveCurve(name, sediment)
ax1.legend(loc='best', fontsize='small')
ax3.legend(loc='best', fontsize='small')
plt.show()
"""
Explanation: Below some sieve curves are given
Donhuai Sun, Bloemendal, J, Rea, DK, and Ruixia Su (2002) Grain-size distribution function of polymodal sediments in hydraulic and Aeolian environments, and numerical partitioning of the sedimentary components, in Sedimentary Geology 152(3-4):263-277 · October 2002, DOI: 10.1016/S0037-0738(02)00082-9.
Figure: Particle distribution of some loesses
Below we define a function called sieveCurve that generates and shows sieve curves from any number of constituent particle distributions, each specified by its mean, its standard deviation and its fraction of the total mass of the sand. So we can take the constituent grain size contributions of the loess examples in the figure and create from them the overall sieve curve, which we can show and use to compute the moisture distribution above the water table (i.e. above the plane where p equals atmospheric pressure). The computed water distributions are only valid when there is no vertical flow in the unsaturated zone.
End of explanation
"""
|
davidthomas5412/PanglossNotebooks | GroupMeeting_11_16.ipynb | mit | %matplotlib inline
from matplotlib import rc
rc("font", family="serif", size=14)
rc("text", usetex=True)
import daft
pgm = daft.PGM([7, 6], origin=[0, 0])
#background nodes
pgm.add_plate(daft.Plate([0.5, 3.0, 5, 2], label=r"foreground galaxy $i$",
shift=-0.1))
pgm.add_node(daft.Node("theta", r"$\theta$", 3.5, 5.5, fixed=True))
pgm.add_node(daft.Node("alpha", r"$\alpha$", 1.5, 5.5, fixed=True))
pgm.add_node(daft.Node("halo_mass", r"$M_i$", 3.5, 4, scale=2))
pgm.add_node(daft.Node("background_z", r"$z_i$", 2, 4, fixed=True))
pgm.add_node(daft.Node("concentration", r"$c_i$", 1.5, 3.5, fixed=True))
pgm.add_node(daft.Node("background_x", r"$x_i$", 1.0, 3.5, fixed=True))
#foreground nodes
pgm.add_plate(daft.Plate([0.5, 0.5, 5, 2], label=r"background galaxy $j$",
shift=-0.1))
pgm.add_node(daft.Node("reduced_shear", r"$g_j$", 2.0, 1.5, fixed=True))
pgm.add_node(daft.Node("reduced_shear", r"$g_j$", 2.0, 1.5, fixed=True))
pgm.add_node(daft.Node("foreground_z", r"$z_j$", 1.0, 1.5, fixed=True))
pgm.add_node(daft.Node("foreground_x", r"$x_j$", 1.0, 1.0, fixed=True))
pgm.add_node(daft.Node("ellipticities", r"$\epsilon_j^{obs}$", 4.5, 1.5, observed=True, scale=2))
#outer nodes
pgm.add_node(daft.Node("sigma_obs", r"$\sigma_{\epsilon_j}^{obs}$", 3.0, 2.0, fixed=True))
pgm.add_node(daft.Node("sigma_int", r"$\sigma_{\epsilon}^{int}$", 6.0, 1.5, fixed=True))
#edges
pgm.add_edge("foreground_z", "reduced_shear")
pgm.add_edge("foreground_x", "reduced_shear")
pgm.add_edge("reduced_shear", "ellipticities")
pgm.add_edge("sigma_obs", "ellipticities")
pgm.add_edge("sigma_int", "ellipticities")
pgm.add_edge("concentration", "reduced_shear")
pgm.add_edge("halo_mass", "concentration")
pgm.add_edge("background_z", "concentration")
pgm.add_edge("background_x", "reduced_shear")
pgm.add_edge("alpha", "concentration")
pgm.add_edge("theta", "halo_mass")
pgm.render()
"""
Explanation: The PGM
For an introduction to PGMs, see Daphne Koller's Probabilistic Graphical Models. Below is the PGM that we will explore in this notebook.
End of explanation
"""
from pandas import read_table
from pangloss import GUO_FILE
m_h = 'M_Subhalo[M_sol/h]'
m_s = 'M_Stellar[M_sol/h]'
guo_data = read_table(GUO_FILE)
nonzero_guo_data= guo_data[guo_data[m_h] > 0]
import matplotlib.pyplot as plt
stellar_mass_threshold = 5.883920e+10
plt.scatter(nonzero_guo_data[m_h], nonzero_guo_data[m_s], alpha=0.05)
plt.axhline(y=stellar_mass_threshold, color='red')
plt.xlabel('Halo Mass')
plt.ylabel('Stellar Mass')
plt.title('SMHM Scatter')
plt.xscale('log')
plt.yscale('log')
from math import log
import numpy as np
start = log(nonzero_guo_data[m_s].min(), 10)
stop = log(nonzero_guo_data[m_s].max(), 10)
m_logspace = np.logspace(start, stop, num=20, base=10)[:-1]
m_corrs = []
thin_data = nonzero_guo_data[[m_s, m_h]]
for cutoff in m_logspace:
tmp = thin_data[nonzero_guo_data[m_s] > cutoff]
m_corrs.append(tmp.corr()[m_s][1])
plt.plot(m_logspace, m_corrs, label='correlation')
plt.axvline(x=stellar_mass_threshold, color='red', label='threshold')
plt.xscale('log')
plt.legend(loc=2)
plt.xlabel('Stellar Mass')
plt.ylabel('Stellar Mass - Halo Mass Correlation')
plt.title('SMHM Correlation')
plt.rcParams['figure.figsize'] = (10, 6)
# plt.plot(hist[1][:-1], hist[0], label='correlation')
plt.hist(nonzero_guo_data[m_s], bins=m_logspace, alpha=0.4, normed=False, label='dataset')
plt.axvline(x=stellar_mass_threshold, color='red', label='threshold')
plt.xscale('log')
plt.legend(loc=2)
plt.xlabel('Stellar Mass')
plt.ylabel('Number of Samples')
plt.title('Stellar Mass Distribution')
"""
Explanation: We have sets of foregrounds and backgrounds along with the variables
$\alpha$: parameters in the concentration function (which is a function of $z_i,M_i$)
$\theta$: prior distribution of halo masses
$z_i$: foreground galaxy redshift
$x_i$: foreground galaxy angular coordinates
$z_j$: background galaxy redshift
$x_j$: background galaxy angular coordinates
$g_j$: reduced shear
$\sigma_{\epsilon_j}^{obs}$: noise from our ellipticity measurement process
$\sigma_{\epsilon}^{int}$: intrinsic variance in ellipticities
$\epsilon_j^{obs}$: observed ellipticity of background galaxy $j$
Stellar Mass Threshold
End of explanation
"""
from pandas import read_csv
res = read_csv('data3.csv')
tru = read_csv('true3.csv')
start = min([res[res[c] > 0][c].min() for c in res.columns[1:-1]])
stop = res.max().max()
base = 10
start = log(start, base)
end = log(stop, base)
res_logspace = np.logspace(start, end, num=10, base=base)
plt.rcParams['figure.figsize'] = (20, 12)
for i,val in enumerate(tru.columns[1:]):
plt.subplot(int('91' + str(i+1)))
x = res[val][res[val] > 0]
weights = np.exp(res['log-likelihood'][res[val] > 0])
t = tru[val].loc[0]
plt.hist(x, bins=res_logspace, alpha=0.4, normed=True, label='prior')
plt.hist(x, bins=res_logspace, weights=weights, alpha=0.4, normed=True, label='posterior')
plt.axvline(x=t, color='red', label='truth', linewidth=1)
plt.xscale('log')
plt.legend()
plt.ylabel('PDF')
plt.xlabel('Halo Mass (log-scale)')
plt.title('Halo ID ' + val)
plt.show()
res.columns
res[['112009306000027', 'log-likelihood']].sort_values('log-likelihood')
"""
Explanation: Results
End of explanation
"""
|
DavidQiuChao/CS231nHomeWorks | assignment2/BatchNormalization.ipynb | mit | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
"""
Explanation: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.
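Before diving into the assignment code, here is a minimal sketch of the training-time forward pass in plain NumPy. This is illustrative only, not the reference solution; the assignment asks you to implement this (plus test-time behavior) in `batchnorm_forward`.

```python
import numpy as np

def bn_forward_train(x, gamma, beta, running_mean, running_var,
                     eps=1e-5, momentum=0.9):
    # Estimate per-feature mean and variance from the minibatch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # Center and normalize, then apply the learnable scale/shift
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    # Exponential running averages, used at test time instead of batch stats
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return out, running_mean, running_var

x = 5 * np.random.randn(200, 3) + 12
out, rm, rv = bn_forward_train(x, np.ones(3), np.zeros(3),
                               np.zeros(3), np.zeros(3))
```

With `gamma=1, beta=0` the output features have zero mean and unit variance, exactly what the checks below verify.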
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
"""
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
"""
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
"""
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
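As a hedged sketch of what such a staged backward pass can look like (my own graph-based derivation, checked numerically in the test below, and not necessarily grouped the way the reference solution is):

```python
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    return out, (x, x_hat, mu, var, gamma, eps)

def bn_backward(dout, cache):
    x, x_hat, mu, var, gamma, eps = cache
    N = x.shape[0]
    std_inv = 1.0 / np.sqrt(var + eps)
    # Backprop through the scale/shift
    dgamma = np.sum(dout * x_hat, axis=0)
    dbeta = np.sum(dout, axis=0)
    dx_hat = dout * gamma
    # Backprop through the normalization; note the sums over the
    # branches that fan out from mu and var
    dvar = np.sum(dx_hat * (x - mu), axis=0) * -0.5 * std_inv**3
    dmu = np.sum(-dx_hat * std_inv, axis=0) + dvar * np.mean(-2.0 * (x - mu), axis=0)
    dx = dx_hat * std_inv + dvar * 2.0 * (x - mu) / N + dmu / N
    return dx, dgamma, dbeta
```

A central-difference gradient check on a tiny input is a quick way to catch a missed branch.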
Once you have finished, run the following to numerically check your backward pass.
End of explanation
"""
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
"""
Explanation: Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
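For reference, one common simplified form (stated here without the full on-paper derivation, so treat it as a hint to check against rather than the definitive answer): with $N$ the batch size, $\hat{x}$ the normalized input, and $\delta_i = \partial L / \partial y_i$ the upstream gradient, the gradient with respect to the input is, per feature column,

$$\frac{\partial L}{\partial x_i} = \frac{\gamma}{N\sqrt{\sigma^2 + \epsilon}} \left( N\,\delta_i \;-\; \sum_{j=1}^{N} \delta_j \;-\; \hat{x}_i \sum_{j=1}^{N} \delta_j\,\hat{x}_j \right)$$

which needs only one pass over the batch, hence the speedup over the staged graph version.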
NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
"""
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation
"""
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
"""
Explanation: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
End of explanation
"""
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
End of explanation
"""
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
"""
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation
"""
|
dougalsutherland/mmd | examples/mmd regression example.ipynb | bsd-3-clause | n = 500
mean = np.random.normal(0, 10, size=n)
var = np.random.gamma(5, size=n)
n_samp = np.random.randint(10, 500, size=n)
samps = [np.random.normal(m, v, size=s)[:, np.newaxis]
for m, v, s in zip(mean, var, n_samp)]
# this gives us a progress bar for MMD computations
from mmd.utils import show_progress
show_progress('mmd.mmd.progress')
# Get the median pairwise squared distance in the aggregate sample,
# as a heuristic for choosing the bandwidth of the inner RBF kernel.
from sklearn.metrics.pairwise import euclidean_distances
sub = np.vstack(samps)
sub = sub[np.random.choice(sub.shape[0], min(1000, sub.shape[0]), replace=False)]
D2 = euclidean_distances(sub, squared=True)
med_2 = np.median(D2[np.triu_indices_from(D2, k=1)], overwrite_input=True)
del sub, D2
from sklearn import cross_validation as cv
from sklearn.kernel_ridge import KernelRidge
import sys
l1_gamma_mults = np.array([1/16, 1/4, 1, 4]) # Could expand these, but it's quicker if you don't. :)
l1_gammas = l1_gamma_mults * med_2
"""
Explanation: Generate fake data
As an example, let's regress 1d normals to their mean.
We're storing the data as a list of n numpy arrays, each of size n_samp x dim (with dim == 1).
End of explanation
"""
mmds, mmk_diags = mmd.rbf_mmd(samps, gammas=l1_gammas, squared=True, n_jobs=40, ret_X_diag=True)
"""
Explanation: Now we'll get the $\mathrm{MMD}^2$ values for each of the proposed gammas.
(This is maybe somewhat faster than doing it independently, but I haven't
really tested that; if it's causing memory issues or anything, do them separately.)
We also want to save the "diagonal" values (the mean map kernel between a set
and itself), so we can compute the MMD to test values later without recomputing
those.
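For reference, the quantity being computed for each pair of sample sets $X = \{x_i\}_{i=1}^m$ and $Y = \{y_j\}_{j=1}^n$ is the squared MMD estimate under kernel $k$ (up to the choice of biased/unbiased estimator):

$$\widehat{\mathrm{MMD}}^2(X, Y) = \frac{1}{m^2}\sum_{i,i'} k(x_i, x_{i'}) \;+\; \frac{1}{n^2}\sum_{j,j'} k(y_j, y_{j'}) \;-\; \frac{2}{mn}\sum_{i,j} k(x_i, y_j)$$

The first two "diagonal" terms are the mean map kernel of a set with itself, which is exactly what we cache so it doesn't need recomputing when comparing against new test sets.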
End of explanation
"""
# Choose parameters for the hyperparameter search
k_fold = list(cv.KFold(n, n_folds=3, shuffle=True))
l2_gamma_mults = np.array([1/4, 1, 4, 8])
alphas = np.array([1/128, 1/64, 1/16, 1/4, 1, 4])
scores = np.empty((l1_gamma_mults.size, l2_gamma_mults.size, alphas.size, len(k_fold)))
scores.fill(np.nan)
%%time
K = np.empty((n, n), dtype=samps[0].dtype)
for l1_gamma_i, l1_gamma in enumerate(l1_gamma_mults * med_2):
print("l1 gamma {} / {}: {:.4}".format(l1_gamma_i + 1, len(l1_gamma_mults), l1_gamma), file=sys.stderr)
D2_mmd = mmds[l1_gamma_i]
# get the median of *these* squared distances,
# to scale the bandwidth of the outer RBF kernel
mmd_med2 = np.median(D2_mmd[np.triu_indices_from(D2_mmd, k=1)])
for l2_gamma_i, l2_gamma in enumerate(l2_gamma_mults * mmd_med2):
print("\tl2 gamma {} / {}: {:.4}".format(l2_gamma_i + 1, len(l2_gamma_mults), l2_gamma), file=sys.stderr)
np.multiply(D2_mmd, -l2_gamma, out=K)
np.exp(K, out=K)
for alpha_i, alpha in enumerate(alphas):
ridge = KernelRidge(alpha=alpha, kernel='precomputed')
these = cv.cross_val_score(ridge, K, mean, cv=k_fold)
scores[l1_gamma_i, l2_gamma_i, alpha_i, :] = these
print("\t\talpha {} / {}: {} \t {}".format(alpha_i + 1, len(alphas), alpha, these), file=sys.stderr)
mean_scores = scores.mean(axis=-1)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.ticker as ticker
fig = plt.figure(figsize=(8, 15))
for i in range(len(l1_gamma_mults)):
ax = fig.add_subplot(len(l1_gamma_mults), 1, i+1)
cax = ax.matshow(mean_scores[i, :, :], vmin=0, vmax=1)
ax.set_yticklabels([''] + ['{:.3}'.format(g * mmd_med2) for g in l2_gamma_mults])
ax.set_xticklabels([''] + ['{:.3}'.format(a) for a in alphas])
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
fig.colorbar(cax)
ax.set_title("L1 gamma: {}".format(l1_gamma_mults[i] * med_2))
plt.tight_layout()
"""
Explanation: Now, we want to turn this into a kernel and evaluate the regression
for each of the other hyperparameter values.
We'll just do a 3d grid search here.
Ideally, this would be:
Parallelized. Scikit-learn's tools for this want to pickle the kernels, which is a non-starter here. It'd take some coding to work around that.
Not a grid search. It's harder to get away from grid search for the l1 gamma because it's kind of expensive, but you could definitely do a randomized search (or some kind of actual optimization) for the l2 gamma + alpha parameters.
Really, KernelRidge should support leave-one-out CV across a bunch of alphas, like RidgeCV. Supposedly this isn't too hard but I haven't spent the time to try to figure it out, and apparently neither has anyone else. This would help with the parallelization issue.
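On that last point: with a precomputed kernel, the exact leave-one-out residuals of kernel ridge can be read off the hat matrix $H = K(K + \alpha I)^{-1}$ as $e_i = (y_i - \hat{y}_i)/(1 - H_{ii})$, so every alpha can be scored without refitting. A hedged, self-contained NumPy sketch (not wired into the grid search above):

```python
import numpy as np

def kernel_ridge_loo_mse(K, y, alphas):
    """Exact leave-one-out MSE of kernel ridge regression, one value
    per alpha, given a precomputed (n, n) kernel matrix K."""
    n = K.shape[0]
    out = []
    for a in alphas:
        H = K @ np.linalg.inv(K + a * np.eye(n))     # hat matrix
        resid = (y - H @ y) / (1.0 - np.diag(H))     # exact LOO residuals
        out.append(np.mean(resid ** 2))
    return np.array(out)

# toy usage: RBF kernel on a few 1-d points
rng = np.random.RandomState(0)
X = rng.randn(8, 1)
y = rng.randn(8)
K = np.exp(-(X - X.T) ** 2)
errs = kernel_ridge_loo_mse(K, y, [0.1, 1.0, 10.0])
```

For many alphas it would be better to reuse one eigendecomposition of K rather than inverting per alpha, but the point is that no explicit refitting is needed.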
End of explanation
"""
best_l1_gamma_i, best_l2_gamma_i, best_alpha_i = np.unravel_index(mean_scores.argmax(), mean_scores.shape)
best_l1_gamma = l1_gamma_mults[best_l1_gamma_i] * med_2
best_l2_gamma = l2_gamma_mults[best_l2_gamma_i] * mmd_med2
best_alpha = alphas[best_alpha_i]
best_l1_gamma_i, best_l2_gamma_i, best_alpha_i
best_l1_gamma, best_l2_gamma, best_alpha
"""
Explanation: Looking at these results, the best results are with the lowest alpha, lowest L1 gamma, and highest L2 gamma. So if we were really trying to solve this as best as possible, we'd maybe want to try lower alphas / lower L1 gammas / higher L2 gammas. But, whatever; let's just use the best of the ones we did try for now.
End of explanation
"""
# get the training kernel
D2_mmd = mmds[best_l1_gamma_i]
np.multiply(D2_mmd, -best_l2_gamma, out=K)
np.exp(K, out=K)
ridge = KernelRidge(alpha=best_alpha, kernel='precomputed')
ridge.fit(K, mean)
"""
Explanation: Now, train a model on the full training set:
End of explanation
"""
# generate some test data from the same distribution
t_n = 100
t_mean = np.random.normal(0, 10, size=t_n)
t_var = np.random.gamma(5, size=t_n)
t_n_samp = np.random.randint(10, 500, size=t_n)
t_samps = [np.random.normal(m, v, size=s)[:, np.newaxis]
for m, v, s in zip(t_mean, t_var, t_n_samp)]
# get the kernel from the training data to the test data
t_K = mmd.rbf_mmd(t_samps, samps, gammas=best_l1_gamma, squared=True,
Y_diag=mmk_diags[best_l1_gamma_i], n_jobs=20)
t_K *= -best_l2_gamma
np.exp(t_K, out=t_K);
preds = ridge.predict(t_K)
plt.figure(figsize=(5, 5))
plt.scatter(t_mean, preds)
"""
Explanation: To evaluate on new data:
End of explanation
"""
|
joferkington/tutorials | 1506_Seismic_petrophysics_2/Seismic_petrophysics_2.ipynb | apache-2.0 | def frm(vp1, vs1, rho1, rho_f1, k_f1, rho_f2, k_f2, k0, phi):
vp1 = vp1 / 1000.
vs1 = vs1 / 1000.
mu1 = rho1 * vs1**2.
k_s1 = rho1 * vp1**2 - (4./3.)*mu1
# The dry rock bulk modulus
kdry = (k_s1 * ((phi*k0)/k_f1+1-phi)-k0) / ((phi*k0)/k_f1+(k_s1/k0)-1-phi)
# Now we can apply Gassmann to get the new values
k_s2 = kdry + (1- (kdry/k0))**2 / ( (phi/k_f2) + ((1-phi)/k0) - (kdry/k0**2) )
rho2 = rho1-phi * rho_f1+phi * rho_f2
mu2 = mu1
vp2 = np.sqrt(((k_s2+(4./3)*mu2))/rho2)
vs2 = np.sqrt((mu2/rho2))
return vp2*1000, vs2*1000, rho2, k_s2
"""
Explanation: Seismic petrophysics, part 2
In Part 1 we loaded some logs and used the pandas data-analysis library to manage them. We made a lithology–fluid class (LFC) log, and used it to color a crossplot. This time, we take the workflow further with fluid replacement modeling based on Gassmann's equation. This is just an introduction; see Wang (2001) and Smith et al. (2003) for comprehensive overviews.
Fluid replacement modeling
Fluid replacement modeling (FRM for short), based on Gassmann's equation, is one of the key activities in any kind of quantitative work.
It models the response of the elastic logs (i.e., Vp, Vs, and density) under different fluid conditions, letting us study how much a rock would change in terms of velocity (or impedances) if it were filled with gas instead of brine, for example. It also brings all the elastic logs to a common fluid denominator, so that we can focus on lithological variations alone, disregarding fluid effects.
The inputs to FRM are $k_s$ and $\mu_s$ (the saturated bulk and shear moduli, which we can get from the recorded Vp, Vs, and density logs), $k_d$ and $\mu_d$ (dry-rock bulk and shear moduli), $k_0$ (mineral bulk modulus), $k_f$ (fluid bulk modulus) and porosity, $\varphi$. Reasonable estimates of the mineral and fluid bulk moduli and of porosity are easily computed and are shown below. The real unknowns, and arguably the core issue of rock physics, are the dry-rock moduli.
And here we come to Gassmann's equation; I can use it in its inverse form to calculate $k_d$:
$$ k_d = \frac{k_s \cdot ( \frac{\varphi k_0}{k_f} +1-\varphi) -k_0}{\frac {\varphi k_0}{k_f} + \frac{k_s}{k_0} -1-\varphi} $$
Then I use Gassmann's again in its direct form to calculate the saturated bulk modulus with the new fluid:
$$k_s = k_d + \frac { (1-\frac{k_d}{k_0})^2} { \frac{\varphi}{k_f} + \frac{1-\varphi}{k_0} - \frac{k_d}{k_0^2}}$$
Shear modulus is not affected by pore fluid so that it stays unmodified throughout the fluid replacement process.
$$\mu_s = \mu_d$$
Bulk density is defined via the following equation:
$$\rho = (1-\varphi) \cdot \rho_0 + \varphi \cdot \rho_f $$
We can put all this together into a Python function, calculating the elastic parameters for fluid 2, given the parameters for fluid 1, along with some other data about the fluids:
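Before applying these formulas to real logs, here is a quick self-contained sanity check with made-up but plausible numbers (GPa for moduli, g/cm3 for density): inverting Gassmann for $k_d$ and then re-saturating with the same fluid must return the input saturated modulus.

```python
import numpy as np

# Illustrative values only: quartz-like mineral, 25% porosity, brine
k0, phi, k_f = 37.0, 0.25, 2.8
vp1, vs1, rho1 = 3.0, 1.7, 2.2           # km/s, km/s, g/cm3

mu1 = rho1 * vs1**2                       # shear modulus (unchanged by fluid)
ks1 = rho1 * vp1**2 - (4.0 / 3.0) * mu1   # saturated bulk modulus

# Inverse Gassmann: dry-rock bulk modulus
kdry = (ks1 * ((phi * k0) / k_f + 1 - phi) - k0) / \
       ((phi * k0) / k_f + ks1 / k0 - 1 - phi)

# Direct Gassmann with the *same* fluid: should round-trip back to ks1
ks2 = kdry + (1 - kdry / k0)**2 / (phi / k_f + (1 - phi) / k0 - kdry / k0**2)
print(ks1, ks2)
```

This round trip is exact (up to floating point) because the inverse form is just the direct equation solved for $k_d$.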
End of explanation
"""
def vrh(volumes,k,mu):
f = np.array(volumes).T
k = np.resize(np.array(k),np.shape(f))
mu = np.resize(np.array(mu),np.shape(f))
k_u = np.sum(f*k, axis=1)
k_l = 1. / np.sum(f/k, axis=1)
mu_u = np.sum(f*mu, axis=1)
mu_l = 1. / np.sum(f/mu, axis=1)
k0 = (k_u+k_l) / 2.
mu0 = (mu_u+mu_l) / 2.
return k_u, k_l, mu_u, mu_l, k0, mu0
"""
Explanation: What this function does is to get the relevant inputs which are:
vp1, vs1, rho1: measured Vp, Vs, and density (saturated with fluid 1)
rho_fl1, k_fl1: density and bulk modulus of fluid 1
rho_fl2, k_fl2: density and bulk modulus of fluid 2
k0: mineral bulk modulus
phi: porosity
And returns vp2,vs2, rho2, k2 which are respectively Vp, Vs, density and bulk modulus of rock with fluid 2. Velocities are in m/s and densities in g/cm3.
I mentioned above the possibility of estimating the mineral bulk modulus $k_0$. The thing to know is that Gassmann's equation carries another assumption: it strictly works only on monomineralic rocks, whereas an actual rock is always a mixture of different minerals. A good approximation of the bulk modulus k0 for a mixed mineralogy is Voigt-Reuss-Hill averaging (check these notes by Jack Dvorkin for a more rigorous discussion or this wikipedia entry).
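As a quick worked example of what that averaging does (using the quartz and clay bulk moduli defined elsewhere in this notebook, mixed 50/50):

```python
import numpy as np

f = np.array([0.5, 0.5])            # volume fractions: quartz, clay
k = np.array([37.0, 15.0])          # bulk moduli in GPa

k_voigt = np.sum(f * k)             # upper bound: arithmetic (Voigt) average
k_reuss = 1.0 / np.sum(f / k)       # lower bound: harmonic (Reuss) average
k_hill = 0.5 * (k_voigt + k_reuss)  # Voigt-Reuss-Hill estimate
print(k_voigt, k_reuss, k_hill)     # 26.0, ~21.35, ~23.67
```

The Voigt and Reuss values bound the true effective modulus; the Hill average simply splits the difference.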
So I define another function, vrh, to do that:
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
logs = pd.read_csv('qsiwell2_lfc.csv')
rho_qz=2.65; k_qz=37; mu_qz=44 # mineral properties, quartz (i.e., sands)
rho_sh=2.81; k_sh=15; mu_sh=5 # mineral properties, clay (i.e., shales)
rho_b=1.09; k_b=2.8 # fluid properties, brine
rho_o=0.78; k_o=0.94 # fluid properties, oil
rho_g=0.25; k_g=0.06 # fluid properties, gas
"""
Explanation: I can use the same function to also compute the fluid bulk modulus log which is usually done via Reuss average (the lower bound k_l in the vrh function above):
First I will load the Well 2 logs with the litho-fluid class log created in Part 1, then define the various mineral and fluid elastic constants:
End of explanation
"""
# mineral mixture bulk and shear moduli, k0 and mu0
shale = logs.VSH.values
sand = 1 - shale - logs.PHI.values
shaleN = shale / (shale+sand) # normalized shale and sand volumes
sandN = sand / (shale+sand)
k_u, k_l, mu_u, mu_l, k0, mu0 = vrh([shaleN, sandN], [k_sh, k_qz], [mu_sh, mu_qz])
# fluid mixture bulk modulus, using the same vrh function but capturing the Reuss average (second output)
water = logs.SW.values
hc = 1 - logs.SW.values
tmp, k_fl, tmp, tmp, tmp, tmp = vrh([water, hc], [k_b, k_o], [0, 0])
# fluid mixture density
rho_fl = water*rho_b + hc*rho_o
"""
Explanation: Then I calculate the original (insitu) fluid density rho_fl and bulk modulus k_fl, and the average mineral bulk modulus k0:
End of explanation
"""
vpb, vsb, rhob, kb = frm(logs.VP, logs.VS, logs.RHO, rho_fl, k_fl, rho_b, k_b, k0, logs.PHI)
vpo, vso, rhoo, ko = frm(logs.VP, logs.VS, logs.RHO, rho_fl, k_fl, rho_o, k_o, k0, logs.PHI)
vpg, vsg, rhog, kg = frm(logs.VP, logs.VS, logs.RHO, rho_fl, k_fl, rho_g, k_g, k0, logs.PHI)
"""
Explanation: ...and put it all together using the frm function defined above:
End of explanation
"""
sand_cutoff = 0.20
brine_sand = ((logs.VSH <= sand_cutoff) & (logs.SW >= 0.9))
oil_sand = ((logs.VSH <= sand_cutoff) & (logs.SW < 0.9))
shale = (logs.VSH > sand_cutoff)
"""
Explanation: Now I create 3 sets of copies of the original elastic logs stored in my DataFrame logs (logs.VP, logs.VSB, logs.RHO) for the three fluid scenarios investigated (and I will append an appropriate suffix to identify these 3 cases, i.e. _FRMB for brine, _FRMG for gas and _FRMO for oil, respectively). These three sets will be placeholders to store the values of the actual fluid-replaced logs (vpb, vsb, rhob, etc.).
To do that I need once more to define the flag logs brine_sand, oil_sand and shale as discussed in Part 1:
End of explanation
"""
logs['VP_FRMB'] = logs.VP
logs['VS_FRMB'] = logs.VS
logs['RHO_FRMB'] = logs.RHO
logs['VP_FRMB'][brine_sand|oil_sand] = vpb[brine_sand|oil_sand]
logs['VS_FRMB'][brine_sand|oil_sand] = vsb[brine_sand|oil_sand]
logs['RHO_FRMB'][brine_sand|oil_sand] = rhob[brine_sand|oil_sand]
logs['IP_FRMB'] = logs.VP_FRMB*logs.RHO_FRMB
logs['IS_FRMB'] = logs.VS_FRMB*logs.RHO_FRMB
logs['VPVS_FRMB'] = logs.VP_FRMB/logs.VS_FRMB
logs['VP_FRMO'] = logs.VP
logs['VS_FRMO'] = logs.VS
logs['RHO_FRMO'] = logs.RHO
logs['VP_FRMO'][brine_sand|oil_sand] = vpo[brine_sand|oil_sand]
logs['VS_FRMO'][brine_sand|oil_sand] = vso[brine_sand|oil_sand]
logs['RHO_FRMO'][brine_sand|oil_sand] = rhoo[brine_sand|oil_sand]
logs['IP_FRMO'] = logs.VP_FRMO*logs.RHO_FRMO
logs['IS_FRMO'] = logs.VS_FRMO*logs.RHO_FRMO
logs['VPVS_FRMO'] = logs.VP_FRMO/logs.VS_FRMO
logs['VP_FRMG'] = logs.VP
logs['VS_FRMG'] = logs.VS
logs['RHO_FRMG'] = logs.RHO
logs['VP_FRMG'][brine_sand|oil_sand] = vpg[brine_sand|oil_sand]
logs['VS_FRMG'][brine_sand|oil_sand] = vsg[brine_sand|oil_sand]
logs['RHO_FRMG'][brine_sand|oil_sand] = rhog[brine_sand|oil_sand]
logs['IP_FRMG'] = logs.VP_FRMG*logs.RHO_FRMG
logs['IS_FRMG'] = logs.VS_FRMG*logs.RHO_FRMG
logs['VPVS_FRMG'] = logs.VP_FRMG/logs.VS_FRMG
"""
Explanation: The syntax I use to do this is:
logs['VP_FRMB'][brine_sand|oil_sand]=vpb[brine_sand|oil_sand]
Which means, copy the values from the output of fluid replacement (vpb, vsb, rhob, etc.) only where there's sand (vpb[brine_sand|oil_sand]), i.e. only when either the flag logs brine_sand or oil_sand are True).
I also compute the additional elastic logs (acoustic and shear impedances IP, Is, and Vp/Vs ratio, VPVS) in their fluid-replaced version.
End of explanation
"""
temp_lfc_b = np.zeros(np.shape(logs.VSH))
temp_lfc_b[brine_sand.values | oil_sand.values] = 1 # LFC is 1 when either brine_sand (brine sand flag) or oil_sand (oil) is True
temp_lfc_b[shale.values] = 4 # LFC 4=shale
logs['LFC_B'] = temp_lfc_b
temp_lfc_o = np.zeros(np.shape(logs.VSH))
temp_lfc_o[brine_sand.values | oil_sand.values] = 2 # LFC is now 2 when there's sand (brine_sand or oil_sand is True)
temp_lfc_o[shale.values] = 4 # LFC 4=shale
logs['LFC_O'] = temp_lfc_o
temp_lfc_g = np.zeros(np.shape(logs.VSH))
temp_lfc_g[brine_sand.values | oil_sand.values] = 3 # LFC 3=gas sand
temp_lfc_g[shale.values] = 4 # LFC 4=shale
logs['LFC_G'] = temp_lfc_g
"""
Explanation: Finally, I will add three more LFC logs that will be companions to the new fluid-replaced logs.
The LFC log for the brine-replaced logs will always be 1 whenever there's sand, because fluid replacement will have acted on all sand points and replaced whatever fluid we had originally with brine. Same thing for LFC_O and LFC_G (the LFC logs for the oil- and gas-replaced cases): they will always be equal to 2 (oil) or 3 (gas) for all the sand samples. That translates into Python as:
End of explanation
"""
import matplotlib.colors as colors
# 0=undef 1=bri 2=oil 3=gas 4=shale
ccc = ['#B3B3B3','blue','green','red','#996633',]
cmap_facies = colors.ListedColormap(ccc[0:len(ccc)], 'indexed')
ztop = 2150; zbot = 2200
ll = logs.ix[(logs.DEPTH>=ztop) & (logs.DEPTH<=zbot)]
cluster=np.repeat(np.expand_dims(ll['LFC'].values,1),100,1)
f, ax = plt.subplots(nrows=1, ncols=4, figsize=(8, 12))
ax[0].plot(ll.VSH, ll.DEPTH, '-g', label='Vsh')
ax[0].plot(ll.SW, ll.DEPTH, '-b', label='Sw')
ax[0].plot(ll.PHI, ll.DEPTH, '-k', label='phi')
ax[1].plot(ll.IP_FRMG, ll.DEPTH, '-r')
ax[1].plot(ll.IP_FRMB, ll.DEPTH, '-b')
ax[1].plot(ll.IP, ll.DEPTH, '-', color='0.5')
ax[2].plot(ll.VPVS_FRMG, ll.DEPTH, '-r')
ax[2].plot(ll.VPVS_FRMB, ll.DEPTH, '-b')
ax[2].plot(ll.VPVS, ll.DEPTH, '-', color='0.5')
im=ax[3].imshow(cluster, interpolation='none', aspect='auto',cmap=cmap_facies,vmin=0,vmax=4)
cbar=plt.colorbar(im, ax=ax[3])
# cbar.set_label('0=undef,1=brine,2=oil,3=gas,4=shale')
# cbar.set_ticks(range(0,4+1));
cbar.set_label((12*' ').join(['undef', 'brine', 'oil', 'gas', 'shale']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in ax[:-1]:
i.set_ylim(ztop,zbot)
i.invert_yaxis()
i.grid()
i.locator_params(axis='x', nbins=4)
ax[0].legend(fontsize='small', loc='lower right')
ax[0].set_xlabel("Vcl/phi/Sw"), ax[0].set_xlim(-.1,1.1)
ax[1].set_xlabel("Ip [m/s*g/cc]"), ax[1].set_xlim(3000,9000)
ax[2].set_xlabel("Vp/Vs"), ax[2].set_xlim(1.5,3)
ax[3].set_xlabel('LFC')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]); ax[3].set_xticklabels([]);
"""
Explanation: And this is the same summary plot that I have used above, updated to show the fluid changes in the elastic logs Ip and Vp/Vs. It is also zoomed into the reservoir between 2150 and 2200 m, and the LFC log is the original one, i.e. it reflects the insitu case.
End of explanation
"""
f, ax = plt.subplots(nrows=1, ncols=4, sharey=True, sharex=True, figsize=(16, 4))
ax[0].scatter(logs.IP,logs.VPVS,20,logs.LFC,marker='o',edgecolors='none',alpha=0.5,cmap=cmap_facies,vmin=0,vmax=4)
ax[1].scatter(logs.IP_FRMB,logs.VPVS_FRMB,20,logs.LFC_B,marker='o',edgecolors='none',alpha=0.5,cmap=cmap_facies,vmin=0,vmax=4)
ax[2].scatter(logs.IP_FRMO,logs.VPVS_FRMO,20,logs.LFC_O,marker='o',edgecolors='none',alpha=0.5,cmap=cmap_facies,vmin=0,vmax=4)
ax[3].scatter(logs.IP_FRMG,logs.VPVS_FRMG,20,logs.LFC_G,marker='o',edgecolors='none',alpha=0.5,cmap=cmap_facies,vmin=0,vmax=4)
ax[0].set_xlim(3000,9000); ax[0].set_ylim(1.5,3);
ax[0].set_title('original data');
ax[1].set_title('FRM to brine');
ax[2].set_title('FRM to oil');
ax[3].set_title('FRM to gas');
for i in ax: i.grid()
"""
Explanation: Let's have a look at the results in the Ip versus Vp/Vs crossplot domain; I will now plot 4 different plots to compare the initial situation to the results of the 4 fluid replacements:
End of explanation
"""
lognames0 = ['LFC','IP','VPVS']
lognames1 = ['LFC_B','IP_FRMB', 'VPVS_FRMB']
lognames2 = ['LFC_O','IP_FRMO', 'VPVS_FRMO']
lognames3 = ['LFC_G','IP_FRMG', 'VPVS_FRMG']
ww0 = logs[pd.notnull(logs.LFC)].loc[:, lognames0]
ww1 = logs[pd.notnull(logs.LFC)].loc[:, lognames1]; ww1.columns = lognames0
ww2 = logs[pd.notnull(logs.LFC)].loc[:, lognames2]; ww2.columns = lognames0
ww3 = logs[pd.notnull(logs.LFC)].loc[:, lognames3]; ww3.columns = lognames0
ww = pd.concat([ww0, ww1, ww2, ww3])
import itertools
list(itertools.product(['a', 'b'], ['a', 'b']))
"""
Explanation: statistical analysis
generalities
After FRM I have created an augmented dataset. What I need to do now is to do a further abstraction, i.e. moving away from the intricacies and local irregularities of the real data with the final goal of creating a fully synthetic dataset representing an idealized version of a reservoir complex.
To do that I will do a statistical analysis to describe tendency, dispersion and correlation between certain elastic properties for each litho-fluid class.
Central tendency is simply described by calculating the mean values of some desired elastic property for all the existing classes; dispersion and correlation are summarised with the covariance matrix, which can be written like this (for two generic variables X and Y):
[ var_X cov_XY ]
[ cov_XY var_Y ]
if I had three variables instead:
[ var_X cov_XY cov_XZ ]
[ cov_XY var_Y cov_YZ ]
[ cov_XZ cov_YZ var_Z ]
Where var_X is the variance of property X, i.e. a measure of dispersion about the mean, while the covariance cov_XY is a measure of similarity between two properties X and Y. A detailed description of the covariance matrix can be found at this wikipedia entry.
Python allows me to easily perform these calculations, but what I need is a way to store this matrix so that it is easily accessible (which will be another Pandas DataFrame). To do this I first linearize the matrices by flattening them row by row, to get something like this (example for the 2-variable scenario):
var_X cov_XY cov_XY var_Y
implementation in python
For the rest of this exercise I will work with the two variables used so far, i.e. Ip and Vp/Vs.
First I need to prepare a few things to make the procedure easily extendable to other situations (e.g., using more than two variables):
collect all the insitu and fluid-replaced logs together to create a megalog;
create a Pandas DataFrame to hold statistical information for all the litho-fluid classes.
Step 1 works like this:
End of explanation
"""
nlfc = int(ww.LFC.max())
nlogs = len(ww.columns) - 1 # my merged data always contain a facies log...
# ...that needs to be excluded from the statistical analysis
means, covs = [], []
for col in ww.columns[1:]:
means.append(col + '_mean')
import itertools
covariances = list(itertools.product(ww.columns[1:], ww.columns[1:]))
print(covariances)
for element in covariances:
if element[0] == element[1]:
covs.append(element[0] + '_var')
else:
covs.append(element[0] + '-' + element[1] + '_cov')
covs
"""
Explanation: What I have done here is to first define 4 lists containing the names of the logs we want to extract (lines 1-4). Then I extract into 4 separate temporary DataFrames (lines 5-8) different sets of logs, e.g. ww0 will contain only the logs LFC, IP, VPVS, and ww1 will hold only LFC_B, IP_FRMB, VPVS_FRMB. I also rename the fluid-replaced logs to have the same names as my insitu logs using ww1.columns = lognames0. In this way, when I merge all 4 DataFrames together (line 9) I will have created a megalog (ww) that includes all values of Ip and Vp/Vs that are both measured for a certain facies and synthetically created through fluid substitution.
In other words, I now have a superpowered, data-augmented megalog.
Now, on to step 2:
End of explanation
"""
stat = pd.DataFrame(data=None,
columns=['LFC']+means+covs+['SAMPLES'],
index=np.arange(nlfc))
stat['LFC'] = range(1, nlfc+1)
"""
Explanation: With the code above I simply build the headers for a Pandas DataFrame to store mean and covariances for each class.
I will now create a DataFrame which is dynamically dimensioned and made of nlfc rows, i.e. one row per facies, and 1+n+m+1 columns, where n is the number of mean columns and m is the length of the linearized covariance matrix; for our sample case where we have only two properties this means:
1 column to store the facies number;
2 columns to store the mean values of the two properties (Ip and Vp/Vs);
4 columns to store variance and covariance values (the covariance appears twice because the full matrix is flattened);
1 column to store the number of samples belonging to each facies as a way to control the robustness of our statistical analysis (i.e., undersampled classes could be taken out of the study).
End of explanation
"""
stat
np.math.factorial(3)
"""
Explanation: This is what the stat DataFrame looks like now:
End of explanation
"""
for i in range(1, 1+nlfc):
    temp = ww[ww.LFC==i].drop('LFC', axis=1)
    stat.loc[stat.LFC==i, 'SAMPLES'] = temp.count()[0]
    stat.loc[stat.LFC==i, means[0]:means[-1]] = np.mean(temp.values, 0)
    stat.loc[stat.LFC==i, covs[0]:covs[-1]] = np.cov(temp, rowvar=0).flatten()
    print(temp.describe().loc['mean':'std'])
    print("LFC=%d, number of samples=%d" % (i, temp.count()[0]))
"""
Explanation: So it's like an empty box, made of four rows (because we have 4 classes: shale, brine, oil and gas sands), and for each row we have space to store the mean values of property n.1 (Ip) and n.2 (Vp/Vs), plus their covariances and the number of samples for each class (that will inform me on the robustness of the analysis, i.e. if I have too few samples then I need to consider how my statistical analysis will not be reliable).
The following snippet shows how to populate stat and print a few summaries to check that everything makes sense; also note the use of .flatten() at line 5, which linearizes the covariance matrix as discussed above so that I can save it in a series of contiguous cells along a row of stat:
End of explanation
"""
stat
"""
Explanation: Now let's look back at stat and see how it has been filled up with all the information I need:
End of explanation
"""
stat.loc[stat.LFC==2, 'IP_mean']
"""
Explanation: I can also interrogate stat to know for example the average Ip for the litho-fluid class 2 (brine sands):
End of explanation
"""
i = 2
pd.plotting.scatter_matrix(ww[ww.LFC==i].drop('LFC', axis=1),
                           color='black',
                           diagonal='kde',
                           alpha=0.1,
                           density_kwds={'color':'#000000','lw':2})
plt.suptitle('LFC=%d' % i)
"""
Explanation: Obviously I need to remember that the column names are built from the log names, so the average Ip lives in the column IP_mean (while VPVS_mean holds the average values for the second property, in this case Vp/Vs).
If I were working with 3 properties, e.g. Ip, Vp/Vs and density, then the average density value for a hypothetical class 5 would be: stat.loc[stat.LFC==5, 'RHO_mean'] (assuming the density log is named RHO).
To display graphically the same information I use Pandas' scatter_matrix:
End of explanation
"""
NN = 300
mc = pd.DataFrame(data=None,
columns=lognames0,
index=np.arange(nlfc*NN),
dtype='float')
for i in range(1, nlfc+1):
mc.loc[NN*i-NN:NN*i-1, 'LFC'] = i
from numpy.random import multivariate_normal
for i in range(1, nlfc+1):
mean = stat.loc[i-1,
means[0]:means[-1]].values
sigma = np.reshape(stat.loc[i-1,
covs[0]:covs[-1]].values,
(nlogs,nlogs))
m = multivariate_normal(mean,sigma,NN)
    mc.loc[mc.LFC==i, lognames0[1:]] = m
"""
Explanation: creation of synthetic datasets
I can now use all this information to create a brand new synthetic dataset that will replicate the average behaviour of the reservoir complex and at the same time overcome typical problems when using real data like undersampling of a certain class, presence of outliers, spurious occurrence of anomalies.
To create the synthetic datasets I use a Monte Carlo simulation relying on multivariate normal distribution to draw samples that are random but correlated in the elastic domain of choice (Ip and Vp/Vs).
End of explanation
"""
mc.describe()
"""
Explanation: First I define how many samples per class I want (line 1), then I create an empty Pandas DataFrame (lines 3-5) dimensioned like this:
as many columns as the elastic logs I have chosen: in this case, 3 (LFC, IP and VPVS, stored in lognames0, previously used to dimension the stat DataFrame);
the total number of rows will be equal to the number of samples per class (here NN=300) multiplied by the number of classes (4).
In lines 7-8 I fill in the LFC column with the numbers assigned to each class. I use the nlfc variable that contains the total number of classes, introduced earlier when creating the stat DataFrame; then I loop over the range 1 to nlfc, and assign 1 to rows 0-299, 2 to rows 300-599, and so on.
Lines 10-13 are the core of the entire procedure. For each class (another loop over the range 1-nlfc) I extract the average value mean and the covariance matrix sigma from the stat DataFrame, then put them into the Numpy np.random.multivariate_normal method to draw randomly selected samples from the continuous and correlated distributions of the properties Ip and Vp/Vs.
So the mc DataFrame will be made of 3 columns (LFC, IP, VPVS) by 1200 rows (we can look at the dimensions with .shape and then have an overview of the matrix with .describe discussed earlier):
End of explanation
"""
f, ax = plt.subplots(nrows=1, ncols=2, sharey=True, sharex=True, figsize=(8, 4))
scatt1 = ax[0].scatter(ww.IP, ww.VPVS,
s=20,
c=ww.LFC,
marker='o',
edgecolors='none',
alpha=0.2,
cmap=cmap_facies,
vmin=0,vmax=4)
scatt2 = ax[1].scatter(mc.IP, mc.VPVS,
s=20,
c=mc.LFC,
marker='o',
edgecolors='none',
alpha=0.5,
cmap=cmap_facies,
vmin=0,vmax=4)
ax[0].set_xlim(3000,9000); ax[0].set_ylim(1.5,3.0);
ax[0].set_title('augmented well data');
ax[1].set_title('synthetic data');
for i in ax: i.grid()
"""
Explanation: And these are the results, comparing the original, augmented dataset (i.e. the results of fluid replacement merged with the insitu log, all stored in the DataFrame ww defined earlier when calculating the statistics) with the newly created synthetic data:
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/06_structured/3_keras_wd.ipynb | apache-2.0 | # Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
"""
Explanation: <h1> Create Keras Wide-and-Deep model </h1>
This notebook illustrates:
<ol>
<li> Creating a model using Keras. This requires TensorFlow 2.1
</ol>
End of explanation
"""
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column. Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
"""
Explanation: Create Keras model
<p>
First, write an input_fn to read the data.
End of explanation
"""
## Build a Keras wide-and-deep model using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Helper function to handle categorical columns
def categorical_fc(name, values):
orig = tf.feature_column.categorical_column_with_vocabulary_list(name, values)
wrapped = tf.feature_column.indicator_column(orig)
return orig, wrapped
def build_wd_model(dnn_hidden_units = [64, 32], nembeds = 3):
# input layer
deep_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in ['mother_age', 'gestation_weeks']
}
wide_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in ['is_male', 'plurality']
}
inputs = {**wide_inputs, **deep_inputs}
# feature columns from inputs
deep_fc = {
colname : tf.feature_column.numeric_column(colname)
for colname in ['mother_age', 'gestation_weeks']
}
wide_fc = {}
is_male, wide_fc['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
plurality, wide_fc['plurality'] = categorical_fc('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])
# bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(deep_fc['mother_age'],
boundaries=np.arange(15,45,1).tolist())
wide_fc['age_buckets'] = tf.feature_column.indicator_column(age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(deep_fc['gestation_weeks'],
boundaries=np.arange(17,47,1).tolist())
wide_fc['gestation_buckets'] = tf.feature_column.indicator_column(gestation_buckets)
# cross all the wide columns. We have to do the crossing before we one-hot encode
crossed = tf.feature_column.crossed_column(
[is_male, plurality, age_buckets, gestation_buckets], hash_bucket_size=20000)
deep_fc['crossed_embeds'] = tf.feature_column.embedding_column(crossed, nembeds)
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
wide_inputs = tf.keras.layers.DenseFeatures(wide_fc.values(), name='wide_inputs')(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(deep_fc.values(), name='deep_inputs')(inputs)
# hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(numnodes, activation='relu', name='dnn_{}'.format(layerno+1))(deep)
deep_out = deep
# linear model for the wide side
wide_out = tf.keras.layers.Dense(10, activation='relu', name='linear')(wide_inputs)
# concatenate the two sides
both = tf.keras.layers.concatenate([deep_out, wide_out], name='both')
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='weight')(both)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our Wide-and-Deep architecture so far:\n")
model = build_wd_model()
print(model.summary())
"""
Explanation: Next, define the feature columns. mother_age and gestation_weeks should be numeric.
The others (is_male, plurality) should be categorical.
End of explanation
"""
tf.keras.utils.plot_model(model, 'wd_model.png', show_shapes=False, rankdir='LR')
"""
Explanation: We can visualize the DNN using the Keras plot_model utility.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
"""
Explanation: Train and evaluate
End of explanation
"""
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
"""
Explanation: Visualize loss curve
End of explanation
"""
import shutil, os, datetime
OUTPUT_DIR = 'babyweight_trained'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
"""
Explanation: Save the model
End of explanation
"""
|
tensorflow/docs | site/en/guide/variable.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
# Uncomment to see where your variables get placed (see below)
# tf.debugging.set_log_device_placement(True)
"""
Explanation: Introduction to Variables
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/variable"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/variable.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/variable.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/variable.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
A TensorFlow variable is the recommended way to represent shared, persistent state your program manipulates. This guide covers how to create, update, and manage instances of tf.Variable in TensorFlow.
Variables are created and tracked via the tf.Variable class. A tf.Variable represents a tensor whose value can be changed by running ops on it. Specific ops allow you to read and modify the values of this tensor. Higher level libraries like tf.keras use tf.Variable to store model parameters.
Setup
This notebook discusses variable placement. If you want to see on what device your variables are placed, uncomment this line.
End of explanation
"""
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
my_variable = tf.Variable(my_tensor)
# Variables can be all kinds of types, just like tensors
bool_variable = tf.Variable([False, False, False, True])
complex_variable = tf.Variable([5 + 4j, 6 + 1j])
"""
Explanation: Create a variable
To create a variable, provide an initial value. The tf.Variable will have the same dtype as the initialization value.
End of explanation
"""
print("Shape: ", my_variable.shape)
print("DType: ", my_variable.dtype)
print("As NumPy: ", my_variable.numpy())
"""
Explanation: A variable looks and acts like a tensor, and, in fact, is a data structure backed by a tf.Tensor. Like tensors, they have a dtype and a shape, and can be exported to NumPy.
End of explanation
"""
print("A variable:", my_variable)
print("\nViewed as a tensor:", tf.convert_to_tensor(my_variable))
print("\nIndex of highest value:", tf.math.argmax(my_variable))
# This creates a new tensor; it does not reshape the variable.
print("\nCopying and reshaping: ", tf.reshape(my_variable, [1,4]))
"""
Explanation: Most tensor operations work on variables as expected, although variables cannot be reshaped.
End of explanation
"""
a = tf.Variable([2.0, 3.0])
# This will keep the same dtype, float32
a.assign([1, 2])
# Not allowed as it resizes the variable:
try:
a.assign([1.0, 2.0, 3.0])
except Exception as e:
print(f"{type(e).__name__}: {e}")
"""
Explanation: As noted above, variables are backed by tensors. You can reassign the tensor using tf.Variable.assign. Calling assign does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused.
End of explanation
"""
a = tf.Variable([2.0, 3.0])
# Create b based on the value of a
b = tf.Variable(a)
a.assign([5, 6])
# a and b are different
print(a.numpy())
print(b.numpy())
# There are other versions of assign
print(a.assign_add([2,3]).numpy()) # [7. 9.]
print(a.assign_sub([7,9]).numpy()) # [0. 0.]
"""
Explanation: If you use a variable like a tensor in operations, you will usually operate on the backing tensor.
Creating new variables from existing variables duplicates the backing tensors. Two variables will not share the same memory.
End of explanation
"""
# Create a and b; they will have the same name but will be backed by
# different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name, but different value
# Note that the scalar add is broadcast
b = tf.Variable(my_tensor + 1, name="Mark")
# These are elementwise-unequal, despite having the same name
print(a == b)
"""
Explanation: Lifecycles, naming, and watching
In Python-based TensorFlow, tf.Variable instances have the same lifecycle as other Python objects. When there are no references to a variable it is automatically deallocated.
Variables can also be named which can help you track and debug them. You can give two variables the same name.
End of explanation
"""
step_counter = tf.Variable(1, trainable=False)
"""
Explanation: Variable names are preserved when saving and loading models. By default, variables in models will acquire unique variable names automatically, so you don't need to assign them yourself unless you want to.
Although variables are important for differentiation, some variables will not need to be differentiated. You can turn off gradients for a variable by setting trainable to false at creation. An example of a variable that would not need gradients is a training step counter.
End of explanation
"""
with tf.device('CPU:0'):
# Create some tensors
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
"""
Explanation: Placing variables and tensors
For better performance, TensorFlow will attempt to place tensors and variables on the fastest device compatible with its dtype. This means most variables are placed on a GPU if one is available.
However, you can override this. In this snippet, place a float tensor and a variable on the CPU, even if a GPU is available. By turning on device placement logging (see Setup), you can see where the variable is placed.
Note: Although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation.
If you run this notebook on different backends with and without a GPU you will see different logging. Note that logging device placement must be turned on at the start of the session.
End of explanation
"""
with tf.device('CPU:0'):
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.Variable([[1.0, 2.0, 3.0]])
with tf.device('GPU:0'):
# Element-wise multiply
k = a * b
print(k)
"""
Explanation: It's possible to set the location of a variable or tensor on one device and do the computation on another device. This will introduce delay, as data needs to be copied between the devices.
You might do this, however, if you had multiple GPU workers but only want one copy of the variables.
End of explanation
"""
|
PyladiesMx/Empezando-con-Python | Pandas second part/Pandas segunda parte.ipynb | mit | #Cargamos los paquetes necesarios
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#Creamos arreglos de datos por un arreglo de numpy
arreglo = np.random.randn(7,4)
columnas = list('ABCD')
df = pd.DataFrame(arreglo, columns=columnas )
df
#Creamos arreglo de datos por diccionario
df2 = pd.DataFrame({'Orden':list(range(5)),'Valor': np.random.random_sample(5),
'Categoría':['a', 'a', np.nan, 'b', 'b']})
df2
#Creamos arreglo de datos desde archivo existente
df3 = pd.read_csv('titanic.csv', header=0)
df3.head()
"""
Explanation: Welcome to another Pyladies session!!
This time we will continue learning how to use the data-analysis library called Pandas!!
In the introduction to pandas we covered Series and DataFrames, how to create them, and how to access a subset of elements in both Series and DataFrames. This session is more focused on how to manipulate and sort your data, along with basic statistical analysis.
Without further ado, let's get started!!! :)
We will begin by loading the required packages and creating a few DataFrames.
End of explanation
"""
# What are the indices?
df.index
# What are the column labels?
df.columns
# What values does my DataFrame contain?
df.values
# Now let's look at a summary of our data
df.describe()
# If you want to know what else this method does, run this command
df.describe?
"""
Explanation: Now that our DataFrames are built, let's look at their general characteristics...
End of explanation
"""
# Sort by column values
df2.sort_values(by='Valor')
# Sort by column labels
df.sort_index(axis=1, ascending=False)
"""
Explanation: Exercise 1
Create a DataFrame for the Pyladies class containing the members' information. Each row will be a member and each column an attribute describing those members. The columns will be: Age, Height, Origin (city), Degree (education), Gender (coded as 0 and 1)
Sorting your DataFrame
Imagine you want your data to appear in a different order in your DataFrame because it is easier to understand that way, or it is the format required at work, or simply because you like it better. For this we use the 'sort' methods
End of explanation
"""
# First let's look for specific values in one column of our data
df2.Categoría.isin(['A', 'B', 'b'])
# Now let's search the whole DataFrame
df2.isin(['A', 'B', 'b'])
# Let's look for missing values
pd.isnull(df2)
# What do we do with this one? Do we drop it?
df2.dropna(how='any')
# Or do we fill it with something else?
df2.fillna(value='c')
"""
Explanation: Exercise 2
Build a DataFrame containing the 3 shortest members, so that the DataFrame holds only the name and the height
Looking for specific values in our data
When you analyze data it will not always be neatly sorted, filtered and arranged so that you can run the computations your analysis needs. In fact, the most common situation is running into missing data, unnecessary values, or things you do not want and/or need for your analysis.
As an example take the second DataFrame we made: notice there is a 'NaN' in there, which simply represents a missing value.
What would you do in this case?
And once you have decided, how would you accomplish it?
Luckily pandas has methods to find specific or missing values, and ways to deal with them...
End of explanation
"""
# Column means
df.mean()
# Row means
df.mean(1)
# Column variances
df.var()
# Row variances
df.var(1)
# Frequency of values in a specific column (Series)
df2.Categoría.value_counts()
# Apply a function to the whole DataFrame
print(df)
df.apply(np.sum)  # sum per column
df.apply(np.square)  # square each value
# Just to double-check
0.131053**2
"""
Explanation: Basic statistics
We can quickly see how our data behave by computing statistics. Normally you would have to do it 'by hand' for each condition, but pandas lets us do it very easily and very quickly (compared with for loops).
Let's look at some examples.
End of explanation
"""
# Concatenate the two DataFrames
pd.concat([df, df2])
# Now along the columns
pd.concat([df, df2], axis=1)
"""
Explanation: Exercise 3
With the Pyladies DataFrame do the following:
Compute the mean age and height of the Pyladies members
Get a count of the members' cities of origin
Compute the body mass index assuming an average weight of 60 kg, and store the result as a new column of the DataFrame
Combining DataFrames
Sometimes the data we want to analyze is spread across several DataFrames and we do not know how to combine them. For that, pandas offers several methods that help with this task.
End of explanation
"""
# Group the third DataFrame by survival
df3.groupby('Survived').mean()
df3.head()
# Now group by survival and sex
df3.groupby(['Survived', 'Sex']).mean()
"""
Explanation: There are many other methods for combining DataFrames, which you can find on the following page
Grouping
Sometimes we have data we want to operate on that can be split into subgroups of categories. For this we use the 'groupby' tool.
When we think about grouping we can refer to the following situations:
We split the DataFrame into subgroups
We apply a specific function to each subgroup
We combine the results into another DataFrame
End of explanation
"""
|
BrownDwarf/ApJdataFrames | notebooks/Luhman1999.ipynb | mit | import warnings
warnings.filterwarnings("ignore")
from astropy.io import ascii
import pandas as pd
"""
Explanation: ApJdataFrames Luhman1999
Title: Low-Mass Star Formation and the Initial Mass Function in the ρ Ophiuchi Cloud Core
Authors: K. L. Luhman and G.H. Rieke
Data is from this paper:
http://iopscience.iop.org/0004-637X/525/1/440/fulltext/
End of explanation
"""
names = ["BKLT","Other ID","RA_1950","DEC_1950","SpT_prev","SpT_IR","SpT_adopted",
"Teff","AJ","Lbol","J-H","H-K","K","rK","BrGamma"]
tbl1 = pd.read_csv("http://iopscience.iop.org/0004-637X/525/1/440/fulltext/40180.tb1.txt",
sep="\t", na_values="\ldots", skiprows=1, names=names)
tbl1.RA_1950 = "16 "+tbl1.RA_1950
tbl1.DEC_1950 = "-24 "+tbl1.DEC_1950
tbl1.head()
len(tbl1)
"""
Explanation: Table 1 - Data for Spectroscopic Sample in ρ Ophiuchi
End of explanation
"""
! mkdir ../data/Luhman1999
tbl1.to_csv("../data/Luhman1999/tbl1.csv", index=False, sep='\t')
"""
Explanation: Save data
End of explanation
"""
|
Naereen/notebooks | Demo_of_RISE_for_slides_with_Jupyter_notebooks__Julia.ipynb | mit | from sys import version
print(version)
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Demo-of-RISE-for-slides-with-Jupyter-notebooks-(Python)" data-toc-modified-id="Demo-of-RISE-for-slides-with-Jupyter-notebooks-(Python)-1"><span class="toc-item-num">1 </span>Demo of RISE for slides with Jupyter notebooks (Python)</a></div><div class="lev2 toc-item"><a href="#Title-2" data-toc-modified-id="Title-2-11"><span class="toc-item-num">1.1 </span>Title 2</a></div><div class="lev3 toc-item"><a href="#Title-3" data-toc-modified-id="Title-3-111"><span class="toc-item-num">1.1.1 </span>Title 3</a></div><div class="lev4 toc-item"><a href="#Title-4" data-toc-modified-id="Title-4-1111"><span class="toc-item-num">1.1.1.1 </span>Title 4</a></div><div class="lev2 toc-item"><a href="#Text" data-toc-modified-id="Text-12"><span class="toc-item-num">1.2 </span>Text</a></div><div class="lev2 toc-item"><a href="#Maths" data-toc-modified-id="Maths-13"><span class="toc-item-num">1.3 </span>Maths</a></div><div class="lev2 toc-item"><a href="#And-code" data-toc-modified-id="And-code-14"><span class="toc-item-num">1.4 </span>And code</a></div><div class="lev1 toc-item"><a href="#More-demo-of-Markdown-code" data-toc-modified-id="More-demo-of-Markdown-code-2"><span class="toc-item-num">2 </span>More demo of Markdown code</a></div><div class="lev2 toc-item"><a href="#Lists" data-toc-modified-id="Lists-21"><span class="toc-item-num">2.1 </span>Lists</a></div><div class="lev4 toc-item"><a href="#Images" data-toc-modified-id="Images-2101"><span class="toc-item-num">2.1.0.1 </span>Images</a></div><div class="lev2 toc-item"><a href="#Some-code" data-toc-modified-id="Some-code-22"><span class="toc-item-num">2.2 </span>Some code</a></div><div class="lev4 toc-item"><a href="#And-Markdown-can-include-raw-HTML" data-toc-modified-id="And-Markdown-can-include-raw-HTML-2201"><span class="toc-item-num">2.2.0.1 </span>And Markdown can include raw HTML</a></div><div class="lev1 toc-item"><a href="#End-of-this-demo" 
data-toc-modified-id="End-of-this-demo-3"><span class="toc-item-num">3 </span>End of this demo</a></div>
# Demo of RISE for slides with Jupyter notebooks (Python)
- This document is an example of a slideshow, written in a [Jupyter notebook](https://www.jupyter.org/) with the [RISE extension](https://github.com/damianavila/RISE).
> By [Lilian Besson](http://perso.crans.org/besson/), Sept.2017.
---
## Title 2
### Title 3
#### Title 4
##### Title 5
##### Title 6
## Text
With text, *emphasis*, **bold**, ~~striked~~, `inline code` and
> *Quote.*
>
> -- By a guy.
## Maths
With inline math $\sin(x)^2 + \cos(x)^2 = 1$ and equations:
$$\sin(x)^2 + \cos(x)^2 = \left(\frac{\mathrm{e}^{ix} - \mathrm{e}^{-ix}}{2i}\right)^2 + \left(\frac{\mathrm{e}^{ix} + \mathrm{e}^{-ix}}{2}\right)^2 = \frac{-\mathrm{e}^{2ix}-\mathrm{e}^{-2ix}+2+\mathrm{e}^{2ix}+\mathrm{e}^{-2ix}+2}{4} = 1.$$
## And code
In Markdown:
```python
from sys import version
print(version)
```
And in an executable cell (with a Python 3 kernel):
End of explanation
"""
# https://gist.github.com/dm-wyncode/55823165c104717ca49863fc526d1354
"""Embed a YouTube video via its embed url into a notebook."""
from functools import partial
from IPython.display import display, IFrame
width, height = (560, 315, )
def _iframe_attrs(embed_url):
"""Get IFrame args."""
return (
('src', 'width', 'height'),
(embed_url, width, height, ),
)
def _get_args(embed_url):
"""Get args for type to create a class."""
iframe = dict(zip(*_iframe_attrs(embed_url)))
attrs = {
'display': partial(display, IFrame(**iframe)),
}
return ('YouTubeVideo', (object, ), attrs, )
def youtube_video(embed_url):
"""Embed YouTube video into a notebook.
Place this module into the same directory as the notebook.
>>> from embed import youtube_video
>>> youtube_video(url).display()
"""
YouTubeVideo = type(*_get_args(embed_url)) # make a class
return YouTubeVideo() # return an object
"""
Explanation: More demo of Markdown code
Lists
Unordered
lists
are easy.
And
and ordered also ! Just
start lines by 1., 2. etc
or simply 1., 1., ...
Images
With an HTML <img/> tag or Markdown image syntax:
<img width="100" src="agreg/images/dooku.jpg"/>
Some code
End of explanation
"""
youtube_video("https://www.youtube.com/embed/FNg5_2UUCNU").display()
"""
Explanation: And Markdown can include raw HTML
<center><span style="color: green;">This is a centered span, colored in green.</span></center>
Iframes are disabled by default, but by using the IPython internals we can include, let's say, a YouTube video:
End of explanation
"""
|
ProfessorKazarinoff/staticsite | content/code/matplotlib_plots/yy-plots.ipynb | gpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: In this post, you will learn how to create y-y plots with Python and Matplotlib. y-y plots are a type of line plot where one line corresponds to one y-axis and another line on the same plot corresponds to a different y-axis. y-y plots typically have one vertical y-axis on the left edge of the plot and one vertical y-axis on the right edge of the plot. Each one of these y-axes can have different scales and can correspond to different units.
Install Python, Matplotlib, NumPy, and Jupyter
We are going to build our y-y plot with Python and Matplotlib. Matplotlib is a popular plotting library for Python. Before we can build the plot, Python needs to be installed on our computer. See this post to review how to install Python. I recommend installing the Anaconda distribution of Python, rather than installing Python from Python.org.
Matplotlib can be installed using the Anaconda Prompt or using the command line and pip, the package manager for Python. Matplotlib comes pre-installed with the Anaconda distribution of Python. If you are using the Anaconda distribution of Python, no further installation steps are necessary to use Matplotlib.
Two other libraries we will use to build the plot are NumPy and Jupyter.
Type the command below into the Anaconda Prompt to install Matplotlib, NumPy, and Jupyter.
```text
conda install matplotlib numpy jupyter
```
The same three libraries can also be installed using a terminal and pip:
text
$ pip install matplotlib numpy jupyter
We'll use a Jupyter notebook to build the y-y plot. A .py file could be used as well. The Python code is the same (except for the %matplotlib inline command). I like using Jupyter notebooks to build plots because of the quick feedback, in-line plotting, and ability to easily see which plot corresponds to a block of code.
A new Jupyter notebook can be launched from the Anaconda Prompt or a terminal with the command:
```text
jupyter notebook
```
Imports
At the top of our Jupyter notebook, we'll include a section for imports. These import lines are necessary so that we can use Matplotlib and NumPy functions later in the script. Matplotlib will be used to build the plot. We will also use NumPy, a Python library for dealing with arrays to create our data.
End of explanation
"""
import matplotlib
import sys
print(f'Python version {sys.version}')
print(f'Matplotlib version {matplotlib.__version__}')
print(f'NumPy version {np.__version__}')
"""
Explanation: The versions of Python, NumPy, and Matplotlib can be printed out using the following code:
End of explanation
"""
x = np.arange(0,8*np.pi,0.1)
y1 = np.sin(x)
y2 = np.exp(x)
"""
Explanation: Data
We need data to include in our y-y plot. We'll start out by creating two NumPy arrays, one array for the sine function and one array for the exp function ($e^x$). These functions are good examples to use in a y-y plot because the range of values sin(x) and exp(x) output is very different.
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(x,y1)
ax.set_title('sin(x)')
plt.show()
"""
Explanation: Plot the two functions
Let's try and plot both of these functions. Each function can be plotted with Matplotlib's ax.plot() method. The two code cells below show a plot for each function in a separate figure window.
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(x,y2)
ax.set_title('exp(x)')
plt.show()
"""
Explanation: Above, we see a plot of the sine function. Note the y-values in the plot are all between -1 and 1. We can build a line plot of the exp function using very similar code.
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(x,y1)
ax.plot(x,y2)
ax.set_title('sin(x) and exp(x)')
ax.legend(['sin(x)','exp(x)'])
plt.show()
"""
Explanation: We see a plot of the exponential (exp) function. Note the y-values are between 0 and around 8e10. The range of y-values in the exp plot is much larger in the exp(x) plot than the range of values in the sin(x) plot.
Now let's try and plot these two functions on the same set of axes. The code below builds a plot with both functions on the same set of axes.
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,4))
ax1.plot(x,y1)
ax1.set_title('sin(x)')
ax2.plot(x,y2,'C1')
ax2.set_title('exp(x)')
plt.show()
"""
Explanation: We see a plot with two lines. The blue line represents the sine function and the orange line represents the exp function. Notice how the blue line looks flat. We can't see the variations in the sine wave.
A figure with two subplots
One way we could view both functions at the same time is to use subplots. The code below builds a figure with two subplots, one function in each subplot.
End of explanation
"""
fig, ax1 = plt.subplots()
ax1.plot(x,y1)
ax1.set_title('sin(x) and exp(x)')
ax2 = ax1.twinx()
ax2.plot(x,y2,'C1')
plt.show()
"""
Explanation: We see two plots side by side. Each plot shows a different function. The plot on the left shows the sine function, the plot on the right shows the exp function.
y-y plot
Now let's combine these two plots on the same set of axes, but with two different y-axes. The y-axis on the left will correspond to the sine function. The y-axis on the right will correspond to the exp function. The key line of code is ax2 = ax1.twinx(). This line of code creates a new axes object ax2 that shares the same x-axis with ax1. The .twinx() method is what allows us to create a y-y plot.
End of explanation
"""
fig, ax1 = plt.subplots()
ax1.plot(x,y1)
ax1.set_ylabel('sin(x)', color='C0')
ax1.tick_params(axis='y', color='C0', labelcolor='C0')
ax1.set_title('sin(x) and exp(x)')
ax2 = ax1.twinx()
ax2.plot(x,y2,'C1')
ax2.set_ylabel('exp(x)', color='C1')
ax2.tick_params(axis='y', color='C1', labelcolor='C1')
ax2.spines['right'].set_color('C1')
ax2.spines['left'].set_color('C0')
plt.show()
"""
Explanation: We see a plot of our two functions. In the plot above, we can clearly see both the sine function and the exp function. The y-axis that corresponds to the sine function is on the left. The y-axis that corresponds to the exp function is on the right.
Next, we'll color the axes and the axis labels to correspond to the line on the plot that it corresponds to.
Color each y-axis
It's nice to be able to see the shape of both functions. However, it isn't obvious which function (which color line) corresponds to which axis. We can color the axis, tick marks, and tick labels to clearly demonstrate which line on the plot corresponds to which one of the y-axes.
Note that in the code below, we don't set the spines color to blue on ax1 even though ax1 contains the blue line. This is because after we plot on ax1, the spines on ax2 "overlap" the spines on ax1. There is no overlap of tick_params, so those can be set on ax1.
End of explanation
"""
fig, ax1 = plt.subplots()
ax1.plot(x,y1)
ax1.set_ylabel('sin(x)', color='C0')
ax1.tick_params(axis='y', color='C0', labelcolor='C0')
ax1.set_title('sin(x) and exp(x)')
ax2 = ax1.twinx()
ax2.plot(x,y2,'C1')
ax2.set_ylabel('exp(x)', color='C1')
ax2.tick_params(axis='y', color='C1', labelcolor='C1')
ax2.spines['right'].set_color('C1')
ax2.spines['left'].set_color('C0')
fig.legend(['sin(x)','exp(x)'], bbox_to_anchor=(0.9, 0.8))
plt.show()
"""
Explanation: We see a plot with colored y-axis lines and colored y-axis labels. The blue line corresponds to the blue left-hand axis and labels. The orange line corresponds to the orange right-hand axis and labels.
Next, we'll add a legend to our plot.
Add a legend to the y-y plot
The color of the axis, tick marks, and tick labels correspond to the color of the line on our plot. But it would be nice to include a legend on the plot so that others could see which line corresponds to sine and which line corresponds to exp.
The problem is that if we call ax.legend(['sin','exp']) we only end up with a legend with one entry. To get around this problem, we apply the legend to the entire figure object fig. The line fig.legend(['sin(x)','exp(x)'], ...) creates the legend. bbox_to_anchor=(0.9, 0.8) is added as a keyword argument to ensure the legend is within the space of the plot and not way off to the side. This is because we are applying the legend relative to the figure window and not relative to the axes.
End of explanation
"""
fig, ax1 = plt.subplots()
line1 = ax1.plot(x,y1)
ax1.set_ylabel('sin(x)', color='C0')
ax1.tick_params(axis='y', color='C0', labelcolor='C0')
ax1.set_title('sin(x) and exp(x)')
ax2 = ax1.twinx()
line2 = ax2.plot(x,y2,'C1')
ax2.set_ylabel('exp(x)', color='C1')
ax2.tick_params(axis='y', color='C1', labelcolor='C1')
ax2.spines['right'].set_color('C1')
ax2.spines['left'].set_color('C0')
lines = line1 + line2
ax2.legend(lines, ['sin(x)','exp(x)'])
plt.show()
"""
Explanation: We see a plot with two lines, two different color y-axes, and a legend. One more thing we can try is to create the legend in an alternate way.
Alternative way to add a legend to a y-y plot
An alternative way to add a legend to a y-y plot is to specify exactly which line objects should go into the legend.
Usually, when the ax.plot() method is called, the output of it is not saved to a variable. But if we save the output of the ax.plot() method, we can feed it into our ax1.legend() call in order to specify which lines should be included in the legend. The key line in the code below is lines = line1 + line2. Then we use the lines object when we call ax1.legend(). The code below creates a y-y plot with a legend in an alternate way.
End of explanation
"""
|
KnHuq/Dynamic-Tensorflow-Tutorial | Vhanilla_RNN/RNN.ipynb | mit | import numpy as np
import tensorflow as tf
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
import pylab as pl
from IPython import display
import sys
%matplotlib inline
"""
Explanation: <span style="color:green"> VANILLA RNN ON 8*8 MNIST DATASET TO PREDICT TEN CLASSES
<span style="color:blue">It's a dynamic-sequence, dynamic-batch vanilla RNN, built with TensorFlow's scan and map higher-order ops.
<span style="color:blue">This is a base RNN which can be used to create a GRU, LSTM, Neural Stack Machine, Neural Turing Machine, RNN-EM, and so on.
Importing Libraries
End of explanation
"""
class RNN_cell(object):
"""
RNN cell object which takes 3 arguments for initialization.
input_size = Input Vector size
hidden_layer_size = Hidden layer size
target_size = Output vector size
"""
def __init__(self, input_size, hidden_layer_size, target_size):
# Initialization of given values
self.input_size = input_size
self.hidden_layer_size = hidden_layer_size
self.target_size = target_size
# Weights and Bias for input and hidden tensor
self.Wx = tf.Variable(tf.zeros(
[self.input_size, self.hidden_layer_size]))
self.Wh = tf.Variable(tf.zeros(
[self.hidden_layer_size, self.hidden_layer_size]))
self.bi = tf.Variable(tf.zeros([self.hidden_layer_size]))
# Weights for output layers
self.Wo = tf.Variable(tf.truncated_normal(
[self.hidden_layer_size, self.target_size],mean=0,stddev=.01))
self.bo = tf.Variable(tf.truncated_normal([self.target_size],mean=0,stddev=.01))
# Placeholder for input vector with shape[batch, seq, embeddings]
self._inputs = tf.placeholder(tf.float32,
shape=[None, None, self.input_size],
name='inputs')
# Processing inputs to work with scan function
self.processed_input = process_batch_input_for_RNN(self._inputs)
'''
The initial hidden state's shape is [1, self.hidden_layer_size].
At the first time step, we take a dot product with the weights to
get the shape [batch_size, self.hidden_layer_size].
For this dot product TensorFlow uses broadcasting, but during
backpropagation a low-level error occurs.
To work around the problem, the initial hidden state needs to be
initialized with size [batch_size, self.hidden_layer_size].
So here is a little hack: build a same-shaped
initial hidden state of zeros.
'''
self.initial_hidden = self._inputs[:, 0, :]
self.initial_hidden = tf.matmul(
self.initial_hidden, tf.zeros([input_size, hidden_layer_size]))
# Function for the vanilla RNN step.
def vanilla_rnn(self, previous_hidden_state, x):
"""
This function takes previous hidden state and input and
outputs current hidden state.
"""
current_hidden_state = tf.tanh(
tf.matmul(previous_hidden_state, self.Wh) +
tf.matmul(x, self.Wx) + self.bi)
return current_hidden_state
# Function for getting all hidden state.
def get_states(self):
"""
Iterates through time/ sequence to get all hidden state
"""
# Getting all hidden state throuh time
all_hidden_states = tf.scan(self.vanilla_rnn,
self.processed_input,
initializer=self.initial_hidden,
name='states')
return all_hidden_states
# Function to get output from a hidden layer
def get_output(self, hidden_state):
"""
This function takes hidden state and returns output
"""
output = tf.nn.relu(tf.matmul(hidden_state, self.Wo) + self.bo)
return output
# Function for getting all output layers
def get_outputs(self):
"""
Iterating through hidden states to get outputs for all timestamp
"""
all_hidden_states = self.get_states()
all_outputs = tf.map_fn(self.get_output, all_hidden_states)
return all_outputs
# Function to convert batch input data to use scan ops of tensorflow.
def process_batch_input_for_RNN(batch_input):
"""
Process a batch-major tensor of shape [batch, seq, embed] (e.g. [5,3,2]) into a time-major tensor [seq, batch, embed] (e.g. [3,5,2]) for tf.scan
"""
batch_input_ = tf.transpose(batch_input, perm=[2, 0, 1])
X = tf.transpose(batch_input_)
return X
"""
Explanation: Vanilla RNN class and functions
End of explanation
"""
hidden_layer_size = 110
input_size = 8
target_size = 10
y = tf.placeholder(tf.float32, shape=[None, target_size],name='inputs')
"""
Explanation: Placeholder and initializers
End of explanation
"""
#Initializing rnn object
rnn=RNN_cell( input_size, hidden_layer_size, target_size)
#Getting all outputs from rnn
outputs = rnn.get_outputs()
#Getting the final output by indexing the last time step
last_output = outputs[-1]
#As the RNN model emits the last layer through a ReLU activation, softmax is applied for the final output.
output=tf.nn.softmax(last_output)
#Computing the Cross Entropy loss
cross_entropy = -tf.reduce_sum(y * tf.log(output))
# Training with the Adam optimizer
train_step = tf.train.AdamOptimizer().minimize(cross_entropy)
#Calculation of correct predictions and accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(output,1))
accuracy = (tf.reduce_mean(tf.cast(correct_prediction, tf.float32)))*100
"""
Explanation: Models
End of explanation
"""
sess=tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
#Using Sklearn MNIST dataset.
digits = load_digits()
X=digits.images
Y_=digits.target
# One hot encoding
Y = sess.run(tf.one_hot(indices=Y_, depth=target_size))
#Getting Train and test Dataset
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.22, random_state=42)
#Truncating for simple batch iteration
X_train=X_train[:1400]
y_train=y_train[:1400]
#Iterations to do the training
for epoch in range(120):
start=0
end=100
for i in range(14):
X=X_train[start:end]
Y=y_train[start:end]
start=end
end=start+100
sess.run(train_step,feed_dict={rnn._inputs:X, y:Y})
Loss=str(sess.run(cross_entropy,feed_dict={rnn._inputs:X, y:Y}))
Train_accuracy=str(sess.run(accuracy,feed_dict={rnn._inputs:X_train, y:y_train}))
Test_accuracy=str(sess.run(accuracy,feed_dict={rnn._inputs:X_test, y:y_test}))
pl.plot([epoch],Loss,'b.',)
pl.plot([epoch],Train_accuracy,'r*',)
pl.plot([epoch],Test_accuracy,'g+')
display.clear_output(wait=True)
display.display(pl.gcf())
sys.stdout.flush()
print("\rIteration: %s Loss: %s Train Accuracy: %s Test Accuracy: %s"%(epoch,Loss,Train_accuracy,Test_accuracy)),
sys.stdout.flush()
"""
Explanation: Dataset Preparation
End of explanation
"""
|
empet/Plotly-plots | Europe-Happiness.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
import matplotlib.cm as cm
df = pd.read_excel('Data/Europe-Happiness.xlsx')
df.head()
N = len(df)
score = df['Score'].values
country = list(df.Country)
country[13] = 'U Kingdom'
country[14] = 'Czech Rep'
country[38] = 'Bosn Herzeg'
world_rk = list(df['World-Rank'])
import plotly.plotly as py
import plotly.graph_objs as go
"""
Explanation: Europe Happiness
In this notebook we visualize European countries' happiness scores. The data are retrieved from the World Happiness Report 2016.
End of explanation
"""
def map_z2color(val, colormap, vmin, vmax):
#map the normalized value val to a corresponding color in the mpl colormap
if vmin >= vmax:
raise ValueError('incorrect relation between vmin and vmax')
t = (val-vmin) / (vmax-vmin) #normalize val
C = (np.array(colormap(t)[:3])*255).astype(np.uint8)
#convert to a Plotly color code:
return f'rgb{tuple(C)}'
def set_layout(title, plot_size):# set plot layout
return go.Layout(title=title,
font=dict(size=12),
xaxis=dict(visible=False, range=[-3.5, 4]),
yaxis=dict(visible=False, range=[-4.5, 3.5]),
showlegend=False,
width=plot_size,
height=plot_size,
hovermode='closest')
def set_annotation(x, y, anno_text, angle, fontsize=11): # annotations
return dict(x= x,
y= y,
text= anno_text,
textangle=angle, # angle in degrees
font=dict(size=fontsize),
showarrow=False
)
"""
Explanation: Function that maps a normalized happiness score to a color in a given matplotlib colormap:
End of explanation
"""
bar_height = [score[k]/4.75 for k in range(N)]
"""
Explanation: We visualize the countries' happiness through a circular barchart. The height of each bar is proportional to the corresponding happiness score:
End of explanation
"""
theta = [np.pi/2+(2*k+1)*np.pi/72 for k in range(N)] # angular position of base bar centers
xi = [np.cos(theta[k]) for k in range(N)]# starting bar position
yi = [np.sin(theta[k]) for k in range(N)]
xf = [(bar_height[k]+1)*np.cos(theta[k]) for k in range(N)]#end bar position
yf = [(bar_height[k]+1)*np.sin(theta[k]) for k in range(N)]
xm = [(xi[k]+xf[k])*0.5 for k in range(N)]#mid bar position for inserting hovering text
ym = [(yi[k]+yf[k])*0.5 for k in range(N)]
xpos_t = [(bar_height[k]+1.32)*np.cos(theta[k]) for k in range(N)]#text position
ypos_t = [(bar_height[k]+1.32)*np.sin(theta[k]) for k in range(N)]
"""
Explanation: The bars are inserted along the unit circle, starting with the angular position $\pi/2$, in the anti-clockwise direction:
End of explanation
"""
import matplotlib.cm as cm
cmap = cm.viridis
vmin = score[-1]
vmax = score[0]
bar_colors = [map_z2color(score[k], cmap, score[-1], score[0]) for k in range(N)]
"""
Explanation: The decreased level of happiness is illustrated through the Viridis colormap:
End of explanation
"""
text = [f'{country[k]}<br>Score: {round(score[k],3)}<br>World rank: {world_rk[k]}' for k in range(len(score))]
"""
Explanation: Define the hover text:
End of explanation
"""
trace = go.Scatter(x=xm,
y=ym,
name='',
mode='markers' ,
marker=dict(size=0.05, color=bar_colors[15]),
text=text,
hoverinfo='text')
tracet = go.Scatter(x=xf,
y=yf,
name='',
mode='markers' ,
marker=dict(size=0.05, color=bar_colors[15]),
text=text,
hoverinfo='text')
traceB = [go.Scatter(x=[xi[k], xf[k], None], # Circular bars are drawn as lines of width 9
y=[yi[k], yf[k], None],
mode='lines',
line=dict(color=bar_colors[k], width=9.6),
hoverinfo='none') for k in range(N)]
title = "Europe Happiness Score and Global Ranking"+\
"<br>Data: World Happiness Report, 2016"+\
"<a href='http://worldhappiness.report/wp-content/uploads/sites/2/2016/03/HR-V1_web.pdf'> [1]</a>"
layout = set_layout('', 900)
# angles in degrees, computed following Plotly's reversed trigonometric rule
annotext_angle = [(-(180*np.arctan(yi[k]/xi[k])/np.pi)) for k in range(N)]
"""
Explanation: Set position of the hover text inside bars:
End of explanation
"""
annot = []
for k in range(N):
annot += [set_annotation(xpos_t[k], ypos_t[k], country[k], annotext_angle[k], fontsize=10)]
annot += [set_annotation(0.5, 0.1, title, 0 ) ]
fig = go.FigureWidget(data=traceB+[trace, tracet], layout=layout)
fig.layout.update(annotations=annot,
plot_bgcolor='rgb(245,245,245)');
fig
"""
Explanation: Define and insert annotations in plot layout:
End of explanation
"""
|
nifannn/MachineLearningNotes | MathNotes/LinearAlgebra.ipynb | mit | import numpy as np
"""
Explanation: Linear Algebra Review Notes
End of explanation
"""
A = np.array([[1,1,1], [3,1,2], [2,3,4]])
b = np.array([6, 11, 20])
A
b
x = np.linalg.solve(A, b)
x
"""
Explanation: Gaussian Elimination
one way to solve $ Ax = b $
For example:
$ \begin{bmatrix}1 & 1 & 1 \ 3 & 1 & 2 \ 2 & 3 & 4 \ \end{bmatrix} \begin{bmatrix}x_1 \ x_2 \ x_3 \ \end{bmatrix} = \begin{bmatrix} 6 \ 11 \ 20 \ \end{bmatrix},\; A = \begin{bmatrix}1 & 1 & 1 \ 3 & 1 & 2 \ 2 & 3 & 4 \ \end{bmatrix},\; x = \begin{bmatrix}x_1 \ x_2 \ x_3 \ \end{bmatrix},\; b = \begin{bmatrix} 6 \ 11 \ 20 \ \end{bmatrix}$
$ [A, b] = \begin{bmatrix}1 & 1 & 1 & 6\ 3 & 1 & 2 & 11\ 2 & 3 & 4 & 20\ \end{bmatrix}$
row2 - row1 * 3: $ \begin{bmatrix}1 & 1 & 1 & 6\ 0 & -2 & -1 & -7\ 2 & 3 & 4 & 20\ \end{bmatrix} $
row3 - row1 * 2: $ \begin{bmatrix}1 & 1 & 1 & 6\ 0 & -2 & -1 & -7\ 0 & 1 & 2 & 8\ \end{bmatrix} $
exchange row2 with row3: $ \begin{bmatrix}1 & 1 & 1 & 6\ 0 & 1 & 2 & 8\ 0 & -2 & -1 & -7\ \end{bmatrix} $
row1 - row2: $ \begin{bmatrix}1 & 0 & -1 & -2\ 0 & 1 & 2 & 8\ 0 & -2 & -1 & -7\ \end{bmatrix} $
row3 + row2 * 2: $ \begin{bmatrix}1 & 0 & -1 & -2\ 0 & 1 & 2 & 8\ 0 & 0 & 3 & 9\ \end{bmatrix} $
divide row3 by 3: $ \begin{bmatrix}1 & 0 & -1 & -2\ 0 & 1 & 2 & 8\ 0 & 0 & 1 & 3\ \end{bmatrix} $
row1 + row3: $ \begin{bmatrix}1 & 0 & 0 & 1\ 0 & 1 & 2 & 8\ 0 & 0 & 1 & 3\ \end{bmatrix} $
row2 - row3 * 2: $ \begin{bmatrix}1 & 0 & 0 & 1\ 0 & 1 & 0 & 2\ 0 & 0 & 1 & 3\ \end{bmatrix} $
$ x = \begin{bmatrix} x_1 \ x_2 \ x_3 \ \end{bmatrix} = \begin{bmatrix} 1 \ 2 \ 3 \ \end{bmatrix}$
End of explanation
"""
A = np.matrix([[1,1,1], [3,1,2], [2,3,4]])
A
np.linalg.inv(A)
"""
Explanation: Gauss-Jordan Elimination
one way to find $A^{-1}$
For example:
$ A = \begin{bmatrix}1 & 1 & 1 \ 3 & 1 & 2 \ 2 & 3 & 4 \ \end{bmatrix} $
$ [A, I] = \begin{bmatrix}1 & 1 & 1 & 1 & 0 & 0\ 3 & 1 & 2 & 0 & 1 & 0\ 2 & 3 & 4 & 0 & 0 & 1\ \end{bmatrix}$
row2 - row1 * 3: $ \begin{bmatrix}1 & 1 & 1 & 1 & 0 & 0\ 0 & -2 & -1 & -3 & 1 & 0\ 2 & 3 & 4 & 0 & 0 & 1\ \end{bmatrix} $
row3 - row1 * 2: $ \begin{bmatrix}1 & 1 & 1 & 1 & 0 & 0\ 0 & -2 & -1 & -3 & 1 & 0\ 0 & 1 & 2 & -2 & 0 & 1\ \end{bmatrix} $
exchange row2 with row3: $ \begin{bmatrix}1 & 1 & 1 & 1 & 0 & 0\ 0 & 1 & 2 & -2 & 0 & 1\ 0 & -2 & -1 & -3 & 1 & 0\ \end{bmatrix} $
row1 - row2: $ \begin{bmatrix}1 & 0 & -1 & 3 & 0 & -1\ 0 & 1 & 2 & -2 & 0 & 1\ 0 & -2 & -1 & -3 & 1 & 0\ \end{bmatrix} $
row3 + row2 * 2: $ \begin{bmatrix}1 & 0 & -1 & 3 & 0 & -1\ 0 & 1 & 2 & -2 & 0 & 1\ 0 & 0 & 3 & -7 & 1 & 2\ \end{bmatrix} $
divide row3 by 3: $ \begin{bmatrix}1 & 0 & -1 & 3 & 0 & -1\ 0 & 1 & 2 & -2 & 0 & 1\ 0 & 0 & 1 & -\frac{7}{3} & \frac{1}{3} & \frac{2}{3} \ \end{bmatrix} $
row1 + row3: $ \begin{bmatrix}1 & 0 & 0 & \frac{2}{3} & \frac{1}{3} & -\frac{1}{3}\ 0 & 1 & 2 & -2 & 0 & 1\ 0 & 0 & 1 & -\frac{7}{3} & \frac{1}{3} & \frac{2}{3}\ \end{bmatrix} $
row2 - row3 * 2: $ \begin{bmatrix}1 & 0 & 0 & \frac{2}{3} & \frac{1}{3} & -\frac{1}{3}\ 0 & 1 & 0 & \frac{8}{3} & -\frac{2}{3} & -\frac{1}{3}\ 0 & 0 & 1 & -\frac{7}{3} & \frac{1}{3} & \frac{2}{3}\ \end{bmatrix} $
End of explanation
"""
A = np.matrix([[1,2,2],[2,4,1],[3,6,4]])
A
np.linalg.matrix_rank(A)
"""
Explanation: Column space
All linear combinations of columns
$ Ax = b $ is solvable if and only if b is in the column space of A
Nullspace
Solutions about Ax = 0
Solutions about Ax = b
Rank
It is the number of pivots
It can tell the number of independent columns
It is the dimension of column space, row space and nullspace
For example:
$ A = \begin{bmatrix}
1 & 2 & 2 \
2 & 4 & 1 \
3 & 6 & 4 \
\end{bmatrix}$
row2 - row1 * 2 : $\begin{bmatrix}
1 & 2 & 2 \
0 & 0 & -3 \
3 & 6 & 4 \
\end{bmatrix}$
row3 - row1 * 3 : $\begin{bmatrix}
1 & 2 & 2 \
0 & 0 & -3 \
0 & 0 & -2 \
\end{bmatrix}$
row3 - row2 * 3/2 : $ \begin{bmatrix}
1 & 2 & 2 \
0 & 0 & -3 \
0 & 0 & 0 \
\end{bmatrix} $
So, $ rank(A) = 2 $
End of explanation
"""
A = np.matrix([[1,2,3], [4,5,6], [7,8,9]])
A
A.transpose()
A.T
"""
Explanation: Projection Matrix
$ P = A(A^TA)^{-1}A^T$
Transpose
$A^T (i,j) = A(j,i)$
For example:
$ A = \begin{bmatrix} 1 & 2 & 3 \ 4 & 5 & 6 \ 7 & 8 & 9 \ \end{bmatrix} ,\; A^T = \begin{bmatrix}1 & 4 & 7 \ 2 & 5 & 8 \ 3 & 6 & 9 \ \end{bmatrix}$
End of explanation
"""
|
jorgemauricio/INIFAP_Course | ejercicios/Seaborn/6_Color_Estilo.ipynb | mit | import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
tips = sns.load_dataset('tips')
"""
Explanation: Styles and color
This exercise demonstrates how to control the aesthetics of seaborn plots:
End of explanation
"""
sns.countplot(x='sex',data=tips)
sns.set_style('white')
sns.countplot(x='sex',data=tips)
sns.set_style('ticks')
sns.countplot(x='sex',data=tips,palette='deep')
"""
Explanation: Styles
A particular style can be applied:
End of explanation
"""
sns.countplot(x='sex',data=tips)
sns.despine()
sns.countplot(x='sex',data=tips)
sns.despine(left=True)
"""
Explanation: Removing spines
End of explanation
"""
# Plot without a grid
plt.figure(figsize=(12,3))
sns.countplot(x='sex',data=tips)
# Plot with a grid
sns.lmplot(x='total_bill',y='tip',size=2,aspect=4,data=tips)
"""
Explanation: Size and aspect
The matplotlib function plt.figure(figsize=(width,height)) can be used to change the size of most seaborn plots.
To control the size and aspect ratio it is only necessary to pass the size and aspect parameters, for example:
End of explanation
"""
sns.set_context('poster',font_scale=4)
sns.countplot(x='sex',data=tips,palette='coolwarm')
"""
Explanation: Scale and context
The set_context() function allows overriding the base parameters:
End of explanation
"""
|
zaqwes8811/micro-apps | self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/00-Preface.ipynb | mit | from __future__ import division, print_function
%matplotlib inline
#format the book
import book_format
book_format.set_style()
"""
Explanation: Table of Contents
Preface
End of explanation
"""
import numpy as np
x = np.array([1, 2, 3])
print(type(x))
x
"""
Explanation: Introductory textbook for Kalman filters and Bayesian filters. The book is written using Jupyter Notebook so you may read the book in your browser and also run and modify the code, seeing the results inside the book. What better way to learn?
Kalman and Bayesian Filters
Sensors are noisy. The world is full of data and events that we want to measure and track, but we cannot rely on sensors to give us perfect information. The GPS in my car reports altitude. Each time I pass the same point in the road it reports a slightly different altitude. My kitchen scale gives me different readings if I weigh the same object twice.
In simple cases the solution is obvious. If my scale gives slightly different readings I can just take a few readings and average them. Or I can replace it with a more accurate scale. But what do we do when the sensor is very noisy, or the environment makes data collection difficult? We may be trying to track the movement of a low flying aircraft. We may want to create an autopilot for a drone, or ensure that our farm tractor seeded the entire field. I work on computer vision, and I need to track moving objects in images, and the computer vision algorithms create very noisy and unreliable results.
This book teaches you how to solve these sorts of filtering problems. I use many different algorithms, but they are all based on Bayesian probability. In simple terms Bayesian probability determines what is likely to be true based on past information.
If I asked you the heading of my car at this moment you would have no idea. You'd proffer a number between 1$^\circ$ and 360$^\circ$ degrees, and have a 1 in 360 chance of being right. Now suppose I told you that 2 seconds ago its heading was 243$^\circ$. In 2 seconds my car could not turn very far so you could make a far more accurate prediction. You are using past information to more accurately infer information about the present or future.
The world is also noisy. That prediction helps you make a better estimate, but it also subject to noise. I may have just braked for a dog or swerved around a pothole. Strong winds and ice on the road are external influences on the path of my car. In control literature we call this noise though you may not think of it that way.
There is more to Bayesian probability, but you have the main idea. Knowledge is uncertain, and we alter our beliefs based on the strength of the evidence. Kalman and Bayesian filters blend our noisy and limited knowledge of how a system behaves with the noisy and limited sensor readings to produce the best possible estimate of the state of the system. Our principle is to never discard information.
Say we are tracking an object and a sensor reports that it suddenly changed direction. Did it really turn, or is the data noisy? It depends. If this is a jet fighter we'd be very inclined to believe the report of a sudden maneuver. If it is a freight train on a straight track we would discount it. We'd further modify our belief depending on how accurate the sensor is. Our beliefs depend on the past and on our knowledge of the system we are tracking and on the characteristics of the sensors.
The Kalman filter was invented by Rudolf Emil Kálmán to solve this sort of problem in a mathematically optimal way. Its first use was on the Apollo missions to the moon, and since then it has been used in an enormous variety of domains. There are Kalman filters in aircraft, on submarines, and on cruise missiles. Wall street uses them to track the market. They are used in robots, in IoT (Internet of Things) sensors, and in laboratory instruments. Chemical plants use them to control and monitor reactions. They are used to perform medical imaging and to remove noise from cardiac signals. If it involves a sensor and/or time-series data, a Kalman filter or a close relative to the Kalman filter is usually involved.
Motivation for this Book
I'm a software engineer that spent almost two decades in aerospace, and so I have always been 'bumping elbows' with the Kalman filter, but never implemented one. They've always had a fearsome reputation for difficulty. The theory is beautiful, but quite difficult to learn if you are not already well trained in topics such as signal processing, control theory, probability and statistics, and guidance and control theory. As I moved into solving tracking problems with computer vision the need to implement them myself became urgent.
There are excellent textbooks in the field, such as Grewal and Andrews' Kalman Filtering. But sitting down and trying to read many of these books is a dismal and trying experience if you do not have the necessary background. Typically the first few chapters fly through several years of undergraduate math, blithely referring you to textbooks on Itō calculus, and presenting an entire semester's worth of statistics in a few brief paragraphs. They are textbooks for an upper undergraduate or graduate level course, and an invaluable reference to researchers and professionals, but the going is truly difficult for the more casual reader. Notation is introduced without explanation, different texts use different words and variable names for the same concept, and the books are almost devoid of examples or worked problems. I often found myself able to parse the words and comprehend the mathematics of a definition, but had no idea as to what real world phenomena these words and math were attempting to describe. "But what does that mean?" was my repeated thought. Here are typical examples which once puzzled me:
$$\begin{aligned}
\hat{x}_{k} &= \Phi_{k}\hat{x}_{k-1} + G_k u_{k-1} + K_k [z_k - H \Phi_{k} \hat{x}_{k-1} - H G_k u_{k-1}] \\
\mathbf{P}_{k\mid k} &= (I - \mathbf{K}_k \mathbf{H}_{k})\textrm{cov}(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1})(I - \mathbf{K}_k \mathbf{H}_{k})^{\text{T}} + \mathbf{K}_k\textrm{cov}(\mathbf{v}_k)\mathbf{K}_k^{\text{T}}
\end{aligned}$$
However, as I began to finally understand the Kalman filter I realized the underlying concepts are quite straightforward. If you know a few simple probability rules, and have some intuition about how we fuse uncertain knowledge, the concepts of the Kalman filter are accessible. Kalman filters have a reputation for difficulty, but shorn of much of the formal terminology the beauty of the subject and of their math became clear to me, and I fell in love with the topic.
As I began to understand the math and theory more difficulties appeared. A book or paper will make some statement of fact and present a graph as proof. Unfortunately, why the statement is true is not clear to me, or I cannot reproduce the plot. Or maybe I wonder "is this true if R=0?" Or the author provides pseudocode at such a high level that the implementation is not obvious. Some books offer Matlab code, but I do not have a license to that expensive package. Finally, many books end each chapter with many useful exercises. Exercises which you need to understand if you want to implement Kalman filters for yourself, but exercises with no answers. If you are using the book in a classroom, perhaps this is okay, but it is terrible for the independent reader. I loathe that an author withholds information from me, presumably to avoid 'cheating' by the student in the classroom.
All of this impedes learning. I want to track an image on a screen, or write some code for my Arduino project. I want to know how the plots in the book are made, and to choose different parameters than the author chose. I want to run simulations. I want to inject more noise into the signal and see how a filter performs. There are thousands of opportunities for using Kalman filters in everyday code, and yet this fairly straightforward topic is the provenance of rocket scientists and academics.
I wrote this book to address all of those needs. This is not the sole book for you if you design military radars. Go get a Masters or PhD at a great STEM school, because you'll need it. This book is for the hobbyist, the curious, and the working engineer that needs to filter or smooth data. If you are a hobbyist this book should provide everything you need. If you are serious about Kalman filters you'll need more. My intention is to introduce enough of the concepts and mathematics to make the textbooks and papers approachable.
This book is interactive. While you can read it online as static content, I urge you to use it as intended. It is written using Jupyter Notebook. This allows me to combine text, math, Python, and Python output in one place. Every plot, every piece of data in this book is generated from Python inside the notebook. Want to double the value of a parameter? Just change the parameter's value, and press CTRL-ENTER. A new plot or printed output will appear.
This book has exercises, but it also has the answers. I trust you. If you just need an answer, go ahead and read the answer. If you want to internalize this knowledge, try to implement the exercise before you read the answer. Since the book is interactive, you enter and run your solution inside the book - you don't have to move to a different environment, or deal with importing a bunch of stuff before starting.
This book is free. I've spent several thousand dollars on Kalman filtering books. I cannot believe they are within the reach of someone in a depressed economy or a financially struggling student. I have gained so much from free software like Python, and free books like those from Allen B. Downey [1]. It's time to repay that. So, the book is free, it is hosted on free servers at GitHub, and it uses only free and open software such as IPython and MathJax.
Reading Online
<b>GitHub</b>
The book is hosted on GitHub, and you can read any chapter by clicking on its name. GitHub statically renders Jupyter Notebooks. You will not be able to run or alter the code, but you can read all of the content.
The GitHub pages for this project are at
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
<b>binder</b>
binder serves interactive notebooks online, so you can run the code and change the code within your browser without downloading the book or installing Jupyter. Use this link to access the book via binder:
http://mybinder.org/repo/rlabbe/Kalman-and-Bayesian-Filters-in-Python
<b>nbviewer</b>
The nbviewer website will render any Notebook in a static format. I find it does a slightly better job than the GitHub renderer, but it is slightly harder to use. It accesses GitHub directly; whatever I have checked into GitHub will be rendered by nbviewer.
You may access this book via nbviewer here:
http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb
PDF Version
I periodically generate a PDF of the book from the notebooks. You can access it here:
https://drive.google.com/file/d/0By_SW19c1BfhSVFzNHc0SjduNzg/view?usp=sharing
Downloading and Running the Book
However, this book is intended to be interactive and I recommend using it in that form. It's a little more effort to set up, but worth it. If you install IPython and some supporting libraries on your computer and then clone this book you will be able to run all of the code in the book yourself. You can perform experiments, see how filters react to different data, see how different filters react to the same data, and so on. I find this sort of immediate feedback both vital and invigorating. You do not have to wonder "what happens if". Try it and see!
Instructions for installation can be found in the Installation appendix, found here.
Once the software is installed you can navigate to the installation directory and run Jupyter notebook with the command line instruction
jupyter notebook
This will open a browser window showing the contents of the base directory. The book is organized into chapters. Each chapter is named xx-name.ipynb, where xx is the chapter number. .ipynb is the Notebook file extension. To read Chapter 2, click on the link for chapter 2. This will cause the browser to open that subdirectory. In each subdirectory there will be one or more IPython Notebooks (all notebooks have a .ipynb file extension). The chapter contents are in the notebook with the same name as the chapter name. There are sometimes supporting notebooks for doing things like generating animations that are displayed in the chapter. These are not intended to be read by the end user, but of course if you are curious as to how an animation is made go ahead and take a look.
Admittedly this is a cumbersome interface to a book. I am following in the footsteps of several other projects that are re-purposing Jupyter Notebook to generate entire books. I feel the slight annoyances have a huge payoff - instead of having to download a separate code base and run it in an IDE while you try to read a book, all of the code and text is in one place. If you want to alter the code, you may do so and immediately see the effects of your change. If you find a bug, you can make a fix, and push it back to my repository so that everyone in the world benefits. And, of course, you will never encounter a problem I face all the time with traditional books - the book and the code are out of sync with each other, and you are left scratching your head as to which source to trust.
Jupyter
First, some words about using Jupyter Notebooks with this book. This book is interactive. If you want to run code examples, and especially if you want to see animated plots, you will need to run the code cells. I cannot teach you everything about Jupyter Notebooks. However, a few things trip readers up. You can go to http://jupyter.org/ for detailed documentation.
First, you must always run the topmost code cell, the one with the comment #format the book. It is directly above. This does not just set up formatting, which you might not care about, but it also loads some necessary modules and makes some global settings regarding plotting and printing. So, always run this cell unless you are just passively reading. The import from __future__ helps Python 2.7 work like Python 3.X. Division of integers will return a float (3/10 == 0.3) instead of an int (3/10 == 0), and printing requires parens: print(3), not print 3. The line
python
%matplotlib inline
causes plots to be displayed inside the notebook. Matplotlib is a plotting package which is described below. For reasons I don't understand the default behavior of Jupyter Notebooks is to generate plots in an external window.
The percent sign in %matplotlib is used for IPython magic - these are commands to the kernel to do things that are not part of the Python language. There are many useful magic commands, and you can read about them here: http://ipython.readthedocs.io/en/stable/interactive/magics.html
Running the code inside a cell is easy. Click on it so that it has focus (a box will be drawn around it), and then press CTRL-Enter.
Second, cells must be run in order. I break problems up over several cells; if you try to just skip down and run the tenth code cell it almost certainly won't work. If you haven't run anything yet just choose Run All Above from the Cell menu item. That's the easiest way to ensure everything has been run.
Once cells are run you can often jump around and rerun cells in different orders, but not always. I'm trying to fix this, but there is a tradeoff. I'll define a variable in cell 10 (say), and then run code that modifies that variable in cells 11 and 12. If you go back and run cell 11 again the variable will have the value that was set in cell 12, and the code expects the value that was set in cell 10. So, occasionally you'll get weird results if you run cells out of order. My advice is to backtrack a bit, and run cells in order again to get back to a proper state. It's annoying, but the interactive aspect of Jupyter notebooks more than makes up for it. Better yet, submit an issue on GitHub so I know about the problem and fix it!
Finally, some readers have reported problems with the animated plotting features in some browsers. I have not been able to reproduce this. In parts of the book I use the %matplotlib notebook magic, which enables interactive plotting. If these plots are not working for you, try changing this to read %matplotlib inline. You will lose the animated plotting, but it seems to work on all platforms and browsers.
SciPy, NumPy, and Matplotlib
SciPy is an open source collection of software for mathematics. Included in SciPy are NumPy, which provides array objects, linear algebra, random numbers, and more. Matplotlib provides plotting of NumPy arrays. SciPy's modules duplicate some of the functionality in NumPy while adding features such as optimization, image processing, and more.
To keep my efforts for this book manageable I have elected to assume that you know how to program in Python, and that you also are familiar with these packages. Nonetheless, I will take a few moments to illustrate a few features of each; realistically you will have to find outside sources to teach you the details. The home page for SciPy, https://scipy.org, is the perfect starting point, though you will soon want to search for relevant tutorials and/or videos.
NumPy, SciPy, and Matplotlib do not come with the default Python distribution; see the Installation Appendix if you do not have them installed.
I use NumPy's array data structure throughout the book, so let's learn about them now. I will teach you enough to get started; refer to NumPy's documentation if you want to become an expert.
numpy.array implements a one or more dimensional array. Its type is numpy.ndarray, and we will refer to this as an ndarray for short. You can construct it with any list-like object. The following constructs a 1-D array from a list:
End of explanation
"""
x = np.array((4,5,6))
x
"""
Explanation: It has become an industry standard to use import numpy as np.
You can also use tuples:
End of explanation
"""
x = np.array([[1, 2, 3],
[4, 5, 6]])
print(x)
"""
Explanation: Create multidimensional arrays with nested brackets:
End of explanation
"""
x = np.array([1, 2, 3], dtype=float)
print(x)
"""
Explanation: You can create arrays of 3 or more dimensions, but we have no need for that here, and so I will not elaborate.
By default the arrays use the data type of the values in the list; if there are multiple types then it will choose the type that most accurately represents all the values. So, for example, if your list contains a mix of int and float the data type of the array would be of type float. You can override this with the dtype parameter.
End of explanation
"""
x = np.array([[1, 2, 3],
[4, 5, 6]])
print(x[1,2])
"""
Explanation: You can access the array elements using subscript location:
End of explanation
"""
x[:, 0]
"""
Explanation: You can access a column or row by using slices. A colon (:) used as a subscript is shorthand for all data in that row or column. So x[:,0] returns an array of all data in the first column (the 0 specifies the first column):
End of explanation
"""
x[1, :]
"""
Explanation: We can get the second row with:
End of explanation
"""
x[1, 1:]
"""
Explanation: Get the last two elements of the second row with:
End of explanation
"""
x[-1, -2:]
"""
Explanation: As with Python lists, you can use negative indexes to refer to the end of the array. -1 refers to the last index. So another way to get the last two elements of the second (last) row would be:
End of explanation
"""
x = np.array([[1., 2.],
[3., 4.]])
print('addition:\n', x + x)
print('\nelement-wise multiplication\n', x * x)
print('\nmultiplication\n', np.dot(x, x))
print('\ndot is also a member of np.array\n', x.dot(x))
"""
Explanation: You can perform matrix addition with the + operator, but matrix multiplication requires the dot method or function. The * operator performs element-wise multiplication, which is not what you want for linear algebra.
End of explanation
"""
import scipy.linalg as linalg
print('transpose\n', x.T)
print('\nNumPy inverse\n', np.linalg.inv(x))
print('\nSciPy inverse\n', linalg.inv(x))
"""
Explanation: Python 3.5 introduced the @ operator for matrix multiplication.
```python
x @ x
[[ 7.0 10.0]
[ 15.0 22.0]]
```
This will only work if you are using Python 3.5+. So, as much as I prefer this notation to np.dot(x, x) I will not use it in this book.
You can get the transpose with .T, and the inverse with numpy.linalg.inv. The SciPy package also provides the inverse function.
End of explanation
"""
print('zeros\n', np.zeros(7))
print('\nzeros(3x2)\n', np.zeros((3, 2)))
print('\neye\n', np.eye(3))
"""
Explanation: There are helper functions like zeros to create a matrix of all zeros, ones to get all ones, and eye to get the identity matrix. If you want a multidimensional array, use a tuple to specify the shape.
End of explanation
"""
np.arange(0, 2, 0.1)
np.linspace(0, 2, 20)
"""
Explanation: We have functions to create equally spaced data. arange works much like Python's range function, except it returns a NumPy array. linspace works slightly differently; you call it with linspace(start, stop, num), where num is the length of the array that you want.
End of explanation
"""
import matplotlib.pyplot as plt
a = np.array([6, 3, 5, 2, 4, 1])
plt.plot([1, 4, 2, 5, 3, 6])
plt.plot(a)
"""
Explanation: Now let's plot some data. For the most part it is very simple. Matplotlib contains a plotting library pyplot. It is industry standard to import it as plt. Once imported, plot numbers by calling plt.plot with a list or array of numbers. If you make multiple calls it will plot multiple series, each with a different color.
End of explanation
"""
plt.plot(np.arange(0,1, 0.1), [1,4,3,2,6,4,7,3,4,5]);
"""
Explanation: The output [<matplotlib.lines.Line2D at 0x2ba160bed68>] is because plt.plot returns the object that was just created. Ordinarily we do not want to see that, so I add a ; to my last plotting command to suppress that output.
By default plot assumes that the x-series is incremented by one. You can provide your own x-series by passing in both x and y.
End of explanation
"""
# your solution
"""
Explanation: There are many more features to these packages which I use in this book. Normally I will introduce them without explanation, trusting that you can infer the usage from context, or search online for an explanation. As always, if you are unsure, create a new cell in the Notebook or fire up a Python console and experiment!
Exercise - Create arrays
I want you to create a NumPy array of 10 elements with each element containing 1/10. There are several ways to do this; try to implement as many as you can think of.
End of explanation
"""
print(np.ones(10) / 10.)
print(np.array([.1, .1, .1, .1, .1, .1, .1, .1, .1, .1]))
print(np.array([.1]*10))
"""
Explanation: Solution
Here are three ways to do this. The first one is the one I want you to know. I used the '/' operator to divide all of the elements of the array by 10. We will shortly use this to convert the units of an array from meters to km.
End of explanation
"""
def one_tenth(x):
x = np.asarray(x)
return x / 10.
print(one_tenth([1, 2, 3])) # I work!
print(one_tenth(np.array([4, 5, 6]))) # so do I!
"""
Explanation: Here is one I haven't covered yet. The function numpy.asarray() will convert its argument to an ndarray if it isn't already one. If it is, the data is unchanged. This is a handy way to write a function that can accept either Python lists or ndarrays, and it is very efficient if the type is already ndarray as nothing new is created.
End of explanation
"""
guyk1971/deep-learning | face_generation/dlnd_face_generation_orig.ipynb | mit

data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
"""
Explanation: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
End of explanation
"""
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
"""
Explanation: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
End of explanation
"""
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
"""
Explanation: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA datasets will be scaled to the range -0.5 to 0.5, and each image will be 28x28. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
Build the Neural Network
You'll build the components necessary to build a GANs by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
"""
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
"""
# TODO: Implement Function
return None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
- Z input placeholder with rank 2 using z_dim.
- Learning rate placeholder with rank 0.
Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate)
End of explanation
"""
def discriminator(images, reuse=False):
"""
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
"""
Explanation: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
End of explanation
"""
def generator(z, out_channel_dim, is_train=True):
"""
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
"""
Explanation: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
End of explanation
"""
def model_loss(input_real, input_z, out_channel_dim):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
"""
Explanation: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
End of explanation
"""
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
"""
Explanation: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
"""
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
"""
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
"""
Explanation: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.
End of explanation
"""
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
"""
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
"""
# TODO: Build Model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
                # TODO: Train Model
                pass
"""
Explanation: Train
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
End of explanation
"""
batch_size = None
z_dim = None
learning_rate = None
beta1 = None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
"""
Explanation: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
End of explanation
"""
batch_size = None
z_dim = None
learning_rate = None
beta1 = None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
"""
Explanation: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
End of explanation
"""
mgalardini/2017_python_course | notebooks/1-Basic_python_and_native_python_data_structures.ipynb | gpl-2.0

import this
"""
Explanation: Basic Python and native data structures
In a nutshell
Scripting language
Multi-platform (OS X, Linux, Windows)
Batteries included
Lots of third-party libraries (catching up with R for computational biology)
Lots of help available online (e.g. stackoverflow)
"Scripting language" means:
no type declaration required.
many built-in data structures are already available: dictionary, lists...
no need for memory handling: there is a memory garbage collector
Multi-platform
Byte code can be executed on different platforms.
"Battery included" means:
Many modules are already provided (e.g. to parse csv files)
No need to install additional libraries for most simple tasks
All of that in a more poetic form
End of explanation
"""
print("hello")
"""
Explanation: Resources
https://docs.python.org
https://docs.python.org/3/tutorial/index.html
http://nbviewer.ipython.org/github/rasbt/python_reference/blob/master/tutorials/key_differences_between_python_2_and_3.ipynb
Hello world example
End of explanation
"""
a = 10
b = 2
a + b
# incremental operators
a = 10
a += 2 # equivalent to a = a + 2 (there is no ++ operators like in C/C++])
a
a = 10
a = a + 2
a
"""
Explanation: About indentation
Before starting, you need to know that in Python, code indentation is an essential part of the syntax. It is used to delimit code blocks such as loops and functions. It may seem cumbersome, but it makes all Python code consistent and readable. The following code is incorrect:
```python
a = 1
 b = 2
```
since the two statements are not aligned despite being part of the same block of statements (the main block). Instead, they must be indented in the same way:
```python
a = 1
b = 2
```
Here is another example involving a loop and a function (def):
```python
def example():
    for i in [1, 2, 3]:
        print(i)
```
In C, it may look like:
```c
void example(){
    int i;
    for (i=1; i<=3; i++){
        printf("%d\n", i);
    }
}
```
OR
```c
void example(){
    int i;
    for (i=1; i<=3; i++)
    {
        printf("%d\n", i);
    }
}
```
Note: both tabs and spaces can be used to define the indentation, but conventionally 4 spaces are preferred.
Rules and conventions on naming variables
Variable names are unlimited in length
Variable names start with a letter or underscore _ followed by letters, numbers or underscores.
Variable names are case-sensitive
Variables cannot be named after reserved keywords (see below)
Variable names conventionally have lower-case letters, with multiple words separated by underscores.
Other rules and style conventions: PEP8 style recommendations (https://www.python.org/dev/peps/pep-0008/)
Basic numeric types
Integers
End of explanation
"""
test = True
if test:
print(test)
test = False
if not test:
print(test)
# Other types can be treated as boolean
# Main example are integers
true_value = 1
false_value = 0
if true_value:
print(true_value)
if not false_value:
print(false_value)
"""
Explanation: Boolean
End of explanation
"""
long_integer = 2**63
float1 = 2.1
float2 = 2.0
float3 = 2.
complex_value = 1 + 2j
long_integer
float3
"""
Explanation: Integers, Long, Float and Complex
End of explanation
"""
1 + 2
1 - 2
3 * 2
3 / 2
3
float(3)
3 // 2
3 % 2
3 ** 2
"""
Explanation: Basic mathematical operators
End of explanation
"""
5 + 3.1
"""
Explanation: Promotion: when you mix numeric types in an expression, all operands are converted (or coerced) to the type with highest precision
End of explanation
"""
int(3.1)
float(3)
bool(1)
bool(0)
"""
Explanation: Converting types: casting
A variable belonging to one type can be converted to another type through "casting"
End of explanation
"""
import keyword
# Here we are using the "dot" operator, which allows us to access objects (variables, that is) attributes and functions
print(keyword.kwlist)
raise = 1
"""
Explanation: Keywords
keywords are special names that are part of the Python language.
A variable cannot be named after a keywords --> SyntaxError would be raised
The list of keywords can be obtained using these commands (import and print are themselves keywords that will be explained along this course)
End of explanation
"""
print(dir(bool))
"""
Explanation: A note about objects
Everything in Python is an object, which can be seen as an advanced version of a variable
objects have methods
the dir built-in function allows the user to discover them
End of explanation
"""
import os
os.listdir('.')
os.path.exists('data.txt')
os.path.isdir('.ipynb_checkpoints/')
"""
Explanation: Importing standard python modules
Standard python modules are libraries that are available without the need to install additional software (they come together with the python interpreter). They only need to be imported. The import keyword allows us to import standard (and non standard) Python modules. Some common ones:
- os
- math
- sys
- urllib2
- tens of others are available. See https://docs.python.org/2/py-modindex.html
End of explanation
"""
import math
math.pi
from math import pi
pi
# alias are possible on the module itself
import math as m
m.pi
# or alias on the function/variable itself
from math import pi as PI
PI
# after 'del pi' below, accessing pi raises a NameError, because
# 'from math import pi as PI' did not create a 'pi' variable in the local namespace
del pi
pi
math.sqrt(4.)
"""
Explanation: Import comes in different flavors
End of explanation
"""
s1 = "Example"
s1[0]
# last index is therefore the length of the string minus 1
s1[len(s1)-1]
s1[6]
# Negative index can be used to start from the end
s1[-2]
# Careful with indexing out of bounds
s1[100]
"""
Explanation: Data structures
There are quite a few data structures available. The built-in data structures are:
- lists
- tuples
- dictionaries
- strings
- sets
Lists, strings and tuples are ordered sequences of objects. Unlike strings, which contain only characters, lists and tuples can contain any type of object. Lists and tuples are like arrays. Tuples, like strings, are immutable. Lists are mutable, so they can be extended or reduced at will. Sets are mutable, unordered sequences of unique elements.
Lists are enclosed in brackets:
```python
l = [1, 2, "a"]
```
Tuples are enclosed in parentheses:
```python
t = (1, 2, "a")
```
Tuples are faster and consume less memory.
Dictionaries are built with curly brackets:
```python
d = {"a":1, "b":2}
```
Sets are made using the set builtin function. More about the data structures in the table below:
| | immutable | mutable |
|--------------------|------------|-------------|
| ordered sequence | string | |
| ordered sequence | tuple | list |
| unordered sequence | | set |
| hash table | | dict |
Indexing starts at 0, like in C
End of explanation
"""
"Simple string"
'Simple string'
# single quotes can be used to enclose strings containing double quotes, and vice versa
"John's book"
#we can also use escaping
'John\'s book'
"""This is an example of
a long string on several lines"""
"""
Explanation: Strings and slicing
There are 4 ways to represent strings:
- with single quotes
- with double quotes
- with triple single quotes
- with triple double quotes
End of explanation
"""
print('This {0} is {1} on format'.format('example', 'based'))
print('This {0} is {1} on format, isn't it a nice {0}?'.format('example', 'based'))
# Notice the escaping of the quote char
print('This {0} is {1} on format, isn\'t it a nice {0}?'.format('example', 'based'))
print("You can also use %s %s\n" % ('C-like', 'formatting'))
print("You can also format integers %d\n" % (1))
print("You can also specify the precision of floats: %f or %.20f\n" % (1., 1.))
"""
Explanation: A little bit more on the print function: formatting
End of explanation
"""
s1 = "First string"
s2 = "Second string"
# + operator concatenates strings
s1 + " and " + s2
# Strings are immutables
# Try
s1[0] = 'e'
# to change an item, you got to create a new string
'N' + s1[1:]
"""
Explanation: String operations
End of explanation
"""
s1 = 'Banana'
s1[1:6:2]
s = 'TEST'
s[-1:-4:-2]
# slicing. using one : character means from start to end index.
s1 = "First string"
s1[:]
s1[::2]
# indexing
s1[0]
"""
Explanation: Slicing sequence syntax
<pre>
- [start:end:step] most general slicing
- [start:end:] (step=1)
- [start:end] (step=1)
- [start:] (step=1,end=-1)
- [:] (start=0,end=-1, step=1)
- [::2] (start=0, end=-1, step=2)
</pre>
End of explanation
"""
print(dir(s1))
"""
Explanation: Other string operations
End of explanation
"""
# split is very useful when parsing files
s = 'first second third'
s.split()
# a different character can be used as well as separator
s = 'first,second,third'
s.split(',')
# Upper is a very easy and handy method
s.upper()
# Methods can be chained as well!
s.upper().lower().split(',')
"""
Explanation: Well, that's a lot! Here are the most commonly useful ones:
- split
- find
- index
- replace
- lower
- upper
- endswith
- startswith
- strip
End of explanation
"""
# you can put any kind of object in a list. This is not an array!
l = [1, 'a', 3]
l
# slicing and indexing like for strings are available
l[0]
l[0::2]
l
# list are mutable sequences:
l[1] = 2
l
"""
Explanation: Lists
The syntax to create a list can be the function list or square brackets []
End of explanation
"""
[1, 2] + [3, 4]
[1, 2] * 10
"""
Explanation: Mathematical operators can be applied to lists as well
End of explanation
"""
stack = ['a','b']
stack.append('c')
stack
stack.append(['d', 'e', 'f'])
stack
stack[3]
"""
Explanation: Adding elements to a list: append vs. extend
Lists have several methods, among which the append and extend methods. The former appends an object to the end of the list (e.g., another list) while the latter appends each element of an iterable object (e.g., another list) to the end of the list.
For example, we can append an object (here the character 'c') to the end of a simple list as follows:
End of explanation
"""
# the manual way
stack = ['a', 'b', 'c']
stack.append('d')
stack.append('e')
stack.append('f')
stack
# semi-manual way, using a "for" loop
stack = ['a', 'b', 'c']
to_add = ['d', 'e', 'f']
for element in to_add:
stack.append(element)
stack
# the smarter way
stack = ['a', 'b', 'c']
stack.extend(['d', 'e','f'])
stack
"""
Explanation: The object ['d', 'e', 'f'] has been appended to the existing list. However, sometimes what we want is to append the elements of a given list one by one, rather than the list itself. You can do that manually of course, but a better solution is to use the extend() method as follows:
End of explanation
"""
t = (1, 2, 3)
t
# simple creation:
t = 1, 2, 3
print(t)
t[0] = 3
# Would this work?
(1)
# To enforce a tuple creation, add a comma
(1,)
"""
Explanation: Tuples
Tuples are sequences similar to lists, but immutable. Use parentheses to create a tuple
End of explanation
"""
(1,) * 5
t1 = (1,0)
t1 += (1,)
t1
"""
Explanation: Same operators as lists
End of explanation
"""
a = {'1', '2', 'a', '4'}
a
a = [1, 1, 1, 2, 2, 3, 4]
a
a = {1, 2, 1, 2, 2, 3, 4}
a
a = []
to_add = [1, 1, 1, 2, 2, 3, 4]
for element in to_add:
if element in a:
continue
else:
a.append(element)
a
# Sets have the very handy "add" method
a = set()
to_add = [1, 1, 1, 2, 2, 3, 4]
for element in to_add:
a.add(element)
a
"""
Explanation: Why tuples instead of lists?
faster than list
protects the data (immutable)
tuples can be used as keys on dictionaries (more on that later)
Sets
Sets are constructed from a sequence (or some other iterable object). Since sets cannot have duplicates, they are usually used to build sequences of unique items (e.g., a set of identifiers).
The syntax to create a set can be the function set or curly braces {}
End of explanation
"""
a = {'a', 'b', 'c'}
b = {'a', 'b', 'd'}
c = {'a', 'e', 'f'}
# intersection
a & b
# union
a | b
# difference
a - b
# symmetric difference
a ^ b
# is my set a subset of the other?
a < b
# operators can be chained as well
a & b & c
# the same operations can be performed using the operator's name
a.intersection(b).intersection(c)
# a more complex operation
a.intersection(b).difference(c)
"""
Explanation: Sets have very interesting operators
What operators do we have ?
- | for union
- & for intersection
- < for subset
- - for difference
- ^ for symmetric difference
End of explanation
"""
d = {} # an empty dictionary
d = {'first':1, 'second':2} # initialise a dictionary
# access to value given a key:
d['first']
# add a new pair of key/value:
d['third'] = 3
# what are the keys ?
d.keys()
# what are the values ?
d.values()
# what are the key/values pairs?
d.items()
# can be used in a for loop as well
for key, value in d.items():
print(key, value)
# Delete a key (and its value)
del d['third']
d
# naive for loop approach:
for key in d.keys():
print(key, d[key])
# no need to call the "keys" method explicitly
for key in d:
print(key, d[key])
# careful not to look for keys that are NOT in the dictionary
d['fourth']
# the "get" method allows a safe retrieval of a key
d.get('fourth')
# the "get" method returns a type "None" if the key is not present
# a different value can be specified in case of a missed key
d.get('fourth', 4)
"""
Explanation: Dictionaries
A dictionary is a sequence of items.
Each item is a pair made of a key and a value.
Dictionaries are unordered.
You can access the list of keys or values independently.
End of explanation
"""
n = None
n
print(n)
None + 1
# equivalent to False
if n is None:
print(1)
else:
print(0)
# we can explicitly test for a variable being "None"
value = d.get('fourth')
if value is None:
print('Key not found!')
"""
Explanation: Note on the "None" type
End of explanation
"""
range(10)
my_list = [x*2 for x in range(10)]
my_list
redundant_list = [x for x in my_list*2]
redundant_list
my_set = {x for x in my_list*2}
my_set
my_dict = {x:x+1 for x in my_list}
my_dict
# if/else can be used in comprehension as well
even_numbers = [x for x in range(10) if not x%2]
even_numbers
# if/else can also be used to assign values based on a test
even_numbers = ['odd' if x%2 else 'even' for x in range(10)]
even_numbers
"""
Explanation: Lists, sets and dictionary comprehension: more compact constructors
There is a more concise and advanced way to create a list, set or dictionary
End of explanation
"""
a = [1, 2, 3]
a
b = a # is a reference
b[0] = 10
print(a)
# How to de-reference (copy) a list
a = [1, 2, 3]
# First, use the list() function
b1 = list(a)
# Second use the slice operator
b2 = a[:] # using slicing
b1[0] = 10
b2[0] = 10
a #unchanged
# What about this ?
a = [1,2,3,[4,5,6]]
# copying the object
b = a[:]
b[3][0] = 10 # let us change the first item of the 4th item (4 to 10)
a
"""
Explanation: On objects and references
A common source of errors for Python beginners
End of explanation
"""
from copy import deepcopy
a = [1,2,3,[4,5,6]]
b = deepcopy(a)
b[3][0] = 10 # let us change the first item of the 4th item (4 to 10)
a
"""
Explanation: Here we see that the nested list is still a reference: slicing only performs a shallow copy.
To get a fully independent copy, you'll need to use the copy module.
End of explanation
"""
# if/elif/else
animals = ['dog', 'cat', 'cow']
if 'cats' in animals:
print('Cats found!')
elif 'cat' in animals:
print('Only one cat found!')
else:
print('Nothing found!')
# for loop
foods = ['pasta', 'rice', 'lasagna']
for food in foods:
print(food)
# nested for loops
foods = ['pasta', 'rice', 'lasagna']
deserts = ['cake', 'biscuit']
for food in foods:
for desert in deserts:
print(food, desert)
# for loop glueing together two lists
for animal, food in zip(animals, foods):
print('The {0} is eating {1}'.format(animal, food))
animals
# "zip" will glue together lists only until the shortest one
foods = ['pasta', 'rice']
for animal, food in zip(animals, foods):
print('The {0} is eating {1}'.format(animal, food))
# while loop
counter = 0
while counter < 10:
counter += 1
counter
# A normal loop
for value in range(10):
print(value)
# continue: skip the rest of the expression and go to the next element
for value in range(10):
if not value%3:
continue
print(value)
# break: exit the "for" loop
for value in range(10):
if not value%3:
break
print(value)
# break: exit the "for" loop
for value in range(1, 10):
if not value%3:
break
print(value)
# break and continue will only exit from the innermost loop
foods = ['pasta', 'rice', 'lasagna']
deserts = ['cake', 'biscuit']
for food in foods:
for desert in deserts:
if food == 'rice':
continue
print(food, desert)
# break and continue will only exit from the innermost loop
foods = ['pasta', 'rice', 'lasagna']
deserts = ['cake', 'biscuit']
for food in foods:
for desert in deserts:
if desert == 'biscuit':
break
print(food, desert)
"""
Explanation: Flow control operators
End of explanation
"""
d = {'first': 1,
'second': 2}
d['third']
# Exceptions can be intercepted and crashes can be avoided
try:
d['third']
except:
print('Key not present')
# Specific exceptions can be intercepted
try:
d['third']
except KeyError:
print('Key not present')
except:
print('Another error occurred')
# Specific exceptions can be intercepted
try:
d['second'].non_existent_method()
except KeyError:
print('Key not present')
except:
print('Another error occurred')
# The exception can be assigned to a variable to inspect it
try:
d['second'].non_existent_method()
except KeyError:
print('Key not present')
except Exception as e:
print('Another error occurred: {0}'.format(e))
# Exception can be created and "raised" by the user
if d['second'] == 2:
raise Exception('I don\'t like 2 as a number')
"""
Explanation: Exceptions
Used to avoid crashes and handle unexpected errors
End of explanation
"""
def sum_numbers(first, second):
return first + second
sum_numbers(1, 2)
sum_numbers('one', 'two')
sum_numbers('one', 2)
# positional vs. keyword arguments
def print_variables(first, second):
print('First variable: {0}'.format(first))
print('Second variable: {0}'.format(second))
print_variables(1, 2)
print_variables()
print_variables(1)
# positional vs. keyword arguments
def print_variables(first=None, second=None):
print('First variable: {0}'.format(first))
print('Second variable: {0}'.format(second))
print_variables()
print_variables(second=2)
"""
Explanation: Functions
Functions allow us to re-use code in a flexible way
End of explanation
"""
mysequence = """>sp|P56945|BCAR1_HUMAN Breast cancer anti-estrogen resistance protein 1 OS=Homo sapiens GN=BCAR1 PE=1 SV=2
MNHLNVLAKALYDNVAESPDELSFRKGDIMTVLEQDTQGLDGWWLCSLHGRQGIVPGNRL
KILVGMYDKKPAGPGPGPPATPAQPQPGLHAPAPPASQYTPMLPNTYQPQPDSVYLVPTP
SKAQQGLYQVPGPSPQFQSPPAKQTSTFSKQTPHHPFPSPATDLYQVPPGPGGPAQDIYQ
VPPSAGMGHDIYQVPPSMDTRSWEGTKPPAKVVVPTRVGQGYVYEAAQPEQDEYDIPRHL
LAPGPQDIYDVPPVRGLLPSQYGQEVYDTPPMAVKGPNGRDPLLEVYDVPPSVEKGLPPS
NHHAVYDVPPSVSKDVPDGPLLREETYDVPPAFAKAKPFDPARTPLVLAAPPPDSPPAED
VYDVPPPAPDLYDVPPGLRRPGPGTLYDVPRERVLPPEVADGGVVDSGVYAVPPPAEREA
PAEGKRLSASSTGSTRSSQSASSLEVAGPGREPLELEVAVEALARLQQGVSATVAHLLDL
AGSAGATGSWRSPSEPQEPLVQDLQAAVAAVQSAVHELLEFARSAVGNAAHTSDRALHAK
LSRQLQKMEDVHQTLVAHGQALDAGRGGSGATLEDLDRLVACSRAVPEDAKQLASFLHGN
ASLLFRRTKATAPGPEGGGTLHPNPTDKTSSIQSRPLPSPPKFTSQDSPDGQYENSEGGW
MEDYDYVHLQGKEEFEKTQKELLEKGSITRQGKSQLELQQLKQFERLEQEVSRPIDHDLA
NWTPAQPLAPGRTGGLGPSDRQLLLFYLEQCEANLTTLTNAVDAFFTAVATNQPPKIFVA
HSKFVILSAHKLVFIGDTLSRQAKAADVRSQVTHYSNLLCDLLRGIVATTKAAALQYPSP
SAAQDMVERVKELGHSTQQFRRVLGQLAAA
"""
# First, we open a file in "w" write mode
fh = open("mysequence.fasta", "w")
# Second, we write the data into the file:
fh.write(mysequence)
# Third, we close:
fh.close()
"""
Explanation: Simple file reading/writing
Writing
End of explanation
"""
# First, we open the file in read mode (r)
fh = open('mysequence.fasta', 'r')
# Second, we read the content of the file
data = fh.read()
# Third we close
fh.close()
# data is now a string that contains the content of the file being read
data
print(data)
"""
Explanation: Reading
End of explanation
"""
# First, we open a file in "w" write mode with the context manager
with open("mysequence.fasta", "w") as fh:
# Second, we write the data into the file:
fh.write(mysequence)
# When getting out of the block, the file is automatically closed in a secure way
"""
Explanation: For both writing and reading you can use the context manager keyword "with", which will automatically close the file after use, even if an exception occurs
Writing
End of explanation
"""
# First, we open the file in read mode (r) with the context manager
with open('mysequence.fasta', 'r') as fh:
    # Second, we read the content of the file
    data = fh.read()
# When getting out of the block, the file is automatically closed in a secure way
"""
Explanation: Reading
End of explanation
"""
data
data.split("\n")
data.split("\n", 1)
header, sequence = data.split("\n", 1)
header
sequence
# we want to get rid of the \n characters
seq1 = sequence.replace("\n","")
# another way is to use the split/join pair
seq2 = "".join(sequence.split("\n"))
seq1 == seq2
# make sure that every letter is upper case
seq1 = seq1.upper()
# With the sequence, we can now play around
seq1.count('A')
counter = {}
counter['A'] = seq1.count('A')
counter['T'] = seq1.count('T')
counter['C'] = seq1.count('C')
counter['G'] = seq1.count('G')
counter
"""
Explanation: Notice the \n character (newline) in the string...
End of explanation
"""
for line in open('mysequence.fasta'):
# remove the newline character at the end of the line
# also removes spaces and tabs at the right end of the string
line = line.rstrip()
print(line)
header = ''
sequence = ''
for line in open('mysequence.fasta'):
# remove the newline character at the end of the line
# also removes spaces and tabs at the right end of the string
line = line.rstrip()
if line.startswith('>'):
header = line
else:
sequence += line
header
sequence
"""
Explanation: If a file is too big, using the "read" method could completely fill our memory! It is advisable to use a "for" loop.
End of explanation
"""
|
pklfz/fold | tensorflow_fold/g3doc/sentiment.ipynb | apache-2.0 | # boilerplate
import codecs
import functools
import os
import tempfile
import zipfile
from nltk.tokenize import sexpr
import numpy as np
from six.moves import urllib
import tensorflow as tf
sess = tf.InteractiveSession()
import tensorflow_fold as td
"""
Explanation: Sentiment Analysis with TreeLSTMs in TensorFlow Fold
The Stanford Sentiment Treebank is a corpus of ~10K one-sentence movie reviews from Rotten Tomatoes. The sentences have been parsed into binary trees with words at the leaves; every sub-tree has a label ranging from 0 (highly negative) to 4 (highly positive); 2 means neutral.
For example, (4 (2 Spiderman) (3 ROCKS)) is sentence with two words, corresponding a binary tree with three nodes. The label at the root, for the entire sentence, is 4 (highly positive). The label for the left child, a leaf corresponding to the word Spiderman, is 2 (neutral). The label for the right child, a leaf corresponding to the word ROCKS is 3 (moderately positive).
This notebook shows how to use TensorFlow Fold train a model on the treebank using binary TreeLSTMs and GloVe word embedding vectors, as described in the paper Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks by Tai et al. The original Torch source code for the model, provided by the authors, is available here.
The model illustrates three of the more advanced features of Fold, namely:
1. Compositions to wire up blocks to form arbitrary directed acyclic graphs
2. Forward Declarations to create recursive blocks
3. Metrics to create models where the size of the output is not fixed, but varies as a function of the input data.
End of explanation
"""
data_dir = tempfile.mkdtemp()
print('saving files to %s' % data_dir)
def download_and_unzip(url_base, zip_name, *file_names):
zip_path = os.path.join(data_dir, zip_name)
url = url_base + zip_name
print('downloading %s to %s' % (url, zip_path))
urllib.request.urlretrieve(url, zip_path)
out_paths = []
with zipfile.ZipFile(zip_path, 'r') as f:
for file_name in file_names:
print('extracting %s' % file_name)
out_paths.append(f.extract(file_name, path=data_dir))
return out_paths
full_glove_path, = download_and_unzip(
'http://nlp.stanford.edu/data/', 'glove.840B.300d.zip',
'glove.840B.300d.txt')
train_path, dev_path, test_path = download_and_unzip(
'http://nlp.stanford.edu/sentiment/', 'trainDevTestTrees_PTB.zip',
'trees/train.txt', 'trees/dev.txt', 'trees/test.txt')
"""
Explanation: Get the data
Begin by fetching the word embedding vectors and treebank sentences.
End of explanation
"""
filtered_glove_path = os.path.join(data_dir, 'filtered_glove.txt')
def filter_glove():
vocab = set()
# Download the full set of unlabeled sentences separated by '|'.
sentence_path, = download_and_unzip(
'http://nlp.stanford.edu/~socherr/', 'stanfordSentimentTreebank.zip',
'stanfordSentimentTreebank/SOStr.txt')
with codecs.open(sentence_path, encoding='utf-8') as f:
for line in f:
# Drop the trailing newline and strip backslashes. Split into words.
vocab.update(line.strip().replace('\\', '').split('|'))
nread = 0
nwrote = 0
with codecs.open(full_glove_path, encoding='utf-8') as f:
with codecs.open(filtered_glove_path, 'w', encoding='utf-8') as out:
for line in f:
nread += 1
line = line.strip()
if not line: continue
if line.split(u' ', 1)[0] in vocab:
out.write(line + '\n')
nwrote += 1
print('read %s lines, wrote %s' % (nread, nwrote))
filter_glove()
"""
Explanation: Filter out words that don't appear in the dataset, since the full dataset is a bit large (5GB). This is purely a performance optimization and has no effect on the final results.
End of explanation
"""
def load_embeddings(embedding_path):
"""Loads embedings, returns weight matrix and dict from words to indices."""
print('loading word embeddings from %s' % embedding_path)
weight_vectors = []
word_idx = {}
with codecs.open(embedding_path, encoding='utf-8') as f:
for line in f:
word, vec = line.split(u' ', 1)
word_idx[word] = len(weight_vectors)
weight_vectors.append(np.array(vec.split(), dtype=np.float32))
# Annoying implementation detail; '(' and ')' are replaced by '-LRB-' and
# '-RRB-' respectively in the parse-trees.
word_idx[u'-LRB-'] = word_idx.pop(u'(')
word_idx[u'-RRB-'] = word_idx.pop(u')')
# Random embedding vector for unknown words.
weight_vectors.append(np.random.uniform(
-0.05, 0.05, weight_vectors[0].shape).astype(np.float32))
return np.stack(weight_vectors), word_idx
weight_matrix, word_idx = load_embeddings(filtered_glove_path)
"""
Explanation: Load the filtered word embeddings into a matrix and build a dict from words to indices into the matrix. Add a random embedding vector for out-of-vocabulary words.
End of explanation
"""
def load_trees(filename):
with codecs.open(filename, encoding='utf-8') as f:
# Drop the trailing newline and strip \s.
trees = [line.strip().replace('\\', '') for line in f]
print('loaded %s trees from %s' % (len(trees), filename))
return trees
train_trees = load_trees(train_path)
dev_trees = load_trees(dev_path)
test_trees = load_trees(test_path)
"""
Explanation: Finally, load the treebank data.
End of explanation
"""
class BinaryTreeLSTMCell(tf.contrib.rnn.BasicLSTMCell):
"""LSTM with two state inputs.
This is the model described in section 3.2 of 'Improved Semantic
Representations From Tree-Structured Long Short-Term Memory
Networks' <http://arxiv.org/pdf/1503.00075.pdf>, with recurrent
dropout as described in 'Recurrent Dropout without Memory Loss'
<http://arxiv.org/pdf/1603.05118.pdf>.
"""
def __init__(self, num_units, keep_prob=1.0):
"""Initialize the cell.
Args:
num_units: int, The number of units in the LSTM cell.
keep_prob: Keep probability for recurrent dropout.
"""
super(BinaryTreeLSTMCell, self).__init__(num_units)
self._keep_prob = keep_prob
def __call__(self, inputs, state, scope=None):
with tf.variable_scope(scope or type(self).__name__):
lhs, rhs = state
c0, h0 = lhs
c1, h1 = rhs
concat = tf.contrib.layers.linear(
tf.concat([inputs, h0, h1], 1), 5 * self._num_units)
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
i, j, f0, f1, o = tf.split(value=concat, num_or_size_splits=5, axis=1)
j = self._activation(j)
if not isinstance(self._keep_prob, float) or self._keep_prob < 1:
j = tf.nn.dropout(j, self._keep_prob)
new_c = (c0 * tf.sigmoid(f0 + self._forget_bias) +
c1 * tf.sigmoid(f1 + self._forget_bias) +
tf.sigmoid(i) * j)
new_h = self._activation(new_c) * tf.sigmoid(o)
new_state = tf.contrib.rnn.LSTMStateTuple(new_c, new_h)
return new_h, new_state
"""
Explanation: Build the model
We want to compute a hidden state vector $h$ for every node in the tree. The hidden state is the input to a linear layer with softmax output for predicting the sentiment label.
At the leaves of the tree, words are mapped to word-embedding vectors which serve as the input to a binary tree-LSTM with $0$ for the previous states. At the internal nodes, the LSTM takes $0$ as input, and previous states from its two children. More formally,
\begin{align}
h_{word} &= TreeLSTM(Embedding(word), 0, 0) \
h_{left, right} &= TreeLSTM(0, h_{left}, h_{right})
\end{align}
where $TreeLSTM(x, h_{left}, h_{right})$ is a special kind of LSTM cell that takes two hidden states as inputs, and has a separate forget gate for each of them. Specifically, it is Tai et al. eqs. 9-14 with $N=2$. One modification here from Tai et al. is that instead of L2 weight regularization, we use recurrent dropout as described in the paper Recurrent Dropout without Memory Loss.
We can implement $TreeLSTM$ by subclassing the TensorFlow BasicLSTMCell.
End of explanation
"""
keep_prob_ph = tf.placeholder_with_default(1.0, [])
"""
Explanation: Use a placeholder for the dropout keep probability, with a default of 1 (for eval).
End of explanation
"""
lstm_num_units = 300 # Tai et al. used 150, but our regularization strategy is more effective
tree_lstm = td.ScopedLayer(
tf.contrib.rnn.DropoutWrapper(
BinaryTreeLSTMCell(lstm_num_units, keep_prob=keep_prob_ph),
input_keep_prob=keep_prob_ph, output_keep_prob=keep_prob_ph),
name_or_scope='tree_lstm')
"""
Explanation: Create the LSTM cell for our model. In addition to recurrent dropout, apply dropout to inputs and outputs, using TF's build-in dropout wrapper. Put the LSTM cell inside of a td.ScopedLayer in order to manage variable scoping. This ensures that our LSTM's variables are encapsulated from the rest of the graph and get created exactly once.
End of explanation
"""
NUM_CLASSES = 5 # number of distinct sentiment labels
output_layer = td.FC(NUM_CLASSES, activation=None, name='output_layer')
"""
Explanation: Create the output layer using td.FC.
End of explanation
"""
word_embedding = td.Embedding(
*weight_matrix.shape, initializer=weight_matrix, name='word_embedding')
"""
Explanation: Create the word embedding using td.Embedding. Note that the built-in Fold layers like Embedding and FC manage variable scoping automatically, so there is no need to put them inside scoped layers.
End of explanation
"""
embed_subtree = td.ForwardDeclaration(name='embed_subtree')
"""
Explanation: We now have layers that encapsulate all of the trainable variables for our model. The next step is to create the Fold blocks that define how inputs (s-expressions encoded as strings) get processed and used to make predictions. Naturally this requires a recursive model, which we handle in Fold using a forward declaration. The recursive step is to take a subtree (represented as a string) and convert it into a hidden state vector (the LSTM state), thus embedding it in an $n$-dimensional space (where here $n=300$).
End of explanation
"""
def logits_and_state():
"""Creates a block that goes from tokens to (logits, state) tuples."""
unknown_idx = len(word_idx)
lookup_word = lambda word: word_idx.get(word, unknown_idx)
word2vec = (td.GetItem(0) >> td.InputTransform(lookup_word) >>
td.Scalar('int32') >> word_embedding)
pair2vec = (embed_subtree(), embed_subtree())
# Trees are binary, so the tree layer takes two states as its input_state.
zero_state = td.Zeros((tree_lstm.state_size,) * 2)
# Input is a word vector.
zero_inp = td.Zeros(word_embedding.output_type.shape[0])
word_case = td.AllOf(word2vec, zero_state)
pair_case = td.AllOf(zero_inp, pair2vec)
tree2vec = td.OneOf(len, [(1, word_case), (2, pair_case)])
return tree2vec >> tree_lstm >> (output_layer, td.Identity())
"""
Explanation: The core of the model is a block that takes as input a list of tokens. The tokens will be either:
[word] - a leaf with a single word, the base-case for the recursion, or
[lhs, rhs] - an internal node consisting of a pair of sub-expressions
The outputs of the block will be a pair consisting of logits (the prediction) and the LSTM state.
End of explanation
"""
def tf_node_loss(logits, labels):
return tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)
"""
Explanation: Note that we use the call operator () to create blocks that reference the embed_subtree forward declaration, for the recursive case.
Define a per-node loss function for training.
End of explanation
"""
def tf_fine_grained_hits(logits, labels):
  predictions = tf.cast(tf.argmax(logits, 1), tf.int32)
  return tf.cast(tf.equal(predictions, labels), tf.float64)

def tf_binary_hits(logits, labels):
  softmax = tf.nn.softmax(logits)
  binary_predictions = (softmax[:, 3] + softmax[:, 4]) > (softmax[:, 0] + softmax[:, 1])
  binary_labels = labels > 2
  return tf.cast(tf.equal(binary_predictions, binary_labels), tf.float64)
"""
Explanation: Additionally calculate fine-grained and binary hits (i.e. un-normalized accuracy) for evals. Fine-grained accuracy is defined over all five class labels and will be calculated for all labels, whereas binary accuracy is defined over negative vs. positive classification and will not be calculated for neutral labels.
End of explanation
"""
def add_metrics(is_root, is_neutral):
  """A block that adds metrics for loss and hits; output is the LSTM state."""
  c = td.Composition(
      name='predict(is_root=%s, is_neutral=%s)' % (is_root, is_neutral))
  with c.scope():
    # destructure the input; (labels, (logits, state))
    labels = c.input[0]
    logits = td.GetItem(0).reads(c.input[1])
    state = td.GetItem(1).reads(c.input[1])
    # calculate loss
    loss = td.Function(tf_node_loss)
    td.Metric('all_loss').reads(loss.reads(logits, labels))
    if is_root: td.Metric('root_loss').reads(loss)
    # calculate fine-grained hits
    hits = td.Function(tf_fine_grained_hits)
    td.Metric('all_hits').reads(hits.reads(logits, labels))
    if is_root: td.Metric('root_hits').reads(hits)
    # calculate binary hits, if the label is not neutral
    if not is_neutral:
      binary_hits = td.Function(tf_binary_hits).reads(logits, labels)
      td.Metric('all_binary_hits').reads(binary_hits)
      if is_root: td.Metric('root_binary_hits').reads(binary_hits)
    # output the state, which will be read by our parent's LSTM cell
    c.output.reads(state)
  return c
"""
Explanation: The td.Metric block provides a mechanism for accumulating results across sequential and recursive computations without having to thread them through explicitly as return values. Metrics are wired up here inside of a td.Composition block, which allows us to explicitly specify the inputs of sub-blocks with calls to Block.reads() inside of a Composition.scope() context manager.
For training, we will sum the loss over all nodes. But for evals, we would like to separately calculate accuracies for the root (i.e. entire sentences) to match the numbers presented in the literature. We also need to distinguish between neutral and non-neutral sentiment labels, because binary sentiment doesn't get calculated for neutral nodes.
This is easy to do by putting our block creation code for calculating metrics inside of a function and passing it indicators. Note that this needs to be done in Python-land, because we can't inspect the contents of a tensor inside of Fold (since it hasn't been run yet).
End of explanation
"""
def tokenize(s):
  label, phrase = s[1:-1].split(None, 1)
  return label, sexpr.sexpr_tokenize(phrase)
"""
Explanation: Use NLTK to define a tokenize function to split S-exprs into left and right parts. We need this to run our logits_and_state() block since it expects to be passed a list of tokens and our raw input is strings.
End of explanation
"""
tokenize('(X Y)')
tokenize('(X Y Z)')
"""
Explanation: Try it out.
End of explanation
"""
def embed_tree(logits_and_state, is_root):
  """Creates a block that embeds trees; output is tree LSTM state."""
  return td.InputTransform(tokenize) >> td.OneOf(
      key_fn=lambda pair: pair[0] == '2',  # label 2 means neutral
      case_blocks=(add_metrics(is_root, is_neutral=False),
                   add_metrics(is_root, is_neutral=True)),
      pre_block=(td.Scalar('int32'), logits_and_state))
"""
Explanation: Embed trees (represented as strings) by tokenizing and piping (>>) to logits_and_state, distinguishing between neutral and non-neutral labels. We don't know here whether or not we are the root node (since this is a recursive computation), so that gets threaded through as an indicator.
End of explanation
"""
model = embed_tree(logits_and_state(), is_root=True)
"""
Explanation: Put everything together and create our top-level (i.e. root) model. It is rather simple.
End of explanation
"""
embed_subtree.resolve_to(embed_tree(logits_and_state(), is_root=False))
"""
Explanation: Resolve the forward declaration for embedding subtrees (the non-root case) with a second call to embed_tree.
End of explanation
"""
compiler = td.Compiler.create(model)
print('input type: %s' % model.input_type)
print('output type: %s' % model.output_type)
"""
Explanation: Compile the model.
End of explanation
"""
metrics = {k: tf.reduce_mean(v) for k, v in compiler.metric_tensors.items()}
"""
Explanation: Setup for training
Average each raw metric tensor with tf.reduce_mean.
End of explanation
"""
LEARNING_RATE = 0.05
KEEP_PROB = 0.75
BATCH_SIZE = 100
EPOCHS = 20
EMBEDDING_LEARNING_RATE_FACTOR = 0.1
"""
Explanation: Magic numbers.
End of explanation
"""
train_feed_dict = {keep_prob_ph: KEEP_PROB}
loss = tf.reduce_sum(compiler.metric_tensors['all_loss'])
opt = tf.train.AdagradOptimizer(LEARNING_RATE)
"""
Explanation: Training with Adagrad.
End of explanation
"""
grads_and_vars = opt.compute_gradients(loss)
found = 0
for i, (grad, var) in enumerate(grads_and_vars):
  if var == word_embedding.weights:
    found += 1
    grad = tf.scalar_mul(EMBEDDING_LEARNING_RATE_FACTOR, grad)
    grads_and_vars[i] = (grad, var)
assert found == 1  # internal consistency check
train = opt.apply_gradients(grads_and_vars)
saver = tf.train.Saver()
"""
Explanation: Important detail from section 5.3 of [Tai et al.]((http://arxiv.org/pdf/1503.00075.pdf); downscale the gradients for the word embedding vectors 10x otherwise we overfit horribly.
End of explanation
"""
sess.run(tf.global_variables_initializer())
"""
Explanation: The TF graph is now complete; initialize the variables.
End of explanation
"""
def train_step(batch):
  train_feed_dict[compiler.loom_input_tensor] = batch
  _, batch_loss = sess.run([train, loss], train_feed_dict)
  return batch_loss
"""
Explanation: Train the model
Start by defining a function that does a single step of training on a batch and returns the loss.
End of explanation
"""
def train_epoch(train_set):
  return sum(train_step(batch) for batch in td.group_by_batches(train_set, BATCH_SIZE))
"""
Explanation: Now similarly for an entire epoch of training.
End of explanation
"""
train_set = compiler.build_loom_inputs(train_trees)
"""
Explanation: Use Compiler.build_loom_inputs() to transform train_trees into individual loom inputs (i.e. wiring diagrams) that we can use to actually run the model.
End of explanation
"""
dev_feed_dict = compiler.build_feed_dict(dev_trees)
"""
Explanation: Use Compiler.build_feed_dict() to build a feed dictionary for validation on the dev set. This is marginally faster and more convenient than calling build_loom_inputs. We used build_loom_inputs on the train set so that we can shuffle the individual wiring diagrams into different batches for each epoch.
End of explanation
"""
def dev_eval(epoch, train_loss):
  dev_metrics = sess.run(metrics, dev_feed_dict)
  dev_loss = dev_metrics['all_loss']
  dev_accuracy = ['%s: %.2f' % (k, v * 100) for k, v in
                  sorted(dev_metrics.items()) if k.endswith('hits')]
  print('epoch:%4d, train_loss: %.3e, dev_loss_avg: %.3e, dev_accuracy:\n  [%s]'
        % (epoch, train_loss, dev_loss, ' '.join(dev_accuracy)))
  return dev_metrics['root_hits']
"""
Explanation: Define a function to do an eval on the dev set and pretty-print some stats, returning accuracy on the dev set.
End of explanation
"""
best_accuracy = 0.0
save_path = os.path.join(data_dir, 'sentiment_model')
for epoch, shuffled in enumerate(td.epochs(train_set, EPOCHS), 1):
  train_loss = train_epoch(shuffled)
  accuracy = dev_eval(epoch, train_loss)
  if accuracy > best_accuracy:
    best_accuracy = accuracy
    checkpoint_path = saver.save(sess, save_path, global_step=epoch)
    print('model saved in file: %s' % checkpoint_path)
"""
Explanation: Run the main training loop, saving the model after each epoch if it has the best accuracy on the dev set. Use the td.epochs utility function to memoize the loom inputs and shuffle them after every epoch of training.
End of explanation
"""
saver.restore(sess, checkpoint_path)
"""
Explanation: The model starts to overfit pretty quickly even with dropout, as the LSTM begins to memorize the training set (which is rather small).
Evaluate the model
Restore the model from the last checkpoint, where we saw the best accuracy on the dev set.
End of explanation
"""
test_results = sorted(sess.run(metrics, compiler.build_feed_dict(test_trees)).items())
print('    loss: [%s]' % ' '.join(
    '%s: %.3e' % (name.rsplit('_', 1)[0], v)
    for name, v in test_results if name.endswith('_loss')))
print('accuracy: [%s]' % ' '.join(
    '%s: %.2f' % (name.rsplit('_', 1)[0], v * 100)
    for name, v in test_results if name.endswith('_hits')))
"""
Explanation: See how we did.
End of explanation
"""
fastai/fastai | nbs/17_callback.tracker.ipynb | apache-2.0
#|export
class TerminateOnNaNCallback(Callback):
    "A `Callback` that terminates training if loss is NaN."
    order=-9
    def after_batch(self):
        "Test if `last_loss` is NaN and interrupts training."
        if torch.isinf(self.loss) or torch.isnan(self.loss): raise CancelFitException

learn = synth_learner()
learn.fit(10, lr=100, cbs=TerminateOnNaNCallback())
assert len(learn.recorder.losses) < 10 * len(learn.dls.train)
for l in learn.recorder.losses:
    assert not torch.isinf(l) and not torch.isnan(l)
"""
Explanation: Tracking callbacks
Callbacks that make decisions depending on how a monitored metric/loss behaves
TerminateOnNaNCallback -
End of explanation
"""
#|export
class TrackerCallback(Callback):
    "A `Callback` that keeps track of the best value in `monitor`."
    order,remove_on_fetch,_only_train_loop = 60,True,True
    def __init__(self, monitor='valid_loss', comp=None, min_delta=0., reset_on_fit=True):
        if comp is None: comp = np.less if 'loss' in monitor or 'error' in monitor else np.greater
        if comp == np.less: min_delta *= -1
        self.monitor,self.comp,self.min_delta,self.reset_on_fit,self.best = monitor,comp,min_delta,reset_on_fit,None
    def before_fit(self):
        "Prepare the monitored value"
        self.run = not hasattr(self, "lr_finder") and not hasattr(self, "gather_preds")
        if self.reset_on_fit or self.best is None: self.best = float('inf') if self.comp == np.less else -float('inf')
        assert self.monitor in self.recorder.metric_names[1:]
        self.idx = list(self.recorder.metric_names[1:]).index(self.monitor)
    def after_epoch(self):
        "Compare the last value to the best up to now"
        val = self.recorder.values[-1][self.idx]
        if self.comp(val - self.min_delta, self.best): self.best,self.new_best = val,True
        else: self.new_best = False
    def after_fit(self): self.run=True
"""
Explanation: TrackerCallback -
End of explanation
"""
#|hide
class FakeRecords(Callback):
    order=51
    def __init__(self, monitor, values): self.monitor,self.values = monitor,values
    def before_fit(self): self.idx = list(self.recorder.metric_names[1:]).index(self.monitor)
    def after_epoch(self): self.recorder.values[-1][self.idx] = self.values[self.epoch]

class TestTracker(Callback):
    order=61
    def before_fit(self): self.bests,self.news = [],[]
    def after_epoch(self):
        self.bests.append(self.tracker.best)
        self.news.append(self.tracker.new_best)
#|hide
learn = synth_learner(n_trn=2, cbs=TestTracker())
cbs=[TrackerCallback(monitor='valid_loss'), FakeRecords('valid_loss', [0.2,0.1])]
with learn.no_logging(): learn.fit(2, cbs=cbs)
test_eq(learn.test_tracker.bests, [0.2, 0.1])
test_eq(learn.test_tracker.news, [True,True])
#With a min_delta
cbs=[TrackerCallback(monitor='valid_loss', min_delta=0.15), FakeRecords('valid_loss', [0.2,0.1])]
with learn.no_logging(): learn.fit(2, cbs=cbs)
test_eq(learn.test_tracker.bests, [0.2, 0.2])
test_eq(learn.test_tracker.news, [True,False])
#|hide
#By default metrics have to be bigger at each epoch.
def tst_metric(out,targ): return F.mse_loss(out,targ)
learn = synth_learner(n_trn=2, cbs=TestTracker(), metrics=tst_metric)
cbs=[TrackerCallback(monitor='tst_metric'), FakeRecords('tst_metric', [0.2,0.1])]
with learn.no_logging(): learn.fit(2, cbs=cbs)
test_eq(learn.test_tracker.bests, [0.2, 0.2])
test_eq(learn.test_tracker.news, [True,False])
#This can be overwritten by passing `comp=np.less`.
learn = synth_learner(n_trn=2, cbs=TestTracker(), metrics=tst_metric)
cbs=[TrackerCallback(monitor='tst_metric', comp=np.less), FakeRecords('tst_metric', [0.2,0.1])]
with learn.no_logging(): learn.fit(2, cbs=cbs)
test_eq(learn.test_tracker.bests, [0.2, 0.1])
test_eq(learn.test_tracker.news, [True,True])
#|hide
#Setting reset_on_fit=True will maintain the "best" value over subsequent calls to fit
learn = synth_learner(n_val=2, cbs=TrackerCallback(monitor='tst_metric', reset_on_fit=False), metrics=tst_metric)
tracker_cb = learn.cbs.filter(lambda cb: isinstance(cb, TrackerCallback))[0]
with learn.no_logging(): learn.fit(1)
first_best = tracker_cb.best
with learn.no_logging(): learn.fit(1)
test_eq(tracker_cb.best, first_best)
#|hide
#A tracker callback is not run during an lr_find
from fastai.callback.schedule import *
learn = synth_learner(n_trn=2, cbs=TrackerCallback(monitor='tst_metric'), metrics=tst_metric)
learn.lr_find(num_it=15, show_plot=False)
assert not hasattr(learn, 'new_best')
"""
Explanation: When implementing a Callback that has behavior that depends on the best value of a metric or loss, subclass this Callback and use its best (for best value so far) and new_best (there was a new best value this epoch) attributes. If you want to maintain best over subsequent calls to fit (e.g., Learner.fit_one_cycle), set reset_on_fit = True.
comp is the comparison operator used to determine whether one value is better than another (it defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise), and min_delta is an optional float that requires a new value to beat the current best (in the direction given by comp) by at least that amount.
End of explanation
"""
#|export
class EarlyStoppingCallback(TrackerCallback):
    "A `TrackerCallback` that terminates training when monitored quantity stops improving."
    order=TrackerCallback.order+3
    def __init__(self, monitor='valid_loss', comp=None, min_delta=0., patience=1, reset_on_fit=True):
        super().__init__(monitor=monitor, comp=comp, min_delta=min_delta, reset_on_fit=reset_on_fit)
        self.patience = patience
    def before_fit(self): self.wait = 0; super().before_fit()
    def after_epoch(self):
        "Compare the value monitored to its best score and maybe stop training."
        super().after_epoch()
        if self.new_best: self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                print(f'No improvement since epoch {self.epoch-self.wait}: early stopping')
                raise CancelFitException()
"""
Explanation: EarlyStoppingCallback -
End of explanation
"""
learn = synth_learner(n_trn=2, metrics=F.mse_loss)
learn.fit(n_epoch=200, lr=1e-7, cbs=EarlyStoppingCallback(monitor='mse_loss', min_delta=0.1, patience=2))
learn.validate()
learn = synth_learner(n_trn=2)
learn.fit(n_epoch=200, lr=1e-7, cbs=EarlyStoppingCallback(monitor='valid_loss', min_delta=0.1, patience=2))
#|hide
test_eq(len(learn.recorder.values), 3)
"""
Explanation: comp is the comparison operator used to determine whether one value is better than another (it defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise), and min_delta is an optional float that requires a new value to beat the current best (in the direction given by comp) by at least that amount. patience is the number of epochs you're willing to wait without improvement.
End of explanation
"""
#|export
class SaveModelCallback(TrackerCallback):
    "A `TrackerCallback` that saves the model's best during training and loads it at the end."
    order = TrackerCallback.order+1
    def __init__(self, monitor='valid_loss', comp=None, min_delta=0., fname='model', every_epoch=False, at_end=False,
                 with_opt=False, reset_on_fit=True):
        super().__init__(monitor=monitor, comp=comp, min_delta=min_delta, reset_on_fit=reset_on_fit)
        assert not (every_epoch and at_end), "every_epoch and at_end cannot both be set to True"
        # keep track of file path for loggers
        self.last_saved_path = None
        store_attr('fname,every_epoch,at_end,with_opt')
    def _save(self, name): self.last_saved_path = self.learn.save(name, with_opt=self.with_opt)
    def after_epoch(self):
        "Compare the value monitored to its best score and save if best."
        if self.every_epoch:
            if (self.epoch%self.every_epoch) == 0: self._save(f'{self.fname}_{self.epoch}')
        else: #every improvement
            super().after_epoch()
            if self.new_best:
                print(f'Better model found at epoch {self.epoch} with {self.monitor} value: {self.best}.')
                self._save(f'{self.fname}')
    def after_fit(self, **kwargs):
        "Load the best model."
        if self.at_end: self._save(f'{self.fname}')
        elif not self.every_epoch: self.learn.load(f'{self.fname}', with_opt=self.with_opt)
"""
Explanation: SaveModelCallback -
End of explanation
"""
learn = synth_learner(n_trn=2, path=Path.cwd()/'tmp')
learn.fit(n_epoch=2, cbs=SaveModelCallback())
assert (Path.cwd()/'tmp/models/model.pth').exists()
learn = synth_learner(n_trn=2, path=Path.cwd()/'tmp')
learn.fit(n_epoch=2, cbs=SaveModelCallback(fname='end',at_end=True))
assert (Path.cwd()/'tmp/models/end.pth').exists()
learn.fit(n_epoch=2, cbs=SaveModelCallback(every_epoch=True))
for i in range(2): assert (Path.cwd()/f'tmp/models/model_{i}.pth').exists()
shutil.rmtree(Path.cwd()/'tmp')
learn.fit(n_epoch=4, cbs=SaveModelCallback(every_epoch=2))
for i in range(4):
if not i%2: assert (Path.cwd()/f'tmp/models/model_{i}.pth').exists()
else: assert not (Path.cwd()/f'tmp/models/model_{i}.pth').exists()
shutil.rmtree(Path.cwd()/'tmp')
"""
Explanation: comp is the comparison operator used to determine whether one value is better than another (it defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise), and min_delta is an optional float that requires a new value to beat the current best (in the direction given by comp) by at least that amount. The model will be saved in learn.path/learn.model_dir/name.pth: every epoch if every_epoch=True, every nth epoch if an integer n is passed to every_epoch, or otherwise at each improvement of the monitored quantity.
End of explanation
"""
#|export
class ReduceLROnPlateau(TrackerCallback):
    "A `TrackerCallback` that reduces learning rate when a metric has stopped improving."
    order=TrackerCallback.order+2
    def __init__(self, monitor='valid_loss', comp=None, min_delta=0., patience=1, factor=10., min_lr=0, reset_on_fit=True):
        super().__init__(monitor=monitor, comp=comp, min_delta=min_delta, reset_on_fit=reset_on_fit)
        self.patience,self.factor,self.min_lr = patience,factor,min_lr
    def before_fit(self): self.wait = 0; super().before_fit()
    def after_epoch(self):
        "Compare the value monitored to its best score and reduce LR by `factor` if no improvement."
        super().after_epoch()
        if self.new_best: self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                old_lr = self.opt.hypers[-1]['lr']
                for h in self.opt.hypers: h['lr'] = max(h['lr'] / self.factor, self.min_lr)
                self.wait = 0
                if self.opt.hypers[-1]["lr"] < old_lr:
                    print(f'Epoch {self.epoch}: reducing lr to {self.opt.hypers[-1]["lr"]}')
learn = synth_learner(n_trn=2)
learn.fit(n_epoch=4, lr=1e-7, cbs=ReduceLROnPlateau(monitor='valid_loss', min_delta=0.1, patience=2))
#|hide
test_eq(learn.opt.hypers[-1]['lr'], 1e-8)
learn = synth_learner(n_trn=2)
learn.fit(n_epoch=6, lr=5e-8, cbs=ReduceLROnPlateau(monitor='valid_loss', min_delta=0.1, patience=2, min_lr=1e-8))
#|hide
test_eq(learn.opt.hypers[-1]['lr'], 1e-8)
"""
Explanation: ReduceLROnPlateau
End of explanation
"""
#|hide
from nbdev.export import notebook2script
notebook2script()
"""
Explanation: Each of these three derived TrackerCallbacks (SaveModelCallback, ReduceLROnPlateau, and EarlyStoppingCallback) has an adjusted order so they can run alongside each other without interference. That order is as follows:
Note: in parenthesis is the actual Callback order number
TrackerCallback (60)
SaveModelCallback (61)
ReduceLROnPlateau (62)
EarlyStoppingCallback (63)
Export -
End of explanation
"""
google-aai/tf-serving-k8s-tutorial | jupyter/keras_training_to_serving.ipynb | apache-2.0
# Import Keras libraries
import keras.applications.resnet50 as resnet50
from keras.preprocessing import image
from keras import backend as K
import numpy as np
import os
# Import TensorFlow saved model libraries
import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
from tensorflow.contrib.session_bundle import exporter
"""
Explanation: Serving a Keras Resnet Model
Keras is a high level API that can be used to build deep neural nets with only a few lines of code, and supports a number of backends for computation (TensorFlow, Theano, and CNTK). Keras also contains a library of pre-trained models, including a Resnet model with 50-layers, trained on the ImageNet dataset, which we will use for this exercise.
This notebook teaches how to create a servable version of the Imagenet Resnet50 model in Keras using the TensorFlow backend. The servable model can be served using TensorFlow Serving, which runs very efficiently in C++ and supports multiple platforms (different OSes, as well as hardware with different types of accelerators such as GPUs). The model will need to handle RPC prediction calls coming from a client that sends requests containing a batch of jpeg images.
See https://github.com/keras-team/keras/blob/master/keras/applications/resnet50.py for the implementation of ResNet50.
Preamble
Import the required libraries.
End of explanation
"""
_DEFAULT_IMAGE_SIZE = 224 # Default width and height of input images to ResNet model
_LABEL_CLASSES = 1001 # Number of classes that model predicts
"""
Explanation: Constants
End of explanation
"""
VERSION_NUMBER = 1 #Increment this if you want to generate more than one servable model
SERVING_DIR = "keras_servable/" + str(VERSION_NUMBER)
SAMPLE_DIR = "../client"
"""
Explanation: Setting the Output Directory
Set a version number and directory for the output of the servable model. Note that with TensorFlow, if you've successfully saved the servable in a directory, trying to save another servable will fail hard. You always want to increment your version number, or otherwise delete the output directory and re-run the servable creation code.
End of explanation
"""
# Preprocessing helper function similar to `resnet_training_to_serving_solution.ipynb`.
def build_jpeg_to_image_graph(jpeg_image):
  """Build graph elements to preprocess an image by subtracting out the mean from all channels.

  Args:
    jpeg_image: A jpeg-formatted byte stream represented as a string.

  Returns:
    A 3d tensor of image pixels normalized for the Keras ResNet50 model.
    The canned ResNet50 pretrained model was trained after running
    keras.applications.resnet50.preprocess_input in 'caffe' mode, which
    flips the RGB channels and subtracts out the channel means [103.939, 116.779, 123.68].
    There is no normalizing on the range.
  """
  image = ???
  image = tf.to_float(image)
  image = resnet50.preprocess_input(image)
  return image
"""
Explanation: Build the Servable Model from Keras
Keras has a prepackaged ImageNet-trained ResNet50 model which takes in a 4d input tensor (i.e. a tensor of RGB-color images) and outputs a list of class probabilities for all of the classes.
We will create a servable model whose input takes in a batch of jpeg-encoded images, and outputs a dictionary containing the top k classes and probabilities for each image in the batch. We've refactored the input preprocessing and output postprocessing into helper functions.
Helper Functions for Building a TensorFlow Graph
TensorFlow is essentially a computation graph with variables and states. The graph must be built before it can ingest and process data. Typically, a TensorFlow graph will contain a set of input nodes (called placeholders) from which data can be ingested, and a set of TensorFlow functions that take existing nodes as inputs and produces a dependent node that performs a computation on the input nodes. Each node can be referenced as an "output" node through which processed data can be read.
It is often useful to create helper functions for building a TensorFlow graphs for two reasons:
Modularity: you can reuse functions in different places; for instance, a different image model or ResNet architecture can reuse functions.
Testability: you can attach placeholders at the input of graph building helper functions, and read the output to ensure that your result matches expected behavior.
Helper function: convert JPEG strings to Normalized 3D Tensors
The client (resnet_client.py) sends jpeg encoded images into an array of jpegs (each entry a string) to send to the server. These jpegs are all appropriately resized to 224x224x3, and do not need resizing on the server side to enter into the ResNet model. However, the ResNet50 model was trained with pixel values normalized using a predefined method in the Keras resnet50 package (resnet50.preprocess_input()). We will need to extract the raw 3D tensor from each jpeg string and normalize the values.
Exercise: Add a command in the helper function to build a node that decodes a jpeg string into a 3D RGB image tensor.
Useful References:
* tf.image module
End of explanation
"""
# Defining input test graph nodes: only needs to be run once!
test_jpeg_ph = ??? # A placeholder for a single string, which is a dimensionless (0D) tensor.
test_decoded_tensor = ??? # Output node, which returns output of the jpeg to image graph.
# Print the graph elements to check shapes. ? indicates that TensorFlow does not know the length of those dimensions.
print(test_jpeg_ph)
print(test_decoded_tensor)
"""
Explanation: Unit test the helper function
Exercise: We are going to construct an input placeholder node in our TensorFlow graph to read data into TensorFlow, and use the helper function to attach computational elements to the input node, resulting in an output node where data is collected. Next, we will then run the graph by providing sample input into the placeholder (Input data can be python floats, ints, strings, numpy arrays, ndarrays, etc.), and returning the value at the output node.
A placeholder can store a Tensor of arbitrary dimension, and arbitrary length in any dimension.
An example of a placeholder that holds a 1d tensor of floating values is:
x = tf.placeholder(dtype=tf.float32, shape=[10], 'my_input_node')
An example of a 2d tensor (matrix) of dimensions 10x20 holding string values is:
x = tf.placeholder(dtype=tf.string, shape=[10, 20], 'my_string_matrix')
Note that we assigned a Python variable x to be a pointer to the placeholder, but simply calling tf.placeholder() with a named element would create an element in the TensorFlow graph that can be referenced in a global dictionary as 'my_input_node'. However, it helps to keep a Python pointer to keep track of the element without having to and pass it into helper functions.
Any dependent node in the graph can serve as an output node. For instance, passing an input node x through y = build_jpeg_to_image_graph(x) would return a node referenced by python variable y which is the result of processing the input through the graph built by the helper function. When we run the test graph with real data below, you will see how to return the output of y.
Remember: TensorFlow helper functions are used to help construct a computational graph! build_jpeg_to_image_graph() does not return a 3D array. It returns a graph node that returns a 3D array after processing a jpeg-encoded string!**
Useful References:
TensorFlow shapes, TensorFlow data types
End of explanation
"""
# Run the graph! Validate the result of the function using a sample image SAMPLE_DIR/cat_sample.jpg
ERROR_TOLERANCE = 1e-4
with open(os.path.join(SAMPLE_DIR, "cat_sample.jpg"), "rb") as imageFile:
  jpeg_str = imageFile.read()
with tf.Session() as sess:
  result = sess.run(test_decoded_tensor, feed_dict={test_jpeg_ph: jpeg_str})
assert result.shape == (224, 224, 3)
# TODO: Replace with assert statements to check max and min normalized pixel values
assert result.max() <= ??? + ERROR_TOLERANCE  # Max pixel value after subtracting mean
assert result.min() >= ??? - ERROR_TOLERANCE  # Min pixel value after subtracting mean
print('Hooray! JPEG decoding test passed!')
"""
Explanation: Run the Test Graph
To run data through a test graph, a TensorFlow session must be created. TensorFlow will only run a portion of the graph that is required to map a set of inputs defined by a dictionary with entries of type {placeholder_node : python data}, and an output graph node (or list of graph nodes), e.g.:
with tf.Session() as sess:
sess.run(output_node,
{placeholder_1: input_data_1, placeholder_2: input_data_2, ...})
or
with tf.Session() as sess:
sess.run([output_node_1, output_node_2, ...],
{placeholder_1: input_data_1, placeholder_2: input_data_2, ...})
Exercise: Let's read a jpeg image as an encoded string, pass it into the input placeholder, and return a 3D tensor result which is the normalized image. The expected image size is 224x224x3. Please add more potentially useful assert statements to test the output.
Hint: The ??? below are just examples of possible assertions you can make about the range of output values in the preprocessed image. The more robust the assertions, the better you can validate your code! See the documentation for build_jpeg_to_image_graph() above for a description of how the resnet50.preprocess_image() function transforms images to fill in the ??? below.
End of explanation
"""
def preprocess_input(jpeg_tensor):
  processed_images = ???  # Convert a 1D tensor of JPEGs to a list of 3D tensors
  processed_images = tf.stack(processed_images)  # Convert list of 3D tensors to a 4D tensor
  processed_images = tf.reshape(tensor=processed_images,  # Reshape to ensure TF graph knows the final dimensions
                                shape=[-1, _DEFAULT_IMAGE_SIZE, _DEFAULT_IMAGE_SIZE, 3])
  return processed_images
"""
Explanation: Remarks
The approach above uses vanilla TensorFlow to perform unit testing. You may notice that the code is more verbose than ideal, since you have to create a session, feed input through a dictionary, etc. We encourage the student to investigate some options below at a later time:
TensorFlow Eager was introduced in TensorFlow 1.5 as a way to execute TensorFlow graphs in a way similar to numpy operations. After testing individual parts of the graph using Eager, you will need to rebuild a graph with the Eager option turned off in order to build a performance optimized TensorFlow graph. Also, keep in mind that you will need another virtual environment with TensorFlow 1.5 in order to run eager execution, which may not be compatible with TensorFlow Serving 1.4 used in this tutorial.
TensorFlow unit testing is a more software engineer oriented approach to run tests. By writing test classes that can be invoked individually when building the project, calling tf.test.main() will run all tests and return a list of ones that succeeded and failed, allowing you to inspect errors. Because we are in a notebook environment, such a test would not succeed due to an already running kernel that tf.test cannot access. The tests must be run from the command line, e.g. python test_my_graph.py.
We've provided both eager execution and unit test examples in the testing directory showing how to unit test various components in this notebook. Note that because these examples contain the solution to exercises below, please complete all notebook exercises prior to reading through these examples.
Now that we know how to run TensorFlow tests, let's create and test more helper functions!
Helper Function: Preprocessing Server Input
The server receives a client request in the form of a dictionary {'images': 1D_tensor_of_jpeg_encoded_strings}, which must be preprocessed into a 4D tensor before feeding into the Keras ResNet50 model.
Exercise: We will be using a ResNet client to send requests to our server. You will need to create a graph to preprocess client requests to be compliant with the client. Using tf.map_fn and build_jpeg_to_image_graph, fill in the missing line (marked ???) to convert the client request into an array of preprocessed 3D floating-point tensors. The following lines stack and reshape this array into a 4D tensor.
Useful References:
* tf.map_fn
* tf.DType
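The map-then-stack pattern that tf.map_fn implements can be sketched in plain numpy. Note that decode_one below is a hypothetical stand-in for build_jpeg_to_image_graph, not the real decoder:

```python
import numpy as np

def decode_one(jpeg_bytes):
    # Hypothetical stand-in for build_jpeg_to_image_graph: pretend that
    # decoding and resizing any JPEG string yields a 224x224x3 float image.
    return np.zeros((224, 224, 3), dtype=np.float32)

def preprocess_batch(jpeg_strings):
    # The tf.map_fn idea: apply the same per-element function along the
    # first dimension, then stack the 3D results into one 4D batch tensor.
    return np.stack([decode_one(s) for s in jpeg_strings])

batch = preprocess_batch([b"img-a", b"img-b"])
print(batch.shape)  # (2, 224, 224, 3)
```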
End of explanation
"""
# Build a Test Input Preprocessing Network: only needs to be run once!
test_jpeg_tensor = ??? # A placeholder for a 1D tensor (of arbitrary length) of jpeg-encoded strings.
test_processed_images = ??? # Output node, which returns a 4D tensor after processing.
# Print the graph elements to check shapes. ? indicates that TensorFlow does not know the length of those dimensions.
print(test_jpeg_tensor)
print(test_processed_images)
# Run test network using a sample image SAMPLE_DIR/cat_sample.jpg
with open(os.path.join(SAMPLE_DIR, "cat_sample.jpg"), "rb") as imageFile:
jpeg_str = imageFile.read()
with tf.Session() as sess:
result = sess.run(test_processed_images, feed_dict={test_jpeg_tensor: np.array([jpeg_str, jpeg_str])}) # Duplicate for length 2 array
assert result.shape == (2, 224, 224, 3) # 4D tensor with first dimension length 2, since we have 2 images
# TODO: add a test for min and max normalized pixel values
assert result.max() <= ??? + ERROR_TOLERANCE # Max pixel value after subtracting mean
assert result.min() >= ??? - ERROR_TOLERANCE # Min pixel value after subtracting mean
# TODO: add a test to verify that the resulting tensor for image 0 and image 1 are identical.
assert result[0].all() == result[1].all()
print('Hooray! Input unit test succeeded!')
"""
Explanation: Unit Test the Input Preprocessing Helper Function
Exercise: Construct a TensorFlow unit test graph for the input function.
Hint: the input node test_jpeg_tensor should be a tf.placeholder. You need to define the shape parameter in tf.placeholder. None inside an array indicates that the length can vary along that dimension.
End of explanation
"""
TOP_K = 5
def postprocess_output(model_output):
'''Return top k classes and probabilities.'''
top_k_probs, top_k_classes = ???
return {'classes': top_k_classes, 'probabilities': top_k_probs}
"""
Explanation: Helper Function: Postprocess Server Output
Exercise: The Keras model returns a 1D tensor of probabilities for each class. We want to write a postprocess_output() that returns only the top k classes and probabilities.
Useful References:
* tf.nn.top_k
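As a quick pure-Python sketch of what tf.nn.top_k computes (the values and indices of the k largest entries, sorted from largest to smallest):

```python
def top_k(probs, k=5):
    # Rank (index, value) pairs by value, largest first, and keep the top k.
    ranked = sorted(enumerate(probs), key=lambda pair: pair[1], reverse=True)[:k]
    values = [v for _, v in ranked]
    indices = [i for i, _ in ranked]
    return values, indices

values, indices = top_k([0.1, 0.05, 0.6, 0.25], k=2)
print(indices)  # [2, 3]
```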
End of explanation
"""
# Build Test Output Postprocessing Network: only needs to be run once!
test_model_output = tf.placeholder(dtype=tf.float32, shape=???, name='test_logits_tensor')
test_prediction_output = postprocess_output(test_model_output)
# Print the graph elements to check shapes.
print(test_model_output)
print(test_prediction_output)
# Import numpy testing framework for float comparisons
import numpy.testing as npt
# Run test network
# Input a tensor with clear winners, and perform checks
# Be very specific about what is expected from your mock model.
model_probs = np.ones(???) # TODO: use the same dimensions as your test_model_output placeholder.
model_probs[2] = 2.5 # TODO: you can create your own tests as well
model_probs[5] = 3.5
model_probs[10] = 4
model_probs[49] = 3
model_probs[998] = 2
TOTAL_WEIGHT = np.sum(model_probs)
model_probs = model_probs / TOTAL_WEIGHT
with tf.Session() as sess:
result = sess.run(test_prediction_output, {test_model_output: model_probs})
classes = result['classes']
probs = result['probabilities']
# Check values
assert len(probs) == 5
npt.assert_almost_equal(probs[0], model_probs[10])
npt.assert_almost_equal(probs[1], model_probs[5])
npt.assert_almost_equal(probs[2], model_probs[49])
npt.assert_almost_equal(probs[3], model_probs[2])
npt.assert_almost_equal(probs[4], model_probs[998])
assert len(classes) == 5
assert classes[0] == 10
assert classes[1] == 5
assert classes[2] == 49
assert classes[3] == 2
assert classes[4] == 998
print('Hooray! Output unit test succeeded!')
"""
Explanation: Unit Test the Output Postprocessing Helper Function
Exercise: Fill in the shape field for the model output, which should be a 1D tensor of probabilities.
Hint: how many image classes are there?
End of explanation
"""
# TODO: Create a placeholder for your arbitrary-length 1D Tensor of JPEG strings
images = tf.placeholder(???)
# TODO: Call preprocess_input to return processed_images
processed_images = ???
# Load (and download if missing) the ResNet50 Keras Model (may take a while to run)
# TODO: Use processed_images as input
model = resnet50.ResNet50(???)
# Rename the model to 'resnet' for serving
model.name = 'resnet'
# TODO: Call postprocess_output on the output of the model to create predictions to send back to the client
predictions = ???
"""
Explanation: Load the Keras Model and Build the Graph
The Keras Model uses TensorFlow as its backend, and therefore its inputs and outputs can be treated as elements of a TensorFlow graph. In other words, you can provide an input that is a TensorFlow tensor, and read the model output like a TensorFlow tensor!
Exercise: Build the end to end network by filling in the TODOs below.
Useful References:
* Keras ResNet50 API
* Keras Model class API: ResNet50 model inherits this class.
End of explanation
"""
# Create a saved model builder as an endpoint to dataflow execution
builder = saved_model_builder.SavedModelBuilder(SERVING_DIR)
# TODO: set the inputs and outputs parameters in predict_signature_def()
signature = predict_signature_def(inputs=???,
outputs=???)
"""
Explanation: Creating the Input-Output Signature
Exercise: The final step to creating a servable model is to define the end-to-end input and output API. Edit the inputs and outputs parameters to predict_signature_def below to ensure that the signature correctly handles client requests. The inputs parameter should be a dictionary {'images': tensor_of_strings}, and the outputs parameter a dictionary {'classes': tensor_of_top_k_classes, 'probabilities': tensor_of_top_k_probs}.
End of explanation
"""
with K.get_session() as sess:
builder.add_meta_graph_and_variables(sess=sess,
tags=[tag_constants.SERVING],
signature_def_map={'predict': signature})
builder.save()
"""
Explanation: Export the Servable Model
End of explanation
"""
|
Kaggle/learntools | notebooks/feature_engineering_new/raw/ex3.ipynb | apache-2.0 | # Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.feature_engineering_new.ex3 import *
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor
def score_dataset(X, y, model=XGBRegressor()):
# Label encoding for categoricals
for colname in X.select_dtypes(["category", "object"]):
X[colname], _ = X[colname].factorize()
# Metric for Housing competition is RMSLE (Root Mean Squared Log Error)
score = cross_val_score(
model, X, y, cv=5, scoring="neg_mean_squared_log_error",
)
score = -1 * score.mean()
score = np.sqrt(score)
return score
# Prepare data
df = pd.read_csv("../input/fe-course-data/ames.csv")
X = df.copy()
y = X.pop("SalePrice")
"""
Explanation: Introduction
In this exercise you'll start developing the features you identified in Exercise 2 as having the most potential. As you work through this exercise, you might take a moment to look at the data documentation again and consider whether the features we're creating make sense from a real-world perspective, and whether there are any useful combinations that stand out to you.
Run this cell to set everything up!
End of explanation
"""
# YOUR CODE HERE
X_1 = pd.DataFrame() # dataframe to hold new features
#_UNCOMMENT_IF(PROD)_
#X_1["LivLotRatio"] = ____
#_UNCOMMENT_IF(PROD)_
#X_1["Spaciousness"] = ____
#_UNCOMMENT_IF(PROD)_
#X_1["TotalOutsideSF"] = ____
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
#%%RM_IF(PROD)%%
X_1 = pd.DataFrame() # dataframe to hold new features
X_1["LivLot"] = df.GrLivArea / df.LotArea
X_1["Spacious"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd
X_1["TotalSF"] = \
df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \
df.Threeseasonporch + df.ScreenPorch
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
X_1 = pd.DataFrame() # dataframe to hold new features
X_1["LivLotRatio"] = df.GrLivArea * df.LotArea
X_1["Spaciousness"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd
X_1["TotalOutsideSF"] = \
df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \
df.Threeseasonporch + df.ScreenPorch
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
X_1 = pd.DataFrame() # dataframe to hold new features
X_1["LivLotRatio"] = df.GrLivArea / df.LotArea
X_1["Spaciousness"] = (df.FirstFlrSF - df.SecondFlrSF) / df.TotRmsAbvGrd
X_1["TotalOutsideSF"] = \
df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \
df.Threeseasonporch + df.ScreenPorch
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
X_1 = pd.DataFrame() # dataframe to hold new features
X_1["LivLotRatio"] = df.GrLivArea / df.LotArea
X_1["Spaciousness"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd
X_1["TotalOutsideSF"] = \
df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \
df.Threeseasonporch
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
X_1 = pd.DataFrame() # dataframe to hold new features
X_1["LivLotRatio"] = df.GrLivArea / df.LotArea
X_1["Spaciousness"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd
X_1["TotalOutsideSF"] = \
df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \
df.Threeseasonporch + df.ScreenPorch
q_1.assert_check_passed()
"""
Explanation: Let's start with a few mathematical combinations. We'll focus on features describing areas -- having the same units (square-feet) makes it easy to combine them in sensible ways. Since we're using XGBoost (a tree-based model), we'll focus on ratios and sums.
1) Create Mathematical Transforms
Create the following features:
LivLotRatio: the ratio of GrLivArea to LotArea
Spaciousness: the sum of FirstFlrSF and SecondFlrSF divided by TotRmsAbvGrd
TotalOutsideSF: the sum of WoodDeckSF, OpenPorchSF, EnclosedPorch, Threeseasonporch, and ScreenPorch
End of explanation
"""
# YOUR CODE HERE
# One-hot encode BldgType. Use `prefix="Bldg"` in `get_dummies`
X_2 = ____
# Multiply
X_2 = ____
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
#%%RM_IF(PROD)%%
X_2 = pd.get_dummies(df.BldgType, prefix="Bldg")
X_2 = X_2.mul(df.LotArea, axis=0)
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
X_2 = pd.get_dummies(df.BldgType, prefix="Bldg")
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
X_2 = pd.get_dummies(df.BldgType, prefix="Bldg")
X_2 = X_2.mul(df.GrLivArea, axis=0)
q_2.assert_check_passed()
"""
Explanation: If you've discovered an interaction effect between a numeric feature and a categorical feature, you might want to model it explicitly using a one-hot encoding, like so:
```
One-hot encode Categorical feature, adding a column prefix "Cat"
X_new = pd.get_dummies(df.Categorical, prefix="Cat")
Multiply row-by-row
X_new = X_new.mul(df.Continuous, axis=0)
Join the new features to the feature set
X = X.join(X_new)
```
2) Interaction with a Categorical
We discovered an interaction between BldgType and GrLivArea in Exercise 2. Now create their interaction features.
End of explanation
"""
X_3 = pd.DataFrame()
# YOUR CODE HERE
#_UNCOMMENT_IF(PROD)_
#X_3["PorchTypes"] = ____
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
#%%RM_IF(PROD)%%
X_3 = pd.DataFrame()
X_3["PorchTypes"] = df[[
"OpenPorchSF",
"EnclosedPorch",
"Threeseasonporch",
"ScreenPorch",
]].gt(0.0).sum(axis=1)
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
X_3 = pd.DataFrame()
X_3["PorchTypes"] = df[[
"WoodDeckSF",
"OpenPorchSF",
"EnclosedPorch",
"Threeseasonporch",
"ScreenPorch",
]].sum(axis=1)
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
X_3 = pd.DataFrame()
X_3["PorchTypes"] = df[[
"WoodDeckSF",
"OpenPorchSF",
"EnclosedPorch",
"Threeseasonporch",
"ScreenPorch",
]].gt(0.0).sum(axis=1)
q_3.assert_check_passed()
"""
Explanation: 3) Count Feature
Let's try creating a feature that describes how many kinds of outdoor areas a dwelling has. Create a feature PorchTypes that counts how many of the following are greater than 0.0:
WoodDeckSF
OpenPorchSF
EnclosedPorch
Threeseasonporch
ScreenPorch
End of explanation
"""
df.MSSubClass.unique()
"""
Explanation: 4) Break Down a Categorical Feature
MSSubClass describes the type of a dwelling:
End of explanation
"""
X_4 = pd.DataFrame()
# YOUR CODE HERE
#_UNCOMMENT_IF(PROD)_
#____
# Check your answer
q_4.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_4.hint()
#_COMMENT_IF(PROD)_
q_4.solution()
#%%RM_IF(PROD)%%
X_4 = pd.DataFrame()
X_4["MSClass"] = df.MSSubClass.str.split(n=1, expand=True)[0]
q_4.assert_check_failed()
#%%RM_IF(PROD)%%
X_4 = pd.DataFrame()
X_4["MSClass"] = df.MSSubClass.str.split("_", n=1, expand=True)[0]
q_4.assert_check_passed()
"""
Explanation: You can see that there is a more general categorization described (roughly) by the first word of each category. Create a feature containing only these first words by splitting MSSubClass at the first underscore _. (Hint: In the split method use an argument n=1.)
End of explanation
"""
X_5 = pd.DataFrame()
# YOUR CODE HERE
#_UNCOMMENT_IF(PROD)_
#X_5["MedNhbdArea"] = ____
# Check your answer
q_5.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_5.hint()
#_COMMENT_IF(PROD)_
q_5.solution()
#%%RM_IF(PROD)%%
X_5 = pd.DataFrame()
X_5["MedNhbdArea"] = df.groupby("Neighborhood")["GrLivArea"].transform("mean")
q_5.assert_check_failed()
#%%RM_IF(PROD)%%
X_5 = pd.DataFrame()
X_5["MedNhbdArea"] = df.groupby("Neighborhood")["LotArea"].transform("median")
q_5.assert_check_failed()
#%%RM_IF(PROD)%%
X_5 = pd.DataFrame()
X_5["MedNhbdArea"] = df.groupby("MSSubClass")["GrLivArea"].transform("median")
q_5.assert_check_failed()
#%%RM_IF(PROD)%%
X_5 = pd.DataFrame()
X_5["MedNhbdArea"] = df.groupby("Neighborhood")["GrLivArea"].transform("median")
q_5.assert_check_passed()
"""
Explanation: 5) Use a Grouped Transform
The value of a home often depends on how it compares to typical homes in its neighborhood. Create a feature MedNhbdArea that describes the median of GrLivArea grouped on Neighborhood.
End of explanation
"""
X_new = X.join([X_1, X_2, X_3, X_4, X_5])
score_dataset(X_new, y)
"""
Explanation: Now you've made your first new feature set! If you like, you can run the cell below to score the model with all of your new features added:
End of explanation
"""
|
prk327/CoAca | 3_Position_and_Label_Based_Indexing.ipynb | gpl-3.0 | # loading libraries and reading the data
import numpy as np
import pandas as pd
market_df = pd.read_csv("../global_sales_data/market_fact.csv")
market_df.head()
"""
Explanation: Position and Label Based Indexing: df.iloc and df.loc
You have seen some ways of selecting rows and columns from dataframes. Let's now see some other ways of indexing dataframes, which pandas recommends, since they are more explicit (and less ambiguous).
There are two main ways of indexing dataframes:
1. Position based indexing using df.iloc
2. Label based indexing using df.loc
Using both the methods, we will do the following indexing operations on a dataframe:
* Selecting single elements/cells
* Selecting single and multiple rows
* Selecting single and multiple columns
* Selecting multiple rows and columns
End of explanation
"""
help(pd.DataFrame.iloc)
"""
Explanation: Position (Integer) Based Indexing
Pandas provides the df.iloc functionality to index dataframes using integer indices.
End of explanation
"""
# Selecting a single element
# Note that 2, 4 corresponds to the third row and fifth column (Sales)
market_df.iloc[2, 4]
"""
Explanation: As mentioned in the documentation, the inputs x, y to df.iloc[x, y] can be:
* An integer, e.g. 3
* A list or array of integers, e.g. [3, 7, 8]
* An integer range, i.e. 3:8
* A boolean array
Let's see some examples.
End of explanation
"""
# Selecting a single row, and all columns
# Select the 6th row, with label (and index) = 5
market_df.iloc[5]
# The above is equivalent to this
# The ":" indicates "all rows/columns"
market_df.iloc[5, :]
# equivalent to market_df.iloc[5, ]
# Select multiple rows using a list of indices
market_df.iloc[[3, 7, 8]]
# Equivalently, you can use:
market_df.iloc[[3, 7, 8], :]
# same as market_df.iloc[[3, 7, 8], ]
# Selecting rows using a range of integer indices
# Notice that 4 is included, 8 is not
market_df.iloc[4:8]
# or equivalently
market_df.iloc[4:8, :]
# or market_df.iloc[4:8, ]
# Selecting a single column
# Notice that the column index starts at 0, and 2 represents the third column (Cust_id)
market_df.iloc[:, 2]
# Selecting multiple columns
market_df.iloc[:, 3:8]
# Selecting multiple rows and columns
market_df.iloc[3:6, 2:5]
# Using booleans
# This selects the rows corresponding to True
market_df.iloc[[True, True, False, True, True, False, True]]
"""
Explanation: Note that simply writing df[2, 4] will throw an error, since pandas cannot tell whether the 2 is an integer index (the third row) or a row with label = 2.
On the other hand, df.iloc[2, 4] tells pandas explicitly that it should assume integer indices.
End of explanation
"""
help(pd.DataFrame.loc)
"""
Explanation: To summarise, df.iloc[x, y] uses integer indices starting at 0.
The other common way of indexing is the label based indexing, which uses df.loc[].
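A compact contrast of the two, using a hypothetical frame whose integer labels differ from the row positions (note that iloc slices are end-exclusive while loc slices include both endpoints):

```python
import pandas as pd

# Integer index labels 4..7, which do not coincide with positions 0..3
df = pd.DataFrame({"a": [10, 20, 30, 40]}, index=[4, 5, 6, 7])

# iloc: positions, end-exclusive
print(df.iloc[0:2]["a"].tolist())  # [10, 20] -> rows at positions 0 and 1

# loc: labels, end-inclusive
print(df.loc[4:6]["a"].tolist())   # [10, 20, 30] -> labels 4, 5 and 6
```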
Label Based Indexing
Pandas provides the df.loc[] functionality to index dataframes using labels.
End of explanation
"""
# Selecting a single element
# Select row label = 2 and column label = 'Sales
market_df.loc[2, 'Sales']
# Selecting a single row using a single label
# df.loc reads 5 as a label, not index
market_df.loc[5]
# or equivalently
market_df.loc[5, :]
# or market_df.loc[5, ]
# Select multiple rows using a list of row labels
market_df.loc[[3, 7, 8]]
# Or equivalently
market_df.loc[[3, 7, 8], :]
# Selecting rows using a range of labels
# Notice that with df.loc, both 4 and 8 are included, unlike with df.iloc
# This is an important difference between iloc and loc
market_df.loc[4:8]
# Or equivalently
market_df.loc[4:8, ]
# Or equivalently
market_df.loc[4:8, :]
# The use of label based indexing will be more clear when we have custom row indices
# Let's change the indices to Ord_id
market_df.set_index('Ord_id', inplace = True)
market_df.head()
# Select Ord_id = Ord_5406 and some columns
market_df.loc['Ord_5406', ['Sales', 'Profit', 'Cust_id']]
# Select multiple orders using labels, and some columns
market_df.loc[['Ord_5406', 'Ord_5446', 'Ord_5485'], 'Sales':'Profit']
# Using booleans
# This selects the rows corresponding to True
market_df.loc[[True, True, False, True, True, False, True]]
"""
Explanation: As mentioned in the documentation, the inputs x, y to df.loc[x, y] can be:
* A single label, e.g. '3' or 'row_index'
* A list or array of labels, e.g. ['3', '7', '8']
* A range of labels, where row_x and row_y both are included, i.e. 'row_x':'row_y'
* A boolean array <br>
Let's see some examples.
End of explanation
"""
|
ComputoCienciasUniandes/MetodosComputacionalesLaboratorio | 2016-1/w04/.ipynb_checkpoints/sistemas_lineales-checkpoint.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Systems of linear equations
In this notebook we will look at basic concepts for solving systems of linear equations.
The structure of this presentation is based on http://nbviewer.ipython.org/github/mbakker7/exploratory_computing_with_python/blob/master/notebook_adv2/py_exp_comp_adv2_sol.ipynb
End of explanation
"""
# numpy can solve systems of this type.
A = np.array([[4.0,3.0,-2.0],[1.0,2.0,1.0],[-3.0,3.0,2.0]])
b = np.array([[3.0],[2.0],[1.0]])
sol = np.linalg.solve(A,b)
print(A)
print(b)
print("sol",sol)
print(np.dot(A,sol))
# the inverse can be found as
Ainv = np.linalg.inv(A)
print("Ainv")
print(Ainv)
print("A * Ainv")
print(np.dot(A,Ainv))
"""
Explanation: Systems of linear equations
An example of a system of linear equations is the following
$
\begin{split}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1 \\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2 \\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3 \\
\end{split}
$
which can be written in matrix form as $Ax = b$, where the solution can be written as $x=A^{-1}b$. This motivates the development of methods for finding the inverse of a matrix.
End of explanation
"""
# first we build the matrices A and b
xp = np.array([-2, 1,4])
yp = np.array([ 2,-1,4])
A = np.zeros((3,3))
b = np.zeros(3)
for i in range(3):
A[i] = xp[i]**2, xp[i], 1 # Store one row at a time
b[i] = yp[i]
print('Array A: ')
print(A)
print('b: ', b)
# now we solve the linear system and plot the solution
sol = np.linalg.solve(A,b)
print('solution is: ', sol)
print('A dot sol: ', np.dot(A, sol))
plt.plot([-2,1,4], [2,-1,4], 'ro')
x = np.linspace(-3,5,100)
y = sol[0]*x**2 + sol[1]*x + sol[2]
plt.plot(x,y,'b')
"""
Explanation: Building a system of linear equations
Now consider the following example. We have three points in the (x,y) plane and we want to find the parabola that passes through those three points.
The equation of the parabola is $y=ax^2+bx+c$; if we have three points $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$ we can define the following system of linear equations.
$
\begin{split}
x_1^2a+x_1b+c&=y_1 \\
x_2^2a+x_2b+c&=y_2 \\
x_3^2a+x_3b+c&=y_3 \\
\end{split}
$
In matrix notation this looks like
$
\left(
\begin{array}{ccc}
x_1^2 & x_1 & 1 \\
x_2^2 & x_2 & 1 \\
x_3^2 & x_3 & 1 \\
\end{array}
\right)
\left(
\begin{array}{c}
a \\ b \\ c \\
\end{array}
\right)
=
\left(
\begin{array}{c}
y_1 \\
y_2 \\
y_3 \\
\end{array}
\right)
$
Let us solve this linear system, assuming the three points are: $(x_1,y_1)=(-2,2)$, $(x_2,y_2)=(1,-1)$, $(x_3,y_3)=(4,4)$
End of explanation
"""
data = np.loadtxt("movimiento.dat")
plt.scatter(data[:,0], data[:,1])
"""
Explanation: Exercise 1
What happens if the points do not lie on a parabola?
Exercise 2
Take the measurements of a quantity $y$ at different times $t$: $(t_0,y_0)=(0,3)$, $(t_1,y_1)=(0.25,1)$, $(t_2,y_2)=(0.5,-3)$, $(t_3,y_3)=(0.75,1)$. These measurements are part of a periodic function that can be written as
$y = a\cos(\pi t) + b\cos(2\pi t) + c\cos(3\pi t) + d\cos(4\pi t)$
where $a$, $b$, $c$, and $d$ are parameters. Build a system of linear equations and find the value of these parameters. Check your answer with a plot.
Least squares
Let us return for a moment to the parabola exercise. What would happen if we actually had 10 measurements? In that case the matrix $A$ would be 10 by 3 and we could not find an inverse. Even so, the problem of finding the parameters of the parabola from the measurements is still interesting, although in this case we have to give up on the parabola passing through all the experimental points, because in general it will not.
For this case we have to define a criterion for saying that a set of parameters is the best. One possible criterion is that the sum of the squared differences between the theoretical curve and the data be minimal. How can we then find a solution in this case?
Changing the notation slightly, suppose we have a vector $d$ of data, a vector $m$ with the model parameters we want to find, and a matrix $G$ that summarizes the physical theory we want to use to explain the data. The problem can then be written as
$G m = d$
where $G$ is in general not invertible. But using the least-squares criterion, the vector $m$ can be estimated by a vector $\hat{m}$ that satisfies the following condition
$G^T G \hat{m} = G^{T}d$
where $T$ denotes the transpose. If we now write $G^{T}G=A$, $\hat{m}=x$ and $G^{T}d=b$ we are back to the problem from the beginning and can easily find $\hat{m}$
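A minimal numpy sketch of these normal equations, using synthetic points that lie exactly on a known parabola (hypothetical data, not the exercise file):

```python
import numpy as np

# Six synthetic measurements lying exactly on y = 2x^2 - 3x + 1
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
d = 2 * x**2 - 3 * x + 1

# G has one row [x_i^2, x_i, 1] per measurement: 6 rows, 3 unknowns,
# so G itself is not invertible, but G^T G (3x3) is.
G = np.column_stack([x**2, x, np.ones_like(x)])

# Solve the normal equations (G^T G) m_hat = G^T d
m_hat = np.linalg.solve(G.T @ G, G.T @ d)
print(m_hat)  # ~ [ 2. -3.  1.]
```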
Exercise 3
The following data
https://raw.githubusercontent.com/ComputoCienciasUniandes/MetodosComputacionales/master/hands_on/lin_algebra/movimiento.dat
represent a time coordinate and a space coordinate of one-dimensional motion in a gravitational field. Find the best possible value of the initial position, initial velocity, and gravity. Check that your values are reasonable with a plot.
End of explanation
"""
|
janusnic/21v-python | unit_20/parallel_ml/notebooks/04 - Pandas and Heterogeneous Data Modeling.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
"""
Explanation: Predictive Modeling with heterogeneous data
End of explanation
"""
#!curl -s https://dl.dropboxusercontent.com/u/5743203/data/titanic/titanic_train.csv | head -5
with open('titanic_train.csv', 'r') as f:
for i, line in zip(range(5), f):
print(line.strip())
#data = pd.read_csv('https://dl.dropboxusercontent.com/u/5743203/data/titanic/titanic_train.csv')
data = pd.read_csv('titanic_train.csv')
"""
Explanation: <img src="files/images/predictive_modeling_data_flow.png">
Loading tabular data from the Titanic kaggle challenge in a pandas Data Frame
Let us have a look at the Titanic dataset from the Kaggle Getting Started challenge at:
https://www.kaggle.com/c/titanic-gettingStarted
We can load the CSV file as a pandas data frame in one line:
End of explanation
"""
data.head(5)
data.count()
"""
Explanation: pandas data frames have an HTML table representation in the IPython notebook. Let's have a look at the first 5 rows:
End of explanation
"""
list(data.columns)
data.shape
data.values
"""
Explanation: The data frame has 891 rows. Some passengers have missing information though: in particular Age and Cabin info can be missing. The meaning of the columns is explained on the challenge website:
https://www.kaggle.com/c/titanic-gettingStarted/data
A data frame can be converted into a numpy array by calling the values attribute:
End of explanation
"""
survived_column = data['Survived']
survived_column.dtype
"""
Explanation: However this cannot be directly fed to a scikit-learn model:
the target variable (survival) is mixed with the input data
some attribute such as unique ids have no predictive values for the task
the values are heterogeneous (string labels for categories, integers and floating point numbers)
some attribute values are missing (nan: "not a number")
Predicting survival
The goal of the challenge is to predict whether a passenger has survived from others known attribute. Let us have a look at the Survived columns:
End of explanation
"""
type(survived_column)
"""
Explanation: data.Survived is an instance of the pandas Series class with an integer dtype:
End of explanation
"""
type(data)
"""
Explanation: The data object is an instance of the pandas DataFrame class:
End of explanation
"""
data.groupby('Survived').count()
np.mean(survived_column == 0)
"""
Explanation: Series can be seen as homogeneous 1D columns. DataFrame instances are heterogeneous collections of columns with the same length.
The original data frame can be aggregated by counting rows for each possible value of the Survived column:
End of explanation
"""
target = survived_column.values
type(target)
target.dtype
target[:5]
"""
Explanation: From this subset of the full passenger list, about 2/3 perished in the event. So if we are to build a predictive model from this data, a baseline model to compare the performance to would be to always predict death. Such a constant model would reach around 62% predictive accuracy (which is higher than predicting at random):
pandas Series instances can be converted to regular 1D numpy arrays by using the values attribute:
End of explanation
"""
numerical_features = data[['Fare', 'Pclass', 'Age']]
numerical_features.head(5)
"""
Explanation: Training a predictive model on numerical features
sklearn estimators all work with homegeneous numerical feature descriptors passed as a numpy array. Therefore passing the raw data frame will not work out of the box.
Let us start simple and build a first model that only uses readily available numerical features as input, namely data['Fare'], data['Pclass'] and data['Age'].
End of explanation
"""
numerical_features.count()
"""
Explanation: Unfortunately some passengers do not have age information:
End of explanation
"""
median_features = numerical_features.dropna().median()
median_features
imputed_features = numerical_features.fillna(median_features)
imputed_features.count()
imputed_features.head(5)
"""
Explanation: Let's use the pandas fillna method to impute the median age for those passengers:
End of explanation
"""
features_array = imputed_features.values
features_array
features_array.dtype
"""
Explanation: Now that the data frame is clean, we can convert it into a homogeneous numpy array of floating point values:
End of explanation
"""
from sklearn.cross_validation import train_test_split
features_train, features_test, target_train, target_test = train_test_split(
features_array, target, test_size=0.20, random_state=0)
features_train.shape
features_test.shape
target_train.shape
target_test.shape
"""
Explanation: Let's take 80% of the data for training a first model and keep 20% for computing its generalization score:
End of explanation
"""
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1)
logreg.fit(features_train, target_train)
target_predicted = logreg.predict(features_test)
from sklearn.metrics import accuracy_score
accuracy_score(target_test, target_predicted)
"""
Explanation: Let's start with a simple model from sklearn, namely LogisticRegression:
End of explanation
"""
logreg.score(features_test, target_test)
"""
Explanation: This first model has around 73% accuracy: this is better than our baseline that always predicts death.
End of explanation
"""
feature_names = numerical_features.columns
feature_names
logreg.coef_
x = np.arange(len(feature_names))
plt.bar(x, logreg.coef_.ravel())
plt.xticks(x + 0.5, feature_names, rotation=30);
"""
Explanation: Model evaluation and interpretation
Interpreting linear model weights
The coef_ attribute of a fitted linear model such as LogisticRegression holds the weights of each feature:
End of explanation
"""
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(target_test, target_predicted)
print(cm)
"""
Explanation: In this case, survival is slightly positively linked with Fare (the higher the fare, the higher the likelihood the model will predict survival) while passengers from first class and lower ages are predicted to survive more often than older people from the 3rd class.
First-class cabins were closer to the lifeboats and children and women reportedly had the priority. Our model seems to capture that historical data. We will see later if the sex of the passenger can be used as an informative predictor to increase the predictive accuracy of the model.
Alternative evaluation metrics
It is possible to see the details of the false positive and false negative errors by computing the confusion matrix:
End of explanation
"""
def plot_confusion(cm, target_names=['not survived', 'survived'],
                   title='Confusion matrix'):
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=60)
plt.yticks(tick_marks, target_names)
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Convenience function to adjust plot parameters for a clear layout.
plt.tight_layout()
plot_confusion(cm)
print(cm)
"""
Explanation: The true labels are shown as the rows and the predicted labels as the columns:
End of explanation
"""
cm.sum(axis=1)
cm_normalized = cm.astype(np.float64) / cm.sum(axis=1)[:, np.newaxis]
print(cm_normalized)
plot_confusion(cm_normalized, title="Normalized confusion matrix")
"""
Explanation: We can normalize the counts by dividing each row by the total number of true "not survived" and "survived" passengers. Taking survival as the positive class, the first row then holds the true negative and false positive rates, and the second row the false negative and true positive rates.
End of explanation
"""
from sklearn.metrics import classification_report
print(classification_report(target_test, target_predicted,
target_names=['not survived', 'survived']))
"""
Explanation: We can therefore observe that the fact that the target classes are not balanced in the dataset makes the accuracy score not very informative.
scikit-learn provides alternative classification metrics to evaluate model performance on imbalanced data, such as precision, recall and f1-score:
End of explanation
"""
target_predicted_proba = logreg.predict_proba(features_test)
target_predicted_proba[:5]
"""
Explanation: Another way to quantify the quality of a binary classifier on imbalanced data is to compute the precision, recall and f1-score of a model (at the default fixed decision threshold of 0.5).
Logistic Regression is a probabilistic model: instead of just predicting a binary outcome (survived or not) given the input features, it can also estimate the posterior probability of the outcome given the input features using the predict_proba method:
End of explanation
"""
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
def plot_roc_curve(target_test, target_predicted_proba):
fpr, tpr, thresholds = roc_curve(target_test, target_predicted_proba[:, 1])
roc_auc = auc(fpr, tpr)
# Plot ROC curve
plt.plot(fpr, tpr, label='ROC curve (area = %0.3f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--') # random predictions curve
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate or (1 - Specifity)')
plt.ylabel('True Positive Rate or (Sensitivity)')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plot_roc_curve(target_test, target_predicted_proba)
"""
Explanation: By default the decision threshold is 0.5: if we vary the decision threshold from 0 to 1 we could generate a family of binary classifier models that address all the possible trade offs between false positive and false negative prediction errors.
We can summarize the performance of a binary classifier for all the possible thresholds by plotting the ROC curve and quantifying the Area under the ROC curve:
End of explanation
"""
features_train, features_test, target_train, target_test = train_test_split(
features_array, target, test_size=0.20, random_state=0)
logreg.fit(features_train, target_train).score(features_test, target_test)
features_train, features_test, target_train, target_test = train_test_split(
features_array, target, test_size=0.20, random_state=1)
logreg.fit(features_train, target_train).score(features_test, target_test)
features_train, features_test, target_train, target_test = train_test_split(
features_array, target, test_size=0.20, random_state=2)
logreg.fit(features_train, target_train).score(features_test, target_test)
"""
Explanation: Here the area under the ROC curve is 0.756, which is very similar to the accuracy (0.732). However, the ROC-AUC score of a random model is expected to be 0.5 on average, while the accuracy score of a random model depends on the class imbalance of the data. ROC-AUC can be seen as a way to calibrate the predictive accuracy of a model against class imbalance.
Cross-validation
We previously decided to randomly split the data to evaluate the model on 20% of held-out data. However, the randomness of the split might have a significant impact on the estimated accuracy:
End of explanation
"""
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(logreg, features_array, target, cv=5)
scores
scores.min(), scores.mean(), scores.max()
"""
Explanation: So instead of using a single train / test split, we can use a group of them and compute the min, max and mean scores as an estimation of the real test score while not underestimating the variability:
End of explanation
"""
scores = cross_val_score(logreg, features_array, target, cv=5,
scoring='roc_auc')
scores.min(), scores.mean(), scores.max()
"""
Explanation: cross_val_score reports accuracy by default, but it can also be used to report other performance metrics such as ROC-AUC or f1-score:
End of explanation
"""
pd.get_dummies(data['Sex'], prefix='Sex').head(5)
pd.get_dummies(data.Embarked, prefix='Embarked').head(5)
"""
Explanation: Exercise:
Compute cross-validated scores for other classification metrics ('precision', 'recall', 'f1', 'accuracy'...).
Change the number of cross-validation folds between 3 and 10: what is the impact on the mean score? on the processing time?
Hints:
The list of classification metrics is available in the online documentation:
http://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values
You can use the %%time cell magic on the first line of an IPython cell to measure the time of the execution of the cell.
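A possible starting point for the first exercise, sketched on synthetic data so it runs standalone (in the notebook you would pass features_array and target instead, and older scikit-learn versions import cross_val_score from sklearn.cross_validation):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in old versions

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression()

# Cross-validate the same model under several classification metrics.
for metric in ('accuracy', 'precision', 'recall', 'f1'):
    scores = cross_val_score(clf, X, y, cv=5, scoring=metric)
    print(metric, scores.mean())
```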
More feature engineering and richer models
Let us now try to build richer models by including more features as potential predictors for our model.
Categorical variables such as data['Embarked'] or data['Sex'] can be converted into boolean indicator features, also known as dummy variables or one-hot-encoded features:
End of explanation
"""
rich_features = pd.concat([data[['Fare', 'Pclass', 'Age']],
pd.get_dummies(data['Sex'], prefix='Sex'),
pd.get_dummies(data['Embarked'], prefix='Embarked')],
axis=1)
rich_features.head(5)
"""
Explanation: We can combine those new numerical features with the previous features using pandas.concat along axis=1:
End of explanation
"""
rich_features_no_male = rich_features.drop('Sex_male', 1)
rich_features_no_male.head(5)
"""
Explanation: By construction the new Sex_male feature is redundant with Sex_female. Let us drop it:
End of explanation
"""
rich_features_final = rich_features_no_male.fillna(rich_features_no_male.dropna().median())
rich_features_final.head(5)
"""
Explanation: Let us not forget to impute the median age for passengers without age information:
End of explanation
"""
%%time
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score
logreg = LogisticRegression(C=1)
scores = cross_val_score(logreg, rich_features_final, target, cv=5, scoring='accuracy')
print("Logistic Regression CV scores:")
print("min: {:.3f}, mean: {:.3f}, max: {:.3f}".format(
scores.min(), scores.mean(), scores.max()))
"""
Explanation: We can finally cross-validate a logistic regression model on this new data and observe that the mean score has significantly increased:
End of explanation
"""
%load solutions/04A_plot_logistic_regression_weights.py
"""
Explanation: Exercise:
change the value of the parameter C. Does it have an impact on the score?
fit a new instance of the logistic regression model on the full dataset.
plot the weights for the features of this newly fitted logistic regression model.
End of explanation
"""
%%time
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(rf, rich_features_final, target, cv=5, n_jobs=4,
scoring='accuracy')
print("Random Forest CV scores:")
print("min: {:.3f}, mean: {:.3f}, max: {:.3f}".format(
scores.min(), scores.mean(), scores.max()))
%%time
from sklearn.ensemble import GradientBoostingClassifier
gb = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
subsample=.8, max_features=.5)
scores = cross_val_score(gb, rich_features_final, target, cv=5, n_jobs=4,
scoring='accuracy')
print("Gradient Boosted Trees CV scores:")
print("min: {:.3f}, mean: {:.3f}, max: {:.3f}".format(
scores.min(), scores.mean(), scores.max()))
"""
Explanation: Training Non-linear models: ensembles of randomized trees
sklearn also implement non linear models that are known to perform very well for data-science projects where datasets have not too many features (e.g. less than 5000).
In particular let us have a look at Random Forests and Gradient Boosted Trees:
End of explanation
"""
%load solutions/04B_more_categorical_variables.py
%load solutions/04C_feature_importance.py
"""
Explanation: Both models seem to do slightly better than the logistic regression model on this data.
Exercise:
Change the value of the learning_rate and other GradientBoostingClassifier parameter, can you get a better mean score?
Would treating the PClass variable as categorical improve the models performance?
Find out which predictor variables (features) are the most informative for those models.
Hints:
Fitted ensembles of trees have feature_importances_ attribute that can be used similarly to the coef_ attribute of linear models.
End of explanation
"""
%%time
from sklearn.grid_search import GridSearchCV
gb = GradientBoostingClassifier(n_estimators=100, subsample=.8)
params = {
'learning_rate': [0.05, 0.1, 0.5],
'max_features': [0.5, 1],
'max_depth': [3, 4, 5],
}
gs = GridSearchCV(gb, params, cv=5, scoring='roc_auc', n_jobs=4)
gs.fit(rich_features_final, target)
"""
Explanation: Automated parameter tuning
Instead of changing the value of the learning rate manually and re-running the cross-validation, we can find the best values for the parameters automatically (assuming we are ready to wait):
End of explanation
"""
sorted(gs.grid_scores_, key=lambda x: x.mean_validation_score, reverse=True)
gs.best_score_
gs.best_params_
"""
Explanation: Let us sort the models by mean validation score:
End of explanation
"""
features = pd.concat([data[['Fare', 'Age']],
pd.get_dummies(data['Sex'], prefix='Sex'),
pd.get_dummies(data['Pclass'], prefix='Pclass'),
pd.get_dummies(data['Embarked'], prefix='Embarked')],
axis=1)
features = features.drop('Sex_male', 1)
# Because of the following bug we cannot use NaN as the missing
# value marker, use a negative value as marker instead:
# https://github.com/scikit-learn/scikit-learn/issues/3044
features = features.fillna(-1)
features.head(5)
"""
Explanation: We should note that the mean scores are very close to one another and almost always within one standard deviation of one another. This means that all those parameters are quite reasonable. The only parameter of importance seems to be the learning_rate: 0.5 seems to be a bit too high.
Avoiding data snooping with pipelines
When doing imputation in pandas, prior to computing the train test split we use data from the test to improve the accuracy of the median value that we impute on the training set. This is actually cheating. To avoid this we should compute the median of the features on the training fold and use that median value to do the imputation both on the training and validation fold for a given CV split.
To do this we can prepare the features as previously but without the imputation: we just replace missing values by the -1 marker value:
End of explanation
"""
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features.values, target, random_state=0)
from sklearn.preprocessing import Imputer
imputer = Imputer(strategy='median', missing_values=-1)
imputer.fit(X_train)
"""
Explanation: We can now use the Imputer transformer of scikit-learn to find the median value on the training set and apply it on missing values of both the training set and the test set.
End of explanation
"""
imputer.statistics_
features.columns.values
"""
Explanation: The median age computed on the training set is stored in the statistics_ attribute.
End of explanation
"""
X_train_imputed = imputer.transform(X_train)
X_test_imputed = imputer.transform(X_test)
np.any(X_train == -1)
np.any(X_train_imputed == -1)
np.any(X_test == -1)
np.any(X_test_imputed == -1)
"""
Explanation: Imputation can now happen by calling the transform method:
End of explanation
"""
from sklearn.pipeline import Pipeline
imputer = Imputer(strategy='median', missing_values=-1)
classifier = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
subsample=.8, max_features=.5,
random_state=0)
pipeline = Pipeline([
('imp', imputer),
('clf', classifier),
])
scores = cross_val_score(pipeline, features.values, target, cv=5, n_jobs=4,
scoring='accuracy', )
print(scores.min(), scores.mean(), scores.max())
"""
Explanation: We can now use a pipeline that wraps an imputer transformer and the classifier itself:
End of explanation
"""
%%time
params = {
'imp__strategy': ['mean', 'median'],
'clf__max_features': [0.5, 1],
'clf__max_depth': [3, 4, 5],
}
gs = GridSearchCV(pipeline, params, cv=5, scoring='roc_auc', n_jobs=4)
gs.fit(X_train, y_train)
sorted(gs.grid_scores_, key=lambda x: x.mean_validation_score, reverse=True)
gs.best_score_
plot_roc_curve(y_test, gs.predict_proba(X_test))
gs.best_params_
"""
Explanation: The mean cross-validation score is slightly lower than when we used imputation on the whole data as we did earlier, although not by much. This means that in this case the data snooping was not really helping the model cheat by much.
Let us re-run the grid search, this time on the pipeline. Note that thanks to the pipeline structure we can optimize the interaction of the imputation method with the parameters of the downstream classifier without cheating:
End of explanation
"""
ContinualAI/avalanche | notebooks/from-zero-to-hero-tutorial/04_training.ipynb | mit | !pip install avalanche-lib==0.2.0
"""
Explanation: description: Continual Learning Algorithms Prototyping Made Easy
Training
Welcome to the "Training" tutorial of the "From Zero to Hero" series. In this part we will present the functionalities offered by the training module.
First, let's install Avalanche. You can skip this step if you have installed it already.
End of explanation
"""
from torch.optim import SGD
from torch.nn import CrossEntropyLoss
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive, CWRStar, Replay, GDumb, Cumulative, LwF, GEM, AGEM, EWC # and many more!
model = SimpleMLP(num_classes=10)
optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = CrossEntropyLoss()
cl_strategy = Naive(
model, optimizer, criterion,
train_mb_size=100, train_epochs=4, eval_mb_size=100
)
"""
Explanation: 💪 The Training Module
The training module in Avalanche is designed with modularity in mind. Its main goals are to:
provide a set of popular continual learning baselines that can be easily used to run experimental comparisons;
provide simple abstractions to create and run your own strategy as efficiently and easily as possible starting from a couple of basic building blocks we already prepared for you.
At the moment, the training module includes three main components:
Templates: these are high level abstractions used as a starting point to define the actual strategies. The templates contain already implemented basic utilities and functionalities shared by a group of strategies (e.g. the BaseSGDTemplate contains all the implemented methods to deal with strategies based on SGD).
Strategies: these are popular baselines already implemented for you which you can use for comparisons or as base classes to define a custom strategy.
Plugins: these are classes that allow you to add specific behaviour to your own strategy. The plugin system allows you to define reusable components which can be easily combined (e.g. a replay strategy, a regularization strategy). They are also used to automatically manage logging and evaluation.
Keep in mind that many Avalanche components are independent of Avalanche strategies. If you already have your own strategy which does not use Avalanche, you can use Avalanche's benchmarks, models, data loaders, and metrics without ever looking at Avalanche's strategies!
📈 How to Use Strategies & Plugins
If you want to compare your strategy with other classic continual learning algorithm or baselines, in Avalanche you can instantiate a strategy with a couple lines of code.
Strategy Instantiation
Most strategies require only 3 mandatory arguments:
- model: this must be a torch.nn.Module.
- optimizer: torch.optim.Optimizer already initialized on your model.
- loss: a loss function such as those in torch.nn.functional.
Additional arguments are optional and allow you to customize training (batch size, number of epochs, ...) or strategy-specific parameters (memory size, regularization strength, ...).
End of explanation
"""
from avalanche.benchmarks.classic import SplitMNIST
# scenario
benchmark = SplitMNIST(n_experiences=5, seed=1)
# TRAINING LOOP
print('Starting experiment...')
results = []
for experience in benchmark.train_stream:
print("Start of experience: ", experience.current_experience)
print("Current Classes: ", experience.classes_in_this_experience)
cl_strategy.train(experience)
print('Training completed')
print('Computing accuracy on the whole test set')
results.append(cl_strategy.eval(benchmark.test_stream))
"""
Explanation: Training & Evaluation
Each strategy object offers two main methods: train and eval. Both of them accept either a single experience (Experience) or a list of them, for maximum flexibility.
We can train the model continually by iterating over the train_stream provided by the scenario.
End of explanation
"""
from avalanche.training.plugins import EarlyStoppingPlugin
strategy = Naive(
model, optimizer, criterion,
plugins=[EarlyStoppingPlugin(patience=10, val_stream_name='train')])
"""
Explanation: Adding Plugins
Most continual learning strategies follow roughly the same training/evaluation loops, i.e. a simple naive strategy (a.k.a. finetuning) augmented with additional behavior to counteract catastrophic forgetting. The plugin system in Avalanche is designed to easily augment continual learning strategies with custom behavior, without having to rewrite the training loop from scratch. Avalanche strategies accept an optional list of plugins that will be executed during the training/evaluation loops.
For example, early stopping is implemented as a plugin:
End of explanation
"""
from avalanche.training.templates import SupervisedTemplate
from avalanche.training.plugins import ReplayPlugin, EWCPlugin
replay = ReplayPlugin(mem_size=100)
ewc = EWCPlugin(ewc_lambda=0.001)
strategy = SupervisedTemplate(
model, optimizer, criterion,
plugins=[replay, ewc])
"""
Explanation: In Avalanche, most continual learning strategies are implemented using plugins, which makes it easy to combine them together. For example, it is extremely easy to create a hybrid strategy that combines replay and EWC together by passing the appropriate plugins list to the SupervisedTemplate:
End of explanation
"""
from avalanche.benchmarks.utils.data_loader import ReplayDataLoader
from avalanche.core import SupervisedPlugin
from avalanche.training.storage_policy import ReservoirSamplingBuffer
class ReplayP(SupervisedPlugin):
def __init__(self, mem_size):
""" A simple replay plugin with reservoir sampling. """
super().__init__()
self.buffer = ReservoirSamplingBuffer(max_size=mem_size)
def before_training_exp(self, strategy: "SupervisedTemplate",
num_workers: int = 0, shuffle: bool = True,
**kwargs):
""" Use a custom dataloader to combine samples from the current data and memory buffer. """
if len(self.buffer.buffer) == 0:
# first experience. We don't use the buffer, no need to change
# the dataloader.
return
strategy.dataloader = ReplayDataLoader(
strategy.adapted_dataset,
self.buffer.buffer,
oversample_small_tasks=True,
num_workers=num_workers,
batch_size=strategy.train_mb_size,
shuffle=shuffle)
def after_training_exp(self, strategy: "SupervisedTemplate", **kwargs):
""" Update the buffer. """
self.buffer.update(strategy, **kwargs)
benchmark = SplitMNIST(n_experiences=5, seed=1)
model = SimpleMLP(num_classes=10)
optimizer = SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = CrossEntropyLoss()
strategy = Naive(model=model, optimizer=optimizer, criterion=criterion, train_mb_size=128,
plugins=[ReplayP(mem_size=2000)])
strategy.train(benchmark.train_stream)
strategy.eval(benchmark.test_stream)
"""
Explanation: Beware that most strategy plugins modify the internal state. As a result, not all the strategy plugins can be combined together. For example, it does not make sense to use multiple replay plugins since they will try to modify the same strategy variables (mini-batches, dataloaders), and therefore they will be in conflict.
📝 A Look Inside Avalanche Strategies
If you have arrived at this point you already know how to use Avalanche strategies and are ready to use them. However, before making your own strategies you need to understand a little bit of the internal implementation of the training and evaluation loops.
In Avalanche you can customize a strategy in 2 ways:
Plugins: Most strategies can be implemented as additional code that runs on top of the basic training and evaluation loops (e.g. the Naive strategy). Therefore, the easiest way to define a custom strategy such as a regularization or replay strategy, is to define it as a custom plugin. The advantage of plugins is that they can be combined, as long as they are compatible, i.e. they do not modify the same part of the state. The disadvantage is that in order to do so you need to understand the strategy loop, which can be a bit complex at first.
Subclassing: In Avalanche, continual learning strategies inherit from the appropriate template, which provides generic training and evaluation loops. The most high level template is the BaseTemplate, from which all the Avalanche's strategies inherit. Most template's methods can be safely overridden (with some caveats that we will see later).
Keep in mind that if you already have a working continual learning strategy that does not use Avalanche, you can use most Avalanche components such as benchmarks, evaluation, and models without using Avalanche's strategies!
Training and Evaluation Loops
As we already mentioned, Avalanche strategies inherit from the appropriate template (e.g. continual supervised learning strategies inherit from the SupervisedTemplate). These templates provide:
Basic Training and Evaluation loops which define a naive (finetuning) strategy.
Callback points, which are used to call the plugins at a specific moments during the loop's execution.
A set of variables representing the state of the loops (current model, data, mini-batch, predictions, ...) which allows plugins and child classes to easily manipulate the state of the training loop.
The training loop has the following structure:
```text
train
before_training
before_train_dataset_adaptation
train_dataset_adaptation
after_train_dataset_adaptation
make_train_dataloader
model_adaptation
make_optimizer
before_training_exp # for each exp
before_training_epoch # for each epoch
before_training_iteration # for each iteration
before_forward
after_forward
before_backward
after_backward
after_training_iteration
before_update
after_update
after_training_epoch
after_training_exp
after_training
```
The evaluation loop is similar:
text
eval
before_eval
before_eval_dataset_adaptation
eval_dataset_adaptation
after_eval_dataset_adaptation
make_eval_dataloader
model_adaptation
before_eval_exp # for each exp
eval_epoch # we have a single epoch in evaluation mode
before_eval_iteration # for each iteration
before_eval_forward
after_eval_forward
after_eval_iteration
after_eval_exp
after_eval
Methods starting with before/after are the methods responsible for calling the plugins.
Notice that before the start of each experience during training we have several phases:
- dataset adaptation: This is the phase where the training data can be modified by the strategy, for example by adding other samples from a separate buffer.
- dataloader initialization: Initialize the data loader. Many strategies (e.g. replay) use custom dataloaders to balance the data.
- model adaptation: Here, the dynamic models (see the models tutorial) are updated by calling their adaptation method.
- optimizer initialization: After the model has been updated, the optimizer should also be updated to ensure that the new parameters are optimized.
Strategy State
The strategy state is accessible via several attributes. Most of these can be modified by plugins and subclasses:
- self.clock: keeps track of several event counters.
- self.experience: the current experience.
- self.adapted_dataset: the data modified by the dataset adaptation phase.
- self.dataloader: the current dataloader.
- self.mbatch: the current mini-batch. For supervised classification problems, mini-batches have the form <x, y, t>, where x is the input, y is the target class, and t is the task label.
- self.mb_output: the current model's output.
- self.loss: the current loss.
- self.is_training: True if the strategy is in training mode.
How to Write a Plugin
Plugins provide a simple solution to define a new strategy by augmenting the behavior of another strategy (typically the Naive strategy). This approach reduces the overhead and code duplication, improving code readability and prototyping speed.
Creating a plugin is straightforward. As with strategies, you have to create a class which inherits from the corresponding plugin template (BasePlugin, BaseSGDPlugin, SupervisedPlugin) and implements the callbacks that you need. The exact callbacks to use depend on the aim of your plugin. You can use the loop shown above to understand what callbacks you need to use. For example, we show below a simple replay plugin that uses after_training_exp to update the buffer after each training experience, and before_training_exp to customize the dataloader. Notice that before_training_exp is executed after make_train_dataloader, which means that the Naive strategy has already updated the dataloader. If we used another callback, such as before_train_dataset_adaptation, our dataloader would have been overwritten by the Naive strategy. Plugin methods always receive the strategy as an argument, so they can access and modify the strategy's state.
End of explanation
"""
from avalanche.benchmarks.utils import AvalancheConcatDataset
from avalanche.training.templates import SupervisedTemplate
class Cumulative(SupervisedTemplate):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.dataset = None # cumulative dataset
def train_dataset_adaptation(self, **kwargs):
super().train_dataset_adaptation(**kwargs)
curr_data = self.experience.dataset
if self.dataset is None:
self.dataset = curr_data
else:
self.dataset = AvalancheConcatDataset([self.dataset, curr_data])
self.adapted_dataset = self.dataset.train()
strategy = Cumulative(model=model, optimizer=optimizer, criterion=criterion, train_mb_size=128)
strategy.train(benchmark.train_stream)
"""
Explanation: Check base plugin's documentation for a complete list of the available callbacks.
How to Write a Custom Strategy
You can always define a custom strategy by overriding the corresponding template methods.
However, there is an important caveat to keep in mind. If you override a method, you must remember to call all the callback handlers (the methods starting with before/after) at the appropriate points. For example, train calls before_training and after_training before and after the training loop, respectively. The easiest way to avoid mistakes is to start from the template method that you want to override and modify it to your own needs without removing the callback handling.
Notice that the EvaluationPlugin (see evaluation tutorial) uses the strategy callbacks.
As an example, the SupervisedTemplate, for continual supervised strategies, provides the global state of the loop in the strategy's attributes, which you can safely use when you override a method. For instance, the Cumulative strategy trains a model continually on the union of all the experiences encountered so far. To achieve this, the Cumulative strategy overrides train_dataset_adaptation and updates self.adapted_dataset by concatenating all the previous experiences with the current one.
End of explanation
"""
mccormd1/LCandR | LC_python/LC_MLtrain.ipynb | gpl-3.0 | import joblib
features=joblib.load('clean_LCfeatures.p')
labels=joblib.load('clean_LClabels.p')
clabels=joblib.load('clean_LCclassifierlabel.p')
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
features.head(n=10)
features['earliest_cr_line']=features.earliest_cr_line.dt.year
features.earliest_cr_line.dtype
import imp
imp.reload(vs)
vs.distribution(features)
"""
Explanation: Supervised regression of LC loan data
As my metric (buy rate) is a continuous function, I have turned a simple classification (good vs bad loans) into a regression of return. I find that this would be a better tool for picking loans to invest in, as it forecasts a predicted return, not just whether the loan will be paid as agreed. My metric interacts with the loan interest, preferring higher-interest loans that are paid as agreed over lower-interest loans, which simple classification does not do. Successful forecasting of my metric can be the difference between having no defaulting loans but a 4% return per year (low-interest, safe loans) vs having no or few defaulting loans and a 10-15% return per year.
Getting started
Data exploration and cleaning has been performed using the previous jupyter notebook named "Data Cleaning.ipynb". I exported a clean features and labels pickle using joblib in that notebook. I will import them and graph some preliminary things to start!
End of explanation
"""
valuespread={'annual_inc':max(features.annual_inc),'delinqmax':max(features.delinq_2yrs),
'openmax':max(features.open_acc), 'pubmax':max(features.pub_rec),
'revolbalmax':max(features.revol_bal),'revolutilmax':max(features.revol_util),
'collectmax':max(features.collections_12_mths_ex_med)}
valuespread
"""
Explanation: Some weird scaling is going on; just to check my sanity, check the max values of a few of the features. These may need to be log scaled just to deal with the large values. Income almost always is going to need log scaling anyway.
End of explanation
"""
features.query('revol_bal>1000000.0')
"""
Explanation: There are some crazy revolving balances and utilizations. I need to check those quickly. Everything else looks extreme, but within ranges I'd assume could exist.
End of explanation
"""
features.query('revol_util>120')
"""
Explanation: I guess these are pretty understandable, high paying jobs and income.
Let's check the utilizations - having over 100% seems... not possible.
End of explanation
"""
features.drop([294407],inplace=True)
labels.drop([294407],inplace=True)
clabels.drop([294407],inplace=True)
imp.reload(vs)
vs.distribution(features)
"""
Explanation: Having a utilization over 100% is odd, but having a utilization of almost 900% is crazy! I am going to just remove this one loan - not sure what is going on with this outlier. The others are clearly weird, but there doesn't seem to be an obvious cutoff beyond removing the huge outlier.
End of explanation
"""
# Log-transform the skewed features
skewed = ['annual_inc','delinq_2yrs','open_acc', 'pub_rec','revol_bal','total_acc', 'collections_12_mths_ex_med']
features_raw=features.copy()
features_raw[skewed] = features[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
imp.reload(vs)
vs.distribution(features_raw, transformed = True)
"""
Explanation: Utilization looks much more normal!
Now to log transform the skewed distributions.
Log transform Skewed data
End of explanation
"""
# import scipy.stats
# winsors=['delinq_2yrs','pub_rec','collections_12_mths_ex_med']
# for feature in winsors:
# winsor[feature]=scipy.stats.mstats.winsorize(features[feature], limits=0.01,axis=1)
"""
Explanation: We see a clear improvement in income, open accounts, and revolving balance. The others are improved but still dominated by low numbers. That's okay for now. There are extreme outliers, but they are important. I am tempted to winsorize, but the values are so heavily weighted toward 0 that the winsorization would affect all values greater than zero.
End of explanation
"""
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['loan_amnt','emp_length', 'annual_inc','dti','delinq_2yrs','earliest_cr_line','inq_last_6mths','open_acc', 'pub_rec','revol_bal','revol_util','total_acc', 'collections_12_mths_ex_med']
features_raw[numerical] = scaler.fit_transform(features_raw[numerical])
# Show an example of a record with scaling applied
display(features_raw.head(n = 10))
"""
Explanation: Normalization
Now we normalize, by scaling by the max and min values.
End of explanation
"""
features_raw
features_raw=features_raw.drop(['emp_title','addr_state'],axis=1)
features_raw.dtypes
# One-hot encode the 'features_raw' data using pandas.get_dummies()
feat = pd.get_dummies(features_raw)
#print(income.head(n=10))
# Print the number of features after one-hot encoding
encoded = list(feat.columns)
print ("{} total features after one-hot encoding.".format(len(encoded)))
# Uncomment the following line to see the encoded feature names
print (encoded)
"""
Explanation: One Hot Encoding
Up to this point, I have only dealt with the "continuous" numerical features, or features where the values have some relation. for example, higher income can be directly compared to lower income. Now I want to deal with discrete features, where values are independent. One example is the address state. Although we all might have a specific way to rank the states, there is no "correct" relationship between any two possible states - they are discrete and independent. These will be one-hot encoded!
Of note is the employment title - this is put in by the person applying for the loan and can be pretty much anything. This would certainly cause an explosion in feature space. I think, at this point, I will just remove that from the feature space instead of encoding it.
End of explanation
"""
# Import train_test_split
from sklearn.model_selection import train_test_split #sklearn 0.18.1 and up
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(feat, labels, test_size = 0.2, random_state = 1)
# Show the results of the split
print ("Training set has {} samples.".format(X_train.shape[0]))
print ("Testing set has {} samples.".format(X_test.shape[0]))
"""
Explanation: Great! There are the original continuous features, and now a feature for all types of home ownership, purposes for the loan, and different address states!
Shuffle and split data
Specifically, I believe that this data is ordered by the origination date, so it is important to shuffle the data. Luckily this happens anyway with train_test_split.
End of explanation
"""
from sklearn.metrics import r2_score, mean_squared_error
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# Fit the learner to the training data using slicing with 'sample_size'
start = time() # Get start time
learner = clf.fit(X_train.sample(n=sample_size,random_state=1),y_train.sample(n=sample_size,random_state=1)) #df.sample(frac=percent) maybe too?
end = time() # Get end time
# Calculate the training time
results['train_time'] = end-start
# Get the predictions on the test set,
# then get predictions on the first 300 training samples
start = time() # Get start time
predictions_test = clf.predict(X_test)
predictions_train = clf.predict(X_train[:300])
end = time() # Get end time
# Calculate the total prediction time
results['pred_time'] = end-start
# Compute mean square error on the first 300 training samples
results['mse_train'] = mean_squared_error(y_train[:300],predictions_train)
# Compute mean square error on test set
results['mse_test'] = mean_squared_error(y_test,predictions_test)
# Compute R^2 on the the first 300 training samples
results['R2_train'] = r2_score(y_train[:300],predictions_train)
# Compute R^2 on the test set
results['R2_test'] = r2_score(y_test,predictions_test)
# Success
print ("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
# Initialize models
clf_A = make_pipeline(PolynomialFeatures(2,interaction_only=True),LinearRegression())
clf_B = GradientBoostingRegressor(random_state=2)
clf_C = Ridge()
# Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = len(y_train.sample(frac=.01))
samples_10 = len(y_train.sample(frac=.1))
samples_100 = len(y_train.sample(frac=1))
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]: #clf_A,
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# print(results[clf])
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results,.5,.5)
"""
Explanation: Testing different models
For almost all supervised learning, a version of decision trees is necessary to test, if only for its extreme intuitiveness. At the very least, it may be used to shrink the feature space and improve speed for other algorithms. I will use Gradient Boosting regression for this. Secondly, I want to test a simple regularized linear model, and I will use Ridge regression for this. Finally, I want to test a polynomial regression (a linear regression on second-order interaction features).
defining model tester
End of explanation
"""
results
"""
Explanation: Checking the polynomial regression
These values look huge; I need to check why they are breaking the graphs
End of explanation
"""
from sklearn.tree import DecisionTreeRegressor
results2={}
clf_D = DecisionTreeRegressor(random_state=1)
clf_D.fit(X_train,y_train)
predictions_test = clf_D.predict(X_test)
predictions_train = clf_D.predict(X_train[:3000])
end = time() # Get end time
# Calculate the total prediction time
# Compute mean square error on the first 300 training samples
results2['mse_train'] = mean_squared_error(y_train[:3000],predictions_train)
# Compute mean square error on test set
results2['mse_test'] = mean_squared_error(y_test,predictions_test)
# Compute R^2 on the the first 300 training samples
results2['R2_train'] = r2_score(y_train[:3000],predictions_train)
# Compute R^2 on the test set
results2['R2_test'] = r2_score(y_test,predictions_test)
results2
"""
Explanation: Regression Postmortem
Woof, terrible bias - almost no structure is described by these models. I'm actually not sure what to make of such extreme values, but negative R^2 values are bad, and large MSE is bad. Likely, the models are too simple. I'd like to try and overfit before I move towards a more modest approach, such as classification. If I can tune a decision tree from overfitting towards something useful, maybe the metric and this regression still have hope.
Attempting to overfit.
For this I will use a standard decision tree, and attempt to overfit with hopes for post-pruning.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
# Initialize the classifier
clf = DecisionTreeRegressor(random_state=2)
# Create the parameters list you wish to tune
parameters = {'max_depth':(4,6,20,None),'min_samples_split':(2,50),'min_samples_leaf':(1,101)} #',,
# Make an R2_score scoring object
scorer = make_scorer(r2_score)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(clf,parameters,scoring=scorer,n_jobs=-1)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_fit = grid_obj.fit(X_train,y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-afterscores
print ("Unoptimized model\n------")
print ("MSE score on testing data: {:.4f}".format(mean_squared_error(y_test, predictions)))
print ("R2-score on testing data: {:.4f}".format(r2_score(y_test, predictions)))
print ("\nOptimized Model\n------")
print ("Final MSE score on the testing data: {:.4f}".format(mean_squared_error(y_test, best_predictions)))
print ("Final R2-score on the testing data: {:.4f}".format(r2_score(y_test, best_predictions)))
best_clf
# from sklearn.model_selection import GridSearchCV
# from sklearn.metrics import make_scorer
# # Initialize the classifier
# clf = GradientBoostingRegressor(random_state=2)
# # Create the parameters list you wish to tune
# parameters = {'learning_rate':(0.01,0.1), 'max_depth':(3,6)}
# # Make an R2_score scoring object
# scorer = make_scorer(r2_score)
# # TODO: Perform grid search on the classifier using 'scorer' as the scoring method
# grid_obj = GridSearchCV(clf,parameters,scoring=scorer,n_jobs=-1)
# # TODO: Fit the grid search object to the training data and find the optimal parameters
# grid_fit = grid_obj.fit(X_train,y_train)
# # Get the estimator
# best_clf = grid_fit.best_estimator_
# # Make predictions using the unoptimized and model
# predictions = (clf.fit(X_train, y_train)).predict(X_test)
# best_predictions = best_clf.predict(X_test)
# # Report the before-and-afterscores
# print ("Unoptimized model\n------")
# print ("MSE score on testing data: {:.4f}".format(mean_squared_error(y_test, predictions)))
# print ("R2-score on testing data: {:.4f}".format(r2_score(y_test, predictions)))
# print ("\nOptimized Model\n------")
# print ("Final MSE score on the testing data: {:.4f}".format(mean_squared_error(y_test, best_predictions)))
# print ("Final R2-score on the testing data: {:.4f}".format(r2_score(y_test, best_predictions)))
best_clf
"""
Explanation: Overfitting with Decision Trees - Success?
Looks like we have classic high variance now (great training scores, terrible test scores)! This makes me more confident that the metric is tenable.
Gridsearch Decision Tree model tuning
Now I'll try to tune the decision tree using gridsearch CV.
End of explanation
"""
# Import train_test_split
from sklearn.model_selection import train_test_split #sklearn 0.18.1 and up
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(feat, clabels, test_size = 0.2, random_state = 1)
# Show the results of the split
print ("Training set has {} samples.".format(X_train.shape[0]))
print ("Testing set has {} samples.".format(X_test.shape[0]))
"""
Explanation: Interestingly, the best boosted regressor is the "stock" one. It is pretty poor. I think it is time to try classification instead of regression
Classification of Loan Data
In this case, we take the classification data, which groups "current" and "fully paid" loans together, and groups all other delinquent loans together. First, we must get a new test split, and then we will get a baseline naive predictor.
End of explanation
"""
from sklearn.metrics import accuracy_score,fbeta_score, roc_auc_score, cohen_kappa_score, precision_score
allones=np.ones(len(clabels))
acc=accuracy_score(clabels,allones)
fbeta=fbeta_score(clabels,allones,beta=.5)
auc=roc_auc_score(clabels,allones)
cohen=cohen_kappa_score(clabels,allones)
prec=precision_score(clabels,allones)
# Print the results
print ("Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}, ROC AUC: {:.4f}, Cohen's k: {:.4f}, precision: {:.4f}]".format(acc, fbeta,auc,cohen,prec))
"""
Explanation: Naive Predictor
Here we make a naive predictor that simply guesses every loan is "good" (or bad, if more loans are bad than good). We need to beat these scores in order to say that our model is doing better than chance and is actually meaningful. For classification I will test accuracy, $f_{0.5}$ score, ROC AUC, and Cohen's kappa to determine the quality of the model with this highly imbalanced dataset. I use $f_{0.5}$ specifically instead of other beta values because I care most about being precise! I'd rather misclassify a few good loans as bad than have bad loans slip through. Missing a good loan carries much less risk than funding a bad loan. I could look at tuning the classification by implementing a cost of misclassification...
ROC AUC takes into consideration true positive rates and false positive rates, and can deal with unbalanced data.
Cohen's kappa is specifically for unbalanced data, and describes the agreement between predicted and actual classes beyond what would be expected by chance.
End of explanation
"""
from sklearn.metrics import fbeta_score, roc_auc_score, cohen_kappa_score
def train_predictclass(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
resultsclass = {}
# Fit the learner to the training data using slicing with 'sample_size'
start = time() # Get start time
learner = clf.fit(X_train.sample(n=sample_size,random_state=1),y_train.sample(n=sample_size,random_state=1)) #df.sample(frac=percent) maybe too?
end = time() # Get end time
# Calculate the training time
resultsclass['train_time'] = end-start
# Get the predictions on the test set,
# then get predictions on the first 300 training samples
start = time() # Get start time
predictions_test = clf.predict(X_test)
predictions_train = clf.predict(X_train[:300])
end = time() # Get end time
# Calculate the total prediction time
resultsclass['pred_time'] = end-start
# Compute accuracy on the first 300 training samples
resultsclass['ROC_AUC_train'] = roc_auc_score(y_train[:300],predictions_train)
# Compute accuracy on test set
resultsclass['ROC_AUC_test'] = roc_auc_score(y_test,predictions_test)
# Compute F-score on the the first 300 training samples
resultsclass['precision_train'] = precision_score(y_train[:300],predictions_train)
# Compute F-score on the test set
resultsclass['precision_test'] = precision_score(y_test,predictions_test)
# Success
print ("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return resultsclass
# Initialize models
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import GaussianNB
# TODO: Initialize the three models
clf_A = GaussianNB()
clf_B = DecisionTreeClassifier(random_state=5)
clf_C = RandomForestClassifier(n_estimators=50)
# Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = len(y_train.sample(frac=.01))
samples_10 = len(y_train.sample(frac=.1))
samples_100 = len(y_train.sample(frac=1))
# Collect results on the learners
resultsclass = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
resultsclass[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
resultsclass[clf_name][i] = \
train_predictclass(clf, samples, X_train, y_train, X_test, y_test)
# print(results[clf])
# Run metrics visualization for the three supervised learning models chosen
imp.reload(vs)
vs.classevaluate(resultsclass,auc,prec)
"""
Explanation: Wow! Accuracy and F-score are pretty high. Likely, this is because there are an order of magnitude more fully paid loans than others. This can be seen in the data cleaning notebook! Interestingly, ROC AUC and Cohen's k show how poor a naive classifier should be. I'll use those to judge my models.
Testing Classifiers
I am choosing three classifiers: Decision trees, as it is obligatory to try them on any classification - they get 80% of the way there 80% of the time. Gaussian Naive Bayes, as it is one of the simplest models and I love to see simplicity win out. And finally, Random Forests, as an ensemble of trees that should be more robust than a single tree when "current" loans are ambiguous.
End of explanation
"""
# Initialize the classifier
clf = DecisionTreeClassifier(random_state=3)
# Create the parameters list you wish to tune
parameters = {'max_depth':(20,50, None),'max_leaf_nodes':(100,300,None)}
# Make an R2_score scoring object
scorer = make_scorer(precision_score)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(clf,parameters,scoring=scorer,n_jobs=-1)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_fit = grid_obj.fit(X_train,y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-afterscores
print ("Unoptimized model\n------")
print ("ROC AUC score on testing data: {:.4f}".format(roc_auc_score(y_test, predictions)))
print ("Precision on testing data: {:.4f}".format(precision_score(y_test, predictions)))
print ("\nOptimized Model\n------")
print ("Final ROC AUC score on the testing data: {:.4f}".format(roc_auc_score(y_test, best_predictions)))
print ("Final Precision on the testing data: {:.4f}".format(precision_score(y_test, best_predictions)))
"""
Explanation: Results
This is promising! The decision tree unsurprisingly shows the best scores on the testing set. I'll try to optimize decision trees next! I will attempt to prune the tree a bit with grid search.
End of explanation
"""
from sklearn.model_selection import cross_val_score
scores = cross_val_score(best_clf,X_train, y_train, cv=20,scoring=scorer, n_jobs=-1)
scores
difference=scores.mean()-prec
import math
from scipy import stats,special
zscore=difference/(scores.std())
p_values = 1-special.ndtr(zscore)
print(zscore,p_values)
(scores.std()/math.sqrt(10))
scores.std()*100
import joblib
joblib.dump(clf, 'decisiontree.p')
features.columns
# from sklearn import tree
# import pydotplus
# dot_data = tree.export_graphviz(best_clf, out_file=None,feature_names=feat.columns,class_names=['Bad Loan','Fully Paid'])
# graph = pydotplus.graph_from_dot_data(dot_data)
# graph.write_pdf("LC_decisiontree.pdf")
best_clf.feature_importances_
importances = best_clf.feature_importances_
# Plot
import importlib
importlib.reload(vs)
vs.feature_plot(importances, X_train, y_train)
best_clf.tree_.node_count
indices = np.argsort(importances)[::-1]
columns = X_train.columns.values[indices[:10]]
values = importances[indices][:10]
np.cumsum(values)
values
columns
"""
Explanation: Interestingly, this doesn't really improve the precision: there may not be enough bad-loan data to give the model something to hang on to, or potentially we just haven't collected the data that would be more predictive. Still, the model is crazy simple, and it actually performs better at reducing false positives than the others, at least with this training/test set. It is worth noting that the 0.0095% increase in accuracy over the naive classifier is very small but better than nothing.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.16/_downloads/plot_read_events.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Chris Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
"""
Explanation: Reading an event file
Read events from a file. For a more detailed guide on how to read events
using MNE-Python, see tut_epoching_and_averaging.
End of explanation
"""
events_1 = mne.read_events(fname, include=1)
events_1_2 = mne.read_events(fname, include=[1, 2])
events_not_4_32 = mne.read_events(fname, exclude=[4, 32])
"""
Explanation: Reading events
Below we'll read in an events file. We suggest that this file end in
-eve.fif. Note that we can read in the entire events file, or only
events corresponding to particular event types with the include and
exclude parameters.
End of explanation
"""
print(events_1[:5], '\n\n---\n\n', events_1_2[:5], '\n\n')
for ind, before, after in events_1[:5]:
print("At sample %d stim channel went from %d to %d"
% (ind, before, after))
"""
Explanation: Events objects are essentially numpy arrays with three columns:
event_sample | previous_event_id | event_id
End of explanation
"""
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
mne.viz.plot_events(events_1, axes=axs[0], show=False)
axs[0].set(title="restricted to event 1")
mne.viz.plot_events(events_1_2, axes=axs[1], show=False)
axs[1].set(title="restricted to event 1 or 2")
mne.viz.plot_events(events_not_4_32, axes=axs[2], show=False)
axs[2].set(title="keep all but 4 and 32")
plt.setp([ax.get_xticklabels() for ax in axs], rotation=45)
plt.tight_layout()
plt.show()
"""
Explanation: Plotting events
We can also plot events in order to visualize how events occur over the
course of our recording session. Below we'll plot our three event types
to see which ones were included.
End of explanation
"""
mne.write_events('example-eve.fif', events_1)
"""
Explanation: Writing events
Finally, we can write events to disk. Remember to use the naming convention
-eve.fif for your file.
End of explanation
"""
|
minesh1291/Learning-Python | TensorFlow_Beginner/try2-LR-TFB.ipynb | apache-2.0 | import tensorflow as tf
"""
Explanation: https://www.youtube.com/watch?v=oxf3o8IbCk4 [23:33]
http://ischlag.github.io/2016/06/04/how-to-use-tensorboard/ outdated
https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard
Linear Regression
Import
End of explanation
"""
W = tf.Variable([0.3])
b = tf.Variable([-0.3])
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
"""
Explanation: Variables init
End of explanation
"""
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
init = tf.global_variables_initializer()
"""
Explanation: Functions init
End of explanation
"""
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# create a summary for our cost
cost_sum = tf.summary.scalar("cost", loss)
#Weight_sum = tf.summary.scalar("Weight", W)
#bias_sum = tf.summary.scalar("bias", b)
with tf.name_scope('W'):
mean = tf.reduce_mean(W)
tf.summary.scalar('mean', mean)
#stddev = tf.sqrt(tf.reduce_mean(tf.square(W - mean)))
#tf.summary.scalar('stddev', stddev)
with tf.name_scope('b'):
mean = tf.reduce_mean(b)
tf.summary.scalar('mean', mean)
#stddev = tf.sqrt(tf.reduce_mean(tf.square(W - mean)))
#tf.summary.scalar('stddev', stddev)
merged = tf.summary.merge_all()
sess = tf.Session()
train_writer = tf.summary.FileWriter('./train',sess.graph)
#tf.InteractiveSession()
#tf.global_variables_initializer().run()
sess.run(init)
sess.run([train],{x:[1,2,3,4],y:[0,-1,-2,-3]})
for i in range(500):
summary, _ = sess.run([merged,train],{x:[1,2,3,4],y:[0,-1,-2,-3]})
train_writer.add_summary(summary,i)
if i % 100 ==0 :
print(sess.run([W,b]))
print("loss:",sess.run(loss,{x:[1,2,3,4],y:[0,-1,-2,-3]}))
"""
Explanation: Optimize
End of explanation
"""
train_writer.close()
#sess.close()
!tensorboard --logdir=./train
!rm -rf ./train
"""
Explanation: TensorBoard
End of explanation
"""
|
netodeolino/TCC | TCC 02/Resultados/Geral/Análise geral sobre os dados criminais.ipynb | mit | import plotly.plotly as py
import plotly.graph_objs as go
import pandas
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import matplotlib.ticker as ticker
from scipy.stats import gaussian_kde
import matplotlib.image as mpimg
files = [
'Cluster-Crime-Janeiro', 'Cluster-Crime-Fevereiro',
'Cluster-Crime-Marco', 'Cluster-Crime-Abril',
'Cluster-Crime-Maio'
]
qtd_ocorrencias = []
meses = ['Janeiro', 'Fevereiro', 'Março', 'Abril', 'Maio']
for file in files:
df = pandas.read_csv("./data/" + file + ".csv")
qtd_ocorrencias.append(len(df))
data = [go.Bar(
x=meses,
y=qtd_ocorrencias
)]
py.iplot(data, filename='basic-bar')
"""
Explanation: Analysis of crime data from Fortaleza-CE
In total, 5 files were analyzed, covering the months of January through May (2017)
Together, the 5 files add up to 985.5 kB and 2,747 criminal occurrences
End of explanation
"""
img=mpimg.imread('crimes_por_mês.png')
imgplot = plt.imshow(img)
plt.show()
labels = meses
values = qtd_ocorrencias
trace = go.Pie(labels=labels, values=values)
py.iplot([trace], filename='basic_pie_chart')
"""
Explanation: March had the highest number of occurrences
End of explanation
"""
img=mpimg.imread('crimes_por_mês_per.png')
imgplot = plt.imshow(img)
plt.show()
"""
Explanation: As we can see in the chart below, crimes are fairly evenly distributed across the months; that is, no month had a number of occurrences that makes it stand out from the others
End of explanation
"""
|
johnpfay/environ859 | 07_DataWrangling/notebooks/03-Using-NumPy-With-Rasters.ipynb | gpl-3.0 | # Import the modules
import arcpy
import numpy as np
#Set the name of the file we'll import
demFilename = '../Data/DEM.tif'
#Import the DEM as a NumPy array, using only a 200 x 200 pixel subset of it
demRaster = arcpy.RasterToNumPyArray(demFilename)
#What is the shape of the raster (i.e. the # of rows and columns)?
demRaster.shape
#...note it's a 2d array
#Here, we re-import the raster, this time only grabbing a 150 x 150 pixel subset - to speed processing
demRaster = arcpy.RasterToNumPyArray(demFilename,'',150,150)
#What is the pixel type?
demRaster.dtype
"""
Explanation: Using NumPy with Rasters
In addition to converting feature classes into NumPy arrays, we can also convert entire raster datasets into 2-dimensional arrays. This allows us, as we'll see below, to programmatically extract values from these rasters, and we can integrate these arrays with other packages to perform custom analyses with the data.
For more information on this topic see:
https://4326.us/esri/scipy/devsummit-2016-scipy-arcgis-presentation-handout.pdf
End of explanation
"""
#Get the value of a specific pixel, here one at the 100th row and 50th column
demRaster[99,49]
#Compute the min, max and mean value of all pixels in the 200 x 200 DEM snipped
print "Min:", demRaster.min()
print "Max:", demRaster.max()
print "Mean:", demRaster.mean()
"""
Explanation: As a 2-dimensional NumPy array, we can perform rapid stats on the pixel values. We can do this for all the pixels at once, or we can perform stats on specific pixels or subsets of pixels, using their rows and columns to identify the subsets.
End of explanation
"""
#Get the max for the 10th column of pixels,
# (using `:` to select all rows and `9` to select just the 10th column)
print(demRaster[:,9].max())
#Get the mean for the first 10 rows of pixels, selecting all columns
# (We can put a `:` as the second slice, or leave it blank after the comma...)
x = demRaster[:10,]
x.shape
x.mean()
"""
Explanation: To define subsets, we use the same slicing techniques we use for lists and strings, but as this is a 2 dimensional array, we provide two sets of slices; the first slices will be the rows and the second for the columns. We can use a : to select all rows or columns.
End of explanation
"""
#Import the SciPy and plotting packages
import scipy.ndimage as nd
from matplotlib import pyplot as plt
#Allows plots in our Jupyter Notebook
%matplotlib inline
#Create a 'canvas' onto which we can add plots
fig = plt.figure(figsize=(20,20))
#Loop through 10 iterations
for i in xrange(10):
#Create a kernel, intially 3 x 3, then expanding 3 x 3 at each iteration
size = (i+1) * 3
print size,
#Compute the median value for the kernel surrounding each pixel
med = nd.median_filter(demRaster, size)
#Subtract the median elevation from the original to compute TPI
tpi = demRaster - med
#Create a subplot frame
a = fig.add_subplot(5, 5,i+1)
#Show the median interpolated DEM
plt.imshow(tpi, interpolation='nearest')
#Set titles for the plot
a.set_title('{}x{}'.format(size, size))
plt.axis('off')
plt.subplots_adjust(hspace = 0.1)
prev = med
"""
Explanation: The SciPy package has a number of multi-dimensional image processing capabilities (see https://docs.scipy.org/doc/scipy/reference/ndimage.html). Here is a somewhat complex example that runs through 10 iterations of computing a neighborhood mean (using the nd.median_filter) with an incrementally growing neighorhood. We then subtract that neighborhood median elevation from the original elevation to compute Topographic Position Index (TPI, see http://www.jennessent.com/downloads/tpi-poster-tnc_18x22.pdf)
If you don't fully understand how it works, at least appreciate that converting a raster to a NumPy array enables us to use other packages to execute custom analyses on the data.
End of explanation
"""
|
dwhswenson/contact_map | examples/concurrences.ipynb | lgpl-2.1 | from __future__ import print_function
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from contact_map import ContactFrequency, ResidueContactConcurrence, plot_concurrence
import mdtraj as md
traj = md.load("data/gsk3b_example.h5")
print(traj) # to see number of frames; size of system
"""
Explanation: Concurrences
One of the tools in Contact Map Explorer is the ability to look at simultaneous contacts. The idea is that you might have a set of contacts that is likely to happen concurrently, and that this set of contacts might help you define a stable state. This is managed in Contact Map Explorer by what we call contact concurrences.
To start, we'll look at a trajectory of a specific inhibitor during its binding process to GSK3B.
End of explanation
"""
topology = traj.topology
yyg = topology.select('resname YYG and element != "H"')
protein = topology.select('protein and element != "H"')
"""
Explanation: First, we'll use MDTraj's atom selection language to split out the protein and the ligand, which has residue name YYG in the input files. We're only interested in contacts between the protein and the ligand (not contacts within the protein). We'll also only look at heavy atom contacts.
End of explanation
"""
%%time
contacts = ContactFrequency(traj, query=yyg, haystack=protein)
contact_list = [(contact_pair, freq)
for contact_pair, freq in contacts.residue_contacts.most_common()
if freq >= 0.2]
%%time
concurrence = ResidueContactConcurrence(traj, contact_list, select="")
# optionally, create x-values... since we know that we have 1 ns/frame
times = [1.0*(i+1) for i in range(len(traj))]
(fig, ax, lgd) = plot_concurrence(concurrence, x_values=times)
plt.xlabel("Time (ns)")
plt.savefig("concurrences.pdf", bbox_extra_artists=(lgd,), bbox_inches='tight')
"""
Explanation: Now we'll make a list of all the residue contacts that are made, keeping only those that occur more than 20% of the time. We'll put that into a ResidueContactConcurrence object, and plot it!
End of explanation
"""
# for visualization, we need to clean up the trajectory:
traj.topology.create_standard_bonds() # required for image_molecules
traj = traj.image_molecules().superpose(traj)
import nglview as nv
view = nv.show_mdtraj(traj)
view.remove_cartoon()
view.add_cartoon('protein', color='#0000BB', opacity=0.3)
view.add_ball_and_stick("YYG")
# update to my recommeded camera orientation
camera_orientation = [-100, 45, -30, 0,
50, 90, -45, 0,
0, -45,-100, 0,
-50, -45, -45, 1]
view._set_camera_orientation(camera_orientation)
# start trajectory at frame 60: you can scrub forward or backward
view.frame = 60
view
"""
Explanation: This plot shows when each contact occurred. The x-axis is time. Each dot represents that a specific contact pair is present at that time. The contact pairs are separated along the vertical axis, listed in the order from contact_list (here, decreasing order of frequency of the contact). The contacts are also listed in that order in the legend, to the right.
This trajectory shows two groups of stable contacts between the protein and the ligand; i.e. there is a change in the stable state. This allows us to visually identify the contacts involved in each state. Both states involve the ligand being in contact with Phe33, but the earlier state includes contacts with Ile28, Gly29, etc., while the later state includes contacts with Ser32 and Gly168.
This change occurs around 60ns (which is also frame 60), and is evident when viewing the MD trajectory. If you have NGLView installed, visualize the trajectory with the following:
End of explanation
"""
|
schiob/MusGen | Genetic_Chords.ipynb | mit | from IPython import display
display.Image('img/simple.jpg', width=400)
"""
Explanation: Creating a chord progression with a genetic algorithm
This work is the result of an experiment done some months ago. I used a simple genetic algorithm to find a solution to a classic exercise of harmony: given a certain voice (normally the bass), create the other three voices to make a chord progression. I know that a genetic algorithm may not be the best approach to solving a progression; I just wanted to play with these algorithms and make something fun. The code isn't perfect and the algorithm can be improved by adding more options in the chord selection, but for simplicity I didn't use seventh chords.
Working with music
The first part of this challenge is to find a way to easily represent the notes in the melody. Luckily for us, MIDI was created many years ago, so I used MIDI numbers to represent every single note; for example, 60 is middle C. At the beginning of the sequence of notes I set the key signature with the number of sharps or flats.
So an example like this:
End of explanation
"""
import random
import math
import numpy
from string import Template
from deap import base
from deap import creator
from deap import tools
from deap import algorithms
from lily_template import TEMPLATE
"""
Explanation: This will be represented as: 0# 57 59 60 62
Lilypond
Going from music notation to this MIDI number representation with just one voice is quite easy (although I might hack something together for that too), but once I generate all the notes that form the chords, the process ends up being a pain in the ass. Therefore, at the end I take the output and pass it through a little script that transforms the numbers into Lilypond notation and generates a .ly file and a .jpg of the sheet.
The genetic algorithm
A genetic algorithm works pretty much like the evolution of a species in real life. You take a group of individuals, called a population. In this population there are a few individuals that have the best attributes; those are the ones that will survive and carry their genes on to the next generation. This process continues generation after generation until there is an individual with the perfect attributes, or at least with the best so far. You can read more in Wikipedia.
The process can be structured in these steps:
- Initialization
- Selection
- Crossover
- Mutation
- Termination
Let's tackle them one by one. But first let me explain the framework that I used.
Working environment
I used a library called DEAP, Distributed Evolutionary Algorithms in Python; it is a novel evolutionary computation framework for rapid prototyping and testing of ideas. It has all the basic tools to work with genetic algorithms; you only have to create the functions to create, select, mate, and mutate the individuals.
End of explanation
"""
# Global Variables
OPTIONS_M = ((0,-3,5), (0,-3,5), (0,-4,5), (0,-3,6), (0,-3,5), (0,-4,5), (0,-4,5))
OPTIONS_m = ((0,-4,5), (0,-4,5), (0,-3,5), (0,-3,5), (0,-4,5), (0,-3,6), (0,5))
MOD_M = ('M','m','m','M','M','m','d')
MOD_m = ('m','d','M','m','M','M','M')
"""
Explanation: The file lily_template has the template to create the Lilypond file, and to format it I used the string.Template class.
Then I set some global variables that I'm going to use:
OPTIONS_* are the different roles that a note can have in a chord: the first note in the scale of the tonality can be the first note in the tonic chord, the third degree in the 6° chord, or the fifth degree in the subdominant. So I represented these options as differences between the fundamental note of the possible chords and the note's degree in the scale.
These differences change a little bit in a minor tonality.
MOD_* are just the qualities (major, minor, or diminished) of the chord built on each degree in a major and a minor tonality.
End of explanation
"""
display.Image('img/ex_prog.jpg', width=400)
"""
Explanation: Initialization
In the initialization part, you have to create the initial population with all the individuals. In this example an individual will be a chord progression, with every chord being made of four notes. I represented each individual as a list of chords, and each chord as a list of notes; for example:
End of explanation
"""
def setTon(line):
"""Return the tonality of the exercise and the bass notes of it"""
ton = line[:2]
notes = list(map(int, line[3:].split(' ')))
if ton[1] == '#':
ton = (int(ton[0])*7)%12
else:
ton = (int(ton[0])*5)%12
for note in notes:
if (ton+6)%12 == note%12:
ton = str((ton-3)%12)+'m'
break
else:
if ton-3 == notes[-1]%12:
ton = str((ton-3)%12)+'m'
else:
ton = str(ton)+'M'
return ton, notes
"""
Explanation: This individual would be represented as
[[48, 64, 67, 79], [53, 59, 65, 74], [55, 59, 62, 74], [48, 55, 67, 76]]
To do anything I need the tonality of the exercise, so I defined a function to find it given the bass voice and the key signature; it looks at the key signature and checks whether the voice shows any sign of being in a minor key:
End of explanation
"""
display.Image('img/all_same.jpg', width=400)
def creatChord(nameC, noteF):
"""Create one chord given the name of the chord and the fundamental note"""
num_funda = int(nameC[:-1])
if nameC[-1] == 'M':
val_notes = [num_funda, (num_funda+4)%12, (num_funda+7)%12]
elif nameC[-1] == 'm':
val_notes = [num_funda, (num_funda+3)%12, (num_funda+7)%12]
elif nameC[-1] == 'd':
val_notes = [num_funda, (num_funda+3)%12, (num_funda+6)%12]
# Tessitura of each voice
tenorR = list(range(48, 69))
contR = list(range(52, 77))
sopR = list(range(60, 86))
    # Depending on the bass note, these are the options for the other voices
if noteF%12 == val_notes[0]:
opc = [[1,1,1], [2,1,0], [0,1,2]]
elif noteF%12 == val_notes[1]:
opc = [[1,0,2], [3,0,0], [2,0,1]]
elif noteF%12 == val_notes[2]:
opc = [[1,1,1], [2,1,0]]
opc = random.choice(opc)
chordN = list()
for num, val in zip(opc, val_notes):
chordN += [val]*num
random.shuffle(chordN)
chord = [noteF,]
for nte, voce in zip(chordN, [tenorR, contR, sopR]):
posible_n = [x for x in voce if x%12 == nte]
chord.append(random.choice(posible_n))
return chord
"""
Explanation: The function returns a tuple with the tonality, represented as a number from 0 to 11 (C to B) followed by 'M' if the tonality is major or 'm' if it's minor, together with the list of bass notes.
Then I wrote a function to create a single chord given the name of the chord and the fundamental note. The interesting part is that a four-note chord can have multiple permutations and different sets of notes, and it can be in close or open harmony.
For example all these chords are D major and have the same bass:
End of explanation
"""
def selChord(ton, notesBass):
"""Select the chords from all the posibilities"""
listaOp = OPTIONS_M if ton[-1] == 'M' else OPTIONS_m
listaMod = MOD_M if ton[-1] == 'M' else MOD_m
prog = list()
for note in notesBass:
name = note%12
grad = name-int(ton[:-1])
grad = math.ceil(((grad+12)%12) / 2)
num = (random.choice(listaOp[grad]) + name +12) % 12
grad = num-int(ton[:-1])
grad = math.ceil(((grad+12)%12) / 2)
name = '{}{}'.format(num, listaMod[grad])
prog.append([creatChord(name, note), grad])
return prog
"""
Explanation: After this function, I only need to select a chord for each of the notes in the bass. This process uses the MIDI representation of the notes, doing arithmetic on them and making a random choice among the possible options. At the end we have a complete chord progression.
End of explanation
"""
def newChordProg(ton, notes):
"""Create a new individual given the tonality and the base notes"""
chords = selChord(ton, notes)
for c in chords:
yield c
"""
Explanation: Now, DEAP requires a generator to create the individuals, so I just create a simple generator that yields each chord of the progression.
End of explanation
"""
def check_interval(chord):
"""Return the number of mistakes in the distance between the notes."""
res = 0
if chord[2] - chord[1] > 12 or chord[2]-chord[1] < 0:
res += 15
if chord[3] - chord[2] > 12 or chord[3]-chord[2] < 0:
res += 15
if chord[1] == chord[2] or chord[2] == chord[3]:
res += 1.4
return res
"""
Explanation: Selection
From Wikipedia: "During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected."
So, I created a fitness function based on classical harmony; it evaluates the progression to find "errors" such as a distance between adjacent voices greater than an octave, and it also penalizes two voices singing the same note:
End of explanation
"""
def check_2_chords(ch1, ch2):
"""Return the number of mistakes in the intervals between 2 chords."""
res = 0
# Check for 5° and 8°
ite1 = map(lambda x,y: y-x, ch1[:-1], ch1[1:])
ite2 = map(lambda x,y: y-x, ch2[:-1], ch2[1:])
for inter1, inter2 in zip(ite1, ite2):
if inter1 == 7 and inter2 == 7:
res += 15
elif inter1 == 0 and inter2 == 0:
res += 15
elif inter1 == 12 and inter2 == 12:
res += 15
# Check for big intervals, just to make it more "human"
for note1, note2 in zip(ch1[1:], ch2[1:]):
if abs(note1-note2) >= 7: # 7 equals 5° interval
res += .7
return res
"""
Explanation: Also, between two consecutive chords you have to avoid parallel fifths and octaves, and a normal person tends to make the intervals in a voice more "natural" by not making jumps bigger than a fifth too often.
End of explanation
"""
def neighborhood(iterable):
"""Generator gives the prev actual and next."""
iterator = iter(iterable)
prev = None
item = next(iterator) # throws StopIteration if empty.
for nex in iterator:
yield (prev,item,nex)
prev = item
item = nex
yield (prev,item,None)
"""
Explanation: And for the evaluation of an individual I used this generator to access each element together with its two neighbors:
End of explanation
"""
def evalNumErr(ton, individual):
"""Evaluation function."""
res = 0
for prev, item, nex in neighborhood(individual):
res += check_interval(item[0])
if prev == None:
if item[1] != 0:
res += 6
continue
else:
if prev[1] in [4, 6] and item[1] in [3, 1]:
res += 20
res += check_2_chords(prev[0], item[0])
if nex == None:
if item[1] in [1, 2, 3, 4, 5, 6]:
res += 6
return (res,)
"""
Explanation: The actual evaluation function:
End of explanation
"""
def mutChangeNotes(ton, individual, indpb):
"""Mutant function."""
new_ind = toolbox.clone(individual)
for x in range(len(individual[0])):
if random.random() < indpb:
listaOp = OPTIONS_M if ton[-1] == 'M' else OPTIONS_m
listaMod = MOD_M if ton[-1] == 'M' else MOD_m
note = individual[x][0][0]
name = note%12
grad = name-int(ton[:-1])
grad = math.ceil(((grad+12)%12) / 2)
num = (random.choice(listaOp[grad]) + name +12) % 12
grad = num-int(ton[:-1])
grad = math.ceil(((grad+12)%12) / 2)
name = '{}{}'.format(num, listaMod[grad])
new_ind[x] = [creatChord(name, note), grad]
del new_ind.fitness.values
return new_ind,
"""
Explanation: Crossover
In the crossover section I used a simple One point crossover provided by DEAP in its set of tools.
Mutation
In the mutation section I just create a new chord for each of the chords that randomly passes a threshold.
End of explanation
"""
def transform_lilypond(ton, indiv, make_file=False):
"""Take one list of chords and print the it in lilypond notation."""
note_map = dict()
if ton[-1] == 'M':
note_map = {0: 'c',
1: 'cis',
2: 'd',
3: 'dis',
4: 'e',
5: 'f',
6: 'fis',
7: 'g',
8: 'gis',
9: 'a',
10:'ais',
11:'b'
}
gra = 'major'
else:
note_map = {0: 'c',
1: 'des',
2: 'd',
3: 'ees',
4: 'e',
5: 'f',
6: 'ges',
7: 'g',
8: 'aes',
9: 'a',
10:'bes',
11:'b'
}
gra = 'minor'
voces = [[], [], [], []]
for chord in indiv:
for note, voce in zip(chord, voces):
octave = (note // 12)-4
name_lily = note_map[note % 12]
if octave < 0:
name_lily += ',' * (octave * -1)
elif octave > 0:
name_lily += "'" * octave
voce.append(name_lily)
if make_file:
with open('lily/'+ton+'.ly', 'w') as f:
key_map = {'0': 'c',
'1': 'des',
'2': 'd',
'3': 'ees',
'4': 'e',
'5': 'f',
'6': 'ges',
'7': 'g',
'8': 'aes',
'9': 'a',
'10':'bes',
'11':'b'
}
print(ton)
f.write(Template(TEMPLATE).substitute(key=key_map[ton[:-1]], grade=gra, notes='{}|\n{}|\n{}|\n{}|\n'.format(*(' '.join(voce) for voce in reversed(voces)))))
print('{}|\n{}|\n{}|\n{}|\n'.format(*(' '.join(voce) for voce in reversed(voces))))
"""
Explanation: The next function just creates a Lilypond file using the chords in the individual, so you can see a nice sheet.
End of explanation
"""
def main(ton):
pop = toolbox.population(n=400)
hof = tools.HallOfFame(3)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register('avg', numpy.mean)
stats.register('std', numpy.std)
stats.register('min', numpy.min)
stats.register('max', numpy.max)
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.3, ngen=70, stats=stats, halloffame=hof, verbose=True)
while min(log.select('min')) > 15:
pop = toolbox.population(n=400)
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.3, ngen=70, stats=stats, halloffame=hof, verbose=True)
for best in hof:
print([x[0] for x in best], end='\n============\n')
transform_lilypond(ton, [x[0] for x in hof[0]], make_file=True)
"""
Explanation: In the main function the actual algorithm runs: it's a simple evolutionary algorithm with a hall of fame where the best individuals are saved. The little while loop is there in case you want to rerun the algorithm until an individual gets an evaluation lower than, in this case, 15.
End of explanation
"""
if __name__ == '__main__':
    line = input('n[#b] notes ')
ton, notes = setTon(line)
print(ton, notes)
# ========================= GA setup =========================
creator.create('FitnessMin', base.Fitness, weights=(-1.0,))
creator.create('Individual', list, fitness=creator.FitnessMin)
toolbox = base.Toolbox()
toolbox.register('creat_notes', newChordProg, ton, notes)
toolbox.register('individual', tools.initIterate, creator.Individual,
toolbox.creat_notes)
toolbox.register('population', tools.initRepeat, list, toolbox.individual)
toolbox.register('evaluate', evalNumErr, ton)
toolbox.register('mate', tools.cxOnePoint)
toolbox.register('mutate', mutChangeNotes, ton, indpb=0.4)
toolbox.register('select', tools.selTournament, tournsize=3)
# =============================================================
main(ton)
"""
Explanation: And at the end I set up all the functions and the form of the individual in the toolbox so that DEAP can use them in the algorithm.
The program logs each generation, showing the number of individuals evaluated, the average evaluation value, the standard deviation, and the minimum and maximum. At the end it shows the three best individuals of the whole evolution process, and it creates a Lilypond file with the best of all.
End of explanation
"""
import os
os.system('python auto_trim.py {} {}'.format('lily/'+ton+'.ly', 'temp.jpg'))
display.Image('img/temp.jpg', width=600)
"""
Explanation: And just to show the result, I made a little script to trim the PDF that Lilypond generates.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/messy-consortium/cmip6/models/sandbox-3/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s; may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s; may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
HaoMood/cs231n | assignment3 (copy)/assignment3/ImageGradients.ipynb | gpl-3.0 | # As usual, a bit of setup
import time, os, json
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
from cs231n.classifiers.pretrained_cnn import PretrainedCNN
from cs231n.data_utils import load_tiny_imagenet
from cs231n.image_utils import blur_image, deprocess_image
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
End of explanation
"""
data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True)
"""
Explanation: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.
End of explanation
"""
for i, names in enumerate(data['class_names']):
print i, ' '.join('"%s"' % name for name in names)
"""
Explanation: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:
End of explanation
"""
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(data['y_train'] == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = deprocess_image(data['X_train'][train_idx], data['mean_image'])
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(data['class_names'][class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
"""
Explanation: Visualize Examples
Run the following to visualize some example images from random classses in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
End of explanation
"""
model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')
"""
Explanation: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
End of explanation
"""
batch_size = 100
# Test the model on training data
mask = np.random.randint(data['X_train'].shape[0], size=batch_size)
X, y = data['X_train'][mask], data['y_train'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Training accuracy: ', (y_pred == y).mean()
# Test the model on validation data
mask = np.random.randint(data['X_val'].shape[0], size=batch_size)
X, y = data['X_val'][mask], data['y_val'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Validation accuracy: ', (y_pred == y).mean()
"""
Explanation: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
End of explanation
"""
def compute_saliency_maps(X, y, model):
"""
Compute a class saliency map using the model for images X and labels y.
Input:
- X: Input images, of shape (N, 3, H, W)
- y: Labels for X, of shape (N,)
- model: A PretrainedCNN that will be used to compute the saliency map.
Returns:
- saliency: An array of shape (N, H, W) giving the saliency maps for the input
images.
"""
saliency = None
##############################################################################
# TODO: Implement this function. You should use the forward and backward #
# methods of the PretrainedCNN class, and compute gradients with respect to #
# the unnormalized class score of the ground-truth classes in y. #
##############################################################################
N = X.shape[0]
scores, cache = model.forward(X, mode='test')
dscores = np.zeros_like(scores)
dscores[np.arange(N), y] = 1
dX, grads = model.backward(dscores, cache)
# Saliency is the channel-wise max of the absolute image gradient.
saliency = np.max(np.abs(dX), axis=1)
##############################################################################
# END OF YOUR CODE #
##############################################################################
return saliency
"""
Explanation: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
End of explanation
"""
def show_saliency_maps(mask):
mask = np.asarray(mask)
X = data['X_val'][mask]
y = data['y_val'][mask]
saliency = compute_saliency_maps(X, y, model)
for i in xrange(mask.size):
plt.subplot(2, mask.size, i + 1)
plt.imshow(deprocess_image(X[i], data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[i]][0])
plt.subplot(2, mask.size, mask.size + i + 1)
plt.title(mask[i])
plt.imshow(saliency[i])
plt.axis('off')
plt.gcf().set_size_inches(10, 4)
plt.show()
# Show some random images
mask = np.random.randint(data['X_val'].shape[0], size=5)
show_saliency_maps(mask)
# These are some cherry-picked images that should give good results
show_saliency_maps([128, 3225, 2417, 1640, 4619])
"""
Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
End of explanation
"""
def make_fooling_image(X, target_y, model):
"""
Generate a fooling image that is close to X, but that the model classifies
as target_y.
Inputs:
- X: Input image, of shape (1, 3, 64, 64)
- target_y: An integer in the range [0, 100)
- model: A PretrainedCNN
Returns:
- X_fooling: An image that is close to X, but that is classified as target_y
by the model.
"""
X_fooling = X.copy()
##############################################################################
# TODO: Generate a fooling image X_fooling that the model will classify as #
# the class target_y. Use gradient ascent on the target class score, using #
# the model.forward method to compute scores and the model.backward method #
# to compute image gradients. #
# #
# HINT: For most examples, you should be able to generate a fooling image #
# in fewer than 100 iterations of gradient ascent. #
##############################################################################
while True:
scores, cache = model.forward(X_fooling, mode='test')
if np.argmax(scores) == target_y:
break
dscores = np.zeros_like(scores)
dscores[0, target_y] = 1  # scores has shape (1, num_classes)
dX, grads = model.backward(dscores, cache)
X_fooling += 1000 * dX
##############################################################################
# END OF YOUR CODE #
##############################################################################
return X_fooling
"""
Explanation: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
End of explanation
"""
# Find a correctly classified validation image
while True:
i = np.random.randint(data['X_val'].shape[0])
X = data['X_val'][i:i+1]
y = data['y_val'][i:i+1]
y_pred = model.loss(X)[0].argmax()
if y_pred == y: break
target_y = 67
X_fooling = make_fooling_image(X, target_y, model)
# Make sure that X_fooling is classified as y_target
scores = model.loss(X_fooling)
assert scores[0].argmax() == target_y, 'The network is not fooled!'
# Show original image, fooling image, and difference
plt.subplot(1, 3, 1)
plt.imshow(deprocess_image(X, data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y][0])
plt.subplot(1, 3, 2)
plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))
plt.title(data['class_names'][target_y][0])
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
plt.imshow(deprocess_image(X - X_fooling, data['mean_image']))
plt.axis('off')
plt.show()
"""
Explanation: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.
End of explanation
"""
|
kerimlcr/ab2017-dpyo | ornek/osmnx/osmnx-0.3/examples/05-example-save-load-networks-shapes.ipynb | gpl-3.0 | import osmnx as ox
%matplotlib inline
ox.config(log_file=True, log_console=True, use_cache=True)
place = 'Piedmont, California, USA'
"""
Explanation: Use OSMnx to construct place boundaries and street networks, and save as various file formats for working with later
Overview of OSMnx
GitHub repo
Examples, demos, tutorials
End of explanation
"""
gdf = ox.gdf_from_place(place)
gdf.loc[0, 'geometry']
# save place boundary geometry as ESRI shapefile
ox.save_gdf_shapefile(gdf, filename='place-shape')
"""
Explanation: Get place shape geometries from OSM and save as shapefile
End of explanation
"""
G = ox.graph_from_place(place, network_type='drive')
G_projected = ox.project_graph(G)
# save street network as ESRI shapefile
ox.save_graph_shapefile(G_projected, filename='network-shape')
"""
Explanation: Construct street network and save as shapefile to work with in GIS
End of explanation
"""
# save street network as GraphML file
ox.save_graphml(G_projected, filename='network.graphml')
"""
Explanation: Save street network as GraphML to work with in Gephi or NetworkX
End of explanation
"""
# save street network as SVG
fig, ax = ox.plot_graph(G_projected, show=False, save=True,
filename='network', file_format='svg')
"""
Explanation: Save street network as SVG to work with in Illustrator
End of explanation
"""
G2 = ox.load_graphml('network.graphml')
fig, ax = ox.plot_graph(G2)
"""
Explanation: Load street network from saved GraphML file
End of explanation
"""
gdf = ox.buildings_from_place(place='Piedmont, California, USA')
gdf.drop(labels='nodes', axis=1).to_file('data/piedmont_bldgs')
"""
Explanation: Save building footprints as a shapefile
End of explanation
"""
|
rrbb014/data_science | fastcampus_dss/2016_05_24/0524_01__베르누이 확률 분포.ipynb | mit | import scipy as sp
import scipy.stats  # ensure sp.stats is available
theta = 0.6
rv = sp.stats.bernoulli(theta)
rv
"""
Explanation: The Bernoulli Probability Distribution
Bernoulli trials
A trial whose outcome can only be one of two results, success (Success) or failure (Fail), is called a Bernoulli trial.
For example, tossing a coin once so that it lands either heads (H: Head) or tails (T: Tail) is a kind of Bernoulli trial.
When the outcome of a Bernoulli trial is represented by a random variable $X$, success is usually assigned the integer 1 ($X=1$) and failure the integer 0 ($X=0$). In some cases success is taken to be 1 ($X=1$) and failure to be -1 ($X=-1$).
The Bernoulli distribution
A Bernoulli random variable can take only one of the two values 0 and 1, so it is a discrete random variable. It can therefore be defined by a probability mass function (pmf) and a cumulative distribution function (cdf).
A Bernoulli random variable has a single parameter $\theta$, the probability of observing 1. The probability of observing 0 is defined as $1 - \theta$.
The Bernoulli probability mass function is:
$$
\text{Bern}(x;\theta) =
\begin{cases}
\theta & \text{if }x=1, \\
1-\theta & \text{if }x=0
\end{cases}
$$
Expressed as a single formula (worth memorizing):
$$
\text{Bern}(x;\theta) = \theta^x(1-\theta)^{(1-x)}
$$
If the Bernoulli random variable instead takes the values 1 and -1, it can be written as:
$$ \text{Bern}(x; \theta) = \theta^{(1+x)/2} (1-\theta)^{(1-x)/2} $$
Simulating the Bernoulli distribution with SciPy
The bernoulli class in SciPy's stats subpackage implements the Bernoulli distribution. The parameter $\theta$ is set through its first argument (named p in SciPy).
End of explanation
"""
import matplotlib.pyplot as plt

xx = [0, 1]
plt.bar(xx, rv.pmf(xx), align="center")
plt.xlim(-1, 2)
plt.ylim(0, 1)
plt.xticks([0, 1], ["X=0", "X=1"])
plt.ylabel("P(x)")
plt.title("pmf of Bernoulli distribution")
plt.show()
"""
Explanation: The pmf method can be used to compute the probability mass function (pmf).
End of explanation
"""
|
Aniruddha-Tapas/Applied-Machine-Learning | Machine Learning using GraphLab/Analyzing Product Sentiment using GraphLab Create.ipynb | mit | import graphlab
"""
Explanation: Predicting sentiment from product reviews
<hr>
Import GraphLab Create
End of explanation
"""
products = graphlab.SFrame('amazon_baby.gl/')
"""
Explanation: Read some product review data
Loading reviews for a set of baby products.
End of explanation
"""
products.head()
"""
Explanation: Exploring the data
Data includes the product name, the review text and the rating of the review.
End of explanation
"""
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
graphlab.canvas.set_target('ipynb')
products['name'].show()
"""
Explanation: Build the word count vector for each review
End of explanation
"""
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].show(view='Categorical')
"""
Explanation: Examining the reviews for the most-sold product: 'Vulli Sophie the Giraffe Teether'
End of explanation
"""
products['rating'].show(view='Categorical')
"""
Explanation: Build a sentiment classifier
End of explanation
"""
#ignore all 3* reviews
products = products[products['rating'] != 3]
#positive sentiment = 4* or 5* reviews
products['sentiment'] = products['rating'] >=4
products.head()
"""
Explanation: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
End of explanation
"""
train_data,test_data = products.random_split(.8, seed=0)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
"""
Explanation: Let's train the sentiment classifier
End of explanation
"""
sentiment_model.evaluate(test_data, metric='roc_curve')
sentiment_model.show(view='Evaluation')
"""
Explanation: Evaluate the sentiment model
End of explanation
"""
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
"""
Explanation: Applying the learned model to understand sentiment for Giraffe
End of explanation
"""
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews.head()
"""
Explanation: Sort the reviews based on the predicted sentiment and explore
End of explanation
"""
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
"""
Explanation: Most positive reviews for the giraffe
End of explanation
"""
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
"""
Explanation: Show most negative reviews for giraffe
End of explanation
"""
|
ioggstream/python-course | connexion-101/notebooks/02-openapi-3.ipynb | agpl-3.0 | remote_yaml = 'https://raw.githubusercontent.com/teamdigitale/api-starter-kit/master/openapi/simple.yaml.src'
render_markdown(f'''
[Swagger Editor]({oas_editor_url(remote_yaml)}) is a simple webapp
for editing OpenAPI 3 language specs.
''')
"""
Explanation: OpenAPI & Modeling
OpenAPI is a specification language
OpenAPI is a specification language for REST APIs that allows you to communicate:
technical specifications
metadata
docs & references
OpenAPI is driven by a Foundation
The OpenAPI Initiative is hosted by the Linux Foundation,
with participation from governments & companies (gov.uk, Microsoft, Google, Oracle, IBM, ..):
Driver for API adoption
Evolution of Swagger 2.0
Lightweight format: YAML
Generates docs & code via tools (swagger-editor, apicur.io)
Allows reusable components via hyperlink (eg. $ref)
OpenAPI is WSDL for REST APIs
OpenAPI Editor
Every OAS3 document begins with
openapi: 3.0.0
Run next cell if it's blank ;)
End of explanation
"""
render_markdown(f'''
1- open [this incomplete OAS3 spec](/edit/notebooks/oas3/ex-01-info.yaml).
2- copy its content in the [Swagger Editor Online]({oas_editor_url('')}) fixing all errors
and adding the missing information.
3- describe the first API we're going to implement: a service which returns the current
timestamp in [RFC5424](https://tools.ietf.org/html/rfc5424#section-6.2.3)
UTC (e.g. `2019-01-01T00:00:00Z`).
4- provide contact information and terms of service.
5- Feel free to add as many details as you want.
''')
### Basic fields
### Terms of Services
"""
Explanation: Start with Metadata
In OAS3 we should first describe api metadata, to clarify:
API goals, audience and context;
Terms of service;
Versioning.
Here's a simple OAS3 metadata part, contained in the info section.
```
openapi: 3.0.0
info:
version: "1.0.0"
title: |-
Write a short, clear name of your service.
description: |
This field may contain the markdown documentation of the api,
including references to other docs and examples.
# Legal references and terms of services.
termsOfService: 'http://swagger.io/terms/'
contact:
email: robipolli@gmail.com
name: Roberto Polli
url: https://twitter.com/ioggstream
license:
name: Apache 2.0
url: 'http://www.apache.org/licenses/LICENSE-2.0.html'
```
OpenAPI Metadata exercise
End of explanation
"""
render_markdown(f'''
1- open [the previous OAS3 spec](/edit/notebooks/oas3/ex-01-info.yaml).
2- copy its content in the [Swagger Editor Online]({oas_editor_url('')}).
3- provide further information via custom fields: if you think of any interesting
labels, define them and comment them properly using `#`
''')
"""
Explanation: Custom fields
In Italy we're experimenting with some custom fields to help catalogs and metadata
x-summary is a one-liner description for catalog purposes. The new OAS 3.1 standardizes it as summary
x-lifecycle can be used to communicate publishing and retirement information
```
x-lifecycle:
published: 1970-01-01
deprecated: 2050-01-01
retired: 2050-06-01
maturity: published
```
x-api-id you may want to assign a time-persistent UUID to your API, so that you can change its title
x-api-id: 00000000-0000-0000-0000-000000000000
x-gdpr with a list of roles
x-geodata add local references in a machine readable format
OpenAPI Metadata exercise: 2
End of explanation
"""
|
catalyst-cooperative/pudl | test/validate/notebooks/validate_fuel_ferc1.ipynb | mit | %load_ext autoreload
%autoreload 2
import sys
import pandas as pd
import sqlalchemy as sa
import pudl
import pudl.validate as pv
import warnings
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger.handlers = [handler]
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
plt.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (10,4)
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 56
pudl_settings = pudl.workspace.setup.get_defaults()
ferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
pudl_settings
"""
Explanation: Validation of FERC Form 1 Fuel Data
This notebook runs sanity checks on the FERC Form 1 fuel table (fuel_ferc1). These are the same tests that PyTest runs in the fuel_ferc1 data validations. The notebook and its visualizations are meant to be used as a diagnostic tool, to help understand what's wrong when the PyTest-based data validations fail.
End of explanation
"""
pudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine)
fuel_ferc1 = pudl_out.fuel_ferc1()
"""
Explanation: Pull fuel_ferc1 and calculate some useful values
First we pull the original (post-ETL) FERC 1 fuel data out of the PUDL database using an output object. The FERC Form 1 data only exists at annual resolution, so there's no inter-frequency aggregation to think about.
End of explanation
"""
pudl.validate.plot_vs_self(fuel_ferc1, pv.fuel_ferc1_self)
"""
Explanation: Validating Historical Distributions
As a sanity check of the testing process itself, we can check whether the entire historical distribution has attributes that place it within the extremes of a historical subsampling of the distribution. In this case, we sample each historical year, look at the range of values taken on by some quantile, and see whether the same quantile for the whole of the dataset fits within that range.
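The subsampling idea can be sketched with pandas on synthetic data (the column names and distribution below are illustrative assumptions, not the actual PUDL schema or values):

```python
import numpy as np
import pandas as pd

# Sketch of the historical-subsampling check: compute some quantile within
# each report year, then see whether the same quantile computed over the
# whole dataset falls inside the range spanned by the per-year values.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "report_year": rng.integers(2000, 2010, size=5000),
    "fuel_cost_per_mmbtu": rng.lognormal(mean=1.0, sigma=0.5, size=5000),
})
q = 0.95
yearly = df.groupby("report_year")["fuel_cost_per_mmbtu"].quantile(q)
overall = df["fuel_cost_per_mmbtu"].quantile(q)
print(yearly.min() <= overall <= yearly.max())
```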
Columns to Validate
fuel_cost_per_mmbtu (by fuel, coal, oil, or gas)
fuel_cost_per_unit (by fuel, coal, oil, or gas)
fuel_mmbtu_per_unit (by fuel, coal, oil, or gas)
Other Quantities to validate..
Does cost per unit burned X units burned == total cost?
MMBTU per unit X units burned == total MMBTU?
End of explanation
"""
pudl.validate.plot_vs_bounds(fuel_ferc1, pv.fuel_ferc1_coal_mmbtu_per_unit_bounds)
pudl.validate.plot_vs_bounds(fuel_ferc1, pv.fuel_ferc1_oil_mmbtu_per_unit_bounds)
pudl.validate.plot_vs_bounds(fuel_ferc1, pv.fuel_ferc1_gas_mmbtu_per_unit_bounds)
"""
Explanation: Validation Against Fixed Bounds
Some of the variables reported in this table have a fixed range of reasonable values, like the heat content per unit of a given fuel type. These variables can be tested for validity against external standards directly. In general we have two kinds of tests in this section:
* Tails: are the extreme values too extreme? Typically, this is at the 5% and 95% level, but depending on the distribution, sometimes other thresholds are used.
* Middle: Is the central value of the distribution where it should be?
Fuel Heat Content
End of explanation
"""
pudl.validate.plot_vs_bounds(fuel_ferc1, pv.fuel_ferc1_coal_cost_per_mmbtu_bounds)
pudl.validate.plot_vs_bounds(fuel_ferc1, pv.fuel_ferc1_oil_cost_per_mmbtu_bounds)
pudl.validate.plot_vs_bounds(fuel_ferc1, pv.fuel_ferc1_gas_cost_per_mmbtu_bounds)
"""
Explanation: Fuel Cost per MMBTU
End of explanation
"""
pudl.validate.plot_vs_bounds(fuel_ferc1, pv.fuel_ferc1_coal_cost_per_unit_bounds)
pudl.validate.plot_vs_bounds(fuel_ferc1, pv.fuel_ferc1_oil_cost_per_unit_bounds)
pudl.validate.plot_vs_bounds(fuel_ferc1, pv.fuel_ferc1_gas_cost_per_unit_bounds)
"""
Explanation: Fuel Cost per Unit
End of explanation
"""
|
ecell/ecell4-notebooks | en/tests/MSD.ipynb | gpl-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy
from ecell4.prelude import *
"""
Explanation: Mean Square Displacement (MSD)
This is a validation for the E-Cell4 library. Here, we test the mean square displacement (MSD) for each simulation algorithm.
End of explanation
"""
radius, D = 0.005, 1
m = NetworkModel()
m.add_species_attribute(Species("A", radius, D))
"""
Explanation: Making a model shared by all the simulations below. A single species A is defined. The radius is 5 nm, and the diffusion coefficient is 1 um^2/s:
End of explanation
"""
rng = GSLRandomNumberGenerator()
rng.seed(0)
"""
Explanation: Create a random number generator, which is used throughout all the simulations below:
End of explanation
"""
def plot_displacements(ret):
distances = []
for ret_ in ret:
obs = ret_.observers[1]
for data in obs.data():
distances.extend(tuple(data[-1] - data[0]))
_, bins, _ = plt.hist(distances, bins=30, density=True, facecolor='green', alpha=0.5, label='Simulation')
xmax = max(abs(bins[0]), abs(bins[-1]))
x = numpy.linspace(-xmax, +xmax, 101)
gauss = lambda x, sigmasq: numpy.exp(-0.5 * x * x / sigmasq) / numpy.sqrt(2 * numpy.pi * sigmasq)
plt.plot(x, gauss(x, 2 * D * duration), 'r-', label='Expected')
plt.plot(x, gauss(x, sum(numpy.array(distances) ** 2) / len(distances)), 'k--', label='Fitting')
plt.xlim(x[0], x[-1])
plt.ylim(0, 1)
plt.xlabel('Displacement')
plt.ylabel('Frequenecy')
plt.legend(loc='best', shadow=True)
plt.show()
duration = 0.1
N = 50
obs = FixedIntervalTrajectoryObserver(0.01)
"""
Explanation: Distribution along each axis
First, the distribution along each axis is tested. The distribution after time t should follow the normal distribution with mean 0 and variance 2Dt.
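As a cross-check independent of E-Cell4, a plain Gaussian random walk illustrates the expected distribution (the step count and walker count below are arbitrary choices for this sketch):

```python
import numpy as np

# Free 1D random walk: the displacement after time t = nsteps * dt should be
# normally distributed with mean 0 and variance 2 * D * t.
rng = np.random.default_rng(1)
D, dt, nsteps, nwalkers = 1.0, 1e-4, 1000, 20000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(nwalkers, nsteps))
displacement = steps.sum(axis=1)
t = dt * nsteps
print(displacement.mean(), displacement.var(), 2 * D * t)
```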
End of explanation
"""
ret1 = ensemble_simulations(
duration, ndiv=1, model=m, y0={'A': N}, observers=obs,
solver=('egfrd', Integer3(4, 4, 4)), repeat=50)
show(ret1[0].observers[1])
plot_displacements(ret1)
"""
Explanation: Simulating with egfrd:
End of explanation
"""
ret2 = ensemble_simulations(
duration, ndiv=1, model=m, y0={'A': N}, observers=obs,
solver=('spatiocyte', radius), repeat=50)
plot_displacements(ret2)
"""
Explanation: Simulating with spatiocyte:
End of explanation
"""
def test_mean_square_displacement(ret):
t = numpy.array(ret[0].observers[1].t())
sd = []
for ret_ in ret:
obs = ret_.observers[1]
for data in obs.data():
sd.append(numpy.array([length_sq(pos - data[0]) for pos in data]))
return (t, numpy.average(sd, axis=0), numpy.std(sd, axis=0))
duration = 3.0
N = 50
obs = FixedIntervalTrajectoryObserver(0.01)
"""
Explanation: Mean square displacement
The mean square displacement after time t is equal to the variance above. For diffusion in a three-dimensional volume, it is 6Dt. ecell4.FixedIntervalTrajectoryObserver makes it possible to track the trajectory of a particle across periodic boundaries.
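The same cross-check in three dimensions, again with a plain random walk rather than E-Cell4 (all parameters below are arbitrary sketch values):

```python
import numpy as np

# Estimate MSD(t) from free 3D random walks and compare against 6 * D * t.
rng = np.random.default_rng(2)
D, dt, nsteps, nwalkers = 1.0, 1e-3, 100, 5000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(nwalkers, nsteps, 3))
traj = steps.cumsum(axis=1)                 # positions relative to the origin
msd = (traj ** 2).sum(axis=2).mean(axis=0)  # average over walkers
t = dt * np.arange(1, nsteps + 1)
print(msd[-1], 6 * D * t[-1])
```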
End of explanation
"""
ret1 = ensemble_simulations(
duration, ndiv=1, model=m, y0={'A': N}, observers=obs,
solver=('egfrd', Integer3(4, 4, 4)), repeat=10)
t, mean1, std1 = test_mean_square_displacement(ret1)
"""
Explanation: Simulating with egfrd:
End of explanation
"""
ret2 = ensemble_simulations(
duration, ndiv=1, model=m, y0={'A': N}, observers=obs,
solver=('spatiocyte', radius), repeat=10)
t, mean2, std2 = test_mean_square_displacement(ret2)
plt.plot(t, 6 * D * t, 'k-', label="Expected")
plt.errorbar(t[::30], mean1[::30], yerr=std1[::30], fmt='go', label="eGFRD")
plt.errorbar(t[::30], mean2[::30], yerr=std2[::30], fmt='ro', label="Spatiocyte")
plt.xlim(t[0], t[-1])
plt.legend(loc="best", shadow=True)
plt.xlabel("Time")
plt.ylabel("Mean Square Displacement")
plt.show()
"""
Explanation: Simulating with spatiocyte:
End of explanation
"""
def multirun(target, job, repeat, method=None, **kwargs):
from ecell4.extra.ensemble import run_ensemble, genseeds
from ecell4.util.session import ResultList
return run_ensemble(target, [list(job) + [genseeds(repeat)]], repeat=repeat, method=method, **kwargs)[0]
def singlerun(job, job_id, task_id):
from ecell4_base.core import AABB, Real3, GSLRandomNumberGenerator
from ecell4_base.core import FixedIntervalNumberObserver, FixedIntervalTrajectoryObserver
from ecell4.util.session import get_factory
from ecell4.extra.ensemble import getseed
duration, solver, ndiv, myseeds = job
rndseed = getseed(myseeds, task_id)
f = get_factory(*solver)
f.rng(GSLRandomNumberGenerator(rndseed))
w = f.world(ones())
L_11, L_2 = 1.0 / 11, 1.0 / 2
w.bind_to(m)
NA = 12
w.add_molecules(Species("A"), NA, AABB(Real3(5, 5, 5) * L_11, Real3(6, 6, 6) * L_11))
sim = f.simulator(w)
retval = [sum(length_sq(p.position() - Real3(L_2, L_2, L_2)) for pid, p in w.list_particles_exact(Species("A"))) / NA]
for i in range(ndiv):
sim.run(duration / ndiv)
mean = sum(length_sq(p.position() - Real3(L_2, L_2, L_2)) for pid, p in w.list_particles_exact(Species("A"))) / NA
retval.append(mean)
return numpy.array(retval)
duration = 0.15
ndiv = 20
t = numpy.linspace(0, duration, ndiv + 1)
"""
Explanation: Mean square displacement in a cube
Here, we test the mean square displacement in a cube. Due to the periodic boundaries, particles cannot escape from the World, and thus the displacement along each axis is less than half of the World size. In meso simulations, unlike egfrd and spatiocyte, each molecule does not have its own ParticleID. meso is validated only under this condition because FixedIntervalTrajectoryObserver is available only with ParticleIDs.
End of explanation
"""
ret1 = multirun(singlerun, job=(duration, ("egfrd", Integer3(4, 4, 4)), ndiv), repeat=50)
mean1 = numpy.average(ret1, axis=0)
"""
Explanation: Simulating with egfrd:
End of explanation
"""
ret2 = multirun(singlerun, job=(duration, ("spatiocyte", radius), ndiv), repeat=50)
mean2 = numpy.average(ret2, axis=0)
"""
Explanation: Simulating with spatiocyte:
End of explanation
"""
ret3 = multirun(singlerun, job=(duration, ("meso", Integer3(11, 11, 11)), ndiv), repeat=50)
mean3 = numpy.average(ret3, axis=0)
"""
Explanation: Simulating with meso:
End of explanation
"""
plt.plot(t, 6 * D * t, 'k-', label="Expected")
plt.plot((t[0], t[-1]), (0.25, 0.25), 'k--', label='Uniform distribution')
plt.plot(t, mean1, 'go', label="eGFRD")
plt.plot(t, mean2, 'ro', label="Spatiocyte")
plt.plot(t, mean3, 'bo', label="Meso")
plt.xlim(t[0], t[-1])
plt.ylim(0, 0.3)
plt.legend(loc="best", shadow=True)
plt.xlabel("Time")
plt.ylabel("Mean Square Displacement")
plt.show()
"""
Explanation: Mean square displacement at the uniform distribution in a cube is calculated as follows:
$\int_0^1\int_0^1\int_0^1 dxdydz\ (x-0.5)^2+(y-0.5)^2+(z-0.5)^2=3\left[\frac{(x-0.5)^3}{3}\right]_0^1=0.25$
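A quick Monte-Carlo check of this analytic value:

```python
import numpy as np

# Sample uniform points in the unit cube and average the squared distance
# from the center; this should approach the analytic value 0.25.
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, size=(200000, 3))
msd_uniform = ((pts - 0.5) ** 2).sum(axis=1).mean()
print(msd_uniform)
```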
End of explanation
"""
radius, D = 0.005, 1
m = NetworkModel()
A = Species("A", radius, D, "M")
A.set_attribute("dimension", 2)
m.add_species_attribute(A)
M = Species("M", radius, 0)
M.set_attribute("dimension", 2)
m.add_species_attribute(M)
"""
Explanation: Mean square displacement in 2D
Spatial simulation with a structure is currently supported only by spatiocyte and meso. Here, the mean square displacement on a planar surface is tested for these algorithms.
First, a model is defined with a species A which has the same attributes with above except for the location named M.
End of explanation
"""
duration = 0.3
N = 50
obs = FixedIntervalTrajectoryObserver(0.01)
"""
Explanation: The mean square displacement of a 2D diffusion must follow 4Dt. Diffusion on a planar surface parallel to the yz-plane at the center is tested with spatiocyte.
End of explanation
"""
ret2 = ensemble_simulations(
duration, ndiv=1, model=m, y0={'A': N}, observers=obs,
structures={'M': PlanarSurface(0.5 * ones(), unity(), unitz())},
solver=('spatiocyte', radius), repeat=50)
t, mean2, std2 = test_mean_square_displacement(ret2)
plt.plot(t, 6 * D * t, 'k--', label="Expected 3D")
plt.plot(t, 4 * D * t, 'k-', label="Expected 2D")
plt.errorbar(t, mean2, yerr=std2, fmt='ro', label="Spatiocyte")
plt.xlim(t[0], t[-1])
plt.legend(loc="best", shadow=True)
plt.xlabel("Time")
plt.ylabel("Mean Square Displacement")
plt.show()
"""
Explanation: Simulating with spatiocyte:
End of explanation
"""
viz.plot_trajectory(ret2[0].observers[1])
"""
Explanation: Plotting the trajectory of particles on a surface with spatiocyte:
End of explanation
"""
def singlerun(job, job_id, task_id):
from ecell4_base.core import AABB, Real3, GSLRandomNumberGenerator
from ecell4_base.core import FixedIntervalNumberObserver, FixedIntervalTrajectoryObserver
from ecell4.util.session import get_factory
from ecell4.extra.ensemble import getseed
duration, solver, ndiv, myseeds = job
rndseed = getseed(myseeds, task_id)
f = get_factory(*solver)
f.rng(GSLRandomNumberGenerator(rndseed))
w = f.world(ones())
L_11, L_2 = 1.0 / 11, 1.0 / 2
w.bind_to(m)
NA = 12
w.add_structure(Species("M"), PlanarSurface(0.5 * ones(), unity(), unitz()))
w.add_molecules(Species("A"), NA, AABB(ones() * 5 * L_11, ones() * 6 * L_11))
sim = f.simulator(w)
retval = [sum(length_sq(p.position() - Real3(L_2, L_2, L_2)) for pid, p in w.list_particles_exact(Species("A"))) / NA]
for i in range(ndiv):
sim.run(duration / ndiv)
mean = sum(length_sq(p.position() - Real3(L_2, L_2, L_2)) for pid, p in w.list_particles_exact(Species("A"))) / NA
retval.append(mean)
return numpy.array(retval)
duration = 0.15
ndiv = 20
t = numpy.linspace(0, duration, ndiv + 1)
"""
Explanation: Mean square displacement on a planar surface restricted in a cube by periodic boundaries is compared between meso and spatiocyte.
End of explanation
"""
ret2 = multirun(singlerun, job=(duration, ("spatiocyte", radius), ndiv), repeat=50)
mean2 = numpy.average(ret2, axis=0)
"""
Explanation: Simulating with spatiocyte:
End of explanation
"""
ret3 = multirun(singlerun, job=(duration, ("meso", Integer3(11, 11, 11)), ndiv), repeat=50)
mean3 = numpy.average(ret3, axis=0)
"""
Explanation: Simulating with meso:
End of explanation
"""
plt.plot(t, 4 * D * t, 'k-', label="Expected")
plt.plot((t[0], t[-1]), (0.5 / 3, 0.5 / 3), 'k--', label='Uniform distribution')
plt.plot(t, mean2, 'ro', label="Spatiocyte")
plt.plot(t, mean3, 'bo', label="Meso")
plt.xlim(t[0], t[-1])
plt.ylim(0, 0.2)
plt.legend(loc="best", shadow=True)
plt.xlabel("Time")
plt.ylabel("Mean Square Displacement")
plt.show()
"""
Explanation: Mean square displacement at the uniform distribution on a plane is calculated as follows:
$\int_0^1\int_0^1 dy\,dz\ (0.5-0.5)^2+(y-0.5)^2+(z-0.5)^2=2\left[\frac{(y-0.5)^3}{3}\right]_0^1=\frac{1}{6}$
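And the corresponding Monte-Carlo check on the plane:

```python
import numpy as np

# Sample uniform points on the unit square and average the squared distance
# from its center; this should approach the analytic value 1/6.
rng = np.random.default_rng(4)
pts = rng.uniform(0.0, 1.0, size=(200000, 2))
msd_plane = ((pts - 0.5) ** 2).sum(axis=1).mean()
print(msd_plane, 1 / 6)
```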
End of explanation
"""
|
probml/pyprobml | notebooks/book1/15/attention_torch.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import math
from IPython import display
try:
import torch
except ModuleNotFoundError:
%pip install -qq torch
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils import data
import random
import os
import time
np.random.seed(seed=1)
torch.manual_seed(1)
!mkdir figures # for saving plots
"""
Explanation: Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/15/attention_jax.ipynb
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/attention_torch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Basics of differentiable (soft) attention
We show how to implement soft attention.
Based on sec 10.3 of http://d2l.ai/chapter_attention-mechanisms/attention-scoring-functions.html.
End of explanation
"""
def sequence_mask(X, valid_len, value=0):
"""Mask irrelevant entries in sequences."""
maxlen = X.size(1)
mask = torch.arange((maxlen), dtype=torch.float32, device=X.device)[None, :] < valid_len[:, None]
X[~mask] = value
return X
def masked_softmax(X, valid_lens):
"""Perform softmax operation by masking elements on the last axis."""
# `X`: 3D tensor, `valid_lens`: 1D or 2D tensor
if valid_lens is None:
return nn.functional.softmax(X, dim=-1)
else:
shape = X.shape
if valid_lens.dim() == 1:
valid_lens = torch.repeat_interleave(valid_lens, shape[1])
else:
valid_lens = valid_lens.reshape(-1)
# On the last axis, replace masked elements with a very large negative
# value, whose exponentiation outputs 0
X = sequence_mask(X.reshape(-1, shape[-1]), valid_lens, value=-1e6)
return nn.functional.softmax(X.reshape(shape), dim=-1)
"""
Explanation: Masked soft attention
End of explanation
"""
Y = masked_softmax(torch.rand(2, 2, 4), torch.tensor([2, 3]))
print(Y)
"""
Explanation: Example. Batch size 2, feature size 2, sequence length 4.
The valid lengths are 2,3. So the output has size (2,2,4),
but the length dimension is full of 0s in the invalid locations.
End of explanation
"""
Y = masked_softmax(torch.rand(2, 2, 4), torch.tensor([[1, 3], [2, 4]]))
print(Y)
"""
Explanation: Example. Batch size 2, feature size 2, sequence length 4.
The valid lengths are (1,3) for batch 1, and (2,4) for batch 2.
End of explanation
"""
class AdditiveAttention(nn.Module):
def __init__(self, key_size, query_size, num_hiddens, dropout, **kwargs):
super(AdditiveAttention, self).__init__(**kwargs)
self.W_k = nn.Linear(key_size, num_hiddens, bias=False)
self.W_q = nn.Linear(query_size, num_hiddens, bias=False)
self.w_v = nn.Linear(num_hiddens, 1, bias=False)
self.dropout = nn.Dropout(dropout)
def forward(self, queries, keys, values, valid_lens):
queries, keys = self.W_q(queries), self.W_k(keys)
# After dimension expansion, shape of `queries`: (`batch_size`, no. of
# queries, 1, `num_hiddens`) and shape of `keys`: (`batch_size`, 1,
# no. of key-value pairs, `num_hiddens`). Sum them up with
# broadcasting
features = queries.unsqueeze(2) + keys.unsqueeze(1)
features = torch.tanh(features)
# There is only one output of `self.w_v`, so we remove the last
# one-dimensional entry from the shape. Shape of `scores`:
# (`batch_size`, no. of queries, no. of key-value pairs)
scores = self.w_v(features).squeeze(-1)
self.attention_weights = masked_softmax(scores, valid_lens)
# Shape of `values`: (`batch_size`, no. of key-value pairs, value
# dimension)
return torch.bmm(self.dropout(self.attention_weights), values)
# batch size 2. 1 query of dim 20, 10 keys of dim 2.
queries, keys = torch.normal(0, 1, (2, 1, 20)), torch.ones((2, 10, 2))
# 10 values of dim 4 in each of the 2 batches.
values = torch.arange(40, dtype=torch.float32).reshape(1, 10, 4).repeat(2, 1, 1)
print(values.shape)
valid_lens = torch.tensor([2, 6])
attention = AdditiveAttention(key_size=2, query_size=20, num_hiddens=8, dropout=0.1)
attention.eval()
A = attention(queries, keys, values, valid_lens)
print(A.shape)
print(A)
"""
Explanation: Additive attention
$$
\alpha(q,k) = w_v^T \tanh(W_q q + W_k k)
$$
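The scoring function can be written out directly in numpy for a single query/key pair (the dimensions below are arbitrary illustrative choices):

```python
import numpy as np

# One additive-attention score, following the formula above:
#   alpha(q, k) = w_v^T tanh(W_q q + W_k k)
rng = np.random.default_rng(0)
d_q, d_k, h = 20, 2, 8            # query dim, key dim, hidden size
q = rng.normal(size=d_q)
k = rng.normal(size=d_k)
W_q = rng.normal(size=(h, d_q))
W_k = rng.normal(size=(h, d_k))
w_v = rng.normal(size=h)
score = w_v @ np.tanh(W_q @ q + W_k @ k)
print(score)
```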
End of explanation
"""
def show_heatmaps(matrices, xlabel, ylabel, titles=None, figsize=(2.5, 2.5), cmap="Reds"):
display.set_matplotlib_formats("svg")
num_rows, num_cols = matrices.shape[0], matrices.shape[1]
fig, axes = plt.subplots(num_rows, num_cols, figsize=figsize, sharex=True, sharey=True, squeeze=False)
for i, (row_axes, row_matrices) in enumerate(zip(axes, matrices)):
for j, (ax, matrix) in enumerate(zip(row_axes, row_matrices)):
pcm = ax.imshow(matrix.detach(), cmap=cmap)
if i == num_rows - 1:
ax.set_xlabel(xlabel)
if j == 0:
ax.set_ylabel(ylabel)
if titles:
ax.set_title(titles[j])
fig.colorbar(pcm, ax=axes, shrink=0.6)
show_heatmaps(attention.attention_weights.reshape((1, 1, 2, 10)), xlabel="Keys", ylabel="Queries")
"""
Explanation: The heatmap is uniform across the keys, since the keys are all 1s.
However, the support is truncated to the valid length.
End of explanation
"""
class DotProductAttention(nn.Module):
"""Scaled dot product attention."""
def __init__(self, dropout, **kwargs):
super(DotProductAttention, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
# Shape of `queries`: (`batch_size`, no. of queries, `d`)
# Shape of `keys`: (`batch_size`, no. of key-value pairs, `d`)
# Shape of `values`: (`batch_size`, no. of key-value pairs, value
# dimension)
# Shape of `valid_lens`: (`batch_size`,) or (`batch_size`, no. of queries)
def forward(self, queries, keys, values, valid_lens=None):
d = queries.shape[-1]
# Set `transpose_b=True` to swap the last two dimensions of `keys`
scores = torch.bmm(queries, keys.transpose(1, 2)) / math.sqrt(d)
self.attention_weights = masked_softmax(scores, valid_lens)
return torch.bmm(self.dropout(self.attention_weights), values)
# batch size 2. 1 query of dim 2, 10 keys of dim 2.
queries = torch.normal(0, 1, (2, 1, 2))
attention = DotProductAttention(dropout=0.5)
attention.eval()
attention(queries, keys, values, valid_lens)
show_heatmaps(attention.attention_weights.reshape((1, 1, 2, 10)), xlabel="Keys", ylabel="Queries")
"""
Explanation: Dot-product attention
$$
A = \text{softmax}(Q K^T/\sqrt{d}) V
$$
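The same formula transcribed directly into numpy for one batch element (shapes are arbitrary sketch values):

```python
import numpy as np

# Scaled dot-product attention for a single batch element:
#   A = softmax(Q K^T / sqrt(d)) V
rng = np.random.default_rng(0)
Q = rng.normal(size=(1, 4))    # 1 query of dimension d = 4
K = rng.normal(size=(10, 4))   # 10 keys
V = rng.normal(size=(10, 3))   # 10 values of dimension 3
scores = Q @ K.T / np.sqrt(Q.shape[-1])
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
out = weights @ V
print(out.shape)
```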
End of explanation
"""
|
cdawei/flickr-photo | src/suburb-names.ipynb | gpl-2.0 | from osgeo import ogr, osr
import pandas as pd
import numpy as np
import collections
"""
Explanation: Assign Suburbs to POIs
Use the SA2 level from the ABS to find suburbs. Uses a file downloaded from the ABS, which is the ZIP link called Statistical Area Level 2 (SA2) ASGS Ed 2011 Digital Boundaries in ESRI Shapefile Format. The files below are obtained after unzipping that file.
ABS has a habit of changing their links, so the above links are likely to be broken.
End of explanation
"""
data_dir = '../data/'
driver = ogr.GetDriverByName('ESRI Shapefile')
node_source = driver.Open(data_dir + 'SA2_2011_AUST.shp', 0)
layer = node_source.GetLayer()
layer.GetFeatureCount()
layer.ResetReading()
feature = layer.GetNextFeature()
feature.keys()
layer.ResetReading()
feature = layer.GetNextFeature()
sla2 = []
name = []
while feature:
# Victoria is state number 2
if feature.GetField('STE_CODE11') == '2':
sla2.append(feature.GetField('SA2_MAIN11'))
name.append(feature.GetField('SA2_NAME11'))
feature = layer.GetNextFeature()
print(len(sla2))
sla_name_dict = {}
for ix, sla in enumerate(sla2):
sla_name_dict[sla] = name[ix]
"""
Explanation: Extracting SLA2 region names
End of explanation
"""
def find_points(locations, sla_name, pt, polygon):
"""
locations : the dataframe containing the locations
sla_name : the name of the SLA region
pt : a handle that determines coordinates
polygon : the polygon defining the SLA
"""
for idx, row in locations.iterrows():
lat, long = row['poiLat'], row['poiLon']
if np.isinf(lat):
continue
pt.SetPoint(0, long, lat)
try:
inside = pt.Within(polygon)
except ValueError:
inside = False
print(long, lat)
print('Unable to solve inside polygon')
return
if inside:
locations.loc[idx, 'suburb'] = sla_name
spatial_ref = osr.SpatialReference()
spatial_ref.SetWellKnownGeogCS("WGS84")
pt = ogr.Geometry(ogr.wkbPoint)
pt.AssignSpatialReference(spatial_ref)
layer.ResetReading()
poi = pd.read_csv(data_dir + 'poi-Melb-all.csv')
poi['suburb'] = ''
sla = layer.GetNextFeature()
num_sla = 1
while sla:
# Victoria is state number 2
if sla.GetField('STE_CODE11') == '2':
sla_id = sla.GetField('SA2_MAIN11')
sla_name = sla_name_dict[sla_id]
polygon = sla.GetGeometryRef()
find_points(poi, sla_name, pt, polygon)
# progress bar
if num_sla % 100 == 0:
print(num_sla)
num_sla += 1
sla = layer.GetNextFeature()
poi.head()
poi.to_csv(data_dir + 'poi-Melb-all-suburb.csv', index=False)
"""
Explanation: Find SLA region name, and add to POIs
End of explanation
"""
print(len(poi))
c_theme = collections.Counter(poi['poiTheme'])
c_suburb = collections.Counter(poi['suburb'])
print(c_theme.most_common(10))
print(c_suburb.most_common(10))
"""
Explanation: Some statistics
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/test-institute-1/cmip6/models/sandbox-3/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-3', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: TEST-INSTITUTE-1
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
konstantinstadler/pymrio | doc/source/notebooks/working_with_eora26.ipynb | gpl-3.0

import pymrio
eora_storage = '/tmp/mrios/eora26'
eora = pymrio.parse_eora26(year=2005, path=eora_storage)
"""
Explanation: Parsing the Eora26 EE MRIO database
Getting Eora26
The Eora26 database is available at http://www.worldmrio.com.
You need to register there and can then download the files from http://www.worldmrio.com/simplified.
Parse
To parse a single year do:
End of explanation
"""
eora.get_regions()
"""
Explanation: Explore
Eora includes (almost) all countries:
End of explanation
"""
import country_converter as coco
eora.aggregate(region_agg = coco.agg_conc(original_countries='Eora',
aggregates=['OECD'],
missing_countries='NON_OECD')
)
eora.get_regions()
eora.calc_all()
import matplotlib.pyplot as plt
with plt.style.context('ggplot'):
eora.Q.plot_account(('Total cropland area', 'Total'), figsize=(8,5))
plt.show()
"""
Explanation: This can easily be aggregated to, for example, the OECD/NON_OECD countries with the help of the country converter coco.
End of explanation
"""
shubham0704/ATR-FNN | MAMs discussion.ipynb | mit

# dependencies
import matplotlib.pyplot as plt
import pickle
import numpy as np
f = open('final_dataset.pickle','rb')
dataset = pickle.load(f)
sample_image = dataset['train_dataset'][0]
sample_label = dataset['train_labels'][0]
print(sample_label)
plt.figure()
plt.imshow(sample_image)
plt.show()
# let's make Wxx and Mxx for this image
# Wxx
x = np.array([0,0,0])
y = np.array([0,1,0])
final = np.subtract.outer(y,x)
print(final)
# 1. flatten the whole image into a vector of n pixels
x_vectors = sample_image.flatten()
# x_vectors has shape (img_len,); the autoassociative memory W_xx is the
# (img_len, img_len) matrix of pairwise differences w_ij = x_i - x_j,
# i.e. the outer difference of the pattern vector with itself
#x_vectors = np.array([1,2,5,1])
weights = np.subtract.outer(x_vectors, x_vectors)
print(weights)
add_individual = np.add(weights, x_vectors)
# pg-6: recall is a row-wise max, y_i = max_j (w_ij + x_j)
result = [max(row) for row in add_individual]
np.testing.assert_array_almost_equal(x_vectors, result)
print('done')
# for k=1 dimesions of Mxx and Wxx are same
"""
Explanation: Morphological Associative memories
End of explanation
"""
import cv2
erode_img = sample_image
# kernel is a pixel set like a cross( or any shape) which convolves and erodes
kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(erode_img,kernel,iterations = 1)
plt.figure()
plt.imshow(erosion)
plt.show()
# Now let's try the recall on the eroded input
x_eroded = erosion
x_eroded_vector = x_eroded.flatten()
add_individual = np.add(weights, x_eroded_vector)
result = np.array([max(row) for row in add_individual])
# now lets reshape the result to 128 x 128
result.shape = (128, 128)
plt.figure()
plt.imshow(result)
plt.show()
# now let's see the amount of recall error
result = result.flatten()
np.testing.assert_array_almost_equal(result, x_vectors)
print('done 0%')
"""
Explanation: Now let's add some erosive noise to the image and then see the recall
End of explanation
"""
|
emiliom/stuff | pyoos_ndbc_demo.ipynb | cc0-1.0 | import pandas as pd
from pyoos.collectors.ndbc.ndbc_sos import NdbcSos
import owslib.swe.sensor.sml as owslibsml
fmt = '{:*^64}'.format
"""
Explanation: pyoos NDBC demo
Demo the capabilities of the pyoos NdbcSos collector.
The starting point was a notebook from Filipe for the OOI Endurance Array, which has a spatial extent encompassed within the NANOOS domain.
12/23/2015. Emilio Mayorga, NANOOS
End of explanation
"""
# OOI Endurance Array bounding box
bbox = [-127, 43, -123.75, 48]
from datetime import datetime, timedelta
dt = 5 # days
now = datetime.utcnow()
start = now - timedelta(days=dt)
stop = now + timedelta(days=dt)
sos_name = 'sea_water_temperature'
"""
Explanation: Set pyoos filter criteria
End of explanation
"""
collector_ndbc = NdbcSos()
collector_ndbc.set_bbox(bbox)
collector_ndbc.end_time = stop
collector_ndbc.start_time = start
collector_ndbc.variables = [sos_name]
ofrs = collector_ndbc.server.offerings
title = collector_ndbc.server.identification.title
print(fmt(' NDBC Collector offerings '))
print('{}: {} offerings'.format(title, len(ofrs)))
"""
Explanation: Instantiate and configure (filter) NdbcSos collector
End of explanation
"""
# 'offering' is a list; here's one entry
ofr1 = collector_ndbc.server.offerings[1]
vars(ofr1)
ofr1.id, ofr1.name, ofr1.procedures
# 'content' is a dictionary; here's one entry
vars(collector_ndbc.server.contents['station-46211'])
"""
Explanation: Note that the filters set on the collector don't apply to the server offerings. server shows everything available. That's why there are 964 offerings.
FYI: Information available from each offering
Sparse but useful metadata. No filtering of any kind applied yet.
End of explanation
"""
collector_ndbc_raw = collector_ndbc.raw(responseFormat="text/csv")
"""
Explanation: Apply collector raw method
The result is the filtered time series data. But that information will also be used to extract the set of stations returned, then issue a collector metadata request.
End of explanation
"""
type(collector_ndbc_raw), len(collector_ndbc_raw)
# See a piece of the csv
ndbc_raw_lst = collector_ndbc_raw.splitlines()
len(ndbc_raw_lst)
ndbc_raw_lst[:5]
"""
Explanation: collector_ndbc_raw is the entire csv string.
End of explanation
"""
from io import StringIO
datacsv_df = pd.read_csv(StringIO(collector_ndbc_raw),
                         parse_dates=True)
columns = {'station_id': 'station',
'depth (m)': 'depth_m',
'sea_water_temperature (C)': sos_name}
datacsv_df.rename(columns=columns, inplace=True)
datacsv_df['station'] = [s.split(':')[-1] for s in datacsv_df['station']]
datacsv_df.drop(['sensor_id', 'latitude (degree)', 'longitude (degree)'],
axis=1, inplace=True)
datacsv_df.head(10)
"""
Explanation: Read the time series data into a DataFrame; but keep only the information/columns we'll use for plotting.
End of explanation
"""
stations = list(datacsv_df.station.unique())
len(stations), stations
"""
Explanation: Extract unique station id's as a list
To be reused with the collector metadata method, next.
End of explanation
"""
descsen = collector_ndbc.server.get_operation_by_name('describesensor')
descsen.parameters
# The metadata method for the NdbcSos collector expects
# features to be in the shortened, "station wmo id" form (eg, 46098),
# rather than the offering name (eg, station-46089)
# or the station urn (eg, urn:ioos:station:wmo:46089)
output_format = descsen.parameters['outputFormat']['values'][0]
collector_ndbc.features = stations
ndbc_md_lst = collector_ndbc.metadata(output_format=output_format)
type(ndbc_md_lst), len(ndbc_md_lst), ndbc_md_lst[0]
for sml in ndbc_md_lst:
    print(sml.members[0].identifiers['stationId'].value)
"""
Explanation: Extract station information using collector metadata method
End of explanation
"""
sta0_sml = ndbc_md_lst[4].members[0]
sta0_sml
vars(sta0_sml)
"""
Explanation: One of the SensorML responses, for illustration. The corresponding URL for the DescribeSensor request for sta0_sml is http://sdf.ndbc.noaa.gov/sos/server.php?request=DescribeSensor&service=SOS&version=1.0.0&outputformat=text/xml;subtype=%22sensorML/1.0.1%22&procedure=urn:ioos:station:wmo:46211
End of explanation
"""
sta0_sml.contacts.keys(), sta0_sml.identifiers.keys(), sta0_sml.classifiers.keys()
"""
Explanation: Available contacts roles, identifiers, and classifiers:
End of explanation
"""
stations_md_rec = []
for sta_sml_members in ndbc_md_lst:
sta_sml = sta_sml_members.members[0]
station_urn = sta_sml.identifiers['stationId'].value
# The XPath for the location/point coordinates doesn't follow
# the IOOS SOS convention.
loc_str = sta_sml.location.find(owslibsml.nsp('gml:Point/gml:coordinates')).text
# In some cases the platform type XML element seems to be missing ...
if 'platformType' in sta_sml.classifiers:
platform_type = sta_sml.classifiers['platformType'].value
else:
platform_type = 'Unknown'
sta_md = dict(
station_id = station_urn.split(':')[-1],
station_urn = station_urn,
# Long name is also available in the offering as the offering description
longname = sta_sml.identifiers['longName'].value,
lat = float(loc_str.split()[0]),
lon = float(loc_str.split()[1]),
operator = sta_sml.contacts['http://mmisw.org/ont/ioos/definition/operator'].organization,
platform_type = platform_type
)
stations_md_rec.append(sta_md)
stations_df = pd.DataFrame.from_records(stations_md_rec, index='station_id')
print "Number of stations: %d" % len(stations_df)
stations_df.head(20)
"""
Explanation: Extract and transform the desired station metadata, then create DataFrame
End of explanation
"""
%matplotlib inline
import seaborn
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(14, 5))
colors = seaborn.color_palette("Set2", len(stations))
for k, station in enumerate(stations):
stadatadf = datacsv_df[datacsv_df.station == station]
stadatadf = stadatadf.set_index('date_time')
stamddf = stations_df.ix[station]
label = "%s: %s" % (station, stamddf.longname)
stadatadf[sos_name].plot(ax=ax, label=label, color=colors[k])
ax.legend(bbox_to_anchor=(1, 1))
ax.set_ylabel(sos_name + ' (C)');
"""
Explanation: Plot the time series
End of explanation
"""
# The argument offerings=['urn:ioos:network:noaa.nws.ndbc:all'] is not needed.
# It is passed by default.
collector_ndbc_collect = collector_ndbc.collect(responseFormat='text/xml;subtype="om/1.0.0"')
type(collector_ndbc_collect), len(collector_ndbc_collect), collector_ndbc_collect[0]
for s in collector_ndbc_collect:
    print(s.description)
# The feature object property is not populated,
# so the time series data are not parsed and made available
ncc=collector_ndbc_collect[0]
vars(ncc)
"""
Explanation: The sefo3 station can predict the future!
Postscript: The collector collect method
It hasn't been customized for NdbcSos, so it is not very useful yet.
End of explanation
"""
|
vadim-ivlev/STUDY | handson-data-science-python/DataScience-Python3/ConditionalProbabilitySolution.ipynb | mit | from numpy import random
random.seed(0)
totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
totalPurchases = 0
for _ in range(100000):
ageDecade = random.choice([20, 30, 40, 50, 60, 70])
purchaseProbability = 0.4
totals[ageDecade] += 1
if (random.random() < purchaseProbability):
totalPurchases += 1
purchases[ageDecade] += 1
"""
Explanation: Conditional Probability Solution
First we'll modify the code to have some fixed purchase probability regardless of age, say 40%:
End of explanation
"""
PEF = float(purchases[30]) / float(totals[30])
print("P(purchase | 30s): " + str(PEF))
"""
Explanation: Next we will compute P(E|F) for some age group, let's pick 30 year olds again:
End of explanation
"""
PE = float(totalPurchases) / 100000.0
print("P(Purchase):" + str(PE))
"""
Explanation: Now we'll compute P(E)
End of explanation
"""
|
ereodeereigeo/dataTritiumWS22 | numero_de_datos_perdidos_slide.ipynb | gpl-2.0 | import pandas as pd
"""
Explanation: Number of records obtained and lost
We import the required libraries
End of explanation
"""
import ext_datos as ext
import procesar as pro
import time_plot as tplt
"""
Explanation: We import the libraries we created for this analysis
End of explanation
"""
dia1 = ext.extraer_data('dia1')
cd ..
dia2 = ext.extraer_data('dia2')
cd ..
dia3 = ext.extraer_data('dia3')
cd ..
dia4 = ext.extraer_data('dia4')
"""
Explanation: We generate the datasets for every day
Notes:
The files saved in one folder per day will be read.
The problem with the solar car's current telemetry configuration
is that it uses software built to log data from one motor at a
time, which forces us to run two instances, each of which writes
a different data file. However, if the TCP/IP connection is
momentarily lost, it reconnects automatically and the programs
reconnect to the first motor they detect.
This produces mixed files, duplicated records, and lost data.
I wrote a script that automates separating the motors, merges
the duplicated records, and joins both motors into a single table.
First, the data from all of each day's files are extracted and
a list of tables, separated by motor, is generated
End of explanation
"""
motoresdia1 = pro.procesar(dia1)
motoresdia2 = pro.procesar(dia2)
motoresdia3 = pro.procesar(dia3)
motoresdia4 = pro.procesar(dia4)
"""
Explanation: The previous lists are processed: they are concatenated by motor
according to the timestamp of the records, the empty slots are
filled with NaN values, then the tables are joined side by side
and the suffixes _m1 and _m2 are added to distinguish the columns
End of explanation
"""
con1 , sin1 = len(motoresdia1), len(motoresdia1.dropna())
con2 , sin2 = len(motoresdia2), len(motoresdia2.dropna())
con3 , sin3 = len(motoresdia3), len(motoresdia3.dropna())
con4 , sin4 = len(motoresdia4), len(motoresdia4.dropna())
d = {'datos_con_valores_perdidos':[con1,con2,con3,con4],\
'datos_sin_valores_perdidos':[sin1,sin2,sin3,sin4]}
tabla1= pd.DataFrame(d,index=[1,2,3,4])
"""
Explanation: We measure the number of rows in the files (with and without missing values)
End of explanation
"""
tabla1
"""
Explanation: We display the data in the table
The first column corresponds to the day of the competition
The second shows the records obtained with missing values from one motor or the other
The third corresponds to the records in which both motors were logged
End of explanation
"""
m1d1, m2d1 = len(motoresdia1.motorRpm_m1.dropna()),\
len(motoresdia1.motorRpm_m2.dropna())
m1d2, m2d2 = len(motoresdia2.motorRpm_m1.dropna()),\
len(motoresdia2.motorRpm_m2.dropna())
m1d3, m2d3 = len(motoresdia3.motorRpm_m1.dropna()),\
len(motoresdia3.motorRpm_m2.dropna())
m1d4, m2d4 = len(motoresdia4.motorRpm_m1.dropna()),\
len(motoresdia4.motorRpm_m2.dropna())
"""
Explanation: The data lost per motor are calculated
End of explanation
"""
p1d1, p2d1 = round(m1d1/float(con1),4) , round(m2d1/float(con1),4)
p1d2, p2d2 = round(m1d2/float(con2),4) , round(m2d2/float(con2),4)
p1d3 , p2d3 = round(m1d3/float(con3),4) , round(m2d3/float(con3),4)
p1d4 , p2d4 = round(m1d4/float(con4),4) , round(m2d4/float(con4),4)
labels = {'motor1':[100*p1d1, 100*p1d2, 100*p1d3, 100*p1d4],'motor2':\
[100*p2d1, 100*p2d2, 100*p2d3, 100*p2d4]}
tabla2 = pd.DataFrame(labels, index=[1,2,3,4])
tabla2
"""
Explanation: The ratio between the effective records per motor and
the total logged in each file is calculated
End of explanation
"""
|
0u812/nteract | example-notebooks/download-stats.ipynb | bsd-3-clause | import IPython.display
import pandas as pd
import requests
# Note:
data = requests.get('https://api.github.com/repos/nteract/nteract/releases').json()
print("{}:\n\t{}\n\t{}".format(
data[0]['tag_name'],
data[0]['assets'][0]['browser_download_url'],
data[0]['assets'][0]['download_count']
))
"""
Explanation: Download counts for nteract
End of explanation
"""
def strip_off_release(browser_download_url):
filename = browser_download_url.split('/')[-1]
basename = filename.split('.')[0]
system = basename.split('-')[1:]
return "-".join(system)
releases = [
{
'version': x['tag_name'],
'counts': { strip_off_release(y['browser_download_url']): y['download_count'] for y in x['assets'] }
}
for x in data
]
releases
versions = []
frames = []
for release in releases:
versions.append(release['version'])
frames.append(pd.DataFrame.from_dict(release['counts'], orient='index'))
df = pd.concat(frames, keys=versions).reset_index()
df.columns = ['version', 'os', 'count']
df['os'] = df.os.replace('os-x', 'darwin-x64')
df
"""
Explanation: The releases API only gives us the asset filename, so we'll convert:
https://github.com/nteract/nteract/releases/download/v0.0.13/nteract-darwin-x64.zip
to
darwin-x64
This means we're relying on our release naming to keep this structure consistent.
End of explanation
"""
from distutils.version import LooseVersion
versions = set(df.version.values.tolist())
versions = sorted(versions, key=LooseVersion)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
with sns.color_palette("colorblind", len(versions)):
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
ax = sns.barplot(x='version', y="count", hue="os", data=df, order=versions)
ax.set_xticklabels(versions, rotation=30)
ax.set(xlabel='Version', ylabel='Count')
plt.legend(bbox_to_anchor=(1, 1), loc=2, borderaxespad=0)
plt.show()
"""
Explanation: It would be really interesting to know how these counts change over time.
End of explanation
"""
|
ljchang/psyc63 | Notebooks/7_Introduction_to_Scraping.ipynb | mit | try:
import requests
except:
!pip install requests
try:
from bs4 import BeautifulSoup
except:
!pip install bs4
"""
Explanation: Introduction to Web Scraping
Often we are interested in getting data from a website. Modern websites are often built using a REST framework that exposes an Application Programming Interface (API), so clients can make HTTP requests to retrieve structured data in the form of JSON or XML.
However, when there is no clear API, we might need to perform web scraping by grabbing the data directly ourselves.
End of explanation
"""
page = requests.get("http://pbs.dartmouth.edu/people")
print(page)
"""
Explanation: Getting data using requests
Requests is an excellent library for performing HTTP requests.
In this simple example, we will scrape data from the PBS faculty webpage.
End of explanation
"""
print(page.content)
"""
Explanation: Here the response '200' indicates that the get request was successful. Now let's look at the actual text that was downloaded from the webpage.
End of explanation
"""
soup = BeautifulSoup(page.content, 'html.parser')
print(soup.prettify())
"""
Explanation: Here you can see that we have downloaded all of the data from the PBS faculty page and that it is in the form of HTML.
HTML is a markup language that tells a browser how to lay out content. HTML consists of elements called tags; most elements have an opening tag and a matching closing tag. Here are a few examples:
<a></a> - indicates hyperlink
<p></p> - indicates paragraph
<div></div> - indicates a division, or area, of the page.
<b></b> - bolds any text inside.
<i></i> - italicizes any text inside.
<h1></h1> - indicates a header
<table></table> - creates a table.
<ol></ol> - ordered list
<ul></ul> - unordered list
<li></li> - list item
Parsing HTML using Beautiful Soup
There are many libraries that can be helpful for quickly parsing structured text such as HTML. We will be using Beautiful Soup as an example.
End of explanation
"""
names_html = soup.find_all('ul',id='faculty-container')[0].find_all('h4')
names = [x.text for x in names_html]
print(names)
"""
Explanation: Here we are going to find the unordered list tagged with the id 'faculty-container'. We are then going to look for any nested tag that use the 'h4' header tag. This should give us all of the lines with the faculty names as a list.
End of explanation
"""
email_html = soup.find_all('ul',id='faculty-container')[0].find_all('span',{'class' : 'contact'})
email = [x.text for x in email_html]
print(email)
"""
Explanation: What if we wanted to get all of the faculty email addresses?
End of explanation
"""
print([x.split('@')[0] for x in email])
"""
Explanation: Parsing string data
What if we wanted to grab the name from the list of email addresses?
End of explanation
"""
email_dict = dict([(x.split('@')[0],x) for x in email])
print(email_dict)
"""
Explanation: One thing we might do with this data is create a dictionary with names and emails of all of the professors in the department. This could be useful if we wanted to send a bulk email to them.
End of explanation
"""
for x in email_dict.keys():
old = x.split('.')
email_dict[" ".join([i for i in old if len(i) > 2])] = email_dict[x]
del email_dict[x]
print(email_dict)
"""
Explanation: You can see that every name also includes an initial. Let's try to just pull out the first and last name.
End of explanation
"""
|
IBMDecisionOptimization/docplex-examples | examples/mp/jupyter/boxes.ipynb | apache-2.0 | import sys
try:
import docplex.mp
except:
raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
"""
Explanation: Objects in boxes
This tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model on the cloud with IBM ILOG CPLEX Optimizer.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the DO add-on in Watson Studio Premium for the full edition
Table of contents:
Describe the business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Import the library
Step 2: Model the data
Step 3: Prepare the data
Step 4: Set up the prescriptive model
Define the decision variables
Express the business constraints
Express the objective
Solve the model
Step 5: Investigate the solution and run an example analysis
Summary
Describe the business problem
We wish to put $N$ objects which are scattered in the plane, into a row of $N$ boxes.
Boxes are aligned from left to right (if $i < i'$, box $i$ is to the left of box $i'$) on the $x$ axis.
Box $i$ is located at a point $B_i$ of the $(x,y)$ plane and object $j$ is located at $O_j$.
We want to find an arrangement of objects such that:
each box contains exactly one object,
each object is stored in one box,
the total distance from object $j$ to its storage box is minimal.
First, we solve the problem described, and then we add two new constraints and examine how the cost (and solution) changes.
From the first solution, we impose that object #1 is assigned to the box immediately to the left of object #2.
Then we impose that object #5 is assigned to a box next to the box of object #6.
How decision optimization can help
Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
<u>With prescriptive analytics, you can:</u>
Automate the complex decisions and trade-offs to better manage your limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
Use decision optimization
Step 1: Import the library
Run the following code to import the Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
End of explanation
"""
try:
import cplex
except:
raise Exception('Please install CPLEX. See https://pypi.org/project/cplex/')
"""
Explanation: If CPLEX is not installed, please install CPLEX Community edition.
End of explanation
"""
from math import sqrt
N = 15
box_range = range(1, N+1)
obj_range = range(1, N+1)
import random
o_xmax = N*10
o_ymax = 2*N
box_coords = {b: (10*b, 1) for b in box_range}
obj_coords= {1: (140, 6), 2: (146, 8), 3: (132, 14), 4: (53, 28),
5: (146, 4), 6: (137, 13), 7: (95, 12), 8: (68, 9), 9: (102, 18),
10: (116, 8), 11: (19, 29), 12: (89, 15), 13: (141, 4), 14: (29, 4), 15: (4, 28)}
# the distance matrix from box i to object j
# actually we compute the square of the distance to keep values integer
# this does not change the essence of the problem
distances = {}
for o in obj_range:
for b in box_range:
dx = obj_coords[o][0]-box_coords[b][0]
dy = obj_coords[o][1]-box_coords[b][1]
d2 = dx*dx + dy*dy
distances[b, o] = d2
"""
Explanation: Step 2: Model the data
The input data is the number of objects (and boxes) N, and their positions in the (x,y) plane.
Step 3: Prepare the data
We use Euclidean distance to compute the distance between an object and its assigned box.
End of explanation
"""
from docplex.mp.environment import Environment
env = Environment()
env.print_information()
"""
Explanation: Step 4: Set up the prescriptive model
End of explanation
"""
from docplex.mp.model import Model
mdl = Model(name="boxes")
"""
Explanation: Create the DOcplex model
The model contains all the business constraints and defines the objective.
End of explanation
"""
# decision variables is a 2d-matrix
x = mdl.binary_var_matrix(box_range, obj_range, name=lambda ij: "x_%d_%d" %(ij[0], ij[1]))
"""
Explanation: Define the decision variables
For each box $i$ ($i$ in $1..N$) and object $j$ ($j$ in $1..N$), we define a binary variable $X_{i,j}$ equal to $1$ if and only if object $j$ is stored in box $i$.
Note that the $name$ parameter is actually a function: it takes a key pair $ij$ and coins a new name for each corresponding variable. The $name$ parameter also accepts a string prefix, in which case DOcplex will generate names by concatenating the prefix with the string representation of the keys.
End of explanation
"""
# one object per box
mdl.add_constraints(mdl.sum(x[i,j] for j in obj_range) == 1
for i in box_range)
# one box for each object
mdl.add_constraints(mdl.sum(x[i,j] for i in box_range) == 1 for j in obj_range)
mdl.print_information()
"""
Explanation: Express the business constraints
For each box, the sum of $X_{i,j}$ over objects must equal $1$, and for each object the sum over boxes must equal $1$, resulting in $2\times N$ constraints.
End of explanation
"""
# minimize total displacement
mdl.minimize( mdl.dotf(x, lambda ij: distances[ij]))
"""
Explanation: Express the objective
The objective is to minimize the total distance between each object and its storage box.
End of explanation
"""
mdl.print_information()
assert mdl.solve(log_output=True), "!!! Solve of the model fails"
mdl.report()
d1 = mdl.objective_value
#mdl.print_solution()
def make_solution_vector(x_vars):
sol = [0]* N
for i in box_range:
for j in obj_range:
if x[i,j].solution_value >= 0.5:
sol[i-1] = j
break
return sol
def make_obj_box_dir(sol_vec):
# sol_vec contains an array of objects in box order at slot b-1 we have obj(b)
return { sol_vec[b]: b+1 for b in range(N)}
sol1 = make_solution_vector(x)
print("* solution: {0!s}".format(sol1))
"""
Explanation: Solve the model
End of explanation
"""
mdl.add_(x[1,2] == 0)
"""
Explanation: Additional constraint #1
As an additional constraint, we want to impose that object #1 is stored immediately to the left of object #2.
As a consequence, object #2 cannot be stored in box #1, so we add:
End of explanation
"""
mdl.add_constraints(x[k-1,1] >= x[k,2]
for k in range(2,N+1))
mdl.print_information()
"""
Explanation: Now, we must state that for $k \geq 2$ if $x[k,2] == 1$ then $x[k-1,1] == 1$; this is a logical implication that we express by a relational operator:
End of explanation
"""
s2 = mdl.solve()
assert s2, "solve failed"
mdl.report()
d2 = mdl.objective_value
sol2 = make_solution_vector(x)
print(" solution #2 ={0!s}".format(sol2))
"""
Explanation: Now let's solve again and check that our new constraint is satisfied, that is, object #1 is immediately to the left of object #2
End of explanation
"""
# for all k in 2..N-1 we can use the sum on the right-hand side
mdl.add_constraints(x[k,6] <= x[k-1,5] + x[k+1,5] for k in range(2,N))
# if 6 is in box 1 then 5 must be in 2
mdl.add_constraint(x[1,6] <= x[2,5])
# if 6 is last, then 5 must be before last
mdl.add_constraint(x[N,6] <= x[N-1,5])
# we solve again
s3 = mdl.solve()
assert s3, "solve failed"
mdl.report()
d3 = mdl.objective_value
sol3 = make_solution_vector(x)
print(" solution #3 ={0!s}".format(sol3))
"""
Explanation: The constraint is indeed satisfied, with a higher objective, as expected.
Additional constraint #2
Now, we want to add a second constraint to state that object #5 is stored in a box that is next to the box of object #6, either to the left or right.
In other words, when $x[k,6]$ is equal to $1$, then one of $x[k-1,5]$ and $x[k+1,5]$ is equal to $1$;
this is again a logical implication, with an OR on the right side.
We have to handle the case of extremities with care.
End of explanation
"""
try:
import matplotlib.pyplot as plt
from pylab import rcParams
%matplotlib inline
rcParams['figure.figsize'] = 12, 6
def display_solution(sol):
obj_boxes = make_obj_box_dir(sol)
xs = []
ys = []
for o in obj_range:
b = obj_boxes[o]
box_x = box_coords[b][0]
box_y = box_coords[b][1]
obj_x = obj_coords[o][0]
obj_y = obj_coords[o][1]
plt.text(obj_x, obj_y, str(o), bbox=dict(facecolor='red', alpha=0.5))
plt.plot([obj_x, box_x], [obj_y, box_y])
except ImportError:
print("matplotlib not found, nothing will be displayed")
plt = None
def display_solution(sol): pass
"""
Explanation: As expected, the constraint is satisfied; objects #5 and #6 are next to each other.
Predictably, the objective is higher.
Step 5: Investigate the solution and then run an example analysis
Present the solution as a vector of object indices, sorted by box indices.
We use matplotlib to display the assignment of objects to boxes.
End of explanation
"""
display_solution(sol1)
"""
Explanation: The first solution shows no segments crossing, which is to be expected.
End of explanation
"""
display_solution(sol2)
display_solution(sol3)
def display(myDict, title):
if True: #env.has_matplotlib:
N = len(myDict)
labels = myDict.keys()
values= myDict.values()
try: # Python 2
ind = xrange(N) # the x locations for the groups
except: # Python 3
ind = range(N)
width = 0.2 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, values, width, color='g')
ax.set_title(title)
ax.set_xticks([ind[i]+width/2 for i in ind])
ax.set_xticklabels( labels )
#ax.legend( (rects1[0]), (title) )
plt.show()
else:
print("warning: no display")
from collections import OrderedDict
dists = OrderedDict()
dists["d1"]= d1 -8000
dists["d2"] = d2 - 8000
dists["d3"] = d3 - 8000
print(dists)
display(dists, "evolution of distance objective")
"""
Explanation: The second solution, by enforcing that object #1 must be to the left of object #2, introduces crossings.
End of explanation
"""
|
sanger-pathogens/Roary | contrib/roary_plots/roary_plots.ipynb | gpl-3.0 | # Plotting imports
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
# Other imports
import os
import pandas as pd
import numpy as np
from Bio import Phylo
"""
Explanation: Roary pangenome plots
<h6><a href="javascript:toggle()" target="_self">Toggle source code</a></h6>
End of explanation
"""
t = Phylo.read('parsnp.tree', 'newick')
# Max distance to create better plots
mdist = max([t.distance(t.root, x) for x in t.get_terminals()])
"""
Explanation: parSNP tree
Any other valid Newick file is fine, as long as the tip labels are the same as in the gene_presence_absence matrix from Roary.
End of explanation
"""
# Load roary
roary = pd.read_csv('gene_presence_absence.csv',
                    low_memory=False)
# Set index (group name)
roary.set_index('Gene', inplace=True)
# Drop the other info columns
roary.drop(list(roary.columns[:13]), axis=1, inplace=True)
# Transform it in a presence/absence matrix (1/0)
roary.replace('.{2,100}', 1, regex=True, inplace=True)
roary.replace(np.nan, 0, regex=True, inplace=True)
# Sort the matrix by the sum of strains presence
idx = roary.sum(axis=1).sort_values(ascending=False).index
roary_sorted = roary.loc[idx]
# Pangenome frequency plot
plt.figure(figsize=(7, 5))
plt.hist(roary.sum(axis=1), roary.shape[1],
histtype="stepfilled", alpha=.7)
plt.xlabel('Number of genomes')
plt.ylabel('Number of genes')
sns.despine(left=True,
bottom=True)
# Sort the matrix according to tip labels in the tree
roary_sorted = roary_sorted[[x.name for x in t.get_terminals()]]
# Plot presence/absence matrix against the tree
with sns.axes_style('whitegrid'):
fig = plt.figure(figsize=(17, 10))
ax1=plt.subplot2grid((1,40), (0, 10), colspan=30)
a=ax1.imshow(roary_sorted.T, cmap=plt.cm.Blues,
vmin=0, vmax=1,
aspect='auto',
interpolation='none',
)
ax1.set_yticks([])
ax1.set_xticks([])
ax1.axis('off')
ax = fig.add_subplot(1,2,1)
ax=plt.subplot2grid((1,40), (0, 0), colspan=10, facecolor='white')
fig.subplots_adjust(wspace=0, hspace=0)
ax1.set_title('Roary matrix\n(%d gene clusters)'%roary.shape[0])
Phylo.draw(t, axes=ax,
show_confidence=False,
label_func=lambda x: None,
xticks=([],), yticks=([],),
ylabel=('',), xlabel=('',),
xlim=(-0.01,mdist+0.01),
axis=('off',),
title=('parSNP tree\n(%d strains)'%roary.shape[1],),
)
# Plot the pangenome pie chart
plt.figure(figsize=(10, 10))
core = roary[roary.sum(axis=1) == roary.shape[1]].shape[0]
softcore = roary[(roary.sum(axis=1) < roary.shape[1]) &
(roary.sum(axis=1) >= roary.shape[1]*0.95)].shape[0]
shell = roary[(roary.sum(axis=1) < roary.shape[1]*0.95) &
(roary.sum(axis=1) >= roary.shape[1]*0.15)].shape[0]
cloud = roary[roary.sum(axis=1) < roary.shape[1]*0.15].shape[0]
total = roary.shape[0]
def my_autopct(pct):
val=int(pct*total/100.0)
return '{v:d}'.format(v=val)
a=plt.pie([core, softcore, shell, cloud],
labels=['core\n(%d strains)'%roary.shape[1],
'soft-core\n(%d <= strains < %d)'%(roary.shape[1]*.95,
roary.shape[1]),
'shell\n(%d <= strains < %d)'%(roary.shape[1]*.15,
roary.shape[1]*.95),
'cloud\n(strains < %d)'%(roary.shape[1]*.15)],
explode=[0.1, 0.05, 0.02, 0], radius=0.9,
colors=[(0, 0, 1, float(x)/total) for x in (core, softcore, shell, cloud)],
autopct=my_autopct)
"""
Explanation: Roary
End of explanation
"""
|
sdss/marvin | docs/sphinx/jupyter/whats_new_v21.ipynb | bsd-3-clause | import matplotlib
%matplotlib inline
# only necessary if you have a local DB
from marvin import config
config.forceDbOff()
"""
Explanation: What's New in Marvin 2.1
Marvin is Python 3.5+ compliant!
End of explanation
"""
from marvin.tools.cube import Cube
plateifu = '8485-1901'
cube = Cube(plateifu=plateifu)
print(cube)
list(cube.nsa.keys())
# get the mass of the galaxy
cube.nsa.elpetro_logmass
"""
Explanation: Web
Interactive NASA-Sloan Atlas (NSA) Parameter Visualization
http://www.sdss.org/dr13/manga/manga-target-selection/nsa/
- Drag-and-drop parameter names from table onto axis name to change quantity on axis
- Click Box-and-whisker button and scroll horizontally to see distributions of selected parameters.
- Click on arrow in upper right corner of table to show all parameters.
Python snippets (Cube, Spectrum, Map, Query)
Tools
NASA-Sloan Atlas (NSA) Parameters
Cube.nsa or Maps.nsa
End of explanation
"""
from marvin.tools.maps import Maps
maps = Maps(plateifu=plateifu)
print(maps)
haflux = maps['emline_gflux_ha_6564']
print(haflux)
fig, ax = haflux.plot()
stvel = maps['stellar_vel']
fig, ax = stvel.plot()
stsig = maps['stellar_sigma']
fig, ax = stsig.plot()
"""
Explanation: Map Plotting
Completely redesigned map plotting
uses DAP bitmasks (NOVALUE, BADVALUE, MATHERROR, BADFIT, and DONOTUSE) and masks spaxels with ivar = 0
uses hatching for regions with data (i.e., a spectrum) but no measurement by the DAP
clips at 5th and 95th percentiles (10th and 90th percentiles for velocity and sigma plots)
velocity plots are symmetric about 0
minimum SNR is 1
End of explanation
"""
masks, fig, ax = maps.get_bpt()
# this is the global mask for star-forming spaxels. It can be used to do selections on any other map property.
masks['sf']['global']
# let's look at the h-alpha flux values for the star-forming spaxels
haflux.value[masks['sf']['global']]
# let's get the stellar velocity values for the star-forming spaxels
stvel.value[masks['sf']['global']]
# the BPT uses a strict classification scheme based on the BPTs for NII, SII, and OI. If you do not want to use OI,
# you can turn it off
mask, fig, ax = maps.get_bpt(use_oi=False)
"""
Explanation: BPT Diagrams
Classify spaxels in a given Maps object according to BPT diagrams! Will return spaxel classifications for star-forming, composite, seyfert, liner, and ambiguous. Note: there is currently a bug in the BPT code that returns incorrect composite spaxels. This is fixed in a 2.1.2 patch that will be released soon.
End of explanation
"""
masks, fig, ax = maps.get_bpt(snr_min=5)
"""
Explanation: the BPT uses a default minimum SNR threshold cutoff of 3 on each emission line. You can change this globally using the snr keyword. Note: this keyword will change to snr_min in the upcoming 2.1.2 patch.
End of explanation
"""
masks, fig, ax = maps.get_bpt(snr_min={'ha':5, 'sii':1})
"""
Explanation: or you change it for individual emission lines. It will use the default value of 3 for all lines you do not specify.
End of explanation
"""
|
lorgor/vulnmine | vulnmine/ipynb/170523-train-vendor-match.ipynb | gpl-3.0 | # Initialize
import pandas as pd
import numpy as np
# Show versions of all installed packages to help debug incompatibilities.
# Note: pip.get_installed_distributions was removed in pip 10+;
# importlib.metadata provides the same information.
from importlib import metadata
for dist in metadata.distributions():
    print(dist.metadata["Name"], dist.version)
"""
Explanation: Train vendor matching algorithm
The ML classification algorithm has to be retrained from time to time, e.g. when scikit-learn undergoes a major release upgrade.
The specific algorithm used is a RandomForest classifier. In initial testing using a k-fold cross-validation approach, this algorithm outperformed several other simple classification algorithms.
The training proceeds as follows:
First the algorithm is tuned for typical data by using a grid search.
Next the ML classifier is run on the training data using the optimum parameters.
Finally the trained model is stored for future use.
End of explanation
"""
try:
df_label_vendors = pd.io.parsers.read_csv(
"/home/jovyan/work/shared/data/csv/label_vendors.csv",
error_bad_lines=False,
warn_bad_lines=True,
quotechar='"',
encoding='utf-8')
except IOError as e:
print('\n\n***I/O error({0}): {1}\n\n'.format(
e.errno, e.strerror))
# except ValueError:
# self.logger.critical('Could not convert data to an integer.')
except:
print(
'\n\n***Unexpected error: {0}\n\n'.format(
sys.exc_info()[0]))
raise
# Number of records / columns
df_label_vendors.shape
# Print out some sample values
df_label_vendors.sample(5)
# Check that all rows are labelled
# (Should return "False")
df_label_vendors['match'].isnull().any()
# Format training data as "X" == "features, "y" == target.
# The target value is the 1st column.
df_match_train1 = df_label_vendors[['match','fz_ptl_ratio', 'fz_ptl_tok_sort_ratio', 'fz_ratio', 'fz_tok_set_ratio', 'fz_uwratio','ven_len', 'pu0_len']]
# Convert into 2 numpy arrays for the scikit-learn ML classification algorithms.
np_match_train1 = np.asarray(df_match_train1)
X, y = np_match_train1[:, 1:], np_match_train1[:, 0]
print(X.shape, y.shape)
"""
Explanation: Read in the vendor training data
Read in the manually labelled vendor training data.
Format it and convert to two numpy arrays for input to the scikit-learn ML algorithm.
End of explanation
"""
# Now find optimum parameters for model using Grid Search
from time import time
from scipy.stats import randint as sp_randint
from sklearn.model_selection import RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
# build a classifier
clf = RandomForestClassifier()
# Utility function to report best scores
def report(results, n_top=3):
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results['rank_test_score'] == i)
for candidate in candidates:
print("Model with rank: {0}".format(i))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
results['mean_test_score'][candidate],
results['std_test_score'][candidate]))
print("Parameters: {0}".format(results['params'][candidate]))
print("")
# specify parameters and distributions to sample from
param_dist = {"n_estimators": sp_randint(20, 100),
"max_depth": [3, None],
"max_features": sp_randint(1,7),
"min_samples_split": sp_randint(2,7),
"min_samples_leaf": sp_randint(1, 7),
"bootstrap": [True, False],
"class_weight": ['auto', None],
"criterion": ["gini", "entropy"]}
# run randomized search
n_iter_search = 40
random_search = RandomizedSearchCV(clf, param_distributions=param_dist,
n_iter=n_iter_search)
start = time()
random_search.fit(X, y)
print("RandomizedSearchCV took %.2f seconds for %d candidates"
" parameter settings." % ((time() - start), n_iter_search))
report(random_search.cv_results_)
"""
Explanation: Use a grid search to tune the ML algorithm
Once the best algorithm has been determined, it should be tuned for optimal performance with the data.
This is done using a grid search. From the scikit-learn documentation:
Parameters that are not directly learnt within estimators can be set by searching a parameter space for the best cross-validation score... Any parameter provided when constructing an estimator may be optimized in this manner.
Rather than do a compute-intensive search of the entire parameter space, a randomized search is done to find reasonably efficient parameters.
This code was modified from the scikit-learn sample code.
End of explanation
"""
clf = RandomForestClassifier(
bootstrap=True,
min_samples_leaf=2,
n_estimators=40,
min_samples_split=4,
criterion='entropy',
max_features=3,
max_depth=3,
class_weight=None
)
# Train model on original training data
clf.fit(X, y)
# save model for future use
from sklearn.externals import joblib
joblib.dump(clf, '/home/jovyan/work/shared/data/models/vendor_classif_trained_Rdm_Forest.pkl.z')
# Test loading
clf = joblib.load('/home/jovyan/work/shared/data/models/vendor_classif_trained_Rdm_Forest.pkl.z' )
"""
Explanation: Run the ML classifier with optimum parameters on the test data
Based on the above, and ignoring default values, the optimum set of parameters would be something like the following:
'bootstrap': True, 'min_samples_leaf': 2, 'n_estimators': 40, 'min_samples_split': 4, 'criterion': 'entropy', 'max_features': 3, 'max_depth': 3, 'class_weight': None
The RandomForest classifier is now trained on the test data to produce the model.
End of explanation
"""
|
chebee7i/palettable | demo/Cubehelix Demo.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Cubehelix Color Maps in Palettable
Cubehelix was designed by D.A. Green to provide a color mapping that would degrade gracefully to grayscale without losing information. This quality makes Cubehelix very useful for continuous colour scales in scientific visualizations that might be printed in grayscale at some point. James Davenport popularized Cubehelix among matplotlib users, and now this iteration brings Cubehelix to Palettable's beautiful API for managing color maps.
End of explanation
"""
from palettable import cubehelix
"""
Explanation: The Cubehelix functionality is contained within the palettable.cubehelix name space, and shares the same API as palettable.tableau, palettable.brewer.sequential, palettable.qualitative, palettable.diverging, and palettable.wesandersen, for instance.
End of explanation
"""
cube = cubehelix.Cubehelix.make(n=5, start_hue=240., end_hue=-300.,min_sat=1., max_sat=2.5,
min_light=0.3, max_light=0.8, gamma=.9)
cube.show_as_blocks() # requires ipythonblocks
"""
Explanation: Making your own Cubehelix Maps
Unlike other color maps, Cubehelix is entirely algorithmic. Thus you can make your own map by instantiating a Cubehelix object with the Cubehelix.make() class method. For example, to make a color palette with 5 colors:
End of explanation
"""
print(cubehelix.Cubehelix.make.__doc__)
"""
Explanation: The Cubehelix docstring provides some guidance on how to build custom Cubehelix maps.
End of explanation
"""
cubehelix.print_maps()
"""
Explanation: Pre-made Cubehelix Maps
Of course, you can also use a number of pre-made color maps. The pre-made cubehelix maps have 16 computed colors. Since matplotlib's color maps are linearly interpolated, these work great as continuous color maps.
Like the other color maps in Palettable, the cubehelix maps are available from the palettable.cubehelix namespace. Here is a listing of color map names:
End of explanation
"""
cubehelix.classic_16.show_continuous_image()
cubehelix.perceptual_rainbow_16.show_continuous_image()
cubehelix.purple_16.show_continuous_image()
cubehelix.jim_special_16.show_continuous_image()
cubehelix.red_16.show_continuous_image()
cubehelix.cubehelix1_16.show_continuous_image()
cubehelix.cubehelix2_16.show_continuous_image()
cubehelix.cubehelix3_16.show_continuous_image()
"""
Explanation: And examples:
End of explanation
"""
cubehelix.cubehelix3_16_r.show_continuous_image()
"""
Explanation: Of course, you can reverse any color map by appending _r to the map's name.
End of explanation
"""
|
frankbearzou/Data-analysis | Pixar Movies/Pixar Movies.ipynb | mit | pixar_movies['Domestic %'] = pixar_movies['Domestic %'].str.rstrip('%').astype('float')
pixar_movies['International %'] = pixar_movies['International %'].str.rstrip('%').astype('float')
"""
Explanation: Data cleaning
The Domestic % and International % columns contain strings ending in %, stored with object dtype, so the % must be stripped and the values converted to float.
End of explanation
"""
pixar_movies['IMDB Score'] = pixar_movies['IMDB Score'] * 10
filtered_pixar = pixar_movies.dropna()
pixar_movies.set_index('Movie', inplace=True)
filtered_pixar.set_index('Movie', inplace=True)
pixar_movies
"""
Explanation: among the score columns, RT Score and Metacritic Score use a 100-point scale, but IMDB Score uses a 10-point scale, so IMDB Score is rescaled to 100 points for consistency.
End of explanation
"""
critics_reviews = pixar_movies[['RT Score', 'IMDB Score', 'Metacritic Score']]
critics_reviews.plot(figsize=(10,6))
plt.show()
"""
Explanation: Data visualization
How do the Pixar films fare across each of the major review sites?
End of explanation
"""
critics_reviews.plot(kind='box', figsize=(9,5))
plt.show()
"""
Explanation: How are the average ratings from each review site across all the movies distributed?
End of explanation
"""
revenue_proportions = filtered_pixar[['Domestic %', 'International %']]
revenue_proportions.plot(kind='bar', stacked=True, figsize=(12,6))
plt.show()
"""
Explanation: How has the ratio of where the revenue comes from changed since the first movie? Now that Pixar is better known internationally, is more revenue being made internationally for newer movies?
End of explanation
"""
filtered_pixar[['Oscars Nominated', 'Oscars Won']].plot(kind='bar', figsize=(12,6))
plt.show()
"""
Explanation: Is there any correlation between the number of Oscars a movie was nominated for and the number it actually won?
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/launching_into_ml/solutions/rapid_prototyping_bqml_automl.ipynb | apache-2.0 | import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Install Python package dependencies.
print("Installing libraries")
! pip3 install {USER_FLAG} --quiet google-cloud-pipeline-components kfp
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform google-cloud-bigquery
"""
Explanation: Experimenting BQML and AutoML with Vertex AI
Overview
This notebook demonstrates how to use Vertex AI Pipelines to rapid prototype a model using both AutoML and BQML, do an evaluation comparison, for a baseline, before progressing to a custom model.
Learning objectives
In this notebook, you learn how to use Vertex AI Predictions for rapid prototyping a model.
This notebook uses the following Google Cloud ML services:
Vertex AI Pipelines
Vertex AI AutoML
Vertex AI BigQuery ML
Google Cloud Pipeline Components
The steps performed include:
Create a BigQuery and Vertex AI training dataset.
Train a BigQuery ML and AutoML model.
Extract evaluation metrics from the BigQueryML and AutoML models.
Select and deploy the best trained model.
Test the deployed model infrastructure.
Run the pipeline.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
<img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/automl_and_bqml.png" />
Dataset
The Abalone Dataset
<img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/dataset.png" />
<p>Dataset Credits</p>
<p>Dua, D. and Graff, C. (2019). UCI Machine Learning Repository <a href="http://archive.ics.uci.edu/ml">http://archive.ics.uci.edu/ml</a>. Irvine, CA: University of California, School of Information and Computer Science.</p>
<p><a href="https://archive.ics.uci.edu/ml/datasets/abalone">Direct link</a></p>
Attribute Information:
<p>Given are the attribute name, attribute type, measurement unit, and a brief description. The number of rings is the value to predict: either as a continuous value or as a classification problem.</p>
<body>
<table>
<tr>
<th>Name</th>
<th>Data Type</th>
<th>Measurement Unit</th>
<th>Description</th>
</tr>
<tr>
<td>Sex</td>
<td>nominal</td>
<td>--</td>
<td>M, F, and I (infant)</td>
</tr>
<tr>
<td>Length</td>
<td>continuous</td>
<td>mm</td>
<td>Longest shell measurement</td>
</tr>
<tr>
<td>Diameter</td>
<td>continuous</td>
<td>mm</td>
<td>perpendicular to length</td>
</tr>
<tr>
<td>Height</td>
<td>continuous</td>
<td>mm</td>
<td>with meat in shell</td>
</tr>
<tr>
<td>Whole weight</td>
<td>continuous</td>
<td>grams</td>
<td>whole abalone</td>
</tr>
<tr>
<td>Shucked weight</td>
<td>continuous</td>
<td>grams</td>
<td>weight of meat</td>
</tr>
<tr>
<td>Viscera weight</td>
<td>continuous</td>
<td>grams</td>
<td>gut weight (after bleeding)</td>
</tr>
<tr>
<td>Shell weight</td>
<td>continuous</td>
<td>grams</td>
<td>after being dried</td>
</tr>
<tr>
<td>Rings</td>
<td>integer</td>
<td>--</td>
<td>+1.5 gives the age in years</td>
</tr>
</table>
</body>
Install additional packages
End of explanation
"""
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Please ignore any incompatibility warnings and errors.
End of explanation
"""
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
"""
Explanation: Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
"""
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
! gcloud config set project $PROJECT_ID
"""
Explanation: Otherwise, set your project ID here.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = f"gs://{BUCKET_NAME}"
if REGION == "[your-region]":
REGION = "us-central1"
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this notebook, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. It is suggested that you choose a region where Vertex AI services are
available.
End of explanation
"""
! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_URI
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
print("Service Account:", SERVICE_ACCOUNT)
"""
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
"""
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI
"""
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
"""
import sys
from typing import NamedTuple
from google.cloud import aiplatform as vertex
from google.cloud import bigquery
from google_cloud_pipeline_components import \
aiplatform as vertex_pipeline_components
from google_cloud_pipeline_components.experimental import \
bigquery as bq_components
from kfp import dsl
from kfp.v2 import compiler
from kfp.v2.dsl import Artifact, Input, Metrics, Output, component
"""
Explanation: Required imports
End of explanation
"""
PIPELINE_JSON_PKG_PATH = "rapid_prototyping.json"
PIPELINE_ROOT = f"gs://{BUCKET_NAME}/pipeline_root"
DATA_FOLDER = f"{BUCKET_NAME}/data"
RAW_INPUT_DATA = f"gs://{DATA_FOLDER}/abalone.csv"
BQ_DATASET = "j90wipxexhrgq3cquanc5" # @param {type:"string"}
BQ_LOCATION = "US" # @param {type:"string"}
BQ_LOCATION = BQ_LOCATION.upper()
BQML_EXPORT_LOCATION = f"gs://{BUCKET_NAME}/artifacts/bqml"
DISPLAY_NAME = "rapid-prototyping"
ENDPOINT_DISPLAY_NAME = f"{DISPLAY_NAME}_endpoint"
image_prefix = REGION.split("-")[0]
BQML_SERVING_CONTAINER_IMAGE_URI = (
f"{image_prefix}-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest"
)
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
"""
Explanation: Determine some project and pipeline variables
Instructions:
- Make sure the GCS bucket and the BigQuery Dataset do not exist. This script may delete any existing content.
- Your bucket must be in the same region as your Vertex AI resources.
- BQ region can be US or EU;
- Make sure your preferred Vertex AI region is supported [link].
End of explanation
"""
! gsutil cp gs://cloud-samples-data/vertex-ai/community-content/datasets/abalone/abalone.data {RAW_INPUT_DATA}
"""
Explanation: Downloading the data
The cell below will download the dataset into a CSV file and save it in GCS
End of explanation
"""
@component(base_image="python:3.9", packages_to_install=["google-cloud-bigquery"])
def import_data_to_bigquery(
project: str,
bq_location: str,
bq_dataset: str,
gcs_data_uri: str,
raw_dataset: Output[Artifact],
table_name_prefix: str = "abalone",
):
from google.cloud import bigquery
# TODO 1: Construct a BigQuery client object.
client = bigquery.Client(project=project, location=bq_location)
def load_dataset(gcs_uri, table_id):
job_config = bigquery.LoadJobConfig(
schema=[
bigquery.SchemaField("Sex", "STRING"),
bigquery.SchemaField("Length", "NUMERIC"),
bigquery.SchemaField("Diameter", "NUMERIC"),
bigquery.SchemaField("Height", "NUMERIC"),
bigquery.SchemaField("Whole_weight", "NUMERIC"),
bigquery.SchemaField("Shucked_weight", "NUMERIC"),
bigquery.SchemaField("Viscera_weight", "NUMERIC"),
bigquery.SchemaField("Shell_weight", "NUMERIC"),
bigquery.SchemaField("Rings", "NUMERIC"),
],
skip_leading_rows=1,
# The source format defaults to CSV, so the line below is optional.
source_format=bigquery.SourceFormat.CSV,
)
print(f"Loading {gcs_uri} into {table_id}")
load_job = client.load_table_from_uri(
gcs_uri, table_id, job_config=job_config
) # Make an API request.
load_job.result() # Waits for the job to complete.
destination_table = client.get_table(table_id) # Make an API request.
print("Loaded {} rows.".format(destination_table.num_rows))
def create_dataset_if_not_exist(bq_dataset_id, bq_location):
print(
"Checking for existence of bq dataset. If it does not exist, it creates one"
)
dataset = bigquery.Dataset(bq_dataset_id)
dataset.location = bq_location
dataset = client.create_dataset(dataset, exists_ok=True, timeout=300)
print(f"Created dataset {dataset.full_dataset_id} @ {dataset.location}")
bq_dataset_id = f"{project}.{bq_dataset}"
create_dataset_if_not_exist(bq_dataset_id, bq_location)
raw_table_name = f"{table_name_prefix}_raw"
table_id = f"{project}.{bq_dataset}.{raw_table_name}"
print("Deleting any tables that might have the same name on the dataset")
client.delete_table(table_id, not_found_ok=True)
print("will load data to table")
load_dataset(gcs_data_uri, table_id)
raw_dataset_uri = f"bq://{table_id}"
raw_dataset.uri = raw_dataset_uri
"""
Explanation: Pipeline Components
Import to BQ
This component takes the CSV file and imports it into a table in BigQuery. If the dataset does not exist, it will be created. If a table with the same name already exists, it will be deleted and recreated.
End of explanation
"""
@component(
base_image="python:3.9",
packages_to_install=["google-cloud-bigquery"],
) # pandas, pyarrow and fsspec required to export bq data to csv
def split_datasets(
raw_dataset: Input[Artifact],
bq_location: str,
) -> NamedTuple(
"bqml_split",
[
("dataset_uri", str),
("dataset_bq_uri", str),
("test_dataset_uri", str),
],
):
from collections import namedtuple
from google.cloud import bigquery
raw_dataset_uri = raw_dataset.uri
table_name = raw_dataset_uri.split("bq://")[-1]
print(table_name)
raw_dataset_uri = table_name.split(".")
print(raw_dataset_uri)
project = raw_dataset_uri[0]
bq_dataset = raw_dataset_uri[1]
bq_raw_table = raw_dataset_uri[2]
client = bigquery.Client(project=project, location=bq_location)
def split_dataset(table_name_dataset):
training_dataset_table_name = f"{project}.{bq_dataset}.{table_name_dataset}"
split_query = f"""
CREATE OR REPLACE TABLE
`{training_dataset_table_name}`
AS
SELECT
Sex,
Length,
Diameter,
Height,
Whole_weight,
Shucked_weight,
Viscera_weight,
Shell_weight,
Rings,
CASE(ABS(MOD(FARM_FINGERPRINT(TO_JSON_STRING(f)), 10)))
WHEN 9 THEN 'TEST'
WHEN 8 THEN 'VALIDATE'
ELSE 'TRAIN' END AS split_col
FROM
`{project}.{bq_dataset}.abalone_raw` f
"""
dataset_uri = f"{project}.{bq_dataset}.{bq_raw_table}"
print("Splitting the dataset")
query_job = client.query(split_query) # Make an API request.
query_job.result()
print(dataset_uri)
print(split_query.replace("\n", " "))
return training_dataset_table_name
def create_test_view(training_dataset_table_name, test_view_name="dataset_test"):
view_uri = f"{project}.{bq_dataset}.{test_view_name}"
query = f"""
CREATE OR REPLACE VIEW `{view_uri}` AS SELECT
Sex,
Length,
Diameter,
Height,
Whole_weight,
Shucked_weight,
Viscera_weight,
Shell_weight,
Rings
FROM `{training_dataset_table_name}` f
WHERE
f.split_col = 'TEST'
"""
print(f"Creating view for --> {test_view_name}")
print(query.replace("\n", " "))
query_job = client.query(query) # Make an API request.
query_job.result()
return view_uri
table_name_dataset = "dataset"
dataset_uri = split_dataset(table_name_dataset)
test_dataset_uri = create_test_view(dataset_uri)
dataset_bq_uri = "bq://" + dataset_uri
print(f"dataset: {dataset_uri}")
result_tuple = namedtuple(
"bqml_split",
["dataset_uri", "dataset_bq_uri", "test_dataset_uri"],
)
return result_tuple(
dataset_uri=str(dataset_uri),
dataset_bq_uri=str(dataset_bq_uri),
test_dataset_uri=str(test_dataset_uri),
)
"""
Explanation: Split Datasets
Splits the dataset into three slices:
- TRAIN
- EVALUATE
- TEST
AutoML and BigQuery ML use different nomenclatures for data splits:
BQML
How BQML splits the data: link
AutoML
How AutoML splits the data: link
<ul>
<li>Model trials
<p>The training set is used to train models with different preprocessing, architecture, and hyperparameter option combinations. These models are evaluated on the validation set for quality, which guides the exploration of additional option combinations. The best parameters and architectures determined in the parallel tuning phase are used to train two ensemble models as described below.</p></li>
<li>Model evaluation
<p>
Vertex AI trains an evaluation model, using the training and validation sets as training data. Vertex AI generates the final model evaluation metrics on this model, using the test set. This is the first time in the process that the test set is used. This approach ensures that the final evaluation metrics are an unbiased reflection of how well the final trained model will perform in production.</p></li>
<li>Serving model
<p>A model is trained with the training, validation, and test sets, to maximize the amount of training data. This model is the one that you use to request predictions.</p></li>
</ul>
End of explanation
"""
def _query_create_model(
project_id: str,
bq_dataset: str,
training_data_uri: str,
model_name: str = "linear_regression_model_prototyping",
):
model_uri = f"{project_id}.{bq_dataset}.{model_name}"
model_options = """OPTIONS
( MODEL_TYPE='LINEAR_REG',
input_label_cols=['Rings'],
DATA_SPLIT_METHOD='CUSTOM',
DATA_SPLIT_COL='split_col'
)
"""
query = f"""
CREATE OR REPLACE MODEL
`{model_uri}`
{model_options}
AS
SELECT
Sex,
Length,
Diameter,
Height,
Whole_weight,
Shucked_weight,
Viscera_weight,
Shell_weight,
Rings,
CASE(split_col)
WHEN 'TEST' THEN TRUE
ELSE
FALSE
END
AS split_col
FROM
`{training_data_uri}`;
"""
print(query.replace("\n", " "))
return query
"""
Explanation: Train BQML Model
For this demo, you will use a simple linear regression model on BQML. However, you can be creative with other model architectures, such as Deep Neural Networks, XGboost, Logistic Regression, etc.
For a full list of models supported by BQML, look here: End-to-end user journey for each model.
As pointed out before, BQML and AutoML use different split terminologies, so you do an adaptation of the <i>split_col</i> column directly on the SELECT portion of the CREATE model query:
When the value of DATA_SPLIT_METHOD is 'CUSTOM', the corresponding column should be of type BOOL. The rows with TRUE or NULL values are used as evaluation data. Rows with FALSE values are used as training data.
End of explanation
"""
@component(base_image="python:3.9")
def interpret_bqml_evaluation_metrics(
bqml_evaluation_metrics: Input[Artifact], metrics: Output[Metrics]
) -> dict:
import math
metadata = bqml_evaluation_metrics.metadata
for r in metadata["rows"]:
rows = r["f"]
schema = metadata["schema"]["fields"]
output = {}
for metric, value in zip(schema, rows):
metric_name = metric["name"]
val = float(value["v"])
output[metric_name] = val
metrics.log_metric(metric_name, val)
if metric_name == "mean_squared_error":
rmse = math.sqrt(val)
metrics.log_metric("root_mean_squared_error", rmse)
metrics.log_metric("framework", "BQML")
print(output)
"""
Explanation: Interpret BQML Model Evaluation
When you do Hyperparameter tuning on the model creation query, the output of the pre-built component BigqueryEvaluateModelJobOp will be a table with the metrics obtained by BQML when training the model. In your BigQuery console, they look like the image below. You need to access them programmatically so you can compare them to the AutoML model.
<img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/bqml-evaluate.png?">
The cell below shows you an example of how this can be done. BQML does not give you a root mean squared error to the list of metrics, so we're manually adding it to the metrics dictionary. For more information about the output, please check BQML's documentation.
End of explanation
"""
# Inspired by Andrew Ferlitsch's work on https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_automl_pipeline_components.ipynb
@component(
base_image="python:3.9",
packages_to_install=[
"google-cloud-aiplatform",
],
)
def interpret_automl_evaluation_metrics(
region: str, model: Input[Artifact], metrics: Output[Metrics]
):
"""'
For a list of available regression metrics, go here: gs://google-cloud-aiplatform/schema/modelevaluation/regression_metrics_1.0.0.yaml.
More information on available metrics for different types of models: https://cloud.google.com/vertex-ai/docs/predictions/online-predictions-automl
"""
import google.cloud.aiplatform.gapic as gapic
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{region}-aiplatform.googleapis.com"}
model_service_client = gapic.ModelServiceClient(client_options=client_options)
model_resource_name = model.metadata["resourceName"]
model_evaluations = model_service_client.list_model_evaluations(
parent=model_resource_name
)
# TODO 2: List the model evaluations.
model_evaluation = list(model_evaluations)[0]
available_metrics = [
"meanAbsoluteError",
"meanAbsolutePercentageError",
"rSquared",
"rootMeanSquaredError",
"rootMeanSquaredLogError",
]
output = dict()
for x in available_metrics:
val = model_evaluation.metrics.get(x)
output[x] = val
metrics.log_metric(str(x), float(val))
metrics.log_metric("framework", "AutoML")
print(output)
"""
Explanation: Interpret AutoML Model Evaluation
Similar to BQML, AutoML also generates metrics during its model creation. These can be accessed in the UI, as seen below:
<img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/automl-evaluate.png" />
Since you don't have a pre-built-component to access these metrics programmatically, you can use the Vertex AI GAPIC (Google API Compiler), which auto-generates low-level gRPC interfaces to the service.
End of explanation
"""
@component(base_image="python:3.9")
def select_best_model(
metrics_bqml: Input[Metrics],
metrics_automl: Input[Metrics],
thresholds_dict_str: str,
best_metrics: Output[Metrics],
reference_metric_name: str = "rmse",
) -> NamedTuple(
"Outputs",
[
("deploy_decision", str),
("best_model", str),
("metric", float),
("metric_name", str),
],
):
import json
from collections import namedtuple
best_metric = float("inf")
best_model = None
# BQML and AutoML use different metric names.
metric_possible_names = []
if reference_metric_name == "mae":
metric_possible_names = ["meanAbsoluteError", "mean_absolute_error"]
elif reference_metric_name == "rmse":
metric_possible_names = ["rootMeanSquaredError", "root_mean_squared_error"]
metric_bqml = float("inf")
metric_automl = float("inf")
print(metrics_bqml.metadata)
print(metrics_automl.metadata)
for x in metric_possible_names:
try:
metric_bqml = metrics_bqml.metadata[x]
print(f"Metric bqml: {metric_bqml}")
except:
print(f"{x} does not exist int the BQML dictionary")
try:
metric_automl = metrics_automl.metadata[x]
print(f"Metric automl: {metric_automl}")
except:
print(f"{x} does not exist on the AutoML dictionary")
# Change the condition if higher is better.
print(f"Comparing BQML ({metric_bqml}) vs AutoML ({metric_automl})")
if metric_bqml <= metric_automl:
best_model = "bqml"
best_metric = metric_bqml
best_metrics.metadata = metrics_bqml.metadata
else:
best_model = "automl"
best_metric = metric_automl
best_metrics.metadata = metrics_automl.metadata
thresholds_dict = json.loads(thresholds_dict_str)
deploy = False
# TODO 3: Change the condition if higher is better.
if best_metric < thresholds_dict[reference_metric_name]:
deploy = True
if deploy:
deploy_decision = "true"
else:
deploy_decision = "false"
print(f"Which model is best? {best_model}")
print(f"What metric is being used? {reference_metric_name}")
print(f"What is the best metric? {best_metric}")
print(f"What is the threshold to deploy? {thresholds_dict_str}")
print(f"Deploy decision: {deploy_decision}")
Outputs = namedtuple(
"Outputs", ["deploy_decision", "best_model", "metric", "metric_name"]
)
return Outputs(
deploy_decision=deploy_decision,
best_model=best_model,
metric=best_metric,
metric_name=reference_metric_name,
)
"""
Explanation: Model Selection
Now that you have evaluated the models independently, you are going to move forward with only one of them. This selection is based on the model evaluation metrics gathered in the previous steps.
Bear in mind that BQML and AutoML use different evaluation metric names, hence the component above maps between the two nomenclatures.
End of explanation
"""
@component(base_image="python:3.9", packages_to_install=["google-cloud-aiplatform"])
def validate_infrastructure(
endpoint: Input[Artifact],
) -> NamedTuple(
"validate_infrastructure_output", [("instance", str), ("prediction", float)]
):
import json
from collections import namedtuple
from google.cloud import aiplatform
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
def treat_uri(uri):
return uri[uri.find("projects/") :]
def request_prediction(endp, instance):
instance = json_format.ParseDict(instance, Value())
instances = [instance]
parameters_dict = {}
parameters = json_format.ParseDict(parameters_dict, Value())
response = endp.predict(instances=instances, parameters=parameters)
print("deployed_model_id:", response.deployed_model_id)
print("predictions: ", response.predictions)
# The predictions are a google.protobuf.Value representation of the model's predictions.
predictions = response.predictions
for pred in predictions:
if type(pred) is dict and "value" in pred.keys():
# AutoML predictions
prediction = pred["value"]
elif type(pred) is list:
# BQML Predictions return different format
prediction = pred[0]
return prediction
endpoint_uri = endpoint.uri
treated_uri = treat_uri(endpoint_uri)
instance = {
"Sex": "M",
"Length": 0.33,
"Diameter": 0.255,
"Height": 0.08,
"Whole_weight": 0.205,
"Shucked_weight": 0.0895,
"Viscera_weight": 0.0395,
"Shell_weight": 0.055,
}
instance_json = json.dumps(instance)
print("Will use the following instance: " + instance_json)
endpoint = aiplatform.Endpoint(treated_uri)
# TODO 4: Make a simple prediction
prediction = request_prediction(endpoint, instance)
result_tuple = namedtuple(
"validate_infrastructure_output", ["instance", "prediction"]
)
return result_tuple(instance=str(instance_json), prediction=float(prediction))
"""
Explanation: Validate Infrastructure
Once the best model has been deployed, you will validate the endpoint by making a simple prediction request to it.
End of explanation
"""
pipeline_params = {
"project": PROJECT_ID,
"region": REGION,
"gcs_input_file_uri": RAW_INPUT_DATA,
"bq_dataset": BQ_DATASET,
"bq_location": BQ_LOCATION,
"bqml_model_export_location": BQML_EXPORT_LOCATION,
"bqml_serving_container_image_uri": BQML_SERVING_CONTAINER_IMAGE_URI,
"endpoint_display_name": ENDPOINT_DISPLAY_NAME,
"thresholds_dict_str": '{"rmse": 2.5}',
}
@dsl.pipeline(name=DISPLAY_NAME, description="Rapid Prototyping")
def train_pipeline(
project: str,
gcs_input_file_uri: str,
region: str,
bq_dataset: str,
bq_location: str,
bqml_model_export_location: str,
bqml_serving_container_image_uri: str,
endpoint_display_name: str,
thresholds_dict_str: str,
):
# Imports data to BigQuery using a custom component.
import_data_to_bigquery_op = import_data_to_bigquery(
project, bq_location, bq_dataset, gcs_input_file_uri
)
raw_dataset = import_data_to_bigquery_op.outputs["raw_dataset"]
# Splits the BQ dataset using a custom component.
split_datasets_op = split_datasets(raw_dataset, bq_location=bq_location)
# Generates the query to create a BQML using a static function.
create_model_query = _query_create_model(
project, bq_dataset, split_datasets_op.outputs["dataset_uri"]
)
# Builds BQML model using pre-built-component.
bqml_create_op = bq_components.BigqueryCreateModelJobOp(
project=project, location=bq_location, query=create_model_query
)
bqml_model = bqml_create_op.outputs["model"]
# Gather BQML evaluation metrics using a pre-built-component.
bqml_evaluate_op = bq_components.BigqueryEvaluateModelJobOp(
project=project, location=bq_location, model=bqml_model
)
bqml_eval_metrics_raw = bqml_evaluate_op.outputs["evaluation_metrics"]
# Analyzes evaluation BQML metrics using a custom component.
interpret_bqml_evaluation_metrics_op = interpret_bqml_evaluation_metrics(
bqml_evaluation_metrics=bqml_eval_metrics_raw
)
bqml_eval_metrics = interpret_bqml_evaluation_metrics_op.outputs["metrics"]
# Exports the BQML model to a GCS bucket using a pre-built-component.
bqml_export_op = bq_components.BigqueryExportModelJobOp(
project=project,
location=bq_location,
model=bqml_model,
model_destination_path=bqml_model_export_location,
).after(bqml_evaluate_op)
bqml_exported_gcs_path = bqml_export_op.outputs["exported_model_path"]
# Uploads the recently exported BQML model from GCS into Vertex AI using a pre-built-component.
bqml_model_upload_op = vertex_pipeline_components.ModelUploadOp(
project=project,
location=region,
display_name=DISPLAY_NAME + "_bqml",
artifact_uri=bqml_exported_gcs_path,
serving_container_image_uri=bqml_serving_container_image_uri,
)
bqml_vertex_model = bqml_model_upload_op.outputs["model"]
# Creates a Vertex AI Tabular dataset using a pre-built-component.
dataset_create_op = vertex_pipeline_components.TabularDatasetCreateOp(
project=project,
location=region,
display_name=DISPLAY_NAME,
bq_source=split_datasets_op.outputs["dataset_bq_uri"],
)
# Trains an AutoML Tables model using a pre-built-component.
automl_training_op = vertex_pipeline_components.AutoMLTabularTrainingJobRunOp(
project=project,
location=region,
display_name=f"{DISPLAY_NAME}_automl",
optimization_prediction_type="regression",
optimization_objective="minimize-rmse",
predefined_split_column_name="split_col",
dataset=dataset_create_op.outputs["dataset"],
target_column="Rings",
column_transformations=[
{"categorical": {"column_name": "Sex"}},
{"numeric": {"column_name": "Length"}},
{"numeric": {"column_name": "Diameter"}},
{"numeric": {"column_name": "Height"}},
{"numeric": {"column_name": "Whole_weight"}},
{"numeric": {"column_name": "Shucked_weight"}},
{"numeric": {"column_name": "Viscera_weight"}},
{"numeric": {"column_name": "Shell_weight"}},
{"numeric": {"column_name": "Rings"}},
],
)
automl_model = automl_training_op.outputs["model"]
# Analyzes evaluation AutoML metrics using a custom component.
automl_eval_op = interpret_automl_evaluation_metrics(
region=region, model=automl_model
)
automl_eval_metrics = automl_eval_op.outputs["metrics"]
# 1) Decides which model is best (AutoML vs BQML);
# 2) Determines if the best model meets the deployment condition.
best_model_task = select_best_model(
metrics_bqml=bqml_eval_metrics,
metrics_automl=automl_eval_metrics,
thresholds_dict_str=thresholds_dict_str,
)
# If the deploy condition is True, then deploy the best model.
with dsl.Condition(
best_model_task.outputs["deploy_decision"] == "true",
name="deploy_decision",
):
# Creates a Vertex AI endpoint using a pre-built-component.
endpoint_create_op = vertex_pipeline_components.EndpointCreateOp(
project=project,
location=region,
display_name=endpoint_display_name,
)
endpoint_create_op.after(best_model_task)
# In case the BQML model is the best...
with dsl.Condition(
best_model_task.outputs["best_model"] == "bqml",
name="deploy_bqml",
):
# Deploys the BQML model (now on Vertex AI) to the recently created endpoint using a pre-built component.
model_deploy_bqml_op = (
vertex_pipeline_components.ModelDeployOp( # noqa: F841
endpoint=endpoint_create_op.outputs["endpoint"],
model=bqml_vertex_model,
deployed_model_display_name=DISPLAY_NAME + "_best_bqml",
dedicated_resources_machine_type="n1-standard-2",
dedicated_resources_min_replica_count=2,
dedicated_resources_max_replica_count=2,
traffic_split={
"0": 100
}, # newly deployed model gets 100% of the traffic
).set_caching_options(False)
)
# Sends an online prediction request to the recently deployed model using a custom component.
validate_infrastructure(
endpoint=endpoint_create_op.outputs["endpoint"]
).set_caching_options(False).after(model_deploy_bqml_op)
# In case the AutoML model is the best...
with dsl.Condition(
best_model_task.outputs["best_model"] == "automl",
name="deploy_automl",
):
# Deploys the AutoML model to the recently created endpoint using a pre-built component.
model_deploy_automl_op = (
vertex_pipeline_components.ModelDeployOp( # noqa: F841
endpoint=endpoint_create_op.outputs["endpoint"],
model=automl_model,
deployed_model_display_name=DISPLAY_NAME + "_best_automl",
dedicated_resources_machine_type="n1-standard-2",
dedicated_resources_min_replica_count=2,
dedicated_resources_max_replica_count=2,
traffic_split={
"0": 100
}, # newly deployed model gets 100% of the traffic
).set_caching_options(False)
)
# Sends an online prediction request to the recently deployed model using a custom component.
validate_infrastructure(
endpoint=endpoint_create_op.outputs["endpoint"]
).set_caching_options(False).after(model_deploy_automl_op)
"""
Explanation: The Pipeline
End of explanation
"""
compiler.Compiler().compile(
pipeline_func=train_pipeline,
package_path=PIPELINE_JSON_PKG_PATH,
)
vertex.init(project=PROJECT_ID, location=REGION)
# TODO 5: Run the pipeline job
pipeline_job = vertex.PipelineJob(
display_name=DISPLAY_NAME,
template_path=PIPELINE_JSON_PKG_PATH,
pipeline_root=PIPELINE_ROOT,
parameter_values=pipeline_params,
enable_caching=False,
)
response = pipeline_job.submit()
"""
Explanation: Running the Pipeline
End of explanation
"""
pipeline_job.wait()
"""
Explanation: Wait for the pipeline to complete
Currently, your pipeline runs asynchronously because you used the submit() method. To run it synchronously, you would invoke the run() method instead.
In this last step, you block on the asynchronously executing job, waiting for its completion with the wait() method.
End of explanation
"""
vertex.init(project=PROJECT_ID, location=REGION)
delete_bucket = False
print("Will delete endpoint")
endpoints = vertex.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
vertex.Endpoint.delete(endpoint)
print("Deleted endpoint:", endpoint)
print("Will delete models")
suffix_list = ["bqml", "automl", "best"]
for suffix in suffix_list:
try:
model_display_name = f"{DISPLAY_NAME}_{suffix}"
print("Will delete model with name " + model_display_name)
models = vertex.Model.list(
filter=f"display_name={model_display_name}", order_by="create_time"
)
model = models[0]
vertex.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
print("Will delete Vertex dataset")
datasets = vertex.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
vertex.TabularDataset.delete(dataset)
print("Deleted Vertex dataset:", dataset)
pipelines = vertex.PipelineJob.list(
filter=f"pipeline_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
vertex.PipelineJob.delete(pipeline)
print("Deleted pipeline:", pipeline)
# Construct a BigQuery client object.
bq_client = bigquery.Client(project=PROJECT_ID, location=BQ_LOCATION)
# TODO(developer): Set dataset_id to the ID of the dataset to fetch.
dataset_id = f"{PROJECT_ID}.{BQ_DATASET}"
print(f"Will delete BQ dataset '{dataset_id}' from location {BQ_LOCATION}.")
# Use the delete_contents parameter to delete a dataset and its contents.
# Use the not_found_ok parameter to not receive an error if the dataset has already been deleted.
bq_client.delete_dataset(
dataset_id, delete_contents=True, not_found_ok=True
) # Make an API request.
print(f"Deleted BQ dataset '{dataset_id}' from location {BQ_LOCATION}.")
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the notebook.
Otherwise, you can delete the individual resources you created in this notebook:
End of explanation
"""
|
edwardd1/phys202-2015-work | assignments/assignment04/MatplotlibEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Matplotlib Exercise 1
Imports
End of explanation
"""
import os
assert os.path.isfile('yearssn.dat')
"""
Explanation: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
End of explanation
"""
data = np.loadtxt('yearssn.dat')
year = data[0:,0]
ssc = data[0:,1]
#raise NotImplementedError()
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
"""
Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave for grading
"""
Explanation: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave for grading
"""
Explanation: Describe the choices you have made in building this visualization and how they make it effective.
YOUR ANSWER HERE
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
|
ubcgif/gpgLabs | notebooks/em/FDEM_ThreeLoopModel.ipynb | mit | %matplotlib inline
from geoscilabs.em.FDEM3loop import interactfem3loop
from geoscilabs.em.FDEMpipe import interact_femPipe
from matplotlib import rcParams
rcParams['font.size'] = 14
"""
Explanation: Electromagnetics: 3-loop model
In the first part of this notebook, we consider a 3 loop system, consisting of a transmitter loop, receiver loop, and target loop.
<img src="https://github.com/geoscixyz/geosci-labs/blob/main/images/em/FEM3Loop/SurveyParams.png?raw=true" style="width: 60%; height: 60%"> </img>
Import Necessary Packages
End of explanation
"""
fem3loop = interactfem3loop()
fem3loop
"""
Explanation: Your Default Parameters should be:
<table>
<tr>
<th>Parameter </th>
<th>Default value</th>
</tr>
<tr>
<td>Inductance:</td>
<td>L = 0.1</td>
</tr>
<tr>
<td>Resistance:</td>
<td>R = 2000</td>
</tr>
<tr>
<td>X-center of target loop:</td>
<td>xc = 0</td>
</tr>
<tr>
<td>Y-center of target loop:</td>
<td>yc = 0</td>
</tr>
<tr>
<td>Z-center of target loop:</td>
<td>zc = 1</td>
</tr>
<tr>
<td>Inclination of target loop:</td>
<td>dincl = 0</td>
</tr>
<tr>
<td>Declination of target loop:</td>
<td>ddecl = 90</td>
</tr>
<tr>
<td>Frequency:</td>
<td>f = 10000 </td>
</tr>
<tr>
<td>Sample spacing:</td>
<td>dx = 0.25 </td>
</tr>
</table>
To use the default parameters below, either click the box for "default" or adjust the sliders for R, zc, and dx. When answering the lab questions, make sure all the sliders are where they should be!
Run FEM3loop Widget
End of explanation
"""
pipe = interact_femPipe()
pipe
"""
Explanation: Pipe Widget
In the following app, we consider a loop-loop system with a pipe target. Here, we simulate two surveys, one where the boom is oriented East-West (EW) and one where the boom is oriented North-South (NS).
<img src="https://github.com/geoscixyz/geosci-labs/blob/main/images/em/FEM3Loop/model.png?raw=true" style="width: 40%; height: 40%"> </img>
The variables are:
alpha:
$$\alpha = \frac{\omega L}{R} = \frac{2\pi f L}{R}$$
pipedepth: Depth of the pipe center
We plot the Hp/Hs ratio, as a percentage, in the widget.
End of explanation
"""
|
alexkeenan/jupyternotebookworkflow | bike_exercise_deeper_analysis.ipynb | mit | from sklearn.decomposition import PCA
X=pivoted.fillna(0).T.values
X2=PCA(2).fit_transform(X)# this tells it the number of components we want to reduce this to is 2
X2.shape
import matplotlib.pyplot as plt
plt.scatter(X2[:,0],X2[:,1])
"""
Explanation: # First thing we're going to do is reduce the dimensionality with PCA
End of explanation
"""
from sklearn.mixture import GaussianMixture
gmm=GaussianMixture(2) #reducing it to two groups
gmm.fit(X)
labels=gmm.predict(X)
labels
plt.scatter(X2[:,0],X2[:,1], c=labels, cmap="rainbow") #c is for color
plt.colorbar()
"""
Explanation: From here you can see that there are 2 distinct types of days.
One thing we could do is simply try to identify which type of day it is.
We will be using Gaussian mixture models
to help us determine which group a data point belongs to.
So PCA reduces the dimensions to the most helpful inputs,
and Gaussian mixtures help identify the groups.
End of explanation
"""
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1)
#here you can see the daily commuters! data is much cleaner
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1)
#you can see that this is much more random. These bike rides probably have less to do with commuting
"""
Explanation: Let's just look at one of these, see what it looks like in the original dataset.
End of explanation
"""
pivoted.index.dtype #interestingly the dtype is object
#if we turn this into a datetime then we can get access to a tool that will
#tell us what day of the week it is.
dayofweek=pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0],X2[:,1], c=dayofweek, cmap="rainbow") #c is for color
plt.colorbar();
# you can see that 0 to 4 is Monday to Friday.
# as you can see, there are some that are still on weekdays but
# seem to act like weekends (colored ones being grouped with the reddish ones)
dates=pd.DatetimeIndex(pivoted.columns)
dates[(labels==1)&(dayofweek<5)]
#you'll notice that many of these look like holidays. That makes sense.
"""
Explanation: Now let's find out on which days the messy rides occur...
End of explanation
"""
|
bigdata-i523/hid335 | experiment/Python_SKLearn_RandomForestClassifier.ipynb | gpl-3.0 | import sklearn
import mglearn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import Image, display
from sklearn.tree import DecisionTreeClassifier
"""
Explanation: Introduction to Machine Learning
Andreas Muller and Sarah Guido
Ch. 2 Supervised Learning
Decision Trees and Random Forests
Building a decision tree until all leaves are pure leads to models that are complex and overfit
Presence of pure leaves means the tree is 100% accurate on the training set
Each data point in training set is in a leaf that has the correct majority class
Strategies to Prevent Overfitting
Pre-pruning:
Stopping the creation of tree early
Limiting the maximum depth of the tree, or limiting maximum number of leaves
Requiring a minimum number of points in a node to keep splitting it
Pruning (post-pruning):
Building tree and then removing or collapsing nodes with little info
Classification with Decision Trees in Scikit Learn
Implemented in the DecisionTreeRegressor or DecisionTreeClassifier
scikit-learn only does pre-pruning, but not post-pruning
Import standard packages
End of explanation
"""
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print(cancer.keys())
print(cancer['target_names'])
print(cancer['feature_names'])
type(cancer)
cancer.data.shape
# Add target_df to cancer_df, and export to CSV file
#cancer_df = pd.DataFrame(cancer.data, columns=cancer.feature_names)
#target_df = pd.DataFrame(cancer.target)
#cancer_df["target"] = target_df[0]
#cancer_df.to_csv("cancer_data.csv", sep=',', encoding='utf-8')
#cancer_df.tail()
"""
Explanation: Import Dataset: WI Breast Cancer Study
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=42)
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
print("Accuracy on the training: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on the test set: {:.3f}".format(tree.score(X_test, y_test)))
"""
Explanation: Classification with Cancer Dataset
Split dataset into TRAIN and TEST sets
Build model using default setting that fully develops tree until all leaves are pure
Fix the random_state in the tree, for breaking ties internally
Evaluate the tree model using score() method
End of explanation
"""
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
print("Accuracy on the training: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on the test set: {:.3f}".format(tree.score(X_test, y_test)))
"""
Explanation: Pruning
Training set accuracy is 100% because leaves are pure
Trees can become arbitrarily deep and complex if the depth of the tree is not limited
Unpruned trees are prone to overfitting and do not generalize well to new data
Set max_depth=4
Tree depth is limited to 4 branches
Limiting depth of the tree decreases overfitting
Results in lower accuracy on training set, but improvement on test set
End of explanation
"""
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],
feature_names=cancer.feature_names, impurity=False, filled=True)
from IPython.display import display
import graphviz
with open('tree.dot') as f:
dot_graph = f.read()
display(graphviz.Source(dot_graph))
"""
Explanation: Analyzing Decision Tree
Visualize the tree using export_graphviz function from trees module
Set an option to color the nodes to reflect majority class in each node
First, need to install graphviz at terminal using brew install graphviz
End of explanation
"""
print("Feature importances:\n{}".format(tree.feature_importances_))
"""
Explanation: Visualization of Decision Tree for Breast Cancer Dataset
Even with a depth of only 4, the tree becomes complex; deeper trees are even harder to grasp
Follow paths that most of data take: number of samples per node, and value (samples per class)
Feature Importance in Trees
Rates how important each feature is for the decisions a tree makes
Values range from 0 = "not at all" to 1 = "perfectly predicts the target"
Feature importances always sum to a total of 1.
End of explanation
"""
def plot_feature_importances_cancer(model):
n_features = cancer.data.shape[1]
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), cancer.feature_names)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plot_feature_importances_cancer(tree)
"""
Explanation: Visualize Feature Importancee
Similar to how we visualized coefficients in linear model
Feature used in top split ("worst radius") is most important feature
However, features with low importance may be redundant with another feature that encodes same information
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(forest.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(forest.score(X_test, y_test)))
"""
Explanation: Building Random Forests
To build a random forests model, need to decide how many trees to build: n_estimators parameter
Trees are built completely independently of each other, based on "bootstrap" samples of dataset: n_samples
Algorithm randomly selects subset of features, number of features determined by max_features parameter
Each node of tree makes decision involving different subset of features
Bootstrap Sampling and Subsets of Features
Each decision tree in random forest is built on slightly different dataset
Each split in each tree operates on different subset of features
Critical Parameter: max_features
If max_features is set to n_features, each split can look at all features in the dataset, no randomness in selection
High max_features means the trees in a random forest will be quite similar
Low max_features means trees in a random forest will be quite different
Prediction with Random Forests
Algorithm first makes prediction for every tree in the forest
For regression, final prediction is based on average of results
For classification, a "soft voting" is applied, probabilities for all trees are then averaged, and class with highest probability is predicted
Apply Random Forests with Cancer dataset
Split data into train and test sets; build random forest with 100 trees
Random forest gives better accuracy than linear models or a single decision tree, without tuning any parameters
End of explanation
"""
plot_feature_importances_cancer(forest)
"""
Explanation: Feature Importance for Random Forest
Computed by aggregating the feature importances over the trees in the forest
Feature importances provided by Random Forest are more reliable than provided by single tree
Many more features have non-zero importance than single tree, chooses similar features
End of explanation
"""
from sklearn.ensemble import GradientBoostingClassifier
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=0)
gbrt = GradientBoostingClassifier(random_state=0, n_estimators=100, max_depth=3, learning_rate=0.01)
gbrt.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(gbrt.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(gbrt.score(X_test, y_test)))
"""
Explanation: Strengths, weaknesses, parameters
Most widely used ML method, very powerful, works well without heavy parameter tuning or scaling of the data
Very difficult to interpret tens or hundreds of trees; random forests are deeper than single decision trees
Random forests do NOT perform well with very high-dimensional, sparse data, such as text data
max_features parameter determines how random each tree is; smaller max_features reduces overfitting
Gradient Boosted Classifier Trees
Works by building trees in a serial manner, where each tree tries to correct for mistakes of previous ones
Main idea: combine many simple models, shallow trees ('weak learners'); adding more trees iteratively improves performance
Parameters
Pre-pruning, and number of trees in ensemble
learning_rate parameter controls how strongly each tree tries to correct mistakes of previous trees
Add more trees to model with n_estimators, increases model complexity
GradientBoostingClassifier on Breast Cancer dataset
With 100 trees, of maximum depth 3, and learning rate of 0.1
End of explanation
"""
gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)
gbrt.fit(X_train, y_train)
plot_feature_importances_cancer(gbrt)
"""
Explanation: Feature Importance
With 100 trees, cannot inspect them all, even if maximum depth is only 1
End of explanation
"""
|
thomasantony/CarND-Projects | Exercises/Term1/TensorFlow-Tutorials/05_Ensemble_Learning.ipynb | mit | from IPython.display import Image
Image('images/02_network_flowchart.png')
"""
Explanation: TensorFlow Tutorial #05
Ensemble Learning
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial shows how to use a so-called ensemble of convolutional neural networks. Instead of using a single neural network, we use several neural networks and average their outputs.
This is used on the MNIST data-set for recognizing hand-written digits. The ensemble improves the classification accuracy slightly on the test-set, but the difference is so small that it is possibly random. Furthermore, the ensemble mis-classifies some images that are correctly classified by some of the individual networks.
This tutorial builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text here is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.
Flowchart
The following chart shows roughly how the data flows in a single Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.
This tutorial implements an ensemble of 5 such neural networks, where the network structure is the same but the weights and other variables are different for each network.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
"""
Explanation: Imports
End of explanation
"""
tf.__version__
"""
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
"""
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
"""
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
"""
Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets, but we will make random training-sets further below.
End of explanation
"""
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
"""
Explanation: Class numbers
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
End of explanation
"""
combined_images = np.concatenate([data.train.images, data.validation.images], axis=0)
combined_labels = np.concatenate([data.train.labels, data.validation.labels], axis=0)
"""
Explanation: Helper-function for creating random training-sets
We will train 5 neural networks on different training-sets that are selected at random. First we combine the original training- and validation-sets into one big set. This is done for both the images and the labels.
End of explanation
"""
print(combined_images.shape)
print(combined_labels.shape)
"""
Explanation: Check that the shape of the combined arrays is correct.
End of explanation
"""
combined_size = len(combined_images)
combined_size
"""
Explanation: Size of the combined data-set.
End of explanation
"""
train_size = int(0.8 * combined_size)
train_size
"""
Explanation: Define the size of the training-set used for each neural network. You can try and change this.
End of explanation
"""
validation_size = combined_size - train_size
validation_size
"""
Explanation: We do not use a validation-set during training, but this would be the size.
End of explanation
"""
def random_training_set():
# Create a randomized index into the full / combined training-set.
idx = np.random.permutation(combined_size)
# Split the random index into training- and validation-sets.
idx_train = idx[0:train_size]
idx_validation = idx[train_size:]
# Select the images and labels for the new training-set.
x_train = combined_images[idx_train, :]
y_train = combined_labels[idx_train, :]
# Select the images and labels for the new validation-set.
x_validation = combined_images[idx_validation, :]
y_validation = combined_labels[idx_validation, :]
# Return the new training- and validation-sets.
return x_train, y_train, x_validation, y_validation
"""
Explanation: Helper-function for splitting the combined data-set into a random training- and validation-set.
End of explanation
"""
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
"""
Explanation: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
End of explanation
"""
def plot_images(images, # Images to plot, 2-d array.
cls_true, # True class-no for images.
ensemble_cls_pred=None, # Ensemble predicted class-no.
best_cls_pred=None): # Best-net predicted class-no.
assert len(images) == len(cls_true)
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing if we need to print ensemble and best-net.
if ensemble_cls_pred is None:
hspace = 0.3
else:
hspace = 1.0
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# For each of the sub-plots.
for i, ax in enumerate(axes.flat):
# There may not be enough images for all sub-plots.
if i < len(images):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if ensemble_cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
msg = "True: {0}\nEnsemble: {1}\nBest Net: {2}"
xlabel = msg.format(cls_true[i],
ensemble_cls_pred[i],
best_cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
"""
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
"""
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
"""
Explanation: Plot a few images to see if data is correct
End of explanation
"""
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
"""
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used for inputting data to the graph.
Variables that are going to be optimized so as to make the convolutional network perform better.
The mathematical formulas for the neural network.
A loss measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
"""
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
"""
Explanation: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
End of explanation
"""
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
"""
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
"""
y_true_cls = tf.argmax(y_true, dimension=1)
"""
Explanation: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
End of explanation
"""
x_pretty = pt.wrap(x_image)
"""
Explanation: Neural Network
This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.
The basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
End of explanation
"""
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(class_count=10, labels=y_true)
"""
Explanation: Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
End of explanation
"""
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
"""
Explanation: Optimization Method
Pretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.
It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
"""
y_pred_cls = tf.argmax(y_pred, dimension=1)
"""
Explanation: Performance Measures
We need a few more performance measures to display the progress to the user.
First we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.
End of explanation
"""
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
"""
Explanation: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
End of explanation
"""
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
"""
Explanation: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
End of explanation
"""
saver = tf.train.Saver(max_to_keep=100)
"""
Explanation: Saver
In order to save the variables of the neural network, we now create a Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below.
Note that if you have more than 100 neural networks in the ensemble then you must increase max_to_keep accordingly.
End of explanation
"""
save_dir = 'checkpoints/'
"""
Explanation: This is the directory used for saving and retrieving the data.
End of explanation
"""
if not os.path.exists(save_dir):
os.makedirs(save_dir)
"""
Explanation: Create the directory if it does not exist.
End of explanation
"""
def get_save_path(net_number):
return save_dir + 'network' + str(net_number)
"""
Explanation: This function returns the save-path for the data-file with the given network number.
End of explanation
"""
session = tf.Session()
"""
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
"""
def init_variables():
session.run(tf.initialize_all_variables())
"""
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it several times below.
End of explanation
"""
train_batch_size = 64
"""
Explanation: Helper-function to create a random training batch.
There are thousands of images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
"""
def random_batch(x_train, y_train):
# Total number of images in the training-set.
num_images = len(x_train)
# Create a random index into the training-set.
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
# Use the random index to select random images and labels.
x_batch = x_train[idx, :] # Images.
y_batch = y_train[idx, :] # Labels.
# Return the batch.
return x_batch, y_batch
"""
Explanation: Function for selecting a random training-batch of the given size.
End of explanation
"""
def optimize(num_iterations, x_train, y_train):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = random_batch(x_train, y_train)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
        # Print status every 100 iterations and after the last iteration.
        if i % 100 == 0 or i == num_iterations - 1:
# Calculate the accuracy on the training-batch.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Status-message for printing.
msg = "Optimization Iteration: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
"""
Explanation: Helper-function to perform optimization iterations
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
End of explanation
"""
num_networks = 5
"""
Explanation: Create ensemble of neural networks
Number of neural networks in the ensemble.
End of explanation
"""
num_iterations = 10000
"""
Explanation: Number of optimization iterations for each neural network.
End of explanation
"""
if True:
# For each of the neural networks.
for i in range(num_networks):
print("Neural network: {0}".format(i))
# Create a random training-set. Ignore the validation-set.
x_train, y_train, _, _ = random_training_set()
# Initialize the variables of the TensorFlow graph.
session.run(tf.initialize_all_variables())
# Optimize the variables using this training-set.
optimize(num_iterations=num_iterations,
x_train=x_train,
y_train=y_train)
# Save the optimized variables to disk.
saver.save(sess=session, save_path=get_save_path(i))
# Print newline.
print()
"""
Explanation: Create the ensemble of neural networks. All networks use the same TensorFlow graph that was defined above. For each neural network the TensorFlow weights and variables are initialized to random values and then optimized. The variables are then saved to disk so they can be reloaded later.
You may want to skip this computation if you just want to re-run the Notebook with different analysis of the results.
End of explanation
"""
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_labels(images):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted labels which
# will be calculated in batches and filled into this array.
pred_labels = np.zeros(shape=(num_images, num_classes),
dtype=np.float)
# Now calculate the predicted labels for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images between index i and j.
feed_dict = {x: images[i:j, :]}
# Calculate the predicted labels using TensorFlow.
pred_labels[i:j] = session.run(y_pred, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
return pred_labels
"""
Explanation: Helper-functions for calculating and predicting classifications
This function calculates the predicted labels of images, that is, for each image it calculates a vector of length 10 indicating which of the 10 classes the image is.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
End of explanation
"""
def correct_prediction(images, labels, cls_true):
# Calculate the predicted labels.
pred_labels = predict_labels(images=images)
# Calculate the predicted class-number for each image.
cls_pred = np.argmax(pred_labels, axis=1)
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct
"""
Explanation: Calculate a boolean array whether the predicted classes for the images are correct.
End of explanation
"""
def test_correct():
return correct_prediction(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
"""
Explanation: Calculate a boolean array whether the images in the test-set are classified correctly.
End of explanation
"""
def validation_correct():
return correct_prediction(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
"""
Explanation: Calculate a boolean array whether the images in the validation-set are classified correctly.
End of explanation
"""
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
return correct.mean()
"""
Explanation: Helper-functions for calculating the classification accuracy
This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4
End of explanation
"""
def test_accuracy():
# Get the array of booleans whether the classifications are correct
# for the test-set.
correct = test_correct()
# Calculate the classification accuracy and return it.
return classification_accuracy(correct)
"""
Explanation: Calculate the classification accuracy on the test-set.
End of explanation
"""
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
correct = validation_correct()
# Calculate the classification accuracy and return it.
return classification_accuracy(correct)
"""
Explanation: Calculate the classification accuracy on the original validation-set.
End of explanation
"""
def ensemble_predictions():
# Empty list of predicted labels for each of the neural networks.
pred_labels = []
# Classification accuracy on the test-set for each network.
test_accuracies = []
# Classification accuracy on the validation-set for each network.
val_accuracies = []
# For each neural network in the ensemble.
for i in range(num_networks):
# Reload the variables into the TensorFlow graph.
saver.restore(sess=session, save_path=get_save_path(i))
# Calculate the classification accuracy on the test-set.
test_acc = test_accuracy()
# Append the classification accuracy to the list.
test_accuracies.append(test_acc)
# Calculate the classification accuracy on the validation-set.
val_acc = validation_accuracy()
# Append the classification accuracy to the list.
val_accuracies.append(val_acc)
# Print status message.
msg = "Network: {0}, Accuracy on Validation-Set: {1:.4f}, Test-Set: {2:.4f}"
print(msg.format(i, val_acc, test_acc))
# Calculate the predicted labels for the images in the test-set.
# This is already calculated in test_accuracy() above but
# it is re-calculated here to keep the code a bit simpler.
pred = predict_labels(images=data.test.images)
# Append the predicted labels to the list.
pred_labels.append(pred)
return np.array(pred_labels), \
np.array(test_accuracies), \
np.array(val_accuracies)
pred_labels, test_accuracies, val_accuracies = ensemble_predictions()
"""
Explanation: Results and analysis
Function for calculating the predicted labels for all the neural networks in the ensemble. The labels are combined further below.
End of explanation
"""
print("Mean test-set accuracy: {0:.4f}".format(np.mean(test_accuracies)))
print("Min test-set accuracy: {0:.4f}".format(np.min(test_accuracies)))
print("Max test-set accuracy: {0:.4f}".format(np.max(test_accuracies)))
"""
Explanation: Summarize the classification accuracies on the test-set for the neural networks in the ensemble.
End of explanation
"""
pred_labels.shape
"""
Explanation: The predicted labels of the ensemble is a 3-dim array, the first dim is the network-number, the second dim is the image-number, the third dim is the classification vector.
End of explanation
"""
ensemble_pred_labels = np.mean(pred_labels, axis=0)
ensemble_pred_labels.shape
"""
Explanation: Ensemble predictions
There are different ways to calculate the predicted labels for the ensemble. One way is to calculate the predicted class-number for each neural network, and then select the class-number with most votes. But this requires a large number of neural networks relative to the number of classes.
The method used here is instead to take the average of the predicted labels for all the networks in the ensemble. This is simple to calculate and does not require a large number of networks in the ensemble.
End of explanation
"""
ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1)
ensemble_cls_pred.shape
"""
Explanation: The ensemble's predicted class number is then the index of the highest number in the label, which is calculated using argmax as usual.
End of explanation
"""
ensemble_correct = (ensemble_cls_pred == data.test.cls)
"""
Explanation: Boolean array whether each of the images in the test-set was correctly classified by the ensemble of neural networks.
End of explanation
"""
ensemble_incorrect = np.logical_not(ensemble_correct)
"""
Explanation: Negate the boolean array so we can use it to lookup incorrectly classified images.
End of explanation
"""
test_accuracies
"""
Explanation: Best neural network
Now we find the single neural network that performed best on the test-set.
First list the classification accuracies on the test-set for all the neural networks in the ensemble.
End of explanation
"""
best_net = np.argmax(test_accuracies)
best_net
"""
Explanation: The index of the neural network with the highest classification accuracy.
End of explanation
"""
test_accuracies[best_net]
"""
Explanation: The best neural network's classification accuracy on the test-set.
End of explanation
"""
best_net_pred_labels = pred_labels[best_net, :, :]
"""
Explanation: Predicted labels of the best neural network.
End of explanation
"""
best_net_cls_pred = np.argmax(best_net_pred_labels, axis=1)
"""
Explanation: The predicted class-number.
End of explanation
"""
best_net_correct = (best_net_cls_pred == data.test.cls)
"""
Explanation: Boolean array whether the best neural network classified each image in the test-set correctly.
End of explanation
"""
best_net_incorrect = np.logical_not(best_net_correct)
"""
Explanation: Boolean array whether each image is incorrectly classified.
End of explanation
"""
np.sum(ensemble_correct)
"""
Explanation: Comparison of ensemble vs. the best single network
The number of images in the test-set that were correctly classified by the ensemble.
End of explanation
"""
np.sum(best_net_correct)
"""
Explanation: The number of images in the test-set that were correctly classified by the best neural network.
End of explanation
"""
ensemble_better = np.logical_and(best_net_incorrect,
ensemble_correct)
"""
Explanation: Boolean array whether each image in the test-set was correctly classified by the ensemble and incorrectly classified by the best neural network.
End of explanation
"""
ensemble_better.sum()
"""
Explanation: Number of images in the test-set where the ensemble was better than the best single network:
End of explanation
"""
best_net_better = np.logical_and(best_net_correct,
ensemble_incorrect)
"""
Explanation: Boolean array whether each image in the test-set was correctly classified by the best single network and incorrectly classified by the ensemble.
End of explanation
"""
best_net_better.sum()
"""
Explanation: Number of images in the test-set where the best single network was better than the ensemble.
End of explanation
"""
def plot_images_comparison(idx):
plot_images(images=data.test.images[idx, :],
cls_true=data.test.cls[idx],
ensemble_cls_pred=ensemble_cls_pred[idx],
best_cls_pred=best_net_cls_pred[idx])
"""
Explanation: Helper-functions for plotting and printing comparisons
Function for plotting images from the test-set and their true and predicted class-numbers.
End of explanation
"""
def print_labels(labels, idx, num=1):
# Select the relevant labels based on idx.
labels = labels[idx, :]
# Select the first num labels.
labels = labels[0:num, :]
# Round numbers to 2 decimal points so they are easier to read.
labels_rounded = np.round(labels, 2)
# Print the rounded labels.
print(labels_rounded)
"""
Explanation: Function for printing the predicted labels.
End of explanation
"""
def print_labels_ensemble(idx, **kwargs):
print_labels(labels=ensemble_pred_labels, idx=idx, **kwargs)
"""
Explanation: Function for printing the predicted labels for the ensemble of neural networks.
End of explanation
"""
def print_labels_best_net(idx, **kwargs):
print_labels(labels=best_net_pred_labels, idx=idx, **kwargs)
"""
Explanation: Function for printing the predicted labels for the best single network.
End of explanation
"""
def print_labels_all_nets(idx):
for i in range(num_networks):
print_labels(labels=pred_labels[i, :, :], idx=idx, num=1)
"""
Explanation: Function for printing the predicted labels of all the neural networks in the ensemble. This only prints the labels for the first image.
End of explanation
"""
plot_images_comparison(idx=ensemble_better)
"""
Explanation: Examples: Ensemble is better than the best network
Plot examples of images that were correctly classified by the ensemble and incorrectly classified by the best single network.
End of explanation
"""
print_labels_ensemble(idx=ensemble_better, num=1)
"""
Explanation: The ensemble's predicted labels for the first of these images (top left image):
End of explanation
"""
print_labels_best_net(idx=ensemble_better, num=1)
"""
Explanation: The best network's predicted labels for the first of these images:
End of explanation
"""
print_labels_all_nets(idx=ensemble_better)
"""
Explanation: The predicted labels of all the networks in the ensemble, for the first of these images:
End of explanation
"""
plot_images_comparison(idx=best_net_better)
"""
Explanation: Examples: Best network is better than ensemble
Now plot examples of images that were incorrectly classified by the ensemble but correctly classified by the best single network.
End of explanation
"""
print_labels_ensemble(idx=best_net_better, num=1)
"""
Explanation: The ensemble's predicted labels for the first of these images (top left image):
End of explanation
"""
print_labels_best_net(idx=best_net_better, num=1)
"""
Explanation: The best single network's predicted labels for the first of these images:
End of explanation
"""
print_labels_all_nets(idx=best_net_better)
"""
Explanation: The predicted labels of all the networks in the ensemble, for the first of these images:
End of explanation
"""
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
"""
Explanation: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
End of explanation
"""
|
mulhod/reviewer_experience_prediction | jupyter_notebooks/Exploring_Label_Distribution.ipynb | mit | import math
import itertools
from collections import Counter
import numpy as np
import scipy as sp
from pymongo import collection
import matplotlib.pyplot as plt
%matplotlib inline
from src import *
from src.mongodb import *
from src.datasets import *
from src.experiments import *
from data import APPID_DICT
"""
Explanation: Exploring Label Distribution and Converting Distribution to Bins of Values
End of explanation
"""
list(APPID_DICT.keys())
"""
Explanation: Games
End of explanation
"""
list(LABELS)
"""
Explanation: Labels
End of explanation
"""
# Connect to reviews collection
db = connect_to_db(host='localhost', port=37017)
def do_some_distributional_research(db: collection, game: str,
labels: list = LABELS,
partition: str = 'all'):
"""
Run the `distributional_info` function and then apply some
transformations, `nbins`/`bin_factor` values, etc., to the
results.
Generates distributional information for each combination
of label, number of bins, bin factor, and transformation.
:param db: MongoDB collection
:type db: collection
:param game: name of game
:type game: str
:param labels: list of labels
:type labels: list
:param partition: name of data partition (or 'all' to use all
data)
:type partition: str
:yields: tuple of dictionary containing label value
distribution information and a list of the original
label values
:ytype: tuple
"""
# Get distributional data for each label via the
# `distributional_info` function and make some plots, etc.
transformations = {'None': None,
'ln': lambda x: np.log(x) if x > 1 else 0.0,
'**5': lambda x: x**5.0,
'**2': lambda x: x**2.0,
'**0.5': lambda x: x**0.5,
'**0.25': lambda x: x**0.25}
nbins_values = [None, 2, 3, 4, 5]
bin_factor_values = [None, 0.25, 0.5, 5.0, 8.0, 10.0]
    # Materialize the filtered product as a list so it can be iterated
    # once per label/transformation combination (a bare `filter` object
    # would be exhausted after the first pass)
    filtered_nbins_bin_factor_product = \
        list(filter(lambda x: ((x[0] is None and x[1] is None)
                               or (x[0] is not None)),
                    itertools.product(nbins_values, bin_factor_values)))
transformations_dict = {transformation: {} for transformation
in transformations}
stats_dicts = {str(label): dict(transformations_dict)
for label in labels}
for label in labels:
# Get all raw label values and convert to floats
raw_label_values = \
(list(distributional_info(db,
label,
[game],
partition)
['id_strings_labels_dict'].values()))
raw_label_values = np.array([float(val) for val in raw_label_values])
        # Copy so that the in-place scaling below does not also alter
        # the returned raw values
        raw_values_to_return = raw_label_values.copy()
# If the label has percentage values, i.e., values between
# 0.0 and 1.0 (inclusive), multiply the values by 100 before
# doing anything else
# Note: Define these specific labels somewhere!
if label in LABELS_WITH_PCT_VALUES:
raw_label_values *= 100.0
# Apply various types of transformations to the data and
# measure the normality of the resulting distribution, etc.
for transformation, transformer in transformations.items():
if transformer:
label_values = np.array([transformer(x)
for x in raw_label_values])
else:
label_values = np.array(raw_label_values)
# Apply various combinations of `nbins`/`bin_factor`
# values (including not specifying those values)
label_transformation_string = '{0}_{1}'.format(label, transformation)
for nbins, bin_factor in filtered_nbins_bin_factor_product:
nbins_bin_factor_string = '{0}_{1}'.format(nbins, bin_factor)
                stats_dict = {}
                # Don't bin the values if `nbins` and `bin_factor` are
                # unspecified
                if not nbins and not bin_factor:
                    values_for_stats = label_values
                else:
                    # Get min/max values
                    _min = np.floor(label_values.min())
                    _max = np.ceil(label_values.max())
                    # If `bin_factor` is unspecified, use the default
                    # value, 1.0
                    bin_factor = bin_factor if bin_factor else 1.0
                    # Get bin range tuples and validate
                    try:
                        bin_ranges = get_bin_ranges(_min, _max, nbins,
                                                    bin_factor)
                    except ValueError:
                        print('Encountered invalid bin_ranges:\n\t'
                              'nbins: {0}\n\tbin_factor: {1}\n\tmin: '
                              '{2}\n\tmax: {3}\n\ttransformation: {4}'
                              '\n\tlabel: {5}'
                              .format(nbins, bin_factor, _min, _max,
                                      transformation, label))
                        continue
                    # Convert the values into bin indices, using a new
                    # array so that `label_values` is not overwritten
                    # between `nbins`/`bin_factor` combinations
                    stats_dict['bin_ranges'] = bin_ranges
                    values_for_stats = np.array([get_bin(bin_ranges, val)
                                                 for val in label_values])
                    stats_dict['label_values'] = values_for_stats
                # Collect some stats and measurements
                stats_dict.update({'min': values_for_stats.min(),
                                   'max': values_for_stats.max(),
                                   'std': values_for_stats.std(),
                                   'mean': values_for_stats.mean(),
                                   'median': np.median(values_for_stats),
                                   'mode': sp.stats.mode(values_for_stats).mode[0],
                                   'normaltest': sp.stats.normaltest(values_for_stats)})
yield ({label_transformation_string: {nbins_bin_factor_string: stats_dict}},
raw_values_to_return)
# Let's build up a dictionary of distributional information for each label and
# for each in a random subset of 3 games
# Execute a number of times until you get the subset you want
games_subset = list(np.random.choice([game for game in APPID_DICT
if not game.startswith('sample')],
3, replace=False))
dist_info_dict = {}
for game in games_subset:
try:
if dist_info_dict.get(game):
continue
dist_info_dict[game] = do_some_distributional_research(db, game)
except ValueError as e:
continue
# Each game will have 21 different outputs, so let's break things up a bit
dist_info_dict_Arma_3 = dist_info_dict['Arma_3']
dist_info_dict_Team_Fortress_2 = dist_info_dict['Team_Fortress_2']
dist_info_dict_Counter_Strike = dist_info_dict['Counter_Strike']
Arma_3_stats_dicts_all_labels_all_data = do_some_distributional_research(db, 'Arma_3')
next(Arma_3_stats_dicts_all_labels_all_data)
"""
Explanation: Issues
The main concern is that, before deciding whether experiments with a certain label are worth running, it is necessary to know
if it can be used as is (raw values), which is unlikely, and, if not,
how its distribution can be carved up (specifically, what values for nbins and bin_factor to use in learn, etc.), and
whether the current algorithm for deciding on the range of included values (i.e., excluding outliers) and making the value bins works, or whether it needs to be automated (potentially via some kind of cluster analysis)
Proposed Plan of Action
Some of this information can be collected via functions in the experiments extension, specifically distributional_info and evenly_distribute_samples
Collect data on the distributions of all of the labels for a subset of games and explore the way that the values are distributed, considering alternate ways that the values could be clustered together
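The nbins/bin_factor machinery above relies on get_bin_ranges and get_bin from the experiments extension. As a rough, self-contained sketch of the idea (a hypothetical stand-in, not the project's actual implementation, assuming factor scales each successive bin's width geometrically):

```python
import numpy as np

def sketch_bin_ranges(_min, _max, nbins, factor=1.0):
    # Hypothetical stand-in for `get_bin_ranges`: split [_min, _max] into
    # `nbins` ranges whose widths grow geometrically by `factor`
    weights = np.array([factor**i for i in range(nbins)])
    widths = (_max - _min) * weights / weights.sum()
    ranges, lo = [], _min
    for width in widths:
        ranges.append((lo, lo + width))
        lo += width
    return ranges

def sketch_get_bin(bin_ranges, value):
    # Hypothetical stand-in for `get_bin`: 1-based index of the bin
    # containing `value`
    for i, (lo, hi) in enumerate(bin_ranges, start=1):
        if lo <= value <= hi:
            return i
    return len(bin_ranges)

ranges = sketch_bin_ranges(0.0, 140.0, nbins=3, factor=2.0)
print(ranges)                        # bins of width 20, 40, and 80
print(sketch_get_bin(ranges, 50.0))  # 2 (falls in the second bin)
```

The point of a factor above 1.0 is to let the highest bin absorb the long right tail that most of these labels have.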
End of explanation
"""
dist_info_dict_Arma_3.keys()
"""
Explanation: Examining the Distribution of Labels for Arma 3
End of explanation
"""
dist_info_dict_Arma_3['num_reviews']['labels_counter']
# Use `get_bin_ranges` to determine the ranges of bins
num_reviews_Arma_3 = dist_info_dict_Arma_3['num_reviews']['labels_counter']
num_reviews_Arma_3_values = np.array(list(num_reviews_Arma_3.keys()))
num_reviews_Arma_3_min_value = num_reviews_Arma_3_values.min()
num_reviews_Arma_3_max_value = num_reviews_Arma_3_values.max()
num_reviews_Arma_3_bin_ranges_3_1 = get_bin_ranges(num_reviews_Arma_3_min_value,
num_reviews_Arma_3_max_value,
nbins=3,
factor=1.0)
num_reviews_Arma_3_bin_ranges_3_1_5 = get_bin_ranges(num_reviews_Arma_3_min_value,
num_reviews_Arma_3_max_value,
nbins=3,
factor=1.5)
num_reviews_Arma_3_bin_ranges_3_2 = get_bin_ranges(num_reviews_Arma_3_min_value,
num_reviews_Arma_3_max_value,
nbins=3,
factor=2.0)
num_reviews_Arma_3_bin_ranges_3_3 = get_bin_ranges(num_reviews_Arma_3_min_value,
num_reviews_Arma_3_max_value,
nbins=3,
factor=3.0)
num_reviews_Arma_3_bin_ranges_2_3 = get_bin_ranges(num_reviews_Arma_3_min_value,
num_reviews_Arma_3_max_value,
nbins=2,
factor=3.0)
num_reviews_Arma_3_bin_ranges_2_10 = get_bin_ranges(num_reviews_Arma_3_min_value,
num_reviews_Arma_3_max_value,
nbins=2,
factor=10.0)
print("bins = 3, bin_factor = 1.0: {}".format(num_reviews_Arma_3_bin_ranges_3_1))
print("bins = 3, bin_factor = 1.5: {}".format(num_reviews_Arma_3_bin_ranges_3_1_5))
print("bins = 3, bin_factor = 2.0: {}".format(num_reviews_Arma_3_bin_ranges_3_2))
print("bins = 3, bin_factor = 3.0: {}".format(num_reviews_Arma_3_bin_ranges_3_3))
print("bins = 2, bin_factor = 3.0: {}".format(num_reviews_Arma_3_bin_ranges_2_3))
print("bins = 2, bin_factor = 10.0: {}".format(num_reviews_Arma_3_bin_ranges_2_10))
num_reviews_raw_label_values_Arma_3 = list(dist_info_dict_Arma_3['num_reviews']['id_strings_labels_dict'].values())
plt.hist(list(np.random.normal(200, 100, 1000)))
plt.title("Normal Distribution Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist(num_reviews_raw_label_values_Arma_3)
plt.title("Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist(num_reviews_raw_label_values_Arma_3, normed=True)
plt.title("Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist([np.log(x) for x in num_reviews_raw_label_values_Arma_3 if x != 0])
plt.title("Log Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist([np.log(x) for x in num_reviews_raw_label_values_Arma_3 if x != 0],
normed=True)
plt.title("Log Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist([np.log(x) for x in num_reviews_raw_label_values_Arma_3 if x != 0],
normed=True, cumulative=True)
plt.title("Log Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist([np.log(x + 1) for x in num_reviews_raw_label_values_Arma_3])
plt.title("Log(x + 1) Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist([np.log2(x + 1) for x in num_reviews_raw_label_values_Arma_3])
plt.title("Log2(x + 1) Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist([np.log10(x + 1) for x in num_reviews_raw_label_values_Arma_3])
plt.title("Log10(x + 1) Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
sp.stats.mstats.zscore(num_reviews_raw_label_values_Arma_3)
plt.hist(sp.stats.mstats.zscore(num_reviews_raw_label_values_Arma_3))
plt.title("z-score num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist([math.sqrt(x) for x in num_reviews_raw_label_values_Arma_3])
plt.title("sqrt(x) Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.hist([x**2 for x in num_reviews_raw_label_values_Arma_3])
plt.title("x^2 Arma_3 num_reviews Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
"""
Explanation: num_reviews
End of explanation
"""
dist_info_dict_Arma_3['total_game_hours_bin']['labels_counter']
"""
Explanation: total_game_hours_bin
End of explanation
"""
dist_info_dict_Arma_3['total_game_hours']['labels_counter']
total_game_hours_raw_label_values_Arma_3 = list(dist_info_dict_Arma_3['total_game_hours']['id_strings_labels_dict'].values())
plt.hist([x**0.25 for x in total_game_hours_raw_label_values_Arma_3])
plt.title("x^0.25 Arma_3 total_game_hours Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
"""
Explanation: total_game_hours
End of explanation
"""
dist_info_dict_Arma_3['total_game_hours_last_two_weeks']['labels_counter']
"""
Explanation: total_game_hours_last_two_weeks
End of explanation
"""
dist_info_dict_Arma_3['num_found_helpful']['labels_counter']
"""
Explanation: num_found_helpful
End of explanation
"""
dist_info_dict_Arma_3['num_found_unhelpful']['labels_counter']
"""
Explanation: num_found_unhelpful
End of explanation
"""
dist_info_dict_Arma_3['found_helpful_percentage']['labels_counter']
"""
Explanation: found_helpful_percentage
End of explanation
"""
dist_info_dict_Arma_3['num_voted_helpfulness']['labels_counter']
"""
Explanation: num_voted_helpfulness
End of explanation
"""
dist_info_dict_Arma_3['num_achievements_attained']['labels_counter']
num_achievements_attained_raw_label_values_Arma_3 = list(dist_info_dict_Arma_3['num_achievements_attained']['id_strings_labels_dict'].values())
plt.hist([np.log(x) for x in num_achievements_attained_raw_label_values_Arma_3 if x != 0])
plt.title("Log Arma_3 num_achievements_attained Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
"""
Explanation: num_achievements_attained
End of explanation
"""
dist_info_dict_Arma_3['num_achievements_percentage']['labels_counter']
num_achievements_percentage_raw_label_values_Arma_3 = list(dist_info_dict_Arma_3['num_achievements_percentage']['id_strings_labels_dict'].values())
plt.hist(num_achievements_percentage_raw_label_values_Arma_3)
plt.title("Arma_3 num_achievements_percentage Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
"""
Explanation: num_achievements_percentage
End of explanation
"""
dist_info_dict_Arma_3['num_achievements_possible']['labels_counter']
"""
Explanation: num_achievements_possible
End of explanation
"""
dist_info_dict_Arma_3['num_guides']['labels_counter']
"""
Explanation: num_guides
End of explanation
"""
dist_info_dict_Arma_3['num_workshop_items']['labels_counter']
"""
Explanation: num_workshop_items
End of explanation
"""
num_friends_raw_label_values_Arma_3 = list(dist_info_dict_Arma_3['num_friends']['id_strings_labels_dict'].values())
plt.hist([np.log(x) for x in num_friends_raw_label_values_Arma_3 if x != 0])
plt.title("Log Arma_3 num_friends Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
"""
Explanation: num_friends
End of explanation
"""
dist_info_dict_Arma_3['num_games_owned']['labels_counter']
"""
Explanation: num_games_owned
End of explanation
"""
dist_info_dict_Arma_3['num_comments']['labels_counter']
"""
Explanation: num_comments
End of explanation
"""
dist_info_dict_Arma_3['friend_player_level']['labels_counter']
"""
Explanation: friend_player_level
End of explanation
"""
dist_info_dict_Arma_3['num_groups']['labels_counter']
"""
Explanation: num_groups
End of explanation
"""
dist_info_dict_Arma_3['num_screenshots']['labels_counter']
"""
Explanation: num_screenshots
End of explanation
"""
dist_info_dict_Arma_3['num_badges']['labels_counter']
"""
Explanation: num_badges
End of explanation
"""
dist_info_dict_Arma_3['num_found_funny']['labels_counter']
"""
Explanation: num_found_funny
End of explanation
"""
for label in dist_info_dict_Team_Fortress_2:
print("Label = {}\n".format(label))
print("{}\n".format(dist_info_dict_Team_Fortress_2[label]['labels_counter']))
"""
Explanation: Examining the Distribution of Labels for Team Fortress 2
End of explanation
"""
for label in dist_info_dict_Counter_Strike:
print("Label = {}\n".format(label))
print("{}\n".format(dist_info_dict_Counter_Strike[label]['labels_counter']))
"""
Explanation: Examining the Distribution of Labels for Counter Strike
End of explanation
"""
|
arturops/deep-learning | intro-to-tensorflow/intro_to_tensorflow.ipynb | mit | import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
"""
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
"""
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
                    # Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
"""
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
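The data is trimmed with sklearn's resample, which boils down to a random with-replacement row selection that keeps features and labels paired. A numpy-only sketch of the same idea (shapes and sample count are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
features = np.arange(20).reshape(10, 2)  # pretend: 10 samples, 2 features each
labels = np.arange(10)                   # one label per sample

n_samples = 4
idx = rng.choice(len(features), size=n_samples, replace=True)
sub_features, sub_labels = features[idx], labels[idx]
print(sub_features.shape)         # (4, 2)
print((sub_labels == idx).all())  # features and labels stay paired: True
```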
End of explanation
"""
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
grayscale_min = 0
grayscale_max = 255
return ( a + (((image_data - grayscale_min)*(b-a))/(grayscale_max - grayscale_min)) )
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
"""
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
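A quick numeric check of the formula with the lab's values, a = 0.1 and b = 0.9, over the grayscale range [0, 255]:

```python
import numpy as np

a, b = 0.1, 0.9
x_min, x_max = 0.0, 255.0
x = np.array([0.0, 127.5, 255.0])
x_scaled = a + (x - x_min) * (b - a) / (x_max - x_min)
print(x_scaled)  # [0.1 0.5 0.9]
```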
End of explanation
"""
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
"""
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
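The save/load cycle is just a pickle round trip; here is a minimal sketch of that mechanism, with toy data in place of the real arrays:

```python
import pickle

state = {'train_dataset': [1, 2, 3], 'train_labels': ['A', 'B', 'C']}
blob = pickle.dumps(state, pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)
print(restored['train_labels'])  # ['A', 'B', 'C']
```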
End of explanation
"""
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
"""
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here, the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, since we're trying to predict the image's letter, there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single-layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
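To see the shapes involved, here is a numpy-only sketch of the single-layer forward pass (logits = XW + b, then softmax). The random inputs are stand-ins for a real batch of images:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 784)).astype(np.float32)           # batch of 5 flattened images
W = rng.normal(0, 0.1, (784, 10)).astype(np.float32)  # stand-in for truncated-normal init
b = np.zeros(10, dtype=np.float32)

logits = X @ W + b
shifted = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable softmax
probs = shifted / shifted.sum(axis=1, keepdims=True)
print(probs.shape)        # (5, 10)
print(probs.sum(axis=1))  # each row sums to ~1.0
```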
End of explanation
"""
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 15
learning_rate = 0.3
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
"""
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
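If it helps to build intuition before picking from the options: a toy gradient-descent run on f(w) = w², separate from the lab code, shows how the learning rate trades off stability against speed:

```python
def descend(lr, steps=20, w=1.0):
    # gradient descent on f(w) = w**2, whose gradient is 2w
    for _ in range(steps):
        w -= lr * 2 * w
    return w

for lr in (1.1, 0.5, 0.1, 0.01):
    print(lr, descend(lr))
# 1.1 diverges, 0.5 jumps straight to the minimum of this toy function,
# 0.1 converges steadily, and 0.01 is still far from 0 after 20 steps
```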
End of explanation
"""
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
"""
Explanation: Test
You're going to test your model against your hold-out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
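The accuracy being reported is simply the fraction of rows where the argmax of the prediction matches the argmax of the one-hot label. A tiny worked example with made-up numbers:

```python
import numpy as np

preds = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0]])
correct = np.argmax(preds, axis=1) == np.argmax(labels, axis=1)
print(correct.mean())  # 2 of 3 rows match, so 0.666...
```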
End of explanation
"""
|
turbomanage/training-data-analyst | quests/serverlessml/04_keras/labs/keras_dnn.ipynb | apache-2.0 | %%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os, json, math
import numpy as np
import shutil
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # SET TF ERROR LOG VERBOSITY
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
%%bash
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not re-create it. \n\nHere are your buckets:"
gsutil ls
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}
    echo -e "\nHere are your current buckets:"
gsutil ls
fi
"""
Explanation: A simple DNN model built in Keras.
In this notebook, we will use the ML datasets we read in with our Keras pipeline earlier and build our Keras DNN to predict the fare amount for NYC taxi cab rides.
Learning objectives
Review how to read in CSV file data using tf.data
Specify input, hidden, and output layers in the DNN architecture
Review and visualize the final DNN shape
Train the model locally and visualize the loss curves
Deploy and predict with the model using Cloud AI Platform
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
"""
!ls -l ../../data/*.csv
"""
Explanation: Locating the CSV files
We will start with the CSV files that we wrote out in the first notebook of this sequence. Just so you don't have to run the notebook, we saved a copy in ../../data
End of explanation
"""
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
# TODO 1: Specify the LABEL_COLUMN name you are predicting for below:
LABEL_COLUMN = ''
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
"""
Explanation: Use tf.data to read the CSV files
We wrote these cells in the third notebook of this sequence where we created a data pipeline with Keras.
First let's define our columns of data, which column we're predicting for, and the default values.
End of explanation
"""
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (
# TODO 1: Complete the four tf.data.experimental.make_csv_dataset options
# Choose from and correctly order: batch_size, CSV_COLUMNS, DEFAULTS, pattern
tf.data.experimental.make_csv_dataset() # <--- fill-in options
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
"""
Explanation: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
End of explanation
"""
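As a plain-Python trace of what features_and_labels above does to a single row (a sketch without TensorFlow; in the real pipeline the values are tensors, and the label column being predicted is fare_amount):

```python
row = {'fare_amount': 12.0, 'pickup_datetime': 'na',
       'pickup_longitude': -73.98, 'pickup_latitude': 40.74,
       'dropoff_longitude': -73.98, 'dropoff_latitude': 40.75,
       'passenger_count': 1.0, 'key': 'na'}

for unwanted_col in ['pickup_datetime', 'key']:  # drop the non-feature columns
    row.pop(unwanted_col)
label = row.pop('fare_amount')                   # the LABEL_COLUMN

print(sorted(row))  # the five remaining feature columns
print(label)
```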
## Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# TODO 2: Specify the five input columns
INPUT_COLS = []
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in INPUT_COLS
}
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of [32, 8] just in like the BQML DNN
# TODO 2: Create two hidden layers [32,8] with relu activation. Name them h1 and h2
# Tip: Start with h1 = tf.keras.layers.dense
h1 = # complete
h2 = # complete
# final output is a linear activation because this is regression
# TODO 2: Create an output layer with linear activation and name it 'fare'
output =
# TODO 2: Use tf.keras.models.Model and create your model with inputs and output
model =
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
"""
Explanation: Build a DNN with Keras
Now let's build the Deep Neural Network (DNN) model in Keras and specify the input and hidden layers. We will print out the DNN architecture and then visualize it later on.
End of explanation
"""
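The custom rmse metric defined earlier is plain root-mean-squared error; as a quick sanity check of the formula with numpy alone (a sketch, no TensorFlow required):

```python
import numpy as np

def rmse_np(y_true, y_pred):
    # same formula as the Keras metric: sqrt(mean((y_pred - y_true)^2))
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

y_true = np.array([10.0, 12.0, 9.0])
y_pred = np.array([11.0, 12.0, 7.0])
print(rmse_np(y_true, y_pred))  # sqrt((1 + 0 + 4) / 3)
```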
# TODO 3: Use tf.keras.utils.plot_model() to create a dnn_model.png of your architecture
# Tip: For rank direction, choose Left Right (rankdir='LR')
"""
Explanation: Visualize the DNN
We can visualize the DNN using the Keras plot_model utility.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('../../data/taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../../data/taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
# TODO 4: Pass in the correct parameters to train your model
history = model.fit(
)
"""
Explanation: Train the model
To train the model, simply call model.fit().
Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
End of explanation
"""
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
"""
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
End of explanation
"""
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
}, steps=1)
"""
Explanation: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for.
End of explanation
"""
import shutil, os, datetime
OUTPUT_DIR = './export/savedmodel'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {EXPORT_PATH}
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
"""
Explanation: Of course, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
Export the model for serving
Let's export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
End of explanation
"""
%%bash
PROJECT=${PROJECT}
BUCKET=${BUCKET}
REGION=${REGION}
MODEL_NAME=taxifare
VERSION_NAME=dnn
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "The model named $MODEL_NAME already exists."
else
# create model
echo "Creating $MODEL_NAME model now."
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting the existing model version $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model version
echo "Creating $MODEL_NAME:$VERSION_NAME"
# TODO 5: Create the model version using gcloud ai-platform versions create
# Refer to: https://cloud.google.com/sdk/gcloud/reference/ai-platform/versions/create
gcloud ai-platform versions create # complete the missing parameters
"""
Explanation: Deploy the model to AI Platform
Next, we will use the gcloud ai-platform command to create a new version for our taxifare model and give it the version name of dnn.
Deploying the model will take 5 - 10 minutes.
End of explanation
"""
%%writefile input.json
{"pickup_longitude": -73.982683, "pickup_latitude": 40.742104,"dropoff_longitude": -73.983766,"dropoff_latitude": 40.755174,"passenger_count": 3.0}
!gcloud ai-platform predict --model taxifare --json-instances input.json --version dnn
"""
Explanation: Monitor the model creation at GCP Console > AI Platform and once the model version dnn is created, proceed to the next cell.
Predict with model using gcloud ai-platform predict
To predict with the model, we first need to create some data that the model hasn't seen before. Let's predict for a new taxi cab ride for you and two friends going from Kips Bay and heading to Midtown Manhattan for a total distance of 1.3 miles. How much would that cost?
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.0/examples/notebooks/generated/quasibinomial.ipynb | bsd-3-clause | import statsmodels.api as sm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from io import StringIO
"""
Explanation: Quasi-binomial regression
This notebook demonstrates using custom variance functions and non-binary data
with the quasi-binomial GLM family to perform a regression analysis using
a dependent variable that is a proportion.
The notebook uses the barley leaf blotch data that has been discussed in
several textbooks. See below for one reference:
https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_glimmix_sect016.htm
End of explanation
"""
raw = StringIO(
"""0.05,0.00,1.25,2.50,5.50,1.00,5.00,5.00,17.50
0.00,0.05,1.25,0.50,1.00,5.00,0.10,10.00,25.00
0.00,0.05,2.50,0.01,6.00,5.00,5.00,5.00,42.50
0.10,0.30,16.60,3.00,1.10,5.00,5.00,5.00,50.00
0.25,0.75,2.50,2.50,2.50,5.00,50.00,25.00,37.50
0.05,0.30,2.50,0.01,8.00,5.00,10.00,75.00,95.00
0.50,3.00,0.00,25.00,16.50,10.00,50.00,50.00,62.50
1.30,7.50,20.00,55.00,29.50,5.00,25.00,75.00,95.00
1.50,1.00,37.50,5.00,20.00,50.00,50.00,75.00,95.00
1.50,12.70,26.25,40.00,43.50,75.00,75.00,75.00,95.00"""
)
"""
Explanation: The raw data, expressed as percentages. We will divide by 100
to obtain proportions.
End of explanation
"""
df = pd.read_csv(raw, header=None)
df = df.melt()
df["site"] = 1 + np.floor(df.index / 10).astype(int)
df["variety"] = 1 + (df.index % 10)
df = df.rename(columns={"value": "blotch"})
df = df.drop("variable", axis=1)
df["blotch"] /= 100
"""
Explanation: The regression model is a two-way additive model with
site and variety effects. The data are a full unreplicated
design with 10 rows (sites) and 9 columns (varieties).
End of explanation
"""
model1 = sm.GLM.from_formula(
"blotch ~ 0 + C(variety) + C(site)", family=sm.families.Binomial(), data=df
)
result1 = model1.fit(scale="X2")
print(result1.summary())
"""
Explanation: Fit the quasi-binomial regression with the standard variance
function.
End of explanation
"""
plt.clf()
plt.grid(True)
plt.plot(result1.predict(linear=True), result1.resid_pearson, "o")
plt.xlabel("Linear predictor")
plt.ylabel("Residual")
"""
Explanation: The plot below shows that the default variance function is
not capturing the variance structure very well. Also note
that the scale parameter estimate is quite small.
End of explanation
"""
class vf(sm.families.varfuncs.VarianceFunction):
def __call__(self, mu):
return mu ** 2 * (1 - mu) ** 2
def deriv(self, mu):
return 2 * mu - 6 * mu ** 2 + 4 * mu ** 3
"""
Explanation: An alternative variance function is mu^2 * (1 - mu)^2.
End of explanation
"""
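The deriv method above should agree with the analytic derivative of mu^2 * (1 - mu)^2; a quick finite-difference check (a sketch using plain numpy, independent of statsmodels):

```python
import numpy as np

def variance(mu):
    return mu ** 2 * (1 - mu) ** 2

def variance_deriv(mu):          # the expression coded in vf.deriv above
    return 2 * mu - 6 * mu ** 2 + 4 * mu ** 3

mu = np.linspace(0.05, 0.95, 19)
h = 1e-6
numeric = (variance(mu + h) - variance(mu - h)) / (2 * h)  # central differences
print(np.max(np.abs(numeric - variance_deriv(mu))))        # round-off small
```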
bin = sm.families.Binomial()
bin.variance = vf()
model2 = sm.GLM.from_formula("blotch ~ 0 + C(variety) + C(site)", family=bin, data=df)
result2 = model2.fit(scale="X2")
print(result2.summary())
"""
Explanation: Fit the quasi-binomial regression with the alternative variance
function.
End of explanation
"""
plt.clf()
plt.grid(True)
plt.plot(result2.predict(linear=True), result2.resid_pearson, "o")
plt.xlabel("Linear predictor")
plt.ylabel("Residual")
"""
Explanation: With the alternative variance function, the mean/variance relationship
seems to capture the data well, and the estimated scale parameter is
close to 1.
End of explanation
"""
|
mommermi/Introduction-to-Python-for-Scientists | notebooks/CodeOptimization_20161209.ipynb | mit | import time
def square(x):
return x**2
def quadrature(func, a, b, n=10000000):
""" use the quadrature rule to determine the integral over the function from a to b"""
# calculate individual elements
integral_elements = [func(a)/2.]
for k in range(1, n):
integral_elements.append(func(a+k*float(b-a)/n))
integral_elements.append(func(b)/2.)
# sum up elements
result = 0
for element in integral_elements:
result += element
# normalize result and return
return float(b-a)/n*result
start = time.time()
print quadrature(square, 0, 1)
print 'this took', time.time()-start, 's'
"""
Explanation: Code Optimization and Multi Threading
Writing Python code is one thing - writing efficient code is a much different thing. Optimizing your code may take a while; if it takes longer to work on the code than it takes to run it, the additional time spent on optimization is a bad investment. However, if you use that specific piece of code on a daily basis, it might be worth to spend more time on the optimization.
Code Optimization
You can optimize the runtime of your code by identifying bottlenecks: find passages of your code that govern its runtime. Consider the following example in which the integral over a function is determined using the quadrature rule (https://en.wikipedia.org/wiki/Numerical_integration#Quadrature_rules_based_on_interpolating_functions):
End of explanation
"""
def quadrature(func, a, b, n=10000000):
""" use the quadrature rule to determine the integral over the function from a to b"""
result = 0
# calculate individual elements and sum them up
result += func(a)/2.
for k in range(1, n):
result += func(a+k*float(b-a)/n)
result += func(b)/2.
# normalize result and return
return float(b-a)/n*result
start = time.time()
print quadrature(square, 0, 1)
print 'this took', time.time()-start, 's'
"""
Explanation: The code works and returns the correct result - but it takes a while to run. This is in part due to the fact that we cut the integration range into 10000000 segments; a smaller number would work, too, but we need this many iterations to make the impact of the different implementations clearly visible.
Let's see how the runtime changes by changing the code.
Lists
List functions are useful but very memory consuming: every time something is appended to a list, Python checks how much memory is left at the list's current location to accommodate future elements. If that memory runs low, it moves the list in memory, reserving space for additional elements - up to 1/2 the length of the current list, which is a problem for large lists.
In this case, the use of lists is not necessary. Instead of using integral_elements, we can add up the results variable (a simple float value) on-the-fly, which also saves us the second for loop:
End of explanation
"""
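The over-allocation described above can be observed directly with sys.getsizeof (a small illustration; the exact byte counts depend on the Python build):

```python
import sys

data = []
sizes = []
for _ in range(64):
    data.append(0.0)
    sizes.append(sys.getsizeof(data))

# the reported size jumps only occasionally: CPython reserves extra
# slots ahead of time, so most appends do not trigger a reallocation
print(len(set(sizes)), 'distinct sizes over 64 appends')
```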
def quadrature(func, a, b, n=10000000):
""" use the quadrature rule to determine the integral over the function from a to b"""
result = 0
k = 1
# calculate individual elements and sum them up
result += func(a)/2.
while k < n:
result += func(a+k*float(b-a)/n)
k += 1
result += func(b)/2.
# normalize result and return
return float(b-a)/n*result
start = time.time()
print quadrature(square, 0, 1)
print 'this took', time.time()-start, 's'
"""
Explanation: This is only a little bit faster, since there is still a list function in the code: range. Instead of using the for loop, let's try a while loop that uses an integer to count:
End of explanation
"""
import numpy as np
def quadrature(func, a, b, n=10000000):
""" use the quadrature rule to determine the integral over the function from a to b"""
result = 0
# calculate individual elements and sum them up
result += func(a)/2.
steps = a + np.arange(1, n, 1)*float(b-a)/n
result += np.sum(func(steps))
result += func(b)/2.
# normalize result and return
return float(b-a)/n*result
start = time.time()
print quadrature(square, 0, 1)
print 'this took', time.time()-start, 's'
"""
Explanation: This is only a little bit faster.
Numpy Arrays
A good approach to saving runtime is always to use numpy functionality.
End of explanation
"""
from scipy.integrate import quad
start = time.time()
print quad(square, 0, 1)
print 'this took', time.time()-start, 's'
"""
Explanation: Using numpy makes a huge difference, since numpy uses libraries written in C, which is much faster than Python.
Numpy and Scipy Functions
The most important rule to optimize your code is the following: see if what you want to do is already implemented in numpy, scipy, or some other module. Usually, people writing code for these modules know what they do and use a very efficient implementation. For instance, quadrature integration is actually already implemented as part of scipy.integrate:
End of explanation
"""
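The same rule applies within numpy itself: for already-sampled values there is a ready-made trapezoidal integrator (a sketch; the function was renamed from trapz to trapezoid in NumPy 2.0, so we pick whichever exists):

```python
import numpy as np

# np.trapz became np.trapezoid in NumPy 2.0; use whichever is available
trapezoid = getattr(np, 'trapezoid', None) or np.trapz

x = np.linspace(0.0, 1.0, 100001)
integral = trapezoid(x ** 2, x)  # trapezoidal rule over the sampled values
print(integral)                  # very close to the exact value 1/3
```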
import time
import numpy as np
from scipy.integrate import quad
def task(x):
""" this simulates a complicated tasks that takes input x and returns some float based on x"""
time.sleep(1) # simulate that calculating the result takes 1 sec
return x**2
# an array with input parameters
input = np.random.rand(10)
start = time.time()
results = []
for x in input:
results.append(task(x))
print results
print 'this took', time.time()-start, 's'
"""
Explanation: Using scipy.integrate.quad is again significantly faster. This is in part due to the fact that it does not use 10000000 segments over which it calculates the integral. Instead, it uses an adaptive algorithm that estimates (and outputs) the uncertainty on the integral and stop iterating if that uncertainty is smaller than some threshold.
General Guideline to Optimize your Code
Use the following rules in your coding to make your code run efficiently:
1. whatever you want to do, check if there is already an existing function available from numpy, scipy, or some other module
2. minimize the use of lists; use tuples or numpy arrays (better) instead
3. use numpy functions on arrays - they are especially designed for that and usually run faster than list functions
Multiprocessing
You will often have to deal with problems that require running the exact same code for a number of different input parameters. The runtime of the entire program will be long, since all processes (i.e., each run using a set of input parameters) will be run sequentially.
End of explanation
"""
from multiprocessing import Pool
pool = Pool() # create a pool object
start = time.time()
results = pool.map(task, input) # map the function 'task' on the input data
print results
print 'this took', time.time()-start, 's'
"""
Explanation: The results generated by the different tasks do not rely on each other, i.e., they could in principle be all calculated at the same time. This can be realized using the multiprocessing module:
End of explanation
"""
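In a standalone script (as opposed to a notebook), the pool is commonly wrapped in a context manager, and on platforms that spawn rather than fork worker processes the code must sit under an if __name__ == '__main__' guard. A minimal sketch using a picklable library function:

```python
from multiprocessing import Pool
import math

# in a script on Windows/macOS, put this under `if __name__ == '__main__':`
# so that worker processes do not re-execute the module on import
with Pool(2) as pool:  # the context manager closes and joins the pool
    roots = pool.map(math.sqrt, [1.0, 4.0, 9.0, 16.0])
print(roots)
```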
|
Benedicto/ML-Learning | numpy-tutorial.ipynb | gpl-3.0 | import numpy as np # importing this way allows us to refer to numpy as np
"""
Explanation: Numpy Tutorial
Numpy is a computational library for Python that is optimized for operations on multi-dimensional arrays. In this notebook we will use numpy to work with 1-d arrays (often called vectors) and 2-d arrays (often called matrices).
For the full user guide and reference for numpy see: http://docs.scipy.org/doc/numpy/
End of explanation
"""
mylist = [1., 2., 3., 4.]
mynparray = np.array(mylist)
mynparray
"""
Explanation: Creating Numpy Arrays
New arrays can be made in several ways. We can take an existing list and convert it to a numpy array:
End of explanation
"""
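Besides converting lists, two other common constructors (not shown in this tutorial's code) are np.arange and np.linspace:

```python
import numpy as np

int_sequence = np.arange(4)          # like range(), but returns an array
float_grid = np.linspace(0., 1., 5)  # 5 evenly spaced points from 0 to 1
print(int_sequence)
print(float_grid)
```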
one_vector = np.ones(4)
print one_vector # using print removes the array() portion
one2Darray = np.ones((2, 4)) # an 2D array with 2 "rows" and 4 "columns"
print one2Darray
zero_vector = np.zeros(4)
print zero_vector
"""
Explanation: You can initialize an array (of any dimension) of all ones or all zeroes with the ones() and zeros() functions:
End of explanation
"""
empty_vector = np.empty(5)
print empty_vector
"""
Explanation: You can also initialize an empty array, which will later be filled with values. This is the fastest way to initialize a fixed-size numpy array; however, you must ensure that you replace all of the values.
End of explanation
"""
mynparray[2]
"""
Explanation: Accessing array elements
Accessing an array is straightforward. For vectors, you access an element by giving its index inside square brackets. Recall that indices in Python start with 0.
End of explanation
"""
my_matrix = np.array([[1, 2, 3], [4, 5, 6]])
print my_matrix
print my_matrix[1,2]
"""
Explanation: 2D arrays are accessed similarly by referring to the row and column index separated by a comma:
End of explanation
"""
print my_matrix[0:2, 2] # recall 0:2 = [0, 1]
print my_matrix[0, 0:3]
"""
Explanation: Sequences of indices can be accessed using ':' for example
End of explanation
"""
fib_indices = np.array([1, 1, 2, 3])
random_vector = np.random.random(10) # 10 random numbers between 0 and 1
print random_vector
print random_vector[fib_indices]
"""
Explanation: You can also pass a list of indices.
End of explanation
"""
my_vector = np.array([1, 2, 3, 4])
select_index = np.array([True, False, True, False])
print my_vector[select_index]
"""
Explanation: You can also use true/false values to select values
End of explanation
"""
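In practice the boolean selector usually comes from a comparison rather than being typed out by hand (a small sketch):

```python
import numpy as np

my_vector = np.array([1, 2, 3, 4])
mask = my_vector > 2       # elementwise comparison yields a boolean array
print(mask)                # [False False  True  True]
print(my_vector[mask])     # [3 4]
```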
select_cols = np.array([True, False, True]) # 1st and 3rd column
select_rows = np.array([False, True]) # 2nd row
print my_matrix[select_rows, :] # just 2nd row but all columns
print my_matrix[:, select_cols] # all rows and just the 1st and 3rd column
print my_matrix[select_rows, select_cols]
"""
Explanation: For 2D arrays you can select specific columns and specific rows. Passing ':' selects all rows/columns
End of explanation
"""
my_array = np.array([1., 2., 3., 4.])
print my_array*my_array
print my_array**2
print my_array - np.ones(4)
print my_array + np.ones(4)
print my_array / 3
print my_array / np.array([2., 3., 4., 5.]) # = [1.0/2.0, 2.0/3.0, 3.0/4.0, 4.0/5.0]
"""
Explanation: Operations on Arrays
You can use the operations '*', '**', '/', '+' and '-' on numpy arrays, and they operate elementwise.
End of explanation
"""
print np.sum(my_array)
print np.average(my_array)
print np.sum(my_array)/len(my_array)
"""
Explanation: You can compute the sum with np.sum() and the average with np.average()
End of explanation
"""
array1 = np.array([1., 2., 3., 4.])
array2 = np.array([2., 3., 4., 5.])
print np.dot(array1, array2)
print np.sum(array1*array2)
"""
Explanation: The dot product
An important mathematical operation in linear algebra is the dot product.
When we compute the dot product between two vectors we are simply multiplying them elementwise and adding them up. In numpy you can do this with np.dot()
End of explanation
"""
array1_mag = np.sqrt(np.dot(array1, array1))
print array1_mag
print np.sqrt(np.sum(array1*array1))
"""
Explanation: Recall that the Euclidean length (or magnitude) of a vector is the square root of the sum of the squares of the components. This is just the square root of the dot product of the vector with itself:
End of explanation
"""
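numpy also provides the magnitude directly as np.linalg.norm, which matches the dot-product formulation above (a small check):

```python
import numpy as np

vec = np.array([3., 4.])
print(np.linalg.norm(vec))        # 5.0 for the classic 3-4-5 triangle
print(np.sqrt(np.dot(vec, vec)))  # same value from the dot product
```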
my_features = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
print my_features
my_weights = np.array([0.4, 0.5])
print my_weights
my_predictions = np.dot(my_features, my_weights) # note that the weights are on the right
print my_predictions # which has 4 elements since my_features has 4 rows
"""
Explanation: We can also use the dot product when we have a 2D array (or matrix). When you have a vector with the same number of elements as the matrix (2D array) has columns, you can right-multiply the matrix by the vector to get another vector with the same number of elements as the matrix has rows. For example, this is how you compute the predicted values given a matrix of features and an array of weights.
End of explanation
"""
my_matrix = my_features
my_array = np.array([0.3, 0.4, 0.5, 0.6])
print np.dot(my_array, my_matrix) # which has 2 elements because my_matrix has 2 columns
"""
Explanation: Similarly if you have a vector with the same number of elements as the matrix has rows you can left multiply them.
End of explanation
"""
matrix_1 = np.array([[1., 2., 3.],[4., 5., 6.]])
print matrix_1
matrix_2 = np.array([[1., 2.], [3., 4.], [5., 6.]])
print matrix_2
print np.dot(matrix_1, matrix_2)
"""
Explanation: Multiplying Matrices
If we have two 2D arrays (matrices) matrix_1 and matrix_2 where the number of columns of matrix_1 is the same as the number of rows of matrix_2 then we can use np.dot() to perform matrix multiplication.
End of explanation
"""
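On Python 3.5+ (NumPy 1.10+), matrix multiplication can also be written with the @ operator, which is equivalent to np.dot for 2D arrays:

```python
import numpy as np

matrix_1 = np.array([[1., 2., 3.], [4., 5., 6.]])
matrix_2 = np.array([[1., 2.], [3., 4.], [5., 6.]])
product = matrix_1 @ matrix_2   # same result as np.dot(matrix_1, matrix_2)
print(product)
```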
|
brookthomas/GeneDive | preprocessing/Typeahead_ChemicalDisease.ipynb | mit | import sqlite3
import json
DATABASE = "data.sqlite"
conn = sqlite3.connect(DATABASE)
cursor = conn.cursor()
"""
Explanation: Build Adjacency Matrix
End of explanation
"""
# For getting the maximum row id
QUERY_MAX_ID = "SELECT id FROM interactions ORDER BY id DESC LIMIT 1"
# Get interaction data
QUERY_INTERACTION = "SELECT geneids1, mention1, geneids2, mention2 FROM interactions WHERE id = {}"
max_id = cursor.execute(QUERY_MAX_ID).fetchone()[0]
"""
Explanation: Queries
End of explanation
"""
typeahead = {}
final = []
distribution = {}
row_id = 0
while row_id <= max_id:
row_id+= 1
row = cursor.execute(QUERY_INTERACTION.format(row_id))
row = row.fetchone()
if row == None:
continue
id1 = row[0]
symbol1 = row[1]
id2 = row[2]
symbol2 = row[3]
# keep only interactions that involve at least one chemical (their ids start with 'C')
if id1[0] != 'C' and id2[0] != 'C':
    continue
if id1[0] == 'C':
if symbol1 not in typeahead:
typeahead[symbol1] = []
if id1 not in typeahead[symbol1]:
typeahead[symbol1].append(id1)
if id2[0] == 'C':
if symbol2 not in typeahead:
typeahead[symbol2] = []
if id2 not in typeahead[symbol2]:
typeahead[symbol2].append(id2)
for key in typeahead:
final.append( {"symbol": key, "values": typeahead[key]} )
with open("chemical_id.json", "w+") as file:
file.write(json.dumps( final ))
print(distribution)  # note: distribution is declared above but never populated in this cell
"""
Explanation: Step through every interaction.
If geneids1 not in matrix - insert it as dict.
If geneids2 not in matrix[geneids1] - insert it as []
If probability not in matrix[geneids1][geneids2] - insert it.
Perform the reverse.
End of explanation
"""
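The adjacency-matrix insertion steps listed above (the cell in this notebook builds a typeahead index instead) could be sketched as follows; the ids and probability value here are hypothetical:

```python
matrix = {}

def add_edge(matrix, id_a, id_b, probability):
    """Insert probability under matrix[id_a][id_b], creating levels as needed."""
    if id_a not in matrix:
        matrix[id_a] = {}
    if id_b not in matrix[id_a]:
        matrix[id_a][id_b] = []
    if probability not in matrix[id_a][id_b]:
        matrix[id_a][id_b].append(probability)

# insert both directions, per "perform the reverse"
add_edge(matrix, 'GENE_1', 'CHEM_1', 0.92)
add_edge(matrix, 'CHEM_1', 'GENE_1', 0.92)
print(matrix)
```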
|
hpparvi/PyTransit | notebooks/example_quadratic_model.ipynb | gpl-2.0 | %pylab inline
sys.path.append('..')
from pytransit import QuadraticModel
seed(0)
times_sc = linspace(0.85, 1.15, 1000) # Short cadence time stamps
times_lc = linspace(0.85, 1.15, 100) # Long cadence time stamps
k, t0, p, a, i, e, w = 0.1, 1., 2.1, 3.2, 0.5*pi, 0.3, 0.4*pi
pvp = tile([k, t0, p, a, i, e, w], (50,1))
pvp[1:,0] += normal(0.0, 0.005, size=pvp.shape[0]-1)
pvp[1:,1] += normal(0.0, 0.02, size=pvp.shape[0]-1)
ldc = stack([normal(0.3, 0.05, pvp.shape[0]), normal(0.1, 0.02, pvp.shape[0])], 1)
"""
Explanation: Quadratic model
The quadratic model, pytransit.QuadraticModel, implements a transit over a stellar disk with the stellar limb darkening described using the quadratic limb darkening model as described by Mandel & Agol (ApJ 580, 2002). The model is parallelised using numba, and the number of threads can be set using the NUMBA_NUM_THREADS environment variable. An OpenCL version for GPU computation is implemented by pytransit.QuadraticModelCL, and is discussed later in this notebook.
End of explanation
"""
tm = QuadraticModel()
"""
Explanation: Model initialization
The quadratic model doesn't take any special initialization arguments, so the initialization is straightforward.
End of explanation
"""
tm.set_data(times_sc)
"""
Explanation: Data setup
Homogeneous time series
The model needs to be set up by calling set_data() before it can be used. At its simplest, set_data takes the mid-exposure times of the time series to be modelled.
End of explanation
"""
def plot_transits(tm, ldc, fmt='k'):
fig, axs = subplots(1, 3, figsize = (13,3), constrained_layout=True, sharey=True)
flux = tm.evaluate_ps(k, ldc[0], t0, p, a, i, e, w)
axs[0].plot(tm.time, flux, fmt)
axs[0].set_title('Individual parameters')
flux = tm.evaluate_pv(pvp[0], ldc[0])
axs[1].plot(tm.time, flux, fmt)
axs[1].set_title('Parameter vector')
flux = tm.evaluate_pv(pvp, ldc)
axs[2].plot(tm.time, flux.T, 'k', alpha=0.2);
axs[2].set_title('Parameter vector array')
setp(axs[0], ylabel='Normalised flux')
setp(axs, xlabel='Time [days]', xlim=tm.time[[0,-1]])
tm.set_data(times_sc)
plot_transits(tm, ldc)
"""
Explanation: Model use
Evaluation
The transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector).
tm.evaluate_ps(k, ldc, t0, p, a, i, e=0, w=0) evaluates the model for a set of scalar parameters, where k is the radius ratio, ldc is the limb darkening coefficient vector, t0 the zero epoch, p the orbital period, a the semi-major axis divided by the stellar radius, i the inclination in radians, e the eccentricity, and w the argument of periastron. Eccentricity and argument of periastron are optional, and omitting them defaults to a circular orbit.
tm.evaluate_pv(pv, ldc) evaluates the model for a 1D parameter vector, or 2D array of parameter vectors. In the first case, the parameter vector should be array-like with elements [k, t0, p, a, i, e, w]. In the second case, the parameter vectors should be stored in a 2d ndarray with shape (npv, 7) as
[[k1, t01, p1, a1, i1, e1, w1],
[k2, t02, p2, a2, i2, e2, w2],
...
[kn, t0n, pn, an, in, en, wn]]
The reason for the different options is that the model implementations may have optimisations that make the model evaluation for a set of parameter vectors much faster than if computing them separately. This is especially the case for the OpenCL models.
Note: PyTransit always uses a 2D parameter vector array under the hood, and the scalar evaluation method just packs the parameters into an array before model evaluation.
Limb darkening
The quadratic limb darkening coefficients are given either as a 1D or 2D array, depending on whether the model is evaluated for a single set of parameters or an array of parameter vectors. In the first case, the coefficients can be given as [u, v], and in the second, as [[u1, v1], [u2, v2], ... [un, vn]].
In the case of a heterogeneous time series with multiple passbands (more details below), the coefficients are given for a single parameter set as a 1D array with a length $2n_{pb}$ ([u1, v1, u2, v2, ... un, vn], where the index now marks the passband), and for a parameter vector array as a 2D array with a shape (npv, 2*npb), as
[[u11, v11, u12, v12, ... u1n, v1n],
[u21, v21, u22, v22, ... u2n, v2n],
...
[un1, vn1, un2, vn2, ... unn, vnn]]
End of explanation
"""
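The array layouts described above can be built with plain numpy; a sketch with npv = 3 parameter vectors and npb = 2 passbands (the coefficient values themselves are arbitrary):

```python
import numpy as np

npv, npb = 3, 2
# one row per parameter vector: [k, t0, p, a, i, e, w]
pvp = np.tile([0.1, 1.0, 2.1, 3.2, 0.5 * np.pi, 0.0, 0.0], (npv, 1))
# quadratic coefficients flattened per passband: [u1, v1, u2, v2]
ldc = np.tile([0.3, 0.1, 0.15, 0.05], (npv, 1))
print(pvp.shape, ldc.shape)   # (npv, 7) and (npv, 2 * npb)
```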
tm.set_data(times_lc, nsamples=10, exptimes=0.01)
plot_transits(tm, ldc)
"""
Explanation: Supersampling
The transit model can be supersampled by setting the nsamples and exptimes arguments in set_data.
End of explanation
"""
times_1 = linspace(0.85, 1.0, 500)
times_2 = linspace(1.0, 1.15, 10)
times = concatenate([times_1, times_2])
lcids = concatenate([full(times_1.size, 0, 'int'), full(times_2.size, 1, 'int')])
pbids = [0, 1]
nsamples = [1, 10]
exptimes = [0, 0.0167]
ldc2 = tile(ldc, (1,2))
ldc2[:,2:] /= 2
tm.set_data(times, lcids, pbids, nsamples=nsamples, exptimes=exptimes)
plot_transits(tm, ldc2, 'k.-')
"""
Explanation: Heterogeneous time series
PyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands.
If a time series contains several light curves, it also needs the light curve indices for each exposure. These are given through the lcids argument, which should be an array of integers. If the time series contains light curves observed in different passbands, the passband indices need to be given through the pbids argument as an integer array, one per light curve. Supersampling can also be defined on a per-light-curve basis by giving the nsamples and exptimes as arrays with one value per light curve.
For example, a set of three light curves, two observed in one passband and the third in another passband
times_1 (lc = 0, pb = 0, sc) = [1, 2, 3, 4]
times_2 (lc = 1, pb = 0, lc) = [3, 4]
times_3 (lc = 2, pb = 1, sc) = [1, 5, 6]
Would be set up as
tm.set_data(time = [1, 2, 3, 4, 3, 4, 1, 5, 6],
lcids = [0, 0, 0, 0, 1, 1, 2, 2, 2],
pbids = [0, 0, 1],
nsamples = [ 1, 10, 1],
exptimes = [0.1, 1.0, 0.1])
Further, each passband requires two limb darkening coefficients, so the limb darkening coefficient array for a single parameter set should now be
ldc = [u1, v1, u2, v2]
where u and v are the passband-specific quadratic limb darkening model coefficients.
Example: two light curves with different cadences and passbands
End of explanation
"""
import pyopencl as cl
from pytransit import QuadraticModelCL
devices = cl.get_platforms()[0].get_devices()[2:]
ctx = cl.Context(devices)
queue = cl.CommandQueue(ctx)
tm_cl = QuadraticModelCL(cl_ctx=ctx, cl_queue=queue)
tm_cl.set_data(times_sc)
plot_transits(tm_cl, ldc)
"""
Explanation: OpenCL
Usage
The OpenCL version of the quadratic model, pytransit.QuadraticModelCL, works identically to the Python version, except that the OpenCL context and queue can be given as arguments in the initialiser, and the model evaluation method can be told not to copy the model from the GPU memory. If the context and queue are not given, the model creates a default context using cl.create_some_context().
End of explanation
"""
times_sc2 = tile(times_sc, 20) # 20000 short cadence datapoints
times_lc2 = tile(times_lc, 50) # 5000 long cadence datapoints
tm_py = QuadraticModel()
tm_cl = QuadraticModelCL(cl_ctx=ctx, cl_queue=queue)
"""
Explanation: GPU vs. CPU Performance
The performance difference between the OpenCL and Python versions depends on the CPU, the GPU, the number of simultaneously evaluated models, the amount of supersampling, and whether the model data is copied from the GPU memory. The performance difference grows in favour of the OpenCL model with the number of simultaneous models and the amount of supersampling, but copying the data slows the OpenCL implementation down. For best performance, the log likelihood computations should also be done on the GPU.
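To illustrate the kind of reduction that benefits from staying on the device, here is a plain-NumPy Gaussian log likelihood — a hedged sketch, not PyTransit's GPU implementation; in the OpenCL workflow these sums would be evaluated on the GPU so the model array never needs to be copied back to host memory:

```python
import numpy as np

def gaussian_loglike(obs, model, err):
    # Standard Gaussian log likelihood; the two reductions (sum of
    # log-errors and sum of squared residuals) are what would run
    # device-side in a GPU workflow.
    r = (obs - model) / err
    return -0.5 * (r.size * np.log(2 * np.pi)
                   + 2 * np.sum(np.log(err))
                   + np.sum(r * r))

obs = np.array([1.00, 0.99, 1.01])
err = np.full(obs.size, 1e-2)
print(round(gaussian_loglike(obs, obs, err), 3))  # 11.059
```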
End of explanation
"""
tm_py.set_data(times_sc2)
tm_cl.set_data(times_sc2)
%%timeit
tm_py.evaluate_pv(pvp, ldc)
%%timeit
tm_cl.evaluate_pv(pvp, ldc, copy=True)
"""
Explanation: Short cadence data without heavy supersampling
End of explanation
"""
tm_py.set_data(times_lc2, nsamples=10, exptimes=0.01)
tm_cl.set_data(times_lc2, nsamples=10, exptimes=0.01)
%%timeit
tm_py.evaluate_pv(pvp, ldc)
%%timeit
tm_cl.evaluate_pv(pvp, ldc, copy=True)
"""
Explanation: Long cadence data with supersampling
End of explanation
"""
pvp2 = tile(pvp, (3,1))
ldc2 = tile(ldc, (3,1))
%%timeit
tm_py.evaluate_pv(pvp2, ldc2)
%%timeit
tm_cl.evaluate_pv(pvp2, ldc2, copy=False)
"""
Explanation: Increasing the number of simultaneously evaluated models
End of explanation
"""