turbomanage/training-data-analyst | courses/machine_learning/deepdive2/text_classification/labs/keras_for_text_classification.ipynb | apache-2.0 | import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
"""
Explanation: Keras for Text Classification
Learning Objectives
1. Learn how to create a text classification dataset using BigQuery
1. Learn how to tokenize and integerize a corpus of text for training in Keras
1. Learn how to do one-hot encodings in Keras
1. Learn how to use embedding layers to represent words in Keras
1. Learn about the bag-of-words representation for sentences
1. Learn how to use DNN/CNN/RNN models to classify text in Keras
Introduction
In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the title texts, we will learn how to split them into a list of tokens and then map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded lists of integers representing the text. For the labels, we will learn how to one-hot encode each of the 3 classes into a 3-dimensional basis vector.
Then we will explore a few possible models for the title classification. All models will be fed padded lists of integers, and all models will start with a Keras Embedding layer that transforms the integers representing the words into dense vectors.
The first model will be a simple bag-of-words DNN model that averages the word vectors and feeds the resulting tensor to further dense layers. Doing so means that we forget the word order (and hence treat sentences as a "bag of words"). In the second and third models we will keep the information about the word order, using a simple RNN and a simple CNN respectively, allowing us to achieve the same performance as with the DNN model but in far fewer epochs.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
"""
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
SEED = 0
"""
Explanation: Replace the variable values in the cell below:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
# TODO: Your code goes here.
FROM
# TODO: Your code goes here.
WHERE
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
LIMIT 10
"""
Explanation: Create a Dataset from BigQuery
Hacker News headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Lab Task 1a:
Complete the query below to create a sample dataset containing the url, title, and score of articles from the public dataset bigquery-public-data.hacker_news.stories. Use a WHERE clause to restrict to only those articles with
* title length greater than 10 characters
* score greater than 10
* url length greater than 0 characters
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
# TODO: Your code goes here.
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
# TODO: Your code goes here.
GROUP BY
# TODO: Your code goes here.
ORDER BY num_articles DESC
LIMIT 100
"""
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., we want to be left with <i>nytimes</i>
Lab Task 1b:
Complete the query below to count the number of titles within each 'source' category. Note that to grab the 'source' of the article we use a regex on the url of the article. To count the number of articles you'll use a GROUP BY in SQL, and we'll also restrict our attention to only those articles whose title has greater than 10 characters.
End of explanation
"""
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(regex)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
"""
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
"""
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
"""
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
"""
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
"""
Explanation: AutoML for text classification requires that
* the dataset be in CSV form with
* the first column being the texts to classify or a GCS path to the text
* the last column being the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
"""
title_dataset.source.value_counts()
"""
Explanation: Let's make sure we have roughly the same number of labels for each of our three labels:
End of explanation
"""
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
"""
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend using the sample dataset for the purpose of learning the tool.
End of explanation
"""
sample_title_dataset = # TODO: Your code goes here.
# TODO: Your code goes here.
"""
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Lab Task 1c:
Use .sample to create a sample dataset of 1,000 articles from the full dataset. Use .value_counts to see how many articles are contained in each of the three source categories.
End of explanation
"""
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0
"""
Explanation: Let's write the sample dataset to disk.
End of explanation
"""
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow.keras.layers import (
Embedding,
Flatten,
GRU,
Conv1D,
Lambda,
Dense,
)
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
"""
Explanation: Note: You can simply ignore the incompatibility error related
to tensorflow-serving-api and tensorflow-io.
When re-running the above cell you will see the output
tensorflow==2.0.0, which is the installed version of TensorFlow.
End of explanation
"""
LOGDIR = "./text_models"
DATA_DIR = "./data"
"""
Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:
End of explanation
"""
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
"""
Explanation: Loading the dataset
Our dataset consists of titles of articles along with a label indicating the source from which these articles have been taken (GitHub, TechCrunch, or The New York Times).
End of explanation
"""
tokenizer = Tokenizer()
tokenizer.fit_on_texts(titles_df.title)
integerized_titles = tokenizer.texts_to_sequences(titles_df.title)
integerized_titles[:3]
VOCAB_SIZE = len(tokenizer.index_word)
VOCAB_SIZE
DATASET_SIZE = tokenizer.document_count
DATASET_SIZE
MAX_LEN = max(len(sequence) for sequence in integerized_titles)
MAX_LEN
"""
Explanation: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that:
End of explanation
"""
# TODO 1
def create_sequences(texts, max_len=MAX_LEN):
sequences = # TODO: Your code goes here.
padded_sequences = # TODO: Your code goes here.
return padded_sequences
sequences = create_sequences(titles_df.title[:3])
sequences
titles_df.source[:4]
"""
Explanation: Let's now implement a function create_sequences that will
* take as input our titles as well as the maximum sentence length and
* return a list of the integers corresponding to our tokens, padded to the maximum sentence length
Keras has the helper function pad_sequences for that, on top of the tokenizer methods.
Lab Task #2:
Complete the code in the create_sequences function below to
* create text sequences from texts using the tokenizer we created above
* pad the end of those text sequences to have length max_len
End of explanation
"""
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
"""
Explanation: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
End of explanation
"""
# TODO 2
def encode_labels(sources):
classes = # TODO: Your code goes here.
one_hots = # TODO: Your code goes here.
return one_hots
encode_labels(titles_df.source[:4])
"""
Explanation: Lab Task #3:
Complete the code in the encode_labels function below to
* create a list that maps each source in sources to its corresponding numeric value using the dictionary CLASSES above
* use the Keras function to one-hot encode the variable classes
End of explanation
"""
N_TRAIN = int(DATASET_SIZE * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
"""
Explanation: Preparing the train/test splits
Let's split our data into train and test splits:
End of explanation
"""
sources_train.value_counts()
sources_valid.value_counts()
"""
Explanation: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since this is the case, accuracy will be a good metric to use to measure
the performance of our models.
End of explanation
"""
X_train, Y_train = create_sequences(titles_train), encode_labels(sources_train)
X_valid, Y_valid = create_sequences(titles_valid), encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
"""
Explanation: Using create_sequences and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded lists of integers and the labels will be one-hot encoded 3-dimensional vectors.
End of explanation
"""
# TODOs 4-6
def build_dnn_model(embed_dim):
model = Sequential([
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
"""
Explanation: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer between the Embedding layer and the Dense softmax layer to average the word vectors returned by the embedding layer. This average is what's fed to the dense softmax layer. The resulting model is simple, but it loses the information about word order: it sees sentences as a "bag of words".
Lab Tasks #4, #5, and #6:
Create a Keras Sequential model with three layers:
* The first layer should be an embedding layer with output dimension equal to embed_dim.
* The second layer should use a Lambda layer to create a bag-of-words representation of the sentences by computing the mean.
* The last layer should use a Dense layer to predict which class the example belongs to.
End of explanation
"""
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'dnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
BATCH_SIZE = 300
EPOCHS = 100
EMBED_DIM = 10
PATIENCE = 0
dnn_model = build_dnn_model(embed_dim=EMBED_DIM)
dnn_history = dnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(dnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(dnn_history.history)[['accuracy', 'val_accuracy']].plot()
dnn_model.summary()
"""
Explanation: Below we train the model for 100 epochs, adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved for the number of epochs specified by PATIENCE. Note that we also give the model.fit method a TensorBoard callback so that we can later compare all the models using TensorBoard.
End of explanation
"""
def build_rnn_model(embed_dim, units):
model = Sequential([
# TODO: Your code goes here.
# TODO: Your code goes here.
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
"""
Explanation: Building a RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6:
Complete the code below to build an RNN model which predicts the article class. The code below is similar to the DNN you created above; however, here we do not need to use a bag-of-words representation of the sentence. Instead, you can pass the embedding layer directly to an RNN/LSTM/GRU layer.
End of explanation
"""
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'rnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 10
UNITS = 16
PATIENCE = 0
rnn_model = build_rnn_model(embed_dim=EMBED_DIM, units=UNITS)
history = rnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
rnn_model.summary()
"""
Explanation: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20):
End of explanation
"""
def build_cnn_model(embed_dim, filters, ksize, strides):
model = Sequential([
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
"""
Explanation: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Complete the code below to create a CNN model for text classification. This model is similar to the previous models in that you should start with an embedding layer. However, the output of the embedding layer should pass through a 1-dimensional convolution and ultimately to the final fully connected dense layer. Use the arguments of the build_cnn_model function to set up the 1D convolution layer.
End of explanation
"""
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'cnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 5
FILTERS = 200
STRIDES = 2
KSIZE = 3
PATIENCE = 0
cnn_model = build_cnn_model(
embed_dim=EMBED_DIM,
filters=FILTERS,
strides=STRIDES,
ksize=KSIZE,
)
cnn_history = cnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(cnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(cnn_history.history)[['accuracy', 'val_accuracy']].plot()
cnn_model.summary()
"""
Explanation: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model, but in far fewer steps.
End of explanation
"""
eyaltrabelsi/my-notebooks | Lectures/Generators.ipynb | mit | %%memit
g = generators(range(10**8))
print(sum(g))
%%memit
i = iterators(range(10**8))
print(sum(i))
"""
Explanation: Memory of Generators vs. Iterators 😇
For a generator to work, you only need to keep the generator function's local variables in memory.
You don't have to keep the entire collection in memory, so this is usually EXACTLY the trade-off you want to make.
End of explanation
"""
%%time
g = generators(range(10**8))
print(sum(g))
%%time
i = iterators(range(10**8))
print(sum(i))
"""
Explanation: Performance of Generators vs. Iterators 😃
End of explanation
"""
g = generators(range(10**8))
print(f"First consumption: {sum(g)}")
print(f"Second consumption: {sum(g)}")
import itertools  # needed for itertools.tee below

g = generators(range(10**8))
g1, g2 = itertools.tee(g, 2)
print(f"First consumption: {sum(g1)}")
print(f"Second consumption: {sum(g2)}")
"""
Explanation: Consumed once 😱
A generator can be consumed only once: every time you want to reuse its elements, it must be regenerated (or duplicated up front with itertools.tee).
End of explanation
"""
darkomen/TFG | medidas/13082015/.ipynb_checkpoints/Análisis de datos Ensayo 2-Copy1-checkpoint.ipynb | cc0-1.0 | #Importamos las librerías utilizadas
import numpy as np
import pandas as pd
import seaborn as sns
#Mostramos las versiones usadas de cada librerías
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the CSV file with the sample data
datos = pd.read_csv('ensayo2.CSV')
%pylab inline
# Store the file columns we will work with in a list
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
# Show a summary of the data obtained
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
"""
Explanation: Analysis of the data obtained
Using IPython to analyze and display the data obtained during production. An expert controller is implemented. The data analyzed are from August 13, 2015.
The experiment data:
* Start time: 12:06
* End time: 12:26
* Extruded filament: 314Ccm
* $T: 150ºC$
* $V_{min}$ puller: $1.5 mm/s$
* $V_{max}$ puller: $5.3 mm/s$
* The speed increments in the expert-system rules are different:
* In cases 3 and 5 an increment of +2 is kept.
* In cases 4 and 6 the increment is reduced to -1.
This experiment only lasts 20 min because at a glance it is clear that it brings no improvement; in fact, it adds more instability to the system.
We opt to add more rules to the system and try to keep the pulling speed from reaching its limits.
End of explanation
"""
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
"""
Explanation: We plot both diameters and the puller speed on the same graph
End of explanation
"""
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
"""
Explanation: By increasing the speed we managed to lower the maximum value; however, the minimum value has also decreased. For the next iteration we will go back to speeds of 1.5-3.4 and add more rules with smaller speed increments, to avoid saturating the pulling speed at both the high and the low end.
Comparison of Diametro X against Diametro Y to see the filament ratio
End of explanation
"""
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
"""
Explanation: Data filtering
We assume samples with $d_x < 0.9$ or $d_y < 0.9$ to be sensor errors, so we filter them out of the samples taken.
End of explanation
"""
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
"""
Explanation: Plot of X vs. Y
End of explanation
"""
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = ratio.rolling(50).mean()
rolling_std = ratio.rolling(50).std()
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
"""
Explanation: We analyze the ratio data
End of explanation
"""
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
"""
Explanation: Quality limits
We compute the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation
"""
JorisBolsens/PYNQ | Pynq-Z1/notebooks/examples/pmod_grove_buzzer.ipynb | bsd-3-clause | from pynq import Overlay
Overlay("base.bit").download()
"""
Explanation: Grove Buzzer v1.2
This example shows how to use the Grove Buzzer v1.2.
A Grove Buzzer and a PYNQ Grove Adapter are required.
To set up the Pynq-Z1 for this notebook, the PYNQ Grove Adapter is connected to PMODB and the Grove Buzzer is connected to G1 on the PYNQ Grove Adapter.
End of explanation
"""
from pynq.iop import Grove_Buzzer
from pynq.iop import PMODB
from pynq.iop import PMOD_GROVE_G1
grove_buzzer = Grove_Buzzer(PMODB, PMOD_GROVE_G1)
"""
Explanation: 1. Illustrate playing a pre-defined melody
End of explanation
"""
grove_buzzer.play_melody()
"""
Explanation: 2. Play a piece of music
End of explanation
"""
# Play a tone
tone_period = 1200
num_cycles = 500
grove_buzzer.play_tone(tone_period,num_cycles)
"""
Explanation: 3. Generate a tone of desired period and for a desired number of times
The tone_period is in microseconds; a square wave with a 50% duty cycle will be generated for the given tone_period.
End of explanation
"""
from pynq.iop import ARDUINO
from pynq.iop import Arduino_Analog
from pynq.iop import ARDUINO_GROVE_A1
analog1 = Arduino_Analog(ARDUINO, ARDUINO_GROVE_A1)
rounds = 200
for i in range(rounds):
tone_period = int(analog1.read_raw()[0]/5)
num_cycles = 500
grove_buzzer.play_tone(tone_period,50)
"""
Explanation: 4. Controlling the tone
This example will use a Grove potentiometer to control the tone of the sound. Plug the potentiometer into the A1 group on the shield.
End of explanation
"""
vadim-ivlev/STUDY | handson-data-science-python/DataScience-Python3/Distributions.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
values = np.random.uniform(-10.0, 10.0, 100000)
plt.hist(values, 50)
plt.show()
"""
Explanation: Examples of Data Distributions
Uniform Distribution
End of explanation
"""
from scipy.stats import norm
import matplotlib.pyplot as plt
x = np.arange(-3, 3, 0.001)
plt.plot(x, norm.pdf(x))
"""
Explanation: Normal / Gaussian
Visualize the probability density function:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
mu = 5.0
sigma = 2.0
values = np.random.normal(mu, sigma, 10000)
plt.hist(values, 50)
plt.show()
"""
Explanation: Generate some random numbers with a normal distribution. "mu" is the desired mean, "sigma" is the standard deviation:
End of explanation
"""
from scipy.stats import expon
import matplotlib.pyplot as plt
x = np.arange(0, 10, 0.001)
plt.plot(x, expon.pdf(x))
"""
Explanation: Exponential PDF / "Power Law"
End of explanation
"""
from scipy.stats import binom
import matplotlib.pyplot as plt
n, p = 10, 0.5
# The binomial PMF is only defined at integer values of x
x = np.arange(0, 11)
plt.plot(x, binom.pmf(x, n, p))
"""
Explanation: Binomial Probability Mass Function
End of explanation
"""
from scipy.stats import poisson
import matplotlib.pyplot as plt
mu = 500
# The Poisson PMF is only defined at integer values of x
x = np.arange(400, 600)
plt.plot(x, poisson.pmf(x, mu))
"""
Explanation: Poisson Probability Mass Function
Example: My website gets on average 500 visits per day. What's the odds of getting 550?
End of explanation
"""
LucaCanali/Miscellaneous | Impala_SQL_Jupyter/Impala_SQL_Magic_Kerberos.ipynb | apache-2.0 | %load_ext sql
"""
Explanation: Apache Impala and SQL magic for IPython/Jupyter notebooks
with Kerberos authentication
1. Load SQL magic extension (uses ipython-sql by Catherine Devlin)
End of explanation
"""
%config SqlMagic.connect_args="{'kerberos_service_name':'impala', 'auth_mechanism':'GSSAPI'}"
%sql impala://impalasrv-prod:21050/test2
"""
Explanation: 2. Connect to the target database
requires Cloudera impyla package and thrift_sasl
edit the value of connect_args as relevant for your environment
End of explanation
"""
%%sql
select * from emp
"""
Explanation: 3. Run SQL on the target using the %%sql cell magic or %sql line magic
End of explanation
"""
Employee_name="SCOTT"
%sql select * from emp where ename=:Employee_name
"""
Explanation: Bind variables
End of explanation
"""
myResultSet=%sql select ename "Employee Name", sal "Salary" from emp
%matplotlib inline
import matplotlib
matplotlib.style.use('ggplot')
myResultSet.bar()
"""
Explanation: Additional example of the integration with the IPython environment
End of explanation
"""
my_DataFrame=myResultSet.DataFrame()
my_DataFrame.head()
"""
Explanation: The integration with Pandas opens many additional possibilities for data analysis
End of explanation
"""
Upward-Spiral-Science/spect-team | Code/Assignment-3/Descriptive_Exploratory_Answers_2.ipynb | apache-2.0 | # Ignore different types of ADHD for now
df_disorder_results = df_disorders.drop('ADHD_Type', inplace=False, axis=1)
# Find records that has zero values across all the columns (disorders)
# Extract a list of Patient_IDs corresponding to healthy participants
healthy_ids = df_disorder_results[(df_disorder_results.T==0).all()].index.tolist()
print 'There are %d healthy participants.\n' % len(healthy_ids)
print 'Their Patient_IDs are', healthy_ids
print '\nFor the records above, label y_i=1 (healthy), for all the other records, y_i=0 (mentally disordered).'
# Now constuct the label vector y
y = pd.Series([0] * len(df_disorders), index=patient_ids)
y[healthy_ids] = 1
print 'Finish constructing label vector for healthy/unhealthy.'
"""
Explanation: Descriptive
What do the features in the vector indicate?
Current guess:
This data is likely from a study of people with ADHD, as it is looking at brain activity during a concentration activity. By comparing participants who have one disorder to those without it, we could perhaps separate the areas of the brain associated with different types of ADHD from areas that are activated due to a different disorder.
Also, different kinds of ADHD could involve different locations in the brain for the concentration task.
It is impossible to know for sure why certain baseline values are not applicable to certain individuals. However, we speculate a few possibilities. Perhaps baseline data was not recorded, either by design or by error (e.g. measurement utensil was not calibrated appropriately). Alternatively, the baseline data may not have been collected originally, and then later found necessary to compare to the concentration values. The cost of measuring brain activity could also have factored in to the decision to only record the baseline values of some of the participants. Nonetheless, as stated above, we have eliminated the missing values from our training data because without a baseline value, the concentration value is not as meaningful. If we find any consistent trends within the data, we may be able to estimate the baseline values for those with missing information.
What kind of labels are we going to extract? What will be yi?
For starters, a simple learning goal would be to separate healthy participants from the unhealthy ones.
Here we identify those who are diagnosed with no disorders as the healthy participants, and assign such patient records a label of value 1; the remaining records are assigned label 0.
Therefore to answer the question above, our labels at the initial stage are binary, indicating whether a participant is healthy (has no disorders) or not.
End of explanation
"""
# Read full dataset
df_all = pd.DataFrame.from_csv('Data_Adults_1.csv')
# Extract non-numerical features
non_num_keys = [key for key in dict(df_all.dtypes) if dict(df_all.dtypes)[key] not in ['float', 'int']]
print 'Nominal features are', non_num_keys
print 'Unique values for the nominal features:\n'
print np.unique(df_all['RaceName'])
print np.unique(df_all['Age_Group'])
print np.unique(df_all['STUDY_NAME'])
print np.unique(df_all['BSC_Respondent'])
print np.unique(df_all['ADHD_Type'])
print np.unique(df_all['locationname'])
print np.unique(df_all['LDS_Respondent'])
print np.unique(df_all['GSC_Respondent'])
print np.unique(df_all['group_name'])
print np.unique(df_all['Gendername'])
"""
Explanation: Additional prediction goals: ( in the future )
-whether or not certain disorders are correlated with certain baseline values (or delta values from concentration to baseline)
-if different parts of the brain are affected with different kinds of ADHD
Exploratory
What is the sole metric that can be used to separate healthy people from unhealthy people?
As shown in the label construction procedure above, our sole criterion for identifying a person as healthy is that the record (row) for that person has 0 values across all disorder columns.
What is the range of values nominal features can take?
End of explanation
"""
# Get baseline and concentration data
df_base = pd.read_csv('baseline.csv', index_col=0)
df_concen = pd.read_csv('concentration.csv', index_col=0)
# Use numpy matrix format (numerical)
df_base_vals = df_base.values
df_concen_vals = df_concen.values
def check_perfect_corr(coeff):
# Fill diagonal with 0 (not comparing to oneself)
np.fill_diagonal(coeff, 0)
# Perfect correlation: 1 or -1
return coeff.max()==1 or coeff.min()==-1
# Compute Pearson product-moment correlation coefficients
# Check for perfect correlation row-wise and column-wise
def pearson_corr_test(x):
# rowvar = 1: row-wise
# rowvar = 0: column-wise
row_coeff = np.corrcoef(x, y=None, rowvar=1)
col_coeff = np.corrcoef(x, y=None, rowvar=0)
# Check for perfect correlation row-wise
row_corr = check_perfect_corr(row_coeff)
# Check for perfect correlation column-wise
col_corr = check_perfect_corr(col_coeff)
    # Report True if a perfect correlation exists either row-wise or column-wise
    return row_corr or col_corr
print('Perfect correlation exists in baselines?', pearson_corr_test(df_base_vals))
print('Perfect correlation exists in concentrations?', pearson_corr_test(df_concen_vals))
"""
Explanation: In order not to confuse our future model, for certain types of features we avoid using a single value to denote the different categories a variable can take; instead, we encode them as small one-hot vectors.
Note that we do not take null values into consideration for now.
Range of values for the nominal features:
1) RaceName: This can be encoded into a one-hot vector of length 16. For example, 'African American' will be encoded into [1]+[0]*15 (python grammar), with the value at index 0 set to 1, the rest of 15 elements of the vector are 0. 'Asian' takes value of 1 at index 2, 'Unknown' takes value of 1 at index 15, etc.
2) Age_Group: This can be represented by scales, where Adult=2, Geriatric=3, Pediatric=1.
3) STUDY_NAME: Only one name, can be removed from the dataset.
4) BSC_Respondent, ADHD_Type, locationname, LDS_Respondent, GSC_Respondent can be encoded in the same way as RaceName, except that their vector lengths will be 4, 7, 11, 4, 4 respectively.
5) group_name: We are not sure whether these are just arbitrary group names or denote control versus experimental groups; this has to be decided after we look at the data documentation. In the former case, the column can be ignored. In the latter case, we will represent it using a binary indicator (0-1).
6) Gender: Female=0, Male=1, Unknown=0.5 (maybe).
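A sketch of the encodings proposed above, using hypothetical category values (pd.get_dummies builds one binary indicator column per category; the ordinal Age_Group mapping is shown alongside):

```python
import pandas as pd

# Toy nominal columns; the real categories come from the np.unique output above.
df = pd.DataFrame({
    'RaceName': ['African American', 'Asian', 'Unknown'],
    'Age_Group': ['Adult', 'Geriatric', 'Pediatric'],
})

# One-hot encode RaceName: one binary indicator column per category.
onehot = pd.get_dummies(df['RaceName'])

# Ordinal encoding for Age_Group, as proposed above.
age_scale = {'Pediatric': 1, 'Adult': 2, 'Geriatric': 3}
age_encoded = df['Age_Group'].map(age_scale)
```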
Are the features correlated?
For feature correlation, since we do not have complete information about all column headers at this point, only correlation within baseline values and concentration values are analyzed.
End of explanation
"""
|
nishantsbi/pattern_classification | dimensionality_reduction/projection/linear_discriminant_analysis.ipynb | gpl-3.0 | %load_ext watermark
%watermark -v -d -u -p pandas,scikit-learn,numpy,matplotlib
"""
Explanation: Sebastian Raschka
- Link to the containing GitHub Repository: https://github.com/rasbt/pattern_classification
- Link to this IPython Notebook on GitHub: linear_discriminant_analysis.ipynb
End of explanation
"""
feature_dict = {i:label for i,label in zip(
range(4),
('sepal length in cm',
'sepal width in cm',
'petal length in cm',
'petal width in cm', ))}
"""
Explanation: <font size="1.5em">More information about the watermark magic command extension.</font>
<hr>
I would be happy to hear your comments and suggestions.
Please feel free to drop me a note via
twitter, email, or google+.
<hr>
Linear Discriminant Analysis bit by bit
<br>
<br>
Sections
Introduction
Principal Component Analysis vs. Linear Discriminant Analysis
What is a "good" feature subspace?
Summarizing the LDA approach in 5 steps
Preparing the sample data set
About the Iris dataset
Reading in the dataset
Histograms and feature selection
Standardization
Normality assumptions
LDA in 5 steps
Step 1: Computing the d-dimensional mean vectors
Step 2: Computing the Scatter Matrices
Step 3: Solving the generalized eigenvalue problem for the matrix $S_{W}^{-1}S_B$
Step 4: Selecting linear discriminants for the new feature subspace
Step 5: Transforming the samples onto the new subspace
A comparison of PCA and LDA
LDA via scikit-learn
Update-scikit-learn-0.15.2
<br>
<br>
Introduction
[back to top]
Linear Discriminant Analysis (LDA) is most commonly used as a dimensionality reduction technique in the pre-processing step for pattern-classification and machine learning applications.
The goal is to project a dataset onto a lower-dimensional space with good class-separability in order to avoid overfitting ("curse of dimensionality") and also reduce computational costs.
Ronald A. Fisher formulated the Linear Discriminant in 1936 (The Use of Multiple Measurements in Taxonomic Problems), and it also has some practical uses as a classifier. The original Linear Discriminant was described for a 2-class problem, and it was later generalized as "multi-class Linear Discriminant Analysis" or "Multiple Discriminant Analysis" by C. R. Rao in 1948 (The utilization of multiple measurements in problems of biological classification).
The general LDA approach is very similar to a Principal Component Analysis (for more information about the PCA, see the previous article Implementing a Principal Component Analysis (PCA) in Python step by step), but in addition to finding the component axes that maximize the variance of our data (PCA), we are additionally interested in the axes that maximize the separation between multiple classes (LDA).
So, in a nutshell, often the goal of an LDA is to project a feature space (a dataset of n-dimensional samples) onto a smaller subspace $k$ (where $k \leq n-1$) while maintaining the class-discriminatory information.
In general, dimensionality reduction does not only help reducing computational costs for a given classification task, but it can also be helpful to avoid overfitting by minimizing the error in parameter estimation ("curse of dimensionality").
<br>
<br>
Principal Component Analysis vs. Linear Discriminant Analysis
[back to top]
Both Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are linear transformation techniques that are commonly used for dimensionality reduction. PCA can be described as an "unsupervised" algorithm, since it "ignores" class labels and its goal is to find the directions (the so-called principal components) that maximize the variance in a dataset.
In contrast to PCA, LDA is "supervised" and computes the directions ("linear discriminants") that will represent the axes that maximize the separation between multiple classes.
Although it might sound intuitive that LDA is superior to PCA for a multi-class classification task where the class labels are known, this might not always be the case.
For example, comparisons between classification accuracies for image recognition after using PCA or LDA show that PCA tends to outperform LDA if the number of samples per class is relatively small (PCA vs. LDA, A.M. Martinez et al., 2001).
In practice, it is also not uncommon to use both LDA and PCA in combination: E.g., PCA for dimensionality reduction followed by an LDA.
<br>
<br>
<br>
<br>
What is a "good" feature subspace?
[back to top]
Let's assume that our goal is to reduce the dimensions of a $d$-dimensional dataset by projecting it onto a $k$-dimensional subspace (where $k\;<\;d$).
So, how do we know what size we should choose for $k$ ($k$ = the number of dimensions of the new feature subspace), and how do we know if we have a feature space that represents our data "well"?
Later, we will compute eigenvectors (the components) from our dataset and collect them in so-called scatter matrices (i.e., the between-class scatter matrix and within-class scatter matrix).
Each of these eigenvectors is associated with an eigenvalue, which tells us about the "length" or "magnitude" of the eigenvectors.
If we observe that all eigenvalues have a similar magnitude, then this may be a good indicator that our data is already projected onto a "good" feature space.
And in the other scenario, if some of the eigenvalues are much much larger than others, we might be interested in keeping only those eigenvectors with the highest eigenvalues, since they contain more information about our data distribution. Vice versa, eigenvalues that are close to 0 are less informative and we might consider dropping those for constructing the new feature subspace.
<br>
<br>
Summarizing the LDA approach in 5 steps
[back to top]
Listed below are the 5 general steps for performing a linear discriminant analysis; we will explore them in more detail in the following sections.
Compute the $d$-dimensional mean vectors for the different classes from the dataset.
Compute the scatter matrices (in-between-class and within-class scatter matrix).
Compute the eigenvectors ($\pmb e_1, \; \pmb e_2, \; ..., \; \pmb e_d$) and corresponding eigenvalues ($\pmb \lambda_1, \; \pmb \lambda_2, \; ..., \; \pmb \lambda_d$) for the scatter matrices.
Sort the eigenvectors by decreasing eigenvalues and choose the $k$ eigenvectors with the largest eigenvalues to form a $d \times k$ dimensional matrix $\pmb W\;$ (where every column represents an eigenvector).
Use this $d \times k$ eigenvector matrix to transform the samples onto the new subspace. This can be summarized by the mathematical equation: $\pmb Y = \pmb X \times \pmb W$ (where $\pmb X$ is a $n \times d$-dimensional matrix representing the $n$ samples, and $\pmb Y$ are the transformed $n \times k$-dimensional samples in the new subspace).
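The five steps can be condensed into a short NumPy sketch on synthetic two-class data (the data here is synthetic, not the Iris set used below; with two classes, at most $c-1 = 1$ discriminant is informative):

```python
import numpy as np

rng = np.random.RandomState(0)
# Two synthetic classes in d=3 dimensions, the first shifted along axis 0.
X = np.vstack([rng.randn(20, 3) + [2, 0, 0], rng.randn(20, 3)])
y = np.array([1] * 20 + [2] * 20)

d = X.shape[1]
overall_mean = X.mean(axis=0)
S_W, S_B = np.zeros((d, d)), np.zeros((d, d))
for c in np.unique(y):                         # steps 1-2: mean vectors and scatter matrices
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    S_W += (Xc - mc).T @ (Xc - mc)             # within-class scatter
    diff = (mc - overall_mean).reshape(d, 1)
    S_B += len(Xc) * diff @ diff.T             # between-class scatter

eig_vals, eig_vecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)   # step 3
order = np.argsort(eig_vals.real)[::-1]        # step 4: rank by eigenvalue
W = eig_vecs[:, order[:1]].real                # keep k=1 discriminant
Y = X @ W                                      # step 5: project the samples
```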
<a name="sample_data"></a>
<br>
<br>
Preparing the sample data set
[back to top]
<a name="sample_data"></a>
<br>
<br>
About the Iris dataset
[back to top]
For the following tutorial, we will be working with the famous "Iris" dataset that has been deposited on the UCI machine learning repository
(https://archive.ics.uci.edu/ml/datasets/Iris).
<font size="1">
Reference:
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science.</font>
The iris dataset contains measurements for 150 iris flowers from three different species.
The three classes in the Iris dataset:
Iris-setosa (n=50)
Iris-versicolor (n=50)
Iris-virginica (n=50)
The four features of the Iris dataset:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
End of explanation
"""
import pandas as pd
df = pd.read_csv(
filepath_or_buffer='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
header=None,
sep=',',
)
df.columns = [l for i,l in sorted(feature_dict.items())] + ['class label']
df.dropna(how="all", inplace=True) # to drop the empty line at file-end
df.tail()
"""
Explanation: <a name="sample_data"></a>
<br>
<br>
Reading in the dataset
[back to top]
End of explanation
"""
from sklearn.preprocessing import LabelEncoder
X = df.iloc[:, 0:4].values  # select the four feature columns by position
y = df['class label'].values
enc = LabelEncoder()
label_encoder = enc.fit(y)
y = label_encoder.transform(y) + 1
label_dict = {1: 'Setosa', 2: 'Versicolor', 3:'Virginica'}
"""
Explanation: $\pmb X = \begin{bmatrix} x_{1_{\text{sepal length}}} & x_{1_{\text{sepal width}}} & x_{1_{\text{petal length}}} & x_{1_{\text{petal width}}}\
x_{2_{\text{sepal length}}} & x_{2_{\text{sepal width}}} & x_{2_{\text{petal length}}} & x_{2_{\text{petal width}}}\
... \
x_{150_{\text{sepal length}}} & x_{150_{\text{sepal width}}} & x_{150_{\text{petal length}}} & x_{150_{\text{petal width}}}\
\end{bmatrix}, \;\;
\pmb y = \begin{bmatrix} \omega_{\text{setosa}}\
\omega_{\text{setosa}}\
... \
\omega_{\text{virginica}}\end{bmatrix}$
<a name="sample_data"></a>
<br>
<br>
Since it is more convenient to work with numerical values, we will use the LabelEncoder from the scikit-learn library to convert the class labels into numbers: 1, 2, and 3.
End of explanation
"""
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import math
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12,6))
for ax,cnt in zip(axes.ravel(), range(4)):
# set bin sizes
min_b = math.floor(np.min(X[:,cnt]))
max_b = math.ceil(np.max(X[:,cnt]))
bins = np.linspace(min_b, max_b, 25)
    # plotting the histograms
for lab,col in zip(range(1,4), ('blue', 'red', 'green')):
ax.hist(X[y==lab, cnt],
color=col,
label='class %s' %label_dict[lab],
bins=bins,
alpha=0.5,)
ylims = ax.get_ylim()
# plot annotation
leg = ax.legend(loc='upper right', fancybox=True, fontsize=8)
leg.get_frame().set_alpha(0.5)
ax.set_ylim([0, max(ylims)+2])
ax.set_xlabel(feature_dict[cnt])
ax.set_title('Iris histogram #%s' %str(cnt+1))
# hide axis ticks
    ax.tick_params(axis="both", which="both", bottom=False, top=False,
            labelbottom=True, left=False, right=False, labelleft=True)
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
axes[0][0].set_ylabel('count')
axes[1][0].set_ylabel('count')
fig.tight_layout()
plt.show()
"""
Explanation: $\pmb y = \begin{bmatrix}{\text{setosa}}\
{\text{setosa}}\
... \
{\text{virginica}}\end{bmatrix} \quad \Rightarrow
\begin{bmatrix} {\text{1}}\
{\text{1}}\
... \
{\text{3}}\end{bmatrix}$
<a name="sample_data"></a>
<br>
<br>
Histograms and feature selection
[back to top]
Just to get a rough idea how the samples of our three classes $\omega_1$, $\omega_2$ and $\omega_3$ are distributed, let us visualize the distributions of the four different features in 1-dimensional histograms.
End of explanation
"""
from sklearn import preprocessing
X = preprocessing.scale(X, axis=0, with_mean=True, with_std=True)
"""
Explanation: From just looking at these simple graphical representations of the features, we can already tell that the petal lengths and widths are likely better suited as potential features to separate the three flower classes. In practice, instead of reducing the dimensionality via a projection (here: LDA), a good alternative would be a feature selection technique. For low-dimensional datasets like Iris, a glance at those histograms would already be very informative. Another simple but very useful technique would be to use feature selection algorithms, which I have described in more detail in another article: Feature Selection Algorithms in Python
<a name="sample_data"></a>
<br>
<br>
Standardization
[back to top]
Normalization is an important part of every data pre-processing step and typically a requirement for the best performance of many machine learning algorithms.
The two most popular approaches for data normalization are the so-called "standardization" and "min-max scaling".
Standardization (or Z-score normalization): Rescaling of the features so that they'll have the properties of a standard normal distribution with μ=0 and σ=1 (i.e., unit variance centered around the mean).
Min-max scaling: Rescaling of the features to unit range, typically a range between 0 and 1. Quite often, min-max scaling is also just called "normalization", which can be quite confusing depending on the context where the term is being used.
Both are very important procedures, so that I have also a separate article about it with more details: About Feature Scaling and Normalization.
In our case, although the features are already on the same scale (measured in centimeters), we still want to scale the features to unit variance (σ=1, μ=0).
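A minimal sketch of the two rescalings on a toy vector (the values are illustrative only):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# Standardization (z-score): zero mean, unit standard deviation.
z = (x - x.mean()) / x.std()

# Min-max scaling: rescale into the [0, 1] range.
m = (x - x.min()) / (x.max() - x.min())
```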
End of explanation
"""
np.set_printoptions(precision=4)
mean_vectors = []
for cl in range(1,4):
mean_vectors.append(np.mean(X[y==cl], axis=0))
print('Mean Vector class %s: %s\n' %(cl, mean_vectors[cl-1]))
"""
Explanation: <a name="sample_data"></a>
<br>
<br>
Normality assumptions
[back to top]
It should be mentioned that LDA assumes normally distributed data, features that are statistically independent, and identical covariance matrices for every class. However, this only applies for LDA as a classifier; LDA for dimensionality reduction can also work reasonably well if those assumptions are violated. And even for classification tasks, LDA can be quite robust to the distribution of the data:
"linear discriminant analysis frequently achieves good performances in
the tasks of face and object recognition, even though the assumptions
of common covariance matrix among groups and normality are often
violated (Duda, et al., 2001)" (Tao Li, et al., 2006).
<br>
<font size="1">References:
Tao Li, Shenghuo Zhu, and Mitsunori Ogihara. “Using Discriminant Analysis for Multi-Class Classification: An Experimental Investigation.” Knowledge and Information Systems 10, no. 4 (2006): 453–72.)
Duda, Richard O, Peter E Hart, and David G Stork. 2001. Pattern Classification. New York: Wiley.</font>
<a name="sample_data"></a>
<br>
<br>
LDA in 5 steps
[back to top]
After we went through several preparation steps, our data is finally ready for the actual LDA. In practice, LDA for dimensionality reduction would be just another preprocessing step for a typical machine learning or pattern classification task.
<a name="sample_data"></a>
<br>
<br>
Step 1: Computing the d-dimensional mean vectors
[back to top]
In this first step, we will start off with a simple computation of the mean vectors $\pmb m_i$, $(i = 1,2,3)$ of the 3 different flower classes:
$\pmb m_i = \begin{bmatrix}
\mu_{\omega_i (\text{sepal length)}}\
\mu_{\omega_i (\text{sepal width})}\
\mu_{\omega_i (\text{petal length)}}\
\mu_{\omega_i (\text{petal width})}\
\end{bmatrix} \; , \quad \text{with} \quad i = 1,2,3$
End of explanation
"""
S_W = np.zeros((4,4))
for cl,mv in zip(range(1,4), mean_vectors):
class_sc_mat = np.zeros((4,4)) # scatter matrix for every class
for row in X[y == cl]:
row, mv = row.reshape(4,1), mv.reshape(4,1) # make column vectors
class_sc_mat += (row-mv).dot((row-mv).T)
S_W += class_sc_mat # sum class scatter matrices
print('within-class Scatter Matrix:\n', S_W)
"""
Explanation: <a name="sample_data"></a>
<br>
<br>
<a name="sc_matrix"></a>
Step 2: Computing the Scatter Matrices
[back to top]
Now, we will compute the two 4x4-dimensional matrices: The within-class and the between-class scatter matrix.
<br>
<br>
2.1 Within-class scatter matrix $S_W$
[back to top]
The within-class scatter matrix $S_W$ is computed by the following equation:
$S_W = \sum\limits_{i=1}^{c} S_i$
where
$S_i = \sum\limits_{\pmb x \in D_i}^n (\pmb x - \pmb m_i)\;(\pmb x - \pmb m_i)^T$
(scatter matrix for every class)
and $\pmb m_i$ is the mean vector
$\pmb m_i = \frac{1}{n_i} \sum\limits_{\pmb x \in D_i}^n \; \pmb x$
End of explanation
"""
overall_mean = np.mean(X, axis=0)
S_B = np.zeros((4,4))
for i,mean_vec in enumerate(mean_vectors):
n = X[y==i+1,:].shape[0]
mean_vec = mean_vec.reshape(4,1) # make column vector
overall_mean = overall_mean.reshape(4,1) # make column vector
S_B += n * (mean_vec - overall_mean).dot((mean_vec - overall_mean).T)
print('between-class Scatter Matrix:\n', S_B)
"""
Explanation: <br>
2.1 b
Alternatively, we could also compute the class-covariance matrices by adding the scaling factor $\frac{1}{N-1}$ to the within-class scatter matrix, so that our equation becomes
$\Sigma_i = \frac{1}{N_{i}-1} \sum\limits_{\pmb x \in D_i}^n (\pmb x - \pmb m_i)\;(\pmb x - \pmb m_i)^T$.
and $S_W = \sum\limits_{i=1}^{c} (N_{i}-1) \Sigma_i$
where $N_{i}$ is the sample size of the respective class (here: 50), and in this particular case, we can drop the term ($N_{i}-1)$
since all classes have the same sample size.
However, the resulting eigenspaces will be identical (identical eigenvectors, only the eigenvalues are scaled differently by a constant factor).
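This equivalence can be checked numerically on toy data (the random data here is illustrative; np.cov with rowvar=False uses the $N-1$ denominator by default):

```python
import numpy as np

rng = np.random.RandomState(1)
Xc = rng.randn(50, 4)            # one class with N=50 samples, d=4 features
mc = Xc.mean(axis=0)

# Scatter matrix S_i, exactly as in the loop above.
S_i = (Xc - mc).T @ (Xc - mc)

# Class covariance Sigma_i; np.cov(rowvar=False) divides by N-1,
# so S_i == (N-1) * Sigma_i.
Sigma_i = np.cov(Xc, rowvar=False)
```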
<br>
<br>
2.2 Between-class scatter matrix $S_B$
[back to top]
The between-class scatter matrix $S_B$ is computed by the following equation:
$S_B = \sum\limits_{i=1}^{c} N_{i} (\pmb m_i - \pmb m) (\pmb m_i - \pmb m)^T$
where
$\pmb m$ is the overall mean, and $\pmb m_{i}$ and $N_{i}$ are the sample mean and sizes of the respective classes.
End of explanation
"""
eig_vals, eig_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
for i in range(len(eig_vals)):
eigvec_sc = eig_vecs[:,i].reshape(4,1)
print('\nEigenvector {}: \n{}'.format(i+1, eigvec_sc.real))
print('Eigenvalue {:}: {:.2e}'.format(i+1, eig_vals[i].real))
"""
Explanation: <br>
<br>
Step 3: Solving the generalized eigenvalue problem for the matrix $S_{W}^{-1}S_B$
[back to top]
Next, we will solve the generalized eigenvalue problem for the matrix $S_{W}^{-1}S_B$ to obtain the linear discriminants.
End of explanation
"""
for i in range(len(eig_vals)):
eigv = eig_vecs[:,i].reshape(4,1)
np.testing.assert_array_almost_equal(np.linalg.inv(S_W).dot(S_B).dot(eigv),
eig_vals[i] * eigv,
decimal=6, err_msg='', verbose=True)
print('ok')
"""
Explanation: <br>
<br>
After this decomposition of our square matrix into eigenvectors and eigenvalues, let us briefly recapitulate how we can interpret those results. As we remember from our first linear algebra class in high school or college, both eigenvectors and eigenvalues provide us with information about the distortion of a linear transformation: the eigenvectors are basically the directions of this distortion, and the eigenvalues are the scaling factors for the eigenvectors, describing the magnitude of the distortion.
If we are performing the LDA for dimensionality reduction, the eigenvectors are important since they will form the new axes of our new feature subspace; the associated eigenvalues are of particular interest since they will tell us how "informative" the new "axes" are.
Let us briefly double-check our calculation and talk more about the eigenvalues in the next section.
<br>
<br>
Checking the eigenvector-eigenvalue calculation
[back to top]
A quick check that the eigenvector-eigenvalue calculation is correct and satisfies the equation:
$\pmb A\pmb{v} = \lambda\pmb{v}$
<br>
where
$\pmb A = S_{W}^{-1}S_B\
\pmb{v} = \; \text{Eigenvector}\
\lambda = \; \text{Eigenvalue}$
End of explanation
"""
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs = sorted(eig_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for i in eig_pairs:
print(i[0])
"""
Explanation: <br>
<br>
Step 4: Selecting linear discriminants for the new feature subspace
[back to top]
<br>
<br>
4.1. Sorting the eigenvectors by decreasing eigenvalues
[back to top]
Remember from the introduction that we are not only interested in merely projecting the data into a subspace that improves the class separability, but also in reducing the dimensionality of our feature space (where the eigenvectors will form the axes of this new feature subspace).
However, the eigenvectors only define the directions of the new axes, since they all have the same unit length 1.
So, in order to decide which eigenvector(s) we want to drop for our lower-dimensional subspace, we have to take a look at the corresponding eigenvalues of the eigenvectors. Roughly speaking, the eigenvectors with the lowest eigenvalues bear the least information about the distribution of the data, and those are the ones we want to drop.
The common approach is to rank the eigenvectors from highest to lowest corresponding eigenvalue and choose the top $k$ eigenvectors.
End of explanation
"""
print('Variance explained:\n')
eigv_sum = sum(eig_vals)
for i,j in enumerate(eig_pairs):
print('eigenvalue {0:}: {1:.2%}'.format(i+1, (j[0]/eigv_sum).real))
"""
Explanation: <br>
<br>
If we take a look at the eigenvalues, we can already see that two eigenvalues are close to 0, and conclude that their eigenpairs are less informative than the other two. Let's express the "explained variance" as a percentage:
End of explanation
"""
W = np.hstack((eig_pairs[0][1].reshape(4,1), eig_pairs[1][1].reshape(4,1)))
print('Matrix W:\n', W.real)
"""
Explanation: <br>
<br>
The first eigenpair is by far the most informative one, and we won't lose much information if we form a 1D feature space based on this eigenpair.
<br>
<br>
4.2. Choosing k eigenvectors with the largest eigenvalues
[back to top]
After sorting the eigenpairs by decreasing eigenvalues, it is now time to construct our $d \times k$-dimensional eigenvector matrix $\pmb W$ (here $4 \times 2$: based on the 2 most informative eigenpairs) and thereby reduce the initial 4-dimensional feature space into a 2-dimensional feature subspace.
End of explanation
"""
X_lda = X.dot(W)
assert X_lda.shape == (150,2), "The matrix is not 150x2 dimensional."
from matplotlib import pyplot as plt
def plot_step_lda():
ax = plt.subplot(111)
for label,marker,color in zip(
range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):
plt.scatter(x=X_lda[:,0].real[y == label],
y=X_lda[:,1].real[y == label],
marker=marker,
color=color,
alpha=0.5,
label=label_dict[label]
)
plt.xlabel('LD1')
plt.ylabel('LD2')
leg = plt.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.5)
plt.title('LDA: Iris projection onto the first 2 linear discriminants')
# hide axis ticks
    plt.tick_params(axis="both", which="both", bottom=False, top=False,
            labelbottom=True, left=False, right=False, labelleft=True)
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.grid()
    plt.tight_layout()
plt.show()
plot_step_lda()
"""
Explanation: <br>
<br>
Step 5: Transforming the samples onto the new subspace
[back to top]
In the last step, we use the $4 \times 2$-dimensional matrix $\pmb W$ that we just computed to transform our samples onto the new subspace via the equation
$\pmb Y = \pmb X \times \pmb W $.
(where $\pmb X$ is a $n \times d$-dimensional matrix representing the $n$ samples, and $\pmb Y$ are the transformed $n \times k$-dimensional samples in the new subspace).
End of explanation
"""
from sklearn.decomposition import PCA as sklearnPCA
sklearn_pca = sklearnPCA(n_components=2)
X_pca = sklearn_pca.fit_transform(X)
def plot_pca():
ax = plt.subplot(111)
for label,marker,color in zip(
range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):
plt.scatter(x=X_pca[:,0][y == label],
y=X_pca[:,1][y == label],
marker=marker,
color=color,
alpha=0.5,
label=label_dict[label]
)
plt.xlabel('PC1')
plt.ylabel('PC2')
leg = plt.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.5)
plt.title('PCA: Iris projection onto the first 2 principal components')
# hide axis ticks
    plt.tick_params(axis="both", which="both", bottom=False, top=False,
            labelbottom=True, left=False, right=False, labelleft=True)
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
    plt.tight_layout()
plt.grid()
plt.show()
plot_pca()
plot_step_lda()
"""
Explanation: The scatter plot above represents our new feature subspace that we constructed via LDA. We can see that the first linear discriminant "LD1" separates the classes quite nicely. However, the second discriminant, "LD2", does not add much valuable information, which we already concluded when we looked at the ranked eigenvalues in step 4.
<br>
<br>
A comparison of PCA and LDA
[back to top]
In order to compare the feature subspace that we obtained via the Linear Discriminant Analysis, we will use the PCA class from the scikit-learn machine-learning library. The documentation can be found here:
http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html.
For our convenience, we can directly specify how many components of our input dataset we want to retain via the n_components parameter.
n_components : int, None or string
Number of components to keep. if n_components is not set all components are kept:
n_components == min(n_samples, n_features)
if n_components == ‘mle’, Minka’s MLE is used to guess the dimension.
If 0 < n_components < 1, select the number of components such that the amount of variance
that needs to be explained is greater than the percentage specified by n_components.
<br>
<br>
But before we skip to the results of the respective linear transformations, let us quickly recapitulate the purposes of PCA and LDA: PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes for best class separability. In practice, often a PCA is done first for dimensionality reduction, followed by an LDA.
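As a hedged sketch of that combination (using scikit-learn's current class names and its bundled Iris loader, which differ from the imports used elsewhere in this notebook):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline

X_iris, y_iris = load_iris(return_X_y=True)

# PCA first for (unsupervised) dimensionality reduction,
# then LDA for the class-separating directions.
pipe = Pipeline([
    ('pca', PCA(n_components=3)),
    ('lda', LinearDiscriminantAnalysis(n_components=2)),
])
X_2d = pipe.fit_transform(X_iris, y_iris)
```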
<br>
<br>
<br>
<br>
End of explanation
"""
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA  # sklearn.lda.LDA in scikit-learn < 0.17
# LDA
sklearn_lda = LDA(n_components=2)
X_lda_sklearn = sklearn_lda.fit_transform(X, y)
def plot_scikit_lda(X, title, mirror=1):
ax = plt.subplot(111)
for label,marker,color in zip(
range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):
plt.scatter(x=X[:,0][y == label]*mirror,
y=X[:,1][y == label],
marker=marker,
color=color,
alpha=0.5,
label=label_dict[label]
)
plt.xlabel('LD1')
plt.ylabel('LD2')
leg = plt.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.5)
plt.title(title)
# hide axis ticks
    plt.tick_params(axis="both", which="both", bottom=False, top=False,
            labelbottom=True, left=False, right=False, labelleft=True)
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.grid()
    plt.tight_layout()
plt.show()
plot_step_lda()
plot_scikit_lda(X_lda_sklearn, title='Default LDA via scikit-learn', mirror=(-1))
"""
Explanation: <br>
<br>
The two plots above nicely confirm what we discussed before: whereas PCA accounts for the most variance in the whole dataset, LDA gives us the axes that account for the most variance between the individual classes.
<br>
<br>
LDA via scikit-learn
[back to top]
Now, after we have seen how a Linear Discriminant Analysis works in this step-wise approach, there is also a more convenient way via the LDA class implemented in the scikit-learn machine learning library.
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/io/tutorials/postgresql.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow IO Authors.
End of explanation
"""
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
"""
Explanation: TensorFlow IO から PostgreSQL データベースを読み取る
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/postgresql"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.orgで表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示{</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/io/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード/a0}</a></td>
</table>
Overview
This tutorial shows how to create a tf.data.Dataset from a PostgreSQL database server, and pass the created Dataset to tf.keras for training or inference.
A SQL database is an important source of data for data scientists. As one of the most popular open source SQL databases, PostgreSQL is widely used in enterprises for storing critical and transactional data. Creating a Dataset directly from a PostgreSQL database server and passing the Dataset to tf.keras for training or inference greatly simplifies the data pipeline, letting data scientists focus on building machine learning models.
Setup and usage
Install the required tensorflow-io package, and restart the runtime
End of explanation
"""
# Install postgresql server
!sudo apt-get -y -qq update
!sudo apt-get -y -qq install postgresql
!sudo service postgresql start
# Setup a password `postgres` for username `postgres`
!sudo -u postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'postgres';"
# Setup a database with name `tfio_demo` to be used
!sudo -u postgres psql -U postgres -c 'DROP DATABASE IF EXISTS tfio_demo;'
!sudo -u postgres psql -U postgres -c 'CREATE DATABASE tfio_demo;'
"""
Explanation: Install and setup PostgreSQL (optional)
Note: This notebook is designed to be run in Google Colab only. It installs packages on the system and requires sudo access. If you want to run it in a local Jupyter notebook, please proceed with caution.
In order to demo the usage on Google Colab, you will install the PostgreSQL server. A password and an empty database are also needed.
If you are not running this notebook on Google Colab, or you prefer to use an existing database, please skip the following setup and proceed to the next section.
End of explanation
"""
%env TFIO_DEMO_DATABASE_NAME=tfio_demo
%env TFIO_DEMO_DATABASE_HOST=localhost
%env TFIO_DEMO_DATABASE_PORT=5432
%env TFIO_DEMO_DATABASE_USER=postgres
%env TFIO_DEMO_DATABASE_PASS=postgres
"""
Explanation: Setup the necessary environment variables
The following environment variables are based on the PostgreSQL setup in the previous section. If you have a different setup, or you are using an existing database, they should be changed accordingly.
End of explanation
"""
!curl -s -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/postgresql/AirQualityUCI.sql
!PGPASSWORD=$TFIO_DEMO_DATABASE_PASS psql -q -h $TFIO_DEMO_DATABASE_HOST -p $TFIO_DEMO_DATABASE_PORT -U $TFIO_DEMO_DATABASE_USER -d $TFIO_DEMO_DATABASE_NAME -f AirQualityUCI.sql
"""
Explanation: Prepare data on the PostgreSQL server
For demo purposes, this tutorial will create a database and populate it with some data. The data used in this tutorial is from the Air Quality Data Set, available from the UCI Machine Learning Repository.
Below is a preview of a subset of the Air Quality Data Set:
Date | Time | CO(GT) | PT08.S1(CO) | NMHC(GT) | C6H6(GT) | PT08.S2(NMHC) | NOx(GT) | PT08.S3(NOx) | NO2(GT) | PT08.S4(NO2) | PT08.S5(O3) | T | RH | AH
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
10/03/2004 | 18.00.00 | 2,6 | 1360 | 150 | 11,9 | 1046 | 166 | 1056 | 113 | 1692 | 1268 | 13,6 | 48,9 | 0,7578
10/03/2004 | 19.00.00 | 2 | 1292 | 112 | 9,4 | 955 | 103 | 1174 | 92 | 1559 | 972 | 13,3 | 47,7 | 0,7255
10/03/2004 | 20.00.00 | 2,2 | 1402 | 88 | 9,0 | 939 | 131 | 1140 | 114 | 1555 | 1074 | 11,9 | 54,0 | 0,7502
10/03/2004 | 21.00.00 | 2,2 | 1376 | 80 | 9,2 | 948 | 172 | 1092 | 122 | 1584 | 1203 | 11,0 | 60,0 | 0,7867
10/03/2004 | 22.00.00 | 1,6 | 1272 | 51 | 6,5 | 836 | 131 | 1205 | 116 | 1490 | 1110 | 11,2 | 59,6 | 0,7888
For more information about the Air Quality Data Set and the UCI Machine Learning Repository, see the References section.
To help keep the data preparation simple, a SQL version of the Air Quality Data Set has been prepared and is available as AirQualityUCI.sql.
The statement to create the table is:
CREATE TABLE AirQualityUCI (
Date DATE,
Time TIME,
CO REAL,
PT08S1 INT,
NMHC REAL,
C6H6 REAL,
PT08S2 INT,
NOx REAL,
PT08S3 INT,
NO2 REAL,
PT08S4 INT,
PT08S5 INT,
T REAL,
RH REAL,
AH REAL
);
The complete commands to create the table in the database and populate it with the data are:
End of explanation
"""
import os
import tensorflow_io as tfio
endpoint="postgresql://{}:{}@{}?port={}&dbname={}".format(
os.environ['TFIO_DEMO_DATABASE_USER'],
os.environ['TFIO_DEMO_DATABASE_PASS'],
os.environ['TFIO_DEMO_DATABASE_HOST'],
os.environ['TFIO_DEMO_DATABASE_PORT'],
os.environ['TFIO_DEMO_DATABASE_NAME'],
)
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT co, pt08s1 FROM AirQualityUCI;",
endpoint=endpoint)
print(dataset.element_spec)
"""
Explanation: Create a Dataset from the PostgreSQL server and use it in TensorFlow
Creating a dataset from a PostgreSQL server is as easy as calling tfio.experimental.IODataset.from_sql with query and endpoint arguments. The query is the SQL query for selected columns in the tables, and the endpoint argument is the address and database name.
End of explanation
"""
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT nox, no2 FROM AirQualityUCI;",
endpoint=endpoint)
dataset = dataset.map(lambda e: (e['nox'] - e['no2']))
# Check only the first 20 records
dataset = dataset.take(20)
print("NOx - NO2:")
for difference in dataset:
print(difference.numpy())
"""
Explanation: As you can see from the output of dataset.element_spec above, the element of the created Dataset is a Python dict object whose keys are the column names of the database table, which makes it quite convenient to apply further operations. For example, you could select both the nox and no2 fields of the Dataset and calculate their difference.
End of explanation
"""
|
Jackie789/JupyterNotebooks | CorrectingForAssumptions.ipynb | gpl-3.0 | import math
import warnings
from IPython.display import display
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn import linear_model
#import statsmodels.formula.api as smf
#import statsmodels as smf
# Display preferences.
%matplotlib inline
pd.options.display.float_format = '{:.3f}'.format
# Suppress annoying harmless error.
warnings.filterwarnings(
action="ignore",
module="scipy",
message="^internal gelsd"
)
# Acquire, load, and preview the data.
data = pd.read_csv(
'http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv',
index_col=0
)
display(data.head())
# Instantiate and fit our model.
regr = linear_model.LinearRegression()
Y = data['Sales'].values.reshape(-1, 1)
X = data[['TV','Radio','Newspaper']]
regr.fit(X, Y)
# Inspect the results.
print('\nCoefficients: \n', regr.coef_)
print('\nIntercept: \n', regr.intercept_)
print('\nR-squared:')
print(regr.score(X, Y))
# Extract predicted values.
predicted = regr.predict(X).ravel()
actual = data['Sales']
# Calculate the error, also called the residual.
residual = actual - predicted
sns.set_style("whitegrid")
# This looks a bit concerning.
plt.hist(residual)
plt.title('Residual counts (skewed to the right)')
plt.xlabel('Residual')
plt.ylabel('Count')
plt.show()
plt.scatter(predicted, residual)
plt.xlabel('Predicted')
plt.ylabel('Residual')
plt.axhline(y=0)
plt.title('Residual vs. Predicted (heteroscedasticity present)')
plt.show()
"""
Explanation: Engineering Existing Data to Follow Multivariate Linear Regression Assumptions
Jackie Zuker
Assumptions:
Linear relationship - Features should have a linear relationship with the outcome
Multivariate normality - The error from the model should be normally distributed
Homoscedasticity - The distribution of error should be consistent for all predicted values
Low multicollinearity - correlations between features should be low or non-existent
The model in use has problems with heteroscedasticity and multivariate non-normality.
End of explanation
"""
plt.hist(actual)
plt.show()
sqrt_actual = np.sqrt(actual)
plt.hist(sqrt_actual)
plt.show()
"""
Explanation: As shown above, the error from the model is not normally distributed. The error is skewed to the right, similar to the raw data itself.
Additionally, the distribution of the error terms is not consistent; it is heteroscedastic.
Inspect the Data and Transform
The data is skewed to the right. The data is transformed by taking its square root to see if we can obtain a more normal distribution.
End of explanation
"""
# Extract predicted values.
predicted = regr.predict(X).ravel()
actual = data['Sales']
# Calculate the error, also called the residual.
corr_residual = sqrt_actual - predicted
plt.hist(corr_residual)
plt.title('Residual counts')
plt.xlabel('Residual')
plt.ylabel('Count')
plt.show()
"""
Explanation: That's a little better. Has this helped the multivariate normality? Yes.
End of explanation
"""
plt.scatter(predicted, corr_residual)
plt.xlabel('Predicted')
plt.ylabel('Residual')
plt.axhline(y=0)
plt.title('Residual vs. Predicted')
plt.show()
"""
Explanation: Taking the square root of the data lessened the right skew and allowed the error from the model to be more normally distributed. Let's see if our transformation also helped the problem with heteroscedasticity.
Homoscedasticity
End of explanation
"""
|
google/neural-tangents | notebooks/weight_space_linearization.ipynb | apache-2.0 | !pip install --upgrade pip
!pip install -q tensorflow-datasets
!pip install --upgrade jax[cuda11_cudnn805] -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install -q git+https://www.github.com/google/neural-tangents
from jax import jit
from jax import grad
from jax import random
import jax.numpy as np
from jax.example_libraries.stax import logsoftmax
from jax.example_libraries import optimizers
import tensorflow_datasets as tfds
import neural_tangents as nt
from neural_tangents import stax
def process_data(data_chunk):
"""Flatten the images and one-hot encode the labels."""
image, label = data_chunk['image'], data_chunk['label']
samples = image.shape[0]
image = np.array(np.reshape(image, (samples, -1)), dtype=np.float32)
image = (image - np.mean(image)) / np.std(image)
label = np.eye(10)[label]
return {'image': image, 'label': label}
"""
Explanation: <a href="https://colab.research.google.com/github/google/neural-tangents/blob/main/notebooks/weight_space_linearization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Import & Utils
Install JAX, Tensorflow Datasets, and Neural Tangents
The first line specifies the version of jaxlib that we would like to import. Note that "cp36" specifies the version of Python (version 3.6) used by JAX. Make sure your colab kernel matches this version.
End of explanation
"""
learning_rate = 1.0
batch_size = 128
training_epochs = 5
steps_per_epoch = 50000 // batch_size
"""
Explanation: Weight Space Linearization
Setup some experiment parameters.
End of explanation
"""
train_data = tfds.load('mnist:3.*.*', split=tfds.Split.TRAIN)
train_data = tfds.as_numpy(
train_data.shuffle(1024).batch(batch_size).repeat(training_epochs))
test_data = tfds.load('mnist:3.*.*', split=tfds.Split.TEST)
"""
Explanation: Create MNIST data pipeline using TensorFlow Datasets.
End of explanation
"""
init_fn, f, _ = stax.serial(
stax.Dense(512, 1., 0.05),
stax.Erf(),
stax.Dense(10, 1., 0.05))
key = random.PRNGKey(0)
_, params = init_fn(key, (-1, 784))
"""
Explanation: Create a Fully-Connected Network.
End of explanation
"""
f_lin = nt.linearize(f, params)
"""
Explanation: Linearize the network.
End of explanation
"""
opt_init, opt_apply, get_params = optimizers.momentum(learning_rate, 0.9)
state = opt_init(params)
lin_state = opt_init(params)
"""
Explanation: Create an optimizer and initialize it for the full network and the linearized network.
End of explanation
"""
loss = lambda fx, y_hat: -np.mean(logsoftmax(fx) * y_hat)
"""
Explanation: Create a cross-entropy loss function.
End of explanation
"""
grad_loss = jit(grad(lambda params, x, y: loss(f(params, x), y)))
grad_lin_loss = jit(grad(lambda params, x, y: loss(f_lin(params, x), y)))
"""
Explanation: Specialize the loss to compute gradients of the network and linearized network.
End of explanation
"""
print ('Epoch\tLoss\tLinear Loss')
epoch = 0
for i, batch in enumerate(train_data):
batch = process_data(batch)
X, Y = batch['image'], batch['label']
params = get_params(state)
state = opt_apply(i, grad_loss(params, X, Y), state)
lin_params = get_params(lin_state)
lin_state = opt_apply(i, grad_lin_loss(lin_params, X, Y), lin_state)
if i % steps_per_epoch == 0:
print('{}\t{:.4f}\t{:.4f}'.format(
epoch, loss(f(params, X), Y), loss(f_lin(lin_params, X), Y)))
epoch += 1
"""
Explanation: Train the network and its linearization.
End of explanation
"""
|
thewtex/SimpleITK-Notebooks | 61_Registration_Introduction_Continued.ipynb | apache-2.0 | import SimpleITK as sitk
# Utility method that either downloads data from the network or
# if already downloaded returns the file name for reading from disk (cached data).
from downloaddata import fetch_data as fdata
# Always write output to a separate directory, we don't want to pollute the source directory.
import os
OUTPUT_DIR = 'Output'
# GUI components (sliders, dropdown...).
from ipywidgets import interact, fixed
# Enable display of html.
from IPython.display import display, HTML
# Plots will be inlined.
%matplotlib inline
# Callbacks for plotting registration progress.
import registration_callbacks
"""
Explanation: <h1 align="center">Introduction to SimpleITKv4 Registration - Continued</h1>
ITK v4 Registration Components
<img src="ITKv4RegistrationComponentsDiagram.svg" style="width:700px"/><br><br>
Before starting with this notebook, please go over the first introductory notebook found here.
In this notebook we will visually assess registration by viewing the overlap between images using external viewers.
The two viewers we recommend for this task are ITK-SNAP and 3D Slicer. ITK-SNAP supports concurrent linked viewing between multiple instances of the program. 3D Slicer supports concurrent viewing of multiple volumes via alpha blending.
End of explanation
"""
def save_transform_and_image(transform, fixed_image, moving_image, outputfile_prefix):
"""
Write the given transformation to file, resample the moving_image onto the fixed_images grid and save the
result to file.
Args:
transform (SimpleITK Transform): transform that maps points from the fixed image coordinate system to the moving.
fixed_image (SimpleITK Image): resample onto the spatial grid defined by this image.
moving_image (SimpleITK Image): resample this image.
outputfile_prefix (string): transform is written to outputfile_prefix.tfm and resampled image is written to
outputfile_prefix.mha.
"""
resample = sitk.ResampleImageFilter()
resample.SetReferenceImage(fixed_image)
# SimpleITK supports several interpolation options, we go with the simplest that gives reasonable results.
resample.SetInterpolator(sitk.sitkLinear)
resample.SetTransform(transform)
sitk.WriteImage(resample.Execute(moving_image), outputfile_prefix+'.mha')
sitk.WriteTransform(transform, outputfile_prefix+'.tfm')
def DICOM_series_dropdown_callback(fixed_image, moving_image, series_dictionary):
"""
Callback from dropbox which selects the two series which will be used for registration.
The callback prints out some information about each of the series from the meta-data dictionary.
For a list of all meta-dictionary tags and their human readable names see DICOM standard part 6,
Data Dictionary (http://medical.nema.org/medical/dicom/current/output/pdf/part06.pdf)
"""
# The callback will update these global variables with the user selection.
global selected_series_fixed
global selected_series_moving
img_fixed = sitk.ReadImage(series_dictionary[fixed_image][0])
img_moving = sitk.ReadImage(series_dictionary[moving_image][0])
# There are many interesting tags in the DICOM data dictionary, display a selected few.
tags_to_print = {'0010|0010': 'Patient name: ',
'0008|0060' : 'Modality: ',
'0008|0021' : 'Series date: ',
'0008|0031' : 'Series time:',
'0008|0070' : 'Manufacturer: '}
html_table = []
html_table.append('<table><tr><td><b>Tag</b></td><td><b>Fixed Image</b></td><td><b>Moving Image</b></td></tr>')
for tag in tags_to_print:
fixed_tag = ''
moving_tag = ''
try:
fixed_tag = img_fixed.GetMetaData(tag)
except: # ignore if the tag isn't in the dictionary
pass
try:
moving_tag = img_moving.GetMetaData(tag)
except: # ignore if the tag isn't in the dictionary
pass
html_table.append('<tr><td>' + tags_to_print[tag] +
'</td><td>' + fixed_tag +
'</td><td>' + moving_tag + '</td></tr>')
html_table.append('</table>')
display(HTML(''.join(html_table)))
selected_series_fixed = fixed_image
selected_series_moving = moving_image
"""
Explanation: Utility functions
A number of utility functions, saving a transform and corresponding resampled image, callback for selecting a
DICOM series from several series found in the same directory.
End of explanation
"""
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))
# 'selected_series_moving/fixed' will be updated by the interact function.
selected_series_fixed = ''
selected_series_moving = ''
# Directory contains multiple DICOM studies/series, store the file names
# in dictionary with the key being the seriesID.
reader = sitk.ImageSeriesReader()
series_file_names = {}
series_IDs = reader.GetGDCMSeriesIDs(data_directory) #list of all series
if series_IDs: #check that we have at least one series
for series in series_IDs:
series_file_names[series] = reader.GetGDCMSeriesFileNames(data_directory, series)
interact(DICOM_series_dropdown_callback, fixed_image=series_IDs, moving_image =series_IDs, series_dictionary=fixed(series_file_names));
else:
print('This is surprising, data directory does not contain any DICOM series.')
# Actually read the data based on the user's selection.
reader.SetFileNames(series_file_names[selected_series_fixed])
fixed_image = reader.Execute()
reader.SetFileNames(series_file_names[selected_series_moving])
moving_image = reader.Execute()
# Save images to file and view overlap using external viewer.
sitk.WriteImage(fixed_image, os.path.join(OUTPUT_DIR, "fixedImage.mha"))
sitk.WriteImage(moving_image, os.path.join(OUTPUT_DIR, "preAlignment.mha"))
"""
Explanation: Loading Data
In this notebook we will work with CT and MR scans of the CIRS 057A multi-modality abdominal phantom. The scans are multi-slice DICOM images. The data is stored in a zip archive which is automatically retrieved and extracted when we request a file which is part of the archive.
End of explanation
"""
initial_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image,moving_image.GetPixelIDValue()),
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY)
# Save moving image after initial transform and view overlap using external viewer.
save_transform_and_image(initial_transform, fixed_image, moving_image, os.path.join(OUTPUT_DIR, "initialAlignment"))
"""
Explanation: Initial Alignment
A reasonable guesstimate for the initial translational alignment can be obtained by using
the CenteredTransformInitializer (functional interface to the CenteredTransformInitializerFilter).
The resulting transformation is centered with respect to the fixed image and the
translation aligns the centers of the two images. There are two options for
defining the centers of the images, either the physical centers
of the two data sets (GEOMETRY), or the centers defined by the intensity
moments (MOMENTS).
Two things to note about this filter: it requires that the fixed and moving images
have the same type even though it is not algorithmically required, and its
return type is the generic SimpleITK.Transform.
End of explanation
"""
print(initial_transform)
"""
Explanation: Look at the transformation, what type is it?
End of explanation
"""
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
# Scale the step size differently for each parameter, this is critical!!!
registration_method.SetOptimizerScalesFromPhysicalShift()
registration_method.SetInitialTransform(initial_transform, inPlace=False)
registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method))
final_transform_v1 = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(final_transform_v1, fixed_image, moving_image, os.path.join(OUTPUT_DIR, "finalAlignment-v1"))
"""
Explanation: Final registration
Version 1
<ul>
<li> Single scale (not using image pyramid).</li>
<li> Initial transformation is not modified in place.</li>
</ul>
<ol>
<li>
Illustrate the need for scaling the step size differently for each parameter:
<ul>
<li> SetOptimizerScalesFromIndexShift - estimated from maximum shift of voxel indexes (only use if data is isotropic).</li>
<li> SetOptimizerScalesFromPhysicalShift - estimated from maximum shift of physical locations of voxels.</li>
<li> SetOptimizerScalesFromJacobian - estimated from the averaged squared norm of the Jacobian w.r.t. parameters.</li>
</ul>
</li>
<li>
Look at the optimizer's stopping condition to ensure we have not terminated prematurely.
</li>
</ol>
End of explanation
"""
print(final_transform_v1)
"""
Explanation: Look at the final transformation, what type is it?
End of explanation
"""
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
registration_method.SetOptimizerScalesFromPhysicalShift()
# Set the initial moving and optimized transforms.
optimized_transform = sitk.Euler3DTransform()
registration_method.SetMovingInitialTransform(initial_transform)
registration_method.SetInitialTransform(optimized_transform)
registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method))
registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
# Need to compose the transformations after registration.
final_transform_v11 = sitk.Transform(optimized_transform)
final_transform_v11.AddTransform(initial_transform)
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(final_transform_v11, fixed_image, moving_image, os.path.join(OUTPUT_DIR, "finalAlignment-v1.1"))
"""
Explanation: Version 1.1
The previous example illustrated the use of the ITK v4 registration framework in an ITK v3 manner. We only referred to a single transformation which was what we optimized.
In ITK v4 the registration method accepts three transformations (if you look at the diagram above you will only see two transformations, Moving transform represents $T_{opt} \circ T_m$):
<ul>
<li>
SetInitialTransform, $T_{opt}$ - composed with the moving initial transform, maps points from the virtual image domain to the moving image domain, modified during optimization.
</li>
<li>
SetFixedInitialTransform $T_f$- maps points from the virtual image domain to the fixed image domain, never modified.
</li>
<li>
SetMovingInitialTransform $T_m$- maps points from the virtual image domain to the moving image domain, never modified.
</li>
</ul>
The transformation that maps points from the fixed to moving image domains is thus: $^M\mathbf{p} = T_{opt}(T_m(T_f^{-1}(^F\mathbf{p})))$
We now modify the previous example to use $T_{opt}$ and $T_m$.
End of explanation
"""
print(final_transform_v11)
"""
Explanation: Look at the final transformation, what type is it? Why is it different from the previous example?
End of explanation
"""
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100) #, estimateLearningRate=registration_method.EachIteration)
registration_method.SetOptimizerScalesFromPhysicalShift()
final_transform = sitk.Euler3DTransform(initial_transform)
registration_method.SetInitialTransform(final_transform)
registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [2,1,0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent,
registration_callbacks.metric_update_multires_iterations)
registration_method.AddCommand(sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method))
registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(final_transform, fixed_image, moving_image, os.path.join(OUTPUT_DIR, 'finalAlignment-v2'))
"""
Explanation: Version 2
<ul>
<li> Multi scale - specify both scale, and how much to smooth with respect to original image.</li>
<li> Initial transformation modified in place, so in the end we have the same type of transformation in hand.</li>
</ul>
End of explanation
"""
print(final_transform)
"""
Explanation: Look at the final transformation, what type is it?
End of explanation
"""
|
tritemio/multispot_paper | realtime kinetics/8-spot dsDNA steady-state - Summary.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from pathlib import Path
from scipy.stats import linregress
dir_ = r'C:\Data\Antonio\data\8-spot 5samples data\2013-05-15/'
filenames = [str(f) for f in Path(dir_).glob('*.hdf5')]
filenames
keys = [f.stem.split('_')[0] for f in Path(dir_).glob('*.hdf5')]
keys
filenames_dict = {k: v.stem for k, v in zip(keys, Path(dir_).glob('*.hdf5'))}
filenames_dict
def _filename_fit(idx, method, window, step):
return 'results/%s_%sfit_ampl_only__window%ds_step%ds.txt' % (filenames_dict[idx], method, window, step)
def _filename_nb(idx, window, step):
return 'results/%s_burst_data_vs_time__window%ds_step%ds.txt' % (filenames_dict[idx], window, step)
def process(meas_id):
methods = ['em', 'll', 'hist']
fig_width = 14
fs = 18
def savefig(title, **kwargs):
plt.savefig("figures/Meas%s %s" % (meas_id, title))
bursts = pd.DataFrame.from_csv(_filename_nb(meas_id, window=30, step=1))
nbm = bursts.num_bursts.mean()
nbc = bursts.num_bursts_detrend
print("Number of bursts (detrended): %7.1f MEAN, %7.1f VAR, %6.3f VAR/MEAN" %
(nbm, nbc.var(), nbc.var()/nbm))
fig, ax = plt.subplots(figsize=(fig_width, 3))
ax.plot(bursts.tstart, bursts.num_bursts)
ax.plot(bursts.tstart, bursts.num_bursts_linregress, 'r')
title = 'Number of bursts - Full measurement'
ax.set_title(title, fontsize=fs)
savefig(title)
fig, ax = plt.subplots(figsize=(fig_width, 3))
ax.plot(bursts.tstart, bursts.num_bursts_detrend)
ax.axhline(nbm, color='r')
title = 'Number of bursts (detrended) - Full measurement'
ax.set_title(title, fontsize=fs)
savefig(title)
params = {}
for window in (5, 30):
for method in methods:
p = pd.DataFrame.from_csv(_filename_fit(meas_id, method=method,
window=window, step=1))
params[method, window, 1] = p
meth = 'em'
fig, ax = plt.subplots(figsize=(fig_width, 3))
ax.plot('kinetics', data=params[meth, 5, 1], marker='h', lw=0, color='gray', alpha=0.2)
ax.plot('kinetics', data=params[meth, 30, 1], marker='h', lw=0, alpha=0.5)
ax.plot('kinetics_linregress', data=params[meth, 30, 1], color='r')
title = 'Population fraction - Full measurement'
ax.set_title(title, fontsize=fs)
savefig(title)
px = params
print('Kinetics 30s: %.3f STD, %.3f STD detrended.' %
((100*px[meth, 30, 1].kinetics).std(),
(100*px[meth, 30, 1].kinetics_linregress).std()))
"""
Explanation: Summary
<p class="lead">This notebook summarizes the realtime-kinetic measurements.
</p>
Requirement
Before running this notebook, you need to pre-process the data with:
8-spot dsDNA-steady-state - Run-All
This pre-processing analyzes the measurement data files,
compute the moving-window slices, the number of bursts
and fits the population fractions. All results are saved as TXT in
the results folder.
The present notebook loads these results and presents a summary.
End of explanation
"""
process(meas_id = '7d')
"""
Explanation: Measurement 0
End of explanation
"""
process(meas_id = '12d')
"""
Explanation: Measurement 1
End of explanation
"""
process(meas_id = '17d')
"""
Explanation: Measurement 2
End of explanation
"""
|
benvanwerkhoven/kernel_tuner | tutorial/diffusion_use_optparam.ipynb | apache-2.0 | nx = 1024
ny = 1024
"""
Explanation: Tutorial: From physics to tuned GPU kernels
This tutorial is designed to show you the whole process, starting from modeling a physical process, to a simple Python implementation, to creating an optimized and auto-tuned GPU application using Kernel Tuner.
In this tutorial, we will use diffusion as an example application.
We start with modeling the physical process of diffusion, for which we create a simple numerical implementation in Python. Then we create a CUDA kernel that performs the same computation, but on the GPU. Once we have a CUDA kernel, we start using the Kernel Tuner for auto-tuning our GPU application. And finally, we'll introduce a few code optimizations to our CUDA kernel that will improve performance, but also add more parameters to tune on using the Kernel Tuner.
<div class="alert alert-info">
**Note:** If you are reading this tutorial on the Kernel Tuner's documentation pages, note that you can actually run this tutorial as a Jupyter Notebook. Just clone the Kernel Tuner's [GitHub repository](http://github.com/benvanwerkhoven/kernel_tuner). Install the Kernel Tuner and Jupyter Notebooks and you're ready to go! You can start the tutorial by typing "jupyter notebook" in the "kernel_tuner/tutorial" directory.
</div>
Diffusion
Put simply, diffusion is the redistribution of something from a region of high concentration to a region of low concentration without bulk motion. The concept of diffusion is widely used in many fields, including physics, chemistry, biology, and many more.
Suppose that we take a metal sheet in which the temperature is exactly one degree everywhere.
Now suppose we were to heat a number of points on the sheet to a very high temperature, say a thousand degrees, in an instant. We would then see the heat diffuse from these hotspots to the cooler areas. We are assuming that the metal does not melt. In addition, we will ignore any heat loss from radiation or other causes in this example.
We can use the diffusion equation to model how the heat diffuses through our metal sheet:
\begin{equation}
\frac{\partial u}{\partial t}= D \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right)
\end{equation}
Where $x$ and $y$ represent the spatial discretization of our 2D domain, $u$ is the quantity that is being diffused, $t$ is the discretization in time, and the constant $D$ determines how fast the diffusion takes place.
In this example, we will assume a very simple discretization of our problem. We assume that our 2D domain has $nx$ equidistant grid points in the x-direction and $ny$ equidistant grid points in the y-direction. Be sure to execute every cell as you read through this document, by selecting it and pressing shift+enter.
End of explanation
"""
def diffuse(field, dt=0.225):
    field[1:nx-1, 1:ny-1] = field[1:nx-1, 1:ny-1] + dt * (
        field[1:nx-1, 2:ny] + field[2:nx, 1:ny-1] - 4*field[1:nx-1, 1:ny-1] +
        field[0:nx-2, 1:ny-1] + field[1:nx-1, 0:ny-2])
    return field
"""
Explanation: This results in a constant distance of $\delta x$ between all grid points in the $x$ dimension. Using central differences, we can numerically approximate the derivative for a given point $x_i$:
\begin{equation}
\left. \frac{\partial^2 u}{\partial x^2} \right|_{x_i} \approx \frac{u_{x_{i+1}}-2u_{x_i}+u_{x_{i-1}}}{(\delta x)^2}
\end{equation}
We do the same for the partial derivative in $y$:
\begin{equation}
\left. \frac{\partial^2 u}{\partial y^2} \right|_{y_i} \approx \frac{u_{y_{i+1}}-2u_{y_i}+u_{y_{i-1}}}{(\delta y)^2}
\end{equation}
If we combine the above equations, we can obtain a numerical estimation for the temperature field of our metal sheet in the next time step, using $\delta t$ as the time between time steps. But before we do, we also simplify the expression a little bit, because we'll assume that $\delta x$ and $\delta y$ are always equal to 1.
\begin{equation}
u'_{x,y} = u_{x,y} + \delta t \times \left( \left( u_{x_{i+1},y}-2u_{x_i,y}+u_{x_{i-1},y} \right) + \left( u_{x,y_{i+1}}-2u_{x,y_i}+u_{x,y_{i-1}} \right) \right)
\end{equation}
In this formula $u'_{x,y}$ refers to the temperature field at the time $t + \delta t$. As a final step, we further simplify this equation to:
\begin{equation}
u'_{x,y} = u_{x,y} + \delta t \times \left( u_{x,y_{i+1}}+u_{x_{i+1},y}-4u_{x_i,y}+u_{x_{i-1},y}+u_{x,y_{i-1}} \right)
\end{equation}
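As a quick sanity check of this discrete update rule (our own verification sketch, not part of the original tutorial), we can apply the five-point stencil to a function whose Laplacian we know analytically. For $u(x,y)=x^2+y^2$ the Laplacian is exactly 4 everywhere, and since second differences are exact for quadratics, the stencil reproduces it exactly:

```python
import numpy

# u(x, y) = x^2 + y^2 has Laplacian 4 everywhere; with delta x = delta y = 1
# the five-point stencil is exact for quadratic functions
n = 8
xs = numpy.arange(n, dtype=numpy.float64)
u = xs[None, :]**2 + xs[:, None]**2

# five-point stencil over the interior points, matching the formula above
lap = (u[1:-1, 2:] + u[2:, 1:-1] - 4.0 * u[1:-1, 1:-1]
       + u[:-2, 1:-1] + u[1:-1, :-2])
print(lap[0, 0])  # → 4.0
```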
Python implementation
We can create a Python function that implements the numerical approximation defined in the above equation. For simplicity we'll use the assumption of a free boundary condition.
End of explanation
"""
import numpy

# grid dimensions; example values (the kernel below assumes multiples of 16)
nx = 1024
ny = 1024

#setup initial conditions
def get_initial_conditions(nx, ny):
    field = numpy.ones((ny, nx)).astype(numpy.float32)
    # field is indexed [y, x], so row indices come from ny and column indices from nx
    field[numpy.random.randint(0, ny, size=10), numpy.random.randint(0, nx, size=10)] = 1e3
    return field
field = get_initial_conditions(nx, ny)
"""
Explanation: To give our Python function a test run, we will now do some imports and generate the input data for the initial conditions of our metal sheet with a few very hot points. We'll also make two plots, one after a thousand time steps, and a second plot after another two thousand time steps. Do note that the plots are using different ranges for the colors. Also, executing the following cell may take a little while.
End of explanation
"""
from matplotlib import pyplot
%matplotlib inline
#run the diffuse function 1000 times and another 2000 times and make plots
fig, (ax1, ax2) = pyplot.subplots(1, 2)
cpu = numpy.copy(field)
for i in range(1000):
    cpu = diffuse(cpu)
ax1.imshow(cpu)
for i in range(2000):
    cpu = diffuse(cpu)
ax2.imshow(cpu)
"""
Explanation: We can now use this initial condition to solve the diffusion problem and plot the results.
End of explanation
"""
#run another 1000 steps of the diffuse function and measure the time
from time import time
start = time()
cpu=numpy.copy(field)
for i in range(1000):
    cpu = diffuse(cpu)
end = time()
print("1000 steps of diffuse on a %d x %d grid took" %(nx,ny), (end-start)*1000.0, "ms")
pyplot.imshow(cpu)
"""
Explanation: Now let's take a quick look at the execution time of our diffuse function. Before we do, we also copy the current state of the metal sheet to be able to restart the computation from this state.
End of explanation
"""
def get_kernel_string(nx, ny):
    return """
    #define nx %d
    #define ny %d
    #define dt 0.225f
    __global__ void diffuse_kernel(float *u_new, float *u) {
        int x = blockIdx.x * block_size_x + threadIdx.x;
        int y = blockIdx.y * block_size_y + threadIdx.y;
        if (x>0 && x<nx-1 && y>0 && y<ny-1) {
            u_new[y*nx+x] = u[y*nx+x] + dt * (
                u[(y+1)*nx+x]+u[y*nx+x+1]-4.0f*u[y*nx+x]+u[y*nx+x-1]+u[(y-1)*nx+x]);
        }
    }
    """ % (nx, ny)
kernel_string = get_kernel_string(nx, ny)
"""
Explanation: Computing on the GPU
The next step in this tutorial is to implement a GPU kernel that will allow us to run our problem on the GPU. We store the kernel code in a Python string, because we can directly compile and run the kernel from Python. In this tutorial, we'll use the CUDA programming model to implement our kernels.
If you prefer OpenCL over CUDA, don't worry. Everything in this tutorial
applies as much to OpenCL as it does to CUDA. But we will use CUDA for our
examples, and CUDA terminology in the text.
End of explanation
"""
from pycuda import driver, compiler, gpuarray, tools
import pycuda.autoinit
from time import time
#allocate GPU memory
u_old = gpuarray.to_gpu(field)
u_new = gpuarray.to_gpu(field)
#setup thread block dimensions and compile the kernel
threads = (16,16,1)
grid = (int(nx/16), int(ny/16), 1)
block_size_string = "#define block_size_x 16\n#define block_size_y 16\n"
mod = compiler.SourceModule(block_size_string+kernel_string)
diffuse_kernel = mod.get_function("diffuse_kernel")
"""
Explanation: The above CUDA kernel parallelizes the work such that every grid point will be processed by a different CUDA thread. Therefore, the kernel is executed by a 2D grid of threads, which are grouped together into 2D thread blocks. The specific thread block dimensions we choose are not important for the result of the computation in this kernel. But as we will see later, they will have an impact on performance.
In this kernel we are using two, currently undefined, compile-time constants for block_size_x and block_size_y, because we will auto-tune these parameters later. It is often necessary for performance to fix the thread block dimensions at compile time, because the compiler can unroll loops that iterate using the block size, or because you need to allocate shared memory using the thread block dimensions.
The next bit of Python code initializes PyCuda, and makes preparations so that we can call the CUDA kernel to do the computation on the GPU as we did earlier in Python.
End of explanation
"""
#call the GPU kernel 1000 times and measure performance
t0 = time()
for i in range(500):
    diffuse_kernel(u_new, u_old, block=threads, grid=grid)
    diffuse_kernel(u_old, u_new, block=threads, grid=grid)
driver.Context.synchronize()
print("1000 steps of diffuse on a %d x %d grid took" %(nx,ny), (time()-t0)*1000, "ms.")
#copy the result from the GPU to Python for plotting
gpu_result = u_old.get()
fig, (ax1, ax2) = pyplot.subplots(1,2)
ax1.imshow(gpu_result)
ax1.set_title("GPU Result")
ax2.imshow(cpu)
ax2.set_title("Python Result")
"""
Explanation: The above code is a bit of boilerplate we need to compile a kernel using PyCuda. We've also, for the moment, fixed the thread block dimensions at 16 by 16. These dimensions serve as our initial guess for what a good performing pair of thread block dimensions could look like.
Now that we've setup everything, let's see how long the computation would take using the GPU.
End of explanation
"""
nx = 4096
ny = 4096
field = get_initial_conditions(nx, ny)
kernel_string = get_kernel_string(nx, ny)
"""
Explanation: That should already be a lot faster than our previous Python implementation, but we can do much better if we optimize our GPU kernel. And that is exactly what the rest of this tutorial is about!
Also, if you think the Python boilerplate code to call a GPU kernel was a bit messy, we've got good news for you! From now on, we'll only use the Kernel Tuner to compile and benchmark GPU kernels, which we can do with much cleaner Python code.
Auto-Tuning with the Kernel Tuner
Remember that previously we've set the thread block dimensions to 16 by 16. But how do we actually know if that is the best performing setting? That is where auto-tuning comes into play. Basically, it is very difficult to provide an answer through performance modeling and as such, we'd rather use the Kernel Tuner to compile and benchmark all possible kernel configurations.
But before we continue, we'll increase the problem size, because the GPU is very likely underutilized.
End of explanation
"""
from collections import OrderedDict
tune_params = OrderedDict()
tune_params["block_size_x"] = [16, 32, 48, 64, 128]
tune_params["block_size_y"] = [2, 4, 8, 16, 32]
"""
Explanation: The above code block has generated new initial conditions and a new string that contains our CUDA kernel using our new domain size.
To call the Kernel Tuner, we have to specify the tunable parameters, in our case block_size_x and block_size_y. For this purpose, we'll create an ordered dictionary to store the tunable parameters. The keys will be the name of the tunable parameter, and the corresponding value is the list of possible values for the parameter. For the purpose of this tutorial, we'll use a small number of commonly used values for the thread block dimensions, but feel free to try more!
End of explanation
"""
args = [field, field]
"""
Explanation: We also have to tell the Kernel Tuner about the argument list of our CUDA kernel, because the Kernel Tuner will be calling the CUDA kernel and measuring its execution time. For this purpose we create a list in Python that corresponds with the argument list of the diffuse_kernel CUDA function. This list will only be used as input to the kernel during tuning. The objects in the list should be Numpy arrays or scalars.
Because you can specify the arguments as Numpy arrays, the Kernel Tuner will take care of allocating GPU memory and copying the data to the GPU.
End of explanation
"""
problem_size = (nx, ny)
"""
Explanation: We're almost ready to call the Kernel Tuner, we just need to set how large the problem is we are currently working on by setting a problem_size. The Kernel Tuner knows about thread block dimensions, which it expects to be called block_size_x, block_size_y, and/or block_size_z. From these and the problem_size, the Kernel Tuner will compute the appropriate grid dimensions on the fly.
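To make that computation concrete, here is a small sketch of our own (not Kernel Tuner code) of how grid dimensions follow from the problem size and the thread block dimensions:

```python
import math

def grid_dimensions(problem_size, block_size):
    # one thread block per tile of block_size threads, rounded up per dimension
    return tuple(math.ceil(p / b) for p, b in zip(problem_size, block_size))

# e.g. a 4096x4096 problem with 128x4 thread blocks
print(grid_dimensions((4096, 4096), (128, 4)))  # → (32, 1024)
```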
End of explanation
"""
from kernel_tuner import tune_kernel
result = tune_kernel("diffuse_kernel", kernel_string, problem_size, args, tune_params)
"""
Explanation: And that's everything the Kernel Tuner needs to know to be able to start tuning our kernel. Let's give it a try by executing the next code block!
End of explanation
"""
kernel_string_shared = """
#define nx %d
#define ny %d
#define dt 0.225f
__global__ void diffuse_kernel(float *u_new, float *u) {
int tx = threadIdx.x;
int ty = threadIdx.y;
int bx = blockIdx.x * block_size_x;
int by = blockIdx.y * block_size_y;
__shared__ float sh_u[block_size_y+2][block_size_x+2];
#pragma unroll
for (int i = ty; i<block_size_y+2; i+=block_size_y) {
#pragma unroll
for (int j = tx; j<block_size_x+2; j+=block_size_x) {
int y = by+i-1;
int x = bx+j-1;
if (x>=0 && x<nx && y>=0 && y<ny) {
sh_u[i][j] = u[y*nx+x];
}
}
}
__syncthreads();
int x = bx+tx;
int y = by+ty;
if (x>0 && x<nx-1 && y>0 && y<ny-1) {
int i = ty+1;
int j = tx+1;
u_new[y*nx+x] = sh_u[i][j] + dt * (
sh_u[i+1][j] + sh_u[i][j+1] -4.0f * sh_u[i][j] +
sh_u[i][j-1] + sh_u[i-1][j] );
}
}
""" % (nx, ny)
"""
Explanation: Note that the Kernel Tuner prints a lot of useful information. To ensure you'll be able to tell what was measured in this run, the Kernel Tuner always prints the name of the GPU or OpenCL device that is being used, as well as the name of the kernel.
After that every line contains the combination of parameters and the time that was measured during benchmarking. The time that is being printed is in milliseconds and is obtained by averaging the execution time of 7 runs of the kernel. Finally, as a matter of convenience, the Kernel Tuner also prints the best performing combination of tunable parameters. However, later on in this tutorial we'll explain how to analyze and store the tuning results using Python.
Looking at the results printed above, the differences in performance between the kernel configurations may seem small. However, on our hardware, the performance of this kernel already varies on the order of 10%, which can of course build up to large differences in the total execution time if the kernel is executed thousands of times. We can also see that the performance of the best configuration in this set is 5% better than our initially guessed thread block dimensions of 16 by 16.
In addition, you may notice that not all possible combinations of values for block_size_x and block_size_y are among the results. For example, 128x32 is not among the results. This is because some configurations require more threads per thread block than allowed on our GPU. The Kernel Tuner checks the limitations of your GPU at runtime and automatically skips over configurations that use too many threads per block. It will also do this for kernels that cannot be compiled because they use too much shared memory, and likewise for kernels that use too many registers to be launched at runtime. If you'd like to know which configurations were skipped automatically you can pass the optional parameter verbose=True to tune_kernel.
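As an illustration of the thread-count check (a pure-Python sketch of our own, assuming the common CUDA limit of 1024 threads per block; the Kernel Tuner queries the actual device limits for you):

```python
from itertools import product

max_threads_per_block = 1024  # assumed device limit for this sketch

block_size_x = [16, 32, 48, 64, 128]
block_size_y = [2, 4, 8, 16, 32]

valid, skipped = [], []
for bx, by in product(block_size_x, block_size_y):
    (valid if bx * by <= max_threads_per_block else skipped).append((bx, by))

print(len(valid), "valid,", len(skipped), "skipped")  # → 21 valid, 4 skipped
print(skipped)  # 128x32 = 4096 threads per block is among the skipped ones
```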
However, knowing the best performing combination of tunable parameters becomes even more important when we start to further optimize our CUDA kernel. In the next section, we'll add a simple code optimization and show how this affects performance.
Using shared memory
Shared memory, is a special type of the memory available in CUDA. Shared memory can be used by threads within the same thread block to exchange and share values. It is in fact, one of the very few ways for threads to communicate on the GPU.
The idea is that we'll try improve the performance of our kernel by using shared memory as a software controlled cache. There are already caches on the GPU, but most GPUs only cache accesses to global memory in L2. Shared memory is closer to the multiprocessors where the thread blocks are executed, comparable to an L1 cache.
However, because there are also hardware caches, the performance improvement from this step is expected to not be that great. The more fine-grained control that we get by using a software managed cache, rather than a hardware implemented cache, comes at the cost of some instruction overhead. In fact, performance is quite likely to degrade a little. However, this intermediate step is necessary for the next optimization step we have in mind.
End of explanation
"""
result = tune_kernel("diffuse_kernel", kernel_string_shared, problem_size, args, tune_params)
"""
Explanation: We can now tune this new kernel using the kernel tuner
End of explanation
"""
kernel_string_tiled = """
#define nx %d
#define ny %d
#define dt 0.225f
__global__ void diffuse_kernel(float *u_new, float *u) {
int tx = threadIdx.x;
int ty = threadIdx.y;
int bx = blockIdx.x * block_size_x * tile_size_x;
int by = blockIdx.y * block_size_y * tile_size_y;
__shared__ float sh_u[block_size_y*tile_size_y+2][block_size_x*tile_size_x+2];
#pragma unroll
for (int i = ty; i<block_size_y*tile_size_y+2; i+=block_size_y) {
#pragma unroll
for (int j = tx; j<block_size_x*tile_size_x+2; j+=block_size_x) {
int y = by+i-1;
int x = bx+j-1;
if (x>=0 && x<nx && y>=0 && y<ny) {
sh_u[i][j] = u[y*nx+x];
}
}
}
__syncthreads();
#pragma unroll
for (int tj=0; tj<tile_size_y; tj++) {
int i = ty+tj*block_size_y+1;
int y = by + ty + tj*block_size_y;
#pragma unroll
for (int ti=0; ti<tile_size_x; ti++) {
int j = tx+ti*block_size_x+1;
int x = bx + tx + ti*block_size_x;
if (x>0 && x<nx-1 && y>0 && y<ny-1) {
u_new[y*nx+x] = sh_u[i][j] + dt * (
sh_u[i+1][j] + sh_u[i][j+1] -4.0f * sh_u[i][j] +
sh_u[i][j-1] + sh_u[i-1][j] );
}
}
}
}
""" % (nx, ny)
"""
Explanation: Tiling GPU Code
One very useful code optimization is called tiling, sometimes also called thread-block-merge. You can look at it this way: currently we have many thread blocks that together work on the entire domain. If we were to use only half the number of thread blocks, every thread block would need to double the amount of work it performs to cover the entire domain. However, the threads may be able to reuse part of the data and computation that is required to process a single output element for every element beyond the first.
This is a code optimization because effectively we are reducing the total number of instructions executed by all threads in all thread blocks. So in a way, we are condensing the total instruction stream while keeping all the really necessary compute instructions. More importantly, we are increasing data reuse, where previously these values would have been fetched from the cache or, in the worst case, from GPU memory.
We can apply tiling in both the x and y-dimensions. This also introduces two new tunable parameters, namely the tiling factor in x and y, which we will call tile_size_x and tile_size_y.
This is what the new kernel looks like:
End of explanation
"""
tune_params["tile_size_x"] = [1,2,4] #add tile_size_x to the tune_params
tune_params["tile_size_y"] = [1,2,4] #add tile_size_y to the tune_params
grid_div_x = ["block_size_x", "tile_size_x"] #tile_size_x impacts grid dimensions
grid_div_y = ["block_size_y", "tile_size_y"] #tile_size_y impacts grid dimensions
result = tune_kernel("diffuse_kernel", kernel_string_tiled, problem_size, args,
tune_params, grid_div_x=grid_div_x, grid_div_y=grid_div_y)
"""
Explanation: We can tune our tiled kernel by adding the two new tunable parameters to our dictionary tune_params.
We also need to somehow tell the Kernel Tuner to use fewer thread blocks to launch kernels with tile_size_x or tile_size_y larger than one. For this purpose the Kernel Tuner's tune_kernel function supports two optional arguments, called grid_div_x and grid_div_y. These are the grid divisor lists, which are lists of strings containing all the tunable parameters that divide a certain grid dimension. So far, we have been using the default settings for these, in which case the Kernel Tuner only uses the block_size_x and block_size_y tunable parameters to divide the problem_size.
Note that the Kernel Tuner will replace the values of the tunable parameters inside the strings and use the product of the parameters in the grid divisor list to compute the grid dimensions, rounded up. You can even use arithmetic operations inside these strings, as they will be evaluated. As such, we could have used ["block_size_x*tile_size_x"] to get the same result.
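For one concrete configuration, the computation described above looks like this (our own sketch; tune_kernel performs the equivalent internally):

```python
import math

nx = ny = 4096
params = {"block_size_x": 64, "block_size_y": 4, "tile_size_x": 2, "tile_size_y": 4}

# grid dimension = problem size divided by the product of the divisors, rounded up
grid_x = math.ceil(nx / (params["block_size_x"] * params["tile_size_x"]))
grid_y = math.ceil(ny / (params["block_size_y"] * params["tile_size_y"]))
print(grid_x, grid_y)  # → 32 256
```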
We are now ready to call the Kernel Tuner again and tune our tiled kernel. Let's execute the following code block, note that it may take a while as the number of kernel configurations that the Kernel Tuner will try has just been increased with a factor of 9!
End of explanation
"""
import pycuda.autoinit
# define the optimal parameters
size = [nx,ny,1]
threads = [128,4,1]
# create a dict of fixed parameters
fixed_params = OrderedDict()
fixed_params['block_size_x'] = threads[0]
fixed_params['block_size_y'] = threads[1]
# select the kernel to use
kernel_string = kernel_string_shared
# replace the block/tile size
for k, v in fixed_params.items():
    kernel_string = kernel_string.replace(k, str(v))
"""
Explanation: We can see that the number of kernel configurations tried by the Kernel Tuner is growing rather quickly. Also, the best performing configuration is quite a bit faster than the best kernel before we started optimizing. On our GTX Titan X, the execution time went from 0.72 ms to 0.53 ms, a performance improvement of 26%!
Note that the thread block dimensions for this kernel configuration are also different. Without optimizations the best performing kernel used a thread block of 32x2; after we added tiling, the best performing kernel uses thread blocks of size 64x4, which is four times as many threads! Also, the amount of work increased with tiling factors 2 in the x-direction and 4 in the y-direction, increasing the amount of work per thread block by a factor of 8. The difference in the area processed per thread block between the naive and the tiled kernel is a factor of 32.
However, there are actually several kernel configurations that come close. The following Python code prints all instances with an execution time within 5% of the best performing configuration.
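A sketch of such a filter is shown below. We assume here that the tuning results are available as a list of dictionaries, each holding the tunable-parameter values plus a 'time' key in milliseconds; the exact shape of tune_kernel's return value may differ between Kernel Tuner versions, so adapt the indexing accordingly (the results list below is a made-up placeholder):

```python
# placeholder results, shaped like the entries Kernel Tuner reports
results = [
    {"block_size_x": 32, "block_size_y": 2, "tile_size_x": 1, "tile_size_y": 1, "time": 0.72},
    {"block_size_x": 64, "block_size_y": 4, "tile_size_x": 2, "tile_size_y": 4, "time": 0.53},
    {"block_size_x": 64, "block_size_y": 2, "tile_size_x": 2, "tile_size_y": 4, "time": 0.55},
]

best_time = min(results, key=lambda r: r["time"])["time"]
near_best = [r for r in results if r["time"] < best_time * 1.05]
for r in near_best:
    print(r)
```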
Using the best parameters in a production run
Now that we have determined which parameters are the best for our problems we can use them to simulate the heat diffusion problem. There are several ways to do so depending on the host language you wish to use.
Python run
To use the optimized parameters in a Python run, we simply have to modify the kernel code to specify which values to use for the block and tile sizes. There are of course many different ways to achieve this. In simple cases one can define a dictionary of values and replace each block_size and tile_size string by its value.
End of explanation
"""
# for regular and shared kernel
grid = [int(numpy.ceil(n/t)) for t,n in zip(threads,size)]
"""
Explanation: We also need to determine the size of the grid
End of explanation
"""
#allocate GPU memory
u_old = gpuarray.to_gpu(field)
u_new = gpuarray.to_gpu(field)
# compile the kernel
mod = compiler.SourceModule(kernel_string)
diffuse_kernel = mod.get_function("diffuse_kernel")
"""
Explanation: We can then transfer the initial condition data to the two GPU arrays, compile the code, and get the function we want to use.
End of explanation
"""
#call the GPU kernel 1000 times and measure performance
t0 = time()
for i in range(500):
    diffuse_kernel(u_new, u_old, block=tuple(threads), grid=tuple(grid))
    diffuse_kernel(u_old, u_new, block=tuple(threads), grid=tuple(grid))
driver.Context.synchronize()
print("1000 steps of diffuse on a %d x %d grid took" %(nx,ny), (time()-t0)*1000, "ms.")
#copy the result from the GPU to Python for plotting
gpu_result = u_old.get()
pyplot.imshow(gpu_result)
"""
Explanation: We now just have to use the kernel with these optimized parameters to run the simulation
End of explanation
"""
kernel_string = """
#ifndef block_size_x
#define block_size_x <insert optimal value>
#endif
#ifndef block_size_y
#define block_size_y <insert optimal value>
#endif
#define nx %d
#define ny %d
#define dt 0.225f
__global__ void diffuse_kernel(float *u_new, float *u) {
    ......
}
""" % (nx, ny)
"""
Explanation: C run
If you wish to incorporate the optimized parameters in the kernel and use it in a C run, you can use #ifndef guards at the beginning of the kernel, as demonstrated in the pseudo-code below.
End of explanation
"""
cristhro/Machine-Learning | ejercicio 5/.ipynb_checkpoints/Practica 5-checkpoint.ipynb | gpl-3.0
from imdb import IMDb
from datetime import datetime
from elasticsearch import Elasticsearch
es = Elasticsearch()
ia = IMDb()
listaPelis = ia.get_top250_movies()
listaPelis
"""
Explanation: Get the list of the top 250 movies
End of explanation
"""
for i in range(10, 250):
    peli = listaPelis[i]
    peli2 = ia.get_movie(peli.movieID)
    string = peli2.summary()
    separado = string.split('\n')
    solucion = {}
    for j in range(2, len(separado)):
        sep2 = separado[j].split(':')
        # Avoids failures when turning the split result into a dictionary
        # (the failure case is shown in the two cells below)
        sep2[1:len(sep2)] = [''.join(sep2[1:len(sep2)])]
        solucion.update(dict([sep2]))
    es.index(index='prueba-index', doc_type='text', body=solucion)
separado
sep2[1]
"""
Explanation: Extract all the info for each movie into a dictionary and index it in Elasticsearch (all-in-one method)
This takes a while to run (5 to 15 minutes); it inserts 250 movies into Elasticsearch.
The id=i parameter of es.index was removed.
It takes the summary of each movie in the list and stores the info in Elasticsearch.
End of explanation
"""
import pandas as pd
lista = []
# note: the leading zeros make these octal literals in Python 2 (0400000 == 131072 decimal)
for i in range(0400000, 0400010, 1):
    peli = ia.get_movie(i)
    lista.append(peli.summary())
datos = pd.DataFrame(lista)
print datos.values
import pandas as pd
lista = []
datos = pd.DataFrame([])
for i in range(0005000, 0005003):
    lista.append(ia.get_movie(i))
    lista.append(ia.get_movie_plot(i))
datos = datos.append(lista)
print datos.values
"""
Explanation: Tests
End of explanation
"""
from datetime import datetime
from elasticsearch import Elasticsearch
es = Elasticsearch()
'''
doc = {
'prueba': 'Holi',
'text': 'A man throws away an old top hat and a tramp uses it to sole his boots.',
}
res = es.index(index="movies-index", doc_type='text', id=1, body=doc)
print(res['created'])
'''
res = es.get(index="movies-index", doc_type='text', id=6)
print(res['_source'])
es.indices.refresh(index="movies-index")
res = es.search(index="movies-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
    print("%(text)s" % hit["_source"])
"""
Explanation: Elasticsearch (example header)
End of explanation
"""
# make sure ES is up and running
import requests
res = requests.get('http://localhost:9200')
print(res.content)
from elasticsearch import Elasticsearch
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
"""
Explanation: Actual initialization of Elasticsearch (run this)
End of explanation
"""
#List with the top 250 movies
top = ia.get_top250_movies()
#Iterate over the list and index the data in Elasticsearch; the id is the position in the list
for i in range(0, 250):
    es.index(index='films-index', doc_type='text', id=i, body=top[i].data)
"""
Explanation: Store the top 250 in Elasticsearch (old version)
End of explanation
"""
res = es.search(index="films-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total'])
#Modificar para que funcione
for hit in res['hits']['hits']:
print("%(kind)s %(title)s %(year)s %(rating)s" % hit["_source"])
"""
Explanation: Search the stored data (old version)
End of explanation
"""
res = es.search(index="prueba-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
print("%(Title)s %(Genres)s %(Director)s %(Cast)s %(Writer)s %(Country)s %(Language)s %(Rating)s %(Plot)s" % hit["_source"])
res = es.search(index="prueba-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
print("%(Title)s" % hit["_source"])
res = es.search(index="prueba-index", body={"query": {"match_all": {}}})
res
res = es.search(index="prueba-index", body={
"query":
{"match" : {'Director': 'Christopher Nolan'}
},
{
"highlight" : {
"fields" : {
"Language" : {}
}
}
}
})
res
"""
Explanation: Get the hits and info for some of them
End of explanation
"""
res = es.search(index="prueba-index", body={"query": {"match" : {'Director': 'Christophe Nola'}}})
print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
print("%(Title)s" % hit["_source"])
"""
Explanation: Query without fuzziness
It does not work if you remove a letter; the query below does, because it is fuzzy.
End of explanation
"""
bodyQuery = {
"query": {
"multi_match" : {
"query" : "Interes",
"fields": ["Plot", "Title"],
"fuzziness": "2",
"type": "phrase",
}
}
}
res = es.search(index="prueba-index", body=bodyQuery)
#print res
#print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
    print("%(Title)s" % hit["_source"])
"""
Explanation: Query with fuzziness added
End of explanation
"""
bodyQuery2 = {
"query": {
"match": {
"_all":"Inter"
}
},
"highlight" : {
"fields" : {
"Title" : {},
"Plot" : {"fragment_size" : 150, "number_of_fragments" : 3}
},
#Allows highlighting on fields that were not queried,
#such as Plot in this example
"require_field_match" : False
}
}
res = es.search(index="prueba-index", body=bodyQuery2)
print("Got %d Hits:" % res['hits']['total'])
# Use [0] because there is only one hit; if there were more, the list would have
# more entries and we would use a loop like the one above to extract the
# highlight of each element in the list
#print res['hits']['hits'][0]['highlight']
for hit in res['hits']['hits']:
    print(hit)
"""
Explanation: Query 2 with highlighting of several fields and how to display it
End of explanation
"""
es.delete(index='prueba-index', doc_type='text', id=1)
"""
Explanation: Delete data
End of explanation
"""
SheffieldML/GPyOpt | manual/GPyOpt_modular_bayesian_optimization.ipynb | bsd-3-clause
%pylab inline
import GPyOpt
import GPy
"""
Explanation: GPyOpt: Modular Bayesian Optimization
Written by Javier Gonzalez, Amazon Research Cambridge
Last updated, July 2017.
In the Introduction Bayesian Optimization GPyOpt we showed how GPyOpt can be used to solve optimization problems with some basic functionalities. The object
GPyOpt.methods.BayesianOptimization
is used to initialize the desired functionalities, such as the acquisition function, the initial design or the model. In some cases we want to have control over those objects and we may want to replace some element in the loop without having to integrate the new elements in the base code framework. This is now possible through the modular implementation of the package using the
GPyOpt.methods.ModularBayesianOptimization
class. In this notebook we are going to show how to use the backbone of GPyOpt to run a Bayesian optimization algorithm in which we will use our own acquisition function. In particular we are going to use the Expected Improvement integrated over the jitter parameter. That is
$$acqu_{IEI}(x;\{x_n,y_n\},\theta) = \int acqu_{EI}(x;\{x_n,y_n\},\theta,\psi)\, p(\psi;a,b)\, d\psi $$
where $p(\psi;a,b)$ is, in this example, the distribution $Beta(a,b)$.
This acquisition is not available in GPyOpt, but we will implement it and use it in this notebook. The same can be done for other models, acquisition optimizers, etc.
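The integral over $\psi$ generally has no closed form, so it will be approximated by averaging the acquisition over samples drawn from the Beta distribution. As a quick illustration of that Monte Carlo averaging idea (our own sketch, independent of GPyOpt):

```python
import numpy as np

rng = np.random.RandomState(0)
a, b = 1, 10
samples = rng.beta(a, b, size=200000)

# Monte Carlo estimate of E[psi]; the exact value is a / (a + b) = 1/11
estimate = samples.mean()
print(round(estimate, 3))
```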
As usual, we start loading GPy and GPyOpt.
End of explanation
"""
# --- Function to optimize
func = GPyOpt.objective_examples.experiments2d.branin()
func.plot()
"""
Explanation: In this example we will use the Branin function as a test case.
End of explanation
"""
objective = GPyOpt.core.task.SingleObjective(func.f)
"""
Explanation: Because we won't use the pre-implemented wrapper, we need to create the classes for each element of the optimization. In total we need to create:
Class for the objective function,
End of explanation
"""
space = GPyOpt.Design_space(space =[{'name': 'var_1', 'type': 'continuous', 'domain': (-5,10)},
{'name': 'var_2', 'type': 'continuous', 'domain': (1,15)}])
"""
Explanation: Class for the design space,
End of explanation
"""
model = GPyOpt.models.GPModel(optimize_restarts=5,verbose=False)
"""
Explanation: Class for the model type,
End of explanation
"""
aquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(space)
"""
Explanation: Class for the acquisition optimizer,
End of explanation
"""
initial_design = GPyOpt.experiment_design.initial_design('random', space, 5)
"""
Explanation: Class for the initial design,
End of explanation
"""
from GPyOpt.acquisitions.base import AcquisitionBase
from GPyOpt.acquisitions.EI import AcquisitionEI
from numpy.random import beta
class jitter_integrated_EI(AcquisitionBase):

    analytical_gradient_prediction = True

    def __init__(self, model, space, optimizer=None, cost_withGradients=None, par_a=1, par_b=1, num_samples=10):
        super(jitter_integrated_EI, self).__init__(model, space, optimizer)
        self.par_a = par_a
        self.par_b = par_b
        self.num_samples = num_samples
        self.samples = beta(self.par_a, self.par_b, self.num_samples)
        self.EI = AcquisitionEI(model, space, optimizer, cost_withGradients)

    def acquisition_function(self, x):
        acqu_x = np.zeros((x.shape[0], 1))
        for k in range(self.num_samples):
            self.EI.jitter = self.samples[k]
            acqu_x += self.EI.acquisition_function(x)
        return acqu_x / self.num_samples

    def acquisition_function_withGradients(self, x):
        acqu_x = np.zeros((x.shape[0], 1))
        acqu_x_grad = np.zeros(x.shape)
        for k in range(self.num_samples):
            self.EI.jitter = self.samples[k]
            acqu_x_sample, acqu_x_grad_sample = self.EI.acquisition_function_withGradients(x)
            acqu_x += acqu_x_sample
            acqu_x_grad += acqu_x_grad_sample
        return acqu_x / self.num_samples, acqu_x_grad / self.num_samples
"""
Explanation: Class for the acquisition function. Because we want to use our own acquisition, we need to implement a class to handle it. We will use the available Expected Improvement acquisition to create a version integrated over the jitter parameter. Jitter samples will be drawn from a beta distribution with parameters a and b, using NumPy's default beta sampler.
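As a library-free illustration of the same idea (a sketch under our own assumptions, not GPyOpt code; the names `ei` and `integrated_ei` are ours), averaging a closed-form Gaussian EI over beta-distributed jitter values looks like this:

```python
import math
import numpy as np

def ei(mu, sigma, best, jitter):
    """Closed-form Expected Improvement for a Gaussian prediction
    N(mu, sigma^2) when minimizing, with an exploration jitter."""
    if sigma <= 0:
        return 0.0
    z = (best - mu - jitter) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu - jitter) * cdf + sigma * pdf

def integrated_ei(mu, sigma, best, par_a=1.0, par_b=10.0,
                  num_samples=200, seed=0):
    """Average EI over jitter values drawn from Beta(par_a, par_b),
    mirroring the acquisition class above."""
    rng = np.random.default_rng(seed)
    samples = rng.beta(par_a, par_b, num_samples)
    return float(np.mean([ei(mu, sigma, best, j) for j in samples]))

acq = integrated_ei(mu=0.5, sigma=0.3, best=1.0)
```

Because the jitter only subtracts from the expected improvement, the integrated value is never larger than the zero-jitter EI.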
End of explanation
"""
acquisition = jitter_integrated_EI(model, space, optimizer=aquisition_optimizer, par_a=1, par_b=10, num_samples=200)
xx = plt.hist(acquisition.samples,bins=50)
"""
Explanation: Now we initialize the class for this acquisition and we plot the histogram of the used samples to integrate the acquisition.
End of explanation
"""
# --- CHOOSE a collection method
evaluator = GPyOpt.core.evaluators.Sequential(acquisition)
"""
Explanation: Finally we create the class for the type of evaluator,
End of explanation
"""
bo = GPyOpt.methods.ModularBayesianOptimization(model, space, objective, acquisition, evaluator, initial_design)
"""
Explanation: With all the classes in place, including the one we have created for this example, we can now create the Bayesian optimization object.
End of explanation
"""
max_iter = 10
bo.run_optimization(max_iter = max_iter)
"""
Explanation: And we run the optimization.
End of explanation
"""
bo.plot_acquisition()
bo.plot_convergence()
"""
Explanation: We plot the acquisition and the diagnostic plots.
End of explanation
"""
|
sujitpal/polydlot | src/tensorflow/02-mnist-cnn.ipynb | apache-2.0 | from __future__ import division, print_function
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score, confusion_matrix
import numpy as np
import matplotlib.pyplot as plt
import os
import tensorflow as tf
%matplotlib inline
DATA_DIR = "../../data"
TRAIN_FILE = os.path.join(DATA_DIR, "mnist_train.csv")
TEST_FILE = os.path.join(DATA_DIR, "mnist_test.csv")
LOG_DIR = os.path.join(DATA_DIR, "tf-mnist-cnn-logs")
MODEL_FILE = os.path.join(DATA_DIR, "tf-mnist-cnn")
IMG_SIZE = 28
LEARNING_RATE = 0.001
BATCH_SIZE = 128
NUM_CLASSES = 10
NUM_EPOCHS = 5
"""
Explanation: MNIST Digit Recognition - CNN
End of explanation
"""
def parse_file(filename):
xdata, ydata = [], []
fin = open(filename, "rb")
i = 0
for line in fin:
if i % 10000 == 0:
print("{:s}: {:d} lines read".format(
os.path.basename(filename), i))
cols = line.strip().split(",")
ydata.append(int(cols[0]))
xdata.append(np.reshape(np.array([float(x) / 255.
for x in cols[1:]]), (IMG_SIZE, IMG_SIZE, 1)))
i += 1
fin.close()
print("{:s}: {:d} lines read".format(os.path.basename(filename), i))
y = np.array(ydata)
X = np.array(xdata)
return X, y
Xtrain, ytrain = parse_file(TRAIN_FILE)
Xtest, ytest = parse_file(TEST_FILE)
print(Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape)
def datagen(X, y, batch_size=BATCH_SIZE, num_classes=NUM_CLASSES):
ohe = OneHotEncoder(n_values=num_classes)
while True:
shuffled_indices = np.random.permutation(np.arange(len(y)))
num_batches = len(y) // batch_size
for bid in range(num_batches):
batch_indices = shuffled_indices[bid*batch_size:(bid+1)*batch_size]
Xbatch = np.zeros((batch_size, X.shape[1], X.shape[2], X.shape[3]))
Ybatch = np.zeros((batch_size, num_classes))
for i in range(batch_size):
Xbatch[i] = X[batch_indices[i]]
Ybatch[i] = ohe.fit_transform(y[batch_indices[i]]).todense()
yield Xbatch, Ybatch
self_test_gen = datagen(Xtrain, ytrain)
Xbatch, Ybatch = next(self_test_gen)
print(Xbatch.shape, Ybatch.shape)
"""
Explanation: Prepare Data
End of explanation
"""
with tf.name_scope("data"):
X = tf.placeholder(tf.float32, [None, IMG_SIZE, IMG_SIZE, 1], name="X")
Y = tf.placeholder(tf.float32, [None, NUM_CLASSES], name="Y")
def conv2d(x, W, b, strides=1):
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding="SAME")
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2):
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
padding="SAME")
def network(x, dropout=0.75):
# CONV-1: 5x5 kernel, channels 1 => 32
W1 = tf.Variable(tf.random_normal([5, 5, 1, 32]))
b1 = tf.Variable(tf.random_normal([32]))
conv1 = conv2d(x, W1, b1)
# MAXPOOL-1
conv1 = maxpool2d(conv1, 2)
# CONV-2: 5x5 kernel, channels 32 => 64
W2 = tf.Variable(tf.random_normal([5, 5, 32, 64]))
b2 = tf.Variable(tf.random_normal([64]))
conv2 = conv2d(conv1, W2, b2)
# MAXPOOL-2
conv2 = maxpool2d(conv2, k=2)
# FC1: input=(None, 7, 7, 64), output=(None, 1024)
flatten = tf.reshape(conv2, [-1, 7*7*64])
W3 = tf.Variable(tf.random_normal([7*7*64, 1024]))
b3 = tf.Variable(tf.random_normal([1024]))
fc1 = tf.add(tf.matmul(flatten, W3), b3)
fc1 = tf.nn.relu(fc1)
# Apply Dropout
fc1 = tf.nn.dropout(fc1, dropout)
# Output, class prediction (1024 => 10)
W4 = tf.Variable(tf.random_normal([1024, NUM_CLASSES]))
b4 = tf.Variable(tf.random_normal([NUM_CLASSES]))
pred = tf.add(tf.matmul(fc1, W4), b4)
return pred
# define network
Y_ = network(X, 0.75)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=Y_, labels=Y))
optimizer = tf.train.AdamOptimizer(
learning_rate=LEARNING_RATE).minimize(loss)
correct_pred = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.summary.scalar("loss", loss)
tf.summary.scalar("accuracy", accuracy)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
"""
Explanation: Define Network
End of explanation
"""
history = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
# tensorboard viz
logger = tf.summary.FileWriter(LOG_DIR, sess.graph)
train_gen = datagen(Xtrain, ytrain, BATCH_SIZE)
num_batches = len(Xtrain) // BATCH_SIZE
for epoch in range(NUM_EPOCHS):
total_loss, total_acc = 0., 0.
for bid in range(num_batches):
Xbatch, Ybatch = next(train_gen)
_, batch_loss, batch_acc, Ybatch_, summary = sess.run(
[optimizer, loss, accuracy, Y_, merged_summary_op],
feed_dict={X: Xbatch, Y:Ybatch})
# write to tensorboard
logger.add_summary(summary, epoch * num_batches + bid)
# accumulate to print once per epoch
total_acc += batch_acc
total_loss += batch_loss
total_acc /= num_batches
total_loss /= num_batches
print("Epoch {:d}/{:d}: loss={:.3f}, accuracy={:.3f}".format(
(epoch + 1), NUM_EPOCHS, total_loss, total_acc))
saver.save(sess, MODEL_FILE, (epoch + 1))
history.append((total_loss, total_acc))
logger.close()
losses = [x[0] for x in history]
accs = [x[1] for x in history]
plt.subplot(211)
plt.title("Accuracy")
plt.plot(accs)
plt.subplot(212)
plt.title("Loss")
plt.plot(losses)
plt.tight_layout()
plt.show()
"""
Explanation: Train Network
End of explanation
"""
BEST_MODEL = os.path.join(DATA_DIR, "tf-mnist-cnn-5")
saver = tf.train.Saver()
ys, ys_ = [], []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver.restore(sess, BEST_MODEL)
test_gen = datagen(Xtest, ytest, BATCH_SIZE)
val_loss, val_acc = 0., 0.
num_batches = len(Xtest) // BATCH_SIZE  # evaluate over the test set, not the training set
for _ in range(num_batches):
Xbatch, Ybatch = next(test_gen)
Ybatch_ = sess.run(Y_, feed_dict={X: Xbatch, Y:Ybatch})
ys.extend(np.argmax(Ybatch, axis=1))
ys_.extend(np.argmax(Ybatch_, axis=1))
acc = accuracy_score(ys, ys_)  # (y_true, y_pred)
cm = confusion_matrix(ys, ys_)
print("Accuracy: {:.4f}".format(acc))
print("Confusion Matrix")
print(cm)
"""
Explanation: Visualize with Tensorboard
We have also requested the loss and accuracy scalars to be logged from our computational graph, so the charts above can also be viewed in the built-in tensorboard tool. The scalars are logged to the directory given by LOG_DIR, so we can start the tensorboard tool from the command line:
$ cd ../../data
$ tensorboard --logdir=tf-mnist-cnn-logs
Starting TensorBoard 54 at http://localhost:6006
(Press CTRL+C to quit)
We can then view the [visualizations on tensorboard](http://localhost:6006)
Evaluate Network
End of explanation
"""
|
SteveDiamond/cvxpy | examples/notebooks/dqcp/hypersonic_shape_design.ipynb | gpl-3.0 | %matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import math
x = np.linspace(.25,1,num=201)
obj = []
for i in range(len(x)):
obj.append(math.sqrt(1/x[i]**2-1))
plt.plot(x,obj)
import cvxpy as cp
x = cp.Variable(pos=True)
obj = cp.sqrt(cp.inv_pos(cp.square(x))-1)
print("This objective function is", obj.curvature)
"""
Explanation: Aerospace Design via Quasiconvex Optimization
Consider a triangle, or a wedge, located within a hypersonic flow. A standard aerospace design optimization problem is to design the wedge to maximize the lift-to-drag ratio (L/D) (or conversely minimize the D/L ratio), subject to certain geometric constraints. In this example, the wedge is known to have a constant hypotenuse, and our job is to choose its width and height.
The drag-to-lift ratio is given by
$$
\frac{\mathrm{D}}{\mathrm{L}} = \frac{\mathrm{c_d}}{\mathrm{c_l}},
$$
where $\mathrm{c_d}$ and $\mathrm{c_l}$ are drag and lift coefficients, respectively, that are obtained by integrating the projection of the pressure coefficient in directions parallel to, and perpendicular to, the body.
It turns out that the drag-to-lift ratio is a quasilinear function, as we'll now show. We will assume the pressure coefficient is given by the Newtonian sine-squared law for wetted areas of the body,
$$
\mathrm{c_p} = 2(\hat{v}\cdot\hat{n})^2
$$
and elsewhere $\mathrm{c_p} = 0$. Here, $\hat{v}$ is the free stream direction, which for simplicity we will assume is parallel to the body so that $\hat{v} = \langle 1, 0 \rangle$, and $\hat{n}$ is the local unit normal. For a wedge defined by width $\Delta x$ and height $\Delta y$,
$$
\hat{n} = \langle -\Delta y/s,-\Delta x/s \rangle
$$
where $s$ is the hypotenuse length. Therefore,
$$
\mathrm{c_p} = 2((1)(-\Delta y/s)+(0)(-\Delta x/s))^2 = \frac{2 \Delta y^2}{s^2}
$$
The lift and drag coefficients are given by
$$
\begin{align}
\mathrm{c_d} &= \frac{1}{c}\int_0^s -\mathrm{c_p}\hat{n}_x \mathrm{d}s \\
\mathrm{c_l} &= \frac{1}{c}\int_0^s -\mathrm{c_p}\hat{n}_y \mathrm{d}s
\end{align}
$$
where $c$ is the reference chord length of the body. Given that $\hat{n}$, and therefore $\mathrm{c_p}$, are constant over the wetted surface of the body,
$$
\begin{align}
\mathrm{c_d} &= -\frac{s}{c}\mathrm{c_p}\hat{n}_x = \frac{s}{c}\frac{2 \Delta y^2}{s^2}\frac{\Delta y}{s} \\
\mathrm{c_l} &= -\frac{s}{c}\mathrm{c_p}\hat{n}_y = \frac{s}{c}\frac{2 \Delta y^2}{s^2}\frac{\Delta x}{s}
\end{align}
$$
Assuming $s=1$, so that $\Delta y = \sqrt{1-\Delta x^2}$, and plugging the above into the equation for $D/L$, we obtain
$$
\frac{\mathrm{D}}{\mathrm{L}} = \frac{\Delta y}{\Delta x} = \frac{\sqrt{1-\Delta x^2}}{\Delta x} = \sqrt{\frac{1}{\Delta x^2}-1}.
$$
This function is quasilinear, and hence representable in DQCP. We plot it below, and then we express it using DQCP.
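A quick numeric check (ours, not part of the original notebook) of the quasilinearity claim: on $0 < \Delta x < 1$ the ratio is strictly decreasing, so every sublevel set is a single interval:

```python
import numpy as np

dx = np.linspace(0.01, 0.99, 500)
ratio = np.sqrt(1.0 / dx**2 - 1.0)

# Strictly decreasing on (0, 1) => quasiconvex (and quasiconcave)
decreasing = bool(np.all(np.diff(ratio) < 0))

# A sublevel set {dx : ratio <= t} should be one contiguous interval
t_level = 2.0
idx = np.flatnonzero(ratio <= t_level)
contiguous = bool(np.all(np.diff(idx) == 1))
```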
End of explanation
"""
a = .05 # USER INPUT: height of rectangle, should be at most b
b = .65 # USER INPUT: width of rectangle
constraint = [a*cp.inv_pos(x)-(1-b)*cp.sqrt(1-cp.square(x))<=0]
print(constraint)
prob = cp.Problem(cp.Minimize(obj), constraint)
prob.solve(qcp=True, verbose=True)
print('Final L/D Ratio = ', 1/obj.value)
print('Final width of wedge = ', x.value)
print('Final height of wedge = ', math.sqrt(1-x.value**2))
"""
Explanation: Minimizing this objective function subject to constraints representing payload requirements is a standard aerospace design problem. In this case we will consider the constraint that the wedge must be able to contain a rectangle of given length and width internally along its hypotenuse. This is representable as a convex constraint.
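As a solver-free sanity check (our sketch, reusing the same user inputs a and b), we can brute-force the problem on a grid: the constraint carves out an interval of feasible widths, and since $D/L$ decreases with $\Delta x$ the optimum is the widest feasible wedge:

```python
import numpy as np

a, b = 0.05, 0.65  # same user inputs as in the notebook
x = np.linspace(0.01, 0.99, 9999)

# Constraint from the notebook: a/x - (1 - b) * sqrt(1 - x^2) <= 0
feasible = a / x - (1 - b) * np.sqrt(1.0 - x**2) <= 0
obj = np.sqrt(1.0 / x**2 - 1.0)  # D/L ratio

best_x = float(x[feasible][np.argmin(obj[feasible])])
best_ratio = float(np.min(obj[feasible]))
```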
End of explanation
"""
y = math.sqrt(1-x.value**2)
lambda1 = a*x.value/y
lambda2 = a*x.value**2/y+a*y
lambda3 = a*x.value-y*(a*x.value/y-b)
plt.plot([0,x.value],[0,0],'b.-')
plt.plot([0,x.value],[0,-y],'b.-')
plt.plot([x.value,x.value],[0,-y],'b.-')
pt1 = [lambda1*x.value,-lambda1*y]
pt2 = [(lambda1+b)*x.value,-(lambda1+b)*y]
pt3 = [(lambda1+b)*x.value+a*y,-(lambda1+b)*y+a*x.value]
pt4 = [lambda1*x.value+a*y,-lambda1*y+a*x.value]
plt.plot([pt1[0],pt2[0]],[pt1[1],pt2[1]],'r.-')
plt.plot([pt2[0],pt3[0]],[pt2[1],pt3[1]],'r.-')
plt.plot([pt3[0],pt4[0]],[pt3[1],pt4[1]],'r.-')
plt.plot([pt4[0],pt1[0]],[pt4[1],pt1[1]],'r.-')
plt.axis('equal')
"""
Explanation: Once the solution has been found, we can create a plot to verify that the rectangle is inscribed within the wedge.
End of explanation
"""
|
anhaidgroup/py_entitymatching | notebooks/guides/step_wise_em_guides/Removing Features From Feature Table.ipynb | bsd-3-clause | # Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
"""
Explanation: Introduction
This IPython notebook illustrates how to remove features from a feature table.
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
"""
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'
# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A, key='ID')
B = em.read_csv_metadata(path_B, key='ID')
# Get features (for blocking)
feature_table = em.get_features_for_blocking(A, B, validate_inferred_attr_types=False)
# Get features (for matching)
# feature_table = em.get_features_for_matching(A, B)
"""
Explanation: Then, read the (sample) input tables for blocking purposes
End of explanation
"""
type(feature_table)
feature_table.head()
# Drop first row
feature_table = feature_table.drop(0)
feature_table.head()
# Keep only the features involving 'name' (i.e., features where the left attribute is 'name')
feature_table = feature_table[feature_table.left_attribute=='name']
feature_table
# Keep only the features that use the jaccard similarity function
feature_table = feature_table[feature_table.simfunction=='jaccard']
feature_table
"""
Explanation: Removing Features from Feature Table
End of explanation
"""
|
RaRe-Technologies/gensim | docs/notebooks/topic_coherence-movies.ipynb | lgpl-2.1 | from __future__ import print_function
import re
import os
from scipy.stats import pearsonr
from datetime import datetime
from gensim.models import CoherenceModel
from gensim.corpora.dictionary import Dictionary
from smart_open import smart_open
"""
Explanation: Benchmark testing of coherence pipeline on Movies dataset
How to find out how well a coherence measure matches your manual annotators
Introduction: For the validation of any model adapted from a paper, it is of utmost importance that the results of benchmark testing on the datasets listed in the paper match between the actual implementation (palmetto) and gensim. This coherence pipeline has been implemented from the work done by Roeder et al. The paper can be found here.
Approach:
1. In this notebook, we'll use the Movies dataset mentioned in the paper. This dataset along with the topics on which the coherence is calculated and the gold (human) ratings on these topics can be found here.
2. We will then calculate the coherence on these topics using the pipeline implemented in gensim.
3. Once we have all our coherence values on these topics, we will calculate the correlation with the human ratings using Pearson's r.
4. We will compare this final correlation value with the values listed in the paper and see if the pipeline is working as expected.
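Step 3 reduces to Pearson's r; a minimal dependency-free version (ours) that matches the correlation value returned by `scipy.stats.pearsonr`:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly linear
r_neg = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```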
End of explanation
"""
base_dir = os.path.join(os.path.expanduser('~'), "workshop/nlp/data/")
data_dir = os.path.join(base_dir, 'wiki-movie-subset')
if not os.path.exists(data_dir):
raise ValueError("SKIP: Please download the movie corpus.")
ref_dir = os.path.join(base_dir, 'reference')
topics_path = os.path.join(ref_dir, 'topicsMovie.txt')
human_scores_path = os.path.join(ref_dir, 'goldMovie.txt')
%%time
texts = []
file_num = 0
preprocessed = 0
listing = os.listdir(data_dir)
for fname in listing:
file_num += 1
if 'disambiguation' in fname:
continue # discard disambiguation and redirect pages
elif fname.startswith('File_'):
continue # discard images, gifs, etc.
elif fname.startswith('Category_'):
continue # discard category articles
# Not sure how to identify portal and redirect pages,
# as well as pages about a single year.
# As a result, this preprocessing differs from the paper.
with smart_open(os.path.join(data_dir, fname), 'rb') as f:
for line in f:
# lower case all words
lowered = line.lower()
# remove punctuation and split into separate words
words = re.findall(r'\w+', lowered, flags = re.UNICODE | re.LOCALE)
texts.append(words)
preprocessed += 1
if file_num % 10000 == 0:
print('PROGRESS: %d/%d, preprocessed %d, discarded %d' % (
file_num, len(listing), preprocessed, (file_num - preprocessed)))
%%time
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
"""
Explanation: Download the dataset (movie.zip) and gold standard data (topicsMovie.txt and goldMovie.txt) from the link and plug in the locations below.
End of explanation
"""
print(len(corpus))
print(dictionary)
topics = [] # list of 100 topics
with smart_open(topics_path, 'rb') as f:
topics = [line.split() for line in f if line]
len(topics)
human_scores = []
with smart_open(human_scores_path, 'rb') as f:
for line in f:
human_scores.append(float(line.strip()))
len(human_scores)
"""
Explanation: Cross validate the numbers
According to the paper the number of documents should be 108,952 with a vocabulary of 1,625,124. The difference is because of a difference in preprocessing. However the results obtained are still very similar.
End of explanation
"""
# We first need to filter out any topics that contain terms not in our dictionary
# These may occur as a result of preprocessing steps differing from those used to
# produce the reference topics. In this case, this only occurs in one topic.
invalid_topic_indices = set(
i for i, topic in enumerate(topics)
if any(t not in dictionary.token2id for t in topic)
)
print("Topics with out-of-vocab terms: %s" % ', '.join(map(str, invalid_topic_indices)))
usable_topics = [topic for i, topic in enumerate(topics) if i not in invalid_topic_indices]
"""
Explanation: Deal with any vocabulary mismatch.
End of explanation
"""
%%time
cm = CoherenceModel(topics=usable_topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')
u_mass = cm.get_coherence_per_topic()
print("Calculated u_mass coherence for %d topics" % len(u_mass))
"""
Explanation: Start off with u_mass coherence measure.
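For intuition, the u_mass score of an ordered word pair only needs document co-occurrence counts; a toy sketch (ours, with the usual +1 smoothing) might look like:

```python
import math

def u_mass_pair(docs, w_i, w_j, eps=1.0):
    """UMass coherence term for an ordered word pair: log of the smoothed
    co-document frequency of (w_i, w_j) over the document frequency of w_j."""
    d_j = sum(1 for d in docs if w_j in d)
    d_ij = sum(1 for d in docs if w_i in d and w_j in d)
    return math.log((d_ij + eps) / d_j)

docs = [{"cat", "dog"}, {"cat", "dog"}, {"cat"}, {"fish"}]
score_related = u_mass_pair(docs, "dog", "cat")
score_unrelated = u_mass_pair(docs, "fish", "cat")
```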
End of explanation
"""
%%time
cm = CoherenceModel(topics=usable_topics, texts=texts, dictionary=dictionary, coherence='c_v')
c_v = cm.get_coherence_per_topic()
print("Calculated c_v coherence for %d topics" % len(c_v))
"""
Explanation: Start c_v coherence measure
This is expected to take much more time since c_v uses a sliding window for probability estimation and the cosine-similarity indirect confirmation measure.
End of explanation
"""
%%time
cm.coherence = 'c_uci'
c_uci = cm.get_coherence_per_topic()
print("Calculated c_uci coherence for %d topics" % len(c_uci))
%%time
cm.coherence = 'c_npmi'
c_npmi = cm.get_coherence_per_topic()
print("Calculated c_npmi coherence for %d topics" % len(c_npmi))
final_scores = [
score for i, score in enumerate(human_scores)
if i not in invalid_topic_indices
]
len(final_scores)
"""
Explanation: Start c_uci and c_npmi coherence measures
c_v, c_uci and c_npmi all use the boolean sliding window approach to estimating probabilities. Since the CoherenceModel caches the accumulated statistics, calculating c_uci and c_npmi is practically free after calculating c_v coherence. These two methods are simpler and were shown to correlate less with human judgements than c_v, but more than u_mass.
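NPMI itself is a one-line formula once the window probabilities are estimated; a hedged sketch (function name ours):

```python
import math

def npmi(p_i, p_j, p_ij, eps=1e-12):
    """Normalized pointwise mutual information, bounded in [-1, 1]."""
    pmi = math.log((p_ij + eps) / (p_i * p_j))
    return pmi / (-math.log(p_ij + eps))

perfect = npmi(0.1, 0.1, 0.1)       # the two words always co-occur
independent = npmi(0.1, 0.1, 0.01)  # p_ij == p_i * p_j
```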
End of explanation
"""
for our_scores in (u_mass, c_v, c_uci, c_npmi):
print(pearsonr(our_scores, final_scores)[0])
"""
Explanation: The values in the paper were:
u_mass correlation : 0.093
c_v correlation : 0.548
c_uci correlation : 0.473
c_npmi correlation : 0.438
Our values are also very similar to these values which is good. This validates the correctness of our pipeline, as we can reasonably attribute the differences to differences in preprocessing.
End of explanation
"""
|
jlecoeur/kalman_notebook | altitude_sonar_baro_gps_accel/kalman_altitude_sonar_baro_gps_accel.ipynb | gpl-2.0 | m = 10000 # timesteps
dt = 1/ 250.0 # update loop at 250Hz
t = np.arange(m) * dt
freq = 0.1 # Hz
amplitude = 0.5 # meter
alt_true = 405 + amplitude * np.cos(2 * np.pi * freq * t)
height_true = 5 + amplitude * np.cos(2 * np.pi * freq * t)
vel_true = - amplitude * (2 * np.pi * freq) * np.sin(2 * np.pi * freq * t)
acc_true = - amplitude * (2 * np.pi * freq)**2 * np.cos(2 * np.pi * freq * t)
plt.plot(t, height_true)
plt.plot(t, vel_true)
plt.plot(t, acc_true)
plt.legend(['elevation', 'velocity', 'acceleration'], loc='best')
plt.xlabel('time')
"""
Explanation: Kalman filter for altitude estimation from accelerometer, sonar, baro and GPS
I) TRAJECTORY
We assume sinusoidal trajectory
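As a consistency check (ours, not part of the original notebook), the closed-form velocity and acceleration should match numerical derivatives of the altitude signal:

```python
import numpy as np

dt = 1 / 250.0
t = np.arange(10000) * dt
freq, amplitude = 0.1, 0.5

alt = 405 + amplitude * np.cos(2 * np.pi * freq * t)
vel = -amplitude * (2 * np.pi * freq) * np.sin(2 * np.pi * freq * t)
acc = -amplitude * (2 * np.pi * freq) ** 2 * np.cos(2 * np.pi * freq * t)

# Central differences should agree closely with the closed forms
vel_num = np.gradient(alt, dt)
acc_num = np.gradient(vel, dt)
vel_err = float(np.max(np.abs(vel_num[1:-1] - vel[1:-1])))
acc_err = float(np.max(np.abs(acc_num[1:-1] - acc[1:-1])))
```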
End of explanation
"""
sonar_sampling_period = 1 / 10.0 # sonar reading at 10Hz
# Sonar noise
sigma_sonar_true = 0.05 # in meters
sonar_step = int(round(sonar_sampling_period / dt))  # integer stride for slicing
meas_sonar = height_true[::sonar_step] + sigma_sonar_true * np.random.randn(m // sonar_step)
t_meas_sonar = t[::sonar_step]
plt.plot(t_meas_sonar, meas_sonar, 'or')
plt.plot(t, height_true)
plt.legend(['Sonar measure', 'Elevation (true)'])
plt.title("Sonar measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
"""
Explanation: II) MEASUREMENTS
Sonar
End of explanation
"""
baro_sampling_period = 1 / 10.0 # baro reading at 10Hz
# Baro noise
sigma_baro_true = 2.0 # in meters
baro_step = int(round(baro_sampling_period / dt))  # integer stride for slicing
meas_baro = alt_true[::baro_step] + sigma_baro_true * np.random.randn(m // baro_step)
t_meas_baro = t[::baro_step]
plt.plot(t_meas_baro, meas_baro, 'or')
plt.plot(t, alt_true)
plt.title("Baro measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
"""
Explanation: Baro
End of explanation
"""
gps_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gps_true = 5.0 # in meters
gps_step = int(round(gps_sampling_period / dt))  # integer stride for slicing
meas_gps = alt_true[::gps_step] + sigma_gps_true * np.random.randn(m // gps_step)
t_meas_gps = t[::gps_step]
plt.plot(t_meas_gps, meas_gps, 'or')
plt.plot(t, alt_true)
plt.title("GPS measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
"""
Explanation: GPS
End of explanation
"""
gpsvel_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gpsvel_true = 10.0 # in meters/s
gpsvel_step = int(round(gpsvel_sampling_period / dt))  # integer stride for slicing
meas_gpsvel = vel_true[::gpsvel_step] + sigma_gpsvel_true * np.random.randn(m // gpsvel_step)
t_meas_gps = t[::gpsvel_step]
plt.plot(t_meas_gps, meas_gpsvel, 'or')
plt.plot(t, vel_true)
plt.title("GPS velocity measurement")
plt.xlabel('time (s)')
plt.ylabel('vel (m/s)')
"""
Explanation: GPS velocity
End of explanation
"""
sigma_acc_true = 0.2 # in m.s^-2
acc_bias = 1.5
meas_acc = acc_true + sigma_acc_true * np.random.randn(m) + acc_bias
plt.plot(t, meas_acc, '.')
plt.plot(t, acc_true)
plt.title("Accelerometer measurement")
plt.xlabel('time (s)')
plt.ylabel('acc ($m.s^{-2}$)')
"""
Explanation: Acceleration
End of explanation
"""
x = np.matrix([0.0, 0.0, 0.0, 0.0]).T
print(x, x.shape)
"""
Explanation: III) PROBLEM FORMULATION
State vector
$$x_{k} = \left[ \matrix{ z \\ h \\ \dot z \\ \zeta } \right]
= \matrix{ \text{Altitude} \\ \text{Height above ground} \\ \text{Vertical speed} \\ \text{Accelerometer bias} }$$
Input vector
$$ u_{k} = \left[ \matrix{ \ddot z } \right] = \text{Accelerometer} $$
Formal definition (Law of motion):
$$ x_{k+1} = \textbf{A} \cdot x_{k} + \textbf{B} \cdot u_{k} $$
$$ x_{k+1} = \left[
\matrix{ 1 & 0 & \Delta_t & \frac{1}{2} \Delta t^2
\ 0 & 1 & \Delta t & \frac{1}{2} \Delta t^2
\ 0 & 0 & 1 & \Delta t
\ 0 & 0 & 0 & 1 } \right]
\cdot
\left[ \matrix{ z \ h \ \dot z \ \zeta } \right]
+ \left[ \matrix{ \frac{1}{2} \Delta t^2 \ \frac{1}{2} \Delta t^2 \ \Delta t \ 0 } \right]
\cdot
\left[ \matrix{ \ddot z } \right] $$
Measurement
$$ y = H \cdot x $$
$$ \left[ \matrix{ y_{sonar} \\ y_{baro} \\ y_{gps} \\ y_{gpsvel} } \right]
= \left[ \matrix{ 0 & 1 & 0 & 0
\\ 1 & 0 & 0 & 0
\\ 1 & 0 & 0 & 0
\\ 0 & 0 & 1 & 0 } \right] \cdot \left[ \matrix{ z \\ h \\ \dot z \\ \zeta } \right] $$
Measurements are incorporated separately, according to the refresh rate of each sensor.
We measure the height from sonar
$$ y_{sonar} = H_{sonar} \cdot x $$
$$ y_{sonar} = \left[ \matrix{ 0 & 1 & 0 & 0 } \right] \cdot x $$
We measure the altitude from barometer
$$ y_{baro} = H_{baro} \cdot x $$
$$ y_{baro} = \left[ \matrix{ 1 & 0 & 0 & 0 } \right] \cdot x $$
We measure the altitude from gps
$$ y_{gps} = H_{gps} \cdot x $$
$$ y_{gps} = \left[ \matrix{ 1 & 0 & 0 & 0 } \right] \cdot x $$
We measure the velocity from gps
$$ y_{gpsvel} = H_{gpsvel} \cdot x $$
$$ y_{gpsvel} = \left[ \matrix{ 0 & 0 & 1 & 0 } \right] \cdot x $$
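The scheme above is the standard Kalman predict/correct cycle; a generic single-step sketch in NumPy (with a hypothetical 1-D constant-velocity toy model, not the lab's exact matrices):

```python
import numpy as np

def kf_predict(x, P, A, B, u, Q):
    """Project the state and covariance one step ahead."""
    x = A @ x + B * u
    P = A @ P @ A.T + Q
    return x, P

def kf_update(x, P, H, z, R):
    """Correct the prediction with a measurement z."""
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # innovation (residual) correction
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Hypothetical toy: state [position, velocity], position measured
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])

x, P = np.array([0.0, 0.0]), 10.0 * np.eye(2)
x, P = kf_predict(x, P, A, B, u=0.0, Q=Q)
P_before = P[0, 0]
x, P = kf_update(x, P, H, z=np.array([1.0]), R=R)
```

The measurement should pull the estimate toward the observation and shrink the position variance.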
IV) IMPLEMENTATION
Initial state $x_0$
End of explanation
"""
P = np.diag([100.0, 100.0, 100.0, 100.0])
print(P, P.shape)
"""
Explanation: Initial uncertainty $P_0$
End of explanation
"""
dt = 1 / 250.0 # Time step between filter steps (update loop at 250Hz)
A = np.matrix([[1.0, 0.0, dt, 0.5*dt**2],
[0.0, 1.0, dt, 0.5*dt**2],
[0.0, 0.0, 1.0, dt ],
[0.0, 0.0, 0.0, 1.0]])
print(A, A.shape)
"""
Explanation: Dynamic matrix $A$
End of explanation
"""
B = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt ],
[0.0]])
print(B, B.shape)
"""
Explanation: Disturbance Control Matrix $B$
End of explanation
"""
H_sonar = np.matrix([[0.0, 1.0, 0.0, 0.0]])
print(H_sonar, H_sonar.shape)
H_baro = np.matrix([[1.0, 0.0, 0.0, 0.0]])
print(H_baro, H_baro.shape)
H_gps = np.matrix([[1.0, 0.0, 0.0, 0.0]])
print(H_gps, H_gps.shape)
H_gpsvel = np.matrix([[0.0, 0.0, 1.0, 0.0]])
print(H_gpsvel, H_gpsvel.shape)
"""
Explanation: Measurement Matrix $H$
End of explanation
"""
# sonar
sigma_sonar = sigma_sonar_true # sonar noise
R_sonar = np.matrix([[sigma_sonar**2]])
print(R_sonar, R_sonar.shape)
# baro
sigma_baro = sigma_baro_true # baro noise
R_baro = np.matrix([[sigma_baro**2]])
print(R_baro, R_baro.shape)
# gps
sigma_gps = sigma_gps_true # gps noise
R_gps = np.matrix([[sigma_gps**2]])
print(R_gps, R_gps.shape)
# gpsvel
sigma_gpsvel = sigma_gpsvel_true # gps velocity noise
R_gpsvel = np.matrix([[sigma_gpsvel**2]])
print(R_gpsvel, R_gpsvel.shape)
"""
Explanation: Measurement noise covariance $R$
End of explanation
"""
from sympy import Symbol, Matrix, latex
from sympy.interactive import printing
printing.init_printing()
dts = Symbol('\Delta t')
s1 = Symbol('\sigma_1') # drift of accelerometer bias
Qs = Matrix([[0.5*dts**2], [0.5*dts**2], [dts], [1.0]])
Qs*Qs.T*s1**2
sigma_acc_drift = 0.0001
G = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt],
[1.0]])
Q = G*G.T*sigma_acc_drift**2
print(Q, Q.shape)
"""
Explanation: Process noise covariance $Q$
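Because $Q = G G^{\mathsf T} \sigma^2$ is a scaled outer product, it is symmetric, positive semidefinite and rank one by construction; a quick numeric check (ours):

```python
import numpy as np

dt = 1 / 250.0
sigma_acc_drift = 0.0001
G = np.array([[0.5 * dt**2], [0.5 * dt**2], [dt], [1.0]])
Q = G @ G.T * sigma_acc_drift**2

symmetric = bool(np.allclose(Q, Q.T))
eigvals = np.linalg.eigvalsh(Q)
psd = bool(np.all(eigvals >= -1e-18))      # tolerate tiny numerical negatives
rank_one = int(np.sum(eigvals > 1e-20)) == 1
```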
End of explanation
"""
I = np.eye(4)
print(I, I.shape)
"""
Explanation: Identity Matrix
End of explanation
"""
u = meas_acc
print(u, u.shape)
"""
Explanation: Input
End of explanation
"""
# Re init state
# State
x[0] = 300.0
x[1] = 5.0
x[2] = 0.0
x[3] = 0.0
# Estimate covariance
P[0,0] = 100.0
P[1,1] = 100.0
P[2,2] = 100.0
P[3,3] = 100.0
# Preallocation for Plotting
# estimate
zt = []
ht = []
dzt= []
zetat=[]
# covariance
Pz = []
Ph = []
Pdz= []
Pzeta=[]
# kalman gain
Kz = []
Kh = []
Kdz= []
Kzeta=[]
for filterstep in range(m):
# ========================
# Time Update (Prediction)
# ========================
# Project the state ahead
x = A*x + B*u[filterstep]
# Project the error covariance ahead
P = A*P*A.T + Q
# ===============================
# Measurement Update (Correction)
# ===============================
# Sonar (only near the ground, e.g. during take-off and landing)
if filterstep%25 == 0 and (filterstep <2000 or filterstep>9000):
# Compute the Kalman Gain
S_sonar = H_sonar*P*H_sonar.T + R_sonar
K_sonar = (P*H_sonar.T) * np.linalg.pinv(S_sonar)
# Update the estimate via z
Z_sonar = meas_sonar[filterstep//25]
y_sonar = Z_sonar - (H_sonar*x) # Innovation or Residual
x = x + (K_sonar*y_sonar)
# Update the error covariance
P = (I - (K_sonar*H_sonar))*P
# Baro
if filterstep%25 == 0:
# Compute the Kalman Gain
S_baro = H_baro*P*H_baro.T + R_baro
K_baro = (P*H_baro.T) * np.linalg.pinv(S_baro)
# Update the estimate via z
Z_baro = meas_baro[filterstep//25]
y_baro = Z_baro - (H_baro*x) # Innovation or Residual
x = x + (K_baro*y_baro)
# Update the error covariance
P = (I - (K_baro*H_baro))*P
# GPS
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gps = H_gps*P*H_gps.T + R_gps
K_gps = (P*H_gps.T) * np.linalg.pinv(S_gps)
# Update the estimate via z
Z_gps = meas_gps[filterstep//250]
y_gps = Z_gps - (H_gps*x) # Innovation or Residual
x = x + (K_gps*y_gps)
# Update the error covariance
P = (I - (K_gps*H_gps))*P
# GPSvel
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gpsvel = H_gpsvel*P*H_gpsvel.T + R_gpsvel
K_gpsvel = (P*H_gpsvel.T) * np.linalg.pinv(S_gpsvel)
# Update the estimate via z
Z_gpsvel = meas_gpsvel[filterstep//250]
y_gpsvel = Z_gpsvel - (H_gpsvel*x) # Innovation or Residual
x = x + (K_gpsvel*y_gpsvel)
# Update the error covariance
P = (I - (K_gpsvel*H_gpsvel))*P
# ========================
# Save states for Plotting
# ========================
zt.append(float(x[0]))
ht.append(float(x[1]))
dzt.append(float(x[2]))
zetat.append(float(x[3]))
Pz.append(float(P[0,0]))
Ph.append(float(P[1,1]))
Pdz.append(float(P[2,2]))
Pzeta.append(float(P[3,3]))
# Kz.append(float(K[0,0]))
# Kdz.append(float(K[1,0]))
# Kzeta.append(float(K[2,0]))
"""
Explanation: V) TEST
Filter loop
End of explanation
"""
plt.figure(figsize=(17,15))
plt.subplot(321)
plt.plot(t, zt, color='b')
plt.fill_between(t, np.array(zt) - 10* np.array(Pz), np.array(zt) + 10*np.array(Pz), alpha=0.2, color='b')
plt.plot(t, alt_true, 'g')
plt.plot(t_meas_baro, meas_baro, '.r')
plt.plot(t_meas_gps, meas_gps, 'ok')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([405 - 50 * amplitude, 405 + 30 * amplitude])
plt.legend(['estimate', 'true altitude', 'baro reading', 'gps reading', 'sonar switched off/on'], loc='lower right')
plt.title('Altitude')
plt.subplot(322)
plt.plot(t, ht, color='b')
plt.fill_between(t, np.array(ht) - 10* np.array(Ph), np.array(ht) + 10*np.array(Ph), alpha=0.2, color='b')
plt.plot(t, height_true, 'g')
plt.plot(t_meas_sonar, meas_sonar, '.r')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
# plt.ylim([5 - 1.5 * amplitude, 5 + 1.5 * amplitude])
plt.ylim([5 - 10 * amplitude, 5 + 10 * amplitude])
plt.legend(['estimate', 'true height above ground', 'sonar reading', 'sonar switched off/on'])
plt.title('Height')
plt.subplot(323)
plt.plot(t, dzt, color='b')
plt.fill_between(t, np.array(dzt) - 10* np.array(Pdz), np.array(dzt) + 10*np.array(Pdz), alpha=0.2, color='b')
plt.plot(t, vel_true, 'g')
plt.plot(t_meas_gps, meas_gpsvel, 'ok')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([0 - 10.0 * amplitude, + 10.0 * amplitude])
plt.legend(['estimate', 'true velocity', 'gps_vel reading', 'sonar switched off/on'])
plt.title('Velocity')
plt.subplot(324)
plt.plot(t, zetat, color='b')
plt.fill_between(t, np.array(zetat) - 10* np.array(Pzeta), np.array(zetat) + 10*np.array(Pzeta), alpha=0.2, color='b')
plt.plot(t, -acc_bias * np.ones_like(t), 'g')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
plt.ylim([-2.0, 1.0])
# plt.ylim([0 - 2.0 * amplitude, + 2.0 * amplitude])
plt.legend(['estimate', 'true bias', 'sonar switched off/on'])
plt.title('Acc bias')
plt.subplot(325)
plt.plot(t, Pz)
plt.plot(t, Ph)
plt.plot(t, Pdz)
plt.ylim([0, 1.0])
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
plt.legend(['Altitude', 'Height', 'Velocity', 'sonar switched off/on'])
plt.title('Uncertainties')
"""
Explanation: VI) PLOT
Altitude $z$
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/4d3b714a9291625bb4b01d7f9c7c3a16/compute_source_psd_epochs.ipynb | bsd-3-clause | # Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd_epochs
print(__doc__)
data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_event = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
subjects_dir = data_path + '/subjects'
event_id, tmin, tmax = 1, -0.2, 0.5
snr = 1.0 # use smaller SNR for raw data
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)
raw = mne.io.read_raw_fif(fname_raw)
events = mne.read_events(fname_event)
# Set up pick list
include = []
raw.info['bads'] += ['EEG 053'] # bads + 1 more
# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=include, exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,
eog=150e-6))
# define frequencies of interest
fmin, fmax = 0., 70.
bandwidth = 4. # bandwidth of the windows in Hz
"""
Explanation: Compute Power Spectral Density of inverse solution from single epochs
Compute PSD of dSPM inverse solution on single trial epochs restricted
to a brain label. The PSD is computed using a multi-taper method with
Discrete Prolate Spheroidal Sequence (DPSS) windows.
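The multi-taper idea itself can be sketched independently of MNE: compute one periodogram per orthogonal DPSS taper and average them. The signal, sampling rate, and taper parameters below are arbitrary choices for illustration, not the values MNE uses internally:

```python
import numpy as np
from scipy.signal.windows import dpss

# Toy signal: an 8 Hz sine in noise, 1 s at 600 Hz (parameters are illustrative).
sfreq, f0 = 600.0, 8.0
t = np.arange(0, 1.0, 1.0 / sfreq)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

# K orthogonal DPSS tapers with time-bandwidth product NW.
tapers = dpss(x.size, NW=4, Kmax=7)          # shape (7, n_samples)

# One periodogram per taper, then average across tapers.
spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
psd = spectra.mean(axis=0)
freqs = np.fft.rfftfreq(x.size, d=1.0 / sfreq)

print(freqs[np.argmax(psd)])  # the peak lands in the band around 8 Hz
```

Averaging over several tapers trades a small amount of frequency resolution for a much lower-variance PSD estimate than a single periodogram.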
End of explanation
"""
n_epochs_use = 10
stcs = compute_source_psd_epochs(epochs[:n_epochs_use], inverse_operator,
lambda2=lambda2,
method=method, fmin=fmin, fmax=fmax,
bandwidth=bandwidth, label=label,
return_generator=True, verbose=True)
# compute average PSD over the first 10 epochs
psd_avg = 0.
for i, stc in enumerate(stcs):
psd_avg += stc.data
psd_avg /= n_epochs_use
freqs = stc.times # the frequencies are stored here
stc.data = psd_avg # overwrite the last epoch's data with the average
"""
Explanation: Compute source space PSD in label
.. note:: By using ``return_generator=True``, ``stcs`` will be a generator
   object instead of a list. This allows us to iterate without having to
   keep everything in memory.
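The memory-friendly pattern behind a generator can be sketched in plain Python: a running sum consumes one array at a time without ever materializing the full list. The toy arrays below are stand-ins, not real source estimates:

```python
import numpy as np

def stc_like_generator(n):
    """Yield toy 'source estimate' arrays one at a time."""
    for i in range(n):
        yield np.full((4, 3), float(i))  # stand-in for stc.data

total, count = 0.0, 0
for data in stc_like_generator(10):
    # Only the running sum and the current array live in memory.
    total = total + data
    count += 1
average = total / count

print(average[0, 0])  # mean of 0..9 -> 4.5
```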
End of explanation
"""
brain = stc.plot(initial_time=10., hemi='lh', views='lat',  # 10 Hz
clim=dict(kind='value', lims=(20, 40, 60)),
smoothing_steps=3, subjects_dir=subjects_dir)
brain.add_label(label, borders=True, color='k')
"""
Explanation: Visualize the 10 Hz PSD:
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(freqs, psd_avg.mean(axis=0))
ax.set_xlabel('Freq (Hz)')
ax.set_xlim(stc.times[[0, -1]])
ax.set_ylabel('Power Spectral Density')
"""
Explanation: Visualize the entire spectrum:
End of explanation
"""
|
tombstone/models | research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb | apache-2.0 | import os
img_dir = '/tmp/nst'
if not os.path.exists(img_dir):
os.makedirs(img_dir)
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg
"""
Explanation: Neural Style Transfer with tf.keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/models/blob/master/research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
In this tutorial, we will learn how to use deep learning to compose images in the style of another image (ever wish you could paint like Picasso or Van Gogh?). This is known as neural style transfer, a technique outlined in Leon A. Gatys' paper, A Neural Algorithm of Artistic Style, which is a great read, and you should definitely check it out.
But, what is neural style transfer?
Neural style transfer is an optimization technique used to take three images, a content image, a style reference image (such as an artwork by a famous painter), and the input image you want to style -- and blend them together such that the input image is transformed to look like the content image, but “painted” in the style of the style image.
For example, let’s take an image of this turtle and Katsushika Hokusai's The Great Wave off Kanagawa:
<img src="https://github.com/tensorflow/models/blob/master/research/nst_blogpost/Green_Sea_Turtle_grazing_seagrass.jpg?raw=1" alt="Drawing" style="width: 200px;"/>
<img src="https://github.com/tensorflow/models/blob/master/research/nst_blogpost/The_Great_Wave_off_Kanagawa.jpg?raw=1" alt="Drawing" style="width: 200px;"/>
Image of Green Sea Turtle
-By P.Lindgren [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons
Now how would it look if Hokusai decided to paint this turtle exclusively in this style? Something like this?
<img src="https://github.com/tensorflow/models/blob/master/research/nst_blogpost/wave_turtle.png?raw=1" alt="Drawing" style="width: 500px;"/>
Is this magic or just deep learning? Fortunately, this doesn’t involve any witchcraft: style transfer is a fun and interesting technique that showcases the capabilities and internal representations of neural networks.
The principle of neural style transfer is to define two distance functions, one that describes how different the content of two images is, $L_{content}$, and one that describes the difference between two images in terms of their style, $L_{style}$. Then, given three images, a desired style image, a desired content image, and the input image (initialized with the content image), we try to transform the input image to minimize the content distance with the content image and its style distance with the style image.
In summary, we’ll take the base input image, a content image that we want to match, and the style image that we want to match. We’ll transform the base input image by minimizing the content and style distances (losses) with backpropagation, creating an image that matches the content of the content image and the style of the style image.
Specific concepts that will be covered:
In the process, we will build practical experience and develop intuition around the following concepts
Eager Execution - use TensorFlow's imperative programming environment that evaluates operations immediately
Learn more about eager execution
See it in action
Using Functional API to define a model - we'll build a subset of our model that will give us access to the necessary intermediate activations using the Functional API
Leveraging feature maps of a pretrained model - Learn how to use pretrained models and their feature maps
Create custom training loops - we'll examine how to set up an optimizer to minimize a given loss with respect to input parameters
We will follow the general steps to perform style transfer:
Visualize data
Basic Preprocessing/preparing our data
Set up loss functions
Create model
Optimize for loss function
Audience: This post is geared towards intermediate users who are comfortable with basic machine learning concepts. To get the most out of this post, you should:
* Read Gatys' paper - we'll explain along the way, but the paper will provide a more thorough understanding of the task
* Understand reducing loss with gradient descent
Time Estimated: 30 min
Setup
Download Images
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (10,10)
mpl.rcParams['axes.grid'] = False
import numpy as np
from PIL import Image
import time
import functools
%tensorflow_version 1.x
import tensorflow as tf
from tensorflow.python.keras.preprocessing import image as kp_image
from tensorflow.python.keras import models
from tensorflow.python.keras import losses
from tensorflow.python.keras import layers
from tensorflow.python.keras import backend as K
"""
Explanation: Import and configure modules
End of explanation
"""
tf.enable_eager_execution()
print("Eager execution: {}".format(tf.executing_eagerly()))
# Set up some global values here
content_path = '/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg'
style_path = '/tmp/nst/The_Great_Wave_off_Kanagawa.jpg'
"""
Explanation: We’ll begin by enabling eager execution. Eager execution allows us to work through this technique in the clearest and most readable way.
End of explanation
"""
def load_img(path_to_img):
max_dim = 512
img = Image.open(path_to_img)
long = max(img.size)
scale = max_dim/long
img = img.resize((round(img.size[0]*scale), round(img.size[1]*scale)), Image.ANTIALIAS)
img = kp_image.img_to_array(img)
# We need to broadcast the image array such that it has a batch dimension
img = np.expand_dims(img, axis=0)
return img
def imshow(img, title=None):
# Remove the batch dimension
out = np.squeeze(img, axis=0)
# Normalize for display
out = out.astype('uint8')
plt.imshow(out)
if title is not None:
plt.title(title)
"""
Explanation: Visualize the input
End of explanation
"""
plt.figure(figsize=(10,10))
content = load_img(content_path).astype('uint8')
style = load_img(style_path).astype('uint8')
plt.subplot(1, 2, 1)
imshow(content, 'Content Image')
plt.subplot(1, 2, 2)
imshow(style, 'Style Image')
plt.show()
"""
Explanation: These are input content and style images. We hope to "create" an image with the content of our content image, but with the style of the style image.
End of explanation
"""
def load_and_process_img(path_to_img):
img = load_img(path_to_img)
img = tf.keras.applications.vgg19.preprocess_input(img)
return img
"""
Explanation: Prepare the data
Let's create methods that will allow us to load and preprocess our images easily. We perform the same preprocessing steps as expected by the VGG training process. VGG networks are trained on images with each channel normalized by mean = [103.939, 116.779, 123.68] and with channels in BGR order.
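As a sanity check on that convention, the forward and inverse transforms can be sketched with plain NumPy. The mean values come from the text above; everything else (image size, helper names) is illustrative:

```python
import numpy as np

VGG_MEANS = np.array([103.939, 116.779, 123.68])  # per-channel means, BGR order

def to_vgg(img_rgb):
    """RGB image -> BGR, mean-subtracted float array (VGG convention)."""
    return img_rgb[..., ::-1].astype('float64') - VGG_MEANS

def from_vgg(x):
    """Inverse transform: add means back, BGR -> RGB, clip to uint8."""
    y = np.clip(x + VGG_MEANS, 0, 255)
    return np.rint(y[..., ::-1]).astype('uint8')  # round to avoid truncation

img = np.random.default_rng(0).integers(0, 256, size=(2, 2, 3), dtype=np.uint8)
print(np.array_equal(from_vgg(to_vgg(img)), img))  # round trip recovers the pixels
```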
End of explanation
"""
def deprocess_img(processed_img):
x = processed_img.copy()
if len(x.shape) == 4:
x = np.squeeze(x, 0)
assert len(x.shape) == 3, ("Input to deprocess image must be an image of "
"dimension [1, height, width, channel] or [height, width, channel]")
if len(x.shape) != 3:
raise ValueError("Invalid input to deprocessing image")
# perform the inverse of the preprocessing step
x[:, :, 0] += 103.939
x[:, :, 1] += 116.779
x[:, :, 2] += 123.68
x = x[:, :, ::-1]
x = np.clip(x, 0, 255).astype('uint8')
return x
"""
Explanation: In order to view the outputs of our optimization, we are required to perform the inverse preprocessing step. Furthermore, since our optimized image may take its values anywhere between $- \infty$ and $\infty$, we must clip to maintain our values from within the 0-255 range.
End of explanation
"""
# Content layer where will pull our feature maps
content_layers = ['block5_conv2']
# Style layer we are interested in
style_layers = ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1'
]
num_content_layers = len(content_layers)
num_style_layers = len(style_layers)
"""
Explanation: Define content and style representations
In order to get both the content and style representations of our image, we will look at some intermediate layers within our model. As we go deeper into the model, these intermediate layers represent higher and higher order features. In this case, we are using the network architecture VGG19, a pretrained image classification network. These intermediate layers are necessary to define the representation of content and style from our images. For an input image, we will try to match the corresponding style and content target representations at these intermediate layers.
Why intermediate layers?
You may be wondering why these intermediate outputs within our pretrained image classification network allow us to define style and content representations. At a high level, this phenomenon can be explained by the fact that in order for a network to perform image classification (which our network has been trained to do), it must understand the image. This involves taking the raw image as input pixels and building an internal representation through transformations that turn the raw image pixels into a complex understanding of the features present within the image. This is also partly why convolutional neural networks are able to generalize well: they’re able to capture the invariances and defining features within classes (e.g., cats vs. dogs) that are agnostic to background noise and other nuisances. Thus, somewhere between where the raw image is fed in and the classification label is output, the model serves as a complex feature extractor; hence by accessing intermediate layers, we’re able to describe the content and style of input images.
Specifically we’ll pull out these intermediate layers from our network:
End of explanation
"""
def get_model():
""" Creates our model with access to intermediate layers.
This function will load the VGG19 model and access the intermediate layers.
These layers will then be used to create a new model that will take input image
and return the outputs from these intermediate layers from the VGG model.
Returns:
returns a keras model that takes image inputs and outputs the style and
content intermediate layers.
"""
# Load our model. We load pretrained VGG, trained on imagenet data
vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False
# Get output layers corresponding to style and content layers
style_outputs = [vgg.get_layer(name).output for name in style_layers]
content_outputs = [vgg.get_layer(name).output for name in content_layers]
model_outputs = style_outputs + content_outputs
# Build model
return models.Model(vgg.input, model_outputs)
"""
Explanation: Build the Model
In this case, we load VGG19, and feed in our input tensor to the model. This will allow us to extract the feature maps (and subsequently the content and style representations) of the content, style, and generated images.
We use VGG19, as suggested in the paper. In addition, since VGG19 is a relatively simple model (compared with ResNet, Inception, etc.), the feature maps actually work better for style transfer.
In order to access the intermediate layers corresponding to our style and content feature maps, we get the corresponding outputs and using the Keras Functional API, we define our model with the desired output activations.
With the Functional API defining a model simply involves defining the input and output:
model = Model(inputs, outputs)
End of explanation
"""
def get_content_loss(base_content, target):
return tf.reduce_mean(tf.square(base_content - target))
"""
Explanation: In the above code snippet, we’ll load our pretrained image classification network. Then we grab the layers of interest as we defined earlier. Then we define a Model by setting the model’s inputs to an image and the outputs to the outputs of the style and content layers. In other words, we created a model that will take an input image and output the content and style intermediate layers!
Define and create our loss functions (content and style distances)
Content Loss
Our content loss definition is actually quite simple. We'll pass the network both the desired content image and our base input image. This will return the intermediate layer outputs (from the layers defined above) from our model. Then we simply take the Euclidean distance between the two intermediate representations of those images.
More formally, content loss is a function that describes the distance of content from our output image $x$ and our content image, $p$. Let $C_{nn}$ be a pre-trained deep convolutional neural network. Again, in this case we use VGG19. Let $X$ be any image, then $C_{nn}(X)$ is the network fed by X. Let $F^l_{ij}(x) \in C_{nn}(x)$ and $P^l_{ij}(p) \in C_{nn}(p)$ describe the respective intermediate feature representation of the network with inputs $x$ and $p$ at layer $l$. Then we describe the content distance (loss) formally as: $$L^l_{content}(p, x) = \sum_{i, j} (F^l_{ij}(x) - P^l_{ij}(p))^2$$
We perform backpropagation in the usual way such that we minimize this content loss. We thus change the initial image until it generates a similar response in a certain layer (defined in content_layer) as the original content image.
This can be implemented quite simply. Again it will take as input the feature maps at a layer L in a network fed by x, our input image, and p, our content image, and return the content distance.
Computing content loss
We will accumulate the content losses at each desired layer. This way, each iteration when we feed our input image through the model (which in eager execution is simply model(input_image)!), all the content losses through the model will be properly computed, and because we are executing eagerly, all the gradients will be computed as well.
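On toy arrays, the per-layer content distance is easy to verify by hand. Note that get_content_loss above uses the mean rather than the paper's raw sum, which only rescales the loss by the number of elements; the numbers below are illustrative:

```python
import numpy as np

# Toy intermediate feature maps F(x) and P(p) at one layer (values illustrative).
F = np.array([[1.0, 2.0], [3.0, 4.0]])
P = np.array([[1.0, 2.5], [2.0, 4.0]])

sum_loss = np.sum((F - P) ** 2)    # the paper's formulation
mean_loss = np.mean((F - P) ** 2)  # what get_content_loss computes

print(sum_loss, mean_loss)  # 1.25 and 1.25 / 4 = 0.3125
```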
End of explanation
"""
def gram_matrix(input_tensor):
# We make the image channels first
channels = int(input_tensor.shape[-1])
a = tf.reshape(input_tensor, [-1, channels])
n = tf.shape(a)[0]
gram = tf.matmul(a, a, transpose_a=True)
return gram / tf.cast(n, tf.float32)
def get_style_loss(base_style, gram_target):
"""Expects two images of dimension h, w, c"""
# height, width, num filters of each layer
# We scale the loss at a given layer by the size of the feature map and the number of filters
height, width, channels = base_style.get_shape().as_list()
gram_style = gram_matrix(base_style)
return tf.reduce_mean(tf.square(gram_style - gram_target))# / (4. * (channels ** 2) * (width * height) ** 2)
"""
Explanation: Style Loss
Computing style loss is a bit more involved, but follows the same principle, this time feeding our network the base input image and the style image. However, instead of comparing the raw intermediate outputs of the base input image and the style image, we instead compare the Gram matrices of the two outputs.
Mathematically, we describe the style loss of the base input image, $x$, and the style image, $a$, as the distance between the style representation (the gram matrices) of these images. We describe the style representation of an image as the correlation between different filter responses given by the Gram matrix $G^l$, where $G^l_{ij}$ is the inner product between the vectorized feature map $i$ and $j$ in layer $l$. We can see that $G^l_{ij}$ generated over the feature map for a given image represents the correlation between feature maps $i$ and $j$.
To generate a style for our base input image, we perform gradient descent from the content image to transform it into an image that matches the style representation of the original image. We do so by minimizing the mean squared distance between the feature correlation map of the style image and the input image. The contribution of each layer to the total style loss is described by
$$E_l = \frac{1}{4N_l^2M_l^2} \sum_{i,j}(G^l_{ij} - A^l_{ij})^2$$
where $G^l_{ij}$ and $A^l_{ij}$ are the respective style representation in layer $l$ of $x$ and $a$. $N_l$ describes the number of feature maps, each of size $M_l = height * width$. Thus, the total style loss across each layer is
$$L_{style}(a, x) = \sum_{l \in L} w_l E_l$$
where we weight the contribution of each layer's loss by some factor $w_l$. In our case, we weight each layer equally ($w_l =\frac{1}{|L|}$)
Computing style loss
Again, we implement our loss as a distance metric.
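The Gram computation itself is small enough to check by hand with NumPy: reshape the feature map to (positions, channels) and take inner products between channel columns, mirroring what gram_matrix above does. The toy shape is illustrative:

```python
import numpy as np

h, w, c = 2, 2, 3                       # toy feature-map shape (illustrative)
feat = np.arange(h * w * c, dtype=float).reshape(h, w, c)

a = feat.reshape(-1, c)                 # (positions, channels)
gram = a.T @ a / a.shape[0]             # G_ij = <map_i, map_j> / n_positions

print(gram.shape)                       # (3, 3); symmetric by construction
```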
End of explanation
"""
def get_feature_representations(model, content_path, style_path):
"""Helper function to compute our content and style feature representations.
This function will simply load and preprocess both the content and style
images from their path. Then it will feed them through the network to obtain
the outputs of the intermediate layers.
Arguments:
model: The model that we are using.
content_path: The path to the content image.
style_path: The path to the style image
Returns:
returns the style features and the content features.
"""
# Load our images in
content_image = load_and_process_img(content_path)
style_image = load_and_process_img(style_path)
# batch compute content and style features
style_outputs = model(style_image)
content_outputs = model(content_image)
# Get the style and content feature representations from our model
style_features = [style_layer[0] for style_layer in style_outputs[:num_style_layers]]
content_features = [content_layer[0] for content_layer in content_outputs[num_style_layers:]]
return style_features, content_features
"""
Explanation: Apply style transfer to our images
Run Gradient Descent
If you aren't familiar with gradient descent/backpropagation or need a refresher, you should definitely check out this awesome resource.
In this case, we use the Adam* optimizer in order to minimize our loss. We iteratively update our output image such that it minimizes our loss: we don't update the weights associated with our network, but instead we train our input image to minimize loss. In order to do this, we must know how we calculate our loss and gradients.
* Note that L-BFGS (recommended if you are familiar with that algorithm) isn't used in this tutorial because a primary motivation behind this tutorial was to illustrate best practices with eager execution, and, by using Adam, we can demonstrate the autograd/gradient tape functionality with custom training loops.
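The key point — gradient descent on the input rather than on weights — can be sketched without TensorFlow at all. Here a tiny "image" is nudged toward a target under a hand-derived squared-error gradient; everything below is a toy, not the actual style-transfer loss:

```python
import numpy as np

target = np.array([4.0, -2.0, 7.0])   # stand-in for the content representation
x = np.zeros_like(target)             # the 'input image' we optimize

lr = 0.1
for _ in range(200):
    grad = 2.0 * (x - target)         # d/dx of sum((x - target)**2)
    x -= lr * grad                    # update the pixels, not any weights

print(np.round(x, 3))  # converges to the target
```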
We’ll define a little helper function that will load our content and style image, feed them forward through our network, which will then output the content and style feature representations from our model.
End of explanation
"""
def compute_loss(model, loss_weights, init_image, gram_style_features, content_features):
"""This function will compute the loss total loss.
Arguments:
model: The model that will give us access to the intermediate layers
loss_weights: The weights of each contribution of each loss function.
(style weight, content weight, and total variation weight)
init_image: Our initial base image. This image is what we are updating with
our optimization process. We apply the gradients wrt the loss we are
calculating to this image.
gram_style_features: Precomputed gram matrices corresponding to the
defined style layers of interest.
content_features: Precomputed outputs from defined content layers of
interest.
Returns:
    returns the total loss, style loss, and content loss
"""
style_weight, content_weight = loss_weights
# Feed our init image through our model. This will give us the content and
# style representations at our desired layers. Since we're using eager
# our model is callable just like any other function!
model_outputs = model(init_image)
style_output_features = model_outputs[:num_style_layers]
content_output_features = model_outputs[num_style_layers:]
style_score = 0
content_score = 0
# Accumulate style losses from all layers
# Here, we equally weight each contribution of each loss layer
weight_per_style_layer = 1.0 / float(num_style_layers)
for target_style, comb_style in zip(gram_style_features, style_output_features):
style_score += weight_per_style_layer * get_style_loss(comb_style[0], target_style)
# Accumulate content losses from all layers
weight_per_content_layer = 1.0 / float(num_content_layers)
for target_content, comb_content in zip(content_features, content_output_features):
content_score += weight_per_content_layer* get_content_loss(comb_content[0], target_content)
style_score *= style_weight
content_score *= content_weight
# Get total loss
loss = style_score + content_score
return loss, style_score, content_score
"""
Explanation: Computing the loss and gradients
Here we use tf.GradientTape to compute the gradient. It lets us take advantage of automatic differentiation by tracing operations so the gradient can be computed later. It records the operations during the forward pass and is then able to compute the gradient of our loss function with respect to our input image during the backward pass.
End of explanation
"""
def compute_grads(cfg):
with tf.GradientTape() as tape:
all_loss = compute_loss(**cfg)
# Compute gradients wrt input image
total_loss = all_loss[0]
return tape.gradient(total_loss, cfg['init_image']), all_loss
"""
Explanation: Then computing the gradients is easy:
End of explanation
"""
import IPython.display
def run_style_transfer(content_path,
style_path,
num_iterations=1000,
content_weight=1e3,
style_weight=1e-2):
# We don't need to (or want to) train any layers of our model, so we set their
# trainable to false.
model = get_model()
for layer in model.layers:
layer.trainable = False
# Get the style and content feature representations (from our specified intermediate layers)
style_features, content_features = get_feature_representations(model, content_path, style_path)
gram_style_features = [gram_matrix(style_feature) for style_feature in style_features]
# Set initial image
init_image = load_and_process_img(content_path)
init_image = tf.Variable(init_image, dtype=tf.float32)
# Create our optimizer
opt = tf.train.AdamOptimizer(learning_rate=5, beta1=0.99, epsilon=1e-1)
# For displaying intermediate images
iter_count = 1
# Store our best result
best_loss, best_img = float('inf'), None
# Create a nice config
loss_weights = (style_weight, content_weight)
cfg = {
'model': model,
'loss_weights': loss_weights,
'init_image': init_image,
'gram_style_features': gram_style_features,
'content_features': content_features
}
# For displaying
num_rows = 2
num_cols = 5
display_interval = num_iterations/(num_rows*num_cols)
start_time = time.time()
global_start = time.time()
norm_means = np.array([103.939, 116.779, 123.68])
min_vals = -norm_means
max_vals = 255 - norm_means
imgs = []
for i in range(num_iterations):
grads, all_loss = compute_grads(cfg)
loss, style_score, content_score = all_loss
opt.apply_gradients([(grads, init_image)])
clipped = tf.clip_by_value(init_image, min_vals, max_vals)
init_image.assign(clipped)
end_time = time.time()
if loss < best_loss:
# Update best loss and best image from total loss.
best_loss = loss
best_img = deprocess_img(init_image.numpy())
if i % display_interval== 0:
start_time = time.time()
# Use the .numpy() method to get the concrete numpy array
plot_img = init_image.numpy()
plot_img = deprocess_img(plot_img)
imgs.append(plot_img)
IPython.display.clear_output(wait=True)
IPython.display.display_png(Image.fromarray(plot_img))
print('Iteration: {}'.format(i))
print('Total loss: {:.4e}, '
'style loss: {:.4e}, '
'content loss: {:.4e}, '
'time: {:.4f}s'.format(loss, style_score, content_score, time.time() - start_time))
print('Total time: {:.4f}s'.format(time.time() - global_start))
IPython.display.clear_output(wait=True)
plt.figure(figsize=(14,4))
for i,img in enumerate(imgs):
plt.subplot(num_rows,num_cols,i+1)
plt.imshow(img)
plt.xticks([])
plt.yticks([])
return best_img, best_loss
best, best_loss = run_style_transfer(content_path,
style_path, num_iterations=1000)
Image.fromarray(best)
"""
Explanation: Optimization loop
End of explanation
"""
#from google.colab import files
#files.download('wave_turtle.png')
"""
Explanation: To download the image from Colab, uncomment the following code:
End of explanation
"""
def show_results(best_img, content_path, style_path, show_large_final=True):
plt.figure(figsize=(10, 5))
content = load_img(content_path)
style = load_img(style_path)
plt.subplot(1, 2, 1)
imshow(content, 'Content Image')
plt.subplot(1, 2, 2)
imshow(style, 'Style Image')
if show_large_final:
plt.figure(figsize=(10, 10))
plt.imshow(best_img)
plt.title('Output Image')
plt.show()
show_results(best, content_path, style_path)
"""
Explanation: Visualize outputs
We "deprocess" the output image in order to remove the processing that was applied to it.
End of explanation
"""
best_starry_night, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')
show_results(best_starry_night, '/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')
"""
Explanation: Try it on other images
Image of Tuebingen
Photo By: Andreas Praefcke [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons
Starry night + Tuebingen
End of explanation
"""
best_poc_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')
show_results(best_poc_tubingen,
'/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')
"""
Explanation: Pillars of Creation + Tuebingen
End of explanation
"""
best_kandinsky_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')
show_results(best_kandinsky_tubingen,
'/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')
"""
Explanation: Kandinsky Composition 7 + Tuebingen
End of explanation
"""
best_poc_turtle, best_loss = run_style_transfer('/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg',
'/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')
show_results(best_poc_turtle,
'/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg',
'/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')
"""
Explanation: Pillars of Creation + Sea Turtle
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | quests/serverlessml/07_caip/solution/train_caip.ipynb | apache-2.0 | import logging
import nbformat
import sys
import yaml
def write_parameters(cell_source, params_yaml, outfp):
with open(params_yaml, 'r') as ifp:
y = yaml.safe_load(ifp)
# print out all the lines in notebook
write_code(cell_source, 'PARAMS from notebook', outfp)
# print out YAML file; this will override definitions above
formats = [
'{} = {}', # for integers and floats
'{} = "{}"', # for strings
]
write_code(
'\n'.join([
formats[type(value) is str].format(key, value) for key, value in y.items()]),
'PARAMS from YAML',
outfp
)
def write_code(cell_source, comment, outfp):
lines = cell_source.split('\n')
if len(lines) > 0 and lines[0].startswith('%%'):
prefix = '#'
else:
prefix = ''
print("### BEGIN {} ###".format(comment), file=outfp)
for line in lines:
line = prefix + line.replace('print(', 'logging.info(')
if len(line) > 0 and (line[0] == '!' or line[0] == '%'):
print('#' + line, file=outfp)
else:
print(line, file=outfp)
print("### END {} ###\n".format(comment), file=outfp)
def convert_notebook(notebook_filename, params_yaml, outfp):
write_code('import logging', 'code added by notebook conversion', outfp)
    with open(notebook_filename) as ifp:
        nb = nbformat.reads(ifp.read(), nbformat.NO_CONVERT)
for cell in nb.cells:
if cell.cell_type == 'code':
if 'tags' in cell.metadata and 'display' in cell.metadata.tags:
logging.info('Ignoring cell # {} with display tag'.format(cell.execution_count))
elif 'tags' in cell.metadata and 'parameters' in cell.metadata.tags:
logging.info('Writing params cell # {}'.format(cell.execution_count))
                write_parameters(cell.source, params_yaml, outfp)
else:
logging.info('Writing model cell # {}'.format(cell.execution_count))
write_code(cell.source, 'Cell #{}'.format(cell.execution_count), outfp)
import os
INPUT='../../06_feateng_keras/solution/taxifare_fc.ipynb'
PARAMS='./notebook_params.yaml'
OUTDIR='./container/trainer'
!mkdir -p $OUTDIR
OUTFILE=os.path.join(OUTDIR, 'model.py')
!touch $OUTDIR/__init__.py
with open(OUTFILE, 'w') as ofp:
#convert_notebook(INPUT, PARAMS, sys.stdout)
convert_notebook(INPUT, PARAMS, ofp)
#!cat $OUTFILE
"""
Explanation: Train ML model on Cloud AI Platform
This notebook shows how to:
* Export training code from a Keras notebook into a trainer file
* Create a Docker container based on a DLVM container
* Deploy training job to cluster
TODO: Export the data from BigQuery to GCS
Navigate to export_data.ipynb
Update 'your-gcs-project-here' to your GCP project name
Run all the notebook cells
TODO: Edit notebook parameters
Navigate to notebook_params.yaml
Replace the bucket name with your own bucket containing your model (likely gcp-project with -ml at the end)
Save the notebook
Return to this notebook and continue
Export code from notebook
This notebook extracts code from a notebook and creates a Python file suitable for use as model.py
End of explanation
"""
!python3 $OUTFILE
"""
Explanation: Try out model file
<b>Note</b> Once the training starts, Interrupt the Kernel (from the notebook ribbon bar above). Because it processes the entire dataset, this will take a long time on the relatively small machine on which you are running Notebooks.
End of explanation
"""
%%writefile container/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
#RUN python3 -m pip install --upgrade --quiet tf-nightly-2.0-preview
RUN python3 -m pip install --upgrade --quiet cloudml-hypertune
COPY trainer /trainer
CMD ["python3", "/trainer/model.py"]
%%writefile container/push_docker.sh
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=serverlessml_training_container
#export IMAGE_TAG=$(date +%Y%m%d_%H%M%S)
#export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME:$IMAGE_TAG
export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME
echo "Building $IMAGE_URI"
docker build -f Dockerfile -t $IMAGE_URI ./
echo "Pushing $IMAGE_URI"
docker push $IMAGE_URI
!find container
"""
Explanation: Create Docker container
Package up the trainer file into a Docker container and submit the image.
End of explanation
"""
%%bash
cd container
bash push_docker.sh
"""
Explanation: <b>Note</b>: If you get a permissions error when running push_docker.sh from Notebooks, do it from CloudShell:
* Open CloudShell on the GCP Console
* git clone https://github.com/GoogleCloudPlatform/training-data-analyst
* cd training-data-analyst/quests/serverlessml/07_caip/solution/container
* bash push_docker.sh
This next step takes 5 - 10 minutes to run
End of explanation
"""
%%bash
JOBID=serverlessml_$(date +%Y%m%d_%H%M%S)
REGION=us-central1
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$(gcloud config list project --format "value(core.project)")-ml
#IMAGE=gcr.io/deeplearning-platform-release/tf2-cpu
IMAGE=gcr.io/$PROJECT_ID/serverlessml_training_container
gcloud beta ai-platform jobs submit training $JOBID \
--staging-bucket=gs://$BUCKET --region=$REGION \
--master-image-uri=$IMAGE \
--master-machine-type=n1-standard-4 --scale-tier=CUSTOM
"""
Explanation: Deploy to AI Platform
Submit a training job using this custom container that we have just built. After you submit the job, monitor it here.
End of explanation
"""
|
james-prior/euler | euler-008-largest-product-in-a-series-20161128.ipynb | mit | from __future__ import print_function
import string
import operator
from functools import reduce
s = '''
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
'''
s = ''.join(c for c in s if c in string.digits)
len(s), s
def foo(s, n):
biggest_product = 0
for i in range(len(s) - n + 1):
t = s[i:i+n]
product = reduce(operator.mul, map(int, t))
if product > biggest_product:
biggest_product = product
return biggest_product
foo(s, 4)
n = 13
%timeit foo(s, n)
foo(s, n)
def foo(s, n):
adjacent_digits = (s[i:i+n] for i in range(len(s) - n + 1))
products = (
reduce(operator.mul, map(int, t))
for t in adjacent_digits)
return max(products)
n = 13
%timeit foo(s, n)
foo(s, n)
def foo(s, n):
return max(
reduce(operator.mul, map(int, s[i:i+n]))
for i in range(len(s) - n + 1))
n = 13
%timeit foo(s, n)
foo(s, n)
"""
Explanation: Project Euler
Problem #8
The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
End of explanation
"""
s = list(map(int, s))
def foo(s, n):
biggest_product = 0
for i in range(len(s) - n + 1):
t = s[i:i+n]
product = reduce(operator.mul, t)
if product > biggest_product:
biggest_product = product
return biggest_product
foo(s, 4)
n = 13
%timeit foo(s, n)
foo(s, n)
def foo(s, n):
adjacent_digits = (s[i:i+n] for i in range(len(s) - n + 1))
products = (
reduce(operator.mul, t)
for t in adjacent_digits)
return max(products)
n = 13
%timeit foo(s, n)
foo(s, n)
def foo(s, n):
return max(
reduce(operator.mul, s[i:i+n])
for i in range(len(s) - n + 1))
n = 13
%timeit foo(s, n)
foo(s, n)
"""
Explanation: It seems that having s as a big string and repeatedly converting digits to ints is wasteful, so I convert s to be a list of ints and repeat the above adjusted for dealing with a list of ints.
End of explanation
"""
# Keep a running product,
# so that one only needs to
# 1. divide out the digit that is "leaving"
# 2. multiply by the new digit.
#
# Handling zeroes makes the code complicated.
def foo(s, n):
biggest_product = 0
need_to_recalculate = True
for i in range(len(s) - n + 1):
t = s[i:i+n]
if need_to_recalculate:
product = reduce(operator.mul, t)
else:
product *= t[-1]
if product > biggest_product:
biggest_product = product
if product == 0:
need_to_recalculate = True
else:
            product //= t[0]
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
# When a zero digit is encountered,
# skip over the subsequences that would include it.
def foo(s, n):
biggest_product = 0
for i in range(len(s) - n + 1):
if all(s[i:i+n]):
break
while i < len(s) - n + 1:
if s[i+n-1] == 0:
i += n
continue
product = reduce(operator.mul, s[i:i+n])
if product > biggest_product:
biggest_product = product
i += 1
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
# When a zero digit is encountered,
# skip over the subsequences that would include it.
#
# Avoid the special case code that frets over
# a zero in the initial subsequence.
def foo(s, n):
biggest_product = 0
i = 0
while i < len(s) - n + 1:
if s[i+n-1] == 0:
i += n
continue
product = reduce(operator.mul, s[i:i+n])
if product > biggest_product:
biggest_product = product
i += 1
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
"""
Explanation: Having s be a list of ints made the code faster and easier to read.
That's a nice combination.
Below, I explore optimizing for speed.
End of explanation
"""
def foo(s, n):
biggest_product = 0
iter_i = iter(range(len(s) - n + 1))
for i in iter_i:
if s[i+n-1] == 0:
[next(iter_i) for _ in range(n-1)]
continue
product = reduce(operator.mul, s[i:i+n])
if product > biggest_product:
biggest_product = product
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
# Put the repeated next(iter_i) in a for loop
# instead of a comprehension.
#
# I am surprised that it is faster.
# It is also very ugly, although probably a little bit clearer.
def foo(s, n):
biggest_product = 0
iter_i = iter(range(len(s) - n + 1))
for i in iter_i:
if s[i+n-1] == 0:
# Skip over the subsequences that include this zero digit.
# Want to do i += n.
for _ in range(n-1):
next(iter_i)
continue
product = reduce(operator.mul, s[i:i+n])
if product > biggest_product:
biggest_product = product
return biggest_product
n = 13
%timeit foo(s, n)
foo(s, n)
"""
Explanation: The while stuff is very un-Pythonic, but fast.
Next I try using a more Pythonic iter(range(...)),
but it is even uglier than the while stuff above.
End of explanation
"""
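One more zero-handling variant worth sketching (this one is my addition, not from the original notebook): since no maximal window can contain a zero, the digit list can be split into zero-free runs first and each run scanned independently. The function name `max_product_split_on_zeros` is made up for illustration.

```python
from functools import reduce
import operator

def max_product_split_on_zeros(digits, n):
    # Split the sequence into zero-free runs, then scan each run.
    runs, current = [], []
    for d in digits:
        if d == 0:
            if len(current) >= n:
                runs.append(current)
            current = []
        else:
            current.append(d)
    if len(current) >= n:
        runs.append(current)
    best = 0
    for run in runs:
        for i in range(len(run) - n + 1):
            best = max(best, reduce(operator.mul, run[i:i + n]))
    return best

print(max_product_split_on_zeros([9, 9, 0, 8, 7, 6, 5], 3))  # 8*7*6 = 336
```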
|
readywater/caltrain-predict | .ipynb_checkpoints/03explore-checkpoint.ipynb | mit | # Import necessary libraries
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys
import re
import random
import operator
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cross_validation import KFold
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from func import *
# inline plot
%matplotlib inline
#%%javascript
#IPython.OutputArea.auto_scroll_threshold = 9999;
#%load 'data/raw-twt2016-01-26-14/21/09.csv'
df = pd.read_csv("data/raw-twt2016-01-26-14-21-09.csv",sep='\t',error_bad_lines=False)
# df.head(5)
print(len(df.index))
list(df.columns.values)
"""
Explanation: Dictionary
train_direction = 0 south, 1 north
train_type = 0 Local, 1 Limited, 2 Bullet
train_
End of explanation
"""
# Fill in blank hashtags
df = df.where((pd.notnull(df)), np.nan)
df["hashtags"].fillna('')
# Add some date/time things
df["created_at"] = pd.to_datetime(df["created_at"], errors='coerce')
df["day_of_week"] = df["created_at"].apply(lambda x: x.weekday())
df["day_of_month"] = df["created_at"].apply(lambda x: x.day)
df["month"] = df["created_at"].apply(lambda x: x.month)
df["time_of_day"] = df["created_at"].apply(lambda x: get_time_of_day(x))
tod_Dummy = pd.get_dummies(df['time_of_day'])
print(tod_Dummy.head(5))
# del tod_Dummy['shutdown']
# df['in_reply_to_screen_name'].fillna(-1)
# df['in_reply_to_status_id'].fillna(-1)
# df['in_reply_to_user_id'].fillna(-1)
# df['retweeted_status'].fillna(-1)
# df['retweeted'].fillna(-1)
df['retweet_count'].fillna(np.nan)
df['favorite_count'].fillna(np.nan)
df["hashtags"].fillna(np.nan)
df["hashtags"] = df["hashtags"].apply(lambda x: str(x)[1:-1])
df.loc[df["hashtags"]=='a',"hashtags"] = ''
list(df.columns.values)
#Potentially remove, just cleaning for analysis sake
del df['Unnamed: 0']
del df['truncated']
del df['user_mentions']
del df['urls']
del df['source']
del df['lang']
del df['place']
del df['favorited']
del df['media']
del df['user']
# More likely to remove
del df['in_reply_to_status_id']
del df['in_reply_to_user_id']
del df['retweeted']
del df['retweeted_status']
len(df)
df.plot(x='created_at', y='day_of_week', kind='hist')
# fdf = df[["created_at","id","text","hashtags"]]
# str(fdf
"""
Explanation: Cleanin' the data
End of explanation
"""
# df['favorite_count'] = df['favorite_count'].astype(np.int64)
# df['retweet_count'] = df['retweet_count'].astype(np.int64)
# df['text'] = df['text'].astype(str)
# df['id'] = df['id'].astype(np.int64)
# df['day_of_week'] = df['day_of_week'].astype(np.int64)
# df['day_of_month'] = df['day_of_month'].astype(np.int64)
# df['month'] = df['month'].astype(np.int64)
# df['time_of_day'] = df['time_of_day'].astype(np.int64)
df.loc[df["hashtags"]=='on',"hashtags"] = np.nan
df.convert_objects(convert_numeric=True)
df.dtypes
len(df)
# Pull out potential trains from both hashtags and text
df["topic_train"] = df["text"].apply(lambda x: check_train_id(x))
df["topic_train"] = df["topic_train"].apply(lambda x: str(x)[1:-1])
df["topic_train"].fillna(np.nan)
df.head(5)
len(df)
# pd.pivot_table(
# df,values='values',
# index=['month'],
# columns=['day_of_week'])
"""
Explanation: Let's start getting some more detailed data from the trips as well
End of explanation
"""
ret = []
def parse_train(t):
# x should be a list with train codes eg 123
# {"id": "123", "type:" "bullet", direction: "south"}
try:
s = t['topic_train'].split(',')
except:
return t['topic_train']
if s[0] == '':
# print ""
return np.nan
for x in s:
# print "Iter",x[1:-1]
q = {}
# Check train id
# x = parse_train_id(x)
x = str(x)
x = re.sub('[^0-9]','', x)
if len(x)<3: continue
# 1 = north, 0 = south
q["t_northbound"] = 1 if int(x[2]) in [1,3,5,7,9] else 0
q['t_limited'] = 0
q['t_bullet'] = 0
if x[0] == '1':
q['t_limited'] = 0
elif x[0] == '2':
q["t_limited"] = 1 # limited
elif x[0] == '3':
q["t_bullet"] = 1 # bullet
else:
q['t_limited'] = 0
ret.append({'tweet_id': t['id'],
'timestamp': t['created_at'],
'train_id': int(x),
't_northbound':q["t_northbound"],
't_limited': q["t_limited"],
't_bullet': q['t_bullet']})
return s
# Let's then filter those train topics into details
# Btw this is jank as fuck.
# red = df[['id','created_at','topic_train']]
red = df.apply(lambda x:parse_train(x),axis=1)
print("red return:", len(red))
print("ret return:", len(ret))
#red
tf = pd.DataFrame(ret)
tf.head(5)
#events = pd.DataFrame([pd.Series(x) for x in red.apply(parse_train)])
#events
#del new.iloc[0]
#new.fillna('')
#df.combine_first(new)
print(df.loc[df['topic_train'] != '', ['topic_train', 'text']])
len(tf)
len(tf)
df = df.merge(tf, left_on='id',right_on='tweet_id',how='right')
df.groupby(['time_of_day','month']).mean()
list(df.columns.values)
df.plot(x='time_of_day',y='day_of_week',kind='hist')
pd.scatter_matrix(df,alpha=0.1,figsize=(15,15), diagonal='hist');
df.groupby('month').describe()
df[df['train_id'] > 0].groupby('day_of_week').count()
df[df['train_id'] > 0].groupby('month').count()
df[df['train_id'] > 0].groupby('time_of_day').count()
df.corr()
"""
Explanation: First, a word about the below code.
In the accompanying func.py there is a function called parse_train that returns a pandas.Series object. For some reason, when it's returned from a map or apply, it seems to get cast as a string. When applied to a list or a dataframe, this string gets turned into a single field in the row, OR divided into several rows, throwing the count off.
To get around this, I return the results of the parse_train function and then CAST it back to a series. This adds a weird 0 index, which I delete. I then fill in the plethora of NaNs and recombine it with the primary dataframe.
For context, previous iterations included
df['topic_train'].apply(lambda x:parse_train(x))
which would return a pd.Series object with str versions of the returned pd.Series from parse_train
End of explanation
"""
|
google/eng-edu | ml/cc/prework/ko/intro_to_pandas.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2017 Google LLC.
End of explanation
"""
from __future__ import print_function
import pandas as pd
pd.__version__
"""
Explanation: # Quick Introduction to pandas
Learning objectives:
* Gain an introduction to the DataFrame and Series data structures of the pandas library
* Access and manipulate data within a DataFrame and Series
* Import CSV data into a pandas DataFrame
* Reindex a DataFrame to shuffle data
pandas is a column-oriented data analysis API. It's an effective tool for handling and analyzing input data, and many ML frameworks also support pandas data structures as input.
A comprehensive introduction to the pandas API would be lengthy, but the core concepts are fairly simple, so we introduce them below. For a complete reference, the pandas docs site provides extensive documentation and many guides.
## Basic Concepts
The following line imports the pandas API and prints the API version:
End of explanation
"""
pd.Series(['San Francisco', 'San Jose', 'Sacramento'])
"""
Explanation: The primary data structures in pandas are implemented as two classes:
DataFrame, which you can imagine as a relational data table, with rows and named columns.
Series, which is a single column. A DataFrame contains one or more Series and a name for each Series.
The DataFrame is a commonly used abstraction for data manipulation. Similar implementations exist in Spark and R.
One way to create a Series is to construct a Series object. For example:
End of explanation
"""
city_names = pd.Series(['San Francisco', 'San Jose', 'Sacramento'])
population = pd.Series([852469, 1015785, 485199])
pd.DataFrame({ 'City name': city_names, 'Population': population })
"""
Explanation: DataFrame objects can be created by passing a dict mapping string column names to their respective Series. If the Series don't match in length, missing values are filled with special NA/NaN values. For example:
End of explanation
"""
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe.describe()
"""
Explanation: But most of the time, you load an entire file into a DataFrame. The following example loads a file with California housing data. Run the following cell to load the data and create feature definitions:
End of explanation
"""
california_housing_dataframe.head()
"""
Explanation: The example above used DataFrame.describe to show interesting statistics about a DataFrame. Another useful function is DataFrame.head, which displays only the first few records of a DataFrame:
End of explanation
"""
california_housing_dataframe.hist('housing_median_age')
"""
Explanation: Another powerful feature of pandas is graphing. For example, DataFrame.hist lets you quickly study the distribution of values in a column:
End of explanation
"""
cities = pd.DataFrame({ 'City name': city_names, 'Population': population })
print(type(cities['City name']))
cities['City name']
print(type(cities['City name'][1]))
cities['City name'][1]
print(type(cities[0:2]))
cities[0:2]
"""
Explanation: ## Accessing Data
You can access DataFrame data using familiar Python dict/list operations:
End of explanation
"""
population / 1000.
"""
Explanation: In addition, pandas provides an extremely rich API for advanced indexing and selection, which is too extensive to be covered here.
## Manipulating Data
You may also apply Python's basic arithmetic operations to Series. For example:
End of explanation
"""
import numpy as np
np.log(population)
"""
Explanation: NumPy is a popular toolkit for scientific computing. pandas Series can be used as arguments to most NumPy functions:
End of explanation
"""
population.apply(lambda val: val > 1000000)
"""
Explanation: For more complex single-column transformations, you can use Series.apply. Like the Python map function,
Series.apply accepts as an argument a lambda function, which is applied to each value.
The example below creates a new Series that indicates whether population is over one million:
End of explanation
"""
cities['Area square miles'] = pd.Series([46.87, 176.53, 97.92])
cities['Population density'] = cities['Population'] / cities['Area square miles']
cities
"""
Explanation: Modifying DataFrames is also straightforward. For example, the following code adds two Series to an existing DataFrame:
End of explanation
"""
# Your code here
"""
Explanation: ## Exercise #1
Modify the cities table by adding a new boolean column that is True if and only if both of the following statements are True:
The city is named after a saint.
The city has an area greater than 130 square kilometers.
Note: Boolean Series are combined using the bitwise, rather than the traditional boolean, operators. For example, when performing logical and, use & instead of and.
Hint: "San" in Spanish means "saint."
End of explanation
"""
cities['Is wide and has saint name'] = (cities['Area square miles'] > 50) & cities['City name'].apply(lambda name: name.startswith('San'))
cities
"""
Explanation: ### Solution
Click below for a solution.
End of explanation
"""
city_names.index
cities.index
"""
Explanation: ## Indexes
Both Series and DataFrame objects define an index property that assigns an identifier value to each Series item or DataFrame row.
By default, at construction, pandas assigns index values that reflect the ordering of the source data. Once created, the index values are stable; that is, they do not change when data is reordered.
End of explanation
"""
cities.reindex([2, 0, 1])
"""
Explanation: Call DataFrame.reindex to manually reorder the rows. For example, the following has the same effect as sorting by city name:
End of explanation
"""
cities.reindex(np.random.permutation(cities.index))
"""
Explanation: Reindexing is a great way to shuffle (randomize) a DataFrame. In the example below, we take the index, which is array-like, and pass it to NumPy's random.permutation function, which shuffles its values. Calling reindex with this shuffled array causes the DataFrame rows to be shuffled in the same way.
Try running the following cell multiple times!
End of explanation
"""
# Your code here
"""
Explanation: For more information, see the Index documentation.
## Exercise #2
The reindex method allows index values that are not in the original DataFrame's index values. Try it and see what happens if you use such values! Why do you think this is permitted?
End of explanation
"""
cities.reindex([0, 4, 5, 2])
"""
Explanation: ### Solution
Click below for a solution.
If your reindex input array includes values not in the original DataFrame index values, reindex will add new rows for these 'missing' indices and populate all corresponding columns with NaN values:
End of explanation
"""
|
rrbb014/data_science | fastcampus_dss/2016_05_18/0518_01.__연립방정식과 역행렬.ipynb | mit | A = np.array([[1, 3, -2], [3, 5, 6], [2, 4, 3]])
A
b = np.array([[5], [7], [8]])
b
Ainv = np.linalg.inv(A)
Ainv
x = np.dot(Ainv, b)  # multiply by the inverse on the left
x
np.dot(A, x) - b  # not exactly zero because of numerical error; the inverse command is not used in practice, only when you want to see the inverse itself
x, resid, rank, s = np.linalg.lstsq(A, b)  # same answer here because A is well-conditioned...
x
"""
Explanation: Systems of Equations and Inverse Matrices (see: matrix operations and properties)
A set of equations in $n$ unknowns $x_1, x_2, \cdots, x_n$, as shown below, is called a system of equations:
$$
\begin{matrix}
a_{11} x_1 & + \;& a_{12} x_2 &\; + \cdots + \;& a_{1M} x_M &\; = \;& b_1 \\
a_{21} x_1 & + \;& a_{22} x_2 &\; + \cdots + \;& a_{2M} x_M &\; = \;& b_2 \\
\vdots\;\;\; & & \vdots\;\;\; & & \vdots\;\;\; & & \;\vdots \\
a_{N1} x_1 & + \;& a_{N2} x_2 &\; + \cdots + \;& a_{NM} x_M &\; = \;& b_N \\
\end{matrix}
$$
Using matrix multiplication, this system of equations can be written compactly as
$$ Ax = b $$
where $A, x, b$ are defined as follows:
$$
A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \\
a_{21} & a_{22} & \cdots & a_{2M} \\
\vdots & \vdots & \ddots & \vdots \\
a_{N1} & a_{N2} & \cdots & a_{NM} \\
\end{bmatrix}
$$
$$
x =
\begin{bmatrix}
x_1 \\ x_2 \\ \vdots \\ x_M
\end{bmatrix}
$$
$$
b=
\begin{bmatrix}
b_1 \\ b_2 \\ \vdots \\ b_N
\end{bmatrix}
$$
$$
Ax = b
\;\;\;\;\;
\rightarrow
\;\;\;\;\;
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \\
a_{21} & a_{22} & \cdots & a_{2M} \\
\vdots & \vdots & \ddots & \vdots \\
a_{N1} & a_{N2} & \cdots & a_{NM} \\
\end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2 \\ \vdots \\ x_M
\end{bmatrix}
=
\begin{bmatrix}
b_1 \\ b_2 \\ \vdots \\ b_N
\end{bmatrix}
$$
If $A, x, b$ were real numbers rather than matrices, this equation could easily be solved with division:
$$ x = \dfrac{b}{A} $$
However, division is not defined for matrices, so this expression cannot be used. Instead, the equation can easily be solved using the inverse matrix.
Inverse matrices
The inverse of a square matrix $A\;(A \in \mathbb{R}^{M \times M})$ is denoted by the symbol $A^{-1}$.
The inverse $A^{-1}$ is the square matrix that satisfies the following relation with the original matrix $A$, where $I$ is the identity matrix:
$$ A^{-1} A = A A^{-1} = I $$
The product of two or more square matrices is likewise a square matrix of the same size, and for the inverse of such a product the following properties hold:
$$ (AB)^{-1} = B^{-1} A^{-1} $$ # both $A^{-1}$ and $B^{-1}$ must exist
$$ (ABC)^{-1} = C^{-1} B^{-1} A^{-1} $$
Inverse matrices and the solution of a system of equations
If the number of unknowns equals the number of equations, the matrix $A$ is square.
If the inverse $A^{-1}$ of matrix $A$ exists, the solution of the system follows from the definition of the inverse:
$$ Ax = b $$
$$ A^{-1}Ax = A^{-1}b $$
$$ Ix = A^{-1}b $$
$$ x = A^{-1}b $$
Computing inverses with NumPy
NumPy's linalg subpackage provides an inv (inverse) command for computing the inverse matrix. In actual computation, however, because of various numerical-analysis issues, the lstsq (least squares) command is used rather than the inv command.
End of explanation
"""
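As a quick numerical check of the product rule for inverses stated above (this cell is my addition), $(AB)^{-1}$ should agree with $B^{-1}A^{-1}$ whenever both inverses exist:

```python
import numpy as np

A = np.array([[1., 2.], [3., 5.]])   # det = -1, invertible
B = np.array([[2., 0.], [1., 2.]])   # det = 4, invertible

lhs = np.linalg.inv(np.dot(A, B))
rhs = np.dot(np.linalg.inv(B), np.linalg.inv(A))
print(np.allclose(lhs, rhs))  # True
```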
np.random.seed(0)
A = np.random.randn(3, 3)
A
np.linalg.det(A)
"""
Explanation: The solution above leaves two open questions. First, how can we tell whether the inverse matrix exists? Second, what happens if the number of unknowns differs from the number of equations?
Determinant
One way to check whether an inverse exists is to compute the determinant, a characteristic value of a square matrix. The determinant of a matrix $A$ is denoted by the symbol $\text{det}A$.
The mathematical definition of the determinant is rather involved, so it is omitted here. For a few sizes of square matrices, though, it can be computed with the following formulas:
Determinant of a 1×1 matrix
$$\det\begin{bmatrix}a\end{bmatrix}=a$$
Determinant of a 2×2 matrix
$$\det\begin{bmatrix}a&b\\c&d\end{bmatrix}=ad-bc$$
Determinant of a 3×3 matrix
$$\det\begin{bmatrix}a&b&c\\d&e&f\\g&h&i\end{bmatrix}=aei+bfg+cdh-ceg-bdi-afh$$
In NumPy, the det command computes the value of the determinant.
End of explanation
"""
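The 2×2 formula above is easy to verify against NumPy's det (a small check of my own, with arbitrary values):

```python
import numpy as np

a, b, c, d = 2., 7., 1., 6.
M = np.array([[a, b], [c, d]])
print(np.linalg.det(M), a * d - b * c)  # both are 5.0 (up to rounding)
```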
A = np.array([[2, 0], [-1, 1], [0, 2]])
A
b = np.array([[1], [0], [-1]])
b
Apinv = np.dot(np.linalg.inv(np.dot(A.T, A)), A.T)
Apinv
x = np.dot(Apinv, b)
x
np.dot(A, x) - b
x, resid, rank, s = np.linalg.lstsq(A, b)  # resid = residual error, rank = matrix rank, s = singular values
x
"""
Explanation: The determinant and the inverse matrix are related as follows:
If the determinant is nonzero, the inverse exists; conversely, if the inverse exists, the determinant is nonzero.
The least squares problem
There are three kinds of systems of equations:
The number of unknowns equals the number of equations. ($N = M$)
There are fewer equations than unknowns. ($N < M$)
There are more equations than unknowns. ($N > M$)
The first case was handled above. In the second case, there can be too many solutions. In the third case, conversely, there may be no solution at all that satisfies every condition.
In data analysis problems, however, if we view $A$ as the feature matrix and $x$ as the weight vector $w$, it is common for the number of data points to exceed the number of weights. In that case we do not expect the equations to hold with exact equality; it is enough for the left and right sides to be approximately equal:
$$ Ax \approx b $$
In this case the problem can be recast as minimizing the difference between the left and right sides.
Error vector: $$ e = Ax-b $$
Sum of squares: $$ e^Te = (Ax-b)^T(Ax-b) $$
Find the $x$ that minimizes the error (the least squares problem): $$ x^* = \text{arg} \min_x e^Te = \text{arg} \min_x \; (Ax-b)^T(Ax-b) $$
Such a problem is called a least squares problem.
The difficulty is that $A$ is not a square matrix.
Taking the transpose of $A$ and multiplying the two gives, for example, a (4×N)(N×4) = 4×4 matrix.
Using the fact that $A^TA$ is always a square matrix, the least squares problem can be solved as follows:
$$ Ax = b $$
$$ A^TAx = A^Tb $$
$$ (A^TA)x = A^Tb $$
$$ x = (A^TA)^{-1}A^T b $$
$$ x = ((A^TA)^{-1}A^T) b $$
That this value is the answer to the least squares problem can be proved using matrix calculus. Here the matrix $(A^TA)^{-1}A^T$ is called the pseudo inverse of matrix $A$, and is also written $A^{+}$:
The (Moore-)Penrose pseudo inverse?
$$ A^{+} = (A^TA)^{-1}A^T $$
NumPy's lstsq command in fact solves exactly this least squares problem.
The case above has 2 unknowns but 3 equations.
But is this really the best we can do? We have not actually shown that anything is minimized!
We need matrix calculus for that. Whether the least squares answer truly minimizes the error will come later..
End of explanation
"""
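As a sanity check (my addition), the pseudo-inverse formula above matches NumPy's built-in pinv for a full-column-rank matrix, and the resulting solution matches lstsq:

```python
import numpy as np

A = np.array([[2., 0.], [-1., 1.], [0., 2.]])  # 3 equations, 2 unknowns
b = np.array([1., 0., -1.])

Apinv = np.dot(np.linalg.inv(np.dot(A.T, A)), A.T)  # (A^T A)^{-1} A^T
x = np.dot(Apinv, b)
print(np.allclose(Apinv, np.linalg.pinv(A)))                 # True for full column rank A
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```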
|
williamdjones/protein_binding | notebooks/Step 1 Random Forest Feature Selection (In Progress).ipynb | mit | import time
import glob
import h5py
import multiprocessing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("seaborn-muted")
from utils.input_pipeline import load_data, load_protein
from scipy.stats import randint as sp_randint
from sklearn.model_selection import cross_val_score, RandomizedSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import Imputer, Normalizer
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import RandomizedLogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, make_scorer
random_state=np.random.RandomState(0)
imputer = Imputer()
normalizer = Normalizer()
forest_classifier = RandomForestClassifier(n_jobs=10)
data_path = "data/full_26_kinase_data.h5"
"""
Explanation: STEP 1 in the Feature Selection Pipeline: Train Random Forest to Identify the informative Features
End of explanation
"""
data_fo = h5py.File(data_path,'r')
protein_list = list(data_fo.keys())
input_size = 0
for protein in protein_list:
input_size += data_fo[protein]['label'].shape[0]
print(input_size)
X = np.ndarray([0,5432])
y = np.ndarray([0,1])
for protein in protein_list:
#create a balanced set for each of the proteins by randomly sampling from each of the negative classes
X_p,y_p = load_data(data_path,protein_name_list=[protein], mode=1)
X_n, y_n = load_data(data_path,protein_name_list=[protein],sample_size = X_p.shape[0], mode=0)
X_ = np.vstack((X_p,X_n))
y_ = np.vstack((y_p,y_n))
X = np.vstack((X_,X))
y = np.vstack((y_,y))
"""
Explanation: Load the Data
End of explanation
"""
# once new data is ready, remove the imputer, keep normalizing
forest_pipe = Pipeline(steps=[('imputer', imputer), ('normalizer', normalizer),
('selection_forest',RandomForestClassifier(n_jobs=16, oob_score=True,
class_weight="balanced",random_state=random_state))])
forest_params = {"selection_forest__n_estimators": sp_randint(15,30),
"selection_forest__criterion": ["gini","entropy"]
}
estimator_search = RandomizedSearchCV(estimator=forest_pipe,param_distributions=forest_params, scoring='f1',cv=5, random_state=random_state)
estimator_search.fit(X,y.flatten())
forest_model = estimator_search.best_estimator_
support = forest_model.named_steps['selection_forest'].feature_importances_
"""
Explanation: Random Forest
The algorithm which constructs a random forest natively performs feature selection by finding the best splits for a particular feature to minimize some measure of label impurity. This can be leveraged as a feature selection method to train other classifiers (in addition to other random forests).
End of explanation
"""
plt.clf()
plt.figure(figsize=[12,8])
plt.plot(np.sort(support)[::-1])
plt.title("Random Forest Feature Support (sorted)")
plt.ylabel("feature importance")
plt.savefig("poster_results/feature_importance_curve_full_set.png")
plt.show()
full_features = list(h5py.File("data/full_26_kinase_data.h5","r")["lck"].keys())
# use a list comprehension instead
full_features.remove("label")
full_features.remove("receptor")
full_features.remove("drugID")
keep_idxs = support > np.mean(support,axis=0)
features_to_keep = np.asarray(full_features)[keep_idxs]
features_to_keep = pd.DataFrame(features_to_keep)
features_to_keep.to_csv("data/step1_features.csv",index=False,header=False)
print(len(full_features),features_to_keep.shape)
"""
Explanation: and collect the features
so that they can be used in later experiments
End of explanation
"""
|
PyLadiesCZ/pyladies.cz | original/v1/s002-hello-world/ostrava/feedback-homeworks.ipynb | mit | tah_cloveka = 'kámen'
tah_pocitace = 'papír'
if tah_cloveka == 'kámen' and tah_pocitace == 'kámen'or tah_cloveka == 'nůžky' and tah_pocitace == 'nůžky' or tah_cloveka == 'papír' and tah_pocitace == 'papír':
print('Plichta.')
elif tah_cloveka == 'kámen' and tah_pocitace == 'nůžky' or tah_cloveka == 'nůžky'and tah_pocitace == 'papír' or tah_cloveka == 'papír' and tah_pocitace == 'kámen':
print('Vyhrála jsi!')
elif tah_cloveka == 'kámen' and tah_pocitace == 'papír'or tah_cloveka == 'papír' and tah_pocitace == 'nůžky' or tah_cloveka == 'nůžky' and tah_pocitace == 'kámen':
print('Počítač vyhrál.')
"""
Explanation: Feedback on the homework projects
Can this be simplified?
End of explanation
"""
tah_cloveka = 'kámen'
tah_pocitace = 'papír'
if tah_cloveka == tah_pocitace:
print('Plichta.')
elif tah_cloveka == 'kámen' and tah_pocitace == 'nůžky' or tah_cloveka == 'nůžky'and tah_pocitace == 'papír' or tah_cloveka == 'papír' and tah_pocitace == 'kámen':
print('Vyhrála jsi!')
else:
print('Počítač vyhrál.')
"""
Explanation: Yes, it can
End of explanation
"""
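Another possible shortening, beyond the course material (my own sketch): encode which move beats which in a small dict, so the win check becomes a single lookup. The function name `vysledek` is my own choice here.

```python
# Maps each move to the move it beats.
beats = {'kámen': 'nůžky', 'nůžky': 'papír', 'papír': 'kámen'}

def vysledek(tah_cloveka, tah_pocitace):
    # Returns the same messages as the if/elif/else chain above.
    if tah_cloveka == tah_pocitace:
        return 'Plichta.'
    if beats[tah_cloveka] == tah_pocitace:
        return 'Vyhrála jsi!'
    return 'Počítač vyhrál.'

print(vysledek('kámen', 'papír'))  # Počítač vyhrál.
```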
from random import randrange
cislo = randrange(2)
if cislo == 0:
tah_pocitace = "kámen"
print("Počítač vybral kámen.")
if cislo == 1:
print("Počítač vybral nůžky.")
tah_pocitace = "nůžky"
else:
tah_pocitace = "papír"
print("Počítač vybral papír.")
"""
Explanation: Find the bugs 1
This piece of code, which handles the computer's choice of move based on a randomly generated number, may look correct at first glance, but in fact it is enough to run it a few times for the bug to show up.
End of explanation
"""
from random import randrange
cislo = randrange(3)
if cislo == 0:
tah_pocitace = "kámen"
print("Počítač vybral kámen.")
elif cislo == 1:
print("Počítač vybral nůžky.")
tah_pocitace = "nůžky"
else:
tah_pocitace = "papír"
print("Počítač vybral papír.")
"""
Explanation: Correct solution
The bug was using another if instead of elif, which split one condition with three branches into two separate conditions: the first had only one branch (a lone if) and the second had two (an if and an else).
Another bug was in the random number generation, because randrange(2) only ever returns 0 or 1; randrange(3) is needed to allow all three moves.
End of explanation
"""
strana = int(input('Zadej velikost strany v cm: '))
strana = 2852
print('Objem krychle o straně',strana,'cm je', strana**3,'cm3')
print('Obsah krychle o straně',strana,'cm je', 6*strana**2,'cm2')
"""
Explanation: Find the bug 2
What exactly happens to the variable strana just before the surface area and volume are computed?
End of explanation
"""
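One possible fix, sketched below: drop the line that overwrites `strana`, and wrap the computation in a function so it can be tested (the function name `krychle` is an illustrative choice, not from the original):

```python
def krychle(strana):
    """Return the volume (cm3) and surface area (cm2) of a cube with edge `strana` (cm)."""
    objem = strana**3        # volume = a**3
    povrch = 6 * strana**2   # surface area = 6 * a**2
    return objem, povrch

# strana = int(input('Zadej velikost strany v cm: '))  # read the edge from the user
objem, povrch = krychle(2852)
print('Objem krychle o straně 2852 cm je', objem, 'cm3')
print('Povrch krychle o straně 2852 cm je', povrch, 'cm2')
```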
print('Odpovídej "ano" nebo "ne".')
stastna_retezec = input('Jsi šťastná?')
bohata_retezec = input('Jsi bohatá?')
if stastna_retezec == 'ano':
if bohata_retezec == 'ano':
print ("ty se máš")
elif bohata_retezec == 'ne':
print ("zkus mín utrácet")
elif stastna_retezec == 'ne':
if bohata_retezec == 'ano':
print ("zkus se víc usmívat")
elif bohata_retezec == 'ne':
print ("to je mi líto")
else:
print ("Nerozumím.")
"""
Explanation: Happy - rich
Several possible solutions to the happy-rich program. They all do the same thing, but some are simply more readable and more compact.
Solution 1
End of explanation
"""
print('Odpovídej "ano" nebo "ne".')
stastna_retezec = input('Jsi šťastná?')
bohata_retezec = input('Jsi bohatá?')
if stastna_retezec == 'ano' and bohata_retezec == 'ano':
    print("Gratuluji")
elif stastna_retezec == 'ano' and bohata_retezec == 'ne':
print('Zkus míň utrácet.')
elif stastna_retezec == 'ne' and bohata_retezec == 'ano':
print ("zkus se víc usmívat")
elif stastna_retezec == 'ne' and bohata_retezec == 'ne':
print ("to je mi líto")
else:
    print("Nerozumím")
"""
Explanation: Solution 2
End of explanation
"""
print('Odpovídej "ano" nebo "ne".')
stastna_retezec = input('Jsi šťastná? ')
if stastna_retezec == 'ano':
stastna = True
elif stastna_retezec == 'ne':
stastna = False
else:
print('Nerozumím!')
bohata_retezec = input('Jsi bohatá? ')
if bohata_retezec == 'ano':
bohata = True
elif bohata_retezec == 'ne':
bohata = False
else:
print('Nerozumím!')
if bohata and stastna:
print('Gratuluji!')
elif bohata:
print('Zkus se víc usmívat.')
elif stastna:
print('Zkus míň utrácet.')
else:
print('To je mi líto.')
"""
Explanation: Solution 3
End of explanation
"""
|
dualphase90/Learning-Neural-Networks | NN in Scikit Learn.ipynb | mit

from sklearn.neural_network import MLPClassifier

## Input
X = [[0., 0.], [1., 1.]]
## Labels
y = [0, 1]
## Create Model
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
hidden_layer_sizes=(5, 2), random_state=1)
## Fit
clf.fit(X, y)
## Make Predictions
clf.predict([[2., 2.], [-1., -2.]])
"""
Explanation: Classification
Class MLPClassifier implements a multi-layer perceptron (MLP) algorithm that trains using Backpropagation.
MLP trains on two arrays: array X of size (n_samples, n_features), which holds the training samples represented as floating point feature vectors; and array y of size (n_samples,), which holds the target values (class labels) for the training samples:
End of explanation
"""
## Weight matrices/ model parameters
[coef.shape for coef in clf.coefs_]
## Coefficents of classifier
clf.coefs_
"""
Explanation: MLP can fit a non-linear model to the training data. clf.coefs_ contains the weight matrices that constitute the model parameters:
End of explanation
"""
clf.predict_proba([[0, 0.], [0., 0.]])
"""
Explanation: Currently, MLPClassifier supports only the Cross-Entropy loss function, which allows probability estimates by running the predict_proba method.
MLP trains using Backpropagation. More precisely, it trains using some form of gradient descent and the gradients are calculated using Backpropagation. For classification, it minimizes the Cross-Entropy loss function, giving a vector of probability estimates P(y|x) per sample x:
End of explanation
"""
X = [[0., 0.], [1., 1.]]
y = [[0, 1], [1, 1]]
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
hidden_layer_sizes=(15,), random_state=1)
clf.fit(X, y)
clf.predict([[1., 2.]])
clf.predict([[0., 0.]])
"""
Explanation: MLPClassifier supports multi-class classification by applying Softmax as the output function.
Further, the model supports multi-label classification in which a sample can belong to more than one class. For each class, the raw output passes through the logistic function. Values larger or equal to 0.5 are rounded to 1, otherwise to 0. For a predicted output of a sample, the indices where the value is 1 represents the assigned classes of that sample:
End of explanation
"""
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
h = .02 # step size in the mesh
alphas = np.logspace(-5, 3, 5)
names = []
for i in alphas:
names.append('alpha ' + str(i))
classifiers = []
for i in alphas:
classifiers.append(MLPClassifier(alpha=i, random_state=1))
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=0, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable]
figure = plt.figure(figsize=(17, 9))
i = 1
# iterate over datasets
for X, y in datasets:
# preprocess dataset, split into training and test part
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names, classifiers):
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
# Plot also the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(name)
ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
size=15, horizontalalignment='right')
i += 1
figure.subplots_adjust(left=.02, right=.98)
plt.show()
"""
Explanation: Regularization
Both MLPRegressor and class:MLPClassifier use parameter alpha for regularization (L2 regularization) term which helps in avoiding overfitting by penalizing weights with large magnitudes. Following plot displays varying decision function with value of alpha.
Varying regularization in Multi-layer Perceptron¶
A comparison of different values for regularization parameter ‘alpha’ on synthetic datasets. The plot shows that different alphas yield different decision functions.
Alpha is a parameter for regularization term, aka penalty term, that combats overfitting by constraining the size of the weights. Increasing alpha may fix high variance (a sign of overfitting) by encouraging smaller weights, resulting in a decision boundary plot that appears with lesser curvatures. Similarly, decreasing alpha may fix high bias (a sign of underfitting) by encouraging larger weights, potentially resulting in a more complicated decision boundary.
End of explanation
"""
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# Don't cheat - fit only on training data
scaler.fit(X_train)
X_train = scaler.transform(X_train)
# apply same transformation to test data
X_test = scaler.transform(X_test)
"""
Explanation: Tips on Practical Use
Multi-layer Perceptron is sensitive to feature scaling, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0, 1] or [-1, +1], or standardize it to have mean 0 and variance 1. Note that you must apply the same scaling to the test set for meaningful results. You can use StandardScaler for standardization.
End of explanation
"""
X = [[0., 0.], [1., 1.]]
y = [0, 1]
clf = MLPClassifier(hidden_layer_sizes=(15,), random_state=1, max_iter=1, warm_start=True)
for i in range(10):
clf.fit(X, y)
# additional monitoring / inspection
"""
Explanation: An alternative and recommended approach is to use StandardScaler in a Pipeline
* Finding a reasonable regularization parameter $\alpha$ is best done using GridSearchCV, usually in the range 10.0 ** -np.arange(1,7).
* Empirically, we observed that L-BFGS converges faster and with better solutions on small datasets. For relatively large datasets, however, Adam is very robust. It usually converges quickly and gives pretty good performance. SGD with momentum or Nesterov's momentum, on the other hand, can perform better than those two algorithms if the learning rate is correctly tuned.
More control with warm_start
If you want more control over stopping criteria or learning rate in SGD, or want to do additional monitoring, using warm_start=True and max_iter=1 and iterating yourself can be helpful:
End of explanation
"""
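The Pipeline + GridSearchCV approach suggested above can be sketched as follows; the synthetic dataset, the small hidden layer, and the exact grid are illustrative assumptions, not part of the original text:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=0)

# Scaling happens inside the pipeline, so each CV fold fits the scaler
# on its own training split only (no leakage from the validation split).
pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(solver='lbfgs', hidden_layer_sizes=(15,),
                                   max_iter=1000, random_state=1))

# Search alpha over 10**-1 ... 10**-6, the range recommended above.
param_grid = {'mlpclassifier__alpha': 10.0 ** -np.arange(1, 7)}
search = GridSearchCV(pipe, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

Note that `make_pipeline` names each step after its lowercased class name, which is why the grid key is `mlpclassifier__alpha`.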
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn import datasets
# different learning rate schedules and momentum parameters
params = [{'solver': 'sgd', 'learning_rate': 'constant', 'momentum': 0,
'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'constant', 'momentum': .9,
'nesterovs_momentum': False, 'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'constant', 'momentum': .9,
'nesterovs_momentum': True, 'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'invscaling', 'momentum': 0,
'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'invscaling', 'momentum': .9,
'nesterovs_momentum': True, 'learning_rate_init': 0.2},
{'solver': 'sgd', 'learning_rate': 'invscaling', 'momentum': .9,
'nesterovs_momentum': False, 'learning_rate_init': 0.2},
{'solver': 'adam', 'learning_rate_init': 0.01}]
labels = ["constant learning-rate", "constant with momentum",
"constant with Nesterov's momentum",
"inv-scaling learning-rate", "inv-scaling with momentum",
"inv-scaling with Nesterov's momentum", "adam"]
plot_args = [{'c': 'red', 'linestyle': '-'},
{'c': 'green', 'linestyle': '-'},
{'c': 'blue', 'linestyle': '-'},
{'c': 'red', 'linestyle': '--'},
{'c': 'green', 'linestyle': '--'},
{'c': 'blue', 'linestyle': '--'},
{'c': 'black', 'linestyle': '-'}]
def plot_on_dataset(X, y, ax, name):
# for each dataset, plot learning for each learning strategy
print("\nlearning on dataset %s" % name)
ax.set_title(name)
X = MinMaxScaler().fit_transform(X)
mlps = []
if name == "digits":
# digits is larger but converges fairly quickly
max_iter = 15
else:
max_iter = 400
for label, param in zip(labels, params):
print("training: %s" % label)
mlp = MLPClassifier(verbose=0, random_state=0,
max_iter=max_iter, **param)
mlp.fit(X, y)
mlps.append(mlp)
print("Training set score: %f" % mlp.score(X, y))
print("Training set loss: %f" % mlp.loss_)
for mlp, label, args in zip(mlps, labels, plot_args):
ax.plot(mlp.loss_curve_, label=label, **args)
fig, axes = plt.subplots(2, 2, figsize=(15, 10))
# load / generate some toy datasets
iris = datasets.load_iris()
digits = datasets.load_digits()
data_sets = [(iris.data, iris.target),
(digits.data, digits.target),
datasets.make_circles(noise=0.2, factor=0.5, random_state=1),
datasets.make_moons(noise=0.3, random_state=0)]
for ax, data, name in zip(axes.ravel(), data_sets, ['iris', 'digits',
'circles', 'moons']):
plot_on_dataset(*data, ax=ax, name=name)
fig.legend(ax.get_lines(), labels=labels, ncol=3, loc="upper center")
plt.show()
"""
Explanation: Compare Stochastic learning strategies for MLPClassifier
This example visualizes some training loss curves for different stochastic learning strategies, including SGD and Adam. Because of time-constraints, we use several small datasets, for which L-BFGS might be more suitable. The general trend shown in these examples seems to carry over to larger datasets, however.
Note that those results can be highly dependent on the value of learning_rate_init.
End of explanation
"""
|
bt3gl/Machine-Learning-Resources | ml_notebooks/basics.ipynb | gpl-2.0

#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
from __future__ import absolute_import, division, print_function
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
"""
Explanation: Customization basics: tensors and operations
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/customization/basics"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This introductory TensorFlow tutorial shows how to:
Import the required package
Create and use tensors
Use GPU acceleration
Demonstrate tf.data.Dataset
End of explanation
"""
import tensorflow as tf
"""
Explanation: Import TensorFlow
To get started, import the tensorflow module. As of TensorFlow 2.0, eager execution is turned on by default. This enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
End of explanation
"""
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
"""
Explanation: Tensors
A Tensor is a multi-dimensional array. Similar to NumPy ndarray objects, tf.Tensor objects have a data type and a shape. Additionally, tf.Tensors can reside in accelerator memory (like a GPU). TensorFlow offers a rich library of operations (tf.add, tf.matmul, tf.linalg.inv etc.) that consume and produce tf.Tensors. These operations automatically convert native Python types, for example:
End of explanation
"""
x = tf.matmul([[1]], [[2, 3]])
print(x)
print(x.shape)
print(x.dtype)
"""
Explanation: Each tf.Tensor has a shape and a datatype:
End of explanation
"""
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
"""
Explanation: The most obvious differences between NumPy arrays and tf.Tensors are:
Tensors can be backed by accelerator memory (like GPU, TPU).
Tensors are immutable.
NumPy Compatibility
Converting between a TensorFlow tf.Tensors and a NumPy ndarray is easy:
TensorFlow operations automatically convert NumPy ndarrays to Tensors.
NumPy operations automatically convert Tensors to NumPy ndarrays.
Tensors are explicitly converted to NumPy ndarrays using their .numpy() method. These conversions are typically cheap since the array and tf.Tensor share the underlying memory representation, if possible. However, sharing the underlying representation isn't always possible since the tf.Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion involves a copy from GPU to host memory.
End of explanation
"""
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.config.experimental.list_physical_devices("GPU"))
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
"""
Explanation: GPU acceleration
Many TensorFlow operations are accelerated using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation—copying the tensor between CPU and GPU memory, if necessary. Tensors produced by an operation are typically backed by the memory of the device on which the operation executed, for example:
End of explanation
"""
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.config.experimental.list_physical_devices("GPU"):
print("On GPU:")
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
"""
Explanation: Device Names
The Tensor.device property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with GPU:<N> if the tensor is placed on the N-th GPU on the host.
Explicit Device Placement
In TensorFlow, placement refers to how individual operations are assigned (placed on) a device for execution. As mentioned, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation and copies tensors to that device, if needed. However, TensorFlow operations can be explicitly placed on specific devices using the tf.device context manager, for example:
End of explanation
"""
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
"""
Explanation: Datasets
This section uses the tf.data.Dataset API to build a pipeline for feeding data to your model. The tf.data.Dataset API is used to build performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.
Create a source Dataset
Create a source dataset using one of the factory functions like Dataset.from_tensors, Dataset.from_tensor_slices, or using objects that read from files like TextLineDataset or TFRecordDataset. See the TensorFlow Dataset guide for more information.
End of explanation
"""
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
"""
Explanation: Apply transformations
Use the transformations functions like map, batch, and shuffle to apply transformations to dataset records.
End of explanation
"""
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
"""
Explanation: Iterate
tf.data.Dataset objects support iteration to loop over records:
End of explanation
"""
|
yvesdubief/UVM-ME249-CFD | .ipynb_checkpoints/ME249-Lecture-3-checkpoint.ipynb | gpl-2.0

%matplotlib inline
# plots graphs within the notebook
%config InlineBackend.figure_format='svg' # render inline figures as SVG (sharper than the default PNG)
from IPython.display import Image
from IPython.core.display import HTML
def header(text):
raw_html = '<h4>' + str(text) + '</h4>'
return raw_html
def box(text):
raw_html = '<div style="border:1px dotted black;padding:2em;">'+str(text)+'</div>'
return HTML(raw_html)
def nobox(text):
raw_html = '<p>'+str(text)+'</p>'
return HTML(raw_html)
def addContent(raw_html):
global htmlContent
htmlContent += raw_html
class PDF(object):
def __init__(self, pdf, size=(200,200)):
self.pdf = pdf
self.size = size
def _repr_html_(self):
return '<iframe src={0} width={1[0]} height={1[1]}></iframe>'.format(self.pdf, self.size)
def _repr_latex_(self):
return r'\includegraphics[width=1.0\textwidth]{{{0}}}'.format(self.pdf)
class ListTable(list):
""" Overridden list class which takes a 2-dimensional list of
the form [[1,2,3],[4,5,6]], and renders an HTML Table in
IPython Notebook. """
def _repr_html_(self):
html = ["<table>"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
return ''.join(html)
font = {'family' : 'serif',
'color' : 'black',
'weight' : 'normal',
'size' : 18,
}
"""
Explanation: Lecture 3: Accuracy in Fourier's Space
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
Lx = 2.*np.pi
Nx = 256
u = np.zeros(Nx,dtype='float64')
du = np.zeros(Nx,dtype='float64')
ddu = np.zeros(Nx,dtype='float64')
k_0 = 2.*np.pi/Lx
x = np.linspace(Lx/Nx,Lx,Nx)
Nwave = 32
uwave = np.zeros((Nx,Nwave),dtype='float64')
duwave = np.zeros((Nx,Nwave),dtype='float64')
dduwave = np.zeros((Nx,Nwave),dtype='float64')
ampwave = np.random.random(Nwave)
phasewave = np.random.random(Nwave)*2*np.pi
for iwave in range(Nwave):
uwave[:,iwave] = ampwave[iwave]*np.cos(k_0*iwave*x+phasewave[iwave])
duwave[:,iwave] = -k_0*iwave*ampwave[iwave]*np.sin(k_0*iwave*x+phasewave[iwave])
dduwave[:,iwave] = -(k_0*iwave)**2*ampwave[iwave]*np.cos(k_0*iwave*x+phasewave[iwave])
u = np.sum(uwave,axis=1)
du = np.sum(duwave,axis=1)
ddu = np.sum(dduwave,axis=1)
#print(u)
plt.plot(x,u,lw=2)
plt.xlim(0,Lx)
#plt.legend(loc=3, bbox_to_anchor=[0, 1],
# ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
plt.plot(x,du,lw=2)
plt.xlim(0,Lx)
#plt.legend(loc=3, bbox_to_anchor=[0, 1],
# ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$du/dx$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
plt.plot(x,ddu,lw=2)
plt.xlim(0,Lx)
#plt.legend(loc=3, bbox_to_anchor=[0, 1],
# ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$d^2u/dx^2$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
"""
Explanation: <p class='alert alert-success'>
Solve the questions in green blocks. Save the file as ME249-Lecture-3-YOURNAME.ipynb and change YOURNAME in the bottom cell. Send me and the grader the <b>html</b> file not the ipynb file.
</p>
<h1>Discrete Fourier Series</h1>
Consider a function $f$ periodic over a domain $0\leq x\leq 2\pi$, discretized by $N_x$ points. The longest wavelength that can be contained in the domain is $L_x$. A physical understanding of Fourier series is the representation of a system as the sum of many waves of wavelengths smaller than or equal to $L_x$. In a discrete sense, the series of waves used to decompose the system is defined as:
$$
a_n\exp\left(\hat{\jmath}\frac{2\pi nx}{L_x}\right)
$$
such that
<p class='alert alert-danger'>
$$
f(x) = \sum_{n=-\infty}^{\infty}a_n\exp\left(\hat{\jmath}\frac{2\pi nx}{Lx}\right)
$$
</p>
and
<p class='alert alert-danger'>
$$
a_n = \frac{1}{L_x}\int_Lf(x)\exp\left(-\hat{\jmath}\frac{2\pi nx}{Lx}\right)dx
$$
</p>
Here $\hat{\jmath}^2=-1$. Often the expansion is written in terms of the wavenumber, where
<p class='alert alert-danger'>
$$
k_n = \frac{2\pi n}{L_x}
$$
</p>
Note that if $x$ is time instead of distance, $L_x$ is a time $T_0$, the smallest frequency contained in the domain is $f_0=1/T_0$, and the wavenumber $n$ becomes $k_n=2\pi f_0 n=2\pi f_n$, where the $f_n$ for $\vert n\vert >1$ are the higher frequencies.
<h1>Discrete Fourier Transform (DFT)</h1>
In scientific computing we are interested in applying Fourier series on vectors or matrices, containing a integer number of samples. The DFT is the fourier series for the number of samples. DFT functions available in python or any other language only care about the number of samples, therefore the wavenumber is
<p class='alert alert-danger'>
$$
k_n=\frac{2\pi n}{N_x}
$$
</p>
Consider a function $f$ periodic over a domain $0\leq x\leq 2\pi$, discretized by $N_x$ points. The nodal value is $f_i$ located at $x_i=(i+1)\Delta x$ with $\Delta x=L_x/Nx$. The DFT is defined as
<p class='alert alert-danger'>
$$
\hat{f}_k=\sum_{i=0}^{N_x-1}f_i\exp\left(-2\pi\hat{\jmath}\frac{ik}{N_x}\right)
$$
</p>
The inverse DFT is defined as
<p class='alert alert-danger'>
$$
f_i=\sum_{k=0}^{N_x-1}\hat{f}_k\exp\left(2\pi\hat{\jmath}\frac{ik}{N_x}\right)
$$
</p>
<h1>Fast Fourier Transform (FFT)</h1>
Using symmetries, the FFT reduces computational costs and stores in the following way:
<p class='alert alert-danger'>
$$
\hat{f}_k=\sum_{i=0}^{N_x-1}f_i\exp\left(-2\pi\hat{\jmath}\frac{ik}{N_x}\right), \qquad k=-\frac{N_x}{2}+1,\dots,\frac{N_x}{2}
$$
</p>
<p class='alert alert-info'>
Compared to the Fourier series, DFT or FFT assumes that the system can be accurately captured by a finite number of waves. It is up to the user to ensure that the number of computational points is sufficient to capture the smallest scale, or smallest wavelength or highest frequence. Remember that the function on which FT is applied must be periodic over the domain and the grid spacing must be uniform.
</p>
There are FT algorithms for unevenly spaced data, but they are beyond the scope of this notebook.
<h1>Filtering</h1>
The following provides examples of low- and high-pass filters based on the Fourier transform. An ideal low-pass filter passes frequencies below a threshold without attenuation and removes frequencies above it; an ideal high-pass filter does the opposite.
When applied to spatial data (function of $x$ rather than $t$-time), the FT (Fourier Transform) of a variable is function of wavenumbers
$$
k_n=\frac{2\pi n}{L_x}
$$
or wavelengths
$$
\lambda_n=\frac{2\pi}{k_n}
$$
The test function is defined as sum of $N_{wave}$ cosine function:
$$
u(x)=\sum_{n=0}^{N_{wave}}A_n\cos\left(nx+\phi_n\right)
$$
with the following first and second derivatives:
$$
\frac{du}{dx}=\sum_{n=1}^{N_{wave}}-nA_n\sin\left(nx+\phi_n\right)
$$
$$
\frac{d^2u}{dx^2}=\sum_{n=1}^{N_{wave}}-n^2A_n\cos\left(nx+\phi_n\right)
$$
The python code for function u and its derivatives is written below. Here amplitudes $A_n$ and phases $\phi_n$ are chosen randomly of ranges $[0,1]$ and $[0,2\pi]$, respectively.
End of explanation
"""
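The FFT storage order described above can be verified directly; `np.fft.fftfreq` returns the frequencies in exactly the order `np.fft.fft` stores the coefficients (a small sketch; note that NumPy stores the Nyquist mode as $-N_x/2$ rather than $+N_x/2$, which is equivalent for real signals):

```python
import numpy as np

Nx = 8
# fftfreq returns cycles per sample; multiplying by Nx recovers the integer
# wavenumber index in FFT storage order: 0, 1, ..., Nx/2 - 1, -Nx/2, ..., -1
n = np.fft.fftfreq(Nx) * Nx
print(n)  # [ 0.  1.  2.  3. -4. -3. -2. -1.]

# A single mode cos(3x) shows up at wavenumbers +3 and -3:
x = np.linspace(0, 2*np.pi, Nx, endpoint=False)
u_hat = np.fft.fft(np.cos(3*x))
print(np.flatnonzero(np.abs(u_hat) > 1e-10))  # indices 3 and 5, i.e. k=+3 and k=-3
```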
#check FT^-1(FT(u)) - Sanity check
u_hat = np.fft.fft(u)
v = np.real(np.fft.ifft(u_hat))
plt.plot(x,u,'r-',lw=2,label='$u$')
plt.plot(x,v,'b--',lw=2,label='$FT^{-1}[FT[u]]$')
plt.xlim(0,Lx)
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
print('error',np.linalg.norm(u-v,np.inf))
"""
Explanation: First, let's perform a sanity check, i.e.
$$
u=FT^{-1}\left[FT\left[u\right]\right]
$$
where $FT$ designates the Fourier transform and $FT^{-1}$ its inverse.
End of explanation
"""
F = np.zeros(Nx//2+1,dtype='float64')
F = np.real(u_hat[0:Nx//2+1]*np.conj(u_hat[0:Nx//2+1]))
k = np.hstack((np.arange(0,Nx//2+1),np.arange(-Nx//2+1,0))) #This is how the FFT stores the wavenumbers
plt.loglog(k[0:Nx//2+1],F,'r-',lw=2,label='$\Phi_u$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Phi_u(k)$', fontdict = font)
plt.xticks(fontsize = 16)
y_ticks = np.logspace(-30,5,8)
plt.yticks(y_ticks,fontsize = 16) #Specify ticks, necessary when increasing font
plt.show()
"""
Explanation: <h2>Spectrum</h2>
For now we will define the spectrum $\Phi_u$ as
<p class='alert alert-danger'>
$$
\Phi_u(k_n) = \hat{u}_n\,\hat{u}_n^*
$$
</p>
which can be interpreted as the energy contained in the $k_n$ wavenumber. This is helpful when searching for the most energetic scales or waves in our system. Thanks to the symmetries of the FFT, the spectrum is defined over $n=0$ to $N_x/2$
End of explanation
"""
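A quick consistency check on this definition is Parseval's theorem: with NumPy's unnormalized FFT convention, $\sum_i u_i^2 = \frac{1}{N_x}\sum_k \vert\hat{u}_k\vert^2$. A small sketch (the random test signal is illustrative, not from the original notebook):

```python
import numpy as np

Nx = 256
rng = np.random.default_rng(0)
u = rng.standard_normal(Nx)

u_hat = np.fft.fft(u)
energy_physical = np.sum(u**2)
energy_spectral = np.sum(np.abs(u_hat)**2) / Nx  # 1/Nx from the unnormalized FFT

print(abs(energy_physical - energy_spectral))  # difference is at machine precision
```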
# filtering the smaller waves
def low_pass_filter_fourier(a, k, kcutoff):
    a_hat = np.fft.fft(a)                        # transform the input array, not the global u
    filter_mask = np.where(np.abs(k) > kcutoff)  # use the cutoff argument, not a global
    a_hat[filter_mask] = 0.0 + 0.0j
    a_filter = np.real(np.fft.ifft(a_hat))
    return a_filter
kcut = Nwave//2 + 1
k = np.hstack((np.arange(0,Nx//2+1),np.arange(-Nx//2+1,0)))
v = low_pass_filter_fourier(u,k,kcut)
u_filter_exact = np.sum(uwave[:,0:kcut+1],axis=1)
plt.plot(x,v,'r-',lw=2,label='filtered with fft')
plt.plot(x,u_filter_exact,'b--',lw=2,label='filtered (exact)')
plt.plot(x,u,'g:',lw=2,label='original')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
print('error:',np.linalg.norm(v-u_filter_exact,np.inf))
F = np.zeros(Nx//2+1,dtype='float64')
F_filter = np.zeros(Nx//2+1,dtype='float64')
u_hat = np.fft.fft(u)
F = np.real(u_hat[0:Nx//2+1]*np.conj(u_hat[0:Nx//2+1]))
v_hat = np.fft.fft(v)
F_filter = np.real(v_hat[0:Nx//2+1]*np.conj(v_hat[0:Nx//2+1]))
k = np.hstack((np.arange(0,Nx//2+1),np.arange(-Nx//2+1,0)))
plt.loglog(k[0:Nx//2+1],F,'r-',lw=2,label='$\Phi_u$')
plt.loglog(k[0:Nx//2+1],F_filter,'b-',lw=2,label='$\Phi_v$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Phi_u(k)$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(y_ticks,fontsize = 16)
plt.show()
"""
Explanation: <h2>Low-Pass Filter</h2>
The following code filters the original signal by half the wavenumbers using FFT and compares to exact filtered function
End of explanation
"""
u_hat = np.fft.fft(u)
kfilter = Nwave//2
k = np.arange(Nx)
filter_mask = np.where((k < kfilter) | (k > Nx-kfilter) )
u_hat[filter_mask] = 0.+0.j
v = np.real(np.fft.ifft(u_hat))
plt.plot(x,v,'r-',lw=2,label='Filtered (FT)')
plt.plot(x,np.sum(uwave[:,kfilter:Nwave+1],axis=1),'b--',lw=2,label='Filtered (exact)')
plt.plot(x,u,'g:',lw=2,label='original')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
F = np.zeros(Nx//2+1,dtype='float64')
F_filter = np.zeros(Nx//2+1,dtype='float64')
u_hat = np.fft.fft(u)
F = np.real(u_hat[0:Nx//2+1]*np.conj(u_hat[0:Nx//2+1]))
v_hat = np.fft.fft(v)
F_filter = np.real(v_hat[0:Nx//2+1]*np.conj(v_hat[0:Nx//2+1]))
k = np.hstack((np.arange(0,Nx//2+1),np.arange(-Nx//2+1,0)))
plt.loglog(k[0:Nx//2+1],F,'r-',lw=2,label='$\Phi_u$')
plt.loglog(k[0:Nx//2+1],F_filter,'b-',lw=2,label='$\Phi_v$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Phi(k)$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(y_ticks,fontsize = 16)
plt.show()
"""
Explanation: <h2> High-Pass Filter</h2>
<p class='alert alert-success'>
From the example below, develop a function for a high-pass filter.
</p>
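One possible starting point for this exercise (a sketch only, mirroring the low-pass function above; the toy signal and names below are made up for the illustration):

```python
import numpy as np

# sketch of a high-pass filter: zero out the wavenumbers BELOW the cutoff
# instead of above it
def high_pass_filter_fourier(a, k, kcutoff):
    a_hat = np.fft.fft(a)
    a_hat[np.abs(k) < kcutoff] = 0.0 + 0.0j
    return np.real(np.fft.ifft(a_hat))

# quick self-check on a two-wave signal: only the k=7 wave should survive
npts = 64
xs = np.linspace(0., 2.*np.pi, npts, endpoint=False)
sig = np.sin(3.*xs) + np.sin(7.*xs)
ks = np.hstack((np.arange(0, npts//2+1), np.arange(-npts//2+1, 0)))
hp = high_pass_filter_fourier(sig, ks, 5)
print(np.allclose(hp, np.sin(7.*xs)))  # True
```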
End of explanation
"""
u_hat = np.fft.fft(u)
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0)))
ik = 1j*k
v_hat = ik*u_hat
v = np.real(np.fft.ifft(v_hat))
plt.plot(x,v,'r-',lw=2,label='$FT^{-1}[\hat{\jmath}kFT[u]]$')
plt.plot(x,du,'b--',lw=2,label='exact derivative')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.ylabel('$du/dx$',fontsize = 18)
plt.xlabel('$x$',fontsize = 18)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
print('error:',np.linalg.norm(v-du))
mk2 = ik*ik
v_hat = mk2*u_hat
v = np.real(np.fft.ifft(v_hat))
plt.plot(x,v,'r-',lw=2,label='$FT^{-1}[-k^2FT[u]]$')
plt.plot(x,ddu,'b--',lw=2,label='exact derivative')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.ylabel('$d^2u/dx^2$',fontsize = 18)
plt.xlabel('$x$',fontsize = 18)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
print('error:',np.linalg.norm(v-ddu))
"""
Explanation: <h1>Differentiation in Fourier Space</h1>
Going back to the Fourier series,
$$
u(x) = \sum_{n=-\infty}^{\infty}a_n\exp\left(\hat{\jmath}\frac{2\pi nx}{L_x}\right)=\sum_{n=-\infty}^{\infty}a_n\exp\left(\hat{\jmath}k_nx\right)
$$
with
$$
k_n=\frac{2\pi n}{L_x}\,
$$
it follows that any $m$-th derivative of the real variable $u$ is:
$$
\frac{d^mu}{dx^m} = \sum_{n=-\infty}^{\infty}\left(\hat{\jmath}k_n\right)^ma_n\exp\left(\hat{\jmath}k_nx\right)
$$
In other words, if $u_n$ is defined as:
$$
u_n(x) = a_n\exp\left(\hat{\jmath}k_nx\right)
$$
then
$$
\frac{d^mu_n}{dx^m} = \left(\hat{\jmath}k_n\right)^m u_n\,.
$$
An $m$-th differentiation in Fourier space amounts to the multiplication of each Fourier coefficient $a_n$ by $\left(\hat{\jmath}k_n\right)^m$.
The following code computes the first derivative of $u$:
End of explanation
"""
du_fd = np.zeros(Nx,dtype='float64')
dx = Lx/Nx
du_fd[1:Nx] = (u[1:Nx]-u[0:Nx-1])/dx
du_fd[0] = (u[0]-u[Nx-1])/dx
plt.plot(x,du_fd,'r-',lw=2,label='FD derivative')
plt.plot(x,du,'b--',lw=2,label='exact derivative')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.ylabel('$du/dx$',fontsize = 18)
plt.xlabel('$x$',fontsize = 18)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
print('error:',np.linalg.norm(du_fd-du,np.inf))
"""
Explanation: <h1> Comparison with Finite Difference Derivatives</h1>
When the number of Fourier nodes is sufficient to capture all scales, derivatives computed in Fourier space are essentially exact. The following code compares the exact first derivative with a first-order upwind scheme:
$$
\left.\frac{\delta u}{\delta x}\right\vert_i=\frac{u_i-u_{i-1}}{\Delta x}
$$
For finite difference derivatives, we will use the symbol $\delta/\delta x$ rather than $d/dx$.
<p class='alert alert-success'>
Show that the scheme is first order and write the leading term in the truncation error.
</p>
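For reference, the expected argument can be sketched with a Taylor expansion of $u_{i-1}$ about $x_i$:
$$
u_{i-1} = u_i - \Delta x\, u'_i + \frac{\Delta x^2}{2}u''_i + \mathcal{O}(\Delta x^3)
\quad\Rightarrow\quad
\frac{u_i-u_{i-1}}{\Delta x} = u'_i - \frac{\Delta x}{2}u''_i + \mathcal{O}(\Delta x^2)\,,
$$
i.e. the scheme is first order, with leading truncation error term $-\frac{\Delta x}{2}u''_i$.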
End of explanation
"""
F = np.zeros(Nx//2+1,dtype='float64')
F_fd = np.zeros(Nx//2+1,dtype='float64')
du_hat = np.fft.fft(du)/Nx
F = np.real(du_hat[0:Nx//2+1]*np.conj(du_hat[0:Nx//2+1]))
dv_hat = np.fft.fft(du_fd)/Nx
F_fd = np.real(dv_hat[0:Nx//2+1]*np.conj(dv_hat[0:Nx//2+1]))
k = np.hstack((np.arange(0,Nx//2+1),np.arange(-Nx//2+1,0)))
plt.loglog(k[0:Nx//2+1],F,'r-',lw=2,label='$\Phi_{du/dx}$')
plt.loglog(k[0:Nx//2+1],F_fd,'b-',lw=2,label='$\Phi_{\delta u/\delta x}$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Phi(k)$', fontdict = font)
plt.xticks(fontsize = 16)
plt.ylim(0.1,250)
plt.yticks(fontsize = 16)
plt.show()
print('error:',np.linalg.norm(F[0:Nwave//2]-F_fd[0:Nwave//2],np.inf))
print('error per wavenumber')
plt.loglog(k[0:Nx//2+1],np.abs(F-F_fd),'r-',lw=2)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Vert\Phi(k)-\Phi_{\delta u/\delta x}(k)\Vert$', fontdict = font)
plt.xticks(fontsize = 16)
plt.ylim(1e-5,250)
plt.yticks(fontsize = 16)
plt.show()
"""
Explanation: The error is large, which is compounded by the fact that <FONT FACE="courier" style="color:blue">np.linalg.norm</FONT> does not normalize the norm by the number of points. Even so, the error is far greater than for the derivative computed in Fourier space. To shed some light on the cause of the errors, the following code computes the spectrum of the first derivative. The second graph shows, per wavenumber, the difference in spectral energy between the exact derivative and the finite difference approximation.
<p class='alert alert-info'>
However, the best way to visualize the problem with spectra alone is to lower the resolution in the top cell and rerun the whole notebook. Finish the notebook as is, then rerun it with $N_x=64$.
</p>
End of explanation
"""
L = np.pi
N = 32
dx = L/N
omega = np.linspace(0,L,N)
omegap_exact = omega
omegap_modified = np.sin(omega)
plt.plot(omega,omegap_exact,'k-',lw=2,label='exact')
plt.plot(omega,omegap_modified,'r-',lw=2,label='1st order upwind')
plt.xlim(0,L)
plt.ylim(0,L)
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$\omega\Delta x$', fontdict = font)
plt.ylabel('$\omega_{mod}\Delta x$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
"""
Explanation: <p class='alert alert-success'>
Which scales are the most affected by the finite difference scheme? What effect do you observe?
</p>
<h1>Modified Wavenumber</h1>
Starting from,
$$
f(x) = \sum_{k=-N/2+1}^{N/2}\hat{f}_k\exp\left(\hat{\jmath}\frac{2\pi kx}{L_x}\right)
$$
and introducing the wavenumber
$$
\omega = \frac{2\pi k}{L_x}
$$
allows the Fourier expansion to be reduced to
$$
f(x) = \sum_{\omega=-\pi}^{\pi}\hat{f}_\omega e^{\hat{\jmath}\omega x}
$$
The derivative of $f$ becomes
$$
f'(x) = \sum_{\omega=-\pi}^{\pi}\hat{f'}_\omega e^{\hat{\jmath}\omega x}
$$
where $\hat{f'}_\omega$ is only a notation, not the derivative of the Fourier coefficient:
$$
\hat{f'}_\omega=\hat{\jmath}\omega\hat{f}_\omega
$$
Considering the symmetry of the FT, we restrict the study to $\omega\Delta x\in[0,\pi]$.
Now we want to express the first order finite difference scheme in terms of Fourier coefficients and wavenumbers. The scaled coordinates expression of the scheme is
$$
\frac{\delta f}{\delta x}=\frac{f(x)-f(x-\Delta x)}{\Delta x}=\sum_{\omega=-\pi}^{\pi}\hat{f}_\omega\frac{e^{\hat{\jmath}\omega x}-e^{\hat{\jmath}\omega (x-\Delta x)}}{\Delta x}=\sum_{\omega=-\pi}^{\pi}\frac{1-e^{-\hat{\jmath}\omega\Delta x}}{\Delta x}\hat{f}_\omega e^{\hat{\jmath}\omega x}
$$
We now define a modified wavenumber for the first order upwind scheme:
$$
\hat{\jmath}\omega_{\text{mod}}=\frac{1-e^{-\hat{\jmath}\omega\Delta x}}{\Delta x}=\frac{1-\cos(-\omega\Delta x)-\hat{\jmath}\sin(-\omega\Delta x)}{\Delta x}
$$
which reduces to
$$
\hat{\jmath}\omega_{\text{mod}}\Delta x= \hat{\jmath}\sin(\omega\Delta x)+\left(1-\cos(\omega\Delta x)\right)
$$
The modified wavenumber is no longer purely imaginary, and even the imaginary part is far from the exact wavenumber, as shown in the following plot.
End of explanation
"""
u_hat = np.fft.fft(u)
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0)))
dx = Lx/Nx
ikm = 1j*np.sin(2.*np.pi/Nx*k)/dx+(1-np.cos(2.*np.pi/Nx*k))/dx
v_hat = ikm*u_hat
dum = np.real(np.fft.ifft(v_hat))
plt.plot(x,dum,'r-',lw=2,label='$FT^{-1}[\hat{\jmath}\omega_{mod}FT[u]]$')
plt.plot(x,du_fd,'b--',lw=2,label='$\delta u/\delta x$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.ylabel('$du/dx$',fontsize = 18)
plt.xlabel('$x$',fontsize = 18)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
print('error:',np.linalg.norm(du_fd-dum,np.inf))
"""
Explanation: First, let's verify that the derivation of the modified wavenumber is correct.
End of explanation
"""
!jupyter nbconvert --to html ME249-Lecture-3-YOURNAME.ipynb
"""
Explanation: <p class='alert alert-success'>
- Explain why $\omega\Delta x$ can be defined from $0$ to $\pi$.
</p>
<p class='alert alert-success'>
- Write a code to illustrate the effects of the imaginary and real parts of the modified wavenumber on the derivative of the following function
$$
f(x) = \cos(nx)
$$
defined for $x\in[0,2\pi]$, discretized with $N$ points. Study a few $n$, ranging from large scales to small scales. Note the two effects we seek to identify are phase change and amplitude change.
</p>
<p class='alert alert-success'>
- Derive the modified wavenumber for the second order central finite difference scheme
$$
\frac{\delta f}{\delta x}=\frac{f_{i+1}-f_{i-1}}{2\Delta x}
$$
</p>
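As a point of comparison (a sketch of the kind of result expected), the same substitution applied to the central scheme yields a purely imaginary modified wavenumber:
$$
\hat{\jmath}\omega_{\text{mod}} = \frac{e^{\hat{\jmath}\omega\Delta x}-e^{-\hat{\jmath}\omega\Delta x}}{2\Delta x} = \hat{\jmath}\,\frac{\sin(\omega\Delta x)}{\Delta x}\,,
$$
so the central scheme is non-dissipative; its error is purely dispersive.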
<p class='alert alert-success'>
- Create a second order upwind scheme and derive the modified wavenumber. Compare the performance of the first order and the second order schemes. For the second order upwind scheme, find $a$, $b$ and $c$ such that
$$
\frac{\delta f}{\delta x}=\frac{af_{i-2}+bf_{i-1}+cf_i}{\Delta x}
$$
</p>
End of explanation
"""
|
Brunel-Visualization/Brunel | python/examples/Whiskey.ipynb | apache-2.0 | import pandas as pd
from numpy import log, abs, sign, sqrt
import brunel
whiskey = pd.read_csv("data/whiskey.csv")
print('Data on whiskies:', ', '.join(whiskey.columns))
"""
Explanation: Whiskey Data
This data set contains data on a small number of whiskies
End of explanation
"""
%%brunel data('whiskey') x(country, category) color(rating) treemap label(name:3) tooltip(#all)
style('.label {font-size:7pt}') legends(none)
:: width=900, height=600
%%brunel data('whiskey') bubble color(rating:red) sort(rating) size(abv) label(name:6) tooltip(#all) filter(price, category)
:: height=500
%%brunel data('whiskey')
line x(age) y(rating) mean(rating) using(interpolate) label(country) split(country)
bin(age:8) color(#selection) legends(none) |
treemap x(category) interaction(select) size(#count) color(#selection) legends(none) sort(#count:ascending) bin(category:9)
tooltip(country) list(country) label(#count) style('.labels .label {font-size:14px}')
:: width=900
%%brunel data('whiskey')
bubble label(country:3) bin(country) size(#count) color(#selection) sort(#count) interaction(select) tooltip(name) list(name) legends(none) at(0,10,60,100)
| x(abv) y(rating) color(#count:blue) legends(none) bin(abv:8) bin(rating:5) style('symbol:rect; stroke:none; size:100%')
interaction(select) label(#selection) list(#selection) at(60,15,100,100) tooltip(rating, abv,#count) legends(none)
| bar label(brand:70) list(brand) at(0,0, 100, 10) axes(none) color(#selection) legends(none) interaction(filter)
:: width=900, height=600
"""
Explanation: Summaries
Shown below are the following charts:
A treemap display for each whiskey, broken down by country and category. The cells are colored by the rating, with lower-rated whiskies in blue, and higher-rated in reds. Missing data for ratings show as black.
A filtered chart allowing you to select whiskeys based on price and category
A line chart showing the relationship between age and rating. A simple treemap of categories is linked to this chart
A bubble chart of countries linked to a heatmap of alcohol level (ABV) by rating
End of explanation
"""
from sklearn import tree
D = whiskey[['Name', 'ABV', 'Age', 'Rating', 'Price']].dropna()
X = D[ ['ABV', 'Age', 'Rating'] ]
y = D['Price']
clf = tree.DecisionTreeRegressor(min_samples_leaf=4)
clf.fit(X, y)
D['Predicted'] = clf.predict(X)
f = D['Predicted'] - D['Price']
D['Diff'] = sqrt(abs(f)) * sign(f)
D['LPrice'] = log(y)
%brunel data('D') y(diff) x(LPrice) tooltip(name, price, predicted, rating) color(rating) :: width=700
"""
Explanation: Some Analysis
Here we use the scikit-learn decision tree regression tool to predict the price of a whiskey given its age, rating, and ABV value.
We transform the output for plotting purposes, but note that the tooltips give the original data.
End of explanation
"""
%%brunel data('whiskey')
bar x(country) y(#count) interaction(select) color(#selection) |
bar color(category) y(#count) percent(#count) polar stack label(category) legends(none) interaction(filter) tooltip(#count,category)
:: width=900, height=300
"""
Explanation: Simple Linked Charts
Click on a bar to see the proportions of Whiskey categories per country
End of explanation
"""
|
maropu/spark | python/docs/source/getting_started/quickstart.ipynb | apache-2.0 | from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
"""
Explanation: Quickstart
This is a short introduction and quickstart for the PySpark DataFrame API. PySpark DataFrames are lazily evaluated. They are implemented on top of RDDs. When Spark transforms data, it does not immediately compute the transformation but plans how to compute later. When actions such as collect() are explicitly called, the computation starts.
This notebook shows the basic usages of the DataFrame, geared mainly for new users. You can run the latest version of these examples by yourself on a live notebook here.
There is also other useful information on the Apache Spark documentation site; see the latest versions of the Spark SQL and DataFrames guide, the RDD Programming Guide, the Structured Streaming Programming Guide, the Spark Streaming Programming Guide, and the Machine Learning Library (MLlib) Guide.
PySpark applications start with initializing SparkSession which is the entry point of PySpark as below. In case of running it in PySpark shell via <code>pyspark</code> executable, the shell automatically creates the session in the variable <code>spark</code> for users.
End of explanation
"""
from datetime import datetime, date
import pandas as pd
from pyspark.sql import Row
df = spark.createDataFrame([
Row(a=1, b=2., c='string1', d=date(2000, 1, 1), e=datetime(2000, 1, 1, 12, 0)),
Row(a=2, b=3., c='string2', d=date(2000, 2, 1), e=datetime(2000, 1, 2, 12, 0)),
Row(a=4, b=5., c='string3', d=date(2000, 3, 1), e=datetime(2000, 1, 3, 12, 0))
])
df
"""
Explanation: DataFrame Creation
A PySpark DataFrame can be created via pyspark.sql.SparkSession.createDataFrame typically by passing a list of lists, tuples, dictionaries and pyspark.sql.Rows, a pandas DataFrame and an RDD consisting of such a list.
pyspark.sql.SparkSession.createDataFrame takes the schema argument to specify the schema of the DataFrame. When it is omitted, PySpark infers the corresponding schema by taking a sample from the data.
Firstly, you can create a PySpark DataFrame from a list of rows
End of explanation
"""
df = spark.createDataFrame([
(1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
(2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
(3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
], schema='a long, b double, c string, d date, e timestamp')
df
"""
Explanation: Create a PySpark DataFrame with an explicit schema.
End of explanation
"""
pandas_df = pd.DataFrame({
'a': [1, 2, 3],
'b': [2., 3., 4.],
'c': ['string1', 'string2', 'string3'],
'd': [date(2000, 1, 1), date(2000, 2, 1), date(2000, 3, 1)],
'e': [datetime(2000, 1, 1, 12, 0), datetime(2000, 1, 2, 12, 0), datetime(2000, 1, 3, 12, 0)]
})
df = spark.createDataFrame(pandas_df)
df
"""
Explanation: Create a PySpark DataFrame from a pandas DataFrame
End of explanation
"""
rdd = spark.sparkContext.parallelize([
(1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
(2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
(3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
])
df = spark.createDataFrame(rdd, schema=['a', 'b', 'c', 'd', 'e'])
df
"""
Explanation: Create a PySpark DataFrame from an RDD consisting of a list of tuples.
End of explanation
"""
# All DataFrames above yield the same result.
df.show()
df.printSchema()
"""
Explanation: The DataFrames created above all have the same results and schema.
End of explanation
"""
df.show(1)
"""
Explanation: Viewing Data
The top rows of a DataFrame can be displayed using DataFrame.show().
End of explanation
"""
spark.conf.set('spark.sql.repl.eagerEval.enabled', True)
df
"""
Explanation: Alternatively, you can enable spark.sql.repl.eagerEval.enabled configuration for the eager evaluation of PySpark DataFrame in notebooks such as Jupyter. The number of rows to show can be controlled via spark.sql.repl.eagerEval.maxNumRows configuration.
End of explanation
"""
df.show(1, vertical=True)
"""
Explanation: The rows can also be shown vertically. This is useful when rows are too long to show horizontally.
End of explanation
"""
df.columns
df.printSchema()
"""
Explanation: You can see the DataFrame's schema and column names as follows:
End of explanation
"""
df.select("a", "b", "c").describe().show()
"""
Explanation: Show the summary of the DataFrame
End of explanation
"""
df.collect()
"""
Explanation: DataFrame.collect() collects the distributed data to the driver side as local data in Python. Note that this can throw an out-of-memory error when the dataset is too large to fit on the driver, because it collects all the data from the executors to the driver.
End of explanation
"""
df.take(1)
"""
Explanation: In order to avoid throwing an out-of-memory exception, use DataFrame.take() or DataFrame.tail().
End of explanation
"""
df.toPandas()
"""
Explanation: PySpark DataFrame also provides the conversion back to a pandas DataFrame to leverage the pandas API. Note that toPandas also collects all data into the driver side, which can easily cause an out-of-memory error when the data is too large to fit into the driver side.
End of explanation
"""
df.a
"""
Explanation: Selecting and Accessing Data
PySpark DataFrame is lazily evaluated and simply selecting a column does not trigger the computation but it returns a Column instance.
End of explanation
"""
from pyspark.sql import Column
from pyspark.sql.functions import upper
type(df.c) == type(upper(df.c)) == type(df.c.isNull())
"""
Explanation: In fact, most of column-wise operations return Columns.
End of explanation
"""
df.select(df.c).show()
"""
Explanation: These Columns can be used to select the columns from a DataFrame. For example, DataFrame.select() takes the Column instances that returns another DataFrame.
End of explanation
"""
df.withColumn('upper_c', upper(df.c)).show()
"""
Explanation: Assign a new Column instance.
End of explanation
"""
df.filter(df.a == 1).show()
"""
Explanation: To select a subset of rows, use DataFrame.filter().
End of explanation
"""
import pandas
from pyspark.sql.functions import pandas_udf
@pandas_udf('long')
def pandas_plus_one(series: pd.Series) -> pd.Series:
    # Simply plus one by using pandas Series.
    return series + 1
df.select(pandas_plus_one(df.a)).show()
"""
Explanation: Applying a Function
PySpark supports various UDFs and APIs to allow users to execute Python native functions. See also the latest Pandas UDFs and Pandas Function APIs. For instance, the example below allows users to directly use the APIs in a pandas Series within Python native function.
End of explanation
"""
def pandas_filter_func(iterator):
    for pandas_df in iterator:
        yield pandas_df[pandas_df.a == 1]
df.mapInPandas(pandas_filter_func, schema=df.schema).show()
"""
Explanation: Another example is DataFrame.mapInPandas, which allows users to directly use the APIs of a pandas DataFrame without any restrictions such as the result length.
End of explanation
"""
df = spark.createDataFrame([
['red', 'banana', 1, 10], ['blue', 'banana', 2, 20], ['red', 'carrot', 3, 30],
['blue', 'grape', 4, 40], ['red', 'carrot', 5, 50], ['black', 'carrot', 6, 60],
['red', 'banana', 7, 70], ['red', 'grape', 8, 80]], schema=['color', 'fruit', 'v1', 'v2'])
df.show()
"""
Explanation: Grouping Data
PySpark DataFrame also provides a way of handling grouped data by using the common approach, the split-apply-combine strategy.
It groups the data by a certain condition, applies a function to each group, and then combines the results back into a DataFrame.
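As an aside, split-apply-combine is the same pattern that pandas' groupby implements; a minimal pandas-only sketch (toy data made up for the illustration, not part of the PySpark API):

```python
import pandas as pd

# split by 'color', apply the mean to each group, combine the results
pdf = pd.DataFrame({'color': ['red', 'blue', 'red', 'blue'],
                    'v1':    [1, 2, 3, 4]})
means = pdf.groupby('color')['v1'].mean()
print(means['red'], means['blue'])  # 2.0 3.0
```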
End of explanation
"""
df.groupby('color').avg().show()
"""
Explanation: Grouping and then applying the avg() function to the resulting groups.
End of explanation
"""
def plus_mean(pandas_df):
    return pandas_df.assign(v1=pandas_df.v1 - pandas_df.v1.mean())
df.groupby('color').applyInPandas(plus_mean, schema=df.schema).show()
"""
Explanation: You can also apply a Python native function against each group by using pandas API.
End of explanation
"""
df1 = spark.createDataFrame(
[(20000101, 1, 1.0), (20000101, 2, 2.0), (20000102, 1, 3.0), (20000102, 2, 4.0)],
('time', 'id', 'v1'))
df2 = spark.createDataFrame(
[(20000101, 1, 'x'), (20000101, 2, 'y')],
('time', 'id', 'v2'))
def asof_join(l, r):
    return pd.merge_asof(l, r, on='time', by='id')
df1.groupby('id').cogroup(df2.groupby('id')).applyInPandas(
asof_join, schema='time int, id int, v1 double, v2 string').show()
"""
Explanation: Co-grouping and applying a function.
End of explanation
"""
df.write.csv('foo.csv', header=True)
spark.read.csv('foo.csv', header=True).show()
"""
Explanation: Getting Data in/out
CSV is straightforward and easy to use. Parquet and ORC are efficient and compact file formats that are faster to read and write.
There are many other data sources available in PySpark such as JDBC, text, binaryFile, Avro, etc. See also the latest Spark SQL, DataFrames and Datasets Guide in Apache Spark documentation.
CSV
End of explanation
"""
df.write.parquet('bar.parquet')
spark.read.parquet('bar.parquet').show()
"""
Explanation: Parquet
End of explanation
"""
df.write.orc('zoo.orc')
spark.read.orc('zoo.orc').show()
"""
Explanation: ORC
End of explanation
"""
df.createOrReplaceTempView("tableA")
spark.sql("SELECT count(*) from tableA").show()
"""
Explanation: Working with SQL
DataFrame and Spark SQL share the same execution engine, so they can be used interchangeably and seamlessly. For example, you can register the DataFrame as a table and run SQL queries easily, as below:
End of explanation
"""
@pandas_udf("integer")
def add_one(s: pd.Series) -> pd.Series:
    return s + 1
spark.udf.register("add_one", add_one)
spark.sql("SELECT add_one(v1) FROM tableA").show()
"""
Explanation: In addition, UDFs can be registered and invoked in SQL out of the box:
End of explanation
"""
from pyspark.sql.functions import expr
df.selectExpr('add_one(v1)').show()
df.select(expr('count(*)') > 0).show()
"""
Explanation: These SQL expressions can directly be mixed and used as PySpark columns.
End of explanation
"""
|
kit-cel/lecture-examples | ccgbc/ch2_Codes_Basic_Concepts/Block_Code_Decoding_Performance.ipynb | gpl-2.0 | import numpy as np
import numpy.polynomial.polynomial as npp
from scipy.stats import norm
from scipy.special import comb
import matplotlib.pyplot as plt
"""
Explanation: Bounded Distance Decoding Performance of a Linear Block Code
This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.
This code illustrates
* The decoding performance of codes under BDD decoding
* Comparison with the Bhattacharyya-parameter based bound on ML decoding
End of explanation
"""
def Plw(n,l,w,delta):
    return np.sum([comb(w,l-r)*comb(n-w,r)*(delta**(w-l+2*r))*((1-delta)**(n-w+l-2*r)) for r in range(l+1)])
"""
Explanation: Implement the helper function which computes the probability $P_\ell^w$ that a received word $\boldsymbol{y}$ is exactly at Hamming distance $\ell$ from a codeword of weight $w$ after transmission of the zero codeword over a BSC with error probability $\delta$, with
$$
P_{\ell}^w = \sum_{r=0}^{\ell}\binom{w}{\ell-r}\binom{n-w}{r}\delta^{w-\ell+2r}(1-\delta)^{n-w+\ell-2r}
$$
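As an illustrative sanity check (not part of the lab; the toy parameters below are made up, and the standard-library math.comb is used in place of scipy's comb):

```python
import numpy as np
from math import comb

# toy case: n=5, codeword weight w=2, distance l=1, BSC error prob. delta=0.3
n, w, l, delta = 5, 2, 1, 0.3

# closed form, the same expression as Plw above
p_exact = sum(comb(w, l-r)*comb(n-w, r)*delta**(w-l+2*r)*(1-delta)**(n-w+l-2*r)
              for r in range(l+1))

# Monte Carlo: send the all-zero word through the BSC and count how often the
# output lands at Hamming distance l from a fixed weight-w codeword
rng = np.random.default_rng(0)
c = np.array([1, 1, 0, 0, 0])
y = (rng.random((200000, n)) < delta).astype(int)
p_mc = np.mean(np.sum(y != c, axis=1) == l)
print(round(p_exact, 5))  # 0.18375
```

The empirical estimate agrees with the closed form to within Monte-Carlo noise.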
End of explanation
"""
# weight enumerator polynomial
Aw = [0,0,0,4,6,8,8,4,1]
n = 10
dmin = np.nonzero(Aw)[0][0]
e = int(np.floor((dmin-1)/2))
delta_range = np.logspace(-6,-0.31,100)
Pcw_range = [np.sum([Aw[w]*np.sum([Plw(n,l,w,delta) for l in range(e+1)]) for w in range(len(Aw))]) for delta in delta_range]
Pcw_bound_range = [np.sum([comb(n,w)*((delta)**w)*((1-delta)**(n-w)) for w in range(e+1,n+1)]) for delta in delta_range]
P_F_range = np.array([1-np.sum([comb(n,w)*((delta)**w)*((1-delta)**(n-w)) for w in range(e+1)]) for delta in delta_range]) - np.array(Pcw_range)
# compute bound for ML decoding
Bhattacharyya_range = [2*np.sqrt(delta*(1-delta)) for delta in delta_range]
P_ML_bound_range = [npp.polyval(B, Aw) for B in Bhattacharyya_range]
fig = plt.figure(1,figsize=(12,7))
plt.loglog(delta_range, Pcw_range,'b-')
plt.loglog(delta_range, Pcw_bound_range,'g-')
plt.loglog(delta_range, P_F_range,'r-')
plt.loglog(delta_range, P_ML_bound_range,'k--')
plt.xlim((1,1e-6))
plt.ylim((1e-12,1))
plt.xlabel('BSC error rate $\delta$', fontsize=16)
plt.ylabel('Error rate', fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
plt.legend(['$P_{cw}$','Upper bound on $P_{cw}$', '$P_F$', 'Upper bound on $P_{\mathrm{ML}}$'], fontsize=14);
"""
Explanation: Show performance and some bounds illustrating the decoding performance over the BSC of a binary linear block code with generator matrix
$$
\boldsymbol{G} = \left(\begin{array}{cccccccccc}
1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 \
0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 \
0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \
0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 \
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1
\end{array} \right)
$$
that has weight enumerator polynomial $A(W) = 4W^3+6W^4+8W^5+8W^6+4W^7+W^8$.
We compute the following:
1. The performance of bounded distance decoding (BDD) given by
$$
P_{cw} = \sum_{w=1}^{n}A_w \sum_{\ell=0}^{\lfloor\frac{d_{\min}-1}{2}\rfloor}P_{\ell}^w
$$
2. An easier to compute upper bound on the performance of BDD, which is given by
$$
P_{cw} \leq \sum_{w=\lfloor\frac{d_{\min}-1}{2}\rfloor+1}^{n}\binom{n}{w}\delta^{w}(1-\delta)^{n-w}
$$
3. The failure probability of BDD (i.e., the probability that the decoder cannot find a valid codeword, which is the probability that we fall outside a sphere of radius $\lfloor\frac{d_{\min}-1}{2}\rfloor$ around the codewords)
$$
P_F = 1 - \underbrace{\sum_{j=0}^{\lfloor\frac{d_{\min}-1}{2}\rfloor}\binom{n}{j}\delta^j(1-\delta)^{n-j}}_{\text{prob. of falling into correct sphere}} - \underbrace{P_{cw}}_{\substack{\text{prob. of}\\\text{falling into}\\\text{incorrect sphere}}}
$$
4. An upper bound on ML decoding given by
$$
P_{\mathrm{ML}} \leq A(\mathcal{B}(\mathbb{C}_{\text{BSC}})) = \sum_{w=d_{\min}}^n A_w\left(2\sqrt{\delta(1-\delta)}\right)^w
$$
where $\mathcal{B}(\mathbb{C}_{\text{BSC}}) = 2\sqrt{\delta(1-\delta)}$ is the Bhattacharyya parameter of the BSC. We can see that the ML bound is rather loose in this case, as BDD easily outperforms it.
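As a consistency check (an illustrative sketch, not part of the original notebook), the weight enumerator above can be recomputed by listing all $2^5$ codewords of $\boldsymbol{G}$:

```python
import itertools
import numpy as np

# generator matrix from the text
G = np.array([[1,0,0,0,0,1,0,1,1,1],
              [0,1,0,0,0,1,1,0,1,1],
              [0,0,1,0,0,1,1,1,0,1],
              [0,0,0,1,0,1,1,1,1,0],
              [0,0,0,0,1,1,1,1,1,1]])
A = [0]*11                                   # A[w] = number of codewords of weight w
for m in itertools.product([0, 1], repeat=5):
    cw = np.mod(np.array(m) @ G, 2)          # codeword m*G over GF(2)
    A[int(cw.sum())] += 1
print(A[:9])  # [1, 0, 0, 4, 6, 8, 8, 4, 1], i.e. A(W) plus the zero codeword
```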
End of explanation
"""
|
tcstewar/testing_notebooks | Function Space description.ipynb | gpl-2.0 | domain = np.linspace(-1, 1, 2000)
def gaussian(mag, mean, sd):
    return mag * np.exp(-(domain-mean)**2/(2*sd**2))
pylab.plot(domain, gaussian(mag=1, mean=0, sd=0.1))
pylab.show()
"""
Explanation: Function Spaces in Nengo
Here are proposed new utilities to add to Nengo to support function space representations.
The basic idea of a function space is that instead of representing vectors, we represent full functions (or even vector fields).
What is a function?
Really, it's just a set of points. For example, here's a gaussian evaluated at a set of points:
End of explanation
"""
space = []
for i in range(100):
    space.append(gaussian(mag=np.random.uniform(0,1),
                          mean=np.random.uniform(-1,1),
                          sd=np.random.uniform(0.1, 0.3)))
pylab.plot(domain, np.array(space).T)
pylab.show()
"""
Explanation: To define a function space, we need a bunch of different functions: for example, gaussians at different means and standard deviations and magnitudes. We could generate this by randomly generating it ourselves:
End of explanation
"""
space = nengo.dists.Function(gaussian,
mag=nengo.dists.Uniform(0, 1),
mean=nengo.dists.Uniform(-1, 1),
sd=nengo.dists.Uniform(0.1, 0.3))
data = space.sample(100)
pylab.plot(domain, data.T)
pylab.show()
"""
Explanation: Instead of generating this ourselves manually, this can be done with a new nengo.dists.Function class. This is a subclass of nengo.dists.Distribution (just like nengo.dists.Uniform).
End of explanation
"""
model = nengo.Network()
with model:
    stim = nengo.Node(gaussian(mag=1, mean=0.5, sd=0.1))
    ens = nengo.Ensemble(n_neurons=200, dimensions=2000,
                         encoders=space,
                         eval_points=space,
                         )
    nengo.Connection(stim, ens)
    probe_func = nengo.Probe(ens, synapse=0.03)
sim = nengo.Simulator(model)
sim.run(0.2)
pylab.plot(domain, sim.data[probe_func][-1])
pylab.figure()
pylab.imshow(sim.data[probe_func], extent=(-1,1,0.2,0), aspect=10.0)
pylab.ylabel('time')
pylab.show()
"""
Explanation: The naive way of putting this into Nengo
Well, if a function is just a bunch of points, we could just directly use this sort of distribution to define encoders and sample points.
End of explanation
"""
model = nengo.Network()
with model:
    stim = nengo.Node(gaussian(mag=1, mean=0.5, sd=0.2))
    ens = nengo.Ensemble(n_neurons=100, dimensions=2000,
                         encoders=space,
                         eval_points=space)
    nengo.Connection(stim, ens)
    peak = nengo.Ensemble(n_neurons=50, dimensions=1)
    def find_peak(x):
        return domain[np.argmax(x)]
    nengo.Connection(ens, peak, function=find_peak)
    probe_peak = nengo.Probe(peak, synapse=0.03)
sim = nengo.Simulator(model)
sim.run(0.2)
pylab.plot(sim.trange(), sim.data[probe_peak])
pylab.show()
"""
Explanation: We can even compute functions on this function space. Here is an example trying to find the peak of the represented function.
End of explanation
"""
fs = nengo.FunctionSpace(
nengo.dists.Function(gaussian,
mag=nengo.dists.Uniform(0, 1),
mean=nengo.dists.Uniform(-1, 1),
sd=nengo.dists.Uniform(0.1, 0.3)),
n_basis=20)
"""
Explanation: The problem with the naive approach
It's fine and correct, but it's slow. Why? Well, think about what a Connection is doing now. In a normal Nengo connection, we have N neurons and D dimensions, and we get this huge speedup because instead of an NxN connection weight matrix, we have an NxD decoder matrix and a DxN encoder matrix.
But in this situation, D starts being bigger than N! In theory, we can let D go to infinity, and all of a sudden all of our nice encoder and decoder matrices explode up to ridiculous sizes.
Exploiting redundancy
So, what do we do instead? Well, let's take advantage of the fact that those functions we are representing have tons of redundancy. What we're going to do is to find a basis space for those functions such that we only have to represent a smaller number of dimensions. This is done via SVD and taken care of by the new nengo.FunctionSpace object.
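A numpy-only sketch of the idea (it assumes nothing about Nengo's internals and redefines the gaussian family locally with made-up names): build an orthonormal basis from sampled functions with the SVD, then project into and reconstruct from the reduced space.

```python
import numpy as np

rng = np.random.default_rng(1)
dom = np.linspace(-1, 1, 2000)

def gauss(mag, mean, sd):
    return mag * np.exp(-(dom - mean)**2 / (2 * sd**2))

# sample the function space
samples = np.array([gauss(rng.uniform(0, 1),
                          rng.uniform(-1, 1),
                          rng.uniform(0.1, 0.3)) for _ in range(500)])

# the right singular vectors give an orthonormal basis; keep the first 20
n_basis = 20
_, _, Vt = np.linalg.svd(samples, full_matrices=False)
basis = Vt[:n_basis].T                 # shape (2000, n_basis)

f = gauss(1.0, 0.5, 0.2)               # a new function from the same family
f_proj = f @ basis                     # "project": 2000 values -> 20 numbers
f_rec = basis @ f_proj                 # "reconstruct": 20 numbers -> 2000 values
rel_err = np.linalg.norm(f - f_rec) / np.linalg.norm(f)
print(f_proj.shape)
```

The relative reconstruction error stays small because the gaussian family is highly redundant, which is exactly the property the compression exploits.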
End of explanation
"""
model = nengo.Network()
with model:
    stim = nengo.Node(fs.project(gaussian(mag=1, mean=0.5, sd=0.1)))
    ens = nengo.Ensemble(n_neurons=100, dimensions=fs.n_basis,
                         encoders=fs.project(space),
                         eval_points=fs.project(space))
    nengo.Connection(stim, ens)
    probe_func = nengo.Probe(ens, synapse=0.03)
sim = nengo.Simulator(model)
sim.run(0.2)
pylab.plot(domain, fs.reconstruct(sim.data[probe_func][-1]))
pylab.figure()
pylab.imshow(fs.reconstruct(sim.data[probe_func]), extent=(-1,1,0.2,0), aspect=10.0)
pylab.ylabel('time')
pylab.show()
"""
Explanation: This will make a compressed function space for the given function. The n_basis parameter indicates that we only want 20 dimensions in the internal representation. Now we can use this to define our nengo model.
We use the functions fs.project and fs.reconstruct to convert into and out of this compressed space. Importantly, fs.project can take a Distribution (including a Function, as above) or an array. If it is given an array, it does the transformation. If it is given a Distribution, then it returns a new Distribution that samples from the original distribution and does the projection when needed. This means we can use it to set encoders and decoders!
End of explanation
"""
model = nengo.Network()
with model:
stim = nengo.Node(fs.project(gaussian(mag=1, mean=0.5, sd=0.2)))
ens = nengo.Ensemble(n_neurons=100, dimensions=fs.n_basis,
encoders=fs.project(space),
eval_points=fs.project(space))
nengo.Connection(stim, ens)
peak = nengo.Ensemble(n_neurons=50, dimensions=1)
def find_peak(x):
return domain[np.argmax(fs.reconstruct(x))]
nengo.Connection(ens, peak, function=find_peak)
probe_peak = nengo.Probe(peak, synapse=0.03)
sim = nengo.Simulator(model)
sim.run(0.2)
pylab.plot(sim.trange(), sim.data[probe_peak])
pylab.show()
"""
Explanation: This is now much faster, since all the matrices are reasonably sized now. The cost is that you have to use fs.project() and fs.reconstruct() to go in and out of the compressed function space.
You can also still compute functions
End of explanation
"""
model = nengo.Network()
with model:
stim = nengo.Node(gaussian(mag=1, mean=0.5, sd=0.1))
ens = nengo.Ensemble(n_neurons=100, dimensions=2000,
encoders=space,
eval_points=space,
radius=radius
)
nengo.Connection(stim, ens)
probe_func = nengo.Probe(ens, synapse=0.03)
probe_spikes = nengo.Probe(ens.neurons)
sim = nengo.Simulator(model)
sim.run(0.2)
pylab.imshow(sim.data[probe_func], extent=(-1,1,0.2,0), aspect=10.0)
pylab.ylabel('time')
pylab.figure()
pylab.hist(np.mean(sim.data[probe_spikes], axis=0))
pylab.xlabel('Hz')
pylab.show()
radius = np.mean(np.linalg.norm(space.sample(10), axis=1))
model = nengo.Network()
with model:
stim = nengo.Node(gaussian(mag=1, mean=0.5, sd=0.1))
ens = nengo.Ensemble(n_neurons=100, dimensions=2000,
encoders=space,
eval_points=space.sample(5000)/radius,
radius=radius
)
nengo.Connection(stim, ens)
probe_func = nengo.Probe(ens, synapse=0.03)
probe_spikes = nengo.Probe(ens.neurons)
sim = nengo.Simulator(model)
sim.run(0.2)
pylab.imshow(sim.data[probe_func], extent=(-1,1,0.2,0), aspect=10.0)
pylab.ylabel('time')
pylab.figure()
pylab.hist(np.mean(sim.data[probe_spikes], axis=0))
pylab.xlabel('Hz')
pylab.show()
"""
Explanation: Radius issues
Looking at the represented function, it looks like it works even better than before (less noise)! Why?
Mostly because in the initial naive version we forgot to worry about the radius. The encoders that we specified in the naive approach got automatically scaled to unit length, and the vectors we were feeding in for sample points were much larger than unit length. So most of the neurons are spending a lot of their time very saturated (since the gain and bias calculations don't know about the changed radius). We could have rescaled the eval_points and put in a radius to make it equivalent.
(Note: this would be slightly easier if Ensembles had a scale_eval_points parameter, like Connections do)
End of explanation
"""
fs = nengo.FunctionSpace(
nengo.dists.Function(gaussian,
mag=nengo.dists.Uniform(0, 1),
mean=nengo.dists.Uniform(-1, 1),
sd=nengo.dists.Uniform(0.1, 0.3)),
n_basis=20)
model = nengo.Network()
with model:
stim = nengo.Node(fs.project(gaussian(mag=1, mean=0.5, sd=0.1)))
ens = nengo.Ensemble(n_neurons=500, dimensions=fs.n_basis)
ens.encoders = fs.project(
nengo.dists.Function(gaussian,
mean=nengo.dists.Uniform(-1, 1),
sd=0.1,
mag=1))
ens.eval_points = fs.project(fs.space)
nengo.Connection(stim, ens)
probe_func = nengo.Probe(ens, synapse=0.03)
sim = nengo.Simulator(model)
sim.run(0.2)
pylab.plot(domain, fs.reconstruct(sim.data[probe_func][-1]))
pylab.figure()
pylab.imshow(fs.reconstruct(sim.data[probe_func]), extent=(-1,1,0.2,0), aspect=10.0)
pylab.ylabel('time')
pylab.show()
"""
Explanation: Multiple Distributions
It should be noted that there are three different distributions in play here. The distribution used to define the space, the distribution for the encoders, and the distribution for the evaluation points. These can be all different. I believe we will often want to have the distribution used by the space to be the same as the eval_points, but the encoders are generally pretty different. For example, I might have encoders that are all gaussians of the same height and width.
End of explanation
"""
model = nengo.Network()
with model:
ens = nengo.Ensemble(n_neurons=500, dimensions=fs.n_basis)
ens.encoders = fs.project(fs.space)
ens.eval_points = fs.project(fs.space)
# input
stim = fs.make_stimulus_node(gaussian, 3)
nengo.Connection(stim, ens)
stim_control = nengo.Node([1, 0, 0.2])
nengo.Connection(stim_control, stim)
#output
plot = fs.make_plot_node(domain=domain)
nengo.Connection(ens, plot, synapse=0.1)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, cfg='funcspace.cfg')
"""
Explanation: Nengo GUI integration
There are two useful tools that help for doing stuff with nengo_gui. For plotting the function, there is an fs.make_plot_node function. For input, there is an fs.make_stimulus_node that gives you a node that'll let you control the variables passed in to the underlying function. For example:
End of explanation
"""
model = nengo.Network()
with model:
ens = nengo.Ensemble(n_neurons=500, dimensions=fs.n_basis)
ens.encoders = fs.project(space)
ens.eval_points = fs.project(fs.space)
# input
stim = fs.make_stimulus_node(gaussian, 3)
nengo.Connection(stim, ens)
stim_control = nengo.Node([1, 0, 0.2])
nengo.Connection(stim_control, stim)
#output
plot = fs.make_plot_node(domain=domain, n_pts=50, lines=2)
nengo.Connection(ens, plot[:fs.n_basis], synapse=0.1)
nengo.Connection(stim, plot[fs.n_basis:], synapse=0.1)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, cfg='funcspace2.cfg')
"""
Explanation: The output display supports multiple plots overlaid on top of each other, and has a configurable number of points drawn
End of explanation
"""
domain = np.random.uniform(-1, 1, size=(1000, 2))
def gaussian2d(meanx, meany, sd):
return np.exp(-((domain[:,0]-meanx)**2+(domain[:,1]-meany)**2)/(2*sd**2))
fs = nengo.FunctionSpace(
nengo.dists.Function(gaussian2d,
meanx=nengo.dists.Uniform(-1, 1),
meany=nengo.dists.Uniform(-1, 1),
sd=nengo.dists.Uniform(0.1, 0.7)),
n_basis=50)
model = nengo.Network()
with model:
ens = nengo.Ensemble(n_neurons=500, dimensions=fs.n_basis)
ens.encoders = fs.project(fs.space)
ens.eval_points = fs.project(fs.space)
stimulus = nengo.Node(fs.project(gaussian2d(0,0,0.2)))
nengo.Connection(stimulus, ens)
probe = nengo.Probe(ens, synapse=0.01)
sim = nengo.Simulator(model)
sim.run(0.2)
from mpl_toolkits.mplot3d import Axes3D
fig = pylab.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(domain[:,0], domain[:,1], fs.reconstruct(sim.data[probe][-1]))
pylab.show()
"""
Explanation: Multidimensional representations
There is nothing restricting this code to 1 dimensional functions. For example, here is a 2D bump:
End of explanation
"""
domain = np.linspace(-1, 1, 2000)
def gaussian(mag, mean, sd):
return mag * np.exp(-(domain-mean)**2/(2*sd**2))
fs = nengo.FunctionSpace(
nengo.dists.Function(gaussian,
mag=nengo.dists.Uniform(0, 1),
mean=nengo.dists.Uniform(-1, 1),
sd=nengo.dists.Uniform(0.1, 0.3)),
n_basis=40)
model = nengo.Network()
with model:
ens = nengo.Ensemble(n_neurons=2000, dimensions=fs.n_basis+1)
ens.encoders = nengo.dists.Combined(
[fs.project(fs.space), nengo.dists.UniformHypersphere(surface=True)],
[fs.n_basis,1])
ens.eval_points = nengo.dists.Combined(
[fs.project(fs.space), nengo.dists.UniformHypersphere(surface=False)],
[fs.n_basis,1])
stim_shift = nengo.Node([0])
nengo.Connection(stim_shift, ens[-1])
# input
stim = fs.make_stimulus_node(gaussian, 3)
nengo.Connection(stim, ens[:-1])
stim_control = nengo.Node([1, 0, 0.13])
nengo.Connection(stim_control, stim)
#output
plot = fs.make_plot_node(domain=domain)
def shift_func(x):
shift = int(x[-1]*500)
return fs.project(np.roll(fs.reconstruct(x[:-1]), shift))
nengo.Connection(ens, plot, synapse=0.1, function=shift_func)
from nengo_gui.ipython import IPythonViz
IPythonViz(model, cfg='funcspace3.cfg')
"""
Explanation: There is no nengo_gui support for displaying anything other than 1d functions (yet).
Also, notice that you could represent two 1-D functions just by concatenating their outputs. As long as the function passed in to nengo.dists.Function just returns a vector, everything should work.
Combining functions and scalars
Sometimes, we might want to do both functions and normal NEF vector representations in the same population. For example, here I want to represent both a function and a shift that should be applied to that function.
To do this, we need a way of specifying the encoders and eval_points of the combined representation. This is done with a new nengo.dists.Combined which takes a list of distributions and concatenates them together. When doing this concatenation, we need to know how many dimensions are used for each. So, if I have 20-D done with a function space and 2-D done with a normal representations, I'd do
nengo.dists.Combined(
[fs.project(fs.space), nengo.dists.UniformHypersphere(surface=False)],
[fs.n_basis,2])
I'm not all that happy with this syntax yet, and I think there's some tweaking still needed in terms of the correct scaling that should be applied, but it mostly works:
End of explanation
"""
|
ameliecordier/iutdoua-info_algo2015 | 2015-12-10 - TD17 - Tableaux et tris, trace et complexité.ipynb | cc0-1.0 | # Exemple = [2, 3, 4, 6, 7, 0, 0, 0, 0]
def decalageADroite(tab, i, derniereCase):
for a in range (derniereCase,i-1,-1):
print(a)
tab[a+1]=tab[a]
return tab
# Je suppose que je veux insérer "5" dans la case 3, et que je sais que mon tableau se finit dans la case 4
print(decalageADroite(exemple, 3, 4))
print(exemple)
# For those who would like to use real arrays in Python
from numpy import array, empty
# Creating an array from a list of values:
a = array([2, 4, 6, 0, 2, 8])
# Creating an uninitialized array:
a = empty(10, float)  # array size, element type
%load_ext doctestmagic
"""
Explanation: Exercise 1: Write a function that inserts an element into an array by shifting all of its successors to the right. As a first step, for simplicity, we assume that the array is large enough and that the cells on the right contain 0s.
End of explanation
"""
# Iterative solution
def triSelection (tab):
    '''
    in/out: tab is an array of integers
    pre-cond: tab is non-empty
    post-cond: tab is sorted in increasing order
    '''
    for b in range(len(tab)):
        # find the index of the minimum of tab[b:]
        indice = b
        for j in range(b + 1, len(tab)):
            if tab[j] < tab[indice]:
                indice = j
        # swap the minimum into cell b
        c = tab[b]
        tab[b] = tab[indice]
        tab[indice] = c
    return tab
print(triSelection([5,2,3,4]))
%doctest triSelection
"""
Explanation: Exercise 2: propose an iterative solution and a recursive solution to the selection-sort (repeatedly select the minimum) problem.
End of explanation
"""
def triSelectionR (tab,i):
'''
    in/out: tab is an array of integers
    pre-cond: tab is non-empty
    post-cond: tab is sorted in increasing order
'''
imin=i
if i!=len(tab)-1:
for j in range(i,len(tab)):
if tab[j]<tab[imin]:
imin=j
s=tab[i]
tab[i]=tab[imin]
tab[imin]=s
return triSelectionR(tab,i+1)
else:
return tab
print(triSelectionR([3,4,2,6,1],0))
# Which sort is this?
def tri(tab):
'''
    :in/out tab: an array of orderable values
    :pre-cond: none
    :post-cond: the array is modified in place and sorted in increasing order
>>> tri([1,5,3,2,4])
[1, 2, 3, 4, 5]
>>> tri(["pomme", "poire", "banane"])
['banane', 'poire', 'pomme']
'''
for i in range (len(tab)-1):
for j in range (i+1, len(tab)):
if tab[j]<tab[i]:
a=tab[j]
tab[j]=tab[i]
tab[i]=a
return tab
"""
Explanation: Solution récursive :
http://pythontutor.com/visualize.html#code=def+triSelectifRecursif(tableau,indice%29%3A%0A%09indiceMin%3Dindice%0A%09if+indice!%3Dlen(tableau%29%3A%0A%09%09for+i+in+range(indice,len(tableau%29%29%3A%0A%09%09%09if+tableau%5Bi%5D%3Ctableau%5BindiceMin%5D%3A%0A%09%09%09%09indiceMin%3Di%0A%09%09temp%3Dtableau%5Bindice%5D%0A%09%09tableau%5Bindice%5D%3Dtableau%5BindiceMin%5D%0A%09%09tableau%5BindiceMin%5D%3Dtemp%0A%09%09triSelectifRecursif(tableau,indice%2B1%29%0A%09return+tableau%5Bindice%3A%5D%0AtriSelectifRecursif(%5B2,1,5,4%5D,0%29&mode=display&origin=opt-frontend.js&cumulative=false&heapPrimitives=false&textReferences=false&py=3&rawInputLstJSON=%5B%5D&curInstr=68
End of explanation
"""
"""
:input n: integer
:pre-cond: n ≥ 0
:output r: integer
:post-cond: r is the integer part of the square root of n
"""
## example inputs
n = 91
##
r = 0
while r*r <= n:
r = r+1
r = r-1
## to see the output
print(r)
##
"""
Explanation: Reminders on the execution trace of an algorithm and on complexity
(thanks to Pierre-Antoine Champin for this section)
The execution trace of an algorithm is built
by taking a "snapshot" of all of that algorithm's variables
at the following moments:
at the start
at each while
at the end
The trace is a "report" of the algorithm's execution.
Give the execution trace of the following algorithm for the following values of n: 91, 100, 500.
End of explanation
"""
## example inputs
n = 91
##
min = 0
max = n
while max-min > 1:
moy = (max+min)//2
if moy*moy <= n:
min = moy
else:
max = moy
r = min
## to see the output
print(r)
##
"""
Explanation: One can easily convince oneself that the length of the trace will always be equal to r+4. Indeed:
the final value of r corresponds to the number of times we entered the loop, minus 1 (because of line 18).
The size of the trace here is equal
to the number of times we entered the loop,
plus 1 for the pass through line 13 that exits the loop,
plus 1 for the starting snapshot,
plus 1 for the snapshot at the end,
i.e. (number of passes through the loop) + 3, i.e. r + 4.
But what interests us is predicting the size of the trace as a function of the input parameters (the "size" of the problem).
In this case, since r is the integer part of √n, we can state that the length of the trace is floor(√n)+4, which we can simplify by saying that it is proportional to √n.
Complexity
The complexity of an algorithm is the measure of the length of its execution traces as a function of its input parameters.
It is not the exact length of the trace that interests us here, but its order of magnitude (as in the example above). That is why we use the notation 𝓞(...), which serves precisely to represent orders of magnitude.
The length of the execution trace is related to the time that execution takes. Although we cannot predict this time precisely (it depends on parameters external to the algorithm, such as the power of the computer), it is interesting to know its order of magnitude, and the way the input parameters influence this time.
The algorithm above computes the integer part of √n in a time proportional to √n. We say that it has "a running time in 𝓞(√n)".
We can do better with the algorithm below:
End of explanation
"""
%pylab inline
xs = range(1,300)
plot(xs, sqrt(xs), "r-", label=" √n")
plot(xs, log2(xs), "g-", label="log2(n)")
legend(loc="lower right")
"""
Explanation: The algorithm above applies a dichotomic (binary) search:
it uses the facts that
* the square root of n is necessarily between 0 and n, and that
* the square roots of two numbers are in the same order as the numbers themselves.
We therefore start from the interval [0,n] and cut it in half at each step, until the interval is reduced to a width of 1.
The number of steps (and hence the length of the trace) is proportional to the number of times n can be divided by 2, that is, the base-2 logarithm of n: 𝓞(log₂(n)).
The curves below should convince us that this algorithm is much more efficient than the previous one for large values of n.
End of explanation
"""
"""
:input x: float
:input erreur: float
:pre-cond: x ≥ 0
:output r: float
:post-cond: r is the square root of 'x' to within 'erreur'
"""
## example inputs
x = 500
erreur = 0.001
##
# ALTERNATIVE SOLUTION #
min = 0
max = x
while max-min > erreur:
moy = (max+min)/2
if moy*moy <= x:
min = moy
else:
max = moy
r = min
## to see the output
print(r)
# and to check it
print(r*r)
##
"""
Explanation: Computing the square root
The dichotomic search in the algorithm above stops when the interval has a width of 1. But if we work with floating-point numbers, we can decide to shrink the interval even further.
We therefore define a new algorithm, this time taking two input parameters:
* x, the floating-point number whose square root we want to compute,
* erreur, the maximum error we accept on the result
End of explanation
"""
|
Alexoner/mooc | cs231n/assignment3/q2.ipynb | apache-2.0 | # A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from time import time
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: TinyImageNet and Ensembles
So far, we have only worked with the CIFAR-10 dataset. In this exercise we will introduce the TinyImageNet dataset. You will combine several pretrained models into an ensemble, and show that the ensemble performs better than any individual model.
End of explanation
"""
from cs231n.data_utils import load_tiny_imagenet
tiny_imagenet_a = 'cs231n/datasets/tiny-imagenet-100-A'
class_names, X_train, y_train, X_val, y_val, X_test, y_test = load_tiny_imagenet(tiny_imagenet_a)
# Zero-mean the data
mean_img = np.mean(X_train, axis=0)
X_train -= mean_img
X_val -= mean_img
X_test -= mean_img
"""
Explanation: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_splits.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE: The full TinyImageNet dataset will take up about 490MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.
End of explanation
"""
for names in class_names:
print ' '.join('"%s"' % name for name in names)
"""
Explanation: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:
End of explanation
"""
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(class_names), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(y_train == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = X_train[train_idx] + mean_img
img = img.transpose(1, 2, 0).astype('uint8')
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(class_names[class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
"""
Explanation: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
End of explanation
"""
mode = 'train'
name_to_label = {n.lower(): i for i, ns in enumerate(class_names) for n in ns}
if mode == 'train':
X, y = X_train, y_train
elif mode == 'val':
X, y = X_val, y_val
num_correct = 0
num_images = 10
for i in xrange(num_images):
idx = np.random.randint(X.shape[0])
img = (X[idx] + mean_img).transpose(1, 2, 0).astype('uint8')
plt.imshow(img)
plt.gca().axis('off')
plt.gcf().set_size_inches((2, 2))
plt.show()
got_name = False
while not got_name:
name = raw_input('Guess the class for the above image (%d / %d) : ' % (i + 1, num_images))
name = name.lower()
got_name = name in name_to_label
if not got_name:
print 'That is not a valid class name; try again'
guess = name_to_label[name]
if guess == y[idx]:
num_correct += 1
print 'Correct!'
else:
print 'Incorrect; it was actually %r' % class_names[y[idx]]
acc = float(num_correct) / num_images
print 'You got %d / %d correct for an accuracy of %f' % (num_correct, num_images, acc)
"""
Explanation: Test human performance
Run the following to test your own classification performance on the TinyImageNet-100-A dataset.
You can run it several times in 'training' mode to get familiar with the task; once you are ready to test yourself, switch the mode to 'val'.
You won't be penalized if you don't correctly classify all the images, but you should still try your best.
End of explanation
"""
from cs231n.data_utils import load_models
models_dir = 'cs231n/datasets/tiny-100-A-pretrained'
# models is a dictionary mappping model names to models.
# Like the previous assignment, each model is a dictionary mapping parameter
# names to parameter values.
models = load_models(models_dir)
"""
Explanation: Download pretrained models
We have provided 10 pretrained ConvNets for the TinyImageNet-100-A dataset. Each of these models is a five-layer ConvNet with the architecture
[conv - relu - pool] x 3 - affine - relu - affine - softmax
All convolutional layers are 3x3 with stride 1 and all pooling layers are 2x2 with stride 2. The first two convolutional layers have 32 filters each, and the third convolutional layer has 64 filters. The hidden affine layer has 512 neurons. You can run the forward and backward pass for these five layer convnets using the function five_layer_convnet in the file cs231n/classifiers/convnet.py.
Each of these models was trained for 25 epochs over the TinyImageNet-100-A training data with a batch size of 50 and with dropout on the hidden affine layer. Each model was trained using slightly different values for the learning rate, regularization, and dropout probability.
To download the pretrained models, go into the cs231n/datasets directory and run the get_pretrained_models.sh script. Once you have done so, run the following to load the pretrained models into memory.
NOTE: The pretrained models will take about 245MB of disk space.
End of explanation
"""
from cs231n.classifiers.convnet import five_layer_convnet
# Dictionary mapping model names to their predicted class probabilities on the
# validation set. model_to_probs[model_name] is an array of shape (N_val, 100)
# where model_to_probs[model_name][i, j] = p indicates that models[model_name]
# predicts that X_val[i] has class j with probability p.
model_to_probs = {}
################################################################################
# TODO: Use each model to predict classification probabilities for all images #
# in the validation set. Store the predicted probabilities in the #
# model_to_probs dictionary as above. To compute forward passes and compute #
# probabilities, use the function five_layer_convnet in the file #
# cs231n/classifiers/convnet.py. #
# #
# HINT: Trying to predict on the entire validation set all at once will use a #
# ton of memory, so you should break the validation set into batches and run #
# each batch through each model separately. #
################################################################################
import math

batch_size = 100
# Note: divide as floats so the last partial batch is not dropped.
num_batches = int(math.ceil(X_val.shape[0] / float(batch_size)))
for model_name, model in models.items():
    probs = []
    for i in xrange(num_batches):
        X_batch = X_val[i * batch_size: (i + 1) * batch_size]
        # Forward pass only: passing y=None with return_probs=True yields class probabilities.
        probs.append(five_layer_convnet(X_batch, model, None, return_probs=True))
    model_to_probs[model_name] = np.concatenate(probs, axis=0)
################################################################################
# END OF YOUR CODE #
################################################################################
# Compute and print the accuracy for each model.
for model_name, probs in model_to_probs.iteritems():
acc = np.mean(np.argmax(probs, axis=1) == y_val)
print '%s got accuracy %f' % (model_name, acc)
"""
Explanation: Run models on the validation set
To benchmark the performance of each model on its own, we will use each model to make predictions on the validation set.
End of explanation
"""
def compute_ensemble_preds(probs_list):
"""
Use the predicted class probabilities from different models to implement
the ensembling method described above.
Inputs:
- probs_list: A list of numpy arrays, where each gives the predicted class
probabilities under some model. In other words,
probs_list[j][i, c] = p means that the jth model in the ensemble thinks
that the ith data point has class c with probability p.
Returns:
An array y_pred_ensemble of ensembled predictions, such that
y_pred_ensemble[i] = c means that ensemble predicts that the ith data point
is predicted to have class c.
"""
y_pred_ensemble = None
############################################################################
# TODO: Implement this function. Store the ensemble predictions in #
# y_pred_ensemble. #
############################################################################
    probs_list_ensemble = np.mean(probs_list, axis=0)
    y_pred_ensemble = np.argmax(probs_list_ensemble, axis=1)
############################################################################
# END OF YOUR CODE #
############################################################################
return y_pred_ensemble
# Combine all models into an ensemble and make predictions on the validation set.
# This should be significantly better than the best individual model.
print np.mean(compute_ensemble_preds(model_to_probs.values()) == y_val)
"""
Explanation: Use a model ensemble
A simple way to implement an ensemble of models is to average the predicted probabilites for each model in the ensemble.
More concretely, suppose we have $k$ models $m_1,\ldots,m_k$ and we want to combine them into an ensemble. If $p(x=y_i \mid m_j)$ is the probability that the input $x$ is classified as $y_i$ under model $m_j$, then the ensemble predicts
$$p(x=y_i \mid {m_1,\ldots,m_k}) = \frac1k\sum_{j=1}^kp(x=y_i\mid m_j)$$
In the cell below, implement this simple ensemble method by filling in the compute_ensemble_preds function. The ensemble of all 10 models should perform much better than the best individual model.
End of explanation
"""
################################################################################
# TODO: Create a plot comparing ensemble size with ensemble performance as #
# described above. #
# #
# HINT: Look up the function itertools.combinations. #
################################################################################
import itertools

ensemble_sizes = []
val_accs = []
for i in range(1, 11):
    for combination in itertools.combinations(model_to_probs.values(), i):
        ensemble_sizes.append(i)
        y_pred_ensemble = compute_ensemble_preds(combination)
        val_accs.append(np.mean(y_pred_ensemble == y_val))
plt.scatter(ensemble_sizes, val_accs)
plt.title('Ensemble size vs Performance')
plt.xlabel('ensemble size')
plt.ylabel('validation set accuracy')
################################################################################
# END OF YOUR CODE #
################################################################################
"""
Explanation: Ensemble size vs Performance
Using our 10 pretrained models, we can form many different ensembles of different sizes. More precisely, if we have $n$ models and we want to form an ensemble of $k$ models, then there are $\binom{n}{k}$ possible ensembles that we can form, where
$$\binom{n}{k} = \frac{n!}{(n-k)!k!}$$
We can use these different possible ensembles to study the effect of ensemble size on ensemble performance.
In the cell below, compute the validation set accuracy of all possible ensembles of our 10 pretrained models. Produce a scatter plot with "ensemble size" on the horizontal axis and "validation set accuracy" on the vertical axis. Your plot should have a total of
$$\sum_{k=1}^{10} \binom{10}{k}$$
points corresponding to all possible ensembles of the 10 pretrained models.
You should be able to compute the validation set predictions of these ensembles without computing any more forward passes through any of the networks.
End of explanation
"""
|
michael-hoffman/titanic-revisited | Titanic_ML_v1.ipynb | gpl-3.0 | # data analysis and wrangling
import pandas as pd
import numpy as np
import scipy
# visualization
import matplotlib.pyplot as plt
import seaborn as sns
# machine learning
from sklearn.svm import SVC
from sklearn import preprocessing
import fancyimpute
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
%matplotlib inline
"""
Explanation: Titanic: Revisiting Disaster
An Exploration into the Data using Python
Data Science on the Hill (Michael Hoffman and Charlies Bonfield)
1. Introduction <a class="anchor" id="first-bullet"></a>
From our previous work on this dataset, it seems like the best way forward is to include new features with explanatory power. The first way to do that would be to use more sophisticated parsing of the available data (particularly the name feature) and extract new features from that. Secondly, we can include interactions up to a specified order, increasing the number of features in a systematic way. We will explore the second method in this revisitation of the Titanic dataset. Skip to section 4 if you have already seen the previous work.
End of explanation
"""
# Load the data.
training_data = pd.read_csv('train.csv')
test_data = pd.read_csv('test.csv')
# Examine the first few rows of data in the training set.
training_data.head()
"""
Explanation: 2. Loading/Examining the Data <a class="anchor" id="second-bullet"></a>
End of explanation
"""
# Extract title from names, then assign to one of five classes.
# Function based on code from: https://www.kaggle.com/startupsci/titanic/titanic-data-science-solutions
def add_title(data):
data['Title'] = data.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
data.Title = data.Title.replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr', 'Major',
'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
data.Title = data.Title.replace('Mlle', 'Miss')
data.Title = data.Title.replace('Ms', 'Miss')
data.Title = data.Title.replace('Mme', 'Mrs')
# Map from strings to numerical variables.
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
data.Title = data.Title.map(title_mapping)
data.Title = data.Title.fillna(0)
return data
"""
Explanation: 3. All the Features! <a class="anchor" id="third-bullet"></a>
We will extract the features with custom functions. This isn't necessary for every feature in this project, but we want to leave the possibility of further development open for the future.
3a. Extracting Titles from Names <a class="anchor" id="third-first"></a>
While the Name feature itself may not appear to be useful at first glance, we can tease out additional features that may be useful for predicting survival on the Titanic. We will extract a Title from each name, as that carries information about social and marital status (which in turn may relate to survival).
End of explanation
"""
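A quick way to see what the `' ([A-Za-z]+)\.'` pattern in add_title captures is to run it on a few sample names (made-up examples, using only the standard library):

```python
import re

# The same pattern used by add_title above: the word between a space and a period.
pattern = r' ([A-Za-z]+)\.'

samples = [
    'Braund, Mr. Owen Harris',
    'Heikkinen, Miss. Laina',
    'Simonius-Blumer, Col. Oberst Alfons',
]
titles = [re.search(pattern, name).group(1) for name in samples]
print(titles)  # ['Mr', 'Miss', 'Col']
```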
missing_emb_training = training_data[pd.isnull(training_data.Embarked) == True]
missing_emb_test = test_data[pd.isnull(test_data.Embarked) == True]
missing_emb_training.head()
missing_emb_test.head()
"""
Explanation: 3b. Treating Missing Ports of Embarkation <a class="anchor" id="third-second"></a>
Next, let's see if there are any rows that are missing ports of embarkation.
End of explanation
"""
grid = sns.FacetGrid(training_data[training_data.Pclass == 1], col='Embarked', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Fare', alpha=.5, bins=20)
grid.map(plt.axvline, x=80.0, color='red', ls='dashed')
grid.add_legend();
"""
Explanation: We have two passengers in the training set that are missing ports of embarkation, while we are not missing any in the test set. <br>
The features which may allow us to assign a port of embarkation based on the data that we do have are Pclass, Fare, and Cabin. However, since we are missing so much of the Cabin column (more on that later), let's focus on the other two.
End of explanation
"""
# Recast port of departure as numerical feature.
def simplify_embark(data):
# Two missing values, assign Cherbourg as port of departure.
data.Embarked = data.Embarked.fillna('C')
le = preprocessing.LabelEncoder().fit(data.Embarked)
data.Embarked = le.transform(data.Embarked)
return data
"""
Explanation: Although Southampton was the most popular port of embarkation, there was a greater fraction of passengers in the first ticket class from Cherbourg who paid 80.00 for their tickets. Therefore, we will assign 'C' to the missing values for Embarked. We will also recast Embarked as a numerical feature.
End of explanation
"""
missing_fare_training = training_data[np.isnan(training_data['Fare'])]
missing_fare_test = test_data[np.isnan(test_data['Fare'])]
missing_fare_training.head()
missing_fare_test.head()
"""
Explanation: 3c. Handling Missing Fares <a class="anchor" id="third-third"></a>
We will perform a similar analysis to see if there are any missing fares.
End of explanation
"""
restricted_training = training_data[(training_data.Pclass == 3) & (training_data.Embarked == 'S')]
restricted_test = test_data[(test_data.Pclass == 3) & (test_data.Embarked == 'S')]
restricted_test = restricted_test[~np.isnan(restricted_test.Fare)] # Leave out poor Mr. Storey
combine = [restricted_training, restricted_test]
combine = pd.concat(combine)
# Find median fare, plot over resulting distribution.
fare_med = np.median(combine.Fare)
sns.kdeplot(combine.Fare, shade=True)
plt.axvline(fare_med, color='r', ls='dashed', lw='1', label='Median')
plt.legend();
"""
Explanation: This time, the Fare column in the training set is complete, but we are missing that information for one passenger in the test set. Since we do have Pclass and Embarked, however, we will assign a fare based on the distribution of fares for those particular values of Pclass and Embarked.
End of explanation
"""
test_data['Fare'] = test_data['Fare'].fillna(fare_med)
"""
Explanation: After examining the distribution of Fare restricted to the specified values of Pclass and Embarked, we will use the median for the missing fare (as it falls very close to the fare corresponding to the peak of the distribution).
End of explanation
"""
missing_cabin_training = np.size(training_data.Cabin[pd.isnull(training_data.Cabin) == True]) / np.size(training_data.Cabin) * 100.0
missing_cabin_test = np.size(test_data.Cabin[pd.isnull(test_data.Cabin) == True]) / np.size(test_data.Cabin) * 100.0
print('Percentage of Missing Cabin Numbers (Training): %0.1f' % missing_cabin_training)
print('Percentage of Missing Cabin Numbers (Test): %0.1f' % missing_cabin_test)
"""
Explanation: 3d. Cabin Number: Relevant or Not? <a class="anchor" id="third-fourth"></a>
When we first encountered the data, we figured that Cabin would be one of the most important features in predicting survival, as it would not be unreasonable to think of it as a proxy for a passenger's position on the Titanic relative to the lifeboats (distance to deck, distance to nearest stairwell, social class, etc.).
Unfortunately, much of this data is missing:
End of explanation
"""
## Set of functions to transform features into more convenient format.
#
# Code performs two separate tasks:
# (1). Pull out the first letter of the cabin feature.
# Code taken from: https://www.kaggle.com/jeffd23/titanic/scikit-learn-ml-from-start-to-finish
# (2). Recast the cabin feature as a number.
def simplify_cabins(data):
data.Cabin = data.Cabin.fillna('N')
data.Cabin = data.Cabin.apply(lambda x: x[0])
#cabin_mapping = {'N': 0, 'A': 1, 'B': 1, 'C': 1, 'D': 1, 'E': 1,
# 'F': 1, 'G': 1, 'T': 1}
#data['Cabin_Known'] = data.Cabin.map(cabin_mapping)
le = preprocessing.LabelEncoder().fit(data.Cabin)
data.Cabin = le.transform(data.Cabin)
return data
"""
Explanation: What can we do with this data (rather, the lack thereof)?
For now, let's just pull out the first letter of each cabin number (including NaNs), cast them as numbers, and hope they improve the performance of our classifier.
End of explanation
"""
# Recast sex as numerical feature.
def simplify_sex(data):
sex_mapping = {'male': 0, 'female': 1}
data.Sex = data.Sex.map(sex_mapping).astype(int)
return data
# Drop all unwanted features (name, ticket).
def drop_features(data):
return data.drop(['Name','Ticket'], axis=1)
# Perform all feature transformations.
def transform_all(data):
data = add_title(data)
data = simplify_embark(data)
data = simplify_cabins(data)
data = simplify_sex(data)
data = drop_features(data)
return data
training_data = transform_all(training_data)
test_data = transform_all(test_data)
all_data = [training_data, test_data]
combined_data = pd.concat(all_data)
# Inspect data.
combined_data.head()
"""
Explanation: 3e. Quick Fixes <a class="anchor" id="third-fifth"></a>
Prior to the last step (which is arguably the largest one), we need to tie up a few remaining loose ends:
- Recast Sex as numerical feature.
- Drop unwanted features.
- Name: We've taken out the information that we need (Title).
- Ticket: There appears to be no rhyme or reason to the data in this column, so we remove it from our analysis.
- Combine training/test data prior to age imputation.
End of explanation
"""
null_ages = pd.isnull(combined_data.Age)
known_ages = pd.notnull(combined_data.Age)
initial_dist = combined_data.Age[known_ages]
"""
Explanation: 3f. Imputing Missing Ages <a class="anchor" id="third-sixth"></a>
It is expected that age will be an important feature; however, a number of ages are missing. Attempting to predict the ages with a simple model was not very successful. We decided to follow the recommendations for imputation from this article.
End of explanation
"""
def impute_ages(data):
drop_survived = data.drop(['Survived'], axis=1)
column_titles = list(drop_survived)
mice_results = fancyimpute.MICE().complete(np.array(drop_survived))
results = pd.DataFrame(mice_results, columns=column_titles)
results['Survived'] = list(data['Survived'])
return results
complete_data = impute_ages(combined_data)
complete_data.Age = complete_data.Age[~(complete_data.Age).index.duplicated(keep='first')]
"""
Explanation: This paper provides an introduction to the MICE method with a focus on practical aspects and challenges in using this method. We have chosen to use the MICE implementation from fancyimpute.
End of explanation
"""
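fancyimpute's API has changed across releases, so as a library-independent illustration, here is a minimal numpy sketch of the chained-equations idea behind MICE: fill the missing column with its mean, then repeatedly regress it on the other features and refill the missing entries from the regression. This is a toy single-column version for intuition, not the full multi-column MICE algorithm, and the data below is synthetic:

```python
import numpy as np

def mice_like_impute(X, col, n_iter=5):
    """Toy chained-equations imputation for a single column containing NaNs."""
    X = X.copy()
    mask = np.isnan(X[:, col])
    X[mask, col] = np.nanmean(X[:, col])                 # step 1: mean fill
    others = [j for j in range(X.shape[1]) if j != col]
    A = np.column_stack([np.ones(len(X)), X[:, others]])
    for _ in range(n_iter):
        # step 2: regress the column on the others using only the observed rows
        beta, *_ = np.linalg.lstsq(A[~mask], X[~mask, col], rcond=None)
        X[mask, col] = A[mask] @ beta                    # step 3: refill and repeat
    return X

# Synthetic demo: an "Age" column that depends linearly on two other features.
rng = np.random.default_rng(0)
other = rng.normal(size=(300, 2))
age = 30 + 4 * other[:, 0] - 2 * other[:, 1] + rng.normal(scale=0.5, size=300)
X = np.column_stack([other, age])
mask = rng.random(300) < 0.3
true_vals = X[mask, 2].copy()
X[mask, 2] = np.nan

imputed = mice_like_impute(X, col=2)
print(np.abs(imputed[mask, 2] - true_vals).mean())       # mean error close to the noise level
```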
# Transform age and fare data to have mean zero and variance 1.0.
scaler = preprocessing.StandardScaler()
select = 'Age Fare'.split()
complete_data[select] = scaler.fit_transform(complete_data[select])
training_data = complete_data[:891]
test_data = complete_data[891:].drop('Survived', axis=1)
# drop uninformative data and the target feature
droplist = 'Survived PassengerId'.split()
data = training_data.drop(droplist, axis=1)
# Define features and target values
X, y = data, training_data['Survived']
# generate the polynomial features
poly = preprocessing.PolynomialFeatures(2)
X = pd.DataFrame(poly.fit_transform(X)).drop(0, axis=1)
# feature selection
features = SelectKBest(f_classif, k=12).fit(X,y)
# print(sorted(list(zip(features.scores_, X.columns)), reverse=True))
X_new = pd.DataFrame(features.transform(X))
X_new.head()
# ----------------------------------
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
# Support Vector Machines
#
# # Set the parameters by cross-validation
# param_dist = {'C': scipy.stats.uniform(0.1, 1000), 'gamma': scipy.stats.uniform(.001, 1.0),
# 'kernel': ['rbf'], 'class_weight':['balanced', None]}
#
# clf = SVC()
#
# # run randomized search
# n_iter_search = 10000
# random_search = RandomizedSearchCV(clf, param_distributions=param_dist,
# n_iter=n_iter_search, n_jobs=-1, cv=4)
#
# start = time()
# random_search.fit(X, y)
# print("RandomizedSearchCV took %.2f seconds for %d candidates"
# " parameter settings." % ((time() - start), n_iter_search))
# report(random_search.cv_results_)
# exit()
"""
RandomizedSearchCV took 4851.48 seconds for 10000 candidates parameter settings.
Model with rank: 1
Mean validation score: 0.833 (std: 0.013)
Parameters: {'kernel': 'rbf', 'C': 107.54222939713921, 'gamma': 0.013379109762586716, 'class_weight': None}
Model with rank: 2
Mean validation score: 0.832 (std: 0.012)
Parameters: {'kernel': 'rbf', 'C': 154.85033872208422, 'gamma': 0.010852578446979289, 'class_weight': None}
Model with rank: 2
Mean validation score: 0.832 (std: 0.012)
Parameters: {'kernel': 'rbf', 'C': 142.60506747360913, 'gamma': 0.011625955252680842, 'class_weight': None}
"""
params = {'kernel': 'rbf', 'C': 107.54222939713921, 'gamma': 0.013379109762586716, 'class_weight': None}
clf = SVC(**params)
scores = cross_val_score(clf, X, y, cv=4, n_jobs=-1)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
droplist = 'PassengerId'.split()
clf.fit(X,y)
# The classifier was trained on polynomial features, so the test data must be
# expanded the same way before predicting.
test_X = pd.DataFrame(poly.transform(test_data.drop(droplist, axis=1))).drop(0, axis=1)
predictions = clf.predict(test_X)
#print(predictions)
print('Predicted Number of Survivors: %d' % int(np.sum(predictions)))
# output .csv for upload
# submission = pd.DataFrame({
# "PassengerId": test_data['PassengerId'].astype(int),
# "Survived": predictions.astype(int)
# })
#
# submission.to_csv('../submission.csv', index=False)
"""
Explanation: 4. Interaction Terms <a class="anchor" id="fourth-bullet"></a>
We will employ the PolynomialFeatures function to obtain all the possible combinations of features at second order.
For a feature vector (x_1, ..., x_n), a degree-2 expansion produces the constant 1, each x_i, and every product x_i * x_j (including the squares x_i^2), for a total of 1 + n + n(n+1)/2 features. The cross terms x_i * x_j are the interaction terms: they let an otherwise linear decision boundary capture joint effects, such as being female and travelling in third class.
End of explanation
"""
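As a concrete illustration of what the degree-2 expansion above produces, here is a standard-library sketch that mirrors scikit-learn's PolynomialFeatures(2) output order (constant term, the original features, then all degree-2 products in combinations-with-replacement order):

```python
from itertools import combinations_with_replacement

def poly2(x):
    """Degree-2 polynomial expansion of a feature vector x, including interactions."""
    out = [1.0]                                   # bias term (dropped in the notebook above)
    out.extend(float(v) for v in x)               # the original features
    for i, j in combinations_with_replacement(range(len(x)), 2):
        out.append(float(x[i] * x[j]))            # squares and pairwise interactions
    return out

print(poly2([2, 3]))  # [1.0, 2.0, 3.0, 4.0, 6.0, 9.0]
```

For n features this yields 1 + n + n(n+1)/2 outputs, matching the count given above.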
|
omoju/udacityUd120Lessons | Feature Selection.ipynb | gpl-3.0 | from __future__ import division
data_point = data_dict['METTS MARK']
frac = data_point["from_poi_to_this_person"] / data_point["to_messages"]
print frac
def computeFraction( poi_messages, all_messages ):
""" given a number messages to/from POI (numerator)
and number of all messages to/from a person (denominator),
return the fraction of messages to/from that person
that are from/to a POI
"""
### you fill in this code, so that it returns either
### the fraction of all messages to this person that come from POIs
### or
### the fraction of all messages from this person that are sent to POIs
### the same code can be used to compute either quantity
### beware of "NaN" when there is no known email address (and so
### no filled email features), and integer division!
### in case of poi_messages or all_messages having "NaN" value, return 0.
fraction = 0
if poi_messages != 'NaN' and all_messages != 'NaN':
    fraction = float(poi_messages) / float(all_messages)
return fraction
submit_dict = {}
for name in data_dict:
data_point = data_dict[name]
from_poi_to_this_person = data_point["from_poi_to_this_person"]
to_messages = data_point["to_messages"]
fraction_from_poi = computeFraction( from_poi_to_this_person, to_messages )
print '{:5}{:35}{:.2f}'.format('FROM ', name, fraction_from_poi)
data_point["fraction_from_poi"] = fraction_from_poi
from_this_person_to_poi = data_point["from_this_person_to_poi"]
from_messages = data_point["from_messages"]
fraction_to_poi = computeFraction( from_this_person_to_poi, from_messages )
#print fraction_to_poi
print '{:5}{:35}{:.2f}'.format('TO:  ', name, fraction_to_poi)
submit_dict[name]={"from_poi_to_this_person":fraction_from_poi,
"from_this_person_to_poi":fraction_to_poi}
data_point["fraction_to_poi"] = fraction_to_poi
#####################
def submitDict():
return submit_dict
"""
Explanation: Outline of the feature idea:
- get the email of the author
- compare it to the list of known persons of interest
- return a boolean indicating whether the author is a person of interest
- aggregate the count over all emails to the person
End of explanation
"""
import sys
import pickle
# dataPath is assumed to have been defined in an earlier cell.
sys.path.append(dataPath+'text_learning/')
words_file = "your_word_data.pkl"
authors_file = "your_email_authors.pkl"
word_data = pickle.load( open(words_file, "r"))
authors = pickle.load( open(authors_file, "r") )
### test_size is the percentage of events assigned to the test set (the
### remainder go into training)
### feature matrices changed to dense representations for compatibility with
### classifier functions in versions 0.15.2 and earlier
from sklearn import cross_validation
features_train, features_test, labels_train, labels_test = cross_validation.train_test_split(word_data,
authors, test_size=0.1, random_state=42)
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')
features_train = vectorizer.fit_transform(features_train)
features_test = vectorizer.transform(features_test).toarray()
### a classic way to overfit is to use a small number
### of data points and a large number of features;
### train on only 150 events to put ourselves in this regime
features_train = features_train[:150].toarray()
labels_train = labels_train[:150]
"""
Explanation: Beware of BUGS!!!
When Katie was working on the Enron POI identifier, she engineered a feature that identified when a given person was on the same email as a POI. So for example, if Ken Lay and Katie Malone are both recipients of the same email message, then Katie Malone should have her "shared receipt" feature incremented. If she shares lots of emails with POIs, maybe she's a POI herself.
Here's the problem: there was a subtle bug whereby Ken Lay's "shared receipt" counter would also be incremented when this happened. And of course, Ken Lay then always shares a receipt with a POI, because he is a POI. So the "shared receipt" feature became extremely powerful in finding POIs, because it was effectively encoding the label for each person as a feature.
We found this first by being suspicious of a classifier that was always returning 100% accuracy. Then we removed features one at a time, and found that this feature was driving all the performance. Then, digging back through the feature code, we found the bug outlined above. We changed the code so that a person's "shared receipt" feature was only incremented if there was a different POI who received the email, reran the code, and tried again. The accuracy dropped to a more reasonable level.
We take a couple of lessons from this:
- Anyone can make mistakes--be skeptical of your results!
- 100% accuracy should generally make you suspicious. Extraordinary claims require extraordinary proof.
- If there's a feature that tracks your labels a little too closely, it's very likely a bug!
- If you're sure it's not a bug, you probably don't need machine learning--you can just use that feature alone to assign labels.
Feature Selection Mini Project
End of explanation
"""
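The "shared receipt" bug described above is easy to reproduce synthetically: give a tree one feature that effectively encodes the label, and it will dominate the importances (a sketch assuming scikit-learn is available; the data here is made up, not the email corpus):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_noise = rng.normal(size=(500, 9))          # nine uninformative features
y = rng.integers(0, 2, size=500)
leak = y + 0.01 * rng.normal(size=500)       # a "feature" that encodes the label
X = np.column_stack([X_noise, leak])

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.feature_importances_[-1])          # close to 1.0: the leak drives everything
```

This is why a single feature with suspiciously high importance deserves a close look.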
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf.fit(features_train, labels_train)
print "{}{:.2f}".format("Classifier accurancy: ", clf.score(features_test, labels_test))
import operator
featuresImportance = clf.feature_importances_
featuresSortedByScore = []
for feature in range(len(featuresImportance)):
if featuresImportance[feature] > 0.2:
featuresSortedByScore.append([feature, featuresImportance[feature]])
df = sorted(featuresSortedByScore, key=operator.itemgetter(1), reverse=True)
for i in range(len(df)):
print "{:5d}: {:f}".format(df[i][0], df[i][1])
for i in range(len(df)):
print vectorizer.get_feature_names()[df[i][0]]
"""
Explanation: This is an iterative process:
- start off with a pared-down version of the dataset
- run a decision tree on it
- get the accuracy, which should be rather high
- get the important features, defined as those with importance scores over 0.2
- remove those features
- run again until very few features have an importance value over 0.2
End of explanation
"""
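A minimal synthetic sketch of that removal loop (assuming scikit-learn; the data is made up, not the email corpus):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = (X[:, 3] > 0).astype(int)                 # feature 3 leaks the label

keep = list(range(X.shape[1]))
for _ in range(10):
    clf = DecisionTreeClassifier(random_state=0).fit(X[:, keep], y)
    hot = [keep[i] for i, imp in enumerate(clf.feature_importances_) if imp > 0.2]
    if not hot:
        break                                 # no suspiciously strong features left
    keep = [f for f in keep if f not in hot]  # drop them and refit
```

The leaking feature is removed on the first pass, after which the tree is left to fit noise.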
|
tensorflow/docs-l10n | site/zh-cn/tutorials/distribute/save_and_load.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import tensorflow_datasets as tfds
import tensorflow as tf
"""
Explanation: Save and load a model using a distribution strategy
<table class="tfo-notebook-buttons" align="left">
  <td><a target="_blank" href="https://tensorflow.google.cn/tutorials/distribute/save_and_load"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/save_and_load.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/save_and_load.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/distribute/save_and_load.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
It is common to save and load a model during training. There are two sets of APIs for saving and loading a Keras model: a high-level API and a low-level API. This tutorial demonstrates how to use the SavedModel APIs when using tf.distribute.Strategy. To learn about SavedModel and serialization in general, please see the saved model guide and the Keras model serialization guide. Let's start with a simple example:
Import dependencies:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy()
def get_data():
datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
return train_dataset, eval_dataset
def get_model():
with mirrored_strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
return model
"""
Explanation: Prepare the data and model using tf.distribute.Strategy:
End of explanation
"""
model = get_model()
train_dataset, eval_dataset = get_data()
model.fit(train_dataset, epochs=2)
"""
Explanation: Train the model:
End of explanation
"""
keras_model_path = "/tmp/keras_save"
model.save(keras_model_path)
"""
Explanation: Save and load the model
Now that you have a simple model to work with, let's take a look at the saving/loading APIs. There are two sets of APIs available:
High-level: Keras model.save and tf.keras.models.load_model
Low-level: tf.saved_model.save and tf.saved_model.load
The Keras API
Here is an example of saving and loading a model with the Keras API:
End of explanation
"""
restored_keras_model = tf.keras.models.load_model(keras_model_path)
restored_keras_model.fit(train_dataset, epochs=2)
"""
Explanation: Restore the model without tf.distribute.Strategy:
End of explanation
"""
another_strategy = tf.distribute.OneDeviceStrategy("/cpu:0")
with another_strategy.scope():
restored_keras_model_ds = tf.keras.models.load_model(keras_model_path)
restored_keras_model_ds.fit(train_dataset, epochs=2)
"""
Explanation: After restoring the model, you can continue training on it without having to call compile() again, since the model was compiled before it was saved. The model is saved in TensorFlow's standard SavedModel proto format. For more information, please refer to the saved_model format guide.
Now, load the model and train it using tf.distribute.Strategy:
End of explanation
"""
model = get_model() # get a fresh model
saved_model_path = "/tmp/tf_save"
tf.saved_model.save(model, saved_model_path)
"""
Explanation: As you can see, the model loads as expected with tf.distribute.Strategy. The strategy used here does not have to be the same as the one used before saving.
The tf.saved_model API
Now let's look at the lower-level API. Saving the model is similar to the Keras API:
End of explanation
"""
DEFAULT_FUNCTION_KEY = "serving_default"
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
"""
Explanation: The model can be loaded with tf.saved_model.load(). However, since this is a lower-level API (and therefore has a wider range of use cases), it does not return a Keras model. Instead, it returns an object containing functions that can be used for inference. For example:
End of explanation
"""
predict_dataset = eval_dataset.map(lambda image, label: image)
for batch in predict_dataset.take(1):
print(inference_func(batch))
"""
Explanation: The loaded object may contain multiple functions, each associated with a key. "serving_default" is the default key for the inference function of a saved Keras model. To run inference with this function:
End of explanation
"""
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
dist_predict_dataset = another_strategy.experimental_distribute_dataset(
predict_dataset)
# Calling the function in a distributed manner
for batch in dist_predict_dataset:
another_strategy.run(inference_func, args=(batch,))
"""
Explanation: You can also load the model and run inference in a distributed manner:
End of explanation
"""
import tensorflow_hub as hub
def build_model(loaded):
x = tf.keras.layers.Input(shape=(28, 28, 1), name='input_x')
# Wrap what's loaded to a KerasLayer
keras_layer = hub.KerasLayer(loaded, trainable=True)(x)
model = tf.keras.Model(x, keras_layer)
return model
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
model = build_model(loaded)
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
model.fit(train_dataset, epochs=2)
"""
Explanation: Calling a restored function is just a forward pass (prediction) on the saved model. What if you want to continue training the loaded function, or embed it into a larger model? A common practice is to wrap the loaded object into a Keras layer. Fortunately, TF Hub provides hub.KerasLayer for exactly this purpose, as shown below:
End of explanation
"""
model = get_model()
# Saving the model using Keras's save() API
model.save(keras_model_path)
another_strategy = tf.distribute.MirroredStrategy()
# Loading the model using lower level API
with another_strategy.scope():
loaded = tf.saved_model.load(keras_model_path)
"""
Explanation: As you can see, hub.KerasLayer wraps the result loaded back from tf.saved_model.load() into a Keras layer that can be used to build other models. This is very useful for transfer learning.
Which API should I use?
For saving, if you are working with a Keras model, it is almost always recommended to use Keras's model.save() API. If what you are saving is not a Keras model, then the lower-level API is your only choice.
For loading, which API to use depends on what you want to get back. If you cannot or do not want to obtain a Keras model, use tf.saved_model.load(). Otherwise, use tf.keras.models.load_model(). Note that a Keras model can only be restored if a Keras model was saved.
It is possible to mix and match the APIs. You can save a Keras model with model.save and load a non-Keras model with the low-level API tf.saved_model.load.
End of explanation
"""
model = get_model()
# Saving the model to a path on localhost.
saved_model_path = "/tmp/tf_save"
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model.save(saved_model_path, options=save_options)
# Loading the model from a path on localhost.
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
load_options = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost')
loaded = tf.keras.models.load_model(saved_model_path, options=load_options)
"""
Explanation: Saving/loading from a local device
To save and load from a local I/O device while running remotely (for example, on a Cloud TPU), you must set the I/O device to localhost with the experimental_io_device option.
End of explanation
"""
class SubclassedModel(tf.keras.Model):
output_name = 'output_layer'
def __init__(self):
super(SubclassedModel, self).__init__()
self._dense_layer = tf.keras.layers.Dense(
5, dtype=tf.dtypes.float32, name=self.output_name)
def call(self, inputs):
return self._dense_layer(inputs)
my_model = SubclassedModel()
# my_model.save(keras_model_path) # ERROR!
tf.saved_model.save(my_model, saved_model_path)
"""
Explanation: Caveats
There is a special case in which your Keras model does not have well-defined inputs. For example, a Sequential model can be created without any input shape (Sequential([Dense(3), ...])). A subclassed model also does not have well-defined inputs after initialization. In that case, you should stick with the lower-level APIs for both saving and loading; otherwise you will get an error.
To check whether your model has well-defined inputs, just check whether model.inputs is None. If it is not None, everything is fine. Input shapes are defined automatically when the model is used in .fit, .evaluate, or .predict, or when the model is called (model(inputs)).
Here is an example:
End of explanation
"""
|
pglauner/misc | src/cs730/3_regularization.ipynb | gpl-2.0 | # These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
"""
Explanation: Deep Learning
Assignment 3
Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
"""
Explanation: First reload the data we generated in notmnist.ipynb.
End of explanation
"""
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
"""
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
"""
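The broadcasting trick in reformat above is compact; here is what it does on a tiny example:

```python
import numpy as np

labels = np.array([2, 0, 1])
num_labels = 3
# Compare each label against [0, 1, 2]; broadcasting yields one row per label.
one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
print(one_hot)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```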
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(
tf.truncated_normal([image_size * image_size, 1024]))
biases1 = tf.Variable(tf.zeros([1024]))
hidden1 = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
weights2 = tf.Variable(
tf.truncated_normal([1024, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(hidden1, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(biases1) +
tf.nn.l2_loss(weights2) + tf.nn.l2_loss(biases2))
loss += 5e-4 * regularizers
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1),
weights2) + biases2)
test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1),
weights2) + biases2)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Problem 1
Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
End of explanation
"""
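For reference, tf.nn.l2_loss(t) computes sum(t**2) / 2 (with no square root), so the regularizers term above is easy to reason about in plain numpy. The sketch below also shows why the penalty shrinks weights: its gradient with respect to w is simply lam * w (the values are illustrative):

```python
import numpy as np

def l2_loss(t):
    # Matches tf.nn.l2_loss: sum(t**2) / 2, no square root.
    return 0.5 * np.sum(t ** 2)

# A gradient-descent step on the penalty alone shrinks every weight toward zero:
w = np.array([1.0, -2.0, 3.0])
lam, lr = 5e-4, 0.5                      # same weighting as `loss += 5e-4 * regularizers`
w_new = w - lr * lam * w                 # d/dw [lam * l2_loss(w)] = lam * w
print(w_new)                             # a slightly shrunk copy of w
```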
batch_size = 12
SEED = 66478
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, 1024]))
biases1 = tf.Variable(tf.zeros([1024]))
weights2 = tf.Variable(tf.truncated_normal([1024, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
def model(data, train=False):
hidden1 = tf.nn.relu(tf.matmul(data, weights1) + biases1)
return tf.matmul(hidden1, weights2) + biases2
# Training computation.
logits = model(tf_train_dataset, True)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(biases1) +
tf.nn.l2_loss(weights2) + tf.nn.l2_loss(biases2))
loss += 5e-4 * regularizers
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Problem 2
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
End of explanation
"""
batch_size = 12
SEED = 66478
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, 1024]))
biases1 = tf.Variable(tf.zeros([1024]))
weights2 = tf.Variable(tf.truncated_normal([1024, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
def model(data, train=False):
hidden1 = tf.nn.relu(tf.matmul(data, weights1) + biases1)
if train:
hidden1 = tf.nn.dropout(hidden1, 0.5, seed=SEED)
return tf.matmul(hidden1, weights2) + biases2
# Training computation.
logits = model(tf_train_dataset, True)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(biases1) +
tf.nn.l2_loss(weights2) + tf.nn.l2_loss(biases2))
loss += 5e-4 * regularizers
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Problem 3
Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.
What happens to our extreme overfitting case?
End of explanation
"""
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, 1024]))
biases1 = tf.Variable(tf.zeros([1024]))
weights2 = tf.Variable(tf.truncated_normal([1024, 1024]))
biases2 = tf.Variable(tf.zeros([1024]))
weights3 = tf.Variable(tf.truncated_normal([1024, num_labels]))
biases3 = tf.Variable(tf.zeros([num_labels]))
def model(data, train=False):
hidden1 = tf.nn.relu(tf.matmul(data, weights1) + biases1)
if train:
hidden1 = tf.nn.dropout(hidden1, 0.7, seed=SEED)
hidden2 = tf.nn.relu(tf.matmul(hidden1, weights2) + biases2)
if train:
hidden2 = tf.nn.dropout(hidden2, 0.7, seed=SEED)
return tf.matmul(hidden2, weights3) + biases3
# Training computation.
logits = model(tf_train_dataset, True)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(biases1)
+ tf.nn.l2_loss(weights2) + tf.nn.l2_loss(biases2)
+ tf.nn.l2_loss(weights3) + tf.nn.l2_loss(biases3))
loss += 5e-4 * regularizers
# Optimizer.
global_step = tf.Variable(0) # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.01, global_step, 3000, 0.5, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Problem 4
Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.
One avenue you can explore is to add multiple layers.
Another one is to use learning rate decay:
global_step = tf.Variable(0) # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
End of explanation
"""
|
idwaker/git_python_session | Numpy and Pandas.ipynb | unlicense | from array import array
array('i', [1, 2, 3])
import numpy as np
np.array([1, 5, 6, 9])
arr = np.array([1, 5, 6, 9])
arr.dtype
np.array([1.2, 5.6, 4, 9.0, 7])
np.array([1.2, 5.6, 4, 9.0, 7]).dtype
np.array(['1', 5, 6])
np.arange(1, 9)
m1 = np.arange(1, 9)
m1
m1.shape
m1.size
m1 * 4
m1* 2
m1 + (m1 * 2)
m1.ndim
m2 = np.array([[1, 2, 3], [7, 8, 9]])
m2.ndim
m2.size
m2.shape
m2
m3 = m2.transpose()
m3
m3.shape
np.zeros((2, 3))
np.ones((3, 2))
np.diag((3, 4))
help(np.ones)
np.ones(5)
np.diag(np.ones(5))
np.linspace(0, 1)
x = np.linspace(1, 10, num=20)
x
x[1] - x[0]
np.linspace(1, 99)
np.linspace(1, 5, num=20)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(np.linspace(0, 1))
np.pi
np.sin(np.pi/2.0)
np.sin(1.4)
np.sin(np.array([1.4, 2, 3]))
x_range = np.linspace(-np.pi, np.pi, 50)
x_range
np.sin(x_range)
plt.plot(x_range, np.sin(x_range))
plt.plot(x_range, np.tan(x_range))
np.random.rand(3, 2)
np.random.randint(1, 99)
np.random.randint(1, 99)
np.random.randint(1, 99, size=8)
m5 = np.random.randint(1, 99, size=12)
m5
m5.mean()
np.median(m5)
m5.max()
m5.min()
m5.std()
m5.sum()
"""
Explanation: DataScience
Python
R
Numpy [ array | matrices ]
Pandas [ data analysis ]
Matplotlib / Bokeh [ visualization ]
Math
Linear Algebra
Probability and Statistics
Calculus
Advance Computing
- Machine Learning
- Machine Vision
- Data Scientist
Data Science
Numpy
End of explanation
"""
import pandas as pd
countries = ["Nepal", "India", "Pakistan", "Bhutan"]
zip_codes = [977, 91, 233, 987]
dataset = list(zip(countries, zip_codes))
dataset
df = pd.DataFrame(data=dataset, columns=["Country", "Zip Code"])
df
dframe = pd.read_csv('API_NPL_DS2_en_csv_v2.csv', skiprows=3)
dframe.head()
dframe.tail()
dframe.dtypes
dframe["2014"]
dframe[["Indicator Name", "2014"]].head()
dframe[dframe["Indicator Name"] == "Agricultural land (sq. km)"]
agri = dframe[dframe["Indicator Name"] == "Agricultural land (sq. km)"]
agri.plot(kind='bar')
agri.loc[0:, "2000":"2013"]
agri.loc[0:, "2000":"2013"].plot(kind='bar')
agri = agri.reset_index()
agri.loc[0:, "2000":"2013"]
agri.loc[0:, "2000":"2013"].transpose()
agri.loc[0:, "2000":"2013"].transpose().plot(kind='bar')
agri_t = agri.loc[0:, "2000":"2013"].transpose()
agri_t
agri_t.columns = ["Agricultural land (sq. km)"]
agri_t
agri_t.plot(kind="bar")
agri_t.plot()
dframe[dframe["Indicator Name"].str.contains('of goods and services')]
import_export = dframe[dframe["Indicator Code"].isin(["BM.GSR.GNFS.CD",
"BX.GSR.GNFS.CD"])]
import_export
import_export = import_export.loc[0:, "2007":"2014"]
import_export
import_export = import_export.transpose()
import_export
import_export.columns = ["Imports of goods and services (BoP, current US$)",
"Exports of goods and services (BoP, current US$)"]
import_export
import_export.plot()
import_export.apply(pd.to_numeric, errors='coerce')
names = ['Bob', 'Jessica', 'Hari', 'John', 'Rajesh', 'Seldon']
names[np.random.randint(0, len(names))]
random_names = [names[np.random.randint(0, len(names))]
for i in range(0, 99)]
len(random_names)
random_ages = [np.random.randint(10, 78) for i in range(0, 99)]
len(random_ages)
age_distrib = pd.DataFrame(list(zip(random_names, random_ages)),
columns=["Name", "Age"])
age_distrib.head()
bob = age_distrib[age_distrib["Name"] == "Bob"]
bob.count()
bob
bob = bob.reset_index()
bob
bob.plot(kind="scatter", x='index', y='Age')
np.unique(age_distrib["Name"], return_inverse=True)
age_distrib["Name"]
np.unique(age_distrib["Name"], return_inverse=True)
unique_names, x_values = np.unique(age_distrib["Name"],
return_inverse=True)
"""
Explanation: pandas
Series
DataFrame
End of explanation
"""
unique_names
x_values = x_values + 1
age_distrib["Values"] = x_values
age_distrib
unique_ages, y_values = np.unique(age_distrib["Age"],
return_inverse=True)
len(x_values)
age_distrib.plot(kind='scatter', x='Values', y='Age')
unique_names = np.insert(unique_names, 0, '')
unique_names
ax = age_distrib.plot(kind='scatter', x='Values', y='Age',
color='cyan')
ax.set_xticklabels(unique_names)
ax
age_distrib[["Name", "Age"]].head()
age_distrib[["Name", "Age"]].groupby('Name')
people_count = age_distrib[["Name", "Age"]].groupby('Name').count()
people_count
people_count.plot(kind='pie', y='Age')
"""
Explanation: x_values
End of explanation
"""
|
flohorovicic/pynoddy | docs/notebooks/simple_dipping_layer.ipynb | gpl-2.0 | from matplotlib import rc_params
from IPython.core.display import HTML
css_file = 'pynoddy.css'
# HTML(open(css_file, "r").read())
import sys, os
import matplotlib.pyplot as plt
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('../..')
import pynoddy.history
from importlib import reload  # 'reload' is a builtin in Python 2; Python 3 needs this import
reload(pynoddy.history)
%matplotlib inline
rcParams.update({'font.size': 20})
"""
Explanation: Dipping Layer for MLMC
Setup for simple dipping layer model as an input for MLMC
End of explanation
"""
os.chdir(r'/Users/flow/git/mlmc/case_studies/dipping_layer')
"""
Explanation: Swap to working directory
End of explanation
"""
reload(pynoddy.history)
# Combined: model generation and output vis to test:
history = "simple_model.his"
output_name = "simple_out"
#
# A general note: the 'reload' statements are only important
# for development purposes (when modules were changed); they are
# not required for normal execution.
#
reload(pynoddy.history)
reload(pynoddy.events)
# create pynoddy object
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 2,
'layer_names' : ['layer 1', 'layer 2'],
'layer_thickness' : [1500, 1500]}
nm.add_event('stratigraphy', strati_options )
nm.write_history(history)
# Compute the model
reload(pynoddy)
pynoddy.compute_model(history, output_name)
# Plot output
import pynoddy.output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = False, fig_filename = "ex01_strati.eps")
"""
Explanation: Defining a stratigraphy
We start with the definition of a (base) stratigraphy for the model.
End of explanation
"""
# create pynoddy object
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 2,
'layer_names' : ['layer 1', 'layer 2'],
'layer_thickness' : [1500, 1500]}
nm.add_event('stratigraphy', strati_options )
tilt_options = {'name' : 'Tilt',
'pos' : (6000, 0, 5000),
'rotation' : 10,
'plunge_direction' : 0,
'plunge' : 20}
nm.add_event('tilt', tilt_options)
nm.events
nm.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title = "",
savefig = False, fig_filename = "ex01_fault_E.eps")
"""
Explanation: Add tilt event
End of explanation
"""
!pwd
"""
Explanation: Calculate gravity field for tilted model
Compute now the gravity field
End of explanation
"""
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (6000, 0, 5000),
'dip_dir' : 270,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
nm.events
nm.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title = "",
savefig = False, fig_filename = "ex01_fault_E.eps")
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_1',
'pos' : (5500, 3500, 0),
'dip_dir' : 270,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
nm.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1], colorbar = True)
nm1 = pynoddy.history.NoddyHistory(history)
nm1.get_extent()
"""
Explanation: Add a fault event
As a next step, let's now add the faults to the model.
End of explanation
"""
reload(pynoddy.history)
reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
'layer_names' : ['layer 1', 'layer 2', 'layer 3',
'layer 4', 'layer 5', 'layer 6',
'layer 7', 'layer 8'],
'layer_thickness' : [1500, 500, 500, 500, 500,
500, 500, 500]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_W',
'pos' : (4000, 3500, 5000),
'dip_dir' : 90,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (6000, 3500, 5000),
'dip_dir' : 270,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
nm.write_history(history)
# Change cube size
nm1 = pynoddy.history.NoddyHistory(history)
nm1.change_cube_size(50)
nm1.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = True, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
"""
Explanation: Complete Model Set-up
And here now, combining all the previous steps, the entire model set-up with base stratigraphy and two faults:
End of explanation
"""
|
tcstewar/testing_notebooks | nikhil/Custom Learning Rule with membrane voltage.ipynb | gpl-2.0 | model = nengo.Network()
with model:
pre = nengo.Ensemble(n_neurons=1, dimensions=1, encoders=[[1]], gain=[2], bias=[0])
post = nengo.Ensemble(n_neurons=1, dimensions=1, encoders=[[1]], gain=[2], bias=[0],
neuron_type=nengo.LIF(tau_rc=0.1))
stim_pre = nengo.Node(lambda t: 1 if 0.2<t%0.4<0.3 else 0)
stim_post = nengo.Node(lambda t: 1 if 0.25<t%0.4<0.35 else 0)
nengo.Connection(stim_pre, pre, synapse=None)
nengo.Connection(stim_post, post, synapse=None)
p_stim_pre = nengo.Probe(stim_pre)
p_stim_post = nengo.Probe(stim_post)
p_pre = nengo.Probe(pre.neurons)
p_post = nengo.Probe(post.neurons)
p_post_v = nengo.Probe(post.neurons, 'voltage')
sim = nengo.Simulator(model)
sim.run(2)
plt.figure(figsize=(12,5))
plt.subplot(2, 1, 1)
plt.plot(sim.trange(), sim.data[p_stim_pre], label='stim')
plt.plot(sim.trange(), sim.data[p_pre]/1000, c='k', label='spikes')
plt.ylabel('pre')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(sim.trange(), sim.data[p_stim_post], label='stim')
plt.plot(sim.trange(), sim.data[p_post_v], label='voltage')
plt.plot(sim.trange(), sim.data[p_post]/1000, c='k', label='spikes')
plt.ylabel('post')
plt.legend()
"""
Explanation: Step 1: no learning rule
Here's just the basic setup with two neurons and we can look at the spikes and membrane voltage. The stim_pre and stim_post give regular pulses of input. I've adjusted the post-synaptic neuron to have a membrane time constant of 100ms, just so we can see the membrane voltage decay slowly.
End of explanation
"""
class CustomRule(nengo.Process):
def __init__(self, vthp=0.25, vthn=0.15, vprog=0.9):
self.vthp = vthp
self.vthn = vthn
self.vprog = vprog
self.signal = None
super().__init__()
def make_step(self, shape_in, shape_out, dt, rng, state=None):
self.w = np.zeros((shape_out[0], shape_in[0]))
def step(t, x):
assert self.signal is not None
vmem = self.signal[0]
# fill in the adjustment to the weight here
# x is the spiking output of the pre-synaptic neurons so you can determine which neuron spiked
# here's a really simple hebbian rule, as an example
dw = x*vmem*dt
self.w += dw
return np.dot(self.w, x)
return step
def set_signal(self, signal):
self.signal = signal
model = nengo.Network()
with model:
pre = nengo.Ensemble(n_neurons=1, dimensions=1, encoders=[[1]], gain=[2], bias=[0])
post = nengo.Ensemble(n_neurons=1, dimensions=1, encoders=[[1]], gain=[2], bias=[0],
neuron_type=nengo.LIF(tau_rc=0.1))
stim_pre = nengo.Node(lambda t: 1 if 0.2<t%0.4<0.3 else 0)
stim_post = nengo.Node(lambda t: 1 if 0.25<t%0.4<0.35 else 0)
nengo.Connection(stim_pre, pre, synapse=None)
nengo.Connection(stim_post, post, synapse=None)
w = nengo.Node(CustomRule(), size_in=1, size_out=1)
nengo.Connection(pre, w, synapse=None)
nengo.Connection(w, post, synapse=None)
p_stim_pre = nengo.Probe(stim_pre)
p_stim_post = nengo.Probe(stim_post)
p_pre = nengo.Probe(pre.neurons)
p_post = nengo.Probe(post.neurons)
p_post_v = nengo.Probe(post.neurons, 'voltage')
sim = nengo.Simulator(model)
w.output.set_signal(sim.signals[sim.model.sig[post.neurons]["voltage"]])
sim.run(2)
plt.figure(figsize=(12,5))
plt.subplot(2, 1, 1)
plt.plot(sim.trange(), sim.data[p_stim_pre], label='stim')
plt.plot(sim.trange(), sim.data[p_pre]/1000, c='k', label='spikes')
plt.ylabel('pre')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(sim.trange(), sim.data[p_stim_post], label='stim')
plt.plot(sim.trange(), sim.data[p_post_v], label='voltage')
plt.plot(sim.trange(), sim.data[p_post]/1000, c='k', label='spikes')
plt.ylabel('post')
plt.legend()
"""
Explanation: Step 2: Add a learning rule
Now we add a Node that implements the learning rule. To organize the Node a bit better (and have a place to store the weights), I'm using a nengo.Process, which is just a slightly fancier way of specifying the function for a Node.
The weird "signal" thing is how we're accessing the membrane voltage. Since there isn't a way right now to have that data be passed by a Connection, we're just hacking into the internals of Nengo to get that information.
For simplicity, this just implements a very simple learning rule where dw = vmem. I also multiply by dt just to keep it scaled down, and multiply by x so that it only happens when there's a pre-synaptic spike.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/47de0e2137654670a631ea71dfab4b62/plot_lcmv_beamformer_volume.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
# sphinx_gallery_thumbnail_number = 3
import mne
from mne.datasets import sample
from mne.beamformer import make_lcmv, apply_lcmv
print(__doc__)
"""
Explanation: Compute LCMV inverse solution in volume source space
Compute LCMV inverse solution on an auditory evoked dataset in a volume source
space.
End of explanation
"""
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif'
# Get epochs
event_id, tmin, tmax = [1, 2], -0.2, 0.5
# Read forward model
forward = mne.read_forward_solution(fname_fwd)
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # 2 bad channels
events = mne.read_events(event_fname)
# Set up pick list: gradiometers and magnetometers, excluding bad channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')
# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])
# Re-normalize our empty-room projectors, so they are fine after subselection
raw.info.normalize_proj()
# Read epochs
proj = False # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()
# Visualize sensor space data
evoked.plot_joint(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
"""
Explanation: Data preprocessing:
End of explanation
"""
# Read regularized noise covariance and compute regularized data covariance
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk',
rank=None)
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk', rank=None)
# Compute weights of free orientation (vector) beamformer with weight
# normalization (neural activity index, NAI). Providing a noise covariance
# matrix enables whitening of the data and forward solution. Source orientation
# is optimized by setting pick_ori to 'max-power'.
# weight_norm can also be set to 'unit-noise-gain'. Source orientation can also
# be 'normal' (but only when using a surface-based source space) or None,
# which computes a vector beamfomer. Note, however, that not all combinations
# of orientation selection and weight normalization are implemented yet.
filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori='max-power',
weight_norm='nai', rank=None)
print(filters)
# You can save these with:
# filters.save('filters-lcmv.h5')
# Apply this spatial filter to the evoked data.
stc = apply_lcmv(evoked, filters, max_ori_out='signed')
"""
Explanation: Compute covariance matrices, fit and apply spatial filter.
End of explanation
"""
# You can save result in stc files with:
# stc.save('lcmv-vol')
clim = dict(kind='value', pos_lims=[0.3, 0.6, 0.9])
stc.plot(src=forward['src'], subject='sample', subjects_dir=subjects_dir,
clim=clim)
"""
Explanation: Plot source space activity:
End of explanation
"""
clim = dict(kind='value', lims=[0.3, 0.6, 0.9])
abs(stc).plot(src=forward['src'], subject='sample', subjects_dir=subjects_dir,
mode='glass_brain', clim=clim)
"""
Explanation: We can also visualize the activity on a "glass brain" (shown here with
absolute values):
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/feature_engineering/labs/2_bqml_adv_feat_eng-lab.ipynb | apache-2.0 | %%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
"""
Explanation: LAB 02: Advanced Feature Engineering in BQML
Learning Objectives
Create SQL statements to evaluate the model
Extract temporal features
Perform a feature cross on temporal features
Apply ML.FEATURE_CROSS to categorical features
Create a Euclidean feature column
Feature cross coordinate features
Apply the BUCKETIZE function
Apply the TRANSFORM clause and L2 Regularization
Evaluate the model using ML.PREDICT
Introduction
In this lab, we apply feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, then refine it through successive feature engineering steps to produce a final model with a lower RMSE.
In this notebook, we perform a feature cross using BigQuery's ML.FEATURE_CROSS, derive coordinate features and cross them, clean up the code, apply the BUCKETIZE function, the TRANSFORM clause, and L2 regularization, and evaluate model performance at each step.
Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.
Set up environment variables and load necessary libraries
End of explanation
"""
%%bash
# Create a BigQuery dataset for feat_eng if it doesn't exist
datasetexists=$(bq ls -d | grep -w feat_eng)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: feat_eng"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:feat_eng
echo "\nHere are your current datasets:"
bq ls
fi
"""
Explanation: The source dataset
Our dataset is hosted in BigQuery. The taxi fare data is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE
feat_eng.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
passenger_count*1.0 AS passengers,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 1
AND fare_amount >= 2.5
AND passenger_count > 0
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
"""
Explanation: Create the training data table
Since there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query: This clause allows us to TRAIN a portion of the data (e.g. one hundred thousand rows versus one million rows), which keeps your query costs down. If you need a refresher on using MOD() for repeatable splits see this post.
Note: The dataset in the create table code below is the one created previously, e.g. "feat_eng". The table name is "feateng_training_data". Run the query to create the table.
End of explanation
"""
%%bigquery
# LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT
*
FROM
feat_eng.feateng_training_data
LIMIT
0
"""
Explanation: Verify table creation
Verify that you created the dataset.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.baseline_model OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
"""
Explanation: Baseline Model: Create the baseline model
Next, you create a linear regression baseline model with no feature engineering. Recall that a model in BigQuery ML represents what an ML system has learned from the training data. A baseline model is a solution to a problem without applying any machine learning techniques.
When creating a BQML model, you must specify the model type (in our case linear regression) and the input label (fare_amount). Note also that we are using the training data table as the data source.
Now we create the SQL statement to create the baseline model.
End of explanation
"""
%%bigquery
# Eval statistics on the held out data.
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.baseline_model)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
"""
Explanation: REMINDER: The query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.
You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.
Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Evaluate the baseline model
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.
NOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab.
Review the learning and eval statistics for the baseline_model.
End of explanation
"""
# TODO 1
"""
Explanation: NOTE: Because you performed a linear regression, the results include the following columns:
mean_absolute_error
mean_squared_error
mean_squared_log_error
median_absolute_error
r2_score
explained_variance
Resource for an explanation of the Regression Metrics.
Mean squared error (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values.
Root mean squared error (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model, and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data.
R2: An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean.
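These metrics are simple to compute by hand; the following pure-Python sketch (with made-up predictions, not our taxi data) shows how MSE, RMSE, and the R2 score relate:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, and the R2 score for a set of predictions."""
    n = len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    mse = ss_res / n
    rmse = math.sqrt(mse)  # error expressed in the units being measured
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot  # 1 = explains all variability, 0 = none
    return mse, rmse, r2

# Made-up example: predictions off by exactly 1 everywhere.
print(regression_metrics([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))
```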
Next, we write a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_1 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
#TODO 2 - Your code here
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
"""
Explanation: Model 1: EXTRACT dayofweek from the pickup_datetime feature.
As you recall, dayofweek is an integer representing the 7 days of the week. In BigQuery SQL, EXTRACT(DAYOFWEEK FROM pickup_datetime) returns an INTEGER value between 1 (Sunday) and 7 (Saturday).
Next, we create a model titled "model_1" from the benchmark model and extract out the DayofWeek.
End of explanation
"""
%%bigquery
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.model_1)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
"""
Explanation: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
"""
Explanation: Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_2 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
# TODO 2 -- Your code here
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
"""
Explanation: Model 2: EXTRACT hourofday from the pickup_datetime feature
As you recall, pickup_datetime is stored as a TIMESTAMP, rendered in the standard output format year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer representing the hour of the given timestamp.
Hourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am.
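As a sketch of what the EXTRACT calls do, here is a pure-Python equivalent (note that BigQuery's DAYOFWEEK runs from 1 = Sunday to 7 = Saturday, unlike Python's Monday-first isoweekday()):

```python
from datetime import datetime

def extract_time_features(ts_string):
    """Pure-Python stand-in for EXTRACT(DAYOFWEEK ...) and EXTRACT(HOUR ...).

    BigQuery's DAYOFWEEK runs 1 (Sunday) .. 7 (Saturday); Python's
    isoweekday() runs 1 (Monday) .. 7 (Sunday), so we remap.
    """
    ts = datetime.strptime(ts_string, "%Y-%m-%d %H:%M:%S")
    dayofweek = ts.isoweekday() % 7 + 1  # Sunday -> 1, ..., Saturday -> 7
    hourofday = ts.hour                  # 0 .. 23
    return dayofweek, hourofday

print(extract_time_features("2016-01-01 23:59:59"))
```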
Next, we create a model titled "model_2" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
"""
Explanation: Evaluate the model.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_3 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
# TODO 3 -- Your code here
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
"""
Explanation: Model 3: Feature cross dayofweek and hourofday using CONCAT
First, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a feature cross).
Note: BQML by default assumes that numbers are numeric features and strings are categorical features. We need to convert both the dayofweek and hourofday features to strings because the model will automatically treat any integer as a numerical value rather than a categorical value. Thus, if not cast as a string, the dayofweek feature will be interpreted as numeric values (e.g. 1,2,3,4,5,6,7) and hourofday will also be interpreted as numeric values (e.g. the day begins at midnight, 00:00, and the last minute of the day begins at 23:59 and ends at 24:00). As such, there is no way to distinguish the "feature cross" of hourofday and dayofweek "numerically". Casting the dayofweek and hourofday as strings ensures that each element will be treated like a label and will get its own coefficient associated with it.
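A tiny pure-Python sketch (with a made-up three-entry vocabulary) of what the STRING cast buys us: each distinct day-hour pair becomes its own category with its own slot, i.e. its own coefficient:

```python
def one_hot_cross(day, hour, vocab):
    """One-hot encode the crossed (day, hour) category.

    Each distinct day-hour string gets its own slot, and therefore its
    own learned coefficient; a raw numeric feature would share a single
    coefficient instead. (Illustrative sketch, not BQML internals.)
    """
    key = "{}_{}".format(day, hour)
    return [1.0 if key == v else 0.0 for v in vocab]

# Hypothetical vocabulary of three crossed categories:
vocab = ["1_0", "2_17", "7_23"]
print(one_hot_cross(2, 17, vocab))
```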
Create the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. Name the model "model_3"
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
"""
Explanation: Next we evaluate the model.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL feat_eng.model_4
OPTIONS
(model_type='linear_reg',
input_label_cols=['fare_amount'])
AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
#CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
#CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS hourofday,
#TODO 1: Correct the ML.FEATURE_CROSS statement
ML.FEATURE_CROSS(CAST(STRUCT(CAST(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
STRUCT(CAST(HOUR FROM pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM `feat_eng.feateng_training_data`
"""
Explanation: Model 4: Apply the ML.FEATURE_CROSS clause to categorical features
BigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross.
ML.FEATURE_CROSS generates a STRUCT feature with all combinations of crossed categorical features, except for 1-degree items (the original features) and self-crossing items.
Syntax: ML.FEATURE_CROSS(STRUCT(features), degree)
The features parameter is a comma-separated list of the categorical features to be crossed. The maximum number of input features is 10. Unnamed features are not allowed, and duplicates are not allowed.
degree (optional): The highest degree of all combinations; it must be in the range [1, 4] and defaults to 2.
Output: The function outputs a STRUCT of all combinations except for 1-degree items (the original features) and self-crossing items, with field names as concatenation of original feature names and values as the concatenation of the column string values.
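A rough pure-Python sketch of the combinations being generated (the exact field-name and value formatting here is illustrative, not BQML's precise output):

```python
from itertools import combinations

def feature_cross(features, degree=2):
    """Sketch of the combinations ML.FEATURE_CROSS emits.

    Generates every combination of 2..degree distinct features, with
    field names joined from the feature names and values joined from
    the string values; 1-degree items (the original features) and
    self-crosses are excluded, as described above.
    """
    out = {}
    names = list(features)
    for d in range(2, degree + 1):
        for combo in combinations(names, d):
            field = "_".join(combo)
            out[field] = "".join(str(features[k]) for k in combo)
    return out

print(feature_cross({"dayofweek": "2", "hourofday": "17"}))
```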
We continue with the feature engineering begun in Lab 01. Here, we examine the components of ML.FEATURE_CROSS.
Note that the ML.FEATURE_CROSS statement contains errors, please correct the statement before continuing or you will get errors.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_4)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_4)
"""
Explanation: Next, two distinct SQL statements show the evaluation metrics and the RMSE of model_4.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_5 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
#CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
#CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS hourofday,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
#pickuplon,
#pickuplat,
#dropofflon,
#dropofflat,
# TODO 2-- Your code here
FROM
`feat_eng.feateng_training_data`
"""
Explanation: Sliding down the slope toward a loss minimum (reduced taxi fare)!
Our fourth model above gives us an RMSE of 9.65 for estimating fares. Recall our heuristic benchmark was 8.29. This may be the result of feature crossing. Let's apply more feature engineering techniques to see if we can't get this loss metric lower!
Model 5: Feature cross coordinate features to create a Euclidean feature
Pickup coordinate:
* pickup_longitude AS pickuplon
* pickup_latitude AS pickuplat
Dropoff coordinate:
* #dropoff_longitude AS dropofflon
* #dropoff_latitude AS dropofflat
Coordinate Features:
* The pick-up and drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.
Recall that latitude and longitude allow us to specify any location on Earth using a set of coordinates. In our training dataset, we restricted our data points to only pickups and drop-offs within NYC. New York City has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.
The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
We need to convert those coordinates into a single column of a spatial data type. We will use the ST_DISTANCE and the ST_GEOGPOINT functions.
ST_DISTANCE: ST_DISTANCE(geography_1, geography_2). Returns the shortest distance in meters between two non-empty GEOGRAPHYs (e.g. between two spatial objects).
ST_GEOGPOINT: ST_GEOGPOINT(longitude, latitude). Creates a GEOGRAPHY with a single point. ST_GEOGPOINT creates a point from the specified FLOAT64 longitude and latitude parameters and returns that point in a GEOGRAPHY value.
Next we convert those feature coordinates into a single column of a spatial data type, using the ST_DISTANCE and ST_GEOGPOINT functions.
SAMPLE CODE:
ST_Distance(ST_GeogPoint(value1,value2), ST_GeogPoint(value3, value4)) AS euclidean
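For intuition, here is a plain-Python version of the straight-line distance in degree units (the same SQRT formula appears commented out later in this lab); note that ST_DISTANCE itself returns geodesic meters, so this is only a rough stand-in:

```python
import math

def euclidean_degrees(lon1, lat1, lon2, lat2):
    """Straight-line distance in coordinate (degree) units.

    A rough stand-in for ST_DISTANCE, which returns geodesic meters;
    this version ignores the Earth's curvature entirely.
    """
    return math.sqrt((lon1 - lon2) ** 2 + (lat1 - lat2) ** 2)

print(euclidean_degrees(-73.982683, 40.742104, -73.983766, 40.755174))
```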
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_5)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_5)
"""
Explanation: Next, two distinct SQL statements show metrics for model_5.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_6 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
#pickuplon,
#pickuplat,
#dropofflon,
#dropofflat,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
#TODO 3: Correct this statement.
CONCAT(ST_AsText(ST_GeogPoint(ST_SnapToGrid(dropofflon, pickuplon,
pickuplat, dropofflat),
0.01)), ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropofflon,
dropofflat),
0.01))) AS pickup_and_dropoff
FROM
`feat_eng.feateng_training_data`
"""
Explanation: Model 6: Feature cross pick-up and drop-off locations features
In this section, we feature cross the pick-up and drop-off locations so that the model can learn pick-up-drop-off pairs that will require tolls.
This step takes the geographic points corresponding to the pickup and dropoff locations and snaps them to a 0.01-degree latitude/longitude grid (roughly 0.8 km x 1.1 km in New York; we could experiment with other grid resolutions as well). Then, it concatenates the pickup and dropoff grid points to learn “corrections” beyond the Euclidean distance associated with pairs of pickup and dropoff locations.
Because the lat and lon by themselves don't have meaning, but only in conjunction, it may be useful to treat the fields as a pair instead of just using them as numeric values. However, lat and lon are continuous numbers, so we have to discretize them first. That's what SnapToGrid does.
ST_SNAPTOGRID: ST_SNAPTOGRID(geography_expression, grid_size). Returns the input GEOGRAPHY, where each vertex has been snapped to a longitude/latitude grid. The grid size is determined by the grid_size parameter which is given in degrees.
REMINDER: ST_GEOGPOINT creates a GEOGRAPHY with a single point from the specified FLOAT64 longitude and latitude parameters. The ST_DISTANCE function returns the minimum distance between two spatial objects; it returns meters for geographies and SRID units for geometries.
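A minimal Python sketch of the snapping idea (simplified: it works on bare lon/lat floats rather than GEOGRAPHY values):

```python
def snap_to_grid(lon, lat, grid_size=0.01):
    """Snap a point to a lon/lat grid, in the spirit of ST_SNAPTOGRID.

    Discretizing the continuous coordinates means nearby pickups fall
    into the same cell, so crossed pickup/dropoff cells become
    learnable categories. (A simplified sketch; the real function
    operates on GEOGRAPHY values.)
    """
    snap = lambda v: round(round(v / grid_size) * grid_size, 6)
    return snap(lon), snap(lat)

print(snap_to_grid(-73.982683, 40.742104))
```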
Modify the code to feature cross the pick-up and drop-off locations features. Please correct the statement before continuing or you will get errors.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_6)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_6)
"""
Explanation: Next, we evaluate model_6.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_6 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
CONCAT(ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickuplon,
pickuplat),
0.01)), ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropofflon,
dropofflat),
0.01))) AS pickup_and_dropoff
FROM
`feat_eng.feateng_training_data`
"""
Explanation: Code Clean Up
Exercise: Clean up the code to see where we are
Remove all the commented statements in the SQL statement. We should now have a total of five input features for our model.
1. fare_amount
2. passengers
3. day_hr
4. euclidean
5. pickup_and_dropoff
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_7 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
#TODO 4 -- Your code here.
FROM
`feat_eng.feateng_training_data`
"""
Explanation: BQML's Pre-processing functions:
Here are some of the preprocessing functions in BigQuery ML:
* ML.FEATURE_CROSS(STRUCT(features)) does a feature cross of all the combinations
* ML.POLYNOMIAL_EXPAND(STRUCT(features), degree) creates x, x<sup>2</sup>, x<sup>3</sup>, etc.
* ML.BUCKETIZE(f, split_points) where split_points is an array
Model 7: Apply the BUCKETIZE Function
BUCKETIZE
Bucketize is a pre-processing function that creates buckets (i.e. bins): it bucketizes a continuous numerical feature into a string feature, with bucket names as the values.
ML.BUCKETIZE(feature, split_points)
feature: A numerical column.
split_points: Array of numerical points to split the continuous values in feature into buckets. With n split points (s1, s2 … sn), there will be n+1 buckets generated.
Output: The function outputs a STRING for each row, which is the bucket name. bucket_name is in the format of bin_<bucket_number>, where bucket_number starts from 1.
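The behavior can be sketched in a few lines of Python (assuming split points are lower-inclusive, i.e. a value equal to a split point falls into the higher bucket):

```python
import bisect

def bucketize(value, split_points):
    """Pure-Python sketch of ML.BUCKETIZE.

    With n split points there are n+1 buckets; the output is the
    bucket name 'bin_<k>', with k starting from 1. (Boundary handling
    here assumes split points are lower-inclusive.)
    """
    return "bin_{}".format(bisect.bisect_right(split_points, value) + 1)

print(bucketize(3, [0, 10]))
```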
Currently, our model uses the ST_GeogPoint function to derive the pickup and dropoff feature. In this lab, we use the BUCKETIZE function to create the pickup and dropoff feature.
Next, apply the BUCKETIZE function to model_7 and run the query.
End of explanation
"""
%%bigquery
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.model_7)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_7)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_7)
"""
Explanation: Next, we evaluate model_7.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.final_model
TODO 5:_________ (fare_amount,
#SQRT( (pickuplon-dropofflon)*(pickuplon-dropofflon) + (pickuplat-dropofflat)*(pickuplat-dropofflat) ) AS euclidean,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
CONCAT( ML.BUCKETIZE(pickuplon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(pickuplat,
GENERATE_ARRAY(37, 45, 0.01)), ML.BUCKETIZE(dropofflon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(dropofflat,
GENERATE_ARRAY(37, 45, 0.01)) ) AS pickup_and_dropoff ) OPTIONS(input_label_cols=['fare_amount'],
model_type='linear_reg',
TODO 5:_________) AS
SELECT
*
FROM
feat_eng.feateng_training_data
"""
Explanation: Final Model: Apply the TRANSFORM clause and L2 Regularization
Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. BigQuery ML now supports defining data transformations during model creation, which will be automatically applied during prediction and evaluation. This is done through the TRANSFORM clause in the existing CREATE MODEL statement. By using the TRANSFORM clause, user specified transforms during training will be automatically applied during model serving (prediction, evaluation, etc.)
In our case, we are using the TRANSFORM clause to separate out the raw input data from the TRANSFORMED features. The input columns of the TRANSFORM clause are taken from the query_expr (the AS SELECT part). The output columns of TRANSFORM, from select_list, are used in training. These transformed columns are post-processed with standardization for numerics and one-hot encoding for categorical variables by default.
The advantage of encapsulating features in the TRANSFORM clause is the client code doing the PREDICT doesn't change, e.g. our model improvement is transparent to client code. Note that the TRANSFORM clause MUST be placed after the CREATE statement.
L2 Regularization
Sometimes the training RMSE is quite reasonable, but the evaluation RMSE shows more error. Given the severity of the delta between the EVALUATION RMSE and the TRAINING RMSE, it may be an indication of overfitting. When we do feature crosses, we run the risk of overfitting (for example, when a particular day-hour combo doesn't have enough taxi rides).
Overfitting is a phenomenon that occurs when a machine learning or statistics model is tailored to a particular dataset and is unable to generalize to other datasets. This usually happens in complex models, like deep neural networks. Regularization is a process of introducing additional information in order to prevent overfitting.
Therefore, we will apply L2 Regularization to the final model. As a reminder, a regression model that uses the L1 regularization technique is called Lasso Regression while a regression model that uses the L2 Regularization technique is called Ridge Regression. The key difference between these two is the penalty term. Lasso shrinks the less important feature’s coefficient to zero, thus removing some features altogether. Ridge regression adds “squared magnitude” of coefficient as a penalty term to the loss function.
In other words, L1 limits the size of the coefficients. L1 can yield sparse models (i.e. models with few coefficients); Some coefficients can become zero and eliminated.
L2 regularization adds an L2 penalty equal to the square of the magnitude of coefficients. L2 will not yield sparse models and all coefficients are shrunk by the same factor (none are eliminated).
The regularization terms are ‘constraints’ by which an optimization algorithm must ‘adhere to’ when minimizing the loss function, apart from having to minimize the error between the true y and the predicted ŷ. This in turn reduces model complexity, making our model simpler. A simpler model can reduce the chances of overfitting.
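A toy sketch of what adding an L2 penalty to a squared-error loss looks like (illustrative only, not BQML's internal objective):

```python
def l2_loss(y_true, y_pred, weights, l2_reg):
    """Squared-error loss plus an L2 (ridge) penalty.

    The penalty l2_reg * sum(w^2) shrinks all coefficients toward zero
    by the same factor without eliminating any of them, which is the
    behavior described above. (Illustrative sketch only.)
    """
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    penalty = l2_reg * sum(w ** 2 for w in weights)
    return mse + penalty

print(l2_loss([1.0, 2.0], [1.0, 2.0], [0.5, -0.5], 0.1))
```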
Now, replace the blanks with the correct syntax to apply the TRANSFORM clause and L2 Regularization to the final model. Delete the "TODO 5" when you are done.
End of explanation
"""
%%bigquery
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.final_model)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.final_model)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.final_model)
"""
Explanation: Next, we evaluate the final model.
End of explanation
"""
%%bigquery
SELECT
*
FROM
TODO 6:_______
(_________________,
(
SELECT
-73.982683 AS pickuplon,
40.742104 AS pickuplat,
-73.983766 AS dropofflon,
40.755174 AS dropofflat,
3.0 AS passengers,
TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime ))
"""
Explanation: Predictive Model
Now that you have evaluated your model, the next step is to use it to predict an outcome. You use your model to predict the taxi fare amount.
The ML.PREDICT function is used to predict results using your model: feat_eng.final_model.
Since this is a regression model (predicting a continuous numerical value), the best way to see how it performed is to evaluate the difference between the value predicted by the model and the benchmark score. We can do this with an ML.PREDICT query.
Now, replace the blanks with the correct syntax to apply the ML.PREDICT function. Delete the "TODO 6" when you are done.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
x = ['m4', 'm5', 'm6','m7', 'final']
RMSE = [9.65,5.58,5.90,6.23,5.39]
x_pos = [i for i, _ in enumerate(x)]
plt.bar(x_pos, RMSE, color='green')
plt.xlabel("Model")
plt.ylabel("RMSE")
plt.title("RMSE Model Summary")
plt.xticks(x_pos, x)
plt.show()
"""
Explanation: Lab Summary:
Our ML problem: Develop a model to predict taxi fare based on distance -- from one point to another in New York City.
OPTIONAL Exercise: Create a RMSE summary table.
Markdown table generator: http://www.tablesgenerator.com/markdown_tables
Create a RMSE summary table:
Solution Table
| Model | Taxi Fare | Description |
|-------------|-----------|---------------------------------------|
| model_4 | 9.65 | --Feature cross categorical features |
| model_5 | 5.58 | --Create a Euclidian feature column |
| model_6 | 5.90 | --Feature cross Geo-location features |
| model_7     | 6.23      | --Apply the BUCKETIZE function        |
| final_model | 5.39      | --Apply TRANSFORM clause and L2 regularization |
RUN the cell to visualize a RMSE bar chart.
End of explanation
"""
|
rressi/MyNotebooks | Numba_Demo_001.ipynb | mit |
def sum_p(X):
y = 0
for x_i in range(int(X)):
y += x_i
return y
"""
Explanation: Numba Demo 1
Sum of first X integers
Given this simple function:
$$sum(X) = \sum\limits_{i=0}^{X-1} i$$
Let's define $sum_p(X)$ in pure Python:
End of explanation
"""
from numba import jit
@jit
def sum_j(X):
y = 0
for x_i in range(int(X)):
y += x_i
return y
"""
Explanation: Then we define $sum_j(X)$, which is identical except for the @jit decorator in the definition.
End of explanation
"""
import os
import time
import pandas as pd
import matplotlib
%matplotlib inline
# Different platforms require different functions to properly measure current timestamp:
if os.name == 'nt':
now = time.clock
else:
now = time.time
def run_benchmarks(functions, call_parameters, num_times,
logy=False, logx=False):
# Executes one function several times and measure performances:
def _apply_function(function, num_times):
for j in range(num_times):
t_0 = now()
y = function(*call_parameters)
duration = (now() - t_0)
yield float(duration)
def _name(function):
return '${' + function.__name__ + '(x)}$'
# Execute all functions the requested number of times and collects durations:
def _apply_functions(functions, num_times):
for function in functions:
yield pd.Series(_apply_function(function, num_times),
name=_name(function))
# Collects and plots the results:
df = pd.concat(_apply_functions(functions, num_times),
axis=1)
ax = df.plot(figsize=(10,5),
logy=logy,
logx=logx,
title='$T[f(x)]$ in seconds',
style='o-')
"""
Explanation: Let's benchmark them!
Let's define a benchmark to study the performance of our implementations of $sum(x)$:
End of explanation
"""
run_benchmarks(functions=[sum_p, sum_j],
call_parameters=(10000000,),
num_times=5,
logy=True) # Logarithmic scale
"""
Explanation: Benchmark results
Lets measure them:
End of explanation
"""
run_benchmarks(functions=[sum_j],
call_parameters=(1000000000000000.,),
num_times=5,
logy=True) # Logarithmic scale
"""
Explanation: Numba caching
A second run to study Numba's caching mechanism:
End of explanation
"""
from numba import jit
@jit
def sum_j(x):
y = 0.
x_i = 0.
while x_i < x:
y += x_i
x_i += 1.
return y
%load_ext Cython
%%cython
def sum_c(double x):
cdef double y = 0.
cdef double x_i = 0.
while x_i < x:
y += x_i
x_i += 1.
return y
"""
Explanation: Numba's JIT functionality works in the following way:
- At each call of a function $f(x)$, Numba looks at the type $T$ of $x$.
- If that type is being used for the first time, Numba generates a native implementation $f_T(x)$.
- If that type has been used before, Numba fetches the native implementation from a cache.
- Numba executes $f_T(x)$, which is orders of magnitude faster than the pure Python implementation.
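This type-keyed dispatch can be mimicked in pure Python (a toy model of the mechanism only; real Numba generates machine code via LLVM):

```python
def jit_sketch(py_func):
    """Toy model of Numba's type-keyed dispatch cache.

    On each call we look up the argument type; a miss triggers
    'compilation' (here just recorded), a hit reuses the cached
    specialization. (A sketch of the mechanism, not Numba itself.)
    """
    cache = {}
    compilations = []

    def wrapper(x):
        key = type(x)
        if key not in cache:
            compilations.append(key)  # stands in for the codegen cost
            cache[key] = py_func      # would be a native specialization
        return cache[key](x)

    wrapper.compilations = compilations
    return wrapper

f = jit_sketch(lambda x: x + x)
print(f(2), f(3), f(2.5), f.compilations)
```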
And what about adding Cython to the game?
Let's define the same function, but tuned to operate on floats:
$$sum(X) = \sum\limits_{i=0}^{X-1} i$$
We redefine it using Numba and Cython, this time using floating-point numbers.
End of explanation
"""
run_benchmarks(functions=[sum_j, sum_c],
call_parameters=(1000000000.,),
num_times=10)
"""
Explanation: About Cython:
- generates C code from Python code.
- allows defining low-level C types.
- in this example we use the C type double.
- the C code is generated, compiled and executed.
Benchmarks JIT vs Cython
End of explanation
"""
%%cython --annotate
def sum_c(double x):
cdef double y = 0.
cdef double x_i = 0.
while x_i < x:
y += x_i
x_i += 1.
return y
"""
Explanation: The Numba-jitted function is comparable with the Cythonized one; let's check the C code Cython generated, to get an idea of the efficiency of the generated code.
End of explanation
"""
|
zomansud/coursera | ml-foundations/week-2/Assignment - Week 2.ipynb | mit |
import graphlab
"""
Explanation: Load GraphLab Create
End of explanation
"""
#limit number of worker processes to 4
graphlab.set_runtime_config('GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS', 4)
#set canvas to open inline
graphlab.canvas.set_target('ipynb')
"""
Explanation: Basic settings
End of explanation
"""
sales = graphlab.SFrame('home_data.gl/')
"""
Explanation: Load House Sales Data
End of explanation
"""
highest_avg_price_zipcode = '98039'
sales_zipcode = sales[sales['zipcode'] == highest_avg_price_zipcode]
avg_price_highest_zipcode = sales_zipcode['price'].mean()
print avg_price_highest_zipcode
"""
Explanation: Assignment begins
1. Selection and summary statistics
In the notebook we covered in the module, we discovered which neighborhood (zip code) of Seattle had the highest average house sale price. Now, take the sales data, select only the houses with this zip code, and compute the average price. Save this result to answer the quiz at the end.
End of explanation
"""
total_houses = sales.num_rows()
print total_houses
"""
Explanation: 2. Filtering data
Using logical filters, first select the houses that have ‘sqft_living’ higher than 2000 sqft but no larger than 4000 sqft.
What fraction of all houses have ‘sqft_living’ in this range? Save this result to answer the quiz at the end.
Total number of houses
End of explanation
"""
# Boolean-mask filtering:
filtered_houses = sales[(sales['sqft_living'] > 2000) & (sales['sqft_living'] <= 4000)]
print filtered_houses.num_rows()
# Equivalent filtering using apply() with a lambda:
filtered_houses = sales[sales.apply(lambda x: (x['sqft_living'] > 2000) & (x['sqft_living'] <= 4000))]
print filtered_houses.num_rows()
total_filtered_houses = filtered_houses.num_rows()
print total_filtered_houses
"""
Explanation: Houses with the above criteria
End of explanation
"""
filtered_houses_fraction = total_filtered_houses / float(total_houses)
print filtered_houses_fraction
"""
Explanation: Fraction of Houses
End of explanation
"""
advanced_features = [
'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',
'condition', # condition of house
'grade', # measure of quality of construction
'waterfront', # waterfront property
'view', # type of view
'sqft_above', # square feet above ground
'sqft_basement', # square feet in basement
'yr_built', # the year built
'yr_renovated', # the year renovated
'lat', 'long', # the lat-long of the parcel
'sqft_living15', # average sq.ft. of 15 nearest neighbors
'sqft_lot15', # average lot size of 15 nearest neighbors
]
print advanced_features
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
"""
Explanation: 3. Building a regression model with several more features
Build the feature set
End of explanation
"""
train_data, test_data = sales.random_split(.8, seed=0)
"""
Explanation: Create train and test data
End of explanation
"""
my_feature_model = graphlab.linear_regression.create(train_data, target='price', features=my_features, validation_set=None)
print my_feature_model.evaluate(test_data)
print test_data['price'].mean()
advanced_feature_model = graphlab.linear_regression.create(train_data, target='price', features=advanced_features, validation_set=None)
print advanced_feature_model.evaluate(test_data)
"""
Explanation: Compute the RMSE
RMSE(root mean squared error) on the test_data for the model using just my_features, and for the one using advanced_features.
End of explanation
"""
print my_feature_model.evaluate(test_data)['rmse'] - advanced_feature_model.evaluate(test_data)['rmse']
"""
Explanation: Difference in RMSE
What is the difference in RMSE between the model trained with my_features and the one trained with advanced_features? Save this result to answer the quiz at the end.
End of explanation
"""
|
smorton2/think-stats | code/chap12ex.ipynb | gpl-3.0 |
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
import numpy as np
import pandas as pd
import random
import thinkstats2
import thinkplot
"""
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
transactions = pd.read_csv('mj-clean.csv', parse_dates=[5])
transactions.head()
"""
Explanation: Time series analysis
Load the data from "Price of Weed".
End of explanation
"""
def GroupByDay(transactions, func=np.mean):
"""Groups transactions by day and compute the daily mean ppg.
transactions: DataFrame of transactions
returns: DataFrame of daily prices
"""
grouped = transactions[['date', 'ppg']].groupby('date')
daily = grouped.aggregate(func)
daily['date'] = daily.index
    start = daily.date.iloc[0]
one_year = np.timedelta64(1, 'Y')
daily['years'] = (daily.date - start) / one_year
return daily
"""
Explanation: The following function takes a DataFrame of transactions and computes daily averages.
End of explanation
"""
def GroupByQualityAndDay(transactions):
"""Divides transactions by quality and computes mean daily price.
    transactions: DataFrame of transactions
returns: map from quality to time series of ppg
"""
groups = transactions.groupby('quality')
dailies = {}
for name, group in groups:
dailies[name] = GroupByDay(group)
return dailies
"""
Explanation: The following function returns a map from quality name to a DataFrame of daily averages.
End of explanation
"""
dailies = GroupByQualityAndDay(transactions)
"""
Explanation: dailies is the map from quality name to DataFrame.
End of explanation
"""
import matplotlib.pyplot as plt
thinkplot.PrePlot(rows=3)
for i, (name, daily) in enumerate(dailies.items()):
thinkplot.SubPlot(i+1)
title = 'Price per gram ($)' if i == 0 else ''
thinkplot.Config(ylim=[0, 20], title=title)
thinkplot.Scatter(daily.ppg, s=10, label=name)
if i == 2:
plt.xticks(rotation=30)
thinkplot.Config()
else:
thinkplot.Config(xticks=[])
"""
Explanation: The following plots the daily average price for each quality.
End of explanation
"""
import statsmodels.formula.api as smf
def RunLinearModel(daily):
model = smf.ols('ppg ~ years', data=daily)
results = model.fit()
return model, results
"""
Explanation: We can use statsmodels to run a linear model of price as a function of time.
End of explanation
"""
from IPython.display import display
for name, daily in dailies.items():
model, results = RunLinearModel(daily)
print(name)
display(results.summary())
"""
Explanation: Here's what the results look like.
End of explanation
"""
def PlotFittedValues(model, results, label=''):
"""Plots original data and fitted values.
model: StatsModel model object
results: StatsModel results object
"""
years = model.exog[:,1]
values = model.endog
thinkplot.Scatter(years, values, s=15, label=label)
thinkplot.Plot(years, results.fittedvalues, label='model', color='#ff7f00')
"""
Explanation: Now let's plot the fitted model with the data.
End of explanation
"""
def PlotLinearModel(daily, name):
"""Plots a linear fit to a sequence of prices, and the residuals.
daily: DataFrame of daily prices
name: string
"""
model, results = RunLinearModel(daily)
PlotFittedValues(model, results, label=name)
thinkplot.Config(title='Fitted values',
xlabel='Years',
xlim=[-0.1, 3.8],
ylabel='Price per gram ($)')
"""
Explanation: The following function plots the original data and the fitted curve.
End of explanation
"""
name = 'high'
daily = dailies[name]
PlotLinearModel(daily, name)
"""
Explanation: Here are results for the high quality category:
End of explanation
"""
series = np.arange(10)
"""
Explanation: Moving averages
As a simple example, I'll show the rolling average of the numbers from 0 to 9.
End of explanation
"""
pd.Series(series).rolling(3).mean()
"""
Explanation: With a "window" of size 3, we get the average of the previous 3 elements, or nan when there are fewer than 3.
End of explanation
"""
def PlotRollingMean(daily, name):
"""Plots rolling mean.
daily: DataFrame of daily prices
"""
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
    roll_mean = reindexed.ppg.rolling(30).mean()
thinkplot.Plot(roll_mean, label='rolling mean', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='price per gram ($)')
"""
Explanation: The following function plots the rolling mean.
End of explanation
"""
PlotRollingMean(daily, name)
"""
Explanation: Here's what it looks like for the high quality category.
End of explanation
"""
def PlotEWMA(daily, name):
"""Plots rolling mean.
daily: DataFrame of daily prices
"""
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
    roll_mean = reindexed.ppg.ewm(com=30).mean()  # com=30 matches the old pd.ewma positional argument
thinkplot.Plot(roll_mean, label='EWMA', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='price per gram ($)')
PlotEWMA(daily, name)
"""
Explanation: The exponentially-weighted moving average gives more weight to more recent points.
End of explanation
"""
def FillMissing(daily, span=30):
"""Fills missing values with an exponentially weighted moving average.
Resulting DataFrame has new columns 'ewma' and 'resid'.
daily: DataFrame of daily prices
span: window size (sort of) passed to ewma
returns: new DataFrame of daily prices
"""
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
    ewma = reindexed.ppg.ewm(span=span).mean()
resid = (reindexed.ppg - ewma).dropna()
fake_data = ewma + thinkstats2.Resample(resid, len(reindexed))
reindexed.ppg.fillna(fake_data, inplace=True)
reindexed['ewma'] = ewma
reindexed['resid'] = reindexed.ppg - ewma
return reindexed
def PlotFilled(daily, name):
"""Plots the EWMA and filled data.
daily: DataFrame of daily prices
"""
filled = FillMissing(daily, span=30)
thinkplot.Scatter(filled.ppg, s=15, alpha=0.2, label=name)
thinkplot.Plot(filled.ewma, label='EWMA', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='Price per gram ($)')
"""
Explanation: We can use resampling to generate missing values with the right amount of noise.
End of explanation
"""
PlotFilled(daily, name)
"""
Explanation: Here's what the EWMA model looks like with missing values filled.
End of explanation
"""
def SerialCorr(series, lag=1):
xs = series[lag:]
ys = series.shift(lag)[lag:]
corr = thinkstats2.Corr(xs, ys)
return corr
"""
Explanation: Serial correlation
The following function computes serial correlation with the given lag.
End of explanation
"""
filled_dailies = {}
for name, daily in dailies.items():
filled_dailies[name] = FillMissing(daily, span=30)
"""
Explanation: Before computing correlations, we'll fill missing values.
End of explanation
"""
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.ppg, lag=1)
print(name, corr)
"""
Explanation: Here are the serial correlations for raw price data.
End of explanation
"""
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.resid, lag=1)
print(name, corr)
"""
Explanation: It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.
It is more interesting to see whether there are still correlations after we subtract away the trends.
End of explanation
"""
rows = []
for lag in [1, 7, 30, 365]:
print(lag, end='\t')
for name, filled in filled_dailies.items():
corr = SerialCorr(filled.resid, lag)
print('%.2g' % corr, end='\t')
print()
"""
Explanation: Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
End of explanation
"""
import statsmodels.tsa.stattools as smtsa
filled = filled_dailies['high']
acf = smtsa.acf(filled.resid, nlags=365, adjusted=True)
print('%0.2g, %.2g, %0.2g, %0.2g, %0.2g' %
(acf[0], acf[1], acf[7], acf[30], acf[365]))
"""
Explanation: The strongest correlation is a weekly cycle in the medium quality category.
Autocorrelation
The autocorrelation function is the serial correlation computed for all lags.
We can use it to replicate the results from the previous section.
End of explanation
"""
def SimulateAutocorrelation(daily, iters=1001, nlags=40):
"""Resample residuals, compute autocorrelation, and plot percentiles.
daily: DataFrame
iters: number of simulations to run
nlags: maximum lags to compute autocorrelation
"""
# run simulations
t = []
for _ in range(iters):
filled = FillMissing(daily, span=30)
resid = thinkstats2.Resample(filled.resid)
        acf = smtsa.acf(resid, nlags=nlags, adjusted=True)[1:]
t.append(np.abs(acf))
high = thinkstats2.PercentileRows(t, [97.5])[0]
low = -high
lags = range(1, nlags+1)
thinkplot.FillBetween(lags, low, high, alpha=0.2, color='gray')
"""
Explanation: To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
End of explanation
"""
def PlotAutoCorrelation(dailies, nlags=40, add_weekly=False):
"""Plots autocorrelation functions.
dailies: map from category name to DataFrame of daily prices
nlags: number of lags to compute
add_weekly: boolean, whether to add a simulated weekly pattern
"""
thinkplot.PrePlot(3)
daily = dailies['high']
SimulateAutocorrelation(daily)
for name, daily in dailies.items():
if add_weekly:
daily = AddWeeklySeasonality(daily)
filled = FillMissing(daily, span=30)
        acf = smtsa.acf(filled.resid, nlags=nlags, adjusted=True)
lags = np.arange(len(acf))
thinkplot.Plot(lags[1:], acf[1:], label=name)
"""
Explanation: The following function plots the actual autocorrelation for lags up to 40 days.
The flag add_weekly indicates whether we should add a simulated weekly cycle.
End of explanation
"""
def AddWeeklySeasonality(daily):
"""Adds a weekly pattern.
daily: DataFrame of daily prices
returns: new DataFrame of daily prices
"""
fri_or_sat = (daily.index.dayofweek==4) | (daily.index.dayofweek==5)
fake = daily.copy()
fake.ppg.loc[fri_or_sat] += np.random.uniform(0, 2, fri_or_sat.sum())
return fake
"""
Explanation: To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Fridays and Saturdays.
End of explanation
"""
axis = [0, 41, -0.2, 0.2]
PlotAutoCorrelation(dailies, add_weekly=False)
thinkplot.Config(axis=axis,
loc='lower right',
ylabel='correlation',
xlabel='lag (day)')
"""
Explanation: Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
End of explanation
"""
PlotAutoCorrelation(dailies, add_weekly=True)
thinkplot.Config(axis=axis,
loc='lower right',
xlabel='lag (days)')
"""
Explanation: Here's what it would look like if there were a weekly cycle.
End of explanation
"""
def GenerateSimplePrediction(results, years):
"""Generates a simple prediction.
results: results object
years: sequence of times (in years) to make predictions for
returns: sequence of predicted values
"""
n = len(years)
inter = np.ones(n)
d = dict(Intercept=inter, years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict = results.predict(predict_df)
return predict
def PlotSimplePrediction(results, years):
predict = GenerateSimplePrediction(results, years)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.2, label=name)
thinkplot.plot(years, predict, color='#ff7f00')
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)',
loc='upper right')
"""
Explanation: Prediction
The simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.
End of explanation
"""
name = 'high'
daily = dailies[name]
_, results = RunLinearModel(daily)
years = np.linspace(0, 5, 101)
PlotSimplePrediction(results, years)
"""
Explanation: Here's what the prediction looks like for the high quality category, using the linear model.
End of explanation
"""
def SimulateResults(daily, iters=101, func=RunLinearModel):
"""Run simulations based on resampling residuals.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
"""
_, results = func(daily)
fake = daily.copy()
result_seq = []
for _ in range(iters):
fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
"""
Explanation: When we generate predictions, we want to quantify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to generate fake datasets. It fits the same model to each fake dataset and returns a list of results.
End of explanation
"""
def GeneratePredictions(result_seq, years, add_resid=False):
"""Generates an array of predicted values from a list of model results.
When add_resid is False, predictions represent sampling error only.
When add_resid is True, they also include residual error (which is
more relevant to prediction).
result_seq: list of model results
years: sequence of times (in years) to make predictions for
add_resid: boolean, whether to add in resampled residuals
returns: sequence of predictions
"""
n = len(years)
d = dict(Intercept=np.ones(n), years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict_seq = []
for fake_results in result_seq:
predict = fake_results.predict(predict_df)
if add_resid:
predict += thinkstats2.Resample(fake_results.resid, n)
predict_seq.append(predict)
return predict_seq
"""
Explanation: To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.
If add_resid is true, we add resampled residuals to the predicted values, which generates predictions that include predictive uncertainty (due to random noise) as well as modeling uncertainty (due to random sampling).
End of explanation
"""
def PlotPredictions(daily, years, iters=101, percent=90, func=RunLinearModel):
"""Plots predictions.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
"""
result_seq = SimulateResults(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100-p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.3, color='gray')
predict_seq = GeneratePredictions(result_seq, years, add_resid=False)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.5, color='gray')
"""
Explanation: To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
End of explanation
"""
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)')
"""
Explanation: Here are the results for the high quality category.
End of explanation
"""
def SimulateIntervals(daily, iters=101, func=RunLinearModel):
"""Run simulations based on different subsets of the data.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
"""
result_seq = []
starts = np.linspace(0, len(daily), iters).astype(int)
for start in starts[:-2]:
subset = daily[start:]
_, results = func(subset)
fake = subset.copy()
for _ in range(iters):
fake.ppg = (results.fittedvalues +
thinkstats2.Resample(results.resid))
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
"""
Explanation: But there is one more source of uncertainty: how much past data should we use to build the model?
The following function generates a sequence of models based on different amounts of past data.
End of explanation
"""
def PlotIntervals(daily, years, iters=101, percent=90, func=RunLinearModel):
"""Plots predictions based on different intervals.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
"""
result_seq = SimulateIntervals(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100-p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.2, color='gray')
"""
Explanation: And this function plots the results.
End of explanation
"""
name = 'high'
daily = dailies[name]
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotIntervals(daily, years)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)')
"""
Explanation: Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercises
Exercise: The linear model I used in this chapter has the obvious drawback that it is linear, and there is no reason to expect prices to change linearly over time. We can add flexibility to the model by adding a quadratic term, as we did in Section 11.3.
Use a quadratic model to fit the time series of daily prices, and use the model to generate predictions. You will have to write a version of RunLinearModel that runs that quadratic model, but after that you should be able to reuse code from the chapter to generate predictions.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise: Write a definition for a class named SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation.
Use this class to test whether the serial correlation in raw price data is statistically significant. Also test the residuals of the linear model and (if you did the previous exercise), the quadratic model.
End of explanation
"""
name = 'high'
daily = dailies[name]
filled = FillMissing(daily)
diffs = filled.ppg.diff()
thinkplot.plot(diffs)
plt.xticks(rotation=30)
thinkplot.Config(ylabel='Daily change in price per gram ($)')
filled['slope'] = diffs.ewm(span=365).mean()
thinkplot.plot(filled.slope[-365:])
plt.xticks(rotation=30)
thinkplot.Config(ylabel='EWMA of diff ($)')
# extract the last inter and the mean of the last 30 slopes
start = filled.index[-1]
inter = filled.ewma.iloc[-1]
slope = filled.slope.iloc[-30:].mean()
start, inter, slope
# reindex the DataFrame, adding a year to the end
dates = pd.date_range(filled.index.min(),
filled.index.max() + np.timedelta64(365, 'D'))
predicted = filled.reindex(dates)
# generate predicted values and add them to the end
predicted['date'] = predicted.index
one_day = np.timedelta64(1, 'D')
predicted['days'] = (predicted.date - start) / one_day
predict = inter + slope * predicted.days
predicted.ewma.fillna(predict, inplace=True)
# plot the actual values and predictions
thinkplot.Scatter(daily.ppg, alpha=0.1, label=name)
thinkplot.Plot(predicted.ewma, color='#ff7f00')
"""
Explanation: Worked example: There are several ways to extend the EWMA model to generate predictions. One of the simplest is something like this:
Compute the EWMA of the time series and use the last point as an intercept, inter.
Compute the EWMA of differences between successive elements in the time series and use the last point as a slope, slope.
To predict values at future times, compute inter + slope * dt, where dt is the difference between the time of the prediction and the time of the last observation.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/text_classification/labs/reusable_embeddings.ipynb | apache-2.0 | import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
"""
Explanation: Reusable Embeddings
Learning Objectives
1. Learn how to use a pre-trained TF Hub text modules to generate sentence vectors
1. Learn how to incorporate a pre-trained TF-Hub module into a Keras model
1. Learn how to deploy and use a text model on CAIP
Introduction
In this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset.
First, we will load and pre-process the texts and labels so that they are suitable to be fed to sequential Keras models whose first layer is a pre-trained TF-Hub module. Thanks to this first layer, we won't need to tokenize and integerize the text before passing it to our models: the pre-trained layer takes care of that for us and consumes raw text directly. However, we will still have to one-hot-encode each of the 3 classes into a 3-dimensional basis vector.
Then we will build, train and compare simple DNN models starting with different pre-trained TF-Hub layers.
End of explanation
"""
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT
REGION = "us-central1"
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
"""
Explanation: Replace the variable values in the cell below:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
"""
Explanation: Create a Dataset from BigQuery
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
"""
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
"""
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(regex)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
"""
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
"""
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
"""
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
"""
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
"""
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column being the text labels
The dataset we pulled from BiqQuery satisfies these requirements.
End of explanation
"""
title_dataset.source.value_counts()
"""
Explanation: Let's make sure we have roughly the same number of labels for each of our three labels:
End of explanation
"""
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
"""
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.
End of explanation
"""
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
"""
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
End of explanation
"""
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
"""
Explanation: Let's write the sample dataset to disk.
End of explanation
"""
MODEL_DIR = "./text_models"
DATA_DIR = "./data"
"""
Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:
End of explanation
"""
ls ./data/
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
"""
Explanation: Loading the dataset
As in the previous labs, our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, Tech-Crunch, or the New-York Times):
End of explanation
"""
titles_df.source.value_counts()
"""
Explanation: Let's look again at the number of examples per label to make sure we have a well-balanced dataset:
End of explanation
"""
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
"""
Explanation: Preparing the labels
In this lab, we will use pre-trained TF-Hub embedding modules for English as the first layer of our models. One immediate
advantage of doing so is that the TF-Hub embedding module will take care of processing the raw text for us.
This also means that our model will be able to consume text directly instead of sequences of integers representing the words.
However, as before, we still need to preprocess the labels into one-hot-encoded vectors:
End of explanation
"""
N_TRAIN = int(len(titles_df) * 0.95)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
"""
Explanation: Preparing the train/test splits
Let's split our data into train and test splits:
End of explanation
"""
sources_train.value_counts()
sources_valid.value_counts()
"""
Explanation: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since that is the case, accuracy will be a good metric for measuring
the performance of our models.
End of explanation
"""
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
"""
Explanation: Now let's create the features and labels we will feed our models with:
End of explanation
"""
NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2"
nnlm_module = KerasLayer(# TODO)
"""
Explanation: NNLM Model
We will first try a word embedding pre-trained using a Neural Probabilistic Language Model. TF-Hub has a 50-dimensional one called
nnlm-en-dim50-with-normalization, which also
normalizes the vectors produced.
Lab Task 1a: Import NNLM TF Hub module into KerasLayer
Once loaded from its url, the TF-hub module can be used as a normal Keras layer in a sequential or functional model. Since we have enough data to fine-tune the parameters of the pre-trained embedding itself, we will set trainable=True in the KerasLayer that loads the pre-trained embedding:
End of explanation
"""
nnlm_module(tf.constant([# TODO]))
"""
Explanation: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence:
Lab Task 1b: Use module to encode a sentence string
End of explanation
"""
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(# TODO)
"""
Explanation: Swivel Model
Then we will try a word embedding obtained using Swivel, an algorithm that essentially factorizes word co-occurrence matrices to create the words embeddings.
TF-Hub hosts the pretrained gnews-swivel-20dim-with-oov 20-dimensional Swivel module.
Lab Task 1c: Import Swivel TF Hub module into KerasLayer
End of explanation
"""
swivel_module(tf.constant([# TODO]))
"""
Explanation: Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence:
Lab Task 1d: Use module to encode a sentence string
End of explanation
"""
def build_model(hub_module, name):
model = Sequential([
# TODO
Dense(16, activation='relu'),
Dense(N_CLASSES, activation='softmax')
], name=name)
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
"""
Explanation: Building the models
Let's write a function that
takes as input an instance of a KerasLayer (i.e. the swivel_module or the nnlm_module we constructed above) as well as the name of the model (say swivel or nnlm)
returns a compiled Keras sequential model starting with this pre-trained TF-hub layer, adding one or more dense relu layers to it, and ending with a softmax layer giving the probability of each of the classes:
Lab Task 2: Incorporate a pre-trained TF Hub module as first layer of Keras Sequential Model
End of explanation
"""
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
X_train, Y_train = train_data
tf.random.set_seed(33)
model_dir = os.path.join(MODEL_DIR, model.name)
if tf.io.gfile.exists(model_dir):
tf.io.gfile.rmtree(model_dir)
history = model.fit(
X_train, Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping(), TensorBoard(model_dir)],
)
return history
"""
Explanation: Let's also wrap the training code into a train_and_evaluate function that
* takes as input the training and validation data, as well as the compiled model itself, and the batch_size
* trains the compiled model for 100 epochs at most, and does early-stopping when the validation loss is no longer decreasing
* returns a history object, which will help us to plot the learning curves
End of explanation
"""
data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
nnlm_model = build_model(nnlm_module, 'nnlm')
nnlm_history = train_and_evaluate(data, val_data, nnlm_model)
history = nnlm_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
"""
Explanation: Training NNLM
End of explanation
"""
swivel_model = build_model(swivel_module, name='swivel')
swivel_history = train_and_evaluate(data, val_data, swivel_model)
history = swivel_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
"""
Explanation: Training Swivel
End of explanation
"""
OUTPUT_DIR = "./savedmodels"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, 'swivel')
os.environ['EXPORT_PATH'] = EXPORT_PATH
shutil.rmtree(EXPORT_PATH, ignore_errors=True)
tf.saved_model.save(swivel_model, EXPORT_PATH)
"""
Explanation: Swivel trains faster per epoch, but achieves a lower validation accuracy and requires more epochs to converge.
Deploying the model
The first step is to serialize one of our trained Keras model as a SavedModel:
End of explanation
"""
%%bash
# TODO 5
MODEL_NAME=title_model
VERSION_NAME=swivel
if [[ $(gcloud ai-platform models list --format='value(name)' | grep ^$MODEL_NAME$) ]]; then
echo "$MODEL_NAME already exists"
else
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --region=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep ^$VERSION_NAME$) ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create $VERSION_NAME\
--model=$MODEL_NAME \
--framework=# TODO \
--python-version=# TODO \
--runtime-version=2.1 \
--origin=# TODO \
--staging-bucket=# TODO \
--machine-type n1-standard-4 \
--region=$REGION
"""
Explanation: Then we can deploy the model using the gcloud CLI as before:
Lab Task 3a: Complete the following script to deploy the swivel model
End of explanation
"""
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
"""
Explanation: Before we try our deployed model, let's inspect its signature to know what to send to the deployed API:
End of explanation
"""
%%writefile input.json
{# TODO}
!gcloud ai-platform predict \
--model title_model \
--json-instances input.json \
--version swivel \
--region=$REGION
"""
Explanation: Let's go ahead and hit our model:
Lab Task 3b: Create the JSON object to send a title to the API you just deployed
(Hint: Look at the 'saved_model_cli show' command output above.)
End of explanation
"""
|
berlemontkevin/Jupyter_Notebook | Statistical_Physics/Percolation.ipynb | apache-2.0 | from pylab import *
from scipy.ndimage import measurements
%matplotlib inline
L = 100
r = rand(L,L)
p = 0.4
z = r < p
imshow(z, origin='lower', interpolation='nearest')
colorbar()
title("Matrix")
show()
"""
Explanation: Percolation in lattice models
A quick study of the different clusters in a lattice model
End of explanation
"""
lw, num = measurements.label(z)
imshow(lw, origin='lower', interpolation='nearest')
colorbar()
title("Labeled clusters")
show()
"""
Explanation: We have built a matrix of random numbers $r$. This matrix can be viewed as our 2-dimensional lattice model. The variable $p$ corresponds to the density of the lattice.
In addition, we want to number the clusters so that we can study them more easily.
End of explanation
"""
b = arange(lw.max() + 1)
shuffle(b)
shuffledLw = b[lw]
imshow(shuffledLw, origin='lower', interpolation='nearest')
colorbar()
title("Labeled clusters")
show()
"""
Explanation: The idea is to shuffle the labels randomly for better visualization, hence the following code:
End of explanation
"""
area = measurements.sum(z, lw, index=arange(lw.max() + 1))
areaImg = area[lw]
im3 = imshow(areaImg, origin='lower', interpolation='nearest')
colorbar()
title("Clusters by area")
show()
"""
Explanation: The measurements module from scipy.ndimage makes it possible to extract various properties of the clusters. For example, we can easily measure their size, with the desired type of connectivity.
End of explanation
"""
area = measurements.sum(z, lw, index=arange(lw.max() + 1))
areaImg = area[lw]
im3 = imshow(areaImg, origin='lower', interpolation='nearest')
colorbar()
title("Clusters by area")
sliced = measurements.find_objects(areaImg == areaImg.max())
if(len(sliced) > 0):
sliceX = sliced[0][1]
sliceY = sliced[0][0]
plotxlim=im3.axes.get_xlim()
plotylim=im3.axes.get_ylim()
plot([sliceX.start, sliceX.start, sliceX.stop, sliceX.stop, sliceX.start], \
[sliceY.start, sliceY.stop, sliceY.stop, sliceY.start, sliceY.start], \
color="red")
xlim(plotxlim)
ylim(plotylim)
show()
"""
Explanation: We now want to look at the largest cluster. It is therefore framed by a box in the following image.
End of explanation
"""
for i,p in enumerate([0.2,0.4,0.5,0.6,0.8]) :
fig = plt.figure()
r = rand(L,L)
z = r < p
lw, num = measurements.label(z)
area = measurements.sum(z, lw, index=arange(lw.max() + 1))
areaImg = area[lw]
im = imshow(areaImg, origin='lower', interpolation='nearest')
fig.colorbar(im)
title("Clusters by area p = "+str(p))
sliced = measurements.find_objects(areaImg == areaImg.max())
if(len(sliced) > 0):
sliceX = sliced[0][1]
sliceY = sliced[0][0]
plotxlim=im.axes.get_xlim()
plotylim=im.axes.get_ylim()
plot([sliceX.start, sliceX.start, sliceX.stop, sliceX.stop, sliceX.start], \
[sliceY.start, sliceY.stop, sliceY.stop, sliceY.start, sliceY.start], \
color="red")
xlim(plotxlim)
ylim(plotylim)
show()
from pylab import *
from scipy.ndimage import measurements
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Set up the figures
fig1,ax1=plt.subplots(5,3,figsize=(10, 6))
ax1 = ax1.ravel()
fig2,ax2=plt.subplots(5,3,figsize=(10, 6))
ax2 = ax2.ravel()
#Set up variables
p=np.arange(0., 1., 0.05)
areamax=np.zeros(len(p))
lwmax=np.zeros(len(p))
for ind,L in enumerate([50, 100, 200, 500, 1000]):
for ind2,marker in enumerate(['ko', 'bo', 'ro']):
# Randomise the percolation network
r = rand(L,L)
for i,p2 in enumerate(p):
# Determine the connectivity for probability p
z = r<p2
lw, num = measurements.label(z)
# Plot the clusters for given p
area = measurements.sum(z, lw, index=arange(lw.max() + 1))
areaImg = area[lw]
lwmax[i] = lw.max()
areamax[i] = area.max()
# Plot the number of clusters
ax2[3*ind + ind2 ].plot(p, areamax, marker)
ax1[3*ind + ind2 ].plot(p, lwmax, marker)
# Plot the size of the largest cluster
#plt.figure(2)
#plt.plot(p, area.max(), marker)
#plt.draw()
fig1.tight_layout()
st = fig1.suptitle('Number of clusters', fontsize="x-large")
st.set_y(0.95)
fig1.subplots_adjust(top=0.85)
fig2.tight_layout()
st = fig2.suptitle('Largest cluster', fontsize="x-large")
st.set_y(0.95)
fig2.subplots_adjust(top=0.85)
show()
"""
Explanation: Finally, here is what happens for different values of $p$. This highlights the percolation phenomenon.
End of explanation
"""
|
gschivley/Teaching-python | Python and Jupyter basics/Python basics - RISE presentation.ipynb | mit | x = 4
print x, type(x)
x = 'hello'
print x, type(x)
x = 1 # x is an integer
x = 'hello' # now x is a string
x = [1, 2, 3] # now x is a list
print x
print type(x)
print len(x)
"""
Explanation: Some Python and Jupyter Basics
<br>
Greg Schivley
<br>
With material taken from the Whirlwind Tour of Python
This session is going to cover some basics of Python and Jupyter notebooks that I have found useful or wish I learned sooner.
I assume you know some programming, so I won't specifically cover loops, logic statements, or functions.
Stop me with questions as we go.
Using conda to manage Python
Python is an open source language with lots of awesome libraries and packages. But you have to install and manage them yourself.
The Anaconda distribution comes with most of what you need for scientific computing, but you might find others or need to update packages.
pip is the default python package manager, and is used to install packages from PyPI. Some packages are only available on PyPI, so you'll need to use pip to get them.
Unless you have to use pip, I recommend sticking with conda to manage packages.
Each can't manage packages installed by the other, so it's easy to end up with duplicate versions.
Using conda in terminal
conda install ...
conda update ...
conda remove ...
Unfortunately pip has different syntax
It's also worth reading this blog that talks about the history and differences between conda and pip.
Jupyter notebooks
Quick Start Guide
Computer code
Rich text elements
Inline figures
Notebooks run in your browser
<img src="Screen Shot 2017-01-03 at 12.30.07 PM.png"/>
<img src="Screen Shot 2017-01-03 at 12.32.24 PM.png"/>
Strengths/weaknesses of notebooks
Pros
Great way to iteratively explore data or do analysis
Lets you document your work in a readable way as you go
Notebooks can be easily shared as HTML through nbviewer.jupyter.org and will render as HTML on GitHub
Entire textbooks have been written in the notebook format
Can accommodate non-Python kernels (R, Julia, Haskel, etc) or run other languages in a Python notebook.
Cons
Notebooks can get really long
Easy to be lazy and copy code rather than creating functions
Might lose some functionality from your favorite IDE
Have to keep your browser open
How to use Jupyter notebooks
This is just to get you started.
Read the Jupyter documentation (Quick Start or Main) for more information.
Open a command prompt or terminal window and navigate to the parent folder where you are working
<img src="Screen Shot 2017-01-03 at 12.50.48 PM.png"/>
Type jupyter notebook to launch the notebook server
<img src="Screen Shot 2017-01-03 at 12.52.37 PM.png"/>
Helpful keyboard shortcuts
esc exits out of edit mode (green box) and puts the cell in command mode (blue box)
Most keyboard commands are run when in command mode
shift-enter runs a cell and moves to the next cell
m changes a cell to markdown, 1-6 makes it markdown as a section header
dd deletes a cell
c copies a cell, v pastes it below the currently selected cell
Python Variables Are Pointers
Assigning variables in Python is as easy as putting a variable name to the left of the equals (=) sign.
End of explanation
"""
x = [1, 2, 3]
y = x
print y
x.append(4)
print y
"""
Explanation: Python variables are pointers
Two variables can point to the same object. Be careful, as this can lead to unanticipated consequences!
End of explanation
"""
x = 10
y = x
# add 5 to x's value, and assign it to x
x += 5
print "x =", x
print "y =", y
"""
Explanation: Python variables are pointers
Fortunately, simple objects are immutable - you can't change the value of "10" in memory, but you can point a variable to a different value
End of explanation
"""
# 3/2
3/2.
from __future__ import division
3/2
"""
Explanation: Off-topic note
Python 3 does away with int/float division issues. You can add this behavior to Python 2.
End of explanation
"""
L = [2, 3, 5, 7] # Define a list
print L
print len(L)
"""
Explanation: Types of objects (data structures)
Common basic Python objects include:
- Simple objects like ints, floats, strings, etc
- Lists
- Sets
- Dictionaries
- Tuples
| Type | Example |Description |
|-----------|-------------------------|---------------------------------------|
| list | [1, 2, 3] | Ordered collection |
| tuple | (1, 2, 3) | Immutable ordered collection |
| dict | {'a':1, 'b':2, 'c':3} | Unordered (key,value) mapping |
| set | {1, 2, 3} | Unordered collection of unique values |
Lists
Some basic list operations
End of explanation
"""
# L = L + [13, 17, 19, ['a', 'b']]
print L
"""
Explanation: Addition concatenates lists.
Lists can be a combination of different object types, including other lists
End of explanation
"""
L = [2, 3, 5, 7, 11]
L[0] # Python is 0 indexed
L[:-2]
L[0:3] # element 3 is not included [)
"""
Explanation: List indexing and slicing
Python provides access to elements through indexing (single elements), and slicing (multiple elements).
Both are indicated by a square-bracket syntax.
End of explanation
"""
print L
2 in L
4 in L
(2 in L) and (4 not in L)
"""
Explanation: A few quick examples of easy logic statements
End of explanation
"""
# range generates a list from 0 to n-1
# Behavior is different in Python 3
range(5)
%%timeit
l = [] # Initialize an empty list
for value in range(500):
l.append(value**2)
"""
Explanation: List comprehensions
Lists are great basic objects, and can be extremely fast if used correctly
Slower way to add values to a list
End of explanation
"""
%%timeit
[value**2 for value in range(500)]
"""
Explanation: Better way to generate a list
Don't do this for very complex calculations - gets hard to read
End of explanation
"""
import numpy as np
np.array(range(5))
%%timeit
np.array(range(500))**2
"""
Explanation: Using NumPy operations on an entire array is even faster
End of explanation
"""
t = (1, 2, 3)
t = 1, 2, 3
print t
print t[0]
"""
Explanation: Tuples
Tuples are in many ways similar to lists, but they are defined with parentheses rather than square brackets.
Or they can be defined without any brackets.
End of explanation
"""
t[1] = 4
"""
Explanation: Tuples are immutable
Once defined, the object can't be changed.
End of explanation
"""
def f(x):
a = x**2
b = x**3
return a, b
f(2)
a, b = f(2)
print 'a =', a
print 'b =', b
"""
Explanation: Tuples are often used to pass groups of values
Pass a group of objects/values into or out of a function
End of explanation
"""
numbers = {'one':1, 'two':2, 'three':3}
numbers
numbers['two'] # Use a key to return the matching value
numbers.keys() # Keys don't stay ordered
"""
Explanation: I defined a function above. Functions start with <span style="color:green; font-family:Monospace">def</span>, end the first line with a colon, and use <span style="color:green; font-family:Monospace">return</span> if you are returning values
Dictionaries
Dictionaries are flexible mappings of keys to values. They are created via a comma-separated list of key:value pairs within curly braces.
End of explanation
"""
l = [1, 2, 1, 4, 5, 2]
set(l)
"""
Explanation: Sets
Sets are collections of unique values. You can create a set of unique values from a list-like object and can use set operations (union, intersection, etc)
End of explanation
"""
l_array = np.array(l) # make a numpy array from the list
l_array
# l_array.reshape((2,3))
set(l_array)
"""
Explanation: I said list-like objects, which points to an important feature of Python: types don't matter so long as they behave correctly (duck typing)
End of explanation
"""
for idx, value in enumerate(['dog', 'cat', 'pig', 'sheep']):
print idx, value
"""
Explanation: Iterators
Iterators are objects with multiple values that can be looped through. Sometimes you can also do indirect iteration, which will provide a single value at a time.
It's helpful to know about the enumerate and zip functions when using iterators.
Enumerate
Return both the index and value for each item in the iterator
End of explanation
"""
L = [2, 4, 6, 8, 10]
R = [3, 6, 9, 12, 15]
for lval, rval in zip(L, R):
print (lval, rval)
# zip(L, R)[0]
# Doing the same thing with indexing
# for i in range(len(L)):
# print (L[i], R[i])
"""
Explanation: Zip
Combine two or more sets of iterators into a list of tuples
End of explanation
"""
|
cubewise-code/TM1py-samples | Samples/exploratory_analysis.ipynb | mit | from TM1py.Services import TM1Service
from TM1py.Utils import Utils
import pandas as pd
import xlwings as xw
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
"""
Explanation: <h1 style="font-size:42px; text-align:center; margin-bottom:30px;"><span style="color:SteelBlue">TM1py:</span> Data Analysis</h1>
<hr>
The first step you should complete before actually stepping through this notebook is to install all required dependencies. Assuming you already have Anaconda installed, you'll only need to install TM1py. To do this run the following commands:
<pre>pip install TM1py</pre>
The code requires an FX cube that contains data for 2017. You can run <a href="https://github.com/cubewise-code/tm1py-samples/blob/master/Samples/samples_setup.py">this script</a> to set up the cube in your TM1 model.
In order to load the exchange rates into the cube run <a href="https://github.com/cubewise-code/tm1py-samples/blob/master/Samples/fx_rates_fred_to_cube_daily.py">this script</a>.
Step 1: Import the Required Modules
Before we can start python'ing, we need to import a few modules that are required.
<a href="https://code.cubewise.com/tm1py/">TM1py</a> for interacting with the TM1 API,
<a href="https://pandas.pydata.org/">Pandas</a> for data manipulation and transformation,
<a href="https://matplotlib.org/">matplotlib</a> for creating plots and
<a href="https://www.xlwings.org/">xlwings</a> to interact with Excel sheets from python.
End of explanation
"""
# Server address
address = '10.77.19.10'
# HTTP port number - this can be found in your config file
port = '12354'
# username
user = 'admin'
# password
password = 'apple'
# SSL parameter - this can be found in your config file
ssl = True
"""
Explanation: Step 2: Setup your TM1 parameters
TM1 Server Parameters
Define your TM1 server parameters below. In this example the script is connecting to our local TM1 instance.
End of explanation
"""
# MDX Query to get the daily exchange Rates from USD to CHF in 2017
mdx = "SELECT \
NON EMPTY {[TM1py Date].[2017-01-01]:[TM1py Date].[2017-12-31]} ON ROWS, \
{[TM1py Currency To].[CHF], [TM1py Currency To].[EUR], [TM1py Currency To].[GBP]} ON COLUMNS \
FROM [TM1py FX Rates] \
WHERE ([TM1py Currency From].[USD], [TM1py FX Rates Measure].[Spot])"
with TM1Service(address=address, port=port, user=user, password=password, ssl=ssl) as tm1:
# Query data through MDX
data = tm1.cubes.cells.execute_mdx(mdx)
# Transform data into Pandas DataFrame
df = Utils.build_pandas_dataframe_from_cellset(data, multiindex=True)
"""
Explanation: Step 3: Get TM1 Data
The following code obtains a data set from TM1 through an MDX Query. The resulting dataset is transformed into a pandas dataframe for statistical analysis.
For more on MDX Queries check out this basic <a href="https://docs.microsoft.com/en-us/sql/analysis-services/multidimensional-models/mdx/mdx-query-the-basic-query">MDX Tutorial</a>
For more on pandas and dataframes check out this <a href="https://pandas.pydata.org/pandas-docs/stable/tutorials.html">pandas tutorial</a>.
End of explanation
"""
print(df.std(level='TM1py Currency To'))
"""
Explanation: Step 4: Calculate Statistical Measures
With Pandas we can do all kind of statistical analysis with our data set. In this Block we calculate statistical measures by currency pair over time.
Standard Deviation
End of explanation
"""
print(df.median(level='TM1py Currency To'))
"""
Explanation: Median
End of explanation
"""
# Setup the Canvas
plt.rcParams['figure.figsize']=(20,10)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
# Manipulate DataFrame to simplify plotting
df_plot = df.unstack(level=['TM1py Currency From', 'TM1py Currency To', 'TM1py FX Rates Measure'])
df_plot.index = pd.to_datetime(df_plot.index)
# Draw graph into Canvas
df_plot.plot(legend=False, ax=ax)
"""
Explanation: Step 5: Plot over time with matplotlib
Plot the DataFrame with matplotlib over time.
End of explanation
"""
# Rolling 30 days Mean
window = 30
# Setup the Canvas
plt.rcParams['figure.figsize']=(20,10)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
# Group by currency, calculate and draw graph into plot
for cur, df_cur in df.groupby(level='TM1py Currency To'):
# Manipulate DataFrame to simplify plotting
df_cur = df_cur.unstack(level=['TM1py Currency From', 'TM1py Currency To', 'TM1py FX Rates Measure'])
df_cur.index = pd.to_datetime(df_cur.index)
# Draw graph into Canvas
df_cur.rolling(window=window).mean().plot(legend=False, ax=ax)
"""
Explanation: Step 6: Calculate and plot Moving Average
Calculate and plot the Moving Average with a window of 30 days.
USD to CHF orange, USD to EUR blue, USD to GBP violet.
End of explanation
"""
xw.Range('A1').value = df
"""
Explanation: Step 7: Dump data to Excel with XLWings library
Dump the entire dataframe in a csv like format into Excel (starting at cell A1). Requires Excel to be open!
End of explanation
"""
xw.sheets.active.pictures.add(fig, name="2017 USD to CHF, EUR and GBP")
"""
Explanation: Step 8: Dump Chart to Excel Sheet
Dump the plot as an image into your Excel Sheet. Requires Excel to be open!
End of explanation
"""
|
bashtage/statsmodels | examples/notebooks/tsa_filters.ipynb | bsd-3-clause | %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
dta = sm.datasets.macrodata.load_pandas().data
index = pd.Index(sm.tsa.datetools.dates_from_range("1959Q1", "2009Q3"))
print(index)
dta.index = index
del dta["year"]
del dta["quarter"]
print(sm.datasets.macrodata.NOTE)
print(dta.head(10))
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
dta.realgdp.plot(ax=ax)
legend = ax.legend(loc="upper left")
legend.prop.set_size(20)
"""
Explanation: Time Series Filters
End of explanation
"""
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp)
gdp_decomp = dta[["realgdp"]].copy()
gdp_decomp["cycle"] = gdp_cycle
gdp_decomp["trend"] = gdp_trend
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
gdp_decomp[["realgdp", "trend"]]["2000-03-31":].plot(ax=ax, fontsize=16)
legend = ax.get_legend()
legend.prop.set_size(20)
"""
Explanation: Hodrick-Prescott Filter
The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$
$$y_t = \tau_t + \zeta_t$$
The components are determined by minimizing the following quadratic loss function
$$\min_{\{ \tau_{t}\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
End of explanation
"""
bk_cycles = sm.tsa.filters.bkfilter(dta[["infl", "unemp"]])
"""
Explanation: Baxter-King approximate band-pass filter: Inflation and Unemployment
Explore the hypothesis that inflation and unemployment are counter-cyclical.
The Baxter-King filter is intended to explicitly deal with the periodicity of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at higher or lower than those of the business cycle. Specifically, the BK filter takes the form of a symmetric moving average
$$y_{t}^{*}=\sum_{k=-K}^{k=K}a_ky_{t-k}$$
where $a_{-k}=a_k$ and $\sum_{k=-K}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2).
For completeness, the filter weights are determined as follows
$$a_{j} = B_{j}+\theta\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$
$$B_{0} = \frac{\left(\omega_{2}-\omega_{1}\right)}{\pi}$$
$$B_{j} = \frac{1}{\pi j}\left(\sin\left(\omega_{2}j\right)-\sin\left(\omega_{1}j\right)\right)\text{ for }j=\pm1,\pm2,\dots,\pm K$$
where $\theta$ is a normalizing constant such that the weights sum to zero.
$$\theta=\frac{-\sum_{j=-K}^{K}B_{j}}{2K+1}$$
$$\omega_{1}=\frac{2\pi}{P_{H}}$$
$$\omega_{2}=\frac{2\pi}{P_{L}}$$
$P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default.
End of explanation
"""
fig = plt.figure(figsize=(12, 10))
ax = fig.add_subplot(111)
bk_cycles.plot(ax=ax, style=["r--", "b-"])
"""
Explanation: We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
End of explanation
"""
print(sm.tsa.stattools.adfuller(dta["unemp"])[:3])
print(sm.tsa.stattools.adfuller(dta["infl"])[:3])
cf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[["infl", "unemp"]])
print(cf_cycles.head(10))
fig = plt.figure(figsize=(14, 10))
ax = fig.add_subplot(111)
cf_cycles.plot(ax=ax, style=["r--", "b-"])
"""
Explanation: Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment
The Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. The implementation of their filter involves the
calculations of the weights in
$$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\dots+B_{T-1-t}y_{T-1}+\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\dots+B_{t-2}y_{2}+\tilde B_{t-1}y_{1}$$
for $t=3,4,...,T-2$, where
$$B_{j} = \frac{\sin(jb)-\sin(ja)}{\pi j},j\geq1$$
$$B_{0} = \frac{b-a}{\pi},a=\frac{2\pi}{P_{u}},b=\frac{2\pi}{P_{L}}$$
$\tilde B_{T-t}$ and $\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation.
The CF filter is appropriate for series that may follow a random walk.
End of explanation
"""
|
aboucaud/python-euclid2016 | notebooks/05-Euclid.ipynb | bsd-3-clause | # this must be included at the top of a python2 src file
# to ensure most python3 features that are backported
# to python2 are available
from __future__ import absolute_import, division, print_function
from builtins import (bytes, str, open, super, range,
                      zip, round, input, int, pow, object, map)
"""
Explanation: Python general coding guidelines
"The guidelines provided here are intended to improve the readability of code and make it consistent across the wide spectrum of Python code" PEP8
Python PEPs are "Python Enhancement Proposal". The PEP for the conding standard adopted by the Python Software foundation (PSF) is the PEP8.
There are also other coding standards specific to organizations and/or projects, e.g:
Euclid's Python coding standard can be found here
Google's python coding style here.
Python2 code must be compatible with Python3
End of explanation
"""
# manynames.py
X = 11 # Global (module) name/attribute (X, or manynames.X)
def f():
print(X) # Access global X (11)
def g():
X = 22 # Local (function) variable (X, hides module X)
print(X)
class C:
X = 33 # Class attribute (C.X)
def m(self):
X = 44 # Local variable in method (X)
self.X = 55 # Instance attribute (instance.X)
f()
g()
print('C.X = {}'.format(C.X))
my_c = C()
print('my_c.X = {}'.format(my_c.X))
my_c.m()
print('my_c.X = {}'.format(my_c.X))
"""
Explanation: In case you missed it, there was a dedicated session on future proofing your code
in the second developers workshop last year. http://euclid.roe.ac.uk/attachments/download/6019
Scoping: namespaces in python
End of explanation
"""
def scope_test():
    def do_local():
        spam = "local spam"
    def do_nonlocal():
        nonlocal spam
        spam = "nonlocal spam"
    def do_global():
        global spam
        spam = "global spam"
    spam = "test spam"
    do_local()
    print("After local assignment:", spam)
    do_nonlocal()
    print("After nonlocal assignment:", spam)
    do_global()
    print("After global assignment:", spam)
"""
Explanation: Avoid global variables shared across several modules as much as possible
End of explanation
"""
x = 0
def outer():
x = 1
def inner():
x = 2
print("inner:", x)
inner()
print("outer:", x)
outer()
print("global:", x)
x = 0
def outer():
x = 1
def inner():
nonlocal x # binds x to the outer scope (not to the global scope)
x = 2
print("inner:", x)
inner()
print("outer:", x)
outer()
print("global:", x)
"""
Explanation: TIP
To examine what is available in the current scope, use the built-in functions globals() and locals()
nonlocal is only available for python3 and above. A good example of the usage of nonlocal http://stackoverflow.com/questions/1261875/python-nonlocal-statement?answertab=active#tab-top
End of explanation
"""
%%file my_module.py
#
# my module content...
#
"""
Explanation: Naming conventions
Python packages and modules should also have short, all-lowercase names
<font color='green'>OK</font>
End of explanation
"""
%%file My_module.py
#
# my module content...
#
%%file MyModule.py
#
# my module content...
#
%%file My_Module.py
#
# my module content...
#
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
class Foo():
    pass
class MyFoo():
    pass
"""
Explanation: Almost without exception, class names use mixed case starting with uppercase
<font color='green'>OK</font>
End of explanation
"""
class foo():
    pass
class my_foo():
    pass
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
%%file my_module.py
class _MyInternalClassThatShouldNotBeAccessedOutsideThisModule():
pass
def foo():
my_instance = _MyInternalClassThatShouldNotBeAccessedOutsideThisModule()
"""
Explanation: Classes for internal use MUST have a leading underscore
<font color='green'>OK</font>
End of explanation
"""
%%file my_module.py
class _MyInternalClassThatShouldNotBeAccessedOutsideThisModule():
pass
%%file my_script.py
import my_module
my_instance = _MyInternalClassThatShouldNotBeAccessedOutsideThisModule()
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
class MyError(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
"""
Explanation: An exception name MUST include the suffix "Error"
<font color='green'>OK</font>
End of explanation
"""
class MyWarning(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class MyException(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
class MyClass(object):
def __init__(self):
        self._executable = None
@property
def executable(self):
return self._executable
@executable.setter
def executable(self, value):
# check that the executable actually can be found in the OS/system
        # then assign it to the backing variable
self._executable = value
class MyOtherClass(object):
def __init__(self):
self._speed_of_light_si = 3e8
@property
def speed_of_light_si(self):
return self._speed_of_light_si
"""
Explanation: Developer SHOULD use properties to protect the service from the implementation
<font color='green'>OK</font>
End of explanation
"""
class MyClass(object):
def __init__(self, path_to_exec):
        self.executable = path_to_exec
class MyOtherClass(object):
def __init__(self):
self.speed_of_light_si = 3e8
x = MyOtherClass()
x.speed_of_light_si
x.speed_of_light_si = 1
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
class MyClass(object):
def __init__(self):
self._safe_combination_pin_code = 541976
"""A private attribute that is not intended to be used outside the class"""
"""
Explanation: Protected Class Attribute Names MUST be prefixed with a single underscore
<font color='green'>OK</font>
End of explanation
"""
class MyClass(object):
def __init__(self):
        self.safe_combination_pin_code = 541976
"""A private attribute that is not intended to be used outside the class"""
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
class foo(object):
def __init__(self):
        self.print_ = None
"""
Explanation: If your public attribute name collides with a reserved keyword, append a single trailing “_” underscore to your attribute name
<font color='green'>OK</font>
End of explanation
"""
class foo(object):
def __init__(self):
self.print = None
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
class Mapping:
def __init__(self, iterable):
self.items_list = []
self.__update(iterable)
def update(self, iterable):
for item in iterable:
self.items_list.append(item)
__update = update # private copy of original update() method
class MappingSubclass(Mapping):
def update(self, keys, values):
# provides new signature for update()
# but does not break __init__()
for item in zip(keys, values):
self.items_list.append(item)
"""
Explanation: Private Class Attribute Names MUST be prefixed with a double underscore
<font color='green'>OK</font>
End of explanation
"""
def see_above_examples(x, y):
pass
"""
Explanation: <font color='red'>NOT OK</font>
Function names MUST be lowercase with words separated by underscore
<font color='green'>OK</font>
End of explanation
"""
def MyBadFunctionName(x, y):
pass
def my_Bad_function(x, y):
pass
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
class MyClass(object):
def __init__(self):
pass
def make_mesh(self):
pass
"""
Explanation: Always use self for the first argument to instance methods
<font color='green'>OK</font>
End of explanation
"""
class MyClass(object):
def __init__(this):
pass
def make_mesh(this):
pass
class MyOtherClass(object):
def __init__(that):
pass
def make_mesh(that):
pass
class MyOtherClass2(object):
def __init__(asdasdasdasd):
pass
def make_mesh(asdasdasdasd):
pass
MyOtherClass2()
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
class MyClass(object):
def __init__(self):
pass
def my_method1(self):
pass
@classmethod
def my_class_method_foo(cls):
pass
"""
Explanation: Always use cls for the first argument to class methods
<font color='green'>OK</font>
End of explanation
"""
class MyClass(object):
def __init__(self):
pass
def my_method1(self):
pass
@classmethod
def my_class_method_foo(self):
pass
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
def good_append(new_item, a_list=None):
if a_list is None:
a_list = []
a_list.append(new_item)
return a_list
my_list = good_append(1)
print(good_append(2, my_list))
print(good_append(3, my_list))
"""
Explanation: Avoid usage of mutable types (lists, dictionaries) as default values of the arguments of a method
<font color='green'>OK</font>
End of explanation
"""
def bad_append(new_item, a_list=[]):
a_list.append(new_item)
return a_list
my_list = bad_append(1)
print(bad_append(2))
print(bad_append(3))
"""
Explanation: <font color='red'>NOT OK (unless it is intentional)</font>
End of explanation
"""
class Foo(object):
my_const = "Name"
"""
Explanation: Constants are usually defined on a module level and written in all capital letters with underscores separating words
<font color='green'>OK</font>
End of explanation
"""
class Foo(object):
my_const = "Name"
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
my_global_variable = 1
"""
Explanation: Global variable names should be lowercase with words separated by underscores
<font color='green'>OK</font>
End of explanation
"""
MY_GLOBAL_VARIABLE = 1
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
#!/usr/bin/env python # Shebang line (#!), only for executable scripts
# my module comments
# more comments...
"""
my module docstring
...
...
"""
import os
import sys
# and other imports...
__all__ = ['MyClass1', 'MyClass2'] # whatever you wish to import with from my_module import *, if any
#
# Public variables
#
#
# Public classes, functions...
#
"""
Explanation: Files
The parts of a module MUST be sorted
<font color='green'>OK</font>
End of explanation
"""
#!/usr/bin/env python # Shebang line (#!), only for executable scripts
"""
my module docstring
...
...
"""
# my module comments
# more comments...
import os
import sys
# and other imports...
__all__ = ['MyClass1', 'MyClass2'] # whatever you wish to import with from my_module import *, if any
import numpy
#
# Public variables
#
#
# Public classes, functions...
#
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
import os
import sys
import numpy
import matplotlib
import my_module
import my_module2
"""
Explanation: Imports SHOULD be grouped, in order: standard lib, 3rd party lib, local lib
<font color='green'>OK</font>
End of explanation
"""
import numpy
import matplotlib
import os
import sys
import my_module
import my_module2
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
%%file my_module.py
__all__ = ['Foo', 'my_func']
manager1 = 1
manager2 = 1
class MyLocalManager():
pass
class Foo():
pass
def my_func():
pass
from my_module import *
print(list(filter(lambda x: 'manager' in x.lower(), globals())))
"""
Explanation: Modules designed for use via "from M import *" SHOULD use the __all__ mechanism
<font color='green'>OK</font>
End of explanation
"""
%%file my_other_module.py
manager1 = 1
manager2 = 1
class MyLocalManager():
pass
class Foo():
pass
def my_func():
pass
from my_other_module import *
print(list(filter(lambda x: 'manager' in x.lower(), globals())))
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
class Base(object):
pass
class Outer(object):
class Inner(object):
pass
class Child(Base):
"""Explicitly inherits from another class already."""
pass
"""
Explanation: Statements
If a class inherits from no other base classes, explicitly inherit from object
<font color='green'>OK</font>
End of explanation
"""
class Base:
pass
class Outer:
class Inner:
pass
"""
Explanation: <font color='red'>NOT RECOMMENDED (UNLESS YOU KNOW WHAT YOU ARE DOING)</font>
End of explanation
"""
x = 1
x += 1
print(x)
"""
Explanation: Do not use non-existent pre-increment or pre-decrement operator
<font color='green'>OK</font>
End of explanation
"""
x = 1
print(++x) # +(+x)
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
# use meaningful names for boolean variables
data_found = False
if not data_found:
print('no data found')
"""
Explanation: Use the "implicit" false if at all possible
<font color='green'>OK</font>
End of explanation
"""
if data_found == False:
print('no data found')
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
with open("hello.txt", 'w') as hello_file:
for word in ['aaa', 'bbb']:
hello_file.write(word)
# the file is closed automatically when the context manager exits
"""
Explanation: Explicitly close files and sockets when done with them
<font color='green'>OK</font>
End of explanation
"""
hello_file = open("hello.txt", 'w')
for word in ['aaa', 'bbb']:
hello_file.write(word)
# easy to forget closing the file
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
%%file my_specific_module.py
class MySpecificError(Exception):
"""Base class for errors in my package."""
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
try:
raise MySpecificError(2*2)
except MySpecificError as e:
print('My exception occurred, value:', e.value)
%run my_specific_module
"""
Explanation: Modules or packages should define their own domain-specific base exception class
<font color='green'>OK</font>
End of explanation
"""
%%file my_specific_module.py
try:
raise ValueError("""can not accept bla bla value""")
except ValueError as e:
print('Exception occurred')
%run my_specific_module.py
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
raise ValueError("""this is an instance of the ValueError exception class""")
"""
Explanation: When raising an exception, raise an exception instance and not an exception class
<font color='green'>OK</font>
End of explanation
"""
raise ValueError
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
try:
import platform_specific_module
except ImportError:
platform_specific_module = None
    print('import error occurred')
"""
Explanation: When catching exceptions, mention specific exceptions whenever possible
<font color='green'>OK</font>
End of explanation
"""
try:
import platform_specific_module
except:
platform_specific_module = None
print('i caught an exception, but i do not know what it is')
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
my_list = [
1, 2, 3,
4, 5, 6,
]
result = some_function_that_takes_arguments(
'a', 'b', 'c',
'd', 'e', 'f',
)
#or it may be lined up under the first character of the line that starts the multi-line construct, as in
my_list = [
1, 2, 3,
4, 5, 6,
]
result = some_function_that_takes_arguments(
'a', 'b', 'c',
'd', 'e', 'f',
)
"""
Explanation: Layout and Comments
Block layout rules
<font color='green'>OK</font>
End of explanation
"""
# don't line things up under the = sign
my_list = [
1, 2, 3,
4, 5, 6,
]
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
do_one()
do_two()
do_three()
"""
Explanation: Compound statements (multiple statements on the same line) are discouraged
<font color='green'>OK</font>
End of explanation
"""
do_one(); do_two(); do_three()
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
# Aligned with opening delimiter
foo = long_function_name(var_one, var_two,
var_three, var_four)
# More indentation included to distinguish this from the rest.
def long_function_name(
var_one, var_two, var_three,
var_four):
print(var_one)
"""
Explanation: Function layout rules
<font color='green'>OK</font>
End of explanation
"""
foo = long_function_name(var_one, var_two,
var_three, var_four)
# Further indentation required as indentation is not distinguishable.
def long_function_name(
var_one, var_two, var_three,
var_four):
print(var_one)
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
import os
import sys
"""
Explanation: Import layout rules
<font color='green'>OK</font>
End of explanation
"""
import sys, os
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
from Tkinter import (Tk, Frame, Button, Entry, Canvas, Text,
LEFT, DISABLED, NORMAL, RIDGE, END)
"""
Explanation: brackets and braces SHOULD be used for wrapped lines
<font color='green'>OK</font>
End of explanation
"""
from Tkinter import Tk, Frame, Button, Entry, Canvas, Text,\
LEFT, DISABLED, NORMAL, RIDGE, END
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
my_func(ham[1], {eggs: 2})
if x == 4: print(x, y); x, y = y, x
print(x)
dict['key'] = list[index]
x = 1
y = 2
long_variable = 3
"""
Explanation: Avoid extraneous whitespace in the following situations
<font color='green'>OK</font>
End of explanation
"""
my_func(ham[ 1 ], { eggs: 2 })
if x == 4 : print (x , y) ; x , y = y , x
print (x)
dict ['key'] = list [index]
x = 1
y = 2
long_variable = 3
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
i = i + 1
submitted += 1
x = x*2 - 1
hypot2 = x*x + y*y
c = (a+b) * (a-b)
"""
Explanation: Binary operators SHOULD be surrounded by a single space
<font color='green'>OK</font>
End of explanation
"""
i=i+1
submitted +=1
x = x * 2 - 1
hypot2 = x * x + y * y
c = (a + b) * (a - b)
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
%%file my_module.py
def func1():
pass
def func2():
pass
class foo1():
def method1():
pass
def method2():
pass
"""
Explanation: Blank lines rules
<font color='green'>OK</font>
End of explanation
"""
def func1():
pass
def func2():
pass
class foo1():
def method1():
pass
def method2():
pass
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
%%file my_module.py
# this is intended to be documentation that should not be extracted
# by doxygen or sphinx.
"""
Explanation: Block comments rules
<font color='green'>OK</font>
End of explanation
"""
%%file my_module.py
"""
sphinx treats this as a docstring, thus it is not a block comment.
"""
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
if i & (i-1) == 0: # true if i is a power of 2
if i & (i-1) == 0: # true if i is a power of 2
"""
Explanation: Inline comments rules
<font color='green'>OK</font>
End of explanation
"""
if i & (i-1) == 0:# true if i is a power of 2
"""
Explanation: <font color='red'>NOT OK</font>
End of explanation
"""
!pep8 utils.py
!pylint utils.py -f html > utils.html
"""
Explanation: Documentation strings ("docstrings") MUST be used for packages, modules, functions, classes, and methods
see sample project directory
Pylint & pep8 & sonarQube
pep8: https://www.python.org/dev/peps/pep-0008/
pylint: https://www.pylint.org/
sonarQube: http://euclid.roe.ac.uk/projects/codeen-users/wiki/User_Cod_Std-Tools
Exercise
We will run pep8 and pylint on actual code from Euclid's IAL project.
EC/SGS/ST/4-2-03-IAL/drm/trunk/euclid_ial/drm/system/utils.py
We copied this to the exercises directory of this tutorial
python-euclid2016/exercises/coding-standard
End of explanation
"""
|
CompPhysics/MachineLearning | doc/src/Optimization/autodiff/examples_allowed_functions-Copy1.ipynb | cc0-1.0 |
import autograd.numpy as np
from autograd import grad
"""
Explanation: Examples of the supported features in Autograd
Before using Autograd for more complicated calculations, it might be useful to experiment with what kind of functions Autograd is capable of finding the gradient of. The following Python functions are just meant to illustrate what Autograd can do, but please feel free to experiment with other, possibly more complicated, functions as well!
End of explanation
"""
def f1(x):
return x**3 + 1
f1_grad = grad(f1)
# Remember to send in float as argument to the computed gradient from Autograd!
a = 1.0
# See the evaluated gradient at a using autograd:
print("The gradient of f1 evaluated at a = %g using autograd is: %g"%(a,f1_grad(a)))
# Compare with the analytical derivative, that is f1'(x) = 3*x**2
grad_analytical = 3*a**2
print("The gradient of f1 evaluated at a = %g by finding the analytic expression is: %g"%(a,grad_analytical))
"""
Explanation: Supported functions
Here are some examples of supported function implementations that Autograd can differentiate. Keep in mind that this list of examples is not comprehensive, but rather explores which basic constructions one might often use.
Functions using simple arithmetics
End of explanation
"""
def f2(x1,x2):
return 3*x1**3 + x2*(x1 - 5) + 1
# By sending the argument 0, Autograd will compute the derivative w.r.t the first variable, in this case x1
f2_grad_x1 = grad(f2,0)
# ... and differentiate w.r.t x2 by sending 1 as an additional argument to grad
f2_grad_x2 = grad(f2,1)
x1 = 1.0
x2 = 3.0
print("Evaluating at x1 = %g, x2 = %g"%(x1,x2))
print("-"*30)
# Compare with the analytical derivatives:
# Derivative of f2 w.r.t x1 is: 9*x1**2 + x2:
f2_grad_x1_analytical = 9*x1**2 + x2
# Derivative of f2 w.r.t x2 is: x1 - 5:
f2_grad_x2_analytical = x1 - 5
# See the evaluated derivations:
print("The derivative of f2 w.r.t x1: %g"%( f2_grad_x1(x1,x2) ))
print("The analytical derivative of f2 w.r.t x1: %g"%( f2_grad_x1(x1,x2) ))
print()
print("The derivative of f2 w.r.t x2: %g"%( f2_grad_x2(x1,x2) ))
print("The analytical derivative of f2 w.r.t x2: %g"%( f2_grad_x2(x1,x2) ))
"""
Explanation: Functions with two (or more) arguments
To differentiate with respect to two (or more) arguments of a Python function, Autograd needs to know which variable the function is being differentiated with respect to.
End of explanation
"""
def f3(x): # Assumes x is an array of length 5 or higher
return 2*x[0] + 3*x[1] + 5*x[2] + 7*x[3] + 11*x[4]**2
f3_grad = grad(f3)
x = np.linspace(0,4,5)
# Print the computed gradient:
print("The computed gradient of f3 is: ", f3_grad(x))
# The analytical gradient is: (2, 3, 5, 7, 22*x[4])
f3_grad_analytical = np.array([2, 3, 5, 7, 22*x[4]])
# Print the analytical gradient:
print("The analytical gradient of f3 is: ", f3_grad_analytical)
"""
Explanation: Note that the grad function will not produce the true gradient of the function: for a function of two or more scalar arguments it returns only the partial derivative w.r.t the chosen argument. The true gradient of such a function is a vector, where each element is the function differentiated w.r.t one variable.
Functions using the elements of its argument directly
End of explanation
"""
def f4(x):
return np.sqrt(1+x**2) + np.exp(x) + np.sin(2*np.pi*x)
f4_grad = grad(f4)
x = 2.7
# Print the computed derivative:
print("The computed derivative of f4 at x = %g is: %g"%(x,f4_grad(x)))
# The analytical derivative is: x/sqrt(1 + x**2) + exp(x) + cos(2*pi*x)*2*pi
f4_grad_analytical = x/np.sqrt(1 + x**2) + np.exp(x) + np.cos(2*np.pi*x)*2*np.pi
# Print the analytical gradient:
print("The analytical gradient of f4 at x = %g is: %g"%(x,f4_grad_analytical))
"""
Explanation: Note that in this case, when sending an array as input argument, the output from Autograd is another array. This is the true gradient of the function, as opposed to the function in the previous example. By using arrays to represent the variables, the output from Autograd might be easier to work with, as the output is closer to what one could expect from a gradient-evaluating function.
Functions using mathematical functions from Numpy
End of explanation
"""
def f5(x):
if x >= 0:
return x**2
else:
return -3*x + 1
f5_grad = grad(f5)
x = 2.7
# Print the computed derivative:
print("The computed derivative of f5 at x = %g is: %g"%(x,f5_grad(x)))
# The analytical derivative is:
# if x >= 0, then 2*x
# else -3
if x >= 0:
f5_grad_analytical = 2*x
else:
f5_grad_analytical = -3
# Print the analytical derivative:
print("The analytical derivative of f5 at x = %g is: %g"%(x,f5_grad_analytical))
"""
Explanation: Functions using if-else tests
End of explanation
"""
def f6_for(x):
val = 0
for i in range(10):
val = val + x**i
return val
def f6_while(x):
val = 0
i = 0
while i < 10:
val = val + x**i
i = i + 1
return val
f6_for_grad = grad(f6_for)
f6_while_grad = grad(f6_while)
x = 0.5
# Print the computed derivaties of f6_for and f6_while
print("The computed derivative of f6_for at x = %g is: %g"%(x,f6_for_grad(x)))
print("The computed derivative of f6_while at x = %g is: %g"%(x,f6_while_grad(x)))
# Both of the functions are implementation of the sum: sum(x**i) for i = 0, ..., 9
# The analytical derivative is: sum(i*x**(i-1))
f6_grad_analytical = 0
for i in range(10):
f6_grad_analytical += i*x**(i-1)
print("The analytical derivative of f6 at x = %g is: %g"%(x,f6_grad_analytical))
"""
Explanation: Functions using for- and while loops
End of explanation
"""
def f7(n): # Assume that n is an integer
if n == 1 or n == 0:
return 1
else:
return n*f7(n-1)
f7_grad = grad(f7)
n = 2.0
print("The computed derivative of f7 at n = %d is: %g"%(n,f7_grad(n)))
# The function f7 is an implementation of the factorial of n.
# By using the product rule, one can find that the derivative is:
f7_grad_analytical = 0
for i in range(int(n)-1):
tmp = 1
for k in range(int(n)-1):
if k != i:
tmp *= (n - k)
f7_grad_analytical += tmp
print("The analytical derivative of f7 at n = %d is: %g"%(n,f7_grad_analytical))
"""
Explanation: Functions using recursion
End of explanation
"""
def f8(x): # Assume x is an array
x[2] = 3
return x*2
f8_grad = grad(f8)
x = np.array([1.0, 2.0, 3.0])  # an array input triggers the item-assignment error discussed below
print("The derivative of f8 is:",f8_grad(x))
"""
Explanation: Note that if n is equal to zero or one, Autograd will give an error message. This message appears when the output is independent of the input.
Unsupported functions
Autograd supports many features. However, there are some functions that are not supported (yet) by Autograd.
Assigning a value to the variable being differentiated with respect to
End of explanation
"""
def f9(a): # Assume a is an array with 2 elements
b = np.array([1.0,2.0])
return a.dot(b)
f9_grad = grad(f9)
x = np.array([1.0,0.0])
print("The derivative of f9 is:",f9_grad(x))
"""
Explanation: Here, Autograd tells us that an 'ArrayBox' does not support item assignment. The item assignment is done when the program tries to assign x[2] to the value 3. However, Autograd has implemented the computation of the derivative such that this assignment is not possible.
The syntax a.dot(b) when finding the dot product
End of explanation
"""
def f9_alternative(x): # Assume x is an array with 2 elements
b = np.array([1.0,2.0])
return np.dot(x,b) # The same as x_1*b_1 + x_2*b_2
f9_alternative_grad = grad(f9_alternative)
x = np.array([3.0,0.0])
print("The gradient of f9 is:",f9_alternative_grad(x))
# The analytical gradient of the dot product of vectors x and b with two elements (x_1,x_2) and (b_1, b_2) respectively
# w.r.t x is (b_1, b_2).
"""
Explanation: Here we are told that the 'dot' function does not belong to Autograd's version of a Numpy array.
To overcome this, an alternative syntax which also computes the dot product can be used:
End of explanation
"""
|
anandha2017/udacity | nd101 Deep Learning Nanodegree Foundation/DockerImages/26_sirajs_text_summarisation/notebooks/01-How_to_make_a_text_summarizer/train.ipynb | mit |
import os
# os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32'
import keras
keras.__version__
"""
Explanation: You should use a GPU, but if it is busy you can always fall back to your CPU.
End of explanation
"""
FN0 = 'vocabulary-embedding'
"""
Explanation: Use the token indexing from vocabulary-embedding; this does not clip the indexes of the words to vocab_size.
Use the indexes of outside words to replace them with several oov words (oov0, oov1, ...) that appear in the same description and headline. This will allow the headline generator to replace the oov with the same word in the description.
End of explanation
"""
FN1 = 'train'
"""
Explanation: implement the "simple" model from http://arxiv.org/pdf/1512.01712v1.pdf
You can start training from a pre-existing model. This allows you to run this notebook many times, each time using different parameters and passing the end result of one run to be the input of the next.
I've started with maxlend=0 (see below), in which the description was ignored. I then started with a high LR and manually lowered it. I also started with nflips=0, in which the original headline is used as-is, and slowly moved to 12, in which half the input headline is flipped with the predictions made by the model (the paper used a fixed 10%).
End of explanation
"""
maxlend=25 # 0 - if we dont want to use description at all
maxlenh=25
maxlen = maxlend + maxlenh
rnn_size = 512 # must be same as 160330-word-gen
rnn_layers = 3 # match FN1
batch_norm=False
"""
Explanation: input data (X) is made from maxlend description words followed by eos
followed by headline words followed by eos
if description is shorter than maxlend it will be left padded with empty
if entire data is longer than maxlen it will be clipped and if it is shorter it will be right padded with empty.
labels (Y) are the headline words followed by eos and clipped or padded to maxlenh
In other words the input is made from a maxlend half in which the description is padded from the left
and a maxlenh half in which eos is followed by a headline followed by another eos if there is enough space.
The labels match only the second half and
the first label matches the eos at the start of the second half (following the description in the first half)
End of explanation
"""
activation_rnn_size = 40 if maxlend else 0
# training parameters
seed=42
p_W, p_U, p_dense, p_emb, weight_decay = 0, 0, 0, 0, 0
optimizer = 'adam'
LR = 1e-4
batch_size=64
nflips=10
nb_train_samples = 30000
nb_val_samples = 3000
"""
Explanation: the output of the first activation_rnn_size nodes from the top LSTM layer will be used for activation and the rest will be used to select the predicted word
End of explanation
"""
import cPickle as pickle
with open('data/%s.pkl'%FN0, 'rb') as fp:
embedding, idx2word, word2idx, glove_idx2idx = pickle.load(fp)
vocab_size, embedding_size = embedding.shape
with open('data/%s.data.pkl'%FN0, 'rb') as fp:
X, Y = pickle.load(fp)
nb_unknown_words = 10
print 'number of examples',len(X),len(Y)
print 'dimension of embedding space for words',embedding_size
print 'vocabulary size', vocab_size, 'the last %d words can be used as place holders for unknown/oov words'%nb_unknown_words
print 'total number of different words',len(idx2word), len(word2idx)
print 'number of words outside vocabulary which we can substitute using glove similarity', len(glove_idx2idx)
print 'number of words that will be regarded as unknown (unk)/out-of-vocabulary (oov)', len(idx2word)-vocab_size-len(glove_idx2idx)
for i in range(nb_unknown_words):
idx2word[vocab_size-1-i] = '<%d>'%i
"""
Explanation: read word embedding
End of explanation
"""
oov0 = vocab_size-nb_unknown_words
for i in range(oov0, len(idx2word)):
idx2word[i] = idx2word[i]+'^'
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=nb_val_samples, random_state=seed)
len(X_train), len(Y_train), len(X_test), len(Y_test)
del X
del Y
empty = 0
eos = 1
idx2word[empty] = '_'
idx2word[eos] = '~'
import numpy as np
from keras.preprocessing import sequence
from keras.utils import np_utils
import random, sys
def prt(label, x):
print label+':',
for w in x:
print idx2word[w],
print
i = 334
prt('H',Y_train[i])
prt('D',X_train[i])
i = 334
prt('H',Y_test[i])
prt('D',X_test[i])
"""
Explanation: when printing, mark words outside the vocabulary with ^ at their end
End of explanation
"""
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout, RepeatVector, Merge
from keras.layers.wrappers import TimeDistributed
from keras.layers.recurrent import LSTM
from keras.layers.embeddings import Embedding
from keras.regularizers import l2
# seed weight initialization
random.seed(seed)
np.random.seed(seed)
regularizer = l2(weight_decay) if weight_decay else None
"""
Explanation: Model
End of explanation
"""
model = Sequential()
model.add(Embedding(vocab_size, embedding_size,
input_length=maxlen,
W_regularizer=regularizer, dropout=p_emb, weights=[embedding], mask_zero=True,
name='embedding_1'))
for i in range(rnn_layers):
lstm = LSTM(rnn_size, return_sequences=True, # batch_norm=batch_norm,
W_regularizer=regularizer, U_regularizer=regularizer,
b_regularizer=regularizer, dropout_W=p_W, dropout_U=p_U,
name='lstm_%d'%(i+1)
)
model.add(lstm)
model.add(Dropout(p_dense,name='dropout_%d'%(i+1)))
"""
Explanation: start with a standard stacked LSTM
End of explanation
"""
from keras.layers.core import Lambda
import keras.backend as K
def simple_context(X, mask, n=activation_rnn_size, maxlend=maxlend, maxlenh=maxlenh):
desc, head = X[:,:maxlend,:], X[:,maxlend:,:]
head_activations, head_words = head[:,:,:n], head[:,:,n:]
desc_activations, desc_words = desc[:,:,:n], desc[:,:,n:]
# RTFM http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.batched_tensordot
# activation for every head word and every desc word
activation_energies = K.batch_dot(head_activations, desc_activations, axes=(2,2))
# make sure we dont use description words that are masked out
activation_energies = activation_energies + -1e20*K.expand_dims(1.-K.cast(mask[:, :maxlend],'float32'),1)
# for every head word compute weights for every desc word
activation_energies = K.reshape(activation_energies,(-1,maxlend))
activation_weights = K.softmax(activation_energies)
activation_weights = K.reshape(activation_weights,(-1,maxlenh,maxlend))
# for every head word compute weighted average of desc words
desc_avg_word = K.batch_dot(activation_weights, desc_words, axes=(2,1))
return K.concatenate((desc_avg_word, head_words))
class SimpleContext(Lambda):
def __init__(self,**kwargs):
super(SimpleContext, self).__init__(simple_context,**kwargs)
self.supports_masking = True
def compute_mask(self, input, input_mask=None):
return input_mask[:, maxlend:]
def get_output_shape_for(self, input_shape):
nb_samples = input_shape[0]
n = 2*(rnn_size - activation_rnn_size)
return (nb_samples, maxlenh, n)
if activation_rnn_size:
model.add(SimpleContext(name='simplecontext_1'))
model.add(TimeDistributed(Dense(vocab_size,
W_regularizer=regularizer, b_regularizer=regularizer,
name = 'timedistributed_1')))
model.add(Activation('softmax', name='activation_1'))
from keras.optimizers import Adam, RMSprop # usually I prefer Adam but article used rmsprop
# opt = Adam(lr=LR) # keep calm and reduce learning rate
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
%%javascript
// new Audio("http://www.soundjay.com/button/beep-09.wav").play ()
K.set_value(model.optimizer.lr,np.float32(LR))
def str_shape(x):
return 'x'.join(map(str,x.shape))
def inspect_model(model):
for i,l in enumerate(model.layers):
print i, 'cls=%s name=%s'%(type(l).__name__, l.name)
weights = l.get_weights()
for weight in weights:
print str_shape(weight),
print
inspect_model(model)
"""
Explanation: A special layer that reduces the input to just its headline part (second half).
For each word in this part it concatenates the output of the previous layer (RNN)
with a weighted average of the outputs of the description part.
Only the last rnn_size - activation_rnn_size units are used from each output;
the first activation_rnn_size units are used to compute the weights for the averaging.
End of explanation
"""
if FN1:
model.load_weights('data/%s.hdf5'%FN1)
"""
Explanation: Load
End of explanation
"""
def lpadd(x, maxlend=maxlend, eos=eos):
"""left (pre) pad a description to maxlend and then add eos.
The eos is the input to predicting the first word in the headline
"""
assert maxlend >= 0
if maxlend == 0:
return [eos]
n = len(x)
if n > maxlend:
x = x[-maxlend:]
n = maxlend
return [empty]*(maxlend-n) + x + [eos]
samples = [lpadd([3]*26)]
# pad from right (post) so the first maxlend will be description followed by headline
data = sequence.pad_sequences(samples, maxlen=maxlen, value=empty, padding='post', truncating='post')
np.all(data[:,maxlend] == eos)
data.shape,map(len, samples)
probs = model.predict(data, verbose=0, batch_size=1)
probs.shape
"""
Explanation: Test
End of explanation
"""
# variation to https://github.com/ryankiros/skip-thoughts/blob/master/decoding/search.py
def beamsearch(predict, start=[empty]*maxlend + [eos],
k=1, maxsample=maxlen, use_unk=True, empty=empty, eos=eos, temperature=1.0):
"""return k samples (beams) and their NLL scores, each sample is a sequence of labels,
    all samples start with an `empty` label and end with `eos`, or are truncated to a length of `maxsample`.
You need to supply `predict` which returns the label probability of each sample.
`use_unk` allow usage of `oov` (out-of-vocabulary) label in samples
"""
def sample(energy, n, temperature=temperature):
"""sample at most n elements according to their energy"""
n = min(n,len(energy))
prb = np.exp(-np.array(energy) / temperature )
res = []
for i in xrange(n):
z = np.sum(prb)
r = np.argmax(np.random.multinomial(1, prb/z, 1))
res.append(r)
prb[r] = 0. # make sure we select each element only once
return res
dead_k = 0 # samples that reached eos
dead_samples = []
dead_scores = []
live_k = 1 # samples that did not yet reached eos
live_samples = [list(start)]
live_scores = [0]
while live_k:
# for every possible live sample calc prob for every possible label
probs = predict(live_samples, empty=empty)
# total score for every sample is sum of -log of word prb
cand_scores = np.array(live_scores)[:,None] - np.log(probs)
cand_scores[:,empty] = 1e20
if not use_unk:
for i in range(nb_unknown_words):
cand_scores[:,vocab_size - 1 - i] = 1e20
live_scores = list(cand_scores.flatten())
# find the best (lowest) scores we have from all possible dead samples and
# all live samples and all possible new words added
scores = dead_scores + live_scores
ranks = sample(scores, k)
n = len(dead_scores)
ranks_dead = [r for r in ranks if r < n]
ranks_live = [r - n for r in ranks if r >= n]
dead_scores = [dead_scores[r] for r in ranks_dead]
dead_samples = [dead_samples[r] for r in ranks_dead]
live_scores = [live_scores[r] for r in ranks_live]
# append the new words to their appropriate live sample
voc_size = probs.shape[1]
live_samples = [live_samples[r//voc_size]+[r%voc_size] for r in ranks_live]
# live samples that should be dead are...
# even if len(live_samples) == maxsample we dont want it dead because we want one
# last prediction out of it to reach a headline of maxlenh
zombie = [s[-1] == eos or len(s) > maxsample for s in live_samples]
# add zombies to the dead
dead_samples += [s for s,z in zip(live_samples,zombie) if z]
dead_scores += [s for s,z in zip(live_scores,zombie) if z]
dead_k = len(dead_samples)
# remove zombies from the living
live_samples = [s for s,z in zip(live_samples,zombie) if not z]
live_scores = [s for s,z in zip(live_scores,zombie) if not z]
live_k = len(live_samples)
return dead_samples + live_samples, dead_scores + live_scores
# !pip install python-Levenshtein
def keras_rnn_predict(samples, empty=empty, model=model, maxlen=maxlen):
"""for every sample, calculate probability for every possible label
you need to supply your RNN model and maxlen - the length of sequences it can handle
"""
sample_lengths = map(len, samples)
assert all(l > maxlend for l in sample_lengths)
assert all(l[maxlend] == eos for l in samples)
# pad from right (post) so the first maxlend will be description followed by headline
data = sequence.pad_sequences(samples, maxlen=maxlen, value=empty, padding='post', truncating='post')
probs = model.predict(data, verbose=0, batch_size=batch_size)
return np.array([prob[sample_length-maxlend-1] for prob, sample_length in zip(probs, sample_lengths)])
def vocab_fold(xs):
"""convert list of word indexes that may contain words outside vocab_size to words inside.
If a word is outside, try first to use glove_idx2idx to find a similar word inside.
If none exist then replace all accurancies of the same unknown word with <0>, <1>, ...
"""
xs = [x if x < oov0 else glove_idx2idx.get(x,x) for x in xs]
# the more popular word is <0> and so on
outside = sorted([x for x in xs if x >= oov0])
# if there are more than nb_unknown_words oov words then put them all in nb_unknown_words-1
outside = dict((x,vocab_size-1-min(i, nb_unknown_words-1)) for i, x in enumerate(outside))
xs = [outside.get(x,x) for x in xs]
return xs
def vocab_unfold(desc,xs):
# assume desc is the unfolded version of the start of xs
unfold = {}
for i, unfold_idx in enumerate(desc):
fold_idx = xs[i]
if fold_idx >= oov0:
unfold[fold_idx] = unfold_idx
return [unfold.get(x,x) for x in xs]
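A minimal round-trip sketch of the fold/unfold idea. All constants here (V_demo, nb_unk_demo, oov0_demo) are hypothetical toy values standing in for the notebook's real vocab_size, nb_unknown_words and oov0:

```python
V_demo = 10                  # toy vocabulary: ids 0..9
nb_unk_demo = 2              # ids 8 and 9 are reserved out-of-vocabulary slots
oov0_demo = V_demo - nb_unk_demo

def fold_demo(xs):
    # map each out-of-vocabulary id to one of the reserved slots
    outside = sorted(x for x in xs if x >= oov0_demo)
    outside = {x: V_demo - 1 - min(i, nb_unk_demo - 1)
               for i, x in enumerate(outside)}
    return [outside.get(x, x) for x in xs]

def unfold_demo(desc, xs):
    # recover the original oov ids by aligning the raw desc with its folded form
    mapping = {f: orig for orig, f in zip(desc, xs) if f >= oov0_demo}
    return [mapping.get(x, x) for x in xs]

raw = [3, 42, 5, 42]         # 42 is outside the toy vocabulary
folded = fold_demo(raw)      # -> [3, 8, 5, 8]
restored = unfold_demo(raw, folded)
```

The point of the round trip is that the model only ever sees ids below V_demo, while generation can still print the original rare word.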
import sys
import Levenshtein
def gensamples(skips=2, k=10, batch_size=batch_size, short=True, temperature=1., use_unk=True):
i = random.randint(0,len(X_test)-1)
print 'HEAD:',' '.join(idx2word[w] for w in Y_test[i][:maxlenh])
print 'DESC:',' '.join(idx2word[w] for w in X_test[i][:maxlend])
sys.stdout.flush()
print 'HEADS:'
x = X_test[i]
samples = []
if maxlend == 0:
skips = [0]
else:
skips = range(min(maxlend,len(x)), max(maxlend,len(x)), abs(maxlend - len(x)) // skips + 1)
for s in skips:
start = lpadd(x[:s])
fold_start = vocab_fold(start)
sample, score = beamsearch(predict=keras_rnn_predict, start=fold_start, k=k, temperature=temperature, use_unk=use_unk)
assert all(s[maxlend] == eos for s in sample)
samples += [(s,start,scr) for s,scr in zip(sample,score)]
samples.sort(key=lambda x: x[-1])
codes = []
for sample, start, score in samples:
code = ''
words = []
sample = vocab_unfold(start, sample)[len(start):]
for w in sample:
if w == eos:
break
words.append(idx2word[w])
code += chr(w//(256*256)) + chr((w//256)%256) + chr(w%256)
if short:
distance = min([100] + [-Levenshtein.jaro(code,c) for c in codes])
if distance > -0.6:
print score, ' '.join(words)
# print '%s (%.2f) %f'%(' '.join(words), score, distance)
else:
print score, ' '.join(words)
codes.append(code)
gensamples(skips=2, batch_size=batch_size, k=10, temperature=1.)
"""
Explanation: Sample generation
This section is only used to generate examples. You can skip it if you just want to understand how the training works.
End of explanation
"""
def flip_headline(x, nflips=None, model=None, debug=False):
"""given a vectorized input (after `pad_sequences`) flip some of the words in the second half (headline)
with words predicted by the model
"""
if nflips is None or model is None or nflips <= 0:
return x
batch_size = len(x)
assert np.all(x[:,maxlend] == eos)
probs = model.predict(x, verbose=0, batch_size=batch_size)
x_out = x.copy()
for b in range(batch_size):
# pick locations we want to flip
# 0...maxlend-1 are descriptions and should be fixed
# maxlend is eos and should be fixed
flips = sorted(random.sample(xrange(maxlend+1,maxlen), nflips))
if debug and b < debug:
print b,
for input_idx in flips:
if x[b,input_idx] == empty or x[b,input_idx] == eos:
continue
# convert from input location to label location
# the output at maxlend (when input is eos) is feed as input at maxlend+1
label_idx = input_idx - (maxlend+1)
prob = probs[b, label_idx]
w = prob.argmax()
if w == empty: # replace accidental empty with oov
w = oov0
if debug and b < debug:
print '%s => %s'%(idx2word[x_out[b,input_idx]],idx2word[w]),
x_out[b,input_idx] = w
if debug and b < debug:
print
return x_out
def conv_seq_labels(xds, xhs, nflips=None, model=None, debug=False):
"""description and hedlines are converted to padded input vectors. headlines are one-hot to label"""
batch_size = len(xhs)
assert len(xds) == batch_size
x = [vocab_fold(lpadd(xd)+xh) for xd,xh in zip(xds,xhs)] # the input does not have 2nd eos
x = sequence.pad_sequences(x, maxlen=maxlen, value=empty, padding='post', truncating='post')
x = flip_headline(x, nflips=nflips, model=model, debug=debug)
y = np.zeros((batch_size, maxlenh, vocab_size))
for i, xh in enumerate(xhs):
xh = vocab_fold(xh) + [eos] + [empty]*maxlenh # output does have a eos at end
xh = xh[:maxlenh]
y[i,:,:] = np_utils.to_categorical(xh, vocab_size)
return x, y
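The to_categorical call above produces a one-hot matrix; a dependency-free sketch of the same encoding (names here are mine):

```python
import numpy as np

def one_hot_demo(seq, num_classes):
    """Each row of the result is the one-hot encoding of the matching id in seq."""
    y = np.zeros((len(seq), num_classes))
    y[np.arange(len(seq)), seq] = 1.0
    return y

y_demo = one_hot_demo([2, 0, 1], 4)
```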
def gen(Xd, Xh, batch_size=batch_size, nb_batches=None, nflips=None, model=None, debug=False, seed=seed):
"""yield batches. for training use nb_batches=None
for validation generate deterministic results repeating every nb_batches
while training it is good idea to flip once in a while the values of the headlines from the
value taken from Xh to value generated by the model.
"""
c = nb_batches if nb_batches else 0
while True:
xds = []
xhs = []
        if nb_batches and c >= nb_batches:
            c = 0
        new_seed = random.randint(0, sys.maxint)
        random.seed(c+123456789+seed)
for b in range(batch_size):
t = random.randint(0,len(Xd)-1)
xd = Xd[t]
s = random.randint(min(maxlend,len(xd)), max(maxlend,len(xd)))
xds.append(xd[:s])
xh = Xh[t]
s = random.randint(min(maxlenh,len(xh)), max(maxlenh,len(xh)))
xhs.append(xh[:s])
        # undo the seeding before we yield, in order not to affect the caller
        c += 1
        random.seed(new_seed)
yield conv_seq_labels(xds, xhs, nflips=nflips, model=model, debug=debug)
r = next(gen(X_train, Y_train, batch_size=batch_size))
r[0].shape, r[1].shape, len(r)
def test_gen(gen, n=5):
Xtr,Ytr = next(gen)
for i in range(n):
assert Xtr[i,maxlend] == eos
x = Xtr[i,:maxlend]
y = Xtr[i,maxlend:]
yy = Ytr[i,:]
yy = np.where(yy)[1]
prt('L',yy)
prt('H',y)
if maxlend:
prt('D',x)
test_gen(gen(X_train, Y_train, batch_size=batch_size))
"""
Explanation: Data generator
The data generator generates batches of inputs and outputs/labels for training. Each input is made from two parts. The first maxlend words are the original description, followed by eos, followed by the headline we want to predict, except for the last word in the headline which is always eos, and then empty padding up to maxlen words.
For each input, the output is the headline words (without the starting eos but with the ending eos), padded with empty words up to maxlenh words. The output is also expanded to a one-hot encoding of each word.
To be more realistic, the second part of the input should be the result of generation and not the original headline.
Instead we will flip just nflips words to be from the generator, but even this is too hard, so we
implement flipping in a naive way (which takes less time): using the full input (description + eos + headline), generate predictions for the outputs; for nflips random words from the output, replace the original word with the highest-probability word from the prediction.
End of explanation
"""
test_gen(gen(X_train, Y_train, nflips=6, model=model, debug=False, batch_size=batch_size))
valgen = gen(X_test, Y_test,nb_batches=3, batch_size=batch_size)
"""
Explanation: test flipping
End of explanation
"""
for i in range(4):
test_gen(valgen, n=1)
"""
Explanation: check that valgen repeats itself after nb_batches
End of explanation
"""
history = {}
traingen = gen(X_train, Y_train, batch_size=batch_size, nflips=nflips, model=model)
valgen = gen(X_test, Y_test, nb_batches=nb_val_samples//batch_size, batch_size=batch_size)
r = next(traingen)
r[0].shape, r[1].shape, len(r)
for iteration in range(500):
print 'Iteration', iteration
h = model.fit_generator(traingen, samples_per_epoch=nb_train_samples,
nb_epoch=1, validation_data=valgen, nb_val_samples=nb_val_samples
)
for k,v in h.history.iteritems():
history[k] = history.get(k,[]) + v
with open('data/%s.history.pkl'%FN,'wb') as fp:
pickle.dump(history,fp,-1)
model.save_weights('data/%s.hdf5'%FN, overwrite=True)
gensamples(batch_size=batch_size)
"""
Explanation: Train
End of explanation
"""
|
renekm/CD-atualizado- | exercicios/Exercicio aula 17.ipynb | gpl-3.0 | %matplotlib inline
import os
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy import stats
from scipy.stats import norm
"""
Explanation: Activity: Sum of random variables
Class 17
Advance preparation:
1. Section 5.1 – pages 137 to 140: covers how to build a joint probability distribution of two random variables X and Y and defines the joint probability function.
2. Section 5.2 – pages 146 to 149; pages 156 and 157 (Example 5.12) and pages 158 to 162 (Example 5.13): covers properties of expectation and variance of sums of random variables (X+Y).
Today:
1. Describe the properties of the joint distribution of two discrete random variables.
2. Understand correlation between random variables.
3. Describe the properties of expectation and variance for sums of random variables (discrete and continuous).
Next class:
1. Required prior reading: Magalhães and Lima (7th edition): Section 7.3 (pages 234 to 240).
End of explanation
"""
# Values from the table
y=[-1,0,1] # columns
x=[-0.25,0,0.25] # rows
probXY=[[[] for i in range(3)] for i in range(3)]
pxy=[0.05,0.07,0.26,0.075,0.21,0.12,0.125,0.07,0.02]
k=0
for i in range(3):
for j in range(3):
probXY[i][j]=pxy[k]
k+=1
print(probXY)
# item 1
# Distribution of X
probX=[0,0,0]
for i in range(3):
for j in range(3):
probX[i]+=probXY[i][j]
for i in range(3):
print("Probabilidade de X=",x[i]," é igual a ", probX[i])
espX=0
varX=0
for i in range(3):
espX+=x[i]*probX[i]
for i in range(3):
varX+=(x[i]-espX)**2*probX[i]
print("Esperança de X=",espX)
print("Variância de X=",varX)
# item 1
# Distribution of Y
probY=[0,0,0]
for i in range(3):
for j in range(3):
probY[j]+=probXY[i][j]
for i in range(3):
print("Probabilidade de Y=",y[i]," é igual a ", probY[i])
espY=0
varY=0
for i in range(3):
espY+=y[i]*probY[i]
for i in range(3):
varY+=(y[i]-espY)**2*probY[i]
print("Esperança de Y=",espY)
print("Variância de Y=",varY)
# item 1
# Covariance and correlation
cov=0
for i in range(3):
for j in range(3):
cov+=(x[i]-espX)*(y[j]-espY)*probXY[i][j]
corr=cov/(varX*varY)**(0.5)
print("Covariância entre X e Y=", cov)
print("Correlação entre X e Y=", corr)
# item 2
# Distribution of G = 0.5*X + 0.5*Y
g=[]
probG=[]
for i in range(3):
for j in range(3):
        a = 0.5*x[i] + 0.5*y[j]
        if a in g:
            probG[g.index(a)] += probXY[i][j]
else:
g.append(a)
probG.append(probXY[i][j])
for i in range(len(g)):
print("Probabilidade de G=",g[i]," é igual a ", probG[i])
# item 3
# Expectation and variance of G
espG=0
varG=0
for i in range(len(g)):
espG+=g[i]*probG[i]
for i in range(len(g)):
varG+=(g[i]-espG)**2*probG[i]
print("Esperança e variância de G usando distribuição de probabilidade de G:")
print("Esperança de G=",espG)
print("Variância de G=",varG)
# item 4
# Expectation and variance of G using properties of sums of random variables
# G = 0.5X + 0.5Y
# G = 0.5*(X+Y)
espGp = 0.5*(espX+espY)
varGp = 0.5**2*(varX+varY+2*cov)
print("Esperança e variância de G usando propriedades:")
print("Esperança de G=",espGp)
print("Variância de G=",varGp)
"""
Explanation: <font color='blue'>Exercise 1 - Example 3 from Class 17 </font>
At a given moment in a certain country, the interest rate ($X$) can move 0.25 percentage points (pp) up or down, or stay constant.
The exchange rate ($Y$) can move up or down by 1 pp, or stay constant.
The following table gives the marginal and joint distributions of these two rates, represented here by $X$ and $Y$.
An investor puts the same amount into a fund that tracks the interest rate ($X$) and into a fund that tracks the exchange rate ($Y$). At the end of the day he will redeem his investment.
1. Find $E(X)$, $Var(X)$, $E(Y)$, $Var(Y)$, $Cov(X,Y)$ and $Corr(X,Y)$.
2. Build the probability distribution of this investor's gain (in % variation), i.e., find all values of $G=0.5X+0.5Y$, where $G$ is the gain of the investor who puts half of the money in $X$ and half in $Y$.
3. Compute the expectation and variance of $G$ using its probability distribution, i.e., $E(G)$ and $Var(G)$.
4. Compute the expectation and variance of $G$ using the properties of expectation and variance of sums of random variables.
End of explanation
"""
# Information from the problem statement
muX = 21
varX = 4
muY = 18.90
varY = 2.25
corXY = 0.95
covXY = corXY*(varX*varY)**(0.5)
mean = [muX, muY]
cov = [[varX, covXY], [covXY, varY]] # diagonal covariance
n=100
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2, lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
n=1000
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2, lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
n=10000
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2, lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
"""
Explanation: <font color='blue'>Exercise 2 - Sum of correlated normals</font>
A pack of one thousand 1/8 W carbon resistors has a price distributed as a normal with mean 21 reais and standard deviation 2 reais, i.e., $X$~$N(21;4)$.
Male-female jumper wires, 40 units of 20 cm, have a price distributed as a normal with mean 18.90 reais and standard deviation 1.50 reais, i.e., $Y$~$N(18.90;2.25)$.
Assume the correlation between these two prices is 0.85.
Simulate $n=100$, $n=1000$ and $n=10000$ draws of each random variable, respecting the correlation between them. To do so, see the command np.random.multivariate_normal(mean, cov, n).
If you go to Santa Efigênia to buy one pack of each, compute the expectation and variance of the spending $G=X+Y$ on one pack of resistors and one pack of jumpers as specified above.
Compute the expectation and variance of $G$ using the properties of expectation and variance of sums of random variables.
Build the distribution of the spending and check whether it resembles a normal distribution.
Repeat assuming correlation equal to zero between X and Y.
End of explanation
"""
muX = 21
varX = 4
muY = 18.90
varY = 2.25
corXY = 0
covXY = corXY*(varX*varY)**(0.5)
mean = [muX, muY]
cov = [[varX, covXY], [covXY, varY]]
n=100
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2,lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
n=1000
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2,lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
n=10000
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2,lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
"""
Explanation: item 5
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/federated/tutorials/simulations.ipynb | apache-2.0 | #@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import time
import tensorflow as tf
import tensorflow_federated as tff
source, _ = tff.simulation.datasets.emnist.load_data()
def map_fn(example):
return collections.OrderedDict(
x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])
def client_data(n):
ds = source.create_tf_dataset_for_client(source.client_ids[n])
return ds.repeat(10).shuffle(500).batch(20).map(map_fn)
train_data = [client_data(n) for n in range(10)]
element_spec = train_data[0].element_spec
def model_fn():
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
return tff.learning.from_keras_model(
model,
input_spec=element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
trainer = tff.learning.build_federated_averaging_process(
model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))
def evaluate(num_rounds=10):
state = trainer.initialize()
for _ in range(num_rounds):
t1 = time.time()
state, metrics = trainer.next(state, train_data)
t2 = time.time()
print('metrics {m}, round time {t:.2f} seconds'.format(
m=metrics, t=t2 - t1))
"""
Explanation: High-performance simulations with TFF
This tutorial will describe how to set up high-performance simulations with TFF
in a variety of common scenarios.
TODO(b/134543154): Populate the content, some of the things to cover here:
- using GPUs in a single-machine setup,
- multi-machine setup on GCP/GKE, with and without TPUs,
- interfacing MapReduce-like backends,
- current limitations and when/how they will be relaxed.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/simulations"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/federated/tutorials/simulations.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/federated/tutorials/simulations.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/federated/tutorials/simulations.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Before we begin
First, make sure your notebook is connected to a backend that has the relevant components (including gRPC dependencies for multi-machine scenarios) compiled.
Now, let's begin by loading the MNIST example from the TFF website and declaring the Python function that will run a small experiment loop over a group of 10 clients.
End of explanation
"""
evaluate()
"""
Explanation: Single-machine simulations
These are currently on by default:
End of explanation
"""
|
jeicher/cobrapy | documentation_builder/simulating.ipynb | lgpl-2.1 | import pandas
pandas.options.display.max_rows = 100
import cobra.test
model = cobra.test.create_test_model("textbook")
"""
Explanation: Simulating with FBA
Simulations using flux balance analysis can be solved using Model.optimize(). This will maximize or minimize (maximizing is the default) flux through the objective reactions.
End of explanation
"""
model.optimize()
"""
Explanation: Running FBA
End of explanation
"""
model.solution.status
model.solution.f
"""
Explanation: The Model.optimize() function will return a Solution object, which will also be stored at model.solution. A solution object has several attributes:
f: the objective value
status: the status from the linear programming solver
x_dict: a dictionary of {reaction_id: flux_value} (also called "primal")
x: a list for x_dict
y_dict: a dictionary of {metabolite_id: dual_value}.
y: a list for y_dict
For example, after the last call to model.optimize(), the status should be 'optimal' if the solver returned no errors, and f should be the objective value
End of explanation
"""
model.summary()
"""
Explanation: Analyzing FBA solutions
Models solved using FBA can be further analyzed by using summary methods, which output printed text to give a quick representation of model behavior. Calling the summary method on the entire model displays information on the input and output behavior of the model, along with the optimized objective.
End of explanation
"""
model.metabolites.nadh_c.summary()
"""
Explanation: In addition, the input-output behavior of individual metabolites can also be inspected using summary methods. For instance, the following commands can be used to examine the overall redox balance of the model
End of explanation
"""
model.metabolites.atp_c.summary()
"""
Explanation: Or to get a sense of the main energy production and consumption reactions
End of explanation
"""
biomass_rxn = model.reactions.get_by_id("Biomass_Ecoli_core")
"""
Explanation: Changing the Objectives
The objective function is determined from the objective_coefficient attribute of the objective reaction(s). Generally, a "biomass" function which describes the composition of metabolites which make up a cell is used.
End of explanation
"""
model.objective
"""
Explanation: Currently in the model, there is only one objective reaction (the biomass reaction), with an objective coefficient of 1.
End of explanation
"""
# change the objective to ATPM
model.objective = "ATPM"
# The upper bound should be 1000, so that we get
# the actual optimal value
model.reactions.get_by_id("ATPM").upper_bound = 1000.
model.objective
model.optimize().f
"""
Explanation: The objective function can be changed by assigning Model.objective, which can be a reaction object (or just it's name), or a dict of {Reaction: objective_coefficient}.
End of explanation
"""
model.reactions.get_by_id("ATPM").objective_coefficient = 0.
biomass_rxn.objective_coefficient = 1.
model.objective
"""
Explanation: The objective function can also be changed by setting Reaction.objective_coefficient directly.
End of explanation
"""
fva_result = cobra.flux_analysis.flux_variability_analysis(
model, model.reactions[:20])
pandas.DataFrame.from_dict(fva_result).T.round(5)
"""
Explanation: Running FVA
FBA will not always give a unique solution, because multiple flux states can achieve the same optimum. FVA (or flux variability analysis) finds the ranges of each metabolic flux at the optimum.
End of explanation
"""
fva_result = cobra.flux_analysis.flux_variability_analysis(
model, model.reactions[:20], fraction_of_optimum=0.9)
pandas.DataFrame.from_dict(fva_result).T.round(5)
"""
Explanation: Setting parameter fraction_of_optimum=0.90 would give the flux ranges for reactions at 90% optimality.
End of explanation
"""
model.optimize()
model.summary(fva=0.95)
"""
Explanation: Running FVA in summary methods
Flux variability analysis can also be embedded in calls to summary methods. For instance, the expected variability in substrate consumption and product formation can be quickly found by
End of explanation
"""
model.metabolites.pyr_c.summary(fva=0.95)
"""
Explanation: Similarly, variability in metabolite mass balances can also be checked with flux variability analysis
End of explanation
"""
FBA_sol = model.optimize()
pFBA_sol = cobra.flux_analysis.optimize_minimal_flux(model)
"""
Explanation: In these summary methods, the values are reported as the center point +/- the range of the FVA solution, calculated from the maximum and minimum values.
Running pFBA
Parsimonious FBA (often written pFBA) finds a flux distribution which gives the optimal growth rate, but minimizes the total sum of flux. This involves solving two sequential linear programs, but is handled transparently by cobrapy. For more details on pFBA, please see Lewis et al. (2010).
End of explanation
"""
abs(FBA_sol.f - pFBA_sol.f)
"""
Explanation: These functions should give approximately the same objective value
End of explanation
"""
|
AlphaGit/deep-learning | embeddings/Skip-Gram_word2vec.ipynb | mit | import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'gi',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
{k: vocab_to_int[k] for k in list(vocab_to_int.keys())[:30]}
"{:,}".format(len(int_words))
"""
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
from collections import Counter
t = 1e-5
word_counts = Counter(int_words)
amount_of_total_words = len(int_words)
def subsampling_probability(threshold, current_word_count):
word_relative_frequency = current_word_count / amount_of_total_words
return 1 - np.sqrt(threshold / word_relative_frequency)
probability_per_word = { current_word: subsampling_probability(t, current_word_count) for current_word, current_word_count in word_counts.items() }
train_words = [ i for i in int_words if np.random.random() > probability_per_word[i] ]
print("Words dropped: {:,}, final size: {:,}".format(len(int_words) - len(train_words), len(train_words)))
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
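If you get stuck, here is a sketch of how the pieces could fit together on a toy corpus (the threshold here is unrealistically large just to make the toy numbers visible; the real t above is 1e-5):

```python
import numpy as np
from collections import Counter

int_words = [0, 0, 0, 0, 0, 0, 1, 1, 1, 2]  # toy corpus: word 0 is very frequent
threshold = 0.5  # unrealistically large t, just for illustration

word_counts = Counter(int_words)
total_count = len(int_words)
# P(w_i) = 1 - sqrt(t / f(w_i)) is the probability of *discarding* word w_i
p_drop = {word: 1 - np.sqrt(threshold / (count / float(total_count)))
          for word, count in word_counts.items()}

# word 0 has f = 0.6, so P(drop) = 1 - sqrt(0.5 / 0.6), roughly 0.087
# word 2 has f = 0.1, so P(drop) is negative, i.e. word 2 is never dropped
train_words = [word for word in int_words if np.random.random() > p_drop[word]]
```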
End of explanation
"""
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
r = np.random.randint(1, window_size + 1)
min_index = max(idx - r, 0)
max_index = idx + r
words_in_batch = words[min_index:idx] + words[idx + 1:max_index + 1] # avoid returning the current word on idx
return list(set(words_in_batch)) # avoid duplicates
get_target([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 4, 5)
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. By the way, this is a generator function, which helps save memory.
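To make the one-row-per-input-target-pair idea concrete, here's a small deterministic sketch (it uses a fixed window instead of the random R, and a made-up function name, purely for readability):

```python
def pairs_for_batch(batch, window_size=2):
    """Expand a batch of words into (input, target) rows.

    Each word is paired with every word in a fixed-size window
    around it, so one input can produce several rows.
    """
    x, y = [], []
    for ii, center in enumerate(batch):
        lo = max(ii - window_size, 0)
        targets = batch[lo:ii] + batch[ii + 1:ii + window_size + 1]
        y.extend(targets)
        x.extend([center] * len(targets))
    return x, y

x, y = pairs_for_batch([10, 20, 30], window_size=1)
# 10 pairs with 20; 20 pairs with both 10 and 30; 30 pairs with 20
print(list(zip(x, y)))  # [(10, 20), (20, 10), (20, 30), (30, 20)]
```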
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, (None))
labels = tf.placeholder(tf.int32, (None, None))
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 400
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
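The lookup itself is just row selection. In plain NumPy terms (a toy matrix for illustration, not the graph above):

```python
import numpy as np

embedding = np.array([[0.1, 0.2],
                      [0.3, 0.4],
                      [0.5, 0.6]])  # 3 words, 2 embedding features
word_ids = np.array([2, 0, 2])

# tf.nn.embedding_lookup(embedding, word_ids) returns the same rows
# that plain integer indexing does:
embedded = embedding[word_ids]
print(embedded)
# [[0.5 0.6]
#  [0.1 0.2]
#  [0.5 0.6]]
```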
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
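To see what gets cheaper, here's a NumPy sketch of which logits are computed in each case (it only illustrates the shapes; tf.nn.sampled_softmax_loss additionally corrects for the sampling distribution, which I'm omitting here):

```python
import numpy as np

rng = np.random.RandomState(0)
n_vocab, n_embedding, n_sampled = 1000, 8, 5

softmax_w = rng.normal(scale=0.1, size=(n_vocab, n_embedding))
softmax_b = np.zeros(n_vocab)
embed = rng.normal(size=(1, n_embedding))  # hidden layer output for one example
true_label = 42

# Full softmax: compute logits for all n_vocab classes
full_logits = embed.dot(softmax_w.T) + softmax_b  # shape (1, 1000)

# Sampled softmax: logits only for the true class plus a few sampled negatives
# (a real implementation would also avoid sampling the true class by accident)
sampled = rng.choice(n_vocab, size=n_sampled, replace=False)
classes = np.concatenate(([true_label], sampled))
small_logits = embed.dot(softmax_w[classes].T) + softmax_b[classes]  # shape (1, 6)
```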
End of explanation
"""
import random
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples each from the ranges (0, 100) and (1000, 1100); lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir -p checkpoints
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
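The normalize-then-matmul trick used below is just cosine similarity. A tiny NumPy sketch with made-up vectors:

```python
import numpy as np

emb = np.array([[3.0, 4.0],
                [6.0, 8.0],
                [-4.0, 3.0]])
norm = np.sqrt(np.sum(emb ** 2, axis=1, keepdims=True))
normalized = emb / norm

# Rows 0 and 1 point in the same direction, so their cosine similarity is 1;
# rows 0 and 2 are orthogonal, so their cosine similarity is 0.
sim = normalized.dot(normalized.T)
print(np.round(sim, 6))
```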
End of explanation
"""
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a_soft/td1a_cython_edit_correction.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.soft - Numerical computation and Cython - correction
End of explanation
"""
def distance_edition(mot1, mot2):
dist = { (-1,-1): 0 }
for i,c in enumerate(mot1) :
dist[i,-1] = dist[i-1,-1] + 1
dist[-1,i] = dist[-1,i-1] + 1
for j,d in enumerate(mot2) :
opt = [ ]
if (i-1,j) in dist :
x = dist[i-1,j] + 1
opt.append(x)
if (i,j-1) in dist :
x = dist[i,j-1] + 1
opt.append(x)
if (i-1,j-1) in dist :
x = dist[i-1,j-1] + (1 if c != d else 0)
opt.append(x)
dist[i,j] = min(opt)
return dist[len(mot1)-1,len(mot2)-1]
%timeit distance_edition("idstzance","distances")
"""
Explanation: Exercise: Python/C applied to an edit distance
We reuse the function given in the problem statement.
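As a quick sanity check, a compact list-based version of the same dynamic program gives known Levenshtein distances (a reference sketch I'm adding for illustration, independent of the dictionary-based function above):

```python
def levenshtein(a, b):
    """Reference edit distance with a dense list-based table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("idstzance", "distances"))  # 4
print(levenshtein("kitten", "sitting"))       # 3
```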
End of explanation
"""
%load_ext cython
"""
Explanation: solution with the notebook
Preliminaries:
End of explanation
"""
%%cython --annotate
cimport cython
def cidistance_edition(str mot1, str mot2):
cdef int dist [500][500]
cdef int cost, c
cdef int l1 = len(mot1)
cdef int l2 = len(mot2)
dist[0][0] = 0
for i in range(l1):
dist[i+1][0] = dist[i][0] + 1
dist[0][i+1] = dist[0][i] + 1
for j in range(l2):
cost = dist[i][j+1] + 1
c = dist[i+1][j] + 1
if c < cost : cost = c
c = dist[i][j]
if mot1[i] != mot2[j] : c += 1
if c < cost : cost = c
dist[i+1][j+1] = cost
cost = dist[l1][l2]
return cost
mot1, mot2 = "idstzance","distances"
%timeit cidistance_edition(mot1, mot2)
"""
Explanation: Then:
End of explanation
"""
import sys
from pyquickhelper.loghelper import run_cmd
code = """
def cdistance_edition(str mot1, str mot2):
cdef int dist [500][500]
cdef int cost, c
cdef int l1 = len(mot1)
cdef int l2 = len(mot2)
dist[0][0] = 0
for i in range(l1):
dist[i+1][0] = dist[i][0] + 1
dist[0][i+1] = dist[0][i] + 1
for j in range(l2):
cost = dist[i][j+1] + 1
c = dist[i+1][j] + 1
if c < cost : cost = c
c = dist[i][j]
if mot1[i] != mot2[j] : c += 1
if c < cost : cost = c
dist[i+1][j+1] = cost
cost = dist[l1][l2]
return cost
"""
name = "cedit_distance"
with open(name + ".pyx","w") as f : f.write(code)
setup_code = """
from distutils.core import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize("__NAME__.pyx",
compiler_directives={'language_level' : "3"})
)
""".replace("__NAME__",name)
with open("setup.py","w") as f:
f.write(setup_code)
cmd = "{0} setup.py build_ext --inplace".format(sys.executable)
out,err = run_cmd(cmd)
if err is not None and err != '':
raise Exception(err)
import pyximport
pyximport.install()
import cedit_distance
from cedit_distance import cdistance_edition
mot1, mot2 = "idstzance","distances"
%timeit cdistance_edition(mot1, mot2)
"""
Explanation: solution without the notebook
End of explanation
"""
mot1 = mot1 * 10
mot2 = mot2 * 10
%timeit distance_edition(mot1,mot2)
%timeit cdistance_edition(mot1, mot2)
"""
Explanation: The Cython version is 10 times faster, and this does not seem to depend on the size of the problem.
End of explanation
"""
|
strikingmoose/chi_lars_face_detection | notebook/12 - Building & Training Convolutional Neural Network (AWS).ipynb | apache-2.0 | # Install tflearn
import os
os.system("sudo pip install tflearn tqdm boto3 opencv-python")
"""
Explanation: 12 - Building & Training Convolutional Neural Network
Preface
Note that this same notebook crashed my laptop when I tried to train my CNN, so I'm migrating this onto AWS. This notebook's code is largely a copy of the previous notebook in this series, except for this preface and the section after the model has been successfully trained.
End of explanation
"""
import cv2
import numpy as np
import pandas as pd
import urllib
import math
import boto3
import os
import copy
from tqdm import tqdm
from matplotlib import pyplot as plt
%matplotlib inline
# Connect to s3 bucket
s3 = boto3.resource('s3', region_name = 'ca-central-1')
my_bucket = s3.Bucket('2017edmfasatb')
# Get all files in the project directory under chi_lars_face_detection/photos/
chi_photos = [i.key for i in my_bucket.objects.all() if 'chi_lars_face_detection/photos/chi/' in i.key]
lars_photos = [i.key for i in my_bucket.objects.all() if 'chi_lars_face_detection/photos/lars/' in i.key]
# Define function to convert URL to numpy array
def url_to_image(url):
# Download the image, convert it to a numpy array, and then read it into OpenCV format
resp = urllib.urlopen(url)
image = np.asarray(bytearray(resp.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_GRAYSCALE)
# Rotate image
image = np.rot90(image, 3)
# Build resize into function
image = cv2.resize(image, (0,0), fx=0.03, fy=0.03)
# Return the image
return image
# Loop through all files to download into a single array from AWS
url_prefix = 'https://s3.ca-central-1.amazonaws.com/2017edmfasatb'
# Trying out the new tqdm library for progress bar
chi_photos_list = [url_to_image(os.path.join(url_prefix, x)) for x in tqdm(chi_photos)]
# Trying out the new tqdm library for progress bar
lars_photos_list = [url_to_image(os.path.join(url_prefix, x)) for x in tqdm(lars_photos)]
# Convert to numpy arrays
chi_photos_np = np.array(chi_photos_list)
lars_photos_np = np.array(lars_photos_list)
# Temporarily save np arrays
np.save('chi_photos_np_0.03_compress', chi_photos_np)
np.save('lars_photos_np_0.03_compress', lars_photos_np)
# Temporarily load from np arrays
chi_photos_np = np.load('chi_photos_np_0.03_compress.npy')
lars_photos_np = np.load('lars_photos_np_0.03_compress.npy')
# View shape of numpy array
chi_photos_np.shape
# Set width var
width = chi_photos_np.shape[-1]
width
"""
Explanation: Feature Building
End of explanation
"""
# Try out scaler on a manually set data (min of 0, max of 255)
from sklearn.preprocessing import MinMaxScaler
# Set test data list to train on (min of 0, max of 255)
test_list = np.array([0, 255]).reshape(-1, 1)
test_list
# Initialize scaler
scaler = MinMaxScaler()
# Fit test list
scaler.fit(test_list)
"""
Explanation: Scaling Inputs
End of explanation
"""
chi_photos_np.reshape(-1, width, width, 1).shape
"""
Explanation: Reshaping 3D Array To 4D Array
End of explanation
"""
# Reshape to prepare for scaler
chi_photos_np_flat = chi_photos_np.reshape(1, -1)
chi_photos_np_flat[:10]
# Scale
chi_photos_np_scaled = scaler.transform(chi_photos_np_flat)
chi_photos_np_scaled[:10]
# Reshape to prepare for scaler
lars_photos_np_flat = lars_photos_np.reshape(1, -1)
lars_photos_np_scaled = scaler.transform(lars_photos_np_flat)
"""
Explanation: Putting It All Together
End of explanation
"""
# Reshape
chi_photos_reshaped = chi_photos_np_scaled.reshape(-1, width, width, 1)
lars_photos_reshaped = lars_photos_np_scaled.reshape(-1, width, width, 1)
print('{} has shape: {}'. format('chi_photos_reshaped', chi_photos_reshaped.shape))
print('{} has shape: {}'. format('lars_photos_reshaped', lars_photos_reshaped.shape))
# Create copy of chi's photos to start populating x_input
x_input = copy.deepcopy(chi_photos_reshaped)
print('{} has shape: {}'. format('x_input', x_input.shape))
# Concatenate lars' photos to existing x_input
x_input = np.append(x_input, lars_photos_reshaped, axis = 0)
print('{} has shape: {}'. format('x_input', x_input.shape))
"""
Explanation: Now let's reshape.
End of explanation
"""
# Create label arrays
y_chi = np.array([[1, 0] for i in chi_photos_reshaped])
y_lars = np.array([[0, 1] for i in lars_photos_reshaped])
print('{} has shape: {}'. format('y_chi', y_chi.shape))
print('{} has shape: {}'. format('y_lars', y_lars.shape))
# Preview the first few elements
y_chi[:5]
y_lars[:5]
# Create copy of chi's labels to start populating y_input
y_input = copy.deepcopy(y_chi)
print('{} has shape: {}'. format('y_input', y_input.shape))
# Concatenate lars' labels to existing y_input
y_input = np.append(y_input, y_lars, axis = 0)
print('{} has shape: {}'. format('y_input', y_input.shape))
"""
Explanation: Preparing Labels
End of explanation
"""
# TFlearn libraries
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
# sentdex's code to build the neural net using tflearn
# Input layer --> conv layer w/ max pooling --> conv layer w/ max pooling --> fully connected layer --> output layer
convnet = input_data(shape = [None, width, width, 1], name = 'input')
convnet = conv_2d(convnet, 32, 10, activation = 'relu')
convnet = max_pool_2d(convnet, 2)
convnet = conv_2d(convnet, 64, 10, activation = 'relu')
convnet = max_pool_2d(convnet, 2)
convnet = fully_connected(convnet, 1024, activation = 'relu')
convnet = dropout(convnet, 0.8)
convnet = fully_connected(convnet, 2, activation = 'softmax')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
"""
Explanation: Training
I'm going to just copy and paste the CNN structure I used for the MNIST tutorial and see what happens. I'm running this on my own laptop by the way, let's observe the speed.
End of explanation
"""
# Import library
from sklearn.cross_validation import train_test_split
print(x_input.shape)
print(y_input.shape)
# Perform train test split
x_train, x_test, y_train, y_test = train_test_split(x_input, y_input, test_size = 0.1, stratify = y_input)
"""
Explanation: Train Test Split
I'm just going to do a 90 / 10 train test split here
- My training data will consist of roughly 360 training images
- My test data will consist of roughly 40 test images
End of explanation
"""
# Train with data
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 3,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Save model
model.save('model_4_epochs_0.03_compression_99.6.tflearn')
"""
Explanation: Training
Let's try training with 3 epochs.
End of explanation
"""
# Predict on test set, generating probabilities of each class (one-hot style)
y_pred_proba = model.predict(x_test)
y_pred_proba[:10]
# Convert probabilities to direct predictions
y_pred_labels = np.array(['chi' if y[0] >= 0.5 else 'lars' for y in y_pred_proba])
y_pred_labels[:10]
# Convert y_test to direct predictions
y_test_labels = np.array(['chi' if y[0] >= 0.5 else 'lars' for y in y_test])
y_test_labels[:10]
"""
Explanation: Testing
Okay, so that was quite the wild ride. I've gotten something to work and it's giving me 99.99% accuracy (0.0147 loss). The loss is cross entropy (measuring node purity, something to the tune of $D = -\sum_{k=1}^{K}\hat{p}_{mk}\log{\hat{p}_{mk}}$), so that seems like quite a good loss value to have. Let's try to predict on our test set and generate a simple confusion matrix just to make sure we're sane.
End of explanation
"""
# Confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test_labels, y_pred_labels)
"""
Explanation: It looks like it's gotten the first 10 right so far... creepy. Let's make a confusion matrix.
End of explanation
"""
|
ISosnovik/UVA_AML17 | week_2/1.Blocks.ipynb | mit | import automark as am
username = 'sosnovik'
am.register_id(username, ('ivan sosnovik', 'i.sosnovik@uva.nl'))
am.get_progress(username)
"""
Explanation: Assignment 1
Blocks
The main idea of this assignment is to allow you to understand how neural networks (NNs) work. We will cover the main aspects such as the Backpropagation and Optimization Methods. All mathematical operations should be implemented in NumPy only.
The assignment consists of 2 notebooks:
* Blocks - the place where the main building blocks of the NNs should be implemented
* Experiments - a playground. There we will train the models
Note
Some of the concepts below have not (yet) been discussed during the lecture. These will be discussed further during next Wednesday's lecture.
Table of contents
0. Preliminaries
1. Backpropagation
2. Dense layer
3. ReLU nonlinearity
4. Sequential model
5. Loss
6. $L_2$ Regularization & Weight Decay
7. Optimizer
8. Advanced blocks
8.1 Dropout
8.2 MSE Loss
0. Preliminaries
In this assignment we will use classes and their instances (objects). It will allow us to write less code and make it more readable. However, you don't have to worry about the exact implementation of the classes. We did it for you.
But if you are interested in it, here are some useful links:
* The official documentation
* Video by sentdex: Object Oriented Programming Introduction
* Antipatterns in OOP: Stop Writing Classes
The interface of the current blocks is mostly inspired by Torch / PyTorch. You can also take a look at the first implementation of Keras
We use automark to check if the answers are correct. Register first
End of explanation
"""
from __future__ import print_function, absolute_import, division
import numpy as np
"""
Explanation: 1. Backpropagation
Each layer is a function of several parameters (weights): $h = f(x, \theta)$
The layers could be chained. Therefore, the neural network $F$ is a composition of functions:
$$
F = f_k \circ f_{k-1} \circ \dots \circ f_1\\
h_1 = f_1(x, \theta_1)\\
h_2 = f_2(h_1, \theta_2)\\
\dots \\
\dot{t} = f_k(h_{k-1}, \theta_k)
$$
The neural network is trained by minimizing the loss function $\mathcal{L}$. During class we have discussed the squared-loss for linear regression, where we used $\mathcal{L}_{\textrm{reg}} = \tfrac{1}{2}\sum_n (t_n - \dot{t}_n)^2$, where $t_n$ is the target-value of training example $n$ and $\dot{t}_n$ the predicted value by the network/regressor.
Currently, the most effective way of training is a variation of gradient descent called stochastic gradient descent (and its improvements).
The parameters of the $m$-th layer are updated according to the following scheme:
$$
\theta_m \leftarrow \theta_m - \gamma \frac{\partial \mathcal{L}}{\partial \theta_m}
$$
The hyperparameter $\gamma$ is called the learning rate.
As the layers are chained, computing $\partial \mathcal{L}/\partial \theta_m$ in advance is a complicated task. However, it is easily computed with the chain rule once the forward pass is finished.
The above-stated gradient is calculated using the chain rule:
$$
\frac{\partial \mathcal{L}}{\partial \theta_m} =
\frac{\partial \mathcal{L}}{\partial h_m}
\frac{\partial h_m}{\partial \theta_m} =
\frac{\partial \mathcal{L}}{\partial h_{m+1}}
\frac{\partial h_{m+1}}{\partial h_m}
\frac{\partial h_m}{\partial \theta_m} = \dots
$$
Therefore, for each layer we have to be able to calculate several expressions:
$h_m = f_m(h_{m-1}, \theta_m)$ - the forward inference
$\partial h_{m} / \partial h_{m-1}$ - the partial derivative of the output with respect to the input
$\partial h_{m} / \partial \theta_m$ - the partial derivative of the output with respect to the parameters
The algorithm for training a NN using the chain rule in this way is called Backpropagation.
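As a minimal numerical illustration of the chain rule and the update step, consider a single scalar layer $\dot{t} = \theta x$ with the squared loss (a toy sketch I'm adding for illustration, not one of the layers you will implement below):

```python
x, t = 2.0, 1.0           # one training example and its target
theta, gamma = 0.25, 0.1  # weight and learning rate

# Forward pass: t_hat = f(x, theta)
t_hat = theta * x                      # 0.5
loss = 0.5 * (t - t_hat) ** 2          # 0.125

# Backward pass: chain rule dL/dtheta = dL/dt_hat * dt_hat/dtheta
grad_t_hat = t_hat - t                 # -0.5
grad_theta = grad_t_hat * x            # -1.0

# Gradient descent update
theta = theta - gamma * grad_theta     # 0.35
new_loss = 0.5 * (t - theta * x) ** 2  # 0.045, smaller than before the update
```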
2. Dense layer
Dense Layer (Fully-Connected, Multiplicative) is the basic layer of a neural network. It transforms input matrix of size (n_objects, n_in) to the matrix of size (n_objects, n_out) by performing the following operation:
$$
H = XW + b
$$
or in other words:
$$
H_{ij} = \sum\limits_{k=1}^{n_\text{in}} X_{ik}W_{kj} + b_j
$$
Example:
You have a model of just 1 layer. The input is a point in a 3D space, and you want to predict its label: $-1$ or $1$.
You have $75$ objects in your training subset (or batch).
Therefore, $X$ has shape $75 \times 3$ and $Y$ has shape $75 \times 1$. The weight $W$ of the layer has shape $3 \times 1$, and $b$ is a scalar.
End of explanation
"""
def dense_forward(x_input, W, b):
"""Perform the mapping of the input
# Arguments
x_input: input of a dense layer - np.array of size `(n_objects, n_in)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the output of a dense layer
np.array of size `(n_objects, n_out)`
"""
#################
### YOUR CODE ###
#################
return output
"""
Explanation: Implement the forward path:
$$
H = XW + b
$$
End of explanation
"""
X_test = np.array([[1, -1],
[-1, 0]])
W_test = np.array([[4, 0],
[2, -1]])
b_test = np.array([1, 2])
h_test = dense_forward(X_test, W_test, b_test)
print(h_test)
am.test_student_function(username, dense_forward, ['x_input', 'W', 'b'])
"""
Explanation: Let's check your first function. We set the matrices $X, W, b$:
$$
X = \begin{bmatrix}
1 & -1 \\
-1 & 0 \\
\end{bmatrix} \quad
W = \begin{bmatrix}
4 & 0\\
2 & -1\\
\end{bmatrix} \quad
b = \begin{bmatrix}
1 & 2 \\
\end{bmatrix}
$$
And then compute
$$
XW = \begin{bmatrix}
1 & -1 \\
-1 & 0 \\
\end{bmatrix}
\begin{bmatrix}
4 & 0 \\
2 & -1\\
\end{bmatrix} =
\begin{bmatrix}
2 & 1 \\
-4 & 0 \\
\end{bmatrix} \\
XW + b =
\begin{bmatrix}
3 & 3 \\
-3 & 2 \\
\end{bmatrix}
$$
End of explanation
"""
def dense_grad_input(x_input, grad_output, W, b):
"""Calculate the partial derivative of
the loss with respect to the input of the layer
# Arguments
x_input: input of a dense layer - np.array of size `(n_objects, n_in)`
grad_output: partial derivative of the loss function with
respect to the output of the dense layer
np.array of size `(n_objects, n_out)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the partial derivative of the loss with
respect to the input of the layer
np.array of size `(n_objects, n_in)`
"""
#################
### YOUR CODE ###
#################
return grad_input
am.test_student_function(username, dense_grad_input, ['x_input', 'grad_output', 'W', 'b'])
"""
Explanation: Implement the chain rule:
$$
\frac{\partial \mathcal{L}}{\partial X} =
\frac{\partial \mathcal{L}}{\partial H}
\frac{\partial H}{\partial X}
$$
End of explanation
"""
def dense_grad_W(x_input, grad_output, W, b):
"""Calculate the partial derivative of
the loss with respect to W parameter of the layer
# Arguments
x_input: input of a dense layer - np.array of size `(n_objects, n_in)`
grad_output: partial derivative of the loss functions with
respect to the ouput of the dense layer
np.array of size `(n_objects, n_out)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the partial derivative of the loss
with respect to W parameter of the layer
np.array of size `(n_in, n_out)`
"""
#################
### YOUR CODE ###
#################
return grad_W
def dense_grad_b(x_input, grad_output, W, b):
"""Calculate the partial derivative of
the loss with respect to b parameter of the layer
# Arguments
x_input: input of a dense layer - np.array of size `(n_objects, n_in)`
grad_output: partial derivative of the loss functions with
respect to the ouput of the dense layer
np.array of size `(n_objects, n_out)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the partial derivative of the loss
with respect to b parameter of the layer
np.array of size `(n_out,)`
"""
#################
### YOUR CODE ###
#################
return grad_b
am.test_student_function(username, dense_grad_W, ['x_input', 'grad_output', 'W', 'b'])
am.test_student_function(username, dense_grad_b, ['x_input', 'grad_output', 'W', 'b'])
"""
Explanation: Compute the gradient of the weights:
$$
\frac{\partial \mathcal{L}}{\partial W} =
\frac{\partial \mathcal{L}}{\partial H}
\frac{\partial H}{\partial W} \\
\frac{\partial \mathcal{L}}{\partial b} =
\frac{\partial \mathcal{L}}{\partial H}
\frac{\partial H}{\partial b}
$$
End of explanation
"""
class Layer(object):
def __init__(self):
self.training_phase = True
self.output = 0.0
def forward(self, x_input):
self.output = x_input
return self.output
def backward(self, x_input, grad_output):
return grad_output
def get_params(self):
return []
def get_params_gradients(self):
return []
class Dense(Layer):
def __init__(self, n_input, n_output):
super(Dense, self).__init__()
#Randomly initializing the weights from normal distribution
self.W = np.random.normal(size=(n_input, n_output))
self.grad_W = np.zeros_like(self.W)
#initializing the bias with zero
self.b = np.zeros(n_output)
self.grad_b = np.zeros_like(self.b)
def forward(self, x_input):
self.output = dense_forward(x_input, self.W, self.b)
return self.output
def backward(self, x_input, grad_output):
# get gradients of weights
self.grad_W = dense_grad_W(x_input, grad_output, self.W, self.b)
self.grad_b = dense_grad_b(x_input, grad_output, self.W, self.b)
# propagate the gradient backwards
return dense_grad_input(x_input, grad_output, self.W, self.b)
def get_params(self):
return [self.W, self.b]
def get_params_gradients(self):
return [self.grad_W, self.grad_b]
dense_layer = Dense(2, 1)
x_input = np.random.random((3, 2))
y_output = dense_layer.forward(x_input)
print(x_input)
print(y_output)
"""
Explanation: Dense Layer Class
First of all we define the basic class Layer, and then inherit from it.
We implement it for you, but the Dense class is based on the above-written functions.
End of explanation
"""
def relu_forward(x_input):
"""relu nonlinearity
# Arguments
x_input: np.array of size `(n_objects, n_in)`
# Output
the output of relu layer
np.array of size `(n_objects, n_in)`
"""
#################
### YOUR CODE ###
#################
return output
def relu_grad_input(x_input, grad_output):
"""relu nonlinearity gradient.
Calculate the partial derivative of the loss
with respect to the input of the layer
# Arguments
x_input: np.array of size `(n_objects, n_in)`
grad_output: np.array of size `(n_objects, n_in)`
# Output
the partial derivative of the loss
with respect to the input of the layer
np.array of size `(n_objects, n_in)`
"""
#################
### YOUR CODE ###
#################
return grad_input
am.test_student_function(username, relu_forward, ['x_input'])
am.test_student_function(username, relu_grad_input, ['x_input', 'grad_output'])
class ReLU(Layer):
def forward(self, x_input):
self.output = relu_forward(x_input)
return self.output
def backward(self, x_input, grad_output):
return relu_grad_input(x_input, grad_output)
"""
Explanation: 3. ReLU nonlinearity
The dense layer is a linear layer. Combining several linear (dense) layers is always equivalent to a single dense layer; see the proof below:
$$
H_1 = XW_1 + b_1 \\
H_2 = H_1W_2 + b_2 \\
H_2 = (XW_1 + b_1)W_2 + b_2 = X(W_1W_2) + (b_1W_2 + b_2) = XW^* + b^*
$$
A deep model built only from linear layers is therefore equivalent to a single dense layer, and hence ineffective.
In order to overcome this we need to add some non-linearities. Usually they are element-wise (i.e., applied per dimension).
$$
H_1 = XW_1 + b_1 \\
H_2 = f(H_1) \\
H_3 = H_2W_3 + b_3 = f(XW_1 + b_1)W_3 + b_3 \neq XW^* + b^*
$$
Nowadays, one of the most popular nonlinearity is ReLU:
$$
\text{ReLU}(x) = \max(0, x)
$$
It is popular because it is very simple and has an easy gradient.
Example
$$
\text{ReLU} \Big(
\begin{bmatrix}
1 & -0.5 \\
0.3 & 0.1
\end{bmatrix}
\Big) =
\begin{bmatrix}
1 & 0 \\
0.3 & 0.1
\end{bmatrix}
$$
It is a layer without trainable parameters, so you only need to implement two functions to make it work.
End of explanation
"""
class SequentialNN(object):
def __init__(self):
self.layers = []
self.training_phase = True
    def set_training_phase(self, is_training=True):
self.training_phase = is_training
for layer in self.layers:
layer.training_phase = is_training
def add(self, layer):
self.layers.append(layer)
def forward(self, x_input):
self.output = x_input
for layer in self.layers:
self.output = layer.forward(self.output)
return self.output
def backward(self, x_input, grad_output):
inputs = [x_input] + [l.output for l in self.layers[:-1]]
for input_, layer_ in zip(inputs[::-1], self.layers[::-1]):
grad_output = layer_.backward(input_, grad_output)
def get_params(self):
params = []
for layer in self.layers:
params.extend(layer.get_params())
return params
def get_params_gradients(self):
grads = []
for layer in self.layers:
grads.extend(layer.get_params_gradients())
return grads
"""
Explanation: 4. Sequential model
To make working with layers more comfortable, we create SequentialNN, a class that stores all its layers and performs the basic manipulations.
End of explanation
"""
nn = SequentialNN()
nn.add(Dense(10, 4))
nn.add(ReLU())
nn.add(Dense(4, 1))
"""
Explanation: Here is a simple neural network. It takes an input of shape (Any, 10) and passes it through Dense(10, 4), ReLU and Dense(4, 1). The output is a batch of shape (Any, 1).
```
INPUT (Any, 10)
      |
[Dense(10, 4)]
      |
   [ReLU]
      |
[Dense(4, 1)]
      |
OUTPUT (Any, 1)
```
End of explanation
"""
# This is a basic class.
# All other losses will inherit it
class Loss(object):
def __init__(self):
self.output = 0.0
def forward(self, target_pred, target_true):
return self.output
def backward(self, target_pred, target_true):
return np.zeros_like(target_pred).reshape((-1, 1))
"""
Explanation: 5. Loss
Here we will define the loss functions. Each loss should be able to compute its value and compute its gradient with respect to the input.
End of explanation
"""
def hinge_forward(target_pred, target_true):
"""Compute the value of Hinge loss
for a given prediction and the ground truth
# Arguments
target_pred: predictions - np.array of size `(n_objects,)`
target_true: ground truth - np.array of size `(n_objects,)`
# Output
the value of Hinge loss
for a given prediction and the ground truth
scalar
"""
#################
### YOUR CODE ###
#################
return output
am.test_student_function(username, hinge_forward, ['target_pred', 'target_true'])
"""
Explanation: First of all, we will define the Hinge loss function.
$$
\mathcal{L}(T, \dot{T}) = \frac{1}{N}\sum\limits_{k=1}^{N}\max(0, 1 - \dot{t}_k \cdot t_k)
$$
$N$ - number of objects.
$\dot{T}$ and $T$ are vectors of length $N$.
$\dot{t}_k$ is the predicted class of the $k$-th object, $\dot{t}_k \in \mathbb{R}$.
$t_k$ is the real class of this object, $t_k \in \{-1, 1\}$.
This loss function is used to train SVM estimators.
Let's implement the calculation of the loss.
End of explanation
"""
def hinge_grad_input(target_pred, target_true):
"""Compute the partial derivative
of Hinge loss with respect to its input
# Arguments
target_pred: predictions - np.array of size `(n_objects, 1)`
target_true: ground truth - np.array of size `(n_objects, 1)`
# Output
the partial derivative
of Hinge loss with respect to its input
np.array of size `(n_objects, 1)`
"""
#################
### YOUR CODE ###
#################
return grad_input
am.test_student_function(username, hinge_grad_input, ['target_pred', 'target_true'])
class Hinge(Loss):
def forward(self, target_pred, target_true):
self.output = hinge_forward(target_pred, target_true)
return self.output
def backward(self, target_pred, target_true):
return hinge_grad_input(target_pred, target_true).reshape((-1, 1))
"""
Explanation: Now you should compute the gradient of the loss function with respect to its input. It is a vector with the same shape as the input.
$$
\frac{\partial \mathcal{L}}{\partial \dot{T}} =
\begin{bmatrix}
\frac{\partial \mathcal{L}}{\partial \dot{t}_1} \\
\frac{\partial \mathcal{L}}{\partial \dot{t}_2} \\
\vdots \\
\frac{\partial \mathcal{L}}{\partial \dot{t}_N}
\end{bmatrix}
$$
End of explanation
"""
def l2_regularizer(weight_decay, weights):
"""Compute the L2 regularization term
# Arguments
weight_decay: float
weights: list of arrays of different shapes
# Output
sum of the L2 norms of the input weights
scalar
"""
#################
### YOUR CODE ###
#################
return 0.0
am.test_student_function(username, l2_regularizer, ['weight_decay', 'weights'])
"""
Explanation: 6. $L_2$ Regularization & Weight Decay
There are several ways to regularize a model. Regularization is used to avoid learning models that behave well on the training subset but fail during testing. We will implement $L_2$ regularization, also known as weight decay.
The key idea of $L_2$ regularization is to add an extra term to the loss functions:
$$
\mathcal{L}^* = \mathcal{L} + \frac{\lambda}{2} \|\theta\|^2_2
$$
For some cases only the weights of a single layer are penalized, but we will penalize all the weights.
$$
\mathcal{L}^* = \mathcal{L} + \frac{\lambda}{2} \sum\limits_{m=1}^k \|\theta_m\|^2_2
$$
Therefore, the updating scheme is also modified
$$
\theta_m \leftarrow \theta_m - \gamma \frac{\partial \mathcal{L}^*}{\partial \theta_m} \\
\frac{\partial \mathcal{L}^*}{\partial \theta_m} = \frac{\partial \mathcal{L}}{\partial \theta_m} + \lambda \theta_m \\
\theta_m \leftarrow \theta_m - \gamma \frac{\partial \mathcal{L}}{\partial \theta_m} - \lambda \theta_m
$$
As you can see, the updating scheme also gets an extra term; $\lambda$ is the coefficient of the weight decay.
The update of the weights will be implemented later in the Optimizer class. Here you should implement the computation of the $L_2$ norm of the weights from the given list.
$$
f(\lambda, [\theta_1, \theta_2, \dots, \theta_k]) = \frac{\lambda}{2} \sum\limits_{m=1}^k \|\theta_m\|^2_2
$$
End of explanation
"""
class Optimizer(object):
'''This is a basic class.
All other optimizers will inherit it
'''
def __init__(self, model, lr=0.01, weight_decay=0.0):
self.model = model
self.lr = lr
self.weight_decay = weight_decay
def update_params(self):
pass
class SGD(Optimizer):
'''Stochastic gradient descent optimizer
https://en.wikipedia.org/wiki/Stochastic_gradient_descent
'''
def update_params(self):
weights = self.model.get_params()
grads = self.model.get_params_gradients()
for w, dw in zip(weights, grads):
update = self.lr * dw + self.weight_decay * w
# it writes the result to the previous variable instead of copying
np.subtract(w, update, out=w)
"""
Explanation: 7. Optimizer
We implement the optimizer to perform the updates of the weights according to the certain scheme.
End of explanation
"""
def dropout_generate_mask(shape, drop_rate):
"""Generate mask
# Arguments
shape: shape of the input array
tuple
drop_rate: probability of the element
to be multiplied by 0
scalar
# Output
binary mask
"""
#################
### YOUR CODE ###
#################
return mask
"""
Explanation: 8. Advanced blocks
This is an optional section. If you liked the process of understanding NNs by implementing them from scratch, here are several more tasks for you.
8.1 Dropout
Dropout is a method of regularization. It could also be interpreted as an augmentation method. The key idea is to randomly drop some values of the input tensor, which helps avoid overfitting of the model. Its behaviour differs between training and testing.
First of all, you should implement the method that calculates the binary mask. The binary mask has the same shape as the input: each element is 0 with probability drop_rate and 1 with probability 1.0 - drop_rate. Note that the keep probability $p$ from the original dropout paper corresponds to 1.0 - drop_rate.
End of explanation
"""
def dropout_forward(x_input, mask, drop_rate, training_phase):
"""Perform the mapping of the input
# Arguments
x_input: input of the layer
np.array of size `(n_objects, n_in)`
mask: binary mask
np.array of size `(n_objects, n_in)`
drop_rate: probability of the element to be multiplied by 0
scalar
    training_phase: bool, either `True` (training) or `False` (testing)
# Output
the output of the dropout layer
np.array of size `(n_objects, n_in)`
"""
#################
### YOUR CODE ###
#################
return output
am.test_student_function(username, dropout_forward, ['x_input', 'mask', 'drop_rate', 'training_phase'])
"""
Explanation: Now implement the above-described operation of mapping.
End of explanation
"""
def dropout_grad_input(x_input, grad_output, mask):
"""Calculate the partial derivative of
the loss with respect to the input of the layer
# Arguments
x_input: input of a dense layer - np.array of size `(n_objects, n_in)`
grad_output: partial derivative of the loss functions with
    respect to the output of the dropout layer
np.array of size `(n_objects, n_in)`
mask: binary mask
np.array of size `(n_objects, n_in)`
# Output
the partial derivative of the loss with
respect to the input of the layer
np.array of size `(n_objects, n_in)`
"""
#################
### YOUR CODE ###
#################
return grad_input
am.test_student_function(username, dropout_grad_input, ['x_input', 'grad_output', 'mask'])
class Dropout(Layer):
def __init__(self, drop_rate):
super(Dropout, self).__init__()
self.drop_rate = drop_rate
self.mask = 1.0
def forward(self, x_input):
if self.training_phase:
self.mask = dropout_generate_mask(x_input.shape, self.drop_rate)
self.output = dropout_forward(x_input, self.mask,
self.drop_rate, self.training_phase)
return self.output
def backward(self, x_input, grad_output):
grad_input = dropout_grad_input(x_input, grad_output, self.mask)
return grad_input
"""
Explanation: And, as usual, implement the calculation of the partial derivative of the loss function with respect to the input of a layer
End of explanation
"""
def mse_forward(target_pred, target_true):
"""Compute the value of MSE loss
for a given prediction and the ground truth
# Arguments
target_pred: predictions - np.array of size `(n_objects, 1)`
target_true: ground truth - np.array of size `(n_objects, 1)`
# Output
the value of MSE loss
for a given prediction and the ground truth
scalar
"""
#################
### YOUR CODE ###
#################
return output
am.test_student_function(username, mse_forward, ['target_pred', 'target_true'])
"""
Explanation: 8.2 MSE Loss
MSE (Mean Squared Error) is a popular loss for the regression tasks.
$$
\mathcal{L}(T, \dot{T}) = \frac{1}{2N}\sum\limits_{k=1}^N(t_k - \dot{t}_k)^2
$$
$N$ - number of objects.
$\dot{T}$ and $T$ are vectors of length $N$.
$\dot{t}_k$ is the predicted target value of the $k$-th object, $\dot{t}_k \in \mathbb{R}$.
$t_k$ is the real target value of the $k$-th object, $t_k \in \mathbb{R}$.
This loss function is used to train regressors.
Let's implement the calculation of the loss.
End of explanation
"""
def mse_grad_input(target_pred, target_true):
"""Compute the partial derivative
of MSE loss with respect to its input
# Arguments
target_pred: predictions - np.array of size `(n_objects, 1)`
target_true: ground truth - np.array of size `(n_objects, 1)`
# Output
the partial derivative
of MSE loss with respect to its input
np.array of size `(n_objects, 1)`
"""
#################
### YOUR CODE ###
#################
return grad_input
am.test_student_function(username, mse_grad_input, ['target_pred', 'target_true'])
class MSE(Loss):
def forward(self, target_pred, target_true):
self.output = mse_forward(target_pred, target_true)
return self.output
def backward(self, target_pred, target_true):
return mse_grad_input(target_pred, target_true).reshape((-1, 1))
"""
Explanation: Now you should compute the gradient of the loss function with respect to its input.
End of explanation
"""
am.get_progress(username)
"""
Explanation: Let's check the progress one more time
End of explanation
"""
# Source: DeepLearningUB/DeepLearningMaster, "3. Tensorflow programming model.ipynb" (MIT license)
import tensorflow as tf
print(tf.__version__)
# Basic constant operations = to assign a value to a tensor
a = tf.constant(2)
b = tf.constant(3)
c = a+b
d = a*b
e = c+d
# non interactive session
with tf.Session() as sess:
print("a=2")
print("b=3")
print("(a+b)+(a*b) = %i" % sess.run(e))
"""
Explanation: Tensorflow
When starting off with deep learning, one of the first questions to ask is, which framework to learn?
Common choices include Theano, TensorFlow, Torch, and Keras. All of these choices have their own pros and cons and have their own way of doing things.
From The Anatomy of Deep Learning Frameworks
The core components of a deep learning framework we must consider are:
How Tensor Objects are defined. At the heart of the framework is the tensor object. A tensor is a generalization of a matrix to n-dimensions. We need a Tensor Object that supports storing the data in form of tensors. Not just that, we would like the object to be able to convert other data types (images, text, video) into tensors and back, supporting indexing, overloading operators, having a space efficient way to store the data and so on.
How Operations on the Tensor Object are defined. A neural network can be considered as a series of Operations performed on an input tensor to give an output.
The use of a Computation Graph and its Optimizations. Instead of implementing operations as functions, they are usually implemented as classes. This allows us to store more information about the operation like calculated shape of the output (useful for sanity checks), how to compute the gradient or the gradient itself (for the auto-differentiation), have ways to be able to decide whether to compute the op on GPU or CPU and so on. The power of neural networks lies in the ability to chain multiple operations to form a powerful approximator. Therefore, the standard use case is that you can initialize a tensor, perform actions after actions on them and finally interpret the resulting tensor as labels or real values. Unfortunately, as you chain more and more operations together, several issues arise that can drastically slow down your code and introduce bugs as well. There are more such issues and it becomes necessary to be able to get a bigger picture to even notice that these issues exist. We need a way to optimize the resultant chain of operations for both space and time. A Computation Graph which is basically an object that contains links to the instances of various Ops and the relations between which operation takes the output of which operation as well as additional information.
The use of Auto-differentiation tools. Another benefit of having the computational graph is that calculating gradients used in the learning phase becomes modular and straightforward to compute.
The use of BLAS/cuBLAS and cuDNN extensions for maximizing performance. BLAS or Basic Linear Algebra Subprograms are a collection of optimized matrix operations, initially written in Fortran. These can be leveraged to do very fast matrix (tensor) operations and can provide significant speedups. There are many other software packages like Intel MKL, ATLAS which also perform similar functions. BLAS packages are usually optimized assuming that the instructions will be run on a CPU. In the deep learning situation, this is not the case and BLAS may not be able to fully exploit the parallelism offered by GPGPUs. To solve this issue, NVIDIA has released cuBLAS which is optimized for GPUs. This is now included with the CUDA toolkit.
The computational model for Tensorflow (tf) is a directed graph.
Nodes are functions (operations in tf terminology) and edges are tensors.
Tensors are multidimensional data arrays.
$$f(a,b) = (a*b) + (a+b)$$
There are several reasons for this design:
+ The most important is that it is a good way to split up computation into small, easily differentiable pieces. tf uses automatic differentiation to automatically compute the derivative of every node with respect to any other node that can affect the first node's output.
+ The graph is also a convenient way of distributing computation across multiple CPUs, GPUs, etc.
The primary API of tf (written in C++) is accessed through Python.
There are different way of installing tf:
Pip install: May impact existing Python programs on your machine.
Virtualenv install: Install TensorFlow in its own directory, not impacting any existing Python programs on your machine.
Anaconda install (Windows: only Python 3, not Python 2.7): Install TensorFlow in its own environment for those running the Anaconda Python distribution. Does not impact existing Python programs on your machine.
Docker install: Run TensorFlow in a Docker container isolated from all other programs on your machine. It allows coexisting tf versions.
Installing from sources: Install TensorFlow by building a pip wheel that you then install using pip.
Our preferred way is Docker install.
Fundamentals
tf computation graphs are described in code with tf API.
End of explanation
"""
a = tf.zeros([2,3], tf.int32)
b = tf.ones([2,3], tf.int32)
c = tf.fill([3,3], 23.9)
d = tf.range(0,10,1)
with tf.Session() as sess:
print(sess.run(a))
print(sess.run(b))
print(sess.run(c))
print(sess.run(d))
"""
Explanation: You can create initialized tensors in many ways:
End of explanation
"""
a = tf.random_normal([2,2], 0.0, 1.0)
b = tf.random_uniform([2,2], 0.0, 1.0)
with tf.Session() as sess:
print(sess.run(a))
print(sess.run(b))
"""
Explanation: tf sequences are not iterable!
We can also generate random variables:
End of explanation
"""
idx = tf.constant(20)
idx_list = tf.range(idx) # 0~19
shuffle = tf.random_shuffle(idx_list)
with tf.Session() as sess:
a, b = sess.run([idx_list, shuffle])
    print(a)
    print(b)
# Basic operations with variable graph input
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
c = tf.add(a,b)
d = tf.mul(a,b)
e = tf.add(c,d)
values = {a: 5, b: 3}
# non interactive session
with tf.Session() as sess:
print('a = %i' % sess.run(a, values))
print('b = %i' % sess.run(b, values))
print("(a+b)+(a*b) = %i" % sess.run(e, values))
"""
Explanation: How to generate shuffled random numbers in tensorflow?
End of explanation
"""
# Basic operations with variable as graph input
a = tf.placeholder(tf.int16,shape=[2])
b = tf.placeholder(tf.int16,shape=[2])
c = tf.add(a,b)
d = tf.mul(a,b)
e = tf.add(c,d)
variables = {a: [2,2], b: [3,3]}
# non interactive session
with tf.Session() as sess:
print(sess.run(a, variables))
print(sess.run(b, variables))
print(sess.run(e, variables))
"""
Explanation: A computational graph is a series of functions chained together, each passing its output to zero, one or more functions further along the chain.
In this way we can construct very complex transformations on data by using a library of simple functions.
Nodes represent some sort of computation being done in the graph context.
Edges are the actual values (tensors) that get passed to and from nodes.
The values flowing into the graph can come from different sources: from a different graph, from a file, entered by the client, etc. The input nodes simply pass on values given to them.
The other nodes take values, apply an operation and output their result.
Values running on edges are tensors:
End of explanation
"""
# your code here
"""
Explanation: Exercise
Implement this computational graph:
End of explanation
"""
# graph definition
# we can assign a name to every node
a = tf.placeholder(tf.int32, name='input_a')
b = tf.placeholder(tf.int32, name='input_b')
c = tf.add(a,b,name='add_1')
d = tf.mul(a,b,name='mul_1')
e = tf.add(c,d,name='add_2')
values = {a: 5, b: 3}
# now we can run the graph in an interactive session
sess = tf.Session()
print(sess.run(e, values))
sess.close()
"""
Explanation: There are certein connections between nodes that are not allowed: you cannot create circular dependencies.
Dependency: Any node A that is required for the computation of a later node B is said to be a dependency of B.
The main reason is that dependencies create endless feedback loops.
There is one exception to this rule: recurrent neural networks. In this case tf simulates this kind of dependency by copying a finite number of versions of the graph, placing them side-by-side, and feeding them into another sequence. This process is referred to as unrolling the graph.
Keeping track of dependencies is a basic feature of tf. Let's suppose that we want to compute the output value of the mul node. We can see in the unrolled graph that it is not necessary to compute the full graph to get the output of that node. But how do we ensure that we only compute the necessary nodes?
It's pretty easy:
+ Build a list for each node with all nodes it directly depends on (not indirectly!).
+ Initialize an empty stack, which will eventually hold all the nodes we want to compute.
+ Put the node you want to get the output from.
+ Recursively look at its dependency list and add to the stack the nodes it depends on, until there are no dependencies left to run; in this way we guarantee that we have all the nodes we need.
The stack will be ordered in a way that we are guaranteed to be able to run each node in the stack as we iterate through it.
The main thing to look out for is to keep track of nodes that were already computed and to store their value in memory.
As we have seen in previous code, tf workflow is a two-step process:
Define the computation graph.
Run the graph with data.
End of explanation
"""
# cleaning the tf graph space
tf.reset_default_graph()
a = tf.placeholder(tf.int16, name='input_a')
b = tf.placeholder(tf.int16, name='input_b')
c = tf.add(a,b,name='add_1')
d = tf.mul(a,b,name='mul_1')
e = tf.add(c,d,name='add_2')
values = {a: 5, b: 3}
# now we can run the graph
# graphs are run by invoking Session objects
session = tf.Session()
# when you are passing an operation to 'run' you are
# asking to run all operations necessary to compute that node
# you can save the value of the node in a Python var
output = session.run(e, values)
print(output)
# now let's visualize the graph
# SummaryWriter is an object where we can save information
# about the execution of the computational graph
writer = tf.train.SummaryWriter('my_graph', session.graph)
writer.close()
# closing interactive session
session.close()
print(output)
"""
Explanation: tf has a very useful tool: tensor-board. Let's see how to use it.
End of explanation
"""
# your code here
"""
Explanation: Open a terminal and type in:
tensorboard --logdir="my_graph"
This starts a tensorboard server on port 6006. There, click on the Graphs link. You can see that each of the nodes is labeled based on the name parameter you passed into each operation.
Exercise
Implement and visualize this graph for a constant tensor [5,3]:
Check these functions in the tf official documentation (https://www.tensorflow.org/): tf.reduce_prod, tf.reduce_sum.
End of explanation
"""
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
a = tf.placeholder(tf.int16, shape=[2,2], name='input_a')
shape = tf.shape(a)
session = tf.Session()
print(session.run(shape))
session.close()
"""
Explanation: tf input data
tf can take several Python var types that are automatically converted to tensors:
tf.constant([5,3], name='input_a')
But tf has a plethora of other data types: tf.int16, tf.quint8, etc.
tf is tightly integrated with NumPy. In fact, tf data types are based on those from NumPy. Tensors returned from Session.run are NumPy arrays. NumPy arrays are the recommended way of specifying tensors.
The shape of a tensor describes both the number of dimensions in the tensor and the length of each dimension. In addition to being able to specify fixed lengths for each dimension, you can also assign a flexible length by passing None as a dimension's value.
End of explanation
"""
tf.reset_default_graph()
list_a_values = [1,2,3]
a = tf.placeholder(tf.int16)
b = a * 2
with tf.Session() as sess:
for a_value in list_a_values:
print(sess.run(b,{a: a_value}))
"""
Explanation: We can feed data points to a placeholder by iterating through the data set:
End of explanation
"""
import tensorflow as tf
tf.reset_default_graph()
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
c = a+b
d = a*b
e = c+d
variables = {a: 5, b: 3}
with tf.Session() as sess:
print("(a+b)+(a*b) = %i" % sess.run(e, variables))
"""
Explanation: tf operations
tf overloads common mathematical operations:
End of explanation
"""
import tensorflow as tf
tf.reset_default_graph()
a = tf.Variable(3,name="my_var")
b = a.assign(tf.mul(2,a)) # variables are objects, not ops.
# The statement a.assign(...) does not actually assign anything to a,
# but rather creates a tf.Operation that you have to explicitly
# run to update the variable.
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
print "a:",a.eval() # variables are objects, not ops.
print "b:", sess.run(b)
print "b:", sess.run(b)
print "b:", sess.run(b)
tf.reset_default_graph()
a = tf.Variable(3,name="my_var")
b = a.assign(tf.mul(2,a))
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
    print(a.eval())
tf.reset_default_graph()
a = tf.Variable(3,name="my_var")
b = a.assign(tf.mul(2,a))
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
sess.run(b)
    print(a.eval())
"""
Explanation: There are more Tensorflow Operations
tf graphs
Creating a graph is simple:
python
import tensorflow as tf
g = tf.Graph()
Once the graph is initialized we can attach operation to it by using the Graph.as_default() method:
python
with g.as_default():
a = tf.mul(2,3)
...
tf automatically creates a graph at the beginning and assigns it to be the default. Thus, if not using Graph.as_default() any operation will be automatically placed in the default graph.
Creating multiple graphs can be useful if you are defining multiple models that do not have interdependencies:
```python
g1 = tf.Graph()
g2 = tf.Graph()
with g1.as_default():
...
with g2.as_default():
...
```
tf Variables
Tensor and Operation objects are immutable, but we need a mechanism to save changing values over time.
This is accomplished with Variable objects, which contain mutable tensor values that persist accross multiple calls to Session.run().
Variables can be used anywhere you might use a tensor.
tf has a number of helper operations to initialize variables: tf.zeros(), tf.ones(), tf.random_uniform(), tf.random_normal(), etc.
Variable objects live in a Graph but their state is managed by a Session. Because of this, they need an extra step for initialization:
```python
import tensorflow as tf
tf.reset_default_graph()
a = tf.Variable(3,name="my_var")
b = tf.add(5,a)
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
...
```
In order to change the value of a Variable we can use the Variable.assign() method:
End of explanation
"""
import tensorflow as tf
tf.reset_default_graph()
a = tf.Variable(3,name="my_var")
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
print(sess.run(a.assign_add(1)))
print(sess.run(a.assign_sub(1)))
print(sess.run(a.assign_sub(1)))
sess.run(tf.initialize_all_variables())
print(sess.run(a))
"""
Explanation: We can increment and decrement variables:
End of explanation
"""
tf.reset_default_graph()
a = tf.Variable(10)
sess1 = tf.Session()
sess2 = tf.Session()
sess1.run(tf.initialize_all_variables())
sess2.run(tf.initialize_all_variables())
print(sess1.run(a.assign_add(10)))
print(sess2.run(a.assign_sub(2)))
sess1.close()
sess2.close()
"""
Explanation: Some classes in tf (e.g. Optimizer) are able to automatically change variable values without being explicitly asked to do so.
Tensorflow sessions maintain values separately: each Session can have its own current value for a variable defined in the graph:
End of explanation
"""
import tensorflow as tf
tf.reset_default_graph()
with tf.name_scope("Scope_A"):
a = tf.add(1, 2, name="A_add")
b = tf.mul(a, 3, name="A_mul")
with tf.name_scope("Scope_B"):
c = tf.add(4, 5, name="B_add")
d = tf.mul(c, 6, name="B_mul")
e = tf.add(b, d, name="output")
writer = tf.train.SummaryWriter('./name_scope_1', graph=tf.get_default_graph())
writer.close()
"""
Explanation: tf name scopes
tf offers a tool to help organize your graphs: name scopes.
Name scopes allow you to group operations into larger, named blocks. This is very useful for visualizing complex models with tensorboard.
End of explanation
"""
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
# Explicitly create a Graph object
graph = tf.Graph()
with graph.as_default():
with tf.name_scope("variables"):
# your code here
# Primary transformation Operations
with tf.name_scope("transformation"):
# Separate input layer
with tf.name_scope("input"):
# your code here
# Separate middle layer
with tf.name_scope("intermediate_layer"):
# your code here
# Separate output layer
# your code here
with tf.name_scope("update"):
# Increments the total_output Variable by the latest output
# your code here
# Summary Operations
with tf.name_scope("summaries"):
avg = tf.div(update_total, tf.cast(increment_step, tf.float32), name="average")
# Creates summaries for output node
tf.scalar_summary(b'Output', output, name="output_summary")
tf.scalar_summary(b'Sum of outputs over time', update_total, name="total_summary")
tf.scalar_summary(b'Average of outputs over time', avg, name="average_summary")
# Global Variables and Operations
with tf.name_scope("global_ops"):
# Initialization Op
init = tf.initialize_all_variables()
# Merge all summaries into one Operation
merged_summaries = tf.merge_all_summaries()
# Start an interactive Session, using the explicitly created Graph
sess = tf.Session(graph=graph)
# Open a SummaryWriter to save summaries
writer = tf.train.SummaryWriter('./improved_graph', graph)
# Initialize Variables
sess.run(init)
"""
Explanation: We can start tensorboard to see the graph: tensorboard --logdir="./name_scope_1".
You can expand the name scope boxes by clicking +.
Exercise
Let's built and visualize and complex model:
Our inputs will be placeholders.
The model will take in a single vector of any lenght.
The graph will be segmented in name scopes.
We will accumulate the total value of all outputs over time.
At each run, we are going to save the output of the graph, the accumulated total of all outputs, and the average value of all outputs to disk for use in tensorboard.
End of explanation
"""
def run_graph(input_tensor):
"""
Helper function; runs the graph with given input tensor and saves summaries
"""
feed_dict = {a: input_tensor}
out, step, summary = sess.run([output, increment_step, merged_summaries],
feed_dict=feed_dict)
writer.add_summary(summary, global_step=step)
# Run the graph with various inputs
run_graph([2,8])
run_graph([3,1,3,3])
run_graph([8])
run_graph([1,2,3])
run_graph([11,4])
run_graph([4,1])
run_graph([7,3,1])
run_graph([6,3])
run_graph([0,2])
run_graph([4,5,6])
# Write the summaries to disk
writer.flush()
# Close the SummaryWriter
writer.close()
# Close the session
sess.close()
"""
Explanation: Let's write a function to run the graph several times:
End of explanation
"""
|
manojkumar-github/NLP-TextAnalytics | IntentMatching/sentence_similarity_gensim_wmd.ipynb | mit | # Importing the dependecies
import gensim
"""
Explanation: Short-Sentence Similarity using Gensim Word Mover's Distance
1. Gensim Word Mover's model
Reference:
Note: refer to the other similarity functions as well
https://radimrehurek.com/gensim/models/word2vec.html
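Before loading the (large) pre-trained model, here is a minimal pure-NumPy sketch of the intuition behind WMD, using made-up 2-d vectors. Gensim's `wmdistance` solves the full optimal-transport problem; the greedy version below is only a lower bound, but it shows why semantically close sentences score lower:

```python
import numpy as np

# Hypothetical 2-d "word vectors" (a real run uses the GoogleNews
# vectors loaded below; the names and values here are made up).
vecs = {
    "read":  np.array([1.0, 0.0]),
    "news":  np.array([0.9, 0.3]),
    "feed":  np.array([0.8, 0.4]),
    "hello": np.array([-1.0, 0.2]),
}

def relaxed_wmd(tokens_a, tokens_b):
    """Greedy lower bound on WMD: move each word of A to its nearest
    word of B (Euclidean distance) and average the travel cost."""
    return float(np.mean([
        min(np.linalg.norm(vecs[a] - vecs[b]) for b in tokens_b)
        for a in tokens_a
    ]))

print(relaxed_wmd(["read", "news"], ["read", "feed"]))  # small: similar intents
print(relaxed_wmd(["hello"], ["read", "feed"]))         # larger: unrelated intent
```

The intent-matching loop below does the same thing with the real vectors: compute a distance from the user input to every candidate intent, then pick the minimum.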
End of explanation
"""
# Load the word2vec model; GoogleNews vectors are used here. Download the file and load it from a local path.
model = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
# intent list for the application
intent_list = ['Read the news', 'Hello', 'Get my news', 'Get feed', 'Read my feed']
"""
Explanation: Load Google's pre-trained model
End of explanation
"""
print ("provide your intent")
input_intent = input()
intent_similarity_map = dict()
print ("\n")
for each in intent_list:
    # calculate the distance between the two sentences using the WMD algorithm (Word Mover's Distance)
    distance = model.wmdistance(each, input_intent)
    # map the values into a dictionary
    intent_similarity_map[each] = distance
print (intent_similarity_map)
print ("\n")
print ("Selected Intent for the given user input")
# pick the intent with the minimum distance
print (min(intent_similarity_map, key = intent_similarity_map.get))
"""
Explanation: Algorithm
End of explanation
"""
|
Heerozh/deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
words = list(set(text))
int_to_vocab = {i: v for i, v in enumerate(words)}
vocab_to_int = {v: i for i, v in enumerate(words)}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation||',
';': '||Semicolon||',
'!': '||Exclamation||',
'?': '||Question||',
'(': '||LParentheses||',
')': '||RParentheses||',
'--': '||Dash||',
'\n': '||Return||',
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
xinput = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
lnrate = tf.placeholder(tf.float32)
return xinput, targets, lnrate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
    # Build separate cell instances: reusing one cell object in a
    # MultiRNNCell causes variable-sharing errors in TF >= 1.1
    cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(2)])
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, 'initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, 'final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
nbatchs = (len(int_text) - 1) // seq_length // batch_size
rtn = np.zeros((nbatchs, 2, batch_size, seq_length), dtype=np.int32)
for i in range(batch_size):
for j in range(nbatchs):
start = (i*nbatchs+j)
batch_i = start % nbatchs
start *= seq_length
rtn[batch_i][0][i] = int_text[start : start+seq_length]
rtn[batch_i][1][i] = int_text[start+1 : start+seq_length+1]
return rtn
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
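As a cross-check of the implementation above, the same batches can be built with a NumPy reshape and split, including the wrap-around of the very last target back to the first input. This is a sketch of an alternative approach, not the graded solution; the function name is made up:

```python
import numpy as np

def get_batches_reshape(int_text, batch_size, seq_length):
    """Build (n_batches, 2, batch_size, seq_length) batches via reshape."""
    n_batches = (len(int_text) - 1) // (batch_size * seq_length)
    n_words = n_batches * batch_size * seq_length
    xs = np.array(int_text[:n_words])
    ys = np.array(int_text[1:n_words + 1])
    ys[-1] = int_text[0]  # wrap the last target around to the first input
    x = np.split(xs.reshape(batch_size, -1), n_batches, axis=1)
    y = np.split(ys.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x, y)))

batches = get_batches_reshape(list(range(1, 21)), 3, 2)
print(batches.shape)  # (3, 2, 3, 2)
```

Running it on the documented example reproduces the arrays shown above, including the final [18, 1] target pair.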
End of explanation
"""
# Number of Epochs
num_epochs = 66
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = (len(int_text)-1) // batch_size // seq_length * 2
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
#writer = tf.summary.FileWriter('./summary/1')
with tf.Session(graph=train_graph) as sess:
#writer.add_graph(sess.graph)
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
xinput = loaded_graph.get_tensor_by_name("input:0")
initial = loaded_graph.get_tensor_by_name("initial_state:0")
final = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return xinput, initial, final, probs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
    choice = np.random.choice(list(int_to_vocab.keys()), p=probabilities)
    return int_to_vocab[choice]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
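A common variation on plain sampling (not required by the project) is to sample with a temperature, sharpening or flattening the distribution before the draw. The helper below is a NumPy-only sketch and its name is made up:

```python
import numpy as np

def pick_word_with_temperature(probabilities, int_to_vocab, temperature=0.7):
    """Temperature < 1 sharpens the distribution (safer, more repetitive
    text); temperature > 1 flattens it (more surprising text)."""
    logits = np.log(np.asarray(probabilities) + 1e-10) / temperature
    p = np.exp(logits) / np.sum(np.exp(logits))
    choice = np.random.choice(list(int_to_vocab.keys()), p=p)
    return int_to_vocab[choice]
```

With temperature=1.0 this reduces to sampling directly from the network's probabilities, as pick_word does.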
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
CyberCRI/dataanalysis-herocoli-redmetrics | v1.52/Tests/1.5 Google form analysis - PCA.ipynb | cc0-1.0 | %run "../Functions/1. Google form analysis.ipynb"
"""
Explanation: Google form analysis tests
Purpose: determine in what extent the current data can accurately describe correlations, underlying factors on the score.
Especially concerning the 'before' groups: are there underlying groups explaining the discrepancies in score? Are those groups tied to certain questions?
Table of Contents
Sorted total answers to questions
Cross-samples t-tests
biologists vs non-biologists
biologists vs non-biologists before
PCAs
<br>
<br>
<br>
<br>
End of explanation
"""
binarized = getAllBinarized()
score = np.dot(binarized,np.ones(len(binarized.columns)))
dimensions = binarized.shape[1]
dimensions
binarized['class'] = 'default'
# split data table into data X and class labels y
X = binarized.iloc[:,0:dimensions].values
y = binarized.iloc[:,dimensions].values
"""
Explanation: PCAs
<a id=PCAs />
Purpose: find out which questions carry the most weight in the computation of the score.
Other leads: LDA, ANOVA.
Source for PCA: http://sebastianraschka.com/Articles/2015_pca_in_3_steps.html
End of explanation
"""
from sklearn.preprocessing import StandardScaler
X_std = StandardScaler().fit_transform(X)
"""
Explanation: Standardizing
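The StandardScaler call above is equivalent to subtracting each column's mean and dividing by its (population) standard deviation. A quick NumPy sanity check of that equivalence, using random stand-in data:

```python
import numpy as np

X = np.random.RandomState(0).rand(100, 5)  # stand-in for the answer matrix
X_manual = (X - X.mean(axis=0)) / X.std(axis=0)

# Each column now has mean ~0 and standard deviation ~1
print(np.allclose(X_manual.mean(axis=0), 0), np.allclose(X_manual.std(axis=0), 1))
```

Standardizing matters here because PCA on unscaled binary/mixed columns would let high-variance questions dominate the components.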
End of explanation
"""
mean_vec = np.mean(X_std, axis=0)
cov_mat = (X_std - mean_vec).T.dot((X_std - mean_vec)) / (X_std.shape[0]-1)
print('Covariance matrix \n%s' %cov_mat)
print('NumPy covariance matrix: \n%s' %np.cov(X_std.T))
"""
Explanation: 1 - Eigendecomposition - Computing Eigenvectors and Eigenvalues
Covariance Matrix
End of explanation
"""
cov_mat = np.cov(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
#print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
"""
Explanation: eigendecomposition on the covariance matrix:
End of explanation
"""
cor_mat1 = np.corrcoef(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cor_mat1)
#print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
"""
Explanation: Correlation Matrix
Eigendecomposition of the standardized data based on the correlation matrix:
End of explanation
"""
u,s,v = np.linalg.svd(X_std.T)
s
"""
Explanation: Eigendecomposition of the raw data based on the correlation matrix:
cor_mat2 = np.corrcoef(binarized.T)
eig_vals, eig_vecs = np.linalg.eig(cor_mat2)
print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
Singular Value Decomposition
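The singular values s computed above are directly related to the eigenvalues found earlier: for the standardized matrix, eig_vals = s**2 / (n - 1). A small NumPy check of that identity on random stand-in data:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 4)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

u, s, v = np.linalg.svd(X_std.T)
eig_vals = np.linalg.eigvalsh(np.cov(X_std.T))[::-1]  # descending order

print(np.allclose(s**2 / (X_std.shape[0] - 1), eig_vals))  # True
```

This is why SVD is often preferred in practice: it gives the same principal components without explicitly forming the covariance matrix, which is numerically more stable.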
End of explanation
"""
for ev in eig_vecs:
np.testing.assert_array_almost_equal(1.0, np.linalg.norm(ev))
print('Everything ok!')
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort()
eig_pairs.reverse()
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in descending order:')
for i in eig_pairs:
print(i[0])
tot = sum(eig_vals)
var_exp = [(i / tot)*100 for i in sorted(eig_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
plt.bar(range(dimensions), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(dimensions), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
var_exp[:5]
cum_var_exp[:5]
"""
Explanation: 2 - Selecting Principal Components
End of explanation
"""
matrix_w = np.hstack((eig_pairs[0][1].reshape(dimensions,1),
eig_pairs[1][1].reshape(dimensions,1)))
print('Matrix W:\n', matrix_w)
"""
Explanation: Projection Matrix
End of explanation
"""
gform.columns
colors = ('blue','red','green','magenta','cyan','purple','yellow','black','white')
len(colors)
Y = X_std.dot(matrix_w)
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
ax = plt.subplot(111)
plt.scatter(Y[:, 0], Y[:, 1])
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title("base PCA")
plt.show()
"""
Explanation: 3 - Projection Onto the New Feature Space
End of explanation
"""
# classNames is a tuple
def classifyAndPlot(classNames, classes, title = '', rainbow = False):
defaultClassName = ''
sampleSize = 0
for classIndex in range(0, len(classes)):
sampleSize += len(classes[classIndex])
if(sampleSize < gform.shape[0]):
if(len(classNames) == len(classes) + 1):
defaultClassName = classNames[-1]
else:
defaultClassName = 'other'
classNames.append(defaultClassName)
for labelIndex in binarized.index:
i = int(labelIndex[len('corrections'):])
isUserSet = False
for classIndex in range(0, len(classes)):
if(gform.iloc[i][localplayerguidkey] in classes[classIndex].values):
binarized.loc[labelIndex,'class'] = classNames[classIndex]
isUserSet = True
if not isUserSet:
if not (defaultClassName in classNames):
print("unexpected error: check the exhaustiveness of the provided classes")
binarized.loc[labelIndex,'class'] = defaultClassName
y = binarized.iloc[:,dimensions].values
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
ax = plt.subplot(111)
colors = ('blue','red','green','magenta','cyan','purple','yellow','black','white')
if (rainbow or len(classNames) > len(colors)):
colors = plt.cm.rainbow(np.linspace(1, 0, len(classNames)))
colors = colors[:len(classNames)]
for lab, col in zip(classNames,colors):
plt.scatter(Y[y==lab, 0],
Y[y==lab, 1],
label=lab,
c=col)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
# source https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
if(len(title) > 0):
plt.title(title)
plt.show()
answered = binarized[binarized['Guess: the bacterium would glow yellow...'] == 1]
indices = answered.index.map(lambda label: int(label[len('corrections'):]))
surveys = gform.iloc[indices][localplayerguidkey]
#classifyAndPlot(['guessed', 'did not'], [surveys])
title = 'test title'
rainbow = True
alreadyDefaultClassName = True
classNames = ['guessed', 'did not']
classes = [surveys]
# classNames is a tuple
#def classifyAndPlot(classNames, classes, title = '', rainbow = False):
defaultClassName = ''
sampleSize = 0
for classIndex in range(0, len(classes)):
sampleSize += len(classes[classIndex])
if(sampleSize < gform.shape[0]):
if(len(classNames) == len(classes) + 1):
defaultClassName = classNames[-1]
else:
defaultClassName = 'other'
classNames.append(defaultClassName)
for labelIndex in binarized.index:
i = int(labelIndex[len('corrections'):])
isUserSet = False
for classIndex in range(0, len(classes)):
if(gform.iloc[i][localplayerguidkey] in classes[classIndex].values):
binarized.loc[labelIndex,'class'] = classNames[classIndex]
isUserSet = True
if not isUserSet:
if not (defaultClassName in classNames):
print("unexpected error: check the exhaustiveness of the provided classes")
binarized.loc[labelIndex,'class'] = defaultClassName
y = binarized.iloc[:,dimensions].values
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
ax = plt.subplot(111)
colors = ('blue','red','green','magenta','cyan','purple','yellow','black','white')
if (rainbow or len(classNames) > len(colors)):
colors = plt.cm.rainbow(np.linspace(1, 0, len(classNames)))
colors = colors[:len(classNames)]
for lab, col in zip(classNames,colors):
plt.scatter(Y[y==lab, 0],
Y[y==lab, 1],
label=lab,
c=col)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
# source https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
if(len(title) > 0):
plt.title(title)
plt.show()
answered = binarized[binarized['Guess: the bacterium would glow yellow...'] == 1]
indices = answered.index.map(lambda label: int(label[len('corrections'):]))
surveys = gform.iloc[indices][localplayerguidkey]
classifyAndPlot(['guessed', 'did not'], [surveys])
classifyAndPlot(['biologist', 'other'], [getSurveysOfBiologists(gform, True)[localplayerguidkey]], title = 'biologists and non-biologists')
classifyAndPlot(['gamer', 'other'], [getSurveysOfGamers(gform, True)[localplayerguidkey]], title = 'gamers and non-gamers')
classNames = []
classes = []
for answer in gform['Are you interested in biology?'].value_counts().index:
classNames.append(answer)
classes.append(gform[gform['Are you interested in biology?'] == answer][localplayerguidkey])
classNames.append('other')
classifyAndPlot(classNames, classes, rainbow = True, title = 'interest in biology')
"""
Explanation: import mca
X = binarized.iloc[:,0:dimensions].values
y = binarized.iloc[:,dimensions].values
X_std.shape
xstddf = pd.DataFrame(X_std)
Y2 = mca.MCA(xstddf, ncols=dimensions)
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
ax = plt.subplot(111)
plt.scatter(Y2[:, 0], Y2[:, 1])
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title("base MCA")
plt.show()
End of explanation
"""
#np.plot(score)
classNames = []
classes = []
for thisScore in np.unique(score):
classNames.append(thisScore)
index = np.where(score == thisScore)[0]
classes.append( gform.loc[index][localplayerguidkey])
classifyAndPlot(classNames, classes, rainbow = True, title = 'score')
classNames = []
classes = []
question = 'How old are you?'
for answer in np.sort(gform[question].unique()):
classNames.append(answer)
classes.append(gform[gform[question] == answer][localplayerguidkey])
classifyAndPlot(classNames, classes, rainbow = True, title = 'age')
gform.columns[:5]
# questions to avoid:
#0 Timestamp
#3 Age
#40 Remarks
#41 ID
from itertools import chain
questionRange = chain(range(1,3), range(4,40), range(42,44))
for questionIndex in questionRange:
question = gform.columns[questionIndex]
classNames = []
classes = []
for answer in gform[question].value_counts().index:
classNames.append(answer)
classes.append(gform[gform[question] == answer][localplayerguidkey])
classifyAndPlot(classNames, classes, title = question, rainbow = False)
eig_vals
eig_vecs[0]
maxComponentIndex = np.argmax(abs(eig_vecs[0]))
binarized.columns[maxComponentIndex]
sum(eig_vecs[0]*eig_vecs[0])
eig_vecs[0]
sortedIndices = []
descendingWeights = np.sort(abs(eig_vecs[0]))[::-1]
for sortedComponent in descendingWeights:
sortedIndices.append(np.where(abs(eig_vecs[0]) == sortedComponent)[0][0])
sortedQuestions0 = pd.DataFrame(index = descendingWeights, data = binarized.columns[sortedIndices])
sortedQuestions0
def accessFirst(a):
return a[0]
sortedQuestionsLastIndex = 10
array1 = np.arange(sortedQuestionsLastIndex+1.)/(sortedQuestionsLastIndex + 1.)
sortedQuestionsLastIndex+1,\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Accent(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Dark2(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Paired(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Pastel1(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Pastel2(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Set1(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Set2(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Set3(array1))))
from matplotlib import cm
def displayQuestionsContributions(\
sortedQuestions,\
title = "Contributions of questions to component",\
sortedQuestionsLastIndex = 10\
):
colors=cm.Set3(np.arange(sortedQuestionsLastIndex+1.)/(sortedQuestionsLastIndex + 1.))
sortedQuestionsLabelsArray = np.append(sortedQuestions.values.flatten()[:sortedQuestionsLastIndex], 'others')
sortedQuestionsValuesArray = np.append(sortedQuestions.index[:sortedQuestionsLastIndex], sum(sortedQuestions.index[sortedQuestionsLastIndex:]))
fig1, ax1 = plt.subplots()
ax1.pie(sortedQuestionsValuesArray, labels=sortedQuestionsLabelsArray, autopct='%1.1f%%', startangle=100, colors = colors)
ax1.axis('equal')
# cf https://matplotlib.org/users/customizing.html
plt.rcParams['patch.linewidth'] = 0
plt.rcParams['text.color'] = '#2b2b2b'
plt.title(title)
plt.tight_layout()
plt.show()
displayQuestionsContributions(sortedQuestions0, sortedQuestionsLastIndex = 10, title = 'Contributions of questions to component 1')
sum(sortedQuestions0.index**2)
sortedIndices = []
descendingWeights = np.sort(abs(eig_vecs[1]))[::-1]
for sortedComponent in descendingWeights:
sortedIndices.append(np.where(abs(eig_vecs[1]) == sortedComponent)[0][0])
sortedQuestions1 = pd.DataFrame(index = descendingWeights, data = binarized.columns[sortedIndices])
sortedQuestions1
displayQuestionsContributions(sortedQuestions1, sortedQuestionsLastIndex = 10, title = 'Contributions of questions to component 2')
sum(sortedQuestions1.index**2)
"""
Explanation: TODO: find simple way to plot scores
End of explanation
"""
|
azhurb/deep-learning | sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Project 2).ipynb | mit | def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: Transforming Text into Numbers
End of explanation
"""
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Project 2: Creating the Input/Output Data
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/99a2efbcf51159fbb58f12830f81525d/compute_csd.ipynb | bsd-3-clause | # Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
# License: BSD-3-Clause
from matplotlib import pyplot as plt
import mne
from mne.datasets import sample
from mne.time_frequency import csd_fourier, csd_multitaper, csd_morlet
print(__doc__)
"""
Explanation: Compute a cross-spectral density (CSD) matrix
A cross-spectral density (CSD) matrix is similar to a covariance matrix, but in
the time-frequency domain. It is the first step towards computing
sensor-to-sensor coherence or a DICS beamformer.
This script demonstrates the three methods that MNE-Python provides to compute
the CSD:
Using short-term Fourier transform: :func:mne.time_frequency.csd_fourier
Using a multitaper approach: :func:mne.time_frequency.csd_multitaper
Using Morlet wavelets: :func:mne.time_frequency.csd_morlet
End of explanation
"""
n_jobs = 1
"""
Explanation: In the following example, the computation of the CSD matrices can be
performed using multiple cores. Set n_jobs to a value >1 to select the
number of cores to use.
End of explanation
"""
data_path = sample.data_path()
fname_raw = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_event = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
raw = mne.io.read_raw_fif(fname_raw)
events = mne.read_events(fname_event)
"""
Explanation: Loading the sample dataset.
End of explanation
"""
picks = mne.pick_types(raw.info, meg='grad')
# Make some epochs, based on events with trigger code 1
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=1,
picks=picks, baseline=(None, 0),
reject=dict(grad=4000e-13), preload=True)
"""
Explanation: By default, CSD matrices are computed using all MEG/EEG channels. When
interpreting a CSD matrix with mixed sensor types, be aware that the
measurement units, and thus the scalings, differ across sensors. In this
example, for speed and clarity, we select a single channel type:
gradiometers.
End of explanation
"""
csd_fft = csd_fourier(epochs, fmin=15, fmax=20, n_jobs=n_jobs)
csd_mt = csd_multitaper(epochs, fmin=15, fmax=20, adaptive=True, n_jobs=n_jobs)
"""
Explanation: Computing CSD matrices using short-term Fourier transform and (adaptive)
multitapers is straightforward:
End of explanation
"""
frequencies = [16, 17, 18, 19, 20]
csd_wav = csd_morlet(epochs, frequencies, decim=10, n_jobs=n_jobs)
"""
Explanation: When computing the CSD with Morlet wavelets, you specify the exact
frequencies at which to compute it. For each frequency, a corresponding
wavelet will be constructed and convolved with the signal, resulting in a
time-frequency decomposition.
The CSD is constructed by computing the correlation between the
time-frequency representations between all sensor-to-sensor pairs. The
time-frequency decomposition originally has the same sampling rate as the
signal, in our case ~600Hz. This means the decomposition is over-specified in
time and we may not need to use all samples during our CSD computation, just
enough to get a reliable correlation statistic. By specifying decim=10,
we use every 10th sample, which will greatly speed up the computation and
will have a minimal effect on the CSD.
End of explanation
"""
csd_fft.mean().plot()
plt.suptitle('short-term Fourier transform')
csd_mt.mean().plot()
plt.suptitle('adaptive multitapers')
csd_wav.mean().plot()
plt.suptitle('Morlet wavelet transform')
"""
Explanation: The resulting :class:mne.time_frequency.CrossSpectralDensity objects have a
plotting function we can use to compare the results of the different methods.
We're plotting the mean CSD across frequencies.
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.1/tutorials/LC.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: 'lc' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc')
print b.filter(kind='lc')
print b.filter(kind='lc_dep')
"""
Explanation: Dataset Parameters
Let's add a lightcurve dataset to the Bundle.
End of explanation
"""
print b['times']
"""
Explanation: times
End of explanation
"""
print b['fluxes']
"""
Explanation: fluxes
End of explanation
"""
print b['sigmas']
"""
Explanation: sigmas
End of explanation
"""
print b['ld_func@primary']
"""
Explanation: ld_func
End of explanation
"""
b['ld_func@primary'] = 'logarithmic'
print b['ld_coeffs@primary']
"""
Explanation: ld_coeffs
ld_coeffs will only be available if ld_func is not interp, so let's set it to logarithmic
End of explanation
"""
print b['passband']
"""
Explanation: passband
End of explanation
"""
print b['intens_weighting']
"""
Explanation: intens_weighting
See the Intensity Weighting tutorial
End of explanation
"""
print b['pblum']
"""
Explanation: pblum
See the Passband Luminosity tutorial
End of explanation
"""
print b['l3']
"""
Explanation: l3
See the "Third" Light tutorial
End of explanation
"""
print b['compute']
"""
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to computing fluxes and the LC dataset.
Other compute options are covered elsewhere:
* parameters related to dynamics are explained in the section on the orb dataset
* parameters related to meshing, eclipse detection, and subdivision are explained in the section on the mesh dataset
End of explanation
"""
print b['lc_method']
"""
Explanation: lc_method
End of explanation
"""
print b['irrad_method']
"""
Explanation: irrad_method
End of explanation
"""
print b['boosting_method']
"""
Explanation: boosting_method
End of explanation
"""
print b['atm@primary']
"""
Explanation: For more details on boosting, see the Beaming and Boosting example script
atm
End of explanation
"""
b.set_value('times', np.linspace(0,1,101))
b.run_compute()
b['lc@model'].twigs
print b['times@lc@model']
print b['fluxes@lc@model']
"""
Explanation: For more details on heating, see the Reflection and Heating example script
Synthetics
End of explanation
"""
afig, mplfig = b['lc@model'].plot(show=True)
"""
Explanation: Plotting
By default, LC datasets plot as flux vs time.
End of explanation
"""
afig, mplfig = b['lc@model'].plot(x='phases', show=True)
"""
Explanation: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
End of explanation
"""
b['period'].components
afig, mplfig = b['lc@model'].plot(x='phases:binary', show=True)
"""
Explanation: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01')
print b['columns'].choices
b['columns'] = ['intensities@lc01', 'abs_intensities@lc01', 'normal_intensities@lc01', 'abs_normal_intensities@lc01', 'pblum@lc01', 'boost_factors@lc01']
b.run_compute()
print b['model'].datasets
"""
Explanation: Mesh Fields
By adding a mesh dataset and setting the columns parameter, light-curve (i.e. passband-dependent) per-element quantities can be exposed and plotted.
Let's add a single mesh at the first time of the light-curve and re-call run_compute
End of explanation
"""
b.filter(dataset='lc01', kind='mesh', context='model').twigs
"""
Explanation: These new columns are stored with the lc's dataset tag, but with the 'mesh' dataset-kind.
End of explanation
"""
afig, mplfig = b['mesh01@model'].plot(fc='intensities', ec='None', show=True)
"""
Explanation: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the mesh dataset).
End of explanation
"""
print b['pblum@primary@lc01@mesh@model']
"""
Explanation: Now let's look at each of the available fields.
pblum
For more details, see the tutorial on Passband Luminosities
End of explanation
"""
print b['abs_normal_intensities@primary@lc01@mesh@model']
"""
Explanation: 'pblum' is the passband luminosity of the entire star/mesh - this is a single value (unlike most of the parameters in the mesh) and does not have per-element values.
abs_normal_intensities
End of explanation
"""
print b['normal_intensities@primary@lc01@mesh@model']
"""
Explanation: 'abs_normal_intensities' are the absolute normal intensities per-element.
normal_intensities
End of explanation
"""
print b['abs_intensities@primary@lc01@mesh@model']
"""
Explanation: 'normal_intensities' are the relative normal intensities per-element.
abs_intensities
End of explanation
"""
print b['intensities@primary@lc01@mesh@model']
"""
Explanation: 'abs_intensities' are the projected absolute intensities (towards the observer) per-element.
intensities
End of explanation
"""
print b['boost_factors@primary@lc01@mesh@model']
"""
Explanation: 'intensities' are the projected relative intensities (towards the observer) per-element.
boost_factors
End of explanation
"""
|
AustinPUG/PGDay2016 | Numba inside PostgreSQL.ipynb | bsd-3-clause | import psycopg2
"""
Explanation: Very Brief Demo of Numba Speedup inside PostgreSQL
Background
This notebook was originally presented as part of a talk at PGDay Austin 2016 by Davin Potts (davin AT discontinuity DOT net).
The talk built up to this notebook by first providing stories about computer vision technologies employed by Stipple, Inc., providing pointers to how similar functionality could be implemented by others using Python+NumPy+scikit-learn, and then introducing Numba to drastically accelerate the execution of such functionality inside a postgres database (performing data-intensive computations where the data lives).
Links mentioned during the talk:
* https://techcrunch.com/2013/10/24/stipple-search-mobile/ (Overview of Stipple, Inc.)
* http://blogs.wsj.com/venturecapital/2014/04/22/stipple-shuts-down-vc-backers-included-floodgate-kleiner-perkins/
* https://vimeo.com/63258721 (Tutorial on Image Fingerprinting using scikit-learn from PyData Santa Clara 2013 also by Davin)
* http://numba.pydata.org/ (Numba project home)
Purpose
Let's create a compute-intensive Python function, compile it using Numba inside PL/Python, then test its relative speedup.
End of explanation
"""
with psycopg2.connect(database='radon_fingerprints',
host='localhost',
port=5437) as conn:
with conn.cursor() as cursor:
cursor.execute("CREATE EXTENSION IF NOT EXISTS plpythonu;")
cursor.execute("""
CREATE OR REPLACE FUNCTION prep_sum2d ()
RETURNS integer
AS $$
# GD is a so-called "global dictionary" made available
# to us by PL/Python. It allows us to share information
# across functions/code within a single session. We
# will use it to "share" our Numba-compiled function
# with a different Python function defined later in
# this cell.
if 'numba' in GD and 'numpy' in GD:
numpy = GD['numpy']
numba = GD['numba']
else:
import numpy
import numba
GD['numpy'] = numpy
GD['numba'] = numba
# Define our compute-intensive function to play with.
# (This is the example offered on the main Numba webpage.)
def sum2d(arr):
M, N = arr.shape
result = 0.0
for i in range(M):
for j in range(N):
result += arr[i,j]
return result
# Store it in PL/Python's special 'GD' dict for ease of later use.
GD['sum2d'] = sum2d
# Compile a version of sum2d using Numba, and store it for later use.
jitsum2d = numba.jit(sum2d)
csum2d = jitsum2d.compile(numba.double(numba.double[:,::1]))
GD['jitsum2d'] = jitsum2d
GD['csum2d'] = csum2d
return 1
$$ LANGUAGE plpythonu;
""")
#cursor.execute("DROP FUNCTION speedtest_sum2d();")
cursor.execute("""
CREATE OR REPLACE FUNCTION speedtest_sum2d ()
RETURNS float
AS $$
import time
if 'numba' in GD and 'numpy' in GD:
numpy = GD['numpy']
numba = GD['numba']
else:
import numpy
import numba
GD['numpy'] = numpy
GD['numba'] = numba
sum2d = GD['sum2d']
jitsum2d = GD['jitsum2d']
csum2d = GD['csum2d']
# Create some random input data to play with.
arr = numpy.random.randn(100, 100)
# Exercise the pure-Python function, sum2d.
start = time.time()
res = sum2d(arr)
duration = time.time() - start
plpy.log("Result from python is %s in %s (msec)" % (res, duration*1000))
csum2d(arr) # Warm up
# Exercise the Numba version of that same function, csum2d.
start = time.time()
res = csum2d(arr)
duration2 = time.time() - start
plpy.log("Result from compiled is %s in %s (msec)" % (res, duration2*1000))
plpy.log("Speed up is %s" % (duration / duration2))
return (duration / duration2)
$$ LANGUAGE plpythonu;
""")
conn.commit()
"""
Explanation: Create Compute-Intensive Python Function
We'll create two functions in PL/Python. The first does the work of defining the function and even compiling it with Numba. The second performs a speedtest to compare the pure-Python version of the function with the Numba compiled version of it.
Imagining a more real-world use case, the first might be a function we want to make fast and the second is perhaps the code that uses that function (which could have been conventional SQL, no Python needed).
End of explanation
"""
with psycopg2.connect(database='radon_fingerprints',
host='localhost',
port=5437) as conn:
with conn.cursor() as cursor:
cursor.execute("SELECT prep_sum2d();")
rows = cursor.fetchall()
conn.commit()
rows
"""
Explanation: Quick Test on the Setup Function
End of explanation
"""
with psycopg2.connect(database='radon_fingerprints',
host='localhost',
port=5437) as conn:
with conn.cursor() as cursor:
cursor.execute("SELECT prep_sum2d();")
cursor.execute("SELECT speedtest_sum2d();")
rows = cursor.fetchall()
conn.commit()
rows
"""
Explanation: Compare Performance of the Numba Compiled Version with the Pure-Python Function
The messages from plpy.log() will end up in the postgres logfile. The output from running this function will be the relative speedup of using the Numba version of the function. Speaking more generally, the amount of speedup will, of course, vary depending upon the amount and character of data being consumed, the nature of the algorithm, how data is consumed from a table (presumably what you'd likely be doing inside postgres), et cetera, et cetera.
End of explanation
"""
|
root-mirror/training | SummerStudentCourse/2019/Exercises/ROOTBooks/CreateAHistogram_Solution.ipynb | gpl-2.0 | import ROOT
"""
Explanation: Create a Histogram
Create a histogram, fill it with random numbers, set its colour to blue, draw it.
Can you:
- use the native Python random number generator for this?
- make your plot interactive using JSROOT?
- document what you did in markdown?
End of explanation
"""
h = ROOT.TH1F("h", "My Notebook Histo;x;#", 64, -4, 4)
"""
Explanation: We now create our histogram
End of explanation
"""
from random import gauss
numbers = [gauss(0., 1.) for _ in range(1000)]
numbers
for i in numbers: h.Fill(i)
"""
Explanation: We now import the gauss generation from the random module of Python and fill the histogram
End of explanation
"""
%jsroot on
h.SetLineColor(ROOT.kBlue)
h.SetFillColor(ROOT.kBlue)
c = ROOT.TCanvas()
h.Draw()
c.Draw()
"""
Explanation: Time for styling the histogram and use jsroot
End of explanation
"""
|
chetan51/nupic.research | projects/whydense/cifar-100/HyperparameterAnalysis.ipynb | gpl-3.0 | browser = RayTuneExperimentBrowser(os.path.expanduser("~/nta/results/VGG19SparseFull"))
df = browser.best_experiments(min_test_accuracy=0.0, min_noise_accuracy=0.0, sort_by="test_accuracy")
df.head(5)
df.columns
df.iloc[0]
"""
Explanation: Load data and general exploration
End of explanation
"""
len(df[df['epochs']==164])
df[df['epochs']==164][['test_accuracy_max', 'noise_accuracy_max']].corr()
df[df['epochs']==164][['test_accuracy_max', 'noise_accuracy_max']].min()
df[df['epochs']==164][['test_accuracy_max', 'noise_accuracy_max']].mean()
df[df['epochs']==164][['test_accuracy_max', 'noise_accuracy_max']].max()
len(df[df['epochs']==90])
df[df['epochs']==90][['test_accuracy_max', 'noise_accuracy_max']].corr()
df[df['epochs']==90][['test_accuracy_max', 'noise_accuracy_max']].min()
df[df['epochs']==90][['test_accuracy_max', 'noise_accuracy_max']].mean()
df[df['epochs']==90][['test_accuracy_max', 'noise_accuracy_max']].max()
"""
Explanation: Epochs and Accuracy exploration
End of explanation
"""
df[df['epochs']>=30][['epochs', 'test_accuracy']].astype(np.float32).corr()
df[df['epochs']>=30][['epochs', 'noise_accuracy']].astype(np.float32).corr()
"""
Explanation: It is interesting to see that the experiments that run 90 epochs have a very different correlation between noise and test accuracy than the experiments that run 164 epochs, even though the averages are very similar. What could that mean? Perhaps that, past some point, progress in test accuracy leads to a regression in noise accuracy - which would imply that the more the model fits the standard data, the lower its noise accuracy
End of explanation
"""
tunable_params_general = ['learning_rate', 'learning_rate_gamma', 'weight_decay', 'momentum', 'batch_size', 'batches_in_epoch']
tunable_params_sparsity = ['boost_strength', 'boost_strength_factor', 'k_inference_factor', 'cnn_percent_on', 'cnn_weight_sparsity']
tunable_params = tunable_params_general + tunable_params_sparsity
performance_metrics = ['noise_accuracy_max', 'test_accuracy_max']
corr_params = tunable_params + performance_metrics
df[corr_params].astype(np.float32).corr()
df[corr_params].astype(np.float32).corr() > 0.3
df[corr_params].astype(np.float32).corr() < -0.3
"""
Explanation: Test accuracy seems more correlated to the number of epochs than noise accuracy is, but the difference is small and might be due to randomness
A look at other possible correlations
End of explanation
"""
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from pprint import pprint
# Including all experiments with 30 or more epochs
df_inf = df[df['epochs']>=30]
y1 = df_inf['noise_accuracy_max']
y2 = df_inf['test_accuracy_max']
X = df_inf[tunable_params].astype(np.float32)
# adjust all X to same scale
scaler = StandardScaler()
X = scaler.fit_transform(X)
model_noise = LinearRegression()
model_noise.fit(X, y1)
print("\n Noise accuracy")
pprint(list(zip(tunable_params, model_noise.coef_)))
model_test = LinearRegression()
model_test.fit(X, y2)
print("\n Test accuracy")
pprint(list(zip(tunable_params, model_test.coef_)))
# Including all experiments with 90 or more epochs ("completed")
df_inf = df[df['epochs']>=90]
y1 = df_inf['noise_accuracy_max']
y2 = df_inf['test_accuracy_max']
X = df_inf[tunable_params].astype(np.float32)
# adjust all X to same scale
scaler = StandardScaler()
X = scaler.fit_transform(X)
model_noise = LinearRegression()
model_noise.fit(X, y1)
print("\n Noise accuracy")
pprint(list(zip(tunable_params, model_noise.coef_)))
model_test = LinearRegression()
model_test.fit(X, y2)
print("\n Test accuracy")
pprint(list(zip(tunable_params, model_test.coef_)))
"""
Explanation: Positive correlation: cnn_percent_on with noise_accuracy_max and test_accuracy_max
Negative correlation: momentum and noise_accuracy_max
Further analysis on the tunable hyperparameters
End of explanation
"""
# Only included complete experiments
df_inf = df[df['epochs']>=90][corr_params]
def stats(arr):
return [round(v, 4) for v in [np.min(arr), np.mean(arr), np.max(arr)]]
df_inf.sort_values('test_accuracy_max', ascending=False)[tunable_params].head(5).apply(stats)
df_inf.sort_values('test_accuracy_max', ascending=True)[tunable_params].head(5).apply(stats)
"""
Explanation: As the correlations already showed, cnn_percent_on and momentum seem to have the greatest impact. The first is expected, but momentum is actually an interesting finding, especially since it is negatively correlated in the sparse model - a smaller momentum would lead to a higher noise accuracy. Why is that?
cnn_percent_on especially impacts the test accuracy, indicating that sparsity would actually have a negative impact on test performance
What is the ideal value for each variable that maximizes both accuracies?
End of explanation
"""
df_inf.sort_values('noise_accuracy_max', ascending=False)[tunable_params].head(5).apply(stats)
df_inf.sort_values('noise_accuracy_max', ascending=True)[tunable_params].head(5).apply(stats)
"""
Explanation: Why is boost strength affecting the test accuracy? It does not have the same impact on noise accuracy. Hypothesis: it acts as a regularizer for the amount of sparsity in the model.
Higher cnn_percent_on and cnn_weight_sparsity are indicative of better test accuracy. Incidentally, they are also indicative of better noise accuracy, which is unexpected
Lower weight decay improves noise accuracy but has no impact on test accuracy. Weight decay would just make the network even more sparse in the cases where it is already too sparse, which is further evidence that too much sparsity is hurting performance
As expected, higher batch size and more batches per epoch improve both metrics
For noise accuracy, a lower learning rate is preferred
The ideal momentum value seems to be between 0.5 and 0.6, and it has a high impact on the model. This is unexpected, as usual values for SGD momentum in the literature are around 0.9
End of explanation
"""
class RayTuneExperimentBrowser(object):
"""
Class for browsing and manipulating experiment results directories created
by Ray Tune.
"""
def __init__(self, experiment_path):
self.experiment_path = os.path.abspath(experiment_path)
self.experiment_states = self._get_experiment_states(
self.experiment_path, exit_on_fail=True)
self.progress = {}
self.exp_directories = {}
self.checkpoint_directories = {}
self.params = {}
for experiment_state in self.experiment_states:
self._read_experiment(experiment_state)
def _read_experiment(self, experiment_state):
checkpoint_dicts = experiment_state["checkpoints"]
checkpoint_dicts = [flatten_dict(g) for g in checkpoint_dicts]
for exp in checkpoint_dicts:
if exp.get("logdir", None) is None:
continue
exp_dir = os.path.basename(exp["logdir"])
csv = os.path.join(self.experiment_path, exp_dir, "progress.csv")
self.progress[exp["experiment_tag"]] = pd.read_csv(csv)
self.exp_directories[exp["experiment_tag"]] = os.path.abspath(
os.path.join(self.experiment_path, exp_dir))
# Figure out checkpoint file (.pt or .pth) if it exists. For some reason
# we need to switch to the directory in order for glob to work.
ed = os.path.abspath(os.path.join(self.experiment_path, exp_dir))
os.chdir(ed)
cds = glob.glob("checkpoint*")
if len(cds) > 0:
cd = max(cds)
cf = glob.glob(os.path.join(cd, "*.pt"))
cf += glob.glob(os.path.join(cd, "*.pth"))
if len(cf) > 0:
self.checkpoint_directories[exp["experiment_tag"]] = os.path.join(
ed, cf[0])
else:
self.checkpoint_directories[exp["experiment_tag"]] = ""
else:
self.checkpoint_directories[exp["experiment_tag"]] = ""
# Read in the configs for this experiment
paramsFile = os.path.join(self.experiment_path, exp_dir, "params.json")
with open(paramsFile) as f:
self.params[exp["experiment_tag"]] = json.load(f)
def get_value(self, exp_substring="",
tags=["test_accuracy", "noise_accuracy"],
which='max'):
"""
For every experiment whose name matches exp_substring, scan the history
and return the appropriate value associated with tag.
'which' can be one of the following:
last: returns the last value
min: returns the minimum value
max: returns the maximum value
median: returns the median value
Returns a pandas dataframe with two columns containing name and tag value
"""
# Collect experiment names that match exp at all
exps = [e for e in self.progress if exp_substring in e]
# empty histories always return None
columns = ['Experiment Name']
# add the columns names for main tags
for tag in tags:
columns.append(tag)
columns.append(tag+'_'+which)
if which in ["max", "min"]:
columns.append("epoch_"+str(tag))
# add training iterations
columns.append('epochs')
# add the remaining variables
columns.extend(self.params[exps[0]].keys())
all_values = []
for e in exps:
# values for the experiment name
values = [e]
# values for the main tags
for tag in tags:
values.append(self.progress[e][tag].iloc[-1])
if which == "max":
values.append(self.progress[e][tag].max())
v = self.progress[e][tag].idxmax()
values.append(v)
elif which == "min":
values.append(self.progress[e][tag].min())
values.append(self.progress[e][tag].idxmin())
elif which == "median":
values.append(self.progress[e][tag].median())
elif which == "last":
values.append(self.progress[e][tag].iloc[-1])
else:
raise RuntimeError("Invalid value for which='{}'".format(which))
# add number of epochs
values.append(self.progress[e]['training_iteration'].iloc[-1])
# remaining values
for v in self.params[e].values():
if isinstance(v,list):
values.append(np.mean(v))
else:
values.append(v)
all_values.append(values)
p = pd.DataFrame(all_values, columns=columns)
return p
def get_checkpoint_file(self, exp_substring=""):
"""
For every experiment whose name matches exp_substring, return the
full path to the checkpoint file. Returns a list of paths.
"""
# Collect experiment names that match exp at all
exps = [e for e in self.progress if exp_substring in e]
paths = [self.checkpoint_directories[e] for e in exps]
return paths
def _get_experiment_states(self, experiment_path, exit_on_fail=False):
"""
Return every experiment state JSON file in the path as a list of dicts.
The list is sorted such that newer experiments appear later.
"""
experiment_path = os.path.expanduser(experiment_path)
experiment_state_paths = glob.glob(
os.path.join(experiment_path, "experiment_state*.json"))
if not experiment_state_paths:
if exit_on_fail:
print("No experiment state found!")
sys.exit(0)
else:
return
experiment_state_paths = list(experiment_state_paths)
experiment_state_paths.sort()
experiment_states = []
for experiment_filename in list(experiment_state_paths):
with open(experiment_filename) as f:
experiment_states.append(json.load(f))
return experiment_states
def get_parameters(self, sorted_experiments):
for i,e in sorted_experiments.iterrows():
if e['Experiment Name'] in self.params:
params = self.params[e['Experiment Name']]
print(params['cnn_percent_on'][0])
print('test_accuracy')
for i,e in sorted_experiments.iterrows():
print(e['test_accuracy'])
print('noise_accuracy')
for i,e in sorted_experiments.iterrows():
print(e['noise_accuracy'])
def best_experiments(self, min_test_accuracy=0.86, min_noise_accuracy=0.785, sort_by="noise_accuracy"):
"""
Return a dataframe containing all experiments whose best test_accuracy and
noise_accuracy are above the specified thresholds.
"""
best_accuracies = self.get_value()
best_accuracies.sort_values(sort_by, axis=0, ascending=False,
inplace=True, na_position='last')
columns = best_accuracies.columns
best_experiments = pd.DataFrame(columns=columns)
for i, row in best_accuracies.iterrows():
if ((row["test_accuracy"] > min_test_accuracy)
and (row["noise_accuracy"] > min_noise_accuracy)):
                # DataFrame.append was removed in pandas 2.0; concat a one-row frame instead
                best_experiments = pd.concat([best_experiments, row.to_frame().T])
return best_experiments
def prune_checkpoints(self, max_test_accuracy=0.86, max_noise_accuracy=0.785):
"""
TODO: delete the checkpoints for all models whose best test_accuracy and
noise_accuracy are below the specified thresholds.
"""
pass
"""
Explanation: Supporting classes
End of explanation
"""
|
ML4DS/ML4all | R1.Intro_Regression/.ipynb_checkpoints/regression_intro-checkpoint.ipynb | mit | # Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import numpy as np
import scipy.io # To read matlab files
import pandas as pd # To read data tables from csv files
# For plots and graphical results
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pylab
# For the student tests (only for python 2)
import sys
if sys.version_info.major==2:
from test_helper import Test
# That's default image size for this interactive session
pylab.rcParams['figure.figsize'] = 9, 6
"""
Explanation: Introduction to Regression.
Author: Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Notebook version: 1.1 (Sep 12, 2017)
Changes: v.1.0 - First version. Extracted from regression_intro_knn v.1.0.
v.1.1 - Compatibility with python 2 and python 3
End of explanation
"""
from sklearn import datasets
# Load the dataset. Select it by uncommenting the appropriate line
D_all = datasets.load_boston()
#D_all = datasets.load_diabetes()
# Extract data and data parameters.
X = D_all.data                  # Matrix of input variables (the target is stored separately)
S = D_all.target                # Target variable
n_samples = X.shape[0]          # Number of observations
n_vars = X.shape[1]             # Number of input variables
"""
Explanation: 1. The regression problem
The goal of regression methods is to predict the value of some target variable $S$ from the observation of one or more input variables $X_0, X_1, \ldots, X_{m-1}$ (that we will collect in a single vector $\bf X$).
Regression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$.
<img src="figs/block_diagram.png" width=400>
The only information available to estimate the relation between the inputs and the target is a dataset $\mathcal D$ containing several observations of all variables.
$$\mathcal{D} = \{{\bf x}_{k}, s_{k}\}_{k=0}^{K-1}$$
The dataset $\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\bf x}$, computes an output $\hat{s} = f({\bf x})$ that is a good prediction of the true value of the target, $s$.
<img src="figs/predictor.png" width=300>
2. Examples of regression problems.
The <a href=http://scikit-learn.org/>scikit-learn</a> package contains several <a href=http://scikit-learn.org/stable/datasets/> datasets</a> related to regression problems.
<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston > Boston dataset</a>: the target variable contains housing values in different suburbs of Boston. The goal is to predict these values based on several social, economic and demographic variables taken from these suburbs (you can get more details in the <a href = https://archive.ics.uci.edu/ml/datasets/Housing > UCI repository </a>).
<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes /> Diabetes dataset</a>.
We can load these datasets as follows:
End of explanation
"""
print(n_samples)
"""
Explanation: This dataset contains
End of explanation
"""
print(n_vars)
"""
Explanation: observations of the target variable and
End of explanation
"""
# Scatter plots of the target versus each input variable
nrows = 4
ncols = 1 + (X.shape[1] - 1) // nrows   # integer division: plt.subplot expects ints
# Some adjustment for the subplot.
pylab.subplots_adjust(hspace=0.2)
# Plot all variables
for idx in range(X.shape[1]):
ax = plt.subplot(nrows,ncols,idx+1)
ax.scatter(X[:,idx], S) # <-- This is the key command
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
plt.ylabel('Target')
"""
Explanation: input variables.
3. Scatter plots
3.1. 2D scatter plots
When the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as <i>scatter plots</i>
Python methods plot and scatter from the matplotlib package can be used for these graphical representations.
End of explanation
"""
# <SOL>
x2 = X[:,2]
x4 = X[:,4]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x2, x4, S, zdir=u'z', s=20, c=u'b', depthshade=True)
ax.set_xlabel('$x_2$')
ax.set_ylabel('$x_4$')
ax.set_zlabel('$s$')
plt.show()
# </SOL>
"""
Explanation: 3.2. 3D Plots
With the addition of a third coordinate, plot and scatter can be used for 3D plotting.
Exercise 1:
Select the diabetes dataset. Visualize the target versus components 2 and 4. (You can get more info about the <a href=http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter>scatter</a> command and an <a href=http://matplotlib.org/examples/mplot3d/scatter3d_demo.html>example of use</a> in the <a href=http://matplotlib.org/index.html> matplotlib</a> documentation)
End of explanation
"""
# In this section we will plot together the square and absolute errors
grid = np.linspace(-3,3,num=100)
plt.plot(grid, grid**2, 'b-', label='Square error')
plt.plot(grid, np.absolute(grid), 'r--', label='Absolute error')
plt.xlabel('Error')
plt.ylabel('Cost')
plt.legend(loc='best')
plt.show()
"""
Explanation: 4. Evaluating a regression task
In order to evaluate the performance of a given predictor, we need to quantify the quality of predictions. This is usually done by means of a loss function $l(s,\hat{s})$. Two common losses are
Square error: $l(s, \hat{s}) = (s - \hat{s})^2$
Absolute error: $l(s, \hat{s}) = |s - \hat{s}|$
Note that both the square and absolute errors are functions of the estimation error $e = s-{\hat s}$. However, this is not necessarily the case. As an example, imagine a situation in which we would like to introduce a penalty which increases with the magnitude of the estimated variable. For such case, the following cost would better fit our needs: $l(s,{\hat s}) = s^2 \left(s-{\hat s}\right)^2$.
End of explanation
"""
# Load dataset in arrays X and S
df = pd.read_csv('datasets/x01.csv', sep=',', header=None)
X = df.values[:,0]
S = df.values[:,1]
# <SOL>
fig = plt.figure()
plt.scatter(X, S)
plt.xlabel('$x$')
plt.ylabel('$s$')
xgrid = np.array([-200.0, 6900.0])
plt.plot(xgrid, 1.2*xgrid)
R = np.mean((S-1.2*X)**2)
print('The average square error is {0}'.format(R))
# </SOL>
if sys.version_info.major==2:
Test.assertTrue(np.isclose(R, 153781.943889), 'Incorrect value for the average square error')
else:
np.testing.assert_almost_equal(R, 153781.943889, decimal=4)
print("Test passed")
"""
Explanation: The overal prediction performance is computed as the average of the loss computed over a set of samples:
$${\bar R} = \frac{1}{K}\sum_{k=0}^{K-1} l\left(s_k, \hat{s}_k\right)$$
Exercise 2:
The dataset in file 'datasets/x01.csv', taken from <a href="http://people.sc.fsu.edu/~jburkardt/datasets/regression/x01.txt">here</a> records the average weight of the brain and body for a number of mammal species.
* Represent a scatter plot of the target variable versus the one-dimensional input.
* Plot, on the same figure, the prediction function given by $S = 1.2 X$.
* Compute the average square error for the given dataset.
End of explanation
"""
|
georgetown-analytics/machine-learning | examples/pbwitt/Testing Paul Witt Yellowbrick .ipynb | mit | %matplotlib inline
import os
import json
import time
import pickle
import requests
import numpy as np
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
df=pd.read_csv("/Users/pwitt/Documents/machine-learning/examples/pbwitt/Dataset/Training/Features_Variant_1.csv")
# Fetch the data if required
DATA = df
print('Data Shape ' + str(df.shape))
print(df.dtypes)
FEATURES = [
"Page Popularity/likes",
"Page Checkins’s",
"Page talking about",
"Page Category",
"Derived5",
"Derived6",
"Derived7",
"Derived8",
"Derived9",
"Derived10",
"Derived11",
"Derived12",
"Derived13",
"Derived14",
"Derived15",
"Derived16",
"Derived17",
"Derived18",
"Derived19",
"Derived20",
"Derived21",
"Derived22",
"Derived23",
"Derived24",
"Derived25",
"Derived26",
"Derived27",
"Derived28",
"Derived29",
"CC1",
"CC2",
"CC3",
'CC4',
'CC5',
"Base time",
"Post length",
"Post Share Count",
"Post Promotion Status",
"H Local",
"Post published weekday-Sun",
"Post published weekday-Mon",
"Post published weekday-Tues",
"Post published weekday-Weds",
"Post published weekday-Thurs",
"Post published weekday-Fri",
"Post published weekday-Sat",
"Base DateTime weekday-Sun",
"Base DateTime weekday-Mon",
"Base DateTime weekday-Tues",
"Base DateTime weekday-Wed",
"Base DateTime weekday-Thurs",
"Base DateTime weekday-Fri",
"Base DateTime weekday-Sat",
"Target_Variable"
]
# Read the data into a DataFrame
df.columns=FEATURES
df.head()
#Note: Dataset is sorted. There is variation in the distributions.
# Determine the shape of the data
print("{} instances with {} columns\n".format(*df.shape))
"""
Explanation: Using Yellowbrick for Machine Learning Visualizations on Facebook Data
Paul Witt
The dataset below was provided to the UCI Machine Learning Repository from researchers who used Neural Networks and Decision Trees to predict how many comments a given Facebook post would generate.
There are five variants of the dataset. This notebook only uses the first.
The full paper can be found here:
http://uksim.info/uksim2015/data/8713a015.pdf
The primary purpose of this notebook is to test Yellowbrick.
Attribute Information:
All features are integers or float values.
1
Page Popularity/likes
Decimal Encoding
Page feature
Defines the popularity or support for the source of the document.
2
Page Checkins’s
Decimal Encoding
Page feature
Describes how many individuals so far visited this place. This feature is only associated with the places eg:some institution, place, theater etc.
3
Page talking about
Decimal Encoding
Page feature
Defines the daily interest of individuals towards source of the document/ Post. The people who actually come back to the page, after liking the page. This include activities such as comments, likes to a post, shares, etc by visitors to the page.
4
Page Category
Value Encoding
Page feature
Defines the category of the source of the document eg: place, institution, brand etc.
5 - 29
Derived
Decimal Encoding
Derived feature
These features are aggregated by page, by calculating min, max, average, median and standard deviation of essential features.
30
CC1
Decimal Encoding
Essential feature
The total number of comments before selected base date/time.
31
CC2
Decimal Encoding
Essential feature
The number of comments in last 24 hours, relative to base date/time.
32
CC3
Decimal Encoding
Essential feature
The number of comments in last 48 to last 24 hours relative to base date/time.
33
CC4
Decimal Encoding
Essential feature
The number of comments in the first 24 hours after the publication of post but before base date/time.
34
CC5
Decimal Encoding
Essential feature
The difference between CC2 and CC3.
35
Base time
Decimal(0-71) Encoding
Other feature
Selected time in order to simulate the scenario.
36
Post length
Decimal Encoding
Other feature
Character count in the post.
37
Post Share Count
Decimal Encoding
Other feature
This features counts the no of shares of the post, that how many peoples had shared this post on to their timeline.
38
Post Promotion Status
Binary Encoding
Other feature
To reach more people with posts in News Feed, individual promote their post and this features tells that whether the post is promoted(1) or not(0).
39
H Local
Decimal(0-23) Encoding
Other feature
This describes the H hrs, for which we have the target variable/ comments received.
40-46
Post published weekday
Binary Encoding
Weekdays feature
This represents the day(Sunday...Saturday) on which the post was published.
47-53
Base DateTime weekday
Binary Encoding
Weekdays feature
This represents the day(Sunday...Saturday) on selected base Date/Time.
54
Target Variable
Decimal
Target
The no of comments in next H hrs(H is given in Feature no 39).
Data Exploration
End of explanation
"""
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
# Specify the features of interest
# Used all for testing purposes
features = FEATURES
# Extract the numpy arrays from the data frame
X = df[features].values  # .as_matrix() was removed in pandas 1.0
y = df["Base time"].values
# Instantiate the visualizer with the Covariance ranking algorithm
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Instantiate the visualizer with the Pearson ranking algorithm
visualizer = Rank2D(features=features, algorithm='pearson')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
"""
Explanation: Test Yellowbrick Covariance Ranking
End of explanation
"""
from sklearn.datasets.base import Bunch
DATA_DIR = os.path.abspath(os.path.join(".", "..", "pbwitt","data"))
# Show the contents of the data directory
for name in os.listdir(DATA_DIR):
if name.startswith("."): continue
print ("- {}".format(name))
def load_data(root=DATA_DIR):
filenames = {
'meta': os.path.join(root, 'meta.json'),
'rdme': os.path.join(root, 'README.md'),
'data': os.path.join(root, 'Features_Variant_1.csv'),
}
#Load the meta data from the meta json
with open(filenames['meta'], 'r') as f:
meta = json.load(f)
feature_names = meta['feature_names']
# Load the description from the README.
with open(filenames['rdme'], 'r') as f:
DESCR = f.read()
# Load the dataset from the data file.
dataset = pd.read_csv(filenames['data'], header=None)
#tranform to numpy
data = dataset.iloc[:,0:53]
target = dataset.iloc[:,-1]
# Extract the target from the data
data = np.array(data)
target = np.array(target)
# Create the bunch object
return Bunch(
data=data,
target=target,
filenames=filenames,
feature_names=feature_names,
DESCR=DESCR
)
# Save the dataset as a variable we can use.
dataset = load_data()
print(dataset.data.shape)
print(dataset.target.shape)
from yellowbrick.regressor import PredictionError, ResidualsPlot
from sklearn import metrics
from sklearn import cross_validation
from sklearn.cross_validation import KFold
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn import linear_model
from sklearn.cross_validation import train_test_split
from sklearn import preprocessing
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.linear_model import Ridge, Lasso
from sklearn.cross_validation import KFold
"""
Explanation: Data Extraction
Create a bunch object to store data on disk.
data: array of shape n_samples * n_features
target: array of length n_samples
feature_names: names of the features
filenames: names of the files that were loaded
DESCR: contents of the readme
End of explanation
"""
def fit_and_evaluate(dataset, model, label,vis, **kwargs ):
"""
Because of the Scikit-Learn API, we can create a function to
do all of the fit and evaluate work on our behalf!
"""
start = time.time() # Start the clock!
scores = {'Mean Absolute Error:':[], 'Mean Squared Error:':[], 'Median Absolute Error':[], 'R2':[]}
for train, test in KFold(dataset.data.shape[0], n_folds=12, shuffle=True):
X_train, X_test = dataset.data[train], dataset.data[test]
y_train, y_test = dataset.target[train], dataset.target[test]
estimator = model(**kwargs)
estimator.fit(X_train, y_train)
expected = y_test
predicted = estimator.predict(X_test)
#For Visulizer below
if vis=='Ridge_vis':
return [X_train,y_train,X_test,y_test]
if vis=='Lasso_vis':
return [X_train,y_train,X_test,y_test]
scores['Mean Absolute Error:'].append(metrics.mean_absolute_error(expected, predicted))
scores['Mean Squared Error:'].append(metrics.mean_squared_error(expected, predicted))
scores['Median Absolute Error'].append(metrics.median_absolute_error(expected, predicted ))
scores['R2'].append(metrics.r2_score(expected, predicted))
# Report
print("Build and Validation of {} took {:0.3f} seconds".format(label, time.time()-start))
print("Validation scores are as follows:\n")
print(pd.DataFrame(scores).mean())
# Write official estimator to disk
estimator = model(**kwargs)
estimator.fit(dataset.data, dataset.target)
outpath = label.lower().replace(" ", "-") + ".pickle"
with open(outpath, 'wb') as f:
pickle.dump(estimator, f)
print("\nFitted model written to:\n{}".format(os.path.abspath(outpath)))
print("Lasso Scores and Visualization Below: \n")
fit_and_evaluate(dataset, Lasso, "Facebook Lasso",'NA')
# Instantiate the linear model and visualizer
lasso = Lasso()
visualizer = PredictionError(lasso)
visualizer.fit(fit_and_evaluate(dataset, Lasso, "X_train",'Lasso_vis')[0], fit_and_evaluate(dataset, Lasso, "y_train",'Lasso_vis')[1]) # Fit the training data to the visualizer
visualizer.score(fit_and_evaluate(dataset, Lasso, "X_train",'Lasso_vis')[2], fit_and_evaluate(dataset, Lasso, "y_train",'Lasso_vis')[3])
g = visualizer.poof() # Draw/show/poof the data
# Instantiate the linear model and visualizer
print("Ridge Scores and Target Visualization Below:\n")
fit_and_evaluate(dataset, Ridge, "Facebook Ridge", 'NA')
ridge = Ridge()
visualizer = ResidualsPlot(ridge)
visualizer.fit(fit_and_evaluate(dataset, Ridge, "X_train",'Ridge_vis')[0], fit_and_evaluate(dataset, Ridge, "y_train",'Ridge_vis')[1]) # Fit the training data to the visualizer
visualizer.score(fit_and_evaluate(dataset, Ridge, "X_train",'Ridge_vis')[2], fit_and_evaluate(dataset, Ridge, "y_train",'Ridge_vis')[3]) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
fit_and_evaluate(dataset, ElasticNet, "Facebook ElasticNet", 'NA')
"""
Explanation: Build and Score Regression Models
Create function -- add parameters for Yellowbrick target visulizations
Score models using Mean Absolute Error, Mean Squared Error, Median Absolute Error, R2
End of explanation
"""
|
mdeff/ntds_2017 | projects/reports/face_manifold/NTDS_Project.ipynb | mit | import os
import numpy as np
from sklearn.tree import ExtraTreeRegressor
from sklearn import manifold
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from matplotlib import animation
from PIL import Image
import pickle
from scipy.linalg import norm
import networkx as nx
from scipy import spatial
from bokeh.plotting import figure, output_file, show, ColumnDataSource
from bokeh.models import HoverTool
from bokeh.io import output_notebook
output_notebook()
%matplotlib inline
plt.rcParams["figure.figsize"] = (8,6)
import warnings
warnings.filterwarnings('ignore')
PATH = './img_align_celeba'
def load_image(filepath):
''' Loads an image at the path specified by the parameter filepath '''
im = Image.open(filepath)
return im
def show_image(im):
''' Displays an image'''
fig1, ax1 = plt.subplots(1, 1)
ax1.imshow(im, cmap='gray');
return
#Loads image files from all sub-directories
imgfiles = [os.path.join(root, name)
for root, dirs, files in os.walk(PATH)
for name in files
if name.endswith((".jpg"))]
"""
Explanation: Manifold Learning on Face Data
Atul Kumar Sinha, Karttikeya Mangalam and Prakhar Srivastava
In this project, we explore manifold learning on face data to embed high dimensional face images into a lower dimensional embedding. We hypothesize that Euclidean distance in this lower dimensional embedding reflects image similarity in a better way. This hypothesis is tested by choosing path(s) that contain a number of points (images) from this lower dimensional space which represent an ordered set of images. These images are then combined to generate a video which shows a smooth morphing.
End of explanation
"""
#N=int(len(imgfiles)/30)
N=len(imgfiles)
print("Number of images = {}".format(N))
test = imgfiles[0:N]
test[1]
"""
Explanation: Dataset
We are using CelebA Dataset which is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. The images in this dataset cover large pose variations and background clutter.
We randomly downsample it by a factor of 30 for computational reasons.
End of explanation
"""
sample_path = imgfiles[0]
sample_im = load_image(sample_path)
sample_im = np.array(sample_im)
img_shape = (sample_im.shape[0],sample_im.shape[1])
ims = np.zeros((N, sample_im.shape[1]*sample_im.shape[0]))
for i, filepath in enumerate(test):
im = load_image(filepath)
im = np.array(im)
im = im.mean(axis=2)
im = np.asarray(im).ravel().astype(float)
ims[i] = im
"""
Explanation: Loading the data
End of explanation
"""
#iso = manifold.Isomap(n_neighbors=2, n_components=3, max_iter=500, n_jobs=-1)
#Z = iso.fit_transform(ims) #don't run, can load from pickle as in below cells
#saving the learnt embedding
#with open('var6753_n2_d3.pkl', 'wb') as f: #model learnt with n_neighbors=2 and n_components=3
# pickle.dump(Z,f)
#with open('var6753_n2_d2.pkl', 'wb') as f: #model learnt with n_neighbors=2 and n_components=2
# pickle.dump(Z,f)
#with open('var6753_n4_d3.pkl', 'wb') as f: #model learnt with n_neighbors=4 and n_components=3
# pickle.dump(Z,f)
with open('var6753_n2_d2.pkl', 'rb') as f:
Z = pickle.load(f)
#Visualizing the learnt 3D-manifold in two dimensions
source = ColumnDataSource(
data=dict(
x=Z[:, 0],
y=Z[:, 1],
desc=list(range(Z.shape[0])),
)
)
hover = HoverTool(
tooltips=[
("index", "$index"),
("(x,y)", "($x, $y)"),
("desc", "@desc"),
]
)
p = figure(plot_width=700, plot_height=700, tools=[hover],title="Mouse over the dots")
p.circle('x', 'y', size=10, source=source)
show(p)
"""
Explanation: Learning the Manifold
We are using Isomap for dimensionality reduction as we believe that the face image data lies on a structured manifold in a higher dimension and thus is embeddable in a much lower dimension without much loss of information.
Further, Isomap is a graph based technique which aligns with our scope.
End of explanation
"""
#Mapping the regressor from low dimension space to high dimension space
lin = ExtraTreeRegressor(max_depth=19)
lin.fit(Z, ims)
lin.score(Z, ims)
pred = lin.predict(Z[502].reshape(1, -1));
fig_new, [ax1,ax2] = plt.subplots(1,2)
ax1.imshow(ims[502].reshape(*img_shape), cmap = 'gray')
ax1.set_title('Original')
ax2.imshow(pred.reshape(*img_shape), cmap = 'gray')
ax2.set_title('Reconstructed')
person1 = 34
person2 = 35
test = ((Z[person1] + Z[person2]) / 2) #+ 0.5*np.random.randn(*Z[person1].shape)
pred = lin.predict(test.reshape(1, -1))
fig_newer, [ax1, ax2, ax3] = plt.subplots(1, 3)
ax1.imshow(ims[person1].reshape(*img_shape), cmap = 'gray')
ax1.set_title('Face 1')
ax2.imshow(ims[person2].reshape(*img_shape), cmap = 'gray')
ax2.set_title('Face 2')
ax3.imshow(pred.reshape(*img_shape), cmap = 'gray')
ax3.set_title('Face between lying on manifold');
distances = spatial.distance.squareform(spatial.distance.pdist(Z, 'braycurtis'))
kernel_width = distances.mean()
weights = np.exp(-np.square(distances) / (kernel_width ** 0.1))
for i in range(weights.shape[0]):
weights[i][i] = 0
NEIGHBORS = 2
#NEIGHBORS = 100
# Your code here.
#Find sorted indices of weights for each row
indices = np.argsort(weights, axis = 1)
#Create a zero matrix which would later be filled with sparse weights
n_weights = np.zeros((weights.shape[0], weights.shape[1]))
#Loop that iterates over the 'K' strongest weights in each row, and assigns them to sparse matrix, leaving others zero
for i in range(indices.shape[0]):
for j in range(indices.shape[1] - NEIGHBORS, indices.shape[1]):
col = indices[i][j]
n_weights[i][col] = weights[i][col]
#Imposing symmetricity
big = n_weights.T > n_weights
n_weights_s = n_weights - n_weights * big + n_weights.T * big
G = nx.from_numpy_matrix(n_weights_s)
pos = {}
for i in range(Z.shape[0]):
pos[i] = Z[i,0:2]
fig2,ax2 = plt.subplots()
nx.draw(G, pos, ax=ax2, node_size=10)
imlist=nx.all_pairs_dijkstra_path(G)[0][102] #choosing the path starting at node 0 and ending at node 102
imlist
N=25 #number of sub-samples between each consecutive pair in the path
lbd = np.linspace(0, 1, N)
counter = 0
for count, i in enumerate(imlist):
if count != len(imlist) - 1:
person1 = i
person2 = imlist[count + 1]
for j in range(N):
test = (lbd[j] * Z[person2]) + ((1 - lbd[j]) * Z[person1])
pred = lin.predict(test.reshape(1, -1))
im = Image.fromarray(pred.reshape(*img_shape))
im = im.convert('RGB')
im.save('{}.png'.format(counter))
counter += 1
os.system("ffmpeg -f image2 -r 10 -i ./%d.png -vcodec mpeg4 -y ./method1.mp4")
"""
Explanation: Regeneration from Lower Dimensional Space
While traversing the chosen path, we are also sub sampling in the lower dimensional space in order to create smooth transitions in the video. We naturally expect smoothness as points closer in the lower dimensional space should correspond to similar images. Since we do not have an exact representation for these sub-sampled points in the original image space, we need a method to map these back to the higher dimension.
We will be using Extremely randomized trees for regression.
As an alternative, we would also be testing convex combination approach to generate representations for the sub-sampled points.
Path Selection Heuristic
Method 1
Generating k-nearest graph using the Gaussian kernel. We further generate all pair shortest paths from this graph and randomly choose any path from that list for visualization. For regeneration of sub-sampled points, we use Extremely randomized trees as mentioned above.
End of explanation
"""
norm_vary = list()
norm_im = list()
lbd = np.linspace(0, 1, 101)
person1=12
person2=14
for i in range(101):
test = (lbd[i] * Z[person2]) + ((1-lbd[i]) * Z[person1])
norm_vary.append(norm(test))
pred = lin.predict(test.reshape(1, -1))
im = Image.fromarray(pred.reshape(*img_shape))
norm_im.append(norm(im))
f, ax = plt.subplots(1,1)
ax.plot(norm_vary)
ax.set_title('Norm for the mean image in projected space')
norm_vary = list()
norm_im = list()
lbd = np.linspace(0, 1, 101)
for i in range(101):
test = (lbd[i] * Z[person1]) + ((1-lbd[i]) * Z[person2])
norm_vary.append(norm(test))
pred = lin.predict(test.reshape(1, -1))
im = Image.fromarray(pred.reshape(*img_shape))
norm_im.append(norm(im))
f, ax = plt.subplots(1,1)
ax.plot(norm_im)
ax.set_title('Norm for mean image in original space')
"""
Explanation: Please check the generated video in the same enclosing folder.
Observing the output of the tree regressor we notice sudden jumps in the reconstructed video. We suspect that these discontinuities are either an artefact of the isomap embedding in a much lower dimension or because of the reconstruction method.
To investigate further, we plot the Frobenius norm of the sampled image in the isomap domain and that of the reconstructed image in the original domain. Since we are sampling on a straight line between two images, the plot of the norm is expected to be either an increasing or a decreasing linear graph. This indeed turns out to be the case for the sampled images in the isomap domain.
However, as we suspected, after reconstruction we observe sudden jumps in the plot. Clearly, this is caused by the tree regressor overfitting the data.
End of explanation
"""
#Interesting paths with N4D3 model
#imlist = [1912,3961,2861,4870,146,6648]
#imlist = [3182,5012,5084,1113,2333,1375]
#imlist = [5105,5874,4255,2069,1178]
#imlist = [3583,2134,1034, 3917,3704, 5920,6493]
#imlist = [1678,6535,6699,344,6677,5115,6433]
#Interesting paths with N2D3 model
imlist = [1959,3432,6709,4103, 4850,6231,4418,4324]
#imlist = [369,2749,1542,366,1436,2836]
#Interesting paths with N2D2 model
#imlist = [2617,4574,4939,5682,1917,3599,6324,1927]
N=25
lbd = np.linspace(0, 1, N)
counter = 0
for count, i in enumerate(imlist):
if count != len(imlist) - 1:
person1 = i
person2 = imlist[count + 1]
for j in range(N):
im = (lbd[j] * ims[person2]) + ((1 - lbd[j]) * ims[person1])
im = Image.fromarray(im.reshape((218, 178)))
im = im.convert('RGB')
im.save('{}.png'.format(counter))
counter += 1
os.system("ffmpeg -f image2 -r 10 -i ./%d.png -vcodec mpeg4 -y ./method2.mp4")
"""
Explanation: Even after extensive hyperparamter tuning, we are unable to learn a reasonable regressor hence we use the convex combination approach in him dim.
Method 2
Instead of choosing a path from the graph, we manually choose a set of points which visibly lie on a 2D manifold. For regeneration of sub-sampled points, we use convex combinations of consecutive pairs in the high dimensional space itself.
End of explanation
"""
|
vzg100/Post-Translational-Modification-Prediction | .ipynb_checkpoints/Lysine Acetylation -MLP -dbptm-checkpoint.ipynb | mit | from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
"""
Explanation: Template for test
End of explanation
"""
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/k_acetylation.csv")
y.process_data(vector_function="sequence", amino_acid="K", imbalance_function=i, random_data=-1)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/acet.csv", "K")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/k_acetylation.csv")
x.process_data(vector_function="sequence", amino_acid="K", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/acet.csv", "K")
del x
"""
Explanation: Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using K acytelation.
Training data is from CUCKOO group and benchmarks are from dbptm.
End of explanation
"""
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/k_acetylation.csv")
y.process_data(vector_function="chemical", amino_acid="K", imbalance_function=i, random_data=-1)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/acet.csv", "K")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/k_acetylation.csv")
x.process_data(vector_function="chemical", amino_acid="K", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/acet.csv", "K")
del x
"""
Explanation: Chemical Vector
End of explanation
"""
|
musketeer191/job_analytics | .ipynb_checkpoints/user_apply_job-checkpoint.ipynb | gpl-3.0 | # Global vars
# Imports needed by the cells below (pandas/numpy are used throughout)
import numpy as np
import pandas as pd

DATA_DIR = 'D:/larc_projects/job_analytics/data/clean/'
RES_DIR = 'd:/larc_projects/job_analytics/results/'
AGG_DIR = RES_DIR + 'agg/'
FIG_DIR = RES_DIR + 'figs/'
apps = pd.read_csv(DATA_DIR + 'apps_with_time.csv')
apps.shape
# Rm noise (numbers) in job_title column
apps['is_number'] = [is_number(t) for t in apps['job_title']]  # list comprehension works in both Python 2 and 3
apps = apps.query('is_number == False')
# del apps['user.id']; del apps['user.index']; del apps['item.index']; del apps['freq']
apps.rename(columns={'job.id': 'job_id', 'job.title': 'job_title', 'apply.date': 'apply_date'},
inplace=True)
apps.to_csv(DATA_DIR + 'apps_with_time.csv', index=False)
apps.head(3)
"""
Explanation: HELPERS:
End of explanation
"""
n_applicant = apps['uid'].nunique(); n_application = apps.shape[0]
n_job = len(np.unique(apps['job_id'])); n_job_title = len(np.unique(apps['job_title']))
n_company = posts['company_registration_number_uen_ep'].nunique()
stats = pd.DataFrame({'n_application': n_application, 'n_applicant': n_applicant,
'n_job': n_job, 'n_job_title': n_job_title, 'n_company': n_company}, index=[0])
stats
stats.to_csv(DATA_DIR + 'stats/stats.csv', index=False)
"""
Explanation: Basic statistics
End of explanation
"""
agg_apps = apps.groupby(by=['uid', 'job_title']).agg({'job_id': 'nunique', 'apply_date': 'nunique'})
# convert to DF
agg_apps = agg_apps.add_prefix('n_').reset_index()
agg_apps['n_apply'] = agg_apps['n_job_id']
agg_apps.head(3)
"""
Explanation: Applicant-apply-Job matrix
A. Number of times an applicant applies a specific job title (position).
End of explanation
"""
quantile(agg_apps['n_apply'])
"""
Explanation: Let's look at the quartiles of the number of times an applicant applies for a specific job.
End of explanation
"""
plt.hist(agg_apps['n_apply'], bins=np.unique(agg_apps['n_apply']), log=True)
plt.xlabel(r'$N_{apply}$')
plt.ylabel('# applicant-job pairs (log scale)')
plt.savefig(DATA_DIR + 'apply_freq.pdf')
plt.show()
plt.close()
"""
Explanation: As expected, for most of the cases (50%), an applicant applies just once for a specific job.
However, we can also see at least 1 extreme case where an applicant applies 582 times for a single job title. Thus, let's look more closely at the distribution of $N_{apply}$.
End of explanation
"""
extremes = agg_apps.query('n_apply >= 100')
extremes.sort_values(by='n_apply', ascending=False, inplace=True)
extremes.head()
print('No. of extreme cases: {}'.format(extremes.shape[0]))
"""
Explanation: From the histogram, we can see that there are cases where a user applies for a job title at least 100 times. Let's look more closely at those extreme cases.
Extreme cases (a user applies for the same job title at least 100 times)
End of explanation
"""
# ext_users = np.unique(extremes['uid'])
df = apps[apps['uid'].isin(extremes['uid'])]
df = df[df['job_title'].isin(extremes['job_title'])]
ext_apps = df
ext_apps.head(1)
res = calDuration(ext_apps)
res = pd.merge(res, extremes, left_index=True, right_on=['uid', 'job_title'])
res.sort_values(by='uid', inplace=True)
res = res[['uid', 'job_title', 'n_apply', 'first_apply_date', 'last_apply_date', 'n_active_day', 'total_duration_in_day']]
res.sort_values('n_apply', ascending=False, inplace=True)
res.head()
res.tail()
tmp = apps.query('uid==33833')[['uid', 'job_title', 'job_id']] .groupby('job_title').agg({'job_id': 'nunique'})
tmp = tmp.add_prefix('n_').reset_index()
tmp.rename(columns={'n_job_id': 'n_apply'}, inplace=True)
tmp.sort_values('n_apply', ascending=False, inplace=True)
quantile(res['n_active_day'])
res.to_csv(RES_DIR + 'extremes.csv')
"""
Explanation: To get a more complete picture of these extreme cases, let's add the apply dates and the companies of those jobs.
Get dates and compute duration of extreme applications:
End of explanation
"""
apps_with_duration = calDuration(apps)
apps_with_duration.head()
all_res = pd.merge(apps_with_duration, agg_apps, left_index=True, right_on=['uid', 'job_title'])
all_res.sort_values(by='uid', inplace=True)
all_res = all_res[['uid', 'job_title', 'n_apply', 'first_apply_date', 'last_apply_date', 'n_active_day', 'total_duration_in_day']]
all_res.head()
all_res.shape
all_res.to_csv(AGG_DIR + 'timed_apps.csv', index=False)
normal = all_res.query('n_apply < 100')
extremes = res
plt.figure(figsize=(10,6))
plt.subplot(1,2,1)
plt.hist(extremes['n_active_day'], bins=np.unique(extremes['n_active_day']))
plt.title('Extreme cases')
plt.xlabel('# active days')
plt.ylabel('# user-apply-job cases')
plt.subplots_adjust(wspace=.5)
plt.subplot(1,2,2)
plt.hist(normal['n_active_day'], bins=np.unique(normal['n_active_day']),
log=True)
plt.title('Normal cases')
plt.xlabel('# active days')
plt.ylabel('# user-apply-job cases')
plt.savefig(RES_DIR + 'n_active_day.pdf')
plt.show()
plt.close()
"""
Explanation: Dates/duration of all applications:
End of explanation
"""
agg_job_title = apps[['uid', 'job_title']].groupby('uid').agg({'job_title': 'nunique'})
agg_job_title = agg_job_title.add_prefix('n_').reset_index()
agg_job_title.sort_values('n_job_title', ascending=False, inplace=True)
# agg_job_title.head()
agg_job_id = apps[['uid', 'job_id']].groupby('uid').agg({'job_id': 'nunique'})
agg_job_id = agg_job_id.add_prefix('n_').reset_index()
agg_job_id.sort_values('n_job_id', ascending=False, inplace=True)
agg_df = pd.merge(agg_job_title, agg_job_id)
agg_df.rename(columns={'n_job_id': 'n_job'}, inplace=True)
agg_df.head()
plt.close('all')
fig = plt.figure(figsize=(10,6))
plt.subplot(1,2,1)
loglog(agg_df['n_job_title'], xl='# Job titles applied', yl='# applicants')
plt.subplots_adjust(wspace=.5)
plt.subplot(1,2,2)
loglog(agg_df['n_job'], xl='# Jobs applied', yl='# applicants')
plt.savefig(FIG_DIR + 'applied_jobs.pdf')
plt.show()
plt.close()
print(apps.shape[0])
# Join all job titles of each user for reference
t0 = time()
tmp = apps[['uid', 'job_title']].groupby('uid').agg({'job_title': paste})
print('Finished joining job titles after {}s'.format(time()-t0))
tmp = tmp.add_suffix('s').reset_index()
apps_by_job_title = pd.merge(agg_df, tmp)  # join the concatenated titles onto the per-user counts computed above
apps_by_job_title.sort_values('n_job_title', ascending=False, inplace=True)
apps_by_job_title.to_csv(AGG_DIR + 'apps_by_job_title.csv', index=False)
"""
Explanation: B. Number of different job titles an applicant applies
End of explanation
"""
posts = pd.read_csv(DATA_DIR + 'full_job_posts.csv')
print(posts.shape)
posts = dot2dash(posts)
posts.head()
# Extract just job id and employer id
job_and_employer = posts[['job_id', 'company_registration_number_uen_ep']].drop_duplicates()
job_and_employer.head(1)
# Load employer details (names, desc,...)
employer_detail = pd.read_csv(DATA_DIR + 'employers.csv')
employer_detail.drop_duplicates(inplace=True)
print(employer_detail.shape)
employer_detail = dot2dash(employer_detail)
employer_detail.head(1)
# Merge to add employer details
job_and_employer = job_and_employer.rename(columns={'company_registration_number_uen_ep': 'reg_no_uen_ep'})
df = pd.merge(apps, pd.merge(job_and_employer, employer_detail))
print(df.shape)
df.sort_values(by='organisation_name_ep', inplace=True, na_position='first')
df.head()
# del df['is_number']
# df.to_csv(DATA_DIR + 'full_apps.csv', index=False)
df.head()
user_apply_comp = df[['uid', 'reg_no_uen_ep', 'organisation_name_ep']]
user_apply_comp['n_apply'] = ''
apps_by_comp = user_apply_comp.groupby(['uid', 'reg_no_uen_ep', 'organisation_name_ep']).count()
apps_by_comp = apps_by_comp.reset_index()
apps_by_comp.sort_values('n_apply', ascending=False, inplace=True)
apps_by_comp.head()
apps_by_comp.to_csv(AGG_DIR + 'apps_by_comp.csv', index=False)
loglog(apps_by_comp['n_apply'], xl='# applications', yl='# user-apply-company cases')
plt.savefig(FIG_DIR + 'user_comp.pdf')
plt.show()
plt.close()
quantile(user_apply_comp['n_apply'])
"""
Explanation: C. Number of company an applicant applies
Merge necessary files to get a full dataset
End of explanation
"""
tmp = df[['uid', 'job_title', 'reg_no_uen_ep', 'organisation_name_ep']]
tmp['n_apply'] = ''
apps_by_job_comp = tmp.groupby(['uid', 'job_title', 'reg_no_uen_ep', 'organisation_name_ep']).count()
apps_by_job_comp = apps_by_job_comp.reset_index()
apps_by_job_comp.sort_values('n_apply', ascending=False, inplace=True)
print(apps_by_job_comp.shape)
apps_by_job_comp.head()
apps_by_job_comp.to_csv(AGG_DIR + 'apps_by_job_comp.csv', index=False)
loglog(apps_by_job_comp['n_apply'], xl='# applications', yl='# user-apply-job-at-company cases')
plt.savefig(FIG_DIR + 'user_job_comp.pdf')
plt.show()
plt.close()
job_comp = df[['job_title', 'organisation_name_ep']].drop_duplicates()
job_comp.shape
"""
Explanation: D. Number of (job title, company) an applicant applies
End of explanation
"""
|
enbanuel/phys202-2015-work | assignments/assignment07/AlgorithmsEx02.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
"""
Explanation: Algorithms Exercise 2
Imports
End of explanation
"""
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    # YOUR CODE HERE
    # Materialize the input first so any Python iterable is accepted.
    a = np.asarray(list(a))
    k = []
    for i in range(len(a)):
        # A point is a peak if it beats both neighbors; endpoints only
        # need to beat their single neighbor (no wraparound indexing).
        if (i == 0 or a[i] > a[i - 1]) and (i == len(a) - 1 or a[i] > a[i + 1]):
            k.append(i)
    return np.array(k, dtype=int)
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
"""
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
"""
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
# YOUR CODE HERE
h = pi_digits_str
j=[]
for i in h:
j.append(int(i))
n = np.array(j)
v = find_peaks(n)
m = np.diff(v)
f = plt.figure(figsize=(10,6))
plt.hist(m, bins=20)
plt.xlabel('Distance between consecutive maxima')
plt.ylabel('Count')
m
assert True # use this for grading the pi digits histogram
"""
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consecutive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation
"""
|
lithiumdenis/MLSchool | 7. Анализ тональности.ipynb | mit | import codecs
fileObj = codecs.open( 'data/TextWorks/training.txt', "r", "utf_8_sig" )
lines = fileObj.readlines()
#with open( 'data/TextWorks/training.txt'
# # Path to your training.txt file
# ) as handle:
# lines = handle.readlines()
data = [x.strip().split('\t') for x in lines]
df = pd.DataFrame(data=data, columns=['target', 'text'])
df.target = df.target.astype(np.int32)
df = df.drop_duplicates().reset_index(drop=True)
#with open( 'data/TextWorks/testdata.txt'
# # Path to your test.txt file
# ) as handle:
# lines = handle.readlines()
fileObj = codecs.open( 'data/TextWorks/testdata.txt', "r", "utf_8_sig" )
lines = fileObj.readlines()
data = [x.strip().split('\t') for x in lines]
df_test = pd.DataFrame(data=data, columns=['text'])
df_test = df_test.drop_duplicates().reset_index(drop=True)
df.head()
"""
Explanation: Data
Take the data from here: https://inclass.kaggle.com/c/si650winter11
This time it is an in-class competition rather than a real Kaggle one. In text tasks Kaggle usually doesn't hold back and ships gigabytes of data in its samples. This particular University of Michigan contest, however, unpacks quite easily on a home computer.
End of explanation
"""
# Download the nltk data packages via the downloader interface
#import nltk
#nltk.download()
from nltk import word_tokenize, wordpunct_tokenize, sent_tokenize
s = df.text[1]
s
import re
tokens = [x.lower() for x in word_tokenize(s) if re.match("[a-zA-Z\d]+", x) is not None]
tokens
"""
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Exercise 1.</h3>
</div>
</div>
Tokenization
End of explanation
"""
from nltk.corpus import stopwords
[print(x, end='\t') for x in stopwords.words('english')];
[print(x, end='\t') for x in stopwords.words('russian')];
throwed = [x for x in tokens if x in stopwords.words('english')]
throwed
filtered_tokens = [x for x in tokens if x not in stopwords.words('english')]
filtered_tokens
"""
Explanation: Stop words
End of explanation
"""
from nltk import WordNetLemmatizer
wnl = WordNetLemmatizer()
lemmatized = [wnl.lemmatize(x, pos='v') for x in filtered_tokens] # default is Verb
for a,b in zip(filtered_tokens, lemmatized):
print(a.rjust(10), '->', b.ljust(10), '' if a == b else '<- processed!')
lemmatized = [wnl.lemmatize(x) for x in filtered_tokens] # default is Noun
for a,b in zip(filtered_tokens, lemmatized):
print(a.rjust(10), '->', b.ljust(10), '' if a == b else '<- processed!')
# Tag the tokens with their parts of speech
from nltk import pos_tag
pos_tag(filtered_tokens)
from nltk.help import upenn_tagset
# Description of the tag abbreviations
#upenn_tagset()
from nltk.corpus import wordnet as wn
convert_tag = lambda t: { 'N': wn.NOUN, 'V': wn.VERB, 'R': wn.ADV, 'J': wn.ADJ, 'S': wn.ADJ_SAT }.get(t[:1], wn.NOUN)
lemmatized = [wnl.lemmatize(word, convert_tag(tag)) for word, tag in pos_tag(filtered_tokens)]
for a,b in zip(filtered_tokens, lemmatized):
print(a.rjust(10), '->', b.ljust(10), '' if a == b else '<- processed!')
"""
Explanation: Lemmatization (reduction to the singular nominative form)
End of explanation
"""
from nltk.stem import SnowballStemmer, LancasterStemmer, PorterStemmer
sbs = SnowballStemmer('english')
stemmed = [sbs.stem(x) for x in filtered_tokens]
for a,b in zip(filtered_tokens, lemmatized):
print(a.rjust(10), '->', b.ljust(10), '' if a == b else '<- processed!')
sbs = PorterStemmer()
stemmed = [sbs.stem(x) for x in filtered_tokens]
for a,b in zip(filtered_tokens, lemmatized):
print(a.rjust(10), '->', b.ljust(10), '' if a == b else '<- processed!')
sbs = LancasterStemmer()
stemmed = [sbs.stem(x) for x in filtered_tokens]
for a,b in zip(filtered_tokens, lemmatized):
print(a.rjust(10), '->', b.ljust(10), '' if a == b else '<- processed!')
sbs = SnowballStemmer('english')
def process_by_far(s):
s = [x.lower() for x in word_tokenize(s) if re.match("[a-zA-Z\d]+", x) is not None] # токенизация
s = [x for x in s if x not in stopwords.words('english')] # стоп-слова
s = [sbs.stem(x) for x in s] # стемминг
return ' '.join(s)
df['cleansed_text'] = df.text.apply(process_by_far)
df.head()
"""
Explanation: Stemming (truncating word endings)
End of explanation
"""
from toolz.itertoolz import concat
all_tokens = list(concat(df.cleansed_text.str.split()))
from nltk.probability import FreqDist
fd = FreqDist(all_tokens)
fd.most_common(10)
plt.figure(figsize=(22, 10));
fd.plot(100, cumulative=False)
fd.hapaxes()[:3] # all the words that occur only once
len(fd.keys()), len(fd.hapaxes())
# remove rarely occurring words
df['frequent_cleansed'] = df.cleansed_text.str.split()\
.apply(lambda ss: ' '.join([x for x in ss if x not in fd.hapaxes()]))
df.head()
"""
Explanation: Frequency dictionary
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
X_bow = cv.fit_transform(df.frequent_cleansed).todense();
y = df.target
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X_bow, y);
df.target[-4:]
df.frequent_cleansed[-4:]
df.frequent_cleansed[3:4]
text = cv.transform(df.frequent_cleansed[3:4])
import eli5
eli5.explain_prediction_sklearn(clf, text, feature_names=list(cv.vocabulary_.keys()))
"""
Explanation: Text encoding: Bag of Words
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(min_df=1)
X_tfidf = tfidf.fit_transform(df.frequent_cleansed).todense()
idf = tfidf.idf_
terms_score = list(zip(tfidf.get_feature_names(), idf))
sorted(terms_score, key=lambda x: -x[1])[:20]
"""
Explanation: TF-IDF (Term Frequency-Inverse Document Frequency)
End of explanation
"""
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
scores = cross_val_score(clf, X_bow, y)
print(np.mean(scores), '+/-', 2 * np.std(scores))
from sklearn.ensemble import RandomForestClassifier
scores = cross_val_score(RandomForestClassifier(n_estimators=100), X_bow, y)
print(np.mean(scores), '+/-', 2 * np.std(scores))
"""
Explanation: Models on sparse matrices
End of explanation
"""
# load the libraries and set options
from __future__ import division, print_function
# отключим всякие предупреждения Anaconda
import warnings
warnings.filterwarnings('ignore')
#%matplotlib inline
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
# load the training and test sets
train_df = pd.read_csv('data/TextWorks/train_sessions.csv')#,index_col='session_id')
test_df = pd.read_csv('data/TextWorks/test_sessions.csv')#, index_col='session_id')
# convert the time1, ..., time10 columns to datetime format
times = ['time%s' % i for i in range(1, 11)]
train_df[times] = train_df[times].apply(pd.to_datetime)
test_df[times] = test_df[times].apply(pd.to_datetime)
# sort the data by time
train_df = train_df.sort_values(by='time1')
# look at the head of the training set
train_df.head()
sites = ['site%s' % i for i in range(1, 11)]
# replace NaN with 0
train_df[sites] = train_df[sites].fillna(0).astype('int').astype('str')
test_df[sites] = test_df[sites].fillna(0).astype('int').astype('str')
# build the 'texts' needed to train word2vec
train_df['list'] = train_df['site1']
test_df['list'] = test_df['site1']
for s in sites[1:]:
train_df['list'] = train_df['list']+","+train_df[s]
test_df['list'] = test_df['list']+","+test_df[s]
train_df['list_w'] = train_df['list'].apply(lambda x: x.split(','))
test_df['list_w'] = test_df['list'].apply(lambda x: x.split(','))
# In our case a 'sentence' is the set of sites a user visited;
# there is no need to map the ids back to site names, since the algorithm learns the relations between them directly.
train_df['list_w'][10]
# import word2vec
from gensim.models import word2vec
# concatenate the training and test sets and train our model on all the data,
# with a window of 6 = 3*2 (sentence length is 10 words) and final vectors of dimension 300; the workers parameter sets the number of cores
test_df['target'] = -1
data = pd.concat([train_df,test_df], axis=0)
model = word2vec.Word2Vec(data['list_w'], size=300, window=3, workers=4)
# build a dictionary mapping each word to its vector
w2v = dict(zip(model.wv.index2word, model.wv.syn0))
class mean_vectorizer(object):
def __init__(self, word2vec):
self.word2vec = word2vec
self.dim = len(next(iter(word2vec.values())))
def fit(self, X):
return self
def transform(self, X):
return np.array([
np.mean([self.word2vec[w] for w in words if w in self.word2vec]
or [np.zeros(self.dim)], axis=0)
for words in X
])
data_mean=mean_vectorizer(w2v).fit(train_df['list_w']).transform(train_df['list_w'])
data_mean.shape
# Use a hold-out validation split
def split(train,y,ratio):
idx = round(train.shape[0] * ratio)
return train[:idx, :], train[idx:, :], y[:idx], y[idx:]
y = train_df['target']
Xtr, Xval, ytr, yval = split(data_mean, y,0.8)
Xtr.shape,Xval.shape,ytr.mean(),yval.mean()
# import the keras libraries
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Input
from keras.preprocessing.text import Tokenizer
from keras import regularizers
# define the neural network
model = Sequential()
model.add(Dense(128, input_dim=(Xtr.shape[1])))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['binary_accuracy'])
history = model.fit(Xtr, ytr,
batch_size=128,
epochs=10,
validation_data=(Xval, yval),
class_weight='auto',
verbose=0)
classes = model.predict(Xval, batch_size=128)
roc_auc_score(yval, classes)
"""
Explanation: Word2Vec
End of explanation
"""
|
MehtapIsik/assaytools | examples/competition-fluorescence-assay/2b MLE fit for three component binding - simulated data.ipynb | lgpl-2.1 | import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
import seaborn as sns
%pylab inline
#Competitive binding function
#This function and its assumptions are defined in greater detail in this notebook:
## modelling-CompetitiveBinding-ThreeComponentBinding.ipynb
def three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A):
"""
Parameters
----------
Ptot : float
Total protein concentration
Ltot : float
Total tracer(fluorescent) ligand concentration
Kd_L : float
Dissociation constant of the fluorescent ligand
Atot : float
Total competitive ligand concentration
Kd_A : float
Dissociation constant of the competitive ligand
Returns
-------
P : float
Free protein concentration
L : float
Free ligand concentration
A : float
Free ligand concentration
PL : float
Complex concentration
Kd_L_app : float
Apparent dissociation constant of L in the presence of A
Usage
-----
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
"""
Kd_L_app = Kd_L*(1+Atot/Kd_A)
PL = 0.5 * ((Ptot + Ltot + Kd_L_app) - np.sqrt((Ptot + Ltot + Kd_L_app)**2 - 4*Ptot*Ltot)) # complex concentration (uM)
P = Ptot - PL; # free protein concentration in sample cell after n injections (uM)
L = Ltot - PL; # free tracer ligand concentration in sample cell after n injections (uM)
A = Atot - PL; # free competitive ligand concentration in sample cell after n injections (uM)
return [P, L, A, PL, Kd_L_app]
#Let's define our parameters
Kd = 3800e-9 # M
Kd_Competitor = 3000e-9 # M
Ptot = 1e-9 * np.ones([12],np.float64) # M
Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M
L_Competitor = 10e-6 # M
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot, Ltot, Kd, L_Competitor, Kd_Competitor)
#using _base as a subscript to define when we have no competitive ligand
[P_base, L_base, A_base, PL_base, Kd_L_app_base] = three_component_competitive_binding(Ptot, Ltot, Kd, 0, Kd_Competitor)
# y will be complex concentration
# x will be total ligand concentration
plt.title('Competition assay')
plt.semilogx(Ltot,PL_base,'green', marker='o', label = 'Fluorescent Ligand')
plt.semilogx(Ltot,PL,'cyan', marker='o', label = 'Fluorescent Ligand + Competitor')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.legend(loc=0);
#What if we change our Kd's a little
#Kd_Gef_Abl
Kd = 480e-9 # M
#Kd_Ima_Abl
Kd_Competitor = 21.0e-9 # M
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot, Ltot, Kd, L_Competitor, Kd_Competitor)
#using _base as a subscript to define when we have no competitive ligand
[P_base, L_base, A_base, PL_base, Kd_L_app_base] = three_component_competitive_binding(Ptot, Ltot, Kd, 0, Kd_Competitor)
# y will be complex concentration
# x will be total ligand concentration
plt.title('Competition assay')
plt.semilogx(Ltot,PL_base,'green', marker='o', label = 'Fluorescent Ligand')
plt.semilogx(Ltot,PL,'cyan', marker='o', label = 'Fluorescent Ligand + Competitor')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.legend(loc=0);
"""
Explanation: MLE fit for three component binding - simulated data
In this notebook we will see how well we can reproduce Kd of a non-fluorescent ligand from simulated experimental data with a maximum likelihood function.
End of explanation
"""
# Making max 400 relative fluorescence units, and scaling all of PL to that
npoints = len(Ltot)
sigma = 10.0 # size of noise
F_i = (400/1e-9)*PL + sigma * np.random.randn(npoints)
F_i_base = (400/1e-9)*PL_base + sigma * np.random.randn(npoints)
# y will be complex concentration
# x will be total ligand concentration
plt.title('Competition assay')
plt.semilogx(Ltot,F_i_base,'green', marker='o', label = 'Fluorescent Ligand')
plt.semilogx(Ltot,F_i,'cyan', marker='o', label = 'Fluorescent Ligand + Competitor')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$Fluorescence$')
plt.legend(loc=0);
#And make up an F_L (the fluorescence coefficient of the free ligand)
F_L = 0.3
"""
Explanation: Now make this a fluorescence experiment.
End of explanation
"""
# This function fits Kd_L when L_Competitor is 0
def find_Kd_from_fluorescence_base(params):
[F_background, F_PL, Kd_L] = params
N = len(Ltot)
Fmodel_i = np.zeros([N])
for i in range(N):
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot[0], Ltot[i], Kd_L, 0, Kd_Competitor)
Fmodel_i[i] = (F_PL*PL + F_L*L) + F_background
return Fmodel_i
initial_guess = [1,400/1e-9,3800e-9]
prediction = find_Kd_from_fluorescence_base(initial_guess)
plt.semilogx(Ltot,prediction,color='k')
plt.semilogx(Ltot,F_i_base, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();
def sumofsquares(params):
prediction = find_Kd_from_fluorescence_base(params)
return np.sum((prediction - F_i_base)**2)
initial_guess = [0,3E11,2000E-9]
fit = optimize.minimize(sumofsquares,initial_guess,method='Nelder-Mead')
print "The predicted parameters are", fit.x
fit_prediction = find_Kd_from_fluorescence_base(fit.x)
plt.semilogx(Ltot,fit_prediction,color='k')
plt.semilogx(Ltot,F_i_base, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();
Kd_L_MLE = fit.x[2]
def Kd_format(Kd):
if (Kd < 1e-12):
Kd_summary = "Kd = %.1f nM " % (Kd/1e-15)
elif (Kd < 1e-9):
Kd_summary = "Kd = %.1f pM " % (Kd/1e-12)
elif (Kd < 1e-6):
Kd_summary = "Kd = %.1f nM " % (Kd/1e-9)
elif (Kd < 1e-3):
Kd_summary = "Kd = %.1f uM " % (Kd/1e-6)
elif (Kd < 1):
Kd_summary = "Kd = %.1f mM " % (Kd/1e-3)
else:
Kd_summary = "Kd = %.3e M " % (Kd)
return Kd_summary
Kd_format(Kd_L_MLE)
delG_summary = "delG = %s kT" %np.log(Kd_L_MLE)
delG_summary
"""
Explanation: First let's see if we can find Kd of our fluorescent ligand from the three component binding model, if we know there's no competitive ligand
End of explanation
"""
# This function fits Kd_A when Kd_L already has an estimate
def find_Kd_from_fluorescence_competitor(params):
[F_background, F_PL, Kd_Competitor] = params
N = len(Ltot)
Fmodel_i = np.zeros([N])
for i in range(N):
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot[0], Ltot[i], Kd_L_MLE, L_Competitor, Kd_Competitor)
Fmodel_i[i] = (F_PL*PL + F_L*L) + F_background
return Fmodel_i
initial_guess = [0,400/1e-9,3800e-9]
prediction = find_Kd_from_fluorescence_competitor(initial_guess)
plt.semilogx(Ltot,prediction,color='k')
plt.semilogx(Ltot,F_i, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();
def sumofsquares(params):
prediction = find_Kd_from_fluorescence_competitor(params)
return np.sum((prediction - F_i)**2)
initial_guess = [0,3E11,2000E-9]
fit_comp = optimize.minimize(sumofsquares,initial_guess,method='Nelder-Mead')
print "The predicted parameters are", fit_comp.x
fit_prediction_comp = find_Kd_from_fluorescence_competitor(fit_comp.x)
plt.semilogx(Ltot,fit_prediction_comp,color='k')
plt.semilogx(Ltot,F_i, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();
Kd_A_MLE = fit_comp.x[2]
Kd_format(Kd_A_MLE)
delG_summary = "delG = %s kT" %np.log(Kd_A_MLE)
delG_summary
"""
Explanation: Okay, cool now let's try to fit our competitive ligand
End of explanation
"""
plt.semilogx(Ltot,fit_prediction_comp,color='k')
plt.semilogx(Ltot,F_i, 'o')
plt.axvline(Kd_A_MLE,color='k',label='%s (MLE)'%Kd_format(Kd_A_MLE))
plt.axvline(Kd_Competitor,color='b',label='%s'%Kd_format(Kd_Competitor))
plt.semilogx(Ltot,fit_prediction,color='k')
plt.semilogx(Ltot,F_i_base, 'o')
plt.axvline(Kd_L_MLE,color='k',label='%s (MLE)'%Kd_format(Kd_L_MLE))
plt.axvline(Kd,color='g',label='%s'%Kd_format(Kd))
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend(loc=0);
#Awesome
"""
Explanation: Let's plot all these results with each other and see how we did
End of explanation
"""
|
MissouriDSA/twitter-locale | twitter/twitter_2.ipynb | mit | # BE SURE TO RUN THIS CELL BEFORE ANY OF THE OTHER CELLS
import psycopg2
import pandas as pd
# query database
statement = """
SELECT *
FROM twitter.tweet
WHERE job_id = 261
LIMIT 1000;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
job_261 = {}
for i in list(range(len(column_names))):
job_261['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(job_261)
"""
Explanation: Twitter: An Analysis of Linguistic Diversity
Part II
Up until this point, we haven't specified where the tweets were made. As mentioned earlier, this database is constantly updating and collecting tweets from 50 different cities across the United States. This is done by specifying a radius around the coordinates of a city through our API call. Twitter then returns tweets from that specific geographic area whenever that call is made. Now, you may have noticed that the tweet table contains a location_geo column that references the geographic coordinates of where the tweet was sent out to the world. This is one reference that the API will return from our call. You may have also noticed that this column is largely empty. That is simply because most users have this data collection feature turned off (it turns out that turned off is the default setting). So what about the other tweets, the majority that don't have a location_geo? Well, Twitter has an unspecified algorithm that can identify where a tweeter is based on particular features.
For our purposes, this city-level location information is stored in the job_id column, specifically ids 255 and 257 through 305. Each of these job ids refers to a job that contains the query for a specific city; therefore, each time a job is run, a different city's data is gathered.
To see how this works, we can start by querying a specific job in the tweet table to return only the data from one city. We can go ahead and query for job id 261. All we have to do is add a WHERE clause to our query and specify that we want job_id to equal 261.
Here are the "Twitter Collection Jobs" we are using in this notebook (and throughout the Twitter collection). Tweets were collected using the Twitter Search API from January 14th, 2017 through May 18, 2017. We continue to collect tweets for these jobs as part of a research project led by James Bain, Sean Goggins & Grant Scott.
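Rather than hard-coding 261 into the SQL string, the same query can be reused for any of the jobs listed below by letting psycopg2 substitute the id. A sketch — the helper name is ours, but passing parameters to cursor.execute via %s placeholders is the standard psycopg2 pattern and keeps the SQL injection-safe:

```python
def city_tweets_query(job_id, limit=1000):
    # Build a parameterized (statement, params) pair for one collection job.
    statement = (
        "SELECT * "
        "FROM twitter.tweet "
        "WHERE job_id = %s "
        "LIMIT %s;"
    )
    return statement, (job_id, limit)

statement, params = city_tweets_query(261)   # Columbia, Missouri
# cursor.execute(statement, params)          # run against the connection above
```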
<table border=0 cellpadding=0 cellspacing=0 width=691 style='border-collapse:
collapse;table-layout:fixed;width:518pt'>
<col width=51 style='mso-width-source:userset;mso-width-alt:1621;width:38pt'>
<col width=416 style='mso-width-source:userset;mso-width-alt:13312;width:312pt'>
<col width=224 style='mso-width-source:userset;mso-width-alt:7168;width:168pt'>
<tr height=21 style='height:16.0pt'>
<td height=21 class=xl65 width=51 style='height:16.0pt;width:38pt'><b>job_id</b></td>
<td class=xl65 width=416 style='width:312pt'><b>query</b></td>
<td class=xl65 width=224 style='width:168pt'><b>description</b></td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>255</td>
<td>q=&geocode=42.5144566,-83.01465259999999,40km</td>
<td>Warren, Michigan</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>256</td>
<td>q=&geocode=39.0997,-94.5786,40km</td>
<td>Kansas City, Missouri</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>257</td>
<td>q=&geocode=36.7468422,-119.7725868,40km</td>
<td>Fresno, California</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>258</td>
<td>q=&geocode=39.103118200000004,-84.5120196,40km</td>
<td>Cincinnati, Ohio</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>259</td>
<td>q=&geocode=35.4675602,-97.5164276,40km</td>
<td>Oklahoma City, Oklahoma</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>260</td>
<td>q=&geocode=40.6084305,-75.4901833,40km</td>
<td>Allentown, Pennsylvania</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>261</td>
<td>q=&geocode=38.95170529999999,-92.33407240000001,40km</td>
<td>Columbia, Missouri</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>262</td>
<td>q=&geocode=41.499320000000004,-81.6943605,40km</td>
<td>Cleveland, Ohio</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>263</td>
<td>q=&geocode=41.600544799999994,-93.6091064,40km</td>
<td>Des Moines, Iowa</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>264</td>
<td>q=&geocode=42.9633599,-85.6680863,40km</td>
<td>Grand Rapids, Michigan</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>265</td>
<td>q=&geocode=44.0121221,-92.4801989,40km</td>
<td>Rochester, Minnesota</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>266</td>
<td>q=&geocode=33.5805955,-112.23737790000001,40km</td>
<td>Peoria, Arizona</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>267</td>
<td>q=&geocode=37.20895720000001,-93.2922989,40km</td>
<td>Springfield, Missouri</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>268</td>
<td>q=&geocode=30.2240897,-92.0198427,40km</td>
<td>Lafayette, Louisiana</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>269</td>
<td>q=&geocode=38.9822282,-94.6707917,40km</td>
<td>Overland Park, Kansas</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>270</td>
<td>q=&geocode=43.0389025,-87.9064736,40km</td>
<td>Milwaukee, Wisconsin</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>271</td>
<td>q=&geocode=36.0395247,-114.9817213,40km</td>
<td>Henderson, Nevada</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>272</td>
<td>q=&geocode=35.2270869,-80.8431267,40km</td>
<td>Charlotte, North Carolina</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>273</td>
<td>q=&geocode=41.8781136,-87.62979820000001,40km</td>
<td>Chicago, Illinois</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>274</td>
<td>q=&geocode=25.9017472,-97.4974838,40km</td>
<td>Brownsville, Texas</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>275</td>
<td>q=&geocode=42.360082500000004,-71.0588801,40km</td>
<td>Boston, Massachusetts</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>276</td>
<td>q=&geocode=30.458282899999997,-91.1403196,40km</td>
<td>Baton Rouge, Louisiana</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>277</td>
<td>q=&geocode=33.3061605,-111.8412502,40km</td>
<td>Chandler, Arizona</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>278</td>
<td>q=&geocode=39.529632899999996,-119.8138027,40km</td>
<td>Reno, Nevada</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>279</td>
<td>q=&geocode=33.7455731,-117.8678338,40km</td>
<td>Santa Ana, California</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>280</td>
<td>q=&geocode=41.308274,-72.9278835,40km</td>
<td>New Haven, Connecticut</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>281</td>
<td>q=&geocode=36.060949,-95.7974526,40km</td>
<td>Broken Arrow, Oklahoma</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>282</td>
<td>q=&geocode=40.5187154,-74.4120953,40km</td>
<td>Edison, New Jersey</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>283</td>
<td>q=&geocode=42.2711311,-89.09399520000001,40km</td>
<td>Rockford, Illinois</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>284</td>
<td>q=&geocode=39.9525839,-75.1652215,40km</td>
<td>Philadelphia, Pennsylvania</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>285</td>
<td>q=&geocode=40.6096698,-111.9391031,40km</td>
<td>West Jordan, Utah</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>286</td>
<td>q=&geocode=36.0998596,-80.24421600000001,40km</td>
<td>Winston Salem, North Carolina</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>287</td>
<td>q=&geocode=32.5251516,-93.75017890000001,40km</td>
<td>Shreveport, Louisiana</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>288</td>
<td>q=&geocode=31.7618778,-106.48502169999999,40km</td>
<td>El Paso, Texas</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>289</td>
<td>q=&geocode=33.5206608,-86.80249,40km</td>
<td>Birmingham, Alabama</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>290</td>
<td>q=&geocode=42.8864468,-78.8783689,40km</td>
<td>Buffalo, New York</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>291</td>
<td>q=&geocode=40.7127837,-74.00594129999999,40km</td>
<td>New York City, New York</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>292</td>
<td>q=&geocode=37.9715592,-87.5710898,40km</td>
<td>Evansville, Indiana</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>293</td>
<td>q=&geocode=32.776474900000004,-79.9310512,40km</td>
<td>Charleston, South Carolina</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>294</td>
<td>q=&geocode=44.953702899999996,-93.0899578,40km</td>
<td>Saint Paul, Minnesota</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>295</td>
<td>q=&geocode=45.5001357,-122.43020130000001,40km</td>
<td>Gresham, Oregon</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>296</td>
<td>q=&geocode=38.804835499999996,-77.0469214,40km</td>
<td>Alexandria, Virginia</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>297</td>
<td>q=&geocode=29.7604267,-95.36980279999999,40km</td>
<td>Houston, Texas</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>298</td>
<td>q=&geocode=40.6936488,-89.58898640000001,40km</td>
<td>Peoria, Illinois</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>299</td>
<td>q=&geocode=32.8546197,-79.9748103,40km</td>
<td>North Charleston, South Carolina</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>300</td>
<td>q=&geocode=40.233843799999995,-111.6585337,40km</td>
<td>Provo, Utah</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>301</td>
<td>q=&geocode=35.222566799999996,-97.4394777,40km</td>
<td>Norman, Oklahoma</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>302</td>
<td>q=&geocode=33.4734978,-82.0105148,40km</td>
<td>Augusta, Georgia</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>303</td>
<td>q=&geocode=47.6062095,-122.33207079999998,40km</td>
<td>Seattle, Washington</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>304</td>
<td>q=&geocode=35.99403289999999,-78.898619,40km</td>
<td>Durham, North Carolina</td>
</tr>
<tr height=21 style='height:16.0pt'>
<td height=21 align=right style='height:16.0pt'>305</td>
<td>q=&geocode=39.768403,-86.158068,40km</td>
<td>Indianapolis, Indiana</td>
</tr>
</table>
End of explanation
"""
# query database
statement = """
SELECT j.description, t.*
FROM twitter.tweet t, twitter.job j
WHERE t.job_id = 261 AND t.job_id = j.job_id
LIMIT 1000;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
job_261 = {}
for i in list(range(len(column_names))):
job_261['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(job_261)
"""
Explanation: Wonderful! But we are missing what job id 261 actually means; that information isn't stored in the tweet table. Instead, it is located in the job table, which defines each collection job and contains a description column holding the city name for that job. Let's go ahead and JOIN this column to the query that we created above so we can see what city we are looking at.
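The cursor-results-to-DataFrame dance above repeats after every query in this notebook; a small pure-Python helper (the function name is ours) captures the pattern:

```python
def rows_to_dict(column_names, rows):
    """Pivot cursor results (a list of row tuples) into a
    column -> values dict, ready to hand to pd.DataFrame(...)."""
    return {name: [row[i] for row in rows]
            for i, name in enumerate(column_names)}

cols = ["job_id", "from_user"]
rows = [(261, "a"), (261, "b")]
print(rows_to_dict(cols, rows))  # {'job_id': [261, 261], 'from_user': ['a', 'b']}
```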
End of explanation
"""
# put your code here
# ------------------
# query database
statement = """
SELECT DISTINCT from_user, COUNT(*)
FROM (
SELECT from_user
FROM twitter.tweet
WHERE job_id = 261
LIMIT 10000) AS users
GROUP BY from_user
ORDER BY count;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
job_261 = {}
for i in list(range(len(column_names))):
job_261['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(job_261)
"""
Explanation: There we have it. Take a look at the description column. 261 = Columbia, Missouri. This was achieved by joining the tweet table to the job table's description where job_id = 261. This can be done so easily because job_id in the tweet table is a foreign key, which corresponds to the job_id column in the job table.
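The foreign-key lookup can be sketched in plain Python: the job table becomes a dict keyed by job_id, and each tweet row pulls in its description (toy data; the real tables live in PostgreSQL):

```python
# Toy job table: job_id -> description (the foreign-key target).
jobs = {261: "Columbia, Missouri", 262: "Cleveland, Ohio"}

tweets = [
    {"job_id": 261, "from_user": "a"},
    {"job_id": 262, "from_user": "b"},
]

# Equivalent of joining tweet to job on job_id.
joined = [dict(row, description=jobs[row["job_id"]]) for row in tweets]
print(joined[0]["description"])  # Columbia, Missouri
```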
Now that we know we are working with tweets from Columbia, MO, let's start digging into some summaries. The first thing we want to look at is whether or not the tweeters of Columbia are tweeting at relatively the same rate.
<span style="background-color: #FFFF00">YOUR TURN</span>
Using what we learned in the previous twitter notebook, count the number of tweets per user in Columbia when limiting the result to 10,000 rows. Do users tweet at relatively the same rate?
End of explanation
"""
statement = """
SELECT
DISTINCT ON (from_user, iso_language)
*
FROM (SELECT * FROM twitter.tweet WHERE job_id = 261 LIMIT 10000) as T
ORDER BY from_user, iso_language;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
job_261 = {}
for i in list(range(len(column_names))):
job_261['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(job_261)
"""
Explanation: SPOILER ALERT! There are some people who tweet a whole lot more than others. These high-volume tweeters could distort our diversity measure when we eventually get to calculating it. Since we are interested in how many speakers of each language there are, a handful of prolific users could artificially boost the count of a language. Instead, we want to limit our result to one row per user.
Let's do just that. In our query, we want to specify that we only want one row per user. In fact, we can add one more condition on top of that: bilingual/multilingual individuals might be interesting to keep track of, so we can say one row per user per language. This way we can still capture those users who tweet in different languages.
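PostgreSQL's DISTINCT ON keeps only the first row for each (from_user, iso_language) combination under the ORDER BY; a pure-Python sketch of that behavior on toy rows:

```python
rows = [
    {"from_user": "a", "iso_language": "en", "text": "first"},
    {"from_user": "a", "iso_language": "en", "text": "second"},  # dropped: duplicate pair
    {"from_user": "a", "iso_language": "es", "text": "hola"},    # kept: new language
    {"from_user": "b", "iso_language": "en", "text": "hi"},
]

seen = set()
distinct_on = []
for row in rows:  # rows assumed already sorted by (from_user, iso_language)
    key = (row["from_user"], row["iso_language"])
    if key not in seen:
        seen.add(key)
        distinct_on.append(row)

print(len(distinct_on))  # 3
```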
End of explanation
"""
unique_users = pd.DataFrame(job_261)
print("Number of rows: {}".format(len(unique_users)))
print("Number of unique users: {}".format(len(pd.unique(unique_users['from_user']))))
"""
Explanation: We can now save this data frame to an object. We will call this unique_users. Let's see if we actually have unique users.
End of explanation
"""
print("This should be equal to the number of rows of the entire data frame: {}".format(
len(pd.unique(unique_users['from_user'] + unique_users['iso_language']))))
"""
Explanation: Whoops! The number of unique users is less than the number of rows. That means there are some duplicate users. But remember, we didn't just limit it to unique users; we limited it to one row per unique language per user. Let's see if that works.
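One caveat with the concatenation check: in principle two different pairs can concatenate to the same string ('ab' + 'c' equals 'a' + 'bc'), so counting distinct tuples is the safer test. A toy illustration:

```python
pairs = [("ab", "c"), ("a", "bc"), ("a", "bc")]

# Concatenation collapses the first two (distinct) pairs into one key.
concat_unique = {user + lang for user, lang in pairs}
tuple_unique = {(user, lang) for user, lang in pairs}

print(len(concat_unique), len(tuple_unique))  # 1 2
```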
End of explanation
"""
statement = """
SELECT
DISTINCT ON (from_user, iso_language)
*
FROM (SELECT * FROM twitter.tweet WHERE job_id = 261 AND iso_language != 'und' LIMIT 10000) as T
ORDER BY from_user, iso_language;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
job_261 = {}
for i in list(range(len(column_names))):
job_261['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(job_261)
"""
Explanation: And now, after this step, we want to run this query again, but there is one more step. In the previous notebook we counted the number of speakers per iso_language. However, there is a subset of rows that don't provide any meaning to us. You will notice that one of the languages is "und". This is not actually a language. Instead it is Twitter's way of saying "we don't know how to identify the written language of this tweet." Since unidentified languages aren't actually languages, we need to remove these rows where a language isn't specified.
End of explanation
"""
df = pd.DataFrame(job_261)
df[df['iso_language'] == 'und']
"""
Explanation: ..and we can check to see if any of these rows with "und" still exist.
End of explanation
"""
statement = """
SELECT DISTINCT iso_language, COUNT(*)
FROM
(SELECT
DISTINCT ON (from_user, iso_language)
*
FROM (SELECT * FROM twitter.tweet WHERE job_id = 261 AND iso_language != 'und' LIMIT 10000) as T
ORDER BY from_user, iso_language) as UNIQ
GROUP BY iso_language;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
job_261 = {}
for i in list(range(len(column_names))):
job_261['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(job_261)
"""
Explanation: Okay, and now the final step is to count the number of speakers of each language after all of this clean up is done.
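The GROUP BY / COUNT step maps directly onto collections.Counter, and once we have per-language counts, the diversity measure mentioned earlier could be something like Shannon entropy (a sketch on toy data, not the notebook's final metric):

```python
import math
from collections import Counter

languages = ["en", "en", "en", "es", "fr"]  # one row per user per language
counts = Counter(languages)
print(counts)  # Counter({'en': 3, 'es': 1, 'fr': 1})

# One common linguistic-diversity measure: Shannon entropy of the language mix.
total = sum(counts.values())
entropy = -sum((n / total) * math.log(n / total) for n in counts.values())
print(round(entropy, 3))
```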
End of explanation
"""
# put your code here
# ------------------
statement = """
SELECT *
FROM twitter.tweet
WHERE job_id = 261 AND location_geo IS NOT NULL
LIMIT 100;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
job_261 = {}
for i in list(range(len(column_names))):
job_261['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(job_261)
"""
Explanation: <span style="background-color: #FFFF00">YOUR TURN</span>
Okay, one thing: most tweets don't have location coordinates tied to them. This is unfortunate, but it is what it is. Still, you can bet that we are collecting data whose location_geo is not empty.
Now test your skills out and remove any row where location_geo is Null. For guidance, take a look here (https://www.postgresql.org/docs/9.1/static/functions-comparison.html). Limit by 10,000.
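The reason IS NOT NULL is needed (rather than != NULL, which is never true under SQL's three-valued logic) is worth remembering; the Python analogue is an explicit None check:

```python
rows = [
    {"from_user": "a", "location_geo": "38.95,-92.33"},
    {"from_user": "b", "location_geo": None},
    {"from_user": "c", "location_geo": "38.96,-92.30"},
]

# Equivalent of: WHERE location_geo IS NOT NULL
located = [r for r in rows if r["location_geo"] is not None]
print(len(located))  # 2
```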
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/ab20eadd8e6e3c70dc4dd75cfef6ca4c/60_visualize_stc.ipynb | bsd-3-clause | import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample, fetch_hcp_mmp_parcellation
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne import read_evokeds
data_path = sample.data_path()
sample_dir = op.join(data_path, 'MEG', 'sample')
subjects_dir = op.join(data_path, 'subjects')
fname_evoked = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fname_stc = op.join(sample_dir, 'sample_audvis-meg')
fetch_hcp_mmp_parcellation(subjects_dir)
"""
Explanation: Visualize source time courses (stcs)
This tutorial focuses on visualization of :term:source estimates <STC>.
Surface Source Estimates
First, we get the paths for the evoked data and the source time courses (stcs).
End of explanation
"""
stc = mne.read_source_estimate(fname_stc, subject='sample')
"""
Explanation: Then, we read the stc from file.
End of explanation
"""
print(stc)
"""
Explanation: This is a :class:SourceEstimate <mne.SourceEstimate> object.
End of explanation
"""
initial_time = 0.1
brain = stc.plot(subjects_dir=subjects_dir, initial_time=initial_time,
clim=dict(kind='value', lims=[3, 6, 9]),
smoothing_steps=7)
"""
Explanation: The SourceEstimate object is in fact a surface source estimate. MNE also
supports volume-based source estimates but more on that later.
We can plot the source estimate using the
:func:stc.plot <mne.SourceEstimate.plot> just as in other MNE
objects. Note that for this visualization to work, you must have mayavi
and pysurfer installed on your machine.
End of explanation
"""
stc_fs = mne.compute_source_morph(stc, 'sample', 'fsaverage', subjects_dir,
smooth=5, verbose='error').apply(stc)
brain = stc_fs.plot(subjects_dir=subjects_dir, initial_time=initial_time,
clim=dict(kind='value', lims=[3, 6, 9]),
surface='flat', hemi='both', size=(1000, 500),
smoothing_steps=5, time_viewer=False,
add_data_kwargs=dict(
colorbar_kwargs=dict(label_font_size=10)))
# to help orient us, let's add a parcellation (red=auditory, green=motor,
# blue=visual)
brain.add_annotation('HCPMMP1_combined', borders=2, subjects_dir=subjects_dir)
# You can save a movie like the one on our documentation website with:
# brain.save_movie(time_dilation=20, tmin=0.05, tmax=0.16,
# interpolation='linear', framerate=10)
"""
Explanation: You can also morph it to fsaverage and visualize it using a flatmap.
End of explanation
"""
mpl_fig = stc.plot(subjects_dir=subjects_dir, initial_time=initial_time,
backend='matplotlib', verbose='error', smoothing_steps=7)
"""
Explanation: Note that here we used initial_time=0.1, but we can also browse through
time using time_viewer=True.
In case mayavi is not available, we also offer a matplotlib
backend. Here we use verbose='error' to ignore a warning that not all
vertices were used in plotting.
End of explanation
"""
evoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False).crop(0.05, 0.15)
# this risks aliasing, but these data are very smooth
evoked.decimate(10, verbose='error')
"""
Explanation: Volume Source Estimates
We can also visualize volume source estimates (used for deep structures).
Let us load the sensor-level evoked data. We select the MEG channels
to keep things simple.
End of explanation
"""
fname_inv = op.join(data_path, 'MEG', 'sample', 'sample_audvis-meg-vol-7-meg-inv.fif')
inv = read_inverse_operator(fname_inv)
src = inv['src']
mri_head_t = inv['mri_head_t']
"""
Explanation: Then, we can load the precomputed inverse operator from a file.
End of explanation
"""
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
stc = apply_inverse(evoked, inv, lambda2, method)
del inv
"""
Explanation: The source estimate is computed using the inverse operator and the
sensor-space data.
End of explanation
"""
print(stc)
"""
Explanation: This time, we have a different container
(:class:VolSourceEstimate <mne.VolSourceEstimate>) for the source time
course.
End of explanation
"""
stc.plot(src, subject='sample', subjects_dir=subjects_dir)
"""
Explanation: This too comes with a convenient plot method.
End of explanation
"""
stc.plot(src, subject='sample', subjects_dir=subjects_dir, mode='glass_brain')
"""
Explanation: For this visualization, nilearn must be installed.
This visualization is interactive. Click on any of the anatomical slices
to explore the time series. Clicking on any time point will bring up the
corresponding anatomical map.
We could visualize the source estimate on a glass brain. Unlike the previous
visualization, a glass brain does not show us one slice but what we would
see if the brain were transparent like glass, and a
:term:maximum intensity projection is used:
End of explanation
"""
fname_aseg = op.join(subjects_dir, 'sample', 'mri', 'aparc+aseg.mgz')
label_names = mne.get_volume_labels_from_aseg(fname_aseg)
label_tc = stc.extract_label_time_course(fname_aseg, src=src)
lidx, tidx = np.unravel_index(np.argmax(label_tc), label_tc.shape)
fig, ax = plt.subplots(1)
ax.plot(stc.times, label_tc.T, 'k', lw=1., alpha=0.5)
xy = np.array([stc.times[tidx], label_tc[lidx, tidx]])
xytext = xy + [0.01, 1]
ax.annotate(
label_names[lidx], xy, xytext, arrowprops=dict(arrowstyle='->'), color='r')
ax.set(xlim=stc.times[[0, -1]], xlabel='Time (s)', ylabel='Activation')
for key in ('right', 'top'):
ax.spines[key].set_visible(False)
fig.tight_layout()
"""
Explanation: You can also extract label time courses using volumetric atlases. Here we'll
use the built-in aparc+aseg.mgz:
End of explanation
"""
labels = [label_names[idx] for idx in np.argsort(label_tc.max(axis=1))[::-1][:7]
if 'unknown' not in label_names[idx].lower()] # remove catch-all
brain = mne.viz.Brain('sample', hemi='both', surf='pial', alpha=0.5,
cortex='low_contrast', subjects_dir=subjects_dir)
brain.add_volume_labels(aseg='aparc+aseg', labels=labels)
brain.show_view(azimuth=250, elevation=40, distance=400)
brain.enable_depth_peeling()
"""
Explanation: We can plot several labels with the most activation in their time course
for a more fine-grained view of the anatomical loci of activation.
End of explanation
"""
stc_back = mne.labels_to_stc(fname_aseg, label_tc, src=src)
stc_back.plot(src, subjects_dir=subjects_dir, mode='glass_brain')
"""
Explanation: And we can project these label time courses back to their original
locations and see how the plot has been smoothed:
End of explanation
"""
fname_inv = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif')
inv = read_inverse_operator(fname_inv)
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector')
brain = stc.plot(subject='sample', subjects_dir=subjects_dir,
initial_time=initial_time, brain_kwargs=dict(
silhouette=True), smoothing_steps=7)
"""
Explanation: Vector Source Estimates
If we choose to use pick_ori='vector' in
:func:apply_inverse <mne.minimum_norm.apply_inverse>
End of explanation
"""
fname_cov = op.join(sample_dir, 'sample_audvis-cov.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem', 'sample-5120-bem-sol.fif')
fname_trans = op.join(sample_dir, 'sample_audvis_raw-trans.fif')
"""
Explanation: Dipole fits
For computing a dipole fit, we need to load the noise covariance, the BEM
solution, and the coregistration transformation files. Note that for the
other methods, these were already used to generate the inverse operator.
End of explanation
"""
evoked.crop(0.1, 0.1)
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
"""
Explanation: Dipoles are fit independently for each time point, so let us crop our time
series to visualize the dipole fit for the time point of interest.
End of explanation
"""
dip.plot_locations(fname_trans, 'sample', subjects_dir)
"""
Explanation: Finally, we can visualize the dipole.
End of explanation
"""
|
Upward-Spiral-Science/uhhh | code/.ipynb_checkpoints/[Assignment 11] JM-checkpoint.ipynb | apache-2.0 | y_sum = [0] * len(vol[0,:,0])
for i in range(len(vol[0,:,0])):
y_sum[i] = sum(sum(vol[:,i,:]))
ax = sns.barplot(x=range(len(y_sum)), y=y_sum, color="b")
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Great — we're done with setup.
Analysis — Week 2
1. Layers of Cortex
Let's look closer at the layers of cortex and determine their boundaries statistically.
End of explanation
"""
from scipy.signal import argrelextrema
def local_minima(a):
return argrelextrema(a, np.less)
whole_volume_minima = local_minima(np.array(y_sum))
whole_volume_minima
"""
Explanation: Above, we see a histogram of y_sum that indicates that there is a local minimum at the 12th layer of y-sampling, which colocates with where we anticipate seeing the boundary between layers I and II. Here is the biological substantiation:
As we can see, at about 1/3 of the 'depth' into cortex is the boundary to layer II.
2. Generate local minima by subsection of cortex
We'll use these local minima to decide where to draw the lines between layers of cortex on the imagery.
End of explanation
"""
CHUNK_SIZE = 25
sections = [(i*CHUNK_SIZE, (i+1)*CHUNK_SIZE) for i in range(len(vol[:,0,0]) // CHUNK_SIZE)]
histogram = {}
for s in sections:
section = vol[s[0]:s[1]]
histogram[s] = [0] * len(vol[0,:,0])
for i in range(len(vol[0,:,0])):
histogram[s][i] = sum(sum(vol[s[0]:s[1],i,:]))
h_local_minima = []
for t, h in histogram.iteritems():
h_local_minima.extend([i for i in local_minima(np.array(h))])
total_histogram = [item for sublist in h_local_minima for item in sublist]
sns.distplot(total_histogram, bins=26)
sns.distplot(total_histogram, bins=15)
scatterable = []
i = 0
for h in h_local_minima:
[scatterable.append([i, m]) for m in h]
i += 1
plt.scatter(x=[s[0] for s in scatterable], y=[s[1] for s in scatterable])
"""
Explanation: Now let's examine smaller chunks of the volume:
End of explanation
"""
from sklearn.cluster import KMeans
plt.scatter([0] * len(total_histogram), total_histogram)
NUM_CLUSTERS = 3
k3cluster = KMeans(n_clusters=NUM_CLUSTERS)
total_histogram.sort()
clusters_for_th = k3cluster.fit_predict(np.array(total_histogram).reshape(-1, 1))
"""
Explanation: This coincides with our understanding that our sample space extends midway into layer 4, but covers all of layers 1, 2, and 3.
To play with these parameters, try changing CHUNK_SIZE to other values to change how small the sections of subcortex are.
3. Clustering and Selecting "Real" Cortex Boundaries
We'll flatten this list in 2D and then pick centers of mass for each local-minima cluster.
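Under the hood, KMeans alternates between assigning points to their nearest center and moving each center to the mean of its members; a toy 1-D version (with fixed initial centers so the result is deterministic) makes the loop explicit:

```python
def kmeans_1d(values, centers, iters=10):
    """Lloyd's algorithm on scalars: assign each value to its nearest
    center, then move each center to the mean of its members."""
    for _ in range(iters):
        groups = {c: [] for c in range(len(centers))}
        for v in values:
            nearest = min(range(len(centers)), key=lambda c: abs(v - centers[c]))
            groups[nearest].append(v)
        # Empty clusters keep their old center.
        centers = [sum(g) / len(g) if g else centers[c]
                   for c, g in sorted(groups.items())]
    return centers

data = [10, 11, 12, 30, 31, 52]
print(kmeans_1d(data, [10.0, 30.0, 50.0]))  # [11.0, 30.5, 52.0]
```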
End of explanation
"""
clusters = { n: [] for n in range(NUM_CLUSTERS) }
for i in range(len(total_histogram)):
clusters[clusters_for_th[i]].append(total_histogram[i])
clusters
"""
Explanation: Now we can get the centroids from these clusters. I wish I understood what I was doing.
End of explanation
"""
cluster_means = [np.mean(v) for _, v in clusters.iteritems() ]
cluster_means
"""
Explanation: Now we can assume that the means of these clusters are the actual boundaries between cortical layers.
End of explanation
"""
from PIL import Image
import urllib, cStringIO
file = cStringIO.StringIO(urllib.urlopen("http://openconnecto.me/ocp/ca/bock11/image/xy/7/350,850/50,936/2917/").read())
img = Image.open(file)
img_array = np.array(img)
cluster_means = np.array(cluster_means)
cluster_means_mapped = (cluster_means / vol.shape[1]) * img_array.shape[0]
for i in cluster_means_mapped:
img_array[[i, i+10, i-10], :] -= 50
Image.fromarray(img_array)
"""
Explanation: 4. Verifying our statistical boundaries against an image
We know that our data are taken from bock11 on ndstore, so... Now let's overlay those averages over our image.
End of explanation
"""
yflat = np.amax(vol, axis=1)
frame_y = pd.DataFrame(yflat)
sns.heatmap(frame_y)
processkmeans = KMeans()
print yflat
processkmeans.fit_predict(yflat)
"""
Explanation: 5. Finding Descending Processes in Cortex
We should be able to find collections of high synaptic density in regions where processes exist. We can use a simple variant of a k-means algorithm to find these clusters.
To do this, we'll first flatten along the y-plane to find descending processes.
End of explanation
"""
|
kubeflow/examples | financial_time_series/Financial Time Series with Finance Data.ipynb | apache-2.0 | !pip3 install google-cloud-bigquery==1.6.0 pandas==0.23.4 matplotlib==3.0.3 scipy==1.2.1 --user
"""
Explanation: TensorFlow Machine Learning with Financial Data on Google Cloud Platform
This solution presents an accessible, non-trivial example of machine learning with financial time series on Google Cloud Platform (GCP).
Time series are an essential part of financial analysis. Today, you have more data at your disposal than ever, more sources of data, and more frequent delivery of that data. New sources include new exchanges, social media outlets, and news sources. The frequency of delivery has increased from tens of messages per second 10 years ago, to hundreds of thousands of messages per second today. Naturally, more and different analysis techniques are being brought to bear as a result. Most of the modern analysis techniques aren't different in the sense of being new, and they all have their basis in statistics, but their applicability has closely followed the amount of computing power available. The growth in available computing power is faster than the growth in time series volumes, so it is possible to analyze time series today at scale in ways that weren't previously practical.
In particular, machine learning techniques, especially deep learning, hold great promise for time series analysis. As time series become more dense and many time series overlap, machine learning offers a way to separate the signal from the noise, even when the noise can seem overwhelming. Deep learning holds great potential because it is often the best fit for the seemingly random nature of financial time series.
In this solution, you will:
Obtain data for a number of financial markets.
Munge that data into a usable format and perform exploratory data analysis in order to explore and validate a premise.
Use TensorFlow to build, train, and evaluate a number of models for predicting what will happen in financial markets.
Important: This solution is intended to illustrate the capabilities of GCP and TensorFlow for fast, interactive, iterative data analysis and machine learning. It does not offer any advice on financial markets or trading strategies. The scenario presented in the tutorial is an example. Don't use this code to make investment decisions.
The premise
The premise is straightforward: financial markets are increasingly global, and if you follow the sun from Asia to Europe to the US and so on, you can use information from an earlier time zone to your advantage in a later time zone.
The following table shows a number of stock market indices from around the globe, their closing times in Eastern Standard Time (EST), and the delay in hours between the close of that index and the close of the S&P 500 in New York. This makes EST the base time zone. For example, Australian markets close for the day 15 hours before US markets close. If the close of the All Ords in Australia is a useful predictor of the close of the S&P 500 for a given day, we can use that information to guide our trading activity. Continuing our example of the Australian All Ords, if this index closes up and we think that means the S&P 500 will close up as well, then we should either buy stocks that compose the S&P 500 or, more likely, an ETF that tracks the S&P 500. In reality, the situation is more complex because there are commissions and tax to account for. But as a first approximation, we'll assume an index closing up indicates a gain, and vice-versa.
|Index|Country|Closing Time (EST)|Hours Before S&P Close|
|---|---|---|---|
|All Ords|Australia|0100|15|
|Nikkei 225|Japan|0200|14|
|Hang Seng|Hong Kong|0400|12|
|DAX|Germany|1130|4.5|
|FTSE 100|UK|1130|4.5|
|NYSE Composite|US|1600|0|
|Dow Jones Industrial Average|US|1600|0|
|S&P 500|US|1600|0|
Set up
First, install and import necessary libraries.
End of explanation
"""
import tensorflow as tf
# import google.datalab.bigquery as bq
from google.cloud import bigquery
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import autocorrelation_plot
from pandas.plotting import scatter_matrix
"""
Explanation: You might need to restart the kernel if the import bigquery fails.
End of explanation
"""
# Instantiates a client
bigquery_client = bigquery.Client()
tickers = ['snp', 'nyse', 'djia', 'nikkei', 'hangseng', 'ftse', 'dax', 'aord']
bq_query = {}
for ticker in tickers:
bq_query[ticker] = bigquery_client.query('SELECT Date, Close from `bingo-ml-1.market_data.{}`'.format(ticker))
results = {}
for ticker in tickers:
results[ticker] = bq_query[ticker].result().to_dataframe().set_index('Date')
"""
Explanation: Get the data
The data covers roughly the last 5 years, using the date range from 1/1/2010 to 10/1/2015. Data comes from the S&P 500 (S&P), NYSE, Dow Jones Industrial Average (DJIA), Nikkei 225 (Nikkei), Hang Seng, FTSE 100 (FTSE), DAX, and All Ordinaries (AORD) indices.
This data is publicly available and is stored in BigQuery for convenience.
End of explanation
"""
closing_data = pd.DataFrame()
for ticker in tickers:
closing_data['{}_close'.format(ticker)] = results[ticker]['Close']
closing_data.info()
# Pandas includes a very convenient function for filling gaps in the data.
closing_data.sort_index(inplace=True)
closing_data = closing_data.fillna(method='ffill')
closing_data.info()
"""
Explanation: Munge the data
In the first instance, munging the data is straightforward. The closing prices are of interest, so for convenience extract the closing prices for each of the indices into a single Pandas DataFrame, called closing_data. Because not all of the indices have the same number of values, mainly due to bank holidays, we'll forward-fill the gaps. This means that, if a value isn't available for day N, we fill it with the most recent available value from an earlier day, such as N-1 or N-2.
End of explanation
"""
closing_data.describe()
"""
Explanation: At this point, you've sourced five years of time series for eight financial indices, combined the pertinent data into a single data structure, and harmonized the data to have the same number of entries, by using only the 20 lines of code in this notebook.
Exploratory data analysis
Exploratory Data Analysis (EDA) is foundational to working with machine learning, and any other sort of analysis. EDA means getting to know your data, getting your fingers dirty with your data, feeling it and seeing it. The end result is you know your data very well, so when you build models you build them based on an actual, practical, physical understanding of the data, not assumptions or vaguely held notions. You can still make assumptions of course, but EDA means you will understand your assumptions and why you're making those assumptions.
First, take a look at the data.
End of explanation
"""
# N.B. A super-useful trick-ette is to assign the return value of plot to _
# so that you don't get text printed before the plot itself.
_ = pd.concat([closing_data['{}_close'.format(ticker)] for ticker in tickers], axis=1).plot(figsize=(20, 15))
"""
Explanation: You can see that the various indices operate on scales differing by orders of magnitude. It's best to scale the data so that, for example, operations involving multiple indices aren't unduly influenced by a single, massive index.
Plot the data.
End of explanation
"""
for ticker in tickers:
closing_data['{}_close_scaled'.format(ticker)] = closing_data['{}_close'.format(ticker)]/max(closing_data['{}_close'.format(ticker)])
_ = pd.concat([closing_data['{}_close_scaled'.format(ticker)] for ticker in tickers], axis=1).plot(figsize=(20, 15))
"""
Explanation: As expected, the structure isn't uniformly visible for the indices. Divide each value in an individual index by the maximum value for that index, and then replot. The maximum value of all indices will be 1.
End of explanation
"""
fig = plt.figure()
fig.set_figwidth(20)
fig.set_figheight(15)
for ticker in tickers:
_ = autocorrelation_plot(closing_data['{}_close'.format(ticker)], label='{}_close'.format(ticker))
_ = plt.legend(loc='upper right')
"""
Explanation: You can see that, over the five-year period, these indices are correlated. Notice that sudden drops from economic events happened globally to all indices, and they otherwise exhibited general rises. This is a good start, though not the complete story. Next, plot autocorrelations for each of the indices. The autocorrelations determine correlations between current values of the index and lagged values of the same index. The goal is to determine whether the lagged values are reliable indicators of the current values. If they are, then we've identified a correlation.
End of explanation
"""
_ = scatter_matrix(pd.concat([closing_data['{}_close_scaled'.format(ticker)] for ticker in tickers], axis=1), figsize=(20, 20), diagonal='kde')
"""
Explanation: You should see strong autocorrelations, positive for around 500 lagged days, then going negative. This tells us something we should intuitively know: if an index is rising it tends to carry on rising, and vice-versa. It should be encouraging that what we see here conforms to what we know about financial markets.
Next, look at a scatter matrix, showing everything plotted against everything, to see how indices are correlated with each other.
End of explanation
"""
log_return_data = pd.DataFrame()
for ticker in tickers:
log_return_data['{}_log_return'.format(ticker)] = np.log(closing_data['{}_close'.format(ticker)]/closing_data['{}_close'.format(ticker)].shift())
log_return_data.describe()
"""
Explanation: You can see significant correlations across the board, further evidence that the premise is workable and one market can be influenced by another.
As an aside, this process of gradual, incremental experimentation and progress is the best approach and what you probably do normally. With a little patience, we'll get to some deeper understanding.
The actual value of an index is not that useful for modeling. It can be a useful indicator, but to get to the heart of the matter, we need a time series that is stationary in the mean, thus having no trend in the data. There are various ways of doing that, but they all essentially look at the difference between values, rather than the absolute value. In the case of market data, the usual practice is to work with logged returns, calculated as the natural logarithm of the index today divided by the index yesterday:
ln(Vt/Vt-1)
There are more reasons why the log return is preferable to the percent return (for example the log is normally distributed and additive), but they don't matter much for this work. What matters is to get to a stationary time series.
Calculate and plot the log returns in a new DataFrame.
End of explanation
"""
_ = pd.concat([log_return_data['{}_log_return'.format(ticker)] for ticker in tickers], axis=1).plot(figsize=(20, 15))
"""
Explanation: Looking at the log returns, you should see that the mean, min, max are all similar. You could go further and center the series on zero, scale them, and normalize the standard deviation, but there's no need to do that at this point. Let's move forward with plotting the data, and iterate if necessary.
End of explanation
"""
fig = plt.figure()
fig.set_figwidth(20)
fig.set_figheight(15)
for ticker in tickers:
_ = autocorrelation_plot(log_return_data['{}_log_return'.format(ticker)], label='{}_log_return'.format(ticker))
_ = plt.legend(loc='upper right')
"""
Explanation: You can see from the plot that the log returns of our indices are similarly scaled and centered, with no visible trend in the data. It's looking good, so now look at autocorrelations.
End of explanation
"""
_ = scatter_matrix(log_return_data, figsize=(20, 20), diagonal='kde')
"""
Explanation: No autocorrelations are visible in the plot, which is what we're looking for. Individual financial markets are Markov processes: knowledge of history doesn't allow you to predict the future.
You now have time series for the indices, stationary in the mean, similarly centered and scaled. That's great! Now start to look for signals to try to predict the close of the S&P 500.
Look at a scatterplot to see how the log return indices correlate with each other.
End of explanation
"""
tmp = pd.DataFrame()
tmp['snp_0'] = log_return_data['snp_log_return']
tmp['nyse_1'] = log_return_data['nyse_log_return'].shift()
tmp['djia_1'] = log_return_data['djia_log_return'].shift()
tmp['ftse_0'] = log_return_data['ftse_log_return']
tmp['dax_0'] = log_return_data['dax_log_return']
tmp['hangseng_0'] = log_return_data['hangseng_log_return']
tmp['nikkei_0'] = log_return_data['nikkei_log_return']
tmp['aord_0'] = log_return_data['aord_log_return']
tmp.corr().iloc[:,0]
"""
Explanation: The story with the previous scatter plot for log returns is more subtle and more interesting. The US indices are strongly correlated, as expected. The other indices, less so, which is also expected. But there is structure and signal there. Now let's move forward and start to quantify it so we can start to choose features for our model.
First look at how the log returns for the closing value of the S&P 500 correlate with the closing values of other indices available on the same day. This essentially means to assume the indices that close before the S&P 500 (non-US indices) are available and the others (US indices) are not.
End of explanation
"""
tmp = pd.DataFrame()
tmp['snp_0'] = log_return_data['snp_log_return']
tmp['nyse_2'] = log_return_data['nyse_log_return'].shift(2)
tmp['djia_2'] = log_return_data['djia_log_return'].shift(2)
tmp['ftse_1'] = log_return_data['ftse_log_return'].shift()
tmp['dax_1'] = log_return_data['dax_log_return'].shift()
tmp['hangseng_1'] = log_return_data['hangseng_log_return'].shift()
tmp['nikkei_1'] = log_return_data['nikkei_log_return'].shift()
tmp['aord_1'] = log_return_data['aord_log_return'].shift()
tmp.corr().iloc[:,0]
"""
Explanation: Here, we are directly working with the premise. We're correlating the close of the S&P 500 with signals available before the close of the S&P 500. And you can see that the S&P 500 close is correlated with European indices at around 0.65 for the FTSE and DAX, which is a strong correlation, and Asian/Oceanian indices at around 0.15-0.22, which is a significant correlation, but not with US indices. We have available signals from other indices and regions for our model.
Now look at how the log returns for the S&P closing values correlate with index values from the previous day, to see if the previous day's closing is predictive. Following from the premise that financial markets are Markov processes, there should be little or no value in historical values.
End of explanation
"""
tmp = pd.DataFrame()
tmp['snp_0'] = log_return_data['snp_log_return']
tmp['nyse_3'] = log_return_data['nyse_log_return'].shift(3)
tmp['djia_3'] = log_return_data['djia_log_return'].shift(3)
tmp['ftse_2'] = log_return_data['ftse_log_return'].shift(2)
tmp['dax_2'] = log_return_data['dax_log_return'].shift(2)
tmp['hangseng_2'] = log_return_data['hangseng_log_return'].shift(2)
tmp['nikkei_2'] = log_return_data['nikkei_log_return'].shift(2)
tmp['aord_2'] = log_return_data['aord_log_return'].shift(2)
tmp.corr().iloc[:,0]
"""
Explanation: You should see little to no correlation in this data, meaning that yesterday's values are no practical help in predicting today's close. Let's go one step further and look at correlations between today and the day before yesterday.
End of explanation
"""
log_return_data['snp_log_return_positive'] = 0
log_return_data.loc[log_return_data['snp_log_return'] >= 0, 'snp_log_return_positive'] = 1
log_return_data['snp_log_return_negative'] = 0
log_return_data.loc[log_return_data['snp_log_return'] < 0, 'snp_log_return_negative'] = 1
training_test_data = pd.DataFrame(
columns=[
'snp_log_return_positive', 'snp_log_return_negative',
'snp_log_return_1', 'snp_log_return_2', 'snp_log_return_3',
'nyse_log_return_1', 'nyse_log_return_2', 'nyse_log_return_3',
'djia_log_return_1', 'djia_log_return_2', 'djia_log_return_3',
'nikkei_log_return_0', 'nikkei_log_return_1', 'nikkei_log_return_2',
'hangseng_log_return_0', 'hangseng_log_return_1', 'hangseng_log_return_2',
'ftse_log_return_0', 'ftse_log_return_1', 'ftse_log_return_2',
'dax_log_return_0', 'dax_log_return_1', 'dax_log_return_2',
'aord_log_return_0', 'aord_log_return_1', 'aord_log_return_2'])
for i in range(7, len(log_return_data)):
    snp_log_return_positive = log_return_data['snp_log_return_positive'].iloc[i]
    snp_log_return_negative = log_return_data['snp_log_return_negative'].iloc[i]
    snp_log_return_1 = log_return_data['snp_log_return'].iloc[i-1]
    snp_log_return_2 = log_return_data['snp_log_return'].iloc[i-2]
    snp_log_return_3 = log_return_data['snp_log_return'].iloc[i-3]
    nyse_log_return_1 = log_return_data['nyse_log_return'].iloc[i-1]
    nyse_log_return_2 = log_return_data['nyse_log_return'].iloc[i-2]
    nyse_log_return_3 = log_return_data['nyse_log_return'].iloc[i-3]
    djia_log_return_1 = log_return_data['djia_log_return'].iloc[i-1]
    djia_log_return_2 = log_return_data['djia_log_return'].iloc[i-2]
    djia_log_return_3 = log_return_data['djia_log_return'].iloc[i-3]
    nikkei_log_return_0 = log_return_data['nikkei_log_return'].iloc[i]
    nikkei_log_return_1 = log_return_data['nikkei_log_return'].iloc[i-1]
    nikkei_log_return_2 = log_return_data['nikkei_log_return'].iloc[i-2]
    hangseng_log_return_0 = log_return_data['hangseng_log_return'].iloc[i]
    hangseng_log_return_1 = log_return_data['hangseng_log_return'].iloc[i-1]
    hangseng_log_return_2 = log_return_data['hangseng_log_return'].iloc[i-2]
    ftse_log_return_0 = log_return_data['ftse_log_return'].iloc[i]
    ftse_log_return_1 = log_return_data['ftse_log_return'].iloc[i-1]
    ftse_log_return_2 = log_return_data['ftse_log_return'].iloc[i-2]
    dax_log_return_0 = log_return_data['dax_log_return'].iloc[i]
    dax_log_return_1 = log_return_data['dax_log_return'].iloc[i-1]
    dax_log_return_2 = log_return_data['dax_log_return'].iloc[i-2]
    aord_log_return_0 = log_return_data['aord_log_return'].iloc[i]
    aord_log_return_1 = log_return_data['aord_log_return'].iloc[i-1]
    aord_log_return_2 = log_return_data['aord_log_return'].iloc[i-2]
    training_test_data = training_test_data.append(
        {'snp_log_return_positive':snp_log_return_positive,
         'snp_log_return_negative':snp_log_return_negative,
         'snp_log_return_1':snp_log_return_1,
         'snp_log_return_2':snp_log_return_2,
         'snp_log_return_3':snp_log_return_3,
         'nyse_log_return_1':nyse_log_return_1,
         'nyse_log_return_2':nyse_log_return_2,
         'nyse_log_return_3':nyse_log_return_3,
         'djia_log_return_1':djia_log_return_1,
         'djia_log_return_2':djia_log_return_2,
         'djia_log_return_3':djia_log_return_3,
         'nikkei_log_return_0':nikkei_log_return_0,
         'nikkei_log_return_1':nikkei_log_return_1,
         'nikkei_log_return_2':nikkei_log_return_2,
         'hangseng_log_return_0':hangseng_log_return_0,
         'hangseng_log_return_1':hangseng_log_return_1,
         'hangseng_log_return_2':hangseng_log_return_2,
         'ftse_log_return_0':ftse_log_return_0,
         'ftse_log_return_1':ftse_log_return_1,
         'ftse_log_return_2':ftse_log_return_2,
         'dax_log_return_0':dax_log_return_0,
         'dax_log_return_1':dax_log_return_1,
         'dax_log_return_2':dax_log_return_2,
         'aord_log_return_0':aord_log_return_0,
         'aord_log_return_1':aord_log_return_1,
         'aord_log_return_2':aord_log_return_2},
        ignore_index=True)
training_test_data.describe()
log_return_data['snp_log_return_positive'].value_counts()
"""
Explanation: Again, there are little to no correlations.
Summing up the EDA
At this point, you've done a good enough job of exploratory data analysis. You've visualized our data and come to know it better. You've transformed it into a form that is useful for modelling, log returns, and looked at how indices relate to each other. You've seen that indices from Europe strongly correlate with US indices, and that indices from Asia/Oceania significantly correlate with those same indices for a given day. You've also seen that if you look at historical values, they do not correlate with today's values. Summing up:
European indices from the same day were a strong predictor for the S&P 500 close.
Asian/Oceanian indices from the same day were a significant predictor for the S&P 500 close.
Indices from previous days were not good predictors for the S&P close.
What should we think so far?
JupyterHub is working great. With just a few lines of code, you were able to munge the data, visualize the changes, and make decisions. You could easily analyze and iterate. This is a common feature of IPython, but the advantage here is that JupyterHub is a managed service that you can simply click and use, so you can focus on your analysis.
Feature selection
At this point, we can see a model:
We'll predict whether the S&P 500 close today will be higher or lower than yesterday.
We'll use all our data sources: NYSE, DJIA, Nikkei, Hang Seng, FTSE, DAX, AORD.
We'll use three sets of data points—T, T-1, and T-2—where we take the data available on day T or T-n, meaning today's non-US data and yesterday's US data.
Predicting whether the log return of the S&P 500 is positive or negative is a classification problem. That is, we want to choose one option from a finite set of options, in this case positive or negative. This is the base case of classification where we have only two values to choose from, known as binary classification, or logistic regression.
This uses the findings from of our exploratory data analysis, namely that log returns from other regions on a given day are strongly correlated with the log return of the S&P 500, and there are stronger correlations from those regions that are geographically closer with respect to time zones. However, our models also use data outside of those findings. For example, we use data from the past few days in addition to today. There are two reasons for using this additional data. First, we're adding additional features to our model for the purpose of this solution to see how things perform, which is not a good reason to add features outside of a tutorial setting. Second, machine learning models are very good at finding weak signals from data.
In machine learning, as in most things, there are subtle tradeoffs happening, but in general good data is better than good algorithms, which are better than good frameworks. You need all three pillars but in that order of importance: data, algorithms, frameworks.
TensorFlow
TensorFlow is an open source software library, initiated by Google, for numerical computation using data flow graphs. TensorFlow is based on Google's machine learning expertise and is the next generation framework used internally at Google for tasks such as translation and image recognition. It's a wonderful framework for machine learning because it's expressive, efficient, and easy to use.
Feature engineering for TensorFlow
From a training and testing perspective, time series data is easy. Training data should come from events that happened before test data events, and be contiguous in time. Otherwise, your model would be trained on events from "the future", at least as compared to the test data. It would then likely perform badly in practice, because you can’t really have access to data from the future. That means random sampling or cross validation don't apply to time series data. Decide on a training-versus-testing split, and divide your data into training and test datasets.
In this case, you'll create the features together with two additional columns:
snp_log_return_positive, which is 1 if the log return of the S&P 500 close is positive, and 0 otherwise.
snp_log_return_negative, which is 1 if the log return of the S&P 500 close is negative, and 0 otherwise.
Now, logically you could encode this information in one column, named snp_log_return, which is 1 if positive and 0 if negative, but that's not the way TensorFlow works for classification models. TensorFlow uses the general definition of classification, that there can be many different potential values to choose from, and a form of encoding for these options called one-hot encoding. One-hot encoding means that each choice is an entry in an array, and the actual value has an entry of 1 with all other values being 0. This encoding (i.e. a single 1 in an array of 0s) is for the input of the model, where you categorically know which value is correct. A variation of this is used for the output, where each entry in the array contains the probability of the answer being that choice. You can then choose the most likely value by choosing the highest probability, together with having a measure of the confidence you can place in that answer relative to other answers.
We'll use 80% of our data for training and 20% for testing.
End of explanation
"""
predictors_tf = training_test_data[training_test_data.columns[2:]]
classes_tf = training_test_data[training_test_data.columns[:2]]
training_set_size = int(len(training_test_data) * 0.8)
test_set_size = len(training_test_data) - training_set_size
training_predictors_tf = predictors_tf[:training_set_size]
training_classes_tf = classes_tf[:training_set_size]
test_predictors_tf = predictors_tf[training_set_size:]
test_classes_tf = classes_tf[training_set_size:]
training_predictors_tf.describe()
test_predictors_tf.describe()
"""
Explanation: The odds are more in favor of the S&P 500 ending positive than negative (55% positive versus 45% negative).
Now, create the training and test data.
End of explanation
"""
def tf_confusion_metrics(model, actual_classes, session, feed_dict):
predictions = tf.argmax(model, 1)
actuals = tf.argmax(actual_classes, 1)
ones_like_actuals = tf.ones_like(actuals)
zeros_like_actuals = tf.zeros_like(actuals)
ones_like_predictions = tf.ones_like(predictions)
zeros_like_predictions = tf.zeros_like(predictions)
tp_op = tf.reduce_sum(
tf.cast(
tf.logical_and(
tf.equal(actuals, ones_like_actuals),
tf.equal(predictions, ones_like_predictions)
),
"float"
)
)
tn_op = tf.reduce_sum(
tf.cast(
tf.logical_and(
tf.equal(actuals, zeros_like_actuals),
tf.equal(predictions, zeros_like_predictions)
),
"float"
)
)
fp_op = tf.reduce_sum(
tf.cast(
tf.logical_and(
tf.equal(actuals, zeros_like_actuals),
tf.equal(predictions, ones_like_predictions)
),
"float"
)
)
fn_op = tf.reduce_sum(
tf.cast(
tf.logical_and(
tf.equal(actuals, ones_like_actuals),
tf.equal(predictions, zeros_like_predictions)
),
"float"
)
)
tp, tn, fp, fn = \
session.run(
[tp_op, tn_op, fp_op, fn_op],
feed_dict
)
tpfn = float(tp) + float(fn)
tpr = 0 if tpfn == 0 else float(tp)/tpfn
  tnfp = float(tn) + float(fp)
  fpr = 0 if tnfp == 0 else float(fp)/tnfp
total = float(tp) + float(fp) + float(fn) + float(tn)
accuracy = 0 if total == 0 else (float(tp) + float(tn))/total
recall = tpr
tpfp = float(tp) + float(fp)
precision = 0 if tpfp == 0 else float(tp)/tpfp
f1_score = 0 if recall == 0 else (2 * (precision * recall)) / (precision + recall)
print('Precision = ', precision)
print('Recall = ', recall)
print('F1 Score = ', f1_score)
print('Accuracy = ', accuracy)
"""
Explanation: Define some metrics here to evaluate the models.
Precision - The ability of the classifier not to label as positive a sample that is negative.
Recall - The ability of the classifier to find all the positive samples.
F1 Score - A weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0.
Accuracy - The percentage correctly predicted in the test data.
End of explanation
"""
sess = tf.Session()
# Define variables for the number of predictors and number of classes to remove magic numbers from our code.
num_predictors = len(training_predictors_tf.columns) # 24 in the default case
num_classes = len(training_classes_tf.columns) # 2 in the default case
# Define placeholders for the data we feed into the process - feature data and actual classes.
feature_data = tf.placeholder("float", [None, num_predictors])
actual_classes = tf.placeholder("float", [None, num_classes])
# Define a matrix of weights and initialize it with some small random values.
weights = tf.Variable(tf.truncated_normal([num_predictors, num_classes], stddev=0.0001))
biases = tf.Variable(tf.ones([num_classes]))
# Define our model...
# Here we take a softmax regression of the product of our feature data and weights.
model = tf.nn.softmax(tf.matmul(feature_data, weights) + biases)
# Define a cost function (we're using the cross entropy).
cost = -tf.reduce_sum(actual_classes*tf.log(model))
# Define a training step...
# Here we use the Adam optimizer with a learning rate of 0.0001 to minimize the cost function we just defined.
training_step = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)
init = tf.global_variables_initializer()
sess.run(init)
"""
Explanation: Binary classification with Tensorflow
Now, get some tensors flowing. The model is binary classification expressed in TensorFlow.
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(model, 1), tf.argmax(actual_classes, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
for i in range(1, 30001):
sess.run(
training_step,
feed_dict={
feature_data: training_predictors_tf.values,
actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
}
)
if i%5000 == 0:
print(i, sess.run(
accuracy,
feed_dict={
feature_data: training_predictors_tf.values,
actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
}
))
"""
Explanation: We'll train our model in the following snippet. The approach of TensorFlow to executing graph operations allows fine-grained control over the process. Any operation you provide to the session as part of the run operation will be executed and the results returned. You can provide a list of multiple operations.
You'll train the model over 30,000 iterations using the full dataset each time. Every five-thousandth iteration, you'll assess the accuracy of the model on the training data to gauge progress.
End of explanation
"""
feed_dict= {
feature_data: test_predictors_tf.values,
actual_classes: test_classes_tf.values.reshape(len(test_classes_tf.values), 2)
}
tf_confusion_metrics(model, actual_classes, sess, feed_dict)
"""
Explanation: An accuracy of 65% on the training data is fine, certainly better than random.
End of explanation
"""
sess1 = tf.Session()
num_predictors = len(training_predictors_tf.columns)
num_classes = len(training_classes_tf.columns)
feature_data = tf.placeholder("float", [None, num_predictors])
actual_classes = tf.placeholder("float", [None, 2])
weights1 = tf.Variable(tf.truncated_normal([24, 50], stddev=0.0001))
biases1 = tf.Variable(tf.ones([50]))
weights2 = tf.Variable(tf.truncated_normal([50, 25], stddev=0.0001))
biases2 = tf.Variable(tf.ones([25]))
weights3 = tf.Variable(tf.truncated_normal([25, 2], stddev=0.0001))
biases3 = tf.Variable(tf.ones([2]))
hidden_layer_1 = tf.nn.relu(tf.matmul(feature_data, weights1) + biases1)
hidden_layer_2 = tf.nn.relu(tf.matmul(hidden_layer_1, weights2) + biases2)
model = tf.nn.softmax(tf.matmul(hidden_layer_2, weights3) + biases3)
cost = -tf.reduce_sum(actual_classes*tf.log(model))
train_op1 = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)
init = tf.global_variables_initializer()
sess1.run(init)
"""
Explanation: The metrics for this most simple of TensorFlow models are unimpressive, an F1 Score of 0.36 is not going to blow any light bulbs in the room. That's partly because of its simplicity and partly because it hasn't been tuned; selection of hyperparameters is very important in machine learning modelling.
Feed-forward neural network with two hidden layers
You'll now build a proper feed-forward neural net with two hidden layers.
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(model, 1), tf.argmax(actual_classes, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
for i in range(1, 30001):
sess1.run(
train_op1,
feed_dict={
feature_data: training_predictors_tf.values,
actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
}
)
if i%5000 == 0:
print(i, sess1.run(
accuracy,
feed_dict={
feature_data: training_predictors_tf.values,
actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
}
))
"""
Explanation: Again, you'll train the model over 30,000 iterations using the full dataset each time. Every five-thousandth iteration, you'll assess the accuracy of the model on the training data to gauge progress.
End of explanation
"""
feed_dict= {
feature_data: test_predictors_tf.values,
actual_classes: test_classes_tf.values.reshape(len(test_classes_tf.values), 2)
}
tf_confusion_metrics(model, actual_classes, sess1, feed_dict)
"""
Explanation: A significant improvement in accuracy with the training data shows that the hidden layers are adding additional capacity for learning to the model.
Looking at precision, recall, and accuracy, you can see a measurable improvement in performance, but certainly not a step function. This indicates that we're likely reaching the limits of this relatively simple feature set.
End of explanation
"""
# run inference on first 5 samples in the test set
sess1.run(
[model, actual_classes],
feed_dict={
feature_data: test_predictors_tf.values[:5],
actual_classes: test_classes_tf.values[:5].reshape(len(test_classes_tf.values[:5]), 2)
}
)
"""
Explanation: Inference
End of explanation
"""
# investigate the close values of the corresponding days
closing_data['snp_close'][training_set_size-1+7:training_set_size+7+5]
index = closing_data.index.get_loc("2014-08-12")
index_test_set = index - 7 - training_set_size
test_predictors_tf.values[index_test_set].shape
sess1.run(
[tf.argmax(model, 1), tf.argmax(actual_classes,1)],
feed_dict={
feature_data: np.expand_dims(test_predictors_tf.values[index_test_set], axis=0),
actual_classes: np.expand_dims(test_classes_tf.values[index_test_set], axis=0)
}
)
"""
Explanation: The model seems to be right for all five days (if the threshold probability is set at 0.5)
End of explanation
"""
|
alfkjartan/control-computarizado | introduction/notebooks/Continuous-PID.ipynb | mit | # Uncomment and run the commands in this cell if a packages is missing
!pip install slycot
!pip install control
%matplotlib widget
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
import control.matlab as cm
"""
Explanation: Tuning the parameters of a PID controller
In this notebook you can test your intuition for how to adjust the parameters of a PID controller.
Start by watching this excellent video.
Blockdiagram
We consider the model used in the video: Velocity control of a car. The plant model describes how the velocity of the car responds to the position of the accelerator (the gas pedal). In addition to the signal from the accelerator, there are also unknown forces such as wind resistance and gravity when the car is going uphill or downhill. These forces are represented by a disturbance signal entering at the input to the system.
<img src="cruise-control-pid-block.svg" alt="Block diagram of cruise control system" width="900">
The PID controller
The PID controller is on the so-called parallel form
\begin{equation}
F(s) = K_p + \frac{K_i}{s} + K_d s.
\end{equation}
The closed-loop system from the reference signal to the output
The model is linear and hence the principle of superposition holds. This means that we can look at the response to the reference signal and the response to the disturbance signal separately. Setting $$d=0,$$ we get a closed-loop response given by
\begin{equation}
Y(s) = \frac{\frac{1}{s(s+1)}F(s)}{1 + \frac{1}{s(s+1)}F(s)}R(s).
\end{equation}
The closed-loop system from disturbance to the output
Setting $$r=0,$$ the response to the disturbance is given by
\begin{equation}
Y(s) = \frac{\frac{1}{s(s + 1)}}{1 + \frac{1}{s(s+1)}F(s)}D(s)
\end{equation}
The full closed-loop system
We can find the response to a combination of input signals $r$ and $d$ by summation:
\begin{equation}
Y(s) = \frac{\frac{1}{s(s+1)}F(s)}{1 + \frac{1}{s(s+1)}F(s)}R(s) + \frac{\frac{1}{s(s + 1)}}{1 + \frac{1}{s(s+1)}F(s)}D(s)
\end{equation}
End of explanation
"""
G1 = cm.tf([1.], [1, 1.])
Gint = cm.tf([1], [1, 0])
G = Gint*G1
print(G)
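As a quick sanity check of the closed-loop algebra above (a sketch with hypothetical sample gains, not part of the original exercise): with $F(s) = K_p + K_i/s + K_d s$ and $G(s) = 1/(s(s+1))$, the characteristic equation $1 + G(s)F(s) = 0$ reduces to $s^3 + (1+K_d)s^2 + K_p s + K_i = 0$, whose roots are the closed-loop poles.

```python
import numpy as np

# Closed-loop characteristic polynomial derived from 1 + G(s)F(s) = 0:
#   s^2 (s + 1) + Kd s^2 + Kp s + Ki = s^3 + (1 + Kd) s^2 + Kp s + Ki
# The gains below are hypothetical; stability requires every root to lie
# strictly in the left half-plane.
Kp_demo, Ki_demo, Kd_demo = 2.0, 1.0, 0.5
poles = np.roots([1.0, 1.0 + Kd_demo, Kp_demo, Ki_demo])
print(poles)
print(all(p.real < 0 for p in poles))  # True for these gains
```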
N = 600
t_end = 30
t = np.linspace(0, t_end, N)
# The reference signal
r = np.zeros(N)
r[int(N/t_end):] = 1.0
# The disturbance signal
d = np.zeros(N)
d[int(N/t_end)*10:] = -1.0
# set up plot
fig, ax = plt.subplots(figsize=(8, 3))
#ax.set_ylim([-.1, 4])
ax.grid(True)
def sim_PID(G, Kp, Ki, Kd, r,d,t):
"""
Returns the simulated response of the closed-loop system with a
PID controller.
"""
F = cm.tf([Kd, Kp, Ki], [1.0, 0])
Gr = cm.feedback(G*F,1)
Gd = cm.feedback(G,F)
yr = cm.lsim(Gr, r, t)
yd = cm.lsim(Gd, d, t)
return (yr, yd)
@widgets.interact(Kp=(0, 10, .2), Ki=(0, 8, .2), Kd=(0, 8, .2))
def update(Kp = 1.0, Ki=0, Kd=0):
"""Remove old lines from plot and plot new one"""
[l.remove() for l in ax.lines]
yr, yd = sim_PID(G, Kp, Ki, Kd, r,d,t)
ax.plot(yr[1], yr[0]+yd[0], color='C0')
#ax.plot(yd[1], yd[0], color='C1')
"""
Explanation: Step response
Below you can manipulate the $K_p$, $K_i$ and $K_d$ parameters of the PID controller and see the time response of the system. At time $t_1=1$ there is a unit step in the reference signal, and at time $t_2=10$ there is a negative step in the disturbance signal. Note that since we scaled time using the time constant of the system, time is measured not in seconds but in units of the time constant. So to get $t_2$ in seconds you have to multiply by the time constant
\begin{equation}
t_2 = 10 T = 10 \frac{1}{\omega}
\end{equation}
where $\omega$ has unit $1/s$ or $Hz$.
End of explanation
"""
# The reference signal
dramp = np.linspace(0, t_end, N)
rr = np.zeros(N)
# set up plot
fig, ax = plt.subplots(figsize=(8, 3))
#ax.set_ylim([-.1, 4])
ax.grid(True)
@widgets.interact(Kp=(0, 10, .2), Ki=(0, 8, .2), Kd=(0, 8, .2))
def update(Kp = 1.0, Ki=1, Kd=0):
"""Remove old lines from plot and plot new one"""
[l.remove() for l in ax.lines]
yr, yd = sim_PID(G, Kp, Ki, Kd, rr,dramp,t)
ax.plot(yr[1], yr[0]+yd[0], color='C0')
#ax.plot(yd[1], yd[0], color='C1')
"""
Explanation: Exercise
Try to find PID parameters that give
1. about 10% overshoot,
2. settling time of about 4,
3. negligible steady-state error at $t=14$ (4 time units after the onset of the constant disturbance)
Ramp response
A unit ramp disturbance starts at time $t_1=0$.
End of explanation
"""
|
maubarsom/ORFan-proteins | phage_assembly/5_annotation/asm_v1.2/orf_160621/.ipynb_checkpoints/4_select_reliable_orfs-checkpoint.ipynb | mit | #Load blast hits
blastp_hits = pd.read_csv("2_blastp_hits.csv")
blastp_hits.head()
#Filter out Metahit 2010 hits, keep only Metahit 2014
blastp_hits = blastp_hits[blastp_hits.db != "metahit_pep"]
"""
Explanation: 1. Load blast hits
End of explanation
"""
#Assumes the Fasta file comes with the header format of EMBOSS getorf
fh = open("1_orf/d9539_asm_v1.2_orf.fa")
header_regex = re.compile(r">([^ ]+?) \[([0-9]+) - ([0-9]+)\]")
orf_stats = []
for line in fh:
header_match = header_regex.match(line)
if header_match:
is_reverse = line.rstrip(" \n").endswith("(REVERSE SENSE)")
q_id = header_match.group(1)
#Position in contig
q_cds_start = int(header_match.group(2) if not is_reverse else header_match.group(3))
q_cds_end = int(header_match.group(3) if not is_reverse else header_match.group(2))
#Length of orf in aminoacids
q_len = (q_cds_end - q_cds_start + 1) / 3
orf_stats.append( pd.Series(data=[q_id,q_len,q_cds_start,q_cds_end,("-" if is_reverse else "+")],
index=["q_id","orf_len","q_cds_start","q_cds_end","strand"]))
orf_stats_df = pd.DataFrame(orf_stats)
print(orf_stats_df.shape)
orf_stats_df.head()
#Write orf stats to csv
orf_stats_df.to_csv("1_orf/orf_stats.csv",index=False)
"""
Explanation: 2. Process blastp results
2.1 Extract ORF stats from fasta file
End of explanation
"""
blastp_hits_annot = blastp_hits.merge(orf_stats_df,left_on="query_id",right_on="q_id")
#Add query coverage calculation
blastp_hits_annot["q_cov_calc"] = (blastp_hits_annot["q_end"] - blastp_hits_annot["q_start"] + 1 ) * 100 / blastp_hits_annot["q_len"]
blastp_hits_annot.sort_values(by="bitscore",ascending=False).head()
assert blastp_hits_annot.shape[0] == blastp_hits.shape[0]
"""
Explanation: 2.2 Annotate blast hits with orf stats
End of explanation
"""
! mkdir -p 4_msa_prots
#Get best hit (highest bitscore) for each ORF
gb = blastp_hits_annot[ (blastp_hits_annot.q_cov > 80) & (blastp_hits_annot.pct_id > 40) & (blastp_hits_annot.e_value < 1) ].groupby("query_id")
reliable_orfs = pd.DataFrame( hits.loc[hits.bitscore.idxmax()] for q_id,hits in gb )[["query_id","db","subject_id","pct_id","q_cov","q_len",
"bitscore","e_value","strand","q_cds_start","q_cds_end"]]
reliable_orfs = reliable_orfs.sort_values(by="q_cds_start",ascending=True)
reliable_orfs
"""
Explanation: 2.3 Extract best hit for each ORF (q_cov > 80% and pct_id > 40% and e-value < 1)
Define these resulting 7 ORFs as the core ORFs for the d9539 assembly.
The homology to the Metahit gene catalogue is very good, and considering the catalogue was curated
on a large set of gut metagenomes, it is reasonable to assume that these putative proteins come
from our detected circular putative virus/phage genome.
Two extra notes:
* Considering only these 7 ORFs, almost the entire genomic region is covered, with very few non-coding regions, still consistent with the hypothesis of a small viral genome, which should be mainly coding.
* Even though the naive ORF finder detected putative ORFs on both the positive and negative strands, the supported ORFs occur only on the positive strand. This could be an indication of an ssDNA or ssRNA virus.
End of explanation
"""
reliable_orfs["orf_id"] = ["orf{}".format(x) for x in range(1,reliable_orfs.shape[0]+1) ]
reliable_orfs["cds_len"] = reliable_orfs["q_cds_end"] - reliable_orfs["q_cds_start"] +1
reliable_orfs.sort_values(by="q_cds_start",ascending=True).to_csv("3_filtered_orfs/filt_orf_stats.csv",index=False,header=True)
reliable_orfs.sort_values(by="q_cds_start",ascending=True).to_csv("3_filtered_orfs/filt_orf_list.txt",index=False,header=False,columns=["query_id"])
"""
Explanation: 2.4 Extract selected orfs for further analysis
End of explanation
"""
! ~/utils/bin/seqtk subseq 1_orf/d9539_asm_v1.2_orf.fa 3_filtered_orfs/filt_orf_list.txt > 3_filtered_orfs/d9539_asm_v1.2_orf_filt.fa
"""
Explanation: 2.4.2 Extract fasta
End of explanation
"""
filt_blastp_hits = blastp_hits_annot[ blastp_hits_annot.query_id.apply(lambda x: x in reliable_orfs.query_id.tolist())]
filt_blastp_hits.to_csv("3_filtered_orfs/d9539_asm_v1.2_orf_filt_blastp.csv")
filt_blastp_hits.head()
"""
Explanation: 2.4.3 Write out filtered blast hits
End of explanation
"""
|
chbrandt/pynotes | xmatch/xNN_v1-mock_sources.ipynb | gpl-2.0 | %matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import cm
import numpy
plt.rcParams['figure.figsize'] = (10.0, 10.0)
"""
Explanation: The search for nearest-neighbors between (two) mock catalogs
As a first step in working over the cross-matching of two astronomical catalogs, below I experiment a nearest-neighbor (NN) method using two sets of artificial sources.
At the first part of this notebook I generate the (mock) sources and then search for the (positional) matching pairs.
TOC:
* Simulation of source images
* Resultant simulation for the first image/catalog
* Resultant simulation for the second image/catalog
* Resultant merged images
* Cross-match the tables
End of explanation
"""
# Define parameters for the images and sources therein.
# size of the images
sx = 500
sy = 500
# number of sources on each image
nsrc1 = int( 0.05 * (sx*sy)/(sx+sy) )
nsrc2 = int( 0.5 * nsrc1 )
# typical error radius (in pixels)
rerr1 = 20
rerr2 = rerr1
def generate_positions(npts,img_shape):
"""
Generate 'npts' points uniformly across 'image_shape'.
Args:
npts : number of points to generate
img_shape : (y,x) shape where to generate points
Returns:
Pair_Coordinates_List : list of (y,x) tuples
"""
import numpy as np
_sy,_sx = img_shape
    assert _sy >= 5 and _sx >= 5  # require a minimally sized image
indy = np.random.randint(0,_sy-1,npts)
indx = np.random.randint(0,_sx-1,npts)
_inds = zip(indy,indx)
return _inds
# "sources 1"
coords1 = generate_positions(nsrc1,(sy,sx))
assert isinstance(coords1,list) and len(coords1) == nsrc1
# Below are utility functions just to handle an properly format
# the position table to be used -- first, as a dictionary -- by the image generation function
# and then -- as a pandas.DataFrame -- through the rest of the work
def create_positions_table(coords,err_radius):
"""
"""
tab = {}
for i,oo in enumerate(coords):
i = i+1
tab[i] = [i,oo[1],oo[0],err_radius]
return tab
# table for "sources 1"
tab1 = create_positions_table(coords1,rerr1)
def tab2df(tab):
nt = {'ID':[],'x':[],'y':[],'r':[]}
for k,v in tab.iteritems():
nt['ID'].append(v[0])
nt['x'].append(v[1])
nt['y'].append(v[2])
nt['r'].append(v[3])
import pandas
df = pandas.DataFrame(nt)
return df
df1 = tab2df(tab1)
# create and draw each source on black(null) images
def draw_image_sources(tab_positions,img_shape,colormap='colorful'):
"""
Returns a ~PIL.Image with the objects draw in it
Input:
- tab_positions : dict()
dictionary with keys as row numbers (index)
and as values a tuple (index,x_position,y_position,radius)
- img_shape : tuple
tuple with (y,x) sizes, as a ~numpy.array.shape output
- colomap : str
name of the colormap to use: {colorful, blue, red, green}
Output:
- tuple with: ~PIL.Image with the sources(circles) draw
, a dictionary with identifiers for each source (internal use only)
"""
def color_filling(mode='colorful'):
def _colorful(x,y,size):
_R = int(255 - ( int(x/256) + int(y/256)*(1 + ceil(size[0]/256)) )) #TODO: restrict total size of image to avoid _R<=0
_G = x%256
_B = y%256
return (_R,_G,_B)
def _blue(x,y,size):
_R = 0
_G = 0
_B = 255
return (_R,_G,_B)
def _green(x,y,size):
_R = 0
_G = 255
_B = 0
return (_R,_G,_B)
def _red(x,y,size):
_R = 255
_G = 0
_B = 0
return (_R,_G,_B)
foos = {'blue' : _blue,
'red' : _red,
'green' : _green,
'colorful': _colorful}
try:
foo = foos[mode]
except:
foo = _colorful
return foo
from math import ceil
from PIL import Image,ImageDraw
    assert(isinstance(img_shape,tuple) and len(img_shape) == 2)
size = img_shape[::-1]
# Modification to accomplish color codes ---
#mode = 'L'
mode = 'RGB'
# ---
color = "black"
img = Image.new(mode,size,color)
assert(len(tab_positions)>=1)
#
dictColorId = {}
filling_foo = color_filling(colormap)
#
for i,src in tab_positions.items():
assert isinstance(src,list) and src is tab_positions[i]
        assert len(src)>=4, "length of table row %d is %d" % (i,len(src))
assert i==src[0]
draw = ImageDraw.Draw(img)
x = src[1]
assert 0<=x and x<size[0], "coordinate x is %d" % x
y = src[2]
assert 0<=y and y<size[1], "coordinate y is %d" % y
r = src[3]
assert r<size[0]/2 and r<size[1]/2
box = (x-r,y-r,x+r,y+r)
# Modification to accomplish color codes ---
#fill=255
fill = filling_foo(x,y,size)
# ---
dictColorId[str(fill)] = i
draw.ellipse(box,fill=fill)
del draw,box,x,y,r
return (img,dictColorId)
img1,cor2id1 = draw_image_sources(tab1,(sy,sx),colormap='blue')
#img1.show()
## Utility functions to handle convertion between PIL -> numpy, to show it with Matplotlib
#
# cmap reference:
#
# cm api: http://matplotlib.org/api/cm_api.html
# cmaps : http://matplotlib.org/users/colormaps.html
# imshow: http://matplotlib.org/users/image_tutorial.html
#cmap = cm.get_cmap('Blues')
def pilImage_2_numpyArray(img,shape):
sx,sy = shape
    # getdata() is row-major, so rows (height, sy) come first
    img_array = numpy.array(list(img.getdata())).reshape(sy,sx,3)
return img_array
def rgbArray_2_mono(img_arr,channel='R'):
    channels = {'R':0,
                'G':1,
                'B':2}
    _i = channels[channel]
    return img_arr[:,:,_i]
"""
Explanation: Simulation of source images
Object (or "sources") in astronomical catalogs come from the processing of astronomical images through a detection algorithm. It is out the scope here to discuss such detection algorithms, as such we generate the sources across a region mocking some region of the sky. The images help us to see what were are going to process, and by all means add to the quality/completeness of the workflow.
End of explanation
"""
img1_array = pilImage_2_numpyArray(img1,[sx,sy])
img1_mono = rgbArray_2_mono(img1_array,'B')
plt.imshow(img1_mono,cmap='Blues')
print "Catalog A:"
print "----------"
print df1
# do the same steps for "sources 2"
coords2 = generate_positions(nsrc2,(sy,sx))
assert isinstance(coords2,list) and len(coords2) == nsrc2
tab2 = create_positions_table(coords2,rerr2)
img2,cor2id2 = draw_image_sources(tab2,(sy,sx),colormap='red')
#img2.show()
df2 = tab2df(tab2)
"""
Explanation: Result of the simulation for the first image
End of explanation
"""
img2_array = pilImage_2_numpyArray(img2,[sx,sy])
img2_mono = rgbArray_2_mono(img2_array,'R')
print "Catalog B:"
print "----------"
print df2
plt.imshow(img2_mono,cmap='Reds')
"""
Explanation: Result of the simulation for the second image
End of explanation
"""
def add_arrays_2_image(img1,img2):
"""
"""
def array_2_image(arr):
from PIL import Image
imgout = Image.fromarray(numpy.uint8(arr))
return imgout
return array_2_image(img1+img2)
img_sum = add_arrays_2_image(img1_array,img2_array)
plt.imshow(img_sum)
"""
Explanation: Merge images
End of explanation
"""
def nn_search(catA,catB):
"""
"""
import pandas
assert isinstance(catA,pandas.DataFrame)
assert isinstance(catB,pandas.DataFrame)
A = catA.copy()
B = catB.copy()
from astropy.coordinates import SkyCoord
from astropy import units
norm_fact = 500.0
Ax_norm = A.x / norm_fact
Ay_norm = A.y / norm_fact
A_coord = SkyCoord(ra=Ax_norm, dec=Ay_norm, unit=units.deg)
Bx_norm = B.x / norm_fact
By_norm = B.y / norm_fact
B_coord = SkyCoord(ra=Bx_norm, dec=By_norm, unit=units.deg)
from astropy.coordinates import match_coordinates_sky
match_A_nn_idx, match_A_nn_sep, _d3d = match_coordinates_sky(A_coord, B_coord)
match_B_nn_idx, match_B_nn_sep, _d3d = match_coordinates_sky(B_coord, A_coord)
A['NN_in_B'] = B.ID[match_A_nn_idx].values
B['NN_in_A'] = A.ID[match_B_nn_idx].values
import numpy
A_matched_pairs = zip(numpy.arange(len(match_A_nn_idx)),
match_A_nn_idx )
B_matched_pairs = set(zip(match_B_nn_idx,
numpy.arange(len(match_B_nn_idx))))
duplicate_pairs = []
duplicate_dists = []
for i,p in enumerate(A_matched_pairs):
if p in B_matched_pairs:
duplicate_pairs.append(p)
duplicate_dists.append(match_A_nn_sep[i].value)
A_matched_idx,B_matched_idx = zip(*duplicate_pairs)
df_matched = pandas.DataFrame({ 'A_idx':A_matched_idx,
'B_idx':B_matched_idx,
'separation':duplicate_dists})
df_matched = df_matched.set_index('A_idx')
A.columns = [ 'A_'+c for c in A.columns ]
B.columns = [ 'B_'+c for c in B.columns ]
B_matched = B.iloc[df_matched.B_idx]
B_matched['A_idx'] = df_matched.index
B_matched = B_matched.set_index('A_idx')
B_matched['dist'] = numpy.asarray(df_matched.separation * norm_fact, dtype=int)
df = pandas.concat([A,B_matched],axis=1)
return df
from astropy.table import Table
table_match = Table.from_pandas( nn_search(df1,df2) )
table_match.show_in_notebook()
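The mutual-nearest-neighbour logic inside nn_search can be illustrated on a tiny synthetic example (a sketch, independent of the catalogs above): a pair (a, b) is kept only if b is a's nearest neighbour in B *and* a is b's nearest neighbour in A.

```python
import numpy as np

# Two toy point sets; B has a "distractor" point near A's first source.
A = np.array([[0.0, 0.0], [10.0, 10.0]])
B = np.array([[0.5, 0.0], [10.0, 9.5], [0.6, 0.1]])

d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
a_to_b = d.argmin(axis=1)   # nearest neighbour in B for each point of A
b_to_a = d.argmin(axis=0)   # nearest neighbour in A for each point of B

# Keep only mutual matches, as nn_search does with its duplicate_pairs set.
mutual = [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
print(mutual)  # [(0, 0), (1, 1)] — the distractor point is discarded
```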
"""
Explanation: Finally cross-match the catalogs
End of explanation
"""
|
shngli/Data-Mining-Python | Mining massive datasets/Frequent itemsets.ipynb | gpl-3.0 | import os
import sys
# N = 100,000; M = 50,000,000; S = 5,000,000,000
# N = 40,000; M = 60,000,000; S = 3,200,000,000
# N = 50,000; M = 80,000,000; S = 1,500,000,000
# N = 100,000; M = 100,000,000; S = 1,200,000,000
soln = [[100000, 50000000, 5000000000],
[40000, 60000000, 3200000000],
[50000, 80000000, 1500000000],
[100000, 100000000, 1200000000]]
def a(N, M):
memory = min(12*(M + 10**6), 4*N+2*N*(N-1))
#memory = min(12*(M + 10**6), 4*N*N)
return memory
def pct(e, a):
return 100 * (abs(e - a) / float(a))
for n, m, s in soln:
print pct(s, a(n, m))
"""
Explanation: Frequent Itemsets
Question 1
Suppose we have transactions that satisfy the following assumptions:
- s, the support threshold, is 10,000.
- There are one million items, which are represented by the integers 0,1,...,999999.
- There are N frequent items, that is, items that occur 10,000 times or more.
- There are one million pairs that occur 10,000 times or more.
- There are 2M pairs that occur exactly once. M of these pairs consist of two frequent items, the other M each have at least one nonfrequent item.
- No other pairs occur at all.
- Integers are always represented by 4 bytes.
Suppose we run the a-priori algorithm to find frequent pairs and can choose on the second pass between the triangular-matrix method for counting candidate pairs (a triangular array count[i][j] that holds an integer count for each pair of items (i, j) where i < j) and a hash table of item-item-count triples. Neglect in the first case the space needed to translate between original item numbers and numbers for the frequent items, and in the second case neglect the space needed for the hash table. Assume that item numbers and counts are always 4-byte integers.
As a function of N and M, what is the minimum number of bytes of main memory needed to execute the a-priori algorithm on this data? Demonstrate that you have the correct formula by selecting, from the choices below, the triple consisting of values for N, M, and the (approximate, i.e., to within 10%) minumum number of bytes of main memory, S, needed for the a-priori algorithm to execute with this data.
End of explanation
"""
baskets = range(1,101)
items = range(1,101)
# Create transactions
transactions = []
for i in baskets:
basket = []
for item in items:
if i % item == 0:
basket.append(item)
transactions.append(basket)
def check(transactions,query):
count=0
for t in transactions:
query_in = True
for q in query:
if q not in t:
query_in = False
if query_in:
count+=1
return count
def confidence(num,denom):
count_denom = check(transactions,denom)
count_num = check(transactions,num)
confidence = count_num /(1.0* count_denom ) * 100
return confidence
print "{1,2}-> 4,Condidence = %d"%(confidence([1,2,4],[1,2]))
print "{1}-> 2,Condidence = %d"%(confidence([1,2],[1]))
print "{1,4,7}-> 14,Condidence = %d"%(confidence([1,4,7,14],[1,4,7]))
print "{1,3,6}-> 12,Condidence = %d"%(confidence([1,3,6,12],[1,3,6]))
print "{4,6}-> 12,Condidence = %d"%(confidence([4,6,12],[4,6]))
print "{8,12}-> 96,Condidence = %d"%(confidence([8,12,96],[8,12]))
print "{4,6}-> 24,Condidence = %d"%(confidence([4,6,24],[4,6]))
print "{1,3,6}-> 12,Condidence = %d"%(confidence([1,3,6,12],[1,3,6]))
"""
Explanation: Answer: N = 100,000; M = 100,000,000; S = 1,200,000,000
Question 2
Imagine there are 100 baskets, numbered 1,2,...,100, and 100 items, similarly numbered. Item i is in basket j if and only if i divides j evenly. For example, basket 24 is the set of items {1,2,3,4,6,8,12,24}. Describe all the association rules that have 100% confidence. Which of the following rules has 100% confidence?
End of explanation
"""
import os
import sys
# S = 1,000,000,000; P = 10,000,000,000
# S = 300,000,000; P = 750,000,000
# S = 1,000,000,000; P = 20,000,000,000 (correct)
# S = 100,000,000; P = 120,000,000
soln = [[1000000000, 10000000000],
[300000000, 750000000],
[1000000000, 20000000000],
[100000000, 120000000]]
def b(S, P):
memory = S*S/48000000 - P
return memory
def pct(e, a):
    # percentage of P (a) relative to the largest possible P for that S (e)
    return 100 * (a / (e * e / 48000000.0))
for s, p in soln:
print "Difference between P and S^2/48,000,000 = %d" % b(s, p)
print "Percentage of P to the largest possible P value for that S = %d" % pct(s,p)
print
"""
Explanation: Question 3
Suppose we perform the PCY algorithm to find frequent pairs, with market-basket data meeting the following specifications:
- s, the support threshold, is 10,000.
- There are one million items, which are represented by the integers 0,1,...,999999.
- There are 250,000 frequent items, that is, items that occur 10,000 times or more.
- There are one million pairs that occur 10,000 times or more.
- There are P pairs that occur exactly once and consist of 2 frequent items.
- No other pairs occur at all.
- Integers are always represented by 4 bytes.
- When we hash pairs, they distribute among buckets randomly, but as evenly as possible; i.e., you may assume that each bucket gets exactly its fair share of the P pairs that occur once.
Suppose there are S bytes of main memory. In order to run the PCY algorithm successfully, the number of buckets must be sufficiently large that most buckets are not large. In addition, on the second pass, there must be enough room to count all the candidate pairs. As a function of S, what is the largest value of P for which we can successfully run the PCY algorithm on this data? Demonstrate that you have the correct formula by indicating which of the following is a value for S and a value for P that is approximately (i.e., to within 10%) the largest possible value of P for that S.
End of explanation
"""
|
skipamos/code_guild | wk0/notebooks/.ipynb_checkpoints/primes_challenge-checkpoint.ipynb | mit | def list_primes(n):
# TODO: Implement me
pass
"""
Explanation: <small><i>This notebook was prepared by Thunder Shiviah. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement list_primes(n), which returns a list of primes up to n (inclusive).
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Does list_primes do anything else?
No
Test Cases
list_primes(1) -> [] # 1 is not prime.
list_primes(2) -> [2]
list_primes(12) -> [2, 3, 5, 7 , 11]
Algorithm
Primes are numbers which are only divisible by 1 and themselves.
5 is a prime since it can only be divided by itself and 1.
9 is not a prime since it can be divided by 3 (3*3 = 9).
1 is not a prime for reasons that only mathematicians care about.
To check if a number is prime, we can implement a basic algorithm, namely: check if a given number can be divided by any numbers smaller than the given number (note: you really only need to test numbers up to the square root of a given number, but it doesn't really matter for this assignment).
Code
End of explanation
"""
# %load test_list_primes.py
from nose.tools import assert_equal
class Test_list_primes(object):
def test_list_primes(self):
assert_equal(list_primes(1), [])
assert_equal(list_primes(2), [2])
assert_equal(list_primes(7), [2, 3, 5, 7])
assert_equal(list_primes(9), list_primes(7))
print('Success: test_list_primes')
def main():
test = Test_list_primes()
test.test_list_primes()
if __name__ == '__main__':
main()
"""
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation
"""
|
AllenDowney/ModSim | soln/chap09.ipynb | gpl-2.0 | # install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
"""
Explanation: Chapter 9
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
"""
from sympy import symbols
t = symbols('t')
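Before turning to symbolic tools, here is a quick numeric check (a sketch with illustrative values) of the closed-form solutions derived in this chapter: iterating the constant-growth and proportional-growth recurrences should reproduce $x_0 + nc$ and $x_0(1+\alpha)^n$.

```python
# Numeric check of the closed forms:
#   constant growth:     x_{n+1} = x_n + c          ->  x_n = x_0 + n c
#   proportional growth: x_{n+1} = x_n (1 + alpha)  ->  x_n = x_0 (1 + alpha)^n
x0, c, alpha_growth, n = 1000.0, 25.0, 0.02, 50

x_const, x_prop = x0, x0
for _ in range(n):
    x_const = x_const + c
    x_prop = x_prop * (1 + alpha_growth)

print(x_const == x0 + n * c)                                      # True
print(abs(x_prop - x0 * (1 + alpha_growth)**n) < 1e-9 * x_prop)   # True
```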
"""
Explanation: In this chapter we express the models from previous chapters as
difference equations and differential equations, solve the equations,
and derive the functional forms of the solutions. We also discuss the
complementary roles of mathematical analysis and simulation.
Recurrence relations
The population models in the previous chapter and this one are simple
enough that we didn't really need to run simulations. We could have
solved them mathematically. For example, we wrote the constant growth
model like this:
results[t+1] = results[t] + annual_growth
In mathematical notation, we would write the same model like this:
$$x_{n+1} = x_n + c$$
where $x_n$ is the population during year $n$,
$x_0$ is a given initial population, and $c$ is constant annual growth.
This way of representing the model is a recurrence relation; see
http://modsimpy.com/recur.
Sometimes it is possible to solve a recurrence relation by writing an
equation that computes $x_n$, for a given value of $n$, directly; that
is, without computing the intervening values from $x_1$ through
$x_{n-1}$.
In the case of constant growth we can see that $x_1 = x_0 + c$, and
$x_2 = x_1 + c$. Combining these, we get $x_2 = x_0 + 2c$, then
$x_3 = x_0 + 3c$, and it is not hard to conclude that in general
$$x_n = x_0 + nc$$
So if we want to know $x_{100}$ and we don't care
about the other values, we can compute it with one multiplication and
one addition.
We can also write the proportional model as a recurrence relation:
$$x_{n+1} = x_n + \alpha x_n$$
Or more conventionally as:
$$x_{n+1} = x_n (1 + \alpha)$$
Now we can see that
$x_1 = x_0 (1 + \alpha)$, and $x_2 = x_0 (1 + \alpha)^2$, and in general
$$x_n = x_0 (1 + \alpha)^n$$
This result is a geometric progression;
see http://modsimpy.com/geom. When $\alpha$ is positive, the factor
$1+\alpha$ is greater than 1, so the elements of the sequence grow
without bound.
Finally, we can write the quadratic model like this:
$$x_{n+1} = x_n + \alpha x_n + \beta x_n^2$$
or with the more
conventional parameterization like this:
$$x_{n+1} = x_n + r x_n (1 - x_n / K)$$
There is no analytic solution to
this equation, but we can approximate it with a differential equation
and solve that, which is what we'll do in the next section.
Differential equations
Starting again with the constant growth model
$$x_{n+1} = x_n + c$$
If we define $\Delta x$ to be the change in $x$ from one time step to the next, we can write:
$$\Delta x = x_{n+1} - x_n = c$$
If we define
$\Delta t$ to be the time step, which is one year in the example, we can
write the rate of change per unit of time like this:
$$\frac{\Delta x}{\Delta t} = c$$
This model is discrete, which
means it is only defined at integer values of $n$ and not in between.
But in reality, people are born and die all the time, not once a year,
so a continuous model might be more realistic.
We can make this model continuous by writing the rate of change in the
form of a derivative:
$$\frac{dx}{dt} = c$$
This way of representing the model is a differential equation; see http://modsimpy.com/diffeq.
We can solve this differential equation if we multiply both sides by
$dt$:
$$dx = c dt$$
And then integrate both sides:
$$x(t) = c t + x_0$$
Similarly, we can write the proportional growth model like this:
$$\frac{\Delta x}{\Delta t} = \alpha x$$
And as a differential equation
like this:
$$\frac{dx}{dt} = \alpha x$$
If we multiply both sides by
$dt$ and divide by $x$, we get
$$\frac{1}{x}~dx = \alpha~dt$$
Now we
integrate both sides, yielding:
$$\ln x = \alpha t + K$$
where $\ln$ is the natural logarithm and $K$ is the constant of integration.
Exponentiating both sides, we have
$$\exp(\ln(x)) = \exp(\alpha t + K)$$
The exponential function can be written $\exp(x)$ or $e^x$. In this book I use the first form because it resembles the Python code.
We can rewrite the previous equation as
$$x = \exp(\alpha t) \exp(K)$$
Since $K$ is an arbitrary constant,
$\exp(K)$ is also an arbitrary constant, so we can write
$$x = C \exp(\alpha t)$$
where $C = \exp(K)$. There are many solutions
to this differential equation, with different values of $C$. The
particular solution we want is the one that has the value $x_0$ when
$t=0$.
When $t=0$, $x(t) = C$, so $C = x_0$ and the solution we want is
$$x(t) = x_0 \exp(\alpha t)$$ If you would like to see this derivation
done more carefully, you might like this video:
http://modsimpy.com/khan1.
Analysis and simulation
Once you have designed a model, there are generally two ways to proceed: simulation and analysis. Simulation often comes in the form of a computer program that models changes in a system over time, like births and deaths, or bikes moving from place to place. Analysis often comes in the form of algebra; that is, symbolic manipulation using mathematical notation.
Analysis and simulation have different capabilities and limitations.
Simulation is generally more versatile; it is easy to add and remove
parts of a program and test many versions of a model, as we have done in the previous examples.
But there are several things we can do with analysis that are harder or impossible with simulations:
With analysis we can sometimes compute, exactly and efficiently, a
value that we could only approximate, less efficiently, with
simulation. For example, in the quadratic model we plotted growth rate versus population and saw net crossed through zero when the population is
near 14 billion. We could estimate the crossing point using a
numerical search algorithm (more about that later). But with the
analysis in Section xxx, we get the general result that
$K=-\alpha/\beta$.
Analysis often provides "computational shortcuts", that is, the
ability to jump forward in time to compute the state of a system
many time steps in the future without computing the intervening
states.
We can use analysis to state and prove generalizations about models;
for example, we might prove that certain results will always or
never occur. With simulations, we can show examples and sometimes
find counterexamples, but it is hard to write proofs.
Analysis can provide insight into models and the systems they
describe; for example, sometimes we can identify regimes of
qualitatively different behavior and key parameters that control
those behaviors.
When people see what analysis can do, they sometimes get drunk with
power, and imagine that it gives them a special ability to see past the veil of the material world and discern the laws of mathematics that govern the universe. When they analyze a model of a physical system, they talk about "the math behind it" as if our world is the mere shadow of a world of ideal mathematical entities (I am not making this up; see http://modsimpy.com/plato).
This is, of course, nonsense. Mathematical notation is a language
designed by humans for a purpose, specifically to facilitate symbolic
manipulations like algebra. Similarly, programming languages are
designed for a purpose, specifically to represent computational ideas
and run programs.
Each of these languages is good for the purposes it was designed for and less good for other purposes. But they are often complementary, and one of the goals of this book is to show how they can be used together.
Analysis with WolframAlpha
Until recently, most analysis was done by rubbing graphite on wood
pulp, a process that is laborious and error-prone. A useful
alternative is symbolic computation. If you have used services like
WolframAlpha, you have used symbolic computation.
For example, if you go to https://www.wolframalpha.com/ and type
df(t) / dt = alpha f(t)
WolframAlpha infers that f(t) is a function of t and alpha is a
parameter; it classifies the query as a "first-order linear ordinary
differential equation\", and reports the general solution:
$$f(t) = c_1 \exp(\alpha t)$$
If you add a second equation to specify
the initial condition:
df(t) / dt = alpha f(t), f(0) = p_0
WolframAlpha reports the particular solution:
$$f(t) = p_0 \exp(\alpha t)$$
WolframAlpha is based on Mathematica, a powerful programming language
designed specifically for symbolic computation.
Analysis with SymPy
Python has a library called SymPy that provides symbolic computation
tools similar to Mathematica. They are not as easy to use as
WolframAlpha, but they have some other advantages.
Before we can use SymPy, we have to import it:
SymPy defines a Symbol object that represents symbolic variable names,
functions, and other mathematical entities.
The symbols function takes a string and returns Symbol objects. So
if we run this assignment:
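The assignment referred to here appears to have been dropped from this copy of the text; it presumably imports symbols from SymPy and creates the symbol t, along these lines:

```python
from sympy import symbols

t = symbols('t')  # t is now a symbolic variable, not a number
```

With t defined this way, the expressions in the following cells behave as described.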
End of explanation
"""
expr = t + 1
expr
"""
Explanation: Python understands that t is a symbol, not a numerical value. If we
now run
End of explanation
"""
expr.subs(t, 2)
"""
Explanation: Python doesn't try to perform numerical addition; rather, it creates a
new Symbol that represents the sum of t and 1. We can evaluate the
sum using subs, which substitutes a value for a symbol. This example
substitutes 2 for t:
End of explanation
"""
from sympy import Function
f = Function('f')
f
"""
Explanation: Functions in SymPy are represented by a special kind of Symbol:
End of explanation
"""
f(t)
"""
Explanation: Now if we write f(t), we get an object that represents the evaluation of a function, $f$, at a value, $t$.
End of explanation
"""
from sympy import diff
dfdt = diff(f(t), t)
dfdt
"""
Explanation: But again SymPy doesn't actually
try to evaluate it.
Differential equations in SymPy
SymPy provides a function, diff, that can differentiate a function. We can apply it to f(t) like this:
End of explanation
"""
from sympy import Eq
alpha = symbols('alpha')
eq1 = Eq(dfdt, alpha*f(t))
eq1
"""
Explanation: The result is a Symbol that represents the derivative of f with
respect to t. But again, SymPy doesn't try to compute the derivative
yet.
To represent a differential equation, we use Eq:
End of explanation
"""
from sympy import dsolve
solution_eq = dsolve(eq1)
solution_eq
"""
Explanation: The result is an object that represents an equation. Now
we can use dsolve to solve this differential equation:
End of explanation
"""
C1, p_0 = symbols('C1 p_0')
"""
Explanation: The result is the general
solution, which still contains an unspecified constant, $C_1$. To get the particular solution where $f(0) = p_0$, we substitute p_0 for C1. First, we have to create two more symbols:
End of explanation
"""
particular = solution_eq.subs(C1, p_0)
particular
"""
Explanation: Now we can perform the substitution:
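As an aside, recent versions of SymPy can also produce the particular solution in a single step, without the manual substitution, by passing the initial condition to dsolve through its ics keyword argument; a sketch:

```python
from sympy import symbols, Function, Eq, dsolve, exp

t, alpha, p_0 = symbols('t alpha p_0')
f = Function('f')

eq = Eq(f(t).diff(t), alpha * f(t))
# Supply the initial condition f(0) = p_0 directly to dsolve
particular = dsolve(eq, ics={f(0): p_0})
particular  # Eq(f(t), p_0*exp(alpha*t))
```

The substitution approach shown here is still worth knowing, since it works for any symbolic replacement, not just initial conditions.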
End of explanation
"""
r, K = symbols('r K')
"""
Explanation: The result is called the exponential growth curve; see
http://modsimpy.com/expo.
Solving the quadratic growth model
We'll use the (r, K) parameterization, so we'll need two more symbols:
End of explanation
"""
eq2 = Eq(diff(f(t), t), r * f(t) * (1 - f(t)/K))
eq2
"""
Explanation: Now we can write the differential equation.
End of explanation
"""
solution_eq = dsolve(eq2)
solution_eq
"""
Explanation: And solve it.
End of explanation
"""
general = solution_eq.rhs
general
"""
Explanation: The result, solution_eq, contains rhs, which is the right-hand side of the solution.
End of explanation
"""
at_0 = general.subs(t, 0)
at_0
"""
Explanation: We can evaluate the right-hand side at $t=0$.
End of explanation
"""
from sympy import solve
solutions = solve(Eq(at_0, p_0), C1)
"""
Explanation: Now we want to find the value of C1 that makes f(0) = p_0.
So we'll create the equation at_0 = p_0 and solve for C1. Because this is just an algebraic identity, not a differential equation, we use solve, not dsolve.
End of explanation
"""
type(solutions), len(solutions)
value_of_C1 = solutions[0]
value_of_C1
"""
Explanation: The result from solve is a list of solutions. In this case, we have reason to expect only one solution, but we still get a list, so we have to use the bracket operator, [0], to select the first one.
End of explanation
"""
particular = general.subs(C1, value_of_C1)
particular
"""
Explanation: Now in the general solution, we want to replace C1 with the value of C1 we just figured out.
End of explanation
"""
particular.simplify()
"""
Explanation: The result is complicated, but SymPy provides a function that tries to simplify it.
End of explanation
"""
A = (K - p_0) / p_0
A
from sympy import exp
logistic = K / (1 + A * exp(-r*t))
logistic
"""
Explanation: This function is called the logistic growth curve; see
http://modsimpy.com/logistic. In the context of growth models, the
logistic function is often written like this:
$$f(t) = \frac{K}{1 + A \exp(-rt)}$$
where $A = (K - p_0) / p_0$.
We can use SymPy to confirm that these two forms are equivalent. First we represent the alternative version of the logistic function:
End of explanation
"""
(particular - logistic).simplify()
"""
Explanation: To see whether two expressions are equivalent, we can check whether their difference simplifies to 0.
End of explanation
"""
# Solution
alpha, beta = symbols('alpha beta')
# Solution
eq3 = Eq(diff(f(t), t), alpha*f(t) + beta*f(t)**2)
eq3
# Solution
solution_eq = dsolve(eq3)
solution_eq
# Solution
general = solution_eq.rhs
general
# Solution
at_0 = general.subs(t, 0)
# Solution
solutions = solve(Eq(at_0, p_0), C1)
value_of_C1 = solutions[0]
value_of_C1
# Solution
particular = general.subs(C1, value_of_C1)
particular.simplify()
"""
Explanation: This test only works one way: if SymPy says the difference reduces to 0, the expressions are definitely equivalent (and not just numerically close).
But if SymPy can't find a way to simplify the result to 0, that doesn't necessarily mean there isn't one. Testing whether two expressions are equivalent is a surprisingly hard problem; in fact, there is no algorithm that can solve it in general.
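SymPy expressions also provide an equals method, which tests equivalence by evaluating both expressions at randomly chosen numerical points rather than by symbolic simplification; a quick sketch:

```python
from sympy import symbols, sin, cos

x = symbols('x')

# equals() substitutes random numbers for x and compares the results,
# so it can recognize some equivalences that simplify() misses.
result = (sin(x)**2 + cos(x)**2).equals(1)
result  # True
```

The same caveat applies: a True answer is strong evidence of equivalence, but the method cannot prove non-equivalence in general.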
If you would like to see this differential equation solved by hand, you might like this video: http://modsimpy.com/khan2
Summary
In this chapter we wrote the growth models from the previous chapters in terms of difference and differential equations.
What I called the constant growth model is more commonly called linear growth because the solution is a line. If we model time as continuous, the solution is
$$f(t) = p_0 + \alpha t$$
where $\alpha$ is the growth rate.
Similarly, the proportional growth model is usually called exponential growth because the solution is exponential:
$$f(t) = p_0 \exp(\alpha t)$$
Finally, the quadratic growth model is called logistic growth because the solution is the logistic function:
$$f(t) = \frac{K}{1 + A \exp(-rt)}$$
where $A = (K - p_0) / p_0$.
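With made-up example values, a quick check that this form does satisfy $f(0) = p_0$ and approaches $K$ for large $t$:

```python
from math import exp

K, p_0, r = 10.0, 2.0, 0.5          # arbitrary example values
A = (K - p_0) / p_0

def logistic(t):
    return K / (1 + A * exp(-r * t))

print(logistic(0), logistic(100))   # p_0 at t=0, approaches K for large t
```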
I avoided these terms until now because they are based on results we had not derived yet.
Exercises
Exercise: Solve the quadratic growth equation using the alternative parameterization
$\frac{df(t)}{dt} = \alpha f(t) + \beta f^2(t) $
End of explanation
"""
|
mikebentley15/baxter_projectyouth | sandbox/play.ipynb | mit | import rospy
import baxter_interface
from baxter_interface import CHECK_VERSION
"""
Explanation: The below imports will only work if you have ros and baxter tools installed and working, which isn't the case on my laptop.
End of explanation
"""
import select

def tryGetLine(inStream):
'Returns a line if there is one, else an empty string'
line = ''
fd = inStream.fileno()
timeout = 0.01 # seconds
readyToRead, _, _ = select.select([fd], [], [], timeout)
if fd in readyToRead:
line = inStream.readline()
return line[:-1] # Remove the newline
def connectToBaxter():
    rospy.init_node('play')
    rs = baxter_interface.RobotEnable(CHECK_VERSION)
    rs.enable()
    # Return the two limb interfaces that main() expects to unpack
    return baxter_interface.Limb('left'), baxter_interface.Limb('right')
def waitForButtonPress():
pass
"""
Explanation: I made this function when I was trying to make you type 'quit' to exit the command-line tool. But I just caved and said 'Press <CTRL>-C to exit'
End of explanation
"""
import re

def saveJointAngles(filename, angles):
'''
Saves the angles dictionary to filename in python notation
Will save it into an array of dictionaries called data.
For example
saveJointAngles('out.py', {'a': 1, 'b': 2})
will write to 'out.py':
data = []
data.append({
'a': 1,
'b': 2,
})
'''
string = str(angles)
string = string.replace('{', '')
string = string.replace('}', '')
split = string.split(', ')
split.append('})')
isDataDefined = False
try:
with open(filename, 'r') as infile:
for line in infile:
if re.match('data =', line):
isDataDefined = True
break
except IOError:
pass # Ignore the problem that the file doesn't yet exist
with open(filename, 'a') as outfile:
if not isDataDefined:
outfile.write('data = []\n')
outfile.write('\n')
outfile.write('data.append({\n ')
outfile.write(',\n '.join(split))
outfile.write('\n')
import sys

def main():
    'Record Baxter arm positions on each button press'
# Connect to Baxter
leftArm, rightArm = connectToBaxter()
# Record arm positions
print 'Press <CTRL>-C to quit'
while True:
# Read button presses and see if we should record
whichArm, action = waitForButtonPress()
currentArm = leftArm if whichArm == 'left' else rightArm
jointAngles = currentArm.joint_angles()
saveJointAngles(whichArm + 'SavedJoints.py', jointAngles)
if __name__ == '__main__':
sys.exit(main())
"""
Explanation: This function will save basically any dictionary to a file in python notation.
End of explanation
"""
import os
try:
os.remove('out.py')
except OSError:
pass # Ignore if the file doesn't exist
"""
Explanation: First, let's make sure that out.py doesn't exist
End of explanation
"""
saveJointAngles('out.py', {'a': 1.02, 'b': 3.333, 'c': True})
saveJointAngles('out.py', {'abc': 'jumbo shrimp', 'tree': 'tall', 'seed': True})
with open('out.py', 'r') as infile:
print infile.read()
"""
Explanation: Now, check to see that I can call this multiple times and it creates only one data array with multiple dictionaries in it.
End of explanation
"""
import out
reload(out)
out.data
"""
Explanation: Let's see if we can import it and use it
End of explanation
"""
|
christophmark/bayesloop | docs/source/tutorials/modelselection.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt # plotting
import seaborn as sns # nicer plots
sns.set_style('whitegrid') # plot styling
import numpy as np
import bayesloop as bl
# prepare study for coal mining data
S = bl.Study()
S.loadExampleData()
L = bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000))
S.set(L)
"""
Explanation: Model Selection
In Bayesian statistics, an objective model comparison is carried out by comparing the models' marginal likelihood. The likelihood function describes the probability (density) of the data, given the parameter values (and thereby the chosen model). By integrating out (marginalizing) the parameter values, one obtains the marginal likelihood (also called the model evidence), which directly measures the fitness of the model at hand. The model evidence represents a powerful tool for model selection, as it does not assume specific distributions (e.g. Student's t-test assumes Gaussian distributed variables) and automatically follows the principle of Occam's razor.
The forward-backward algorithm that bayesloop employs allows us to approximate the model evidence based on the discretized parameter distributions. Details on this method in the context of Hidden Markov models can be found here.
This section aims at giving a very brief introduction of Bayes factors together with an example based on the coal mining data and further introduces combined transition models in bayesloop.
End of explanation
"""
# dynamic disaster rate
T = bl.tm.GaussianRandomWalk('sigma', 0.2, target='accident_rate')
S.set(T)
S.fit(evidenceOnly=True)
dynamicLogEvidence = S.log10Evidence
#static disaster rate
T = bl.tm.Static()
S.set(T)
S.fit(evidenceOnly=True)
staticLogEvidence = S.log10Evidence
# determine Bayes factor
B = 10**(dynamicLogEvidence - staticLogEvidence)
print '\nBayes factor: B =', B
"""
Explanation: Bayes factors
Instead of interpreting the value of the marginal likelihood for a single model, it is common to compare two competing models/explanations $M_1$ and $M_2$ by evaluating the Bayes factor, here denoted as $B_{12}$. The Bayes factor is given by the ratio of the two marginal likelihood values:
$$B_{12} = \frac{p(D|M_1)}{p(D|M_2)}$$
where $p(D|M_i)$ states the probability of the data (marginal likelihood) given model $M_i$. A value of $B_{12} > 1$ therefore indicates that the data is better explained by model $M_1$, compared to $M_2$. More detailed information for the interpretation of the value of Bayes factors can be found here and in the references therein.
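As a rough helper for reading Bayes factors, the conventional verbal scale can be sketched like this (the cutoffs follow common usage and should be treated as conventions, not hard rules):

```python
def interpret_bayes_factor(B):
    """Rough verbal interpretation of B = p(D|M1)/p(D|M2)."""
    if B < 1:
        return 'supports M2'
    elif B < 3:
        return 'barely worth mentioning'
    elif B < 10:
        return 'substantial support for M1'
    elif B < 100:
        return 'strong support for M1'
    return 'decisive support for M1'

interpret_bayes_factor(2.6e13)  # 'decisive support for M1'
```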
As a first example, we investigate whether the inferred disaster rate of the coal mining data set indeed should be modeled as a time-varying parameter (a constant rate is an equally valid hypothesis). We first fit the model using the GaussianRandomWalk transition model with a standard deviation of $\sigma = 0.2$ to determine the corresponding model evidence (on a log scale). Subsequently, we use the simpler Static transition model (assuming no change of the disaster rate over time) and compare the resulting model evidence by computing the Bayes factor. Note that for computational efficiency, the keyword argument evidenceOnly=True is used, which evaluates the model evidence, but does not store any results for plotting.
End of explanation
"""
# dynamic disaster rate (change-point model)
T = bl.tm.ChangePoint('tChange', 1890)
S.set(T)
S.fit()
dynamicLogEvidence2 = S.log10Evidence
# determine Bayes factor
B = 10**(dynamicLogEvidence2 - dynamicLogEvidence)
print '\nBayes factor: B =', B
plt.figure(figsize=(8, 4))
plt.bar(S.rawTimestamps, S.rawData, align='center', facecolor='r', alpha=.5)
S.plot('accident_rate')
plt.xlim([1851, 1962])
plt.xlabel('year');
"""
Explanation: The computed Bayes factor $B = 2.6 \cdot 10^{13}$ shows decisive support for the first hypothesis of a dynamic disaster rate.
While the analysis conducted above clearly favors a time-varying rate, there may still exist a dynamic model that provides a better fit than the Gaussian random walk with $\sigma=0.2$. A very simple dynamic model is given by the transition model ChangePoint, which assumes an abrupt change of the disaster rate at a predefined point in time; we choose 1890 here. Note that the transition model ChangePoint does not need a target parameter, as it is applied to all parameters of an observation model. We perform a full fit in this case, as we want to provide a plot of the result.
End of explanation
"""
# combined model (change-point model + Gaussian random walk)
T = bl.tm.CombinedTransitionModel(bl.tm.ChangePoint('tChange', 1890),
bl.tm.GaussianRandomWalk('sigma', 0.2, target='accident_rate'))
S.set(T)
S.fit()
combinedLogEvidence = S.log10Evidence
# determine Bayes factor
B = 10**(combinedLogEvidence - dynamicLogEvidence2)
print '\nBayes factor: B =', B
plt.figure(figsize=(8, 4))
plt.bar(S.rawTimestamps, S.rawData, align='center', facecolor='r', alpha=.5)
S.plot('accident_rate')
plt.xlim([1851, 1962])
plt.xlabel('year');
"""
Explanation: The Bayes factor shows support in favor of the new change-point model. There is a 50% increased probability that the data is generated by the change-point model, compared to the Gaussian random walk model. Some may however argue that the value of $B = 1.5$ indicates only very weak support and is not worth more than a bare mention. Based on the data at hand, no clear decision between the two models can be made. While the change-point model is favored because it is more restricted (the number of possible data sets that can be described by this model is much smaller than for the Gaussian random walk) and therefore "simpler", it cannot capture fluctuations of the disaster rate before and after 1890 like the Gaussian random walk model does.
Combined transition models
The discussion above shows that depending on the data set, different transition models better explain different aspects of the data. For some data sets, a satisfactory transition model may only be found by combining several simple transition models. bayesloop provides a transition model class called CombinedTransitionModel that can be supplied with a sequence of transition models that are subsequently applied at every time step. Here, we combine the change-point model and the Gaussian random walk model to check whether the combined model yields a better explanation of the data, compared to the change-point model alone:
End of explanation
"""
# Definition of a linear decrease transition model.
# The first argument of any determinsitic model must be the time stamp
# and any hyper-parameters of the model are supplied as keyword-arguments.
# The hyper-parameter value(s) is/are supplied as default values.
def linear(t, slope=-0.2):
return slope*t
T = bl.tm.SerialTransitionModel(bl.tm.Static(),
bl.tm.BreakPoint('t_1', 1885),
bl.tm.Deterministic(linear, target='accident_rate'),
bl.tm.BreakPoint('t_2', 1895),
bl.tm.GaussianRandomWalk('sigma', 0.1, target='accident_rate'))
S.set(T)
S.fit()
serialLogEvidence = S.log10Evidence
# determine Bayes factor
B = 10**(serialLogEvidence - combinedLogEvidence)
print '\nBayes factor: B =', B
plt.figure(figsize=(8, 4))
plt.bar(S.rawTimestamps, S.rawData, align='center', facecolor='r', alpha=.5)
S.plot('accident_rate')
plt.xlim([1851, 1962])
plt.xlabel('year');
"""
Explanation: Again, the refined model is favored by a Bayes factor of $B = 2.5$.
Serial transition model
The combined transition models introduced above substantially extend the number of different transition models. These transition models imply identical parameter dynamics for all time steps. In many applications, however, there exist so-called structural breaks when parameter dynamics exhibit a fundamental change. In contrast to an abrupt change of the parameter values in case of a change-point, a structural break indicates an abrupt change of the transition model at a specific point in time. The class SerialTransitionModel allows to define a sequence of transition models (including combined transition models) together with a sequence of time steps that denote the structural breaks. Each break-point is defined just like any other transition model, containing a unique name and a time stamp (or an array of possible time stamps, see this tutorial on change-point studies). Note, however, that the BreakPoint transition model class can only be used as a sub-model of an SerialTransitionModel instance.
We use this new class of transition model to explore the idea that the number of coal mining disasters did not decrease instantaneously, but instead decreased linearly over the course of a few years. We assume a static disaster rate until 1885, followed by a deterministic decrease of 0.2 disasters per year (deterministic transition models are defined easily by custom Python functions, see below) until 1895. Finally, the disaster rate after the year 1895 is modeled by a Gaussian random walk of the disaster rate with a standard deviation of $\sigma=0.1$. Note that we pass the transition models and the corresponding structural breaks (time steps) to the SerialTransitionModel in turns. While this order may increase readability, one can also pass the models first, followed by the corresponding time steps.
End of explanation
"""
|
eigenholser/python-magic-methods | slides.ipynb | mit | methods = []
for item in dir(2):
if item.startswith('__') and item.endswith('__'):
methods.append(item)
print(methods)
"""
Explanation: Python Magic...Methods
<br/>
<br/>
Scott Overholser
<br/>
<br/>
https://github.com/eigenholser/python-magic-methods
Terminology
"Dunder" is used to reference "double underscore" names.
E.g. __init__() is called "dunder init".
Magic Methods
Special methods with reserved names.
Beautiful, intuitive, and standard ways of performing basic operations.
Define meaning for operators so that we can use them on our own classes as if they were built-in types.
Python Built-in Types
End of explanation
"""
(2).__str__()
str(2)
(2).__pow__(3)
2 ** 3
"""
Explanation: Examples
End of explanation
"""
import this
"""
Explanation: What about our own custom objects?
We can add magic methods to make our own objects behave like built-in types.
Why would we do that?
Expressiveness!
It is Zen.
The Zen of Python
End of explanation
"""
from great_circle import (
CAN, JFK, LAX, SLC,
Point, Distance,
MagicPoint, MagicDistance)
"""
Explanation: Great circle calculation
<br/>
<br/>
https://github.com/eigenholser/python-magic-methods/blob/master/great_circle.py
Initialize our notebook
End of explanation
"""
CAN, JFK, LAX, SLC
"""
Explanation: I asked Google Maps for the GPS coordinates of these airports
End of explanation
"""
# Initialize some non-magic objects
slc1 = Point(SLC)
slc2 = Point(SLC)
lax = Point(LAX)
jfk = Point(JFK)
can = Point(CAN)
# Initialize some objects with magic methods
m_slc1 = MagicPoint(SLC)
m_slc2 = MagicPoint(SLC)
m_lax = MagicPoint(LAX)
m_jfk = MagicPoint(JFK)
m_can = MagicPoint(CAN)
"""
Explanation: Object Construction
The __init__(self, [args...]) magic method.
End of explanation
"""
# Both p1 and p2 were instantiated using SLC coordinates.
slc1.coordinates(), slc2.coordinates()
"""
Explanation: Object Equality
The __eq__(self, other) magic method.
End of explanation
"""
slc1 == slc2
"""
Explanation: So they're equal...right?
End of explanation
"""
hex(id(slc1)), hex(id(slc2))
slc1 is slc2
"""
Explanation: Um...wrong.
This is why!
The identity operator, is, returns True only if id() returns the same value for both objects.
End of explanation
"""
slc1.latitude == slc2.latitude and slc1.longitude == slc2.longitude
"""
Explanation: How should we define equality?
If latitude and longitude of both points are equal then the points are equal.
More generally, if the attributes of both objects are equal, the objects are equal.
End of explanation
"""
def is_equal(self, p):
"""
Test for equality with point p.
"""
return self.latitude == p.latitude and self.longitude == p.longitude
slc1.is_equal(slc2)
"""
Explanation: How expressive is that!?
Not very...
We could define a method on our object...
End of explanation
"""
m_slc1 == m_slc2
"""
Explanation: Still cumbersome.
Now let's try it with Magic!
The __eq__() magic method is defined.
End of explanation
"""
slc1.calculate_distance(lax)
jfk.calculate_distance(slc1)
"""
Explanation: Big Kermit Thee Frog Yaaay!
What's the difference?
Equality is still defined the same.
We still define a method on our object to implement the equality test.
Python makes an implicit call to our method. This is the secret sauce...the magic!
Python requires our method have the name __eq__(), take a single argument, and return a boolean.
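As a generic illustration (this is not the actual great_circle.py implementation), such an equality method might look like:

```python
class Point2D:
    """Generic illustration of __eq__ (not the great_circle.py class)."""
    def __init__(self, latitude, longitude):
        self.latitude = latitude
        self.longitude = longitude

    def __eq__(self, other):
        # Python calls this implicitly whenever we write a == b
        return (self.latitude == other.latitude and
                self.longitude == other.longitude)

Point2D(40.8, -112.0) == Point2D(40.8, -112.0)  # True
```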
Pick up the pace
Calculating distance between points
Intuitively, the distance between two points is the difference.
This implies subtraction.
The __sub__(self, other) magic method.
Old and busted
Create a method to compute distance from the instance to supplied Point instance.
End of explanation
"""
# __sub__() returns MagicDistance instance.
dist_lax_slc = m_slc1 - m_lax
dist_jfk_slc = m_slc1 - m_jfk
type(dist_lax_slc)
# Jumping ahead a bit to representation of objects...
print(dist_lax_slc)
print(dist_jfk_slc)
"""
Explanation: Still cumbersome.
New hotness
The __sub__() magic method.
Takes a single argument and returns whatever you want.
End of explanation
"""
# Point
slc1
# MagicPoint
m_slc1
# MagicDistance
dist_lax_slc
"""
Explanation: Representation of objects
The __repr__(self) magic method.
The __str__(self) magic method.
The __format__(self, formatstr) magic method.
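A hypothetical class (again, not the actual great_circle.py code) showing how these three methods divide the work:

```python
class Angle:
    """Hypothetical class demonstrating __repr__, __str__ and __format__."""
    def __init__(self, radians):
        self.radians = radians

    def __repr__(self):
        # Unambiguous, developer-facing; echoed by the interpreter
        return 'Angle({!r})'.format(self.radians)

    def __str__(self):
        # Human-readable; used by str() and print()
        return '{} rad'.format(self.radians)

    def __format__(self, formatstr):
        # Used by format() and str.format(); delegate to float formatting
        return format(self.radians, formatstr)

a = Angle(0.5)
print(repr(a), str(a), '{:.2f}'.format(a))  # Angle(0.5) 0.5 rad 0.50
```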
__repr__(self)
End of explanation
"""
# Point
str(slc1)
# MagicPoint
str(m_slc1)
# MagicDistance
str(dist_lax_slc)
"""
Explanation: __str__(self)
End of explanation
"""
# Point
"SLC coordinates: {}.".format(slc1)
"""
Explanation: __format__(self, formatstr)
Old and busted
End of explanation
"""
# Point
"SLC coordinates: {}.".format(slc1.coordinates())
"""
Explanation: That didn't work very well.
End of explanation
"""
# MagicPoint
"SLC coordinates: {}.".format(m_slc1)
"""
Explanation: That's better, but cumbersome.
New hotness
End of explanation
"""
# MagicPoint
"SLC coordinates: {:.4f}.".format(m_slc1)
"""
Explanation: Perfect. But wait, there's more!
End of explanation
"""
# MagicDistance
"Distance from LAX to SLC is {} nautical miles.".format(dist_lax_slc)
# MagicDistance
"Distance from JFK to SLC is {} nautical miles.".format(dist_jfk_slc)
"""
Explanation: Specifying a floating point format results in radians.
More new hotness
End of explanation
"""
dist_jfk_slc == dist_lax_slc
dist_jfk_slc > dist_lax_slc
dist_lax_slc >= dist_lax_slc
"""
Explanation: About distance
__eq__()
__lt__(), __le__()
__gt__(), __ge__()
Let's take a quick look at these methods in the code.
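If you'd rather not write all six comparison methods by hand, the standard library's functools.total_ordering decorator can derive the rest from __eq__ and __lt__; a sketch (not necessarily how great_circle.py does it):

```python
from functools import total_ordering

@total_ordering
class Dist:
    """Hypothetical distance wrapper; total_ordering fills in <=, >, >=."""
    def __init__(self, nm):
        self.nm = nm  # nautical miles

    def __eq__(self, other):
        return self.nm == other.nm

    def __lt__(self, other):
        return self.nm < other.nm

Dist(1700) > Dist(500)  # True
```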
Nothing but magic from here.
Just a sampling
End of explanation
"""
dist_lax_slc(slc1, jfk)
"""
Explanation: Calling an object like a function
__call__(self, [args...])
End of explanation
"""
# No magic methods.
slc1 = None
# Magic methods.
m_can = None
"""
Explanation: This is a very contrived example.
Object Destruction
The __del__(self) magic method.
Called when the object is garbage collected.
End of explanation
"""
|
KshitijT/fundamentals_of_interferometry | 1_Radio_Science/1_9_a_brief_introduction_to_interferometry.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
"""
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.8 Astronomical radio sources
Next: 1.10 The Limits of Single Dish Astronomy
Section status: <span style="background-color:green"> </span>
Import standard modules:
End of explanation
"""
from IPython.display import display
from ipywidgets import interact
HTML('../style/code_toggle.html')
"""
Explanation: Import section specific modules:
End of explanation
"""
def double_slit (p0=[0],a0=[1],baseline=1,d1=5,d2=5,wavelength=.1,maxint=None):
"""Renders a toy dual-slit experiment.
'p0' is a list or array of source positions (drawn along the vertical axis)
'a0' is an array of source intensities
'baseline' is the distance between the slits
'd1' and 'd2' are distances between source and plate and plate and screen
'wavelength' is wavelength
'maxint' is the maximum intensity scale use to render the fringe pattern. If None, the pattern
is auto-scaled. Maxint is useful if you want to render fringes from multiple invocations
of double_slit() into the same intensity scale, i.e. for comparison.
"""
## setup figure and axes
plt.figure(figsize=(20, 5))
plt.axes(frameon=False)
plt.xlim(-d1-.1, d2+2) and plt.ylim(-1, 1)
plt.xticks([]) and plt.yticks([])
plt.axhline(0, ls=':')
baseline /= 2.
## draw representation of slits
plt.arrow(0, 1,0, baseline-1, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0,-1,0, 1-baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, -baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
## draw representation of lightpath from slits to centre of screen
plt.arrow(0, baseline,d2,-baseline, length_includes_head=True)
plt.arrow(0,-baseline,d2, baseline, length_includes_head=True)
## draw representation of sinewave from the central position
xw = np.arange(-d1, -d1+(d1+d2)/4, .01)
yw = np.sin(2*np.pi*xw/wavelength)*.1 + (p0[0]+p0[-1])/2
plt.plot(xw,yw,'b')
## 'xs' is a vector of x cordinates on the screen
## and we accumulate the interference pattern for each source into 'pattern'
xs = np.arange(-1, 1, .01)
pattern = 0
total_intensity = 0
## compute contribution to pattern from each source position p
for p,a in np.broadcast(p0,a0):
plt.plot(-d1, p, marker='o', ms=10, mfc='red', mew=0)
total_intensity += a
if p == p0[0] or p == p0[-1]:
plt.arrow(-d1, p, d1, baseline-p, length_includes_head=True)
plt.arrow(-d1, p, d1,-baseline-p, length_includes_head=True)
# compute the two pathlenghts
path1 = np.sqrt(d1**2 + (p-baseline)**2) + np.sqrt(d2**2 + (xs-baseline)**2)
path2 = np.sqrt(d1**2 + (p+baseline)**2) + np.sqrt(d2**2 + (xs+baseline)**2)
diff = path1 - path2
# caccumulate interference pattern from this source
pattern = pattern + a*np.cos(2*np.pi*diff/wavelength)
maxint = maxint or total_intensity
# add fake axis to interference pattern just to make it a "wide" image
pattern_image = pattern[:,np.newaxis] + np.zeros(10)[np.newaxis,:]
plt.imshow(pattern_image, extent=(d2,d2+1,-1,1), cmap=plt.gray(), vmin=-maxint, vmax=maxint)
# make a plot of the interference pattern
plt.plot(d2+1.5+pattern/(maxint*2), xs, 'r')
plt.show()
# show pattern for one source at 0
double_slit(p0=[0])
"""
Explanation: 1.9 A brief introduction to interferometry and its history
1.9.1 The double-slit experiment
The basics of interferometry date back to Thomas Young's double-slit experiment ➞ of 1801. In this experiment, a plate pierced by two parallel slits is illuminated by a monochromatic source of light. Due to the wave-like nature of light, the waves passing through the two slits interfere, resulting in an interference pattern, or fringe, projected onto a screen behind the slits:
<img src="figures/514px-Doubleslit.svg.png" width="50%"/>
Figure 1.9.1: Schematic diagram of Young's double-slit experiment. Credit: Unknown.
The position on the screen $P$ determines the phase difference between the two arriving wavefronts. Waves arriving in phase interfere constructively and produce bright strips in the interference pattern. Waves arriving out of phase interfere destructively and result in dark strips in the pattern.
In this section we'll construct a toy model of a dual-slit experiment. Note that this model is not really physically accurate, it is literally just a "toy" to help us get some intuition for what's going on. A proper description of interfering electromagnetic waves will follow later.
Firstly, a monochromatic electromagnetic wave of wavelength $\lambda$ can be described by at each point in time and space as a complex quantity i.e. having an amplitude and a phase, $A\mathrm{e}^{\imath\phi}$. For simplicity, let us assume a constant amplitude $A$ but allow the phase to vary as a function of time and position.
Now if the same wave travels along two paths of different lengths and recombines at point $P$, the resulting electric field is a sum:
$E=E_1+E_2 = A\mathrm{e}^{\imath\phi}+A\mathrm{e}^{\imath(\phi-\phi_0)},$
where the phase delay $\phi_0$ corresponds to the pathlength difference $\tau_0$:
$\phi_0 = 2\pi\tau_0/\lambda.$
What is actually "measured" on the screen, the brightness, is, physically, a time-averaged electric field intensity $EE^*$, where the $^*$ represents complex conjugation (this is exactly what our eyes, or a photographic plate, or a detector in the camera perceive as "brightness"). We can work this out as
$
EE^* = (E_1+E_2)(E_1+E_2)^* = E_1 E_1^* + E_2 E_2^* + E_1 E_2^* + E_2 E_1^* = A^2 + A^2
+ A^2 \mathrm{e}^{\imath\phi_0}
+ A^2 \mathrm{e}^{-\imath\phi_0} =
2A^2 + 2A^2 \cos{\phi_0}.
$
Note how phase itself has dropped out, and the only thing that's left is the phase delay $\phi_0$. The first part of the sum is constant, while the second part, the interfering term, varies with phase difference $\phi_0$, which in turn depends on position on the screen $P$. It is easy to see that the resulting intensity $EE^*$ is a purely real quantity that varies from 0 to $4A^2$. This is exactly what produces the alternating bright and dark stripes on the screen.
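As a quick numerical check, the intensity above can be evaluated directly as a function of the phase delay (a minimal sketch; the function name is ours and is not part of the simulator code below):

```python
import math

def fringe_intensity(phi0, amplitude=1.0):
    """Time-averaged intensity of two interfering waves of equal amplitude,
    given the phase delay phi0 (radians): 2A^2 + 2A^2*cos(phi0)."""
    return 2 * amplitude**2 * (1 + math.cos(phi0))

# In phase: constructive interference, a bright strip of intensity 4A^2.
bright = fringe_intensity(0.0)
# Half a wavelength of path difference: destructive interference, a dark strip.
dark = fringe_intensity(math.pi)
```

As expected, the intensity swings between $0$ and $4A^2$ as $\phi_0$ varies across the screen.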
1.9.2 A toy double-slit simulator
Let us write a short Python function to (very simplistically) simulate a double-slit experiment. Note that understanding the code presented is not a requirement for understanding the experiment. Those not interested in the code implementation should feel free to look only at the results.
End of explanation
"""
interact(lambda baseline,wavelength:double_slit(p0=[0],baseline=baseline,wavelength=wavelength),
baseline=(0.1,2,.01),wavelength=(.05,.2,.01)) and None
"""
Explanation: This function draws a double-slit setup, with a light source at position $p$ (in fact the function can render multiple sources, but we'll only use it for one source for the moment). The dotted blue line shows the optical axis ($p=0$). The sine wave (schematically) shows the wavelength. (Note that the units here are arbitrary, since it is only geometry relative to wavelength that determines the results). The black lines show the path of the light waves through the slits and onto the screen at the right. The strip on the right schematically renders the resulting interference pattern, and the red curve shows a cross-section through the pattern.
Inside the function, we simply compute the pathlength difference along the two paths, convert it to phase delay, and render the corresponding interference pattern.
<div class=warn>
<b>Warning:</b> Once again, let us stress that this is just a "toy" rendering of an interferometer. It serves to demonstrate the basic principles, but it is not physically accurate. In particular, it does not properly model diffraction or propagation. Also, since astronomical sources are effectively infinitely distant (compared to the size of the interferometer), the incoming light rays should be parallel (or equivalently, the incoming wavefront should be planar, as in the first illustration in this chapter).
</div>
1.9.3 Playing with the baseline
First of all, note how the properties of the interference pattern vary with baseline $B$ (the distance between the slits) and wavelength $\lambda$. Use the sliders below to adjust both. Note how increasing the baseline increases the frequency of the fringe, as does reducing the wavelength.
End of explanation
"""
interact(lambda position,baseline,wavelength:double_slit(p0=[position],baseline=baseline,wavelength=wavelength),
position=(-1,1,.01),baseline=(0.1,2,.01),wavelength=(.05,.2,.01)) and None
"""
Explanation: 1.9.4 From the double-slit box to an interferometer
The original double-slit experiment was conceived as a demonstration of the wave-like nature of light. The role of the light source in the experiment was simply to illuminate the slits. Let us now turn it around and ask ourselves, given a working dual-slit setup, could we use it to obatin some information about the light source? Could we use the double-slit experiment as a measurement device, i.e. an interferometer?
1.9.4.1 Measuring source position
Obviously, we could measure source intensity -- but that's not very interesting, since we can measure that by looking at the source directly. Less obviously, we could measure the source position. Observe what happens when we move the source around, and repeat this experiment for longer and shorter baselines:
End of explanation
"""
double_slit([0],baseline=1.5,wavelength=0.1)
double_slit([0.69],baseline=1.5,wavelength=0.1)
"""
Explanation: Note that long baselines are very sensitive to change in source position, while short baselines are less sensitive. As we'll learn in Chapter 4, the spatial resolution (i.e. the distance at which we can distinguish sources) of an interferometer is given by $\lambda/B$ , while the spatial resolution of a conventional telescope is given by $\lambda/D$, where $D$ is the dish (or mirror) aperture. This is a fortunate fact, as in practice it is much cheaper to build long baselines than large apertures!
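To put some (purely illustrative, made-up) numbers on this, compare the resolution of a modest radio baseline with that of a very large single dish:

```python
import math

wavelength = 0.21   # metres -- the 21 cm hydrogen line (illustrative choice)
baseline = 6000.0   # a 6 km interferometer baseline, metres
aperture = 100.0    # a very large 100 m single dish, metres

res_interferometer = wavelength / baseline  # ~ lambda/B, in radians
res_single_dish = wavelength / aperture     # ~ lambda/D, in radians

# Convert the interferometer's resolution to arcseconds for readability.
res_arcsec = math.degrees(res_interferometer) * 3600
```

Even this modest baseline out-resolves the giant dish by a factor of 60, at a fraction of the construction cost.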
On the other hand, due to the periodic nature of the interference pattern, the position measurement of a long baseline is ambiguous. Consider that two sources at completely different positions produce the same interference pattern:
End of explanation
"""
double_slit([0],baseline=0.5,wavelength=0.1)
double_slit([0.69],baseline=0.5,wavelength=0.1)
"""
Explanation: On the other hand, using a shorter baseline resolves the ambiguity:
End of explanation
"""
interact(lambda position,intensity,baseline,wavelength:
double_slit(p0=[0,position],a0=[1,intensity],baseline=baseline,wavelength=wavelength),
position=(-1,1,.01),intensity=(.2,1,.01),baseline=(0.1,2,.01),wavelength=(.01,.2,.01)) and None
"""
Explanation: Modern interferometers exploit this by using an array of elements, which provides a whole range of possible baselines.
1.9.4.2 Measuring source size
Perhaps less obviously, we can use an interferometer to measure source size. Until now we have been simulating only point-like sources. First, consider what happens when we add a second source to the experiment (fortunately, we wrote the function above to accommodate such a scenario). The interference pattern from two (independent) sources is the sum of the individual interference patterns. This seems obvious, but will be shown more formally later on. Here we add a second source, with a slider to control its position and intensity. Try to move the second source around, and observe how the superimposed interference pattern can become attenuated or even cancel out.
End of explanation
"""
double_slit(p0=[0,0.25],baseline=1,wavelength=0.1)
double_slit(p0=[0,0.25],baseline=1.5,wavelength=0.1)
"""
Explanation: So we can already use our double-slit box to infer something about the structure of the light source. Note that with two sources of equal intensity, it is possible to have the interference pattern almost cancel out on any one baseline -- but never on all baselines at once:
End of explanation
"""
interact(lambda extent,baseline,wavelength:
double_slit(p0=np.arange(-extent,extent+.01,.01),baseline=baseline,wavelength=wavelength),
extent=(0,1,.01),baseline=(0.1,2,.01),wavelength=(.01,.2,.01)) and None
"""
Explanation: Now, let us simulate an extended source, by giving the simulator an array of closely spaced point-like sources. Try playing with the extent slider. What's happening here is that the many interference patterns generated by each little part of the extended source tend to "wash out" each other, resulting in a net loss of amplitude in the pattern. Note also how each particular baseline length is sensitive to a particular range of source sizes.
End of explanation
"""
double_slit(p0=[0],baseline=1,wavelength=0.1)
double_slit(p0=np.arange(-0.2,.21,.01),baseline=1,wavelength=0.1)
"""
Explanation: We can therefore measure source size by measuring the reduction in the amplitude of the interference pattern:
End of explanation
"""
interact(lambda d1,d2,position,extent: double_slit(p0=np.arange(position-extent,position+extent+.01,.01),d1=d1,d2=d2),
d1=(1,5,.1),d2=(1,5,.1),
position=(-1,1,.01),extent=(0,1,.01)) and None
"""
Explanation: In fact historically, this was the first application of interferometry in astronomy. In a famous experiment in 1920, a Michelson interferometer installed at Mount Wilson Observatory was used to measure the diameter of the red giant star Betelgeuse.
<div class=advice>
The historical origins of the term <em><b>visibility</b></em>, which you will become intimately familiar with in the course of these lectures, actually lie in the experiment described above. Originally, "visibility" was defined as just that, i.e. a measure of the contrast between the light and dark stripes of the interference pattern.
</div>
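Michelson's original definition of visibility is simple enough to write down directly (a sketch; the function name is our own):

```python
def michelson_visibility(i_max, i_min):
    """Fringe visibility as originally defined by Michelson: the contrast
    between the brightest and darkest strips of the interference pattern."""
    return (i_max - i_min) / (i_max + i_min)

# Fully modulated fringes (dark strips go all the way to zero) give
# visibility 1; a completely washed-out pattern gives visibility 0.
```

This is the quantity that drops as the source becomes extended, as the simulations below demonstrate.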
<div class=advice>
Modern interferometers deal in terms of <em><b>complex visibilities</b></em>, i.e. complex quantities. The amplitude of a complex visibility, or <em>visibility amplitude</em>, corresponds to the intensity of the interference pattern, while the <em>visibility phase</em> corresponds to its relative phase (in our simulator, this is the phase of the fringe at the centre of the screen). This one complex number is all the information we have about the light source. Note that while our double-slit experiment shows an entire pattern, the variation in that pattern across the screen is entirely due to the geometry of the "box" (generically, this is the instrument used to make the measurement) -- the informational content, as far as the light source is concerned, is just the amplitude and the phase!
</div>
<div class=advice>
In the single-source simulations above, you can clearly see that amplitude encodes source shape (and intensity), while phase encodes source position. <b>Visibility phase measures position, amplitude measures shape and intensity.</b> This is a recurring theme in radio interferometry, one that we'll revisit again and again in subsequent lectures.
</div>
Note that a size measurement is a lot simpler than a position measurement. The phase of the fringe pattern gives us a very precise measurement of the position of the source relative to the optical axis of the instrument. To get an absolute position, however, we would need to know where the optical axis is pointing in the first place -- for practical reasons, the precision of this is a lot less. The amplitude of the fringe pattern, on the other hand, is not very sensitive to errors in the instrument pointing. It is for this reason that the first astronomical applications of interferometry dealt with size measurements.
1.9.4.3 Measuring instrument geometry
Until now, we've only been concerned with measuring source properties. Obviously, the interference pattern is also quite sensitive to instrument geometry. We can easily see this in our toy simulator, by playing with the position of the slits and the screen:
End of explanation
"""
double_slit(p0=[0], a0=[0.4], maxint=2)
double_slit(p0=[0,0.25], a0=[1, 0.6], maxint=2)
double_slit(p0=np.arange(-0.2,.21,.01), a0=.05, maxint=2)
"""
Explanation: This simple fact has led to many other applications for interferometers, from geodetic VLBI (where continental drift is measured by measuring extremely accurate antenna positions via radio interferometry of known radio sources), to the recent gravitational wave detection by LIGO (where the light source is a laser, and the interference pattern is used to measure minuscule distortions in space-time -- and thus the geometry of the interferometer -- caused by gravitational waves).
1.9.5 Practical interferometers
If you were given the job of constructing an interferometer for astronomical measurements, you would quickly find that the double-slit experiment does not translate into a very practical design. The baseline needs to be quite large; a box with slits and a screen is physically unwieldy. A more viable design can be obtained by playing with the optical path.
The basic design still used in optical interferometry to this day is the Michelson stellar interferometer mentioned above. This is schematically laid out as follows:
<IMG SRC="figures/471px-Michelson_stellar_interferometer.svg.png" width="50%"/>
Figure 1.9.2: Schematic of a Michelson interferometer. Credit: Unknown.
The outer set of mirrors plays the role of slits, and provides a baseline of length $d$, while the rest of the optical path serves to bring the two wavefronts together onto a common screen. The first such interferometer, used to carry out the Betelgeuse size measurement, looked like this:
<IMG SRC="figures/Hooker_interferometer.jpg" width="50%"/>
Figure 1.9.3: 100-inch Hooker Telescope at Mount Wilson Observatory in southern California, USA. Credit: Unknown.
In modern optical interferometers using the Michelson layout, the role of the "outer" mirrors is played by optical telescopes in their own right. For example, the Very Large Telescope operated by ESO can operate as an inteferometer, combining four 8.2m and four 1.8m individual telescopes:
<IMG SRC="figures/Hard_Day's_Night_Ahead.jpg" width="100%"/>
Figure 1.9.4: The Very Large Telescope operated by ESO. Credit: European Southern Observatory.
In the radio regime, the physics allow for more straightforward designs. The first radio interferometric experiment was the sea-cliff interferometer developed in Australia during 1945-48. This used reflection off the surface of the sea to provide a "virtual" baseline, with a single antenna measuring the superimposed signal:
<IMG SRC="figures/sea_int_medium.jpg" width="50%"/>
Figure 1.9.5: Schematic of the sea-cliff single antenna interferometer developed in Australia post-World War 2. Credit: Unknown.
In a modern radio interferometer, the "slits" are replaced by radio dishes (or collections of antennas called aperture arrays) which sample and digitize the incoming wavefront. The part of the signal path between the "slits" and the "screen" is then completely replaced by electronics. The digitized signals are combined in a correlator, which computes the corresponding complex visibilities. We will study the details of this process in further lectures.
In contrast to the delicate optical path of an optical interferometer, digitized signals have the advantage of being endlessly and losslessly replicable. This has allowed us to construct entire interferometric arrays. An example is the Jansky Very Large Array (JVLA, New Mexico, US) consisting of 27 dishes:
<IMG SRC="figures/USA.NM.VeryLargeArray.02.jpg" width="50%"/>
Figure 1.9.6: Telescope elements of the Jansky Very Large Array (JVLA) in New Mexico, USA. Credit: Unknown.
The MeerKAT telescope coming online in the Karoo, South Africa, will consist of 64 dishes. This is an aerial photo showing the dish foundations being prepared:
<IMG SRC="figures/2014_core_02.jpg" width="50%"/>
Figure 1.9.7: Layout of the core of the MeerKAT array in the Northern Cape, South Africa. Credit: Unknown.
In an interferometer array, each pair of antennas forms a different baseline. With $N$ antennas, the correlator can then simultaneously measure the visibilities corresponding to $N(N-1)/2$ baselines, with each pairwise antenna combination yielding a unique baseline.
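The pair count is a one-liner; the dish counts below are the JVLA and MeerKAT figures quoted above:

```python
def n_baselines(n_antennas):
    """Number of unique antenna pairs, i.e. simultaneous baselines: N(N-1)/2."""
    return n_antennas * (n_antennas - 1) // 2

jvla_baselines = n_baselines(27)     # the 27-dish JVLA
meerkat_baselines = n_baselines(64)  # the 64-dish MeerKAT
```

Note how quickly the baseline count grows: going from 27 to 64 dishes yields nearly six times as many simultaneous baselines.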
1.9.5.1 Additive vs. multiplicative interferometers
The double-slit experiment, the Michelson interferometer, and the sea-cliff interferometer are all examples of additive interferometers, where the fringe pattern is formed up by adding the two interfering signals $E_1$ and $E_2$:
$$
EE^* = (E_1+E_2)(E_1+E_2)^* = E_1 E_1^* + E_2 E_2^* + E_1 E_2^* + E_2 E_1^*
$$
As we already discussed above, the first two terms in this sum are constant (corresponding to the total intensity of the two signals), while the cross-term $E_1 E_2^*$ and its complex conjugate is the *interfering* term that is responsible for fringe formation.
Modern radio interferometers are multiplicative. Rather than adding the signals, the antennas measure $E_1$ and $E_2$ and feed these measurements into a cross-correlator, which directly computes the $E_1 E_2^*$ term.
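The cross-term is easy to verify numerically (the amplitude and phases below are arbitrary made-up values):

```python
import cmath

A, phi, phi0 = 1.5, 0.7, 0.3  # arbitrary amplitude, phase, and phase delay

E1 = A * cmath.exp(1j * phi)
E2 = A * cmath.exp(1j * (phi - phi0))

# A correlator computes E1 * conj(E2) directly, skipping the constant
# E1 E1* + E2 E2* terms of the additive sum.
cross = E1 * E2.conjugate()
```

Note how the arbitrary phase $\phi$ cancels: the cross-term carries only the amplitude $A^2$ and the phase delay $\phi_0$ -- exactly the complex visibility discussed above.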
1.9.6 Aperture synthesis vs. targeted experiments
Interferometry was born as a way of conducting specific, targeted, and rather exotic experiments. The 1920 Betelgeuse size measurement is a typical example. In contrast to a classical optical telescope, which could directly obtain an image of the sky containing information on hundreds to thousands of objects, an interferometer was a very delicate apparatus for indirectly measuring a single physical quantity (the size of the star in this case). The spatial resolution of that single measurement far exceeded anything available to a conventional telescope, but in the end it was always a specific, one-off measurement. The first interferometers were not capable of directly imaging the sky at that improved resolution.
In radio interferometry, all this changed in the late 1960s with the development of the aperture synthesis technique by Sir Martin Ryle's group in Cambridge. The crux of this technique lies in combining the information from multiple baselines.
To understand this point, consider the following. As you saw from playing with the toy double-slit simulator above, for each baseline length, the interference pattern conveys a particular piece of information about the sky. For example, the following three "skies" yield exactly the same interference pattern on a particular baseline, so a single measurement would be unable to distinguish between them:
End of explanation
"""
double_slit(p0=[0], a0=[0.4], baseline=0.5, maxint=2)
double_slit(p0=[0,0.25], a0=[1, 0.6], baseline=0.5, maxint=2)
double_slit(p0=np.arange(-0.2,.21,.01), a0=.05, baseline=0.5, maxint=2)
"""
Explanation: However, as soon as we take a measurement on another baseline, the difference becomes apparent:
End of explanation
"""
def michelson (p0=[0],a0=[1],baseline=50,maxbaseline=100,extent=0,d1=9,d2=1,d3=.2,wavelength=.1,fov=5,maxint=None):
"""Renders a toy Michelson interferometer with an infinitely distant (astronomical) source
'p0' is a list or array of source positions (as angles, in degrees).
'a0' is an array of source intensities
'extent' are source extents, in degrees
'baseline' is the baseline, in lambdas
'maxbaseline' is the max baseline to which the plot is scaled
'd1' is the plotted distance between the "sky" and the interferometer arms
'd2' is the plotted distance between arms and screen, in plot units
'd3' is the plotted distance between inner mirrors, in plot units
'fov' is the notionally rendered field of view radius (in degrees)
'wavelength' is wavelength, used for scale
'maxint' is the maximum intensity scale use to render the fringe pattern. If None, the pattern
is auto-scaled. Maxint is useful if you want to render fringes from multiple invocations
of michelson() into the same intensity scale, i.e. for comparison.
"""
## setup figure and axes
plt.figure(figsize=(20, 5))
plt.axes(frameon=False)
plt.xlim(-d1-.1, d2+2) and plt.ylim(-1, 1)
plt.xticks([])
# label Y axis with degrees
yt,ytlab = plt.yticks()
plt.yticks(yt,["-%g"%(float(y)*fov) for y in yt])
plt.ylabel("Angle of Arrival (degrees)")
plt.axhline(0, ls=':')
## draw representation of arms and light path
maxbaseline = max(maxbaseline,baseline)
bl2 = baseline/float(maxbaseline) # coordinate of half a baseline, in plot units
plt.plot([0,0],[-bl2,bl2], 'o', ms=10)
plt.plot([0,d2/2.,d2/2.,d2],[-bl2,-bl2,-d3/2.,0],'-k')
plt.plot([0,d2/2.,d2/2.,d2],[ bl2, bl2, d3/2.,0],'-k')
plt.text(0,0,'$b=%d\lambda$'%baseline, ha='right', va='bottom', size='xx-large')
## draw representation of sinewave from the central position
if isinstance(p0,(int,float)):
p0 = [p0]
xw = np.arange(-d1, -d1+(d1+d2)/4, .01)
yw = np.sin(2*np.pi*xw/wavelength)*.1 + (p0[0]+p0[-1])/(2.*fov)
plt.plot(xw,yw,'b')
## 'xs' is a vector of x cordinates on the screen
xs = np.arange(-1, 1, .01)
## xsdiff is corresponding pathlength difference
xsdiff = (np.sqrt(d2**2 + (xs-d3)**2) - np.sqrt(d2**2 + (xs+d3)**2))
## and we accumulate the interference pattern for each source into 'pattern'
pattern = 0
total_intensity = 0
## compute contribution to pattern from each source position p
for pos,ampl in np.broadcast(p0,a0):
total_intensity += ampl
pos1 = pos/float(fov)
if extent: # simulate extent by plotting 100 sources of 1/100th intensity
positions = np.arange(-1,1.01,.01)*extent/fov + pos1
else:
positions = [pos1]
# draw arrows indicating lightpath
plt.arrow(-d1, bl2+pos1, d1, -pos1, head_width=.1, fc='k', length_includes_head=True)
plt.arrow(-d1,-bl2+pos1, d1, -pos1, head_width=.1, fc='k', length_includes_head=True)
for p in positions:
# compute the pathlength difference between slits and position on screen
plt.plot(-d1, p, marker='o', ms=10*ampl, mfc='red', mew=0)
# add pathlength difference at slits
diff = xsdiff + (baseline*wavelength)*np.sin(p*fov*np.pi/180)
# accumulate interference pattern from this source
pattern = pattern + (float(ampl)/len(positions))*np.cos(2*np.pi*diff/wavelength)
maxint = maxint or total_intensity
# add fake axis to interference pattern just to make it a "wide" image
pattern_image = pattern[:,np.newaxis] + np.zeros(10)[np.newaxis,:]
plt.imshow(pattern_image, extent=(d2,d2+1,-1,1), cmap=plt.gray(), vmin=-maxint, vmax=maxint)
# make a plot of the interference pattern
plt.plot(d2+1.5+pattern/(maxint*2), xs, 'r')
plt.show()
print "visibility (Imax-Imin)/(Imax+Imin): ",(pattern.max()-pattern.min())/(total_intensity*2)
# show patern for one source at 0
michelson(p0=[0])
"""
Explanation: With a larger number of baselines, we can gather enough information to reconstruct an image of the sky. This is because each baseline essentially measures one Fourier component of the sky brightness distribution (Chapter 4 will explain this in more detail); and once we know the Fourier components, we can compute a Fourier transform in order to recover the sky image. The advent of sufficiently powerful computers in the late 1960s made this technique practical, and turned radio interferometers from exotic contraptions into generic imaging instruments. With a few notable exceptions, modern radio interferometry is aperture synthesis.
This concludes our introduction to radio interferometry; the rest of this course deals with aperture synthesis in detail. The remainder of this notebook consists of a few more interactive widgets that you can use to play with the toy dual-slit simulator.
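The idea can be sketched with a toy one-dimensional "sky" and a hand-rolled discrete Fourier transform (purely for illustration -- real aperture synthesis, with incomplete baseline coverage and gridding, is treated in later chapters):

```python
import cmath

# A made-up 1-D "sky" of 8 brightness samples.
sky = [0.0, 0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0]
N = len(sky)

# Each "baseline" k measures one Fourier component (one complex visibility):
vis = [sum(s * cmath.exp(-2j * cmath.pi * k * n / N) for n, s in enumerate(sky))
       for k in range(N)]

# With all the components measured, an inverse transform recovers the image:
image = [sum(v * cmath.exp(2j * cmath.pi * k * n / N)
             for k, v in enumerate(vis)).real / N
         for n in range(N)]
```

The recovered `image` matches the original `sky`; with only a subset of the components measured, the reconstruction would be correspondingly imperfect -- which is exactly why arrays strive for many baselines.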
Appendix: Recreating the Michelson interferometer
For completeness, let us modify the function above to make a more realistic interferometer. We'll implement two changes:
1. we'll put the light source infinitely far away, as an astronomical source should be
1. we'll change the light path to mimic the layout of a Michelson interferometer.
End of explanation
"""
# single source
interact(lambda position, intensity, baseline:
michelson(p0=[position], a0=[intensity], baseline=baseline, maxint=2),
position=(-5,5,.01),intensity=(.2,1,.01),baseline=(10,100,.01)) and None
"""
Explanation: We have modified the setup as follows. First, the source is now infinitely distant, so we define the source position in terms of the angle of arrival of the incoming wavefront (with 0 meaning on-axis, i.e. along the vertical axis). We now define the baseline in terms of wavelengths. The phase difference of the wavefront arriving at the two arms of the interferometer is completely defined in terms of the angle of arrival. The two "rays" entering the outer arms of the interferometer indicate the angle of arrival.
The rest of the optical path consists of a series of mirrors to bring the two signals together. Note that the frequency of the fringe pattern is now completely determined by the internal geometry of the instrument (i.e. the distances between the inner set of mirrors and the screen); however the relative phase of the pattern is determined by source angle. Use the sliders below to get a feel for this.
Note that we've also modified the function to print the "visibility", as originally defined by Michelson.
End of explanation
"""
interact(lambda position1,position2,intensity1,intensity2,baseline:
michelson(p0=[position1,position2], a0=[intensity1,intensity2], baseline=baseline, maxint=2),
position1=(-5,5,.01), position2=(-5,5,.01), intensity1=(.2,1,.01), intensity2=(.2,1,.01),
baseline=(10,100,.01)) and None
"""
Explanation: And here's the same experiment for two sources:
End of explanation
"""
arcsec = 1/3600.
interact(lambda extent_arcsec, baseline:
michelson(p0=[0], a0=[1], extent=extent_arcsec*arcsec, maxint=1,
baseline=baseline,fov=1*arcsec),
extent_arcsec=(0,0.1,0.001),
baseline=(1e+4,1e+7,1e+4)
) and None
"""
Explanation: A.1 The Betelgeuse size measurement
For fun, let us use our toy to re-create the Betelgeuse size measurement of 1920 by A.A. Michelson and F.G. Pease. Their experiment was set up as follows. The interferometer they constructed had movable outside mirrors, giving it a baseline that could be adjusted from a maximum of 6m downwards. Red light has a wavelength of ~650 nm; this gave them a maximum baseline of 10 million wavelengths.
For the experiment, they started with a baseline of 1m (1.5 million wavelengths), and verified that they could see fringes from Betelgeuse with the naked eye. They then adjusted the baseline up in small increments, until at 3m the fringes disappeared. From this, they inferred the diameter of Betelgeuse to be about 0.05".
You can repeat the experiment using the sliders below. You will probably find your toy Betelgeuse to be somewhat larger than 0.05". This is because our simulator is too simplistic -- in particular, it assumes a monochromatic source of light, which makes the fringes a lot sharper.
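A rough back-of-the-envelope version of their inference, assuming a uniform-disc source (for which the fringes first vanish when $B = 1.22\lambda/\theta$); this is our own illustrative reconstruction, not Michelson and Pease's actual data reduction:

```python
import math

wavelength = 650e-9  # red light, metres (the approximate figure above)
baseline = 3.0       # baseline at which the fringes disappeared, metres

# Uniform-disc fringes first vanish at B = 1.22 * lambda / theta, so:
theta_rad = 1.22 * wavelength / baseline
theta_arcsec = math.degrees(theta_rad) * 3600
```

This comes out at roughly 0.05 arcseconds, in line with the historical measurement.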
End of explanation
"""
GoogleCloudPlatform/openmrs-fhir-analytics | dwh/test_query_lib.ipynb | apache-2.0
from datetime import datetime
import pandas
from typing import List, Any
import pyspark.sql.functions as F
import query_lib
import indicator_lib
BASE_DIR='./test_files/parquet_big_db'
#CODE_SYSTEM='http://snomed.info/sct'
CODE_SYSTEM='http://www.ampathkenya.org'
# Note since this issue is resolved we don't need BASE_URL:
# https://github.com/GoogleCloudPlatform/openmrs-fhir-analytics/issues/55
#BASE_URL='http://localhost:8099/openmrs/ws/fhir2/R4/'
BASE_URL=''
"""
Explanation: Sample query library usage
This notebook loads data from Parquet files generated from the "big" test
database (i.e., the openmrs-fhir-mysql-ocl-big docker image). This dataset
has 7892 Patients, 396,650 Encounters, and 1,690,632 Observations. The
timings are on an Intel Xeon E5-1650 CPU (6 cores and 12 threads) with
64 GB of memory.
End of explanation
"""
patient_query = query_lib.patient_query_factory(
query_lib.Runner.SPARK, BASE_DIR, CODE_SYSTEM)
flat_enc_df = patient_query.get_patient_encounter_view(BASE_URL)
flat_enc_df.head()
flat_enc_df[flat_enc_df['locationId'].notna()].head()
"""
Explanation: Encounter view
Note the first time the patient_query object is created, it also
starts the Spark environment which takes some time.
The total time for this and loading Encounters is ~25 seconds.
End of explanation
"""
# Add encounter location constraint
patient_query.encounter_constraints(locationId=['58c57d25-8d39-41ab-8422-108a0c277d98'])
flat_enc_df = patient_query.get_patient_encounter_view(BASE_URL)
flat_enc_df.head()
flat_enc_df[flat_enc_df['encPatientId'] == '8295eb5b-fba6-4e83-a5cb-2817b135cd27']
flat_enc = patient_query._flatten_encounter('')
flat_enc.head().asDict()
"""
Explanation: Adding an encounter location constraint
End of explanation
"""
_VL_CODE = '856' # HIV VIRAL LOAD
_ARV_PLAN = '1255' # ANTIRETROVIRAL PLAN
end_date='2018-01-01'
start_date='1998-01-01'
old_start_date='1978-01-01'
# Creating a new `patient_query` to drop all previous constraints
# and recreate flat views.
patient_query = query_lib.patient_query_factory(
query_lib.Runner.SPARK, BASE_DIR, CODE_SYSTEM)
patient_query.include_obs_values_in_time_range(
_VL_CODE, min_time=start_date, max_time=end_date)
patient_query.include_obs_values_in_time_range(
_ARV_PLAN, min_time=start_date, max_time=end_date)
patient_query.include_all_other_codes(min_time=start_date, max_time=end_date)
# Note the first call to `find_patient_aggregates` starts a local Spark
# cluster, loads input files, and flattens observations. These won't be
# done in subsequent calls of this function on the same instance.
# Also, the same cluster will be reused for other instances of `PatientQuery`.
agg_df = patient_query.get_patient_obs_view(BASE_URL)
agg_df.head(10)
# Inspecting one specific patient.
agg_df[agg_df['patientId'] == '00c1426f-ca04-414a-8db7-043bb41b64d2'].head()
agg_df[(agg_df['code'] == '856') & (agg_df['min_date'] != agg_df['max_date'])][
['patientId', 'code', 'min_date', 'max_date', 'first_value_code', 'last_value_code']].head()
"""
Explanation: Observation view
Loading all Observation data needed for the view generation takes ~50 seconds.
End of explanation
"""
_DRUG1 = '1256' # START DRUGS
_DRUG2 = '1260' # STOP ALL MEDICATIONS
patient_query._obs_df.head().asDict()
exp_obs = patient_query._obs_df.withColumn('coding', F.explode('code.coding'))
exp_obs.head().asDict()
exp_obs.where('coding.code = "159800AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"').head().asDict()
exp_obs.where('coding.code = "1268"').head().asDict()
exp_obs.where(
'coding.system IN ("http://snomed.info/sct", "http://loinc.org", "http://www.ampathkenya.org") \
AND coding.display LIKE "%viral%" '
).groupBy(['coding']).agg(F.count('*')).head(20)
agg_df[(agg_df['code'] == _ARV_PLAN) & agg_df['last_value_code'].isin([_DRUG1, _DRUG2])].head()
agg_df[(agg_df['code'] == _ARV_PLAN) & agg_df['last_value_code'].isin([_DRUG1, _DRUG2])].index.size
agg_df[(agg_df['code'] == _ARV_PLAN) & agg_df['last_value_code'].isin([_DRUG1, _DRUG2])].groupby(
'patientId').count().index.size
indicator_lib.calc_TX_NEW(agg_df, ARV_plan=_ARV_PLAN, start_drug=[_DRUG1], end_date_str=end_date)
indicator_lib.calc_TX_PVLS(
agg_df, VL_code=_VL_CODE, failure_threshold=10000,
end_date_str=end_date)
"""
Explanation: Inspecting underlying Spark data-frames
The user of the library does not need to deal with the underlying distributed query processing system. However, the developer of the library needs an easy way to inspect the internal data of these systems. Here is how:
End of explanation
"""
patient_query._flat_obs.head().asDict()
agg_df[(agg_df['code'] == _VL_CODE)].head()
def _find_age_band(birth_date: str, end_date: datetime) -> str:
"""Given the birth date, finds the age_band for PEPFAR disaggregation."""
age = None
try:
# TODO handle all different formats (issues #174)
birth = datetime.strptime(birth_date, '%Y-%m-%d')
age = int((end_date - birth).days / 365.25)
except Exception as e:
common.custom_log('Invalid birth_date format: {}'.format(e))
age = 999999
if age == 999999:
return 'ERROR'
if age < 1:
return '0-1'
if age <= 4:
return '1-4'
if age <= 9:
return '5-9'
if age <= 14:
return '10-14'
if age <= 19:
return '15-19'
if age <= 24:
return '20-24'
if age <= 49:
return '25-49'
return '50+'
def _agg_buckets(birth_date: str, gender: str, end_date: datetime) -> List[str]:
"""Generates the list of all PEPFAR disaggregation buckets."""
age_band = _find_age_band(birth_date, end_date)
return [age_band + '_' + gender, 'ALL-AGES_' + gender,
age_band + '_ALL-GENDERS', 'ALL-AGES_ALL-GENDERS']
def calc_TX_PVLS(patient_agg_obs: pandas.DataFrame, VL_code: str,
failure_threshold: int, end_date_str: str = None) -> pandas.DataFrame:
"""Calculates TX_PVLS indicator with its corresponding disaggregations.
Args:
patient_agg_obs: An output from `patient_query.find_patient_aggregates()`.
VL_code: The code for viral load values.
failure_threshold: VL count threshold of failure.
end_date: The string representation of the last date as 'YYYY-MM-DD'.
Returns:
The aggregated DataFrame.
"""
end_date = datetime.today()
if end_date_str:
end_date = datetime.strptime(end_date_str, '%Y-%m-%d')
temp_df = patient_agg_obs[(patient_agg_obs['code'] == VL_code)].copy()
# Note the above copy is used to avoid setting a new column on a slice next:
temp_df['sup_VL'] = (temp_df['max_value'] < failure_threshold)
temp_df['buckets'] = temp_df.apply(
lambda x: _agg_buckets(x.birthDate, x.gender, end_date), axis=1)
temp_df_exp = temp_df.explode('buckets')
temp_df_exp = temp_df_exp.groupby(['sup_VL', 'buckets'], as_index=False)\
.count()[['sup_VL', 'buckets', 'patientId']]\
.rename(columns={'patientId': 'count'})
# calculate ratio
num_patients = len(temp_df.index)
temp_df_exp['ratio'] = temp_df_exp['count']/num_patients
return temp_df_exp
calc_TX_PVLS(agg_df, _VL_CODE, 10000, end_date_str='2020-12-30')
"""
Explanation: Indicator library development
This is an example to show how the indicator_lib.py functions can be incrementally developed based on the query library DataFrames.
End of explanation
"""
|
kkhenriquez/python-for-data-science | Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb | mit | from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import pandas as pd
import numpy as np
from itertools import cycle, islice
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
%matplotlib inline
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Clustering with scikit-learn
<br><br></p>
In this notebook, we will learn how to perform k-means clustering using scikit-learn in Python.
We will use cluster analysis to build a big-picture model of the weather at a local station from minute-granularity data. This dataset contains on the order of millions of records. How do we create 12 clusters out of them?
NOTE: The dataset we will use is in a large CSV file called minute_weather.csv. Please download it into the weather directory in your Week-7-MachineLearning folder. The download link is: https://drive.google.com/open?id=0B8iiZ7pSaSFZb3ItQ1l4LWRMTjg
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Importing the Necessary Libraries<br></p>
End of explanation
"""
data = pd.read_csv('./weather/minute_weather.csv')
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Creating a Pandas DataFrame from a CSV file<br><br></p>
End of explanation
"""
data.shape
data.head()
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold">Minute Weather Data Description</p>
<br>
The minute weather dataset comes from the same source as the daily weather dataset that we used in the decision-tree-based classifier notebook. The main difference between the two is that the minute weather dataset contains raw sensor measurements captured at one-minute intervals, whereas the daily weather dataset contained processed and well-curated data. The data is in the file minute_weather.csv, which is a comma-separated file.
As with the daily weather data, this data comes from a weather station located in San Diego, California. The weather station is equipped with sensors that capture weather-related measurements such as air temperature, air pressure, and relative humidity. Data was collected for a period of three years, from September 2011 to September 2014, to ensure that sufficient data for different seasons and weather conditions is captured.
Each row in minute_weather.csv contains weather data captured for a one-minute interval. Each row, or sample, consists of the following variables:
rowID: unique number for each row (Unit: NA)
hpwren_timestamp: timestamp of measure (Unit: year-month-day hour:minute:second)
air_pressure: air pressure measured at the timestamp (Unit: hectopascals)
air_temp: air temperature measure at the timestamp (Unit: degrees Fahrenheit)
avg_wind_direction: wind direction averaged over the minute before the timestamp (Unit: degrees, with 0 means coming from the North, and increasing clockwise)
avg_wind_speed: wind speed averaged over the minute before the timestamp (Unit: meters per second)
max_wind_direction: highest wind direction in the minute before the timestamp (Unit: degrees, with 0 being North and increasing clockwise)
max_wind_speed: highest wind speed in the minute before the timestamp (Unit: meters per second)
min_wind_direction: smallest wind direction in the minute before the timestamp (Unit: degrees, with 0 being North and increasing clockwise)
min_wind_speed: smallest wind speed in the minute before the timestamp (Unit: meters per second)
rain_accumulation: amount of accumulated rain measured at the timestamp (Unit: millimeters)
rain_duration: length of time rain has fallen as measured at the timestamp (Unit: seconds)
relative_humidity: relative humidity measured at the timestamp (Unit: percent)
End of explanation
"""
sampled_df = data[(data['rowID'] % 10) == 0].copy()  # copy so later column deletions act on a new frame, not a view
sampled_df.shape
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Data Sampling<br></p>
Lots of rows, so let us sample down by taking every 10th row. <br>
End of explanation
"""
sampled_df.describe().transpose()
sampled_df[sampled_df['rain_accumulation'] == 0].shape
sampled_df[sampled_df['rain_duration'] == 0].shape
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Statistics
<br><br></p>
End of explanation
"""
del sampled_df['rain_accumulation']
del sampled_df['rain_duration']
rows_before = sampled_df.shape[0]
sampled_df = sampled_df.dropna()
rows_after = sampled_df.shape[0]
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Delete the rain_accumulation and rain_duration Columns, then Drop Rows with Missing Values
<br><br></p>
End of explanation
"""
rows_before - rows_after
sampled_df.columns
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
How many rows did we drop?
<br><br></p>
End of explanation
"""
features = ['air_pressure', 'air_temp', 'avg_wind_direction', 'avg_wind_speed', 'max_wind_direction',
'max_wind_speed','relative_humidity']
select_df = sampled_df[features]
select_df.columns
select_df
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Select Features of Interest for Clustering
<br><br></p>
End of explanation
"""
X = StandardScaler().fit_transform(select_df)
X
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Scale the Features using StandardScaler
<br><br></p>
End of explanation
"""
kmeans = KMeans(n_clusters=12)
model = kmeans.fit(X)
print("model\n", model)
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Use k-Means Clustering
<br><br></p>
End of explanation
"""
centers = model.cluster_centers_
centers
"""
Explanation: <p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
What are the centers of the 12 clusters we formed?
<br><br></p>
End of explanation
"""
# Function that creates a DataFrame with a column for Cluster Number
def pd_centers(featuresUsed, centers):
colNames = list(featuresUsed)
colNames.append('prediction')
# Zip with a column called 'prediction' (index)
Z = [np.append(A, index) for index, A in enumerate(centers)]
# Convert to pandas data frame for plotting
P = pd.DataFrame(Z, columns=colNames)
P['prediction'] = P['prediction'].astype(int)
return P
# Function that creates Parallel Plots
def parallel_plot(data):
my_colors = list(islice(cycle(['b', 'r', 'g', 'y', 'k']), None, len(data)))
plt.figure(figsize=(15,8)).gca().axes.set_ylim([-3,+3])
parallel_coordinates(data, 'prediction', color = my_colors, marker='o')
P = pd_centers(features, centers)
P
"""
Explanation: <p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Plots
<br><br></p>
Let us first create some utility functions which will help us in plotting graphs:
End of explanation
"""
parallel_plot(P[P['relative_humidity'] < -0.5])
"""
Explanation: Dry Days
End of explanation
"""
parallel_plot(P[P['air_temp'] > 0.5])
"""
Explanation: Warm Days
End of explanation
"""
parallel_plot(P[(P['relative_humidity'] > 0.5) & (P['air_temp'] < 0.5)])
"""
Explanation: Cool Days
End of explanation
"""
|