Define a Galactocentric Coordinate Frame

We will start by defining a Galactocentric coordinate system using `astropy.coordinates`. We will adopt the latest parameter set for the solar Galactocentric position and velocity as implemented in Astropy, but note that these parameters are customizable by passing them into the `Galactocentric` class below (e.g., you could change the Sun-Galactic center distance by setting `galcen_distance=...`).
with coord.galactocentric_frame_defaults.set("v4.0"):
    galcen_frame = coord.Galactocentric()
galcen_frame
MIT
4-Science-case-studies/1-Computing-orbits-for-Gaia-stars.ipynb
CCADynamicsGroup/SummerSchoolWorkshops
Define the Solar Position and Velocity

In this coordinate system, the sun is along the $x$-axis (at a negative $x$ value), and the Galactic rotation at this position is in the $+y$ direction. The 3D position of the sun is therefore given by:
sun_xyz = u.Quantity(
    [-galcen_frame.galcen_distance, 0 * u.kpc, galcen_frame.z_sun]  # x, y, z
)
We can combine this with the solar velocity vector (defined in the `astropy.coordinates.Galactocentric` frame) to define the sun's phase-space position, which we will use as initial conditions shortly to compute the orbit of the Sun:
sun_vxyz = galcen_frame.galcen_v_sun
sun_vxyz

sun_w0 = gd.PhaseSpacePosition(pos=sun_xyz, vel=sun_vxyz)
To compute the sun's orbit, we need to specify a mass model for the Galaxy. Here we will use the default Milky Way mass model implemented in Gala, which is described in detail in the Gala documentation: [Defining a Milky Way model](define-milky-way-model.html). We will initialize the potential model with its default parameters:
mw_potential = gp.MilkyWayPotential()
mw_potential
This potential is composed of four mass components meant to represent simple models of the different structural components of the Milky Way:
for k, pot in mw_potential.items():
    print(f"{k}: {pot!r}")
With a potential model for the Galaxy and initial conditions for the sun, we can now compute the Sun's orbit using the default integrator (Leapfrog). We will integrate for 4 Gyr, which is about 16 orbital periods.
sun_orbit = mw_potential.integrate_orbit(sun_w0, dt=0.5 * u.Myr, t1=0, t2=4 * u.Gyr)
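The Leapfrog scheme used by default can be illustrated with a minimal pure-Python sketch; this is a hypothetical "kick-drift-kick" implementation on a toy 1-D harmonic potential, not Gala's actual integrator code, but the update rule is the same idea:

```python
def leapfrog(acc, x, v, dt, n_steps):
    """Kick-drift-kick leapfrog integration of dx/dt = v, dv/dt = acc(x)."""
    xs = [x]
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * acc(x)   # half kick
        x = x + dt * v_half              # full drift
        v = v_half + 0.5 * dt * acc(x)   # half kick
        xs.append(x)
    return xs, v

# Toy example: harmonic oscillator with a(x) = -x, started at x = 1, v = 0
xs, v = leapfrog(lambda x: -x, x=1.0, v=0.0, dt=0.01, n_steps=1000)

# Leapfrog is symplectic, so the energy E = v^2/2 + x^2/2 stays near its
# initial value (0.5 here) instead of drifting over long integrations.
energy = 0.5 * v**2 + 0.5 * xs[-1]**2
```

This long-term energy behavior is why symplectic integrators like Leapfrog are a common default for orbit integrations spanning many dynamical times.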
Let's plot the Sun's orbit in 3D to get a feel for the geometry of the orbit:
fig, ax = sun_orbit.plot_3d()

lim = (-12, 12)
ax.set(xlim=lim, ylim=lim, zlim=lim)
Retrieve Gaia Data for Kepler-444

As a comparison, we will compute the orbit of the exoplanet-hosting star Kepler-444. To get Gaia data for this star, we first have to retrieve its sky coordinates so that we can do a positional cross-match query on the Gaia catalog. We can retrieve the sky position of Kepler-444 from Simbad using the `SkyCoord.from_name()` classmethod, which queries Simbad under the hood to resolve the name:
star_sky_c = coord.SkyCoord.from_name("Kepler-444")
star_sky_c
We happen to know a priori that Kepler-444 has a large proper motion, so the sky position reported by Simbad could be off from the Gaia sky position (epoch 2016) by many arcseconds. To retrieve the Gaia data, we will use the [pyia](http://pyia.readthedocs.io/) package: we can pass in an ADQL query, which `pyia` sends to the Gaia science archive using `astroquery`, and the data are returned as a `pyia.GaiaData` object. To run the query, we will do a sky position cross-match with a large positional tolerance (a cross-match radius of 15 arcseconds), taking the brightest cross-matched source within this region as our match:
star_gaia = GaiaData.from_query(
    f"""
    SELECT TOP 1 * FROM gaiaedr3.gaia_source
    WHERE 1=CONTAINS(
        POINT('ICRS', {star_sky_c.ra.degree}, {star_sky_c.dec.degree}),
        CIRCLE('ICRS', ra, dec, {(15*u.arcsec).to_value(u.degree)})
    )
    ORDER BY phot_g_mean_mag
    """
)
star_gaia
We will assume (and hope!) that this source is Kepler-444; since it is fairly bright compared to a typical Gaia source, we should be safe. We can now use the returned `pyia.GaiaData` object to retrieve an astropy `SkyCoord` object with all of the position and velocity measurements taken from the Gaia archive record for this source:
star_gaia_c = star_gaia.get_skycoord()
star_gaia_c
To compute this star's Galactic orbit, we need to convert its observed, heliocentric (actually solar system barycentric) data into the Galactocentric coordinate frame we defined above. To do this, we will use the `astropy.coordinates` transformation framework via the `.transform_to()` method, passing in the `Galactocentric` frame we defined above:
star_galcen = star_gaia_c.transform_to(galcen_frame)
star_galcen
Let's print out the Cartesian position and velocity for Kepler-444:
print(star_galcen.cartesian)
print(star_galcen.velocity)
Now with Galactocentric position and velocity components for Kepler-444, we can create Gala initial conditions and compute its orbit on the time grid used to compute the Sun's orbit above:
star_w0 = gd.PhaseSpacePosition(star_galcen.data)
star_orbit = mw_potential.integrate_orbit(star_w0, t=sun_orbit.t)
We can now compare the orbit of Kepler-444 to the solar orbit we computed above. We will plot the two orbits in two projections: First in the $x$-$y$ plane (Cartesian positions), then in the *meridional plane*, showing the cylindrical $R$ and $z$ position dependence of the orbits:
fig, axes = plt.subplots(1, 2, figsize=(10, 5), constrained_layout=True)

sun_orbit.plot(["x", "y"], axes=axes[0])
star_orbit.plot(["x", "y"], axes=axes[0])
axes[0].set_xlim(-10, 10)
axes[0].set_ylim(-10, 10)

sun_orbit.cylindrical.plot(
    ["rho", "z"],
    axes=axes[1],
    auto_aspect=False,
    labels=["$R$ [kpc]", "$z$ [kpc]"],
    label="Sun",
)
star_orbit.cylindrical.plot(
    ["rho", "z"],
    axes=axes[1],
    auto_aspect=False,
    labels=["$R$ [kpc]", "$z$ [kpc]"],
    label="Kepler-444",
)
axes[1].set_xlim(0, 10)
axes[1].set_ylim(-5, 5)
axes[1].set_aspect("auto")
axes[1].legend(loc="best", fontsize=15)
Exercise: How does Kepler-444's orbit differ from the Sun's?

- What are the guiding center radii of the two orbits?
- What is the maximum $z$ height reached by each orbit?
- What are their eccentricities?
- Can you guess which star is older based on their kinematics?
- Which star do you think has a higher metallicity?

Exercise: Compute orbits for Monte Carlo sampled initial conditions using the Gaia error distribution

*Hint: Use the `pyia.GaiaData.get_error_samples()` method to generate samples from the Gaia error distribution.*

- Generate 128 samples from the error distribution
- Construct a `SkyCoord` object with all of these Monte Carlo samples
- Transform the error sample coordinates to the Galactocentric frame and define Gala initial conditions (a `PhaseSpacePosition` object)
- Compute orbits for all error samples using the same time grid we used above
- Compute the eccentricity and $L_z$ for all samples: what is the standard deviation of the eccentricity and $L_z$ values?
- With what fractional precision can we measure this star's eccentricity and $L_z$? (i.e., what is $\textrm{std}(e) / \textrm{mean}(e)$, and the same for $L_z$?)

Exercise: Comparing these orbits to the orbits of other Gaia stars

Retrieve Gaia data for a set of 100 random Gaia stars within 200 pc of the sun with measured radial velocities and well-measured parallaxes using the query:

    SELECT TOP 100 * FROM gaiaedr3.gaia_source
    WHERE dr2_radial_velocity IS NOT NULL AND
        parallax_over_error > 10 AND
        ruwe < 1.2 AND
        parallax > 5
    ORDER BY random_index
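For the eccentricity part of the first exercise, one common definition uses the pericenter and apocenter radii of the computed orbit (this radial definition is what Gala's `Orbit.eccentricity()` returns). A hypothetical pure-Python sketch, operating on a plain list of orbital radii rather than a Gala `Orbit` object:

```python
def radial_eccentricity(radii):
    """Eccentricity e = (r_apo - r_peri) / (r_apo + r_peri) from orbit radii."""
    r_peri, r_apo = min(radii), max(radii)
    return (r_apo - r_peri) / (r_apo + r_peri)

# Toy orbit oscillating between 7 and 9 kpc:
# e = (9 - 7) / (9 + 7) = 0.125
e = radial_eccentricity([8.0, 8.7, 9.0, 8.4, 7.3, 7.0, 7.5])
```

Note this is only reliable when the orbit has been integrated long enough to sample at least one full radial oscillation.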
# random_stars_g = ...
Compute orbits for these stars for the same time grid used above to compute the sun's orbit:
# random_stars_c = ...
# random_stars_galcen = ...
# random_stars_w0 = ...
# random_stars_orbits = ...
Plot the initial (present-day) positions of all of these stars in Galactocentric Cartesian coordinates. Now plot the orbits of these stars in the $x$-$y$ and $R$-$z$ planes:
fig, axes = plt.subplots(1, 2, figsize=(10, 5), constrained_layout=True)

random_stars_orbits.plot(["x", "y"], axes=axes[0])
axes[0].set_xlim(-15, 15)
axes[0].set_ylim(-15, 15)

random_stars_orbits.cylindrical.plot(
    ["rho", "z"],
    axes=axes[1],
    auto_aspect=False,
    labels=["$R$ [kpc]", "$z$ [kpc]"],
)
axes[1].set_xlim(0, 15)
axes[1].set_ylim(-5, 5)
axes[1].set_aspect("auto")
Compute maximum $z$ heights ($z_\textrm{max}$) and eccentricities for all of these orbits. Compare the Sun, Kepler-444, and this random sampling of nearby stars. Where do the Sun and Kepler-444 sit relative to the random sample of nearby stars in terms of $z_\textrm{max}$ and eccentricity? (Hint: plot $z_\textrm{max}$ vs. eccentricity and highlight the Sun and Kepler-444!) Are either of them outliers in any way?
# rand_zmax = ...
# rand_ecc = ...

fig, ax = plt.subplots(figsize=(8, 6))

ax.scatter(
    rand_ecc, rand_zmax, color="k", alpha=0.4, s=14, lw=0, label="random nearby stars"
)
ax.scatter(sun_orbit.eccentricity(), sun_orbit.zmax(), color="tab:orange", label="Sun")
ax.scatter(
    star_orbit.eccentricity(), star_orbit.zmax(), color="tab:cyan", label="Kepler-444"
)

ax.legend(loc="best", fontsize=14)
ax.set_xlabel("eccentricity, $e$")
ax.set_ylabel(r"max. $z$ height, $z_{\rm max}$ [kpc]")
Image Denoising with Autoencoders

Introduction and Importing Libraries

___Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and then selecting Kernel > Restart and Run All___
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.callbacks import EarlyStopping, LambdaCallback
from tensorflow.keras.utils import to_categorical

%matplotlib inline
MIT
Image_Noise_Reduction.ipynb
Hevenicio/Image-Noise-Reduction-with-Auto-encoders-using-TensorFlow
Data Preprocessing
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.astype('float')/255.
x_test = x_test.astype('float')/255.

x_train = np.reshape(x_train, (60000, 784))
x_test = np.reshape(x_test, (10000, 784))
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step
Adding Noise
x_train_noisy = x_train + np.random.rand(60000, 784)*0.9
x_test_noisy = x_test + np.random.rand(10000, 784)*0.9

x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)

def Plot(x, p, labels = False):
    plt.figure(figsize = (20, 2))
    for i in range(10):
        plt.subplot(1, 10, i + 1)
        plt.imshow(x[i].reshape(28, 28), cmap = 'viridis')
        plt.xticks([])
        plt.yticks([])
        if labels:
            plt.xlabel(np.argmax(p[i]))
    plt.show()

Plot(x_train, None)
Plot(x_train_noisy, None)
Building and Training a Classifier
classifier = Sequential([
    Dense(256, activation = 'relu', input_shape = (784,)),
    Dense(256, activation = 'relu'),
    Dense(256, activation = 'softmax')
])
classifier.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
classifier.fit(x_train, y_train, batch_size = 512, epochs = 3)

# accuracy on the clean test images
loss, acc = classifier.evaluate(x_test, y_test)
print(acc)

# accuracy on the noisy test images
loss, acc = classifier.evaluate(x_test_noisy, y_test)
print(acc)
313/313 [==============================] - 0s 1ms/step - loss: 11.9475 - accuracy: 0.1621 0.16210000216960907
Building the Autoencoder
input_image = Input(shape = (784,))
encoded = Dense(64, activation = 'relu')(input_image)
decoded = Dense(784, activation = 'sigmoid')(encoded)

autoencoder = Model(input_image, decoded)
autoencoder.compile(loss = 'binary_crossentropy', optimizer = 'adam')
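The two Dense layers above squeeze each 784-pixel image through a 64-unit bottleneck and expand it back. A schematic pure-Python forward pass illustrates the same shape pattern with toy sizes (4 -> 2 -> 4 instead of 784 -> 64 -> 784; the random weights are placeholders, not trained values):

```python
import math, random

def dense(x, W, b, act):
    """One fully connected layer: y_j = act(sum_i x_i * W[i][j] + b_j)."""
    return [act(sum(xi * W[i][j] for i, xi in enumerate(x)) + b[j])
            for j in range(len(b))]

relu = lambda z: max(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

random.seed(0)
n_in, n_hidden = 4, 2
W_enc = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_in)]
W_dec = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]

x = [0.9, 0.1, 0.4, 0.7]                            # "noisy" input in [0, 1]
code = dense(x, W_enc, [0.0] * n_hidden, relu)      # 4 -> 2 bottleneck
recon = dense(code, W_dec, [0.0] * n_in, sigmoid)   # 2 -> 4 reconstruction
```

The sigmoid on the decoder keeps every reconstructed pixel in (0, 1), matching the normalized image range, which is also why binary cross-entropy is a sensible reconstruction loss here.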
Training the Autoencoder
autoencoder.fit(
    x_train_noisy, x_train,
    epochs = 100,
    batch_size = 512,
    validation_split = 0.2,
    verbose = False,
    callbacks = [
        EarlyStopping(monitor = 'val_loss', patience = 5),
        LambdaCallback(on_epoch_end = lambda e, l: print('{:.3f}'.format(l['val_loss']), end = ' _ '))
    ]
)

print(' _ ')
print('Training is complete!')
0.261 _ 0.236 _ 0.204 _ 0.187 _ 0.176 _ 0.166 _ 0.158 _ 0.151 _ 0.146 _ 0.141 _ 0.138 _ 0.134 _ 0.132 _ 0.129 _ 0.127 _ 0.125 _ 0.123 _ 0.122 _ 0.121 _ 0.119 _ 0.118 _ 0.117 _ 0.117 _ 0.116 _ 0.115 _ 0.115 _ 0.114 _ 0.114 _ 0.113 _ 0.113 _ 0.113 _ 0.113 _ 0.112 _ 0.112 _ 0.112 _ 0.112 _ 0.112 _ 0.112 _ 0.112 _ 0.112 _ 0.112 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.111 _ 0.110 _ 0.110 _ 0.110 _ _ Training is complete!
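The `EarlyStopping(patience = 5)` callback above essentially keeps a patience counter on the validation loss. A minimal sketch of that logic (simplified: the real Keras callback also supports `min_delta`, weight restoring, and other options):

```python
def stopping_epoch(val_losses, patience=5):
    """Return the epoch at which training stops: after `patience` epochs
    with no improvement over the best validation loss seen so far."""
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0       # new best: reset the counter
        else:
            wait += 1                  # no improvement this epoch
            if wait >= patience:
                return epoch
    return len(val_losses) - 1         # never triggered: run all epochs

# Loss improves through epoch 3, then plateaus -> stop 5 epochs later
stop = stopping_epoch([0.26, 0.23, 0.20, 0.19] + [0.19] * 10)
```

This is why the run above ends well before the 100-epoch limit once the validation loss flattens out near 0.111.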
Denoised Images
preds = autoencoder.predict(x_test_noisy)

Plot(x_test_noisy, None)
Plot(preds, None)

loss, acc = classifier.evaluate(preds, y_test)
print(acc)
313/313 [==============================] - 0s 2ms/step - loss: 0.2170 - accuracy: 0.9334 0.9333999752998352
Composite Model
input_image = Input(shape=(784,))
x = autoencoder(input_image)
y = classifier(x)

denoise_and_classify = Model(input_image, y)

predictions = denoise_and_classify.predict(x_test_noisy)

Plot(x_test_noisy, predictions, True)
Plot(x_test, to_categorical(y_test), True)
Text Annotation Example
%pip install transformers==4.17.0 -qq

!git clone https://github.com/AMontgomerie/bulgarian-nlp
%cd bulgarian-nlp
Cloning into 'bulgarian-nlp'... remote: Enumerating objects: 141, done. remote: Counting objects: 100% (141/141), done. remote: Compressing objects: 100% (126/126), done. remote: Total 141 (delta 62), reused 9 (delta 2), pack-reused 0 Receiving objects: 100% (141/141), 70.39 KiB | 1.68 MiB/s, done. Resolving deltas: 100% (62/62), done. /content/bulgarian-nlp
MIT
examples/text_annotator_example.ipynb
iarfmoose/bulgarian-nlp
First we create an instance of the annotator.
from annotation.annotators import TextAnnotator

annotator = TextAnnotator()
Next we create an example input and pass it as an argument to the annotator.
example_input = 'България е член на ЕС.'
annotations = annotator(example_input)
annotations
As you can see, the raw output is a dictionary of tokens and corresponding tags. To make it more readable, let's display the token-level output as a dataframe.
import pandas as pd

tokens = [t["text"] for t in annotations["tokens"]]
pos_tags = [t["pos"] for t in annotations["tokens"]]
entity_tags = [t["entity"] for t in annotations["tokens"]]

df = pd.DataFrame({"token": tokens, "pos": pos_tags, "entity": entity_tags})
df
For more information about the meanings of the POS tags, see https://universaldependencies.org/u/pos/

The sentence-level entities are also available in `annotations["entities"]`:
for entity in annotations["entities"]:
    print(f"{entity['text']}: {entity['type']}")
България: LOCATION ЕС: ORGANISATION
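These sentence-level entities amount to grouping runs of identical token-level entity tags into spans. A hypothetical pure-Python sketch of that grouping, assuming a simple per-token labeling where "O" marks tokens outside any entity (the actual bulgarian-nlp implementation may differ):

```python
def group_entities(tokens, tags):
    """Merge consecutive identical non-'O' tags into (text, type) spans."""
    spans, current, current_tag = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag == current_tag and tag != "O":
            current.append(tok)            # extend the current entity span
        else:
            if current:
                spans.append((" ".join(current), current_tag))
            current, current_tag = ([tok], tag) if tag != "O" else ([], None)
    if current:
        spans.append((" ".join(current), current_tag))
    return spans

spans = group_entities(
    ["България", "е", "член", "на", "ЕС", "."],
    ["LOCATION", "O", "O", "O", "ORGANISATION", "O"],
)
```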
Lab 04: Train vanilla neural network -- solution

Training a one-layer net on FASHION-MNIST
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
    from google.colab import drive
    drive.mount('/content/gdrive')
    file_name = 'train_vanilla_nn_solution.ipynb'
    import subprocess
    path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
    print(path_to_file)
    path_to_file = path_to_file.replace(file_name, "").replace('\n', "")
    os.chdir(path_to_file)
    !pwd

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from random import randint
import utils
MIT
codes/labs_lecture03/lab04_train_vanilla_nn/train_vanilla_nn_solution.ipynb
alanwuha/CE7454_2019
Download the TRAINING SET (data+labels)
from utils import check_fashion_mnist_dataset_exists
data_path = check_fashion_mnist_dataset_exists()

train_data = torch.load(data_path + 'fashion-mnist/train_data.pt')
train_label = torch.load(data_path + 'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
torch.Size([60000, 28, 28]) torch.Size([60000])
Download the TEST SET (data only)
test_data = torch.load(data_path + 'fashion-mnist/test_data.pt')
print(test_data.size())
torch.Size([10000, 28, 28])
Make a one layer net class
class one_layer_net(nn.Module):

    def __init__(self, input_size, output_size):
        super(one_layer_net, self).__init__()
        self.linear_layer = nn.Linear(input_size, output_size, bias=False)

    def forward(self, x):
        y = self.linear_layer(x)
        prob = F.softmax(y, dim=1)
        return prob
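The `F.softmax` call turns the linear layer's raw scores into a probability vector over the 10 classes. A minimal pure-Python sketch of the same computation (with the usual max-subtraction trick for numerical stability, which leaves the result unchanged):

```python
import math

def softmax(scores):
    """softmax(z)_i = exp(z_i) / sum_j exp(z_j), shifted by max(z) for stability."""
    m = max(scores)
    exps = [math.exp(z - m) for z in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three raw scores -> three probabilities that sum to 1,
# with the largest score getting the largest probability.
probs = softmax([2.0, 1.0, 0.1])
```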
Build the net
net = one_layer_net(784, 10)
print(net)
one_layer_net( (linear_layer): Linear(in_features=784, out_features=10, bias=False) )
Take the image at index 4 of the test set:
im = test_data[4]
utils.show(im)
And feed it to the UNTRAINED network:
p = net(im.view(1, 784))
print(p)
tensor([[0.1320, 0.0970, 0.0802, 0.0831, 0.1544, 0.0777, 0.1040, 0.1219, 0.0820, 0.0678]], grad_fn=<SoftmaxBackward>)
Display visually the confidence scores
utils.show_prob_fashion_mnist(p)
Train the network (only 5000 iterations) on the train set
criterion = nn.NLLLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

for iter in range(1, 5000):

    # choose a random integer between 0 and 59,999,
    # extract the corresponding picture and label,
    # and reshape them to fit the network
    idx = randint(0, 60000-1)
    input = train_data[idx].view(1, 784)
    label = train_label[idx].view(1)

    # feed the input to the net
    input.requires_grad_()
    prob = net(input)

    # update the weights (all the magic happens here -- we will discuss it later)
    log_prob = torch.log(prob)
    loss = criterion(log_prob, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
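Each pass through the loop above performs one stochastic gradient step, w <- w - lr * grad. The same update rule in isolation, minimizing a toy scalar loss (w - 3)^2 whose gradient is 2(w - 3), shows the "magic" in `optimizer.step()` without any of the PyTorch machinery:

```python
def sgd_minimize(grad, w, lr=0.1, n_steps=100):
    """Plain gradient descent: repeatedly step opposite the gradient."""
    for _ in range(n_steps):
        w = w - lr * grad(w)
    return w

# Minimize (w - 3)^2; the iterates converge geometrically toward w = 3.
w = sgd_minimize(lambda w: 2.0 * (w - 3.0), w=0.0)
```

In the real loop the "gradient" differs from sample to sample (hence *stochastic* gradient descent), but each individual update has exactly this form.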
Take the image at index 34 of the test set:
im = test_data[34]
utils.show(im)
Feed it to the TRAINED net:
p = net(im.view(1, 784))
print(p)
tensor([[2.3781e-04, 8.4407e-06, 6.5949e-03, 6.4070e-03, 5.8398e-03, 3.5421e-02, 5.3267e-03, 5.8309e-04, 9.3951e-01, 6.6500e-05]], grad_fn=<SoftmaxBackward>)
Display visually the confidence scores
utils.show_prob_fashion_mnist(p)
Choose an image at random from the test set and see how good or bad the predictions are:
# choose a picture at random
idx = randint(0, 10000-1)
im = test_data[idx]

# display the picture
utils.show(im)

# feed it to the net and display the confidence scores
prob = net(im.view(1, 784))
utils.show_prob_fashion_mnist(prob)
# References:
# https://machinelearningmastery.com/how-to-develop-a-convolutional-neural-network-from-scratch-for-mnist-handwritten-digit-classification/
# https://towardsdatascience.com/convolutional-neural-networks-for-beginners-using-keras-and-tensorflow-2-c578f7b3bf25
# https://github.com/jorditorresBCN/python-deep-learning/blob/master/08_redes_neuronales_convolucionales.ipynb

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# import the dataset (the images of the MNIST database)
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# inspect the dataset
print('train images original shape:', train_images.shape)
print('train labels original shape:', train_labels.shape)

plt.rcParams.update({'font.size': 14})
plt.figure(figsize=(8, 4))
for i in range(2*4):
    plt.subplot(2, 4, i+1)
    plt.xticks([]); plt.yticks([])
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(str(train_labels[i]))
plt.show()

# prepare the dataset
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255

# inspect the prepared data
print('train images new shape:', train_images.shape)

N_class = 10

# create the neural network
model = tf.keras.Sequential(name='rede_IF_CNN_MNIST')

# add the layers
model.add(tf.keras.layers.Conv2D(12, (5, 5), activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(24, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(N_class, activation='softmax'))

# compile the network
opt = tf.keras.optimizers.Adam(learning_rate=0.002)
model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()

# train the network
history = model.fit(train_images, train_labels, epochs=8, verbose=1)

# show the training performance of the network
plt.figure()
plt.subplot(2, 1, 1); plt.semilogy(history.history['loss'], 'k')
plt.legend(['loss'])
plt.subplot(2, 1, 2); plt.plot(history.history['accuracy'], 'k')
plt.legend(['accuracy'])
plt.tight_layout()

# test the network with the test data
pred = model.predict(test_images)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\n test set accuracy: ', test_acc)

# find the class with the highest probability
labels_pred = np.argmax(pred, axis=1)

# show 15 expected results and the predictions side by side
print('data and pred = \n', np.concatenate(
    (test_labels[None].T[0:15], labels_pred[None].T[0:15]), axis=1))
train imagens original shape: (60000, 28, 28) train labels original shape: (60000,)
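The layer output sizes reported by `model.summary()` follow from the standard "valid" convolution and non-overlapping pooling formulas. A small sketch checking the spatial shapes of this particular network:

```python
def conv_out(size, kernel):
    """'valid' convolution, stride 1: output = size - kernel + 1."""
    return size - kernel + 1

def pool_out(size, pool):
    """Non-overlapping max pooling: output = size // pool."""
    return size // pool

s = conv_out(28, 5)   # after Conv2D(12, (5, 5)):     28 - 5 + 1 = 24
s = pool_out(s, 2)    # after MaxPooling2D((2, 2)):   24 // 2    = 12
s = conv_out(s, 3)    # after Conv2D(24, (3, 3)):     12 - 3 + 1 = 10
s = pool_out(s, 2)    # after MaxPooling2D((2, 2)):   10 // 2    = 5
flat = s * s * 24     # Flatten: 5 * 5 * 24 = 600 units into the Dense layer
```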
MIT
IA_ConvNN_classificacao_MNIST.ipynb
TerradasExatas/Introdu-o-IA-e-Machine-Learning
Note: Notebook was updated July 2, 2019 with bug fixes. If you were working on the older version:
* Please click on the "Coursera" icon in the top right to open up the folder directory.
* Navigate to the folder: Week 3/ Planar data classification with one hidden layer. You can see your prior work in version 5: Planar data classification with one hidden layer v5.ipynb

List of bug fixes and enhancements
* Clarifies that the classifier will learn to classify regions as either red or blue.
* compute_cost function fixes np.squeeze by casting it as a float.
* compute_cost instructions clarify the purpose of np.squeeze.
* compute_cost clarifies that the "parameters" parameter is not needed, but is kept in the function definition until the auto-grader is also updated.
* nn_model removes extraction of parameter values, as the entire parameter dictionary is passed to the invoked functions.

Planar data classification with one hidden layer

Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.

**You will learn how to:**
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross-entropy loss
- Implement forward and backward propagation

1 - Packages

Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provides various useful functions used in this assignment
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets

%matplotlib inline

np.random.seed(1)  # set a seed so that the results are consistent
MIT
neural_networks_and_deep_learning/Week 3/Planar data classification with one hidden layer/Planar_data_classification_with_onehidden_layer_v6b.ipynb
shengfeng/coursera_deep_learning
2 - Dataset

First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
X, Y = load_planar_dataset()
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.
# Visualize the data: plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red: 0, blue: 1).

Let's first get a better sense of what our data is like.

**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?

**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = X.shape[1]  # training set size
### END CODE HERE ###

print('The shape of X is: ' + str(shape_X))
print('The shape of Y is: ' + str(shape_Y))
print('I have m = %d training examples!' % (m))
The shape of X is: (2, 400) The shape of Y is: (1, 400) I have m = 400 training examples!
**Expected Output**:

| | |
|---|---|
| **shape of X** | (2, 400) |
| **shape of Y** | (1, 400) |
| **m** | 400 |

3 - Simple Logistic Regression

Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
# Train the logistic regression classifier clf = sklearn.linear_model.LogisticRegressionCV(); clf.fit(X.T, Y.T);
You can now plot the decision boundary of these models. Run the code below.
# Plot the decision boundary for logistic regression plot_decision_boundary(lambda x: clf.predict(x), X, Y) plt.title("Logistic Regression") # Print accuracy LR_predictions = clf.predict(X.T) print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) + '% ' + "(percentage of correctly labelled datapoints)")
Accuracy of logistic regression: 47 % (percentage of correctly labelled datapoints)
**Expected Output**: **Accuracy** 47% **Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! 4 - Neural Network model Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.**Here is our model**:**Mathematically**:For one example $x^{(i)}$:$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$ $$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$Given the predictions on all the examples, you can also compute the cost $J$ as follows: $$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$**Reminder**: The general methodology to build a Neural Network is to: 1. Define the neural network structure (# of input units, # of hidden units, etc.) 2. Initialize the model's parameters 3. Loop: - Implement forward propagation - Compute loss - Implement backward propagation to get the gradients - Update parameters (gradient descent) You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data. 4.1 - Defining the neural network structure **Exercise**: Define three variables: - n_x: the size of the input layer - n_h: the size of the hidden layer (set this to 4) - n_y: the size of the output layer **Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
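Before tackling the graded cells below, it can help to trace equations (1)-(4) once on a tiny random example. This is only an illustrative sketch with made-up toy sizes (n_x = 2, n_h = 4, n_y = 1, m = 3), not the graded solution:

```python
import numpy as np

rng = np.random.RandomState(0)
n_x, n_h, n_y, m = 2, 4, 1, 3        # toy sizes: 3 training examples

x = rng.randn(n_x, m)                # stacked inputs, shape (n_x, m)
W1 = rng.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = rng.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))

Z1 = np.dot(W1, x) + b1              # equation (1); b1 broadcasts over the m columns
A1 = np.tanh(Z1)                     # equation (2)
Z2 = np.dot(W2, A1) + b2             # equation (3)
A2 = 1.0 / (1.0 + np.exp(-Z2))       # equation (4): sigmoid, each entry lies in (0, 1)
```

Note how processing all m examples at once is just a matter of stacking them as columns of `x`; the same matrix expressions apply.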
# GRADED FUNCTION: layer_sizes def layer_sizes(X, Y): """ Arguments: X -- input dataset of shape (input size, number of examples) Y -- labels of shape (output size, number of examples) Returns: n_x -- the size of the input layer n_h -- the size of the hidden layer n_y -- the size of the output layer """ ### START CODE HERE ### (≈ 3 lines of code) n_x = None # size of input layer n_h = None n_y = None # size of output layer ### END CODE HERE ### return (n_x, n_h, n_y) X_assess, Y_assess = layer_sizes_test_case() (n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess) print("The size of the input layer is: n_x = " + str(n_x)) print("The size of the hidden layer is: n_h = " + str(n_h)) print("The size of the output layer is: n_y = " + str(n_y))
The size of the input layer is: n_x = 5 The size of the hidden layer is: n_h = 4 The size of the output layer is: n_y = 2
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded). **n_x** 5 **n_h** 4 **n_y** 2 4.2 - Initialize the model's parameters **Exercise**: Implement the function `initialize_parameters()`.**Instructions**:- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.- You will initialize the weights matrices with random values. - Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).- You will initialize the bias vectors as zeros. - Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
# GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): """ Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: params -- python dictionary containing your parameters: W1 -- weight matrix of shape (n_h, n_x) b1 -- bias vector of shape (n_h, 1) W2 -- weight matrix of shape (n_y, n_h) b2 -- bias vector of shape (n_y, 1) """ np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random. ### START CODE HERE ### (≈ 4 lines of code) W1 = None b1 = None W2 = None b2 = None ### END CODE HERE ### assert (W1.shape == (n_h, n_x)) assert (b1.shape == (n_h, 1)) assert (W2.shape == (n_y, n_h)) assert (b2.shape == (n_y, 1)) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters n_x, n_h, n_y = initialize_parameters_test_case() parameters = initialize_parameters(n_x, n_h, n_y) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"]))
W1 = [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] b1 = [[ 0.] [ 0.] [ 0.] [ 0.]] W2 = [[-0.01057952 -0.00909008 0.00551454 0.02292208]] b2 = [[ 0.]]
**Expected Output**: **W1** [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.01057952 -0.00909008 0.00551454 0.02292208]] **b2** [[ 0.]] 4.3 - The Loop **Question**: Implement `forward_propagation()`.**Instructions**:- Look above at the mathematical representation of your classifier.- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.- You can use the function `np.tanh()`. It is part of the numpy library.- The steps you have to implement are: 1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`. 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
# GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Argument: X -- input data of size (n_x, m) parameters -- python dictionary containing your parameters (output of initialization function) Returns: A2 -- The sigmoid output of the second activation cache -- a dictionary containing "Z1", "A1", "Z2" and "A2" """ # Retrieve each parameter from the dictionary "parameters" ### START CODE HERE ### (≈ 4 lines of code) W1 = None b1 = None W2 = None b2 = None ### END CODE HERE ### # Implement Forward Propagation to calculate A2 (probabilities) ### START CODE HERE ### (≈ 4 lines of code) Z1 = None A1 = None Z2 = None A2 = None ### END CODE HERE ### assert(A2.shape == (1, X.shape[1])) cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2} return A2, cache X_assess, parameters = forward_propagation_test_case() A2, cache = forward_propagation(X_assess, parameters) # Note: we use the mean here just to make sure that your output matches ours. print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
0.262818640198 0.091999045227 -1.30766601287 0.212877681719
**Expected Output**: 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.**Instructions**:- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented $- \sum\limits_{i=1}^{m} y^{(i)}\log(a^{[2](i)})$:

```python
logprobs = np.multiply(np.log(A2), Y)
cost = - np.sum(logprobs)   # no need to use a for loop!
```

(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`). Note that if you use `np.multiply` followed by `np.sum` the end result will be a type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array. We can use `np.squeeze()` to remove redundant dimensions (in the case of a single float, this will be reduced to a zero-dimension array). We can cast the array as a type `float` using `float()`.
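To see why both routes from the instructions agree, here is a hedged, self-contained toy check using made-up probabilities and labels rather than the assignment's actual `A2` and `Y`:

```python
import numpy as np

A2 = np.array([[0.8, 0.3, 0.6]])   # toy predicted probabilities
Y = np.array([[1, 0, 1]])          # toy labels
m = Y.shape[1]

# Route 1: np.multiply + np.sum -> a plain scalar
logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
cost_mult = -np.sum(logprobs) / m

# Route 2: np.dot -> a (1, 1) array; np.squeeze + float() flattens it
cost_dot = -(np.dot(Y, np.log(A2).T) + np.dot(1 - Y, np.log(1 - A2).T)) / m
cost_dot = float(np.squeeze(cost_dot))
```

Both routes compute the same number; the only difference is the container the result comes back in.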
# GRADED FUNCTION: compute_cost def compute_cost(A2, Y, parameters): """ Computes the cross-entropy cost given in equation (13) Arguments: A2 -- The sigmoid output of the second activation, of shape (1, number of examples) Y -- "true" labels vector of shape (1, number of examples) parameters -- python dictionary containing your parameters W1, b1, W2 and b2 [Note that the parameters argument is not used in this function, but the auto-grader currently expects this parameter. Future version of this notebook will fix both the notebook and the auto-grader so that `parameters` is not needed. For now, please include `parameters` in the function signature, and also when invoking this function.] Returns: cost -- cross-entropy cost given equation (13) """ m = Y.shape[1] # number of example # Compute the cross-entropy cost ### START CODE HERE ### (≈ 2 lines of code) logprobs = None cost = None ### END CODE HERE ### cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect. # E.g., turns [[17]] into 17 assert(isinstance(cost, float)) return cost A2, Y_assess, parameters = compute_cost_test_case() print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
cost = 0.6930587610394646
**Expected Output**: **cost** 0.693058761... Using the cache computed during forward propagation, you can now implement backward propagation.**Question**: Implement the function `backward_propagation()`.**Instructions**:Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. <!--$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$- Note that $*$ denotes elementwise multiplication.- The notation you will use is common in deep learning coding: - dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$ - db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$ - dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$ - db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$ !-->- Tips: - To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute $g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
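The tip that $g^{[1]'}(z) = 1 - a^2$ for tanh can be sanity-checked numerically with a central finite difference. This is an illustrative check only, separate from the graded function:

```python
import numpy as np

z = np.linspace(-2.0, 2.0, 9)
a = np.tanh(z)

analytic = 1 - np.power(a, 2)                            # the 1 - a^2 formula
h = 1e-6
numeric = (np.tanh(z + h) - np.tanh(z - h)) / (2 * h)    # central difference
```

The two arrays agree to high precision, confirming the derivative you will use for dZ1.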
# GRADED FUNCTION: backward_propagation def backward_propagation(parameters, cache, X, Y): """ Implement the backward propagation using the instructions above. Arguments: parameters -- python dictionary containing our parameters cache -- a dictionary containing "Z1", "A1", "Z2" and "A2". X -- input data of shape (2, number of examples) Y -- "true" labels vector of shape (1, number of examples) Returns: grads -- python dictionary containing your gradients with respect to different parameters """ m = X.shape[1] # First, retrieve W1 and W2 from the dictionary "parameters". ### START CODE HERE ### (≈ 2 lines of code) W1 = None W2 = None ### END CODE HERE ### # Retrieve also A1 and A2 from dictionary "cache". ### START CODE HERE ### (≈ 2 lines of code) A1 = None A2 = None ### END CODE HERE ### # Backward propagation: calculate dW1, db1, dW2, db2. ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above) dZ2 = None dW2 = None db2 = None dZ1 = None dW1 = None db1 = None ### END CODE HERE ### grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2} return grads parameters, cache, X_assess, Y_assess = backward_propagation_test_case() grads = backward_propagation(parameters, cache, X_assess, Y_assess) print ("dW1 = "+ str(grads["dW1"])) print ("db1 = "+ str(grads["db1"])) print ("dW2 = "+ str(grads["dW2"])) print ("db2 = "+ str(grads["db2"]))
dW1 = [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] db1 = [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] dW2 = [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] db2 = [[-0.16655712]]
**Expected output**: **dW1** [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] **db1** [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] **dW2** [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] **db2** [[-0.16655712]] **Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
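The update rule, and the converging/diverging behavior mentioned in the illustration, can be reproduced in one dimension with $J(\theta) = \theta^2$, whose gradient is $2\theta$. This is a toy sketch, not the graded `update_parameters`:

```python
# Good learning rate: theta shrinks toward the minimum at 0
theta = 5.0
for _ in range(100):
    grad = 2 * theta                 # dJ/dtheta for J(theta) = theta**2
    theta = theta - 0.1 * grad       # theta <- theta - alpha * grad

# Bad (too large) learning rate: each update overshoots and |theta| grows
theta_bad = 5.0
for _ in range(10):
    theta_bad = theta_bad - 1.1 * (2 * theta_bad)
```

With alpha = 0.1 each step multiplies theta by 0.8, so it decays geometrically; with alpha = 1.1 each step multiplies it by -1.2, so the iterates oscillate and blow up.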
# GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate = 1.2): """ Updates parameters using the gradient descent update rule given above Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients Returns: parameters -- python dictionary containing your updated parameters """ # Retrieve each parameter from the dictionary "parameters" ### START CODE HERE ### (≈ 4 lines of code) W1 = None b1 = None W2 = None b2 = None ### END CODE HERE ### # Retrieve each gradient from the dictionary "grads" ### START CODE HERE ### (≈ 4 lines of code) dW1 = None db1 = None dW2 = None db2 = None ## END CODE HERE ### # Update rule for each parameter ### START CODE HERE ### (≈ 4 lines of code) W1 = None b1 = None W2 = None b2 = None ### END CODE HERE ### parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"]))
W1 = [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]] b1 = [[ 1.21732533e-05] [ 2.12263977e-05] [ 1.36755874e-05] [ 1.05251698e-05]] W2 = [[-0.01041081 -0.04463285 0.01758031 0.04747113]] b2 = [[ 0.00010457]]
**Expected Output**: **W1** [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]] **b1** [[ -1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [ -3.20136836e-06]] **W2** [[-0.01041081 -0.04463285 0.01758031 0.04747113]] **b2** [[ 0.00010457]] 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() **Question**: Build your neural network model in `nn_model()`.**Instructions**: The neural network model has to use the previous functions in the right order.
# GRADED FUNCTION: nn_model def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False): """ Arguments: X -- dataset of shape (2, number of examples) Y -- labels of shape (1, number of examples) n_h -- size of the hidden layer num_iterations -- Number of iterations in gradient descent loop print_cost -- if True, print the cost every 1000 iterations Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ np.random.seed(3) n_x = layer_sizes(X, Y)[0] n_y = layer_sizes(X, Y)[2] # Initialize parameters ### START CODE HERE ### (≈ 1 line of code) parameters = None ### END CODE HERE ### # Loop (gradient descent) for i in range(0, num_iterations): ### START CODE HERE ### (≈ 4 lines of code) # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache". A2, cache = None # Cost function. Inputs: "A2, Y, parameters". Outputs: "cost". cost = None # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads". grads = None # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters". parameters = None ### END CODE HERE ### # Print the cost every 1000 iterations if print_cost and i % 1000 == 0: print ("Cost after iteration %i: %f" %(i, cost)) return parameters X_assess, Y_assess = nn_model_test_case() parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"]))
Cost after iteration 0: 0.692739 Cost after iteration 1000: 0.000218 Cost after iteration 2000: 0.000107 Cost after iteration 3000: 0.000071 Cost after iteration 4000: 0.000053 Cost after iteration 5000: 0.000042 Cost after iteration 6000: 0.000035 Cost after iteration 7000: 0.000030 Cost after iteration 8000: 0.000026 Cost after iteration 9000: 0.000023 W1 = [[-0.65848169 1.21866811] [-0.76204273 1.39377573] [ 0.5792005 -1.10397703] [ 0.76773391 -1.41477129]] b1 = [[ 0.287592 ] [ 0.3511264 ] [-0.2431246 ] [-0.35772805]] W2 = [[-2.45566237 -3.27042274 2.00784958 3.36773273]] b2 = [[ 0.20459656]]
**Expected Output**: **cost after iteration 0** 0.692739 $\vdots$ $\vdots$ **W1** [[-0.65848169 1.21866811] [-0.76204273 1.39377573] [ 0.5792005 -1.10397703] [ 0.76773391 -1.41477129]] **b1** [[ 0.287592 ] [ 0.3511264 ] [-0.2431246 ] [-0.35772805]] **W2** [[-2.45566237 -3.27042274 2.00784958 3.36773273]] **b2** [[ 0.20459656]] 4.5 Predictions **Question**: Use your model to predict by building `predict()`. Use forward propagation to predict results.**Reminder**: predictions = $y_{prediction} = \mathbb{1}\{\text{activation} > 0.5\} = \begin{cases} 1 & \text{if } \text{activation} > 0.5 \\ 0 & \text{otherwise} \end{cases}$ As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
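The thresholding idiom from the reminder in action, on a toy array: the comparison returns booleans, which already behave as 0/1 in arithmetic and can be cast explicitly if needed.

```python
import numpy as np

activations = np.array([[0.2, 0.51, 0.5, 0.9]])

predictions = (activations > 0.5)      # boolean array; note that exactly 0.5 maps to 0
as_ints = predictions.astype(int)      # explicit 0/1 integers
```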
# GRADED FUNCTION: predict def predict(parameters, X): """ Using the learned parameters, predicts a class for each example in X Arguments: parameters -- python dictionary containing your parameters X -- input data of size (n_x, m) Returns predictions -- vector of predictions of our model (red: 0 / blue: 1) """ # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold. ### START CODE HERE ### (≈ 2 lines of code) A2, cache = None predictions = None ### END CODE HERE ### return predictions parameters, X_assess = predict_test_case() predictions = predict(parameters, X_assess) print("predictions mean = " + str(np.mean(predictions)))
predictions mean = 0.666666666667
**Expected Output**: **predictions mean** 0.666666666667 It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
# Build a model with a n_h-dimensional hidden layer parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True) # Plot the decision boundary plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y) plt.title("Decision Boundary for hidden layer size " + str(4))
Cost after iteration 0: 0.693048 Cost after iteration 1000: 0.288083 Cost after iteration 2000: 0.254385 Cost after iteration 3000: 0.233864 Cost after iteration 4000: 0.226792 Cost after iteration 5000: 0.222644 Cost after iteration 6000: 0.219731 Cost after iteration 7000: 0.217504 Cost after iteration 8000: 0.219471 Cost after iteration 9000: 0.218612
**Expected Output**: **Cost after iteration 9000** 0.218607
# Print accuracy predictions = predict(parameters, X) print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
Accuracy: 90%
**Expected Output**: **Accuracy** 90% Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. Now, let's try out several hidden layer sizes. 4.6 - Tuning hidden layer size (optional/ungraded exercise) Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
# This may take about 2 minutes to run plt.figure(figsize=(16, 32)) hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50] for i, n_h in enumerate(hidden_layer_sizes): plt.subplot(5, 2, i+1) plt.title('Hidden Layer of size %d' % n_h) parameters = nn_model(X, Y, n_h, num_iterations = 5000) plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y) predictions = predict(parameters, X) accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
Accuracy for 1 hidden units: 67.5 % Accuracy for 2 hidden units: 67.25 % Accuracy for 3 hidden units: 90.75 % Accuracy for 4 hidden units: 90.5 % Accuracy for 5 hidden units: 91.25 % Accuracy for 20 hidden units: 90.0 % Accuracy for 50 hidden units: 90.25 %
**Interpretation**:- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. - The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. **Optional questions**:**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right. Some optional/ungraded questions that you can explore if you wish: - What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?- Play with the learning_rate. What happens?- What if we change the dataset? (See part 5 below!) **You've learnt to:**- Build a complete neural network with a hidden layer- Make good use of a non-linear unit- Implement forward propagation and backpropagation, and train a neural network- See the impact of varying the hidden layer size, including overfitting. Nice work! 5) Performance on other datasets If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
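For the first optional question, note that swapping the hidden activation also means swapping the derivative used when computing dZ1. A hedged sketch of the candidate activations and their derivatives (plain helper functions for experimentation; the names here are made up, not part of the assignment):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1 - s)            # would replace (1 - A1**2) for a sigmoid hidden layer

def relu(z):
    return np.maximum(0, z)

def relu_grad(z):
    return (z > 0).astype(float)  # conventionally 0 at z = 0
```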
# Datasets noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets() datasets = {"noisy_circles": noisy_circles, "noisy_moons": noisy_moons, "blobs": blobs, "gaussian_quantiles": gaussian_quantiles} ### START CODE HERE ### (choose your dataset) dataset = "noisy_moons" ### END CODE HERE ### X, Y = datasets[dataset] X, Y = X.T, Y.reshape(1, Y.shape[0]) # make blobs binary if dataset == "blobs": Y = Y%2 # Visualize the data plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
Get the data
server = ECMWFDataServer(url = "https://api.ecmwf.int/v1", key = "XXXXXXXXXXXXXXXX", email = "Sylvie.Lamy-Thepaut@ecmwf.int") request = { "dataset": "geff_reanalysis", "date": "2016-12-01/to/2016-12-31", "origin": "fwis", "param": "fwi", "step": "00", "time": "0000", "type": "an", "target": target, } server.retrieve(request) print("Data are downloaded") import Magics.macro as magics #Setting the coordinates of the geographical area projection = magics.mmap(subpage_map_projection = 'robinson', ) netcdf = magics.mnetcdf(netcdf_filename='geff_reanalysis_an_fwis_fwi_20161214_0000_00.nc', netcdf_value_variable = 'fwi') contour = magics.mcont( contour_shade='on', contour_shade_method = 'area_fill', contour_shade_colour_direction = "clockwise", contour_shade_colour_method = "calculate", contour_shade_max_level_colour= "red", contour_shade_min_level_colour= "blue", legend="on", contour='off', contour_min_level=10. ) title = magics.mtext(text_lines=["<netcdf_info variable='fwi' attribute='title'/>"]) legend = magics.mlegend( legend_display_type = "continuous") magics.plot(projection, netcdf, contour, title, magics.mcoast(), legend)
ECL-2.0
notebook/GEFF Access.ipynb
EduardRosert/magics
__Standardize timestamps__
#temp = pd.DatetimeIndex(articles['timeStamp']) # Gather all datetime objects #articles['date'] = temp.date # Pull out the date from the datetime objects and assign to Date column #articles['time'] = temp.time # Pull out the time from the datetime objects and assign to Time column print(len(articles)) articles.tail(3) articles.contents[10]
MIT
past-team-code/Fall2018Team1/News Articles Data/1119_article_and_bitcoin.ipynb
shun-lin/project-paradigm-chatbot
__Preprocess text for NLP formulations__
articles.head() #Clean the articles - Remove stopwords, remove punctuation, all lowercase import re for i in articles.index: text = articles.loc[i, 'contents'] if pd.isnull(text): pass else: text = re.sub(r"[^a-zA-Z]", " ", text) text = [word for word in text.split() if not word in eng_stopwords] text = (' '.join(text)) text = text.lower() articles.loc[i, 'contents'] = text
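The loop above assumes `eng_stopwords` was defined earlier in the notebook (commonly NLTK's English stopword list). Here is the same pipeline on a single string, made self-contained with a tiny hardcoded stand-in list; the stopword set and sample sentence are illustrative only:

```python
import re

# Tiny stand-in for eng_stopwords (the notebook presumably uses NLTK's English list)
eng_stopwords_demo = {"the", "is", "a", "of", "and", "to"}

def clean_text(text):
    """Keep letters only, drop stopwords, then lowercase -- mirroring the loop above."""
    text = re.sub(r"[^a-zA-Z]", " ", text)
    words = [w for w in text.split() if w not in eng_stopwords_demo]
    return " ".join(words).lower()

cleaned = clean_text("The price of Bitcoin rose 5% to $7,000!")
```

Note that, as in the original loop, the stopword check runs before lowercasing, so a capitalized "The" survives the filter.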
__Combine cleaned articles with "Markers" from Time Series event detection__
df=articles df.to_csv("1119_article_data_and_price_labeled_publisher.csv")
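The heading above mentions combining articles with event "Markers", but the cell only saves the CSV. A hedged sketch of how such a merge could look, assuming a hypothetical `markers` table keyed by date (the frames and column names here are made up for illustration):

```python
import pandas as pd

# Hypothetical cleaned articles and time-series event markers, keyed by date
articles_demo = pd.DataFrame({
    "date": ["2018-11-19", "2018-11-20"],
    "contents": ["bitcoin price falls", "market recovers slightly"],
})
markers_demo = pd.DataFrame({
    "date": ["2018-11-19"],
    "marker": [1],          # 1 = event detected on that day
})

# Left-join so every article keeps a row; unmarked days become 0
combined = articles_demo.merge(markers_demo, on="date", how="left")
combined["marker"] = combined["marker"].fillna(0).astype(int)
```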
2D Numpy in Python Welcome! This notebook will teach you about using Numpy in the Python Programming Language. By the end of this lab, you'll know what Numpy is and the Numpy operations. Table of Contents Create a 2D Numpy Array Accessing different elements of a Numpy Array Basic Operations Estimated time needed: 20 min Create a 2D Numpy Array
# Import the libraries import numpy as np import matplotlib.pyplot as plt
MIT
Python for Data Science and AI/w5/PY0101EN-5-2-Numpy2D.ipynb
Carlosriosch/IBM-Data-Science
Consider the list a, the list contains three nested lists **each of equal size**.
# Create a list a = [[11, 12, 13], [21, 22, 23], [31, 32, 33]] a
We can cast the list to a Numpy Array as follows:
# Convert list to Numpy Array # Every element is the same type A = np.array(a) A
We can use the attribute ndim to obtain the number of axes or dimensions referred to as the rank.
# Show the numpy array dimensions A.ndim
Attribute shape returns a tuple corresponding to the size or number of each dimension.
# Show the numpy array shape A.shape
The total number of elements in the array is given by the attribute size.
# Show the numpy array size A.size
Accessing different elements of a Numpy Array We can use rectangular brackets to access the different elements of the array. The correspondence between the rectangular brackets and the list and the rectangular representation is shown in the following figure for a 3x3 array: We can access the 2nd-row 3rd column as shown in the following figure: We simply use the square brackets and the indices corresponding to the element we would like:
# Access the element on the second row and third column A[1, 2]
We can also use the following notation to obtain the elements:
# Access the element on the second row and third column A[1][2]
Consider the elements shown in the following figure We can access the element as follows
# Access the element on the first row and first column A[0][0]
We can also use slicing in numpy arrays. Consider the following figure. We would like to obtain the first two columns in the first row This can be done with the following syntax
# Access the element on the first row and first and second columns A[0][0:2]
Similarly, we can obtain the first two rows of the 3rd column as follows:
# Access the elements on the first and second rows and third column
A[0:2, 2]
_____no_output_____
MIT
Python for Data Science and AI/w5/PY0101EN-5-2-Numpy2D.ipynb
Carlosriosch/IBM-Data-Science
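As a side note (not from the original notebook), `A[0][0:2]` and `A[0, 0:2]` select the same elements, and the comma form also lets you slice rows and columns at once. A small sketch with an assumed 3x3 array:

```python
import numpy as np

A = np.array([[11, 12, 13], [21, 22, 23], [31, 32, 33]])

# the chained and comma forms select the same elements
assert (A[0][0:2] == A[0, 0:2]).all()

# slicing both axes at once returns a 2x2 sub-matrix: [[11, 12], [21, 22]]
print(A[0:2, 0:2])
```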
Corresponding to the following figure: Basic Operations We can also add arrays. The process is identical to matrix addition. Matrix addition of X and Y is shown in the following figure: The numpy arrays are given by X and Y:
# Create a numpy array X
X = np.array([[1, 0], [0, 1]])
X

# Create a numpy array Y
Y = np.array([[2, 1], [1, 2]])
Y
_____no_output_____
MIT
Python for Data Science and AI/w5/PY0101EN-5-2-Numpy2D.ipynb
Carlosriosch/IBM-Data-Science
We can add the numpy arrays as follows.
# Add X and Y
Z = X + Y
Z
_____no_output_____
MIT
Python for Data Science and AI/w5/PY0101EN-5-2-Numpy2D.ipynb
Carlosriosch/IBM-Data-Science
Multiplying a numpy array by a scalar is identical to multiplying a matrix by a scalar. If we multiply the matrix Y by the scalar 2, we simply multiply every element in the matrix by 2, as shown in the figure. We can perform the same operation in numpy as follows:
# Create a numpy array Y
Y = np.array([[2, 1], [1, 2]])
Y

# Multiply Y with 2
Z = 2 * Y
Z
_____no_output_____
MIT
Python for Data Science and AI/w5/PY0101EN-5-2-Numpy2D.ipynb
Carlosriosch/IBM-Data-Science
Multiplication of two arrays corresponds to an element-wise product or Hadamard product. Consider matrix X and Y. The Hadamard product corresponds to multiplying each of the elements in the same position, i.e. multiplying elements contained in the same color boxes together. The result is a new matrix that is the same size as matrix Y or X, as shown in the following figure. We can perform element-wise product of the array X and Y as follows:
# Create a numpy array Y
Y = np.array([[2, 1], [1, 2]])
Y

# Create a numpy array X
X = np.array([[1, 0], [0, 1]])
X

# Multiply X with Y
Z = X * Y
Z
_____no_output_____
MIT
Python for Data Science and AI/w5/PY0101EN-5-2-Numpy2D.ipynb
Carlosriosch/IBM-Data-Science
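One point worth emphasizing (an added aside, not from the notebook): `*` is element-wise, while the matrix product is a different operation. A sketch contrasting the two on the same X and Y — they differ even though X is the identity:

```python
import numpy as np

X = np.array([[1, 0], [0, 1]])
Y = np.array([[2, 1], [1, 2]])

# element-wise (Hadamard) product: [[2, 0], [0, 2]]
print(X * Y)

# matrix product: identity times Y gives Y back, [[2, 1], [1, 2]]
print(np.dot(X, Y))
```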
We can also perform matrix multiplication with the numpy arrays A and B as follows: First, we define matrix A and B:
# Create a matrix A
A = np.array([[0, 1, 1], [1, 0, 1]])
A

# Create a matrix B
B = np.array([[1, 1], [1, 1], [-1, 1]])
B
_____no_output_____
MIT
Python for Data Science and AI/w5/PY0101EN-5-2-Numpy2D.ipynb
Carlosriosch/IBM-Data-Science
We use the numpy function `dot` to multiply the arrays together:
# Calculate the dot product
Z = np.dot(A, B)
Z

# Calculate the sine of Z
np.sin(Z)
_____no_output_____
MIT
Python for Data Science and AI/w5/PY0101EN-5-2-Numpy2D.ipynb
Carlosriosch/IBM-Data-Science
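For what it's worth (an added note, not from the notebook), modern numpy also supports the `@` operator, which is equivalent to `np.dot` for 2-D arrays. A quick sketch using the same A and B:

```python
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 1]])
B = np.array([[1, 1], [1, 1], [-1, 1]])

# for 2-D arrays, the @ operator computes the same matrix product as np.dot
Z_dot = np.dot(A, B)
Z_at = A @ B
assert (Z_dot == Z_at).all()
print(Z_at)  # [[0 2], [0 2]] as a 2x2 array
```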
We use the numpy attribute `T` to calculate the transpose of a matrix:
# Create a matrix C
C = np.array([[1, 1], [2, 2], [3, 3]])
C

# Get the transpose of C
C.T
_____no_output_____
MIT
Python for Data Science and AI/w5/PY0101EN-5-2-Numpy2D.ipynb
Carlosriosch/IBM-Data-Science
Hill-Langmuir Bayesian Regression Goals similar to: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3773943/pdf/nihms187302.pdf However, they use a different parameterization that does not include Emax. Bayesian Hill Model Regression The Hill model is defined as: $$ F(C, E_{max}, E_0, EC_{50}, H) = E_0 + \frac{E_{max} - E_0}{1 + (\frac{EC_{50}}{C})^H} $$ where the concentration $C$ is in uM and is *not* in logspace. To quantify uncertainty in downstream modeling, and to allow placement of priors on the relevant variables, we will do this in a Bayesian framework. Building Intuition with the Hill Equation ![](https://media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fsrep14701/MediaObjects/41598_2015_Article_BFsrep14701_Fig1_HTML.jpg?as=webp) 1. Di Veroli GY, Fornari C, Goldlust I, Mills G, Koh SB, Bramhall JL, et al. An automated fitting procedure and software for dose-response curves with multiphasic features. Scientific Reports. 2015 Oct 1;5(1):1–11.
# https://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
def f(E0=2.5, Emax=0, log_EC50=-2, H=1):
    EC50 = 10**log_EC50
    plt.figure(2, figsize=(10, 5))
    xx = np.logspace(-4, 1, 100)
    yy = E0 + (Emax - E0)/(1 + (EC50/xx)**H)
    plt.plot(np.log10(xx), yy, 'r-')
    plt.ylim(-0.2, 3)
    plt.xlabel('log10 [Concentration (uM)]')
    plt.ylabel('cell response')
    plt.show()

interactive_plot = interactive(f, E0=(0, 3, 0.5), Emax=(0., 1., 0.05), log_EC50=(-5, 2, 0.1), H=(1, 5, 1))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
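To complement the interactive widget, here is a plain (non-interactive) sketch of the Hill equation defined above; the helper name `hill` and the parameter values are illustrative assumptions, not fits or names from the notebook:

```python
import numpy as np

def hill(c, E0=1.0, Emax=0.0, EC50=0.01, H=1.0):
    """Hill model: E0 + (Emax - E0) / (1 + (EC50/c)**H); c and EC50 in uM."""
    return E0 + (Emax - E0) / (1 + (EC50 / c) ** H)

# evaluate across several decades of concentration
c = np.logspace(-4, 1, 6)
print(hill(c))

# at c == EC50 the response is exactly halfway between E0 and Emax
assert abs(hill(np.array([0.01]))[0] - 0.5) < 1e-9
```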
Define Model + Guide
class plotter:
    def __init__(self, params, figsize=(20, 10), subplots=(2, 7)):
        assert len(params) <= subplots[0]*subplots[1], 'wrong number of subplots for given params to report'
        self.fig, self.axes = plt.subplots(*subplots, figsize=figsize, sharex='col', sharey='row')
        self.vals = {p: [] for p in params}
        self.params = params

    def record(self):
        for p in self.params:
            self.vals[p].append(pyro.param(p).item())

    def plot_all(self):
        for p, ax in zip(self.params, self.axes.flat):
            ax.plot(self.vals[p], 'b-')
            ax.set_title(p, fontsize=25)
            ax.set_xlabel('step', fontsize=20)
            ax.set_ylabel('param value', fontsize=20)
        plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.35)
        plt.show()


def model(X, Y=None):
    E0 = pyro.sample('E0', dist.Normal(1., E0_std))
    Emax = pyro.sample('Emax', dist.Beta(a_emax, b_emax))
    H = pyro.sample('H', dist.Gamma(alpha_H, beta_H))
    EC50 = 10**pyro.sample('log_EC50', dist.Normal(mu_ec50, std_ec50))
    obs_sigma = pyro.sample("obs_sigma", dist.Gamma(a_obs, b_obs))
    obs_mean = E0 + (Emax - E0)/(1 + (EC50/X)**H)
    with pyro.plate("data", X.shape[0]):
        obs = pyro.sample("obs", dist.Normal(obs_mean.squeeze(-1), obs_sigma), obs=Y)
    return obs_mean


def guide(X, Y=None):
    _E0_mean = pyro.param('E0_mean', torch.tensor(0.))
    _E0_std = pyro.param('E0_std', torch.tensor(E0_std), constraint=constraints.positive)
    E0 = pyro.sample('E0', dist.Normal(_E0_mean, _E0_std))

    _a_emax = pyro.param('_a_emax', torch.tensor(a_emax), constraint=constraints.positive)
    _b_emax = pyro.param('_b_emax', torch.tensor(b_emax), constraint=constraints.positive)
    Emax = pyro.sample('Emax', dist.Beta(_a_emax, _b_emax))

    _alpha_H = pyro.param('_alpha_H', torch.tensor(alpha_H), constraint=constraints.positive)
    _beta_H = pyro.param('_beta_H', torch.tensor(beta_H), constraint=constraints.positive)
    H = pyro.sample('H', dist.Gamma(_alpha_H, _beta_H))

    _mu_ec50 = pyro.param('_mu_ec50', torch.tensor(mu_ec50))
    _std_ec50 = pyro.param('_std_ec50', torch.tensor(std_ec50), constraint=constraints.positive)
    EC50 = pyro.sample('log_EC50', dist.Normal(_mu_ec50, _std_ec50))

    _a_obs = pyro.param('_a_obs', torch.tensor(a_obs), constraint=constraints.positive)
    _b_obs = pyro.param('_b_obs', torch.tensor(b_obs), constraint=constraints.positive)
    obs_sigma = pyro.sample("obs_sigma", dist.Gamma(_a_obs, _b_obs))

    obs_mean = E0 + (Emax - E0)/(1 + (EC50/X)**H)
    return obs_mean
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
choosing priors $E_0$ The upper bound (maximum value) of our function, $E_0$, should be centered at 1; although it may be a little above or below that, we'll model it with a Normal distribution with a fairly tight variance around 1. $$ E_0 \sim N(1, \sigma_{E_0}) $$ $E_{max}$ $E_{max}$ is the lower bound (minimum value) of our function and is expected to be at 0; however, for some inhibitors it's significantly above this. $$ E_{max} \sim Beta(a_{E_{max}}, b_{E_{max}}) $$ $$ E[E_{max}] = \frac{a}{a+b} $$ H The Hill coefficient $H$ should be a positive integer; however, we're going to approximate it with a Gamma distribution, since a Poisson is not flexible enough to characterize it properly. $$ H \sim Gamma(\alpha_{H}, \beta_{H}) $$ $$ E[H] = \frac{\alpha_{H}}{\beta_{H}} $$ $EC_{50}$ EC50 was actually a little tough: we could imagine encoding EC50 as a Gamma distribution in concentration space, but this results in poor behavior when used in logspace. It works much better to encode it as a Normal distribution in logspace. $$ \log_{10}(EC_{50}) \sim Normal(\mu_{EC50}, \sigma_{EC50}) $$ cell viability ($Y$) We'll assume this is a Normal distribution centered on the Hill function with standard deviation $\sigma_{obs}$. $$ \mu_{obs} = E_0 + \frac{E_{max} - E_0}{1 + (\frac{EC_{50}}{C})^H} $$ $$ Y \sim N(\mu_{obs}, \sigma_{obs}) $$ Building Prior Intuition E0 Prior
def f(E0_std):
    plt.figure(2)
    xx = np.linspace(-2, 4, 50)
    rv = norm(1, E0_std)
    yy = rv.pdf(xx)
    plt.ylim(0, 1)
    plt.title('E0 parameter')
    plt.xlabel('E0')
    plt.ylabel('probability')
    plt.plot(xx, yy, 'r-')
    plt.show()

interactive_plot = interactive(f, E0_std=(0.1, 4, 0.1))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
Expectation, Variance to Alpha, Beta for Gamma
def gamma_modes_to_params(E, S):
    '''Convert a desired mean E and variance S into Gamma(alpha, beta) shape/rate parameters.'''
    beta = E/S
    alpha = E**2/S
    return alpha, beta
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
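A quick sanity check of this conversion (redefining the helper here so the sketch is self-contained): since Gamma(α, β) has mean α/β and variance α/β², the round trip should recover the requested mean and variance exactly.

```python
def gamma_modes_to_params(E, S):
    # mean = alpha/beta, variance = alpha/beta**2  =>  beta = E/S, alpha = E**2/S
    beta = E / S
    alpha = E ** 2 / S
    return alpha, beta

alpha, beta = gamma_modes_to_params(2.0, 0.5)
assert abs(alpha / beta - 2.0) < 1e-12        # recovered mean
assert abs(alpha / beta ** 2 - 0.5) < 1e-12   # recovered variance
print(alpha, beta)  # 8.0 4.0
```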
Emax Prior
# TODO: Have inputs be E[] and Var[] rather than a, b ... more useful for setting up priors.
def f(emax_mean=1, emax_var=3):
    a_emax, b_emax = gamma_modes_to_params(emax_mean, emax_var)
    plt.figure(2)
    xx = np.linspace(0, 1.2, 100)
    rv = gamma(a_emax, scale=1/b_emax, loc=0)
    yy = rv.pdf(xx)
    plt.title('Emax Parameter')
    plt.xlabel('Emax')
    plt.ylabel('probability')
    plt.ylim(0, 5)
    plt.plot(xx, yy, 'r-', label=f'alpha={a_emax:.2f}, beta={b_emax:.2f}')
    plt.legend()
    plt.show()

interactive_plot = interactive(f, emax_mean=(0.1, 1.2, 0.05), emax_var=(0.01, 1, 0.05))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot


def f(alpha_H=1, beta_H=0.5):
    f, axes = plt.subplots(1, 1, figsize=(5, 5))
    xx = np.linspace(0, 5, 100)
    g = gamma(alpha_H, scale=1/beta_H, loc=0)
    yy = g.pdf(xx)
    axes.set_xlabel('H')
    axes.set_ylabel('probability')
    plt.xlim(0, 5)
    plt.ylim(0, 5)
    axes.plot(xx, yy, 'r-')
    plt.tight_layout()
    plt.title('Hill Coefficient')
    plt.show()

interactive_plot = interactive(f, alpha_H=(1, 10, 1), beta_H=(0.1, 5, 0.1))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot


def f(mu_ec50=-1, std_ec50=0.5):
    f, axes = plt.subplots(1, 1, figsize=(5, 5))
    xx = np.log10(np.logspace(-5, 2, 100))
    g = norm(mu_ec50, std_ec50)
    yy = g.pdf(xx)
    axes.plot(xx, yy, 'r-')
    plt.xlabel('log10 EC50')
    plt.ylabel('probability')
    plt.title('EC50 parameter')
    plt.tight_layout()
    plt.show()

interactive_plot = interactive(f, mu_ec50=(-5, 2, 0.1), std_ec50=(0.01, 5, 0.1))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot


def f(a_obs=1, b_obs=1):
    plt.figure(2)
    xx = np.linspace(0, 3, 50)
    rv = gamma(a_obs, scale=1/b_obs, loc=0)
    yy = rv.pdf(xx)
    plt.ylim(0, 5)
    plt.plot(xx, yy, 'r-')
    plt.xlabel('std_obs')
    plt.ylabel('probability')
    plt.title('Observation (Y) std')
    plt.show()

interactive_plot = interactive(f, a_obs=(1, 100, 1), b_obs=(1, 100, 1))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
Define Priors
############ PRIORS ###############
E0_std = 0.05  # std of the Normal prior on E0

# Emax beta prior; Beta(50, 100) encodes strong support for Emax near 1/3
a_emax = 50.  # 2.
b_emax = 100.  # 8.

# H gamma prior
alpha_H = 1
beta_H = 1

# EC50: this is in logspace, so in uM -> 10**mu_ec50
mu_ec50 = -2.
std_ec50 = 3.

# obs error
a_obs = 1
b_obs = 1
###################################
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
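As a sanity check on these hyperparameters (plain arithmetic using the closed-form means quoted earlier; the values are copied from the priors cell), the implied prior means can be computed directly — note that Beta(50, 100) centers Emax near 1/3:

```python
a_emax, b_emax = 50.0, 100.0
alpha_H, beta_H = 1.0, 1.0
mu_ec50 = -2.0

emax_mean = a_emax / (a_emax + b_emax)   # Beta mean: a / (a + b)
H_mean = alpha_H / beta_H                # Gamma mean: alpha / beta
ec50_median_uM = 10 ** mu_ec50           # Normal in log10 space -> median in uM

print(emax_mean, H_mean, ec50_median_uM)  # ~0.333, 1.0, 0.01
```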
Define Data We'll use fake data for now.
Y = torch.tensor([1., 1., 1., 0.9, 0.7, 0.6, 0.5], dtype=torch.float)
X = torch.tensor([10./3**i for i in range(7)][::-1], dtype=torch.float).unsqueeze(-1)
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
Fit model with MCMChttps://forum.pyro.ai/t/need-help-with-very-simple-model/600https://pyro.ai/examples/bayesian_regression_ii.html
torch.manual_seed(99999)

nuts_kernel = NUTS(model, adapt_step_size=True)
mcmc_run = MCMC(nuts_kernel, num_samples=400, warmup_steps=100, num_chains=1)
mcmc_run.run(X, Y)
Sample: 100%|██████████| 500/500 [00:32, 15.43it/s, step size=2.56e-01, acc. prob=0.949]
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
Visualize Results
samples = {k: v.detach().cpu().numpy() for k, v in mcmc_run.get_samples().items()}

f, axes = plt.subplots(3, 2, figsize=(10, 5))
for ax, key in zip(axes.flat, samples.keys()):
    ax.set_title(key)
    ax.hist(samples[key], bins=np.linspace(min(samples[key]), max(samples[key]), 50), density=True)
    ax.set_xlabel(key)
    ax.set_ylabel('probability')

axes.flat[-1].hist(10**samples['log_EC50'], bins=np.linspace(min(10**(samples['log_EC50'])), max(10**(samples['log_EC50'])), 50))
axes.flat[-1].set_title('EC50')
axes.flat[-1].set_xlabel('EC50 [uM]')
plt.tight_layout()
plt.show()
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
Plot the fitted Hill function
plt.figure(figsize=(7, 7))
xx = np.logspace(-7, 6, 200)
for i, s in pd.DataFrame(samples).iterrows():
    yy = s.E0 + (s.Emax - s.E0)/(1 + (10**s.log_EC50/xx)**s.H)
    plt.plot(np.log10(xx), yy, 'ro', alpha=0.01)

plt.plot(np.log10(X), Y, 'b.', label='data')
plt.xlabel('log10 Concentration')
plt.ylabel('cell_viability')
plt.ylim(0, 1.2)
plt.legend()
plt.title('MCMC results')
plt.show()
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
Deprecated EC50 example - gamma in concentration space
def f(alpha_ec50=1, beta_ec50=0.5):
    f, axes = plt.subplots(1, 2, figsize=(8, 4))
    xx = np.logspace(-5, 2, 100)
    g = gamma(alpha_ec50, scale=1/beta_ec50, loc=0)
    yy = g.pdf(xx)
    g_samples = g.rvs(1000)
    axes[0].plot(xx, yy, 'r-')
    axes[1].plot(np.log10(xx), yy, 'b-')
    plt.tight_layout()
    plt.show()

interactive_plot = interactive(f, alpha_ec50=(1, 10, 1), beta_ec50=(0.01, 5, 0.1))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
Fit Model with `stochastic variational inference`
adam = optim.Adam({"lr": 1e-1})
svi = SVI(model, guide, adam, loss=Trace_ELBO())

tic = time.time()
STEPS = 2500
pyro.clear_param_store()
myplotter = plotter(['_alpha_H', '_beta_H', '_a_emax', '_b_emax', '_a_obs', '_b_obs', '_mu_ec50', '_std_ec50'],
                    figsize=(12, 8), subplots=(2, 5))
_losses = []
last = 0
loss = 0
n = 100
try:
    for j in range(STEPS):
        loss += svi.step(X, Y)
        myplotter.record()
        if j % n == 0:
            print(f"[iteration {j}] loss: {(loss / n) :.2f} [change={(loss/n - last/n):.2f}]", end='\t\t\t\r')
            _losses.append(np.log10(loss))
            last = loss
            loss = 0
    myplotter.plot_all()
except:
    myplotter.plot_all()
    raise

plt.figure()
plt.plot(_losses)
plt.xlabel('steps')
plt.ylabel('loss')
plt.show()

toc = time.time()
print(f'time to train {STEPS} iterations: {toc-tic:.2g}s')

x_data = torch.tensor(np.logspace(-5, 5, 200)).unsqueeze(-1)

def summary(samples):
    site_stats = {}
    for k, v in samples.items():
        site_stats[k] = {
            "mean": torch.mean(v, 0),
            "std": torch.std(v, 0),
            "5%": v.kthvalue(int(len(v) * 0.05), dim=0)[0],
            "95%": v.kthvalue(int(len(v) * 0.95), dim=0)[0],
        }
    return site_stats

predictive = Predictive(model, guide=guide, num_samples=800,
                        return_sites=("linear.weight", "obs", "_RETURN"))
samples = predictive(x_data)
pred_summary = summary(samples)

y_mean = pred_summary['obs']['mean'].detach().numpy()
y_5 = pred_summary['obs']['5%'].detach().numpy()
y_95 = pred_summary['obs']['95%'].detach().numpy()

plt.figure(figsize=(7, 7))
plt.plot(np.log10(X), Y, 'k*', label='data')
plt.plot(np.log10(x_data), y_mean, 'r-')
plt.plot(np.log10(x_data), y_5, 'g-', label='95% Posterior Predictive CI')
plt.plot(np.log10(x_data), y_95, 'g-')
plt.ylim(0, 1.2)
plt.legend()
plt.show()
_____no_output_____
MIT
Trematinib-Combo-CI/python/Hill-Equation-Bayesian-Regression.ipynb
nathanieljevans/HNSCC_functional_data_pipeline
Import libraries and display the version
import pandas as pd
import numpy as np
import cesiumpy

cesiumpy.__version__
_____no_output_____
Apache-2.0
Cesium_Advent_Calendar_3rd.ipynb
tkama/hello_cesiumpy
Load the CSV file
filename = '07hoikuennyoutien-asakashi_utf8.csv'
df = pd.read_csv(filename)
_____no_output_____
Apache-2.0
Cesium_Advent_Calendar_3rd.ipynb
tkama/hello_cesiumpy
Display the bubble chart
v = cesiumpy.Viewer()
for i, row in df.iterrows():
    l = row['施設_収容人数[総定員]人数']
    p = cesiumpy.Point(position=[row['施設_経度'], row['施設_緯度'], 0], pixelSize=l/5, color='blue')
    v.entities.add(p)
v
_____no_output_____
Apache-2.0
Cesium_Advent_Calendar_3rd.ipynb
tkama/hello_cesiumpy