The TF-IDF score for the term "Mr. Green" is highest for document a.

Exercise 1

Which document is closest to the term "green plant"? Compute the TF-IDF scores for the term "green plant". Does this match your expectations? What happens if you swap the terms "green" and "plant"? What about "green" on its own?

Introduction to NLTK
import nltk

# nltk gives access to methods, but also to data sets, which must be
# downloaded with the .download() command.
nltk.download('stopwords')

from pprint import pprint

len(activity_results)
if len(activity_results) > 0:
    pprint(activity_results[0])
if len(activity_results) > 0:
    pprint(activity_results[0]['object']['content'])

all_content = " ".join([a['object']['content'] for a in activity_results])
print("Number of characters:", len(all_content))
print('\n')

# Naive tokenization on whitespace => a list of words
tokens = all_content.split()

# Turn this list into an nltk "Text" object (a string-like object that keeps
# the notion of tokens, and offers several useful methods to explore the data).
text = nltk.Text(tokens)

# For example "concordance": shows the occurrences of a word in context
print("Example occurrences of the term 'open':")
text.concordance("open")
print('\n')

# Frequency analysis of the terms of interest
fdist = text.vocab()
print("Frequent co-occurrences:")
colloc = text.collocation_list()
print(colloc)
print('\n')

print("Number of words:", len(tokens))
print('\n')
print("Number of unique words:", len(fdist.keys()))
print('\n')
print("Number of unique words v2:", len(set(tokens)))
print("Occurrences of the term 'open':", fdist["open"])
print("Occurrences of the term 'source':", fdist["source"])
print("Occurrences of the term 'web':", fdist["web"])
print("Occurrences of the term 'API':", fdist["API"])
print('\n')

# 100 most frequent tokens
top100_items = sorted(fdist.items(), key=lambda x: x[1], reverse=True)[:100]
# without the frequencies
top100 = [t[0] for t in top100_items]
print("Top 100:", top100)
print('\n')

# without the overly frequent terms ("stopwords")
top100_without_stopwords = [w for w in top100 if w.lower()
                            not in nltk.corpus.stopwords.words('english')]
print("Top 100 without stopwords:", top100_without_stopwords)
print('\n')

long_words_not_urls = [w for w in fdist.keys()
                       if len(w) > 15 and not w.startswith("http")]
print("Long words excluding URLs:", long_words_not_urls)
print('\n')

# Number of URLs
print("Number of URLs:", len([w for w in fdist.keys() if w.startswith("http")]))
print('\n')

# Enumerate the frequency distribution
for rank, word in enumerate(sorted(fdist.items(), key=lambda x: x[1], reverse=True)):
    print(rank, word)
    if rank > 75:
        print("....")
        break

fdist = text.vocab()

%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(16, 4))
fdist.plot(100, cumulative=True);
_doc/notebooks/td2a_eco/td2a_TD5_Traitement_automatique_des_langues_en_Python.ipynb
sdpython/ensae_teaching_cs
mit
Exercise 4

Which document is closest to this one: https://plus.google.com/+TimOReilly/posts/7EaHeYc1BiB ? Plot the distance matrix as a heatmap. What does hierarchical clustering give?

Contextual approach

The bag-of-words approaches, although simplistic, make it possible to index and compare documents. Taking sequences of 2, 3 or more words into account is one way to refine such models. It also helps to better capture the meaning of homonyms, and of sentences (more generally, semantics). nltk offers methods to take context into account: to do so, we compute n-grams, i.e. the set of successive co-occurrences of words two by two (bigrams), three by three (trigrams), etc.

In general, we stick to bigrams, or at best trigrams:
- classification models, sentiment analysis, document comparison, etc. that compare n-grams with too large an n quickly run into a sparse-data problem, which reduces the predictive power of the models;
- performance degrades very quickly as n grows, and storage costs increase rapidly (roughly n times larger than the initial database).

Example with a small corpus
import nltk

sentence = "Mr. Green killed Colonel Mustard in the study with the " + \
           "candlestick. Mr. Green is not a very nice fellow."
print(list(nltk.ngrams(sentence.split(), 2)))
txt = nltk.Text(sentence.split())
txt.collocation_list()
$\alpha$-CsCl ($Pm\overline{3}m$) Let's start with the typical $\alpha$ form of CsCl.
# Create CsCl structure
a = 4.209  # Angstrom
latt = Lattice.cubic(a)
structure = Structure(latt, ["Cs", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]])
c = XRDCalculator()
c.show_plot(structure)
notebooks/2013-01-01-Calculating XRD patterns.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
$\beta$-CsCl ($Fm\overline{3}m$) Let's now look at the $\beta$ (high-temperature) form of CsCl.
# Create CsCl structure
a = 6.923  # Angstrom
latt = Lattice.cubic(a)
structure = Structure(latt,
                      ["Cs", "Cs", "Cs", "Cs", "Cl", "Cl", "Cl", "Cl"],
                      [[0, 0, 0], [0.5, 0.5, 0], [0, 0.5, 0.5], [0.5, 0, 0.5],
                       [0.5, 0.5, 0.5], [0, 0, 0.5], [0, 0.5, 0], [0.5, 0, 0]])
c.show_plot(structure)
Define some helpful functions
def estimate_d(img, name, wavelength, pixel_size):
    # find the ring center
    res = find_ring_center_acorr_1D(img)
    res2 = refine_center(img, res, pixel_size, 25, 5,
                         thresh=0.1, window_size=5)
    bins, sums, counts = img_to_relative_xyi(img, res2, radial_grid)
    mask = counts > 10
    bin_centers = bin_edges_to_centers(bins)[mask]
    ring_averages = sums[mask] / counts[mask]
    d_mean, d_std = estimate_d_blind(name, wavelength, bin_centers,
                                     ring_averages, 5, 7, thresh=0.03)
    return d_mean, d_std, res2


def show_rings_on_image(ax, image, ring_radius, center):
    vmin, vmax = np.percentile(image, [80, 100])
    my_cmap = copy(matplotlib.cm.get_cmap('gray'))
    my_cmap.set_bad('k')
    im = ax.imshow(image, cmap=my_cmap, interpolation='none',
                   norm=LogNorm(), vmin=vmin, vmax=vmax)
    for r in ring_radius:
        c = Circle(center[::-1], r, facecolor='none', edgecolor='r',
                   lw=2, linestyle='dashed')
        ax.add_patch(c)
    ax.axhline(center[0], color='r')
    ax.axvline(center[1], color='r')
    ax.set_ylim([center[0] - ring_radius[-1], center[0] + ring_radius[-1]])
    ax.set_xlim([center[1] - ring_radius[-1], center[1] + ring_radius[-1]])
broken/powder_calibration/D_estimate_demo.ipynb
sameera2004/scikit-xray-examples
bsd-3-clause
Setup data for Si standard sample
si_fname = 'Si_STD_d204-00002.tif'
si_name = 'Si'
si_wavelength = 0.1839
si_data = TiffStack(si_fname)
Setup data for LaB6 calibration standard
lab6_fname = 'LaB6_d500-0p72959-2Kx2K_pix200.tif'
lab6_name = 'LaB6'
lab6_wavelength = .72959
lab6_data = TiffStack(lab6_fname)
Calibrate Si data
calib_si = estimate_d(si_data[0], si_name, si_wavelength, pixel_size)
print("D: {} ± {}".format(calib_si[0], calib_si[1]))
print("center: {}".format(calib_si[2]))

cal_si = skbeam.core.calibration.calibration_standards['Si']
si_rings = calib_si[0] * np.tan(cal_si.convert_2theta(si_wavelength)) / .2

fig, ax = plt.subplots()
show_rings_on_image(ax, si_data[0], si_rings, calib_si[2])
plt.show()
Calibrate using LaB6 data
calib_lab6 = estimate_d(lab6_data[0], lab6_name, lab6_wavelength, pixel_size)
print("D: {} ± {}".format(calib_lab6[0], calib_lab6[1]))
print("center: {}".format(calib_lab6[2]))

cal_lab6 = nsls2.calibration.calibration_standards['LaB6']
lab6_rings = calib_lab6[0] * np.tan(cal_lab6.convert_2theta(lab6_wavelength)) / .2

fig, ax = plt.subplots()
show_rings_on_image(ax, lab6_data[0], lab6_rings, calib_lab6[2])
ax.set_xlim([0, lab6_data.frame_shape[0]])
ax.set_ylim([0, lab6_data.frame_shape[1]])
plt.show()
Polynomial

A Polynomial is an Expr defined by its factors, with some additional methods.
p1 = Polynomial([-1, 1, 3])  # inited from coefficients in ascending power order
p1  # LaTeX output by default
p2 = Polynomial('- 5x^3 +3*x')  # inited from a string, in any power order, with optional spaces and *
p2.plot()
[(x, p1(x)) for x in itertools2.linspace(-1, 1, 11)]  # evaluation
p1 - p2 + 2  # addition and subtraction of polynomials and scalars
-3 * p1 * p2**2  # polynomial (and scalar) multiplication and scalar power
p1.derivative() + p2.integral()  # integral and derivative
notebooks/polynomial.ipynb
goulu/Goulib
lgpl-3.0
Motion

"Motion laws" are functions of time that return (position, velocity, acceleration, jerk) tuples.
from Goulib.motion import *
Polynomial Segments

Polynomials are very handy for defining Segments, as their coefficients can easily be determined from start/end conditions. Polynomials can also easily be integrated or differentiated in order to obtain position, velocity, or acceleration laws from one another. Motion defines several handy functions that return a SegmentPoly matching common situations.
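The coefficient determination mentioned above can be illustrated for the constant-acceleration case. This is a plain-Python sketch independent of Goulib, mirroring the Segment2ndDegree(0, 1, (-1, 1, 2)) example below; the helper name segment_2nd_degree is hypothetical:

```python
def segment_2nd_degree(t0, t1, start):
    """Position and velocity laws on [t0, t1] from initial position p0,
    velocity v0 and constant acceleration a: the start conditions fix
    the polynomial coefficients directly."""
    p0, v0, a = start

    def position(t):
        dt = t - t0
        return p0 + v0 * dt + 0.5 * a * dt * dt  # integrate acceleration twice

    def velocity(t):
        return v0 + a * (t - t0)                 # integrate acceleration once

    return position, velocity

pos, vel = segment_2nd_degree(0, 1, (-1, 1, 2))
print(pos(0), vel(0))   # initial conditions
print(pos(1), vel(1))   # state at the end of the segment
```

Differentiating the position polynomial recovers the velocity law, which is the relation the Segment classes exploit.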
seg = Segment2ndDegree(0, 1, (-1, 1, 2))  # time interval and initial position, velocity and constant acceleration
seg.plot()
seg = Segment4thDegree(0, 0, (-2, 1), (2, 3))  # start time and initial and final (position, velocity)
seg.plot()
seg = Segment4thDegree(0, 2, (-2, 1), (None, 3))  # start and final time, initial (pos, vel) and final vel
seg.plot()
Interval operations on [a..b[ intervals
from Goulib.interval import *

Interval(5, 6) + Interval(2, 3) + Interval(3, 4)
Piecewise

Piecewise-defined functions
from Goulib.piecewise import *
The simplest are piecewise constant functions. They are defined by $(x_i, y_i)$ tuples given in any order.

$f(x) = \begin{cases} y_0 & x < x_1 \\ y_i & x_i \le x < x_{i+1} \\ y_n & x \ge x_n \end{cases}$
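The case definition above can be sketched in plain Python, independent of Goulib (the helper name piecewise_constant is hypothetical; the sample points reuse the p1 example below):

```python
import bisect

def piecewise_constant(points, y0=0):
    """Build f(x) from (x_i, y_i) tuples given in any order:
    f(x) = y0 for x < x_1, and y_i for x_i <= x < x_{i+1}."""
    xs, ys = zip(*sorted(points))

    def f(x):
        i = bisect.bisect_right(xs, x)  # number of breakpoints x_i <= x
        return y0 if i == 0 else ys[i - 1]

    return f

f = piecewise_constant([(4, 4), (3, 3), (1, 1), (5, 0)])
print([f(x) for x in (0, 1, 2, 3.5, 4.5, 6)])  # [0, 1, 1, 3, 4, 0]
```

Sorting the breakpoints once and binary-searching per evaluation keeps lookup at O(log n), which is the usual design choice for step functions.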
p1 = Piecewise([(4, 4), (3, 3), (1, 1), (5, 0)])
p1  # default rendering is LaTeX
p1.plot()  # pity that matplotlib doesn't accept large LaTeX as title...
By default y0=0 , but it can be specified at construction. Piecewise functions can also be defined by adding (x0,y,x1) segments
p2 = Piecewise(default=1)
p2 += (2.5, 1, 6.5)
p2 += (1.5, 1, 3.5)
p2.plot(xmax=7, ylim=(-1, 5))

plot.plot([p1, p2, p1 + p2, p1 - p2, p1 * p2, p1 / p2],
          labels=['p1', 'p2', 'p1+p2', 'p1-p2', 'p1*p2', 'p1/p2'],
          xmax=7, ylim=(-2, 10), offset=0.02)

p1 = Piecewise([(2, True)], False)
p2 = Piecewise([(1, True), (2, False), (3, True)], False)
plot.plot([p1, p2, p1 | p2, p1 & p2, p1 ^ p2, p1 >> 3],
          labels=['p1', 'p2', 'p1 or p2', 'p1 and p2', 'p1 xor p2', 'p1>>3'],
          xmax=7, ylim=(-.5, 1.5), offset=0.02)
Piecewise Expr function
from math import cos

f = Piecewise().append(0, cos).append(1, lambda x: x**x)
f
f.plot()
Expected Output

<table> <center> Total Params: 3743280 </center> </table>

By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:

<img src="images/distance_kiank.png" style="width:680px;height:250px;"> <caption><center> <u> <font color='purple'> Figure 2: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>

So, an encoding is a good one if:
- The encodings of two images of the same person are quite similar to each other
- The encodings of two images of different persons are very different

The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.

<img src="images/triplet_comparison.png" style="width:280px;height:150px;"> <br> <caption><center> <u> <font color='purple'> Figure 3: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>

1.2 - The Triplet Loss

For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.

<img src="images/f_x.png" style="width:380px;height:150px;">

<!-- We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1). !-->

Training will use triplets of images $(A, P, N)$:
- A is an "Anchor" image: a picture of a person.
- P is a "Positive" image: a picture of the same person as the Anchor image.
- N is a "Negative" image: a picture of a different person than the Anchor image.

These triplets are picked from our training dataset.
We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$:

$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$

You would thus like to minimize the following "triplet cost":

$$\mathcal{J} = \sum_{i=1}^{m} \Big[ \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_{\text{(1)}} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_{\text{(2)}} + \alpha \Big]_+ \tag{3}$$

Here, we are using the notation "$[z]_+$" to denote $\max(z, 0)$.

Notes:
- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
- The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large, so it makes sense to have a minus sign preceding it.
- $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$.

Most implementations also normalize the encoding vectors to have norm equal to one (i.e., $\mid \mid f(img) \mid \mid_2 = 1$); you won't have to worry about that here.

Exercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps:
1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
3. Compute the formula per training example: $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
4. Compute the full formula by taking the max with zero and summing over the training examples:

$$\mathcal{J} = \sum_{i=1}^{m} \Big[ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha \Big]_+ \tag{3}$$

Useful functions: tf.reduce_sum(), tf.square(), tf.subtract(), tf.add(), tf.maximum().

For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$, while for step 4 you will need to sum over the training examples.
# GRADED FUNCTION: triplet_loss

def triplet_loss(y_true, y_pred, alpha = 0.2):
    """
    Implementation of the triplet loss as defined by formula (3)

    Arguments:
    y_true -- true labels, required when you define a loss in Keras; you don't need it in this function.
    y_pred -- python list containing three objects:
            anchor -- the encodings for the anchor images, of shape (None, 128)
            positive -- the encodings for the positive images, of shape (None, 128)
            negative -- the encodings for the negative images, of shape (None, 128)

    Returns:
    loss -- real number, value of the loss
    """

    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]

    ### START CODE HERE ### (≈ 4 lines)
    # Step 1: Compute the (encoding) distance between the anchor and the positive; sum over axis=-1
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis = -1)
    # Step 2: Compute the (encoding) distance between the anchor and the negative; sum over axis=-1
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis = -1)
    # Step 3: subtract the two previous distances and add alpha.
    basic_loss = pos_dist - neg_dist + alpha
    # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
    ### END CODE HERE ###

    return loss

with tf.Session() as test:
    tf.set_random_seed(1)
    y_true = (None, None, None)
    y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
              tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
              tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
    loss = triplet_loss(y_true, y_pred)
    print("loss = " + str(loss.eval()))
Course 4/Face Recognition for the Happy House v3.ipynb
ShubhamDebnath/Coursera-Machine-Learning
mit
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.

Exercise: Implement the verify() function, which checks if the front-door camera picture (image_path) is actually the person called "identity". You will have to go through the following steps:
1. Compute the encoding of the image from image_path
2. Compute the distance between this encoding and the encoding of the identity image stored in the database
3. Open the door if the distance is less than 0.7, else do not open.

As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
# GRADED FUNCTION: verify

def verify(image_path, identity, database, model):
    """
    Function that verifies if the person on the "image_path" image is "identity".

    Arguments:
    image_path -- path to an image
    identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
    database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
    model -- your Inception model instance in Keras

    Returns:
    dist -- distance between the image_path and the image of "identity" in the database.
    door_open -- True, if the door should open. False otherwise.
    """

    ### START CODE HERE ###

    # Step 1: Compute the encoding for the image. Use img_to_encoding() -- see example above. (≈ 1 line)
    encoding = img_to_encoding(image_path, model)

    # Step 2: Compute distance with identity's image (≈ 1 line)
    dist = np.linalg.norm(encoding - database[identity])

    # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
    if dist < 0.7:
        print("It's " + str(identity) + ", welcome home!")
        door_open = True
    else:
        print("It's not " + str(identity) + ", please go away")
        door_open = False

    ### END CODE HERE ###

    return dist, door_open
Expected Output:

<table> <tr> <td> **It's not kian, please go away** </td> <td> (0.86224014, False) </td> </tr> </table>

3.2 - Face Recognition

Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in! To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them!

You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input.

Exercise: Implement who_is_it(). You will have to go through the following steps:
1. Compute the target encoding of the image from image_path
2. Find the encoding from the database that has the smallest distance to the target encoding.
    - Initialize the min_dist variable to a large enough number (100). It will help you keep track of the closest encoding to the input's encoding.
    - Loop over the database dictionary's names and encodings. To loop, use for (name, db_enc) in database.items().
    - Compute the L2 distance between the target "encoding" and the current "encoding" from the database.
    - If this distance is less than min_dist, then set min_dist to dist, and identity to name.
# GRADED FUNCTION: who_is_it

def who_is_it(image_path, database, model):
    """
    Implements face recognition for the happy house by finding who is the person on the image_path image.

    Arguments:
    image_path -- path to an image
    database -- database containing image encodings along with the name of the person on the image
    model -- your Inception model instance in Keras

    Returns:
    min_dist -- the minimum distance between image_path encoding and the encodings from the database
    identity -- string, the name prediction for the person on image_path
    """

    ### START CODE HERE ###

    ## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() -- see example above. (≈ 1 line)
    encoding = img_to_encoding(image_path, model)

    ## Step 2: Find the closest encoding ##

    # Initialize "min_dist" to a large value, say 100 (≈ 1 line)
    min_dist = 100

    # Loop over the database dictionary's names and encodings.
    for (name, db_enc) in database.items():

        # Compute L2 distance between the target "encoding" and the current "db_enc" from the database. (≈ 1 line)
        dist = np.linalg.norm(encoding - db_enc)

        # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
        if dist < min_dist:
            min_dist = dist
            identity = name

    ### END CODE HERE ###

    if min_dist > 0.7:
        print("Not in the database.")
    else:
        print("it's " + str(identity) + ", the distance is " + str(min_dist))

    return min_dist, identity
Now that we have a function to generate hillshade, we need to read in the NEON LiDAR Digital Terrain Model (DTM) geotif using the raster2array function and then calculate hillshade using the hillshade function. We can then plot both using the plot_band_array function.
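The hillshade function referenced above is not shown in this excerpt. A minimal sketch based on the standard azimuth/altitude hillshade formula, with the name and signature taken from the call below (the body is an assumption, not necessarily the notebook's exact implementation):

```python
import numpy as np

def hillshade(array, azimuth, angle_altitude):
    """Shaded relief from an elevation array, given sun azimuth and
    altitude in degrees (standard hillshade formula, scaled to 0-255)."""
    x, y = np.gradient(array)  # elevation gradients along rows and columns
    slope = np.pi / 2.0 - np.arctan(np.sqrt(x * x + y * y))
    aspect = np.arctan2(-x, y)
    az_rad = (360.0 - azimuth) * np.pi / 180.0
    alt_rad = angle_altitude * np.pi / 180.0
    shaded = (np.sin(alt_rad) * np.sin(slope)
              + np.cos(alt_rad) * np.cos(slope) * np.cos(az_rad - aspect))
    return 255 * (shaded + 1) / 2

# Tiny demo on a synthetic ramp surface (not NEON data)
demo = hillshade(np.outer(np.arange(5.0), np.ones(5)), 225, 45)
print(demo.shape)
```

The azimuth of 225° and altitude of 45° match the arguments used in the cell below.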
# Use raster2array to convert TEAK DTM Geotif to array & plot
dtm_array, dtm_metadata = raster2array('TEAK_D17_Data_2013/TEAK_LiDAR_Data/TEAK_lidarDTM.tif')
plot_band_array(dtm_array, dtm_metadata['extent'], 'TEAK DTM', 'Elevation, m', colormap='gist_earth')
ax = plt.gca(); plt.grid('on')

# Use hillshade function on a DTM Geotiff
hs_array = hillshade(dtm_array, 225, 45)
plot_band_array(hs_array, dtm_metadata['extent'], 'TEAK Hillshade, Aspect=225°',
                'Hillshade', colormap='Greys', alpha=0.8)
ax = plt.gca(); plt.grid('on')

# Overlay transparent hillshade on DTM:
fig = plt.figure(frameon=False)
im1 = plt.imshow(dtm_array, cmap='terrain_r', extent=dtm_metadata['extent'])
cbar = plt.colorbar(); cbar.set_label('Elevation, m', rotation=270, labelpad=20)
im2 = plt.imshow(hs_array, cmap='Greys', alpha=0.8, extent=dtm_metadata['extent'])  # plt.colorbar()
ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain')  # do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(), rotation=90)  # rotate x tick labels 90 degrees
plt.grid('on')  # plt.colorbar()
plt.title('TEAK Hillshade + DTM')
code/Python/remote-sensing/lidar/create_hillshade_from_terrain_raster_py.ipynb
mjones01/NEON-Data-Skills
agpl-3.0
Calculate CHM & Overlay on Top of Hillshade
# Calculate CHM from DSM & DTM:
dsm_array, dsm_metadata = raster2array('TEAK_D17_Data_2013/TEAK_LiDAR_Data/TEAK_lidarDSM.tif')
teak_chm = dsm_array - dtm_array
plot_band_array(teak_chm, dtm_metadata['extent'], 'TEAK Canopy Height Model',
                'Canopy Height, m', colormap='Greens')
ax = plt.gca(); plt.grid('on')

# Overlay transparent hillshade on DTM:
fig = plt.figure(frameon=False)

# Terrain
im1 = plt.imshow(dtm_array, cmap='YlOrBr', extent=dtm_metadata['extent'])
cbar1 = plt.colorbar(); cbar1.set_label('Elevation, m', rotation=270, labelpad=20)

# Hillshade
im2 = plt.imshow(hs_array, cmap='Greys', alpha=.5, extent=dtm_metadata['extent'])  # plt.colorbar()

# Canopy
im3 = plt.imshow(teak_chm, cmap='Greens', alpha=0.6, extent=dtm_metadata['extent'])
cbar2 = plt.colorbar(); cbar2.set_label('Canopy Height, m', rotation=270, labelpad=20)

ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain')  # do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(), rotation=90)  # rotate x tick labels 90 degrees
plt.grid('on')
plt.title('TEAK 2013 \n Terrain, Hillshade, & Canopy Height')
Links to Tutorials on Creating Hillshades:

Python Hillshade:
- http://geoexamples.blogspot.com/2014/03/shaded-relief-images-using-gdal-python.html
- http://pangea.stanford.edu/~samuelj/musings/dems-in-python-pt-3-slope-and-hillshades-.html

ESRI ArcGIS Hillshade Algorithm:
- http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=How%20Hillshade%20works

GitHub Hillshade Functions/Tutorials:
- https://github.com/rveciana/introduccion-python-geoespacial/blob/master/hillshade.py
- https://github.com/clhenrick/gdal_hillshade_tutorial

GDAL Hillshade:
- http://www.gdal.org/gdaldem.html
- http://gis.stackexchange.com/questions/144535/how-to-create-transparent-hillshade/144700

Scratch Code
# Importing the TEAK CHM Geotiff resulted in very sparse data?
chm_array, chm_metadata = raster2array('TEAK_lidarCHM.tif')
print('TEAK CHM Array\n:', chm_array)

# print metadata in alphabetical order
for item in sorted(chm_metadata):
    print(item + ':', chm_metadata[item])
# print(chm_metadata['extent'])

import copy
chm_nonzero_array = copy.copy(chm_array)
chm_nonzero_array[chm_array == 0] = np.nan
print('TEAK CHM nonzero array:\n', chm_nonzero_array)
print(np.nanmin(chm_nonzero_array))
print(np.nanmax(chm_nonzero_array))
Now we are ready to set up our AIRL trainer. Note that the reward_net is actually the discriminator's network. We evaluate the learner before and after training so we can see whether it made any progress.
from imitation.algorithms.adversarial.airl import AIRL
from imitation.rewards.reward_nets import BasicShapedRewardNet
from imitation.util.networks import RunningNorm
from stable_baselines3 import PPO
from stable_baselines3.ppo import MlpPolicy
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, SubprocVecEnv
import gym
import seals

venv = DummyVecEnv([lambda: gym.make("seals/CartPole-v0")] * 8)
learner = PPO(
    env=venv,
    policy=MlpPolicy,
    batch_size=64,
    ent_coef=0.0,
    learning_rate=0.0003,
    n_epochs=10,
)
reward_net = BasicShapedRewardNet(
    venv.observation_space, venv.action_space, normalize_input_layer=RunningNorm
)
airl_trainer = AIRL(
    demonstrations=rollouts,
    demo_batch_size=1024,
    gen_replay_buffer_capacity=2048,
    n_disc_updates_per_round=4,
    venv=venv,
    gen_algo=learner,
    reward_net=reward_net,
)

learner_rewards_before_training, _ = evaluate_policy(
    learner, venv, 100, return_episode_rewards=True
)
airl_trainer.train(20000)  # Note: set to 300000 for better results
learner_rewards_after_training, _ = evaluate_policy(
    learner, venv, 100, return_episode_rewards=True
)
examples/4_train_airl.ipynb
HumanCompatibleAI/imitation
mit
Prepare the dependencies
%%px
import os
import time

import nnabla as nn
import nnabla.communicators as C
from nnabla.ext_utils import get_extension_context
import nnabla.functions as F
from nnabla.initializer import (
    calc_uniform_lim_glorot,
    UniformInitializer)
import nnabla.parametric_functions as PF
import nnabla.solvers as S
import numpy as np
tutorial/multi_device_training.ipynb
sony/nnabla
apache-2.0
Define the communicator for gradient exchange.
%%px
extension_module = "cudnn"
ctx = get_extension_context(extension_module)
comm = C.MultiProcessCommunicator(ctx)
comm.init()
n_devices = comm.size
mpi_rank = comm.rank
device_id = mpi_rank
ctx = get_extension_context(extension_module, device_id=device_id)
Check that different ranks are assigned to different devices.
%%px
print("n_devices={}".format(n_devices))
print("mpi_rank={}".format(mpi_rank))
Create data points and a very simple neural network
%%px
# Data points setting
n_class = 2
b, c, h, w = 4, 1, 32, 32

# Data points
x_data = np.random.rand(b, c, h, w)
y_data = np.random.choice(n_class, b).reshape((b, 1))
x = nn.Variable(x_data.shape)
y = nn.Variable(y_data.shape)
x.d = x_data
y.d = y_data

# Network setting
C = 1
kernel = (3, 3)
pad = (1, 1)
stride = (1, 1)

%%px
rng = np.random.RandomState(0)
w_init = UniformInitializer(
    calc_uniform_lim_glorot(C, C/2, kernel=(1, 1)),
    rng=rng)

%%px
# Network
with nn.context_scope(ctx):
    h = PF.convolution(x, C, kernel, pad, stride, w_init=w_init)
    pred = PF.affine(h, n_class, w_init=w_init)
    loss = F.mean(F.softmax_cross_entropy(pred, y))
The important point here is that w_init is passed to the parametric functions so that the network on each GPU starts from the same values of the trainable parameters in the optimization process.

Create a solver.
%%px
# Solver and add parameters
solver = S.Adam()
solver.set_parameters(nn.get_parameters())
Training

Recall the basic usage of the nnabla API for training a neural network:

1. loss.forward()
2. solver.zero_grad()
3. loss.backward()
4. solver.update()

With C.MultiProcessCommunicator, these steps are performed on different GPUs, and the only difference from these steps is the addition of comm.all_reduce(). Thus, with C.MultiProcessCommunicator the training steps become:

1. loss.forward()
2. solver.zero_grad()
3. loss.backward()
4. comm.all_reduce([x.grad for x in nn.get_parameters().values()])
5. solver.update()

First, forward, zero_grad, and backward,
%%px
# Training steps
loss.forward()
solver.zero_grad()
loss.backward()
Check gradients of weights once,
%%px
for n, v in nn.get_parameters().items():
    print(n, v.g)
You can see different values on each device; now call all_reduce,
%%px
comm.all_reduce([x.grad for x in nn.get_parameters().values()], division=True)
Commonly, all_reduce means just the sum; however, comm.all_reduce addresses both cases: summation, and summation followed by division (i.e., averaging). Again, check the gradients of the weights,
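Conceptually, the two modes of comm.all_reduce can be sketched with plain NumPy (the per-device gradients here are hypothetical; the real call exchanges gradient buffers across processes):

```python
import numpy as np

# Hypothetical gradients of the same parameter on 4 devices.
grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]),
         np.array([5.0, 6.0]), np.array([7.0, 8.0])]

# division=False: every device ends up with the element-wise sum.
reduced_sum = np.sum(grads, axis=0)

# division=True: the sum is divided by the number of devices (an average),
# which keeps the gradient scale independent of the device count.
reduced_mean = reduced_sum / len(grads)

print(reduced_sum)   # [16. 20.]
print(reduced_mean)  # [4. 5.]
```

Averaging (division=True) is the usual choice for data-parallel training, since it makes the effective learning rate independent of how many devices participate.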
%%px
for n, v in nn.get_parameters().items():
    print(n, v.g)
You can see the same values across the devices because of all_reduce. Update the weights,
%%px
solver.update()
This concludes the usage of C.MultiProcessCommunicator for Data Parallel Distributed Training. Now that you have an understanding of how to use C.MultiProcessCommunicator, go to the cifar10 example, classification.py, for more details.

Advanced Topics

When working with multiple nodes with multiple devices (e.g. GPUs), one or a few of them might stop responding in some special cases. When your training process takes a long time anyway, it is hard to tell whether the elapsed time is spent in training or waiting on a dead device. In the current implementation, we introduced a watch dog in all_reduce(). When any node or any device stops responding, the watch dog raises an exception. The typical timeout of all_reduce() is 60 seconds: the process on any node or device cannot wait at all_reduce() for more than 60 seconds; beyond that, some node or device has most likely stopped responding. But in practice, some tasks need to be performed on one or a few nodes while the other nodes wait. Without explicit synchronization, the watch dog might be triggered unexpectedly, as in the following:
extension_module = "cudnn"
type_config = "float"
ctx = get_extension_context(extension_module, type_config=type_config)
comm = C.MultiProcessDataParalellCommunicator(ctx)
comm.init()

if comm.rank == 0:
    ...  # Here, we do some task on node 0

if comm.rank != 0:
    ...  # Here, we do some task on other nodes

# Up to this point, different nodes have made different progress

for d in data_iterator():
    ...
    comm.all_reduce(...)  # Since different nodes reach this point at
                          # different times, all_reduce() might trigger
                          # the watch dog timeout exception.
tutorial/multi_device_training.ipynb
sony/nnabla
apache-2.0
To avoid this unexpected exception, we have to set a synchronization point explicitly.
extension_module = "cudnn"
type_config = "float"
ctx = get_extension_context(extension_module, type_config=type_config)
comm = C.MultiProcessDataParalellCommunicator(ctx)
comm.init()

if comm.rank == 0:
    ...  # Here, we do some task on node 0

if comm.rank != 0:
    ...  # Here, we do some task on other nodes

comm.barrier()  # We place the synchronization point immediately before
                # comm.all_reduce().

for d in data_iterator():
    ...
    comm.all_reduce(...)  # The wait time at all_reduce() is now strictly
                          # limited to a relatively short time.
tutorial/multi_device_training.ipynb
sony/nnabla
apache-2.0
We placed the synchronization point immediately before comm.all_reduce(), so we know comm.all_reduce() is performed synchronously after that point. This ensures the whole training runs stably and never waits forever because of a corrupted process. If you want to disable the watch dog, set the environment variable NNABLA_MPI_WATCH_DOG_MUTE to any non-zero value:
%%bash
export NNABLA_MPI_WATCH_DOG_MUTE=1
tutorial/multi_device_training.ipynb
sony/nnabla
apache-2.0
Project 1: Quick Theory Validation
from collections import Counter
import numpy as np

positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()

for i in range(len(reviews)):
    if(labels[i] == 'POSITIVE'):
        for word in reviews[i].split(" "):
            positive_counts[word] += 1
            total_counts[word] += 1
    else:
        for word in reviews[i].split(" "):
            negative_counts[word] += 1
            total_counts[word] += 1

# positive_counts.most_common()

pos_neg_ratios = Counter()
for term, cnt in list(total_counts.most_common()):
    if(cnt > 100):
        pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
        pos_neg_ratios[term] = pos_neg_ratio

for word, ratio in pos_neg_ratios.most_common():
    if(ratio > 1):
        pos_neg_ratios[word] = np.log(ratio)
    else:
        pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))

# words most frequently seen in a review with a "POSITIVE" label
# pos_neg_ratios.most_common()

# words most frequently seen in a review with a "NEGATIVE" label
# list(reversed(pos_neg_ratios.most_common()))[0:30]
sentiment-network/Sentiment Classification - Mini Project 5.ipynb
swirlingsand/deep-learning-foundations
mit
Project 2: Creating the Input/Output Data
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
# list(vocab)

import numpy as np
layer_0 = np.zeros((1, vocab_size))
layer_0

from IPython.display import Image
Image(filename='sentiment_network.png')

word2index = {}
for i, word in enumerate(vocab):
    word2index[word] = i
# word2index

def update_input_layer(review):
    global layer_0
    # clear out previous state, reset the layer to be all 0s
    layer_0 *= 0
    for word in review.split(" "):
        layer_0[0][word2index[word]] += 1

update_input_layer(reviews[0])
layer_0

def get_target_for_label(label):
    if(label == 'POSITIVE'):
        return 1
    else:
        return 0

labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
sentiment-network/Sentiment Classification - Mini Project 5.ipynb
swirlingsand/deep-learning-foundations
mit
Project 3: Building a Neural Network Start with your neural network from the last chapter: a 3-layer network with no non-linearity in the hidden layer. Use our functions to create the training data, create a "pre_process_data" function to build the vocabulary for our training-data generating functions, and modify "train" to train over the entire corpus. Where to Get Help if You Need it: re-watch the previous week's Udacity lectures, or see Chapters 3-5 of Grokking Deep Learning (40% off: traskud17).
import time
import sys
import numpy as np

# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes=10, learning_rate=0.1):
        # set our random number generator
        np.random.seed(1)
        self.pre_process_data(reviews, labels)
        self.init_network(len(self.review_vocab), hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels):
        review_vocab = set()
        for review in reviews:
            for word in review.split(" "):
                review_vocab.add(word)
        self.review_vocab = list(review_vocab)

        label_vocab = set()
        for label in labels:
            label_vocab.add(label)
        self.label_vocab = list(label_vocab)

        self.review_vocab_size = len(self.review_vocab)
        self.label_vocab_size = len(self.label_vocab)

        self.word2index = {}
        for i, word in enumerate(self.review_vocab):
            self.word2index[word] = i

        self.label2index = {}
        for i, label in enumerate(self.label_vocab):
            self.label2index[label] = i

    def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
        self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
                                            (self.hidden_nodes, self.output_nodes))
        self.learning_rate = learning_rate
        self.layer_0 = np.zeros((1, input_nodes))
        self.layer_1 = np.zeros((1, hidden_nodes))

    def update_input_layer(self, review):
        # clear out previous state, reset the layer to be all 0s
        self.layer_0 *= 0
        for word in review.split(" "):
            if(word in self.word2index.keys()):
                self.layer_0[0][self.word2index[word]] += 1

    def get_target_for_label(self, label):
        if(label == 'POSITIVE'):
            return 1
        else:
            return 0

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_output_2_derivative(self, output):
        return output * (1 - output)

    def train(self, training_reviews_raw, training_labels):
        # Performance pre-processing: convert each review to a list of
        # unique word indices
        training_reviews = list()
        for review in training_reviews_raw:
            indices = set()
            for word in review.split(" "):
                if(word in self.word2index.keys()):
                    indices.add(self.word2index[word])
            training_reviews.append(list(indices))

        assert(len(training_reviews) == len(training_labels))

        correct_so_far = 0
        start = time.time()

        for i in range(len(training_reviews)):
            review = training_reviews[i]
            label = training_labels[i]

            #### Implement the forward pass here ####
            ### Forward pass ###

            # Input Layer
            # Handled in performance pre-processing
            # self.update_input_layer(review)

            # Hidden layer
            self.layer_1 *= 0
            for index in review:
                self.layer_1 += self.weights_0_1[index]

            # Output layer
            layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))

            #### Implement the backward pass here ####
            ### Backward pass ###

            # TODO: Output error
            # Output layer error is the difference between desired target and actual output.
            layer_2_error = layer_2 - self.get_target_for_label(label)
            layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)

            # TODO: Backpropagated error
            layer_1_error = layer_2_delta.dot(self.weights_1_2.T)  # errors propagated to the hidden layer
            layer_1_delta = layer_1_error  # hidden layer gradients - no nonlinearity so it's the same as the error

            # TODO: Update the weights
            # update hidden-to-output weights with gradient descent step
            self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate
            # update input-to-hidden weights with gradient descent step
            # (note: with the index-based pre-processing above, layer_0 stays zero
            # here; Project 4 fixes this by updating only the rows in use)
            self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate

            if(np.abs(layer_2_error) < 0.5):
                correct_so_far += 1

            reviews_per_second = i / float(time.time() - start)
            sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4]
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5]
                             + " #Correct:" + str(correct_so_far)
                             + " #Trained:" + str(i+1)
                             + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
            if(i % 2500 == 0):
                print("")

    def test(self, testing_reviews, testing_labels):
        correct = 0
        start = time.time()
        for i in range(len(testing_reviews)):
            pred = self.run(testing_reviews[i])
            if(pred == testing_labels[i]):
                correct += 1
            reviews_per_second = i / float(time.time() - start)
            sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4]
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5]
                             + "% #Correct:" + str(correct)
                             + " #Tested:" + str(i+1)
                             + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")

    def run(self, review):
        # Input Layer
        self.update_input_layer(review.lower())
        # Hidden layer
        layer_1 = self.layer_0.dot(self.weights_0_1)
        # Output layer
        layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
        if(layer_2[0] > 0.5):
            return "POSITIVE"
        else:
            return "NEGATIVE"

# mlp = SentimentNetwork(reviews[:-1000], labels[:-1000], learning_rate=0.1)

# evaluate our model before training (just to show how horrible it is)
# mlp.test(reviews[-1000:], labels[-1000:])

# train the network
# mlp.train(reviews[:-1000], labels[:-1000])

# mlp = SentimentNetwork(reviews[:-1000], labels[:-1000], learning_rate=0.01)
# train the network
# mlp.train(reviews[:-1000], labels[:-1000])

# mlp = SentimentNetwork(reviews[:-1000], labels[:-1000], learning_rate=0.001)
# train the network
# mlp.train(reviews[:-1000], labels[:-1000])
sentiment-network/Sentiment Classification - Mini Project 5.ipynb
swirlingsand/deep-learning-foundations
mit
Understanding Neural Noise
from IPython.display import Image
Image(filename='sentiment_network.png')

def update_input_layer(review):
    global layer_0
    # clear out previous state, reset the layer to be all 0s
    layer_0 *= 0
    for word in review.split(" "):
        layer_0[0][word2index[word]] += 1

update_input_layer(reviews[0])
layer_0

review_counter = Counter()
for word in reviews[0].split(" "):
    review_counter[word] += 1
# review_counter.most_common()
sentiment-network/Sentiment Classification - Mini Project 5.ipynb
swirlingsand/deep-learning-foundations
mit
Project 4: Reducing Noise in our Input Data
import time
import sys
import numpy as np

# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes=10, learning_rate=0.1):
        # set our random number generator
        np.random.seed(1)
        self.pre_process_data(reviews, labels)
        self.init_network(len(self.review_vocab), hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels):
        review_vocab = set()
        for review in reviews:
            for word in review.split(" "):
                review_vocab.add(word)
        self.review_vocab = list(review_vocab)

        label_vocab = set()
        for label in labels:
            label_vocab.add(label)
        self.label_vocab = list(label_vocab)

        self.review_vocab_size = len(self.review_vocab)
        self.label_vocab_size = len(self.label_vocab)

        self.word2index = {}
        for i, word in enumerate(self.review_vocab):
            self.word2index[word] = i

        self.label2index = {}
        for i, label in enumerate(self.label_vocab):
            self.label2index[label] = i

    def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
        self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
                                            (self.hidden_nodes, self.output_nodes))
        self.learning_rate = learning_rate
        self.layer_0 = np.zeros((1, input_nodes))
        self.layer_1 = np.zeros((1, hidden_nodes))

    def update_input_layer(self, review):
        # clear out previous state, reset the layer to be all 0s
        self.layer_0 *= 0
        for word in review.split(" "):
            if(word in self.word2index.keys()):
                self.layer_0[0][self.word2index[word]] = 1

    def get_target_for_label(self, label):
        if(label == 'POSITIVE'):
            return 1
        else:
            return 0

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_output_2_derivative(self, output):
        return output * (1 - output)

    def train(self, training_reviews_raw, training_labels):
        # Performance pre-processing: convert each review to a list of
        # unique word indices
        training_reviews = list()
        for review in training_reviews_raw:
            indices = set()
            for word in review.split(" "):
                if(word in self.word2index.keys()):
                    indices.add(self.word2index[word])
            training_reviews.append(list(indices))

        assert(len(training_reviews) == len(training_labels))

        correct_so_far = 0
        start = time.time()

        for i in range(len(training_reviews)):
            review = training_reviews[i]
            label = training_labels[i]

            #### Implement the forward pass here ####
            ### Forward pass ###

            # Input Layer
            # Removed, now handled by the performance pre-processing
            # self.update_input_layer(review)

            # Hidden layer
            self.layer_1 *= 0
            for index in review:
                self.layer_1 += self.weights_0_1[index]

            # Output layer
            layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))

            #### Implement the backward pass here ####
            ### Backward pass ###

            # Output error: difference between desired target and actual output
            layer_2_error = layer_2 - self.get_target_for_label(label)
            layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)

            # Backpropagated error
            layer_1_error = layer_2_delta.dot(self.weights_1_2.T)  # errors propagated to the hidden layer
            layer_1_delta = layer_1_error  # hidden layer gradients - no nonlinearity so it's the same as the error

            # Update the weights
            # update hidden-to-output weights with gradient descent step
            self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate

            # Replaced with the code below for better performance:
            # self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate
            # only update the weights we are USING
            for index in review:
                self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate

            if(np.abs(layer_2_error) < 0.5):
                correct_so_far += 1

            reviews_per_second = i / float(time.time() - start)
            sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4]
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5]
                             + " #Correct:" + str(correct_so_far)
                             + " #Trained:" + str(i+1)
                             + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
            if(i % 2500 == 0):
                print("")

    def test(self, testing_reviews, testing_labels):
        correct = 0
        start = time.time()
        for i in range(len(testing_reviews)):
            pred = self.run(testing_reviews[i])
            if(pred == testing_labels[i]):
                correct += 1
            reviews_per_second = i / float(time.time() - start)
            sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4]
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5]
                             + "% #Correct:" + str(correct)
                             + " #Tested:" + str(i+1)
                             + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")

    def run(self, review):
        # Hidden layer
        self.layer_1 *= 0
        unique_indices = set()
        for word in review.lower().split(" "):
            if word in self.word2index.keys():
                unique_indices.add(self.word2index[word])
        for index in unique_indices:
            self.layer_1 += self.weights_0_1[index]
        # Output layer
        layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
        if(layer_2[0] > 0.5):
            return "POSITIVE"
        else:
            return "NEGATIVE"

mlp = SentimentNetwork(reviews[:-1000], labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000], labels[:-1000])

# evaluate our model after training
mlp.test(reviews[-1000:], labels[-1000:])
sentiment-network/Sentiment Classification - Mini Project 5.ipynb
swirlingsand/deep-learning-foundations
mit
Problem 1) An (oversimplified) 1-D Model For this introductory problem we are going to simulate a 1-dimensional detector (the more complex issues associated with real stars on 2D detectors will be covered tomorrow by Dora). We will generate stars as Gaussians $N(\mu, \sigma^2)$, with mean $\mu$ and variance $\sigma^2$. As observed by LSST, all stars are point sources that reflect the point spread function (PSF), which is produced by a combination of the atmosphere, telescope, and detector. A standard measure of the PSF's width is the Full Width Half Maximum (FWHM). There is also a smooth background of light from several sources previously mentioned (the atmosphere, the detector, etc.). We will refer to this background simply as "The Sky". Problem 1a Write a function phi() to simulate a (noise-free) 1D Gaussian PSF. The function should take mu and fwhm as arguments, and evaluate the PSF along a user-supplied array x. Hint - for a Gaussian $N(0, \sigma^2)$, the FWHM is $2\sqrt{2\ln(2)}\,\sigma \approx 2.3548\sigma$.
def phi(x, mu, fwhm):
    # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
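Not the official solution, but a minimal sketch of how phi() might look, using the FWHM-to-$\sigma$ relation from the hint. Normalizing by $1/(\sigma\sqrt{2\pi})$ makes the PSF integrate to 1, i.e. unit total flux:

```python
import numpy as np

def phi(x, mu, fwhm):
    """Noise-free 1D Gaussian PSF evaluated on array x (one possible sketch)."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))   # FWHM = 2*sqrt(2 ln 2) * sigma
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(0, 20, 2001)
psf = phi(x, mu=10, fwhm=3)
print(psf.sum() * (x[1] - x[0]))   # Riemann sum of the PSF: ~1 (unit flux)
```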
Problem 1b Plot the noise-free PSF for a star with $\mu = 10$ and $\mathrm{FWHM} = 3$. What is the flux of this star?
x = # complete
plt.plot( # complete
print("The flux of the star is: {:.3f}".format( # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 1c Add Sky noise (a constant in this case) to your model. Define the sky as S, with total stellar flux F. Plot the model for S = 100 and F = 500.
plt.plot( # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 2) Add Noise We will add noise to this simulation assuming that photon counting is the only source of uncertainty (this assumption is far from sufficient in real life). Within each pixel, $n$ photons are detected with an uncertainty that follows a Poisson distribution, which has the property that the mean and the variance are both equal to $\mu$. If $n \gg 1$ then $P(\mu) \approx N(\mu, \mu)$ [you can safely assume we will be in this regime for the remainder of this problem]. Problem 2a Calculate the noisy flux for the simulated star in Problem 1c. Hint - you may find the function np.random.normal() helpful.
# complete
noisy_flux = # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
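A hedged sketch of the Gaussian approximation to Poisson noise described above, applied to the Problem 1c model (S = 100, F = 500, FWHM = 3). The 1-pixel grid and the fixed seed are assumptions for reproducibility:

```python
import numpy as np

rng = np.random.default_rng(42)          # fixed seed: an assumption, for reproducibility

x = np.arange(-10, 11)                   # assumed 1-pixel sampling grid
sigma = 3 / (2 * np.sqrt(2 * np.log(2)))
signal = 100 + 500 * np.exp(-0.5 * (x / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

# For n >> 1, Poisson(mu) ~ N(mu, mu): the noise std in each pixel is sqrt(mu)
noisy_flux = rng.normal(signal, np.sqrt(signal))
```

Note that the per-pixel uncertainty grows with the expected counts, so the star's peak is noisier in absolute terms than the sky.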
Problem 2b Overplot the noisy signal, with the associated uncertainties, on top of the noise-free signal.
plt.plot( # complete
plt.errorbar( # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3) Flux Measurement We will now attempt to measure the flux from a simulated star. Problem 3a Write a function simulate() to simulate the noisy flux measurements of a star with centroid mu, FWHM fwhm, sky background S, and flux F. Hint - it may be helpful to plot the output of your function.
def simulate(# complete
    # complete
    # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3b Using an aperture with radius of 5 pixels centered on the source, measure the flux from a star centered at mu = 0, with fwhm = 5, S = 100, and F = 1000. Hint - assume you can perfectly measure the background, and subtract this prior to the measurement.
# complete
sim_star = simulate( # complete
ap_flux = # complete
print("The star has flux = {:.3f}".format( # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3c Write a Monte Carlo simulator to estimate the mean and standard deviation of the flux from the simulated star. Food for thought - what do you notice if you run your simulator many times?
sim_fluxes = # complete
for # complete
print("The mean flux = {:.3f} with variance = {:.3f}".format( # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
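A self-contained sketch of the Monte Carlo experiment, under stated assumptions: the toy simulate() below (signature, grid, and aperture are all assumptions, not the intended solution), a perfectly known sky that is subtracted before summing, and a 5-pixel aperture. The point it illustrates: the aperture misses flux in the Gaussian wings, so the mean comes out slightly below the true F = 1000, and the scatter reflects the total (sky + star) counts in the aperture:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x, mu, fwhm, S, F):
    """Toy stand-in for the simulate() asked for above (names are assumptions)."""
    sigma = fwhm / 2.3548
    signal = S + F * np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))
    return rng.normal(signal, np.sqrt(signal))      # Poisson(mu) ~ N(mu, mu)

x = np.arange(-20, 21)                              # assumed 1-pixel grid
in_ap = np.abs(x) <= 5                              # 5-pixel aperture

sim_fluxes = []
for _ in range(1000):
    sim = simulate(x, mu=0, fwhm=5, S=100, F=1000)
    sim_fluxes.append((sim - 100)[in_ap].sum())     # subtract known sky, then sum

print("mean = {:.1f}, std = {:.1f}".format(np.mean(sim_fluxes), np.std(sim_fluxes)))
```

Running the loop many times shows the scatter is dominated by the 11 sky pixels inside the aperture, not by the star itself.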
Problem 4) PSF Flux measurement In this problem we are going to use our knowledge of the PSF to estimate the flux of the star. We will compare these measurements to the aperture flux measurements above. Problem 4a Create the psf model, psf, which is equivalent to a noise-free star with fwhm = 5.
psf = # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4b Using the same parameters as Problem 3, simulate a star and measure its PSF flux.
sim_star = simulate( # complete
psf_flux = # complete
print("The PSF flux is {:.3f}".format( # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
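One common way to use the PSF model here is a least-squares amplitude fit of the sky-subtracted data against the unit-flux PSF; whether this matches the intended solution is an assumption, and the simulation below is the same self-contained toy setup as before (fixed seed, 1-pixel grid):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(-20, 21)
sigma = 5 / 2.3548

psf = np.exp(-0.5 * (x / sigma)**2) / (sigma * np.sqrt(2 * np.pi))  # noise-free, unit-flux PSF
signal = 100 + 1000 * psf
sim_star = rng.normal(signal, np.sqrt(signal))

# Least-squares fit of the amplitude F in (sim_star - S) ~ F * psf
psf_flux = np.sum(psf * (sim_star - 100)) / np.sum(psf**2)
print("The PSF flux is {:.3f}".format(psf_flux))
```

Because the PSF weights down-weight pixels that carry little stellar signal, this estimator is typically less noisy than the plain aperture sum from Problem 3.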
Problem 4c As before, write a Monte Carlo simulator to estimate the PSF flux of the star. How do your results compare to above?
sim_fluxes = # complete for # complete print("The mean flux = {:.3f} with variance = {:.3f}".format( # complete
Sessions/Session05/Day1/ReIntroductionToImageProcessing.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
2. Load daily SST data The daily time series of SST off Western Australia at the location [112.5$^∘$E, 29.5$^∘$S] has been preprocessed in advance over the 1982 to 2017 period. This can be done using NCO, CDO, Matlab, or Python itself. The location is right at the center of the domain [112.375~112.625$^∘$E, 29.375~29.625$^∘$S], so the daily time series was produced from the nearest four grid points over the domain using bilinear interpolation. The data is stored as the CSV file sst_WA.csv.
sst = np.loadtxt('data/sst_WA.csv', delimiter=',')

# Generate time vector using datetime format (January 1 of year 1 is day 1)
t = np.arange(date(1982,1,1).toordinal(), date(2017,12,31).toordinal()+1)
dates = [date.fromordinal(tt.astype(int)) for tt in t]
ex26-Identify Marine Heatwaves from High-resolution Daily SST Data.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
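The bilinear interpolation mentioned above can be sketched in a few lines of numpy. The grid values below are made up for illustration; since [112.5$^∘$E, 29.5$^∘$S] sits exactly at the center of the four 0.25-degree grid points, the interpolated value is simply their average:

```python
import numpy as np

def bilinear(lon, lat, lons, lats, field):
    """Bilinear interpolation of field (lat x lon grid) at one point (a sketch)."""
    i = np.searchsorted(lons, lon) - 1
    j = np.searchsorted(lats, lat) - 1
    tx = (lon - lons[i]) / (lons[i+1] - lons[i])
    ty = (lat - lats[j]) / (lats[j+1] - lats[j])
    return ((1-tx)*(1-ty)*field[j, i]   + tx*(1-ty)*field[j, i+1] +
            (1-tx)*ty   *field[j+1, i] + tx*ty   *field[j+1, i+1])

# The four grid points around [112.5E, 29.5S]; the SST values are made up
lons = np.array([112.375, 112.625])
lats = np.array([-29.625, -29.375])
sst_grid = np.array([[21.0, 21.4],
                     [20.6, 21.0]])

val = bilinear(112.5, -29.5, lons, lats, sst_grid)
print(val)   # average of the four corners: 21.0
```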
3. Detect Marine Heatwave The marineHeatWaves (mhw) module consists of a number of functions for the detection and characterization of MHWs. The main function is the detection function (detect) which takes as input a time series of temperature (and a corresponding time vector) and outputs a set of detected MHWs. 3.1 Detect Run the MHW detection algorithm which returns the variable mhws, consisting of the detected MHWs, and clim, consisting of the climatological (varying by day-of-year) seasonal cycle and extremes threshold.
mhws, clim = mhw.detect(t, sst)
ex26-Identify Marine Heatwaves from High-resolution Daily SST Data.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
3.2 Check properties of MHWs The number of MHW events:
mhws['n_events']
ex26-Identify Marine Heatwaves from High-resolution Daily SST Data.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
Maximum intensities (in $^∘$C) of the first ten events
mhws['intensity_max'][0:10]
ex26-Identify Marine Heatwaves from High-resolution Daily SST Data.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
Properties of the event with the largest maximum intensity
ev = np.argmax(mhws['intensity_max'])  # Find largest event
print('Maximum intensity:', mhws['intensity_max'][ev], 'deg. C')
print('Average intensity:', mhws['intensity_mean'][ev], 'deg. C')
print('Cumulative intensity:', mhws['intensity_cumulative'][ev], 'deg. C-days')
print('Duration:', mhws['duration'][ev], 'days')
print('Start date:', mhws['date_start'][ev].strftime("%d %B %Y"))
print('End date:', mhws['date_end'][ev].strftime("%d %B %Y"))
ex26-Identify Marine Heatwaves from High-resolution Daily SST Data.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
4. Visualize From the properties of the event with the largest maximum intensity, we can see that it is the famous 2011 MHW off Western Australia. 4.1 Plot the SST time series and take a closer look at the identified MHW event
plt.figure(figsize=(14,10))

plt.subplot(2,1,1)
# Plot SST, seasonal cycle, and threshold
plt.plot(dates, sst, 'k-')
plt.plot(dates, clim['thresh'], 'g-')
plt.plot(dates, clim['seas'], 'b-')
plt.title('SST (black), seasonal climatology (blue), \
threshold (green), detected MHW events (shading)')
plt.xlim(t[0], t[-1])
plt.ylim(sst.min()-0.5, sst.max()+0.5)
plt.ylabel(r'SST [$^\circ$C]')

plt.subplot(2,1,2)
# Find indices for all ten MHWs before and after event of interest and shade accordingly
for ev0 in np.arange(ev-10, ev+11, 1):
    t1 = np.where(t==mhws['time_start'][ev0])[0][0]
    t2 = np.where(t==mhws['time_end'][ev0])[0][0]
    plt.fill_between(dates[t1:t2+1], sst[t1:t2+1], clim['thresh'][t1:t2+1],
                     color=(1,0.6,0.5))
# Find indices for MHW of interest (2011 WA event) and shade accordingly
t1 = np.where(t==mhws['time_start'][ev])[0][0]
t2 = np.where(t==mhws['time_end'][ev])[0][0]
plt.fill_between(dates[t1:t2+1], sst[t1:t2+1], clim['thresh'][t1:t2+1],
                 color='r')
# Plot SST, seasonal cycle, threshold, shade MHWs with main event in red
plt.plot(dates, sst, 'k-', linewidth=2)
plt.plot(dates, clim['thresh'], 'g-', linewidth=2)
plt.plot(dates, clim['seas'], 'b-', linewidth=2)
plt.title('SST (black), seasonal climatology (blue), \
threshold (green), detected MHW events (shading)')
plt.xlim(mhws['time_start'][ev]-150, mhws['time_end'][ev]+150)
plt.ylim(clim['seas'].min() - 1,
         clim['seas'].max() + mhws['intensity_max'][ev] + 0.5)
plt.ylabel(r'SST [$^\circ$C]')
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
Yes, it has certainly picked out the largest event in the series (dark red shading). This event also seems to have been preceded and followed by a number of shorter, weaker events (light red shading). 4.2 Visualize distributions of MHW statistics across all the detected events
plt.figure(figsize=(15,7))

# Duration
plt.subplot(2,2,1)
evMax = np.argmax(mhws['duration'])
plt.bar(range(mhws['n_events']), mhws['duration'], width=0.6,
        color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['duration'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['duration'][ev], width=0.6, edgecolor=(1,0.,0.),
        color='none')
plt.xlim(0, mhws['n_events'])
plt.ylabel('[days]')
plt.title('Duration')

# Maximum intensity
plt.subplot(2,2,2)
evMax = np.argmax(mhws['intensity_max'])
plt.bar(range(mhws['n_events']), mhws['intensity_max'], width=0.6,
        color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['intensity_max'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['intensity_max'][ev], width=0.6, edgecolor=(1,0.,0.),
        color='none')
plt.xlim(0, mhws['n_events'])
plt.ylabel(r'[$^\circ$C]')
plt.title('Maximum Intensity')

# Mean intensity
plt.subplot(2,2,4)
evMax = np.argmax(mhws['intensity_mean'])
plt.bar(range(mhws['n_events']), mhws['intensity_mean'], width=0.6,
        color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['intensity_mean'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['intensity_mean'][ev], width=0.6, edgecolor=(1,0.,0.),
        color='none')
plt.xlim(0, mhws['n_events'])
plt.title('Mean Intensity')
plt.ylabel(r'[$^\circ$C]')
plt.xlabel('MHW event number')

# Cumulative intensity
plt.subplot(2,2,3)
evMax = np.argmax(mhws['intensity_cumulative'])
plt.bar(range(mhws['n_events']), mhws['intensity_cumulative'], width=0.6,
        color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['intensity_cumulative'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['intensity_cumulative'][ev], width=0.6, edgecolor=(1,0.,0.),
        color='none')
plt.xlim(0, mhws['n_events'])
plt.title(r'Cumulative Intensity')
plt.ylabel(r'[$^\circ$C$\times$days]')
plt.xlabel('MHW event number')
ex26-Identify Marine Heatwaves from High-resolution Daily SST Data.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
Reading the training and test data The data can either be downloaded beforehand or read directly. These are the original files from the MNIST DataBase site, converted beforehand to .csv format, which is bulkier but easier to read. Note that the mnist_train.zip file in the repository is compressed.
# Read the training data
N_classes = 10

path = ""  # empty if the data is in the current directory, otherwise set the path
Dtrain = pd.read_csv(path+"mnist_train.zip", header=None)
X_train = Dtrain.values[:,:-1]
Y_train = Dtrain.values[:,-1]

Dtest = pd.read_csv(path+"mnist_test.csv", header=None)
X_test = Dtest.values[:,:-1]
Y_test = Dtest.values[:,-1]
MNIST/Atelier-keras-MNIST.ipynb
wikistat/Ateliers-Big-Data
mit
Note that with Keras the response variable must be a binary matrix in which each class is represented by an indicator: for each sample, the element of the column corresponding to its class is 1, and 0 otherwise. Keras provides a to_categorical function that converts the response vector Y_train directly into an indicator matrix (numpy array) Y_train_cat. It is the equivalent of get_dummies in pandas or OneHotEncoder in scikit-learn.
Y_train_cat = ku.to_categorical(Y_train, N_classes)
Y_test_cat = ku.to_categorical(Y_test, N_classes)
MNIST/Atelier-keras-MNIST.ipynb
wikistat/Ateliers-Big-Data
mit
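For illustration only, the indicator matrix that to_categorical produces can be reproduced with plain numpy (a small made-up label vector, not the MNIST data):

```python
import numpy as np

y = np.array([3, 0, 9])                  # hypothetical class labels
y_cat = np.zeros((len(y), 10))
y_cat[np.arange(len(y)), y] = 1          # one 1 per row, in the column of the class
print(y_cat[0])                          # 1 in column 3, 0 elsewhere
```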
Training and test-set prediction With a dense network First attempt: apply a classical perceptron-type neural network with 4 layers: * Dense: 128 neurons + relu activation * Dropout: 20% of the neurons, drawn at random, are deactivated * Dense: 128 neurons + relu activation * Dropout: 20% of the neurons, drawn at random, are deactivated A final softmax layer provides the classification. Training
X_train.shape

# Define the network
model = km.Sequential()
model.add(kl.Dense(128, activation='relu', input_shape=(784,)))
model.add(kl.Dropout(0.2))
model.add(kl.Dense(128, activation='relu'))
model.add(kl.Dropout(0.2))
model.add(kl.Dense(N_classes, activation='softmax'))

# Summary
model.summary()
MNIST/Atelier-keras-MNIST.ipynb
wikistat/Ateliers-Big-Data
mit
Q Work out the number of parameters by hand.
# training
model.compile(loss='categorical_crossentropy',
              optimizer=ko.RMSprop(),
              metrics=['accuracy'])

ts = time.time()
history = model.fit(X_train, Y_train_cat,
                    batch_size=batch_size,  # batch_size and epochs must be defined beforehand
                    epochs=epochs,
                    verbose=1,
                    validation_data=(X_test, Y_test_cat))
te = time.time()
t_train_mpl = te - ts
MNIST/Atelier-keras-MNIST.ipynb
wikistat/Ateliers-Big-Data
mit
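As a check on the question above, the parameter count of the 784 → 128 → 128 → 10 dense network can be recomputed by hand: each Dense layer has inputs × units weights plus units biases, and Dropout layers add no parameters.

```python
# Dense layer parameters = inputs*units + units (biases); Dropout adds none
p1 = 784*128 + 128    # first Dense:  100480
p2 = 128*128 + 128    # second Dense:  16512
p3 = 128*10  + 10     # output Dense:   1290
print(p1 + p2 + p3)   # 118282, matching model.summary()
```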
Results
score_mpl = model.evaluate(X_test, Y_test_cat, verbose=0)
predict_mpl = model.predict(X_test)
print('Test loss:', score_mpl[0])
print('Test accuracy:', score_mpl[1])
print("Time Running: %.2f seconds" % t_train_mpl)

fig = plt.figure(figsize=(7,6))
ax = fig.add_subplot(1,1,1)
ax = sb.heatmap(pd.DataFrame(confusion_matrix(Y_test, predict_mpl.argmax(1))),
                annot=True, fmt="d")
MNIST/Atelier-keras-MNIST.ipynb
wikistat/Ateliers-Big-Data
mit
Q What can you say about these results? Q Run the algorithm again after normalizing the data so that the values lie between 0 and 1. What do you observe? Convolutional Layers Data format In the previous examples the data were "flattened": an image of $28\times 28=784$ pixels was treated as a vector. To use convolution, the structure of the images is preserved: an image is no longer a vector of size $784\times 1$ but a matrix of size $28\times 28$. A third dimension is also needed to account for the image's channels. For MNIST this last dimension has size 1 because each pixel is described by a single grey level; colour RGB images, however, are usually encoded with three intensity channels (Red, Green, and Blue). X_train is therefore reorganized into a cube (multi-dimensional array) of shape $60000\times 28\times 28\times 1$ to be used in a convolutional network with Keras.
X_train_conv = X_train.reshape(60000, 28, 28, 1)
X_test_conv = X_test.reshape(10000, 28, 28, 1)
MNIST/Atelier-keras-MNIST.ipynb
wikistat/Ateliers-Big-Data
mit
Data visualization
import keras.preprocessing.image as kpi

fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(1,1,1)
x = kpi.img_to_array(X_train_conv[0])
ax.imshow(x[:,:,0]/255, interpolation='nearest', cmap="binary")
ax.grid(False)
plt.show()
Edge detection In this section you can explore the effect of a simple convolution filter on an image. The code below defines a neural network made of a single convolution layer containing one filter that is set by hand (not learned by optimization).
from keras.models import Sequential
from keras.layers import Conv2D

conv_filter = np.array([
    [0.2, -0.2, 0],
    [0.2, -0.2, 0],
    [0.2, -0.2, 0],
])

def my_init_filter(shape, conv_filter=conv_filter, dtype=None):
    xf, yf = conv_filter.shape
    array = conv_filter.reshape(xf, yf, 1, 1)
    return array

my_init_filter(0).shape

conv_edge = Sequential([
    Conv2D(kernel_size=(3, 3), filters=1,
           kernel_initializer=my_init_filter,
           input_shape=(28, 28, 1))
])
Q Note that in the my_init_filter function the dimensions of the filter are modified. What do the two added dimensions correspond to?
img_in = np.expand_dims(x, 0)
img_out = conv_edge.predict(img_in)

fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, figsize=(15, 5))
ax0.imshow(img_in[0, :, :, 0], cmap="binary")
ax0.set_title("Original image")
ax0.grid(False)

# Rescale the filter so both positive and negative weights are visible
norm_conv_filter = (conv_filter - conv_filter.min()) / conv_filter.max()
ax1.imshow(norm_conv_filter, cmap="binary")
ax1.set_title("Filter")
ax1.grid(False)

ax2.imshow(img_out[0, :, :, 0], cmap="binary")
ax2.set_title("Filtered image")
ax2.grid(False)
Q What do you observe? Check that the dimensions of the output image are consistent. Q Try the same code with a different filter. Strides and Padding In this section you can explore the effect of the strides and padding arguments on an image.
from keras.models import Sequential
from keras.layers import Conv2D

conv_filter = np.array([
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
])

def my_init_filter(shape, conv_filter=conv_filter, dtype=None):
    xf, yf = conv_filter.shape
    array = conv_filter.reshape(xf, yf, 1, 1)
    return array

my_init_filter(0).shape

conv_sp = Sequential([
    Conv2D(kernel_size=(3, 3), filters=1,
           kernel_initializer=my_init_filter,
           input_shape=(28, 28, 1),
           strides=2, padding="same")
])
Q What is the effect of the filter defined here?
img_in = np.expand_dims(x, 0)
img_out = conv_sp.predict(img_in)

fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, figsize=(15, 5))
ax0.imshow(img_in[0, :, :, 0], cmap="binary")
ax0.grid(False)

norm_conv_filter = (conv_filter - conv_filter.min()) / conv_filter.max()
ax1.imshow(norm_conv_filter, cmap="binary")
ax1.grid(False)

ax2.imshow(img_out[0, :, :, 0], cmap="binary")
ax2.grid(False)
Q Change the strides and padding parameters and observe the effect on the image dimensions. Max Pooling Exercise Write similar code to observe the effect of MaxPooling.
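To check the dimensions you observe, the spatial output size of a convolution (or pooling) layer can be computed with the usual formulas; this sketch follows Keras's conventions for "same" and "valid" padding:

```python
import math

def conv_output_size(n, kernel, stride, padding):
    """Spatial output size of a 2D convolution along one axis (Keras conventions)."""
    if padding == "same":
        # zero-padded so only the stride shrinks the feature map
        return math.ceil(n / stride)
    elif padding == "valid":
        return math.ceil((n - kernel + 1) / stride)
    raise ValueError(padding)

# 28x28 input with a 3x3 kernel, as in the cells above
print(conv_output_size(28, 3, 1, "valid"))  # 26
print(conv_output_size(28, 3, 2, "same"))   # 14
print(conv_output_size(28, 3, 2, "valid"))  # 13
```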
# %load max_pooling.py
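If the helper script is not available, here is a minimal numpy sketch of what a MaxPooling2D layer computes: a 2×2 window with stride 2 keeps the maximum of each block, halving each spatial dimension.

```python
import numpy as np

def max_pool2d(img, pool=2):
    """Max pooling with a square window and stride equal to the pool size (no padding)."""
    h, w = img.shape
    h, w = h - h % pool, w - w % pool          # drop trailing rows/cols if needed
    blocks = img[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return blocks.max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(img))
# [[ 5.  7.]
#  [13. 15.]]
```

Applying the same function to the 28×28 digit above gives a 14×14 image, matching the pooling layers used later in LeNet5.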
Convolutional Network (ConvNet) The translation-invariance properties introduced by layers that convolve the images have a strong impact on the quality of the results. LeNet5 We first test the LeNet5 model proposed by LeCun et al.
LeNet5model = km.Sequential()
LeNet5model.add(kl.Conv2D(filters=6, kernel_size=5, strides=1, activation='tanh', input_shape=(28, 28, 1)))
LeNet5model.add(kl.MaxPooling2D(pool_size=2, strides=2))
LeNet5model.add(kl.Conv2D(filters=16, kernel_size=5, strides=1, activation='tanh'))
LeNet5model.add(kl.MaxPooling2D(pool_size=2, strides=2))
LeNet5model.add(kl.Flatten())
LeNet5model.add(kl.Dense(units=120, activation='tanh'))
LeNet5model.add(kl.Dense(units=84, activation='tanh'))
LeNet5model.add(kl.Dense(units=10, activation='softmax'))
LeNet5model.summary()
Q Recover the number of parameters by hand. Q How does the number of parameters of this network compare with the dense network defined previously?
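As a worked sketch of the hand count asked for above: the spatial sizes follow from 5×5 "valid" convolutions and 2×2 pooling on a 28×28 input (28 → 24 → 12 → 8 → 4), and each filter carries a bias.

```python
# Recomputing by hand the parameter counts that LeNet5model.summary() reports
conv1 = 6 * (5 * 5 * 1 + 1)     # 6 filters, each a 5x5x1 kernel plus a bias
conv2 = 16 * (5 * 5 * 6 + 1)    # 16 filters over the 6 previous channels
flat = 4 * 4 * 16               # Flatten output: 256
dense1 = flat * 120 + 120
dense2 = 120 * 84 + 84
dense3 = 84 * 10 + 10
total = conv1 + conv2 + dense1 + dense2 + dense3
print(conv1, conv2, dense1, dense2, dense3, total)  # 156 2416 30840 10164 850 44426
```

At roughly 44k parameters, this is far smaller than the dense networks trained earlier, which is the point of weight sharing in convolutions.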
# Training
LeNet5model.compile(loss="categorical_crossentropy", optimizer=ko.Adadelta(), metrics=['accuracy'])
ts = time.time()
LeNet5model.fit(X_train_conv, Y_train_cat, batch_size=batch_size, epochs=epochs,
                verbose=1, validation_data=(X_test_conv, Y_test_cat))
te = time.time()
t_train_conv = te - ts
Q What about the training time? Why is it longer than for the dense network? Results
score_conv = LeNet5model.evaluate(X_test_conv, Y_test_cat, verbose=0)
predict_conv = LeNet5model.predict(X_test_conv)
print('Test loss:', score_conv[0])
print('Test accuracy:', score_conv[1])
print("Time Running: %.2f seconds" % t_train_conv)

fig = plt.figure(figsize=(7, 6))
ax = fig.add_subplot(1, 1, 1)
ax = sb.heatmap(pd.DataFrame(confusion_matrix(Y_test, predict_conv.argmax(1))), annot=True, fmt="d")
Another architecture Network We test a convolutional network made of 8 layers: a 2D convolution layer with a 3x3 window and a relu activation; a second 2D convolution layer with a 3x3 window and a relu activation; a 2x2 max-pooling layer; a dropout layer deactivating 25% of the neurons; a Flatten layer turning $N \times N$ images into vectors of size $N^2$; a classical dense layer of 128 neurons; a dropout layer deactivating 50% of the neurons; a softmax layer producing the classification.
# Network description
model = km.Sequential()
model.add(kl.Conv2D(32, kernel_size=(3, 3), activation='relu',
                    input_shape=(28, 28, 1), data_format="channels_last"))
model.add(kl.Conv2D(64, (3, 3), activation='relu'))
model.add(kl.MaxPooling2D(pool_size=(2, 2)))
model.add(kl.Dropout(0.25))
model.add(kl.Flatten())
model.add(kl.Dense(128, activation='relu'))
model.add(kl.Dropout(0.5))
model.add(kl.Dense(N_classes, activation='softmax'))

# Summary
model.summary()

# Training
model.compile(loss="categorical_crossentropy", optimizer=ko.Adadelta(), metrics=['accuracy'])
ts = time.time()
model.fit(X_train_conv, Y_train_cat, batch_size=batch_size, epochs=epochs,
          verbose=1, validation_data=(X_test_conv, Y_test_cat))
te = time.time()
t_train_conv = te - ts

score_conv = model.evaluate(X_test_conv, Y_test_cat, verbose=0)
predict_conv = model.predict(X_test_conv)
print('Test loss:', score_conv[0])
print('Test accuracy:', score_conv[1])
print("Time Running: %.2f seconds" % t_train_conv)

fig = plt.figure(figsize=(7, 6))
ax = fig.add_subplot(1, 1, 1)
ax = sb.heatmap(pd.DataFrame(confusion_matrix(Y_test, predict_conv.argmax(1))), annot=True, fmt="d")
Let's check how many items are in the dictionary.
print(len(questions))
howto/make_data_a_serialized_object.ipynb
sangheestyle/ml2015project
mit
Yes, 7949 items is right. Are the question numbers continuous or not? Let's check the first and last 10 items.
print(sorted(questions.keys())[:10])
print(sorted(questions.keys())[-10:])
So the data is not continuous in terms of qid, but that's OK. What about looking at just one question? How can we do it? Let's look at qid 1.
questions[1]
Yes, it's a dictionary, so you can use the usual dictionary methods. Check this out.
questions[1].keys()
questions[1]['answer']
questions[1]['pos_token']
questions[1]['pos_token'].keys()
questions[1]['pos_token'].values()
questions[1]['pos_token'].items()
How can we figure out a question's length without tokenizing the question itself?
max(questions[1]['pos_token'].keys())
Make questions pickled data As you know, reading the csv and converting it into a dictionary takes more than a minute. Once we convert it into a dictionary, we can save it as pickled data and load it whenever we need it. It is really simple and fast. Look at that! Wait! We will use gzip.open instead of open because the pickled file is too big, so we will use compression. It's easy, and the result consumes only about 1/10 of the size of the original. Of course, it takes a few more seconds than the plain version: original, ~1 sec on my PC; compressed, ~5 sec on my PC. Also, "wb" means writing in binary mode, and "rb" means reading the file in binary mode.
import gzip
import pickle

with gzip.open("questions.pklz", "wb") as output:
    pickle.dump(questions, output)
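The compression trade-off can be checked on a toy dictionary; the data below is only a stand-in for the real questions dictionary, so the exact sizes will differ, but the compressed file should come out far smaller for repetitive data like this.

```python
import gzip
import os
import pickle
import tempfile

# Toy dictionary standing in for `questions` (the real one is far larger)
data = {i: {"answer": "word", "pos_token": {j: "token" for j in range(50)}}
        for i in range(200)}

tmp = tempfile.mkdtemp()
plain = os.path.join(tmp, "q.pkl")
packed = os.path.join(tmp, "q.pklz")

with open(plain, "wb") as f:
    pickle.dump(data, f)
with gzip.open(packed, "wb") as f:
    pickle.dump(data, f)

# The gzipped pickle is much smaller than the plain one
print(os.path.getsize(plain), os.path.getsize(packed))
```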
Yes, now we can load the pickled data back into a variable.
with gzip.open("questions.pklz", "rb") as fp:
    questions_new = pickle.load(fp)
print(len(questions_new))
Yes, it took only a few seconds. I will save it, commit it, and push it to GitHub, so you can use the pickled data instead of converting questions.csv.
print(questions == questions)
print(questions == questions_new)
questions_new[0] = 1
print(questions == questions_new)
Now that the <b>data</b> is accessible, it will need to be sliced so that I can create a manageable data object in order to further analyze the data in the <i>csv file</i>.
status = [n for n in data['status']]
airline = [n for n in data['airline']]
la = [n for n in data['LosAngeles']]
phx = [n for n in data['Phoenix']]
sandg = [n for n in data['SanDiego']]
sanfrn = [n for n in data['SanFrancisco']]
seatl = [n for n in data['Seattle']]
project_1.ipynb
ModestoCabrera/IS360Project_1
gpl-2.0
I will use the zip function to create a tuple set of the above sliced data list, to create a <b>DataFrame</b> object I can work with.
# list() is needed so the zip iterator materializes before DataFrame consumes it (Python 3)
flight_status = list(zip(status, la, phx, sandg, sanfrn, seatl))
flight_status
flight_df = DataFrame(data=flight_status,
                      columns=['status', 'la', 'phx', 'sandiego', 'sanfrancisco', 'seattle'],
                      index=airline)
flight_df
flight_df.plot(kind='bar', title='Flights Database Visualization')
<img src="bar1.png"> The bar graph above shows which cities have the most flights (on-time plus delayed) for each airline, with cities differentiated by <b>color</b>. <b>AMWEST</b> and <b>American Airlines</b> both have the highest number of flights in Phoenix, followed by <b>ALASKA</b> and <b>United Airlines</b> for flights to Seattle. We would like to see how this relates to on-time versus delayed flights, which is difficult to read from this bar graph given how the data is presented. To compare on-time flights with delayed flights, I will <i>plot a graph indexed by airline and display the status counts side by side.</i>
flight_df.loc['AMWEST', 'phx'].plot(kind='bar', title='Phoenix AMWEST on-time vs delayed')
<img src='phx_1aw.png'> <b>AMWEST</b> does relatively well in <i>Phoenix</i> for the number of on-time flights compared to delayed flights. <img src='phx_1aa.png'> <b>American Airlines</b> also does well in <i>Phoenix</i>; although its total number of flights is higher relative to its on-time flights than <b>AMWEST</b>'s, a closer look at the numbers shows they perform comparably, both running at roughly 100 delays per 1000 on-time flights.
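The "delays per 1000 on-time flights" comparison can be computed directly rather than read off the bars. This sketch uses a hypothetical frame mirroring flight_df's shape; the counts below are illustrative stand-ins, not the actual dataset values.

```python
import pandas as pd

# Hypothetical frame: rows indexed by airline, a `status` column, per-city counts
flight_df = pd.DataFrame(
    {"status": ["on time", "delayed", "on time", "delayed"],
     "phx": [4840, 415, 221, 65],
     "seattle": [1841, 305, 201, 61]},
    index=["AMWEST", "AMWEST", "ALASKA", "ALASKA"])

# Delays per 1000 on-time flights, per airline and city
on_time = flight_df[flight_df["status"] == "on time"].drop(columns="status")
delayed = flight_df[flight_df["status"] == "delayed"].drop(columns="status")
rates = 1000 * delayed / on_time
print(rates.round(1))
```

Index alignment makes the division pair each airline's delayed row with its on-time row, so no explicit loop is needed.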
flight_df.loc['ALASKA', 'seattle'].plot(kind='bar', title='ALASKA Seattle on-time vs delayed')
<img src='stl_1al.png'> The same argument can be made here: <b>ALASKA</b> leads all other airlines in on-time flights to Seattle, but still runs close to 400 delays for roughly 2000 <i>on-time flights</i>.
flight_df.loc['United Airlines', 'seattle'].plot(kind='bar', title='Seattle U. Airlines on-time vs delayed')
<img src='stl_1ul.png'>
flight_df.loc['American Airlines', 'phx'].plot(kind='bar', title='Phoenix American Airlines on-time vs delayed')
Load the MNIST database
# --- load the data
img_rows, img_cols = 28, 28
(X_train, y_train), (X_test, y_test) = load_mnist()
X_train = 2 * X_train - 1  # normalize to -1 to 1
X_test = 2 * X_test - 1    # normalize to -1 to 1

random.seed(1)  # same orientations
rotated_X_train = X_train.copy()
for img in rotated_X_train:
    img[:] = scipy.ndimage.interpolation.rotate(
        np.reshape(img, (28, 28)), random.randint(1, 360),
        reshape=False, mode="nearest").ravel()
Project/trained_mental_rotation_ens.ipynb
science-of-imagination/nengo-buffer
gpl-3.0
Load the saved weight matrices that were created by training the model
label_weights = cPickle.load(open("label_weights1000.p", "rb"))
activity_to_img_weights = cPickle.load(open("activity_to_img_weights1000.p", "rb"))
rotated_after_encoder_weights = cPickle.load(open("rotated_after_encoder_weights1000.p", "rb"))
#rotated_after_encoder_weights_5000 = cPickle.load(open("rotated_after_encoder_weights_5000.p", "rb"))
#rotation_weights = cPickle.load(open("rotation_weights_clockwise5000.p", "rb"))
The network where the mental imagery and rotation occur The state, seed, and ensemble parameters (including encoders) must all match those used during training for the saved weight matrices to work The number of neurons (n_hid) must be the same as was used for training The input must be shown only for a short period of time so the rotation can be observed The recurrent connection must be made on the neurons, because the weight matrices were trained on the neuron activities
rng = np.random.RandomState(9)
n_hid = 1000

model = nengo.Network(seed=3)
with model:
    # Stimulus only shows for a brief period of time
    stim = nengo.Node(lambda t: ONE if t < 0.1 else 0)  # nengo.processes.PresentInput(labels, 1)

    ens_params = dict(
        eval_points=X_train,
        neuron_type=nengo.LIF(),
        intercepts=nengo.dists.Choice([-0.5]),
        max_rates=nengo.dists.Choice([100]),
    )

    # Linear filters used for edge detection as encoders, more plausible for the human visual system
    degrees = 6
    # Must end up with as many encoders as neurons (each random encoder is replicated at every angle)
    encoders = Gabor().generate(n_hid // (360 // degrees), (11, 11), rng=rng)
    encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)

    rotated_encoders = encoders.copy()
    # For each randomly generated encoder, create the same encoder at every angle (in `degrees` increments)
    for encoder in encoders:
        for i in range(59):
            rotated_encoders = np.append(
                rotated_encoders,
                [rotate(encoder.reshape(28, 28), degrees * i, reshape=False).ravel()],
                axis=0)

    # The number of neurons does not divide evenly into 6-degree increments, so pad with random encoders
    extra_encoders = Gabor().generate(n_hid - len(rotated_encoders), (11, 11), rng=rng)
    extra_encoders = Mask((28, 28)).populate(extra_encoders, rng=rng, flatten=True)
    all_encoders = np.append(rotated_encoders, extra_encoders, axis=0)

    ens = nengo.Ensemble(n_hid, dim**2, seed=3, encoders=all_encoders, **ens_params)

    # Recurrent connection on the ensemble neurons to perform the rotation
    nengo.Connection(ens.neurons, ens.neurons,
                     transform=rotated_after_encoder_weights.T, synapse=0.2)

    # Connect stimulus to ensemble, transformed through the learned weight matrices
    nengo.Connection(stim, ens,
                     transform=np.dot(label_weights, activity_to_img_weights).T, synapse=0.1)

    # Collect output, using a synapse for smoothing
    probe = nengo.Probe(ens.neurons, synapse=0.1)

sim = nengo.Simulator(model)
sim.run(5)
We can now build a temporary server which acts just like a normal server, but we have a bit more direct control of it. Warning! All data is lost when this notebook shuts down! This is for demonstration purposes only! For information about how to setup a permanent QCFractal server, see the Setup Quickstart Guide.
server = FractalSnowflakeHandler()
server
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
We can then build a typical FractalClient to automatically connect to this server using the client() helper command. Note that the server names and addresses are identical in both the server and client.
client = server.client()
client
Adding and Querying data A server starts with no data, so let's add some! We can do this by adding a water molecule at a poor geometry from XYZ coordinates. Note that all internal QCFractal values are stored and used in atomic units; whereas, the standard Molecule.from_data() assumes an input of Angstroms. We can switch this back to Bohr by adding a units command in the text string.
mol = ptl.Molecule.from_data("""
O 0 0 0
H 0 0 2
H 0 2 0
units bohr
""")
mol
We can then measure various aspects of this molecule to determine its shape. Note that the measure command will return a distance, angle, or dihedral depending on whether 2, 3, or 4 indices are passed in. This molecule is quite far from optimal, so let's run a geometry optimization!
print(mol.measure([0, 1]))
print(mol.measure([1, 0, 2]))
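As a sanity check, the same two measurements can be recomputed directly from the XYZ coordinates given above (in bohr) with numpy:

```python
import numpy as np

# Coordinates from the XYZ block above, in bohr
O = np.array([0.0, 0.0, 0.0])
H1 = np.array([0.0, 0.0, 2.0])
H2 = np.array([0.0, 2.0, 0.0])

dist = np.linalg.norm(H1 - O)  # O-H bond length: 2 bohr

# H-O-H angle from the dot product of the two O->H vectors
cos_a = np.dot(H1 - O, H2 - O) / (np.linalg.norm(H1 - O) * np.linalg.norm(H2 - O))
angle = np.degrees(np.arccos(cos_a))

print(dist, angle)  # 2.0 90.0
```

Both values are indeed far from water's equilibrium geometry (~1.8 bohr and ~104.5°), which is why the optimization below is worthwhile.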
Evaluating a Geometry Optimization We originally installed psi4 and geometric, so we can use these programs to perform a geometry optimization. In QCFractal, we call a geometry optimization a procedure, where procedure is a generic term for a higher-level operation that will run multiple individual quantum chemistry energy, gradient, or Hessian evaluations. Other procedure examples are finite-difference computations, n-body computations, and torsiondrives. We provide a JSON-like input to the client.add_procedure() command to specify the method, basis, and program to be used. The qc_spec field is used in all procedures to determine the underlying quantum chemistry method behind the individual procedure. In this way, we can use any program or method that returns an energy or gradient quantity to run our geometry optimization! (See also add_compute().)
spec = {
    "keywords": None,
    "qc_spec": {
        "driver": "gradient",
        "method": "b3lyp",
        "basis": "6-31g",
        "program": "psi4"
    },
}

# Ask the server to compute a new computation
r = client.add_procedure("optimization", "geometric", spec, [mol])
print(r)
print(r.ids)
We can see that we submitted a single task to be evaluated and the server has not seen this particular procedure before. The ids field returns the unique id of the procedure. Different procedures will always have a unique id, while identical procedures will always return the same id. We can submit the same procedure again to see this effect:
r2 = client.add_procedure("optimization", "geometric", spec, [mol])
print(r2)
print(r2.ids)
Querying Procedures Once a task is submitted, it will be placed in the compute queue and evaluated. In this particular case the FractalSnowflakeHandler uses your local hardware to evaluate these jobs. We recommend avoiding large tasks! In general, the server can handle anywhere between laptop-scale resources to many hundreds of thousands of concurrent cores at many physical locations. The amount of resources to connect is up to you and the amount of compute that you require. Since we did submit a very small job it is likely complete by now. Let us query this procedure from the server using its id like so:
proc = client.query_procedures(id=r.ids)[0]
proc