The plot function

The plot function creates a two-dimensional plot of one variable against another.
# Use the help function to see the documentation for plot
help(plt.plot)
winter2017/econ129/python/Econ129_Class_04.ipynb
letsgoexploring/teaching
mit
Example: Create a plot of $f(x) = x^2$ with $x$ between -2 and 2.
* Set the linewidth to 3 points
* Set the line transparency (alpha) to 0.6
* Set axis labels and a title
* Add a grid to the plot
# Create an array of x values from -2 to 2
# Create a variable y equal to x squared
# Use the plot function to plot the line
# Add a title and axis labels
# Add a grid
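The steps in the cell above can be sketched as follows. This is a possible solution, assuming matplotlib and numpy are available; the Agg backend is selected only so the sketch runs headless.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Create an array of x values from -2 to 2
x = np.linspace(-2, 2, 200)

# Create a variable y equal to x squared
y = x**2

# Plot the line with a width of 3 points and transparency of 0.6
line, = plt.plot(x, y, lw=3, alpha=0.6)

# Add a title, axis labels, and a grid
plt.title('$f(x) = x^2$')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.grid()
```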
Example: Create plots of the functions $f(x) = \log x$ (natural log) and $g(x) = 1/x$ for $x$ between 0.01 and 5.
* Set the limits for the $x$-axis to (0,5)
* Set the limits for the $y$-axis to (-2,5)
* Make the line for $\log x$ solid blue
* Make the line for $1/x$ dashed magenta
* Set the linewidth of each line to 3 points
* Set the line transparency (alpha) for each line to 0.6
* Set axis labels and a title
* Add a legend
* Add a grid to the plot
# Create an array of x values from 0.01 to 5
# Create the y variables
# Use the plot function to plot the lines
# Add a title and axis labels
# Set the axis limits
# Add a legend
# Add a grid
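A possible solution sketch for the exercise above (matplotlib and numpy assumed; headless backend only so it runs anywhere):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

# x values between 0.01 and 5 (avoid 0, where log and 1/x blow up)
x = np.linspace(0.01, 5, 500)

# Solid blue line for log(x), dashed magenta line for 1/x
plt.plot(x, np.log(x), 'b-', lw=3, alpha=0.6, label='$\\log x$')
plt.plot(x, 1/x, 'm--', lw=3, alpha=0.6, label='$1/x$')

# Axis limits, labels, title, legend, and grid
plt.xlim(0, 5)
plt.ylim(-2, 5)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('$\\log x$ and $1/x$')
plt.legend()
plt.grid()
```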
Example: Consider the linear regression model:
\begin{align}
y_i = \beta_0 + \beta_1 x_i + \epsilon_i
\end{align}
where $x_i$ is the independent variable, $\epsilon_i$ is a random regression error term, $y_i$ is the dependent variable, and $\beta_0$ and $\beta_1$ are constants. Let's simulate the model:
* Set values for $\beta_0$ and $\beta_1$
* Create an array of $x_i$ values from -5 to 5
* Create an array of $\epsilon_i$ values drawn from the standard normal distribution, equal in length to the array of $x_i$ values
* Create an array of $y_i$ values
* Plot $y$ against $x$ with either a circle ('o'), triangle ('^'), or square ('s') marker and set the transparency (alpha) to 0.5
* Add axis labels, a title, and a grid to the plot
# Set betas
# Create x values
# Create epsilon values from the standard normal distribution
# Create y
# Plot
# Add a title and axis labels
# Set axis limits
# Add grid
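One way to fill in the cell above. The parameter values and the random seed are arbitrary choices made for illustration, not part of the exercise:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

# Set betas (hypothetical values)
beta0, beta1 = 1, 0.5

# Fixed seed so the simulated figure is reproducible (an assumption, not required)
np.random.seed(126)

# x values from -5 to 5 and standard normal errors of the same length
x = np.linspace(-5, 5, 100)
epsilon = np.random.standard_normal(len(x))

# y values implied by the regression model
y = beta0 + beta1 * x + epsilon

# Scatter plot with circle markers and transparency of 0.5
plt.plot(x, y, 'o', alpha=0.5)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('Simulated linear regression data')
plt.grid()
```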
Example: Create plots of the functions $f(x) = x$, $g(x) = x^2$, and $h(x) = x^3$ for $x$ between -2 and 2.
* Use the optional string format argument to format the lines:
  - $x$: solid blue line
  - $x^2$: dashed green line
  - $x^3$: dash-dot magenta line
* Set the linewidth of each line to 3 points
* Set the transparency (alpha) for each line to 0.6
* Add a legend to the lower right with 3 columns
* Set axis labels and a title
* Add a grid to the plot
# Create an array of x values from -2 to 2
# Create the y variables
# Use the plot function to plot the lines
# Add a title and axis labels
# Add a grid
# Add a legend
Figures, axes, and subplots

Often we want to create plots with multiple axes, or we want to modify the size and shape of the plot area. To do these things, we need to explicitly create a figure and then create the axes within the figure. The best way to see how this works is by example.

Example: A single plot with double width

The default dimensions of a matplotlib figure are 6 inches by 4 inches. As we saw above, this leaves some whitespace on the right side of the figure. Suppose we want to remove that by making the plot area twice as wide. Plot the sine function on -6 to 6 using a figure with dimensions 12 inches by 4 inches.
# Create data
# Create a new figure
# Create axis
# Plot
# Add grid
In the previous example, the figure() function creates a new figure and add_subplot() adds a new axis to the figure. The command fig.add_subplot(1,1,1) means: divide the figure fig into a 1-by-1 grid and assign the first cell of that grid to the variable ax1.

Example: Two plots side-by-side

Create a new figure with two axes side-by-side. Plot the sine function on -6 to 6 on the left axis and the cosine function on -6 to 6 on the right axis.
# Create data
# Create a new figure
# Create axis 1 and plot with title
# Create axis 2 and plot with title
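A possible solution sketch for the side-by-side example, assuming matplotlib and numpy; the 12-by-4 figure size mirrors the double-width example above:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

# Create data
x = np.arange(-6, 6, 0.001)

# Create a new figure with room for two plots side-by-side
fig = plt.figure(figsize=(12, 4))

# Create axis 1 and plot the sine function with a title
ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(x, np.sin(x), lw=3, alpha=0.6)
ax1.set_title('sin(x)')
ax1.grid()

# Create axis 2 and plot the cosine function with a title
ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(x, np.cos(x), lw=3, alpha=0.6)
ax2.set_title('cos(x)')
ax2.grid()
```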
Example: Block of four plots

Create a new figure with four axes in a two-by-two grid. Plot the following functions on the interval -2 to 2:
* $y = x$
* $y = x^2$
* $y = x^3$
* $y = x^4$

Leave the figure size at the default (6 in. by 4 in.) but run the command plt.tight_layout() to adjust the figure's margins after creating your figure, axes, and plots.
# Create data
# Create a new figure
# Create axis 1 and plot with title
# Create axis 2 and plot with title
# Create axis 3 and plot with title
# Create axis 4 and plot with title
# Adjust margins
Exporting figures to image files

Use the plt.savefig() function to save figures to image files.
# Create data
x = np.arange(-6, 6, 0.001)
y = np.sin(x)

# Create a new figure, axis, and plot
fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1)
ax1.plot(x, y, lw=3, alpha=0.6)
ax1.grid()

# Save
plt.savefig('fig_econ129_class04_sine.png', dpi=120)
Representation

The pandas module manipulates tables, which is the most common way to represent data. When the data are multidimensional, we distinguish the coordinates from the values:
NbImage("cube1.png")
_doc/notebooks/sessions/seance5_sql_multidimensionnelle_enonce.ipynb
sdpython/actuariat_python
mit
In this example, there are:
* 3 coordinates: Age, Profession, Year
* 2 values: Life expectancy, Population

The data can also be represented like this:
NbImage("cube2.png")
It is fairly simple. Let's take an example: the mortality table from 1960 to 2010, retrieved with the function table_mortalite_euro_stat. It takes quite a while (4-5 minutes) on the full dataset because the data must be preprocessed (see the function's documentation). To shorten this, use the stop_at parameter.
from actuariat_python.data import table_mortalite_euro_stat
table_mortalite_euro_stat()

import os
os.stat("mortalite.txt")

import pandas
df = pandas.read_csv("mortalite.txt", sep="\t", encoding="utf8", low_memory=False)
df.head()
The indicators for two different ages:
df[((df.age == "Y60") | (df.age == "Y61")) &
   (df.annee == 2000) & (df.pays == "FR") & (df.genre == "F")]
Exercise 1: filtering

We want to compare the life expectancies for two countries and two years.
#
Data too large to fit in memory: SQLite
df.shape
The data are too large to fit in an Excel sheet. To inspect them, there is no other way than to look at extracts. What happens when even that is not possible? A few solutions:
* increase the computer's memory; with 20 GB, you can do a lot
* store the data in a SQL server
* store the data on a distributed system (cloud, Hadoop, ...)

The second option is not always simple: you have to install a SQL server. To move faster, we can simply use SQLite, which is a way to do SQL without a server (it takes a few minutes). We use the to_sql method.
import sqlite3
from pandas.io import sql

cnx = sqlite3.connect('mortalite.db3')
try:
    df.to_sql(name='mortalite', con=cnx)
except ValueError as e:
    if "Table 'mortalite' already exists" not in str(e):
        # only re-raise if the error is not because
        # this has already been done
        raise e
# other dataframes can be appended to the table as if it were built piece by piece;
# see the if_exists parameter of to_sql
We can now retrieve a chunk with the read_sql function.
import pandas
example = pandas.read_sql('select * from mortalite where age_num==50 limit 5', cnx)
example
The full dataset stays on disk; only the result of the query is loaded into memory. If the data cannot fit in memory, we must obtain either a partial view (a random sample, a filtered view) or an aggregated view. Finally, the connection must be closed so that other applications or notebooks can modify the database, or simply delete the file.
cnx.close()
On Windows, the database can be browsed with the SQLiteSpy application.
NbImage("sqlite.png")
On Linux or Mac, you can use the Firefox extension SQLite Manager. In this notebook, we will use the %%SQL magic command from the pyensae module:
%load_ext pyensae
%SQL_connect mortalite.db3
%SQL_tables
%SQL_schema mortalite

%%SQL
SELECT COUNT(*) FROM mortalite

%SQL_close
Exercise 2: random sample

If the data cannot fit in memory, we can either look at the first rows or take a random sample. Two options:
* DataFrame.sample
* create_function

The first option is simple:
sample = df.sample(frac=0.1)
sample.shape, df.shape
I do not know whether this can be done without loading the data into memory. If the data weigh 20 GB, this method will not succeed. Yet we just want a sample to start looking at the data. So we use the second option, with create_function and the following function:
import random

# uniform distribution
def echantillon(proportion):
    return 1 if random.random() < proportion else 0

import sqlite3
from pandas.io import sql

cnx = sqlite3.connect('mortalite.db3')
cnx.create_function('echantillon', 1, echantillon)
What should be written here to retrieve 1% of the table?
import pandas
#example = pandas.read_sql(' ??? ', cnx)
#example
cnx.close()
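One possible answer is to call the registered echantillon function in the WHERE clause. The sketch below is self-contained: it uses a toy in-memory table standing in for mortalite.db3, which we do not have here.

```python
import random
import sqlite3

def echantillon(proportion):
    # keep a row with the given probability
    return 1 if random.random() < proportion else 0

random.seed(42)  # fixed seed so the sketch is reproducible
cnx = sqlite3.connect(':memory:')
cnx.create_function('echantillon', 1, echantillon)

# Toy stand-in for the mortalite table
cnx.execute('CREATE TABLE mortalite (age_num INTEGER)')
cnx.executemany('INSERT INTO mortalite VALUES (?)',
                [(i,) for i in range(10000)])

# Roughly 1% of the rows come back
rows = cnx.execute('SELECT * FROM mortalite WHERE echantillon(0.01)').fetchall()
cnx.close()
```

With pandas, the same query would be `pandas.read_sql('SELECT * FROM mortalite WHERE echantillon(0.01)', cnx)`.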
Pseudo Map/Reduce with SQLite

The list of SQL keywords supported by SQLite is not as rich as that of other SQL servers. The median does not seem to be among them. Yet, for a given year, gender, and age, we would like to compute the median life expectancy over all countries.
import sqlite3, pandas
from pandas.io import sql

cnx = sqlite3.connect('mortalite.db3')
pandas.read_sql('select pays,count(*) from mortalite group by pays', cnx)
The number of observations differs across countries, and it is likely that the number of countries for which data exist also varies with age and year.
query = """SELECT nb_country, COUNT(*) AS nb_rows FROM (
               SELECT annee,age,age_num, count(*) AS nb_country FROM mortalite
               WHERE indicateur=="LIFEXP" AND genre=="F"
               GROUP BY annee,age,age_num) GROUP BY nb_country"""
df = pandas.read_sql(query, cnx)
df.sort_values("nb_country", ascending=False).head(n=2)
df.plot(x="nb_country", y="nb_rows")
So the number of countries is not constant. The fact that some groups contain 100 countries also suggests an error.
query = """SELECT annee,age,age_num, count(*) AS nb_country FROM mortalite
           WHERE indicateur=="LIFEXP" AND genre=="F"
           GROUP BY annee,age,age_num
           HAVING nb_country >= 100"""
df = pandas.read_sql(query, cnx)
df.head()
Those are missing values. The difficulty in computing the median for each observation is that we must first group the rows of the table by indicator, then pick the median within each of these small groups. For that, we borrow the Map/Reduce logic and the create_aggregate function.

Exercise 3: SQL reducer

Complete the following program.
class ReducerMediane:

    def __init__(self):
        # ???
        pass

    def step(self, value):
        # ???
        pass

    def finalize(self):
        # ???
        # return ...
        pass

cnx.create_aggregate("ReducerMediane", 1, ReducerMediane)

#query = """SELECT annee,age,age_num, ...... AS mediane FROM mortalite
#           WHERE indicateur=="LIFEXP" AND genre=="F"
#           GROUP BY annee,age,age_num"""
#df = pandas.read_sql(query, cnx)

cnx.close()
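One way to complete the reducer: collect the values in step() and return their median in finalize(). The sketch below is self-contained and uses a toy in-memory table instead of mortalite.db3.

```python
import sqlite3

class ReducerMediane:
    """Aggregate that collects values and returns their median."""

    def __init__(self):
        self.values = []

    def step(self, value):
        # SQLite calls step() once per row in the group
        if value is not None:
            self.values.append(value)

    def finalize(self):
        # Called once per group: sort and pick the middle value
        if not self.values:
            return None
        self.values.sort()
        n = len(self.values)
        if n % 2 == 1:
            return self.values[n // 2]
        return (self.values[n // 2 - 1] + self.values[n // 2]) / 2

cnx = sqlite3.connect(':memory:')
cnx.create_aggregate('ReducerMediane', 1, ReducerMediane)

# Toy table: two groups with known medians
cnx.execute('CREATE TABLE t (grp TEXT, v REAL)')
cnx.executemany('INSERT INTO t VALUES (?, ?)',
                [('a', 1.0), ('a', 2.0), ('a', 10.0), ('b', 4.0), ('b', 6.0)])
res = dict(cnx.execute('SELECT grp, ReducerMediane(v) FROM t GROUP BY grp').fetchall())
cnx.close()
```

On the mortality table the same aggregate would be used as `SELECT annee,age,age_num, ReducerMediane(valeur) AS mediane ... GROUP BY annee,age,age_num` (column names assumed from the queries above).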
Convert A List To A Tuple

You can change a list into a tuple in Python by using the tuple() function. Pass your list to this function and you will get a tuple back! NOTE: tuples are immutable, so you can't change them afterwards :(

Convert Your List To A Set

In Python, a set is an unordered collection of unique items. That means that any duplicates in your original list will be lost once you convert it to a set, and so will the order of the elements. You can change a list into a set with the set() function. Just pass your list to it!

Convert A List To A Dictionary

A dictionary works with keys and values, so the conversion from a list to a dictionary might be less straightforward.

helloWorld = ['hello','world','1','2']

You will need to make sure that 'hello' and 'world' and '1' and '2' are interpreted as key-value pairs. The way to do this is to select them with slice notation and pass them to zip(). zip() works just as you would expect: it zips elements together. In this case, you zip the helloWorld elements helloWorld[0::2] and helloWorld[1::2]:
helloWorld = ['hello','world','1','2']
# print(list(zip(helloWorld)))
helloWorldDictionary = dict(zip(helloWorld[0::2], helloWorld[1::2]))

# Print out the result
print(helloWorldDictionary)
Section 1 - Core Python/Chapter 05 - Data Types/5.1 Uses of collection and datatype.ipynb
mayank-johri/LearnSeleniumUsingPython
gpl-3.0
Note that the second element passed to the zip() function uses the step value to make sure that only the 'world' and '2' elements are selected. Likewise, the first element uses the step value to select 'hello' and '1'. If your list is large, you will probably want to do something like this:
a = [1, 2, 3, 4, 5]

# Create a list iterator object
i = iter(a)

# Zip the iterator with itself and create a dictionary:
# consecutive elements become key-value pairs
print(dict(zip(i, i)))

## Difference Between The Python append() and extend() Methods?

# These are your lists
shortList = [1, 2, 3]
longerList = [1, 2, 3]

# Append [4,5] to `shortList`
shortList.append([4, 5])

# Use `print()` to show `shortList`
print(shortList)

# Extend `longerList` with [4,5]
longerList.extend([4, 5])

# Use `print()` to see `longerList`
print(longerList)
Clone Or Copy A List In Python

There are a lot of ways of cloning or copying a list:
* You can slice your original list and store it in a new variable: newList = oldList[:]
* You can use the built-in list() function: newList = list(oldList)
* You can use the copy library:
  - With the copy() method: newList = copy.copy(oldList)
  - If your list contains objects and you want to copy those as well, use copy.deepcopy(): newList = copy.deepcopy(oldList)
# Copy the grocery list by slicing and store it in the `newGroceries` variable
groceries = [1, 2, 3, 4, 5, 6]
newGroceries = groceries[:]

# Copy the grocery list with the `list()` function and store it in a `groceriesForFriends` variable
groceriesForFriends = list(groceries)

# Import the copy library
import copy as c

# Create a `groceriesForFamily` variable and assign the copied grocery list to it
groceriesForFamily = c.copy(groceries)

# Use `deepcopy()` and assign the copied list to a `groceriesForKids` variable
groceriesForKids = c.deepcopy(groceries)
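For a flat list of numbers like the one above, copy() and deepcopy() behave the same. The difference only shows up with nested objects, as this short sketch illustrates:

```python
import copy

nested = [[1, 2], [3, 4]]
shallow = copy.copy(nested)    # new outer list, but the inner lists are shared
deep = copy.deepcopy(nested)   # inner lists are copied as well

# Mutating an inner list shows up in the shallow copy only
nested[0].append(99)
```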
Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.
index = 1 print(X_train[index], label_to_emoji(Y_train[index]))
deeplearning.ai/C5.SequenceModel/Week2_NLP_WordEmbeddings/assignment/Emojify/Emojify - v2.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Let's see what convert_to_one_hot() did. Feel free to change index to print out different values.
index = 50 print(Y_train[index], "is converted into one hot", Y_oh_train[index])
Exercise: Implement sentence_to_avg(). You will need to carry out two steps: 1. Convert every sentence to lower-case, then split the sentence into a list of words. X.lower() and X.split() might be useful. 2. For each word in the sentence, access its GloVe representation. Then, average all these values.
# GRADED FUNCTION: sentence_to_avg

def sentence_to_avg(sentence, word_to_vec_map):
    """
    Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
    and averages its value into a single vector encoding the meaning of the sentence.

    Arguments:
    sentence -- string, one training example from X
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation

    Returns:
    avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
    """

    ### START CODE HERE ###
    # Step 1: Split sentence into list of lower case words (≈ 1 line)
    words = None

    # Initialize the average word vector, should have the same shape as your word vectors.
    avg = None

    # Step 2: average the word vectors. You can loop over the words in the list "words".
    for w in None:
        avg += None
    avg = None

    ### END CODE HERE ###

    return avg

avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = ", avg)
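A possible completion of the two steps above. To keep the sketch self-contained it uses a toy 2-dimensional word map instead of the 50-dimensional GloVe vectors loaded in the notebook:

```python
import numpy as np

def sentence_to_avg(sentence, word_to_vec_map):
    # Step 1: lower-case the sentence and split it into words
    words = sentence.lower().split()

    # Initialize the average with the same shape as the word vectors
    any_vector = next(iter(word_to_vec_map.values()))
    avg = np.zeros(any_vector.shape)

    # Step 2: sum the word vectors and divide by the word count
    for w in words:
        avg += word_to_vec_map[w]
    avg = avg / len(words)

    return avg

# Toy "embeddings" (hypothetical, just for illustration)
toy_map = {'hello': np.array([1.0, 0.0]), 'world': np.array([0.0, 1.0])}
avg = sentence_to_avg('Hello world', toy_map)
```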
Expected Output: <table> <tr> <td> **avg= ** </td> <td> [-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983 -0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867 0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767 0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061 0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265 1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925 -0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333 -0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433 0.1445417 0.09808667] </td> </tr> </table> Model You now have all the pieces to finish implementing the model() function. After using sentence_to_avg() you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax's parameters. Exercise: Implement the model() function described in Figure (2). Assuming here that $Yoh$ ("Y one hot") is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are: $$ z^{(i)} = W . avg^{(i)} + b$$ $$ a^{(i)} = softmax(z^{(i)})$$ $$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Yoh^{(i)}_k * log(a^{(i)}_k)$$ It is possible to come up with a more efficient vectorized implementation. But since we are using a for-loop to convert the sentences one at a time into the avg^{(i)} representation anyway, let's not bother this time. We provided you a function softmax().
# GRADED FUNCTION: model

def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
    """
    Model to train word vector representations in numpy.

    Arguments:
    X -- input data, numpy array of sentences as strings, of shape (m, 1)
    Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1)
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
    learning_rate -- learning_rate for the stochastic gradient descent algorithm
    num_iterations -- number of iterations

    Returns:
    pred -- vector of predictions, numpy-array of shape (m, 1)
    W -- weight matrix of the softmax layer, of shape (n_y, n_h)
    b -- bias of the softmax layer, of shape (n_y,)
    """

    np.random.seed(1)

    # Define number of training examples
    m = Y.shape[0]    # number of training examples
    n_y = 5           # number of classes
    n_h = 50          # dimensions of the GloVe vectors

    # Initialize parameters using Xavier initialization
    W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
    b = np.zeros((n_y,))

    # Convert Y to Y_onehot with n_y classes
    Y_oh = convert_to_one_hot(Y, C = n_y)

    # Optimization loop
    for t in range(num_iterations):   # Loop over the number of iterations
        for i in range(m):            # Loop over the training examples

            ### START CODE HERE ### (≈ 4 lines of code)
            # Average the word vectors of the words from the i'th training example
            avg = None

            # Forward propagate the avg through the softmax layer
            z = None
            a = None

            # Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
            cost = None
            ### END CODE HERE ###

            # Compute gradients
            dz = a - Y_oh[i]
            dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
            db = dz

            # Update parameters with Stochastic Gradient Descent
            W = W - learning_rate * dW
            b = b - learning_rate * db

        if t % 100 == 0:
            print("Epoch: " + str(t) + " --- cost = " + str(cost))
            pred = predict(X, Y, W, b, word_to_vec_map)

    return pred, W, b

print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5, 0, 0, 5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)
X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
                'Lets go party and drinks', 'Congrats on the new job', 'Congratulations',
                'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
                'You totally deserve this prize', 'Let us go play football',
                'Are you down for football this afternoon', 'Work hard play harder',
                'It is suprising how people can be dumb sometimes',
                'I am very disappointed', 'It is the best day in my life',
                'I think I will end up alone', 'My life is so boring', 'Good job',
                'Great so awesome'])
print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))
2.1 - Overview of the model

Here is the Emojifier-v2 you will implement:

<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier. </center></caption>

2.2 Keras and mini-batching

In this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time.

The common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, if the maximum sequence length is 20, we could pad every sentence with "0"s so that each input sentence is of length 20. Thus, a sentence "i love you" would be represented as $(e_{i}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. In this example, any sentences longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set.

2.3 - The Embedding layer

In Keras, the embedding matrix is represented as a "layer", and maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an Embedding() layer in Keras and initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we'll show you how Keras allows you to either train or leave fixed this layer.
The Embedding() layer takes an integer matrix of size (batch size, max input length) as input. This corresponds to sentences converted into lists of indices (integers), as shown in the figure below. <img src="images/embedding1.png" style="width:700px;height:250px;"> <caption><center> Figure 4: Embedding layer. This example shows the propagation of two examples through the embedding layer. Both have been zero-padded to a length of max_len=5. The final dimension of the representation is (2,max_len,50) because the word embeddings we are using are 50 dimensional. </center></caption> The largest integer (i.e. word index) in the input should be no larger than the vocabulary size. The layer outputs an array of shape (batch size, max input length, dimension of word vectors). The first step is to convert all your training sentences into lists of indices, and then zero-pad all these lists so that their length is the length of the longest sentence. Exercise: Implement the function below to convert X (array of sentences as strings) into an array of indices corresponding to words in the sentences. The output shape should be such that it can be given to Embedding() (described in Figure 4).
# GRADED FUNCTION: sentences_to_indices

def sentences_to_indices(X, word_to_index, max_len):
    """
    Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
    The output shape should be such that it can be given to `Embedding()` (described in Figure 4).

    Arguments:
    X -- array of sentences (strings), of shape (m, 1)
    word_to_index -- a dictionary containing each word mapped to its index
    max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.

    Returns:
    X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
    """

    m = X.shape[0]    # number of training examples

    ### START CODE HERE ###
    # Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
    X_indices = None

    for i in range(m):    # loop over training examples

        # Convert the ith training sentence to lower case and split it into words. You should get a list of words.
        sentence_words = None

        # Initialize j to 0
        j = None

        # Loop over the words of sentence_words
        for w in None:
            # Set the (i,j)th entry of X_indices to the index of the correct word.
            X_indices[i, j] = None
            # Increment j to j + 1
            j = None

    ### END CODE HERE ###

    return X_indices
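A possible completion of the stub above. To keep it self-contained, the sketch uses a tiny made-up vocabulary instead of the notebook's 400,001-word word_to_index:

```python
import numpy as np

def sentences_to_indices(X, word_to_index, max_len):
    m = X.shape[0]    # number of training examples

    # Initialize X_indices as a matrix of zeros: unused slots stay 0 (padding)
    X_indices = np.zeros((m, max_len))

    for i in range(m):
        # Lower-case the ith sentence and split it into words
        sentence_words = X[i].lower().split()
        j = 0
        for w in sentence_words:
            # Record the index of each word, left to right
            X_indices[i, j] = word_to_index[w]
            j = j + 1

    return X_indices

# Tiny hypothetical vocabulary for illustration
word_to_index = {'funny': 1, 'lol': 2, 'lets': 3, 'play': 4, 'football': 5}
X = np.array(['funny lol', 'lets play football'])
X_indices = sentences_to_indices(X, word_to_index, max_len=5)
```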
Expected Output: <table> <tr> <td> **X1 =** </td> <td> ['funny lol' 'lets play football' 'food is ready for you'] </td> </tr> <tr> <td> **X1_indices =** </td> <td> [[ 155345. 225122. 0. 0. 0.] <br> [ 220930. 286375. 151266. 0. 0.] <br> [ 151204. 192973. 302254. 151349. 394475.]] </td> </tr> </table> Let's build the Embedding() layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of sentences_to_indices() to it as an input, and the Embedding() layer will return the word embeddings for a sentence. Exercise: Implement pretrained_embedding_layer(). You will need to carry out the following steps: 1. Initialize the embedding matrix as a numpy array of zeroes with the correct shape. 2. Fill in the embedding matrix with all the word embeddings extracted from word_to_vec_map. 3. Define Keras embedding layer. Use Embedding(). Be sure to make this layer non-trainable, by setting trainable = False when calling Embedding(). If you were to set trainable = True, then it will allow the optimization algorithm to modify the values of the word embeddings. 4. Set the embedding weights to be equal to the embedding matrix
# GRADED FUNCTION: pretrained_embedding_layer

def pretrained_embedding_layer(word_to_vec_map, word_to_index):
    """
    Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.

    Arguments:
    word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
    word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)

    Returns:
    embedding_layer -- pretrained layer Keras instance
    """

    vocab_len = len(word_to_index) + 1                  # adding 1 to fit Keras embedding (requirement)
    emb_dim = word_to_vec_map["cucumber"].shape[0]      # define dimensionality of your GloVe word vectors (= 50)

    ### START CODE HERE ###
    # Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)
    emb_matrix = None

    # Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary
    for word, index in word_to_index.items():
        emb_matrix[index, :] = None

    # Define Keras embedding layer with the correct output/input sizes and make it non-trainable. Use Embedding(...). Make sure to set trainable=False.
    embedding_layer = None
    ### END CODE HERE ###

    # Build the embedding layer; this is required before setting the weights of the embedding layer. Do not modify the "None".
    embedding_layer.build((None,))

    # Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
    embedding_layer.set_weights([emb_matrix])

    return embedding_layer

embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])
Expected Output:

<table> <tr> <td> **weights[0][1][3] =** </td> <td> -0.3403 </td> </tr> </table>

2.3 Building the Emojifier-V2

Let's now build the Emojifier-V2 model. You will do so using the embedding layer you have built, and feed its output to an LSTM network.

<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> Figure 3: Emojifier-v2. A 2-layer LSTM sequence classifier. </center></caption>

Exercise: Implement Emojify_V2(), which builds a Keras graph of the architecture shown in Figure 3. The model takes as input an array of sentences of shape (m, max_len, ) defined by input_shape. It should output a softmax probability vector of shape (m, C = 5). You may need Input(shape = ..., dtype = '...'), LSTM(), Dropout(), Dense(), and Activation().
# GRADED FUNCTION: Emojify_V2

def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
    """
    Function creating the Emojify-v2 model's graph.

    Arguments:
    input_shape -- shape of the input, usually (max_len,)
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
    word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)

    Returns:
    model -- a model instance in Keras
    """

    ### START CODE HERE ###
    # Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
    sentence_indices = Input(shape=input_shape, dtype='int32')

    # Create the embedding layer pretrained with GloVe Vectors (≈1 line)
    embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)

    # Propagate sentence_indices through your embedding layer, you get back the embeddings
    embeddings = embedding_layer(sentence_indices)

    # Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
    # Be careful, the returned output should be a batch of sequences.
    X = LSTM(128, return_sequences=True)(embeddings)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through another LSTM layer with 128-dimensional hidden state
    # Be careful, the returned output should be a single hidden state, not a batch of sequences.
    X = LSTM(128, return_sequences=False)(X)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through a Dense layer to get back a batch of 5-dimensional vectors.
    X = Dense(5)(X)
    # Add a softmax activation
    X = Activation('softmax')(X)

    # Create Model instance which converts sentence_indices into X.
    model = Model(inputs=sentence_indices, outputs=X)
    ### END CODE HERE ###

    return model
deeplearning.ai/C5.SequenceModel/Week2_NLP_WordEmbeddings/assignment/Emojify/Emojify - v2.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
1. Test Brown Corpus
from nltk.corpus import brown brown.words()[0:10] brown.tagged_words()[0:10] len(brown.words()) dir(brown)
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
2. Test NLTK Book Resources
from nltk.book import * dir(text1) len(text1)
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
3. Sent Tokenize (sentence boundary detection, sentence segmentation), Word Tokenize and POS Tagging
from nltk import sent_tokenize, word_tokenize, pos_tag text = "Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you'll learn about not only the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems. Finally, you'll learn about some of Silicon Valley's best practices in innovation as it pertains to machine learning and AI." sents = sent_tokenize(text) sents len(sents) tokens = word_tokenize(text) tokens len(tokens) tagged_tokens = pos_tag(tokens) tagged_tokens
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
4. Sentence Tokenize and Word Tokenize Sentence boundary disambiguation (SBD), also known as sentence breaking, is the problem in natural language processing of deciding where sentences begin and end. Often natural language processing tools require their input to be divided into sentences for a number of reasons. However sentence boundary identification is challenging because punctuation marks are often ambiguous. For example, a period may denote an abbreviation, decimal point, an ellipsis, or an email address – not the end of a sentence. About 47% of the periods in the Wall Street Journal corpus denote abbreviations. As well, question marks and exclamation marks may appear in embedded quotations, emoticons, computer code, and slang. Languages like Japanese and Chinese have unambiguous sentence-ending markers.
text = "this’s a sent tokenize test. this is sent two. is this sent three? sent 4 is cool! Now it’s your turn." from nltk.tokenize import sent_tokenize sent_tokenize_list = sent_tokenize(text) len(sent_tokenize_list) sent_tokenize_list import nltk.data tokenizer = nltk.data.load('tokenizers/punkt/english.pickle') tokenizer.tokenize(text) spanish_tokenizer = nltk.data.load('tokenizers/punkt/spanish.pickle') spanish_tokenizer.tokenize('Hola amigo. Estoy bien.')
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
Tokenizing text into words
from nltk.tokenize import word_tokenize word_tokenize('Hello World.') word_tokenize("this's a test") from nltk.tokenize import TreebankWordTokenizer tokenizer = TreebankWordTokenizer() tokenizer.tokenize("this’s a test") # Standard word tokenizer. _word_tokenize = TreebankWordTokenizer().tokenize def word_tokenize(text): """ Return a tokenized copy of *text*, using NLTK's recommended word tokenizer (currently :class:`.TreebankWordTokenizer`). This tokenizer is designed to work on a sentence at a time. """ return _word_tokenize(text) word_tokenize("this’s a test") from nltk.tokenize import WordPunctTokenizer word_punct_tokenizer = WordPunctTokenizer() word_punct_tokenizer.tokenize('This’s a test')
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
5. Part-Of-Speech Tagging and POS Tagger In corpus linguistics, part-of-speech tagging (POS tagging or POST), also called grammatical tagging or word-category disambiguation, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition, as well as its context—i.e. relationship with adjacent and related words in a phrase, sentence, or paragraph. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc. Once performed by hand, POS tagging is now done in the context of computational linguistics, using algorithms which associate discrete terms, as well as hidden parts of speech, in accordance with a set of descriptive tags. POS-tagging algorithms fall into two distinctive groups: rule-based and stochastic. E. Brill’s tagger, one of the first and most widely used English POS-taggers, employs rule-based algorithms.
text = nltk.word_tokenize("Dive into NLTK: Part-of-speech tagging and POS Tagger") text nltk.pos_tag(text) nltk.help.upenn_tagset('NN.*') nltk.help.upenn_tagset('VB.*') nltk.help.upenn_tagset('JJ.*') nltk.help.upenn_tagset('CC.*') nltk.help.upenn_tagset('IN.*') nltk.help.upenn_tagset('PRP.*') nltk.help.upenn_tagset('DT.*')
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
TnT POS Tagger Model
# Natural Language Toolkit: TnT Tagger
#
# Copyright (C) 2001-2013 NLTK Project
# Author: Sam Huston <sjh900@gmail.com>
#
# URL: <http://www.nltk.org/>
# For license information, see LICENSE.TXT

'''
Implementation of 'TnT - A Statistical Part of Speech Tagger'
by Thorsten Brants

http://acl.ldc.upenn.edu/A/A00/A00-1031.pdf
'''
from __future__ import print_function

from math import log
from operator import itemgetter

from nltk.probability import FreqDist, ConditionalFreqDist
from nltk.tag.api import TaggerI

class TnT(TaggerI):
    '''
    TnT - Statistical POS tagger

    IMPORTANT NOTES:

    * DOES NOT AUTOMATICALLY DEAL WITH UNSEEN WORDS
      - It is possible to provide an untrained POS tagger to
        create tags for unknown words, see __init__ function

    * SHOULD BE USED WITH SENTENCE-DELIMITED INPUT
      - Due to the nature of this tagger, it works best when
        trained over sentence delimited input.
      - However it still produces good results if the training
        data and testing data are separated on all punctuation eg: [,.?!]
      - Input for training is expected to be a list of sentences
        where each sentence is a list of (word, tag) tuples
      - Input for tag function is a single sentence
        Input for tagdata function is a list of sentences
        Output is of a similar form

    * Function provided to process text that is unsegmented
      - Please see basic_sent_chop()

    TnT uses a second order Markov model to produce tags for
    a sequence of input, specifically:

      argmax [Proj(P(t_i|t_i-1,t_i-2)P(w_i|t_i))] P(t_T+1 | t_T)

    IE: the maximum projection of a set of probabilities

    The set of possible tags for a given word is derived
    from the training data. It is the set of all tags
    that exact word has been assigned.

    To speed up and get more precision, we can use log addition
    instead of multiplication, specifically:

      argmax [Sigma(log(P(t_i|t_i-1,t_i-2))+log(P(w_i|t_i)))] +
             log(P(t_T+1|t_T))

    The probability of a tag for a given word is the linear
    interpolation of 3 markov models; a zero-order, first-order,
    and a second order model.
      P(t_i| t_i-1, t_i-2) = l1*P(t_i) + l2*P(t_i| t_i-1) +
                             l3*P(t_i| t_i-1, t_i-2)

    A beam search is used to limit the memory usage of the algorithm.
    The degree of the beam can be changed using N in the initialization.
    N represents the maximum number of possible solutions to maintain
    while tagging.

    It is possible to differentiate the tags which are assigned to
    capitalized words. However this does not result in a significant
    gain in the accuracy of the results.
    '''

from nltk.corpus import treebank

len(treebank.tagged_sents())
train_data = treebank.tagged_sents()[:3000]
test_data = treebank.tagged_sents()[3000:]
train_data[0]
test_data[0]

from nltk.tag import tnt
tnt_pos_tagger = tnt.TnT()
tnt_pos_tagger.train(train_data)
tnt_pos_tagger.evaluate(test_data)

import pickle
# pickle needs a binary-mode file handle in Python 3
f = open('tnt_pos_tagger.pickle', "wb")
pickle.dump(tnt_pos_tagger, f)
f.close()

tnt_pos_tagger.tag(nltk.word_tokenize("this is a tnt treebank tnt tagger"))
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
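The interpolation formula quoted in the docstring above is simple enough to sketch directly. Note the lambda weights below are illustrative placeholders — TnT actually estimates them from the training data via deleted interpolation:

```python
def interpolated_trigram_prob(p_uni, p_bi, p_tri, lambdas=(0.1, 0.3, 0.6)):
    """Linear interpolation of the zero-, first-, and second-order models:
    P(t_i | t_i-1, t_i-2) = l1*P(t_i) + l2*P(t_i|t_i-1) + l3*P(t_i|t_i-1,t_i-2).
    The lambda weights here are illustrative, not TnT's estimated values."""
    l1, l2, l3 = lambdas
    return l1 * p_uni + l2 * p_bi + l3 * p_tri

# Unigram, bigram, and trigram estimates for some candidate tag;
# the result is weighted toward the trigram model.
print(interpolated_trigram_prob(0.2, 0.5, 0.9))
```

Because the weights sum to one, the result is always a valid probability whenever the three inputs are.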
6. Stemming In linguistic morphology and information retrieval, stemming is the process for reducing inflected (or sometimes derived) words to their stem, base or root form—generally a written word form. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if this stem is not in itself a valid root. Algorithms for stemming have been studied in computer science since the 1960s. Many search engines treat words with the same stem as synonyms as a kind of query expansion, a process called conflation. Stemming programs are commonly referred to as stemming algorithms or stemmers.
from nltk.stem.porter import PorterStemmer
porter_stemmer = PorterStemmer()

from nltk.stem.lancaster import LancasterStemmer
lancaster_stemmer = LancasterStemmer()

from nltk.stem import SnowballStemmer
snowball_stemmer = SnowballStemmer("english")

#from nltk.stem.api import StemmerI
#api_stemmer = StemmerI()

from nltk.stem.regexp import RegexpStemmer
# RegexpStemmer takes a regular expression of suffixes to strip, not a language name
regexp_stemmer = RegexpStemmer('ing$|s$|e$|able$', min=4)

from nltk.stem.isri import ISRIStemmer
isri_stemmer = ISRIStemmer()

from nltk.stem.rslp import RSLPStemmer
rlsp_stemmer = RSLPStemmer()

if __name__ == "__main__":
    import doctest
    doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)

words = ['maximum','presumably','multiply','provision','owed','ear','saying','crying','string','meant','cement']

porter_words = []
for word in words:
    porter_words.append(porter_stemmer.stem(word))
porter_words

lancaster_words = []
for word in words:
    lancaster_words.append(lancaster_stemmer.stem(word))
lancaster_words

snowball_words = []
for word in words:
    snowball_words.append(snowball_stemmer.stem(word))
snowball_words

isri_words = []
for word in words:
    isri_words.append(isri_stemmer.stem(word))
isri_words

rlsp_words = []
for word in words:
    rlsp_words.append(rlsp_stemmer.stem(word))
rlsp_words

regexp_words = []
for word in words:
    regexp_words.append(regexp_stemmer.stem(word))
regexp_words
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
7. Lemmatization Lemmatisation (or lemmatization) in linguistics, is the process of grouping together the different inflected forms of a word so they can be analysed as a single item. In computational linguistics, lemmatisation is the algorithmic process of determining the lemma for a given word. Since the process may involve complex tasks such as understanding context and determining the part of speech of a word in a sentence (requiring, for example, knowledge of the grammar of a language) it can be a hard task to implement a lemmatiser for a new language. In many languages, words appear in several inflected forms. For example, in English, the verb ‘to walk’ may appear as ‘walk’, ‘walked’, ‘walks’, ‘walking’. The base form, ‘walk’, that one might look up in a dictionary, is called the lemma for the word. The combination of the base form with the part of speech is often called the lexeme of the word. Lemmatisation is closely related to stemming. The difference is that a stemmer operates on a single word without knowledge of the context, and therefore cannot discriminate between words which have different meanings depending on part of speech. However, stemmers are typically easier to implement and run faster, and the reduced accuracy may not matter for some applications.
from nltk.stem import WordNetLemmatizer wordnet_lemmatizer = WordNetLemmatizer() words_lem = ['dogs','churches','aardwolves','abaci','hardrock','attractive','are','is'] #words_lem_pos = pos_tag(words_lem) wordnet_words = [] for word in words_lem: if word == 'is' or word == 'are': # for verbs wordnet_words.append(wordnet_lemmatizer.lemmatize(word, pos='v')) else: # wordnet_words.append(wordnet_lemmatizer.lemmatize(word)) wordnet_words
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
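The is/are special-casing above hints at the general point: the lemmatizer needs a part of speech to do its job. A common approach is to map the Penn Treebank tags returned by pos_tag onto WordNet's four POS letters and pass that to lemmatize(word, pos=...). The mapping below is a minimal sketch of that convention:

```python
def penn_to_wordnet(treebank_tag):
    """Map a Penn Treebank POS tag to the POS letter WordNetLemmatizer expects.
    WordNet only distinguishes adjectives ('a'), verbs ('v'), adverbs ('r'),
    and nouns ('n'); anything else falls back to noun."""
    if treebank_tag.startswith('J'):
        return 'a'
    if treebank_tag.startswith('V'):
        return 'v'
    if treebank_tag.startswith('R'):
        return 'r'
    return 'n'

# Usage with the lemmatizer above would look like:
#   wordnet_lemmatizer.lemmatize(word, pos=penn_to_wordnet(tag))
print(penn_to_wordnet('VBZ'), penn_to_wordnet('NNS'))
```

With this in place the is/are branch disappears: 'is' is tagged VBZ, maps to 'v', and lemmatizes to 'be' without special-casing.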
Some simple things you can do with NLTK http://www.nltk.org/
import nltk sentence = "At eight o'clock on Thursday morning Arthur didn't feel very good." tokens = nltk.word_tokenize(sentence) tokens tagged = nltk.pos_tag(tokens) tagged[0:6] entities = nltk.chunk.ne_chunk(tagged) entities from nltk.corpus import treebank t = treebank.parsed_sents('wsj_0001.mrg')[0] t.draw()
test/Learning/MN - text mining test.ipynb
datahac/jup
apache-2.0
Plotting 2 moons dataset Code taken directly from Chris Waites's jax-flows demo. This is the distribution we want to create a bijection to from a simple base distribution, such as a gaussian distribution.
n_samples = 10000 plot_range = [(-2, 2), (-2, 2)] n_bins = 100 scaler = preprocessing.StandardScaler() X, _ = datasets.make_moons(n_samples=n_samples, noise=0.05) X = scaler.fit_transform(X) plt.hist2d(X[:, 0], X[:, 1], bins=n_bins, range=plot_range)[-1] plt.savefig("two-moons-original.pdf") plt.savefig("two-moons-original.png")
notebooks/book2/22/two_moons_normalizingFlow.ipynb
probml/pyprobml
mit
Creating the normalizing flow in distrax+haiku Instead of a uniform distribution, we use a normal distribution as the base distribution. This makes more sense for a standardized two moons dataset that is scaled according to a normal distribution using sklearn's StandardScaler(). Using a uniform base distribution will result in inf and nan loss.
from typing import Any, Iterator, Mapping, Optional, Sequence, Tuple

# Hyperparams - change these to experiment
flow_num_layers = 8
mlp_num_layers = 4
hidden_size = 1000
num_bins = 8
batch_size = 512
learning_rate = 1e-4
eval_frequency = 100

Array = jnp.ndarray
PRNGKey = Array
Batch = Mapping[str, np.ndarray]
OptState = Any

# Functions to create a distrax normalizing flow

def make_conditioner(
    event_shape: Sequence[int], hidden_sizes: Sequence[int], num_bijector_params: int
) -> hk.Sequential:
    """Creates an MLP conditioner for each layer of the flow."""
    return hk.Sequential(
        [
            hk.Flatten(preserve_dims=-len(event_shape)),
            hk.nets.MLP(hidden_sizes, activate_final=True),
            # We initialize this linear layer to zero so that the flow is initialized
            # to the identity function.
            hk.Linear(np.prod(event_shape) * num_bijector_params, w_init=jnp.zeros, b_init=jnp.zeros),
            hk.Reshape(tuple(event_shape) + (num_bijector_params,), preserve_dims=-1),
        ]
    )

def make_flow_model(
    event_shape: Sequence[int], num_layers: int, hidden_sizes: Sequence[int], num_bins: int
) -> distrax.Transformed:
    """Creates the flow model."""
    # Alternating binary mask.
    mask = jnp.arange(0, np.prod(event_shape)) % 2
    mask = jnp.reshape(mask, event_shape)
    mask = mask.astype(bool)

    def bijector_fn(params: Array):
        return distrax.RationalQuadraticSpline(params, range_min=-2.0, range_max=2.0)

    # Number of parameters for the rational-quadratic spline:
    # - `num_bins` bin widths
    # - `num_bins` bin heights
    # - `num_bins + 1` knot slopes
    # for a total of `3 * num_bins + 1` parameters.
    num_bijector_params = 3 * num_bins + 1

    layers = []
    for _ in range(num_layers):
        layer = distrax.MaskedCoupling(
            mask=mask,
            bijector=bijector_fn,
            conditioner=make_conditioner(event_shape, hidden_sizes, num_bijector_params),
        )
        layers.append(layer)
        # Flip the mask after each layer.
        mask = jnp.logical_not(mask)

    # We invert the flow so that the `forward` method is called with `log_prob`.
flow = distrax.Inverse(distrax.Chain(layers)) # Making base distribution normal distribution mu = jnp.zeros(event_shape) sigma = jnp.ones(event_shape) base_distribution = distrax.Independent(distrax.MultivariateNormalDiag(mu, sigma)) return distrax.Transformed(base_distribution, flow) def load_dataset(split: tfds.Split, batch_size: int) -> Iterator[Batch]: # ds = tfds.load("mnist", split=split, shuffle_files=True) ds = split ds = ds.shuffle(buffer_size=10 * batch_size) ds = ds.batch(batch_size) ds = ds.prefetch(buffer_size=1000) ds = ds.repeat() return iter(tfds.as_numpy(ds)) def prepare_data(batch: Batch, prng_key: Optional[PRNGKey] = None) -> Array: data = batch.astype(np.float32) return data @hk.without_apply_rng @hk.transform def model_sample(key: PRNGKey, num_samples: int) -> Array: model = make_flow_model( event_shape=TWO_MOONS_SHAPE, num_layers=flow_num_layers, hidden_sizes=[hidden_size] * mlp_num_layers, num_bins=num_bins, ) return model.sample(seed=key, sample_shape=[num_samples]) @hk.without_apply_rng @hk.transform def log_prob(data: Array) -> Array: model = make_flow_model( event_shape=TWO_MOONS_SHAPE, num_layers=flow_num_layers, hidden_sizes=[hidden_size] * mlp_num_layers, num_bins=num_bins, ) return model.log_prob(data) def loss_fn(params: hk.Params, prng_key: PRNGKey, batch: Batch) -> Array: data = prepare_data(batch, prng_key) # Loss is average negative log likelihood. loss = -jnp.mean(log_prob.apply(params, data)) return loss @jax.jit def eval_fn(params: hk.Params, batch: Batch) -> Array: data = prepare_data(batch) # We don't dequantize during evaluation. loss = -jnp.mean(log_prob.apply(params, data)) return loss
notebooks/book2/22/two_moons_normalizingFlow.ipynb
probml/pyprobml
mit
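The alternating-mask scheme inside make_flow_model is easy to inspect in isolation. In plain NumPy (the same logic as the jnp code above), each coupling layer transforms the dimensions where the mask is True, conditioning on those where it is False, and flipping the mask between layers ensures every dimension gets transformed:

```python
import numpy as np

event_shape = (2,)
mask = (np.arange(np.prod(event_shape)) % 2).astype(bool)  # alternating binary mask

layer_masks = []
for _ in range(4):
    layer_masks.append(mask.copy())
    mask = np.logical_not(mask)  # flip after each layer

for i, m in enumerate(layer_masks):
    print(f"layer {i}: transforms dims {np.where(m)[0]}, conditions on {np.where(~m)[0]}")
```

With a 2-dimensional event, even-numbered layers transform dimension 1 and odd-numbered layers transform dimension 0, so after any even number of layers both dimensions have been updated an equal number of times.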
Setting up the optimizer
optimizer = optax.adam(learning_rate) @jax.jit def update(params: hk.Params, prng_key: PRNGKey, opt_state: OptState, batch: Batch) -> Tuple[hk.Params, OptState]: """Single SGD update step.""" grads = jax.grad(loss_fn)(params, prng_key, batch) updates, new_opt_state = optimizer.update(grads, opt_state) new_params = optax.apply_updates(params, updates) return new_params, new_opt_state
notebooks/book2/22/two_moons_normalizingFlow.ipynb
probml/pyprobml
mit
Training the flow
# Event shape TWO_MOONS_SHAPE = (2,) # Create tf dataset from sklearn dataset dataset = tf.data.Dataset.from_tensor_slices(X) # Splitting into train/validate ds train = dataset.skip(2000) val = dataset.take(2000) # load_dataset(split: tfds.Split, batch_size: int) train_ds = load_dataset(train, 512) valid_ds = load_dataset(val, 512) # Initializing PRNG and Neural Net params prng_seq = hk.PRNGSequence(1) params = log_prob.init(next(prng_seq), np.zeros((1, *TWO_MOONS_SHAPE))) opt_state = optimizer.init(params) training_steps = 1000 for step in range(training_steps): params, opt_state = update(params, next(prng_seq), opt_state, next(train_ds)) if step % eval_frequency == 0: val_loss = eval_fn(params, next(valid_ds)) print(f"STEP: {step:5d}; Validation loss: {val_loss:.3f}") n_samples = 10000 plot_range = [(-2, 2), (-2, 2)] n_bins = 100 X_transf = model_sample.apply(params, next(prng_seq), num_samples=n_samples) plt.hist2d(X_transf[:, 0], X_transf[:, 1], bins=n_bins, range=plot_range)[-1] plt.savefig("two-moons-flow.pdf") plt.savefig("two-moons-flow.png") plt.show()
notebooks/book2/22/two_moons_normalizingFlow.ipynb
probml/pyprobml
mit
Overriding the __repr__ method:
class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b)
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
IPython expands on this idea and allows objects to declare other, rich representations including:

* HTML
* JSON
* PNG
* JPEG
* SVG
* LaTeX

A single object can declare some or all of these representations; all of them are handled by IPython's display system.

Basic display imports

The display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations.
from IPython.display import display
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
A few points: Calling display on an object will send all possible representations to the Notebook. These representations are stored in the Notebook document. In general the Notebook will use the richest available representation. If you want to display a particular representation, there are specific functions for that:
from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg )
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
Images To work with images (JPEG, PNG) use the Image class.
from IPython.display import Image i = Image(filename='./ipython-image.png') display(i)
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
Returning an Image object from an expression will automatically display it:
i
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
An image can also be displayed from raw data or a URL.
Image(url='http://python.org/images/python-logo.gif')
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h)
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
You can also use the %%html cell magic to accomplish the same thing.
%%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style>
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
You can remove the above styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
from IPython.display import Javascript
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
Pass a string of JavaScript source code to the JavaScript object and then display it.
js = Javascript('alert("hi")'); display(js)
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
The same thing can be accomplished using the %%javascript cell magic:
%%javascript alert("hi");
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs ones of the d3.js examples.
Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px");
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
Audio IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
from IPython.display import Audio Audio("./scrubjay.mp3")
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook. For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as beats occur:
import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate)
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:
from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0')
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page:
from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350)
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object:
from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb')
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
FileLinks('./')
days/day08/Display.ipynb
CalPolyPat/phys202-2015-work
mit
From the above code, we read the text file and saved the JSON data to the variable data to work with. We need to pull the time stamps, and their accompanying day of week. We want to convert a list of timestamps into a list of formatted timestamps. But first, we import datetime and parser.
from datetime import datetime from dateutil.parser import parse # limit to just march posts march_posts = list(filter(lambda x: parse(x['created_time'][:-5]) >= datetime(2017, 3, 1), data )) print(len(march_posts), "posts since", datetime(2017, 3, 1).date()) # get days to count occurrence march_days = list (map( lambda x: parse(x['created_time']).strftime("%A"), march_posts )) # count number of posts by day of week for day in march_days: print(day, " \t", march_days.count(day))
facebook_posting_activity_part2.ipynb
katychuang/ipython-notebooks
gpl-2.0
Now let's write some utility functions to process the data for the chart. Scrub turns the raw string type into a datetime type. Then we can pass that into dow() and hod() to format the strings.
from dateutil.parser import parse

def scrub(raw_timestamp):
    timestamp = parse(raw_timestamp)
    return dow(timestamp), hod(timestamp)

# returns day of week
def dow(date):
    return date.strftime("%A")  # i.e. Monday

# returns hour of day
def hod(time):
    return time.strftime("%-I:%M%p")  # i.e. 1:30PM
facebook_posting_activity_part2.ipynb
katychuang/ipython-notebooks
gpl-2.0
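As a quick check of the strftime formats used in dow and hod — shown here with a plain datetime built by hand rather than a parsed Facebook timestamp; note that %-I (the non-zero-padded hour) is a glibc extension and is not available on Windows:

```python
from datetime import datetime

d = datetime(2017, 3, 5, 13, 30)  # 1:30 in the afternoon of Sunday, March 5th 2017
print(d.strftime("%A"))        # day of week, as in dow()
print(d.strftime("%-I:%M%p"))  # hour of day, as in hod()
```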
Now we want to create nested lists. A month contains weeks, which in turn contain days. To express this in code, it would be something like so: M = [W, W, W, ...] W = ["Mon", "Tues", "Wed", "Thu", ... ] The lists are combined into a list of lists.
yIndex = ["Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"] # Get a list of week numbers, 0-3. Note that March starts on week 9 of 2017 # but we subtract 9 to start at index 0 get_week = list (map( lambda x: parse(x['created_time']).isocalendar()[1]-9, march_posts )) # Get a list of day numbers, 0-6 get_day = list (map( lambda x: yIndex.index(parse(x['created_time']).strftime("%A")), march_posts )) # create empty array from itertools import repeat month = [[0] * 7 for i in repeat(None, 5)] print(month) # go thru posts to fill in empty array for i, (w, d) in enumerate(zip(get_week, get_day)): month[w][d] = 1 print("active days: \n", month)
facebook_posting_activity_part2.ipynb
katychuang/ipython-notebooks
gpl-2.0
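The two map calls above can be folded into a single helper that turns a raw timestamp straight into its (row, column) cell of the heatmap grid. Note that yIndex.index(... strftime("%A")) is equivalent to datetime.weekday() (Monday = 0), so the standard library alone suffices; the helper below is a sketch, with the week offset of 9 noted above passed in as a parameter:

```python
from datetime import datetime

def grid_position(raw_timestamp, first_week=9):
    """Return (week_row, day_col) for a Facebook-style timestamp.
    first_week is the ISO week number of the first week of the month
    (week 9 of 2017 for March, as noted above)."""
    d = datetime.strptime(raw_timestamp[:19], "%Y-%m-%dT%H:%M:%S")
    return d.isocalendar()[1] - first_week, d.weekday()

print(grid_position("2017-03-01T10:15:00+0000"))  # March 1st 2017 was a Wednesday
```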
Now let's step it up and count posts per day, so we can have more than 2 shades of colors on the heatmap.
# empty list of lists activity = [[0] * 7 for i in repeat(None, 5)] # the total number of posts limit = len(get_week) # fill in empty array with a fraction for i, (w, d) in enumerate(zip(get_week, get_day)): activity[w][d] += 1/limit print("activity per day: \n", activity)
facebook_posting_activity_part2.ipynb
katychuang/ipython-notebooks
gpl-2.0
Now our data is ready for plotting. Let's do the important config stuff.
%matplotlib inline import matplotlib.pyplot as plt import numpy as np
facebook_posting_activity_part2.ipynb
katychuang/ipython-notebooks
gpl-2.0
Here's how you create a chart of heatmap type, filled in with values from activity.
fig, ax = plt.subplots() heatmap = ax.pcolor(activity, cmap=plt.cm.Greens, alpha=0.8) # put the major ticks at the middle of each cell ax.set_xticks(np.arange(0,7)+0.5, minor=False) ax.set_yticks(np.arange(0,5)+0.5, minor=False) # want a more natural, table-like display ax.invert_yaxis() ax.xaxis.tick_top() # labels column_labels = ["Mon", "Tues", "Wed", "Thurs", "Fri", "Sat", "Sun"] ax.set_xticklabels(column_labels, minor=False) ax.set_yticklabels(list(''), minor=False) plt.show()
facebook_posting_activity_part2.ipynb
katychuang/ipython-notebooks
gpl-2.0
I'm not liking how the borders look. So I'm going to create another version with the seaborn library.
import seaborn as sns sns.set(font_scale=1.2) sns.set_style({"savefig.dpi": 100}) ax = sns.heatmap(activity, cmap=plt.cm.Greens, linewidths=.1) ax.xaxis.tick_top() ax.set_xticklabels(column_labels, minor=False) ax.set_yticklabels(list(''), minor=False) fig = ax.get_figure()
facebook_posting_activity_part2.ipynb
katychuang/ipython-notebooks
gpl-2.0
<h3>Binary gaussian fit</h3>
Now we fit two gaussians to the data to test the hypothesis that there are two stars in the data. To do this, we fit the data to the function
\begin{eqnarray}
f_B(x,y,\vec{z}) & = & F_1\exp\left[-\frac{(x-x_c-\frac{1}{2}r\cos\theta)^2 + (y-y_c-\frac{1}{2}r\sin\theta)^2}{2\sigma_0^2}\right]\\
& & + F_2\exp\left[-\frac{(x-x_c+\frac{1}{2}r\cos\theta)^2 + (y-y_c+\frac{1}{2}r\sin\theta)^2}{2\sigma_0^2}\right]\\
& & + B
\end{eqnarray}
where $\vec{z} = (\sigma_0,x_c,y_c,r,\theta,F_1,F_2,B)$. The parameters $x_c,y_c$ are the center of the binary system, and $r,\theta$ are the length and angle with respect to $x$ of the line connecting the two stars. Here, we also assume that the variance of the data is proportional to the data, so $\sigma_{x,y} \propto D_{x,y}$.
# Fit data to binary star model
def binary_gaussian_f(z, sig, xc, yc, r, theta, F1, F2, B):
    f1 = F1*np.exp(-0.5*((z[0]-xc-0.5*r*np.cos(theta))**2 + (z[1]-yc-0.5*r*np.sin(theta))**2)/sig**2)
    f2 = F2*np.exp(-0.5*((z[0]-xc+0.5*r*np.cos(theta))**2 + (z[1]-yc+0.5*r*np.sin(theta))**2)/sig**2)
    return f1 + f2 + B

ini_guess_b = [sig_s*0.5, x0_s, y0_s, 10., np.pi/4, F_s*0.5, F_s*0.5, B_s*0.5]
popt_b, pcov_b = optimize.curve_fit(binary_gaussian_f, (x_vals.ravel(), y_vals.ravel()),
                                    stardata.ravel(), p0=ini_guess_b, sigma=sigma_0.ravel())
print(popt_b)
stats/HW_4_PS.ipynb
mbuchove/notebook-wurk-b
mit
<h3>Model comparison</h3>

To compare the binary and single star models, we compute the ratio of the posteriors for each model, given by

$$\frac{P(B \mid \{D_{x,y}\})}{P(S \mid \{D_{x,y}\})} = \frac{P(\{D_{x,y}\}\mid B)P(B)}{P(\{D_{x,y}\}\mid S)P(S)}$$

We assume that the priors for each model are equal, so these terms cancel in the ratio. The likelihoods are given by (we write $M$ for a generic model and $\{\lambda\}$ for generic parameters)

\begin{eqnarray}
P(\{D_{x,y}\}\mid M) &=& \int P(\{D_{x,y}\},\{\lambda\}\mid M)\,d\{\lambda\} \\
&=& \int P(\{D_{x,y}\}\mid \{\lambda\}, M)P(\{\lambda\}\mid M)\,d\{\lambda\}
\end{eqnarray}

For the first term in the integrand, we make the approximation that the parameters are distributed as a gaussian, so we have

$$P(\{D_{x,y}\},\{\lambda\}\mid M) \approx P(\{D_{x,y}\},\{\lambda_0\}\mid M)\exp\left[-\frac{1}{2}(\lambda-\lambda_0)^T\Sigma^{-1}(\lambda-\lambda_0)\right]$$

where $\{\lambda_0\}$ is the collection of best-fit parameters and $\Sigma$ is the covariance matrix we get from the fitting. Therefore, we have

$$P(\{D_{x,y}\}\mid M) = P(\{D_{x,y}\},\{\lambda_0\}\mid M)\int \exp\left[-\frac{1}{2}(\lambda-\lambda_0)^T\Sigma^{-1}(\lambda-\lambda_0)\right]P(\{\lambda\}\mid M)\,d\{\lambda\}$$

We assume uniform priors for the parameters, which means

$$P(\{\lambda\}\mid M) = \prod_i P(\lambda_i\mid M) = \prod_i\frac{1}{\lambda_i^{\text{max}} - \lambda_i^{\text{min}}}$$

This means that we can pull this term out of the integral, leaving us with a multivariate gaussian integral. This can be done easily, and gives

\begin{eqnarray}
P(\{D_{x,y}\}\mid M) &=& P(\{D_{x,y}\},\{\lambda_0\}\mid M)\prod_i\frac{1}{\lambda_i^{\text{max}} - \lambda_i^{\text{min}}}\int \exp\left[-\frac{1}{2}(\lambda-\lambda_0)^T\Sigma^{-1}(\lambda-\lambda_0)\right]d\{\lambda\} \\
&=& P(\{D_{x,y}\},\{\lambda_0\}\mid M)\prod_i\frac{1}{\lambda_i^{\text{max}} - \lambda_i^{\text{min}}} \sqrt{(2\pi)^{n}\det\Sigma}
\end{eqnarray}

where $n$ is the number of parameters.
Plugging this formula into our comparison, we first note that there are 5 parameters for the single star model and 8 for the binary star model. For the priors, we take the position parameters to be anywhere in the grid, so $x^{\text{max}} - x^{\text{min}} = 256\text{ pix}$, and the same for $y$ and $r$. For $\sigma$, we assume both models have the same range, so these terms cancel when we take the ratio. For $\theta$, we assume it can be in the range $0\ldots\pi$. For the background, $B$, we take the same range for the two models. For the fluxes, we take $F^{\text{max}} - F^{\text{min}} = 300\text{ DN}$ for both models. Putting this all together, we have

$$\frac{P(B \mid \{D_{x,y}\})}{P(S \mid \{D_{x,y}\})} = \frac{(2\pi)^{3/2}}{300\cdot256\cdot\pi}\sqrt{\frac{\det\Sigma_B}{\det\Sigma_S}}\exp\left[-\frac{1}{2}(\chi_B^2-\chi_S^2)\right]$$

where the denominator of the first factor comes from the uniform priors and

$$\chi_M^2 = \sum_{x,y}\frac{(D_{x,y}-f_M(x,y,\vec{z}))^2}{D_{x,y}}$$

for each model $M=B,S$. Below is the result of computing this ratio for the parameters we found.
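The gaussian integral identity used in the evidence above can be checked numerically for a small illustrative case. The covariance matrix below is made up purely for the check; it is not the fitted $\Sigma_B$ or $\Sigma_S$.

```python
import numpy as np

# Illustrative 2x2 covariance matrix (not the fitted one).
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.5]])
inv_Sigma = np.linalg.inv(Sigma)

# Brute-force grid integration of exp[-0.5 * lam^T Sigma^-1 lam]
# over a box wide enough to contain essentially all of the mass.
xs = np.linspace(-15., 15., 601)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
pts = np.stack([X, Y], axis=-1)
quad = np.einsum('...i,ij,...j->...', pts, inv_Sigma, pts)
numeric = np.exp(-0.5 * quad).sum() * dx * dx

# Closed form: sqrt((2 pi)^n det Sigma) with n = 2 parameters.
analytic = np.sqrt((2 * np.pi) ** 2 * np.linalg.det(Sigma))
print(numeric, analytic)
```

The two numbers agree to well below a tenth of a percent, confirming the $\sqrt{(2\pi)^n\det\Sigma}$ factor.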
F_max = 300.
r_max = stardata.shape[0]

det_cov_b = np.linalg.det(pcov_b)
det_cov_s = np.linalg.det(pcov_s)

sig_b, x0_b, y0_b, r_b, theta_b, F1_b, F2_b, B_b = popt_b

chi2_b = 0.
for xx in range(0, stardata.shape[0]):
    for yy in range(0, stardata.shape[1]):
        res_b = stardata[yy,xx] - binary_gaussian_f((xx,yy), *popt_b)
        chi2_b += res_b**2/abs(stardata[yy,xx])  # variance proportional to the data

chi2_s = 0.
for xx in range(0, stardata.shape[0]):
    for yy in range(0, stardata.shape[1]):
        res_s = stardata[yy,xx] - gaussian_f((xx,yy), *popt_s)
        chi2_s += res_s**2/abs(stardata[yy,xx])

num_den = np.exp(-(chi2_b-chi2_s)/2)
param_diff = len(pcov_b) - len(pcov_s)
ratio = np.power(2*np.pi, 0.5*param_diff)/(F_max*r_max*np.pi)*np.sqrt(det_cov_b/det_cov_s)*num_den
print(ratio)
stats/HW_4_PS.ipynb
mbuchove/notebook-wurk-b
mit
<h3>Joint distribution for star fluxes in the binary model</h3>

Assuming the joint distribution for the fluxes of star 1 and 2 is gaussian (this follows from the gaussian approximation of the distribution of the best-fit parameters and integrating out the unwanted parameters; the result is still a gaussian), we have

$$P(F_1,F_2\mid \{D_{x,y}\}) = \frac{1}{2\pi\sqrt{\det\Sigma_F}}\exp\left[-\frac{1}{2}(F-F_0)^T\Sigma_F^{-1}(F-F_0)\right]$$

where

$$F_0 = \left( \begin{array}{c} 10.8 \\ 25.3 \end{array} \right)$$

are the best-fit parameters $F_1$ and $F_2$, respectively, and the covariance matrix for these parameters is

$$\Sigma_F = \left( \begin{array}{cc} 1.96 & -0.63 \\ -0.63 & 1.58 \end{array} \right)$$

which is obtained from the $2\times 2$ block of $\Sigma_B$ corresponding to $F_1$ and $F_2$. To get the flux, we multiply these parameters by the integral of the point spread function, which comes out to $2\pi\sigma_0^2$. This results in

$$F_\text{tot} = \left( \begin{array}{c} 2538.58 \\ 5953.12 \end{array} \right)$$

for the fluxes of star 1 and 2, with errors given by the matrix

$$\Sigma_{F_\text{tot}} = \left( \begin{array}{cc} 460.08 & -148.16 \\ -148.16 & 371.74 \end{array} \right)$$

<h3>Joint distribution of $(r,\theta)$ in the binary model</h3>

Similar to the above discussion, the joint distribution for the relative position $\vec{r}=(r,\theta)$ of the stars in the binary model is given by

$$P(r,\theta\mid \{D_{x,y}\}) = \frac{1}{2\pi\sqrt{\det\Sigma_{\vec{r}}}}\exp\left[-\frac{1}{2}(\vec{r}-\vec{r}_0)^T\Sigma_{\vec{r}}^{-1}(\vec{r}-\vec{r}_0)\right]$$

where

$$\vec{r}_0 = \left( \begin{array}{c} 11.9 \\ 0.77 \end{array} \right)$$

are the best-fit parameters for the relative positions of the stars, and the covariance matrix is

$$\Sigma_{\vec{r}} = \left( \begin{array}{cc} 0.46 & -4.73\times 10^{-5} \\ -4.73\times 10^{-5} & 3.20\times10^{-3} \end{array} \right)$$

which is obtained from the $2\times 2$ block of $\Sigma_B$ corresponding to $r,\theta$. Below is a plot of this distribution.
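The conversion from fit amplitudes to total fluxes is a linear transform of the fit parameters, so by standard error propagation ($\Sigma_y = A\Sigma_x A^T$ with $A = cI$) the covariance picks up the scale factor as well. A minimal sketch of that step, using the amplitude values quoted above; the `sigma_0` value here is purely illustrative (the fitted value is the first entry of `popt_b`):

```python
import numpy as np

# Best-fit flux amplitudes and their 2x2 covariance block,
# copied from the values quoted in the text.
F0 = np.array([10.8, 25.3])
Sigma_F = np.array([[1.96, -0.63],
                    [-0.63, 1.58]])

# Integral of the point spread function: 2*pi*sigma_0^2.
# This sigma_0 is illustrative, not the fitted value.
sigma_0 = 6.0
c = 2 * np.pi * sigma_0 ** 2

# F_tot = c * F0 is a linear transform, so the covariance
# scales by c**2 under standard error propagation.
F_tot = c * F0
Sigma_F_tot = c ** 2 * Sigma_F
print(F_tot)
```

With the actual fitted `sigma_0` in place of the illustrative one, `F_tot` reproduces the total fluxes quoted above.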
theta = np.linspace(0, 2*np.pi, 100)
r = np.linspace(0, stardata.shape[0]-1, stardata.shape[0])

cov_r = pcov_b[3:5, 3:5]
det_covr = np.linalg.det(cov_r)
inv_covr = np.linalg.inv(cov_r)

joint_rt = np.array([[1/(2*np.pi*np.sqrt(det_covr))
                      *np.exp(-0.5*float(np.dot(np.dot(np.transpose(np.array([[rr-r_b], [tt-theta_b]])),
                                                       inv_covr),
                                                np.array([[rr-r_b], [tt-theta_b]]))))
                      for rr in r] for tt in theta])

plt.contour(r, theta, joint_rt)
plt.colorbar()
plt.axis([9, 15, 0.6, 1.])
plt.xlabel('r [pix]')
plt.ylabel('theta [rad]')
plt.title('Joint distribution of (r,theta) for binary stars')

# Data subtracted by binary fit
sub_b = stardata - binary_gaussian_f((x_vals, y_vals), *popt_b)
smoothed_sub_b = ndimage.gaussian_filter(sub_b, 3)
sub_plot = plt.imshow(smoothed_sub_b)
plt.colorbar(label='DN')
stats/HW_4_PS.ipynb
mbuchove/notebook-wurk-b
mit
Here are the RadarSat-2 quadpol coherency matrix image directories as created from the Sentinel-1 Toolbox:
ls /home/imagery
mohammed.ipynb
mortcanty/SARDocker
mit
To combine the matrix bands into a single GeoTiff image, we run the python script ingestrs2quad.py:
run /home/ingestrs2quad /home/imagery/RS2_OK82571_PK721079_DK650144_FQ17W_20160403_230258_HH_VV_HV_VH_SLC/
run /home/ingestrs2quad /home/imagery/RS2_OK82571_PK721080_DK650145_FQ17W_20160427_230257_HH_VV_HV_VH_SLC/
run /home/ingestrs2quad /home/imagery/RS2_OK82571_PK721081_DK650146_FQ17W_20160614_230256_HH_VV_HV_VH_SLC/
mohammed.ipynb
mortcanty/SARDocker
mit
Here is an RGB display of the three diagonal matrix elements of the above image (bands 1,6 and 9):
run /home/dispms -f /home/imagery/RS2_OK82571_PK721081_DK650146_FQ17W_20160614_230256_HH_VV_HV_VH_SLC/polSAR.tif \
-p [1,6,9]
mohammed.ipynb
mortcanty/SARDocker
mit
To estimate the equivalent number of looks, run the python script enlml.py:
run /home/enlml /home/imagery/RS2_OK82571_PK721081_DK650146_FQ17W_20160614_230256_HH_VV_HV_VH_SLC/polSAR.tif
mohammed.ipynb
mortcanty/SARDocker
mit
So the ENL would appear to be about 5. To run the sequential change detection on the three images, run the bash script sar_seq_rs2quad.sh. It gathers the three images together and calls the python script sar_seq.py, which does the change detection. By choosing a spatial subset (in this case 400x400), the images are clipped and co-registered to the first image. This might be unnecessary if the images are well registered anyway. If you have a multicore processor, you can enable parallel computation by opening a terminal window in the container (new terminal) and running

ipcluster start -n 4
!/home/sar_seq_rs2quad.sh 20160403 20160427 20160614 [50,50,400,400] 5 0.01
mohammed.ipynb
mortcanty/SARDocker
mit
Here is the change map for the most recent changes:
run /home/dispms \
-f /home/imagery/RS2_OK82571_PK721079_DK650144_FQ17W_20160403_230258_HH_VV_HV_VH_SLC/sarseq(20160403-1-20160614)_cmap.tif -c
mohammed.ipynb
mortcanty/SARDocker
mit
Transliteration
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator

bangla_text = "ami apni tumi tomar tomader amar apnar apnader akash"
text_trans = ItransTransliterator.from_itrans(bangla_text, "bn")
print repr(text_trans).decode("unicode_escape")
stemming_and_transliteration/Bangla Stemming and Transliteration.ipynb
salman-jpg/maya
mit
Using Silpa https://github.com/libindic/Silpa-Flask Transliteration
from transliteration import getInstance

trans = getInstance()
text_trans = trans.transliterate(bangla_text, "bn_IN")
print repr(text_trans).decode("unicode_escape")
stemming_and_transliteration/Bangla Stemming and Transliteration.ipynb
salman-jpg/maya
mit
Using BengaliStemmer https://github.com/gdebasis/BengaliStemmer Stemming
import rbs

word_stem1 = []
for i in word_token:
    word_stem1.append(rbs.stemWord(i, True))

bs1 = pd.DataFrame({"1_Word": word_token, "2_Stem": word_stem1})
bs1
stemming_and_transliteration/Bangla Stemming and Transliteration.ipynb
salman-jpg/maya
mit
Using BanglaStemmer https://github.com/rafi-kamal/Bangla-Stemmer Stemming
import jnius_config
jnius_config.set_classpath(".", "path to class")
from jnius import autoclass

cls = autoclass("RuleFileParser")
stemmer = cls()

word_stem2 = []
for i in word_token:
    word_stem2.append(stemmer.stemOfWord(i))

bs2 = pd.DataFrame({"1_Word": word_token, "2_Stem": word_stem2})
bs2
stemming_and_transliteration/Bangla Stemming and Transliteration.ipynb
salman-jpg/maya
mit
Using Avro https://github.com/kaustavdm/pyAvroPhonetic Transliteration
from pyavrophonetic import avro

trans_text = avro.parse(bangla_text)
print repr(trans_text).decode("unicode_escape")
stemming_and_transliteration/Bangla Stemming and Transliteration.ipynb
salman-jpg/maya
mit
In the following sections we will load the data, pre-process it, train the model, and explore the results using some of the implementation's functionality. Feel free to skip the loading and pre-processing for now, if you are familiar with the process.

Loading the data

In the cell below, we crawl the folders and files in the dataset, and read the files into memory.
import os, re

# Folder containing all NIPS papers.
data_dir = '/tmp/nipstxt/'  # Set this path to the data on your machine.

# Folders containing individual NIPS papers.
yrs = ['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']
dirs = ['nips' + yr for yr in yrs]

# Get all document texts and their corresponding IDs.
docs = []
doc_ids = []
for yr_dir in dirs:
    files = os.listdir(data_dir + yr_dir)  # List of filenames.
    for filen in files:
        # Get document ID.
        (idx1, idx2) = re.search('[0-9]+', filen).span()  # Matches the indexes of the start and end of the ID.
        doc_ids.append(yr_dir[4:] + '_' + str(int(filen[idx1:idx2])))

        # Read document text.
        # Note: ignoring characters that cause encoding errors.
        with open(data_dir + yr_dir + '/' + filen, errors='ignore', encoding='utf-8') as fid:
            txt = fid.read()

        # Replace any whitespace (newline, tabs, etc.) by a single space.
        txt = re.sub('\s', ' ', txt)

        docs.append(txt)
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
Construct a mapping from author names to document IDs.
filenames = [data_dir + 'idx/a' + yr + '.txt' for yr in yrs]  # Using the years defined in previous cell.

# Get all author names and their corresponding document IDs.
author2doc = dict()
i = 0
for yr in yrs:
    # The files "a00.txt" and so on contain the author-document mappings.
    filename = data_dir + 'idx/a' + yr + '.txt'
    for line in open(filename, errors='ignore', encoding='utf-8'):
        # Each line corresponds to one author.
        contents = re.split(',', line)
        author_name = (contents[1] + contents[0]).strip()
        # Remove any whitespace to reduce redundant author names.
        author_name = re.sub('\s', '', author_name)

        # Get document IDs for author.
        ids = [c.strip() for c in contents[2:]]

        if not author2doc.get(author_name):
            # This is a new author.
            author2doc[author_name] = []
            i += 1

        # Add document IDs to author.
        author2doc[author_name].extend([yr + '_' + id for id in ids])

# Use an integer ID in author2doc, instead of the IDs provided in the NIPS dataset.

# Mapping from ID of document in NIPS dataset, to an integer ID.
doc_id_dict = dict(zip(doc_ids, range(len(doc_ids))))

# Replace NIPS IDs by integer IDs.
for a, a_doc_ids in author2doc.items():
    for i, doc_id in enumerate(a_doc_ids):
        author2doc[a][i] = doc_id_dict[doc_id]
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
Pre-processing text

The text will be pre-processed using the following steps:

* Tokenize text.
* Replace all whitespace by single spaces.
* Remove all punctuation and numbers.
* Remove stopwords.
* Lemmatize words.
* Add multi-word named entities.
* Add frequent bigrams.
* Remove frequent and rare words.

A lot of the heavy lifting will be done by the great package, Spacy. Spacy markets itself as "industrial-strength natural language processing", is fast, enables multiprocessing, and is easy to use. First, let's import it and load the NLP pipeline in English.
import spacy
nlp = spacy.load('en')
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
In the code below, Spacy takes care of tokenization, removal of non-alphabetic characters, stopword removal, lemmatization, and named entity recognition. Note that we only keep named entities that consist of more than one word, as single-word named entities are already present as tokens.
%%time
processed_docs = []
for doc in nlp.pipe(docs, n_threads=4, batch_size=100):
    # Process document using Spacy NLP pipeline.
    ents = doc.ents  # Named entities.

    # Keep only words (no numbers, no punctuation).
    # Lemmatize tokens, remove punctuation and remove stopwords.
    doc = [token.lemma_ for token in doc if token.is_alpha and not token.is_stop]

    # Remove common words from a stopword list.
    #doc = [token for token in doc if token not in STOPWORDS]

    # Add named entities, but only if they are a compound of more than one word.
    doc.extend([str(entity) for entity in ents if len(entity) > 1])

    processed_docs.append(doc)

docs = processed_docs
del processed_docs
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
Below, we use a Gensim model to add bigrams. Note that this achieves the same goal as named entity recognition, that is, finding adjacent words that have some particular significance.
# Compute bigrams.
from gensim.models import Phrases

# Add bigrams and trigrams to docs (only ones that appear 20 times or more).
bigram = Phrases(docs, min_count=20)
for idx in range(len(docs)):
    for token in bigram[docs[idx]]:
        if '_' in token:
            # Token is a bigram, add to document.
            docs[idx].append(token)
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
Now we are ready to construct a dictionary, as our vocabulary is finalized. We then remove common words (occurring in more than $50\%$ of the documents) and rare words (occurring fewer than $20$ times in total).
# Create a dictionary representation of the documents, and filter out frequent and rare words.
from gensim.corpora import Dictionary
dictionary = Dictionary(docs)

# Remove rare and common tokens.
# Filter out words that occur too frequently or too rarely.
max_freq = 0.5
min_wordcount = 20
dictionary.filter_extremes(no_below=min_wordcount, no_above=max_freq)

_ = dictionary[0]  # This sort of "initializes" dictionary.id2token.
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
We produce the vectorized bag-of-words representation of the documents, which we will supply to the author-topic model.
# Vectorize data.

# Bag-of-words representation of the documents.
corpus = [dictionary.doc2bow(doc) for doc in docs]
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
Let's inspect the dimensionality of our data.
print('Number of authors: %d' % len(author2doc))
print('Number of unique tokens: %d' % len(dictionary))
print('Number of documents: %d' % len(corpus))
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
Train and use model

We train the author-topic model on the data prepared in the previous sections.

The interface to the author-topic model is very similar to that of LDA in Gensim. In addition to a corpus, ID to word mapping (id2word) and number of topics (num_topics), the author-topic model requires either an author to document ID mapping (author2doc), or the reverse (doc2author).

Below, we have also (this can be skipped for now):
* Increased the number of passes over the dataset (to improve the convergence of the optimization problem).
* Decreased the number of iterations over each document (related to the above).
* Specified the mini-batch size (chunksize) (primarily to speed up training).
* Turned off bound evaluation (eval_every) (as it takes a long time to compute).
* Turned on automatic learning of the alpha and eta priors (to improve the convergence of the optimization problem).
* Set the random state (random_state) of the random number generator (to make these experiments reproducible).

We load the model, and train it.
from gensim.models import AuthorTopicModel

%time model = AuthorTopicModel(corpus=corpus, num_topics=10, id2word=dictionary.id2token, \
                author2doc=author2doc, chunksize=2000, passes=1, eval_every=0, \
                iterations=1, random_state=1)
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1
If you believe your model hasn't converged, you can continue training using model.update(). If you have additional documents and/or authors, call model.update(corpus, author2doc). Before we explore the model, let's try to improve upon it. To do this, we will train several models with different random initializations, by giving different seeds for the random number generator (random_state). We evaluate the topic coherence of the model using the top_topics method, and pick the model with the highest topic coherence.
%%time
model_list = []
for i in range(5):
    model = AuthorTopicModel(corpus=corpus, num_topics=10, id2word=dictionary.id2token, \
                    author2doc=author2doc, chunksize=2000, passes=100, gamma_threshold=1e-10, \
                    eval_every=0, iterations=1, random_state=i)

    top_topics = model.top_topics(corpus)
    tc = sum([t[1] for t in top_topics])
    model_list.append((model, tc))
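The selection step described above (keep the run with the highest coherence) amounts to a `max` over the `(model, coherence)` pairs in `model_list`. A minimal sketch, shown with placeholder entries so the snippet stands alone; in the notebook you would call it on the real `model_list`:

```python
# Pick the (model, coherence) pair with the highest topic coherence.
def best_by_coherence(model_list):
    return max(model_list, key=lambda pair: pair[1])

# Placeholder coherence scores stand in for trained AuthorTopicModel
# instances; run names and scores here are purely illustrative.
demo = [('run_0', -1.74), ('run_1', -1.52), ('run_2', -1.91)]
model, tc = best_by_coherence(demo)
print(model, tc)
```

Note that coherence scores are typically negative, so "highest" means closest to zero.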
docs/notebooks/atmodel_tutorial.ipynb
markroxor/gensim
lgpl-2.1