8 - Using $\textbf{A}^{-1}$, compute $(\textbf{A}^{-1})^T$ and comment on your results. $(A^{-1})^{T}$ is the same as $(A^T)^{-1}$:
print(Ainv.T)
print(np.linalg.inv(A.T))
print(np.linalg.inv(Ainv.T))
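The identity can also be checked numerically with `np.allclose`. A minimal sketch, assuming a small invertible matrix (the notebook's actual $\textbf{A}$ is defined in an earlier cell, so the one below is just a stand-in):

```python
import numpy as np

# Hypothetical invertible matrix; the notebook's A comes from an earlier cell
A = np.array([[1.0, 2.0], [-1.0, 1.0]])
Ainv = np.linalg.inv(A)

# The transpose of the inverse equals the inverse of the transpose
same = np.allclose(Ainv.T, np.linalg.inv(A.T))
print(same)
```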
Foundations/Math/linear-algebra_exercise.ipynb
aleph314/K2
gpl-3.0
9 - Using $\textbf{A}$, $\textbf{B}$, and $\textbf{C}$, compute the determinant of each and comment on your results.
print(np.linalg.det(A))
# print(np.linalg.det(B))
# print(np.linalg.det(C))
We can't compute the determinant for $B$ and $C$ since they are not square matrices. 10 - Construct a square ($2\times2$) matrix, $\textbf{D}$, that is not invertible.
D = np.array([[0, 0], [0, 0]])
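The zero matrix works, but any matrix with linearly dependent rows is also non-invertible. A sketch with a less trivial singular matrix (the matrix below is my own example, not from the notebook):

```python
import numpy as np

# Any matrix with linearly dependent rows is singular, e.g. row2 = 2 * row1
D = np.array([[1.0, 2.0], [2.0, 4.0]])
detD = np.linalg.det(D)
print(detD)  # 0, up to floating-point error

try:
    np.linalg.inv(D)
    invertible = True
except np.linalg.LinAlgError:
    invertible = False
print(invertible)
```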
11 - How would you go about solving the equation $\textbf{A}\vec{x} = 0$, using $\textbf{A}$ as above, for an unknown $\vec{x}$? Do so and comment on your results. Hint: consider parts (6) and (7).
c = np.array([0, 0])
x = Ainv.dot(c)
x
12 - Using the same method as in part (11), solve the equation $\textbf{A}\vec{x} = \vec{y}$ where $\vec{y} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$
y = np.array([1, -1])
x = Ainv.dot(y)
x
13 - Solve the system of equations $$x_0 + 2x_1 = 3$$ $$-x_0 + x_1 = 1$$ using both matrix inversion and built-in numpy functions.
A = np.array([[1, 2], [-1, 1]])
b = np.array([3, 1])

# Using inversion
x = np.linalg.inv(A).dot(b)
x

# Using linalg.solve
x = np.linalg.solve(A, b)
x
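The two routes agree, which can be verified with `np.allclose` (in practice `solve` is preferred numerically over forming the inverse). A quick sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0], [-1.0, 1.0]])
b = np.array([3.0, 1.0])

# Both routes give the same answer
x_inv = np.linalg.inv(A) @ b
x_solve = np.linalg.solve(A, b)
print(x_inv, x_solve)
```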
14 - Solve the system of equations $$x_0 + x_1 = 1$$ $$2x_0 + 2x_1 = 2$$ $$-3x_0 + -3x_1 = -3$$ using both matrix inversion and built-in numpy functions. Are these results what you expected? Comment on your results. The three equations are equivalent, so there are infinitely many solutions and I expect that numpy can't solve it; instead, the error returned in both cases essentially says that the coefficient matrix is not square, i.e. the number of equations differs from the number of unknowns:
A = np.array([[1, 1], [2, 2], [-3, -3]])
b = np.array([1, 2, -3])

# Using inversion
x = np.linalg.inv(A).dot(b)
x

# Using linalg.solve
x = np.linalg.solve(A, b)
x
15 - Solve the system of equations $$x_0 + x_1 = 0$$ $$x_0 + x_1 = 1$$ using both matrix inversion and built-in numpy functions. Are these results what you expected? Comment on your results. The system has no solution and, using both methods, I get something close to what I expected: the error says that the matrix is singular, so it can't be inverted:
A = np.array([[1, 1], [1, 1]])
b = np.array([0, 1])

# Using inversion
x = np.linalg.inv(A).dot(b)
x

# Using linalg.solve
x = np.linalg.solve(A, b)
x
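The failure can be handled programmatically by catching `LinAlgError`, which both `inv` and `solve` raise for a singular coefficient matrix. A minimal sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 1.0])

# solve raises LinAlgError when the coefficient matrix is singular
try:
    np.linalg.solve(A, b)
    solved = True
except np.linalg.LinAlgError:
    solved = False
print(solved)
```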
Eigenvalues and Eigenvectors No discussion of Linear Algebra would be complete without taking a look at Eigenvalues and Eigenvectors. The root word "eigen" comes from the German meaning "characteristic", and these values, and their associated vectors, represent some interesting properties of a given matrix. Namely, for a given matrix $\textbf{A}$ and vector $\vec{v}$, the eigenvalue(s), $\lambda$, of $\textbf{A}$ are the $\lambda$ that satisfy the relationship $$\textbf{A} \vec{v} = \lambda \vec{v}$$ Keep in mind that $\lambda$ is a scalar quantity, and when you multiply a vector by a scalar quantity, you just scale, or stretch, the vector in space. Therefore, the above relationship says that the eigenvectors $\vec{v}$ of $\textbf{A}$, with associated eigenvalues $\lambda$, are the vectors that when multiplied by $\textbf{A}$ just "stretch" in space (no rotations). Now that may not sound very special, but the applicability of these concepts cannot be overstated. Eigenvalues and vectors have a tendency to crop up in any mathematically grounded discipline, and Data Science is no exception. For a more detailed explanation, see - Great math formula explanation - Visual explanation of Eigenvectors and Eigenvalues before proceeding with the following exercises. 1 - Generate a matrix $$\textbf{A} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}$$ and two vectors of your choosing, labeled $\vec{v}_1$ and $\vec{v}_2$. Then compute the vectors $$\vec{v}_1' = \textbf{A}\vec{v}_1$$ $$\vec{v}_2' = \textbf{A}\vec{v}_2$$ and plot all 4 vectors with appropriate labels. Comment on your results.
import matplotlib.pyplot as plt
%matplotlib inline

# Note: the exercise asks for A = [[0, 1], [-2, -3]]; the original cell had a sign typo ([-2, 3])
A = np.array([[0, 1], [-2, -3]])
v1 = np.array([1, 0])
v2 = np.array([-1, 1])
v1a = A.dot(v1)
v2a = A.dot(v2)

# This is the plotting function from the instructional notebook:
def plot_vector2d(vector2d, origin=[0, 0], **options):
    return plt.arrow(origin[0], origin[1], vector2d[0], vector2d[1],
                     head_width=0.2, head_length=0.3,
                     length_includes_head=True, **options)
Applying the transformation $A$ rotates and scales the vectors:
fig = plt.figure(figsize=(10, 10))
ax = plt.axes()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
plot_vector2d(v1, color='red', label='v1')
plot_vector2d(v2, color='blue', label='v2')
plot_vector2d(v1a, color='orange', label='v1a')
plot_vector2d(v2a, color='green', label='v2a');
#ax.legend();
2 - Now compute the eigenvalues and eigenvectors of $\textbf{A}$, then plot $\textbf{A}\vec{v}$ and $\lambda\vec{v}$ on separate plots, where $\lambda$ is the eigenvalue of $\textbf{A}$. Comment on your results.
val, vec = np.linalg.eig(A)
print(val)
print(vec)
print(A.dot(vec[:,0]))
print(val[0]*vec[:,0])
The eigenvectors are mapped by $A$ to scalar multiples of themselves, since this is their defining property:
fig = plt.figure(figsize=(10, 10))
ax = plt.axes()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
plot_vector2d(A.dot(vec[:, 0]), color='red', label='eig1')
plot_vector2d(A.dot(vec[:, 1]), color='blue', label='eig2');
#ax.legend();

fig = plt.figure(figsize=(10, 10))
ax = plt.axes()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
plot_vector2d(val[0]*vec[:, 0], color='orange', label='lambda1')
plot_vector2d(val[1]*vec[:, 1], color='green', label='lambda2');
#ax.legend();
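Beyond the plots, the defining relationship $A\vec{v} = \lambda\vec{v}$ can be checked directly for every eigenpair. A short sketch using the exercise's matrix:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
vals, vecs = np.linalg.eig(A)

# For every eigenpair, A @ v equals lambda * v
ok = all(np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i]) for i in range(len(vals)))
print(ok)
```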
3 - How do the results of part (2) differ from part (1)? The vectors in part (1) are rotated by $A$ while the eigenvectors of part (2) are just scaled by the transformation. 4 - Define a new 3x3 matrix of the form $$\textbf{A} = \begin{bmatrix} -2 & -4 & 2 \\ -2 & 1 & 2 \\ 4 & 2 & 5 \end{bmatrix}$$ and compute the eigenvalues and vectors. What can you say about the number of eigenvectors in your results? Since the matrix has three distinct eigenvalues we get three linearly independent eigenvectors, as expected:
A = np.array([[-2, -4, 2], [-2, 1, 2], [4, 2, 5]])
np.linalg.eig(A)
5 - Define a new 3x3 matrix of the form $$\textbf{B} = \begin{bmatrix} -2 & -4 & 2 \\ -2 & 1 & 2 \\ 1 & 2 & -1 \end{bmatrix}$$ and compute the eigenvalues and vectors. What can you say about the eigenvalues in your results? Do they differ from what you saw in part (4)? The matrix has rank 2 (the third column is the first times $-1$), so 0 must be one of its eigenvalues; numpy still returns three eigenpairs, but the second eigenvalue comes out as a tiny nonzero number instead of exactly 0 because of floating-point arithmetic.
B = np.array([[-2, -4, 2], [-2, 1, 2], [1, 2, -1]])
np.linalg.eig(B)
6 - Compute the inverse of $\textbf{A}$ and $\textbf{B}$ above. Comment on your results. We can't invert the matrix $B$ because it has two proportional columns:
np.linalg.inv(A)
np.linalg.inv(B)  # raises LinAlgError: B is singular
7 - Compute the determinant of $\textbf{A}$ and $\textbf{B}$. How might your results relate to the eigenvalues you computed above? The matrix $B$ is not invertible because its determinant is equal to 0; geometrically it maps all of space onto a single plane, which has dimension 2, and this matches the eigenvalue of 0 we found above.
print(np.linalg.det(A))
print(np.linalg.det(B))
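The connection between the two computations is that the determinant equals the product of the eigenvalues, which is why a zero eigenvalue forces a zero determinant. A quick numerical check on the exercise's matrix $A$:

```python
import numpy as np

A = np.array([[-2.0, -4.0, 2.0], [-2.0, 1.0, 2.0], [4.0, 2.0, 5.0]])
vals, _ = np.linalg.eig(A)

# The determinant equals the product of the eigenvalues
print(np.linalg.det(A), np.prod(vals))
```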
8 - Consider the rotation matrix $$\textbf{R} = \begin{bmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta)\end{bmatrix}$$ Using a value of $\theta = 90°$, compute the inner product of the columns, $\textbf{R}^T$, $\textbf{R}^{-1}$, $\det(\textbf{R})$, and the eigenvalues and eigenvectors. Comment on your results. A rotation matrix is orthogonal, so its columns are orthonormal, it has determinant equal to 1, and $R^T = R^{-1}$.
t = np.pi/2
R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
print(R.T)
print(np.linalg.inv(R))
print(np.linalg.det(R))
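Orthogonality can be verified in one line: $R^T R = I$, which simultaneously shows $R^T = R^{-1}$. A minimal sketch:

```python
import numpy as np

t = np.pi / 2
R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

# Orthogonal matrix: R^T R = I, so R^T is the inverse, and det(R) = 1
print(np.allclose(R.T @ R, np.eye(2)))
print(np.isclose(np.linalg.det(R), 1.0))
```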
The eigenvalues and eigenvectors are complex because a rotation leaves no real direction fixed:
np.linalg.eig(R)
Matrix Decomposition Matrix Decomposition can be thought of as rewriting a given matrix as a product of other (and often simpler) matrices. For example, given a matrix $\textbf{A}$, one can decompose $\textbf{A}$ into the following. $$\textbf{A} = \textbf{Q} \Lambda \textbf{Q}^{-1}$$ where $\textbf{Q}$ is a matrix whose $i^{th}$ column is the $i^{th}$ eigenvector of $\textbf{A}$, and $\Lambda$ is a matrix containing all of the corresponding eigenvalues on the main diagonal. Decomposing $\textbf{A}$ in this manner is called an Eigendecomposition. Such matrix decompositions form the basis of many techniques in Data Science and other mathematical disciplines. 1 - Compute the eigenvalues and eigenvectors of matrix $$\textbf{A} = \begin{bmatrix} -2 & -4 & 2 \\ -2 & 1 & 2 \\ 4 & 2 & 5 \end{bmatrix}$$
A = np.array([[-2, -4, 2], [-2, 1, 2], [4, 2, 5]])
val, vec = np.linalg.eig(A)
2 - Construct a matrix $\textbf{Q}$ whose columns are the eigenvectors of $\textbf{A}$.
Q = vec
3 - Construct a set of three vectors $\vec{\lambda_1} \dots \vec{\lambda_n}$, whose $n^{th}$ element is the $n^{th}$ eigenvalue of $\textbf{A}$ while all other elements are 0. The second vector, for example, would be $$\vec{\lambda_2} = \begin{bmatrix} 0 \\ \lambda_2 \\ 0 \end{bmatrix}$$
l = np.diag(val)
4 - Now try multiplying various combinations of $\textbf{A}$, $\textbf{Q}$, and $\vec{\lambda_n}$ together. What is the relationship among them? Multiplying $Q$ elementwise by $\vec{\lambda_i}$ isolates the $i^{th}$ column of $A \cdot Q$ (the other columns become zero):
print(l[0] * Q)
print(l[1] * Q)
print(l[2] * Q)
print(A.dot(Q))
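Stacking the three relations column by column gives the matrix identity $AQ = Q\Lambda$, which is the relationship the next part solves for $A$. A compact check:

```python
import numpy as np

A = np.array([[-2.0, -4.0, 2.0], [-2.0, 1.0, 2.0], [4.0, 2.0, 5.0]])
val, vec = np.linalg.eig(A)
Q = vec
L = np.diag(val)

# A @ Q stacks the scaled eigenvectors column by column, i.e. A Q = Q Lambda
print(np.allclose(A @ Q, Q @ L))
```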
5 - Solve the relationship you found in part (4) for $\textbf{A}$ and verify that this is the eigenvalue decomposition.
print(Q.dot(l).dot(np.linalg.inv(Q)))
print(A)
6 - Another very useful matrix decomposition is the Singular Value Decomposition (SVD) which is used, for example, in Principal Component Analysis. A full discussion of this decomposition is beyond the scope of this exercise, but singular values are the square roots of the eigenvalues of $\textbf{A}\textbf{A}^T$ (for the real case). Using numpy, perform a SVD on $\textbf{A}$ used above, and verify that the values on the main diagonal of the singular matrix are the square roots of the eigenvalues of $\textbf{A}$.
u, diag, v = np.linalg.svd(A)
eigenval, eigenvec = np.linalg.eig(A.dot(A.T))

print(diag**2)
print(eigenval)
print(u.dot(np.diag(diag)).dot(v))
print(A)
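Because `eig` and `svd` may order their results differently, the cleanest check compares the sorted squared singular values against the sorted eigenvalues of $AA^T$. A sketch:

```python
import numpy as np

A = np.array([[-2.0, -4.0, 2.0], [-2.0, 1.0, 2.0], [4.0, 2.0, 5.0]])
u, s, vt = np.linalg.svd(A)
eigvals = np.linalg.eigvals(A @ A.T)

# Squared singular values match the eigenvalues of A A^T (compare as sorted sets)
print(np.allclose(np.sort(s**2), np.sort(eigvals.real)))
```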
Explore the Data Play around with view_line_range to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character \n.
view_line_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))

lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))

print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
DEEP LEARNING/NLP/LSTM RNN/Next Chars pytorch/project-tv-script-generation/dlnd_tv_script_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Implement Pre-processing Functions The first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function

    # return tuple
    return (None, None)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
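One possible way to fill in the TODO (a sketch, not the notebook's graded solution) is to rank words by frequency and assign ids in that order, so the most common word gets id 0:

```python
from collections import Counter

def create_lookup_tables(text):
    """A possible implementation: rank words by frequency and assign ids.
    :param text: the script text split into words
    :return: (vocab_to_int, int_to_vocab)
    """
    counts = Counter(text)
    # most frequent word gets id 0, next most frequent gets id 1, and so on
    sorted_vocab = sorted(counts, key=counts.get, reverse=True)
    vocab_to_int = {word: i for i, word in enumerate(sorted_vocab)}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables(['a', 'b', 'a', 'c', 'a', 'b'])
print(v2i['a'])  # 0: 'a' is the most frequent word
```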
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids. Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( - ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenized dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    return None

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
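A possible mapping (the keys are fixed by the exercise; the token strings below are my own choice, and any values that can't be confused with real words should work):

```python
def token_lookup():
    """One possible punctuation-to-token mapping."""
    return {
        '.': '||period||',
        ',': '||comma||',
        '"': '||quotation_mark||',
        ';': '||semicolon||',
        '!': '||exclamation_mark||',
        '?': '||question_mark||',
        '(': '||left_paren||',
        ')': '||right_paren||',
        '-': '||dash||',
        '\n': '||return||',
    }

tokens = token_lookup()
print(tokens['!'])  # ||exclamation_mark||
```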
Pre-process all the data and save it Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for preprocess_and_save_data in the helpers.py file to see what it's doing in detail, but you do not need to change this code.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # pre-process training data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Build the Neural Network In this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
""" DON'T MODIFY ANYTHING IN THIS CELL """ import torch # Check for a GPU train_on_gpu = torch.cuda.is_available() if not train_on_gpu: print('No GPU found. Please use a GPU to train your neural network.')
Input Let's start with the preprocessed input data. We'll use TensorDataset to provide a known format to our dataset; in combination with DataLoader, it will handle batching, shuffling, and other dataset iteration functions. You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual. data = TensorDataset(feature_tensors, target_tensors) data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size) Batching Implement the batch_data function to batch words data into chunks of size batch_size using the TensorDataset and DataLoader classes. You can batch words using the DataLoader, but it will be up to you to create feature_tensors and target_tensors of the correct size and content for a given sequence_length. For example, say we have these as input: words = [1, 2, 3, 4, 5, 6, 7] sequence_length = 4 Your first feature_tensor should contain the values: [1, 2, 3, 4] And the corresponding target_tensor should just be the next "word"/tokenized word value: 5 This should continue with the second feature_tensor, target_tensor being: [2, 3, 4, 5] # features 6 # target
from torch.utils.data import TensorDataset, DataLoader

def batch_data(words, sequence_length, batch_size):
    """
    Batch the neural network data using DataLoader
    :param words: The word ids of the TV scripts
    :param sequence_length: The sequence length of each batch
    :param batch_size: The size of each batch; the number of sequences in a batch
    :return: DataLoader with batched data
    """
    # TODO: Implement function

    # return a dataloader
    return None

# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
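The core of the TODO is building the feature/target arrays with a sliding window; in the notebook you would then wrap them in `TensorDataset` and `DataLoader`. A light sketch of just the windowing step (torch is omitted here, and `make_windows` is a hypothetical helper name):

```python
import numpy as np

def make_windows(words, sequence_length):
    """Sliding-window features/targets; in the notebook, wrap the results
    in TensorDataset + DataLoader to get the batching behaviour."""
    words = np.asarray(words)
    n = len(words) - sequence_length
    features = np.array([words[i:i + sequence_length] for i in range(n)])
    targets = words[sequence_length:sequence_length + n]
    return features, targets

f, t = make_windows([1, 2, 3, 4, 5, 6, 7], sequence_length=4)
print(f[0], t[0])  # [1 2 3 4] 5, matching the worked example above
```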
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar. Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs sample_x and targets sample_y from our dataloader. Your code should return something like the following (likely in a different order, if you shuffled your data): ``` torch.Size([10, 5]) tensor([[ 28, 29, 30, 31, 32], [ 21, 22, 23, 24, 25], [ 17, 18, 19, 20, 21], [ 34, 35, 36, 37, 38], [ 11, 12, 13, 14, 15], [ 23, 24, 25, 26, 27], [ 6, 7, 8, 9, 10], [ 38, 39, 40, 41, 42], [ 25, 26, 27, 28, 29], [ 7, 8, 9, 10, 11]]) torch.Size([10]) tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12]) ``` Sizes Your sample_x should be of size (batch_size, sequence_length) or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). Values You should also notice that the targets, sample_y, are the next value in the ordered test_text data. So, for an input sequence [ 28, 29, 30, 31, 32] that ends with the value 32, the corresponding output should be 33.
# test dataloader

test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)

data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)  # .next() is Python 2; use next() in Python 3

print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
Build the Neural Network Implement an RNN using PyTorch's Module class. You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - __init__ - The initialize function. - init_hidden - The initialization function for an LSTM/GRU hidden state - forward - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state. The output of this model should be the last batch of word scores after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim) You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so: ``` reshape into (batch_size, seq_length, output_size) output = output.view(batch_size, -1, self.output_size) get last batch out = output[:, -1] ```
import torch.nn as nn

class RNN(nn.Module):

    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        """
        Initialize the PyTorch RNN Module
        :param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
        :param output_size: The number of output dimensions of the neural network
        :param embedding_dim: The size of embeddings, should you choose to use them
        :param hidden_dim: The size of the hidden layer outputs
        :param dropout: dropout to add in between LSTM/GRU layers
        """
        super(RNN, self).__init__()
        # TODO: Implement function

        # set class variables

        # define model layers

    def forward(self, nn_input, hidden):
        """
        Forward propagation of the neural network
        :param nn_input: The input to the neural network
        :param hidden: The hidden state
        :return: Two Tensors, the output of the neural network and the latest hidden state
        """
        # TODO: Implement function

        # return one batch of output word scores and the hidden state
        return None, None

    def init_hidden(self, batch_size):
        '''
        Initialize the hidden state of an LSTM/GRU
        :param batch_size: The batch_size of the hidden state
        :return: hidden state of dims (n_layers, batch_size, hidden_dim)
        '''
        # Implement function

        # initialize hidden state with zero weights, and move to GPU if available
        return None

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
Define forward and backpropagation Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows: loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target) And it should return the average loss over a batch and the hidden state returned by a call to RNN(inp, hidden). Recall that you can get this loss by computing it, as usual, and calling loss.item(). If a GPU is available, you should move your data to that GPU device, here.
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    """
    Forward and backward propagation on the neural network
    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
    :param criterion: The PyTorch loss function
    :param inp: A batch of input to the neural network
    :param target: The target output for the batch of input
    :return: The loss and the latest hidden state Tensor
    """
    # TODO: Implement Function

    # move data to GPU, if available

    # perform backpropagation and optimization

    # return the loss over a batch and the hidden state produced by our model
    return None, None

# Note that these tests aren't completely extensive.
# They are here to act as general checks on the expected outputs of your functions.
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
Neural Network Training With the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train Loop The training loop is implemented for you in the train_rnn function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the show_every_n_batches parameter. You'll set this parameter along with other parameters in the next section.
""" DON'T MODIFY ANYTHING IN THIS CELL """ def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100): batch_losses = [] rnn.train() print("Training for %d epoch(s)..." % n_epochs) for epoch_i in range(1, n_epochs + 1): # initialize hidden state hidden = rnn.init_hidden(batch_size) for batch_i, (inputs, labels) in enumerate(train_loader, 1): # make sure you iterate over completely full batches, only n_batches = len(train_loader.dataset)//batch_size if(batch_i > n_batches): break # forward, back prop loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden) # record loss batch_losses.append(loss) # printing loss stats if batch_i % show_every_n_batches == 0: print('Epoch: {:>4}/{:<4} Loss: {}\n'.format( epoch_i, n_epochs, np.average(batch_losses))) batch_losses = [] # returns a trained rnn return rnn
Hyperparameters Set and train the neural network with the following parameters: - Set sequence_length to the length of a sequence. - Set batch_size to the batch size. - Set num_epochs to the number of epochs to train for. - Set learning_rate to the learning rate for an Adam optimizer. - Set vocab_size to the number of unique tokens in our vocabulary. - Set output_size to the desired size of the output. - Set embedding_dim to the embedding dimension; smaller than the vocab_size. - Set hidden_dim to the hidden dimension of your RNN. - Set n_layers to the number of layers/cells in your RNN. - Set show_every_n_batches to the number of batches at which the neural network should print progress. If the network isn't getting the desired results, tweak these parameters and/or the layers in the RNN class.
# Data params
# Sequence Length
sequence_length =   # of words in a sequence
# Batch Size
batch_size =

# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)

# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =

# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =

# Show stats for every n number of batches
show_every_n_batches = 500
Train In the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. You should aim for a loss less than 3.5. You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # create model and move to gpu if available rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5) if train_on_gpu: rnn.cuda() # defining loss and optimization functions for training optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate) criterion = nn.CrossEntropyLoss() # training the model trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches) # saving the trained model helper.save_model('./save/trained_rnn', trained_rnn) print('Model Trained and Saved')
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? Answer: (Write answer, here) Checkpoint After running the above training cell, your model will be saved by name, trained_rnn, and if you save your notebook progress, you can pause here and come back to this code at another time. You can resume your progress by running the next cell, which will load in our word:id dictionaries and load in your saved model by name!
""" DON'T MODIFY ANYTHING IN THIS CELL """ import torch import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() trained_rnn = helper.load_model('./save/trained_rnn')
Generate TV Script With the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate Text To generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the generate function to do this. It takes a word id to start with, prime_id, and generates a set length of text, predict_len. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ import torch.nn.functional as F def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100): """ Generate text using the neural network :param decoder: The PyTorch Module that holds the trained neural network :param prime_id: The word id to start the first prediction :param int_to_vocab: Dict of word id keys to word values :param token_dict: Dict of puncuation tokens keys to puncuation values :param pad_value: The value used to pad a sequence :param predict_len: The length of text to generate :return: The generated text """ rnn.eval() # create a sequence (batch_size=1) with the prime_id current_seq = np.full((1, sequence_length), pad_value) current_seq[-1][-1] = prime_id predicted = [int_to_vocab[prime_id]] for _ in range(predict_len): if train_on_gpu: current_seq = torch.LongTensor(current_seq).cuda() else: current_seq = torch.LongTensor(current_seq) # initialize the hidden state hidden = rnn.init_hidden(current_seq.size(0)) # get the output of the rnn output, _ = rnn(current_seq, hidden) # get the next word probabilities p = F.softmax(output, dim=1).data if(train_on_gpu): p = p.cpu() # move to cpu # use top_k sampling to get the index of the next word top_k = 5 p, top_i = p.topk(top_k) top_i = top_i.numpy().squeeze() # select the likely next word index with some element of randomness p = p.numpy().squeeze() word_i = np.random.choice(top_i, p=p/p.sum()) # retrieve that word from the dictionary word = int_to_vocab[word_i] predicted.append(word) # the generated word becomes the next "current sequence" and the cycle can continue current_seq = np.roll(current_seq, -1, 1) current_seq[-1][-1] = word_i gen_sentences = ' '.join(predicted) # Replace punctuation tokens for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' gen_sentences = gen_sentences.replace(' ' + token.lower(), key) gen_sentences = gen_sentences.replace('\n ', '\n') gen_sentences = 
gen_sentences.replace('( ', '(') # return all the sentences return gen_sentences
DEEP LEARNING/NLP/LSTM RNN/Next Chars pytorch/project-tv-script-generation/dlnd_tv_script_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
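The top-k sampling step used inside generate can be sketched in isolation with NumPy (a standalone illustration, independent of the trained RNN — the score vector below is made up):

```python
import numpy as np

def top_k_sample(scores, k=5, rng=None):
    """Pick the next word id: keep the k highest-scoring words, renormalize, sample."""
    rng = np.random.default_rng(0) if rng is None else rng
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                        # softmax over the vocabulary
    top_i = np.argsort(probs)[-k:]              # indices of the k most likely words
    top_p = probs[top_i] / probs[top_i].sum()   # renormalize so the k probs sum to 1
    return int(rng.choice(top_i, p=top_p))

scores = np.array([0.1, 2.0, -1.0, 3.0, 0.5, 1.5, 2.5])
word_i = top_k_sample(scores, k=5)
print(word_i)  # always one of the 5 highest-scoring indices
```

The randomness keeps the generated script from looping on the single most likely word while still restricting choices to plausible ones.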
Generate a New Script It's time to generate the text. Set gen_length to the length of TV script you want to generate and set prime_word to one of the following to start the prediction: - "jerry" - "elaine" - "george" - "kramer" You can set the prime word to any word in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
# run the cell multiple times to get different results! gen_length = 400 # modify the length to your preference prime_word = 'jerry' # name for starting the script """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ pad_word = helper.SPECIAL_WORDS['PADDING'] generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length) print(generated_script)
DEEP LEARNING/NLP/LSTM RNN/Next Chars pytorch/project-tv-script-generation/dlnd_tv_script_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Save your favorite scripts Once you have a script that you like (or find interesting), save it to a text file!
# save script to a text file with open("generated_script_1.txt", "w") as f: f.write(generated_script)
DEEP LEARNING/NLP/LSTM RNN/Next Chars pytorch/project-tv-script-generation/dlnd_tv_script_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Aggregation examples First example 1) Aggregated sum of all steps.performance.cpu values. In this case the result is a single line that can be easily stored back in HDFS, also in a textual format.
aggregation1 = df.select("steps.performance.cpu") \ .rdd \ .flatMap(lambda cpuArrayRows: cpuArrayRows[0]) \ .map(lambda row: row.asDict()) \ .flatMap(lambda rowDict: [(k,v) for k,v in rowDict.items()]) \ .reduceByKey(lambda x,y: x+y) %%time aggregation1.collect() # Store the file as a simple text file aggregation1.saveAsTextFile("wmarchive/test-plaintext-aggregation1") %%bash hadoop fs -text wmarchive/test-plaintext-aggregation1/* aggregated1DF = sqlContext.createDataFrame([{v[0]:v[1] for v in aggregation1.collect()}]) # saving in JSON format aggregated1DF.toJSON().saveAsTextFile("wmarchive/test-json-aggregation1") %%bash hadoop fs -text wmarchive/test-json-aggregation1/* # how to write in Avro format aggregated1DF.write.format("com.databricks.spark.avro").save("wmarchive/test-avro-aggregation1") %%bash hadoop fs -text wmarchive/test-avro-aggregation1/*
How to write results into HDFS - example.ipynb
H4ml3t/wmarchive-examples
mit
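The flatMap/reduceByKey pipeline above explodes each per-step CPU dict into (key, value) pairs and then sums values per key. The same arithmetic in plain Python (a local sketch; the metric names below are made up for illustration):

```python
from collections import defaultdict

# Each row stands in for one steps.performance.cpu dict
rows = [
    {"TotalJobCPU": 10.0, "AvgEventCPU": 2.0},
    {"TotalJobCPU": 5.0, "AvgEventCPU": 1.0},
]

# flatMap: explode each dict into (key, value) pairs
pairs = [(k, v) for row in rows for k, v in row.items()]

# reduceByKey: sum the values per key
totals = defaultdict(float)
for k, v in pairs:
    totals[k] += v

print(dict(totals))  # {'TotalJobCPU': 15.0, 'AvgEventCPU': 3.0}
```

The result is one small dict regardless of how many rows went in, which is why it can be stored back in HDFS even as a plain text file.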
Reading CSV files
example_csv = pd.read_csv('../resources/B1_mosquito_data.csv', parse_dates=True, index_col=0) example_csv[0:10] example_csv.corr()
course_content/notebooks/pandas_introduction.ipynb
ajdawson/python_for_climate_scientists
gpl-3.0
Edit the following to point to the model directory in which the trained model that you want to use resides. If you just did a training run, the directory name will have been printed to STDOUT.
# Replace MODEL_DIR with the path to the directory in which your learned model resides. MODEL_DIR = '/tmp/tfmodels/img_classify/your-model-dir'
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GoogleTraining/workshop_sections/transfer_learning/TF_Estimator/transfer_learning_prediction.ipynb
shareactorIO/pipeline
apache-2.0
Now define some helper functions. You'll find these functions in transfer_learning.py also. (In run_bottleneck_on_image, note that we're calling sess.run() to get the value of the 'bottleneck' layer of the Inception graph, with image data fed to the JPEG_DATA_TENSOR_NAME node.)
def create_inception_graph(): """"Creates a graph from saved GraphDef file and returns a Graph object. """ with tf.Session() as sess: model_filename = os.path.join( INCEPTION_MODEL_DIR, 'classify_image_graph_def.pb') with gfile.FastGFile(model_filename, 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) bottleneck_tensor, jpeg_data_tensor, resized_input_tensor = ( tf.import_graph_def(graph_def, name='', return_elements=[ BOTTLENECK_TENSOR_NAME, JPEG_DATA_TENSOR_NAME, RESIZED_INPUT_TENSOR_NAME])) return sess.graph, bottleneck_tensor, jpeg_data_tensor, resized_input_tensor def run_bottleneck_on_image(sess, image_data, image_data_tensor, bottleneck_tensor): """Runs inference on an image to extract the 'bottleneck' summary layer. """ bottleneck_values = sess.run( bottleneck_tensor, {image_data_tensor: image_data}) bottleneck_values = np.squeeze(bottleneck_values) return bottleneck_values
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GoogleTraining/workshop_sections/transfer_learning/TF_Estimator/transfer_learning_prediction.ipynb
shareactorIO/pipeline
apache-2.0
Next, we need to load the file that gives us the class name ordering used for the result vectors during training. (Since this info was generated from reading the photos directory structure, the ordering can potentially change. We need to make sure that doesn't happen, so that we interpret the prediction results consistently.)
# load the labels list, needed to create the model; if it's # not there, we can't proceed output_labels_file = os.path.join(MODEL_DIR, "output_labels.json") if gfile.Exists(output_labels_file): with open(output_labels_file, 'r') as lfile: labels_string = lfile.read() labels_list = json.loads(labels_string) print("labels list: %s" % labels_list) class_count = len(labels_list) else: print("Labels list %s not found; we can't proceed." % output_labels_file)
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GoogleTraining/workshop_sections/transfer_learning/TF_Estimator/transfer_learning_prediction.ipynb
shareactorIO/pipeline
apache-2.0
Define a function to run the image predictions. First, we need to get the 'bottleneck' values, using the graph loaded from the Inception model. Then, we feed that data to our own trained model. classifier is a custom Estimator, and we will use its predict method. (We'll define the Estimator in a few more cells).
def make_image_predictions( classifier, jpeg_data_tensor, bottleneck_tensor, path_list, labels_list): """Use the learned model to make predictions.""" if not labels_list: output_labels_file = os.path.join(MODEL_DIR, LABELS_FILENAME) if gfile.Exists(output_labels_file): with open(output_labels_file, 'r') as lfile: labels_string = lfile.read() labels_list = json.loads(labels_string) print("labels list: %s" % labels_list) else: print("Labels list %s not found" % output_labels_file) return None sess = tf.Session() bottlenecks = [] print("Predicting for images: %s" % path_list) for img_path in path_list: # get bottleneck for an image path. if not gfile.Exists(img_path): tf.logging.fatal('File does not exist %s', img_path) image_data = gfile.FastGFile(img_path, 'rb').read() bottleneck_values = run_bottleneck_on_image(sess, image_data, jpeg_data_tensor, bottleneck_tensor) bottlenecks.append(bottleneck_values) prediction_input = np.array(bottlenecks) predictions = classifier.predict(x=prediction_input, as_iterable=True) print("Predictions:") for _, p in enumerate(predictions): print("---------") for k in p.keys(): print("%s is: %s " % (k, p[k])) if k == "index": print("index label is: %s" % labels_list[p[k]])
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GoogleTraining/workshop_sections/transfer_learning/TF_Estimator/transfer_learning_prediction.ipynb
shareactorIO/pipeline
apache-2.0
Define the Inception-based graph we'll use to generate the 'bottleneck' values. Wait for this to print "Finished" before continuing.
# Set up the pre-trained graph transfer_learning.maybe_download_and_extract(INCEPTION_MODEL_DIR) graph, bottleneck_tensor, jpeg_data_tensor, resized_image_tensor = ( create_inception_graph()) print("Finished.")
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GoogleTraining/workshop_sections/transfer_learning/TF_Estimator/transfer_learning_prediction.ipynb
shareactorIO/pipeline
apache-2.0
Define our custom Estimator. (As the lab exercise, you will write some of the code that does this).
# Define the custom estimator model_fn = transfer_learning.make_model_fn(class_count, 'final_result') model_params = {} classifier = tf.contrib.learn.Estimator( model_fn=model_fn, params=model_params, model_dir=MODEL_DIR) img_list = transfer_learning.get_prediction_images('prediction_images')
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GoogleTraining/workshop_sections/transfer_learning/TF_Estimator/transfer_learning_prediction.ipynb
shareactorIO/pipeline
apache-2.0
For fun, display the images that we're going to predict the classification for.
# If PIL/Pillow is not installed, this step is not important import PIL.Image from IPython.display import display for imgfile in img_list: img = PIL.Image.open(imgfile) display(img)
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GoogleTraining/workshop_sections/transfer_learning/TF_Estimator/transfer_learning_prediction.ipynb
shareactorIO/pipeline
apache-2.0
Run the predict() method of our Estimator to predict the classifications of our list of images.
make_image_predictions( classifier, jpeg_data_tensor, bottleneck_tensor, img_list, labels_list)
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GoogleTraining/workshop_sections/transfer_learning/TF_Estimator/transfer_learning_prediction.ipynb
shareactorIO/pipeline
apache-2.0
The encoding parameter of the open function is used to indicate that the text contains non-ASCII characters, typically accented letters. But pandas handles this kind of noise rather well.
import pandas df = pandas.read_csv("exemple_fichier.txt", encoding="utf8") df
_doc/notebooks/exemples/tables_avec_guillemets.ipynb
sdpython/actuariat_python
mit
We check that the numeric variables are indeed numeric:
df.dtypes
_doc/notebooks/exemples/tables_avec_guillemets.ipynb
sdpython/actuariat_python
mit
We can choose to keep the quotation marks by specifying the quoting parameter:
df2 = pandas.read_csv("exemple_fichier.txt", encoding="utf8" ,quoting=3) df2 df2.dtypes
_doc/notebooks/exemples/tables_avec_guillemets.ipynb
sdpython/actuariat_python
mit
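The value quoting=3 is the integer behind csv.QUOTE_NONE in the standard library: the parser leaves quote characters in the data instead of stripping them. A minimal stdlib check of that behavior:

```python
import csv
import io

text = '"L123";"meuble";"1000";"1"\n'

# With QUOTE_NONE (the constant behind quoting=3), quotes are kept as-is
reader = csv.reader(io.StringIO(text), delimiter=";", quoting=csv.QUOTE_NONE)
row = next(reader)
print(row)  # ['"L123"', '"meuble"', '"1000"', '"1"']
print(csv.QUOTE_NONE)  # 3
```

This also explains why the columns read with quoting=3 come back as strings: the quotes are part of the values, so pandas cannot infer numeric dtypes.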
And if the commas happen to be semicolons, the sep parameter must be specified:
texte = """ "Libellé";"Produit";"Prix";"Quantite" "L123";"meuble";"1000";"1" "L321";"portable";"500";"2" "L333";"lampe";"100";"4" """ with open("exemple_fichier2.txt", "w", encoding="utf8") as f: f.write(texte) df3 = pandas.read_csv("exemple_fichier2.txt", encoding="utf8", sep=";") df3
_doc/notebooks/exemples/tables_avec_guillemets.ipynb
sdpython/actuariat_python
mit
And what if the original file is very large... We may simply want to read the first few lines:
df4 = pandas.read_csv("exemple_fichier2.txt", encoding="utf8", sep=";", nrows=2) df4
_doc/notebooks/exemples/tables_avec_guillemets.ipynb
sdpython/actuariat_python
mit
Or read it chunk by chunk, 2 lines at a time:
reader = pandas.read_csv("exemple_fichier2.txt", encoding="utf8", sep=";", iterator=True, chunksize=2) for i, extrait in enumerate(reader): print("extrait",i) print(extrait)
_doc/notebooks/exemples/tables_avec_guillemets.ipynb
sdpython/actuariat_python
mit
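The same chunk-by-chunk pattern can be written for any iterable of lines with the standard library (a sketch, independent of pandas):

```python
from itertools import islice

def chunks(iterable, size):
    """Yield successive lists of `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        block = list(islice(it, size))
        if not block:
            return
        yield block

lines = ["l1", "l2", "l3", "l4", "l5"]
out = list(chunks(lines, 2))
print(out)  # [['l1', 'l2'], ['l3', 'l4'], ['l5']]
```

Like the pandas reader with chunksize, only one chunk needs to be in memory at a time.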
First let's define some vector fields. This is basically just an MLP, using tanh as the activation function and "ConcatSquash" instead of linear layers. This is the vector field on the right hand side of the ODE.
class Func(eqx.Module): layers: List[eqx.nn.Linear] def __init__(self, *, data_size, width_size, depth, key, **kwargs): super().__init__(**kwargs) keys = jrandom.split(key, depth + 1) layers = [] if depth == 0: layers.append( ConcatSquash(in_size=data_size, out_size=data_size, key=keys[0]) ) else: layers.append( ConcatSquash(in_size=data_size, out_size=width_size, key=keys[0]) ) for i in range(depth - 1): layers.append( ConcatSquash( in_size=width_size, out_size=width_size, key=keys[i + 1] ) ) layers.append( ConcatSquash(in_size=width_size, out_size=data_size, key=keys[-1]) ) self.layers = layers def __call__(self, t, y, args): t = jnp.asarray(t)[None] for layer in self.layers[:-1]: y = layer(t, y) y = jnn.tanh(y) y = self.layers[-1](t, y) return y # Credit: this layer, and some of the default hyperparameters below, are taken from the # FFJORD repo. class ConcatSquash(eqx.Module): lin1: eqx.nn.Linear lin2: eqx.nn.Linear lin3: eqx.nn.Linear def __init__(self, *, in_size, out_size, key, **kwargs): super().__init__(**kwargs) key1, key2, key3 = jrandom.split(key, 3) self.lin1 = eqx.nn.Linear(in_size, out_size, key=key1) self.lin2 = eqx.nn.Linear(1, out_size, key=key2) self.lin3 = eqx.nn.Linear(1, out_size, use_bias=False, key=key3) def __call__(self, t, y): return self.lin1(y) * jnn.sigmoid(self.lin2(t)) + self.lin3(t)
examples/continuous_normalising_flow.ipynb
patrick-kidger/diffrax
apache-2.0
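The ConcatSquash layer computes lin1(y) * sigmoid(lin2(t)) + lin3(t): a linear map of y, gated and shifted by time-dependent terms. The same arithmetic in plain NumPy (arbitrary weights, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
in_size, out_size = 3, 2
W1, b1 = rng.normal(size=(out_size, in_size)), rng.normal(size=out_size)  # lin1
W2, b2 = rng.normal(size=(out_size, 1)), rng.normal(size=out_size)        # lin2
W3 = rng.normal(size=(out_size, 1))                                       # lin3 (no bias)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def concat_squash(t, y):
    t = np.array([t])
    # gate the linear map of y with a sigmoid of t, then add a linear shift in t
    return (W1 @ y + b1) * sigmoid(W2 @ t + b2) + W3 @ t

out = concat_squash(0.3, np.array([1.0, -0.5, 2.0]))
print(out.shape)  # (2,)
```

The multiplicative sigmoid gate lets the vector field vary smoothly with t without concatenating t onto y.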
When training, we need to wrap our vector fields in something that also computes the change in log-density. This can be done either approximately (using Hutchinson's trace estimator) or exactly (the divergence of the vector field; relatively computationally expensive).
def approx_logp_wrapper(t, y, args): y, _ = y *args, eps, func = args fn = lambda y: func(t, y, args) f, vjp_fn = jax.vjp(fn, y) (eps_dfdy,) = vjp_fn(eps) logp = jnp.sum(eps_dfdy * eps) return f, logp def exact_logp_wrapper(t, y, args): y, _ = y *args, _, func = args fn = lambda y: func(t, y, args) f, vjp_fn = jax.vjp(fn, y) (size,) = y.shape # this implementation only works for 1D input eye = jnp.eye(size) (dfdy,) = jax.vmap(vjp_fn)(eye) logp = jnp.trace(dfdy) return f, logp
examples/continuous_normalising_flow.ipynb
patrick-kidger/diffrax
apache-2.0
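Hutchinson's trace estimator replaces the exact divergence with an expectation: tr(A) = E[ε^T A ε] for probe vectors with E[εε^T] = I. A quick NumPy check that the estimate converges (a standalone sketch, unrelated to the JAX code above):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))

# Average eps^T A eps over many Gaussian probe vectors
n_probes = 20000
eps = rng.normal(size=(n_probes, 5))
estimates = np.einsum("ni,ij,nj->n", eps, A, eps)
est = estimates.mean()

print(np.trace(A), est)  # the estimate approaches the exact trace
```

In the CNF above, one probe per sample is enough because the noise averages out over the batch and over training steps.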
Wrap up the differential equation solve into a model.
def normal_log_likelihood(y): return -0.5 * (y.size * math.log(2 * math.pi) + jnp.sum(y**2)) class CNF(eqx.Module): funcs: List[Func] data_size: int exact_logp: bool t0: float t1: float dt0: float def __init__( self, *, data_size, exact_logp, num_blocks, width_size, depth, key, **kwargs, ): super().__init__(**kwargs) keys = jrandom.split(key, num_blocks) self.funcs = [ Func( data_size=data_size, width_size=width_size, depth=depth, key=k, ) for k in keys ] self.data_size = data_size self.exact_logp = exact_logp self.t0 = 0.0 self.t1 = 0.5 self.dt0 = 0.05 # Runs backward-in-time to train the CNF. def train(self, y, *, key): if self.exact_logp: term = diffrax.ODETerm(exact_logp_wrapper) else: term = diffrax.ODETerm(approx_logp_wrapper) solver = diffrax.Tsit5() eps = jrandom.normal(key, y.shape) delta_log_likelihood = 0.0 for func in reversed(self.funcs): y = (y, delta_log_likelihood) sol = diffrax.diffeqsolve( term, solver, self.t1, self.t0, -self.dt0, y, (eps, func) ) (y,), (delta_log_likelihood,) = sol.ys return delta_log_likelihood + normal_log_likelihood(y) # Runs forward-in-time to draw samples from the CNF. def sample(self, *, key): y = jrandom.normal(key, (self.data_size,)) for func in self.funcs: term = diffrax.ODETerm(func) solver = diffrax.Tsit5() sol = diffrax.diffeqsolve(term, solver, self.t0, self.t1, self.dt0, y) (y,) = sol.ys return y # To make illustrations, we have a variant sample method we can query to see the # evolution of the samples during the forward solve. 
def sample_flow(self, *, key): t_so_far = self.t0 t_end = self.t0 + (self.t1 - self.t0) * len(self.funcs) save_times = jnp.linspace(self.t0, t_end, 6) y = jrandom.normal(key, (self.data_size,)) out = [] for i, func in enumerate(self.funcs): if i == len(self.funcs) - 1: save_ts = save_times[t_so_far <= save_times] - t_so_far else: save_ts = ( save_times[ (t_so_far <= save_times) & (save_times < t_so_far + self.t1 - self.t0) ] - t_so_far ) t_so_far = t_so_far + self.t1 - self.t0 term = diffrax.ODETerm(func) solver = diffrax.Tsit5() saveat = diffrax.SaveAt(ts=save_ts) sol = diffrax.diffeqsolve( term, solver, self.t0, self.t1, self.dt0, y, saveat=saveat ) out.append(sol.ys) y = sol.ys[-1] out = jnp.concatenate(out) assert len(out) == 6 # number of points we saved at return out
examples/continuous_normalising_flow.ipynb
patrick-kidger/diffrax
apache-2.0
Alright, that's the models done. Now let's get some data. First we have a function for taking the specified input image, and turning it into data.
def get_data(path): # integer array of shape (height, width, channels) with values in {0, ..., 255} img = jnp.asarray(imageio.imread(path)) if img.shape[-1] == 4: img = img[..., :-1] # ignore alpha channel height, width, channels = img.shape assert channels == 3 # Convert to greyscale for simplicity. img = img @ jnp.array([0.2989, 0.5870, 0.1140]) img = jnp.transpose(img)[:, ::-1] # (width, height) x = jnp.arange(width, dtype=jnp.float32) y = jnp.arange(height, dtype=jnp.float32) x, y = jnp.broadcast_arrays(x[:, None], y[None, :]) weights = 1 - img.reshape(-1).astype(jnp.float32) / jnp.max(img) dataset = jnp.stack( [x.reshape(-1), y.reshape(-1)], axis=-1 ) # shape (dataset_size, 2) # For efficiency we don't bother with the particles that will have weight zero. cond = img.reshape(-1) < 254 dataset = dataset[cond] weights = weights[cond] mean = jnp.mean(dataset, axis=0) std = jnp.std(dataset, axis=0) + 1e-6 dataset = (dataset - mean) / std return dataset, weights, mean, std, img, width, height
examples/continuous_normalising_flow.ipynb
patrick-kidger/diffrax
apache-2.0
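The img @ [0.2989, 0.5870, 0.1140] step is a per-pixel weighted sum of the RGB channels (the classic ITU-R 601 luma weights). A minimal NumPy check of the shape and arithmetic:

```python
import numpy as np

# A tiny 2x2 RGB image: red, green, blue, and white pixels
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)

grey = img @ np.array([0.2989, 0.5870, 0.1140])
print(grey.shape)  # (2, 2): one luminance value per pixel
```

The matrix product contracts the trailing channel axis, so an (H, W, 3) image becomes an (H, W) greyscale array.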
Now to load the data during training, we need a dataloader. In this case our dataset is small enough to fit in-memory, so we use a dataloader implementation that we can include within our overall JIT wrapper, for speed.
class DataLoader(eqx.Module): arrays: Tuple[jnp.ndarray] batch_size: int key: jrandom.PRNGKey def __post_init__(self): dataset_size = self.arrays[0].shape[0] assert all(array.shape[0] == dataset_size for array in self.arrays) def __call__(self, step): dataset_size = self.arrays[0].shape[0] num_batches = dataset_size // self.batch_size epoch = step // num_batches key = jrandom.fold_in(self.key, epoch) perm = jrandom.permutation(key, jnp.arange(dataset_size)) start = step * self.batch_size slice_size = self.batch_size batch_indices = lax.dynamic_slice_in_dim(perm, start, slice_size) return tuple(array[batch_indices] for array in self.arrays)
examples/continuous_normalising_flow.ipynb
patrick-kidger/diffrax
apache-2.0
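The epoch logic above reseeds the permutation once per epoch (via fold_in) and slices consecutive batches out of it. A plain-NumPy sketch of the same idea (the index arithmetic is simplified with a modulo; illustrative only):

```python
import numpy as np

def get_batch(data, batch_size, step, seed=0):
    n = len(data)
    num_batches = n // batch_size
    epoch = step // num_batches
    rng = np.random.default_rng(seed + epoch)  # new shuffle each epoch
    perm = rng.permutation(n)
    start = (step % num_batches) * batch_size
    return data[perm[start:start + batch_size]]

data = np.arange(10)
b0 = get_batch(data, 2, 0)
b1 = get_batch(data, 2, 1)
print(b0, b1)  # two disjoint batches drawn from the same epoch's permutation
```

Because the permutation depends only on the epoch, batches within an epoch never overlap, yet every epoch sees the data in a fresh order.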
Bring everything together. This function is our entry point.
def main( in_path, out_path=None, batch_size=500, virtual_batches=2, lr=1e-3, weight_decay=1e-5, steps=10000, exact_logp=True, num_blocks=2, width_size=64, depth=3, print_every=100, seed=5678, ): if out_path is None: out_path = here / pathlib.Path(in_path).name else: out_path = pathlib.Path(out_path) key = jrandom.PRNGKey(seed) model_key, loader_key, loss_key, sample_key = jrandom.split(key, 4) dataset, weights, mean, std, img, width, height = get_data(in_path) dataset_size, data_size = dataset.shape dataloader = DataLoader((dataset, weights), batch_size, key=loader_key) model = CNF( data_size=data_size, exact_logp=exact_logp, num_blocks=num_blocks, width_size=width_size, depth=depth, key=model_key, ) optim = optax.adamw(lr, weight_decay=weight_decay) opt_state = optim.init(eqx.filter(model, eqx.is_inexact_array)) @eqx.filter_value_and_grad def loss(model, data, weight, loss_key): batch_size, _ = data.shape noise_key, train_key = jrandom.split(loss_key, 2) train_key = jrandom.split(key, batch_size) data = data + jrandom.normal(noise_key, data.shape) * 0.5 / std log_likelihood = jax.vmap(model.train)(data, key=train_key) return -jnp.mean(weight * log_likelihood) # minimise negative log-likelihood @eqx.filter_jit def make_step(model, opt_state, step, loss_key): # We only need gradients with respect to floating point JAX arrays, not any # other part of our model. (e.g. the `exact_logp` flag. What would it even mean # to differentiate that? Note that `eqx.filter_value_and_grad` does the same # filtering by `eqx.is_inexact_array` by default.) value = 0 grads = jax.tree_map( lambda leaf: jnp.zeros_like(leaf) if eqx.is_inexact_array(leaf) else None, model, ) # Get more accurate gradients by accumulating gradients over multiple batches. # (Or equivalently, get lower memory requirements by splitting up a batch over # multiple steps.) 
def make_virtual_step(_, state): value, grads, step, loss_key = state data, weight = dataloader(step) value_, grads_ = loss(model, data, weight, loss_key) value = value + value_ grads = jax.tree_map(lambda a, b: a + b, grads, grads_) step = step + 1 loss_key = jrandom.split(loss_key, 1)[0] return value, grads, step, loss_key value, grads, step, loss_key = lax.fori_loop( 0, virtual_batches, make_virtual_step, (value, grads, step, loss_key) ) value = value / virtual_batches grads = jax.tree_map(lambda a: a / virtual_batches, grads) updates, opt_state = optim.update(grads, opt_state, model) model = eqx.apply_updates(model, updates) return value, model, opt_state, step, loss_key step = 0 while step < steps: start = time.time() value, model, opt_state, step, loss_key = make_step( model, opt_state, step, loss_key ) end = time.time() if (step % print_every) == 0 or step == steps - 1: print(f"Step: {step}, Loss: {value}, Computation time: {end - start}") num_samples = 5000 sample_key = jrandom.split(sample_key, num_samples) samples = jax.vmap(model.sample)(key=sample_key) sample_flows = jax.vmap(model.sample_flow, out_axes=-1)(key=sample_key) fig, (*axs, ax, axtrue) = plt.subplots( 1, 2 + len(sample_flows), figsize=((2 + len(sample_flows)) * 10 * height / width, 10), ) samples = samples * std + mean x = samples[:, 0] y = samples[:, 1] ax.scatter(x, y, c="black", s=2) ax.set_xlim(-0.5, width - 0.5) ax.set_ylim(-0.5, height - 0.5) ax.set_aspect(height / width) ax.set_xticks([]) ax.set_yticks([]) axtrue.imshow(img.T, origin="lower", cmap="gray") axtrue.set_aspect(height / width) axtrue.set_xticks([]) axtrue.set_yticks([]) x_resolution = 100 y_resolution = int(x_resolution * (height / width)) sample_flows = sample_flows * std[:, None] + mean[:, None] x_pos, y_pos = jnp.broadcast_arrays( jnp.linspace(-1, width + 1, x_resolution)[:, None], jnp.linspace(-1, height + 1, y_resolution)[None, :], ) positions = jnp.stack([jnp.ravel(x_pos), jnp.ravel(y_pos)]) densities = 
[stats.gaussian_kde(samples)(positions) for samples in sample_flows] for i, (ax, density) in enumerate(zip(axs, densities)): density = jnp.reshape(density, (x_resolution, y_resolution)) ax.imshow(density.T, origin="lower", cmap="plasma") ax.set_aspect(height / width) ax.set_xticks([]) ax.set_yticks([]) plt.savefig(out_path) plt.show()
examples/continuous_normalising_flow.ipynb
patrick-kidger/diffrax
apache-2.0
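The virtual-batch loop in make_step relies on a simple identity: summing per-batch gradients and dividing by the number of batches equals the gradient of the mean loss over the full batch. A scalar NumPy check:

```python
import numpy as np

# loss_i(w) = (w - t_i)^2, so grad_i = 2 * (w - t_i)
targets = np.array([1.0, 3.0, 5.0, 7.0])
w = 0.5

# full-batch gradient of the mean loss
full_grad = np.mean(2 * (w - targets))

# accumulate over 2 virtual batches of 2, then average
g = 0.0
for batch in (targets[:2], targets[2:]):
    g += np.mean(2 * (w - batch))
g /= 2

print(full_grad, g)  # identical
```

This is why accumulating gradients trades memory for steps without changing the optimization trajectory (up to batch-norm-style statistics, which this model does not use).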
And now the following commands will reproduce the images displayed at the start. python main(in_path="../imgs/cat.png") main(in_path="../imgs/butterfly.png", num_blocks=3) main(in_path="../imgs/target.png", width_size=128) Let's run the first one of those as a demonstration.
main(in_path="../imgs/cat.png")
examples/continuous_normalising_flow.ipynb
patrick-kidger/diffrax
apache-2.0
This collection of rates has the main CNO rates plus a breakout rate into the hot CNO cycle
files = ["p-p-d-ec", "d-pg-he3-de04", "he3-he3pp-he4-nacr", "c12-pg-n13-ls09", "c13-pg-n14-nacr", "n13--c13-wc12", "n13-pg-o14-lg06", "n14-pg-o15-im05", "n15-pa-c12-nacr", "o14--n14-wc12", "o15--n15-wc12", "o14-ap-f17-Ha96c", "f17-pg-ne18-cb09", "ne18--f18-wc12", "f18-pa-o15-il10"] rc = pyrl.RateCollection(files)
examples/pp-CNO-example.ipynb
pyreaclib/pyreaclib
bsd-3-clause
Interactive exploration is enabled through the Explorer class, which takes a RateCollection and a Composition
re = pyrl.Explorer(rc, comp, size=(1000,1000), ydot_cutoff_value=1.e-20, always_show_alpha=True) re.explore()
examples/pp-CNO-example.ipynb
pyreaclib/pyreaclib
bsd-3-clause
(1b) Plural Let's create a function that turns a word into its plural by adding an 's' to the end of the string. Then we will use the map() function to apply the transformation to each word in the RDD. In Python (and many other languages) string concatenation is expensive. A better alternative is to build a new string using str.format(). Note: the string between the sets of triple quotes is the function's documentation (docstring). This documentation is displayed by the help() command. We will follow the documentation standard suggested for Python, and we will keep these docstrings in English.
# EXERCICIO def Plural(palavra): """Adds an 's' to `palavra`. Args: palavra (str): A string. Returns: str: A string with 's' added to it. """ return <COMPLETAR> print(Plural('gato')) help(Plural) assert Plural('rato')=='ratos', 'resultado incorreto!' print ('OK')
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
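One possible completion of the function (a sketch only — the lab expects you to fill in <COMPLETAR> yourself; str.format is used as the text suggests):

```python
def Plural(palavra):
    """Adds an 's' to `palavra`."""
    return '{0}s'.format(palavra)  # builds a new string instead of concatenating

print(Plural('gato'))  # gatos
```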
(1c) Applying the function to the RDD Turn each word in our RDD into its plural using map(). Then we will use the collect() command, which returns the RDD as a Python list.
# EXERCICIO pluralRDD = palavrasRDD.<COMPLETAR> print (pluralRDD.collect()) assert pluralRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!' print ('OK')
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
Note: use the collect() command only when you are sure the list will fit in memory. To write results back to a text file or a database we will use another command. (1d) Using a lambda function Repeat the creation of an RDD of plurals, this time using a lambda function.
# EXERCICIO pluralLambdaRDD = palavrasRDD.<COMPLETAR> print (pluralLambdaRDD.collect()) assert pluralLambdaRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!' print ('OK')
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
(1e) Length of each word Now use map() and a lambda function to return the number of characters in each word. Use collect() to store the result as a list in the target variable.
# EXERCICIO pluralTamanho = (pluralRDD <COMPLETAR> .collect() ) print (pluralTamanho) assert pluralTamanho==[5,9,5,5,5], 'valores incorretos' print ("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
(1f) Pair RDDs and tuples To count the frequency of each word in a distributed way, we first assign a value to each word in the RDD. This produces a (key, value) dataset. We can then group the dataset by key and compute the sum of the assigned values. In our case, we assign the value 1 to each word. An RDD containing the key-value tuple structure (k,v) is called a tuple RDD or pair RDD. Let's create our pair RDD using the map() transformation with a lambda() function.
# EXERCICIO palavraPar = palavrasRDD.<COMPLETAR> print (palavraPar.collect()) assert palavraPar.collect() == [('gato',1),('elefante',1),('rato',1),('rato',1),('gato',1)], 'valores incorretos!' print ("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
(2b) Computing the counts After groupByKey() our RDD contains elements composed of the word, as key, and an iterator containing all the values corresponding to that key. Using the map() transformation and the sum() function, build a new RDD consisting of (key, sum) tuples.
# EXERCICIO contagemGroup = palavrasGrupo.<COMPLETAR> print (contagemGroup.collect()) assert sorted(contagemGroup.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!' print ("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
(2c) reduceByKey A more interesting command for counting is reduceByKey(), which creates a new RDD of tuples. This transformation applies the reduce() transformation seen in the previous lecture to the values of each key. This way, the reducing function can be applied within each local partition before the data is sent for repartitioning, reducing the total amount of data being moved and avoiding keeping large lists in memory.
# EXERCICIO contagem = palavraPar.<COMPLETAR> print( contagem.collect()) assert sorted(contagem.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!' print ("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
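The map-side combining that makes reduceByKey cheaper than groupByKey can be mimicked locally: combine within each "partition" first, then merge only the small partial results (a plain-Python sketch, not Spark):

```python
from collections import Counter

# two fake partitions of (word, 1) pairs
partitions = [
    [("gato", 1), ("rato", 1)],
    [("rato", 1), ("gato", 1), ("elefante", 1)],
]

# combine locally inside each partition (the "map-side combine")
local = [Counter() for _ in partitions]
for part, c in zip(partitions, local):
    for k, v in part:
        c[k] += v

# then merge the partial results (a shuffle would move only these small dicts)
merged = Counter()
for c in local:
    merged.update(c)

print(sorted(merged.items()))  # [('elefante', 1), ('gato', 2), ('rato', 2)]
```

Only one partial count per word per partition crosses the network, instead of every (word, 1) pair.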
(2d) Chaining the commands The most usual way to perform this task, starting from our palavrasRDD, is to chain the map and reduceByKey commands in a single line.
# EXERCICIO contagemFinal = (palavrasRDD <COMPLETAR> <COMPLETAR> ) print (contagemFinal.collect()) assert sorted(contagemFinal.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!' print ("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
Part 3: Finding the unique words and computing the average count (3a) Unique words Compute the number of unique words in the RDD. Use the same pipeline as before, but replace .collect() with another method of the RDD API.
# EXERCICIO palavrasUnicas = (contagemFinal <COMPLETAR> ) print (palavrasUnicas) assert palavrasUnicas==3, 'valor incorreto!' print ("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
(3b) Computing the average word count Find the average frequency of the words using the contagem RDD. Note that the function passed to reduce() is applied to each tuple of the RDD. To sum the counts, the RDD must first be mapped to an RDD containing only the frequency values (without the keys).
# EXERCICIO # add é equivalente a lambda x,y: x+y from operator import add total = (contagemFinal <COMPLETAR> <COMPLETAR> ) media = total / float(palavrasUnicas) print (total) print (round(media, 2)) assert round(media, 2)==1.67, 'valores incorretos!' print ("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
Part 4: Applying our algorithm to a file (4a) The contaPalavras function To be able to apply our algorithm generically to several RDDs, let's first create a function that applies it to any data source. This function takes as input an RDD containing a list of keys (words) and returns an RDD of tuples with the keys and their counts in that RDD.
# EXERCICIO def contaPalavras(chavesRDD): """Creates a pair RDD with word counts from an RDD of words. Args: chavesRDD (RDD of str): An RDD consisting of words. Returns: RDD of (str, int): An RDD consisting of (word, count) tuples. """ return (chavesRDD <COMPLETAR> <COMPLETAR> ) print (contaPalavras(palavrasRDD).collect()) assert sorted(contaPalavras(palavrasRDD).collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!' print ("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
(4d) Extracting the words Before we can use our contaPalavras() function, we still have some work to do on our RDD: we need to produce lists of words instead of lists of sentences, and eliminate empty lines. Python strings have a split() method that splits a string by a separator. In our case, we want to split the strings by spaces. Use the flatMap() function to generate a new RDD as a list of words.
# EXERCICIO shakesPalavrasRDD = shakesRDD.<COMPLETAR> total = shakesPalavrasRDD.count() print (shakesPalavrasRDD.take(5)) print (total)
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
Note that flatMap has already eliminated the empty entries. (4f) Word count Now that our RDD contains a list of words, we can apply our contaPalavras() function. Apply the function to our RDD and use the takeOrdered function to print the 15 most frequent words. takeOrdered() can take a second parameter that tells Spark how to order the elements, e.g. takeOrdered(15, key=lambda x: -x): descending order of the values of x.
# EXERCICIO top15 = <COMPLETAR> print ('\n'.join(map(lambda x: f'{x[0]}: {x[1]}', top15))) assert top15 == [('the', 29996), ('and', 28353), ('i', 21860), ('to', 20885), ('of', 18811), ('a', 15992), ('you', 14439), ('my', 13191), ('in', 12027), ('that', 11782), ('is', 9711), ('not', 9068), ('with', 8521), ('me', 8271), ('for', 8184)],'valores incorretos!' print ("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
Part 5: Similarity between objects In this part of the lab we will learn how to compute distances between numerical, categorical and textual attributes. (5a) Vectors in Euclidean space When our objects are represented in Euclidean space, we measure the similarity between them through the p-norm, defined by: $$d(x,y,p) = (\sum_{i=1}^{n}{|x_i - y_i|^p})^{1/p}$$ The most commonly used norms are $p=1,2,\infty$, which reduce to the absolute distance, the Euclidean distance and the maximum distance: $$d(x,y,1) = \sum_{i=1}^{n}{|x_i - y_i|}$$ $$d(x,y,2) = (\sum_{i=1}^{n}{|x_i - y_i|^2})^{1/2}$$ $$d(x,y,\infty) = \max(|x_1 - y_1|,|x_2 - y_2|, ..., |x_n - y_n|)$$
import numpy as np

# Create a function pNorm that receives p and returns a function computing the p-norm
def pNorm(p):
    """Generates a function to calculate the p-Norm between two points.

    Args:
        p (int): The integer p.

    Returns:
        Dist: A function that calculates the p-Norm.
    """
    def Dist(x, y):
        return np.power(np.power(np.abs(x - y), p).sum(), 1 / float(p))
    return Dist

# Create an RDD with numerical values
np.random.seed(42)
numPointsRDD = sc.parallelize(enumerate(np.random.random(size=(10, 100))))

# EXERCISE
# Look for a PySpark command that computes the Cartesian product of the dataset with itself
cartPointsRDD = numPointsRDD.<COMPLETAR>

# Apply a map to transform our RDD into an RDD of tuples ((id1, id2), (vector1, vector2))
# HINT: first use take(1) and print the result to check the current format of the RDD
cartPointsParesRDD = cartPointsRDD.<COMPLETAR>

# Apply a map to compute the Euclidean distance between the pairs
Euclid = pNorm(2)
distRDD = cartPointsParesRDD.<COMPLETAR>

# Find the maximum, minimum and mean distances, applying a map (key, value) --> value
# and using PySpark's built-in commands to compute min, max and mean
statRDD = distRDD.<COMPLETAR>

minv, maxv, meanv = statRDD.<COMPLETAR>, statRDD.<COMPLETAR>, statRDD.<COMPLETAR>
print(minv, maxv, meanv)

assert (minv.round(2), maxv.round(2), meanv.round(2)) == (0.0, 4.71, 3.75), 'incorrect values'
print("OK")
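The `pNorm` factory above can be sanity-checked against `numpy.linalg.norm`, which computes the same quantity through its `ord` parameter. The vectors here are arbitrary, chosen only for illustration:

```python
import numpy as np

def pNorm(p):
    # same closure as in the exercise: returns a function computing the p-norm of x - y
    def Dist(x, y):
        return np.power(np.power(np.abs(x - y), p).sum(), 1 / float(p))
    return Dist

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

# |x - y| = [3, 2, 0], so p = 1 gives the Manhattan distance 3 + 2 + 0 = 5
for p in (1, 2, 3):
    assert np.isclose(pNorm(p)(x, y), np.linalg.norm(x - y, ord=p))

print(pNorm(1)(x, y))  # 5.0
```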
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
(5b) Categorical values When our objects are represented by categorical attributes, they have no spatial similarity. To compute the similarity between them, we can first transform the attribute vector into a binary vector indicating, for each possible value of each attribute, whether the object has that value or not. With the binary vector we can use the Hamming distance, defined by: $$ H(x,y) = \sum_{i=1}^{n}{[x_i \neq y_i]} $$ It is also possible to define the Jaccard distance as: $$ J(x,y) = \frac{\sum_{i=1}^{n}{[x_i = y_i]}}{\sum_{i=1}^{n}{\max(x_i, y_i)}} $$
# Create a function to compute the Hamming distance
def Hamming(x, y):
    """Calculates the Hamming distance between two binary vectors.

    Args:
        x, y (np.array): Array of binary integers x and y.

    Returns:
        H (int): The Hamming distance between x and y.
    """
    return (x != y).sum()

# Create a function to compute the Jaccard distance
def Jaccard(x, y):
    """Calculates the Jaccard distance between two binary vectors.

    Args:
        x, y (np.array): Array of binary integers x and y.

    Returns:
        J (float): The Jaccard distance between x and y.
    """
    return (x == y).sum() / float(np.maximum(x, y).sum())

# Create an RDD with categorical values
catPointsRDD = sc.parallelize(enumerate([['alto', 'caro', 'azul'],
                                         ['medio', 'caro', 'verde'],
                                         ['alto', 'barato', 'azul'],
                                         ['medio', 'caro', 'vermelho'],
                                         ['baixo', 'barato', 'verde'],
                                         ]))

# EXERCISE
# Create an RDD of unique keys using flatMap
chavesRDD = (catPointsRDD
             .<COMPLETAR>
             .<COMPLETAR>
             .<COMPLETAR>
             )

chaves = dict((v, k) for k, v in chavesRDD.collect())
nchaves = len(chaves)
print(chaves, nchaves)

assert chaves == {'alto': 2, 'caro': 0, 'baixo': 4, 'verde': 1, 'azul': 2, 'medio': 3, 'barato': 4, 'vermelho': 3}, 'incorrect values!'
print("OK")

assert nchaves == 8, 'incorrect number of keys'
print("OK")

def CreateNP(atributos, chaves):
    """Binarize the categorical vector using a dictionary of keys.

    Args:
        atributos (list): List of attributes of a given object.
        chaves (dict): dictionary with the relation attribute -> index

    Returns:
        array (np.array): Binary array of attributes.
    """
    array = np.zeros(len(chaves))
    for atr in atributos:
        array[chaves[atr]] = 1
    return array

# Convert the RDD to the binary format, using the dict chaves
binRDD = catPointsRDD.map(lambda rec: (rec[0], CreateNP(rec[1], chaves)))
binRDD.collect()

# EXERCISE
# Look for a PySpark command that computes the Cartesian product of the dataset with itself
cartBinRDD = binRDD.<COMPLETAR>

# Apply a map to transform our RDD into an RDD of tuples ((id1, id2), (vector1, vector2))
# HINT: first use take(1) and print the result to check the current format of the RDD
cartBinParesRDD = cartBinRDD.<COMPLETAR>

# Apply a map to compute the Hamming and Jaccard distances between the pairs
hamRDD = cartBinParesRDD.<COMPLETAR>
jacRDD = cartBinParesRDD.<COMPLETAR>

# Find the maximum, minimum and mean distances, applying a map (key, value) --> value
# and using PySpark's built-in commands to compute min, max and mean
statHRDD = hamRDD.<COMPLETAR>
statJRDD = jacRDD.<COMPLETAR>

Hmin, Hmax, Hmean = statHRDD.<COMPLETAR>, statHRDD.<COMPLETAR>, statHRDD.<COMPLETAR>
Jmin, Jmax, Jmean = statJRDD.<COMPLETAR>, statJRDD.<COMPLETAR>, statJRDD.<COMPLETAR>

print("\t\tMin\tMax\tMean")
print("Hamming:\t{:.2f}\t{:.2f}\t{:.2f}".format(Hmin, Hmax, Hmean))
print("Jaccard:\t{:.2f}\t{:.2f}\t{:.2f}".format(Jmin, Jmax, Jmean))

assert (Hmin.round(2), Hmax.round(2), Hmean.round(2)) == (0.00, 5.00, 2.40), 'incorrect values'
print("OK")

assert (Jmin.round(2), Jmax.round(2), Jmean.round(2)) == (0.60, 4.00, 1.90), 'incorrect values'
print("OK")
Spark/Lab01.ipynb
folivetti/BIGDATA
mit
Data The following code downloads the required data, salaries2010.zip. We reuse the code that loads the data (it must be run before each part).
import os
if not os.path.exists("salaries2010.db3"):
    import pyensae.datasource
    db3 = pyensae.datasource.download_data("salaries2010.zip")

import sqlite3, pandas
con = sqlite3.connect("salaries2010.db3")
df = pandas.io.sql.read_sql("select * from varmod", con)
con.close()

values = df[df.VARIABLE == "TRNETTOT"].copy()

def process_intervalle(s):
    if "euros et plus" in s:
        return float(s.replace("euros et plus", "").replace(" ", ""))
    spl = s.split("à")
    if len(spl) == 2:
        s1 = spl[0].replace("Moins de", "").replace("euros", "").replace(" ", "")
        s2 = spl[1].replace("Moins de", "").replace("euros", "").replace(" ", "")
        return (float(s1) + float(s2)) / 2
    else:
        s = spl[0].replace("Moins de", "").replace("euros", "").replace(" ", "")
        return float(s) / 2

values["montant"] = values.apply(lambda r: process_intervalle(r["MODLIBELLE"]), axis=1)

con = sqlite3.connect("salaries2010.db3")
data = pandas.io.sql.read_sql("select TRNETTOT,AGE,SEXE,DEPT,DEPR,TYP_EMPLOI,PCS,CS,CONT_TRAV,CONV_COLL from salaries", con)
con.close()

salaires = data.merge(values, left_on="TRNETTOT", right_on="MODALITE")
salaires.dropna(inplace=True)
salaires.head()
salaires.shape
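The three label shapes handled by `process_intervalle` ("X à Y euros", "Moins de X euros", "X euros et plus") can be illustrated on a few made-up examples; the function is copied here so the sketch is self-contained:

```python
def process_intervalle(s):
    # open upper interval: keep the lower bound
    if "euros et plus" in s:
        return float(s.replace("euros et plus", "").replace(" ", ""))
    spl = s.split("à")
    if len(spl) == 2:
        # closed interval: return the midpoint
        s1 = spl[0].replace("Moins de", "").replace("euros", "").replace(" ", "")
        s2 = spl[1].replace("Moins de", "").replace("euros", "").replace(" ", "")
        return (float(s1) + float(s2)) / 2
    # "Moins de X": midpoint of [0, X]
    s = spl[0].replace("Moins de", "").replace("euros", "").replace(" ", "")
    return float(s) / 2

print(process_intervalle("1000 à 1500 euros"))    # 1250.0 (midpoint)
print(process_intervalle("Moins de 500 euros"))   # 250.0
print(process_intervalle("50000 euros et plus"))  # 50000.0
```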
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
Exercise 1: training set, test set, curves We only consider a subset of the database so as not to spend the whole session waiting for results:
import random
salaires["rnd"] = salaires.apply(lambda r: random.randint(0, 50), axis=1)
ech = salaires[salaires.rnd == 0]
X, Y = ech[["AGE", "SEXE", "TYP_EMPLOI", "CONT_TRAV", "CS"]], ech[["montant"]]
Xd = X.T.to_dict().values()
# binarize the categorical variables; this defines the Xt and prep used in the next cells
from sklearn.feature_extraction import DictVectorizer
prep = DictVectorizer()
Xt = prep.fit_transform(Xd).toarray()
X.shape
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
We split the dataset into a training set and a test set:
from sklearn.model_selection import train_test_split
a_train, a_test, b_train, b_test = train_test_split(Xt, Y, test_size=0.33)
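What `train_test_split` does can be imitated with a random permutation of the row indices; this pure-numpy sketch (toy size, fixed seed for reproducibility) holds out roughly a third of the rows, as `test_size=0.33` does:

```python
import numpy as np

rng = np.random.RandomState(0)
n = 10
idx = rng.permutation(n)

# hold out roughly a third of the indices for the test set
n_test = int(round(n * 0.33))
test_idx, train_idx = idx[:n_test], idx[n_test:]

print(sorted(test_idx))
# the two index sets partition the rows: disjoint, and together they cover everything
assert set(test_idx).isdisjoint(train_idx)
assert len(test_idx) + len(train_idx) == n
```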
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
Experiment 1 We vary one parameter and observe the error on the training and test sets.
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt

curves = []
for max_depth in range(1, 10):
    clf = DecisionTreeRegressor(min_samples_leaf=10, max_depth=max_depth)
    clf = clf.fit(a_train, b_train)
    erra = mean_squared_error(clf.predict(a_train), b_train) ** 0.5
    errb = mean_squared_error(clf.predict(a_test), b_test) ** 0.5
    print("max_depth", max_depth, "error", erra, errb)
    curves.append((max_depth, erra, errb, clf))

plt.plot([c[0] for c in curves], [c[1] for c in curves], label="train")
plt.plot([c[0] for c in curves], [c[2] for c in curves], label="test")
plt.legend()
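The train/test divergence this cell looks for can be reproduced in a fully self-contained way (no database, no scikit-learn) with polynomial regression on synthetic data: as the model gets more flexible, the training error can only decrease, while the test error eventually stops improving. The data and degrees below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.RandomState(0)
x_train = np.linspace(0, 1, 30)
x_test = np.linspace(0.01, 0.99, 30)
f = lambda x: np.sin(2 * np.pi * x)
y_train = f(x_train) + rng.normal(scale=0.3, size=x_train.shape)
y_test = f(x_test) + rng.normal(scale=0.3, size=x_test.shape)

train_err, test_err = {}, {}
for degree in (1, 3, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    train_err[degree] = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_err[degree] = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(degree, train_err[degree], test_err[degree])

# nested least-squares models: training error is non-increasing in the degree
assert train_err[9] <= train_err[3] <= train_err[1]
```

The same reasoning explains why, for the decision tree above, one should pick the depth where the *test* error flattens rather than where the training error is smallest.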
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
The error on the test set decreases slightly until the tree reaches a depth of 3 or 4. That is the tree size one should choose, and we draw the tree, replacing the variable names X[i] with more intelligible names.
import sys
if sys.platform.startswith("win"):
    exe = 'C:\\Program Files (x86)\\Graphviz2.38\\bin\\dot.exe'
    if not os.path.exists(exe):
        raise FileNotFoundError(exe)
    exe = '"{0}"'.format(exe)
else:
    exe = "dot"

clf = curves[2][-1]

from sklearn.tree import export_graphviz
export_graphviz(clf, out_file="arbrec.dot")

# replace X[i] with the variable names
with open("arbrec.dot", "r") as f:
    text = f.read()
for i in range(len(prep.feature_names_)):
    text = text.replace("X[{0}]".format(i), prep.feature_names_[i])
with open("arbrec.dot", "w") as f:
    f.write(text)

cwd = os.getcwd()
cmd = '{0} -Tpng "{1}" -o "{2}"'.format(exe, os.path.join(cwd, "arbrec.dot"), os.path.join(cwd, "arbrec.png"))
os.system(cmd)

from IPython.core.display import Image
Image("arbrec.png")
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
Javascript version:
from jyquickhelper import RenderJsVis
dot = export_graphviz(clf, out_file=None, feature_names=prep.feature_names_)
RenderJsVis(dot=dot, height="400px", layout='hierarchical')
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
Experiment 2 We replace the decision tree with random forests.
import random
salaires["rnd"] = salaires.apply(lambda r: random.randint(0, 50), axis=1)
ech = salaires[salaires.rnd == 0]
X, Y = ech[["AGE", "SEXE", "TYP_EMPLOI", "CONT_TRAV", "CS"]], ech[["montant"]]
Xd = X.T.to_dict().values()

from sklearn.feature_extraction import DictVectorizer
prep = DictVectorizer()
Xt = prep.fit_transform(Xd).toarray()

from sklearn.model_selection import train_test_split
a_train, a_test, b_train, b_test = train_test_split(Xt, Y, test_size=0.33)

from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt

curves = []
for n_estimators in range(1, 10):
    clf = RandomForestRegressor(n_estimators=n_estimators, max_depth=1, min_samples_leaf=10)
    clf = clf.fit(a_train, b_train["montant"].ravel())
    erra = mean_squared_error(clf.predict(a_train), b_train) ** 0.5
    errb = mean_squared_error(clf.predict(a_test), b_test) ** 0.5
    print("n_estimators", n_estimators, "error", erra, errb)
    curves.append((n_estimators, erra, errb, clf))

plt.plot([c[0] for c in curves], [c[1] for c in curves], label="train")
plt.plot([c[0] for c in curves], [c[2] for c in curves], label="test")
# plt.ylim([11300, 11600])
plt.legend();
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
We obtain the same result as with decision trees, only faster. Exercise 2: ROC curves First curve We simply remove the unexpected values.
import random, numpy
tsalaires = salaires[(salaires["SEXE"].notnull()) &
                     ((salaires["SEXE"] == "1") | (salaires["SEXE"] == "2"))].copy()
tsalaires["rnd"] = tsalaires.apply(lambda r: random.randint(0, 50), axis=1)
ech = tsalaires[tsalaires.rnd == 0].copy()
X, Y = ech[["AGE", "TYP_EMPLOI", "CONT_TRAV", "CS", "montant"]], ech[["SEXE"]].copy()
Xd = X.T.to_dict().values()
Y["SEXE"] = Y.apply(lambda r: int(r["SEXE"]) - 1, axis=1)

from sklearn.feature_extraction import DictVectorizer
prep = DictVectorizer()
Xt = prep.fit_transform(Xd).toarray()

from sklearn.model_selection import train_test_split
a_train, a_test, b_train, b_test = train_test_split(Xt, Y, test_size=0.33)
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
We train a classifier (male or female):
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth=10)
clf = clf.fit(a_train, b_train["SEXE"].ravel())
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
And the ROC curve:
from sklearn.metrics import roc_curve, auc

probas = clf.predict_proba(a_test)
# probas is a matrix with two columns: the probability of belonging to each class
fpr, tpr, thresholds = roc_curve(b_test["SEXE"].ravel(), probas[:, 1])
roc_auc = auc(fpr, tpr)
print("Area under the ROC curve : %f" % roc_auc)

plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right");
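The area under the ROC curve has a useful probabilistic reading: it equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. This rank-based view (the Mann-Whitney U statistic) gives the AUC without tracing the curve; the scores below are invented for illustration, and on this toy case the value agrees with what `auc(fpr, tpr)` reports:

```python
import numpy as np

def auc_pairwise(y_true, scores):
    """AUC as P(score of a positive > score of a negative), ties counting 1/2."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # compare every positive score with every negative score
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
print(auc_pairwise(y, s))  # 0.75: 3 of the 4 positive/negative pairs are ranked correctly
```

An AUC of 0.5 therefore means the scores rank positives no better than chance, which is what the curve above suggests for this classifier.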
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
A curve like this one means that the classifier does not do much better than chance. We will cheat a little and train the model on the test set to see what we should observe if the model were a good predictor:
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=10)
clf = clf.fit(a_test, b_test["SEXE"].ravel())

probas = clf.predict_proba(a_test)
# probas is a matrix with two columns: the probability of belonging to each class
fpr, tpr, thresholds = roc_curve(b_test["SEXE"].ravel(), probas[:, 1])
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example\nArea under the ROC curve : %f' % roc_auc)
plt.legend(loc="lower right");
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
A few more features and... In the previous examples, two numerical variables (AGE and montant) were treated as categorical; it would be more efficient to treat them as numerical (1 feature instead of $n$ features, where $n$ is the number of distinct ages). This has the advantage of not ending up with only sparse features.
import random, numpy
tsalaires = salaires[(salaires["SEXE"].notnull()) &
                     ((salaires["SEXE"] == "1") | (salaires["SEXE"] == "2"))].copy()
tsalaires["rnd"] = tsalaires.apply(lambda r: random.randint(0, 50), axis=1)
ech = tsalaires[tsalaires.rnd == 0].copy()
Xn, Xc, Y = ech[["AGE", "montant"]], \
            ech[["TYP_EMPLOI", "CONT_TRAV", "CS", "CONV_COLL", "DEPT", "DEPR", "PCS"]], \
            ech[["SEXE"]].copy()
Xd = Xc.T.to_dict().values()
Y["SEXE"] = Y.apply(lambda r: int(r["SEXE"]) - 1, axis=1)

from sklearn.feature_extraction import DictVectorizer
prep = DictVectorizer()
Xt = prep.fit_transform(Xd).toarray()
Xt = numpy.hstack((Xn, Xt))

from sklearn.model_selection import train_test_split
a_train, a_test, b_train, b_test = train_test_split(Xt, Y, test_size=0.33)
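The idea of keeping numeric columns as-is and one-hot encoding only the categorical ones can be sketched without `DictVectorizer`, with `numpy.hstack` gluing the two blocks together. The toy records below are invented for the example:

```python
import numpy as np

records = [{"AGE": 34.0, "CONT_TRAV": "CDI"},
           {"AGE": 51.0, "CONT_TRAV": "CDD"},
           {"AGE": 28.0, "CONT_TRAV": "CDI"}]

# the numeric attribute stays as one dense column
numeric = np.array([[r["AGE"]] for r in records])

# build one binary column per observed category
categories = sorted({r["CONT_TRAV"] for r in records})
onehot = np.array([[1.0 if r["CONT_TRAV"] == c else 0.0 for c in categories]
                   for r in records])

# one dense numeric column followed by the sparse binary columns
X = np.hstack((numeric, onehot))
print(categories)  # ['CDD', 'CDI']
print(X)
```

Treating AGE categorically would instead create one binary column per distinct age, which is what the cell above avoids.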
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
We test different models and check that treating the variables AGE and montant as numerical is more efficient. The last two options show two ways of reducing the number of input variables: with a PCA, and by using a Pipeline object that chains operations on the same dataset. This last example illustrates the benefit of every model exposing the same methods.
model = "GBC" if model == "lda": from sklearn.discriminant_analysis import LinearDiscriminantAnalysis clf = LinearDiscriminantAnalysis(n_components=2) clf = clf.fit(a_train, (b_train["SEXE"]+1.0).ravel()) elif model == "tree": from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier(max_depth=10) clf = clf.fit(a_train, b_train["SEXE"].ravel()) elif model == "forest": from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier() clf = clf.fit(a_train, b_train["SEXE"].ravel()) elif model=="pipeline": from sklearn.pipeline import Pipeline from sklearn.svm import LinearSVC from sklearn.decomposition import PCA from sklearn.discriminant_analysis import LinearDiscriminantAnalysis clf = Pipeline([ ('feature_selection', PCA(n_components=10)), ('classification', LinearDiscriminantAnalysis()) ]) clf = clf.fit(a_train, b_train["SEXE"].ravel()) elif model=="GBC": from sklearn.ensemble import GradientBoostingClassifier clf = GradientBoostingClassifier(n_estimators=10) clf = clf.fit(a_train, b_train["SEXE"].ravel()) else: raise Exception("unknown model: " + model)
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
We plot the ROC curve to measure the performance of the model.
from sklearn.metrics import roc_curve, auc

probas = clf.predict_proba(a_test)
fpr, tpr, thresholds = roc_curve(b_test["SEXE"].ravel(), probas[:, 1])
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC ' + model + '\n' + "Area under the ROC curve : %f" % roc_auc)
plt.legend(loc="lower right");
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
If the chosen model is a GradientBoostingClassifier, we can look at the importance of the variables in building the result. The following plot is inspired by the Gradient Boosting regression page, even though it is not a regression that was used here.
if model == "GBC": import pandas import numpy as np feature_name = pandas.Series(["AGE","montant"] + prep.feature_names_) limit = 20 feature_importance = clf.feature_importances_[:20] feature_importance = 100.0 * (feature_importance / feature_importance.max()) sorted_idx = np.argsort(feature_importance) pos = np.arange(sorted_idx.shape[0]) + .5 plt.subplot(1, 2, 2) plt.barh(pos, feature_importance[sorted_idx], align='center') plt.yticks(pos, feature_name[sorted_idx]) plt.xlabel('Relative Importance') plt.title('Variable Importance');
_doc/notebooks/td2a_ml/td2a_correction_session_3B.ipynb
sdpython/ensae_teaching_cs
mit
If instead of a simple acceptor that returns "yes" or "no", you want to compute an integer, work in $\mathbb{Z}$:
vcsn.context('lal<char(abc)>, z')
doc/notebooks/Contexts.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
To use words on the usual alphabet as labels:
vcsn.context('law<char(a-z)>, z')
doc/notebooks/Contexts.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
$k$-tape Automata To create a "classical" two-tape automaton:
vcsn.context('lat<lal<char(a-f)>, lal<char(A-F)>>, b')
doc/notebooks/Contexts.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
Multiple Weights To compute a Boolean and an integer:
vcsn.context('lal<char(ab)>, lat<b, z>')
doc/notebooks/Contexts.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0