Note that the sliders can be linked in order to preserve the aspect ratio of the figure. The state can be updated as:
zoom_options = {'min': 0.5, 'max': 10., 'step': 0.3, 'zoom': [2., 3.]}
wid.set_widget_state(zoom_options, allow_callback=True)
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
<a name="sec:image"></a>7. Image Options This is a widget for selecting options related to rendering an image. It defines the colourmap, the alpha value for transparency as well as the interpolation. Specifically:
# Initial options
image_options = {'alpha': 1., 'interpolation': 'bilinear', 'cmap_name': None}

# Create widget
wid = ImageOptionsWidget(image_options, render_function=render_function)

# Set styling
wid.style(box_style='success', padding=10, border_visible=True, border_radius=45)

# Display widget
wid
The widget can be updated with a new dict of options as:
wid.set_widget_state({'alpha': 0.8, 'interpolation': 'none', 'cmap_name': 'gray'}, allow_callback=True)
<a name="sec:line"></a>8. Line Options The following widget allows the selection of options for rendering line objects. The initial options are passed in as a dict and control the width, style and colour of the lines. Note that a different colour can be defined for different objects using the labels argument.
# Initial options
line_options = {'render_lines': True, 'line_width': 1,
                'line_colour': ['blue', 'red'], 'line_style': '-'}

# Create widget
wid = LineOptionsWidget(line_options, render_function=render_function,
                        labels=['menpo', 'widgets'])

# Set styling
wid.style(box_style='danger', padding=6)

# Display widget
wid
The Render lines tick box also controls the visibility of the rest of the options. So by updating the state with render_lines=False, the options disappear.
wid.set_widget_state({'render_lines': False, 'line_width': 5, 'line_colour': ['purple'], 'line_style': '--'}, allow_callback=True, labels=None)
<a name="sec:marker"></a>9. Marker Options Similar to the LineOptionsWidget, this widget allows selecting options for rendering markers. The options define the edge width, face colour, edge colour, style and size of the markers.
# Initial options
marker_options = {'render_markers': True, 'marker_size': 20,
                  'marker_face_colour': ['red', 'green'],
                  'marker_edge_colour': ['black', 'blue'],
                  'marker_style': 'o', 'marker_edge_width': 1}

# Create widget
wid = MarkerOptionsWidget(marker_options, render_function=render_function,
                          labels=['a', 'b'])

# Set styling
wid.style(box_style='info', padding=6)

# Display widget
wid

wid.set_widget_state({'render_markers': True, 'marker_size': 20,
                      'marker_face_colour': ['red'],
                      'marker_edge_colour': ['black'],
                      'marker_style': 'o', 'marker_edge_width': 1},
                     labels=None, allow_callback=True)
<a name="sec:numbering"></a>10. Numbering Options The NumberingOptionsWidget is used in case you want to render some numbers next to the plotted points.
# Initial options
numbers_options = {'render_numbering': True, 'numbers_font_name': 'serif',
                   'numbers_font_size': 10, 'numbers_font_style': 'normal',
                   'numbers_font_weight': 'normal',
                   'numbers_font_colour': ['black'],
                   'numbers_horizontal_align': 'center',
                   'numbers_vertical_align': 'bottom'}

# Create widget
wid = NumberingOptionsWidget(numbers_options, render_function=render_function)

# Set styling
wid.style(box_style='success', border_visible=True, border_colour='black',
          border_style='solid', border_width=1, border_radius=0, padding=10,
          margin=10)

# Display widget
wid
Of course the state of the widget can be updated as:
wid.set_widget_state({'render_numbering': True, 'numbers_font_name': 'serif', 'numbers_font_size': 10, 'numbers_font_style': 'normal', 'numbers_font_weight': 'normal', 'numbers_font_colour': ['green'], 'numbers_horizontal_align': 'center', 'numbers_vertical_align': 'bottom'}, allow_callback=True)
<a name="sec:axes"></a>11. Axes Options Before presenting the AxesOptionsWidget, let's first see two widgets that are used as its basic components for selecting the axes limits as well as the axes ticks. AxesLimitsWidget has 3 basic functions per axis:
* auto: Allows matplotlib to automatically set the limits.
* percentage: Expects a float that defines the percentage of padding to allow around the rendered object's region.
* range: Expects two numbers that define the minimum and maximum values of the limits.
# Create widget
wid = AxesLimitsWidget(axes_x_limits=[0, 10], axes_y_limits=0.1,
                       render_function=render_function)

# Set styling
wid.style(box_style='danger')

# Display widget
wid
Note that the percentage mode is accompanied by a ListWidget that expects a single float, whereas the range mode invokes a ListWidget that expects two float numbers. The state of the widget can be changed as:
wid.set_widget_state([-200, 200], None, allow_callback=True)
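The limits logic described above can be sketched as a small standalone helper. The function `resolve_axis_limits` below is hypothetical, written only to illustrate the three modes; it is not part of the menpowidgets API:

```python
import numpy as np

def resolve_axis_limits(data, limits):
    """Illustrative helper mirroring the three AxesLimitsWidget modes.

    limits is None (auto), a float (percentage of padding around the data
    range) or a [min, max] pair (explicit range).
    """
    if limits is None:
        # auto: let matplotlib decide
        return None
    if isinstance(limits, float):
        # percentage: pad the data range on both sides
        lo, hi = float(np.min(data)), float(np.max(data))
        pad = limits * (hi - lo)
        return [lo - pad, hi + pad]
    # range: explicit [min, max]
    return list(limits)
```

For example, `resolve_axis_limits([0, 10], 0.1)` pads the [0, 10] data range by 10% on each side, giving [-1.0, 11.0].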
On the other hand, AxesTicksWidget has two functionalities per axis:
* auto: Allows matplotlib to automatically set the ticks.
* list: Enables a ListWidget to select the ticks.
# Initial options
axes_ticks = {'x': [], 'y': [10., 20., 30.]}

# Create widget
wid = AxesTicksWidget(axes_ticks, render_function=render_function)

# Set styling
wid.style(box_style='danger')

# Display widget
wid
The state can be updated as:
wid.set_widget_state({'x': list(range(5)), 'y': None}, allow_callback=True)
The AxesOptionsWidget involves the AxesLimitsWidget and AxesTicksWidget widgets and also allows the selection of font-related options. As always, the initial options are provided in a dict:
# Initial options
axes_options = {'render_axes': True, 'axes_font_name': 'serif',
                'axes_font_size': 10, 'axes_font_style': 'normal',
                'axes_font_weight': 'normal', 'axes_x_limits': None,
                'axes_y_limits': None, 'axes_x_ticks': [0, 100],
                'axes_y_ticks': None}

# Create widget
wid = AxesOptionsWidget(axes_options, render_function=render_function)

# Set styling
wid.style(box_style='warning', padding=6, border_visible=True,
          border_colour=map_styles_to_hex_colours('warning'))

# Display widget
wid
The state of the widget can be updated as:
axes_options = {'render_axes': True, 'axes_font_name': 'serif',
                'axes_font_size': 10, 'axes_font_style': 'normal',
                'axes_font_weight': 'normal', 'axes_x_limits': [0., 0.05],
                'axes_y_limits': 0.1, 'axes_x_ticks': [0, 100],
                'axes_y_ticks': None}
wid.set_widget_state(axes_options, allow_callback=True)
<a name="sec:legend"></a>12. Legend Options LegendOptionsWidget allows controlling the (many) options for rendering the legend of a figure.
# Initial options
legend_options = {'render_legend': True, 'legend_title': '',
                  'legend_font_name': 'serif', 'legend_font_style': 'normal',
                  'legend_font_size': 10, 'legend_font_weight': 'normal',
                  'legend_marker_scale': 1., 'legend_location': 2,
                  'legend_bbox_to_anchor': (1.05, 1.),
                  'legend_border_axes_pad': 1., 'legend_n_columns': 1,
                  'legend_horizontal_spacing': 1., 'legend_vertical_spacing': 1.,
                  'legend_border': True, 'legend_border_padding': 0.5,
                  'legend_shadow': False, 'legend_rounded_corners': True}

# Create widget
wid = LegendOptionsWidget(legend_options, render_function=render_function)

# Set styling
wid.style(border_visible=True, font_size=15)

# Display widget
wid

# Update the widget state
legend_options = {'render_legend': True, 'legend_title': 'asd',
                  'legend_font_name': 'sans-serif', 'legend_font_style': 'normal',
                  'legend_font_size': 60, 'legend_font_weight': 'normal',
                  'legend_marker_scale': 2., 'legend_location': 7,
                  'legend_bbox_to_anchor': (1.05, 1.),
                  'legend_border_axes_pad': 1., 'legend_n_columns': 2,
                  'legend_horizontal_spacing': 3., 'legend_vertical_spacing': 7.,
                  'legend_border': False, 'legend_border_padding': 0.5,
                  'legend_shadow': True, 'legend_rounded_corners': True}
wid.set_widget_state(legend_options, allow_callback=True)
<a name="sec:grid"></a>13. Grid Options The following simple widget controls the rendering of the grid lines of a plot, their style and width.
# Initial options
grid_options = {'render_grid': True, 'grid_line_width': 1, 'grid_line_style': '-'}

# Create widget
wid = GridOptionsWidget(grid_options, render_function=render_function)

# Set styling
wid.style(box_style='warning')

# Display widget
wid

wid.set_widget_state({'render_grid': True, 'grid_line_width': 10,
                      'grid_line_style': ':'})
<a name="sec:features"></a>14. HOG, DSIFT, Daisy, LBP, IGO Options The following widgets allow selecting options for HOG, DSIFT, Daisy, LBP and IGO features.
# Initial options
hog_options = {'mode': 'dense', 'algorithm': 'dalaltriggs', 'num_bins': 9,
               'cell_size': 8, 'block_size': 2, 'signed_gradient': True,
               'l2_norm_clip': 0.2, 'window_height': 1, 'window_width': 1,
               'window_unit': 'blocks', 'window_step_vertical': 1,
               'window_step_horizontal': 1, 'window_step_unit': 'pixels',
               'padding': True}
# Create widget
wid = HOGOptionsWidget(hog_options, render_function=render_function)
# Set styling
wid.style('info')
# Display widget
wid

# Initial options
dsift_options = {'window_step_horizontal': 1, 'window_step_vertical': 1,
                 'num_bins_horizontal': 2, 'num_bins_vertical': 2,
                 'num_or_bins': 9, 'cell_size_horizontal': 6,
                 'cell_size_vertical': 6, 'fast': True}
# Create widget
wid = DSIFTOptionsWidget(dsift_options, render_function=render_function)
# Set styling
wid.style('success')
# Display widget
wid

# Initial options
daisy_options = {'step': 1, 'radius': 15, 'rings': 2, 'histograms': 2,
                 'orientations': 8, 'normalization': 'l1', 'sigmas': None,
                 'ring_radii': None}
# Create widget
wid = DaisyOptionsWidget(daisy_options, render_function=render_function)
# Set styling
wid.style('danger')
# Display widget
wid

# Initial options
lbp_options = {'radius': list(range(1, 5)), 'samples': [8] * 4,
               'mapping_type': 'u2', 'window_step_vertical': 1,
               'window_step_horizontal': 1, 'window_step_unit': 'pixels',
               'padding': True}
# Create widget
wid = LBPOptionsWidget(lbp_options, render_function=render_function)
# Set styling
wid.style(box_style='warning')
# Display widget
wid

wid = IGOOptionsWidget({'double_angles': True}, render_function=render_function)
wid
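As a rough illustration of what IGO (image gradient orientations) features compute, here is a minimal numpy sketch: per-pixel unit gradient directions, optionally with the double-angle channels toggled by an option like double_angles above. The function name and details are illustrative, not menpo's implementation:

```python
import numpy as np

def igo_features(image, double_angles=False):
    """Sketch of IGO features: per-pixel unit gradient directions.

    Returns channels [cos(phi), sin(phi)] and, when double_angles=True,
    additionally [cos(2*phi), sin(2*phi)].
    """
    gy, gx = np.gradient(image.astype(float))
    phi = np.arctan2(gy, gx)
    channels = [np.cos(phi), np.sin(phi)]
    if double_angles:
        channels += [np.cos(2 * phi), np.sin(2 * phi)]
    return np.stack(channels)

# Intensity increases along x only, so every gradient points along +x (phi = 0)
img = np.tile(np.arange(8.), (8, 1))
feats = igo_features(img, double_angles=True)
```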
If we visualize how the algorithm progresses, we can pre-emptively stop execution of the tour evaluation. Since the order of the permutations is deterministic, we can observe that the cost monotonically decreases. This monotonic decrease is a result of the min function we call on costs. In actuality, since we're evaluating all tours and only storing the smallest one (a reduce), we make no assumptions about the structure of the graph. One can see that all edge evaluations are separate from one another, so our final evaluation is equally likely to be the lowest-weight tour as the last. Let's set up our visualization, creating a random Euclidean 2D graph, and seeing how it performs as we vary $N$, the tour at which it stops evaluating. If we choose the size of the graph to be 8, solving it exactly is feasible. Any larger, and this notebook becomes computationally intractable.
from algs import brute_force_N, brute_force
from parsers import TSP
from graphgen import EUC_2D
from parstats import get_stats, dist_across_cost, scatter_vis
from itertools import permutations

tsp_prob = TSP('../data/a280.tsp')
tsp_prob.graph = EUC_2D(6)
tsp_prob.spec = dict(comment="Random euclidean graph", dimension=11,
                     edge_weight_type="EUC_2D", name="Random cities")

%%bash
./cluster.sh 8

@get_stats(name="Brute force, monotonic reduction", data=tsp_prob,
           plots=[scatter_vis])
def vis_brute(*args, **kwargs):
    return brute_force_N(*args, **kwargs)

vis_brute(range(2, len(list(permutations(tsp_prob.graph.nodes())))));
reports/01_exact_algorithms.ipynb
DhashS/Olin-Complexity-Final-Project
gpl-3.0
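The idea of evaluating only the first $N$ permutations in deterministic order while keeping a running minimum (the reduce) can be sketched without the project's helper modules. The coordinates and function name below are illustrative:

```python
import itertools
import math

def brute_force_first_n(coords, n):
    """Evaluate only the first n permutations (deterministic order) and
    keep the cheapest open tour seen so far -- the running min that makes
    the plotted cost curve monotonically non-increasing."""
    def tour_cost(tour):
        return sum(math.dist(coords[a], coords[b])
                   for a, b in zip(tour, tour[1:]))

    best_cost = math.inf
    for tour in itertools.islice(itertools.permutations(range(len(coords))), n):
        best_cost = min(best_cost, tour_cost(tour))
    return best_cost

cities = [(0, 0), (0, 1), (2, 0), (2, 1)]
partial = brute_force_first_n(cities, 5)                       # stop early
exact = brute_force_first_n(cities, math.factorial(len(cities)))  # full search
```

Because the partial run only sees a prefix of the permutations, its best cost can never beat the exhaustive one.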
If we tweak the code slightly, we can see what it's doing without a reduce step:
# %load -s brute_force_N_no_reduce algs.py
def brute_force_N_no_reduce(p, n, perf=False):
    import itertools as it
    import numpy as np
    import pandas as pd

    # Generate all possible tours (complete graph)
    tours = list(it.permutations(p.nodes()))  # O(V!)
    costs = []
    if not perf:
        cost_data = pd.DataFrame(columns=["$N$", "cost", "opt_cost"])

    # Evaluate all tours
    for tour in tours[:n]:
        cost = 0
        for n1, n2 in zip(tour, tour[1:]):  # O(V)
            cost += p[n1][n2]['weight']
        costs.append(cost)
        if not perf:
            cost_data = cost_data.append({"$N$": n,
                                          "cost": costs[-1],
                                          "opt_cost": min(costs)},
                                         ignore_index=True)

    if not perf:
        return (cost_data, pd.DataFrame())
    # Choose tour with lowest cost
    return tours[np.argmin(costs)]

@get_stats(name="Brute force, no reduce", data=tsp_prob,
           plots=[scatter_vis, dist_across_cost])
def vis_brute_no_reduce(*args, **kwargs):
    return brute_force_N_no_reduce(*args, **kwargs)

cost_stats, _ = vis_brute_no_reduce(range(2, len(list(permutations(tsp_prob.graph.nodes())))))
Given this is a randomly distributed dataset, it makes sense that the distribution across costs looks like a Gaussian. Let's confirm by checking how correlated they are.
from scipy.stats import pearsonr

pearsonr(cost_stats.cost, cost_stats.opt_cost)
pearsonr(cost_stats["$N$"], cost_stats.cost)
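scipy's pearsonr returns the correlation coefficient and a p-value; the coefficient itself is just the covariance divided by the product of the standard deviations, which can be checked by hand:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient computed from its definition:
    covariance over the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly linear data gives r = 1, a perfect inverse relationship gives r = -1, and values near 0 indicate no linear relationship.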
2. Visualize the First 36 Training Images
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

fig = plt.figure(figsize=(20,5))
for i in range(36):
    ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(x_train[i]))
3. Rescale the Images by Dividing Every Pixel in Every Image by 255
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
4. Break Dataset into Training, Testing, and Validation Sets
from keras.utils import np_utils

# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)

# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]

# print shape of training set
print('x_train shape:', x_train.shape)

# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
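The one-hot encoding step can be reproduced with plain numpy. This is a minimal stand-in for to_categorical, shown only for illustration:

```python
import numpy as np

def one_hot(labels, num_classes):
    """Minimal numpy equivalent of to_categorical: row i is all zeros
    except for a 1 at column labels[i]."""
    labels = np.asarray(labels).ravel().astype(int)
    return np.eye(num_classes)[labels]

y = one_hot([0, 2, 1], 3)
```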
5. Define the Model Architecture
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten

# define the model
model = Sequential()
model.add(Flatten(input_shape=x_train.shape[1:]))
model.add(Dense(1000, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
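The parameter counts that model.summary() prints can be reproduced by hand: each Dense layer has (inputs × units) weights plus one bias per unit. A quick check, assuming the CIFAR-10 input shape of 32×32×3:

```python
# Parameter count for the MLP above:
# each Dense layer has (inputs x units) weights plus `units` biases.
flat = 32 * 32 * 3            # Flatten: 3072 values, no parameters
dense1 = flat * 1000 + 1000   # first hidden layer
dense2 = 1000 * 512 + 512     # second hidden layer
dense3 = 512 * 10 + 10        # softmax output layer (10 classes)
total = dense1 + dense2 + dense3
```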
6. Compile the Model
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
7. Train the Model
from keras.callbacks import ModelCheckpoint

# train the model
checkpointer = ModelCheckpoint(filepath='MLP.weights.best.hdf5', verbose=1,
                               save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=20,
                 validation_data=(x_valid, y_valid),
                 callbacks=[checkpointer], verbose=2, shuffle=True)
8. Load the Model with the Best Classification Accuracy on the Validation Set
# load the weights that yielded the best validation accuracy
model.load_weights('MLP.weights.best.hdf5')
9. Calculate Classification Accuracy on Test Set
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
Time to build the network

Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

<img src="assets/neural_network.png" width=300px>

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.

We use the weights to propagate signals forward from the input to the output layers in a neural network. We also use the weights to propagate error backwards from the output back into the network to update our weights. This is called backpropagation.

Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
                                                        (self.input_nodes, self.hidden_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                         (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate

        #### TODO: Set self.activation_function to your implemented sigmoid function ####
        #
        # Note: in Python, you can define a function with a lambda expression,
        # as shown below.
        self.activation_function = lambda x: 0  # Replace 0 with your sigmoid calculation.

        ### If the lambda code above is not something you're familiar with,
        # you can uncomment the following three lines and put your
        # implementation there instead.
        #
        #def sigmoid(x):
        #    return 0  # Replace 0 with your sigmoid calculation here
        #self.activation_function = sigmoid

    def train(self, features, targets):
        ''' Train the network on batch of features and targets.

            Arguments
            ---------
            features: 2D array, each row is one data record, each column is a feature
            targets: 1D array of target values
        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            #### Implement the forward pass here ####
            ### Forward pass ###
            # TODO: Hidden layer - Replace these values with your calculations.
            hidden_inputs = None   # signals into hidden layer
            hidden_outputs = None  # signals from hidden layer

            # TODO: Output layer - Replace these values with your calculations.
            final_inputs = None   # signals into final output layer
            final_outputs = None  # signals from final output layer

            #### Implement the backward pass here ####
            ### Backward pass ###
            # TODO: Output error - Replace this value with your calculations.
            error = None  # Output layer error is the difference between desired target and actual output.

            # TODO: Calculate the hidden layer's contribution to the error
            hidden_error = None

            # TODO: Backpropagated error terms - Replace these values with your calculations.
            output_error_term = None
            hidden_error_term = None

            # Weight step (input to hidden)
            delta_weights_i_h += None
            # Weight step (hidden to output)
            delta_weights_h_o += None

        # TODO: Update the weights - Replace these values with your calculations.
        self.weights_hidden_to_output += None  # update hidden-to-output weights with gradient descent step
        self.weights_input_to_hidden += None   # update input-to-hidden weights with gradient descent step

    def run(self, features):
        ''' Run a forward pass through the network with input features

            Arguments
            ---------
            features: 1D array of feature values
        '''
        #### Implement the forward pass here ####
        # TODO: Hidden layer - replace these values with the appropriate calculations.
        hidden_inputs = None   # signals into hidden layer
        hidden_outputs = None  # signals from hidden layer

        # TODO: Output layer - Replace these values with the appropriate calculations.
        final_inputs = None   # signals into final output layer
        final_outputs = None  # signals from final output layer

        return final_outputs


def MSE(y, Y):
    return np.mean((y-Y)**2)
first-neural-network/Your_first_neural_network.ipynb
ClementPhil/deep-learning
mit
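For reference, here is one possible shape of the sigmoid, its derivative, and a forward pass with an identity output layer, as a standalone numpy sketch. This is an illustrative solution to the general exercise, not the notebook's graded solution, and the weight shapes are arbitrary:

```python
import numpy as np

def sigmoid(x):
    # squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(output):
    # derivative of the sigmoid expressed in terms of its output
    return output * (1.0 - output)

def forward(X, w_ih, w_ho):
    """Forward pass matching the skeleton's structure: sigmoid hidden
    layer, identity (f(x) = x) output layer."""
    hidden_outputs = sigmoid(X @ w_ih)
    return hidden_outputs @ w_ho  # identity activation on the output node

rng = np.random.default_rng(0)
w_ih = rng.normal(0, 0.1, (3, 4))  # 3 inputs -> 4 hidden units
w_ho = rng.normal(0, 0.1, (4, 1))  # 4 hidden units -> 1 output
out = forward(np.ones((2, 3)), w_ih, w_ho)
```

Because the output activation is $f(x)=x$, its derivative is 1, which is why the output error term in backpropagation needs no extra derivative factor.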
Training the network

Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.

Choose the number of iterations

This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.

Choose the learning rate

This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.

Choose the number of hidden nodes

The more hidden nodes you have, the more accurate the model's predictions will be. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
import sys

### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']

    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
We create a system object and provide: Hamiltonian, dynamics, and magnetisation configuration.
system = oc.System(name="first_notebook")
workshops/Durham/.ipynb_checkpoints/tutorial0_first_notebook-checkpoint.ipynb
joommf/tutorial
bsd-3-clause
Our Hamiltonian should only contain exchange, demagnetisation, and Zeeman energy terms. We will apply the external magnetic field in the $x$ direction for the purpose of this demonstration:
A = 1e-12  # exchange energy constant (J/m)
H = (5e6, 0, 0)  # external magnetic field in x-direction (A/m)
system.hamiltonian = oc.Exchange(A=A) + oc.Demag() + oc.Zeeman(H=H)
The dynamics of the system is governed by the LLG equation containing precession and damping terms:
gamma = 2.211e5  # gamma parameter (m/As)
alpha = 0.2  # Gilbert damping
system.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha)
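As a toy illustration of how the precession and damping terms act together (independent of OOMMF), a single macrospin can be integrated in reduced units. All parameter values below are arbitrary, chosen only to show the damping term pulling the magnetisation toward the field direction:

```python
import numpy as np

def llg_relax(m0, h, alpha=0.5, gamma=1.0, dt=0.01, steps=4000):
    """Explicit-Euler integration of the Landau-Lifshitz form of the LLG
    equation for a single macrospin, in reduced units:
        dm/dt = -gamma m x H - gamma * alpha m x (m x H)
    The first term is precession around H; the second (damping) term
    relaxes m toward the field direction."""
    m = np.array(m0, dtype=float)
    m /= np.linalg.norm(m)
    h = np.array(h, dtype=float)
    for _ in range(steps):
        mxh = np.cross(m, h)
        dm = -gamma * mxh - gamma * alpha * np.cross(m, mxh)
        m += dt * dm
        m /= np.linalg.norm(m)  # keep |m| = 1 against Euler drift
    return m

# Start along +y with the field along +x: m spirals into the +x direction
m_final = llg_relax(m0=(0, 1, 0), h=(1, 0, 0))
```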
We initialise the system in the positive $y$ direction, i.e. (0, 1, 0), which is different from the equilibrium state we expect for the external Zeeman field applied in the $x$ direction:
L = 100e-9  # cubic sample edge length (m)
d = 5e-9  # discretisation cell size (m)
mesh = oc.Mesh(p1=(0, 0, 0), p2=(L, L, L), cell=(d, d, d))

Ms = 8e6  # saturation magnetisation (A/m)
system.m = df.Field(mesh, value=(0, 1, 0), norm=Ms)
We can check the characteristics of the system we defined by asking objects to represent themselves:
mesh
system.hamiltonian
system.dynamics
We can also visualise the current magnetisation field:
system.m.plot_plane("z");
After the system object is created, we can minimise its energy (relax it) using the Minimisation Driver (MinDriver).
md = oc.MinDriver()
md.drive(system)
The system is now relaxed, and we can plot its slice and compute its average magnetisation.
# centre of the system is assumed for plane to be plotted
system.m.plot_plane("z");

# plane can be chosen manually as well
system.m.plot_plane(z=10e-9);

system.m.average
As you can see above, we have a session on the server. It has been assigned a unique session ID and a more user-friendly name. In this case, we are using the binary CAS protocol as opposed to the REST interface. We can now run CAS actions in the session. Let's begin with a simple one: listnodes.
# Run the builtins.listnodes action
nodes = conn.listnodes()
nodes
communities/Your First CAS Connection from Python.ipynb
sassoftware/sas-viya-programming
apache-2.0
The listnodes action returns a CASResults object (which is just a subclass of Python's ordered dictionary). It contains one key ('nodelist') which holds a Pandas DataFrame. We can now grab that DataFrame to do further operations on it.
# Grab the nodelist DataFrame
df = nodes['nodelist']
df
Use DataFrame selection to subset the columns.
roles = df[['name', 'role']]
roles

# Extract the worker nodes using a DataFrame mask
roles[roles.role == 'worker']

# Extract the controllers using a DataFrame mask
roles[roles.role == 'controller']
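The boolean-mask selection used above works on any DataFrame, not just CAS results. A self-contained sketch with hypothetical node data:

```python
import pandas as pd

# A small stand-in for the nodelist DataFrame (hypothetical values)
df = pd.DataFrame({'name': ['node1', 'node2', 'node3'],
                   'role': ['controller', 'worker', 'worker']})

# A boolean mask keeps only the rows where the condition holds
workers = df[df.role == 'worker']
controllers = df[df.role == 'controller']
```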
In the code above, we are doing some standard DataFrame operations using expressions to filter the DataFrame to include only worker nodes or controller nodes. Pandas DataFrames support lots of ways of slicing and dicing your data. If you aren't familiar with them, you'll want to get acquainted on the Pandas web site. When you are finished with a CAS session, it's always a good idea to clean up.
conn.close()
4. Solving the model

4.1 Solow model as an initial value problem

The Solow model can be formulated as an initial value problem (IVP) as follows.

$$ \dot{k}(t) = sf(k(t)) - (g + n + \delta)k(t),\ t\ge t_0,\ k(t_0) = k_0 \tag{4.1.0} $$

The solution to this IVP is a function $k(t)$ describing the time-path of capital stock (per unit effective labor). Our objective in this section will be to explore methods for approximating the function $k(t)$. The methods we will learn are completely general and can be used to solve any IVP. Those interested in learning more about these methods should start by reading Chapter 10 of Numerical Methods for Economists by Ken Judd before proceeding to John Butcher's excellent book entitled Numerical Methods for Ordinary Differential Equations.

Before discussing numerical methods we should stop and consider whether or not there are any special cases (i.e., combinations of parameters) for which the initial value problem defined in 4.1.0 might have an analytic solution. Analytic results can be very useful in building intuition about the economic mechanisms at play in a model and are invaluable for debugging code.

4.2 Analytic methods

4.2.1 Analytic solution for a model with Cobb-Douglas production

The Solow model with Cobb-Douglas production happens to have a completely general analytic solution:

$$ k(t) = \left[\left(\frac{s}{n+g+\delta}\right)\left(1 - e^{-(n + g + \delta) (1-\alpha) t}\right) + k_0^{1-\alpha}e^{-(n + g + \delta) (1-\alpha) t}\right]^{\frac{1}{1-\alpha}} \tag{4.2.0}$$

This analytic result is available via the analytic_solution method of the solow.CobbDouglasModel class.
solowpy.CobbDouglasModel.analytic_solution?
notebooks/4 Solving the model.ipynb
solowPy/binder
mit
Example: Computing the analytic trajectory We can compute an analytic solution for our Solow model like so...
# define model parameters cobb_douglas_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15, 'delta': 0.05, 'alpha': 0.33} # create an instance of the solow.Model class cobb_douglas_model = solowpy.CobbDouglasModel(params=cobb_douglas_params) # specify some initial condition k0 = 0.5 * cobb_douglas_model.steady_state # grid of t values for which we want the value of k(t) ti = np.linspace(0, 100, 10) # generate a trajectory! cobb_douglas_model.analytic_solution(ti, k0)
notebooks/4 Solving the model.ipynb
solowPy/binder
mit
...and we can make a plot of this solution like so...
fig, ax = plt.subplots(1, 1, figsize=(8,6)) # compute the solution ti = np.linspace(0, 100, 1000) analytic_traj = cobb_douglas_model.analytic_solution(ti, k0) # plot this trajectory ax.plot(ti, analytic_traj[:,1], 'r-') # equilibrium value of capital stock (per unit effective labor) ax.axhline(cobb_douglas_model.steady_state, linestyle='dashed', color='k', label='$k^*$') # axes, labels, title, etc ax.set_xlabel('Time, $t$', fontsize=20, family='serif') ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif') ax.set_title('Analytic solution to a Solow model\nwith Cobb-Douglas production', fontsize=25, family='serif') ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0)) ax.grid('on') plt.show()
notebooks/4 Solving the model.ipynb
solowPy/binder
mit
4.2.2 Linearized solution to general model In general there will not be closed-form solutions for the Solow model. The standard approach to obtaining general analytical results for the Solow model is to linearize the equation of motion for capital stock (per unit effective labor). Linearizing the equation of motion of capital (per unit effective labor) amounts to taking a first-order Taylor approximation of equation 4.1.0 around its long-run equilibrium $k^*$: $$ \dot{k}(t) \approx -\lambda (k(t) - k^*),\ t \ge t_0,\ k(t_0)=k_0 \tag{4.2.1}$$ where the speed of convergence, $\lambda$, is defined as $$ \lambda = -\frac{\partial \dot{k}(k(t))}{\partial k(t)}\bigg|_{k(t)=k^*} \tag{4.2.2} $$ The solution to the linear differential equation 4.2.1 is $$ k(t) = k^* + e^{-\lambda t}(k_0 - k^*). \tag{4.2.3} $$ To complete the solution it remains to find an expression for the speed of convergence $\lambda$: \begin{align} \lambda \equiv -\frac{\partial \dot{k}(k(t))}{\partial k(t)}\bigg|_{k(t)=k^*} =& -[sf'(k^*) - (g + n + \delta)] \\ =& (g + n + \delta) - sf'(k^*) \\ =& (g + n + \delta) - (g + n + \delta)\frac{k^*f'(k^*)}{f(k^*)} \\ =& (1 - \alpha_K(k^*))(g + n + \delta) \tag{4.2.4} \end{align} where the elasticity of output with respect to capital, $\alpha_K(k)$, is $$\alpha_K(k) = \frac{kf'(k)}{f(k)}. \tag{4.2.5}$$ Example: Computing the linearized trajectory One can compute a linear approximation of the model solution using the linearized_solution method of the solow.Model class as follows.
# specify some initial condition k0 = 0.5 * cobb_douglas_model.steady_state # grid of t values for which we want the value of k(t) ti = np.linspace(0, 100, 10) # generate a trajectory! cobb_douglas_model.linearized_solution(ti, k0)
notebooks/4 Solving the model.ipynb
solowPy/binder
mit
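The closed form in equations (4.2.3)-(4.2.4) is easy to verify by hand. A minimal NumPy sketch for the Cobb-Douglas case, where $\alpha_K(k^*) = \alpha$ (parameter values are illustrative):

```python
import numpy as np

def linearized_k(t, k0, k_star, n=0.03, g=0.02, delta=0.05, alpha=0.33):
    """Equation (4.2.3): exponential convergence at rate lambda from (4.2.4)."""
    lam = (1 - alpha) * (g + n + delta)  # speed of convergence
    return k_star + np.exp(-lam * t) * (k0 - k_star)
```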
4.2.3 Accuracy of the linear approximation
# initial condition t0, k0 = 0.0, 0.5 * cobb_douglas_model.steady_state # grid of t values for which we want the value of k(t) ti = np.linspace(t0, 100, 1000) # generate the trajectories analytic = cobb_douglas_model.analytic_solution(ti, k0) linearized = cobb_douglas_model.linearized_solution(ti, k0) fig, ax = plt.subplots(1, 1, figsize=(8,6)) ax.plot(ti, analytic[:,1], 'r-', label='Analytic') ax.plot(ti, linearized[:,1], 'b-', label='Linearized') # demarcate k* ax.axhline(cobb_douglas_model.steady_state, linestyle='dashed', color='k', label='$k^*$') # axes, labels, title, etc ax.set_xlabel('Time, $t$', fontsize=20, family='serif') ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif') ax.set_title('Analytic vs. linearized solutions', fontsize=25, family='serif') ax.legend(loc='best', frameon=False, prop={'family':'serif'}, bbox_to_anchor=(1.0, 1.0)) ax.grid('on') plt.show()
notebooks/4 Solving the model.ipynb
solowPy/binder
mit
4.3 Finite-difference methods Four of the best, most widely used ODE integrators have been implemented in the scipy.integrate module (they are called dopri5, dop853, lsoda, and vode). Each of these integrators uses some type of adaptive step-size control: the integrator adaptively adjusts the step size $h$ in order to keep the approximation error below some user-specified threshold. The cells below contain code which compares the approximation error of the forward Euler method with those of lsoda and vode. Instead of simple linear interpolation (i.e., k=1), I set k=5 for 5th-order B-spline interpolation. ...finally, we can plot trajectories for different initial conditions. Note that the analytic solutions converge to the long-run equilibrium no matter the initial condition of capital stock (per unit of effective labor), providing a nice graphical demonstration that the Solow model is globally stable.
fig, ax = plt.subplots(1, 1, figsize=(8,6)) # lower and upper bounds for initial conditions k_star = solow.cobb_douglas.analytic_steady_state(cobb_douglas_model) k_l = 0.5 * k_star k_u = 2.0 * k_star for k0 in np.linspace(k_l, k_u, 5): # compute the solution ti = np.linspace(0, 100, 1000) analytic_traj = solow.cobb_douglas.analytic_solution(cobb_douglas_model, ti, k0) # plot this trajectory ax.plot(ti, analytic_traj[:,1]) # equilibrium value of capital stock (per unit effective labor) ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$') # axes, labels, title, etc ax.set_xlabel('Time, $t$', fontsize=15, family='serif') ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif') ax.set_title('Analytic solution to a Solow model\nwith Cobb-Douglas production', fontsize=20, family='serif') ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0)) ax.grid('on') plt.show() k0 = 0.5 * ces_model.steady_state numeric_trajectory = ces_model.ivp.solve(t0=0, y0=k0, h=0.5, T=100, integrator='dopri5') ti = numeric_trajectory[:,0] linearized_trajectory = ces_model.linearized_solution(ti, k0)
notebooks/4 Solving the model.ipynb
solowPy/binder
mit
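The same IVP can also be handed straight to SciPy. The notebook uses its own ivp wrapper, so the snippet below is only a sketch of the equivalent call with scipy.integrate.solve_ivp and LSODA:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Solow parameters matching cobb_douglas_params above.
s, n, g, delta, alpha = 0.15, 0.03, 0.02, 0.05, 0.33

def k_dot(t, k):
    # Equation (4.1.0) with Cobb-Douglas production f(k) = k**alpha.
    return s * k**alpha - (g + n + delta) * k

k_star = (s / (g + n + delta))**(1 / (1 - alpha))
solution = solve_ivp(k_dot, (0.0, 200.0), [0.5 * k_star], method='LSODA')
```

Over 200 periods the trajectory should have converged to the steady state, whatever the starting point in the basin of attraction.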
4.3.2 Accuracy of finite-difference methods
t0, k0 = 0.0, 0.5 numeric_soln = cobb_douglas_model.ivp.solve(t0, k0, T=100, integrator='lsoda') fig, ax = plt.subplots(1, 1, figsize=(8,6)) # compute and plot the numeric approximation t0, k0 = 0.0, 0.5 numeric_soln = cobb_douglas_model.ivp.solve(t0, k0, T=100, integrator='lsoda') ax.plot(numeric_soln[:,0], numeric_soln[:,1], 'bo', markersize=3.0) # compute and plot the analytic solution ti = np.linspace(0, 100, 1000) analytic_soln = solow.cobb_douglas.analytic_solution(cobb_douglas_model, ti, k0) ax.plot(ti, analytic_soln[:,1], 'r-') # equilibrium value of capital stock (per unit effective labor) k_star = solow.cobb_douglas.analytic_steady_state(cobb_douglas_model) ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$') # axes, labels, title, etc ax.set_xlabel('Time, $t$', fontsize=15, family='serif') ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif') ax.set_title('Numerical approximation of the solution', fontsize=20, family='serif') ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0)) ax.grid('on') plt.show() ti = np.linspace(0, 100, 1000) interpolated_soln = cobb_douglas_model.ivp.interpolate(numeric_soln, ti, k=3) fig, ax = plt.subplots(1, 1, figsize=(8,6)) # compute and plot the numeric approximation ti = np.linspace(0, 100, 1000) interpolated_soln = cobb_douglas_model.ivp.interpolate(numeric_soln, ti, k=3) ax.plot(ti, interpolated_soln[:,1], 'b-') # compute and plot the analytic solution analytic_soln = solow.cobb_douglas.analytic_solution(cobb_douglas_model, ti, k0) ax.plot(ti, analytic_soln[:,1], 'r-') # equilibrium value of capital stock (per unit effective labor) k_star = solow.cobb_douglas.analytic_steady_state(cobb_douglas_model) ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$') # axes, labels, title, etc ax.set_xlabel('Time, $t$', fontsize=15, family='serif') ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif') ax.set_title('Numerical approximation of the solution', 
fontsize=20, family='serif') ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0)) ax.grid('on') plt.show() ti = np.linspace(0, 100, 1000) residual = cobb_douglas_model.ivp.compute_residual(numeric_soln, ti, k=3) # extract the raw residuals capital_residual = residual[:, 1] # typically, normalize residual by the level of the variable norm_capital_residual = np.abs(capital_residual) / interpolated_soln[:,1] # create the plot fig = plt.figure(figsize=(8, 6)) plt.plot(ti, norm_capital_residual, 'b-', label='$k(t)$') plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps') plt.xlabel('Time', fontsize=15) plt.ylim(1e-16, 1) plt.ylabel('Residuals (normalized)', fontsize=15, family='serif') plt.yscale('log') plt.title('Residual', fontsize=20, family='serif') plt.grid() plt.legend(loc=0, frameon=False, bbox_to_anchor=(1.0,1.0)) plt.show()
notebooks/4 Solving the model.ipynb
solowPy/binder
mit
Create Model Test/Validation Data
x_test = np.random.rand(len(x_train)).astype(np.float32) print(x_test) noise = np.random.normal(scale=0.01, size=len(x_train)) y_test = x_test * 0.1 + 0.3 + noise print(y_test) pylab.plot(x_train, y_train, '.') with tf.device("/cpu:0"): W = tf.get_variable(shape=[], name='weights') print(W) b = tf.get_variable(shape=[], name='bias') print(b) x_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='x_observed') print(x_observed) with tf.device("/cpu:0"): y_pred = W * x_observed + b print(y_pred) with tf.device("/cpu:0"): y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed') print(y_observed) loss_op = tf.reduce_mean(tf.square(y_pred - y_observed)) optimizer_op = tf.train.GradientDescentOptimizer(0.025) train_op = optimizer_op.minimize(loss_op) print("loss:", loss_op) print("optimizer:", optimizer_op) print("train:", train_op) with tf.device("/cpu:0"): init_op = tf.global_variables_initializer() print(init_op) train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/cpu/%s/train' % version, graph=tf.get_default_graph()) test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/cpu/%s/test' % version, graph=tf.get_default_graph()) config = tf.ConfigProto( log_device_placement=True, ) print(config) sess = tf.Session(config=config) sess.run(init_op) print(sess.run(W)) print(sess.run(b))
oreilly.ml/high-performance-tensorflow/notebooks/03_Train_Model_CPU.ipynb
shareactorIO/pipeline
apache-2.0
Look at the Model Graph In Tensorboard Navigate to the Graph tab at this URL: http://[ip-address]:6006 Accuracy of Random Weights
def test(x, y): return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y}) test(x=x_test, y=y_test) loss_summary_scalar_op = tf.summary.scalar('loss', loss_op) loss_summary_merge_all_op = tf.summary.merge_all()
oreilly.ml/high-performance-tensorflow/notebooks/03_Train_Model_CPU.ipynb
shareactorIO/pipeline
apache-2.0
Train Model
%%time max_steps = 400 run_metadata = tf.RunMetadata() for step in range(max_steps): if step < max_steps - 1: test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test}) train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train}) else: test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test}) train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train}, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE), run_metadata=run_metadata) trace = timeline.Timeline(step_stats=run_metadata.step_stats) with open('cpu-timeline.json', 'w') as trace_file: trace_file.write(trace.generate_chrome_trace_format(show_memory=True)) if step % 1 == 0: print(step, sess.run([W, b])) train_summary_writer.add_summary(train_summary_log, step) train_summary_writer.flush() test_summary_writer.add_summary(test_summary_log, step) test_summary_writer.flush() pylab.plot(x_train, y_train, '.', label="target") pylab.plot(x_train, sess.run(y_pred, feed_dict={x_observed: x_train, y_observed: y_train}), ".", label="predicted") pylab.legend() pylab.ylim(0, 1.0) test(x=x_test, y=y_test)
oreilly.ml/high-performance-tensorflow/notebooks/03_Train_Model_CPU.ipynb
shareactorIO/pipeline
apache-2.0
Look at the Train and Test Loss Summary In Tensorboard Navigate to the Scalars tab at this URL: http://[ip-address]:6006
from tensorflow.python.saved_model import utils tensor_info_x_observed = utils.build_tensor_info(x_observed) print(tensor_info_x_observed) tensor_info_y_pred = utils.build_tensor_info(y_pred) print(tensor_info_y_pred) export_path = "/root/models/linear/cpu/%s" % version print(export_path) from tensorflow.python.saved_model import builder as saved_model_builder from tensorflow.python.saved_model import signature_constants from tensorflow.python.saved_model import signature_def_utils from tensorflow.python.saved_model import tag_constants with tf.device("/cpu:0"): builder = saved_model_builder.SavedModelBuilder(export_path) prediction_signature = signature_def_utils.build_signature_def( inputs = {'x_observed': tensor_info_x_observed}, outputs = {'y_pred': tensor_info_y_pred}, method_name = signature_constants.PREDICT_METHOD_NAME) legacy_init_op = tf.group(tf.initialize_all_tables(), name='legacy_init_op') builder.add_meta_graph_and_variables(sess, [tag_constants.SERVING], signature_def_map={'predict':prediction_signature, signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:prediction_signature}, legacy_init_op=legacy_init_op) builder.save()
oreilly.ml/high-performance-tensorflow/notebooks/03_Train_Model_CPU.ipynb
shareactorIO/pipeline
apache-2.0
Look at the Model On Disk You must replace [version] with the version number
%%bash ls -l /root/models/linear/cpu/[version]
oreilly.ml/high-performance-tensorflow/notebooks/03_Train_Model_CPU.ipynb
shareactorIO/pipeline
apache-2.0
HACK: Save Model in Previous Model Format We will use this later.
from tensorflow.python.framework import graph_io graph_io.write_graph(sess.graph, "/root/models/optimize_me/", "unoptimized_cpu.pb") sess.close()
oreilly.ml/high-performance-tensorflow/notebooks/03_Train_Model_CPU.ipynb
shareactorIO/pipeline
apache-2.0
Be careful! while Loop
i=0 while i<10: print(i) i+=1
Python-02.ipynb
ComputationalPhysics2015-IPM/Python-01
gpl-2.0
Quiz Once upon a time, there was a king who wanted lots of soldiers. So he commanded every couple in the country to have children until their first daughter was born; then the family was banned from having any more children. What will be the ratio of boys to girls in this country?
from random import randint children = 0 boy = 0 for i in range(10000): gender = randint(0,1) # boy=1, girl=0 children += 1 while gender != 0: boy += gender gender = randint(0,1) children += 1 print(boy/children)
Python-02.ipynb
ComputationalPhysics2015-IPM/Python-01
gpl-2.0
Control Statments break, continue and pass
for i in range(10): print(i) if i == 5: break for i in range(10): print(i) if i > 5: continue print("Hey") def func(): pass func()
Python-02.ipynb
ComputationalPhysics2015-IPM/Python-01
gpl-2.0
tuple
t = (0,1,'test') print(t) t[0]=1 (1,)
Python-02.ipynb
ComputationalPhysics2015-IPM/Python-01
gpl-2.0
Dictionaries: items, get, keys, pop, update, values
d = {} d['name'] = 'Hamed' d['family name'] = 'Seyed-allaei' d[0]=12 d['a']='' print(d) print(d['name']) print(d[0]) for i,j in d.items(): print(i,j)
Python-02.ipynb
ComputationalPhysics2015-IPM/Python-01
gpl-2.0
set: in, not in, len(), ==, !=, <=, <, |, &, -, ^
a = set(['c', 'a','b','b']) b = set(['c', 'd','e']) print(a,b) a | b a & b a - b b - a a ^ b
Python-02.ipynb
ComputationalPhysics2015-IPM/Python-01
gpl-2.0
List comprehension
l = [] for i in range(10): l.append(i*i) print(l) [i*i for i in range(10)] {i:i**2 for i in range(10)}
Python-02.ipynb
ComputationalPhysics2015-IPM/Python-01
gpl-2.0
Generators next()
def myrange(n): i = 0 while i < n: yield i yield i**2 i+=1 x = myrange(10) type(x) next(x) [i for i in myrange(10)] for i in myrange(10): print(i)
Python-02.ipynb
ComputationalPhysics2015-IPM/Python-01
gpl-2.0
Pandas has great support for datetime objects and general time series analysis operations. We'll be working with an example of predicting the number of airline passengers (in thousands) by month, adapted from this tutorial. First, download this dataset and load it into a Pandas DataFrame by specifying the 'Month' column as the datetime index.
from datetime import datetime dateparse = lambda dates: datetime.strptime(dates, '%Y-%m') data = pd.read_csv('AirPassengers.csv', parse_dates=['Month'], index_col='Month', date_parser=dateparse) print(data.head())
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
Note that Pandas is using the 'Month' column as the Dataframe index.
ts = data["#Passengers"] ts.index
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
We can index into the Dataframe in two ways - either by using a string representation for the index or by constructing a datetime object.
ts['1949-01-01'] from datetime import datetime ts[datetime(1949,1,1)]
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
We can also use the Pandas datetime index support to retrieve entire years, or to slice over a range of dates.
ts['1949'] ts['1949-01-01':'1949-05-01']
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
Finally, let's plot the time series to get an initial visualization of how the series grows.
plt.plot(ts)
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
Stationarity Most of the important results for time series forecasting (including the ARIMA model, which we focus on today) assume that the series is stationary - that is, its statistical properties like mean and variance are constant. However, the graph above certainly isn't stationary, given the obvious growth. Thus, we want to manipulate the time series to make it stationary. This process of reducing a time series to a stationary series is a hallmark of time series analysis and forecasting, as most real-world time series aren't initially stationary. To solve this nonstationarity issue, we can break a time series up into its trend and seasonality. These are the two factors that make a series nonstationary, so the main idea is to remove these factors, operate on the resulting stationary series, then add these factors back in. First, we will take the log of the series to reduce the positive trend. This gives a seemingly linear trend, making it easier to estimate.
ts_log = np.log(ts) plt.plot(ts_log)
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
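A quick numerical companion to the plots: if the rolling mean or rolling standard deviation drifts over time, the series is not stationary. A self-contained sketch on synthetic trending data (the real check here would use ts_log):

```python
import numpy as np
import pandas as pd

# Synthetic series with a linear trend plus a 12-period seasonal cycle.
t = np.arange(48)
series = pd.Series(10 + 0.5 * t + np.sin(2 * np.pi * t / 12))

rolling_mean = series.rolling(window=12).mean()
rolling_std = series.rolling(window=12).std()

# The rolling mean climbs with the trend -- a clear sign of non-stationarity.
drift = rolling_mean.iloc[-1] - rolling_mean.dropna().iloc[0]
```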
A simple moving average is the most basic way to predict the trend of a series, taking advantage of the generally continuous nature of trends. For example, if I told you to predict the number of wins of a basketball team this season, without giving you any information about the team apart from its past record, you would take the average of the team's wins over the last few seasons as a reasonable predictor. The simple moving average operates on this exact principle. Choosing an $n$ element window to average over, the prediction at each point is obtained by taking the average of the last $n$ values. Notice that the moving average is undefined for the first 11 values because we're looking at a 12-value window.
moving_avg = pd.Series(ts_log).rolling(window=12).mean() plt.plot(ts_log) plt.plot(moving_avg, color='red')
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
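The rolling mean above can also be written by hand with a convolution, which makes the window averaging explicit; a minimal sketch on a plain array:

```python
import numpy as np

def moving_average(values, window):
    # Each output point is the mean of `window` consecutive observations,
    # so the result is window - 1 elements shorter than the input.
    weights = np.ones(window) / window
    return np.convolve(values, weights, mode='valid')
```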
You might be unhappy with having to choose a window size. How do we know what window size we want if we don't know much about the data? One solution is to average over all past data, discounting earlier values because they have less predictive power than more recent values. This method is known as exponential smoothing.
expwighted_avg = pd.Series(ts_log).ewm(halflife=12).mean() plt.plot(ts_log) plt.plot(expwighted_avg, color='red')
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
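Under the hood, exponentially weighted averaging is just a recursive blend of the newest observation with the running average. A hand-rolled sketch (using a direct blend weight alpha rather than pandas' halflife parameterization):

```python
import numpy as np

def exp_smooth(values, alpha=0.5):
    # s_t = alpha * x_t + (1 - alpha) * s_{t-1}: recent values dominate,
    # and older values are discounted geometrically.
    smoothed = [values[0]]
    for x in values[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return np.array(smoothed)
```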
Now we can subtract the trend from the original data (eliminating the null values in the case of the simple moving average) to create a new series that is hopefully more stationary. The blue graph represents the exponentially weighted difference, while the red graph represents the simple moving average difference.
ts_exp_moving_avg_diff = ts_log - expwighted_avg ts_log_moving_avg_diff = ts_log - moving_avg ts_log_moving_avg_diff.dropna(inplace=True) plt.plot(ts_exp_moving_avg_diff) plt.plot(ts_log_moving_avg_diff, color='red')
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
Now there is no longer an upward trend, suggesting stationarity. There does seem to be a strong seasonality effect, as the number of passengers is low at the beginning and middle of the year but spikes in the first and third quarters. Dealing with Seasonality One baseline way of dealing with both trend and seasonality at once is differencing: taking a single-step lag (subtracting the previous value of the series from the current value at each step) to represent how the time series grows. Of course, this method can be refined by factoring in more complex lags.
ts_log_diff = ts_log - ts_log.shift() plt.plot(ts_log_diff)
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
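To see why a single-step lag removes trend, note that differencing a perfectly linear series leaves a constant; a toy sketch:

```python
import pandas as pd

# A series with a pure linear trend (slope 2).
s = pd.Series([10.0, 12.0, 14.0, 16.0, 18.0])

# Subtract each value's predecessor; the first element becomes NaN.
diff = s - s.shift()
```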
Another method of dealing with trend and seasonality is separating the two effects, then removing both from the time series to obtain the stationary series. We'll be using the statsmodels module, which you can get via pip by running the following command in the terminal. python -m pip install statsmodels We will use the seasonal decompose tool to separate seasonality from trend.
from statsmodels.tsa.seasonal import seasonal_decompose decomposition = seasonal_decompose(ts_log) trend = decomposition.trend seasonal = decomposition.seasonal residual = decomposition.resid plt.subplot(411) plt.plot(ts_log, label='Original') plt.legend(loc='best') plt.subplot(412) plt.plot(trend, label='Trend') plt.legend(loc='best') plt.subplot(413) plt.plot(seasonal,label='Seasonality') plt.legend(loc='best') plt.subplot(414) plt.plot(residual, label='Residuals') plt.legend(loc='best') plt.tight_layout()
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
Forecasting Using the seasonal decomposition, we were able to separate the trend and seasonality effects, which is great for time series analysis. However, another goal of working with time series is forecasting the future - how do we do that given the tools we've been using and the stationary series we've obtained? The ARIMA (Autoregressive Integrated Moving Average) model, which operates on stationary series, is one of the most commonly used models for time series forecasting. ARIMA, with parameters $p$, $d$, and $q$, combines an Autoregressive Model with a Moving Average model. Let's take a look at what this means. Autoregressive model: the output variable depends linearly on its previous values. The $p$ parameter determines the number of lag terms used in the regression. Formally, $X_t = c + \sum_{i = 1}^p \varphi_iX_{t - i} + \epsilon_t$. Moving average model: generalizes the same concept of moving average we saw earlier - the $q$ parameter determines the order of the model. Formally, $X_t = \mu + \sum_{i = 1}^q \theta_i\epsilon_{t - i}$. Integrated model: the $d$ parameter represents the number of times past values have been subtracted, extending the differencing method described earlier. This integrates the differencing for stationarity into the ARIMA model, allowing it to be fit on non-stationary data. We don't have time to cover the math behind these models in depth, but Wikipedia provides some more detailed descriptions of the AR, MA, ARMA, and ARIMA models. Comparing our model's results (red) to the actual differenced data (blue).
from statsmodels.tsa.arima_model import ARIMA model = ARIMA(ts_log, order=(2, 1, 2)) results_ARIMA = model.fit(disp=-1) plt.plot(ts_log_diff) plt.plot(results_ARIMA.fittedvalues, color='red') plt.title('RSS: %.4f'% sum((results_ARIMA.fittedvalues-ts_log_diff)**2))
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
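To make the autoregressive piece concrete, here is a quick simulation of an AR(1) process, $X_t = c + \varphi X_{t-1} + \epsilon_t$; the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.RandomState(0)  # seeded for reproducibility
c, phi, n_steps = 0.5, 0.8, 500

x = np.zeros(n_steps)
for t in range(1, n_steps):
    # Each value depends linearly on the previous one, plus Gaussian noise.
    x[t] = c + phi * x[t - 1] + rng.normal(scale=0.1)
```

For $|\varphi| < 1$ the process is stationary with long-run mean $c/(1-\varphi)$, here 2.5, which the simulated path hovers around after a short burn-in.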
Now that we have a model for the stationary series that we can use to predict future values, we want to get back to the original series. Note that we won't have a value for the first element because we are working with a one-step lag. The following procedure takes care of that.
predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True) predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum() predictions_ARIMA_log = pd.Series(ts_log.iloc[0], index=ts_log.index) predictions_ARIMA_log = predictions_ARIMA_log.add(predictions_ARIMA_diff_cumsum,fill_value=0)
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
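The reason the cumulative sum undoes differencing: if $d_t = k_t - k_{t-1}$, then $k_t = k_0 + \sum_{i \le t} d_i$. A toy check:

```python
import numpy as np

levels = np.array([5.0, 7.0, 6.5, 9.0])
diffs = np.diff(levels)  # one element shorter than `levels`

# Add the starting level back onto the running sum of differences.
recovered = levels[0] + np.concatenate([[0.0], np.cumsum(diffs)])
```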
Now, we can plot the prediction (green) against the actual data. Note that the prediction model captures the seasonality and trend of the original series. It's not perfect, and additional steps can be taken to tune the model. The important takeaway from this workshop is the general time series procedure of separating the time series into trend and seasonality effects, and understanding how to work with time series in Pandas.
predictions_ARIMA = np.exp(predictions_ARIMA_log) plt.plot(ts) plt.plot(predictions_ARIMA) plt.title('RMSE: %.4f'% np.sqrt(sum((predictions_ARIMA-ts)**2)/len(ts)))
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
Challenge: ARIMA Tuning This is an open ended challenge. There aren't any right or wrong answers, we'd just like to see how you would approach tuning the ARIMA model. As you can see above, the ARIMA predictions could certainly use some tuning. Try manually tuning $p$, $d$, and $q$ and see how that changes the ARIMA predictions. How would you use the AR, MA, and ARMA models individually using the ARIMA model? Do these results match what you would expect from these individual models? Can you automate this process to find the parameters that minimize RMSE? Do you see any issues with tuning $p$, $d$ and $q$ this way?
# TODO: adjust the p, d, and q parameters to model the AR, MA, and ARMA models. Then, adjust these parameters to optimally tune the ARIMA model. test_model = ARIMA(ts_log, order=(2, 1, 2)) test_results_ARIMA = test_model.fit(disp=-1) test_predictions_ARIMA_diff = pd.Series(test_results_ARIMA.fittedvalues, copy=True) test_predictions_ARIMA_diff_cumsum = test_predictions_ARIMA_diff.cumsum() test_predictions_ARIMA_log = pd.Series(ts_log.iloc[0], index=ts_log.index) test_predictions_ARIMA_log = test_predictions_ARIMA_log.add(test_predictions_ARIMA_diff_cumsum,fill_value=0) test_predictions_ARIMA = np.exp(test_predictions_ARIMA_log) plt.plot(ts) plt.plot(test_predictions_ARIMA) plt.title('RMSE: %.4f'% np.sqrt(sum((test_predictions_ARIMA-ts)**2)/len(ts)))
4/0-Time-Series-Analysis.ipynb
dataventures/workshops
mit
1. What are truncated distributions? <a class="anchor" id="1"></a> The support of a probability distribution is the set of values in the domain with non-zero probability. For example, the support of the normal distribution is the whole real line (even if the density gets very small as we move away from the mean, technically speaking, it is never quite zero). The support of the uniform distribution, as coded in jax.random.uniform with the default arguments, is the interval $[0, 1)$, because any value outside of that interval has zero probability. The support of the Poisson distribution is the set of non-negative integers, etc. Truncating a distribution makes its support smaller so that any value outside our desired domain has zero probability. In practice, this can be useful for modelling situations in which certain biases are introduced during data collection. For example, some physical detectors only get triggered when the signal is above some minimum threshold, or sometimes the detectors fail if the signal exceeds a certain value. As a result, the observed values are constrained to be within a limited range of values, even though the true signal does not have the same constraints. See, for example, section 3.1 of Information Theory, Inference, and Learning Algorithms by David MacKay. Naively, if $S$ is the support of the original density $p_Y(y)$, then by truncating to a new support $T\subset S$ we are effectively defining a new random variable $Z$ for which the density is $$ \begin{align} p_Z(z) \propto \begin{cases} p_Y(z) & \text{if $z$ is in $T$}\\ 0 & \text{if $z$ is outside $T$} \end{cases} \end{align} $$ The reason for writing a $\propto$ (proportional to) sign instead of a strict equation is that, defined in the above way, the resulting function does not integrate to $1$ and so it cannot be strictly considered a probability density.
To make it into a probability density we need to re-distribute the truncated mass among the part of the distribution that remains. To do this, we simply re-weight every point by the same constant: $$ \begin{align} p_Z(z) = \begin{cases} \frac{1}{M}p_Y(z) & \text{if $z$ is in $T$}\\ 0 & \text{if $z$ is outside $T$} \end{cases} \end{align} $$ where $M = \int_T p_Y(y)\mathrm{d}y$. In practice, the truncation is often one-sided. This means that if, for example, the support before truncation is the interval $(a, b)$, then the support after truncation is of the form $(a, c)$ or $(c, b)$, with $a < c < b$. The figure below illustrates a left-sided truncation at zero of a normal distribution $N(1, 1)$. <figure> <img src="https://i.ibb.co/6vHyFfq/truncated-normal.png" alt="truncated" width="900"/> </figure> The original distribution (left side) is truncated at the vertical dotted line. The truncated mass (orange region) is redistributed in the new support (right side image) so that the total area under the curve remains equal to 1 even after truncation. This method of re-weighting ensures that the density ratio between any two points, $p(a)/p(b)$, remains the same before and after the reweighting is done (as long as the points are inside the new support, of course). Note: Truncated data is different from censored data. Censoring also hides values that are outside some desired support but, contrary to truncated data, we know when a value has been censored. The typical example is the household scale which does not report values above 300 pounds. Censored data will not be covered in this tutorial. 2. What is a folded distribution? <a class="anchor" id="2"></a> Folding is achieved by taking the absolute value of a random variable, $Z = \lvert Y \rvert$.
This obviously modifies the support of the original distribution since negative values now have zero probability: $$ \begin{align} p_Z(z) = \begin{cases} p_Y(z) + p_Y(-z) & \text{if $z\ge 0$}\\ 0 & \text{if $z\lt 0$} \end{cases} \end{align} $$ The figure below illustrates a folded normal distribution $N(1, 1)$. <figure> <img src="https://i.ibb.co/3d2xJbc/folded-normal.png" alt="folded" width="900"/> </figure> As you can see, the resulting distribution is different from the truncated case. In particular, the density ratio between points, $p(a)/p(b)$, is in general not the same after folding. For some examples in which folding is relevant see references 3 and 4. If the original distribution is symmetric around zero, then folding and truncating at zero have the same effect. 3. Sampling from truncated and folded distributions <a class="anchor" id="3"></a> Truncated distributions Usually, we already have a sampler for the pre-truncated distribution (e.g. np.random.normal). So, a seemingly simple way of generating samples from the truncated distribution would be to sample from the original distribution, and then discard the samples that are outside the desired support. For example, if we wanted samples from a normal distribution truncated to the support $(-\infty, 1)$, we'd simply do: python upper = 1 samples = np.random.normal(size=1000) truncated_samples = samples[samples < upper] This is called rejection sampling but it is not very efficient. If the region we truncated had a sufficiently high probability mass, then we'd be discarding a lot of samples and it might be a while before we accumulate sufficient samples for the truncated distribution. For example, the above snippet would only result in approximately 840 truncated samples even though we initially drew 1000. This can easily get a lot worse for other combinations of parameters. A more efficient approach is to use a method known as inverse transform sampling.
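For the right-truncated normal above, inverse transform sampling can be sketched with scipy (assumed available here; NumPyro itself builds on JAX):

```python
import numpy as np
from scipy import stats

upper = 1.0
rng = np.random.RandomState(0)

# Draw uniforms on (0, Phi(upper)) and push them through the normal ICDF;
# every draw lands inside the truncated support, so nothing is rejected.
u = rng.uniform(low=0.0, high=stats.norm.cdf(upper), size=1000)
truncated_samples = stats.norm.ppf(u)
```

Unlike the rejection-sampling snippet, all 1000 draws survive, and each sample is guaranteed to be below the truncation point.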
In this method, we first sample from a uniform distribution on (0, 1) and then transform those samples with the inverse cumulative distribution of our truncated distribution. This method ensures that no samples are wasted in the process, though it does have the slight complication that we need to calculate the inverse CDF (ICDF) of our truncated distribution. This might sound too complicated at first but, with a bit of algebra, we can often calculate the truncated ICDF in terms of the untruncated ICDF. The untruncated ICDF for many distributions is already available.

Folded distributions

This case is a lot simpler. Since we already have a sampler for the pre-folded distribution, all we need to do is to take the absolute value of those samples:

```python
samples = np.random.normal(size=1000)
folded_samples = np.abs(samples)
```

4. Ready to use truncated and folded distributions <a class="anchor" id="4"></a>

The later sections in this tutorial will show you how to construct your own truncated and folded distributions, but you don't have to reinvent the wheel. NumPyro has a bunch of truncated distributions already implemented.

Suppose, for example, that you want a normal distribution truncated on the right. For that purpose, we use the TruncatedNormal distribution. The parameters of this distribution are loc and scale, corresponding to the loc and scale of the untruncated normal, and low and/or high, corresponding to the truncation points. Importantly, low and high are keyword-only arguments; only loc and scale are valid as positional arguments. This is how you can use this class in a model:
def truncated_normal_model(num_observations, high, x=None): loc = numpyro.sample("loc", dist.Normal()) scale = numpyro.sample("scale", dist.LogNormal()) with numpyro.plate("observations", num_observations): numpyro.sample("x", TruncatedNormal(loc, scale, high=high), obs=x)
notebooks/source/truncated_distributions.ipynb
pyro-ppl/numpyro
apache-2.0
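Before moving on, section 3's claim that rejection sampling keeps only about 840 of 1000 draws for a standard normal truncated at 1 is easy to check with the standard library alone. This is a dependency-free sketch, not part of the NumPyro workflow:

```python
import random

random.seed(0)
upper = 1.0
n = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]
kept = [s for s in samples if s < upper]

# acceptance rate should be close to Phi(1) ~= 0.8413,
# i.e. roughly 16% of the draws are wasted
rate = len(kept) / n
assert 0.83 < rate < 0.85
```

With less favorable parameters (e.g. truncating far into a tail), the acceptance rate drops sharply, which is what motivates the inverse-transform approach.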
Let's now check that we can use this model in a typical MCMC workflow.

Prior simulation
high = 1.2 num_observations = 250 num_prior_samples = 100 prior = Predictive(truncated_normal_model, num_samples=num_prior_samples) prior_samples = prior(PRIOR_RNG, num_observations, high)
Inference

To test our model, we run MCMC against some synthetic data. The synthetic data can be any arbitrary sample from the prior simulation.
# -- select an arbitrary prior sample as true data true_idx = 0 true_loc = prior_samples["loc"][true_idx] true_scale = prior_samples["scale"][true_idx] true_x = prior_samples["x"][true_idx] plt.hist(true_x.copy(), bins=20) plt.axvline(high, linestyle=":", color="k") plt.xlabel("x") plt.show() # --- Run MCMC and check estimates and diagnostics mcmc = MCMC(NUTS(truncated_normal_model), **MCMC_KWARGS) mcmc.run(MCMC_RNG, num_observations, high, true_x) mcmc.print_summary() # --- Compare to ground truth print(f"True loc : {true_loc:3.2}") print(f"True scale: {true_scale:3.2}")
Removing the truncation

Once we have inferred the parameters of our model, a common task is to understand what the data would look like without the truncation. In this example, this is easily done by simply "pushing" the value of high to infinity.
pred = Predictive(truncated_normal_model, posterior_samples=mcmc.get_samples()) pred_samples = pred(PRED_RNG, num_observations, high=float("inf"))
Let's finally plot these samples and compare them to the original, observed data.
# thin the samples to not saturate matplotlib samples_thinned = pred_samples["x"].ravel()[::1000] f, axes = plt.subplots(1, 2, figsize=(15, 5), sharex=True) axes[0].hist( samples_thinned.copy(), label="Untruncated posterior", bins=20, density=True ) axes[0].set_title("Untruncated posterior") vals, bins, _ = axes[1].hist( samples_thinned[samples_thinned < high].copy(), label="Tail of untruncated posterior", bins=10, density=True, ) axes[1].hist( true_x.copy(), bins=bins, label="Observed, truncated data", density=True, alpha=0.5 ) axes[1].set_title("Comparison to observed data") for ax in axes: ax.axvline(high, linestyle=":", color="k", label="Truncation point") ax.legend() plt.show()
The plot on the left shows data simulated from the posterior distribution with the truncation removed, so we are able to see what the data would look like if it were not truncated. To sense-check this, we discard the simulated samples that are above the truncation point, make a histogram of those, and compare it to a histogram of the true data (right plot).

The TruncatedDistribution class

The source code for TruncatedNormal in NumPyro uses a class called TruncatedDistribution, which abstracts away the logic for sample and log_prob that we will discuss in the next sections. At the moment, though, this logic only works for continuous, symmetric distributions with real support.

We can use this class to quickly construct other truncated distributions. For example, if we need a truncated SoftLaplace we can use the following pattern:
def TruncatedSoftLaplace( loc=0.0, scale=1.0, *, low=None, high=None, validate_args=None ): return TruncatedDistribution( base_dist=SoftLaplace(loc, scale), low=low, high=high, validate_args=validate_args, ) def truncated_soft_laplace_model(num_observations, high, x=None): loc = numpyro.sample("loc", dist.Normal()) scale = numpyro.sample("scale", dist.LogNormal()) with numpyro.plate("obs", num_observations): numpyro.sample("x", TruncatedSoftLaplace(loc, scale, high=high), obs=x)
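The cdf/icdf algebra behind TruncatedDistribution works for any base distribution with a closed-form CDF. As a dependency-free illustration (using an ordinary standard Laplace rather than NumPyro's SoftLaplace, so everything fits in a few lines), we can right-truncate at high by mapping uniforms through $F_Y^{-1}(Mu)$:

```python
import math
import random

# standard Laplace: closed-form cdf and icdf
def laplace_cdf(x):
    return 0.5 * math.exp(x) if x < 0 else 1.0 - 0.5 * math.exp(-x)

def laplace_icdf(u):
    return math.log(2.0 * u) if u < 0.5 else -math.log(2.0 * (1.0 - u))

random.seed(0)
high = 2.3
M = laplace_cdf(high)  # mass kept after truncating to (-inf, high)
# guard u away from exactly 0, mirroring the minval trick used by NumPyro's sampler
draws = [laplace_icdf(M * max(random.random(), 1e-12)) for _ in range(2000)]
assert max(draws) < high  # every draw lands inside the truncated support
```

Since $u < 1$ and the ICDF is strictly increasing, $F_Y^{-1}(Mu) < F_Y^{-1}(M) = $ high, so no draw ever escapes the support.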
And, as before, we check that we can use this model in the steps of a typical workflow:
high = 2.3 num_observations = 200 num_prior_samples = 100 prior = Predictive(truncated_soft_laplace_model, num_samples=num_prior_samples) prior_samples = prior(PRIOR_RNG, num_observations, high) true_idx = 0 true_x = prior_samples["x"][true_idx] true_loc = prior_samples["loc"][true_idx] true_scale = prior_samples["scale"][true_idx] mcmc = MCMC( NUTS(truncated_soft_laplace_model), **MCMC_KWARGS, ) mcmc.run( MCMC_RNG, num_observations, high, true_x, ) mcmc.print_summary() print(f"True loc : {true_loc:3.2}") print(f"True scale: {true_scale:3.2}")
Important

The sample method of the TruncatedDistribution class relies on inverse-transform sampling. This has the implicit requirement that the base distribution should have an icdf method already available. If this is not the case, we will not be able to call the sample method on any instance of our distribution, nor use it with the Predictive class. However, the log_prob method only depends on the cdf method (which is more frequently available than the icdf). If the log_prob method is available, then we can use our distribution as a prior/likelihood in a model.

The FoldedDistribution class

Similar to truncated distributions, NumPyro has the FoldedDistribution class to help you quickly construct folded distributions. Popular examples of folded distributions are the so-called "half-normal", "half-student" or "half-cauchy". As the name suggests, these distributions keep only (the positive) half of the distribution. Implicit in the name of these "half" distributions is that they are centered at zero before folding. But, of course, you can fold a distribution even if it's not centered at zero. For instance, this is how you would define a folded Student-t distribution.
def FoldedStudentT(df, loc=0.0, scale=1.0): return FoldedDistribution(StudentT(df, loc=loc, scale=scale)) def folded_student_model(num_observations, x=None): df = numpyro.sample("df", dist.Gamma(6, 2)) loc = numpyro.sample("loc", dist.Normal()) scale = numpyro.sample("scale", dist.LogNormal()) with numpyro.plate("obs", num_observations): numpyro.sample("x", FoldedStudentT(df, loc, scale), obs=x)
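As a quick sanity check of the folding formula from section 2, the dependency-free sketch below verifies numerically that $p_Z(z) = p_Y(z) + p_Y(-z)$ still integrates to one for a folded $N(1, 1)$ (the integration bounds and step are arbitrary choices for illustration):

```python
import math

def norm_pdf(x, loc=1.0, scale=1.0):
    z = (x - loc) / scale
    return math.exp(-0.5 * z * z) / (scale * math.sqrt(2.0 * math.pi))

def folded_pdf(z):
    # density of |Y| for Y ~ N(1, 1): mass at -z is folded onto +z
    return norm_pdf(z) + norm_pdf(-z)

# left Riemann sum over [0, 12], which carries essentially all the mass
step = 1e-4
total = sum(folded_pdf(i * step) * step for i in range(int(12 / step)))
assert abs(total - 1.0) < 1e-3
```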
And we check that we can use our distribution in a typical workflow:
# --- prior sampling num_observations = 500 num_prior_samples = 100 prior = Predictive(folded_student_model, num_samples=num_prior_samples) prior_samples = prior(PRIOR_RNG, num_observations) # --- choose any prior sample as the ground truth true_idx = 0 true_df = prior_samples["df"][true_idx] true_loc = prior_samples["loc"][true_idx] true_scale = prior_samples["scale"][true_idx] true_x = prior_samples["x"][true_idx] # --- do inference with MCMC mcmc = MCMC( NUTS(folded_student_model), **MCMC_KWARGS, ) mcmc.run(MCMC_RNG, num_observations, true_x) # --- Check diagostics mcmc.print_summary() # --- Compare to ground truth: print(f"True df : {true_df:3.2f}") print(f"True loc : {true_loc:3.2f}") print(f"True scale: {true_scale:3.2f}")
5. Building your own truncated distribution <a class="anchor" id="5"></a>

If the TruncatedDistribution and FoldedDistribution classes are not sufficient to solve your problem, you might want to look into writing your own truncated distribution from the ground up. This can be a tedious process, so this section will give you some guidance and examples to help you with it.

5.1 Recap of NumPyro distributions <a class="anchor" id="5.1"></a>

A NumPyro distribution should subclass Distribution and implement a few basic ingredients:

Class attributes

The class attributes serve a few different purposes. Here we will mainly care about two:
1. arg_constraints: Impose some requirements on the parameters of the distribution. Errors are raised at instantiation time if the parameters passed do not satisfy the constraints.
2. support: It is used in some inference algorithms like MCMC and SVI with auto-guides, where we need to perform the algorithm in the unconstrained space. Knowing the support, we can automatically reparametrize things under the hood.

We'll explain other class attributes as we go.

The __init__ method

This is where we define the parameters of the distribution. We also use jax and lax to promote the parameters to shapes that are valid for broadcasting. The __init__ method of the parent class is also required because that's where the validation of our parameters is done.

The log_prob method

Implementing the log_prob method ensures that we can do inference. As the name suggests, this method returns the logarithm of the density evaluated at the argument.

The sample method

This method is used for drawing independent samples from our distribution. It is particularly useful for doing prior and posterior predictive checks. Note, in particular, that this method is not needed if you only need to use your distribution as a prior in a model - the log_prob method will suffice.
The placeholder code for any of our implementations can be written as:

```python
class MyDistribution(Distribution):
    # class attributes
    arg_constraints = {}
    support = None

    def __init__(self):
        pass

    def log_prob(self, value):
        pass

    def sample(self, key, sample_shape=()):
        pass
```

5.2 Example: Right-truncated normal <a class="anchor" id="5.2"></a>

We are going to modify a normal distribution so that its new support is of the form (-inf, high), with high a real number. This could be done with the TruncatedNormal distribution but, for the sake of illustration, we are not going to rely on it. We'll call our distribution RightTruncatedNormal. Let's write the skeleton code and then proceed to fill in the blanks.

```python
class RightTruncatedNormal(Distribution):
    # <class attributes>
    def __init__(self):
        pass

    def log_prob(self, value):
        pass

    def sample(self, key, sample_shape=()):
        pass
```

Class attributes

Remember that a non-truncated normal distribution is specified in NumPyro by two parameters, loc and scale, which correspond to the mean and standard deviation. Looking at the source code for the Normal distribution we see the following lines:

```python
arg_constraints = {"loc": constraints.real, "scale": constraints.positive}
support = constraints.real
reparametrized_params = ["loc", "scale"]
```

The reparametrized_params attribute is used by variational inference algorithms when constructing gradient estimators. The parameters of many common distributions with continuous support (e.g. the Normal distribution) are reparameterizable, while the parameters of discrete distributions are not. Note that reparametrized_params is irrelevant for MCMC algorithms like HMC. See SVI Part III for more details.

We must adapt these attributes to our case by including the "high" parameter, but there are two issues we need to deal with:

constraints.real is a bit too restrictive.
We'd like jnp.inf to be a valid value for high (equivalent to no truncation), but at the moment infinity is not a valid real number. We deal with this situation by defining our own constraint. The source code for constraints.real is easy to imitate:

```python
class _RightExtendedReal(constraints.Constraint):
    """
    Any number in the interval (-inf, inf].
    """

    def __call__(self, x):
        return (x == x) & (x != float("-inf"))

    def feasible_like(self, prototype):
        return jnp.zeros_like(prototype)


right_extended_real = _RightExtendedReal()
```

support can no longer be a class attribute, as it will depend on the value of high. So instead we implement it as a dependent property. Our distribution then looks as follows:

```python
class RightTruncatedNormal(Distribution):
    arg_constraints = {
        "loc": constraints.real,
        "scale": constraints.positive,
        "high": right_extended_real,
    }
    reparametrized_params = ["loc", "scale", "high"]

    # ...

    @constraints.dependent_property
    def support(self):
        return constraints.less_than(self.high)
```

The __init__ method

Once again we take inspiration from the source code for the normal distribution. The key point is the use of lax and jax to check the shapes of the arguments passed and make sure that such shapes are consistent for broadcasting. We follow the same pattern for our use case -- all we need to do is include the high parameter.

In the source implementation of Normal, both parameters loc and scale are given defaults so that one recovers a standard normal distribution if no arguments are specified. In the same spirit, we choose float("inf") as a default for high, which is equivalent to no truncation.

```python
    # ...
    def __init__(self, loc=0.0, scale=1.0, high=float("inf"), validate_args=None):
        batch_shape = lax.broadcast_shapes(
            jnp.shape(loc),
            jnp.shape(scale),
            jnp.shape(high),
        )
        self.loc, self.scale, self.high = promote_shapes(loc, scale, high)
        super().__init__(batch_shape, validate_args=validate_args)
    # ...
```

The log_prob method

For a truncated distribution, the log density is given by

$$
\begin{align}
\log p_Z(z) =
\begin{cases}
\log p_Y(z) - \log M & \text{if $z$ is in $T$}\\
-\infty & \text{if $z$ is outside $T$}\\
\end{cases}
\end{align}
$$

where, again, $p_Z$ is the density of the truncated distribution, $p_Y$ is the density before truncation, and $M = \int_T p_Y(y)\mathrm{d}y$. For the specific case of truncating the normal distribution to the interval (-inf, high), the constant $M$ is equal to the cumulative density evaluated at the truncation point.

We can easily implement this log-density method because jax.scipy.stats already has a norm module that we can use:

```python
    # ...
    def log_prob(self, value):
        log_m = norm.logcdf(self.high, self.loc, self.scale)
        log_p = norm.logpdf(value, self.loc, self.scale)
        return jnp.where(value < self.high, log_p - log_m, -jnp.inf)
    # ...
```

The sample method

To implement the sample method using inverse-transform sampling, we need to also implement the inverse cumulative distribution function. For this, we can use the ndtri function that lives inside jax.scipy.special. This function returns the inverse CDF for the standard normal distribution. We can do a bit of algebra to obtain the inverse CDF of the truncated, non-standard normal.

First recall that if $X\sim Normal(0, 1)$ and $Y = \mu + \sigma X$, then $Y\sim Normal(\mu, \sigma)$. Then if $Z$ is the truncated $Y$, its cumulative density is given by:

$$
\begin{align}
F_Z(y) &= \int_{-\infty}^{y}p_Z(r)\mathrm{d}r\\
&= \frac{1}{M}\int_{-\infty}^{y}p_Y(s)\mathrm{d}s \quad\text{if $y < high$} \\
&= \frac{1}{M}F_Y(y)
\end{align}
$$

And so its inverse is

$$
\begin{align}
F_Z^{-1}(u) = \left(\frac{1}{M}F_Y\right)^{-1}(u) = F_Y^{-1}(M u) = F_{\mu + \sigma X}^{-1}(Mu) = \mu + \sigma F_X^{-1}(Mu)
\end{align}
$$

The translation of the above math into code is:

```python
    # ...
    def sample(self, key, sample_shape=()):
        shape = sample_shape + self.batch_shape
        minval = jnp.finfo(jnp.result_type(float)).tiny
        u = random.uniform(key, shape, minval=minval)
        return self.icdf(u)

    def icdf(self, u):
        m = norm.cdf(self.high, self.loc, self.scale)
        return self.loc + self.scale * ndtri(m * u)
```

With everything in place, the final implementation is as below.
class _RightExtendedReal(constraints.Constraint): """ Any number in the interval (-inf, inf]. """ def __call__(self, x): return (x == x) & (x != float("-inf")) def feasible_like(self, prototype): return jnp.zeros_like(prototype) right_extended_real = _RightExtendedReal() class RightTruncatedNormal(Distribution): """ A truncated Normal distribution. :param numpy.ndarray loc: location parameter of the untruncated normal :param numpy.ndarray scale: scale parameter of the untruncated normal :param numpy.ndarray high: point at which the truncation happens """ arg_constraints = { "loc": constraints.real, "scale": constraints.positive, "high": right_extended_real, } reparametrized_params = ["loc", "scale", "high"] def __init__(self, loc=0.0, scale=1.0, high=float("inf"), validate_args=True): batch_shape = lax.broadcast_shapes( jnp.shape(loc), jnp.shape(scale), jnp.shape(high), ) self.loc, self.scale, self.high = promote_shapes(loc, scale, high) super().__init__(batch_shape, validate_args=validate_args) def log_prob(self, value): log_m = norm.logcdf(self.high, self.loc, self.scale) log_p = norm.logpdf(value, self.loc, self.scale) return jnp.where(value < self.high, log_p - log_m, -jnp.inf) def sample(self, key, sample_shape=()): shape = sample_shape + self.batch_shape minval = jnp.finfo(jnp.result_type(float)).tiny u = random.uniform(key, shape, minval=minval) return self.icdf(u) def icdf(self, u): m = norm.cdf(self.high, self.loc, self.scale) return self.loc + self.scale * ndtri(m * u) @constraints.dependent_property def support(self): return constraints.less_than(self.high)
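Before plugging the class into a model, the $\log p - \log M$ formula it implements can be sanity-checked with a dependency-free numerical integration (parameter values below are arbitrary, chosen only for illustration):

```python
import math

def norm_pdf(x, loc, scale):
    z = (x - loc) / scale
    return math.exp(-0.5 * z * z) / (scale * math.sqrt(2.0 * math.pi))

def norm_cdf(x, loc, scale):
    return 0.5 * (1.0 + math.erf((x - loc) / (scale * math.sqrt(2.0))))

loc, scale, high = 0.5, 1.3, 1.0
M = norm_cdf(high, loc, scale)  # mass kept by the truncation

# truncated density pdf(x) / M on (-inf, high); left Riemann sum over (-20, high)
step = 1e-4
lo = -20.0
n = int((high - lo) / step)
total = sum(norm_pdf(lo + i * step, loc, scale) / M * step for i in range(n))
assert abs(total - 1.0) < 1e-3  # the re-weighted density integrates to one
```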
Let's try it out!
def truncated_normal_model(num_observations, x=None): loc = numpyro.sample("loc", dist.Normal()) scale = numpyro.sample("scale", dist.LogNormal()) high = numpyro.sample("high", dist.Normal()) with numpyro.plate("observations", num_observations): numpyro.sample("x", RightTruncatedNormal(loc, scale, high), obs=x) num_observations = 1000 num_prior_samples = 100 prior = Predictive(truncated_normal_model, num_samples=num_prior_samples) prior_samples = prior(PRIOR_RNG, num_observations)
As before, we run MCMC against some synthetic data. We select any random sample from the prior as the ground truth:
true_idx = 0 true_loc = prior_samples["loc"][true_idx] true_scale = prior_samples["scale"][true_idx] true_high = prior_samples["high"][true_idx] true_x = prior_samples["x"][true_idx] plt.hist(true_x.copy()) plt.axvline(true_high, linestyle=":", color="k") plt.xlabel("x") plt.show()
Run MCMC and check the estimates:
mcmc = MCMC(NUTS(truncated_normal_model), **MCMC_KWARGS) mcmc.run(MCMC_RNG, num_observations, true_x) mcmc.print_summary()
Compare estimates against the ground truth:
print(f"True high : {true_high:3.2f}") print(f"True loc : {true_loc:3.2f}") print(f"True scale: {true_scale:3.2f}")
Note that, even though we can recover good estimates for the true values, we had a very high number of divergences. These divergences happen because the data can fall outside of the support allowed by our priors. To fix this, we can change the prior on high so that it depends on the observations:
def truncated_normal_model_2(num_observations, x=None): loc = numpyro.sample("loc", dist.Normal()) scale = numpyro.sample("scale", dist.LogNormal()) if x is None: high = numpyro.sample("high", dist.Normal()) else: # high is greater or equal to the max value in x: delta = numpyro.sample("delta", dist.HalfNormal()) high = numpyro.deterministic("high", delta + x.max()) with numpyro.plate("observations", num_observations): numpyro.sample("x", RightTruncatedNormal(loc, scale, high), obs=x) mcmc = MCMC(NUTS(truncated_normal_model_2), **MCMC_KWARGS) mcmc.run(MCMC_RNG, num_observations, true_x) mcmc.print_summary(exclude_deterministic=False)
And the divergences are gone. In practice, we usually want to understand what the data would look like without the truncation. To do that in NumPyro, there is no need to write a separate model; we can simply rely on the condition handler to push the truncation point to infinity:
model_without_truncation = numpyro.handlers.condition( truncated_normal_model, {"high": float("inf")}, ) estimates = mcmc.get_samples().copy() estimates.pop("high") # Drop to make sure these are not used pred = Predictive( model_without_truncation, posterior_samples=estimates, ) pred_samples = pred(PRED_RNG, num_observations=1000) # thin the samples for a faster histogram samples_thinned = pred_samples["x"].ravel()[::1000] f, axes = plt.subplots(1, 2, figsize=(15, 5)) axes[0].hist( samples_thinned.copy(), label="Untruncated posterior", bins=20, density=True ) axes[0].axvline(true_high, linestyle=":", color="k", label="Truncation point") axes[0].set_title("Untruncated posterior") axes[0].legend() axes[1].hist( samples_thinned[samples_thinned < true_high].copy(), label="Tail of untruncated posterior", bins=20, density=True, ) axes[1].hist(true_x.copy(), label="Observed, truncated data", density=True, alpha=0.5) axes[1].axvline(true_high, linestyle=":", color="k", label="Truncation point") axes[1].set_title("Comparison to observed data") axes[1].legend() plt.show()
5.3 Example: Left-truncated Poisson <a class="anchor" id="5.3"></a>

As a final example, we now implement a left-truncated Poisson distribution. Note that a right-truncated Poisson could be reformulated as a particular case of a categorical distribution, so we focus on the less trivial case.

Class attributes

For a truncated Poisson we need two parameters: the rate of the original Poisson distribution and a low parameter to indicate the truncation point. As this is a discrete distribution, we need to clarify whether or not the truncation point is included in the support. In this tutorial, we'll take the convention that the truncation point low is part of the support.

The low parameter has to be given a "non-negative integer" constraint. As it is a discrete parameter, it will not be possible to do inference for this parameter using NUTS. This is likely not a problem, since the truncation point is often known in advance. However, if we really must infer the low parameter, it is possible to do so with DiscreteHMCGibbs, though one is limited to using priors with enumerate support.

Like in the case of the truncated normal, the support of this distribution will be defined as a property and not as a class attribute, because it depends on the specific value of the low parameter.

```python
class LeftTruncatedPoisson(Distribution):
    arg_constraints = {
        "low": constraints.nonnegative_integer,
        "rate": constraints.positive,
    }

    # ...

    @constraints.dependent_property(is_discrete=True)
    def support(self):
        return constraints.integer_greater_than(self.low - 1)
```

The is_discrete argument passed to the dependent_property decorator is used to tell the inference algorithms which variables are discrete latent variables.

The __init__ method

Here we just follow the same pattern as in the previous example:

```python
    # ...
    def __init__(self, rate=1.0, low=0, validate_args=None):
        batch_shape = lax.broadcast_shapes(jnp.shape(low), jnp.shape(rate))
        self.low, self.rate = promote_shapes(low, rate)
        super().__init__(batch_shape, validate_args=validate_args)
    # ...
```

The log_prob method

The logic is very similar to the truncated normal case. But this time we are truncating on the left, so the correct normalization is the complementary cumulative density:

$$
\begin{align}
M = \sum_{n=L}^{\infty} p_Y(n) = 1 - \sum_{n=0}^{L - 1} p_Y(n) = 1 - F_Y(L - 1)
\end{align}
$$

For the code, we can rely on the poisson module that lives inside jax.scipy.stats:

```python
    # ...
    def log_prob(self, value):
        m = 1 - poisson.cdf(self.low - 1, self.rate)
        log_p = poisson.logpmf(value, self.rate)
        return jnp.where(value >= self.low, log_p - jnp.log(m), -jnp.inf)
    # ...
```

The sample method

Inverse-transform sampling also works for discrete distributions, the "inverse" CDF of a discrete distribution being defined as:

$$
\begin{align}
F^{-1}(u) = \min\left\{n\in \mathbb{N} \mid F(n) \ge u\right\}
\end{align}
$$

Or, in plain English, $F^{-1}(u)$ is the smallest number for which the cumulative density is at least $u$. However, there's currently no implementation of $F^{-1}$ for the Poisson distribution in JAX (at least, at the moment of writing this tutorial). We have to rely on our own implementation. Fortunately, we can take advantage of the discrete nature of the distribution and easily implement a "brute-force" version that will work for most cases. The brute-force approach consists of simply scanning all non-negative integers in order, one by one, until the value of the cumulative density reaches or exceeds the argument $u$.
The implicit requirement is that we need a way to evaluate the cumulative density for the truncated distribution, but we can calculate that:

$$
\begin{align}
F_Z(z) &= \sum_{n=0}^z p_Z(n)\\
&= \frac{1}{M}\sum_{n=L}^z p_Y(n)\quad \text{assuming $z \ge L$}\\
&= \frac{1}{M}\left(\sum_{n=0}^z p_Y(n) - \sum_{n=0}^{L-1}p_Y(n)\right)\\
&= \frac{1}{M}\left(F_Y(z) - F_Y(L-1)\right)
\end{align}
$$

And, of course, the value of $F_Z(z)$ is equal to zero if $z < L$. (As in the previous example, we are using $Y$ to denote the original, un-truncated variable, and $Z$ to denote the truncated variable.)

```python
    # ...
    def sample(self, key, sample_shape=()):
        shape = sample_shape + self.batch_shape
        minval = jnp.finfo(jnp.result_type(float)).tiny
        u = random.uniform(key, shape, minval=minval)
        return self.icdf(u)

    def icdf(self, u):
        def cond_fn(val):
            n, cdf = val
            return jnp.any(cdf < u)

        def body_fn(val):
            n, cdf = val
            n_new = jnp.where(cdf < u, n + 1, n)
            return n_new, self.cdf(n_new)

        low = self.low * jnp.ones_like(u)
        cdf = self.cdf(low)
        n, _ = lax.while_loop(cond_fn, body_fn, (low, cdf))
        return n.astype(jnp.result_type(int))

    def cdf(self, value):
        m = 1 - poisson.cdf(self.low - 1, self.rate)
        f = poisson.cdf(value, self.rate) - poisson.cdf(self.low - 1, self.rate)
        return jnp.where(value >= self.low, f / m, 0)
```

A few comments with respect to the above implementation:
* Even with double precision, if rate is much less than low, the above code will not work. Due to numerical limitations, one obtains that poisson.cdf(low - 1, rate) is equal (or very close) to 1.0. This makes it impossible to re-weight the distribution accurately because the normalization constant would be 0.0.
* The brute-force icdf is of course very slow, particularly when rate is high. If you need faster sampling, one option would be to rely on a faster search algorithm.
For example:

```python
    def icdf_faster(self, u):
        num_bins = 200  # Choose a reasonably large value
        bins = jnp.arange(num_bins)
        cdf = self.cdf(bins)
        indices = jnp.searchsorted(cdf, u)
        return bins[indices]
```

The obvious limitation here is that the number of bins has to be fixed a priori (JAX does not allow for dynamically sized arrays). Another option would be to rely on an approximate implementation, as proposed in this article.

Yet another alternative for the icdf is to rely on SciPy's implementation and make use of JAX's host_callback module. This feature allows you to use Python functions without having to code them in JAX. This means that we can simply make use of SciPy's implementation of the Poisson ICDF!

From the last equation, we can write the truncated ICDF as:

$$
\begin{align}
F_Z^{-1}(u) = F_Y^{-1}(Mu + F_Y(L-1))
\end{align}
$$

And in Python:

```python
def scipy_truncated_poisson_icdf(args):  # Note: all arguments are passed inside a tuple
    rate, low, u = args
    rate = np.asarray(rate)
    low = np.asarray(low)
    u = np.asarray(u)
    density = sp_poisson(rate)
    low_cdf = density.cdf(low - 1)
    normalizer = 1.0 - low_cdf
    x = normalizer * u + low_cdf
    return density.ppf(x)
```

In principle, it wouldn't be possible to use the above function in our NumPyro distribution because it is not coded in JAX. The jax.experimental.host_callback.call function solves precisely that problem. The code below shows you how to use it, but keep in mind that this is currently an experimental feature, so you should expect changes to the module. See the host_callback docs for more details.

```python
    # ...
    def icdf_scipy(self, u):
        result_shape = jax.ShapeDtypeStruct(
            u.shape, jnp.result_type(float)  # int type not currently supported
        )
        result = jax.experimental.host_callback.call(
            scipy_truncated_poisson_icdf,
            (self.rate, self.low, u),
            result_shape=result_shape,
        )
        return result.astype(jnp.result_type(int))
    # ...
```

Putting it all together, the implementation is as below:
def scipy_truncated_poisson_icdf(args): # Note: all arguments are passed inside a tuple rate, low, u = args rate = np.asarray(rate) low = np.asarray(low) u = np.asarray(u) density = sp_poisson(rate) low_cdf = density.cdf(low - 1) normalizer = 1.0 - low_cdf x = normalizer * u + low_cdf return density.ppf(x) class LeftTruncatedPoisson(Distribution): """ A truncated Poisson distribution. :param numpy.ndarray low: lower bound at which truncation happens :param numpy.ndarray rate: rate of the Poisson distribution. """ arg_constraints = { "low": constraints.nonnegative_integer, "rate": constraints.positive, } def __init__(self, rate=1.0, low=0, validate_args=None): batch_shape = lax.broadcast_shapes(jnp.shape(low), jnp.shape(rate)) self.low, self.rate = promote_shapes(low, rate) super().__init__(batch_shape, validate_args=validate_args) def log_prob(self, value): m = 1 - poisson.cdf(self.low - 1, self.rate) log_p = poisson.logpmf(value, self.rate) return jnp.where(value >= self.low, log_p - jnp.log(m), -jnp.inf) def sample(self, key, sample_shape=()): shape = sample_shape + self.batch_shape float_type = jnp.result_type(float) minval = jnp.finfo(float_type).tiny u = random.uniform(key, shape, minval=minval) # return self.icdf(u) # Brute force # return self.icdf_faster(u) # For faster sampling. 
return self.icdf_scipy(u) # Using `host_callback` def icdf(self, u): def cond_fn(val): n, cdf = val return jnp.any(cdf < u) def body_fn(val): n, cdf = val n_new = jnp.where(cdf < u, n + 1, n) return n_new, self.cdf(n_new) low = self.low * jnp.ones_like(u) cdf = self.cdf(low) n, _ = lax.while_loop(cond_fn, body_fn, (low, cdf)) return n.astype(jnp.result_type(int)) def icdf_faster(self, u): num_bins = 200 # Choose a reasonably large value bins = jnp.arange(num_bins) cdf = self.cdf(bins) indices = jnp.searchsorted(cdf, u) return bins[indices] def icdf_scipy(self, u): result_shape = jax.ShapeDtypeStruct(u.shape, jnp.result_type(float)) result = jax.experimental.host_callback.call( scipy_truncated_poisson_icdf, (self.rate, self.low, u), result_shape=result_shape, ) return result.astype(jnp.result_type(int)) def cdf(self, value): m = 1 - poisson.cdf(self.low - 1, self.rate) f = poisson.cdf(value, self.rate) - poisson.cdf(self.low - 1, self.rate) return jnp.where(value >= self.low, f / m, 0) @constraints.dependent_property(is_discrete=True) def support(self): return constraints.integer_greater_than(self.low - 1)
notebooks/source/truncated_distributions.ipynb
pyro-ppl/numpyro
apache-2.0
Let's try it out!
def discrete_distplot(samples, ax=None, **kwargs):
    """Utility function for plotting the samples as a barplot."""
    x, y = np.unique(samples, return_counts=True)
    y = y / sum(y)
    if ax is None:
        ax = plt.gca()
    ax.bar(x, y, **kwargs)
    return ax


def truncated_poisson_model(num_observations, x=None):
    low = numpyro.sample("low", dist.Categorical(0.2 * jnp.ones((5,))))
    rate = numpyro.sample("rate", dist.LogNormal(1, 1))
    with numpyro.plate("observations", num_observations):
        numpyro.sample("x", LeftTruncatedPoisson(rate, low), obs=x)
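The plotting helper above just turns raw draws into normalized frequencies before plotting. The same frequency computation in plain Python (the function name here is illustrative) looks like:

```python
from collections import Counter

def empirical_pmf(samples):
    # Count occurrences of each value and normalize to frequencies,
    # mirroring np.unique(..., return_counts=True) followed by y / sum(y).
    counts = Counter(samples)
    n = len(samples)
    return {value: c / n for value, c in sorted(counts.items())}

print(empirical_pmf([2, 3, 3, 5, 3, 2]))  # roughly {2: 0.33, 3: 0.5, 5: 0.17}
```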
Prior samples
# -- prior samples
num_observations = 1000
num_prior_samples = 100
prior = Predictive(truncated_poisson_model, num_samples=num_prior_samples)
prior_samples = prior(PRIOR_RNG, num_observations)
Inference

As in the case of the truncated normal, here it is better to replace the prior on the low parameter so that it is consistent with the observed data. We'd like a categorical prior on low (so that we can use DiscreteHMCGibbs) whose highest category equals the minimum value of x (so that prior and data are consistent). However, we have to be careful in how we write such a model, because JAX does not allow dynamically sized arrays. A simple way of coding this model is to specify the number of categories as an argument:
def truncated_poisson_model(num_observations, x=None, k=5):
    zeros = jnp.zeros((k,))
    low = numpyro.sample("low", dist.Categorical(logits=zeros))
    rate = numpyro.sample("rate", dist.LogNormal(1, 1))
    with numpyro.plate("observations", num_observations):
        numpyro.sample("x", LeftTruncatedPoisson(rate, low), obs=x)


# Take any prior sample as the true process.
true_idx = 6
true_low = prior_samples["low"][true_idx]
true_rate = prior_samples["rate"][true_idx]
true_x = prior_samples["x"][true_idx]
discrete_distplot(true_x.copy());
To do inference, we set k = x.min() + 1. Note also the use of DiscreteHMCGibbs:
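A quick sanity check on that choice, using a hypothetical list of observations rather than the notebook's data: with k = min(x) + 1 the categories are {0, ..., min(x)}, so a sampled low can never exceed the smallest observation.

```python
# Hypothetical observations -- only the arithmetic of k matters here.
observed = [4, 7, 5, 9, 4]
k = min(observed) + 1            # number of categories
categories = list(range(k))      # {0, 1, ..., k - 1}
# The largest admissible truncation point equals the observed minimum,
# so every observation stays inside the support {low, low + 1, ...}.
assert categories[-1] == min(observed)
print(categories)  # [0, 1, 2, 3, 4]
```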
mcmc = MCMC(DiscreteHMCGibbs(NUTS(truncated_poisson_model)), **MCMC_KWARGS)
mcmc.run(MCMC_RNG, num_observations, true_x, k=true_x.min() + 1)
mcmc.print_summary()
true_rate
As before, one needs to be extra careful when estimating the truncation point. If the truncation point is known, it is best to provide it:
model_with_known_low = numpyro.handlers.condition(
    truncated_poisson_model, {"low": true_low}
)
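Conceptually, numpyro.handlers.condition intercepts the named sample site and substitutes the given value instead of drawing from the prior. A toy sketch of that substitution mechanism, purely illustrative and not NumPyro's actual effect-handler machinery:

```python
def make_model(sample):
    # A toy model written against an abstract sample(name, draw_fn).
    low = sample("low", lambda: 1)      # stand-in "prior draws"
    rate = sample("rate", lambda: 2.5)
    return low, rate

def condition(model, fixed):
    # Replace any sample site named in `fixed` with the pinned value;
    # all other sites still call their draw function.
    def sample(name, draw_fn):
        return fixed[name] if name in fixed else draw_fn()
    return lambda: model(sample)

conditioned = condition(make_model, {"low": 4})
print(conditioned())  # (4, 2.5): "low" is pinned, "rate" still drawn
```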