facaiy/book_notes
Mining_of_Massive_Datasets/MapReduce_and_the_New_Software_Stack/note.ipynb
cc0-1.0
plt.imshow(plt.imread('./res/fig2_1.png')) """ Explanation: 2 MapReduce and the New Software Stack "big-data" analysis: manage immense amounts of data quickly. data is extremely regular $\to$ exploit parallelism. new software stack: "distributed file system" $\to$ MapReduce When designing MapReduce algorithms, we often find that the greatest cost is in the communication. 2.1 Distributed File Systems commodity hardware + network 2.1.1 Physical Organization of Compute Nodes cluster computing: the new parallel-computing architecture. racks: compute nodes are stored on racks. communication: The nodes on a single rack are connected by a network, typically gigabit Ethernet. Racks are connected by another level of network or a switch. The bandwidth of inter-rack communication is somewhat greater than the intrarack Ethernet. End of explanation """ plt.imshow(plt.imread('./res/fig2_2.png')) """ Explanation: the solution for component failures (loss of a node or rack): Files must be stored redundantly. Computations must be divided into tasks. 2.1.2 Large-Scale File-System Organization DFS: distributed file system Google File System (GFS) Hadoop Distributed File System (HDFS) CloudStore It is typically used as follows: Files can be enormous, possibly a terabyte in size. Files are rarely updated. manage: Files are divided into chunks. Chunks are replicated at different compute nodes on different racks. master node or name node: a small file used to find the chunks of a file. The master node is itself replicated, and a directory for the file system as a whole knows where to find its copies. The directory itself can be replicated, and all participants using the DFS know where the directory copies are. 2.2 MapReduce All you need to write are two functions, called Map and Reduce. a MapReduce computation executes as follows: Map function: Map tasks turn the given chunk into a sequence of key-value pairs.
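The Map and Reduce functions described above can be illustrated with the classic word-count computation. The following is a self-contained Python sketch (not from the book); a dictionary stands in for the grouping that a real MapReduce framework performs between the two phases:

```python
from collections import defaultdict

# Illustrative input chunks; in a real system each chunk lives in the DFS.
chunks = ["the quick brown fox", "the lazy dog", "the fox"]

def map_fn(chunk):
    # Map: emit a (word, 1) pair for every word in the chunk.
    return [(word, 1) for word in chunk.split()]

def reduce_fn(key, values):
    # Reduce: combine all values associated with one key -- here, by summing.
    return key, sum(values)

# Grouping (shuffle) phase: collect the pairs from all Map tasks, grouped by key.
groups = defaultdict(list)
for chunk in chunks:
    for key, value in map_fn(chunk):
        groups[key].append(value)

counts = dict(reduce_fn(k, vs) for k, vs in groups.items())
# counts is {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

Because word counting is associative and commutative, the same `reduce_fn` could also serve as a combiner inside each Map task.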
The key-value pairs from each Map task are collected by a master controller and sorted by key. divide by key: all pairs with the same key $\to$ the same Reduce task. Reduce function: The Reduce tasks work on one key at a time, and combine all the values associated with that key in some way. End of explanation """ plt.imshow(plt.imread('./res/fig2_3.png')) """ Explanation: 2.2.1 The Map Tasks The Map function takes an input element as its argument and produces zero or more key-value pairs. 2.2.2 Grouping by Key Group: the key-value pairs are grouped by key, which is performed by the system. partition: hash keys to Reduce tasks. 2.2.3 The Reduce Tasks reducer: the application of the Reduce function to a single key and its associated list of values. a Reduce task executes one or more reducers. why not execute each reducer as a separate Reduce task for maximum parallelism? There is overhead associated with each task we create. $\to$ number of Reduce tasks $<$ number of reducers. There is often significant variation in the lengths of the value lists for different keys, so different reducers take different amounts of time. $\to$ skew. if keys are sent randomly to fewer Reduce tasks $\to$ we can expect the times to average out. $\to$ number of Reduce tasks $<$ number of compute nodes. 2.2.4 Combiners If a Reduce function is associative and commutative (the values to be combined can be combined in any order, with the same result), we can push some of what the reducers do to the Map tasks (a combiner). 2.2.5 Details of MapReduce Execution End of explanation """ plt.imshow(plt.imread('./res/fig2_4.png')) """ Explanation: 2.2.6 Coping With Node Failures Master fails $\to$ restart. Map worker fails $\to$ all the Map tasks assigned to that worker will have to be redone, and each Reduce task must also be informed of the change. Reduce worker fails $\to$ its Reduce tasks are rescheduled on another Reduce worker later. 2.2.7 Exercises for Section 2.2 (a) Yes, significant skew exists. Word frequencies in text are far from uniform.
(b) the skew should be insignificant; with 10,000 Reduce tasks it becomes more significant. (c) a combiner can help reduce skew. 2.3 Algorithms Using MapReduce Operations that can use MapReduce effectively: very large matrix-vector multiplications relational-algebra operations 2.3.1 Matrix-Vector Multiplication by MapReduce \begin{align} \mathbf{x}_{n \times 1} &= \mathbf{M}_{n \times n} \mathbf{v}_{n \times 1} \\ x_i &= \displaystyle \sum_{j=1}^n m_{ij} v_j \end{align} We first assume that $n$ is large, but not so large that $\mathbf{v}$ cannot fit in main memory and thus be available to every Map task. Map: read $\mathbf{v}$. produces $(i, m_{ij} v_j)$ Reduce: simply sums all the values and produces pair $(i, x_i)$. 2.3.2 If the Vector $\mathbf{v}$ Cannot Fit in Main Memory We can divide the matrix into vertical stripes of equal width and divide the vector into an equal number of horizontal stripes, of the same height. End of explanation """ plt.imshow(plt.imread('./res/fig2_5.png')) """ Explanation: Each Map task is assigned a chunk from one of the stripes of the matrix and gets the entire corresponding stripe of the vector. A particular application (the PageRank calculation) has an additional constraint that the result vector should be partitioned in the same way as the input vector. We shall see there that the best strategy involves partitioning the matrix $\mathbf{M}$ into square blocks, rather than stripes. 2.3.3 Relational-Algebra Operations There are many operations on data that can be described easily in terms of the common database-query primitives. a relation is a table with column headers called attributes. Rows of the relation are called tuples. The set of attributes of a relation is called its schema. $R(A_1, A_2, \dotsc, A_n)$: the relation name is $R$ and its attributes are $A_1, A_2, \dotsc, A_n$. End of explanation """ plt.imshow(plt.imread('./res/fig2_6.png')) """ Explanation: relational algebra: several standard operations on relations.
Selection $\sigma_{C} (R)$: select tuples that satisfy $C$ Map: produce $(t, t)$ if $t$ satisfies $C$, where $t \in R$. Reduce: simply passes each key-value pair to the output. Projection $\pi_{S} (R)$: produces the subset $S$ of the attributes Map: output $(t', t')$ where $t'$ is the projection of $t$ onto the attributes in $S$. Reduce: eliminate duplicates, turning $(t', [t', t', \dotsc, t'])$ into $(t', t')$. associative and commutative $\to$ combiner Union, Intersection, and Difference Union: Map: turn $t$ into $(t, t)$. Reduce: Produce $(t, t)$ for input $(t, [t])$ or $(t, [t, t])$. Intersection: Map: turn $t$ into $(t, t)$. Reduce: produce $(t, t)$ if input $(t, [t, t])$. Difference $R - S$: Map: produce $(t, R)$ where $t \in R$, or $(t, S)$ where $t \in S$. Reduce: produce $(t, t)$ if input $(t, [R])$. Natural Join $R \bowtie S$ Map: produce $(b, (R, a))$ for $(a, b) \in R$, or $(b, (S, c))$ for $(b, c) \in S$. Reduce: $(a, b, c)$ when input $(b, [(R, a), (S, c)])$. Grouping and Aggregation $\gamma_{X} (R)$: where $X$ consists of: a grouping attribute, an expression $\theta(A)$ Let $R(A, B, C)$, for $\gamma_{A, \theta(B)} (R)$: + Map: produce $(a, b)$ for each tuple $(a, b, c)$. + Reduce: apply the aggregation operator $\theta$ to the list $[b_1, b_2, \dotsc, b_n]$ of B-values associated with key $a$. 2.3.9 Matrix Multiplication $\mathbf{M} \times \mathbf{N}$ grouping and aggregation two MapReduce steps 1st MapReduce Map: $(j, (M, i, m_{ij}))$ and $(j, (N, k, n_{jk}))$. Reduce: $((i, k), m_{ij} n_{jk})$ for the associated values of each key $j$. 2nd MapReduce Map: identity Reduce: for each key $(i, k)$, produce the sum of the list of values associated with this key. one MapReduce step Map: for each $m_{ij}$, produce all pairs $((i, k), (M, j, m_{ij}))$ for $k = 1, 2, \dotsc$. for each $n_{jk}$, produce all pairs $((i, k), (N, j, n_{jk}))$ for $i = 1, 2, \dotsc$.
Reduce: Each key $(i, k)$ will have an associated list with all the values $(M, j, m_{ij})$ and $(N, j, n_{jk})$, for all possible values of $j$. connect the two values on the list that have the same value of $j$: An easy way to do this step is to sort the values by $j$, then multiply and sum. 2.3.11 Exercises for Section 2.3 Ex 2.3.5 2.4 Extensions to MapReduce some extensions and modifications share the same characteristics: 1. built on a distributed file system. 2. very large numbers of tasks, and a small number of user-written functions. 3. dealing with most of the failures without restart. 2.4.1 Workflow Systems idea: two-step workflow (Map, Reduce) $\to$ any collection of functions two experimental systems + Clustera + Hyracks advantage: no need to store the temporary file that is the output of one MapReduce job in the distributed file system. End of explanation """ plt.imshow(plt.imread('./res/fig2_7.png')) """ Explanation: 2.4.2 Recursive Extensions to MapReduce Many large-scale computations are really recursions. mutually recursive tasks modify the input (flow graphs that are not acyclic), so it is not feasible to simply restart when some node fails. solution A: split into two steps $\to$ back up data Example 2.6 End of explanation """ def calc_prob(p, t): return 10 - 9 * ((1 - p)**t) p = np.linspace(0, 1, 100) y1 = calc_prob(p, 10) y2 = calc_prob(p, 100) plt.plot(p, y1, 'b', p, y2, 'g') """ Explanation: 2.4.3 Pregel solution B: back up the entire state of each task at checkpoints, so recovery can restart from the checkpoint on failure. Checkpoints are triggered at fixed supersteps. 2.4.4 Exercises for Section 2.4 Ex 2.4.1 for each task, the probability of success is $(1-p)^t$ and of failure is $1 - (1 - p)^t$. So the expected execution time of a task is $(1-p)^t t + (1 - (1-p)^t) 10 t = 10t - 9t(1-p)^t$. Thus, the total expected time is $n(10t - 9t (1-p)^t) = nt(10 - 9 (1-p)^t)$.
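The one-MapReduce-step matrix multiplication of Section 2.3.9 (Map replicates each matrix element to every key $(i, k)$ that needs it; Reduce matches the values sharing the same $j$, multiplies, and sums) can be sketched in Python. The small matrices are illustrative, and grouping is simulated with a dictionary in place of the framework's shuffle:

```python
from collections import defaultdict

# Illustrative 2x2 matrices stored sparsely as {(row, col): value}.
M = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
N = {(0, 0): 5, (0, 1): 6, (1, 0): 7, (1, 1): 8}
n_rows, n_cols = 2, 2

# Map: send each m_ij to every key (i, k), and each n_jk to every key (i, k).
pairs = []
for (i, j), m_ij in M.items():
    for k in range(n_cols):
        pairs.append(((i, k), ('M', j, m_ij)))
for (j, k), n_jk in N.items():
    for i in range(n_rows):
        pairs.append(((i, k), ('N', j, n_jk)))

# Grouping by key (i, k) -- done by the framework in a real MapReduce job.
groups = defaultdict(list)
for key, value in pairs:
    groups[key].append(value)

# Reduce: match values with the same j, multiply, and sum.
P = {}
for (i, k), values in groups.items():
    m_vals = {j: v for tag, j, v in values if tag == 'M'}
    n_vals = {j: v for tag, j, v in values if tag == 'N'}
    P[(i, k)] = sum(m_vals[j] * n_vals[j] for j in m_vals if j in n_vals)
```

Here `P` holds the product $\mathbf{M}\mathbf{N}$; note the replication rate is the number of keys each input element is copied to, which is what the one-step scheme trades against the two-step scheme.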
End of explanation """ plt.imshow(plt.imread('./res/fig2_9.png')) """ Explanation: Ex 2.4.2 suppose that the number of supersteps is $n$ and the time to execute one superstep is $t$. 2.5 The Communication Cost Model for many applications, the bottleneck is moving data among tasks. 2.5.1 Communication-Cost for Task Networks The communication cost of a task is the size of the input to the task. we shall often use the number of tuples as a measure of size, rather than bytes. The communication cost of an algorithm is the sum of the communication costs of all the tasks implementing that algorithm. We shall focus on communication cost as the way to measure the efficiency of an algorithm, since the exceptions, where the execution time of tasks dominates, are rare in practice. We count only input size, and not output size. 2.5.2 Wall-Clock Time Besides communication cost, we must also be aware of the importance of wall-clock time, the time it takes a parallel algorithm to finish. The algorithms shall have the property that the work is divided fairly among the tasks. 2.5.3 Multiway Joins example: $R(A, B) \bowtie S(B, C) \bowtie T(C, D)$. Suppose that the relations $R$, $S$, and $T$ have sizes $r, s$, and $t$, respectively, and for simplicity, suppose $p$ is the probability that any two tuples agree on the attribute they share. Solution 1: general theory $\left ( R(A, B) \bowtie S(B, C) \right ) \bowtie T(C, D)$, or exchange the order of the joins. 1st MapReduce $R(A, B) \bowtie S(B, C)$: cost $O(r + s)$, with an intermediate result of expected size $prs$. 2nd MapReduce: cost $O(t + prs)$, so the total is $O(r + s + t + prs)$. Solution 2: use a single MapReduce job that joins the three relations at once. Assume: We plan to use $k$ reducers for the job. $b$ and $c$ represent the numbers of buckets into which we shall hash $B$- and $C$-values, respectively: $h(B) \to b$ buckets, $g(C) \to c$ buckets. we require $b c = k$.
So, the reducer corresponding to bucket pair $(i, j)$ is responsible for joining the tuples $R(u, v), S(v, w)$, and $T(w, x)$ whenever $h(v) = i$ and $g(w) = j$. $S(v, w)$ goes to the single reducer $(h(v), g(w))$. communication cost: $s$ $R(u, v)$ goes to the $c$ reducers $(h(v), j)$ for $j = 1, \dotsc, c$. communication cost: $c r$ $T(w, x)$ goes to the $b$ reducers $(i, g(w))$ for $i = 1, \dotsc, b$. communication cost: $b t$ There is also a fixed cost $r + s + t$ to make each tuple of each relation be input to one of the Map tasks. The problem becomes: $$\operatorname{arg\,min}_{b, c} \; s + cr + bt \quad \text{where } bc = k$$ We get the solution: $c = \sqrt{kt / r}$ and $b = \sqrt{kr / t}$. So $s + cr + bt = s + 2 \sqrt{k r t}$. In all, the total communication cost is $r + 2s + t + 2 \sqrt{k r t}$. 2.5.4 Exercises for Section 2.5 #todo 2.6 Complexity Theory for MapReduce our desires in this section: to shrink the wall-clock time, and to execute each reducer in main memory 2.6.1 Reducer Size and Replication Rate two parameters that characterize families of MapReduce algorithms: reducer size $q$: the upper bound on the number of values that are allowed to appear in the list associated with a single key. It can be selected with at least two goals in mind: By making the reducer size small $\to$ we get many reducers. By making the reducer size small $\to$ the computation in a reducer can be executed entirely in main memory. replication rate $r$: the number of key-value pairs produced by all the Map tasks on all the inputs, divided by the number of inputs. It is the average communication from Map tasks to Reduce tasks per input. 2.6.2 An Example: Similarity Joins we are given a large set of elements $X$ and a similarity measure $s(x, y)$ which is symmetric. The output of the algorithm is those pairs whose similarity exceeds a given threshold $t$. eg: discover similar images in a collection of one million images. solution 1:
$(\{i, j\}, [P_i, P_j])$ this algorithm will fail completely: the reducer size is small; however, the replication rate is 999,999 $\to$ the communication cost is extremely large. We can group pictures into $g$ groups, each of $10^6 / g$ pictures. 2.6.3 A Graph Model for MapReduce Problems In this section, we hope to prove lower bounds on the replication rate. The first step is to introduce a graph model of problems. For each problem solvable by a MapReduce algorithm there is: A set of inputs. A set of outputs. A many-many relationship between the inputs and outputs, which describes which inputs are necessary to produce which outputs. End of explanation """
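Returning to the multiway join of Section 2.5.3: the closed-form optimum $c = \sqrt{kt/r}$, $b = \sqrt{kr/t}$ for the cost $s + cr + bt$ subject to $bc = k$ can be checked numerically. The relation sizes below are illustrative assumptions, not values from the text:

```python
import math

# Assumed relation sizes (in tuples) and number of reducers.
r, s, t, k = 1000.0, 5000.0, 4000.0, 100

# Closed-form optimum for minimizing s + c*r + b*t subject to b*c = k.
c_opt = math.sqrt(k * t / r)
b_opt = math.sqrt(k * r / t)
cost_opt = s + c_opt * r + b_opt * t  # equals s + 2*sqrt(k*r*t)

# Brute-force check over a grid of b values, with c = k / b.
best = min(s + (k / b) * r + b * t for b in (x / 100 for x in range(1, 10001)))
```

With these numbers the optimum is $b = 5$, $c = 20$, and the variable cost is $s + 2\sqrt{krt} = 45000$; the grid search finds the same minimum.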
mne-tools/mne-tools.github.io
0.15/_downloads/plot_decoding_csp_space.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Romain Trachel <romain.trachel@inria.fr> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne import io from mne.datasets import sample print(__doc__) data_path = sample.data_path() """ Explanation: ==================================================================== Decoding in sensor space data using the Common Spatial Pattern (CSP) ==================================================================== Decoding applied to MEG data in sensor space decomposed using CSP. Here the classifier is applied to features extracted on CSP filtered signals. See http://en.wikipedia.org/wiki/Common_spatial_pattern and [1]_. References .. [1] Zoltan J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79(6):440--447, December 1991. End of explanation """ raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.2, 0.5 event_id = dict(aud_l=1, vis_l=3) # Setup for reading the raw data raw = io.read_raw_fif(raw_fname, preload=True) raw.filter(2, None, fir_design='firwin') # replace baselining with high-pass events = mne.read_events(event_fname) raw.info['bads'] = ['MEG 2443'] # set bad channels picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=False, exclude='bads') # Read epochs epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=None, preload=True) labels = epochs.events[:, -1] evoked = epochs.average() """ Explanation: Set parameters and read data End of explanation """ from sklearn.svm import SVC # noqa from sklearn.model_selection import ShuffleSplit # noqa from mne.decoding import CSP # noqa n_components = 3 # pick some components svc = SVC(C=1, kernel='linear') csp = 
CSP(n_components=n_components, norm_trace=False) # Define a monte-carlo cross-validation generator (reduce variance): cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42) scores = [] epochs_data = epochs.get_data() for train_idx, test_idx in cv.split(epochs_data): y_train, y_test = labels[train_idx], labels[test_idx] X_train = csp.fit_transform(epochs_data[train_idx], y_train) X_test = csp.transform(epochs_data[test_idx]) # fit classifier svc.fit(X_train, y_train) scores.append(svc.score(X_test, y_test)) # Printing the results class_balance = np.mean(labels == labels[0]) class_balance = max(class_balance, 1. - class_balance) print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores), class_balance)) # Or use much more convenient scikit-learn cross_val_score function using # a Pipeline from sklearn.pipeline import Pipeline # noqa from sklearn.model_selection import cross_val_score # noqa cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42) clf = Pipeline([('CSP', csp), ('SVC', svc)]) scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1) print(scores.mean()) # should match results above # And using regularized CSP with the Ledoit-Wolf estimator csp = CSP(n_components=n_components, reg='ledoit_wolf', norm_trace=False) clf = Pipeline([('CSP', csp), ('SVC', svc)]) scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1) print(scores.mean()) # should get better results than above # plot CSP patterns estimated on full data for visualization csp.fit_transform(epochs_data, labels) data = csp.patterns_ fig, axes = plt.subplots(1, 4) for idx in range(4): mne.viz.plot_topomap(data[idx], evoked.info, axes=axes[idx], show=False) fig.suptitle('CSP patterns') fig.tight_layout() fig.show() """ Explanation: Decoding in sensor space using a linear SVM End of explanation """
keras-team/keras-io
examples/vision/ipynb/mlp_image_classification.ipynb
apache-2.0
import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa """ Explanation: Image classification with modern MLP models Author: Khalid Salama<br> Date created: 2021/05/30<br> Last modified: 2021/05/30<br> Description: Implementing the MLP-Mixer, FNet, and gMLP models for CIFAR-100 image classification. Introduction This example implements three modern attention-free, multi-layer perceptron (MLP) based models for image classification, demonstrated on the CIFAR-100 dataset: The MLP-Mixer model, by Ilya Tolstikhin et al., based on two types of MLPs. The FNet model, by James Lee-Thorp et al., based on unparameterized Fourier Transform. The gMLP model, by Hanxiao Liu et al., based on MLP with gating. The purpose of the example is not to compare between these models, as they might perform differently on different datasets with well-tuned hyperparameters. Rather, it is to show simple implementations of their main building blocks. This example requires TensorFlow 2.4 or higher, as well as TensorFlow Addons, which can be installed using the following command: shell pip install -U tensorflow-addons Setup End of explanation """ num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}") print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}") """ Explanation: Prepare the data End of explanation """ weight_decay = 0.0001 batch_size = 128 num_epochs = 50 dropout_rate = 0.2 image_size = 64 # We'll resize input images to this size. patch_size = 8 # Size of the patches to be extracted from the input images. num_patches = (image_size // patch_size) ** 2 # Size of the data array. embedding_dim = 256 # Number of hidden units. num_blocks = 4 # Number of blocks. 
print(f"Image size: {image_size} X {image_size} = {image_size ** 2}") print(f"Patch size: {patch_size} X {patch_size} = {patch_size ** 2} ") print(f"Patches per image: {num_patches}") print(f"Elements per patch (3 channels): {(patch_size ** 2) * 3}") """ Explanation: Configure the hyperparameters End of explanation """ def build_classifier(blocks, positional_encoding=False): inputs = layers.Input(shape=input_shape) # Augment data. augmented = data_augmentation(inputs) # Create patches. patches = Patches(patch_size, num_patches)(augmented) # Encode patches to generate a [batch_size, num_patches, embedding_dim] tensor. x = layers.Dense(units=embedding_dim)(patches) if positional_encoding: positions = tf.range(start=0, limit=num_patches, delta=1) position_embedding = layers.Embedding( input_dim=num_patches, output_dim=embedding_dim )(positions) x = x + position_embedding # Process x using the module blocks. x = blocks(x) # Apply global average pooling to generate a [batch_size, embedding_dim] representation tensor. representation = layers.GlobalAveragePooling1D()(x) # Apply dropout. representation = layers.Dropout(rate=dropout_rate)(representation) # Compute logits outputs. logits = layers.Dense(num_classes)(representation) # Create the Keras model. return keras.Model(inputs=inputs, outputs=logits) """ Explanation: Build a classification model We implement a method that builds a classifier given the processing blocks. End of explanation """ def run_experiment(model): # Create Adam optimizer with weight decay. optimizer = tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay, ) # Compile the model. model.compile( optimizer=optimizer, loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[ keras.metrics.SparseCategoricalAccuracy(name="acc"), keras.metrics.SparseTopKCategoricalAccuracy(5, name="top5-acc"), ], ) # Create a learning rate scheduler callback. 
reduce_lr = keras.callbacks.ReduceLROnPlateau( monitor="val_loss", factor=0.5, patience=5 ) # Create an early stopping callback. early_stopping = tf.keras.callbacks.EarlyStopping( monitor="val_loss", patience=10, restore_best_weights=True ) # Fit the model. history = model.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs, validation_split=0.1, callbacks=[early_stopping, reduce_lr], ) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f"Test accuracy: {round(accuracy * 100, 2)}%") print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%") # Return history to plot learning curves. return history """ Explanation: Define an experiment We implement a utility function to compile, train, and evaluate a given model. End of explanation """ data_augmentation = keras.Sequential( [ layers.Normalization(), layers.Resizing(image_size, image_size), layers.RandomFlip("horizontal"), layers.RandomZoom( height_factor=0.2, width_factor=0.2 ), ], name="data_augmentation", ) # Compute the mean and the variance of the training data for normalization. 
data_augmentation.layers[0].adapt(x_train) """ Explanation: Use data augmentation End of explanation """ class Patches(layers.Layer): def __init__(self, patch_size, num_patches): super(Patches, self).__init__() self.patch_size = patch_size self.num_patches = num_patches def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding="VALID", ) patch_dims = patches.shape[-1] patches = tf.reshape(patches, [batch_size, self.num_patches, patch_dims]) return patches """ Explanation: Implement patch extraction as a layer End of explanation """ class MLPMixerLayer(layers.Layer): def __init__(self, num_patches, hidden_units, dropout_rate, *args, **kwargs): super(MLPMixerLayer, self).__init__(*args, **kwargs) self.mlp1 = keras.Sequential( [ layers.Dense(units=num_patches), tfa.layers.GELU(), layers.Dense(units=num_patches), layers.Dropout(rate=dropout_rate), ] ) self.mlp2 = keras.Sequential( [ layers.Dense(units=num_patches), tfa.layers.GELU(), layers.Dense(units=embedding_dim), layers.Dropout(rate=dropout_rate), ] ) self.normalize = layers.LayerNormalization(epsilon=1e-6) def call(self, inputs): # Apply layer normalization. x = self.normalize(inputs) # Transpose inputs from [num_batches, num_patches, hidden_units] to [num_batches, hidden_units, num_patches]. x_channels = tf.linalg.matrix_transpose(x) # Apply mlp1 on each channel independently. mlp1_outputs = self.mlp1(x_channels) # Transpose mlp1_outputs from [num_batches, hidden_dim, num_patches] to [num_batches, num_patches, hidden_units]. mlp1_outputs = tf.linalg.matrix_transpose(mlp1_outputs) # Add skip connection. x = mlp1_outputs + inputs # Apply layer normalization. x_patches = self.normalize(x) # Apply mlp2 on each patch independently. mlp2_outputs = self.mlp2(x_patches) # Add skip connection.
x = x + mlp2_outputs return x """ Explanation: The MLP-Mixer model The MLP-Mixer is an architecture based exclusively on multi-layer perceptrons (MLPs) that contains two types of MLP layers: One applied independently to image patches, which mixes the per-location features. The other applied across patches (along channels), which mixes spatial information. This is similar to a depthwise separable convolution based model such as the Xception model, but with two chained dense transforms, no max pooling, and layer normalization instead of batch normalization. Implement the MLP-Mixer module End of explanation """ mlpmixer_blocks = keras.Sequential( [MLPMixerLayer(num_patches, embedding_dim, dropout_rate) for _ in range(num_blocks)] ) learning_rate = 0.005 mlpmixer_classifier = build_classifier(mlpmixer_blocks) history = run_experiment(mlpmixer_classifier) """ Explanation: Build, train, and evaluate the MLP-Mixer model Note that training the model with the current settings on a V100 GPU takes around 8 seconds per epoch. End of explanation """ class FNetLayer(layers.Layer): def __init__(self, num_patches, embedding_dim, dropout_rate, *args, **kwargs): super(FNetLayer, self).__init__(*args, **kwargs) self.ffn = keras.Sequential( [ layers.Dense(units=embedding_dim), tfa.layers.GELU(), layers.Dropout(rate=dropout_rate), layers.Dense(units=embedding_dim), ] ) self.normalize1 = layers.LayerNormalization(epsilon=1e-6) self.normalize2 = layers.LayerNormalization(epsilon=1e-6) def call(self, inputs): # Apply fourier transformations. x = tf.cast( tf.signal.fft2d(tf.cast(inputs, dtype=tf.dtypes.complex64)), dtype=tf.dtypes.float32, ) # Add skip connection. x = x + inputs # Apply layer normalization. x = self.normalize1(x) # Apply feedforward network. x_ffn = self.ffn(x) # Add skip connection. x = x + x_ffn # Apply layer normalization.
return self.normalize2(x) """ Explanation: The MLP-Mixer model tends to have far fewer parameters than convolutional and transformer-based models, which leads to less training and serving computational cost. As mentioned in the MLP-Mixer paper, when pre-trained on large datasets, or with modern regularization schemes, the MLP-Mixer attains competitive scores to state-of-the-art models. You can obtain better results by increasing the embedding dimensions, increasing the number of mixer blocks, and training the model for longer. You may also try to increase the size of the input images and use different patch sizes. The FNet model The FNet uses a similar block to the Transformer block. However, FNet replaces the self-attention layer in the Transformer block with a parameter-free 2D Fourier transformation layer: One 1D Fourier Transform is applied along the patches. One 1D Fourier Transform is applied along the channels. Implement the FNet module End of explanation """ fnet_blocks = keras.Sequential( [FNetLayer(num_patches, embedding_dim, dropout_rate) for _ in range(num_blocks)] ) learning_rate = 0.001 fnet_classifier = build_classifier(fnet_blocks, positional_encoding=True) history = run_experiment(fnet_classifier) """ Explanation: Build, train, and evaluate the FNet model Note that training the model with the current settings on a V100 GPU takes around 8 seconds per epoch.
End of explanation """ class gMLPLayer(layers.Layer): def __init__(self, num_patches, embedding_dim, dropout_rate, *args, **kwargs): super(gMLPLayer, self).__init__(*args, **kwargs) self.channel_projection1 = keras.Sequential( [ layers.Dense(units=embedding_dim * 2), tfa.layers.GELU(), layers.Dropout(rate=dropout_rate), ] ) self.channel_projection2 = layers.Dense(units=embedding_dim) self.spatial_projection = layers.Dense( units=num_patches, bias_initializer="Ones" ) self.normalize1 = layers.LayerNormalization(epsilon=1e-6) self.normalize2 = layers.LayerNormalization(epsilon=1e-6) def spatial_gating_unit(self, x): # Split x along the channel dimensions. # Tensors u and v will be of shape [batch_size, num_patches, embedding_dim]. u, v = tf.split(x, num_or_size_splits=2, axis=2) # Apply layer normalization. v = self.normalize2(v) # Apply spatial projection. v_channels = tf.linalg.matrix_transpose(v) v_projected = self.spatial_projection(v_channels) v_projected = tf.linalg.matrix_transpose(v_projected) # Apply element-wise multiplication. return u * v_projected def call(self, inputs): # Apply layer normalization. x = self.normalize1(inputs) # Apply the first channel projection. x_projected shape: [batch_size, num_patches, embedding_dim * 2]. x_projected = self.channel_projection1(x) # Apply the spatial gating unit. x_spatial shape: [batch_size, num_patches, embedding_dim]. x_spatial = self.spatial_gating_unit(x_projected) # Apply the second channel projection. x_projected shape: [batch_size, num_patches, embedding_dim]. x_projected = self.channel_projection2(x_spatial) # Add skip connection. return x + x_projected """ Explanation: As shown in the FNet paper, better results can be achieved by increasing the embedding dimensions, increasing the number of FNet blocks, and training the model for longer. You may also try to increase the size of the input images and use different patch sizes.
The FNet scales very efficiently to long inputs, runs much faster than attention-based Transformer models, and produces competitive accuracy results. The gMLP model The gMLP is an MLP architecture that features a Spatial Gating Unit (SGU). The SGU enables cross-patch interactions across the spatial (channel) dimension, by: Transforming the input spatially by applying linear projection across patches (along channels). Applying element-wise multiplication of the input and its spatial transformation. Implement the gMLP module End of explanation """ gmlp_blocks = keras.Sequential( [gMLPLayer(num_patches, embedding_dim, dropout_rate) for _ in range(num_blocks)] ) learning_rate = 0.003 gmlp_classifier = build_classifier(gmlp_blocks) history = run_experiment(gmlp_classifier) """ Explanation: Build, train, and evaluate the gMLP model Note that training the model with the current settings on a V100 GPU takes around 9 seconds per epoch. End of explanation """
kdungs/teaching-SMD2-2016
solutions/3.ipynb
mit
%matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit from scipy.stats import norm plt.style.use('ggplot') """ Explanation: Exercise Sheet 3: sWeights Exercise 1 Exercise 2 End of explanation """ def generate_sx(size): xs = -0.2 * np.log(np.random.uniform(size=2 * size)) xs = xs[xs < 1] return xs[:size] def generate_sm(size): return np.random.normal(0.5, 0.05, size=size) def generate_s(size): return np.array([generate_sx(size), generate_sm(size)]) def generate_bx(size): return 1 - np.sqrt(np.random.uniform(size=size)) def generate_bm(size): return np.sqrt(np.random.uniform(size=size)) def generate_b(size): return np.array([generate_bx(size), generate_bm(size)]) def generate_sample(sig_size=20000, bkg_size=100000): return np.append(generate_s(sig_size), generate_b(bkg_size), axis=1) def efficiency(x, m): return (x + m) / 2 def generate_with_efficiency(generator, efficiency, size): def reset(): xs, ms = generator(size) effs = efficiency(xs, ms) accept = np.random.uniform(size=size) < effs return np.array([xs[accept], ms[accept]]) sample = reset() while sample.shape[1] < size: sample = np.append(sample, reset(), axis=1) return sample[:, :size] def generate_sample_with_efficiency(efficiency, sig_size=20000, bkg_size=100000): return np.append(generate_with_efficiency(generate_s, efficiency, sig_size), generate_with_efficiency(generate_b, efficiency, bkg_size), axis=1) n = 20000 xs, ms = generate_sample() xs_s, xs_b = xs[:n], xs[n:] ms_s, ms_b = ms[:n], ms[n:] plt.hist([xs_s, xs_b], bins=40, histtype='barstacked', label=['Signal', 'Background']) plt.xlabel(r'$x$') plt.legend() plt.show() plt.hist([ms_s, ms_b], bins=40, histtype='barstacked', label=['Signal', 'Background']) plt.xlabel(r'$m$') plt.legend() plt.show() effs = efficiency(xs, ms) effs_s, effs_b = effs[:n], effs[n:] plt.hist([effs_s, effs_b], bins=40, histtype='barstacked', label=['Signal', 'Background']) plt.xlabel(r'$\varepsilon$') plt.legend() plt.show() exs, ems
= generate_sample_with_efficiency(efficiency) exs_s, exs_b = exs[:n], exs[n:] ems_s, ems_b = ems[:n], ems[n:] plt.hist([exs_s, exs_b], bins=40, histtype='barstacked', label=['Signal', 'Background']) plt.xlabel(r'$x$') plt.legend() plt.show() plt.hist([ems_s, ems_b], bins=40, histtype='barstacked', label=['Signal', 'Background']) plt.xlabel(r'$m$') plt.legend() plt.show() """ Explanation: Eine experimentelle Verteilung in den Variablen $(x, m)$ habe eine Signalkomponente $s(x, m)$ = $s(x)s(m)$ und eine Untergrundkomponente $b(x,m)$ = $b(x)b(m)$. Der erlaubte Bereich ist $0 < x < 1$ und $0 < m < 1$. Es sei $s(m)$ eine Gaussverteilung mit Mittelwert $\mu = 0.5$ und Standardabweichung $\sigma = 0.05$. Die Verteilungen der anderen Komponenten werden aus gleichverteilten Zufallzahlen $z$ gewonnen. Für $s(x)$ verwende man $x = −0.2\ln{z}$, für $b(m)$ verwende man $m = \sqrt{z}$ und für $b(x)$ die Transformation $x = 1 − \sqrt{z}$. Erzeugen Sie für zwei angenommene Effizienzfunktionen $\varepsilon(x, m) = 1$ $\varepsilon(x, m) = (x + m) / 2$ Datensätze von Paaren $(x, m)$ die 20000 akzeptierte Signalereignisse und 100000 akzeptierte Untergrundereignisse umfassen. Betrachten Sie nun die gemeinsame $m$-Verteilung und parametrisieren Sie diese durch \begin{equation} f(m) = s(m) + b(m) \end{equation} mit \begin{equation} s(m) = p_0 \exp\left(-\frac{(m - p_1)^2}{2p_2^2}\right) \end{equation} und \begin{equation} b(m) = p_3 + p_4m + p_5m^2 + p_6\sqrt{m} \,. \end{equation} Für den Fall $\varepsilon(x, m) = (x + m)/2$ benutzen Sie die obige Parametrisierung auch zur Beschreibung der $m_c$ und $m_{cc}$-Verteilungen, für die jeder $m$-Wert mit $1/\varepsilon(x, m)$, bzw. $1/\varepsilon^2(x, m)$ gewichtet wird, und die für die korrekte Behandlung von nicht-konstanten Effizienzen benötigt werden. 
End of explanation
"""
hist, medges, xedges = np.histogram2d(ms, xs, bins=(100, 140), range=((0, 1), (0, 1)))
mwidth = medges[1] - medges[0]
mcentres = medges[:-1] + mwidth / 2
xwidth = xedges[1] - xedges[0]
xcentres = xedges[:-1] + xwidth / 2
mhist = np.sum(hist, axis=1)
xhist = np.sum(hist, axis=0)
plt.plot(mcentres, mhist, '.', label='$m$')
plt.plot(xcentres, xhist, '.', label='$x$')
plt.ylabel('Absolute frequency')
plt.legend()
plt.show()

def pdf_ms(m, p0, p1, p2):
    return p0 * np.exp(-(m - p1) ** 2 / 2 / p2 ** 2)

def pdf_mb(m, p3, p4, p5, p6):
    return p3 + p4 * m + p5 * m ** 2 + p6 * np.sqrt(m)

def pdf_m(m, p0, p1, p2, p3, p4, p5, p6):
    return pdf_ms(m, p0, p1, p2) + pdf_mb(m, p3, p4, p5, p6)

def fit_mass(centres, ns, pars=None):
    if pars is None:
        pars = [20000, 0.5, 0.5, 100000, 0.1, 0, 1]
    return curve_fit(pdf_m, centres, ns, p0=pars)

popt, _ = fit_mass(mcentres, mhist)
plt.plot(mcentres, mhist, '.')
plt.plot(mcentres, pdf_m(mcentres, *popt))
plt.plot(mcentres, pdf_ms(mcentres, *popt[:3]), '--')
plt.plot(mcentres, pdf_mb(mcentres, *popt[3:]), '--')
plt.xlabel('$m$')
plt.show()
"""
Explanation: Exercise 1
For both efficiency functions, determine the sWeights $w(m)$ from the observed $m$-distributions, and use $w(m)/\varepsilon(x, m)$ to project the distribution $N_{s}s(x)$ out of the data. Compare the result with the expectation for both efficiency functions.
We first fit the combined mass distribution of signal and background to our two datasets, remembering to treat the numbers of signal and background events as fit parameters. Let us first consider the case $\varepsilon = 1$.
End of explanation
"""
def sweights(centres, mhist, popt):
    s = pdf_ms(centres, *popt[:3])
    b = pdf_mb(centres, *popt[3:])
    n = mhist
    # normalize the PDFs
    s = s / np.sum(s)
    b = b / np.sum(b)
    Wss = np.sum((s * s) / n)
    Wsb = np.sum((s * b) / n)
    Wbb = np.sum((b * b) / n)
    alpha = Wbb / (Wss * Wbb - Wsb ** 2)
    beta = -Wsb / (Wss * Wbb - Wsb ** 2)
    weights = (alpha * s + beta * b) / n
    return weights

sw = sweights(mcentres, mhist, popt)
plt.plot(mcentres, sw, '.')
plt.xlabel('$m$')
plt.ylabel('sWeight')
plt.show()
"""
Explanation: Next, we can compute the sWeights with the help of the fitted parameters.
End of explanation
"""
def apply_sweights(sweights, hist):
    return np.array([w * row for w, row in zip(sweights, hist)]).sum(axis=0)

xweighted = apply_sweights(sw, hist)
plt.plot(xcentres, xweighted, '.', label='sWeighted')
plt.plot(xcentres, xhist, '.', label='s+b')
plt.xlabel('$x$')
plt.ylabel('Frequency')
plt.yscale('log')
plt.legend(loc='lower left')
plt.show()
"""
Explanation: We can now use these to project out the signal component $s(x)$.
End of explanation
"""
ehist, emedges, exedges = np.histogram2d(ems, exs, bins=(100, 140), range=((0, 1), (0, 1)))
exwidth = exedges[1] - exedges[0]
emwidth = emedges[1] - emedges[0]
excentres = exedges[:-1] + exwidth / 2
emcentres = emedges[:-1] + emwidth / 2
emhist = np.sum(ehist, axis=1)
exhist = np.sum(ehist, axis=0)
plt.plot(emcentres, emhist, '.', label='$m$')
plt.plot(excentres, exhist, '.', label='$x$')
plt.ylabel('Frequency')
plt.legend()
plt.show()
epopt, _ = fit_mass(emcentres, emhist, pars=[20000, 0.5, 0.5, 1, 1, -0.1, 100])
plt.plot(emcentres, emhist, '.')
plt.plot(emcentres, pdf_m(emcentres, *epopt))
plt.plot(emcentres, pdf_ms(emcentres, *epopt[:3]), '--')
plt.plot(emcentres, pdf_mb(emcentres, *epopt[3:]), '--')
plt.xlabel('$m$')
plt.show()
esw = sweights(emcentres, emhist, epopt)
plt.plot(emcentres, esw, '.')
plt.xlabel('$m$')
plt.ylabel('sWeight')
plt.show()
exweighted = apply_sweights(esw, ehist)
plt.plot(excentres, exweighted, '.', label='sWeighted')
plt.plot(excentres, exhist, '.', label='s+b')
plt.plot(xcentres, xweighted, '.', label='sWeighted correct')
plt.xlabel('$x$')
plt.ylabel('Frequency')
plt.yscale('log')
plt.legend(loc='lower left')
plt.show()
"""
Explanation: For $\varepsilon = (x + m) / 2$ we now deliberately apply exactly the same, and in this case incorrect, procedure.
End of explanation
"""
eeffs = efficiency(exs, ems)
ehist, emedges, exedges = np.histogram2d(
    ems, exs, bins=(100, 140), range=((0, 1), (0, 1)), weights=1 / eeffs
)
emwidth = emedges[1] - emedges[0]
emcentres = emedges[:-1] + emwidth / 2
exwidth = exedges[1] - exedges[0]
excentres = exedges[:-1] + exwidth / 2
emhist = np.sum(ehist, axis=1)
exhist = np.sum(ehist, axis=0)
plt.plot(emcentres, emhist, 'o', label='$m$')
plt.plot(excentres, exhist, 's', label='$x$')
plt.ylabel('Weighted frequency')
plt.legend()
plt.show()
epopt, _ = fit_mass(emcentres, emhist, pars=[2000, 0.5, 0.5, 1, 1, -0.1, 10])
plt.plot(emcentres, emhist, '.')
plt.plot(emcentres, pdf_m(emcentres, *epopt))
plt.plot(emcentres, pdf_ms(emcentres, *epopt[:3]), '--')
plt.plot(emcentres, pdf_mb(emcentres, *epopt[3:]), '--')
plt.xlabel('$m$')
plt.show()
eeffhist = efficiency(*np.meshgrid(excentres, emcentres))

def sweights_q(centres, qs, popt):
    s = pdf_ms(centres, *popt[:3])
    s = s / np.sum(s)
    b = pdf_mb(centres, *popt[3:])
    b = b / np.sum(b)
    Wss = np.sum((s * s) / qs)
    Wsb = np.sum((s * b) / qs)
    Wbb = np.sum((b * b) / qs)
    alpha = Wbb / (Wss * Wbb - Wsb ** 2)
    beta = -Wsb / (Wss * Wbb - Wsb ** 2)
    weights = (alpha * s + beta * b) / qs
    return weights

qs = np.sum(ehist / eeffhist, axis=1)
esw = sweights_q(emcentres, qs, epopt)
plt.plot(emcentres, esw, '.')
plt.xlabel('$m$')
plt.ylabel('sWeight')
plt.show()
# apply the efficiency-corrected sWeights (esw, not the uncorrected sw)
exweighted = apply_sweights(esw, ehist)
plt.plot(excentres, exweighted, '.', label='sWeighted')
plt.plot(excentres, exhist, '.', label='s+b')
plt.plot(xcentres, xweighted, '.', label='sWeighted correct')
plt.xlabel('$x$')
plt.ylabel('Frequency')
plt.yscale('log')
plt.legend(loc='lower left')
plt.show()
"""
Explanation: Exercise 2
For $\varepsilon(x, m) = (x + m)/2$, determine the correct sWeights from the data weighted with $1/\varepsilon(x, m)$, taking the function $\varepsilon(x, m)$ into account in the determination of $w(m)$.
Use the correct sWeights with $w(m)/\varepsilon(x, m)$ to extract the distribution $N_{s}s(x)$.
End of explanation
"""
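To see the mechanics of the binned sWeight formula used above in isolation, here is a self-contained closure test on toy data. It is a sketch, not part of the original exercise: the signal and background shapes are assumed known exactly instead of fitted, and all names are illustrative. The known sPlot property that summing the signal weights over all events recovers the signal yield is checked at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_b = 2000, 8000
# toy mixture: Gaussian signal peak on a flat background in m
m = np.concatenate([rng.normal(0.5, 0.05, n_s), rng.uniform(0, 1, n_b)])
hist, edges = np.histogram(m, bins=50, range=(0, 1))
centres = 0.5 * (edges[:-1] + edges[1:])

# unit-normalised binned shapes (assumed known exactly in this sketch)
s = np.exp(-(centres - 0.5) ** 2 / (2 * 0.05 ** 2))
s /= s.sum()
b = np.ones_like(centres) / len(centres)

n = np.where(hist > 0, hist, 1)  # guard against empty bins
Wss = np.sum(s * s / n)
Wsb = np.sum(s * b / n)
Wbb = np.sum(b * b / n)
det = Wss * Wbb - Wsb ** 2
# per-event signal weight in each m bin, same algebra as sweights() above
w_s = (Wbb * s - Wsb * b) / (det * n)

# closure: summing the signal weight over all events gives back ~n_s
total = np.sum(w_s * hist)
print(total)
```

Up to statistical fluctuations, `total` comes out close to the generated signal yield of 2000, while the background events cancel out.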
physion/ovation-python
examples/download-demographics.ipynb
gpl-3.0
import csv
import dateutil.parser

import ovation.session as session
import ovation.lab.workflows as workflows

from tqdm import tqdm_notebook as tqdm
"""
Explanation: Download Batch demographics
This notebook demonstrates using the Ovation API to download patient demographics and sample metadata for all samples in a workflow batch.
End of explanation
"""
s = session.connect_lab(input('email: '))
"""
Explanation: Create a session object
End of explanation
"""
workflow = s.get(s.path('workflow', int(input('Workflow ID: '))))
"""
Explanation: Retrieve the workflow by Id:
End of explanation
"""
sample = workflow.samples[1]
requisition = s.get(s.path('requisition', 2151, include_org=False))
requisition.requisition.physician

def physician_first_name(physician):
    if physician is None or physician.name is None:
        return ""
    comps = physician.name.split(' ')
    if len(comps) > 1:
        return comps[0]
    else:
        return ""

def physician_last_name(physician):
    if physician is None or physician.name is None:
        return ""
    comps = physician.name.split(' ')
    if len(comps) > 1:
        return ' '.join(comps[1:])
    else:
        return comps[0]

output_name = 'workflow_{}.csv'.format(workflow.workflow.id)
with open(output_name, 'w') as csvfile:
    # Every key written in `row` below must appear in `fieldnames`, otherwise
    # csv.DictWriter raises a ValueError; fieldnames that a row omits are
    # filled with restval.
    fieldnames = ['Sample ID',
                  'Date Received',
                  'Patient First Name',
                  'Patient MI',
                  'Patient Last Name',
                  'Sex',
                  'DOB',
                  'MRN/Submitter ID',
                  'Additional ID #',
                  'Collection Date',
                  'Specimen Type',
                  'Patient Diagnostic Test',
                  'Physician First Name',
                  'Physician Last Name']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames, restval='')
    writer.writeheader()
    for sample in tqdm(workflow.samples):
        requisition = s.get(s.path('requisition', sample.requisition_id, include_org=False))
        sex = '/'.join([k for k in sample.patient.gender if sample.patient.gender[k] == True])
        requested_tests = '/'.join([k for k in requisition.requisition.requested_tests if requisition.requisition.requested_tests[k] == True])
        physician = requisition.requisition.physician
        row = {'Sample ID': sample.identifier,
               'Date Received': dateutil.parser.parse(sample.date_received).isoformat() if sample.date_received is not None else '',
               'Patient First Name': sample.patient.first_name,
               'Patient Last Name': sample.patient.last_name,
               'Sex': sex,
               'DOB': dateutil.parser.parse(sample.patient.date_of_birth).isoformat() if sample.patient.date_of_birth is not None else '',
               'Collection Date': dateutil.parser.parse(requisition.requisition.sample_collection_date).isoformat() if requisition.requisition.sample_collection_date is not None else '',
               'Specimen Type': requisition.requisition.sample_type if requisition.requisition.sample_type is not None else '',
               'Patient Diagnostic Test': requested_tests,
               'Physician First Name': physician_first_name(physician),
               'Physician Last Name': physician_last_name(physician)}
        writer.writerow(row)
"""
Explanation: Iterate the samples in the workflow, producing one row in the CSV output per sample:
End of explanation
"""
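Independent of the Ovation API, the interplay between row keys and `fieldnames` in the export loop above can be seen in a minimal, self-contained sketch using only the standard library (the field names and values here are made up for illustration):

```python
import csv
import io

# one row with a missing field ('Sex') and an extra key ('internal_flag')
rows = [{'Sample ID': 'S-1', 'Physician Last Name': 'House', 'internal_flag': True}]

buf = io.StringIO()
# restval fills fieldnames that a row is missing; extrasaction='ignore'
# silently drops row keys that are not listed in fieldnames (the default,
# 'raise', would abort the export with a ValueError instead)
writer = csv.DictWriter(buf,
                        fieldnames=['Sample ID', 'Sex', 'Physician Last Name'],
                        restval='',
                        extrasaction='ignore')
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Passing `extrasaction='ignore'` is a convenient safety net when the row dictionaries are assembled from several API objects and may carry keys that should not end up in the CSV.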
JAmarel/Phys202
Algorithms/AlgorithmsEx02.ipynb
mit
%matplotlib inline from matplotlib import pyplot as plt import seaborn as sns import numpy as np """ Explanation: Algorithms Exercise 2 Imports End of explanation """ def find_peaks(a): """Find the indices of the local maxima in a sequence.""" peaks = np.array([],np.dtype('int')) search = np.array([entry for entry in a]) if search[0] > search[1]: peaks = np.append(peaks,np.array(0)) for i in range(1,len(search)-1): if search[i] > search[i+1] and search[i] > search[i-1]: peaks = np.append(peaks,i) if search[-1] > search[-2]: peaks = np.append(peaks,np.array(len(search)-1)) return peaks p1 = find_peaks([2,0,1,0,2,0,1]) assert np.allclose(p1, np.array([0,2,4,6])) p2 = find_peaks(np.array([0,1,2,3])) assert np.allclose(p2, np.array([3])) p3 = find_peaks([3,2,1,0]) assert np.allclose(p3, np.array([0])) """ Explanation: Peak finding Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should: Properly handle local maxima at the endpoints of the input array. Return a Numpy array of integer indices. Handle any Python iterable as input. End of explanation """ from sympy import pi, N pi_digits_str = str(N(pi, 10001))[2:] ints = [int(a) for a in pi_digits_str] diff = np.diff(find_peaks(ints)) plt.hist(diff,np.arange(0,15)); plt.xlim(2,15); plt.xlabel('Number of digits between maxima'); plt.ylabel('Occurence'); plt.title('Occurences of Maxima spacing for 10,000 digits of Pi'); assert True # use this for grading the pi digits histogram """ Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following: Convert that string to a Numpy array of integers. Find the indices of the local maxima in the digits of $\pi$. Use np.diff to find the distances between consequtive local maxima. Visualize that distribution using an appropriately customized histogram. End of explanation """
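As an aside, the looped `find_peaks` above can also be written in a vectorized form with `np.diff` and `np.sign`: padding the rise/fall pattern at both ends makes the endpoint cases fall out of the same rule. This is a sketch of an alternative, not part of the exercise solution; it matches the loop version on the test cases used above.

```python
import numpy as np

def find_peaks_vec(a):
    """Vectorized finder for strict local maxima, endpoints included."""
    a = np.asarray(list(a))
    # +1 where the sequence rises, -1 where it falls, 0 on plateaus
    rising = np.sign(np.diff(a))
    # pad so an endpoint counts as a peak when it exceeds its single neighbour
    padded = np.concatenate(([1], rising, [-1]))
    # a strict local maximum is a rise immediately followed by a fall
    return np.where((padded[:-1] > 0) & (padded[1:] < 0))[0]

print(find_peaks_vec([2, 0, 1, 0, 2, 0, 1]))
```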
Leguark/pynoddy
docs/notebooks/8-Sensitivity-Analysis.ipynb
gpl-2.0
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
"""
Explanation: Sensitivity Analysis
Test here: (local) sensitivity analysis of kinematic parameters with respect to a defined objective function.
Aim: test how sensitive the resulting model is to uncertainties in the kinematic parameters, in order to:
Evaluate which parameters are the most important, and to
Determine which parameters could, in principle, be inverted with suitable information.
Theory: local sensitivity analysis
Basic considerations:
parameter vector $\vec{p}$
residual vector $\vec{r}$
calculated values at observation points $\vec{z}$
Jacobian matrix $J_{ij} = \frac{\partial \vec{z}}{\partial \vec{p}}$
Numerical estimation of Jacobian matrix with central difference scheme (see Finsterle):
$$J_{ij} = \frac{\partial z_i}{\partial p_j} \approx \frac{z_i(\vec{p}; p_j + \delta p_j) - z_i(\vec{p};p_j - \delta p_j)}{2 \delta p_j}$$
where $\delta p_j$ is a small perturbation of parameter $j$, often chosen as a fraction of the parameter value.
Defining the responses
A meaningful sensitivity analysis obviously depends on the definition of a suitable response vector $\vec{z}$. Ideally, these responses are related to actual observations. In our case, we first want to determine how sensitive a kinematic structural geological model is with respect to uncertainties in the kinematic parameters. We therefore need calculable measures that describe variations of the model. As a first-order assumption, we will use a notion of stratigraphic distance for discrete subsections of the model, for example in single voxets of the calculated model.
We define the distance $d$ of a subset $\omega$ as the (discrete) difference between the (discrete) stratigraphic value of an ideal model, $\hat{s}$, and the value of a model realisation $s_i$:
$$d(\omega) = \hat{s} - s_i$$
In the first example, we will consider only one response: the overall sum of stratigraphic distances for a model realisation $r$ over all subsets (= voxets, in the practical sense), scaled by the number of subsets (for a subsequent comparison of model discretisations):
$$D_r = \frac{1}{n} \sum_{i=1}^n d(\omega_i)$$
Note: mistake before: not considering distances at single nodes but only the sum - this led to a "zero-difference" for a simple translation! Now: consider a more realistic objective function, the squared distance:
$$r = \sqrt{\sum_i (z_{i calc} - z_{i ref})^2}$$
End of explanation
"""
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import pynoddy.events
import pynoddy.output
reload(pynoddy.history)
reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
                  'layer_names' : ['layer 1', 'layer 2', 'layer 3',
                                   'layer 4', 'layer 5', 'layer 6',
                                   'layer 7', 'layer 8'],
                  'layer_thickness' : [1500, 500, 500, 500, 500, 500, 500, 500]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_W',
                 'pos' : (4000, 3500, 5000),
                 'dip_dir' : 90,
                 'dip' : 60,
                 'slip' : 1000}
nm.add_event('fault', fault_options)
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
                 'pos' : (6000, 3500, 5000),
                 'dip_dir' : 270,
                 'dip' : 60,
                 'slip' : 1000}
nm.add_event('fault', fault_options)
history = "two_faults_sensi.his"
nm.write_history(history)
output_name = "two_faults_sensi_out"
#
Compute the model pynoddy.compute_model(history, output_name) # Plot output nout = pynoddy.output.NoddyOutput(output_name) nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1], colorbar = True, title="", savefig = False) """ Explanation: Setting up the base model For a first test: use simple two-fault model from paper End of explanation """ H1 = pynoddy.history.NoddyHistory(history) # get the original dip of the fault dip_ori = H1.events[3].properties['Dip'] # dip_ori1 = H1.events[2].properties['Dip'] # add 10 degrees to dip add_dip = -20 dip_new = dip_ori + add_dip # dip_new1 = dip_ori1 + add_dip # and assign back to properties dictionary: H1.events[3].properties['Dip'] = dip_new reload(pynoddy.output) new_history = "sensi_test_dip_changed.his" new_output = "sensi_test_dip_changed_out" H1.write_history(new_history) pynoddy.compute_model(new_history, new_output) # load output from both models NO1 = pynoddy.output.NoddyOutput(output_name) NO2 = pynoddy.output.NoddyOutput(new_output) # create basic figure layout fig = plt.figure(figsize = (15,5)) ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122) NO1.plot_section('y', position=0, ax = ax1, colorbar=False, title="Dip = %.0f" % dip_ori) NO2.plot_section('y', position=0, ax = ax2, colorbar=False, title="Dip = %.0f" % dip_new) plt.show() """ Explanation: Define parameter uncertainties We will start with a sensitivity analysis for the parameters of the fault events. 
End of explanation
"""
# def determine_strati_diff(NO1, NO2):
#     """calculate total stratigraphic distance between two models"""
#     return np.sum(NO1.block - NO2.block) / float(len(NO1.block))

def determine_strati_diff(NO1, NO2):
    """calculate total stratigraphic distance between two models"""
    return np.sqrt(np.sum((NO1.block - NO2.block)**2)) / float(len(NO1.block))

diff = determine_strati_diff(NO1, NO2)
print(diff)
"""
Explanation: Calculate total stratigraphic distance
End of explanation
"""
# set parameter changes in dictionary
changes_fault_1 = {'Dip' : -20}
changes_fault_2 = {'Dip' : -20}
param_changes = {2 : changes_fault_1,
                 3 : changes_fault_2}
reload(pynoddy.history)
H2 = pynoddy.history.NoddyHistory(history)
H2.change_event_params(param_changes)
new_history = "param_dict_changes.his"
new_output = "param_dict_changes_out"
H2.write_history(new_history)
pynoddy.compute_model(new_history, new_output)
# load output from both models
NO1 = pynoddy.output.NoddyOutput(output_name)
NO2 = pynoddy.output.NoddyOutput(new_output)
# create basic figure layout
fig = plt.figure(figsize = (15,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
NO1.plot_section('y', position=0, ax = ax1, colorbar=False, title="Original Model")
NO2.plot_section('y', position=0, ax = ax2, colorbar=False, title="Changed Model")
plt.show()
"""
Explanation: Function to modify parameters
Multiple event parameters can be changed directly with the function change_event_params, which takes a dictionary of events and parameters with the corresponding changes relative to the defined parameters.
Here a brief example: End of explanation """ import copy new_history = "sensi_tmp.his" new_output = "sensi_out" def noddy_sensitivity(history_filename, param_change_vals): """Perform noddy sensitivity analysis for a model""" param_list = [] # list to store parameters for later analysis distances = [] # list to store calcualted distances # Step 1: # create new parameter list to change model for event_id, event_dict in param_change_vals.items(): # iterate over events for key, val in event_dict.items(): # iterate over all properties separately changes_list = dict() changes_list[event_id] = dict() param_list.append("event_%d_property_%s" % (event_id, key)) for i in range(2): # calculate positive and negative values his = pynoddy.history.NoddyHistory(history_filename) if i == 0: changes_list[event_id][key] = val # set changes his.change_event_params(changes_list) # save and calculate model his.write_history(new_history) pynoddy.compute_model(new_history, new_output) # open output and calculate distance NO_tmp = pynoddy.output.NoddyOutput(new_output) dist_pos = determine_strati_diff(NO1, NO_tmp) NO_tmp.plot_section('y', position = 0, colorbar = False, title = "Dist: %.2f" % dist_pos, savefig = True, fig_filename = "event_%d_property_%s_val_%d.png" \ % (event_id, key,val)) if i == 1: changes_list[event_id][key] = -val his.change_event_params(changes_list) # save and calculate model his.write_history(new_history) pynoddy.compute_model(new_history, new_output) # open output and calculate distance NO_tmp = pynoddy.output.NoddyOutput(new_output) dist_neg = determine_strati_diff(NO1, NO_tmp) NO_tmp.plot_section('y', position=0, colorbar=False, title="Dist: %.2f" % dist_neg, savefig=True, fig_filename="event_%d_property_%s_val_%d.png" \ % (event_id, key,val)) # calculate central difference central_diff = (dist_pos + dist_neg) / (2.) 
distances.append(central_diff) return param_list, distances """ Explanation: Full sensitivity analysis Perform now a full sensitivity analysis for all defined parameters and analyse the output matrix. For a better overview, we first create a function to perform the sensitivity analysis: End of explanation """ changes_fault_1 = {'Dip' : 1.5, 'Dip Direction' : 10, 'Slip': 100.0, 'X': 500.0} changes_fault_2 = {'Dip' : 1.5, 'Dip Direction' : 10, 'Slip': 100.0, 'X': 500.0} param_changes = {2 : changes_fault_1, 3 : changes_fault_2} """ Explanation: As a next step, we define the parameter ranges for the local sensitivity analysis (i.e. the $\delta p_j$ from the theoretical description above): End of explanation """ param_list_1, distances = noddy_sensitivity(history, param_changes) """ Explanation: And now, we perform the local sensitivity analysis: End of explanation """ for p,d in zip(param_list_1, distances): print "%s \t\t %f" % (p, d) """ Explanation: The function passes back a list of the changed parameters and the calculated distances according to this change. Let's have a look at the results: End of explanation """ d = np.array([distances]) fig = plt.figure(figsize=(5,3)) ax = fig.add_subplot(111) ax.bar(np.arange(0.6,len(distances),1.), np.array(distances[:])) """ Explanation: Results of this local sensitivity analysis suggest that the model is most sensitive to the X-position of the fault, when we evaluate distances as simple stratigraphic id differences. Here just a bar plot for better visualisation (feel free to add proper labels): End of explanation """
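Note that `noddy_sensitivity` above averages the magnitudes of the two perturbed distances, since the distance of the unperturbed model to itself is zero. For a general scalar response, the textbook central-difference estimate from the theory section can be sketched as follows; the quadratic toy response here merely stands in for an expensive kinematic model run and is purely illustrative:

```python
import numpy as np

def central_difference_sensitivity(response, p, deltas):
    """Numerical sensitivity J_j = (z(p_j + d_j) - z(p_j - d_j)) / (2 d_j)
    for a scalar response, perturbing one parameter at a time."""
    sens = np.zeros(len(p))
    for j, d in enumerate(deltas):
        p_plus, p_minus = p.copy(), p.copy()
        p_plus[j] += d
        p_minus[j] -= d
        sens[j] = (response(p_plus) - response(p_minus)) / (2.0 * d)
    return sens

# toy response standing in for a model evaluation
f = lambda p: p[0] ** 2 + 3.0 * p[1]
print(central_difference_sensitivity(f, np.array([2.0, 1.0]), np.array([1e-4, 1e-4])))
```

For the quadratic term the central difference is exact up to rounding, which is one reason it is preferred over a one-sided difference at the same cost of two model runs per parameter.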
turbomanage/training-data-analyst
courses/machine_learning/deepdive/05_artandscience/b_hyperparam.ipynb
apache-2.0
import os PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # for bash os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION os.environ['TFVERSION'] = '1.8' # Tensorflow version %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION """ Explanation: Hyperparameter tuning with Cloud AI Platform Learning Objectives: * Improve the accuracy of a model by hyperparameter tuning End of explanation """ %%bash rm -rf house_prediction_module mkdir house_prediction_module mkdir house_prediction_module/trainer touch house_prediction_module/trainer/__init__.py %%writefile house_prediction_module/trainer/task.py import argparse import os import json import shutil from . import model if __name__ == '__main__' and "get_ipython" not in dir(): parser = argparse.ArgumentParser() parser.add_argument( '--learning_rate', type = float, default = 0.01 ) parser.add_argument( '--batch_size', type = int, default = 30 ) parser.add_argument( '--output_dir', help = 'GCS location to write checkpoints and export models.', required = True ) parser.add_argument( '--job-dir', help = 'this model ignores this field, but it is required by gcloud', default = 'junk' ) args = parser.parse_args() arguments = args.__dict__ # Unused args provided by service arguments.pop('job_dir', None) arguments.pop('job-dir', None) # Append trial_id to path if we are doing hptuning # This code can be removed if you are not using hyperparameter tuning arguments['output_dir'] = os.path.join( arguments['output_dir'], json.loads( os.environ.get('TF_CONFIG', '{}') ).get('task', {}).get('trial', '') ) # Run the training shutil.rmtree(arguments['output_dir'], ignore_errors=True) # start fresh each time # Pass the command line arguments to our model's train_and_evaluate function 
model.train_and_evaluate(arguments) %%writefile house_prediction_module/trainer/model.py import numpy as np import pandas as pd import tensorflow as tf tf.logging.set_verbosity(tf.logging.INFO) # Read dataset and split into train and eval df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep = ",") df['num_rooms'] = df['total_rooms'] / df['households'] np.random.seed(seed = 1) #makes split reproducible msk = np.random.rand(len(df)) < 0.8 traindf = df[msk] evaldf = df[~msk] # Train and eval input functions SCALE = 100000 def train_input_fn(df, batch_size): return tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]], y = traindf["median_house_value"] / SCALE, # note the scaling num_epochs = None, batch_size = batch_size, # note the batch size shuffle = True) def eval_input_fn(df, batch_size): return tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]], y = evaldf["median_house_value"] / SCALE, # note the scaling num_epochs = 1, batch_size = batch_size, shuffle = False) # Define feature columns features = [tf.feature_column.numeric_column('num_rooms')] def train_and_evaluate(args): # Compute appropriate number of steps num_steps = (len(traindf) / args['batch_size']) / args['learning_rate'] # if learning_rate=0.01, hundred epochs # Create custom optimizer myopt = tf.train.FtrlOptimizer(learning_rate = args['learning_rate']) # note the learning rate # Create rest of the estimator as usual estimator = tf.estimator.LinearRegressor(model_dir = args['output_dir'], feature_columns = features, optimizer = myopt) #Add rmse evaluation metric def rmse(labels, predictions): pred_values = tf.cast(predictions['predictions'], tf.float64) return {'rmse': tf.metrics.root_mean_squared_error(labels * SCALE, pred_values * SCALE)} estimator = tf.contrib.estimator.add_metrics(estimator, rmse) train_spec = tf.estimator.TrainSpec(input_fn = train_input_fn(df = traindf, batch_size = args['batch_size']), max_steps = num_steps) 
eval_spec = tf.estimator.EvalSpec(input_fn = eval_input_fn(df = evaldf, batch_size = len(evaldf)), steps = None) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) %%bash rm -rf house_trained export PYTHONPATH=${PYTHONPATH}:${PWD}/house_prediction_module gcloud ai-platform local train \ --module-name=trainer.task \ --job-dir=house_trained \ --package-path=$(pwd)/trainer \ -- \ --batch_size=30 \ --learning_rate=0.02 \ --output_dir=house_trained """ Explanation: Create command-line program In order to submit to Cloud AI Platform, we need to create a distributed training program. Let's convert our housing example to fit that paradigm, using the Estimators API. End of explanation """ %%writefile hyperparam.yaml trainingInput: hyperparameters: goal: MINIMIZE maxTrials: 5 maxParallelTrials: 1 hyperparameterMetricTag: rmse params: - parameterName: batch_size type: INTEGER minValue: 8 maxValue: 64 scaleType: UNIT_LINEAR_SCALE - parameterName: learning_rate type: DOUBLE minValue: 0.01 maxValue: 0.1 scaleType: UNIT_LOG_SCALE %%bash OUTDIR=gs://${BUCKET}/house_trained # CHANGE bucket name appropriately gsutil rm -rf $OUTDIR export PYTHONPATH=${PYTHONPATH}:${PWD}/house_prediction_module gcloud ai-platform jobs submit training house_$(date -u +%y%m%d_%H%M%S) \ --config=hyperparam.yaml \ --module-name=trainer.task \ --package-path=$(pwd)/house_prediction_module/trainer \ --job-dir=$OUTDIR \ --runtime-version=$TFVERSION \ --\ --output_dir=$OUTDIR \ !gcloud ai-platform jobs describe house_180912_195904 # CHANGE jobId appropriately """ Explanation: Create hyperparam.yaml End of explanation """
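Before submitting a paid tuning job, it can help to sanity-check the search space locally. The sketch below mimics how the two parameter specs in the yaml above are drawn: `UNIT_LINEAR_SCALE` as uniform sampling and `UNIT_LOG_SCALE` as log-uniform sampling. This is an illustrative local random-search stand-in, not the search algorithm the AI Platform service itself runs, and the helper name is made up:

```python
import random

def sample_hyperparams(n_trials, seed=0):
    """Draw trial configurations from the same ranges as hyperparam.yaml:
    batch_size in [8, 64] (linear scale), learning_rate in [0.01, 0.1]
    (log scale)."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        batch_size = rng.randint(8, 64)
        # log-uniform: sample uniformly in log10 space, then exponentiate
        learning_rate = 10 ** rng.uniform(-2, -1)
        trials.append({'batch_size': batch_size, 'learning_rate': learning_rate})
    return trials

for trial in sample_hyperparams(3):
    print(trial)
```

Sampling the learning rate on a log scale matters because plausible values span an order of magnitude; a linear draw would spend most trials near the upper end of the range.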
LorenzoBi/courses
TSAADS/tutorial 4/.ipynb_checkpoints/Untitled-checkpoint.ipynb
mit
np.random.seed(19)
T = 1000
a = np.array([.2, -.1, .1])
mu0 = .5
c, mu = simARPoisson(T, a, mu0)
plt.plot(c,'.', label='Countings')
plt.plot(mu, label='Mean')
plt.legend()
plt.xlabel('time')
plt.ylabel('countings')
"""
Explanation: Task 1. AR Poisson process.
1.1
We simulate our Poisson process with the given parameters. We plot both the countings and the mean at each time step.
End of explanation
"""
import collections
counter = collections.Counter(c)
print(counter)
plt.bar(list(counter.keys()), list(counter.values()) / np.sum(list(counter.values())), label='data')
plt.plot(np.arange(10), stats.poisson.pmf(np.arange(10), np.mean(mu), loc=0), '*r', label='approx')
plt.xlabel('countings')
plt.ylabel('p')
plt.legend()
"""
Explanation: We can see from the previous plot that the mean is more or less the same over the entire process, up to a noise component. We know of course that the mean depends on the previous counting, but we can test, as an approximation, whether our counting data is Poisson distributed with a parameter equal to the mean of the means through time. What we obtain is good compatibility between our data and this hypothesis. We will not do a quantitative analysis of this, but it is nice to see that our approximation is not totally off.
End of explanation
"""
a0, a1, a2 = a
z = a0 + a2 * c[:-2]
x = np.arange(-.6, .4, .001)
loss = np.zeros(len(x))
for i, a1 in enumerate(x):
    a[1] = a1
    logmu = a0 + a1 * c[1:-1] + a2 * c[:-2]
    loss[i] = np.sum((c[2:] * logmu - np.exp(logmu)))
plt.plot(x, loss)
plt.plot(x[loss==max(loss)], loss[loss==max(loss)], '*')
plt.grid()
x[loss==max(loss)]
"""
Explanation: 1.2
Computing the log-likelihood for various $a_1$, we can see that the maximum is attained approximately at the true value of $a_1$.
It is only approximate because of the obvious noise in the data.
End of explanation
"""
gamma = 0.000005
itera = 500
a1 = 0.4
a1s = np.zeros(itera)
loss = np.zeros(itera)
for i in range(itera):
    logmu = a0 + a1 * c[1:-1] + a2 * c[:-2]
    loss[i] = np.sum((c[2:] * logmu - np.exp(logmu)))
    a1 += gamma * np.sum(c[2:] * c[1:-1] - np.exp(logmu) * c[1:-1])
    a1s[i] = a1
print (a1)
plt.plot(loss)
plt.xlabel('iteration')
plt.ylabel('Log Likelihood')
plt.figure()
plt.plot(a1s)
plt.xlabel('iteration')
plt.ylabel('a1')
"""
Explanation: 1.3
To find the parameter that maximizes our log-likelihood we can run a gradient ascent. The result is quite close to the real one, even if not exactly equal to -0.1.
End of explanation
"""
T = 1000
np.random.seed(1)
a0 = np.array([[0], [0]])
A1 = np.array([[.2, -.2],
               [0., .1]])
A2 = np.array([[.1, -.1],
               [0., .1]])
Sigma = np.eye(2) * 0.01
x = np.zeros((2, T + 2))
for t in range(2, T + 2):
    x1 = np.dot(A1, x[:, [t - 1]])
    x2 = np.dot(A2, x[:, [t - 2]])
    x[:, [t]] = (a0 + x1 + x2 + np.random.normal(0, 0.01, size=(2, 1)))
x = x[:, 2:]
plt.plot(x[0, :], label='x_1')
plt.plot(x[1, :], label='x_2')
plt.xlabel('time')
x.T.shape
"""
Explanation: Task 2. Granger Causality.
2.1 End of explanation """ p = 2 X_p, x_1 = set_data(2, x[0,:]) XpXp = np.dot(X_p.T, X_p) A = np.dot(LA.inv(XpXp), np.dot(X_p.T, x_1)) A #X_foo, _ = set_data(2, x[1, :]) #X_p = np.hstack((X_p, X_foo[:, 1:])) res = x_1 - np.dot(X_p, A) S = np.dot(res.T, res) / (T - p) LL1 = -(T - p) / 2 * np.log(2 * np.pi) - (T - p) / 2 * \ np.log(LA.det(S)) - 1 / 2 * \ np.trace(np.dot(np.dot(res, LA.inv(S)), res.T)) LL1 p = 2 #X_p, x_1 = set_data(2, x[0,:]) X_foo, _ = set_data(2, x[1, :]) X_p = np.hstack((X_p, X_foo[:, 1:])) XpXp = np.dot(X_p.T, X_p) A = np.dot(LA.inv(XpXp), np.dot(X_p.T, x_1)) A res = x_1 - np.dot(X_p, A) S = np.dot(res.T, res) / (T - p) LL2 = -(T - p) / 2 * np.log(2 * np.pi) - (T - p) / 2 * \ np.log(LA.det(S)) - 1 / 2 * \ np.trace(np.dot(np.dot(res, LA.inv(S)), res.T)) LL2 com = 2 * (LL2 - LL1) from scipy.stats import chi2 chi2.sf(com, 2, loc=0, scale=1) p = 2 X_p, x_1 = set_data(2, x[1,:]) XpXp = np.dot(X_p.T, X_p) A = np.dot(LA.inv(XpXp), np.dot(X_p.T, x_1)) A res = x_1 - np.dot(X_p, A) S = np.dot(res.T, res) / (T - p) LL1 = -(T - p) / 2 * np.log(2 * np.pi) - (T - p) / 2 * \ np.log(LA.det(S)) - 1 / 2 * \ np.trace(np.dot(np.dot(res, LA.inv(S)), res.T)) LL1 p = 2 #X_p, x_1 = set_data(2, x[0,:]) X_foo, _ = set_data(2, x[0, :]) X_p = np.hstack((X_p, X_foo[:, 1:])) XpXp = np.dot(X_p.T, X_p) A = np.dot(LA.inv(XpXp), np.dot(X_p.T, x_1)) A res = x_1 - np.dot(X_p, A) S = np.dot(res.T, res) / (T - p) LL2 = -(T - p) / 2 * np.log(2 * np.pi) - (T - p) / 2 * \ np.log(LA.det(S)) - 1 / 2 * \ np.trace(np.dot(np.dot(res, LA.inv(S)), res.T)) LL2 com = 2 * (LL2 - LL1) chi2.sf(com, 2, loc=0, scale=1) """ Explanation: We End of explanation """
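The repeated fit-and-compare blocks above can be folded into one reusable likelihood-ratio helper. The following is a self-contained sketch on freshly simulated data, not the exercise data; the function name and the toy AR coefficients are illustrative. Under the null hypothesis that the other series carries no extra information, the statistic is approximately chi-squared with $p$ degrees of freedom:

```python
import numpy as np

def granger_lr_test(y, x, p=2):
    """LR-type Granger statistic: do p lags of x improve the prediction of y
    beyond y's own p lags? Large values reject the null of no causality."""
    T = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    other = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    ones = np.ones((T - p, 1))

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r

    rss0 = rss(np.hstack([ones, own]))           # restricted model
    rss1 = rss(np.hstack([ones, own, other]))    # unrestricted model
    return (T - p) * np.log(rss0 / rss1)

# toy data: x drives y with one lag, but not the other way around
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()

print(granger_lr_test(y, x))  # expected to be large: x Granger-causes y
print(granger_lr_test(x, y))  # expected to be small: y does not Granger-cause x
```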
GoogleCloudPlatform/professional-services
examples/bigquery-table-access-pattern-analysis/pipeline.ipynb
apache-2.0
import sys
!{sys.executable} -m pip install -r requirements.txt
!jupyter nbextension enable --py widgetsnbextension
!jupyter serverextension enable voila --sys-prefix
"""
Explanation: License
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
1. Installing Dependencies
Run this cell to install all dependencies needed by the notebook. You can skip it if you have already run it in the current environment.
End of explanation
"""
%reload_ext autoreload
%autoreload 2

from dotenv import load_dotenv
load_dotenv('var.env')

import os

from itables import init_notebook_mode
if os.getenv('IS_INTERACTIVE_TABLES_MODE') == 'TRUE':
    init_notebook_mode(all_interactive=True)

from IPython.display import display
import ipywidgets as widgets
from ipywidgets import HBox

start_date_picker = widgets.DatePicker(description='Start Date')
end_date_picker = widgets.DatePicker(description='End Date')
date_pickers = HBox(children=[start_date_picker, end_date_picker])
display(date_pickers)

os.environ['START_DATE'] = str(start_date_picker.value)
os.environ['END_DATE'] = str(end_date_picker.value)

print('--- LOADED ENVIRONMENT VARIABLES ---')
print(f"INPUT_PROJECT_ID: {os.getenv('INPUT_PROJECT_ID')}")
print(f"INPUT_DATASET_ID: {os.getenv('INPUT_DATASET_ID')}")
print(f"INPUT_AUDIT_LOGS_TABLE_ID: {os.getenv('INPUT_AUDIT_LOGS_TABLE_ID')}")
print(f"IS_AUDIT_LOGS_TABLE_PARTITIONED: {os.getenv('IS_AUDIT_LOGS_INPUT_TABLE_PARTITIONED')}")
print(f"OUTPUT_PROJECT_ID: {os.getenv('OUTPUT_PROJECT_ID')}")
print(f"OUTPUT_DATASET_ID: {os.getenv('OUTPUT_DATASET_ID')}")
print(f"OUTPUT_TABLE_SUFFIX: {os.getenv('OUTPUT_TABLE_SUFFIX')}")
print(f"LOCATION: {os.getenv('LOCATION')}")
print(f"START_DATE: {os.getenv('START_DATE')}")
print(f"END_DATE: {os.getenv('END_DATE')}")
print(f"IS_INTERACTIVE_TABLES_MODE: {os.getenv('IS_INTERACTIVE_TABLES_MODE')}")
"""
Explanation: 2. Setting Environment Variables
<ul>
<li> Make sure you have set your environment variables in the `var.env` file.
<li> Pick the date range for your analysis.
<li> After resetting any environment variables, you need to restart the kernel, because otherwise they will not be reloaded by Jupyter. To restart, go to the 'Kernel' menu and choose 'Restart'.
<li> Run all the cells in this section.
<li> Make sure the environment variables are set correctly.
</ul>
End of explanation
"""
from src.bq_query import BQQuery

try:
    BQQuery.create_functions_for_pipeline_analysis()
    BQQuery.create_tables_for_pipeline_analysis()
except Exception as e:
    print('Unable to create tables, do not continue with the analysis')
    print(e)
"""
Explanation: 3. Creating Tables for Current Analysis Environment
Run the cell below to create the tables that are necessary for the analysis.
End of explanation
"""
import src.pipeline_analysis as pipeline_analysis
import ipywidgets as widgets
from IPython.display import display
import pandas as pd

limited_imbalance_tables = []
limited_imbalance_tables_df = pd.DataFrame()

def get_limited_imbalance_tables_df(limit):
    global limited_imbalance_tables, limited_imbalance_tables_df
    limited_imbalance_tables_df = pipeline_analysis.get_tables_read_write_frequency_df(limit)
    limited_imbalance_tables = limited_imbalance_tables_df['Table'].tolist()
    return limited_imbalance_tables_df

widgets.interact_manual.opts['manual_name'] = 'Run'
widgets.interact_manual(get_limited_imbalance_tables_df, limit=widgets.IntText(value=3))
;
"""
Explanation: 4. Getting Analysis Result
Get the tables with the highest discrepancy between write and read frequency throughout the data warehouse
This will list the tables with the highest discrepancy between write and read frequency.
Run the cell.
Set the limit on how many tables you want displayed using the text box; please insert positive values only.
Click 'Run' and wait until the result is retrieved.
End of explanation
"""
def visualise_table_pipelines(table):
    pipeline_analysis.display_pipelines_of_table(table)

widgets.interact_manual(visualise_table_pipelines, table=widgets.Dropdown(options=limited_imbalance_tables + [''], value='', description='Table:'))
;
"""
Explanation: Get the pipeline graph data of the table
This will generate a pipeline graph file, in HTML format, under the pipeline_graph directory. It may take some time for this to run and generate.
Choose the table of interest, the table that you want to explore further by displaying its pipeline graph.
Click 'Run' and wait until the run is finished (indicated by the box no longer being grayed out).
Run the next cell to display the graph.
End of explanation
"""
from IPython.display import IFrame, HTML, display
display(IFrame('./pipeline_graph/index.html', width="1000", height="800"))
"""
Explanation: Display the pipeline graph of the table
Display the pipeline graph of the table. The thickness of an edge indicates its frequency relative to the rest of the edges in the current graph.
Run the cell to display the pipeline graph of the table in the iFrame below.
You can click on the different nodes of the graph, each representing a different table that is part of the pipeline of this table of interest. When you click on a node, it will display more information for that table.
End of explanation
"""
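The `load_dotenv('var.env')` call above comes from the python-dotenv package. As a stdlib-only illustration of roughly what that step amounts to — a simplified sketch, not the package's actual implementation, and with a made-up required-variable check added for early failure:

```python
import os
import tempfile

def load_env_file(path):
    """Minimal KEY=VALUE parser: sets os.environ entries, skipping blanks and comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            os.environ[key.strip()] = value.strip()

def check_required(keys):
    """Return the subset of keys that are missing, so a notebook can fail early."""
    return [k for k in keys if not os.getenv(k)]

# Demonstrate with a throwaway env file (the values here are illustrative only).
with tempfile.NamedTemporaryFile('w', suffix='.env', delete=False) as f:
    f.write("# analysis settings\nINPUT_PROJECT_ID=my-project\nLOCATION=US\n")
    env_path = f.name

load_env_file(env_path)
missing = check_required(['INPUT_PROJECT_ID', 'LOCATION', 'HYPOTHETICAL_MISSING_VAR'])
print(missing)
```

A check like `check_required` makes the "make sure the environment variables are set correctly" step mechanical rather than visual.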
irazhur/StatisticalMethods
examples/XrayImage/Inference.ipynb
gpl-2.0
# import cluster_pgm
# cluster_pgm.inverse()

from IPython.display import Image
Image(filename="cluster_pgm_inverse.png")
"""
Explanation: Inferring Cluster Model Parameters from an X-ray Image
Forward modeling is always instructive: we got a good sense of the parameters of our cluster + background model simply by generating mock data and visualizing it.
The "inverse problem", also known as "inference," is to learn the parameters of an assumed model from a set of data. Intuitively we can see how it is going to work: try a lot of possible parameter combinations, and see which ones "match" the data.
Our inability to guess parameter values accurately first time shows that we are uncertain about them. In Bayesian inference, we use probability distributions to describe this uncertainty mathematically.
The sampling distribution ${\rm Pr}(d|\theta,H)$ encodes uncertainty about what might have been, given a model (or hypothesis) $H$ with parameters $\theta$. It allows us to generate mock datasets that are similar to the data that we do observe.
Before we take any data, our uncertainty about our model parameter values is encoded in the prior PDF for the parameters given the model, ${\rm Pr}(\theta|H)$.
Similarly, the sampling distribution ${\rm Pr}(d|\theta,H)$ can be thought of as the prior over datasets - a PDF could be imagined and used to generate mock data without us ever having seen any real data at all!
Probability
The idea of using probability distributions to quantify the uncertainty in our model parameters (and indeed in the models themselves) is due to Pierre Simon Laplace (1774), who rediscovered Thomas Bayes' earlier results on the probability of future events given their past history.
Let's remind ourselves how probabilities work.
Laplace and Bayes' key result is the following, usually referred to as "Bayes' Theorem:"
${\rm Pr}(\theta|d,H) = \frac{1}{{\rm Pr}(d|H)}\;{\rm Pr}(d|\theta,H)\;{\rm Pr}(\theta|H)$
What you know about your model parameters given the data is what you knew about them before $\left[ {\rm Pr}(\theta|H) \right]$, combined with what the data are telling you $\left[ {\rm Pr}(d|\theta,H) \right]$.
${\rm Pr}(\theta|d,H)$ is called the posterior probability distribution for the parameters given the data, and is the general solution to the inverse problem.
Both the posterior and prior PDFs are functions of the model parameters. The sampling distribution is a function of the data given the parameters - when written as a function of $\theta$ it is called the likelihood of the parameters given the model.
PGMs for Inverse Problems
Here's the probabilistic graphical model for the inverse X-ray cluster model problem.
Q: Spot the difference!
End of explanation
"""
%load_ext autoreload
%autoreload 2

import cluster

lets = cluster.XrayData()

lets.read_in_data()
lets.set_up_maps()

x0,y0 = 328,328    # The center of the image is 328,328
S0,b = 0.001,1e-6  # Cluster and background surface brightness, arbitrary units
beta = 2.0/3.0     # Canonical value is beta = 2/3
rc = 12            # Core radius, in pixels

logprob = lets.evaluate_unnormalised_log_posterior(x0,y0,S0,rc,beta,b)

print logprob
"""
Explanation: This PGM illustrates the joint PDF for the parameters and the data, which can be factorised as:
$\prod_k \; {\rm Pr}(N_k\;|\;\mu_k(\theta),{\rm ex}_k,{\rm pb}_k,H) \; {\rm Pr}(\,\theta\,|H)$
It can also be factorised to:
${\rm Pr}(\,\theta\,|\{N_k\}\,H) \; {\rm Pr}(\{N_k\}\,|H)$
which is, up to the normalizing constant, the posterior PDF for the model parameters, given all the data $\{N_k\}$.
PGMs can be used to design inferences.
Calculating Posterior PDFs
Notice that the prior PDF ${\rm Pr}(\theta|H)$ and the likelihood function ${\rm Pr}(d|\theta,H)$ can typically be evaluated at any point in the parameter space.
This means that we can always simply evaluate the posterior PDF on a grid (or at least attempt to), and normalize it by numerical integration.
Let's do this for a simplified version of our X-ray cluster model.
End of explanation
"""
import numpy as np

npix = 15
xmin,xmax = 327.7,328.3
ymin,ymax = 346.4,347.0

x0grid = np.linspace(xmin,xmax,npix)
y0grid = np.linspace(ymin,ymax,npix)

logprob = np.zeros([npix,npix])
for i,x0 in enumerate(x0grid):
    for j,y0 in enumerate(y0grid):
        logprob[j,i] = lets.evaluate_unnormalised_log_posterior(x0,y0,S0,rc,beta,b)
    print "Done column",i

print logprob[0:5,0]
"""
Explanation: Good. Here's the code that is being run, inside the "XrayData" class:
```python
def evaluate_log_prior(self):
    # Uniform in all parameters...
    return 0.0

def evaluate_log_likelihood(self):
    self.make_mean_image()
    # Return un-normalized Poisson sampling distribution:
    # log (\mu^N e^{-\mu} / N!) = N log \mu - \mu + constant
    return np.sum(self.im * np.log(self.mu) - self.mu)

def evaluate_unnormalised_log_posterior(self,x0,y0,S0,rc,beta,b):
    self.set_pars(x0,y0,S0,rc,beta,b)
    return self.evaluate_log_likelihood() + self.evaluate_log_prior()
```
Now let's try evaluating the 2D posterior PDF for cluster position, conditioned on reasonable values of the cluster and background flux, cluster size and beta:
End of explanation
"""
Z = np.max(logprob)
prob = np.exp(logprob - Z)
norm = np.sum(prob)
prob /= norm
print prob[0:5,0]
"""
Explanation: To normalize this, we need to take care not to try and exponentiate any very large or small numbers...
End of explanation
"""
import astropy.visualization as viz
import matplotlib.pyplot as plt
%matplotlib inline

plt.rcParams['figure.figsize'] = (10.0, 10.0)

plt.imshow(prob, origin='lower', cmap='Blues', interpolation='gaussian', extent=[xmin,xmax,ymin,ymax])
plt.xlabel('x / pixels')
plt.ylabel('y / pixels')
"""
Explanation: Let's plot this as a 2D probability density map.
End of explanation
"""
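The max-subtraction step above is the general recipe for normalizing a grid of log-posterior values without overflow or underflow; `scipy.special.logsumexp` implements the same idea. A self-contained check on a toy 1D grid (the coin-flip likelihood is purely illustrative, not the cluster model):

```python
import numpy as np
from scipy.special import logsumexp

# Toy grid inference: heads probability theta, data = 6000 heads in 10000 flips.
# These log-likelihood values are far below anything np.exp can represent.
theta = np.linspace(0.01, 0.99, 99)
loglike = 6000 * np.log(theta) + 4000 * np.log(1 - theta)  # flat prior assumed

# Naive normalization fails: every np.exp(loglike) underflows to zero.
assert np.all(np.exp(loglike) == 0.0)

# Max-subtraction, exactly as in the cell above.
Z = np.max(loglike)
prob = np.exp(loglike - Z)
prob /= np.sum(prob)

# Equivalent route: normalize in log space with logsumexp, exponentiate last.
prob2 = np.exp(loglike - logsumexp(loglike))

print(theta[np.argmax(prob)])  # posterior peak, expected near 6000/10000
```

Subtracting the maximum only rescales every probability by the same constant, which the final division removes again — so the answer is unchanged while every intermediate stays representable.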
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session03/Day1/SoftwareRepositories.ipynb
mit
! #complete ! #complete """ Explanation: Code Repositories Version 0.1 The notebook contains problems oriented around building a basic Python code repository and making it public via Github. Of course there are other places to put code repositories, with complexity ranging from services comparable to github to simple hosting a git server on your local machine. But this focuses on git and github as a ready-to-use example with plenty of additional resources to be found online. Note that these problems assum you are using the Anaconda Python distribution. This is particular useful for these problems because it makes it very easy to install testing packages in virtual environments quickly and with little wasted disk space. If you are not using anaconda, you can either use an alternative virtual environment scheme (e.g. in Py 3, the built-in venv), or just install pacakges directly into your default python (and hope for the best...). For git interaction, this notebook also uses the git command line tools directly. There are a variety of GUI tools that make working with git more visually intuitive (e.g. SourceTree, gitkraken, or the github desktop client), but this notebook uses the command line tools as the lowest common denominator. You are welcome to try to reproduce the steps with your client, however - feel free to ask your neighbors or instructors if you run into trouble there. As a final note, this notebook's examples assume you are using a system with a unix-like shell (e.g. macOS, Linux, or Windows with git-bash or the Linux subsystem shell). E Tollerud (STScI) Problem 0: Using Jupyter as a shell As an initial step before diving into code repositories, it's important to understand how you can use Jupyter as a shell. Most of the steps in this notebook require interaction with the system that's easier done with a shell or editor rather than using Python code in a notebook. 
While this could be done by opening up a terminal beside this notebook, to keep most of your work in the notebook itself, you can use the capabilities Jupyter + IPython offer for shell interaction. 0a: Figure out your base shell path and what's in it The critical trick here is the ! magic in IPython. Anything after a leading ! in IPython gets run by the shell instead of as python code. Run the shell command pwd and ls to see where IPython thinks you are on your system, and the contents of the directory. hint: Be sure to remove the "#complete"s below when you've done so. IPython will interpret that as part of the shell command if you don't End of explanation """ %%sh #complete """ Explanation: 0b: Try a multi-line shell command IPython magics often support "cell" magics by having %%&lt;command&gt; at the top of a cell. Use that to cd into the directory below this one ("..") and then ls inside that directory. Hint: if you need syntax tips, run the magic() function and look for the ! or !! commands End of explanation """ ! #complete """ Explanation: 0c: Create a new directory from Jupyter While you can do this almost as easily with os.mkdir in Python, for this case try to do it using shell magics instead. Make a new directory in the directory you are currently in. Use your system file browser to ensure you were sucessful. End of explanation """ %cd #complete """ Explanation: 0d: Change directory to your new directory One thing about shell commands is that they always start wherever you started your IPython instance. So doing cd as a shell command only changes things temporarily (i.e. within that shell command). IPython provides a %cd magic that makes this change last, though. Use this to %cd into the directory you just created, and then use the pwd shell command to ensure this cd "stuck" (You can also try doing cd as a shell command to prove to yourself that it's different from the %cd magic.) 
End of explanation """ !mkdir #complete only if you didn't do 0c, or want a different name for your code directory %%file <yourdirectory>/code.py def do_something(): # complete print(something)# this will make it much easier in future problems to see that something is actually happening """ Explanation: Final note: %cd -0 is a convenient shorthand to switch back to the initial directory. Problem 1: Creating a bare-bones repo and getting it on Github Here we'll create a simple (public) code repository with a minimal set of content, and publish it in github. 1a: Create a basic repository locally Start by creating the simplest possible code repository, composed of a single code file. Create a directory (or use the one from 0c), and place a code.py file in it, with a bit of Python code of your choosing. (Bonus points for witty or sarcastic code...) You could even use non-Python code if you desired, although Problems 3 & 4 feature Python-specific bits so I wouldn't recommend it. To make the file from the notebook, the %%file &lt;filename&gt; magic is a convenient way to write the contents of a notebook cell to a file. End of explanation """ %run <yourdirectory>/code.py # complete do_something() """ Explanation: If you want to test-run your code: End of explanation """ %cd # complete !git init !git add code.py !git commit -m #complete """ Explanation: 1b: Convert the directory into a git repo Make that code into a git repository by doing git init in the directory you created, then git add and git commit. End of explanation """ !git remote add <yourgithubusername> <the url github shows you on the repo web page> #complete !git push <yourgithubusername> master -u """ Explanation: 1c: Create a repository for your code in Github Go to github's web site in your web browser. If you do not have a github account, you'll need to create one (follow the prompts on the github site). Once you've got an account, you'll need to make sure your git client can authenticate with github. 
If you're using a GUI, you'll have to figure it out (usually it's pretty easy). On the command line you have two options: * The simplest way is to connect to github using HTTPS. This requires no initial setup, but git will prompt you for your github username and password every so often. * If you find that annoying (I do...), you can set up your system to use SSH to talk to github. Look for the "SSH and GPG keys" section of your settings on github's site, or if you're not sure how to work with SSH keys, check out github's help on the subject. Once you've got github set up to talk to your computer, you'll need to create a new repository for the code you created. Hit the "+" in the upper-right, create a "new repository" and fill out the appropriate details (don't create a README just yet). To stay sane, I recommend using the same name for your repository as the local directory name you used... But that is not a requirement, just a recommendation. Once you've created the repository, connect your local repository to github and push your changes up to github. End of explanation """ %%file README.md # complete """ Explanation: The -u is a convenience that means from then on you can use just git push and git pull to send your code to and from github. 1e: Modify the code and send it back up to github We'll discuss proper documentation later. But for now make sure to add a README to your code repository. Always add a README with basic documentation. Always. Even if only you are going to use this code, trust me, future you will be very happy you did it. You can just call it README, but to get it to get rendered nicely on the github repository, you can call it README.md and write it using markdown syntax, REAMDE.rst in ReST (if you know what that is) or various other similar markup languages github understands. If you don't know/care, just use README.md, as that's pretty standard at this point. 
End of explanation """ !git #complete """ Explanation: Don't forget to add and commit via git and push up to github... End of explanation """ !git #complete """ Explanation: 1f: Choose a License A bet you didn't expect to be reading legalese today... but it turns out this is important. If you do not explicitly license your code, in most countries (including the US and EU) it is technically illegal for anyone to use your code for any purpose other than just looking at it. (Un?)Fortunately, there are a lot of possible open source licenses out there. Assuming you want an open license, the best resources is to use the "Choose a License" website. Have a look over the options there and decide which you think is appropriate for your code. Once you've chosen a License, grab a copy of the license text, and place it in your repository as a file called LICENSE (or LICENSE.md or the like). Some licenses might also suggest you place the license text or just a copyright notice in the source code as well, but that's up to you. Once you've done that, do as we've done before: push all your additions up to github. If you've done it right, github will automatically figure out your license and show it in the upper-right corner of your repo's github page. End of explanation """ # Don't forget to do this cd or something like it... otherwise you'll clone *inside* your repo %cd -0 !git clone <url from github>#complete %cd <reponame>#complete """ Explanation: Problem 2: Collaborating with others' repos There's not much point in having open source code if no one else can look at it or use it. So now we'll have you try modify your neighbors' project using github's Pull Request feature. 2a: Get (git?) your neighbor's code repo Find someone sitting near you who has gotten through Problem 1. Ask them their github user name and the name of their repository. Once you've got the name of their repo, navigate to it on github. The URL pattern is always "https://www.github.com/theirusername/reponame". 
Use the github interface to "fork" that repo, yielding a "yourusername/reponame" repository. Go to that one, take note of the URL needed to clone it (you'll need to grab it from the repo web page, either in "HTTPS" or "SSH" form, depending on your choice in 1a). Then clone that onto your local machine. End of explanation """ !git branch <name-of-branch>#complete """ Explanation: 2c: create a branch for your change You're going to make some changes to their code, but who knows... maybe they'll spend so long reviewing it that you want to do another. So it's always best to make changes in a specific "branch" for that change. So to do this we need to make a github branch. End of explanation """ !git add <files modified>#complete !git commit -m ""#complete """ Explanation: 2c: modify the code Make some change to their code repo. Usually this would be a new feature or a bug fix or documentation clarification or the like... But it's up to you. Once you've done that, be sure to commit the change locally. End of explanation """ !git push origin <name-of-branch>#complete """ Explanation: and push it up (to a branch on your github fork). End of explanation """ !git #complete """ Explanation: 2d: Issue a pull request Now use the github interface to create a new "pull request". If you time it right, once you've pushed your new branch up, you'll see a prompt to do this automatically appear on your fork's web page. But if you don't, use the "branches" drop-down to navigate to the new branch, and then hit the "pull request" button. That should show you an interface that you can use to leave a title and description (in github markdown), and then submit the PR. Go ahead and do this. 2e: Have them review the PR Tell your neighbor that you've issued the PR. They should be able to go to their repo, and see that a new pull request has been created. There they'll review the PR, possibly leaving comments for you to change. 
If so, go to 2f, but if not, they should hit the "Merge" button, and you can jump to 2g. 2f: (If necessary) make changes and update the code If they left you some comments that require changing prior to merging, you'll need to make those changes in your local copy, commit those changes, and then push them up to your branch on your fork. End of explanation """ !git remote add <neighbors-username> <url-from-neighbors-github-repo> #complete !git fetch <neighbors-username> #complete !git branch --set-upstream-to=<neighbors-username>/master master !git checkout master !git pull """ Explanation: Hopefully they are now satisfied and are willing to hit the merge button. 2g: Get the updated version Now you should get the up-to-date version from the original owner of the repo, because that way you'll have both your changes and any other changes they might have made in the meantime. To do this you'll need to connect your local copy to your nieghbor's github repo (not your fork). End of explanation """ !mkdir <yourpkgname>#complete !git mv code.py <yourpkgname>#complete #The "touch" unix command simply creates an empty file if there isn't one already. #You could also use an editor to create an empty file if you prefer. !touch <yourpkgname>/__init__.py#complete """ Explanation: Now if you look at the local repo, it should include your changes. Suggestion To stay sane, you might change the "origin" remote to your username. E.g. git remote rename origin &lt;yourusername&gt;. To go further, you might even delete your fork's master branch, so that only your neighbor's master exists. That might save you headaches in the long run if you were to ever access this repo again in the future. 2h: Have them reciprocate Science (Data or otherwise) and open source code is a social enterprise built on shared effort, mutual respect, and trust. So ask them to issue a PR aginst your code, too. The more we can stand on each others' shoulders, the farther we will all see. Hint: Ask them nicely. 
Maybe offer a cookie or something? Problem 3: Setting up a bare-bones Python Package Up to this point we've been working on the simplest possible shared code: a single file with all the content. But for most substantial use cases this isn't going to cut it. After all, Python was designed around the idea of namespaces that let you hide away or show code to make writing, maintaining, and versioning code much easier. But to make use of these, we need to deploy the installational tools that Python provides. This is typically called "packaging". In this problem we will take the code you just made it and build it into a proper python package that can be installed and then used anywhere. For more background and detail (and the most up-to-date recommendations) see the Python Packaging Guide. 3a: Set up a Python package structure for your code First we adjust the structure of your code from Problem 1 to allow it to live in a package structure rather than as a stand-alone .py file. All you need to do is create a directory, move the code.py file into that directory, and add a file (can be empty) called __init__.py into the directory. You'll have to pick a name for the package, which is usually the same as the repo name (although that's not strictly required). Hint: don't forget to switch back to your code repo directory, if you are doing this immediately after Problem 2. End of explanation """ from <yourpkgname> import code#complete #if your code.py has a function called `do_something` as in the example above, you can now run it like: code.do_something() """ Explanation: 3b: Test your package You should now be able to import your package and the code inside it as though it were some installed package like numpy, astropy, pandas, etc. 
End of explanation """ %%file <yourpkgname>/__init__.py #complete """ Explanation: 3c: Apply packaging tricks One of the nice things about packages is that they let you hide the implementation of some part of your code in one place while exposing a "cleaner" namespace to the users of your package. To see a (trivial) example, of this, lets pull a function from your code.py into the base namespace of the package. In the below make the __init__.py have one line: from .code import do_something. That places the do_something() function into the package's root namespace. End of explanation """ import <yourpkgname>#complete <yourpkgname>.do_something()#complete """ Explanation: Now the following should work. End of explanation """ from importlib import reload #not necessary on Py 2.x, where reload() is built-in reload(<yourpkgname>)#complete <yourpkgname>.do_something()#complete """ Explanation: BUT you will probably get an error here. That's because Python is smart about imports: once it's imported a package once it won't re-import it later. Usually that saves time, but here it's a hassle. Fortunately, we can use the reload function to get around this: End of explanation """ %%file /Users/erik/tmp/lsst-test/setup.py #!/usr/bin/env python from distutils.core import setup setup(name='<yourpkgname>', version='0.1dev', description='<a description>', author='<your name>', author_email='<youremail>', packages=['<yourpkgname>'], ) #complete """ Explanation: 3d: Create a setup.py file Ok, that's great in a pinch, but what if you want your package to be available from other directories? If you open a new terminal somewhere else and try to import &lt;yourpkgname&gt; you'll see that it will fail, because Python doesn't know where to find your package. Fortunately, Python (both the language and the larger ecosystem) provide built-in tools to install packages. 
These are built around creating a setup.py script that controls installation of a python packages into a shared location on your machine. Essentially all Python packages are installed this way, even if it happens silently behind-the-scenes. Below is a template bare-bones setup.py file. Fill it in with the relevant details for your package. End of explanation """ !python setup.py build """ Explanation: 3e: Build the package Now you should be able to "build" the package. In complex packages this will involve more involved steps like linking against C or FORTRAN code, but for pure-python packages like yours, it simply involves filtering out some extraneous files and copying the essential pieces into a build directory. End of explanation """ %%sh cd build/lib.X-Y-Z #complete python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete """ Explanation: To test that it built sucessfully, the easiest thing to do is cd into the build/lib.X-Y-Z directory ("X-Y-Z" here is OS and machine-specific). Then you should be able to import &lt;yourpkgname&gt;. It's usually best to do this as a completely independent process in python. That way you can be sure you aren't accidentally using an old import as we saw above. End of explanation """ %%sh conda create -n test_<yourpkgname> anaconda #complete source activate test_<yourpkgname> #complete python setup.py install """ Explanation: 3f: Install the package Alright, now that it looks like it's all working as expected, we can install the package. Note that if we do this willy-nilly, we'll end up with lots of packages, perhaps with the wrong versions, and it's easy to get confused about what's installed (there's no reliable uninstall command...) So before installing we first create a virtual environment using Anaconda, and install into that. If you don't have anaconda or a similar virtual environment scheme, you can just do python setup.py install. 
But just remember that this will be difficult to back out (hence the reason for Python environments in the first place!) End of explanation """ %%sh cd $HOME source activate test_<yourpkgname> #complete python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete """ Explanation: Now we can try running the package from anywhere (not just the source code directory), as long as we're in the same environment that we installed the package in. End of explanation """ !git #complete """ Explanation: 3g: Update the package on github OK, it's now installable. You'll now want to make sure to update the github version to reflect these improvements. You'll need to add and commit all the files. You'll also want to update the README to instruct users that they should use python setup.py install to install the package. End of explanation """ %%file ~/.pypirc [distutils] index-servers= testpypi [testpypi] repository = https://testpypi.python.org/pypi username = <your user name goes here> password = <your password goes here> """ Explanation: Problem 4: Publishing your package on (fake) PyPI Now that your package can be installed by anyone who comes across it on github. But it tends to scare some people that they need to download the source code and know git to use your code. The Python Package Index (PyPI), combined with the pip tool (now standard in Python) provides a much simpler way to distribute code. Here we will publish your code to a testing version of PyPI. 4a: Create a PyPI account First you'll need an account on PyPI to register new packages. Go to the testing PyPI, and register. You'll also need to supply your login details in the .pypirc directory in your home directory as shown below. (If it were the real PyPI you'd want to be more secure and not have your password in plain text. But for the testing server that's not really an issue.) 
End of explanation """ !python setup.py register -r https://testpypi.python.org/pypi """ Explanation: 4b: Register your package on PyPI distutils has built-in functionality for interacting with PyPI. This includes the ability to register your package directly from the command line, automatically filling out the details you provided in your setup.py. Hint: You'll want to make sure your package version is something you want to release before executing the register command. Released versions can't be duplicates of existing versions, and shouldn't end in "dev" or "b" or the like. End of explanation """ !python setup.py sdist """ Explanation: (The -r is normally unnecessary, but we need it here because we're using the "testing" PyPI) 4c: Build a "source" version of your package distutils also provides a tool to do this automatically - it bundles your package's source code into a distributable .tar.gz archive: End of explanation """ !python setup.py sdist """ Explanation: Verify that there is a <yourpkg>-<version>.tar.gz file in the dist directory. It should have all of the source code necessary for your package. 4d: Upload your package to PyPI Check out the PyPI page for your package. You'll see it now has the info from your setup.py but there's no package.
Again, distutils provides a tool to do this automatically - you take the source distribution that was created, and upload it: End of explanation """ %%sh conda create -n test_pypi_<yourpkgname> anaconda #complete source activate test_pypi_<yourpkgname> #complete pip install -i https://testpypi.python.org/pypi <yourpkgname> %%sh cd $HOME source activate test_pypi_<yourpkgname> #complete python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete """ Explanation: If for some reason this fails (which does happen for unclear reasons on occasion), you can usually just directly upload the .tar.gz file from the web interface without too much trouble. 4e: Install your package with pip The pip tool is a convenient way to install packages on PyPI. Again, we use Anaconda to create a testing environment to make sure everything worked correctly. (Normally the -i wouldn't be necessary - we're using it here only because we're using the "testing" PyPI) End of explanation """
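Once installed (whether via setup.py or pip), the install can be sanity-checked programmatically as well as from the shell. This is a generic standard-library sketch, not part of the original tutorial; the stdlib json module stands in for <yourpkgname>:

```python
# Generic install sanity check (standard library only; not from the tutorial).
import importlib

def check_install(pkg_name):
    """Return the file a package was imported from, or None if it isn't installed."""
    try:
        mod = importlib.import_module(pkg_name)
    except ImportError:
        return None
    return getattr(mod, "__file__", "<built-in>")

print(check_install("json") is not None)   # True -- stdlib stand-in for <yourpkgname>
print(check_install("no_such_pkg_xyz"))    # None
```

Checking the returned path is a quick way to confirm you are importing the installed copy rather than the source checkout.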
phoebe-project/phoebe2-docs
2.3/examples/requiv_max_limit.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.3,<2.4" import phoebe b = phoebe.default_binary() b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101)) """ Explanation: jktebop: requiv_max_limit Here we'll examine how well jktebop agrees with PHOEBE with increased distortion. Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation """ b.add_compute('jktebop', requiv_max_limit=1.0) """ Explanation: In order to allow jktebop to compute models, we'll set requiv_max_limit=1.0, effectively disabling the error that would otherwise be raised at a default factor of 0.5 by b.run_checks_compute. End of explanation """ b.set_value_all('ld_mode', 'manual') b.set_value_all('ld_func', 'linear') b.set_value_all('ld_coeffs', [0.5]) b.set_value_all('irrad_method', 'none') """ Explanation: And to avoid any issues with falling outside the atmosphere grids, we'll set a simple flat limb-darkening model and disable irradiation. End of explanation """ requiv_max = b.get_value('requiv_max', component='primary', context='component') for requiv_max_factor in [0.6, 0.7, 0.8, 0.9, 1.0]: b.set_value('requiv', component='primary', value=requiv_max_factor*requiv_max) b.run_compute(kind='phoebe', atm='blackbody', model='phoebe2_model', overwrite=True) b.run_compute(kind='jktebop', model='jktebop_model', overwrite=True) _ = b.plot(context='model', title='requiv = {:0.1f} / requiv_max'.format(requiv_max_factor), draw_title=True, legend=True, show=True) """ Explanation: For PHOEBE, we'll use blackbody atmospheres (again to avoid any issues of falling out of the grid). For jktebop, we'll keep 'ck2004' - this will only be used to compute the flux-scaling factor based on mean stellar values, so should not fall outside the grid. 
At a quick glance, we can see that jktebop agrees quite well at a factor of 0.6, but noticeable differences appear by 0.7 (keep in mind, the default value before an error is raised within PHOEBE is 0.5, but this can be adjusted as necessary, with caution). End of explanation """
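The by-eye comparison above can also be quantified. Below is a hedged, self-contained numpy sketch on synthetic fluxes — it does not call PHOEBE or jktebop, and the Gaussian "eclipse" dip and flat 2% offset are made-up stand-ins for two model light curves:

```python
import numpy as np

def max_relative_residual(flux_ref, flux_cmp):
    """Largest |ref - cmp| / |ref| across the phase curve."""
    flux_ref = np.asarray(flux_ref, dtype=float)
    flux_cmp = np.asarray(flux_cmp, dtype=float)
    return float(np.max(np.abs(flux_ref - flux_cmp) / np.abs(flux_ref)))

# made-up stand-ins: a Gaussian 'eclipse' dip and a copy offset by a flat 2%
phases = np.linspace(0, 1, 101)
lc_ref = 1.0 - 0.3 * np.exp(-((phases - 0.5) / 0.05) ** 2)
lc_cmp = 1.02 * lc_ref

print(max_relative_residual(lc_ref, lc_cmp))  # roughly 0.02
```

Applying a metric like this to the two synthesized models at each requiv factor would turn the panel-by-panel inspection into a single number per factor.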
BinRoot/TensorFlow-Book
ch06_hmm/Concept01_forward.ipynb
mit
import numpy as np import tensorflow as tf """ Explanation: Ch 06: Concept 01 Hidden Markov model forward algorithm Oof this code's a bit complicated if you don't already know how HMMs work. Please see the book chapter for step-by-step explanations. I'll try to improve the documentation, or feel free to send a pull request with your own documentation! First, let's import TensorFlow and NumPy: End of explanation """ class HMM(object): def __init__(self, initial_prob, trans_prob, obs_prob): self.N = np.size(initial_prob) self.initial_prob = initial_prob self.trans_prob = trans_prob self.emission = tf.constant(obs_prob) assert self.initial_prob.shape == (self.N, 1) assert self.trans_prob.shape == (self.N, self.N) assert obs_prob.shape[0] == self.N self.obs_idx = tf.placeholder(tf.int32) self.fwd = tf.placeholder(tf.float64) def get_emission(self, obs_idx): slice_location = [0, obs_idx] num_rows = tf.shape(self.emission)[0] slice_shape = [num_rows, 1] return tf.slice(self.emission, slice_location, slice_shape) def forward_init_op(self): obs_prob = self.get_emission(self.obs_idx) fwd = tf.multiply(self.initial_prob, obs_prob) return fwd def forward_op(self): transitions = tf.matmul(self.fwd, tf.transpose(self.get_emission(self.obs_idx))) weighted_transitions = transitions * self.trans_prob fwd = tf.reduce_sum(weighted_transitions, 0) return tf.reshape(fwd, tf.shape(self.fwd)) """ Explanation: Define the HMM model: End of explanation """ def forward_algorithm(sess, hmm, observations): fwd = sess.run(hmm.forward_init_op(), feed_dict={hmm.obs_idx: observations[0]}) for t in range(1, len(observations)): fwd = sess.run(hmm.forward_op(), feed_dict={hmm.obs_idx: observations[t], hmm.fwd: fwd}) prob = sess.run(tf.reduce_sum(fwd)) return prob """ Explanation: Define the forward algorithm: End of explanation """ if __name__ == '__main__': initial_prob = np.array([[0.6], [0.4]]) trans_prob = np.array([[0.7, 0.3], [0.4, 0.6]]) obs_prob = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]]) 
hmm = HMM(initial_prob=initial_prob, trans_prob=trans_prob, obs_prob=obs_prob) observations = [0, 1, 1, 2, 1] with tf.Session() as sess: prob = forward_algorithm(sess, hmm, observations) print('Probability of observing {} is {}'.format(observations, prob)) """ Explanation: Let's try it out: End of explanation """
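As a cross-check on the TensorFlow code, the same forward recursion can be written in a few lines of plain NumPy. This is a standard textbook implementation (assumed equivalent to the graph above, not taken from the book), using the same parameters and observation sequence:

```python
import numpy as np

def forward_np(initial_prob, trans_prob, obs_prob, observations):
    """Plain-NumPy HMM forward algorithm: returns P(observation sequence)."""
    # alpha_t[i] = P(obs_1..obs_t, state_t = i)
    alpha = initial_prob.ravel() * obs_prob[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ trans_prob) * obs_prob[:, obs]
    return float(alpha.sum())

initial_prob = np.array([[0.6], [0.4]])
trans_prob = np.array([[0.7, 0.3], [0.4, 0.6]])
obs_prob = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])

prob = forward_np(initial_prob, trans_prob, obs_prob, [0, 1, 1, 2, 1])
print(prob)  # ~0.00454; the TensorFlow version above should agree
```

If the TensorFlow graph is wired correctly, both implementations should report the same probability for the sequence [0, 1, 1, 2, 1].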
aasensio/SolarnetGranada
notebooks/Profiles and classification.ipynb
mit
sn.set_style("dark") f, ax = pl.subplots(figsize=(9,9)) ax.imshow(stI[:,:,0], aspect='auto', cmap=pl.cm.gray) """ Explanation: Index Contrast and velocity fields Classification Contrast and velocity fields <a id='contrast'></a> End of explanation """ contrastFull = np.std(stI[:,:,0]) / np.mean(stI[:,:,0]) contrastQuiet = np.std(stI[400:,100:300,0]) / np.mean(stI[400:,100:300,0]) print("Contrast in the image : {0}%".format(contrastFull * 100.0)) print("Contrast in the quiet Sun : {0}%".format(contrastQuiet * 100.0)) """ Explanation: Let us compute simple things like the contrast and the Doppler velocity field End of explanation """ v = np.zeros((512,512)) for i in range(512): for j in range(512): pos = np.argmin(stI[i,j,20:40]) + 20 res = np.polyfit(wave[pos-2:pos+2], stI[i,j,pos-2:pos+2], 2) w = -res[1] / (2.0 * res[0]) v[i,j] = (w-6301.5) / 6301.5 * 3e5 f, ax = pl.subplots(figsize=(9,9)) ax.imshow(np.clip(v,-5,5)) f.savefig('velocities.png') f, ax = pl.subplots(nrows=1, ncols=2, figsize=(15,9)) ax[0].imshow(stI[:,0,:], aspect='auto', cmap=pl.cm.gray) ax[1].imshow(stV[:,0,:], aspect='auto', cmap=pl.cm.gray) f.savefig('exampleStokes.png') """ Explanation: Now let us compute the velocity field. To this end, we compute the location of the core of the line in velocity units for each pixel. End of explanation """ X = stV[50:300,200:450,:].reshape((250*250,112)) maxV = np.max(np.abs(X), axis=1) X = X / maxV[:,None] nClusters = 9 km = MiniBatchKMeans(init='k-means++', n_clusters=nClusters, n_init=10, batch_size=500) km.fit(X) out = km.predict(X) avg = np.zeros((nClusters,112)) for i in range(nClusters): avg[i,:] = np.mean(X[out==i,:], axis=0) f, ax = pl.subplots(ncols=3, nrows=3, figsize=(12,9)) loop = 0 for i in range(3): for j in range(3): percentage = X[out==i,:].shape[0] / (250*250.) 
* 100.0 ax[i,j].plot(km.cluster_centers_[loop,:]) ax[i,j].set_title('Class {0} - {1}%'.format(loop, percentage)) loop += 1 pl.tight_layout() """ Explanation: Classification <a id='classification'></a> End of explanation """
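MiniBatchKMeans does the heavy lifting above; to make the clustering step concrete, here is a hedged, self-contained sketch of the same normalize-then-cluster idea with a tiny hand-rolled k-means on synthetic profiles (sklearn and the real Stokes data are not used, and the two "families" of curves are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "profiles": two families of noisy curves with opposite-sign lobes
t = np.linspace(-1.0, 1.0, 50)
family_a = np.sin(np.pi * t) + 0.05 * rng.standard_normal((40, 50))
family_b = -np.sin(np.pi * t) + 0.05 * rng.standard_normal((40, 50))
X = np.vstack([family_a, family_b])

# normalize each profile by its maximum absolute amplitude, as in the notebook
X = X / np.max(np.abs(X), axis=1)[:, None]

def kmeans(X, init_idx, n_iter=20):
    """A few Lloyd iterations starting from the given rows as initial centers."""
    centers = X[list(init_idx)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(X, init_idx=(0, 40))
print((labels[:40] == labels[0]).all() and labels[0] != labels[40])  # True
```

The per-profile amplitude normalization matters: without it, the clustering would mostly sort profiles by amplitude rather than by shape.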
weleen/mxnet
example/notebooks/moved-from-mxnet/composite_symbol.ipynb
apache-2.0
import mxnet as mx """ Explanation: Composite symbols into component In this example we will show how to make an Inception network by composing single symbols into components. Inception is currently the best model: compared to other models, it has far fewer parameters while achieving the best performance. However, it is much more complex than a sequential feedforward network. The Inception network in this example follows Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." arXiv preprint arXiv:1502.03167 (2015). End of explanation """ # Basic Conv + BN + ReLU factory def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), name=None, suffix=''): conv = mx.symbol.Convolution(data=data, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad, name='conv_%s%s' %(name, suffix)) bn = mx.symbol.BatchNorm(data=conv, fix_gamma=False, eps=1e-5 + 1e-10, momentum=0.9, name='bn_%s%s' %(name, suffix)) act = mx.symbol.Activation(data=bn, act_type='relu', name='relu_%s%s' %(name, suffix)) return act """ Explanation: For a complex network such as the Inception network, building everything from single symbols is painful, so we make simple component factories to simplify the procedure. Apart from differences in the number of filters, we find 2 major differences between the Inception modules, so we can build two module factories plus one basic Convolution + BatchNorm + ReLU factory to simplify the problem.
End of explanation """ prev = mx.symbol.Variable(name="Previos Output") conv_comp = ConvFactory(data=prev, num_filter=64, kernel=(7,7), stride=(2, 2)) mx.viz.plot_network(symbol=conv_comp) """ Explanation: We can visualize our basic component End of explanation """ # param mapping to paper: # num_1x1 >>> #1x1 # num_3x3red >>> #3x3 reduce # num_3x3 >>> #3x3 # num_d3x3red >>> double #3x3 reduce # num_d3x3 >>> double #3x3 # pool >>> Pool # proj >>> proj def InceptionFactoryA(data, num_1x1, num_3x3red, num_3x3, num_d3x3red, num_d3x3, pool, proj, name): # 1x1 c1x1 = ConvFactory(data=data, num_filter=num_1x1, kernel=(1, 1), name=('%s_1x1' % name)) # 3x3 reduce + 3x3 c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce') c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), name=('%s_3x3' % name)) # double 3x3 reduce + double 3x3 cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce') cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_0' % name)) cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_1' % name)) # pool + proj pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(1, 1), pad=(1, 1), pool_type=pool, name=('%s_pool_%s_pool' % (pool, name))) cproj = ConvFactory(data=pooling, num_filter=proj, kernel=(1, 1), name=('%s_proj' % name)) # concat concat = mx.symbol.Concat(*[c1x1, c3x3, cd3x3, cproj], name='ch_concat_%s_chconcat' % name) return concat # We can also visualize network with feature map shape information # In this case, we must provide all necessary input shape info as a dict prev = mx.symbol.Variable(name="Previos Output") in3a = InceptionFactoryA(prev, 64, 64, 64, 64, 96, "avg", 32, name="in3a") # shape info # Note shape info must contain batch size although we ignore batch size in graph to save space 
batch_size = 128 shape = {"Previos Output" : (batch_size, 3, 28, 28)} # plot mx.viz.plot_network(symbol=in3a, shape=shape) """ Explanation: The next step is making a component factory with all stride=(1, 1) End of explanation """ # param mapping to paper: # num_1x1 >>> #1x1 (not exist!) # num_3x3red >>> #3x3 reduce # num_3x3 >>> #3x3 # num_d3x3red >>> double #3x3 reduce # num_d3x3 >>> double #3x3 # pool >>> Pool (not needed, all are max pooling) # proj >>> proj (not exist!) def InceptionFactoryB(data, num_3x3red, num_3x3, num_d3x3red, num_d3x3, name): # 3x3 reduce + 3x3 c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce') c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), stride=(2, 2), name=('%s_3x3' % name)) # double 3x3 reduce + double 3x3 cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce') cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), stride=(1, 1), name=('%s_double_3x3_0' % name)) cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), stride=(2, 2), name=('%s_double_3x3_1' % name)) # pool + proj pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(2, 2), pad=(1, 1), pool_type="max", name=('max_pool_%s_pool' % name)) # concat concat = mx.symbol.Concat(*[c3x3, cd3x3, pooling], name='ch_concat_%s_chconcat' % name) return concat prev = mx.symbol.Variable(name="Previos Output") in3c = InceptionFactoryB(prev, 128, 160, 64, 96, name='in3c') mx.viz.plot_network(symbol=in3c) """ Explanation: We will make the other factory with stride=(2, 2) End of explanation """ # data data = mx.symbol.Variable(name="data") # stage 1 conv1 = ConvFactory(data=data, num_filter=64, kernel=(7, 7), stride=(2, 2), pad=(3, 3), name='1') pool1 = mx.symbol.Pooling(data=conv1, kernel=(3, 3), stride=(2, 2), name='pool_1', pool_type='max') # stage 2 conv2red = 
ConvFactory(data=pool1, num_filter=64, kernel=(1, 1), stride=(1, 1), name='2_red') conv2 = ConvFactory(data=conv2red, num_filter=192, kernel=(3, 3), stride=(1, 1), pad=(1, 1), name='2') pool2 = mx.symbol.Pooling(data=conv2, kernel=(3, 3), stride=(2, 2), name='pool_2', pool_type='max') # stage 2 in3a = InceptionFactoryA(pool2, 64, 64, 64, 64, 96, "avg", 32, '3a') in3b = InceptionFactoryA(in3a, 64, 64, 96, 64, 96, "avg", 64, '3b') in3c = InceptionFactoryB(in3b, 128, 160, 64, 96, '3c') # stage 3 in4a = InceptionFactoryA(in3c, 224, 64, 96, 96, 128, "avg", 128, '4a') in4b = InceptionFactoryA(in4a, 192, 96, 128, 96, 128, "avg", 128, '4b') in4c = InceptionFactoryA(in4b, 160, 128, 160, 128, 160, "avg", 128, '4c') in4d = InceptionFactoryA(in4c, 96, 128, 192, 160, 192, "avg", 128, '4d') in4e = InceptionFactoryB(in4d, 128, 192, 192, 256, '4e') # stage 4 in5a = InceptionFactoryA(in4e, 352, 192, 320, 160, 224, "avg", 128, '5a') in5b = InceptionFactoryA(in5a, 352, 192, 320, 192, 224, "max", 128, '5b') # global avg pooling avg = mx.symbol.Pooling(data=in5b, kernel=(7, 7), stride=(1, 1), name="global_pool", pool_type='avg') # linear classifier flatten = mx.symbol.Flatten(data=avg, name='flatten') fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=1000, name='fc1') softmax = mx.symbol.SoftmaxOutput(data=fc1, name='softmax') # if you like, you can visualize full network structure mx.viz.plot_network(symbol=softmax, shape={"data" : (128, 3, 224, 224)}) """ Explanation: Now we can use these factories to build the whole network End of explanation """
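The spatial shapes flowing through this network follow the standard convolution output-size formula. Here is a small hedged helper (plain Python, not part of mxnet) for checking them:

```python
def conv_out_size(in_size, kernel, stride, pad):
    """Standard output-size formula: floor((in + 2*pad - kernel) / stride) + 1."""
    return (in_size + 2 * pad - kernel) // stride + 1

# stage 1 of the network above: 7x7 conv, stride 2, pad 3 on a 224x224 input
print(conv_out_size(224, kernel=7, stride=2, pad=3))  # 112

# the 3x3 / pad 1 / stride 1 convolutions inside the Inception modules
# preserve spatial size, e.g. on the 28x28 example earlier:
print(conv_out_size(28, kernel=3, stride=1, pad=1))   # 28
```

The same-size property of the 3x3/pad-1/stride-1 convolutions is what makes the channel-wise Concat inside each Inception module legal: all branches produce feature maps with identical height and width.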
dualphase90/Learning-Neural-Networks
.ipynb_checkpoints/Training-Neural-Networks-Theano-checkpoint.ipynb
mit
import theano import theano.tensor as T import numpy as np """ Explanation: Training Neural Networks with Theano Training neural networks involves quite a few tricky bits. We try to make everything clear and easy to understand, to get you training your neural networks as quickly as possible. Theano allows us to write relatively concise code that follows the structure of the underlying maths. To run the code yourself, download the notebook at https://github.com/ASIDataScience/training-neural-networks-notebook Recognising hand-written digits We will train a network to classify digits. More precisely, we want a network that when presented with an image of a hand-written digit will tell us what digit it is ('0', '1', ..., '9'). The data we will use for this task is known as the MNIST dataset. It has a long tradition in neural networks research, as the dataset is quite small but still very tricky to classify correctly. <img src="0-MNIST-Digits.png" /> You can <a href=http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz> download the MNIST dataset</a> which we are using. Import Theano You can install theano with the Python package manager pip. At the command line type pip install theano or check out the <a href=http://deeplearning.net/software/theano/install.html#basic-user-install-instructions>theano documentation</a> if you run into trouble The theano.tensor module contains useful functionality for manipulating vectors and matrices, like we will be doing here, so let's import it along with the full package. 
End of explanation """ import os import gzip import pickle myPath = '' # mnist.pkl.gz is in this directory f = gzip.open(os.path.join(myPath, 'mnist.pkl.gz'), 'rb') try: # for cross-platform, cross-version reasons, try two different pickle.load statements mnist_dataset = pickle.load(f, encoding='latin1') except: mnist_dataset = pickle.load(f) f.close() train_set, valid_set, test_set = mnist_dataset prepare_data = lambda x: (theano.shared(x[0].astype('float64')), theano.shared(x[1].astype('int32'))) (training_x, training_y), (valid_x, valid_y), (test_x, test_y) = map(prepare_data, (train_set, valid_set, test_set)) import matplotlib import matplotlib.pyplot as plt %matplotlib inline def plot_mnist_digit(image): '''Plot a single digit from the mnist dataset''' image = np.reshape(image, [28,28]) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.matshow(image, cmap=matplotlib.cm.binary) plt.xticks(np.array([])) plt.yticks(np.array([])) plt.show() """ Explanation: With the data loaded into a variable mnist_dataset, we split it into three parts: a training set, used to teach the network to recognize digits, a validation set that could be used to tune and compare models, and finally a test set, used to see how well it has learned. We then prepare the datasets, splitting each set into the images and their labels, and store them in Theano shared variables, a bit of theano magic that's explained later. Understanding this massaging of the data isn't crucial. End of explanation """
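The conversion between 28x28 images and 784-element vectors (undone inside plot_mnist_digit) is just a reshape round trip. A quick stand-alone sketch with a stand-in image:

```python
import numpy as np

image = np.arange(28 * 28).reshape(28, 28)   # stand-in for one digit image
flat = image.reshape(-1)                     # the 784-vector stored in the dataset
restored = np.reshape(flat, [28, 28])        # what plot_mnist_digit does internally

print(flat.shape, (restored == image).all())  # (784,) True
```

No pixel information is lost in flattening; what is lost is only the explicit 2-D neighborhood structure, which the models below ignore anyway.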
We've defined a function plot_mnist_digit exactly for printing out the training images - it's just for prettiness. End of explanation """ n_classes = 10 # each digit is one of 0-9 dims = 28 * 28 # our input data is flattened 28x28 matrices of image pixels X = T.dmatrix() # Theano double matrix y = T.ivector() # Theano integer vector W = theano.shared(np.zeros([dims,n_classes])) # Theano shared double matrix b = theano.shared(np.zeros(n_classes)) # Theano shared double vector """ Explanation: The neural network model The neural network model that we will build defines a probability distribution $$P(Y = y \ |\ X = \boldsymbol{\textrm{x}} ; \theta),$$ where $Y$ represents the image class, which means it is a random variable that can take the values 0-9, $X$ represents the image pixels and is a vector-valued random variable (we collapse the image matrix into a vector), and $\theta$ is the set of model parameters that we are going to learn. In this tutorial we build two models: first implementing a logistic regression model, then extending it to a neural network model. Multi-class logistic regression The equation for our first model is give by $$P(Y = y \ |\ \boldsymbol{\textrm{x}} ; \theta) \propto \left[\sigma \left( \boldsymbol{\textrm{x}}^\boldsymbol{\textrm{T}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{b}}^\boldsymbol{\textrm{T}}\right)\right]_y,$$ where $[\boldsymbol{\textrm{x}}]_i$ is the $i$th entry of vector $\boldsymbol{\textrm{x}}$, and the use of the proportionality symbol $\propto$ means that the probability is equal to the expression on the right hand side times a constant chosen such that $\sum_y{P(Y = y \ |\ \boldsymbol{\textrm{x}} ; \theta)} = 1$ The parameter set $\theta$ for this model is $\theta = {\boldsymbol{\textrm{W}}, \boldsymbol{\textrm{b}}}$, where $\boldsymbol{\textrm{W}}$ is a matrix and $\boldsymbol{\textrm{b}}$ is a vector. 
We also use the non-linearity $\sigma$ given by $$\sigma(t) = \frac{1}{1+e^{-t}}$$ <img src=https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Logistic-curve.svg/200px-Logistic-curve.svg.png /> When applied to a vector or matrix, the sigmoid function $\sigma(t)$ is applied entrywise. Understanding the logistic regression model One way to think about the logistic regression model is that it takes the input ($\boldsymbol{\textrm{x}}$), puts it through a linear combination ($\boldsymbol{\textrm{x}}^\boldsymbol{\textrm{T}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{b}}^\boldsymbol{\textrm{T}}$) and then finally through a non-linearity: $\sigma(\boldsymbol{\textrm{x}}^\boldsymbol{\textrm{T}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{b}}^\boldsymbol{\textrm{T}})$. The result of this operation is a vector representing the entire discrete distribution over all the possible classes - in our case the ten possible digits 0-9. To get the probability of a particular class $y=6$ we extract the $6th$ entry of the probability vector: $\left[\sigma \left( \boldsymbol{\textrm{x}}^\boldsymbol{\textrm{T}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{b}}^\boldsymbol{\textrm{T}}\right)\right]_y$ Graphically the model looks like this: <img src='1-Logistic-Regression-Graph.png' /> Each indiviual entry of the vectors x and $P(Y)$ is shown as a circle -- known as the units (or artificial neurons) of the network. We have $D$ input units (the dimensionality of the input vector, which is the flattened matrix of pixels) and $C$ output units (the number of classes, which is the digits 0-9). The model parameters W and b are represented as arrows. We also show the application of the sigmoid functions, but we do not represent the normalization that makes the probabilities sum up to $1$. Another way to write the model above is using the $\textrm{SoftMax}$ function. 
A good exercise is deriving the $\textrm{SoftMax}$ function based on the fact that we can also write the same model using $\textrm{SoftMax}$ and the equal sign instead of the proportionality sign: $$P(Y = y \ |\ \boldsymbol{\textrm{x}} ; \theta) = \left[\textrm{SoftMax} \left( \boldsymbol{\textrm{W}} \boldsymbol{\textrm{x}} + \boldsymbol{\textrm{b}}\right)\right]_y,$$ Neural network with a single hidden layer Our second model is given by $$P(Y = y \ |\ \boldsymbol{\textrm{x}} ; \theta) = \left[ \textrm{SoftMax} \left( \boldsymbol{\textrm{h}} ^\boldsymbol{\textrm{T}} \boldsymbol{\textrm{W}}\boldsymbol{\textrm{hy}} + \boldsymbol{\textrm{b}}^\boldsymbol{\textrm{T}} \boldsymbol{\textrm{hy}}\right)\right]y, \ \boldsymbol{\textrm{h}} = \tanh \left( \boldsymbol{\textrm{x}} ^\boldsymbol{\textrm{T}} \boldsymbol{\textrm{W}}\boldsymbol{\textrm{xh}} + \boldsymbol{\textrm{b}}^\boldsymbol{\textrm{T}}_\boldsymbol{\textrm{xh}}\right)$$ This is very similar to the logisitc regression model. Here we have introduced a new vector-valued random variable $h$. We call this a 'hidden' variable or 'latent' variable, as we do not have any data observations for it. This variable may not even correspond to any quantity in the real world, but we use it to increase the power of our statistical model. We now also have more parameters: $\theta = {\boldsymbol{\textrm{W}}\boldsymbol{\textrm{xh}}, \boldsymbol{\textrm{W}}\boldsymbol{\textrm{hy}}, \boldsymbol{\textrm{b}}\boldsymbol{\textrm{xh}}, \boldsymbol{\textrm{b}}\boldsymbol{\textrm{hy}}}$. $\tanh$ is the hyperbolic tangent function given by $$\tanh(t) = \frac{e^t-e^{-t}}{e^t+e^{-t}}$$ <img src= http://www.ece.northwestern.edu/local-apps/matlabhelp/techdoc/ref/tanh.gif /> Like with the sigmoid, when applied to a vector or matrix, $\tanh$ function is applied entrywise. 
Graphically, this model looks like this: <img src='2-Neural-Network-Graph.png' /> Now our depiction shows a hidden layer with $M$ units (this number can be different from the number of input neurons and number of output neurons), and we have two different nonlinearities in the graph: $tanh$ and sigmoids (but again we are not graphically representing the SoftMax normalization). Teaching the network The way we're going to make our network learn is by trying to find some values for our parameters $\theta$ so that the network is as likely as possible to guess the correct class. That is, given a data set of training images $\boldsymbol{\textrm{x}}_1, \boldsymbol{\textrm{x}}_2, \dots, \boldsymbol{\textrm{x}}_N$ and correct labels $y_1, y_2, \dots, y_N$, we want to find the parameters that maximize probability of the correct labels given the images. This method of choosing parameters is called maximum likelihood (ML), and we can express it mathematically as finding the parameters $\theta$ which maximize the likelihood function: $$\theta^* = {\arg\max}_\theta \ P( Y_1 = y_1, Y_2 = y_2, \dots, Y_N = y_N \ | \ \boldsymbol{\textrm{X}}_1 = \boldsymbol{\textrm{x}}_1,\boldsymbol{\textrm{X}}_2 = \boldsymbol{\textrm{x}}_2, \dots, \boldsymbol{\textrm{X}}_N = \boldsymbol{\textrm{x}}_N ; \theta)$$ And since our data points are independent, we can write this joint probability as a product of probabilities: \begin{align} P( Y_1 = y_1,\dots, Y_N = y_N \ | \ \boldsymbol{\textrm{X}}1 = \boldsymbol{\textrm{x}}_1, \dots, \boldsymbol{\textrm{X}}_N = \boldsymbol{\textrm{x}}_N ; \theta) &= P( Y_1 = y_1 \ | \ \boldsymbol{\textrm{X}}_1 = \boldsymbol{\textrm{x}}_1 ; \theta) \times \dots \times P( Y_N = y_N \ | \ \boldsymbol{\textrm{X}}_N = \boldsymbol{\textrm{x}}_N ; \theta) \ &= \prod{i=1}^N P( Y_i = y_i \ | \ \boldsymbol{\textrm{X}}_i = \boldsymbol{\textrm{x}}_i) \end{align} Writing the likelihood function for an entire dataset In our likelihood function above, each random variable pair $(X_1, 
Y_1), (X_2, Y_2), \dots, (X_N, Y_N)$ refers to a single data point. But since virtually all of our computations need to deal with multiple data points, we will find it both useful and computationally efficient to express both the mathematics and our Python code in terms of datasets. Thus, we will express the scalar random variables $Y_1, Y_2, \dots, Y_N$ as the vector-valued random variable $Y$, and the vector-valued random variables $X_1, X_2, \dots, X_N$ as the matrix-valued random variable $X$, where the matrix $X$ has as many rows as there are data points. Using this notation, we rewrite the maximum likelihood equation above: $$\theta^* = {\arg\max}_\theta \ P( Y = \boldsymbol{\textrm{y}}\ | \ X = \boldsymbol{\textrm{X}} ; \theta)$$ Similarly, we can specify the logistic regression model it terms of multiple datapoints: $$P(Y \ |\ X = \boldsymbol{\textrm{X}} ; \theta) = \textrm{SoftMax} \left( \boldsymbol{\textrm{X}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{1b}}^\boldsymbol{\textrm{T}}\right)$$ Here the result is a matrix of probabilities with as many rows as there are data points, and as many columns as there are classes. We also consider the SoftMax to normalize the result of the linear combination $\left( \boldsymbol{\textrm{X}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{1b}}^\boldsymbol{\textrm{T}}\right)$ in such a way that each row of the result is a proper probability distribution summing to $1$. Note that we have had to multiply the bias vector $\boldsymbol{\textrm{b}}$ by the vertical vector of all ones $\boldsymbol{\textrm{1}}$ in order to add the bias term for every single data point. The neural network model equations follow a similar pattern, which it would be a good exercise to write out for yourself. Computing the log-likelihood of a dataset In most machine learning applications, it is better to maximize the log-likelihood rather than the likelihood. 
This is done because the log-likelihood tends to be simpler to compute and more numerically stable than the likelihood. In terms of the math, this doesn't make things much more complicated, as all we need to add is a $\log$ in front of the likelihood: $$\theta^* = {\arg\max}_\theta \ \log P( Y = \boldsymbol{\textrm{y}}\ | \ X = \boldsymbol{\textrm{X}} ; \theta)$$ Since the logarithm of a product is the sum of the logarithms, we can write: $$\log P( Y = \boldsymbol{\textrm{y}}\ | \ X = \boldsymbol{\textrm{X}} ; \theta) = \sum_{i=1}^N \log P( Y_i = \boldsymbol{\textrm{y}}_i\ | \ X_i = \boldsymbol{\textrm{X}}_i ; \theta)$$ that is, the log joint probability is the sum of the log marginal probabilities. Now let's plug in the probability of a dataset from above to obtain: $$\log P( Y = \boldsymbol{\textrm{y}}\ | \ X = \boldsymbol{\textrm{X}} ; \theta) = \sum \left[ \log\left( \textrm{SoftMax} \left( \boldsymbol{\textrm{X}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{1b}}^\boldsymbol{\textrm{T}}\right)\right)\right]_{\cdot,\boldsymbol{\textrm{y}}}$$ where we use the notation $[\boldsymbol{\textrm{M}}]_{\cdot,\boldsymbol{\textrm{y}}}$ to mean that from the matrix $\boldsymbol{\textrm{M}}$ we construct a new vector $(a_1, a_2, \dots, a_n)$ such that $a_i = M_{i, y_i} \ \forall i$. We use a slight abuse of notation and we use the sum symbol to indicate the summation of the entries in the vector; we need a summation because the log joint probability is the sum of the log marginal probabilities. Classifying a dataset Once we have a set of parameters $\theta$ that we are happy with, we can use the model to classify new data. We have built a model that gives us a distribution over classes given the data. How do we assign a class to a new data point? The simplest way is to choose the class with the highest probability under the model to be the class we assign.
We can write this mathematically, again using vector notation: $$\hat{\boldsymbol{\textrm{y}}} = {\arg\max} \ P(Y \ |\ \boldsymbol{\textrm{X}} ; \theta) $$ Gradient ascent Computing the maximum likelihood parameters is computationally unfeasible, so we're going to use a method called gradient ascent to find a set of parameters that are really good but perhaps not the absolute best. The idea of gradient ascent is very simple. Given a function $f:\mathbb{R}^n \rightarrow \mathbb{R}$, we want to iteratively find points $\boldsymbol{\textrm{x}}_n$ such that the value of the function $f(\boldsymbol{\textrm{x}}_n)$ gets progressively higher, that is: $f(\boldsymbol{\textrm{x}}_{n+1}) > f(\boldsymbol{\textrm{x}}_n) \ \forall n$. One way to do this is taking the direction of steepest ascent, which is just the gradient of the function $\frac{\partial f}{\partial \boldsymbol{\textrm{x}}}$, and taking a step in that direction times a constant $\lambda$ known as the learning rate that describes how big the step should be. We express this mathematically as: $$ \boldsymbol{\textrm{x}}_{n+1} \leftarrow \boldsymbol{\textrm{x}}_n + \lambda \frac{\partial f}{\partial \boldsymbol{\textrm{x}}} $$ The last detail is choosing the starting point, $\boldsymbol{\textrm{x}}_0$, which we can arbitrarily choose by setting to zero or to some random value. Graphically, the algorithm looks like this, with each color representing the path from a different starting point: <img src='3-Gradient-Ascent-Sktech.png' /> Note that this method tends to find the top of the nearest hill ('local' maximum), and not the overall best point ('global' maximum). It is also not guaranteed to increase the value of the function at each step; if the learning rate is too large, the algorithm could potentially jump across the top of the hill to a lower point on the other side of the hill.
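The update rule can be watched converging on a toy problem. This hedged one-dimensional sketch maximizes the concave function f(x) = -(x-3)^2, whose gradient is -2(x-3); the starting point and learning rate are arbitrary choices:

```python
def grad_f(x):
    """Gradient of f(x) = -(x - 3)**2."""
    return -2.0 * (x - 3.0)

x = 0.0          # arbitrary starting point
lam = 0.1        # learning rate
for _ in range(100):
    x = x + lam * grad_f(x)   # gradient *ascent* step

print(x)  # approaches the maximum at x = 3
```

With this concave f any starting point converges to the single maximum; the local-maximum caveat above only bites when the function has multiple hills.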
Much research goes into optimization methods, and many neural network models are trained with methods that are more complicated than gradient ascent as it's presented here, but this same idea is at the base of all of those methods.
Finally, most people talk about and use gradient descent on the negative log-likelihood rather than gradient ascent on the log-likelihood; this is because gradient descent is the standard algorithm in the field of optimization. Gradient ascent is used in this tutorial to keep things a bit simpler.
Optimizing our model with gradient ascent
This is extremely easy: we just apply the equation above to our parameters, taking the gradient of the log-likelihood:
\begin{align}
\boldsymbol{\textrm{W}} &\leftarrow \boldsymbol{\textrm{W}} + \lambda \frac{\partial \log P( Y = \boldsymbol{\textrm{y}}\ | \ X = \boldsymbol{\textrm{X}} ; \theta)}{\partial \boldsymbol{\textrm{W}}} \\
\boldsymbol{\textrm{b}} &\leftarrow \boldsymbol{\textrm{b}} + \lambda \frac{\partial \log P( Y = \boldsymbol{\textrm{y}}\ | \ X = \boldsymbol{\textrm{X}} ; \theta)}{\partial \boldsymbol{\textrm{b}}} \\
\end{align}
Coding with Theano
Coding with Theano is extremely simple: once we have the equations behind the model, we pretty much type them directly. Since we will be training on multiple data points, we are going to encode the model as we wrote it for datasets, using vectors and matrices. We'll start with the logistic regression.
Our model is: $$P(Y \ |\ \boldsymbol{\textrm{X}} ; \theta) = \textrm{SoftMax} \left( \boldsymbol{\textrm{X}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{1b}}^\boldsymbol{\textrm{T}}\right)$$ The first thing we do is declare all of the variables ($\boldsymbol{\textrm{X}}, \boldsymbol{\textrm{y}}, \boldsymbol{\textrm{W}}, \boldsymbol{\textrm{b}}$) that we will be using and their types like you would in Java or C: End of explanation """ P = T.nnet.softmax(T.dot(X,W) + b) # the matrix of probabilities of all classes for all data points """ Explanation: As you can see above, we defined $W$ and $b$ to be shared variables. This means that the values of these variables are persistent -- their values live on after we have run Theano operations. This is opposed to regular Theano variables, which only take values when Theano runs, and otherwise only exist in the abstract. The reason for making $W$ and $b$ shared variables is that we want to run multiple iterations of gradient descent, and to do that, we need their values to persist. Furthermore, we want to find good parameters through training, but we will then want to use the same parameters for prediction, so we need them to be persistent Let's now write our statistical model in Theano. We basically copy the following equation into code: $$P(Y \ |\ \boldsymbol{\textrm{X}} ; \theta) = \textrm{SoftMax} \left( \boldsymbol{\textrm{X}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{1b}}^\boldsymbol{\textrm{T}}\right)$$ End of explanation """ amatrix = np.zeros((3,2)) # 3x2 matrix of all zeros avector = np.array((1,2)) # the vector [1,2] amatrix + avector """ Explanation: Theano provides us with a T.nnet.softmax function to compute SoftMax, correctly normalizing the probabilities so that each row of the matrix P is a proper probability distribution that sums to $1$. Note that we didn't need to multiply b by the 1 vector, Theano will do the correct addition for us automatically, just like numpy would do it. 
Here is a simple numpy example illustrating this: End of explanation """ LL = T.mean(T.log(P)[T.arange(P.shape[0]), y]) # the log-likelihood (LL) """ Explanation: Our next equation is the log-likelihood (LL) of a dataset: $$\log P( Y = \boldsymbol{\textrm{y}}\ | \ X = \boldsymbol{\textrm{X}} ; \theta) = \sum \left[ \log\left( \textrm{SoftMax} \left( \boldsymbol{\textrm{X}} \boldsymbol{\textrm{W}} + \boldsymbol{\textrm{1b}}^\boldsymbol{\textrm{T}}\right)\right)\right]_{\cdot,\boldsymbol{\textrm{y}}}$$ End of explanation """ M = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) M[np.arange(M.shape[0]), (0,3,3)] """ Explanation: OK... there's a lot going on here. We have used two important tricks: - using mean instead of sum, - using the strange indexing [T.arange(P.shape[0]), y] Let's go over them one by one. Use the mean instead of the sum Imagine that we were to construct a new dataset that contains each data point in the original dataset twice. Then the log-likelihood of the new dataset will be double that of the original. More importantly, the gradient of the log-likelihood of the new dataset will also be double the gradient of the log-likelihood of the original dataset. But we would like the size of the gradient to not depend on the amount of duplication in our dataset, and the easiest way to accomplish that is to divide the gradient by the size of the dataset. Since taking the mean is equivalent to taking the sum and then dividing by the number of data points, what we are computing here is a type of "normalized" log-likelihood that will cause our gradient descent algorithm to be robust to change in dataset size. The quantity we are computing in the code can be more precisely described as the average log-likelihood for a single datapoint. 
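Both tricks can also be checked in plain numpy, outside of Theano. The snippet below uses a made-up 3-point, 2-class score matrix: it computes the per-point log-probabilities of the correct classes with the same indexing pattern, and verifies that duplicating the dataset doubles the summed log-likelihood while leaving the mean unchanged:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract the row max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

# made-up scores (the X.dot(W) + b part) for 3 data points and 2 classes
scores = np.array([[2.0, 0.5],
                   [0.1, 1.0],
                   [1.5, 1.5]])
y = np.array([0, 1, 0])                      # correct class of each data point

P = softmax(scores)
log_p = np.log(P)[np.arange(P.shape[0]), y]  # pick the correct-class entry from each row
mean_LL = log_p.mean()

# duplicating every data point doubles the sum but leaves the mean unchanged
P_dup = softmax(np.vstack([scores, scores]))
y_dup = np.concatenate([y, y])
log_p_dup = np.log(P_dup)[np.arange(P_dup.shape[0]), y_dup]

print(np.isclose(log_p_dup.mean(), mean_LL))         # True
print(np.isclose(log_p_dup.sum(), 2 * log_p.sum()))  # True
```

This is exactly the robustness-to-dataset-size property that motivates using the mean.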
Use indexing to create a vector from a matrix
The second thing we do is use [T.arange(P.shape[0]), y] to apply the mathematical operation denoted in the equation by $\left[ \cdot\right]_{\cdot,\boldsymbol{\textrm{y}}}$, that is, constructing a new vector $(a_1, a_2, \dots, a_n)$ from a matrix $\boldsymbol{\textrm{M}}$ such that $a_i = M_{i, y_i} \ \forall i$.
As cryptic as it may look, this is a peculiar but standard numpy way to index. For example, given a matrix
<pre>
M = np.array([[1,2,3,4],
              [5,6,7,8],
              [9,10,11,12]])
</pre>
If we wanted to extract one element from each row, say the 1st element from the first row, and the last from the others:
<pre>M[0,0], M[1,3], M[2,3]</pre>
We could write that as a single index expression by combining the indexes
<pre>M[(0,1,2), (0,3,3)]</pre>
But now the first index is just $(0 \dots \#\textrm{rows}-1)$, or, in code, np.arange(M.shape[0]). So we can write:
End of explanation
"""
M = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12]])
M[np.arange(M.shape[0]), (0,3,3)]
"""
Explanation: We're done with the model.
There's one more thing we need to do, and that is specify the gradient updates:
\begin{align}
\boldsymbol{\textrm{W}} &\leftarrow \boldsymbol{\textrm{W}} + \lambda \frac{\partial \log P( Y = \boldsymbol{\textrm{y}}\ | \ X = \boldsymbol{\textrm{X}} ; \theta)}{\partial \boldsymbol{\textrm{W}}} \\
\boldsymbol{\textrm{b}} &\leftarrow \boldsymbol{\textrm{b}} + \lambda \frac{\partial \log P( Y = \boldsymbol{\textrm{y}}\ | \ X = \boldsymbol{\textrm{X}} ; \theta)}{\partial \boldsymbol{\textrm{b}}} \\
\end{align}
End of explanation
"""
learning_rate = 0.5 # we tuned this parameter by hand

updates = [ [W, W + learning_rate * T.grad(LL, W)],
            [b, b + learning_rate * T.grad(LL, b)] ]
"""
Explanation: Theano functions
OK, we're done coding the model. What do we do next?
When working with Theano, the next step is to create a Theano function. A Theano function is the basic unit of Theano code that we call to do something. In our case, this something will be performing a single gradient ascent iteration.
We create a Theano function by calling the function theano.function (yes, we create a function by calling a function).
theano.function has four important parameters that we provide in order to get a Theano function:

inputs -- the list of input variables of the Theano function, similar to the inputs of a Python function
outputs -- the list of output variables of the Theano function, similar to the outputs of a Python function
updates -- a list of updates for shared variables, in the format we used above when we defined the variable updates
givens -- a dictionary that allows substituting some variables from the model with other variables

In our case, we want the input to be the training dataset, the updates to be the gradient ascent updates, and while we don't really need an output, it will be helpful to get the log-likelihood as an output to see that we are doing the right things. However, we will use the givens instead of the inputs to provide the data to the function. Doing it this way is more efficient, as we've already loaded up the training dataset into memory as a shared Theano variable, when we first loaded the data. Our Theano function will look like this:
End of explanation
"""
training_function = theano.function(
    inputs = [], # use givens instead of the inputs as it's more efficient
    outputs = LL, # output log-likelihood just to check that it is improving
    updates = updates, # these are the gradient updates, the one part that's really important
    givens = {X: training_x, # we indicate that the model variables X and y defined in the abstract
              y: training_y} # should take the values in the shared variables training_x and training_y
)
"""
Explanation: OK, let's run ten iterations of our code and see what the log-likelihood does
End of explanation
"""
for i in range(10):
    current_LL = training_function()
    print("Log-likelihood = " + str(current_LL) + "\t\t" +
          "Average probability of the correct class = " + str(np.exp(current_LL)) )
"""
Explanation: Using the model for classification
Great, it appears we're improving the log-likelihood of the model on the training data. Now to use it to classify.
Recall our equation that expresses how to use the model to get the class:
$$\hat{\boldsymbol{\textrm{y}}} = {\arg\max} \ P(Y \ |\ \boldsymbol{\textrm{X}} ; \theta) $$
Let's put that in code:
End of explanation
"""
y_hat = T.argmax(P, axis=1)
"""
Explanation: Note that we had to specify axis=1, that is, we want to get the argmax for each row, as each row represents the distribution for one datapoint.
Create a Theano function that classifies a dataset
Similarly to the training function, the classification function will use givens to pass in the test dataset, and output y_hat, which we just defined
End of explanation
"""
classification_function = theano.function(
    inputs = [],
    outputs = y_hat,
    givens = {X: test_x} # don't need the true labels test_y here
)
"""
Explanation: Now let's run the classification once, and print the first ten images, the true labels, and the labels assigned by the model
End of explanation
"""
test_y_hat = classification_function()
print ("Classification error: "+ str(100 * (1 - np.mean(test_y_hat == test_y.get_value()))) + "%")

for i in range(10):
    plot_mnist_digit(
        test_x.get_value()[i] # test_x is a theano object of images
    )
    print ('Image class: \t\t' + str(test_y.get_value()[i])) # test_y is a theano object of *true labels*
    print ('Model-assigned class: \t' + str(test_y_hat[i])) # test_y_hat is a theano object of *predicted labels*
"""
Explanation: Training a neural network
So far we have trained a logistic regression model. The neural network model is so similar that we can implement it with just a few changes to the code.
We need
- to declare the hidden layer variable
- to decide on the size of the hidden layer (we'll keep it small so it will run on your personal computer)
- new parameters
End of explanation
"""
H = T.dmatrix() # Theano double matrix

hidden_layer_size = 20
W_xh = theano.shared(0.01 * np.random.randn(dims, hidden_layer_size))
W_hy = theano.shared(np.zeros([hidden_layer_size, n_classes]))
b_xh = theano.shared(np.zeros(hidden_layer_size))
b_hy = theano.shared(np.zeros(n_classes))
"""
Explanation: Remember our model? Let's write it using matrices and vectors for an entire dataset:
$$\boldsymbol{\textrm{H}} = \tanh \left( \boldsymbol{\textrm{X}} \boldsymbol{\textrm{W}}_{\textrm{xh}} + \boldsymbol{\textrm{1b}}^{\textrm{T}}_{\textrm{xh}}\right), \\ P(Y \ |\ \boldsymbol{\textrm{X}} ; \theta) = \textrm{SoftMax} \left( \boldsymbol{\textrm{H}} \boldsymbol{\textrm{W}}_{\textrm{hy}} + \boldsymbol{\textrm{1b}}^{\textrm{T}}_{\textrm{hy}}\right)$$
Let's code it up:
End of explanation
"""
H = T.tanh( T.dot(X, W_xh) + b_xh )
P = T.nnet.softmax( T.dot(H, W_hy) + b_hy )
"""
Explanation: Now let's add the gradient updates:
End of explanation
"""
LL = T.mean(T.log(P)[T.arange(P.shape[0]), y]) # the log-likelihood (LL)

updates = [ [W_xh, W_xh + learning_rate * T.grad(LL, W_xh)],
            [W_hy, W_hy + learning_rate * T.grad(LL, W_hy)],
            [b_xh, b_xh + learning_rate * T.grad(LL, b_xh)],
            [b_hy, b_hy + learning_rate * T.grad(LL, b_hy)],
          ]
"""
Explanation: Redefining the log-likelihood
The one extremely important thing we did here was redefining LL. This is a crucial point about how Theano works:
Whenever we define a Theano variable, like we did with P, we create a new object.
When we define a new Theano variable in terms of another Theano variable, like we did with LL, using
<pre>
LL = T.mean(T.log(P)[T.arange(P.shape[0]), y])
</pre>
we implicitly create a new object for LL that has a reference to our variable P we just defined. Now the crucial part: say we redefine P. Then our variable LL still has a reference to the old variable P, and we need to update the reference by re-running the definition for LL for everything to work correctly. Bugs in Theano are very commonly produced by exactly this. It is a good reason to always use Theano in scripts rather than in a notebook like we are here.
Phew, we are now ready to train!
End of explanation
"""
training_function = theano.function(
    inputs = [],
    outputs = LL,
    updates = updates,
    givens = {X: training_x[:5000], # use only 10% of the data so the model is not too complicated
              y: training_y[:5000]} # to train on a personal computer
)

for i in range(60): # train more than for logistic regression as this model is more complex
    current_LL = training_function()
    print( "Log-likelihood = " + str(current_LL) + "\t\t" +
           "Average probability of the correct class = " + str(np.exp(current_LL)) )
"""
Explanation: Test the model
End of explanation
"""
y_hat = T.argmax(P, axis=1)

# re-create the classification function: the old one still references the old y_hat
classification_function = theano.function(
    inputs = [],
    outputs = y_hat,
    givens = {X: test_x}
)

test_y_hat = classification_function()
print ("Classification error: " + str(100 * (1 - np.mean(test_y_hat == test_y.get_value()))) + "%")

for i in range(10):
    plot_mnist_digit(
        test_x.get_value()[i] # test_x is a theano object of *images*
    )
    print ('Image class: \t\t' + str(test_y.get_value()[i])) # test_y is a theano object of *true labels*
    print ('Model-assigned class: \t' + str(test_y_hat[i])) # test_y_hat is a theano object of *predicted labels*
"""
Explanation: ...
End of explanation
"""
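As a closing sanity check on the math itself, independent of Theano, the same softmax-regression training loop can be mirrored in pure numpy on a tiny synthetic problem. The data, learning rate, and iteration count below are invented for the illustration; the gradient used is the standard closed form for the mean log-likelihood of a softmax model, which is what T.grad computes symbolically above:

```python
import numpy as np

rng = np.random.RandomState(0)

# two well-separated 2-D Gaussian blobs, labelled 0 and 1 (synthetic data)
Xd = np.vstack([rng.randn(50, 2) + 2.0, rng.randn(50, 2) - 2.0])
yd = np.array([0] * 50 + [1] * 50)

W = np.zeros((2, 2))
b = np.zeros(2)
learning_rate = 0.5

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_LL(W, b):
    P = softmax(Xd.dot(W) + b)
    return np.log(P)[np.arange(len(yd)), yd].mean()

ll_before = mean_LL(W, b)
for _ in range(100):
    P = softmax(Xd.dot(W) + b)
    onehot = np.zeros_like(P)
    onehot[np.arange(len(yd)), yd] = 1.0
    G = onehot - P                                # gradient of the summed LL w.r.t. the scores
    W += learning_rate * Xd.T.dot(G) / len(yd)    # mean gradient, as discussed above
    b += learning_rate * G.mean(axis=0)
ll_after = mean_LL(W, b)

y_hat = softmax(Xd.dot(W) + b).argmax(axis=1)
print(ll_after > ll_before)          # True: training improved the likelihood
print((y_hat == yd).mean())          # training accuracy, close to 1.0
```

The same three-step structure (model, mean log-likelihood, gradient ascent updates) is what the Theano code above builds symbolically.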
spulido99/Programacion
Alex/Taller1_term.ipynb
mit
import platform
platform.python_version()
"""
Explanation: Taller 1: Python Basics
Functions
Lists
Dictionaries
This workshop is for solving basic Python problems: handling lists, dictionaries, etc. The workshop must be done in a Jupyter Notebook inside each student's folder. There must be commits showing the progress of the workshop. Below each question there is a cell for the code.
Python Basics
1. Which version of Python is running?
End of explanation
"""
r=5
area=3.14*r**2
print (area)
"""
Explanation: 2. Compute the area of a circle of radius 5
End of explanation
"""
color_list_1 = set(["White", "Black", "Red"])
color_list_2 = set(["Red", "Green"])
print (color_list_1 - color_list_2)
"""
Explanation: 3. Write code that prints all the colors that are in color_list_1 and are not present in color_list_2
Expected output: {'Black', 'White'}
End of explanation
"""
import os
s=os.getcwd()
lista= s.split(os.sep) # split on the platform's separator instead of a hard-coded '\\'
print ('Running in: '+ s )
for carpeta in lista[1:]: # one line per folder, whatever the path depth
    print ('+ ' + carpeta)
"""
Explanation: 4. Print one line for each folder that makes up the path where Python is running, e.g. C:/User/sergio/code/programación
Expected output:
+ User
+ sergio
+ code
+ programacion
End of explanation
"""
my_list = [5,7,8,9,17]
total=0 # avoid shadowing the built-in sum()
for i in my_list:
    total=total+i
print (total)
"""
Explanation: List handling
5. Print the sum of the numbers in my_list
End of explanation
"""
elemento_a_insertar = 'E'
my_list = [1, 2, 3, 4]
"""
Explanation: 6. Insert elemento_a_insertar before each element of my_list
End of explanation
"""
elemento_a_insertar = 'E'
my_list = [1, 2, 3, 4]
index=-2
while index<6:
    index=index+2
    my_list.insert(index, elemento_a_insertar)
print (my_list)
"""
Explanation: The expected output is a list like this:
[E, 1, E, 2, E, 3, E, 4]
End of explanation
"""
N = 3
my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']
"""
Explanation: 7.
Split my_list into N sub-lists, distributing the elements in round-robin order
End of explanation
"""
new_list = [[] for _ in range(N)]
for i, item in enumerate(my_list):
    new_list[i % N].append(item)
print (new_list)
"""
Explanation: Expected output:
[['a', 'd', 'g', 'j', 'm'], ['b', 'e', 'h', 'k', 'n'], ['c', 'f', 'i', 'l']]
End of explanation
"""
list_of_lists = [
    [1,2,3],
    [4,5,6],
    [10,11,12],
    [7,8,9]
]
"""
Explanation: 8. Find the list inside list_of_lists whose elements have the largest sum
End of explanation
"""
print (max(list_of_lists, key=sum)) # compare by sum, not lexicographically
"""
Explanation: Expected output:
[10, 11, 12]
End of explanation
"""
N = 5
"""
Explanation: Dictionary handling
9. Create a dictionary that, for each number from 1 to N used as key, has that number squared as value
End of explanation
"""
d = {}
N=5
i=0
while i < N:
    i=i+1
    d[i]=i**2
print (d)
"""
Explanation: Expected output:
{1:1, 2:4, 3:9, 4:16, 5:25}
End of explanation
"""
dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]
"""
Explanation: 10. Concatenate the dictionaries in dictionary_list to create a new one
End of explanation
"""
dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]
new_dic={}
for i in range(len(dictionary_list)):
    new_dic.update(dictionary_list[i])
print (new_dic)
"""
Explanation: Expected output:
{1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6: 60}
End of explanation
"""
dictionary_list=[{'numero': 10, 'cantidad': 5} , {'numero': 12, 'cantidad': 3}, {'numero': 5, 'cantidad': 45}]
"""
Explanation: 11.
Add a new key "cuadrado" whose value is the "numero" of each dictionary raised to the square
End of explanation
"""
dictionary_list=[{'numero': 10, 'cantidad': 5} , {'numero': 12, 'cantidad': 3}, {'numero': 5, 'cantidad': 45}]
for i in range(len(dictionary_list)):
    a= dictionary_list[i]['numero']
    sq=a**2
    dictionary_list[i]['cuadrado']= sq
print (dictionary_list)
"""
Explanation: Expected output:
[{'numero': 10, 'cantidad': 5, 'cuadrado': 100}, {'numero': 12, 'cantidad': 3, 'cuadrado': 144}, {'numero': 5, 'cantidad': 45, 'cuadrado': 25}]
End of explanation
"""
def inter(list1,list2):
    print (list1 - list2)

list1=set([1,2,3])
list2=set([1,4,5])
inter(list1,list2)
"""
Explanation: Function handling
12. Define and call a function that receives 2 parameters and solves problem 3
End of explanation
"""
def max_ll(list_of_lists):
    print (max(list_of_lists, key=sum)) # compare by sum, as in problem 8

list_of_lists = [
    [1,212,3321],
    [4,25,6123],
    [1,11,1122],
    [7,85,9]
]
max_ll(list_of_lists)
"""
Explanation: 13. Define and call a function that receives a list of lists as parameter and solves problem 8
End of explanation
"""
def lis_sqr(N):
    d = {}
    i=0
    while i < N:
        i=i+1
        d[i]=i**2 # square the key, as in problem 9
    print (d)

lis_sqr(4)
"""
Explanation: 14. Define and call a function that receives a parameter N and solves problem 9
End of explanation
"""
jdhp-docs/python_notebooks
nb_sci_maths/maths_mandelbrot_set_fr.ipynb
mit
%matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (10, 10) """ Explanation: L'ensemble de Mandelbrot TODO * dans la definition, ajouter le developpement sur une dizaine d'itérations de 2 ou 3 points comme exemple illustratif du calcul (ecrire z_i ou |z_i| ou les 2 ?) * dans la definition, ajouter une representation graphique (code source caché) pour un niveau d'itération donné (ex. 50) pour avoir un exemple binaire, plus coherent avec la definition: soit un point est dans l'ensemble, soit il ne l'est pas * tweet "Faire des maths (et du Python) en s'amusant: l'ensemble de Mandelbrot" * à la fin du document, ajouter une section exploration ou on incite le lecteur a explorer en zoomant sur la representation graphique, en donnant des exemples illustrés et en rappelant la propriete autoreplicative a toute echelle des fractales (ne pas oublier d'introduire en rappelant que l'ens de Mandelbrot est une fractale...) End of explanation """ import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import axes3d from matplotlib import cm EPSILON_MAX = 2. 
NUM_IT_MAX = 64 Z_INIT = complex(0, 0) def mandelbrot_version1(x, y): it = 0 z = Z_INIT c = complex(x, y) # Rem: abs(z) = |z| = math.sqrt(pow(z.imag,2) + pow(z.real,2)) while it < NUM_IT_MAX and abs(z) <= EPSILON_MAX: z = z**2 + c it += 1 return 1 if it == NUM_IT_MAX else 0 REAL_RANGE = np.linspace(-2.0, 1.0, 800).tolist() IMAG_RANGE = np.linspace(-1.2, 1.2, 800).tolist() # Définie un ensemble de points c et vérifie leur appartenance à l'ensemble de Mandelbrot xgrid, ygrid = np.meshgrid(REAL_RANGE, IMAG_RANGE) data = np.array([mandelbrot_version1(x, y) for y in IMAG_RANGE for x in REAL_RANGE]).reshape(len(IMAG_RANGE), len(REAL_RANGE)) # Génère l'image # (cmap alternatifs: summer, magma, gist_gray, gist_yarg, gist_heat, Blues, coolwarm, copper) fig, ax = plt.subplots() ax.imshow(data, extent=[xgrid.min(), xgrid.max(), ygrid.min(), ygrid.max()], interpolation="none", cmap=cm.gray_r) ax.set_axis_off() # Ajoute un titre à l'image et nome les axes ax.set_title("Ensemble de Mandelbrot") plt.show() """ Explanation: Définition Soit la suite ${z_i}$ de nombres complexes définie par $$ z_{i+1} = z^2_i + c $$ avec $z_0 = 0$ et avec $c \in \mathbb C$ une constante fixée. L'ensemble de Mandelbrot est l'ensemble de tous les nombres $c$ pour lesquels cette suite converge ; la suite tend vers l'infini pour les nombres $c$ n'appartenant pas à l'ensemble de Mandelbrot (i.e. $\lim_{i \to +\infty}{|z_i|} = +\infty$ où $|z_i|$ est le module de $z_i$). Ci-dessous, l'ensemble de Mandelbrot est représenté graphiquement dans le plan complexe. Référence: Toutes les mathématiques et les bases de l'informatique, H. Stöcker, Dunod, p.696 End of explanation """ import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import axes3d from matplotlib import cm """ Explanation: Une implémentation Python Note: ce script Python peut également être téléchargé ici. Commençons par importer les paquets requis : End of explanation """ EPSILON_MAX = 2. 
NUM_IT_MAX = 32 Z_INIT = complex(0, 0) def mandelbrot_version1(x, y): it = 0 z = Z_INIT c = complex(x, y) # Rem: abs(z) = |z| = math.sqrt(pow(z.imag,2) + pow(z.real,2)) while it < NUM_IT_MAX and abs(z) <= EPSILON_MAX: z = z**2 + c it += 1 return 1 if it == NUM_IT_MAX else 0 def mandelbrot_version2(x, y): it = 0 z = Z_INIT c = complex(x, y) # Rem: abs(z) = |z| = math.sqrt(pow(z.imag,2) + pow(z.real,2)) while it < NUM_IT_MAX and abs(z) <= EPSILON_MAX: z = z**2 + c it += 1 return it """ Explanation: Puis définissons l'ensemble de Mandelbrot par itérations successives : End of explanation """ REAL_RANGE = np.linspace(-2.0, 1.0, 800).tolist() IMAG_RANGE = np.linspace(-1.2, 1.2, 800).tolist() # Définie un ensemble de points c et vérifie leur appartenance à l'ensemble de Mandelbrot xgrid, ygrid = np.meshgrid(REAL_RANGE, IMAG_RANGE) data = np.array([mandelbrot_version2(x, y) for y in IMAG_RANGE for x in REAL_RANGE]).reshape(len(IMAG_RANGE), len(REAL_RANGE)) # Génère l'image fig, ax = plt.subplots() ax.imshow(data, extent=[xgrid.min(), xgrid.max(), ygrid.min(), ygrid.max()], interpolation="bicubic", cmap=cm.Blues) # Ajoute un titre à l'image et nome les axes ax.set_title("Ensemble de Mandelbrot") ax.set_xlabel("Re(c)") ax.set_ylabel("Im(c)") plt.show() """ Explanation: mandelbrot_version1 définie l'ensemble de Mandelbrot ; mandelbrot_version2 est une fonction alternative qui permet de voir à quelle vitesse la suite diverge (la fonction retroune une valeur d'autant plus petite que le nombre complexe $c = x + yi$ fait diverger la suite rapidement). 
Nous pouvons maintenant représenter graphiquement l'ensemble de Mandelbrot dans le plan complexe (plus la suite diverge vite plus le point image du nombre complexe $c=x+yi$ est claire) : End of explanation """ REAL_RANGE = np.arange(-2.0, 1.0, 0.05).tolist() IMAG_RANGE = np.arange(-1.2, 1.2, 0.05).tolist() # Définie un ensemble de points c et vérifie leur appartenance à l'ensemble de Mandelbrot xgrid, ygrid = np.meshgrid(REAL_RANGE, IMAG_RANGE) data = np.array([mandelbrot_version2(x, y) for y in IMAG_RANGE for x in REAL_RANGE]).reshape(len(IMAG_RANGE), len(REAL_RANGE)) # Génère la figure fig = plt.figure() ax = axes3d.Axes3D(fig) ax.plot_surface(xgrid, ygrid, data, cmap=cm.jet, rstride=1, cstride=1, color='b', shade=True) # Ajoute un titre à l'image et nome les axes plt.title("Ensemble de Mandelbrot") ax.set_xlabel("Re(c)") ax.set_ylabel("Im(c)") ax.set_zlabel("Itérations") plt.show() """ Explanation: Nous pouvons aussi représenter cet ensemble en 3 dimensions pour mieux mettre en évidence l'aspect itératif du processus de construction de l'ensemble de Mandelbrot. End of explanation """
Gezort/YSDA_deeplearning17
Seminar2/Homework2.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import random from IPython import display from sklearn import datasets, preprocessing (X, y) = datasets.make_circles(n_samples=1024, shuffle=True, noise=0.2, factor=0.4) ind = np.logical_or(y==1, X[:,1] > X[:,0] - 0.5) X = X[ind,:] X = preprocessing.scale(X) y = y[ind] y = 2*y - 1 plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired) plt.show() h = 0.01 x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) def visualize(X, y, w, loss, n_iter): plt.clf() Z = classify(np.c_[xx.ravel(), yy.ravel()], w) Z = Z.reshape(xx.shape) plt.subplot(1,2,1) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.subplot(1,2,2) plt.plot(loss) plt.grid() ymin, ymax = plt.ylim() plt.ylim(0, ymax) display.clear_output(wait=True) display.display(plt.gcf()) """ Explanation: Homework 2 (Linear models, Optimization) In this homework you will implement a simple linear classifier using numpy and your brain. Two-dimensional classification End of explanation """ def expand(X): X_ = np.zeros((X.shape[0], 6)) X_[:,0:2] = X X_[:,2:4] = X**2 X_[:,4] = X[:,0] * X[:,1] X_[:,5] = 1 return X_ def classify(X, w): """ Given feature matrix X [n_samples,2] and weight vector w [6], return an array of +1 or -1 predictions """ X = expand(X) y = X.dot(w) return np.sign(y) """ Explanation: Your task starts here First, let's write a function that predicts class for given X. Since the problem above isn't linearly separable, we add quadratic features to the classifier. This transformation is implemented in the expand function. Don't forget to expand X inside classify and other functions Sample classification should not be much harder than computation of sign of dot product. 
End of explanation """ def compute_loss(X, y, w): """ Given feature matrix X [n_samples,2], target vector [n_samples] of +1/-1, and weight vector w [6], compute scalar loss function using formula above. """ X = expand(X) y_pred = X.dot(w) res = np.maximum(0, 1 - y_pred * y) return res.mean() def compute_grad(X, y, w): """ Given feature matrix X [n_samples,2], target vector [n_samples] of +1/-1, and weight vector w [6], compute vector [6] of derivatives of L over each weights. """ X = expand(X) y_pred = X.dot(w) * y y_pred = np.int0(y_pred < 1) grad = -(X.T * y).T grad = (grad.T * y_pred).T return grad.sum(axis=0) """ Explanation: The loss you should try to minimize is the Hinge Loss: $$ L = {1 \over N} \sum_{i=1}^N max(0,1-y_i \cdot w^T x_i) $$ End of explanation """ w = np.array([1,0,0,0,0,0]) alpha = 0.1 # learning rate n_iter = 50 batch_size = 4 loss = np.zeros(n_iter) plt.figure(figsize=(12,5)) for i in range(n_iter): ind = random.sample(range(X.shape[0]), batch_size) loss[i] = compute_loss(X, y, w) visualize(X[ind,:], y[ind], w, loss, n_iter) w = w - alpha * compute_grad(X[ind,:], y[ind], w) visualize(X, y, w, loss, n_iter) plt.clf() """ Explanation: Training Find an optimal learning rate for gradient descent for given batch size. You can see the example of correct output below this cell before you run it. Don't change the batch size! End of explanation """ w = np.array([1,0,0,0,0,0]) v = np.zeros(6) alpha = 0.02 # learning rate mu = 0.8 # momentum n_iter = 50 batch_size = 4 loss = np.zeros(n_iter) plt.figure(figsize=(12,5)) for i in range(n_iter): ind = random.sample(range(X.shape[0]), batch_size) loss[i] = compute_loss(X, y, w) visualize(X[ind,:], y[ind], w, loss, n_iter) v = mu * v - alpha * compute_grad(X, y, w) w = w + v visualize(X, y, w, loss, n_iter) plt.clf() """ Explanation: Implement gradient descent with momentum and test it's performance for different learning rate and momentum values. 
End of explanation """ w = np.array([1,0,0,0,0,0]) v = np.zeros(6) alpha = 0.01 # learning rate mu = 0.8 # momentum n_iter = 50 batch_size = 4 loss = np.zeros(n_iter) plt.figure(figsize=(12,5)) for i in range(n_iter): ind = random.sample(range(X.shape[0]), batch_size) loss[i] = compute_loss(X, y, w) visualize(X[ind,:], y[ind], w, loss, n_iter) v = mu * v - alpha * compute_grad(X, y, w + mu * v) w = w + v visualize(X, y, w, loss, n_iter) plt.clf() """ Explanation: Same task but for Nesterov's accelerated gradient: End of explanation """ w = np.array([1,0,0,0,0,0]) v = np.zeros(6) g = np.zeros(6) alpha = 2. # learning rate beta = 0.9 # (beta1 coefficient in original paper) exponential decay rate for the 1st moment estimates mu = 0.5 # (beta2 coefficient in original paper) exponential decay rate for the 2nd moment estimates eps = 1e-4 # A small constant for numerical stability n_iter = 50 batch_size = 4 loss = np.zeros(n_iter) plt.figure(figsize=(12,5)) for i in range(n_iter): ind = random.sample(range(X.shape[0]), batch_size) loss[i] = compute_loss(X, y, w) visualize(X[ind,:], y[ind], w, loss, n_iter) gr = compute_grad(X, y, w) v = beta * v + (1 - beta) * gr g = mu * g + gr * gr * (1 - mu) w = w - v * alpha * (1 - mu) / (g + eps) / (1 - beta) visualize(X, y, w, loss, n_iter) plt.clf() """ Explanation: Finally, try Adam algorithm. You can start with beta = 0.9 and mu = 0.999 End of explanation """
obulpathi/datascience
scikit/Chapter 1/Introduction.ipynb
apache-2.0
from sklearn.datasets import make_classification from sklearn.linear_model import LogisticRegression from sklearn.cross_validation import train_test_split X, y = make_classification(random_state=0) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) lr = LogisticRegression().fit(X_train, y_train) print("predictions: %s" % lr.predict(X_test)) print("accuracy: %.2f" % lr.score(X_test, y_test)) """ Explanation: Advanced Machine Learning with scikit-learn me : Andreas Mueller <img src="portrait_amueller.jpeg" width="200px"> API Review End of explanation """ %matplotlib inline from plot_forest import plot_forest_interactive plot_forest_interactive() """ Explanation: Grid-Search and Cross-Validation <img src="grid_search_svm.png"> Model Complexity <img src="overfitting_underfitting_cartoon.svg" width="70%"> End of explanation """ X = [{'age': 15.9, 'likes puppies': 'yes', 'location': 'Tokyo'}, {'age': 21.5, 'likes puppies': 'no', 'location': 'New York'}, {'age': 31.3, 'likes puppies': 'no', 'location': 'Paris'}, {'age': 25.1, 'likes puppies': 'yes', 'location': 'New York'}, {'age': 63.6, 'likes puppies': 'no', 'location': 'Tokyo'}, {'age': 14.4, 'likes puppies': 'yes', 'location': 'Tokyo'}] from sklearn.feature_extraction import DictVectorizer vect = DictVectorizer(sparse=False).fit(X) print(vect.transform(X)) print("feature names: %s" % vect.get_feature_names()) """ Explanation: Processing Pipelines <img src="pipeline.svg" width=60%> Real World Data End of explanation """
5agado/data-science-learning
statistics/Probability - Intro.ipynb
apache-2.0
import numpy as np import seaborn as sns import pandas as pd from matplotlib import pyplot as plt, animation %matplotlib notebook #%matplotlib inline sns.set_context("paper") # interactive imports import plotly import cufflinks as cf cf.go_offline(connected=True) plotly.offline.init_notebook_mode(connected=True) class RandomVar: def __init__(self, probs): self.values = np.arange(len(probs)) self.probs = probs def pick(self, n=1): return np.random.choice(self.values, p=self.probs) coin = RandomVar([0.5, 0.5]) coin.pick() biased_coin = RandomVar([0.1, 0.9]) biased_coin.pick() die = RandomVar([1/6]*6) die.pick() """ Explanation: Table of Contents Intro Probability Information Theory Intro Exploratory notebook related to the theory and introductory concepts behind probability. Includes toy examples implementation and visualization. Probability Probability is the science concerned with the understanding and manipulation of uncertainty. End of explanation """ # information content for a target probability def info_content(p_x): return -np.log2(p_x) # entropy of a random variable probability distribution def entropy(p_x): return -sum(p_x*np.log2(p_x)) entropy([1/8]*8) """ Explanation: Information Theory We interested in understanding the amount of information related to events. For example given a random variable $x$, the amount of information of a specific value can also be seen as "degree of surprise" of seeing $x$ being equal to such value. 
$$ h(x) = - \log_2 p(x) $$
For a random variable $x$, the corresponding measure, called entropy, is defined as:
$$ H[x] = - \sum_x{ p(x) \log_2 p(x) } $$
End of explanation
"""
# log function
x = np.linspace(0.00001, 2, 100)
plt.plot(x, np.log(x), label='Log')
plt.legend()
plt.show()

# log of product equals sum of logs
n = 10
a = np.random.random_sample(n)
b = np.random.random_sample(n)
plt.plot(a, label='a')
plt.plot(b, label='b')
plt.plot(np.log(a), label='log(a)')
plt.plot(np.log(b), label='log(b)')
#plt.plot(np.log(a)+np.log(b), label='log(a)+log(b)')
plt.plot(np.log(a*b), label='log(a*b)')
plt.legend()
plt.show()
"""
Explanation: Maximum entropy for a discrete random variable is obtained with a uniform distribution. For a continuous random variable we have an equivalent increase in entropy for an increase in the variance.
End of explanation
"""
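The two formulas above, information content $h(x)$ and entropy $H[x]$, can be checked numerically with a self-contained sketch using only the standard library (re-implementing them with `math.log2` instead of NumPy):

```python
import math

def info_content(p_x):
    """Information content (surprise) of an outcome with probability p_x, in bits."""
    return -math.log2(p_x)

def entropy(probs):
    """Entropy H[x] = -sum_x p(x) log2 p(x) of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs)

print(info_content(0.5))      # a fair coin flip carries 1 bit of surprise
print(entropy([0.5, 0.5]))    # fair coin: 1 bit
print(entropy([1/8] * 8))     # uniform over 8 outcomes: 3 bits
print(entropy([0.1, 0.9]))    # biased coin: less than 1 bit
```

The last two calls illustrate the point made above: for a fixed number of outcomes, entropy is maximized by the uniform distribution, and a biased coin is strictly less "surprising" than a fair one.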
Naereen/notebooks
Solving_an_equation_and_the_Lambert_W_function.ipynb
mit
%load_ext watermark %watermark -a "Lilian Besson (Naereen)" -i -v -p numpy,matplotlib,scipy,seaborn import numpy as np from scipy import optimize as opt import matplotlib as mpl mpl.rcParams['figure.figsize'] = (15, 8) import matplotlib.pyplot as plt import seaborn as sns sns.set(context="notebook", style="darkgrid", palette="hls", font="sans-serif", font_scale=1.8) """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Solving-an-equation,-numerically-or-with-the-Lambert-W-function" data-toc-modified-id="Solving-an-equation,-numerically-or-with-the-Lambert-W-function-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Solving an equation, numerically or with the Lambert W function</a></div><div class="lev2 toc-item"><a href="#Loading-packages-and-configuring-plot-sizes" data-toc-modified-id="Loading-packages-and-configuring-plot-sizes-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Loading packages and configuring plot sizes</a></div><div class="lev2 toc-item"><a href="#Plotting-the-function-first" data-toc-modified-id="Plotting-the-function-first-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Plotting the function first</a></div><div class="lev2 toc-item"><a href="#Solving-numerically?" data-toc-modified-id="Solving-numerically?-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Solving numerically?</a></div><div class="lev2 toc-item"><a href="#How-many-solutions-for-a-given-a-?" 
data-toc-modified-id="How-many-solutions-for-a-given-a-?-14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>How many solutions for a given a ?</a></div><div class="lev2 toc-item"><a href="#Number-of-solutions-as-function-of-a" data-toc-modified-id="Number-of-solutions-as-function-of-a-15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Number of solutions as function of a</a></div><div class="lev2 toc-item"><a href="#Plot-of-solution(s)-as-function-of-a" data-toc-modified-id="Plot-of-solution(s)-as-function-of-a-16"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Plot of solution(s) as function of a</a></div><div class="lev2 toc-item"><a href="#Solving-formally-with-the-Lambert-W-function" data-toc-modified-id="Solving-formally-with-the-Lambert-W-function-17"><span class="toc-item-num">1.7&nbsp;&nbsp;</span>Solving formally with the Lambert W function</a></div><div class="lev2 toc-item"><a href="#Asymptotic-behaviors-and-approximations" data-toc-modified-id="Asymptotic-behaviors-and-approximations-18"><span class="toc-item-num">1.8&nbsp;&nbsp;</span>Asymptotic behaviors and approximations</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-19"><span class="toc-item-num">1.9&nbsp;&nbsp;</span>Conclusion</a></div> # Solving an equation, numerically or with the Lambert W function I want to solve the equation $\exp(-ax^2)=x$ and find its solution(s) as a function of $a\in\mathbb{R}$. 
## Loading packages and configuring plot sizes End of explanation """ def objective(x, a): return np.exp(- a * x**2) - x """ Explanation: Plotting the function first End of explanation """ X = np.linspace(-2, 2, 2000) for a in [0, -0.1, 0.1, -1, 1]: plt.plot(X, objective(X, a), 'o-', label=f"$a={a:.3g}$", markevery=50) plt.legend() plt.xlabel("$x$"); plt.ylabel("$y$") plt.title(r"Function $\exp(- a x^2) - x$ for different $a$") plt.show() """ Explanation: First, let's have a look to its plot for some values of $a$: End of explanation """ X = np.linspace(0, 1.5, 2000) for a in [0, -0.1, 0.1, -1, 1]: plt.plot(X, objective(X, a), 'o-', label=f"$a={a:.3g}$", markevery=50) plt.legend() plt.xlabel("$x$"); plt.ylabel("$y$") plt.title(r"Function $\exp(- a x^2) - x$ for different $a$") plt.show() """ Explanation: We can see that a solution to $\exp(-a x^2) = x$ has to be positive, as $\exp(-a x^2) > 0$ for any $x,a$. We also check that if $a < 0$, $\exp(-a x^2) - x$ seems to always be positive, but if $a \geq 0$, it seems to have a unique root. Let's zoom a little bit: End of explanation """ X = np.linspace(0, 5, 2000) for a in [0, -0.1, 0.1, 1]: plt.plot(X, objective(X, a), 'o-', label=f"$a={a:.3g}$", markevery=50) plt.legend() plt.xlabel("$x$"); plt.ylabel("$y$") plt.title(r"Function $\exp(- a x^2) - x$ for different $a$") plt.show() """ Explanation: The curve for $a=-0.1$ seems to stay negative, but that's not possible as for $a<0$ and $x\to\infty$, $\exp(-a x^2)$ dominates over $-x$. We can check that it will have a second root: End of explanation """ def one_solution(a, x0=0, verb=False): sol = opt.root(objective, x0, args=(a,)) if verb: print(sol) if sol.success: return sol.x else: raise ValueError(f"No solution was found for a = {a:.3g} (and starting at x0 = {x0:.3g}).") """ Explanation: Solving numerically? We can start to try to use scipy.optimize.root to numerically solve this equation. 
End of explanation """ one_solution(-1, verb=True) """ Explanation: Let's check that there is no solution for $a < 0$ too small. End of explanation """ one_solution(-0.1, x0=0, verb=True) one_solution(-0.1, x0=10, verb=True) """ Explanation: It can find a solution, but only one (depending on the starting point $x_0$) and not both: End of explanation """ one_solution(1, x0=0) one_solution(1, x0=-100) one_solution(1, x0=100) """ Explanation: For $a > 0$, the equation seems to have a unique solution: End of explanation """ def solutions(a, x0s=None, tol=1e-10, verb=False): nbdigits = int(np.log10(1. / tol)) sols = set() if x0s is None: x0s = [-10, -5, -2, -1, 0, 1, 2, 5, 10] for x0 in x0s: sol = opt.root(objective, x0, args=(a,)) if sol.success: approx = np.round(float(sol.x), nbdigits) sols.add(approx) if verb and len(sols) == 0: print(f"No solution was found for a = {a:.3g} (and starting at x0 = {x0:.3g}).") return sols solutions(-10) solutions(-0.1) solutions(0) solutions(1) solutions(2) """ Explanation: We can just hack and try different values for $x_0$, expecting to find all the roots. End of explanation """ def thresholds(amin=-10, amax=10, delta=0.01): gap_points = dict() prev_a = amin prev_nb_sol = len(solutions(prev_a)) for a in np.arange(amin, amax, delta): nb_sol = len(solutions(a)) if nb_sol != prev_nb_sol: gap_points[(prev_nb_sol, nb_sol)] = (prev_a, a) prev_nb_sol = nb_sol prev_a = a return gap_points thresholds(amin=-10, amax=10, delta=0.01) thresholds(amin=-8, amax=1, delta=0.01) """ Explanation: How many solutions for a given a ? We can use this to try to find the threshold value for $a$ from $0$ to $2$ and from $2$ to $1$ solution: End of explanation """ amin = -100 amax = 100 gap_points = thresholds(amin=amin, amax=amax, delta=0.1) gap_points """ Explanation: I think having $3$ (or more) solutions is a numerical error. 
End of explanation """ def plot_gap_points(gap_points, amin, amax): ys = set() for ym, yM in gap_points.keys(): ys.add(ym) ys.add(yM) print(ys) xleft = dict() xright = dict() for (ym, yM), (xm, xM) in gap_points.items(): xleft[ym] = xleft.get(ym, []) + [xm] xright[yM] = xright.get(yM, []) + [xM] for ym, yM in gap_points.keys(): xleft[ym].sort() xright[yM].sort() print(xleft) print(xright) min_xleft = min(sum(list(xleft.values()), [])) max_xright = min(sum(list(xright.values()), [])) plt.figure() for y in ys: if y not in xleft and y in xright: for x in xright[y]: plt.hlines(y, x, amax) if y in xleft and min_xleft in xleft[y]: plt.hlines(y, amin, min_xleft) del xleft[y][0] #if y in xright and max_xright in xright[y]: # plt.hlines(y, max_xright, amax) # del xright[y][-1] if y in xleft and y in xright: for xmin, xmax in zip(xleft[y], xright[y]): plt.hlines(y, xmin, xmax) plt.xlabel("Value of $a$") plt.ylabel("Number of solution") plt.title(r"Number of solutions to $\exp(- a x^2) = x$, as function of $a$") return ys, xleft, min_xleft, xright, max_xright ys, xleft, min_xleft, xright, max_xright = plot_gap_points(gap_points, amin, amax) """ Explanation: As we will see below, even having two solutions is nothing but a numerical error. 
Number of solutions as function of a
We can plot the (estimated) number of solutions as a function of $a$, to start with, thanks to the matplotlib.pyplot.hlines function:
End of explanation
"""
def plot_gap_points(gap_points, amin, amax):
    """Plot the number of solutions as horizontal segments."""
    ys = set()
    for ym, yM in gap_points.keys():
        ys.add(ym)
        ys.add(yM)
    print(ys)
    xleft = dict()
    xright = dict()
    for (ym, yM), (xm, xM) in gap_points.items():
        xleft[ym] = xleft.get(ym, []) + [xm]
        xright[yM] = xright.get(yM, []) + [xM]
    for ym, yM in gap_points.keys():
        xleft[ym].sort()
        xright[yM].sort()
    print(xleft)
    print(xright)
    min_xleft = min(sum(list(xleft.values()), []))
    max_xright = max(sum(list(xright.values()), []))
    plt.figure()
    for y in ys:
        if y not in xleft and y in xright:
            for x in xright[y]:
                plt.hlines(y, x, amax)
        if y in xleft and min_xleft in xleft[y]:
            plt.hlines(y, amin, min_xleft)
            del xleft[y][0]
        #if y in xright and max_xright in xright[y]:
        #    plt.hlines(y, max_xright, amax)
        #    del xright[y][-1]
        if y in xleft and y in xright:
            for xmin, xmax in zip(xleft[y], xright[y]):
                plt.hlines(y, xmin, xmax)
    plt.xlabel("Value of $a$")
    plt.ylabel("Number of solutions")
    plt.title(r"Number of solutions to $\exp(- a x^2) = x$, as function of $a$")
    return ys, xleft, min_xleft, xright, max_xright

ys, xleft, min_xleft, xright, max_xright = plot_gap_points(gap_points, amin, amax)
"""
Explanation: Plot of solution(s) as function of a
Now we can try to use this to plot the solution(s) as a function of $a$.
End of explanation
"""
def plot_multivalued_function(X, f, maxnboutput=1, **kwargs):
    Y = np.zeros((maxnboutput, len(X)))
    Y.fill(np.nan)
    for i, x in enumerate(X):
        ys = sorted(list(f(x)))
        for j, y in enumerate(ys):
            Y[j, i] = y
    for j in range(maxnboutput):
        plt.plot(X, Y[j], 'o-', **kwargs)

A = np.linspace(-100, 100, 1000)
plot_multivalued_function(A, solutions, maxnboutput=2, markevery=10)
plt.legend()
plt.xlabel("Parameter $a$"); plt.ylabel("Solution(s)")
plt.title(r"Solution(s) to $\exp(- a x^2) = x$, as function of $a$")
plt.show()

A = np.linspace(0, 20, 2000)
plot_multivalued_function(A, solutions, maxnboutput=2, markevery=20)
plt.legend()
plt.xlabel("Parameter $a$"); plt.ylabel("Solution(s)")
plt.title(r"Solution(s) to $\exp(- a x^2) = x$, as function of $a$")
plt.show()
"""
Explanation: This shows the numerical solution to the equation, and we will check below that the formal solution coincides.
Solving formally with the Lambert W function
Luckily, we can transform this equation to solve it with the Lambert $W$ function, defined as $W(x) = z \Leftrightarrow x = z \mathrm{e}^{z}$. For more details, please see this page, or this article.
As for (almost) all the special functions, we don't need to write it ourselves: it is in scipy!
scipy.special.lambertw
End of explanation
"""
from scipy.special import lambertw
"""
Explanation: As any possible solution satisfies $x>0$,
$$ \exp(-a x^2) = x \Leftrightarrow \left(\exp(-a x^2)\right)^2 = \exp(-2 a x^2) = x^2 \Leftrightarrow 2 a y \exp(2 a y) = 2 a \;\;(\text{with}\;\; y := x^2) \Leftrightarrow u \exp(u) = 2 a \;\;(\text{with}\;\; u := 2 a y) \Leftrightarrow u = W(2a) \Leftrightarrow y = \frac{W(2a)}{2a} \Leftrightarrow x(a) := \sqrt{\frac{W(2a)}{2a}}. $$
And so it is quite easy to compute, for $a > 0$ (the behavior at $0$ is undefined without a more careful study):
End of explanation
"""
def formal_solution(a):
    return np.sqrt(lambertw(2 * a) / (2 * a))
"""
Explanation: We can check some values:
End of explanation
"""
for a in [0.5, 1, 2, 3, 4]:
    xa = formal_solution(a)
    assert np.isclose(np.exp(-a * xa**2), xa)
    print(f"a = {a:.3g} gives x(a) = {float(xa):.3g}")
"""
Explanation: Asymptotic behaviors and approximations
We can try to approximate the solution for small $a$ or large $a$:
For small $a$, $W(2a) \simeq 2a - 4a^2$ so $x(a) \simeq 1 - a$.
For large $a$, we have this bound: $$ \forall x \geq \mathrm{e},{\displaystyle \ln(x)-\ln {\bigl (}\ln(x){\bigr )}+{\frac {\ln {\bigl (}\ln(x){\bigr )}}{2\ln(x)}}\leq W(x)\leq \ln(x)-\ln {\bigl (}\ln(x){\bigr )}+{\frac {e}{e-1}}{\frac {\ln {\bigl (}\ln(x){\bigr )}}{\ln(x)}}} $$ End of explanation """ A = np.linspace(0, 20, 4000) A1 = A[A <= 0.5] A2 = A[A >= 1] Ae = A[A >= e] plt.plot(A, formal_solution(A), label="Solution", markevery=20) plt.plot(A1, 1 - A1, 'b--', label=r"Tangent at $0$: $x(a) \simeq 1 - a$", markevery=20) #plt.plot(A2, np.sqrt((np.log(2*A2) - np.log(np.log(2*A2)))/(2*A2)), 'g--', label=r"Asymptote at $+\infty$", markevery=20) plt.plot(Ae, lower_bound(Ae), 'g--', label=r"Lower-bound for $a \geq e$", markevery=20) plt.plot(Ae, upper_bound(Ae), 'c--', label=r"Upper-bound for $a \geq e$", markevery=20) plt.legend() plt.xlabel("Parameter $a$"); plt.ylabel(r"Solution $x(a) = \sqrt{\frac{W(2a)}{2a}}$") plt.title(r"Solution to $\exp(- a x^2) = x$, as function of $a$") plt.show() """ Explanation: We can plot all this. End of explanation """
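The closed form $x(a) = \sqrt{W(2a)/(2a)}$ used in this notebook can also be verified without scipy. The sketch below implements the principal branch of $W$ on $[0, \infty)$ with a few Newton iterations; it is an illustrative stand-in for `scipy.special.lambertw`, not a replacement:

```python
import math

def lambert_w(x, iterations=50):
    """Principal branch of the Lambert W function for x >= 0,
    solving w * exp(w) = x by Newton's method."""
    w = math.log(1.0 + x)  # reasonable starting guess on [0, inf)
    for _ in range(iterations):
        ew = math.exp(w)
        # f(w) = w e^w - x, f'(w) = e^w (1 + w)
        w -= (w * ew - x) / (ew * (1.0 + w))
    return w

def formal_solution(a):
    """x(a) = sqrt(W(2a) / (2a)), the root of exp(-a x^2) = x for a > 0."""
    return math.sqrt(lambert_w(2 * a) / (2 * a))

for a in [0.5, 1, 2, 3, 4]:
    x = formal_solution(a)
    residual = math.exp(-a * x**2) - x
    print(f"a = {a}: x(a) = {x:.6f}, residual = {residual:.2e}")
```

On the principal branch with $x \geq 0$ the iterate stays non-negative, so the derivative $e^w(1+w)$ never vanishes and the iteration converges quickly from $w_0 = \log(1+x)$.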
pysal/spaghetti
notebooks/transportation-problem.ipynb
bsd-3-clause
%config InlineBackend.figure_format = "retina" %load_ext watermark %watermark import geopandas from libpysal import examples import matplotlib import mip import numpy import os import spaghetti import matplotlib_scalebar from matplotlib_scalebar.scalebar import ScaleBar %matplotlib inline %watermark -w %watermark -iv """ Explanation: If any part of this notebook is used in your research, please cite with the reference found in README.md. The Transportation Problem Integrating pysal/spaghetti and python-mip for optimal shipping Author: James D. Gaboardi &#106;&#103;&#97;&#98;&#111;&#97;&#114;&#100;&#105;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109; This notebook provides a use case for: Introducing the Transportation Problem Declaration of a solution class and model parameters Solving the Transportation Problem for an optimal shipment plan End of explanation """ supply_schools = [1, 6, 7, 8] demand_schools = [2, 3, 4, 5] """ Explanation: 1 Introduction Scenario There are 8 schools in Neighborhood Y of City X and a total of 100 microscopes for the biology classes at the 8 schools, though the microscopes are not evenly distributed across the locations. Since last academic year there has been a significant enrollment shift in the neighborhood, and at 4 of the schools there is a surplus whereas the remaining 4 schools require additional microscopes. Dr. Rachel Carson, the head of the biology department at City X's School Board decides to utilize a mathematical programming model to solve the microscope discrepency. After consideration, she selects the Transportation Problem. The Transportation Problem seeks to allocate supply to demand while minimizing transportation costs and was formally described by Hitchcock (1941). 
Supply ($\textit{n}$) and demand ($\textit{m}$) are generally represented as unit weights of decision variables at facilities along a network with the time or distance between nodes representing the cost of transporting one unit from a supply node to a demand node. These costs are stored in an $\textit{n x m}$ cost matrix. Integer Linear Programming Formulation based on Daskin (2013, Ch. 2). $\begin{array} \displaystyle \normalsize \textrm{Minimize} & \displaystyle \normalsize \sum_{i \in I} \sum_{j \in J} c_{ij}x_{ij} & & & & \normalsize (1) \ \normalsize \textrm{Subject To} & \displaystyle \normalsize \sum_{j \in J} x_{ij} \leq S_i & \normalsize \forall i \in I; & & &\normalsize (2)\ & \displaystyle \normalsize \sum_{i \in I} x_{ij} \geq D_j & \normalsize \forall j \in J; & & &\normalsize (3)\ & \displaystyle \normalsize x_{ij} \geq 0 & \displaystyle \normalsize \forall i \in I & \displaystyle \normalsize \normalsize \forall j \in j. & &\normalsize (4)\ \end{array}$ $\begin{array} \displaystyle \normalsize \textrm{Where} & \small i & \small = & \small \textrm{each potential origin node} &&&&\ & \small I & \small = & \small \textrm{the complete set of potential origin nodes} &&&&\ & \small j & \small = & \small \textrm{each potential destination node} &&&&\ & \small J & \small = & \small \textrm{the complete set of potential destination nodes} &&&&\ & \small x_{ij} & \small = & \small \textrm{amount to be shipped from } i \in I \textrm{ to } j \in J &&&&\ & \small c_{ij} & \small = & \small \textrm{per unit shipping costs between all } i,j \textrm{ pairs} &&&& \ & \small S_i & \small = & \small \textrm{node } i \textrm{ supply for } i \in I &&&&\ & \small D_j & \small = & \small \textrm{node } j \textrm{ demand for } j \in J &&&&\ \end{array}$ References Church, Richard L. and Murray, Alan T. (2009) Business Site Selection, Locational Analysis, and GIS. Hoboken. John Wiley & Sons, Inc. Daskin, M. 
(2013) Network and Discrete Location: Models, Algorithms, and Applications. New York: John Wiley & Sons, Inc. Gass, S. I. and Assad, A. A. (2005) An Annotated Timeline of Operations Research: An Informal History. Springer US. Hitchcock, Frank L. (1941) The Distribution of a Product from Several Sources to Numerous Localities. Journal of Mathematics and Physics. 20(1):224-230. Koopmans, Tjalling C. (1949) Optimum Utilization of the Transportation System. Econometrica. 17:136-146. Miller, H. J. and Shaw, S.-L. (2001) Geographic Information Systems for Transportation: Principles and Applications. New York. Oxford University Press. Phillips, Don T. and Garcia‐Diaz, Alberto. (1981) Fundamentals of Network Analysis. Englewood Cliffs. Prentice Hall. 2. A model, data, and parameters Schools labeled as either 'supply' or 'demand' locations End of explanation """ amount_supply = [20, 30, 15, 35] amount_demand = [5, 45, 10, 40] """ Explanation: Amount of supply and demand at each location (indexed by supply_schools and demand_schools) End of explanation """ class TransportationProblem: def __init__( self, supply_nodes, demand_nodes, cij, si, dj, xij_tag="x_%s,%s", supply_constr_tag="supply(%s)", demand_constr_tag="demand(%s)", solver="cbc", display=True, ): """Instantiate and solve the Primal Transportation Problem based the formulation from Daskin (2013, Ch. 2). Parameters ---------- supply_nodes : geopandas.GeoSeries Supply node decision variables. demand_nodes : geopandas.GeoSeries Demand node decision variables. cij : numpy.array Supply-to-demand distance matrix for nodes. si : geopandas.GeoSeries Amount that can be supplied by each supply node. dj : geopandas.GeoSeries Amount that can be received by each demand node. xij_tag : str Shipping decision variable names within the model. Default is 'x_%s,%s' where %s indicates string formatting. supply_constr_tag : str Supply constraint labels. Default is 'supply(%s)'. demand_constr_tag : str Demand constraint labels. 
Default is 'demand(%s)'. solver : str Default is 'cbc' (coin-branch-cut). Can be set to 'gurobi' (if Gurobi is installed). display : bool Print out solution results. Attributes ---------- supply_nodes : See description in above. demand_nodes : See description in above. cij : See description in above. si : See description in above. dj : See description in above. xij_tag : See description in above. supply_constr_tag : See description in above. demand_constr_tag : See description in above. rows : int The number of supply nodes. rrows : range The index of supply nodes. cols : int The number of demand nodes. rcols : range The index of demand nodes. model : mip.model.Model Integer Linear Programming problem instance. xij : numpy.array Shipping decision variables (``mip.entities.Var``). """ # all nodes to be visited self.supply_nodes, self.demand_nodes = supply_nodes, demand_nodes # shipping costs (distance matrix) and amounts self.cij, self.si, self.dj = cij, si.values, dj.values self.ensure_float() # alpha tag for decision variables self.xij_tag = xij_tag # alpha tag for supply and demand constraints self.supply_constr_tag = supply_constr_tag self.demand_constr_tag = demand_constr_tag # instantiate a model self.model = mip.Model(" TransportationProblem", solver_name=solver) # define row and column indices self.rows, self.cols = self.si.shape[0], self.dj.shape[0] self.rrows, self.rcols = range(self.rows), range(self.cols) # create and set the decision variables self.shipping_dvs() # set the objective function self.objective_func() # add supply constraints self.add_supply_constrs() # add demand constraints self.add_demand_constrs() # solve self.solve(display=display) # shipping decisions lookup self.get_decisions(display=display) def ensure_float(self): """Convert integers to floats (rough edge in mip.LinExpr)""" self.cij = self.cij.astype(float) self.si = self.si.astype(float) self.dj = self.dj.astype(float) def shipping_dvs(self): """Create the shipping decision 
variables - eq (4).""" def _s(_x): """Helper for naming variables""" return self.supply_nodes[_x].split("_")[-1] def _d(_x): """Helper for naming variables""" return self.demand_nodes[_x].split("_")[-1] xij = numpy.array( [ [self.model.add_var(self.xij_tag % (_s(i), _d(j))) for j in self.rcols] for i in self.rrows ] ) self.xij = xij def objective_func(self): """Add the objective function - eq (1).""" self.model.objective = mip.minimize( mip.xsum( self.cij[i, j] * self.xij[i, j] for i in self.rrows for j in self.rcols ) ) def add_supply_constrs(self): """Add supply contraints to the model - eq (2).""" for i in self.rrows: rhs, label = self.si[i], self.supply_constr_tag % i self.model += mip.xsum(self.xij[i, j] for j in self.rcols) <= rhs, label def add_demand_constrs(self): """Add demand contraints to the model - eq (3).""" for j in self.rcols: rhs, label = self.dj[j], self.demand_constr_tag % j self.model += mip.xsum(self.xij[i, j] for i in self.rrows) >= rhs, label def solve(self, display=True): """Solve the model""" self.model.optimize() if display: obj = round(self.model.objective_value, 4) print("Minimized shipping costs: %s" % obj) def get_decisions(self, display=True): """Fetch the selected decision variables.""" shipping_decisions = {} if display: print("\nShipping decisions:") for i in self.rrows: for j in self.rcols: v, vx = self.xij[i, j], self.xij[i, j].x if vx > 0: if display: print("\t", v, vx) shipping_decisions[v.name] = vx self.shipping_decisions = shipping_decisions def print_lp(self, name=None): """Save LP file in order to read in and print.""" if not name: name = self.model.name lp_file_name = "%s.lp" % name self.model.write(lp_file_name) lp_file = open(lp_file_name, "r") lp = lp_file.read() print("\n", lp) lp_file.close() os.remove(lp_file_name) def extract_shipments(self, paths, id_col, ship="ship"): """Extract the supply to demand shipments as a ``geopandas.GeoDataFrame`` of ``shapely.geometry.LineString`` objects. 
Parameters ---------- paths : geopandas.GeoDataFrame Shortest-path routes between all ``self.supply_nodes`` and ``self.demand_nodes``. id_col : str ID column name. ship : str Column name for the amount of good shipped. Default is 'ship'. Returns ------- shipments : geopandas.GeoDataFrame Optimal shipments from ``self.supply_nodes`` to ``self.demand_nodes``. """ def _id(sp): """ID label helper""" return tuple([int(i) for i in sp.split("_")[-1].split(",")]) paths[ship] = int # set label of the shipping path for each OD pair. for ship_path, shipment in self.shipping_decisions.items(): paths.loc[(paths[id_col] == _id(ship_path)), ship] = shipment # extract only shiiping paths shipments = paths[paths[ship] != int].copy() shipments[ship] = shipments[ship].astype(int) return shipments """ Explanation: Solution class End of explanation """ shipping_colors = ["maroon", "cyan", "magenta", "orange"] def obs_labels(o, b, s, col="id", **kwargs): """Label each point pattern observation.""" def _lab_loc(_x): """Helper for labeling observations.""" return _x.geometry.coords[0] if o.index.name != "schools": X = o.index.name[0] else: X = "" kws = {"size": s, "ha": "left", "va": "bottom", "style": "oblique"} kws.update(kwargs) o.apply(lambda x: b.annotate(text=X+str(x[col]), xy=_lab_loc(x), **kws), axis=1) def make_patches(objects): """Create patches for legend""" patches = [] for _object in objects: try: oname = _object.index.name except AttributeError: oname = "shipping" if oname.split(" ")[0] in ["schools", "supply", "demand"]: ovalue = _object.shape[0] if oname == "schools": ms, m, c, a = 3, "o", "k", 1 elif oname.startswith("supply"): ms, m, c, a = 10, "o", "b", 0.25 elif oname.startswith("demand"): ms, m, c, a = 10, "o", "g", 0.25 if oname.endswith("snapped"): ms, m, a = float(ms) / 2.0, "x", 1 _kws = {"lw": 0, "c": c, "marker": m, "ms": ms, "alpha": a} label = "%s — %s" % (oname.capitalize(), int(ovalue)) p = matplotlib.lines.Line2D([], [], label=label, **_kws) 
patches.append(p) else: patch_info = plot_shipments(_object, "", for_legend=True) for c, lw, lwsc, (i, j) in patch_info: label = "s%s$\\rightarrow$d%s — %s microscopes" % (i, j, lw) _kws = {"alpha": 0.75, "c": c, "lw": lwsc, "label": label} p = matplotlib.lines.Line2D([], [], solid_capstyle="round", **_kws) patches.append(p) return patches def legend(objects, anchor=(1.005, 1.016)): """Add a legend to a plot""" patches = make_patches(objects) kws = {"fancybox": True, "framealpha": 0.85, "fontsize": "x-large"} kws.update({"bbox_to_anchor":anchor, "labelspacing":2., "borderpad":2.}) legend = matplotlib.pyplot.legend(handles=patches, **kws) legend.get_frame().set_facecolor("white") def plot_shipments(sd, b, scaled=0.75, for_legend=False): """Helper for plotting shipments based on OD and magnitude""" _patches = [] _plot_kws = {"alpha":0.75, "zorder":0, "capstyle":"round"} for c, (g, gdf) in zip(shipping_colors, sd): lw, lw_scaled, ids = gdf["ship"], gdf["ship"] * scaled, gdf["id"] if for_legend: for _lw, _lwsc, _id in zip(lw, lw_scaled, ids): _patches.append([c, _lw, _lwsc, _id]) else: gdf.plot(ax=b, color=c, lw=lw_scaled, **_plot_kws) if for_legend: return _patches """ Explanation: Plotting helper functions and constants Note: originating shipments End of explanation """ streets = geopandas.read_file(examples.get_path("streets.shp")) streets.crs = "esri:102649" streets = streets.to_crs("epsg:2762") """ Explanation: Streets End of explanation """ schools = geopandas.read_file(examples.get_path("schools.shp")) schools.index.name = "schools" schools.crs = "esri:102649" schools = schools.to_crs("epsg:2762") """ Explanation: Schools End of explanation """ schools_supply = schools[schools["POLYID"].isin(supply_schools)] schools_supply.index.name = "supply" schools_supply """ Explanation: Schools - supply nodes End of explanation """ schools_demand = schools[schools["POLYID"].isin(demand_schools)] schools_demand.index.name = "demand" schools_demand """ Explanation: Schools - 
demand nodes End of explanation """ ntw = spaghetti.Network(in_data=streets) vertices, arcs = spaghetti.element_as_gdf(ntw, vertices=True, arcs=True) """ Explanation: Instantiate a network object End of explanation """ # plot network base = arcs.plot(linewidth=3, alpha=0.25, color="k", zorder=0, figsize=(10, 10)) vertices.plot(ax=base, markersize=2, color="red", zorder=1) # plot observations schools.plot(ax=base, markersize=5, color="k", zorder=2) schools_supply.plot(ax=base, markersize=100, alpha=0.25, color="b", zorder=2) schools_demand.plot(ax=base, markersize=100, alpha=0.25, color="g", zorder=2) # add labels obs_labels(schools, base, 14, col="POLYID", c="k", weight="bold") # add legend elements = [schools, schools_supply, schools_demand] legend(elements) # add scale bar scalebar = ScaleBar(1, units="m", location="lower left") base.add_artist(scalebar); """ Explanation: Plot End of explanation """ ntw.snapobservations(schools_supply, "supply") supply = spaghetti.element_as_gdf(ntw, pp_name="supply") supply.index.name = "supply" supply_snapped = spaghetti.element_as_gdf(ntw, pp_name="supply", snapped=True) supply_snapped.index.name = "supply snapped" supply_snapped ntw.snapobservations(schools_demand, "demand") demand = spaghetti.element_as_gdf(ntw, pp_name="demand") demand.index.name = "demand" demand_snapped = spaghetti.element_as_gdf(ntw, pp_name="demand", snapped=True) demand_snapped.index.name = "demand snapped" demand_snapped # plot network base = arcs.plot(linewidth=3, alpha=0.25, color="k", zorder=0, figsize=(10, 10)) vertices.plot(ax=base, markersize=5, color="r", zorder=1) # plot observations schools.plot(ax=base, markersize=5, color="k", zorder=2) supply.plot(ax=base, markersize=100, alpha=0.25, color="b", zorder=3) supply_snapped.plot(ax=base, markersize=20, marker="x", color="b", zorder=3) demand.plot(ax=base, markersize=100, alpha=0.25, color="g", zorder=2) demand_snapped.plot(ax=base, markersize=20, marker="x", color="g", zorder=3) # add labels 
obs_labels(supply, base, 14, c="b") obs_labels(demand, base, 14, c="g") # add legend elements += [supply_snapped, demand_snapped] legend(elements) # add scale bar scalebar = ScaleBar(1, units="m", location="lower left") base.add_artist(scalebar); """ Explanation: Associate both the supply and demand schools with the network and plot End of explanation """ s2d, tree = ntw.allneighbordistances("supply", "demand", gen_tree=True) s2d[:3, :3] list(tree.items())[:4], list(tree.items())[-4:] """ Explanation: Calculate distance matrix while generating shortest path trees End of explanation """ supply["dv"] = supply["id"].apply(lambda _id: "s_%s" % _id) supply["s_i"] = amount_supply supply """ Explanation: 3. The Transportation Problem Create decision variables for the supply locations and amount to be supplied End of explanation """ demand["dv"] = demand["id"].apply(lambda _id: "d_%s" % _id) demand["d_j"] = amount_demand demand """ Explanation: Create decision variables for the demand locations and amount to be received End of explanation """ s, d, s_i, d_j = supply["dv"], demand["dv"], supply["s_i"], demand["d_j"] trans_prob = TransportationProblem(s, d, s2d, s_i, d_j) """ Explanation: Solve the Transportation Problem Note: shipping costs are in meters per microscope End of explanation """ trans_prob.print_lp() """ Explanation: Linear program (compare to its formulation in the Introduction) End of explanation """ paths = ntw.shortest_paths(tree, "supply", "demand") paths_gdf = spaghetti.element_as_gdf(ntw, routes=paths) paths_gdf.head() """ Explanation: Extract all network shortest paths End of explanation """ shipments = trans_prob.extract_shipments(paths_gdf, "id") shipments """ Explanation: Extract the shipping paths End of explanation """ # plot network base = arcs.plot(alpha=0.2, linewidth=1, color="k", figsize=(10, 10), zorder=0) vertices.plot(ax=base, markersize=1, color="r", zorder=2) # plot observations schools.plot(ax=base, markersize=5, color="k", zorder=2) 
supply.plot(ax=base, markersize=100, alpha=0.25, color="b", zorder=3) supply_snapped.plot(ax=base, markersize=20, marker="x", color="b", zorder=3) demand.plot(ax=base, markersize=100, alpha=0.25, color="g", zorder=2) demand_snapped.plot(ax=base, markersize=20, marker="x", color="g", zorder=3) # plot shipments plot_shipments(shipments.groupby("O"), base) # add labels obs_labels(supply, base, 14, c="b") obs_labels(demand, base, 14, c="g") # add legend elements += [shipments.groupby("O")] legend(elements) # add scale bar scalebar = ScaleBar(1, units="m", location="lower left") base.add_artist(scalebar); """ Explanation: Plot optimal shipping schedule End of explanation """
paulluo/work_note
stock_RT,Colaboratory.ipynb
unlicense
import tensorflow as tf

input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
    result = output.eval()
result
"""
Explanation: <a href="https://colab.research.google.com/github/paulluo/work_note/blob/master/stock_RT%EF%BC%8CColaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img height="60px" src="/img/colab_favicon.ico" align="left" hspace="20px" vspace="5px">
Welcome to Colaboratory!
Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. To learn more, see our FAQ.
Getting started
Overview of Colaboratory
Loading and saving data: local files, Drive, Sheets, Google Cloud Storage
Importing libraries and installing dependencies
Using Google Cloud BigQuery
Forms, charts, Markdown, and widgets
GPU-enabled TensorFlow
Machine Learning Crash Course: Intro to Pandas and First Steps with TensorFlow
Key features
Running TensorFlow code
With Colaboratory, you can execute TensorFlow code in your browser with a single click. The example below shows the addition of two matrices.
$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\ \end{bmatrix} + \begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\ \end{bmatrix} = \begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7.
\ \end{bmatrix}$ End of explanation """ import matplotlib.pyplot as plt import numpy as np x = np.arange(20) y = [x_i + np.random.randn(1) for x_i in x] a, b = np.polyfit(x, y, 1) _ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-') """ Explanation: GitHub 您可以通过依次转到“文件”>“在 GitHub 中保存一份副本…”,保存一个 Colab 笔记本副本 只需在 colab.research.google.com/github/ 后面加上路径,即可在 GitHub 上加载任何 .ipynb。例如,colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb 将在 GitHub 上加载此 .ipynb。 可视化 Colaboratory 包含很多已被广泛使用的库(例如 matplotlib),因而能够简化数据的可视化过程。 End of explanation """ !pip install -q matplotlib-venn from matplotlib_venn import venn2 _ = venn2(subsets = (3, 2, 1)) """ Explanation: 想使用新的库?请在笔记本的顶部通过 pip install 命令安装该库。然后,您就可以在笔记本的任何其他位置使用该库。要了解导入常用库的方法,请参阅导入库示例笔记本。 End of explanation """ import numpy as np import tushare as ts !pip install tushare import tushare as ts import tensorflow as tf !pip install tushare import os import sys import pandas as pd import re import matplotlib.pyplot as plt import tushare as ts import time import numpy as np ## hengruiyiyao:601800,,hanruiguye 300618,300601 kangtai ,luoniushang:000735, hongchuanhuizhi:002930 # example of plot web_site:https://matplotlib.org/gallery/index.html ##sz3999006,601800,300732 ###: 002460 df=ts.get_k_data('300033', start='2018-06-28', ktype='5') df2=ts.get_k_data('603180', start='2018-06-28', ktype='5') ax1 = plt.subplot2grid((8,2),(0,0),rowspan=5,colspan=1) ax2 = plt.subplot2grid((8,2),(5,0),rowspan=3, colspan=1) ax3 = plt.subplot2grid((8,2),(0,1),rowspan=5, colspan=1) ax4 = plt.subplot2grid((8,2),(5,1),rowspan=3, colspan=1) #df =df.set_index(df.date) #ax3 = plt.subplot2grid((3, 3), (1, 0), colspan=2) #ax4 = plt.subplot2grid((3, 3), (1, 2), rowspan=2) #ax5 = plt.subplot2grid((3, 3), (2, 2), rowspan=1) ################################### ax1.plot(df.index[0:30],df['close'].tail(30)) ax1.plot(df.index[0:30],df['open'].tail(30)) ax2.bar(df.index[0:15],df['volume'].tail(15)) 
#################### ax3.plot(df2.index[0:30],df2['close'].tail(30)) ax3.plot(df2.index[0:30],df2['open'].tail(30)) ax4.bar(df2.index[0:15],df2['volume'].tail(15)) ax1.grid(True) #ax2.grid(True) ax3.grid(True) #ax4.grid(True) #ax1.hist(log=False,color=['red','green'],label=['open','close']) #print time.asctime() #print int(time.time()) plt.show() Add study web site:math https://blog.csdn.net/xianlingmao/article/details/7919597 """ Explanation: 本地运行时支持 Colab 支持连接本地计算机上的 Jupyter 运行时。有关详情,请参阅我们的文档。 End of explanation """
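The tf.Session-based matrix addition at the top of this notebook uses the old TensorFlow 1.x graph API; the same sum can be checked with plain NumPy (a small sketch, independent of which TensorFlow version is installed):

```python
import numpy as np

# Same matrices as the tf.ones / tf.range example above.
input1 = np.ones((2, 3))
input2 = np.arange(1, 7, dtype=np.float32).reshape(2, 3)
output = input1 + input2  # element-wise sum, matching the notebook's result
```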
mitchshack/data_analysis_with_python_and_pandas
2- IPython Notebooks and Raw Python Data Analysis/2-4 Raw Python - Lambda Functions.ipynb
apache-2.0
x = range(10) x [item**2 for item in x] def square(num): return num**2 list(map(square, x)) square_lamb = lambda num: num**2 list(map(square_lamb, x)) """ Explanation: Raw Python - Lambda Functions End of explanation """ list(map(lambda num: num**2, x)) """ Explanation: Lambda functions are just anonymous functions and don't need to be created as official functions prior to being used. This makes them useful because they can really help with code readability (if used appropriately). Their true power comes from their expressiveness when performing inline operations. I'm sure that sounds abstract at this point but I figured it would lay good ground work for our example End of explanation """ [item**2 for item in range(1,20) if item % 2 == 0] """ Explanation: square every number that is divisible by 2 from 1 to 20 End of explanation """ list(map(lambda z: z**2, filter(lambda z: z % 2 == 0, range(1,20)))) """ Explanation: That’s a little abstract at this point... We'll get to those with map, filter, and reduce below Now obviously this is not appropriate for everything but I just want you to understand that lambda functions are just anonymous functions. End of explanation """
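The text above promises map, filter, and reduce, but reduce never actually appears in this section; in Python 3 it lives in functools. A short sketch chaining all three with lambdas (summing the squares of the even numbers below 20):

```python
from functools import reduce

evens_squared_sum = reduce(
    lambda acc, item: acc + item,                                 # fold: running sum
    map(lambda num: num**2, filter(lambda num: num % 2 == 0, range(1, 20))),
    0,                                                            # initial accumulator
)
```

As with map and filter, a list comprehension (or the built-in sum) is usually more readable; reduce is the general tool when the combining step isn't a built-in.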
atulsingh0/MachineLearning
ML_UoW/Course01_Regression/Week04_Ridge_Regression_Assignment02.ipynb
gpl-3.0
import graphlab as gl """ Explanation: Regression Week 4: Ridge Regression (gradient descent) In this notebook, you will implement ridge regression via gradient descent. You will: * Convert an SFrame into a Numpy array * Write a Numpy function to compute the derivative of the regression weights with respect to a single feature * Write gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty Fire up graphlab create Make sure you have the latest version of GraphLab Create (>= 1.7) End of explanation """ sales = gl.SFrame('data/kc_house_data.gl/') """ Explanation: Load in house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located. End of explanation """ def calcRSS(model, features, output): predict = model.predict(features) error = output - predict rss = np.sum(np.square(error)) return rss """ Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features. End of explanation """ import numpy as np # note this allows us to refer to numpy as np instead def get_numpy_data(data_sframe, features, output): data_sframe['constant'] = 1 # add a constant column to an SFrame # prepend variable 'constant' to the features list features = ['constant'] + features # select the columns of data_SFrame given by the ‘features’ list into the SFrame ‘features_sframe’ features_sframe = data_sframe[features] # this will convert the features_sframe into a numpy matrix with GraphLab Create >= 1.7!! 
    features_matrix = features_sframe.to_numpy()
    # assign the column of data_sframe associated with the target to the variable 'output_sarray'
    output_sarray = data_sframe[output]
    # this will convert the SArray into a numpy array:
    output_array = output_sarray.to_numpy()  # GraphLab Create >= 1.7!!
    return (features_matrix, output_array)
""" Explanation: Import useful functions from previous notebook As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_numpy_data() from the second notebook of Week 2. End of explanation """
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
    # If feature_is_constant is True, derivative is twice the dot product of errors and feature
    # Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
    if feature_is_constant:
        derivative = 2 * np.dot(errors, feature)
    else:
        derivative = 2 * np.dot(errors, feature) + 2 * l2_penalty * weight
    return derivative
""" Explanation: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights: Computing the Derivative We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term. Cost(w) = SUM[ (prediction - output)^2 ] + l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2). Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to w[i] can be written as: 2*SUM[ error*[feature_i] ]. The derivative of the regularization term with respect to w[i] is: 2*l2_penalty*w[i]. Summing both, we get 2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i]. That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus 2*l2_penalty*w[i].
We will not regularize the constant. Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the 2*l2_penalty*w[0] term). Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus 2*l2_penalty*w[i]. With this in mind, complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when we are dealing with the constant (so we don't regularize it) we added the extra parameter feature_is_constant, which you should set to True when computing the derivative of the constant and False otherwise. End of explanation """
# predict_output from Week 2: the predictions are the dot product of the feature matrix and the weights
def predict_output(feature_matrix, weights):
    return np.dot(feature_matrix, weights)

(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.])
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output  # prediction errors
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False)
print np.sum(errors*example_features[:,1])*2+20.
print ''
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True)
print np.sum(errors)*2.
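The derivative formula can also be sanity-checked numerically: for the ridge cost, a central finite difference should match the analytic expression 2*dot(errors, feature) + 2*l2_penalty*weight. A self-contained NumPy sketch on synthetic data (independent of GraphLab and the house-sales set):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 2)   # tiny design matrix; index 0 plays the role of the constant weight
y = rng.randn(5)
w = np.array([1.0, 2.0])
l2_penalty = 3.0

def ridge_cost(w):
    errors = X.dot(w) - y
    # the constant weight w[0] is left out of the penalty, as in the notebook
    return np.dot(errors, errors) + l2_penalty * np.dot(w[1:], w[1:])

errors = X.dot(w) - y
analytic = 2 * np.dot(errors, X[:, 1]) + 2 * l2_penalty * w[1]

# central finite difference in the w[1] direction
eps = 1e-6
w_plus, w_minus = w.copy(), w.copy()
w_plus[1] += eps
w_minus[1] -= eps
numeric = (ridge_cost(w_plus) - ridge_cost(w_minus)) / (2 * eps)
```

The two numbers should agree to many decimal places, which is a quick way to catch sign or factor-of-two mistakes before running gradient descent.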
""" Explanation: To test your feature derivartive run the following: End of explanation """ def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100): print 'Starting gradient descent with l2_penalty = ' + str(l2_penalty) weights = np.array(initial_weights) # make sure it's a numpy array iteration = 0 # iteration counter print_frequency = 1 # for adjusting frequency of debugging output #while not reached maximum number of iterations: iteration += 1 # increment iteration counter ### === code section for adjusting frequency of debugging output. === if iteration == 10: print_frequency = 10 if iteration == 100: print_frequency = 100 if iteration%print_frequency==0: print('Iteration = ' + str(iteration)) ### === end code section === # compute the predictions based on feature_matrix and weights using your predict_output() function # compute the errors as predictions - output # from time to time, print the value of the cost function if iteration%print_frequency==0: print 'Cost function = ', str(np.dot(errors,errors) + l2_penalty*(np.dot(weights,weights) - weights[0]**2)) for i in xrange(len(weights)): # loop over each weight # Recall that feature_matrix[:,i] is the feature column associated with weights[i] # compute the derivative for weight[i]. #(Remember: when i=0, you are computing the derivative of the constant!) # subtract the step size times the derivative from the current weight print 'Done with gradient descent at iteration ', iteration print 'Learned weights = ', str(weights) return weights """ Explanation: Gradient Descent Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function. 
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set 100 by default. (Use default parameter values in Python.) With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria. End of explanation """ simple_features = ['sqft_living'] my_output = 'price' """ Explanation: Visualizing effect of L2 penalty The L2 penalty gets its name because it causes weights to have small L2 norms than otherwise. Let's see how large weights get penalized. Let us consider a simple model with 1 feature: End of explanation """ train_data,test_data = sales.random_split(.8,seed=0) """ Explanation: Let us split the dataset into training set and test set. Make sure to use seed=0: End of explanation """ (simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output) (simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output) """ Explanation: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data. 
End of explanation """ initial_weights = np.array([0., 0.]) step_size = 1e-12 max_iterations=1000 """ Explanation: Let's set the parameters for our optimization: End of explanation """ import matplotlib.pyplot as plt %matplotlib inline plt.plot(simple_feature_matrix,output,'k.', simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-', simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-') """ Explanation: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights: simple_weights_0_penalty we'll use them later. Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights: simple_weights_high_penalty we'll use them later. This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.) End of explanation """ model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors. my_output = 'price' (feature_matrix, output) = get_numpy_data(train_data, model_features, my_output) (test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output) """ Explanation: Compute the RSS on the TEST data for the following three sets of weights: 1. The initial weights (all zeros) 2. The weights learned with no regularization 3. The weights learned with high regularization Which weights perform best? QUIZ QUESTIONS 1. What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization? 2. Comparing the lines you fit with the with no regularization versus high regularization, which one is steeper? 3. 
What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)? Running a multiple regression with L2 penalty Let us now consider a model with 2 features: ['sqft_living', 'sqft_living15']. First, create Numpy versions of your training and test data with these two features. End of explanation """ initial_weights = np.array([0.0,0.0,0.0]) step_size = 1e-12 max_iterations = 1000 """ Explanation: We need to re-inialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations. End of explanation """
deepcharles/ruptures
docs/getting-started/basic-usage.ipynb
bsd-2-clause
import matplotlib.pyplot as plt  # for display purposes
import ruptures as rpt  # our package
""" Explanation: Basic usage <!-- {{ add_binder_block(page) }} --> Let us start with a simple example to illustrate the use of ruptures: generate a 3-dimensional piecewise constant signal with noise and estimate the change points. Setup First, we make the necessary imports. End of explanation """
n_samples, n_dims, sigma = 1000, 3, 2
n_bkps = 4  # number of breakpoints
signal, bkps = rpt.pw_constant(n_samples, n_dims, n_bkps, noise_std=sigma)
""" Explanation: Generate and display the signal Let us generate a 3-dimensional piecewise constant signal with Gaussian noise. End of explanation """
print(bkps)
""" Explanation: The true change points of this synthetic signal are available in the bkps variable. End of explanation """
fig, ax_array = rpt.display(signal, bkps)
""" Explanation: Note that the first four elements are change point indexes while the last is simply the number of samples. (This is a technical convention so that functions in ruptures always know the length of the signal at hand.) It is also possible to plot our $\mathbb{R}^3$-valued signal along with the true change points with the rpt.display function. In the following image, the color changes whenever the mean of the signal shifts. End of explanation """
# detection
algo = rpt.Dynp(model="l2").fit(signal)
result = algo.predict(n_bkps=4)
print(result)
""" Explanation: Change point detection We can now perform change point detection, meaning that we find the indexes where the signal mean changes. To that end, we minimize the sum of squared errors when approximating the signal by a piecewise constant signal. Formally, for a signal $y_0,y_1,\dots,y_{T-1}$ ($T$ samples), we solve the following optimization problem, over all possible change positions $t_1<t_2<\dots<t_K$ (where the number $K$ of changes is defined by the user): $$ \hat{t}_1, \hat{t}_2, \dots, \hat{t}_K = \arg\min_{t_1,\dots,t_K} V(t_1,t_2,\dots,t_K) $$ with $$ V(t_1,t_2,\dots,t_K) := \sum_{k=0}^K \sum_{t=t_k}^{t_{k+1}-1} \|y_t - \bar{y}_{t_k..t_{k+1}}\|^2 $$ where $\bar{y}_{t_k..t_{k+1}}$ is the empirical mean of the sub-signal $y_{t_k}, y_{t_k+1},\dots,y_{t_{k+1}-1}$. (By convention $t_0=0$ and $t_{K+1}=T$.) This optimization is solved with dynamic programming, using the Dynp class. (More information in the section What is change point detection? and the User guide.) End of explanation """
# display
rpt.display(signal, bkps, result)
plt.show()
""" Explanation: Again the first elements are change point indexes and the last is the number of samples. Display the results To visually compare the true segmentation (bkps) and the estimated one (result), we can resort to rpt.display a second time. In the following image, the alternating colors indicate the true breakpoints and the dashed vertical lines, the estimated breakpoints. End of explanation """
mne-tools/mne-tools.github.io
dev/_downloads/272b39eb7cbe2bfe1e8c768341ec7c56/time_frequency_simulated.ipynb
bsd-3-clause
# Authors: Hari Bharadwaj <hari@nmr.mgh.harvard.edu> # Denis Engemann <denis.engemann@gmail.com> # Chris Holdgraf <choldgraf@berkeley.edu> # # License: BSD-3-Clause import numpy as np from matplotlib import pyplot as plt from mne import create_info, EpochsArray from mne.baseline import rescale from mne.time_frequency import (tfr_multitaper, tfr_stockwell, tfr_morlet, tfr_array_morlet) from mne.viz import centers_to_edges print(__doc__) """ Explanation: Time-frequency on simulated data (Multitaper vs. Morlet vs. Stockwell) This example demonstrates the different time-frequency estimation methods on simulated data. It shows the time-frequency resolution trade-off and the problem of estimation variance. In addition it highlights alternative functions for generating TFRs without averaging across trials, or by operating on numpy arrays. End of explanation """ sfreq = 1000.0 ch_names = ['SIM0001', 'SIM0002'] ch_types = ['grad', 'grad'] info = create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types) n_times = 1024 # Just over 1 second epochs n_epochs = 40 seed = 42 rng = np.random.RandomState(seed) noise = rng.randn(n_epochs, len(ch_names), n_times) # Add a 50 Hz sinusoidal burst to the noise and ramp it. t = np.arange(n_times, dtype=np.float64) / sfreq signal = np.sin(np.pi * 2. * 50. * t) # 50 Hz sinusoid signal signal[np.logical_or(t < 0.45, t > 0.55)] = 0. # Hard windowing on_time = np.logical_and(t >= 0.45, t <= 0.55) signal[on_time] *= np.hanning(on_time.sum()) # Ramping data = noise + signal reject = dict(grad=4000) events = np.empty((n_epochs, 3), dtype=int) first_event_sample = 100 event_id = dict(sin50hz=1) for k in range(n_epochs): events[k, :] = first_event_sample + k * n_times, 0, event_id['sin50hz'] epochs = EpochsArray(data=data, info=info, events=events, event_id=event_id, reject=reject) epochs.average().plot() """ Explanation: Simulate data We'll simulate data with a known spectro-temporal structure. 
End of explanation """ freqs = np.arange(5., 100., 3.) vmin, vmax = -3., 3. # Define our color limits. """ Explanation: Calculate a time-frequency representation (TFR) Below we'll demonstrate the output of several TFR functions in MNE: :func:mne.time_frequency.tfr_multitaper :func:mne.time_frequency.tfr_stockwell :func:mne.time_frequency.tfr_morlet Multitaper transform First we'll use the multitaper method for calculating the TFR. This creates several orthogonal tapering windows in the TFR estimation, which reduces variance. We'll also show some of the parameters that can be tweaked (e.g., time_bandwidth) that will result in different multitaper properties, and thus a different TFR. You can trade time resolution or frequency resolution or both in order to get a reduction in variance. End of explanation """ n_cycles = freqs / 2. time_bandwidth = 2.0 # Least possible frequency-smoothing (1 taper) power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles, time_bandwidth=time_bandwidth, return_itc=False) # Plot results. Baseline correct based on first 100 ms. power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, title='Sim: Least smoothing, most variance') """ Explanation: (1) Least smoothing (most variance/background fluctuations). End of explanation """ n_cycles = freqs # Increase time-window length to 1 second. time_bandwidth = 4.0 # Same frequency-smoothing as (1) 3 tapers. power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles, time_bandwidth=time_bandwidth, return_itc=False) # Plot results. Baseline correct based on first 100 ms. power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, title='Sim: Less frequency smoothing, more time smoothing') """ Explanation: (2) Less frequency smoothing, more time smoothing. End of explanation """ n_cycles = freqs / 2. time_bandwidth = 8.0 # Same time-smoothing as (1), 7 tapers. 
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles, time_bandwidth=time_bandwidth, return_itc=False) # Plot results. Baseline correct based on first 100 ms. power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, title='Sim: Less time smoothing, more frequency smoothing') """ Explanation: (3) Less time smoothing, more frequency smoothing. End of explanation """ fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True) fmin, fmax = freqs[[0, -1]] for width, ax in zip((0.2, .7, 3.0), axs): power = tfr_stockwell(epochs, fmin=fmin, fmax=fmax, width=width) power.plot([0], baseline=(0., 0.1), mode='mean', axes=ax, show=False, colorbar=False) ax.set_title('Sim: Using S transform, width = {:0.1f}'.format(width)) plt.tight_layout() """ Explanation: Stockwell (S) transform Stockwell uses a Gaussian window to balance temporal and spectral resolution. Importantly, frequency bands are phase-normalized, hence strictly comparable with regard to timing, and, the input signal can be recoverd from the transform in a lossless way if we disregard numerical errors. In this case, we control the spectral / temporal resolution by specifying different widths of the gaussian window using the width parameter. End of explanation """ fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True) all_n_cycles = [1, 3, freqs / 2.] for n_cycles, ax in zip(all_n_cycles, axs): power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False) power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, axes=ax, show=False, colorbar=False) n_cycles = 'scaled by freqs' if not isinstance(n_cycles, int) else n_cycles ax.set_title('Sim: Using Morlet wavelet, n_cycles = %s' % n_cycles) plt.tight_layout() """ Explanation: Morlet Wavelets Finally, show the TFR using morlet wavelets, which are a sinusoidal wave with a gaussian envelope. 
We can control the balance between spectral and temporal resolution with the n_cycles parameter, which defines the number of cycles to include in the window. End of explanation """ n_cycles = freqs / 2. power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False, average=False) print(type(power)) avgpower = power.average() avgpower.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, title='Using Morlet wavelets and EpochsTFR', show=False) """ Explanation: Calculating a TFR without averaging over epochs It is also possible to calculate a TFR without averaging across trials. We can do this by using average=False. In this case, an instance of :class:mne.time_frequency.EpochsTFR is returned. End of explanation """ power = tfr_array_morlet(epochs.get_data(), sfreq=epochs.info['sfreq'], freqs=freqs, n_cycles=n_cycles, output='avg_power') # Baseline the output rescale(power, epochs.times, (0., 0.1), mode='mean', copy=False) fig, ax = plt.subplots() x, y = centers_to_edges(epochs.times * 1000, freqs) mesh = ax.pcolormesh(x, y, power[0], cmap='RdBu_r', vmin=vmin, vmax=vmax) ax.set_title('TFR calculated on a numpy array') ax.set(ylim=freqs[[0, -1]], xlabel='Time (ms)') fig.colorbar(mesh) plt.tight_layout() plt.show() """ Explanation: Operating on arrays MNE also has versions of the functions above which operate on numpy arrays instead of MNE objects. They expect inputs of the shape (n_epochs, n_channels, n_times). They will also return a numpy array of shape (n_epochs, n_channels, n_freqs, n_times). End of explanation """
google/prog-edu-assistant
autograder/extract/submission.ipynb
apache-2.0
print("hello") print("bye bye") print("hey", "you") print("one") print("two") """ Explanation: Hello world In this unit you will learn how to use Python to implement the first ever program that every programmer starts with. Introduction Here is the traditional first programming exercise, called "Hello world". The task is to print the message: "Hello, world". Here are a few examples to get you started. Run the following cells and see how you can print a message. To run a cell, click with mouse inside a cell, then press Ctrl+Enter to execute it. If you want to execute a few cells sequentially, then press Shift+Enter instead, and the focus will be automatically moved to the next cell as soon as one cell finishes execution. End of explanation """ def hello(x): print("Hello, " + x) """ Explanation: Exercise Now it is your turn. Please create a program in the next cell that would print a message "Hello, world": End of explanation """
ES-DOC/esdoc-jupyterhub
notebooks/cmcc/cmip6/models/sandbox-1/ocnbgchem.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-1', 'ocnbgchem') """ Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era: CMIP6 Institute: CMCC Source ID: SANDBOX-1 Topic: Ocnbgchem Sub-Topics: Tracers. Properties: 65 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:50 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. 
Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) """ Explanation: 1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all prognostic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all diagnotic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Damping Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any tracer damping used (such as artificial correction or relaxation to climatology,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for passive tracers End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. 
Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transport scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Transport scheme used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Use Different Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe transport scheme if different from that of ocean model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how atmospheric deposition is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s) """ Explanation: 5.2. 
River Input Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river input is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Sediments From Boundary Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are specified from boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Sediments From Explicit Model Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are specified from explicit sediment model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.2. 
CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.8. N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.16. 
SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Tracers Ocean biogeochemistry tracers 8.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Nutrients Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List nutrient species present in ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Nitrous Species If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous species. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.5. Nitrous Processes If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous processes. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Definition of upper trophic level (e.g. based on size) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Upper Trophic Levels Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Define how upper trophic level are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) """ Explanation: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of phytoplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. 
Pft Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton functional types (PFT) (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of zooplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Zooplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. 
Tracers --&gt; Dissolved Organic Matter Dissolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there bacteria representation ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Lability Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe treatment of lability in dissolved organic matter End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is particulate carbon represented in ocean biogeochemistry? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, type(s) of particulate matter taken into account End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) """ Explanation: 13.3. Size If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe if a particle size spectrum is used to represent distribution of particles in water volume End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.4. Size If Discrete Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic and discrete size, describe which size classes are used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Sinking Speed If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, method for calculation of sinking speed of particles End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) """ Explanation: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. 
Carbon Isotopes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which carbon isotopes are modelled (C13, C14)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.2. Abiotic Carbon Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is abiotic carbon modelled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) """ Explanation: 14.3. Alkalinity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is alkalinity modelled ? End of explanation """
tpin3694/tpin3694.github.io
python/if_and_if_else_statements.ipynb
mit
conflict_active = 1 """ Explanation: Title: if and if else Slug: if_and_if_else_statements Summary: if and if else Date: 2016-05-01 12:00 Category: Python Tags: Basics Authors: Chris Albon Create a variable with the status of the conflict. 1 if the conflict is active 0 if the conflict is not active unknown if the status of the conflict is unknown End of explanation """ if conflict_active == 1: print('The conflict is active.') """ Explanation: If the conflict is active print a statement End of explanation """ if conflict_active == 1: print('The conflict is active.') else: print('The conflict is not active.') """ Explanation: If the conflict is active print a statement, if not, print a different statement End of explanation """ if conflict_active == 1: print('The conflict is active.') elif conflict_active == 'unknown': print('The status of the conflict is unknown') else: print('The conflict is not active.') """ Explanation: If the conflict is active print a statement, if not, print a different statement, if unknown, state a third statement. End of explanation """
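The three branches above can also be collected into one reusable function — a sketch of our own (the `conflict_status` name is not part of the original notebook), handy when the same status check is needed in several places:

```python
def conflict_status(conflict_active):
    """Return a message describing the conflict status flag.

    Accepts 1 (active), 0 (not active), or the string 'unknown'.
    """
    if conflict_active == 1:
        return 'The conflict is active.'
    elif conflict_active == 'unknown':
        return 'The status of the conflict is unknown'
    else:
        return 'The conflict is not active.'

# Exercise each branch.
print(conflict_status(1))
print(conflict_status('unknown'))
print(conflict_status(0))
```

Because the `elif` is checked only after the first condition fails, the order of the tests matters: any value other than `1` or `'unknown'` falls through to the `else` branch.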
google/applied-machine-learning-intensive
content/05_deep_learning/02_natural_language_processing/colab.ipynb
apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/05_deep_learning/02_natural_language_processing/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Copyright 2020 Google LLC. End of explanation """ import numpy as np import tensorflow as tf np.random.seed(42) tf.random.set_seed(42) """ Explanation: Natural Language Processing Look almost anywhere around you, and you'll see an application of natural language processing (NLP) at work. This broad field covers everything from spellcheck to translation between languages to full machine understanding of human language. In this lesson we'll work through the typical process of an NLP problem. We'll first use a bag-of-words approach to train a simple classifier model. Then we'll use a sequential approach (considering the order of words) to train an RNN model. Exploratory Data Analysis We will use the Sentiment Labelled Sentences Data Set from the UCI Machine Learning Repository. This dataset was used in the paper 'From Group to Individual Labels using Deep Features,' Kotzias et. al., KDD 2015 and contains 3000 user reviews from IMDB, Amazon, and Yelp with the corresponding sentiment of each review (positive: 1 or negative: 0). This supervised problem of predicting sentiment is often called a "sentiment analysis task." 
Download the Data In order to get reproducible results for this lab, we'll first seed the random number generators. End of explanation """ import zipfile import io import shutil import os import urllib.request url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00331/sentiment%20labelled%20sentences.zip' # Download zip file from url. zipdata = io.BytesIO() zipdata.write(urllib.request.urlopen(url).read()) # Extract zip files. zfile = zipfile.ZipFile(zipdata) zfile.extractall() zfile.close() # Rename directory to "data". shutil.rmtree('./data', ignore_errors=True) shutil.move('sentiment labelled sentences', 'data') os.listdir('data') """ Explanation: Next we'll download and unzip the data. End of explanation """ import pandas as pd df = pd.DataFrame(columns=['review', 'label']) for file in sorted(os.listdir('data')): if file.endswith('_labelled.txt'): df = df.append(pd.read_csv(os.path.join('data', file), sep='\t', names=['review', 'label'])) df.describe() """ Explanation: There are three files that we'll use in our model: amazon_cells_labelled.txt, imdb_labelled.txt, and yelp_labelled.txt. As you can tell from the _labelled portion of the names, this will be a supervised learning problem. Load the Data The downloaded data is split across three files: amazon_cells_labelled.txt, imdb_labelled.txt, and yelp_labelled.txt. Each file has two tab-separated columns, one containing the review text and one containing the sentiment label. Let's combine all the files into one DataFrame, and then get a sense of what the data looks like. End of explanation """ df.iloc[1019]['review'] """ Explanation: Interesting. We were expecting 3000 data points, but only got 2748. What's going on? It turns out that the IMDB data contains some rows with single double quotes. By default, when the parser sees double quotes, it stops performing a search for another tab until it finds a closing double quote. 
Since this quote is alone on the line, it causes the parser to "eat" quite a few lines of the data file, as illustrated by the code block below. End of explanation """ import pandas as pd df = pd.DataFrame(columns=['review', 'label']) for file in sorted(os.listdir('data')): if file.endswith('_labelled.txt'): df = df.append(pd.read_csv(os.path.join('data', file), sep='\t', names=['review', 'label'], quoting=3)) df.describe() """ Explanation: In order to get around this, we need to tell the parser to turn off quote detection using the quoting argument. The possible values are: Value | Meaning ------|---------- 0 | QUOTE_MINIMAL (default) 1 | QUOTE_ALL 2 | QUOTE_NONNUMERIC 3 | QUOTE_NONE End of explanation """ df[df['label'] == 0].sample(10) """ Explanation: That looks much better. We got lucky that none of the reviews had embedded tabs, or they would have been quoted and our simple fix would not have worked. Notice that the read_csv() call didn't return an error when it encountered an unbalanced quote on a line. It happily loaded the file thinking that the quote was intentional and meant to make the data span multiple lines. Always verify that the data you loaded looks like you expected it to! Now let's look at a few of the reviews. The documentation says that positive reviews are labelled with a 1 and negative with a 0. Let's sample a few and see if we agree. First the bad, End of explanation """ df[df['label'] == 1].sample(10) """ Explanation: And then the good. End of explanation """ from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( df['review'], df['label'].astype('int'), test_size=0.2, random_state=1000) print(len(X_train), len(X_test), len(y_train), len(y_test)) """ Explanation: The sentiment seems to check out. This concludes the EDA that we'll do for this dataset. Let's move on to data preparation for the model. Train/Test Split We'll create two different models in this lab. 
Common to both is the need to split the dataset so that 80% is used for training and the other 20% is used for testing. End of explanation
"""

from sklearn.feature_extraction.text import CountVectorizer

data = [
    "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo",
    "Seattle buffalo Seattle buffalo buffalo buffalo Seattle buffalo",
]

# The corpus is not a constructor argument; it is passed to fit().
vectorizer = CountVectorizer()
vectorizer.fit(data)
data_vec = vectorizer.transform(data)
print(data_vec)

"""
Explanation: The labels are simple 0 and 1 values, so we don't need to do any preprocessing there. The reviews themselves are variable-length text strings. Each model will handle them slightly differently, so we'll save the model-specific preprocessing for when we encounter each model.
Bag-of-Words Model
We will first use a bag-of-words (BOW) approach to vectorize the sentences. This means we will consider each review as a "bag of words," where the order of the words does not matter. Using this bag we'll try to assign sentiment to the review.
In order to create the bags, we'll use scikit-learn's CountVectorizer class. The class converts a corpus of text into a sparse matrix that represents the counts of the number of times each word appears in the text.
Before applying this to our dataset, let's make sure we understand what's going on. Say we have a couple of sentences that we want to vectorize. One about bullied buffalo in Buffalo, NY and the other about their peers in Seattle, WA. We can count-vectorize the data, as shown in the code block below.
End of explanation """ data = ['Buffalo Buffalo wings'] data_vec = vectorizer.transform(data) print(data_vec) """ Explanation: The resultant matrix is: Sentence | buffalo | seattle --|---------|-------- "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo" | 8 | 0 "Seattle buffalo Seattle buffalo buffalo buffalo Seattle buffalo" | 5 | 3 As you can see, the first sentence has eight instances of the word buffalo and no instances of seattle, while the second sentence has five buffalo and three seattle. Case does not matter, nor does context (used as a noun, verb, etc.). Only the letters count. The representation is a sparse matrix. In these two sentences consisting of two words, that seems a little strange. But if you think about the fact that there are currently almost 200,000 English words in use while the average sentence is less than 20 words, you can see why sparse matrices make sense here. And what happens if the data we're transforming contains words we didn't fit the vectorizer to? End of explanation """ from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer() vectorizer.fit(X_train) len(vectorizer.vocabulary_) """ Explanation: Unknown words, such as 'wings' in this case, just don't appear in the matrix. Let's count-vectorize our training data and see how many words are in our vocabulary. End of explanation """ from sklearn.linear_model import LogisticRegression model = LogisticRegression(solver='liblinear') X_train_vec = vectorizer.transform(X_train) model.fit(X_train_vec, y_train) print('Training accuracy: {}'.format(model.score(X_train_vec, y_train))) """ Explanation: We can now transform our training data into a count vector and train a model. For a basic model, we'll use a logistic regression. End of explanation """ X_test_vec = vectorizer.transform(X_test) print('Testing accuracy: {}'.format(model.score(X_test_vec, y_test))) """ Explanation: That is excellent training accuracy. 
Let's see how well it generalizes. End of explanation
"""

!python -m spacy download en_core_web_md

"""
Explanation: It seems like our model might have overfit a bit. With over 97% training accuracy and only 86% testing accuracy, we likely need to work on making our model generalize better.
Grammar
So far we have only used a bag of words on raw words to train our model. That's fine in some cases since words are often grammatically in the same class. But what about when they are not? In our "Buffalo buffalo..." example, the same word was used to represent a mix of nouns, verbs, and other parts of speech. What if the number of adjectives or nouns or some other part of speech affected the sentiment of the review that we are classifying? We can test this by using a toolkit that classifies words in sentences, and then we feed those classifications into our model.
In this section we'll use spaCy to add metadata to our reviews and then pass the reviews and metadata through our model.
spaCy is a library for advanced NLP tools. It's built based on state-of-the-art research and designed to be efficient for industry use. spaCy is extremely useful for extracting more complex linguistic features from text. Another mature and popular Python NLP toolkit is NLTK, which is a bit more academic-oriented.
We must specify a linguistic model for spaCy to use. For this exercise we'll use their "medium-sized" English language model. If you already have this model downloaded, you can skip to the load step below.
Note: This is a large file, so it may take a few minutes to download and process.
End of explanation
"""

import en_core_web_md
spacy_model = en_core_web_md.load()

"""
Explanation: After the model is downloaded, we can import it directly using a Python import statement. After the import we can load the model.
End of explanation
"""

X_train.iloc[0]

"""
Explanation: And now we can use spaCy to annotate our data.
Let's look at one of our reviews: End of explanation
"""

tokens = spacy_model(X_train.iloc[0])
for token in tokens:
  print(token.text, token.pos_)

"""
Explanation: We can then call spaCy directly and get information such as the part of speech of each word in our review. spaCy language models process raw text into a Doc object, which is a collection of Token objects. Each Token contains many useful linguistic annotations. For example, .text stores the raw text of a Token and .pos_ stores its Part of Speech (pos) tag.
End of explanation
"""

def add_pos_tags(reviews_raw):
  reviews = []
  for review in reviews_raw:
    tokens = spacy_model(review)
    review_with_pos = []
    for token in tokens:
      review_with_pos.append(token.text + "_" + token.pos_)
    reviews.append(' '.join(review_with_pos))
  return reviews

# Note: the argument must be an iterable of reviews; a bare string would be
# iterated character by character.
print(add_pos_tags(["the big dog"]))

"""
Explanation: Many of the annotations are obvious, such as NOUN, but others are less so. The spaCy annotation documentation is a good place to look if you are unsure about an annotation.
So how do we actually add annotations to our reviews? Since we are using "bag of words" annotations at this point, we have a bit of flexibility. We could just add the spaCy output at the end of the sentence:
the big dog jumps DET ADJ NOUN VERB
or we could add it after each word:
the DET big ADJ dog NOUN jumps VERB
Functionally these are the same in "bag of words" models. Order and case don't matter. If the absolute number of adjectives matters or some other factor like that, then this type of feature engineering could be useful.
What if it matters to us "how" a word was used, not just "that" a word was used? In this case we need to combine the grammar with the word. Let's create a function to do that.
End of explanation """ X_train_annotated = add_pos_tags(X_train) X_test_annotated = add_pos_tags(X_test) vectorizer = CountVectorizer() vectorizer.fit(X_train_annotated) X_train_vec = vectorizer.transform(X_train_annotated) X_test_vec = vectorizer.transform(X_test_annotated) print(X_train_annotated[0]) from sklearn.linear_model import LogisticRegression model = LogisticRegression(solver='liblinear') model.fit(X_train_vec, y_train) print('Training accuracy: {}'.format(model.score(X_train_vec, y_train))) print('Testing accuracy: {}'.format(model.score(X_test_vec, y_test))) """ Explanation: Let's now apply this to our entire dataset. End of explanation """ from tensorflow import keras tokenizer = keras.preprocessing.text.Tokenizer() tokenizer.fit_on_texts(X_train) X_train_tokenized = tokenizer.texts_to_sequences(X_train) X_test_tokenized = tokenizer.texts_to_sequences(X_test) print(X_train.iloc[0]) print(X_train_tokenized[0]) """ Explanation: Our training accuracy really went up, but our testing accuracy went down. We are overfitting even more now. This isn't much of a surprise, but is interesting to see that adding even more context (features) can allow a model to fit even tighter than can be done with raw data. Sequential Model Much of the meaning of language depends on the order of words: "That movie was not really good" is not quite the same as "That movie was really not good." For more complicated NLP tasks, a bag-of-words approach does not capture enough useful information. In this section we will instead work with a Recurrent Neural Network (RNN) model, which is specifically designed to capture information about the order of sequences. Preprocessing We can't use CountVectorizer here, so we will need to do some slightly different preprocessing. We can first use the keras Tokenizer to learn a vocabulary, and then transform each review into a list of indices. Note that we will not include part-of-speech information for this model. 
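The fit-then-transform idea behind the Keras Tokenizer can be sketched in plain Python. This is a simplified illustration with hypothetical helper names; the real Tokenizer also strips punctuation and assigns indices by word frequency rather than by first appearance:

```python
def build_word_index(texts):
    # Index 0 is reserved for padding, so real words start at 1.
    word_index = {}
    for text in texts:
        for word in text.lower().split():
            if word not in word_index:
                word_index[word] = len(word_index) + 1
    return word_index

def texts_to_sequences_sketch(texts, word_index):
    # Words unseen during fitting are dropped, mirroring the default
    # Tokenizer behavior when no oov_token is configured.
    return [[word_index[w] for w in text.lower().split() if w in word_index]
            for text in texts]

corpus = ["good movie", "bad movie"]
index = build_word_index(corpus)
print(index)                                          # {'good': 1, 'movie': 2, 'bad': 3}
print(texts_to_sequences_sketch(["bad good film"], index))  # [[3, 1]]
```

The sketch shows why the real tokenizer must be fit on the training data only: any word that first appears in the test set simply has no index and is dropped.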
End of explanation
"""

import matplotlib.pyplot as plt

# Measure review lengths in tokens (words) rather than characters, since
# the padding below is applied to the tokenized sequences.
review_lengths = [len(review) for review in X_train_tokenized]
plt.hist(review_lengths, density=True)
plt.show()

"""
Explanation: We need to pad our input so all vectors have the same length. A quick histogram of review lengths shows that almost all reviews have fewer than 100 words. Let's take a closer look at the distribution of lengths.
End of explanation
"""

maxlen = 50

X_train_padded = keras.preprocessing.sequence.pad_sequences(
    X_train_tokenized, padding='post', maxlen=maxlen)

X_test_padded = keras.preprocessing.sequence.pad_sequences(
    X_test_tokenized, padding='post', maxlen=maxlen)

print(X_train_padded[0])

"""
Explanation: Almost all reviews have fewer than 50 words! Therefore, we will pad to a maximum review length of 50.
End of explanation
"""

import numpy as np

# Include an extra index for the "<PAD>" token.
vocab_size = len(tokenizer.word_index) + 1
embedding_dim = 300

embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in tokenizer.word_index.items():
  token = spacy_model(word)[0]
  # Make sure spaCy has an embedding for this token.
  if not token.is_oov:
    embedding_matrix[i] = token.vector

print(embedding_matrix.shape)

"""
Explanation: Pre-Trained Word Embeddings
Word embeddings are foundational to most NLP tasks. It's common to experiment with embeddings, feature extraction, or a combination of both to determine what works best with your specific data and problem. In practice, instead of training our own embeddings, we can often take advantage of existing embeddings that have already been trained. This is especially useful when we have a small dataset and want or need the richer meaning that comes from embeddings trained on a larger dataset.
There are a variety of extensively pre-trained word embeddings. One of the most powerful and widely-used is GloVe (Global Vectors for Word Representation). Luckily for us, the spaCy model we downloaded is already integrated with 300-dimensional GloVe embeddings.
All we need to do is load these embeddings into an embedding_matrix so that each word index properly matches with the words in our dataset. We can access the tokenizer's vocabulary using .word_index. Note: This may take a few minutes to run. End of explanation """ model = keras.Sequential([ keras.layers.Embedding( vocab_size, embedding_dim, weights=[embedding_matrix], trainable=False, mask_zero=True ), keras.layers.LSTM(64), keras.layers.Dense(1, activation='sigmoid') ]) model.summary() """ Explanation: Loading the embeddings may take a little while to run. When it's done we'll have an embedding_matrix where each word index corresponds to a 300-dimensional GloVe vector. We can load this into an Embedding layer to train a model or visualize the embeddings. Also note that we have slightly more tokens now than from using CountVectorizer. This means that Keras' Tokenizer splits sentences into tokens using slightly different rules. RNN Model This model will have three layers: Embedding We initialize its weights using the embedding_matrix of pre-trained GloVe embeddings. We set trainable=False to prevent the weights from being updated during training. You can keep trainable=True to allow for additional training, or "fine-tuning", of these weights. We also set mask_zero=True to ensure we do not train parameters based on the "&lt;PAD&gt;" tokens. LSTM (Long Short-Term Memory) This is a type of RNN architecture that is especially good at handling long sequences of information. This layer takes input of dimensions (batch size, maxlen, embedding dimension) and returns output of dimensions (batch size, 64). A larger output size means a more complex model; we have chosen 64 after tuning based on model performance. Dense A final layer to return a prediction of either positive or negative sentiment. 
End of explanation """ model.compile( loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'] ) history = model.fit( X_train_padded, y_train, epochs=10, batch_size=64 ) """ Explanation: We will train this model for 10 epochs since it is slower to train per epoch and reaches high training accuracy after 10 epochs. We use a batch size of 64 based on hyperparameter tuning. End of explanation """ loss, acc = model.evaluate(X_test_padded, y_test) print('Test accuracy: {}'.format(acc)) """ Explanation: And finally, we can evaluate the accuracy of the model on our test data. End of explanation """ # Your code goes here """ Explanation: Note that the final testing set accuracy is not significantly higher than that of our Logistic Regression model. We are using a complex model on a small dataset, which is prone to overfitting. You can usually achieve more generalizable results with a larger dataset. Exercise 1: A Tale of Two Authors In this exercise we will create a model that can determine if a paragraph was written by Jane Austen or Charles Dickens. We'll use a dataset containing the works of the two authors sourced from Project Gutenberg. Your task is to download the data and build a classifier that can distinguish between the works of the two authors using techniques covered earlier in this lab. Experiment with different types of models, and see if you can build one that trains and generalizes well. Use as many text and code cells as you need. Be sure to explain your work. Student Solution End of explanation """
tensorflow/docs
site/en/tutorials/distribute/dtensor_keras_tutorial.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""

!pip install --quiet --upgrade --pre tensorflow tensorflow-datasets

"""
Explanation: Using DTensors with Keras
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/dtensor_keras_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/dtensor_keras_tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/dtensor_keras_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/distribute/dtensor_keras_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Overview
In this tutorial, you will learn how to use DTensor with Keras. Through DTensor integration with Keras, you can reuse your existing Keras layers and models to build and train distributed machine learning models.
You will train a multi-layer classification model with the MNIST data.
Setting the layout for a subclassed model, a Sequential model, and a functional model will be demonstrated.
This tutorial assumes that you have already read the DTensor programming guide, and are familiar with basic DTensor concepts like Mesh and Layout.
This tutorial is based on https://www.tensorflow.org/datasets/keras_example.
Setup
DTensor is part of the TensorFlow 2.9.0 release.
End of explanation
"""

import tensorflow as tf
import tensorflow_datasets as tfds

from tensorflow.experimental import dtensor

def configure_virtual_cpus(ncpu):
  phy_devices = tf.config.list_physical_devices('CPU')
  tf.config.set_logical_device_configuration(
        phy_devices[0],
        [tf.config.LogicalDeviceConfiguration()] * ncpu)

configure_virtual_cpus(8)
tf.config.list_logical_devices('CPU')

devices = [f'CPU:{i}' for i in range(8)]

"""
Explanation: Next, import tensorflow and tensorflow.experimental.dtensor, and configure TensorFlow to use 8 virtual CPUs.
Even though this example uses CPUs, DTensor works the same way on CPU, GPU or TPU devices.
End of explanation
"""

tf.keras.backend.experimental.enable_tf_random_generator()
tf.keras.utils.set_random_seed(1337)

"""
Explanation: Deterministic pseudo-random number generators
One thing you should note is that the DTensor API requires each of the running clients to have the same random seeds, so that it can have deterministic behavior when initializing the weights. You can achieve this by setting the global seeds in Keras via tf.keras.utils.set_random_seed().
End of explanation
"""

mesh = dtensor.create_mesh([("batch", 8)], devices=devices)

"""
Explanation: Creating a Data Parallel Mesh
This tutorial demonstrates Data Parallel training. Adapting to Model Parallel training and Spatial Parallel training can be as simple as switching to a different set of Layout objects. Refer to the DTensor in-depth ML Tutorial for more information on distributed training beyond Data Parallel.
Data Parallel training is a commonly used parallel training scheme, also used by, for example, tf.distribute.MirroredStrategy.
With DTensor, a Data Parallel training loop uses a Mesh that consists of a single 'batch' dimension, where each device runs a replica of the model that receives a shard from the global batch.
End of explanation
"""

example_weight_layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)
# or
example_weight_layout = dtensor.Layout.replicated(mesh, rank=2)

"""
Explanation: As each device runs a full replica of the model, the model variables shall be fully replicated across the mesh (unsharded). As an example, a fully replicated Layout for a rank-2 weight on this Mesh would be as follows:
End of explanation
"""

example_data_layout = dtensor.Layout(['batch', dtensor.UNSHARDED], mesh)
# or
example_data_layout = dtensor.Layout.batch_sharded(mesh, 'batch', rank=2)

"""
Explanation: A layout for a rank-2 data tensor on this Mesh would be sharded along the first dimension (sometimes known as batch_sharded):
End of explanation
"""

unsharded_layout_2d = dtensor.Layout.replicated(mesh, 2)
unsharded_layout_1d = dtensor.Layout.replicated(mesh, 1)

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128,
                        activation='relu',
                        name='d1',
                        kernel_layout=unsharded_layout_2d,
                        bias_layout=unsharded_layout_1d),
  tf.keras.layers.Dense(10,
                        name='d2',
                        kernel_layout=unsharded_layout_2d,
                        bias_layout=unsharded_layout_1d)
])

"""
Explanation: Create Keras layers with layout
In the data parallel scheme, you usually create your model weights with a fully replicated layout, so that each replica of the model can do calculations with the sharded input data.
In order to configure the layout information for your layers' weights, Keras has exposed an extra parameter in the layer constructor for most of the built-in layers.
The following example builds a small image classification model with a fully replicated weight layout. You can specify the layout for the kernel and bias in tf.keras.layers.Dense via the kernel_layout and bias_layout arguments. Most of the built-in Keras layers are ready for explicitly specifying the Layout for the layer weights.
End of explanation
"""

for weight in model.weights:
  print(f'Weight name: {weight.name} with layout: {weight.layout}')
  break

"""
Explanation: You can check the layout information by examining the layout property on the weights.
End of explanation
"""

(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

def normalize_img(image, label):
  """Normalizes images: `uint8` -> `float32`."""
  return tf.cast(image, tf.float32) / 255., label

batch_size = 128

ds_train = ds_train.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(batch_size)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)

ds_test = ds_test.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(batch_size)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)

"""
Explanation: Load a dataset and build input pipeline
Load the MNIST dataset and configure a pre-processing input pipeline for it. The dataset itself is not associated with any DTensor layout information. There are plans to improve DTensor Keras integration with tf.data in future TensorFlow releases.
End of explanation
"""

@tf.function
def train_step(model, x, y, optimizer, metrics):
  with tf.GradientTape() as tape:
    logits = model(x, training=True)
    # tf.reduce_sum sums the batch sharded per-example loss to a replicated
    # global loss (scalar).
    loss = tf.reduce_sum(tf.keras.losses.sparse_categorical_crossentropy(
        y, logits, from_logits=True))

  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

  for metric in metrics.values():
    metric.update_state(y_true=y, y_pred=logits)

  loss_per_sample = loss / len(x)
  results = {'loss': loss_per_sample}
  return results

@tf.function
def eval_step(model, x, y, metrics):
  logits = model(x, training=False)
  loss = tf.reduce_sum(tf.keras.losses.sparse_categorical_crossentropy(
      y, logits, from_logits=True))

  for metric in metrics.values():
    metric.update_state(y_true=y, y_pred=logits)

  loss_per_sample = loss / len(x)
  results = {'eval_loss': loss_per_sample}
  return results

def pack_dtensor_inputs(images, labels, image_layout, label_layout):
  num_local_devices = image_layout.mesh.num_local_devices()
  images = tf.split(images, num_local_devices)
  labels = tf.split(labels, num_local_devices)
  images = dtensor.pack(images, image_layout)
  labels = dtensor.pack(labels, label_layout)
  return images, labels

"""
Explanation: Define the training logic for the model
Next, define the training and evaluation logic for the model.
As of TensorFlow 2.9, you have to write a custom training loop for a DTensor-enabled Keras model. This is to pack the input data with the proper layout information, which is not integrated with the standard tf.keras.Model.fit() or tf.keras.Model.evaluate() functions from Keras. You will get more tf.data support in the upcoming release.
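Conceptually, pack_dtensor_inputs above does for tensors what this framework-free sketch (a hypothetical helper, for illustration only) does for a plain list: slice the global batch into one shard per local device. The dtensor.pack step, which stitches the shards into a single DTensor, has no standard-library analogue:

```python
def split_batch(batch, num_devices):
    # Slice a global batch into equal per-device shards, one per replica.
    shard_size = len(batch) // num_devices
    return [batch[i * shard_size:(i + 1) * shard_size]
            for i in range(num_devices)]

global_batch = list(range(8))
print(split_batch(global_batch, 4))  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```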
End of explanation
"""

optimizer = tf.keras.dtensor.experimental.optimizers.Adam(0.01, mesh=mesh)
metrics = {'accuracy': tf.keras.metrics.SparseCategoricalAccuracy(mesh=mesh)}
eval_metrics = {'eval_accuracy': tf.keras.metrics.SparseCategoricalAccuracy(mesh=mesh)}

"""
Explanation: Metrics and Optimizers
When using the DTensor API with a Keras Metric and Optimizer, you will need to provide the extra mesh information, so that any internal state variables and tensors can work with the variables in the model.
For an optimizer, DTensor introduces a new experimental namespace keras.dtensor.experimental.optimizers, where many existing Keras Optimizers are extended to receive an additional mesh argument. In future releases, it may be merged with the Keras core optimizers.
For metrics, you can directly specify the mesh to the constructor as an argument to make it a DTensor-compatible Metric.
End of explanation
"""

num_epochs = 3

image_layout = dtensor.Layout.batch_sharded(mesh, 'batch', rank=4)
label_layout = dtensor.Layout.batch_sharded(mesh, 'batch', rank=1)

for epoch in range(num_epochs):
  print("============================")
  print("Epoch: ", epoch)
  for metric in metrics.values():
    metric.reset_state()
  step = 0
  results = {}
  pbar = tf.keras.utils.Progbar(target=None, stateful_metrics=[])
  for input in ds_train:
    images, labels = input[0], input[1]
    images, labels = pack_dtensor_inputs(
        images, labels, image_layout, label_layout)

    results.update(train_step(model, images, labels, optimizer, metrics))
    for metric_name, metric in metrics.items():
      results[metric_name] = metric.result()

    pbar.update(step, values=results.items(), finalize=False)
    step += 1
  pbar.update(step, values=results.items(), finalize=True)

  for metric in eval_metrics.values():
    metric.reset_state()
  for input in ds_test:
    images, labels = input[0], input[1]
    images, labels = pack_dtensor_inputs(
        images, labels, image_layout, label_layout)
    results.update(eval_step(model, images, labels, eval_metrics))

  for metric_name, metric in
eval_metrics.items():
    results[metric_name] = metric.result()

  for metric_name, metric in results.items():
    print(f"{metric_name}: {metric.numpy()}")

"""
Explanation: Train the model
The following example shards the data from the input pipeline on the batch dimension, and trains the model, which has fully replicated weights.
With 3 epochs, the model should achieve about 97% accuracy.
End of explanation
"""

class SubclassedModel(tf.keras.Model):

  def __init__(self, name=None):
    super().__init__(name=name)
    self.feature = tf.keras.layers.Dense(16)
    self.feature_2 = tf.keras.layers.Dense(24)
    self.dropout = tf.keras.layers.Dropout(0.1)

  def call(self, inputs, training=None):
    x = self.feature(inputs)
    x = self.dropout(x, training=training)
    return self.feature_2(x)

"""
Explanation: Specify Layout for existing model code
Often you have models that work well for your use case. Specifying Layout information to each individual layer within the model will be a large amount of work requiring a lot of edits.
To help you easily convert your existing Keras model to work with the DTensor API, you can use the new dtensor.LayoutMap API, which allows you to specify the Layout from a global point of view.
First, you need to create a LayoutMap instance, which is a dictionary-like object that contains all the Layouts you would like to specify for your model weights.
LayoutMap needs a Mesh instance at init, which can be used to provide a default replicated Layout for any weights that don't have a Layout configured. In case you would like all your model weights to be fully replicated, you can provide an empty LayoutMap, and the default mesh will be used to create a replicated Layout.
LayoutMap uses a string as key and a Layout as value. There is a behavior difference between a normal Python dict and this class. The string key will be treated as a regex when retrieving the value.
Subclassed Model
Consider the following model defined using the Keras subclassing Model syntax.
End of explanation
"""

layout_map = tf.keras.dtensor.experimental.LayoutMap(mesh=mesh)

layout_map['feature.*kernel'] = dtensor.Layout.batch_sharded(mesh, 'batch', rank=2)
layout_map['feature.*bias'] = dtensor.Layout.batch_sharded(mesh, 'batch', rank=1)

with tf.keras.dtensor.experimental.layout_map_scope(layout_map):
  subclassed_model = SubclassedModel()

"""
Explanation: There are 4 weights in this model, which are kernel and bias for two Dense layers. Each of them is mapped based on the object path:
model.feature.kernel
model.feature.bias
model.feature_2.kernel
model.feature_2.bias
Note: For Subclassed Models, the attribute name, rather than the .name attribute of the layer, is used as the key to retrieve the Layout from the mapping. This is consistent with the convention followed by tf.Module checkpointing. For complex models with more than a few layers, you can manually inspect checkpoints to see the attribute mappings.
Now define the following LayoutMap and apply it to the model.
End of explanation
"""

dtensor_input = dtensor.copy_to_mesh(tf.zeros((16, 16)), layout=unsharded_layout_2d)
# Trigger the weights creation for subclass model
subclassed_model(dtensor_input)

print(subclassed_model.feature.kernel.layout)

"""
Explanation: The model weights are created on the first call, so call the model with a DTensor input and confirm the weights have the expected layouts.
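The regex-keyed lookup that LayoutMap performs on weight paths can be illustrated with a small stand-in class. This is a hypothetical sketch of the lookup idea only, not the actual LayoutMap implementation:

```python
import re

class RegexMap:
    # Hypothetical stand-in for illustration: keys are stored as plain
    # strings but treated as regular expressions on lookup, with a
    # fallback default (like the replicated default Layout).
    def __init__(self, default=None):
        self._patterns = {}
        self._default = default

    def __setitem__(self, pattern, value):
        self._patterns[pattern] = value

    def __getitem__(self, path):
        for pattern, value in self._patterns.items():
            if re.search(pattern, path):
                return value
        return self._default

layouts = RegexMap(default='replicated')
layouts['feature.*kernel'] = 'batch_sharded_rank2'
layouts['feature.*bias'] = 'batch_sharded_rank1'

print(layouts['feature.kernel'])     # batch_sharded_rank2
print(layouts['feature_2.bias'])     # batch_sharded_rank1
print(layouts['classifier.kernel'])  # replicated
```

Note how a single pattern like 'feature.*bias' covers both feature.bias and feature_2.bias, which is what makes the global point of view convenient.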
End of explanation """ layout_map = tf.keras.dtensor.experimental.LayoutMap(mesh=mesh) layout_map['feature.*kernel'] = dtensor.Layout.batch_sharded(mesh, 'batch', rank=2) layout_map['feature.*bias'] = dtensor.Layout.batch_sharded(mesh, 'batch', rank=1) with tf.keras.dtensor.experimental.layout_map_scope(layout_map): inputs = tf.keras.Input((16,), batch_size=16) x = tf.keras.layers.Dense(16, name='feature')(inputs) x = tf.keras.layers.Dropout(0.1)(x) output = tf.keras.layers.Dense(32, name='feature_2')(x) model = tf.keras.Model(inputs, output) print(model.layers[1].kernel.layout) with tf.keras.dtensor.experimental.layout_map_scope(layout_map): model = tf.keras.Sequential([ tf.keras.layers.Dense(16, name='feature', input_shape=(16,)), tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(32, name='feature_2') ]) print(model.layers[2].kernel.layout) """ Explanation: With this, you can quickly map the Layout to your models without updating any of your existing code. Sequential and Functional Models For keras functional and sequential models, you can use LayoutMap as well. Note: For functional and sequential models, the mappings are slightly different. The layers in the model don't have a public attribute attached to the model (though you can access them via model.layers as a list). Use the string name as the key in this case. The string name is guaranteed to be unique within a model. End of explanation """
4dsolutions/Python5
Remembering1.ipynb
mit
from pprint import pprint

# I, Python am built from types, such as builtin types:
the_builtins = dir(__builtins__)  # always here
pprint(the_builtins[-10:])        # no need to import

"""
Explanation: <div align="center"><h3>Remembering Python...</h3></div>
Python boots up with builtins already in the namespace, checked as a part of the name resolution protocol.
Using different slices, we can check portions of a long list.
End of explanation
"""

for the_string in ["list", "tuple", "dict", "int", "float"]:
    if the_string in the_builtins:
        print("Yes I am a native type: ", the_string)
        assert type(eval(the_string)) == type  # all types in this club
    else:
        print("No, I'm not native: ", the_string)

"""
Explanation: Let's check our understanding that the native types -- the ones we count on to build more complex types -- live in builtins:
End of explanation
"""

# usually up top
from string import ascii_lowercase as all_lowers
from random import shuffle

class P:
    """
    class Px is the more sophisticated version of this class
    """

    def __init__(self, p=None):
        if not p:
            original = all_lowers + ' '
            scrambled = list(original)
            shuffle(scrambled)
            self.perm = dict(zip(original, scrambled))
        else:
            self.perm = p

    def __invert__(self):
        """reverse my perm, make a new me"""
        reverse = dict(zip(self.perm.values(), self.perm.keys()))
        return P(reverse)  # <-- new P instance

    def encrypt(self, s):
        output = ""
        for c in s:
            output += self.perm[c]
        return output

    def decrypt(self, s):
        rev = ~self             # <-- new P instance
        return rev.encrypt(s)   # <-- symmetric key

p = P()
m = "i like python so much because it does everything"  # the message to encrypt
c = p.encrypt(m)
print(m)  # plaintext
print(c)  # ciphertext
d = p.decrypt(c)
print(d)

"""
Explanation: And now for something completely different, let's define a class that does substitution based on a permutation of lower-case ascii letters plus space.
Such a type is given more substantial implementation in the form of our px_class.py, which allows permutations to multiply, giving more permutations. End of explanation """ # usually up top from string import ascii_lowercase as all_lowers from random import shuffle class P: """ class Px is the more sophisticated version of this class """ def __init__(self, p=None): if not p: original = all_lowers + ' ' scrambled = list(original) shuffle(scrambled) self.perm = dict(zip(original, scrambled)) else: self.perm = p def __invert__(self): """reverse my perm, make a new me""" reverse = dict(zip(self.perm.values(), self.perm.keys())) return P(reverse) # <-- new P instance def encrypt(self, s): output = "" for c in s: output += self.perm[c] return output def decrypt(self, s): rev = ~self # <-- new P instance return rev.encrypt(s) # <-- symmetric key p = P() m = "i like python so much because it does everything" # message to encrypt c = p.encrypt(m) print(m) # plaintext print(c) # ciphertext d = p.decrypt(c) print(d) """ Explanation: In the code below, we use a context manager to connect and disconnect from a SQLite database. The context manager is developed from a simple generator with precisely one yield statement, using the @contextmanager decorator. End of explanation """ import sqlite3 as sql import os.path import json import time from contextlib import contextmanager PATH = "/Users/kurner/Documents/classroom_labs/session10" DB1 = os.path.join(PATH, 'periodic_table.db') def mod_date(): return time.mktime(time.gmtime()) # GMT time @contextmanager def Connector(db): try: db.conn = sql.connect(db.db_name) # connection db.curs = db.conn.cursor() # cursor yield db finally: db.conn.close() class elemsDB: def __init__(self, db_name): self.db_name = db_name def seek(self, elem): if self.conn: if elem != "all": query = ("SELECT * FROM Elements " "WHERE elem_symbol = '{}'".format(elem)) self.curs.execute(query) result = self.curs.fetchone() if result: return json.dumps(list(result)) else: query = "SELECT * FROM Elements ORDER BY elem_protons" self.curs.execute(query) result = {} for row in self.curs.fetchall(): result[row[1]] = list(row) return json.dumps(result) return "NOT FOUND" """ Explanation: At this point, we're able to seek a specific row from the Elements table, or request all of them. In a Flask web application, the controlling argument might come from a GET request, i.e.
a URL such as /api/elements?elem=H End of explanation """ import requests data = {} data["protons"]=100 data["symbol"]="Kr" data["long_name"]="Kirbium" data["mass"]=300 data["series"]="Dunno" data["secret"]="DADA" # <--- primitive authentication the_url = 'http://localhost:5000/api/elements' r = requests.post(the_url, data=data) print(r.status_code) print(r.content) """ Explanation: To be continued... End of explanation """
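One caveat on the code above: seek() builds its SQL by formatting the elem value straight into the query string, which is open to SQL injection if that value ever arrives from an untrusted GET parameter. A safer variant uses sqlite3's ? placeholders; this is a minimal sketch against a throwaway in-memory table (the real periodic_table.db schema may differ):

```python
import json
import sqlite3

def seek_safe(conn, elem):
    """Look up one element by symbol using a parameterized query."""
    curs = conn.cursor()
    # the ? placeholder lets sqlite3 quote the value safely for us
    curs.execute("SELECT * FROM Elements WHERE elem_symbol = ?", (elem,))
    row = curs.fetchone()
    return json.dumps(list(row)) if row else "NOT FOUND"

# throwaway in-memory table standing in for periodic_table.db
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Elements (elem_protons INTEGER, elem_symbol TEXT)")
conn.execute("INSERT INTO Elements VALUES (6, 'C')")

print(seek_safe(conn, "C"))             # [6, "C"]
print(seek_safe(conn, "C' OR '1'='1"))  # NOT FOUND
```

With placeholders the malicious input is treated as a literal symbol rather than SQL, so the probe query simply finds nothing.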
kmunve/APS
aps/notebooks/freezing_level.ipynb
mit
%matplotlib inline import sys import os aps_path = os.path.dirname(os.path.abspath(".")) if aps_path not in sys.path: sys.path.append(aps_path) print(aps_path, sys.path) import matplotlib.pyplot as plt import numpy as np import seaborn as sns sns.set(style="dark") import aps_io.get_arome as ga from load_region import load_region, clip_region hour_range = [24, 48] ncfile = r"\\hdata\grid\metdata\prognosis\meps\det\archive\2019\meps_det_extracted_1km_20190307T00Z.nc" #jd, altitude, land_area_fraction, nc_vars = ga.nc_load(ncfile, ["altitude_of_0_degree_isotherm"], time_period=[6, 25]) times, altitude, land_area_fraction, nc_vars = ga.nc_load(ncfile, ["altitude_of_0_degree_isotherm", "altitude_of_isoTprimW_equal_0"], time_period=hour_range) print("From {0} to {1}".format(times[0], times[-1])) len(times) #plt.imshow(np.flipud(nc_vars['altitude_of_0_degree_isotherm'][6, :, :])) """ Explanation: See also final script in ../scripts/freezing_level.py End of explanation """ #fl_max = np.amax(nc_vars['altitude_of_0_degree_isotherm'][0:6,:,:], axis=0) fl_max = np.amax(nc_vars['altitude_of_isoTprimW_equal_0'][0:24,:,:], axis=0) fl_max plt.imshow(fl_max) """ Explanation: Calculating the freezing level We use the parameters "altitude_of_0_degree_isotherm" and "altitude_of_isoTprimW_equal_0" from MEPS_extracted. Under dry conditions we use altitude_of_0_degree_isotherm and for timing we use the period with the highest values. With precipitation we use altitude_of_isoTprimW_equal_0 and the period with the highest amount of precipitation. split data into four chunks: 0-6, 6-12, 12-18, 18-24 compress time dimension to 1 by keeping only the maximum value in each cell for each chunk calculate the 90-percentile for all max-values within a region round 90-percentile for each region to the next 50 m Logic: python If regional_precipitation &lt; 2 (mm/døgn): use "altitude_of_0_degree_isotherm" else: find period of day with most intense precip (e.g. 
06-12) use "altitude_of_isoTprimW_equal_0" for that period return freezing level Compress time dimension End of explanation """ # Load region mask - only for data on 1km xgeo-grid region_id = 3031 region_mask, y_min, y_max, x_min, x_max = load_region(region_id) print(y_max-y_min, x_max-x_min) print(np.unique(region_mask)) plt.imshow(region_mask) var_name = 'altitude_of_isoTprimW_equal_0' for t in range(len(times)): t2 = t+hour_range[0] t_str = times[t] _fl = clip_region(np.flipud(nc_vars[var_name][t,:,:]), region_mask, t2, y_min, y_max, x_min, x_max) plt.imshow(_fl, vmin=0, vmax=500, cmap='magma') plt.axis('off') plt.text(5, 5, "{0}: {1}".format(region_id, t_str), bbox=dict(facecolor='white', edgecolor='white', alpha=1.0)) cbar = plt.colorbar() cbar.ax.set_ylabel(var_name) _png = './img/fl_{1:02}.png'.format(region_id, t) plt.savefig(_png, bbox_inches='tight') plt.clf() """ Explanation: Extract regions End of explanation """ t_index = 0 fl_region = clip_region(np.flipud(fl_max), region_mask, t_index, y_min, y_max, x_min, x_max) #print(np.unique(fl_3034)) #fl_3034.masked_where(fl_region<=0.0) print(np.count_nonzero(np.isnan(fl_region))) print(np.unique(fl_region)) plt.imshow(fl_region) plt.colorbar() print("Mean\t: ", np.nanmean(fl_region.flatten())) for p in [0,5,25,50,75, 80, 85,90, 95,100]: print(p, "\t: ", np.nanpercentile(fl_region.flatten(), p)) fl_region_flat = fl_region[~np.isnan(fl_region)].data.flatten() sns.distplot(fl_region_flat) """ Explanation: Use make_gif.py in folder img to generate a gif animation. 
End of explanation """ nc_file2 = r"\\hdata\grid\metdata\prognosis\meps\det\archive\2019\meps_det_pp_1km_20190307T00Z.nc" times, altitude, land_area_fraction, nc_vars2 = ga.nc_load(nc_file2, ["precipitation_amount"], time_period=hour_range) precip_sum = np.sum(nc_vars2['precipitation_amount'][0:6,:,:], axis=0) t_index = 0 precip_sum_region = clip_region(np.flipud(precip_sum), region_mask, t_index, y_min, y_max, x_min, x_max) print(np.count_nonzero(np.isnan(precip_sum_region))) print(np.unique(precip_sum_region)) plt.imshow(precip_sum_region) plt.colorbar() # Mask where the precipitation during the day exceeds a given value. psr_mask = np.where(precip_sum_region >= 5., 1, np.nan) plt.imshow(psr_mask) fl_region_wet = fl_region * psr_mask print("Mean\t: ", np.nanmean(fl_region_wet.flatten())) for p in [0,5,25,50,75, 80, 85,90, 95,100]: print(p, "\t: ", np.nanpercentile(fl_region_wet.flatten(), p)) """ Explanation: Calculating freezing level with regard to precipitation End of explanation """
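The decision logic described earlier — split the day into 6-hour chunks, pick the chunk by peak freezing level (dry day) or by precipitation total (wet day), compress the time dimension with a max, then round the regional 90th percentile up to the next 50 m — can be sketched in plain NumPy. This is a minimal sketch with toy arrays standing in for the clipped MEPS fields; the 2 mm/day threshold comes from the notes above and the function names are hypothetical:

```python
import numpy as np

def round_up_50(x):
    """Round a freezing level up to the next 50 m."""
    return float(np.ceil(x / 50.0) * 50.0)

def regional_freezing_level(fl_dry, fl_wet, precip, chunk_len=6, wet_threshold=2.0):
    """fl_dry, fl_wet, precip: (24, ny, nx) arrays clipped to one region."""
    n_chunks = fl_dry.shape[0] // chunk_len
    split = lambda a: [a[i * chunk_len:(i + 1) * chunk_len] for i in range(n_chunks)]
    mean_daily_precip = np.nansum(precip) / precip[0].size  # mm/day per cell
    if mean_daily_precip < wet_threshold:
        # dry day: 0-degree isotherm, period with the highest freezing levels
        best = max(split(fl_dry), key=np.nanmax)
    else:
        # wet day: wet-bulb isotherm, period with the most precipitation
        best_i = int(np.argmax([np.nansum(p) for p in split(precip)]))
        best = split(fl_wet)[best_i]
    fl_max = np.nanmax(best, axis=0)                  # compress time dimension
    return round_up_50(np.nanpercentile(fl_max, 90))  # regional 90th percentile

rng = np.random.default_rng(0)
fl_dry = rng.uniform(0.0, 1500.0, size=(24, 5, 5))
fl_wet = rng.uniform(0.0, 1500.0, size=(24, 5, 5))
precip = np.zeros((24, 5, 5))  # a dry day
print(regional_freezing_level(fl_dry, fl_wet, precip))
```

On the real grids the same function would be applied per region after clip_region, using altitude_of_0_degree_isotherm for fl_dry and altitude_of_isoTprimW_equal_0 for fl_wet.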
cdawei/digbeta
dchen/music/nsr_baseline.ipynb
gpl-3.0
%matplotlib inline %load_ext autoreload %autoreload 2 import os, sys, time, gzip import pickle as pkl import numpy as np import pandas as pd from scipy.sparse import lil_matrix, issparse, hstack, vstack import matplotlib.pyplot as plt import seaborn as sns from models import MTC from sklearn.linear_model import LogisticRegression # from tools import calc_RPrecision_HitRate from tools import calc_metrics TOPs = [5, 10, 20, 30, 50, 100, 200, 300, 500, 1000] datasets = ['aotm2011', '30music'] dix = 1 dataset_name = datasets[dix] dataset_name data_dir = 'data/%s/setting1' % dataset_name Y_trndev = pkl.load(gzip.open(os.path.join(data_dir, 'Y_train_dev.pkl.gz'), 'rb')) Y_test = pkl.load(gzip.open(os.path.join(data_dir, 'Y_test.pkl.gz'), 'rb')) song2pop = pkl.load(gzip.open('data/%s/setting2/song2pop.pkl.gz' % dataset_name, 'rb')) songsets = pkl.load(gzip.open(os.path.join(data_dir, 'songs_train_dev_test_s1.pkl.gz'), 'rb')) songset_trndev = songsets['train_song_set'] + songsets['dev_song_set'] songset_test = songsets['test_song_set'] """ Explanation: New song recommendation baselines End of explanation """ pl_indices = np.where(Y_test.sum(axis=0).A.reshape(-1) > 0)[0] lengths = Y_trndev.sum(axis=0).A.reshape(-1)[pl_indices] Y_pred = lil_matrix(Y_test.shape, dtype=np.float) np.random.seed(1234567890) for ix in range(len(songset_test)): sort_ix = np.argsort(-lengths) long_ix = [sort_ix[0]] longest = lengths[sort_ix[0]] for i in range(1, sort_ix.shape[0]): if lengths[sort_ix[i]] < longest: break else: long_ix.append(sort_ix[i]) long_ix = np.random.permutation(long_ix) rec_ix = long_ix[0] Y_pred[ix, pl_indices[rec_ix]] = 1 lengths[rec_ix] += 1 Y_pred = Y_pred.tocsc() rps_longest = [] hitrates_longest = {top: [] for top in TOPs} aucs_longest = [] for j in range(Y_test.shape[1]): if (j+1) % 100 == 0: sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1])) sys.stdout.flush() y_true = Y_test[:, j].toarray().reshape(-1) if y_true.sum() < 1: continue y_pred = Y_pred[:,
j].A.reshape(-1) rp, hr_dict, auc = calc_metrics(y_true, y_pred, tops=TOPs) rps_longest.append(rp) for top in TOPs: hitrates_longest[top].append(hr_dict[top]) aucs_longest.append(auc) print('\n%d / %d' % (len(rps_longest), Y_test.shape[1])) longest_perf = {dataset_name: {'Test': {'R-Precision': np.mean(rps_longest), 'Hit-Rate': {top: np.mean(hitrates_longest[top]) for top in hitrates_longest}, 'AUC': np.mean(aucs_longest)}}} longest_perf fperf_longest = os.path.join(data_dir, 'perf-longest.pkl') print(fperf_longest) pkl.dump(longest_perf, open(fperf_longest, 'wb')) pkl.load(open(fperf_longest, 'rb')) """ Explanation: Given a new song, recommend to the longest playlist Random tie breaking if there are more than one longest playlist. End of explanation """ pl_indices = np.where(Y_test.sum(axis=0).A.reshape(-1) > 0)[0] lengths = Y_trndev.sum(axis=0).A.reshape(-1)[pl_indices] Y_pred = lil_matrix(Y_test.shape, dtype=np.float) np.random.seed(1234567890) for ix in range(len(songset_test)): sort_ix = np.argsort(lengths) short_ix = [sort_ix[0]] shortest = lengths[sort_ix[0]] for i in range(1, sort_ix.shape[0]): if lengths[sort_ix[i]] > shortest: break else: short_ix.append(sort_ix[i]) short_ix = np.random.permutation(short_ix) rec_ix = short_ix[0] Y_pred[ix, pl_indices[rec_ix]] = 1 lengths[rec_ix] += 1 Y_pred = Y_pred.tocsc() rps_shortest = [] hitrates_shortest = {top: [] for top in TOPs} aucs_shortest = [] ndcgs_shortest = [] for j in range(Y_test.shape[1]): if (j+1) % 100 == 0: sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1])) sys.stdout.flush() y_true = Y_test[:, j].toarray().reshape(-1) if y_true.sum() < 1: continue y_pred = Y_pred[:, j].A.reshape(-1) rp, hr_dict, auc, ndcg = calc_metrics(y_true, y_pred, tops=TOPs) rps_shortest.append(rp) for top in TOPs: hitrates_shortest[top].append(hr_dict[top]) aucs_shortest.append(auc) ndcgs_shortest.append(ndcg) print('\n%d / %d' % (len(rps_shortest), Y_test.shape[1])) shortest_perf = {dataset_name: {'Test': {'R-Precision': 
np.mean(rps_shortest), 'Hit-Rate': {top: np.mean(hitrates_shortest[top]) for top in hitrates_shortest}, 'AUC': np.mean(aucs_shortest), 'NDCG': np.mean(ndcgs_shortest)}}} shortest_perf fperf_shortest = os.path.join(data_dir, 'perf-shortest.pkl') print(fperf_shortest) pkl.dump(shortest_perf, open(fperf_shortest, 'wb')) pkl.load(open(fperf_shortest, 'rb')) """ Explanation: Given a new song, recommend to the shortest playlist Random tie breaking if there are more than one shortest playlist. End of explanation """ rps_poptest = [] hitrates_poptest = {top: [] for top in TOPs} aucs_poptest = [] ndcgs_poptest = [] for j in range(Y_test.shape[1]): if (j+1) % 100 == 0: sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1])) sys.stdout.flush() y_true = Y_test[:, j].toarray().reshape(-1) if y_true.sum() < 1: continue y_pred = np.asarray([song2pop[sid] for sid, _ in songset_test]) rp, hr_dict, auc, ndcg = calc_metrics(y_true, y_pred, tops=TOPs) rps_poptest.append(rp) for top in TOPs: hitrates_poptest[top].append(hr_dict[top]) aucs_poptest.append(auc) ndcgs_poptest.append(ndcg) print('\n%d / %d' % (len(rps_poptest), Y_test.shape[1])) fig = plt.figure(figsize=[20, 5]) ax1 = plt.subplot(131) ax1.hist(rps_poptest, bins=100) ax1.set_yscale('log') ax1.set_title('R-Precision') #ax.set_xlim(0, xmax) ax2 = plt.subplot(132) ax2.hist(aucs_poptest, bins=100) ax2.set_yscale('log') ax2.set_title('AUC') ax3 = plt.subplot(133) ax3.hist(ndcgs_poptest, bins=100) ax3.set_yscale('log') ax3.set_title('NDCG') pass poptest_perf = {dataset_name: {'Test': {'R-Precision': np.mean(rps_poptest), 'Hit-Rate': {top: np.mean(hitrates_poptest[top]) for top in hitrates_poptest}, 'AUC': np.mean(aucs_poptest), 'NDCG': np.mean(ndcgs_poptest)}}} poptest_perf fperf_poptest = os.path.join(data_dir, 'perf-poptest.pkl') print(fperf_poptest) pkl.dump(poptest_perf, open(fperf_poptest, 'wb')) pkl.load(open(fperf_poptest, 'rb')) """ Explanation: Popularity (in test set) as song score End of explanation """ rps_lrpop = [] 
hitrates_lrpop = {top: [] for top in TOPs} aucs_lrpop = [] ndcgs_lrpop = [] nsong_trndev = len(songset_trndev) nsong_test = len(songset_test) for j in range(Y_test.shape[1]): if (j+1) % 10 == 0: sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1])) sys.stdout.flush() y_true = Y_test[:, j].toarray().reshape(-1) if y_true.sum() < 1: continue X_train = np.asarray([song2pop[sid] for sid, _ in songset_trndev]).reshape(nsong_trndev, 1) Y_train = Y_trndev[:, j].A.reshape(-1) clf = LogisticRegression() clf.fit(X_train, Y_train) X_test = np.asarray([song2pop[sid] for sid, _ in songset_test]).reshape(nsong_test, 1) y_pred = clf.decision_function(X_test).reshape(-1) rp, hr_dict, auc, ndcg = calc_metrics(y_true, y_pred, tops=TOPs) rps_lrpop.append(rp) for top in TOPs: hitrates_lrpop[top].append(hr_dict[top]) aucs_lrpop.append(auc) ndcgs_lrpop.append(ndcg) print('\n%d / %d' % (len(rps_lrpop), Y_test.shape[1])) lrpop_perf = {dataset_name: {'Test': {'R-Precision': np.mean(rps_lrpop), 'Hit-Rate': {top: np.mean(hitrates_lrpop[top]) for top in hitrates_lrpop}, 'AUC': np.mean(aucs_lrpop), 'NDCG': np.mean(ndcgs_lrpop)}}} lrpop_perf fperf_lrpop = os.path.join(data_dir, 'perf-lrpop.pkl') print(fperf_lrpop) pkl.dump(lrpop_perf, open(fperf_lrpop, 'wb')) pkl.load(open(fperf_lrpop, 'rb')) """ Explanation: Logistic Regression with only song popularity as feature End of explanation """
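The calc_metrics helper used throughout comes from the project's tools module and is not shown here; under the usual definitions, the R-Precision and Hit-Rate@K it reports can be sketched as follows (a hypothetical minimal reimplementation, not necessarily identical to tools.py):

```python
import numpy as np

def r_precision(y_true, y_pred):
    """Fraction of the top-R scored items that are relevant, with R = #relevant."""
    r = int(y_true.sum())
    top_r = np.argsort(-y_pred)[:r]     # indices of the R highest scores
    return y_true[top_r].sum() / r

def hit_rate(y_true, y_pred, top):
    """Fraction of relevant items recovered among the `top` highest scores."""
    top_ix = np.argsort(-y_pred)[:top]
    return y_true[top_ix].sum() / y_true.sum()

y_true = np.array([1, 0, 0, 1, 0], dtype=float)
y_pred = np.array([0.9, 0.8, 0.1, 0.7, 0.2])
print(r_precision(y_true, y_pred))   # top-2 = items 0, 1 -> one hit -> 0.5
print(hit_rate(y_true, y_pred, 3))   # top-3 = items 0, 1, 3 -> both hits -> 1.0
```

Applied per playlist column, as in the loops above, averaging these over all non-empty columns gives the reported R-Precision and Hit-Rate@K figures.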
tensorflow/docs-l10n
site/ja/guide/intro_to_modules.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ import tensorflow as tf from datetime import datetime %load_ext tensorboard """ Explanation: モジュール、レイヤー、モデルの概要 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/intro_to_modules" class=""><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/intro_to_modules.ipynb" class=""><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/intro_to_modules.ipynb" class=""><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/intro_to_modules.ipynb" class=""><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td> </table> TensorFlow で機械学習を実行するには、モデルを定義、保存、復元する必要があります。 モデルは、抽象的に次のように定義できます。 テンソルで何かを計算する関数(フォワードパス) トレーニングに応じて更新できる何らかの変数 このガイドでは、Keras 内で TensorFlow モデルがどのように定義されているか、そして、TensorFlow が変数とモデルを収集する方法、および、モデルを保存および復元する方法を説明します。 注意:今すぐ Keras を使用するのであれば、一連の Keras ガイドをご覧ください。 セットアップ End of explanation """ class SimpleModule(tf.Module): def __init__(self, 
name=None): super().__init__(name=name) self.a_variable = tf.Variable(5.0, name="train_me") self.non_trainable_variable = tf.Variable(5.0, trainable=False, name="do_not_train_me") def __call__(self, x): return self.a_variable * x + self.non_trainable_variable simple_module = SimpleModule(name="simple") simple_module(tf.constant(5.0)) """ Explanation: TensorFlow におけるモデルとレイヤーの定義 ほとんどのモデルはレイヤーで構成されています。レイヤーは、再利用およびトレーニング可能な変数を持つ既知の数学的構造を持つ関数です。TensorFlow では、Keras や Sonnet といった、レイヤーとモデルの高位実装の多くは、同じ基本クラスの tf.Module に基づいて構築されています。 スカラーテンソルで動作する非常に単純な tf.Module の例を次に示します。 End of explanation """ # All trainable variables print("trainable variables:", simple_module.trainable_variables) # Every variable print("all variables:", simple_module.variables) """ Explanation: モジュールと(その延長としての)レイヤーは、「オブジェクト」のディープラーニング用語です。これらには、内部状態と、その状態を使用するメソッドがあります。 __ call__ は Python コーラブルのように動作する以外何も特別なことではないため、任意の関数を使用してモデルを呼び出すことができます。 ファインチューニング中のレイヤーと変数を凍結するなど、様々な理由で、変数をトレーニング対象とするかどうかを設定することができます。 注意: tf.Module は tf.keras.layers.Layer と tf.keras.Model の基本クラスであるため、ここに説明されているすべての内容は Keras にも当てはまります。過去の互換性の理由から、Keras レイヤーはモジュールから変数を収集しないため、モデルはモジュールのみ、または Keras レイヤーのみを使用する必要があります。ただし、以下に示す変数の検査方法はどちらの場合も同じです。 tf.Module をサブクラス化することにより、このオブジェクトのプロパティに割り当てられた tf.Variable または tf.Module インスタンスが自動的に収集されます。これにより、変数の保存や読み込みのほか、tf.Module のコレクションを作成することができます。 End of explanation """ class Dense(tf.Module): def __init__(self, in_features, out_features, name=None): super().__init__(name=name) self.w = tf.Variable( tf.random.normal([in_features, out_features]), name='w') self.b = tf.Variable(tf.zeros([out_features]), name='b') def __call__(self, x): y = tf.matmul(x, self.w) + self.b return tf.nn.relu(y) """ Explanation: これは、モジュールで構成された 2 層線形レイヤーモデルの例です。 最初の高密度(線形)レイヤーは以下のとおりです。 End of explanation """ class SequentialModule(tf.Module): def __init__(self, name=None): super().__init__(name=name) self.dense_1 = Dense(in_features=3, out_features=3) self.dense_2 = Dense(in_features=3, out_features=2) def 
__call__(self, x): x = self.dense_1(x) return self.dense_2(x) # You have made a model! my_model = SequentialModule(name="the_model") # Call it, with random results print("Model results:", my_model(tf.constant([[2.0, 2.0, 2.0]]))) """ Explanation: 2 つのレイヤーインスタンスを作成して適用する完全なモデルは以下のとおりです。 End of explanation """ print("Submodules:", my_model.submodules) for var in my_model.variables: print(var, "\n") """ Explanation: tf.Module インスタンスは、それに割り当てられた tf.Variable または tf.Module インスタンスを再帰的に自動収集します。これにより、単一のモデルインスタンスで tf.Module のコレクションを管理し、モデル全体を保存して読み込むことができます。 End of explanation """ class FlexibleDenseModule(tf.Module): # Note: No need for `in_features` def __init__(self, out_features, name=None): super().__init__(name=name) self.is_built = False self.out_features = out_features def __call__(self, x): # Create variables on first call. if not self.is_built: self.w = tf.Variable( tf.random.normal([x.shape[-1], self.out_features]), name='w') self.b = tf.Variable(tf.zeros([self.out_features]), name='b') self.is_built = True y = tf.matmul(x, self.w) + self.b return tf.nn.relu(y) # Used in a module class MySequentialModule(tf.Module): def __init__(self, name=None): super().__init__(name=name) self.dense_1 = FlexibleDenseModule(out_features=3) self.dense_2 = FlexibleDenseModule(out_features=2) def __call__(self, x): x = self.dense_1(x) return self.dense_2(x) my_model = MySequentialModule(name="the_model") print("Model results:", my_model(tf.constant([[2.0, 2.0, 2.0]]))) """ Explanation: 変数の作成を延期する ここで、レイヤーへの入力サイズと出力サイズの両方を定義する必要があることに気付いたかもしれません。これは、w 変数が既知の形状を持ち、割り当てることができるようにするためです。 モジュールが特定の入力形状で最初に呼び出されるまで変数の作成を延期することにより、入力サイズを事前に指定する必要がありません。 End of explanation """ chkp_path = "my_checkpoint" checkpoint = tf.train.Checkpoint(model=my_model) checkpoint.write(chkp_path) """ Explanation: この柔軟性のため、多くの場合、TensorFlow レイヤーは、出力の形状(tf.keras.layers.Dense)などを指定するだけで済みます。入出力サイズの両方を指定する必要はありません。 重みを保存する tf.Module はチェックポイントと SavedModel の両方として保存できます。 
チェックポイントは単なる重み(モジュールとそのサブモジュール内の変数のセットの値)です。 End of explanation """ !ls my_checkpoint* """ Explanation: チェックポイントは、データ自体とメタデータのインデックスファイルの 2 種類のファイルで構成されます。インデックスファイルは、実際に保存されているものとチェックポイントの番号を追跡し、チェックポイントデータには変数値とその属性ルックアップパスが含まれています。 End of explanation """ tf.train.list_variables(chkp_path) """ Explanation: チェックポイントの内部を調べると、変数のコレクション全体が保存されており、変数を含む Python オブジェクト別に並べ替えられていることを確認できます。 End of explanation """ new_model = MySequentialModule() new_checkpoint = tf.train.Checkpoint(model=new_model) new_checkpoint.restore("my_checkpoint") # Should be the same result as above new_model(tf.constant([[2.0, 2.0, 2.0]])) """ Explanation: 分散(マルチマシン)トレーニング中にシャーディングされる可能性があるため、番号が付けられています(「00000-of-00001」など)。ただし、この例の場合、シャードは 1 つしかありません。 モデルを再度読み込むと、Python オブジェクトの値が上書きされます。 End of explanation """ class MySequentialModule(tf.Module): def __init__(self, name=None): super().__init__(name=name) self.dense_1 = Dense(in_features=3, out_features=3) self.dense_2 = Dense(in_features=3, out_features=2) @tf.function def __call__(self, x): x = self.dense_1(x) return self.dense_2(x) # You have made a model with a graph! my_model = MySequentialModule(name="the_model") """ Explanation: 注意: チェックポイントは長いトレーニングワークフローでは重要であり、tf.checkpoint.CheckpointManager はヘルパークラスとして、チェックポイント管理を大幅に簡単にすることができます。詳細については、トレーニングチェックポイントガイドをご覧ください。 関数の保存 TensorFlow は、TensorFlow Serving と TensorFlow Lite で見たように、元の Python オブジェクトなしでモデルを実行できます。また、TensorFlow Hub からトレーニング済みのモデルをダウンロードした場合でも同じです。 TensorFlow は、Pythonで説明されている計算の実行方法を認識する必要がありますが、元のコードは必要ありません。認識させるには、グラフを作成することができます。これについてはグラフと関数の入門ガイドをご覧ください。 このグラフには、関数を実装する演算が含まれています。 @tf.function デコレータを追加して、このコードをグラフとして実行する必要があることを示すことにより、上記のモデルでグラフを定義できます。 End of explanation """ print(my_model([[2.0, 2.0, 2.0]])) print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]])) """ Explanation: 作成したモジュールは、前と全く同じように動作します。関数に渡される一意のシグネチャごとにグラフが作成されます。詳細については、グラフと関数の基礎ガイドをご覧ください。 End of explanation """ # Set up logging. 
stamp = datetime.now().strftime("%Y%m%d-%H%M%S") logdir = "logs/func/%s" % stamp writer = tf.summary.create_file_writer(logdir) # Create a new model to get a fresh trace # Otherwise the summary will not see the graph. new_model = MySequentialModule() # Bracket the function call with # tf.summary.trace_on() and tf.summary.trace_export(). tf.summary.trace_on(graph=True) tf.profiler.experimental.start(logdir) # Call only one tf.function when tracing. z = print(new_model(tf.constant([[2.0, 2.0, 2.0]]))) with writer.as_default(): tf.summary.trace_export( name="my_func_trace", step=0, profiler_outdir=logdir) """ Explanation: TensorBoard のサマリー内でグラフをトレースすると、グラフを視覚化できます。 End of explanation """ #docs_infra: no_execute %tensorboard --logdir logs/func """ Explanation: TensorBoard を起動して、トレースの結果を確認します。 End of explanation """ tf.saved_model.save(my_model, "the_saved_model") # Inspect the SavedModel in the directory !ls -l the_saved_model # The variables/ directory contains a checkpoint of the variables !ls -l the_saved_model/variables """ Explanation: SavedModel の作成 トレーニングが完了したモデルを共有するには、SavedModel の使用が推奨されます。SavedModel には関数のコレクションと重みのコレクションの両方が含まれています。 次のようにして、トレーニングしたモデルを保存することができます。 End of explanation """ new_model = tf.saved_model.load("the_saved_model") """ Explanation: saved_model.pb ファイルは、関数型の tf.Graph を記述するプロトコルバッファです。 モデルとレイヤーは、それを作成したクラスのインスタンスを実際に作成しなくても、この表現から読み込めます。これは、大規模なサービスやエッジデバイスでのサービスなど、Python インタープリタがない(または使用しない)場合や、元の Python コードが利用できないか実用的でない場合に有用です。 モデルを新しいオブジェクトとして読み込みます。 End of explanation """ isinstance(new_model, SequentialModule) """ Explanation: 保存したモデルを読み込んで作成された new_model は、クラスを認識しない内部の TensorFlow ユーザーオブジェクトです。SequentialModule ではありません。 End of explanation """ print(my_model([[2.0, 2.0, 2.0]])) print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]])) """ Explanation: この新しいモデルは、すでに定義されている入力シグネチャで機能します。このように復元されたモデルにシグネチャを追加することはできません。 End of explanation """ class MyDense(tf.keras.layers.Layer): # Adding **kwargs to support base Keras layer arguments def 
__init__(self, in_features, out_features, **kwargs): super().__init__(**kwargs) # This will soon move to the build step; see below self.w = tf.Variable( tf.random.normal([in_features, out_features]), name='w') self.b = tf.Variable(tf.zeros([out_features]), name='b') def call(self, x): y = tf.matmul(x, self.w) + self.b return tf.nn.relu(y) simple_layer = MyDense(name="simple", in_features=3, out_features=3) """ Explanation: したがって、SavedModel を使用すると、tf.Module を使用して TensorFlow の重みとグラフを保存し、それらを再度読み込むことができます。 Keras モデルとレイヤー ここまでは、Keras に触れずに説明してきましたが、tf.Module の上に独自の高位 API を構築することは可能です。 このセクションでは、Keras が tf.Module をどのように使用するかを説明します。Keras モデルの完全なユーザーガイドは、Keras ガイドをご覧ください。 Keras レイヤー tf.keras.layers.Layer はすべての Keras レイヤーの基本クラスであり、tf.Module から継承します。 親を交換してから、__call__ を call に変更するだけで、モジュールを Keras レイヤーに変換できます。 End of explanation """ simple_layer([[2.0, 2.0, 2.0]]) """ Explanation: Keras レイヤーには独自の __call__ があり、次のセクションで説明する手順を実行してから、call() を呼び出します。動作には違いはありません。 End of explanation """ class FlexibleDense(tf.keras.layers.Layer): # Note the added `**kwargs`, as Keras supports many arguments def __init__(self, out_features, **kwargs): super().__init__(**kwargs) self.out_features = out_features def build(self, input_shape): # Create the state of the layer (weights) self.w = tf.Variable( tf.random.normal([input_shape[-1], self.out_features]), name='w') self.b = tf.Variable(tf.zeros([self.out_features]), name='b') def call(self, inputs): # Defines the computation from inputs to outputs return tf.matmul(inputs, self.w) + self.b # Create the instance of the layer flexible_dense = FlexibleDense(out_features=3) """ Explanation: build ステップ 前述のように、多くの場合都合よく、入力形状が確定するまで変数の作成を延期できます。 Keras レイヤーには追加のライフサイクルステップがあり、レイヤーをより柔軟に定義することができます。このステップは、build() 関数で定義されます。 build は 1 回だけ呼び出され、入力形状で呼び出されます。通常、変数(重み)を作成するために使用されます。 上記の MyDense レイヤーを、入力のサイズに柔軟に合わせられるように書き換えることができます。 End of explanation """ flexible_dense.variables """ Explanation: この時点では、モデルは構築されていないため、変数も存在しません。 End of explanation """ # 
Call it, with predictably random results print("Model results:", flexible_dense(tf.constant([[2.0, 2.0, 2.0], [3.0, 3.0, 3.0]]))) flexible_dense.variables """ Explanation: 関数を呼び出すと、適切なサイズの変数が割り当てられます。 End of explanation """ try: print("Model results:", flexible_dense(tf.constant([[2.0, 2.0, 2.0, 2.0]]))) except tf.errors.InvalidArgumentError as e: print("Failed:", e) """ Explanation: buildは 1 回しか呼び出されないため、入力形状がレイヤーの変数と互換性がない場合、入力は拒否されます。 End of explanation """ class MySequentialModel(tf.keras.Model): def __init__(self, name=None, **kwargs): super().__init__(**kwargs) self.dense_1 = FlexibleDense(out_features=3) self.dense_2 = FlexibleDense(out_features=2) def call(self, x): x = self.dense_1(x) return self.dense_2(x) # You have made a Keras model! my_sequential_model = MySequentialModel(name="the_model") # Call it on a tensor, with random results print("Model results:", my_sequential_model(tf.constant([[2.0, 2.0, 2.0]]))) """ Explanation: Keras レイヤーには、次のような多くの追加機能があります。 オプションの損失 メトリクスのサポート トレーニングと推論の使用を区別する、オプションの training 引数の組み込みサポート Python でモデルのクローンを作成するための構成を正確に保存する get_config と <code>from_config</code> メソッド 詳細は、カスタムレイヤーとモデルに関する完全ガイドをご覧ください。 Keras モデル モデルはネストされた Keras レイヤーとして定義できます。 ただし、Keras は tf.keras.Model と呼ばれるフル機能のモデルクラスも提供します。Keras モデルは tf.keras.layers.Layer を継承しているため、 Keras レイヤーと同じ方法で使用、ネスト、保存することができます。Keras モデルには、トレーニング、評価、読み込み、保存、および複数のマシンでのトレーニングを容易にする追加機能があります。 上記の SequentialModule をほぼ同じコードで定義できます。先ほどと同じように、__call__ をcall() に変換して、親を変更します。 End of explanation """ my_sequential_model.variables my_sequential_model.submodules """ Explanation: 追跡変数やサブモジュールなど、すべて同じ機能を利用できます。 注意: 上記の「注意」を繰り返すと、Keras レイヤーまたはモデル内にネストされた生の tf.Module は、トレーニングまたは保存のために変数を収集しません。代わりに、Keras レイヤーを Keras レイヤーの内側にネストします。 End of explanation """ inputs = tf.keras.Input(shape=[3,]) x = FlexibleDense(3)(inputs) x = FlexibleDense(2)(x) my_functional_model = tf.keras.Model(inputs=inputs, outputs=x) my_functional_model.summary() my_functional_model(tf.constant([[2.0, 2.0, 2.0]])) """ 
Explanation: 非常に Python 的なアプローチとして、tf.keras.Model をオーバーライドして TensorFlow モデルを構築することができます。ほかのフレームワークからモデルを移行する場合、これは非常に簡単な方法です。 モデルが既存のレイヤーと入力の単純な集合として構築されている場合は、モデルの再構築とアーキテクチャに関する追加機能を備えた Functional API を使用すると手間とスペースを節約できます。 以下は、Functional API を使用した同じモデルです。 End of explanation """ my_sequential_model.save("exname_of_file") """ Explanation: ここでの主な違いは、入力形状が関数構築プロセスの一部として事前に指定されることです。この場合、input_shape 引数を完全に指定する必要がないため、一部の次元を None のままにしておくことができます。 注意:サブクラス化されたモデルでは、input_shape や InputLayer を指定する必要はありません。これらの引数とレイヤーは無視されます。 Keras モデルの保存 Keras モデルでは tf.Moduleと同じようにチェックポイントを設定できます。 Keras モデルはモジュールであるため、tf.saved_models.save() を使用して保存することもできます。ただし、Keras モデルには便利なメソッドやその他の機能があります。 End of explanation """ reconstructed_model = tf.keras.models.load_model("exname_of_file") """ Explanation: このように簡単に、読み込み直すことができます。 End of explanation """ reconstructed_model(tf.constant([[2.0, 2.0, 2.0]])) """ Explanation: また、Keras SavedModel は、メトリクス、損失、およびオプティマイザの状態も保存します。 再構築されたこのモデルを使用すると、同じデータで呼び出されたときと同じ結果が得られます。 End of explanation """
quoniammm/happy-machine-learning
Udacity-ML/boston_housing-master_4/boston_housing.ipynb
mit
# Import libraries necessary for this project # 载入此项目所需要的库 import numpy as np import pandas as pd import visuals as vs # Supplementary code from sklearn.model_selection import ShuffleSplit # Pretty display for notebooks # 让结果在notebook中显示 %matplotlib inline # Load the Boston housing dataset # 载入波士顿房屋的数据集 data = pd.read_csv('housing.csv') prices = data['MEDV'] features = data.drop('MEDV', axis = 1) # Success # 完成 print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape) """ Explanation: 机器学习工程师纳米学位 模型评价与验证 项目 1: 预测波士顿房价 欢迎来到机器学习工程师纳米学位的第一个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能来让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的内容中有需要你必须实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示! 除了实现代码外,你还必须回答一些与项目和实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。你的项目将会根据你对问题的回答和撰写代码所实现的功能来进行评分。 提示:Code 和 Markdown 区域可通过 Shift + Enter 快捷键运行。此外,Markdown可以通过双击进入编辑模式。 开始 在这个项目中,你将利用马萨诸塞州波士顿郊区的房屋信息数据训练和测试一个模型,并对模型的性能和预测能力进行测试。通过该数据训练后的好的模型可以被用来对房屋做特定预测---尤其是对房屋的价值。对于房地产经纪等人的日常工作来说,这样的预测模型被证明非常有价值。 此项目的数据集来自UCI机器学习知识库。波士顿房屋这些数据于1978年开始统计,共506个数据点,涵盖了麻省波士顿不同郊区房屋14种特征的信息。本项目对原始数据集做了以下处理: - 有16个'MEDV' 值为50.0的数据点被移除。 这很可能是由于这些数据点包含遗失或看不到的值。 - 有1个数据点的 'RM' 值为8.78. 
这是一个异常值,已经被移除。 - 对于本项目,房屋的'RM', 'LSTAT','PTRATIO'以及'MEDV'特征是必要的,其余不相关特征已经被移除。 - 'MEDV'特征的值已经过必要的数学转换,可以反映35年来市场的通货膨胀效应。 运行下面区域的代码以载入波士顿房屋数据集,以及一些此项目所需的Python库。如果成功返回数据集的大小,表示数据集已载入成功。 End of explanation """ # TODO: Minimum price of the data #目标:计算价值的最小值 minimum_price = np.min(prices) # TODO: Maximum price of the data #目标:计算价值的最大值 maximum_price = np.max(prices) # TODO: Mean price of the data #目标:计算价值的平均值 mean_price = np.mean(prices) # TODO: Median price of the data #目标:计算价值的中值 median_price = np.median(prices) # TODO: Standard deviation of prices of the data #目标:计算价值的标准差 std_price = np.std(prices) # Show the calculated statistics #目标:输出计算的结果 print "Statistics for Boston housing dataset:\n" print "Minimum price: ${:,.2f}".format(minimum_price) print "Maximum price: ${:,.2f}".format(maximum_price) print "Mean price: ${:,.2f}".format(mean_price) print "Median price ${:,.2f}".format(median_price) print "Standard deviation of prices: ${:,.2f}".format(std_price) """ Explanation: 分析数据 在项目的第一个部分,你会对波士顿房地产数据进行初步的观察并给出你的分析。通过对数据的探索来熟悉数据可以让你更好地理解和解释你的结果。 由于这个项目的最终目标是建立一个预测房屋价值的模型,我们需要将数据集分为特征(features)和目标变量(target variable)。特征 'RM', 'LSTAT',和 'PTRATIO',给我们提供了每个数据点的数量相关的信息。目标变量:'MEDV',是我们希望预测的变量。他们分别被存在features和prices两个变量名中。 练习:基础统计运算 你的第一个编程练习是计算有关波士顿房价的描述统计数据。我们已为你导入了numpy,你需要使用这个库来执行必要的计算。这些统计数据对于分析模型的预测结果非常重要的。 在下面的代码中,你要做的是: - 计算prices中的'MEDV'的最小值、最大值、均值、中值和标准差; - 将运算结果储存在相应的变量中。 End of explanation """ # TODO: Import 'r2_score' from sklearn.metrics import r2_score def performance_metric(y_true, y_predict): """ Calculates and returns the performance score between true and predicted values based on the metric chosen. 
""" # TODO: Calculate the performance score between 'y_true' and 'y_predict' score = r2_score(y_true, y_predict) # Return the score return score """ Explanation: 问题1 - 特征观察 如前文所述,本项目中我们关注的是其中三个值:'RM'、'LSTAT' 和'PTRATIO',对每一个数据点: - 'RM' 是该地区中每个房屋的平均房间数量; - 'LSTAT' 是指该地区有多少百分比的房东属于是低收入阶层(有工作但收入微薄); - 'PTRATIO' 是该地区的中学和小学里,学生和老师的数目比(学生/老师)。 凭直觉,上述三个特征中对每一个来说,你认为增大该特征的数值,'MEDV'的值会是增大还是减小呢?每一个答案都需要你给出理由。 提示:你预期一个'RM' 值是6的房屋跟'RM' 值是7的房屋相比,价值更高还是更低呢? 回答: RM 增大,MEDV 增大,因为房屋面积变大; LSTAT 增大,MEDV 减小,因为低收入者变多; PTRATIO 增大,MEDV 减小,因为教育资源变得更加稀缺 建模 在项目的第二部分中,你需要了解必要的工具和技巧来让你的模型进行预测。用这些工具和技巧对每一个模型的表现做精确的衡量可以极大地增强你预测的信心。 练习:定义衡量标准 如果不能对模型的训练和测试的表现进行量化地评估,我们就很难衡量模型的好坏。通常我们会定义一些衡量标准,这些标准可以通过对某些误差或者拟合程度的计算来得到。在这个项目中,你将通过运算决定系数 R<sup>2</sup> 来量化模型的表现。模型的决定系数是回归分析中十分常用的统计信息,经常被当作衡量模型预测能力好坏的标准。 R<sup>2</sup>的数值范围从0至1,表示目标变量的预测值和实际值之间的相关程度平方的百分比。一个模型的R<sup>2</sup> 值为0还不如直接用平均值来预测效果好;而一个R<sup>2</sup> 值为1的模型则可以对目标变量进行完美的预测。从0至1之间的数值,则表示该模型中目标变量中有百分之多少能够用特征来解释。模型也可能出现负值的R<sup>2</sup>,这种情况下模型所做预测有时会比直接计算目标变量的平均值差很多。 在下方代码的 performance_metric 函数中,你要实现: - 使用 sklearn.metrics 中的 r2_score 来计算 y_true 和 y_predict的R<sup>2</sup>值,作为对其表现的评判。 - 将他们的表现评分储存到score变量中。 End of explanation """ # Calculate the performance of this model score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]) print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score) """ Explanation: 问题2 - 拟合程度 假设一个数据集有五个数据且一个模型做出下列目标变量的预测: | 真实数值 | 预测数值 | | :-------------: | :--------: | | 3.0 | 2.5 | | -0.5 | 0.0 | | 2.0 | 2.1 | | 7.0 | 7.8 | | 4.2 | 5.3 | 你会觉得这个模型已成功地描述了目标变量的变化吗?如果成功,请解释为什么,如果没有,也请给出原因。 运行下方的代码,使用performance_metric函数来计算模型的决定系数。 End of explanation """ # TODO: Import 'train_test_split' from sklearn.model_selection import train_test_split # TODO: Shuffle and split the data into training and testing subsets X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42) # Success print "Training and testing split was successful." 
""" Explanation: 回答: 我觉得成功描述了。因为决定系数很的范围为 0 ~ 1,越接近1,说明这个模型可以对目标变量进行预测的效果越好,结果决定系数计算出来为 0.923 ,说明模型对目标变量的变化进行了良好的描述。 练习: 数据分割与重排 接下来,你需要把波士顿房屋数据集分成训练和测试两个子集。通常在这个过程中,数据也会被重新排序,以消除数据集中由于排序而产生的偏差。 在下面的代码中,你需要: - 使用 sklearn.model_selection 中的 train_test_split, 将features和prices的数据都分成用于训练的数据子集和用于测试的数据子集。 - 分割比例为:80%的数据用于训练,20%用于测试; - 选定一个数值以设定 train_test_split 中的 random_state ,这会确保结果的一致性; - 最终分离出的子集为X_train,X_test,y_train,和y_test。 End of explanation """ # Produce learning curves for varying training set sizes and maximum depths vs.ModelLearning(features, prices) """ Explanation: 问题 3- 训练及测试 将数据集按一定比例分为训练用的数据集和测试用的数据集对学习算法有什么好处? 提示: 如果没有数据来对模型进行测试,会出现什么问题? 答案: 这样做,可以使得我们可以通过测试用的数据集来对模型的泛化误差进行评估,检验模型的好坏。 分析模型的表现 在项目的第三部分,我们来看一下几个模型针对不同的数据集在学习和测试上的表现。另外,你需要专注于一个特定的算法,用全部训练集训练时,提高它的'max_depth' 参数,观察这一参数的变化如何影响模型的表现。把你模型的表现画出来对于分析过程十分有益。可视化可以让我们看到一些单看结果看不到的行为。 学习曲线 下方区域内的代码会输出四幅图像,它们是一个决策树模型在不同最大深度下的表现。每一条曲线都直观的显示了随着训练数据量的增加,模型学习曲线的训练评分和测试评分的变化。注意,曲线的阴影区域代表的是该曲线的不确定性(用标准差衡量)。这个模型的训练和测试部分都使用决定系数R<sup>2</sup>来评分。 运行下方区域中的代码,并利用输出的图形回答下面的问题。 End of explanation """ vs.ModelComplexity(X_train, y_train) """ Explanation: 问题 4 - 学习数据 选择上述图像中的其中一个,并给出其最大深度。随着训练数据量的增加,训练曲线的评分有怎样的变化?测试曲线呢?如果有更多的训练数据,是否能有效提升模型的表现呢? 提示:学习曲线的评分是否最终会收敛到特定的值? 答案: 第二个,最大深度为3。训练曲线开始逐渐降低,测试曲线开始逐渐升高,但它们最后都趋于平稳,所以并不能有效提升模型的表现。 复杂度曲线 下列代码内的区域会输出一幅图像,它展示了一个已经经过训练和验证的决策树模型在不同最大深度条件下的表现。这个图形将包含两条曲线,一个是训练的变化,一个是测试的变化。跟学习曲线相似,阴影区域代表该曲线的不确定性,模型训练和测试部分的评分都用的 performance_metric 函数。 运行下方区域中的代码,并利用输出的图形并回答下面的两个问题。 End of explanation """ # TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV' from sklearn.tree import DecisionTreeRegressor from sklearn.metrics import make_scorer from sklearn.model_selection import GridSearchCV def fit_model(X, y): """ Performs grid search over the 'max_depth' parameter for a decision tree regressor trained on the input data [X, y]. 
""" # Create cross-validation sets from the training data cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0) # TODO: Create a decision tree regressor object regressor = DecisionTreeRegressor(random_state=0) # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10 params = {'max_depth': range(1, 11)} # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer' scoring_fnc = make_scorer(performance_metric) # TODO: Create the grid search object grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets) # Fit the grid search object to the data to compute the optimal model grid = grid.fit(X, y) # Return the optimal model after fitting the data return grid.best_estimator_ """ Explanation: 问题 5- 偏差与方差之间的权衡取舍 当模型以最大深度 1训练时,模型的预测是出现很大的偏差还是出现了很大的方差?当模型以最大深度10训练时,情形又如何呢?图形中的哪些特征能够支持你的结论? 提示: 你如何得知模型是否出现了偏差很大或者方差很大的问题? 答案:  为1时,出现了很大的偏差,因为此时无论是测试数据还是训练数据b标准系数都很低,测试数据和训练数据的标准系数之间差异很小,说明模型无法对数据进行良好预测。 为 10 时,出现了很大的方差,测试数据和训练数据的标准系数之间差异很大,说明出现了过拟合情况。 问题 6- 最优模型的猜测 你认为最大深度是多少的模型能够最好地对未见过的数据进行预测?你得出这个答案的依据是什么? 答案: 3。因为此时测试数据和训练数据的分数之间差异最小,且测试数据的标准系数达到最高。 评价模型表现 在这个项目的最后,你将自己建立模型,并使用最优化的fit_model函数,基于客户房子的特征来预测该房屋的价值。 问题 7- 网格搜索(Grid Search) 什么是网格搜索法?如何用它来优化学习算法? 回答: 是一种把参数网格化的算法。 它会自动生成一个不同参数值组成的“网格”: =================================== ('param1', param3) | ('param1', param4) ('param2', param3) | ('param2', param4) ================================== 通过尝试所有"网格"中使用的参数,找到 k(可能的选择为 'param1' 和 'param2' )和 C(可能的选择为 'param3' 和 'param4')的最佳组合,并从中选取最佳的参数组合来优化学习算法。 问题 8- 交叉验证 什么是K折交叉验证法(k-fold cross-validation)?优化模型时,使用这种方法对网格搜索有什么好处?网格搜索是如何结合交叉验证来完成对最佳参数组合的选择的? 提示: 跟为何需要一组测试集的原因差不多,网格搜索时如果不使用交叉验证会有什么问题?GridSearchCV中的'cv_results'属性能告诉我们什么? 
Answer: K-fold cross-validation is a method for model evaluation and validation. It splits the dataset into k parts, where 1 part serves as the validation set and the remaining k-1 parts serve as the training set. Because an ordinary random split into training and testing sets is probabilistic, which model turns out best involves some chance; using k-fold cross-validation reduces this randomness and chance. At the same time, it makes better use of the training data, letting the model learn the underlying patterns.
With cross-validation, we obtain multiple validation sets. Grid search is a parameter-tuning algorithm, and we tune parameters based on validation-set performance; with multiple validation sets, we can make multiple attempts, whereas without cross-validation (or some other algorithm for producing validation sets) we would be unable to tune the parameters.
Grid search lets the fitting function try all parameter combinations and return a suitable estimator, automatically tuned to the best parameter combination.
Exercise: Fit a Model
In this final exercise, you will bring together everything you have learned and train a model with the decision tree algorithm. To guarantee you arrive at an optimal model, you need to train the model using the grid search technique to find the best 'max_depth' parameter. You can think of the 'max_depth' parameter as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a type of supervised learning algorithm.
In addition, you will find that your implementation uses ShuffleSplit(). It is another form of cross-validation (see the variable 'cv_sets'). While it is not the K-Fold cross-validation described in Question 8, this validation technique is also very useful! Here ShuffleSplit() creates 10 ('n_splits') shuffled sets, and in each set 20% ('test_size') of the data is used as the validation set. As you implement this, think about how it is similar to and different from K-Fold cross-validation.
In the fit_model function below, you need to:
- use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor;
- store this regressor in the 'regressor' variable;
- create a dictionary for 'max_depth' whose values are an array from 1 to 10, and store it in the 'params' variable;
- use make_scorer from sklearn.metrics to create a scoring function;
- pass performance_metric as a parameter into this function;
- store the scoring function in the 'scoring_fnc' variable;
- use GridSearchCV from sklearn.model_selection to create a grid search object;
- pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' into this object;
- store the GridSearchCV object in the 'grid' variable.
If you are unfamiliar with how Python functions take multiple parameters, you can refer to this MIT course video.
End of explanation
"""

# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)

# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])

"""
Explanation: Making Predictions
Once a model has been trained on data, it can be used to make predictions on new data. With the decision tree regressor, the model has learned what questions to ask about new input data and returns a prediction for the target variable. You can use these predictions to gain information about the unknown target variable of new data, which must be data not included in the training set.
Question 9 - Optimal Model
What is the maximum depth of the optimal model? Does this answer match the guess you made in Question 6? 
Run the code in the cell below to fit the decision tree regressor to the training data and obtain the optimized model.
End of explanation
"""

# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
               [4, 32, 22], # Client 2
               [8, 3, 12]]  # Client 3

# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)

data['MEDV'].describe()

"""
Explanation: Answer: 4. Not the same as the guess; the guess was 3.
Question 10 - Predicting Selling Prices
Imagine you are a real-estate agent in the Boston area hoping to use this model to help your clients appraise the homes they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (% considered lower class) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15:1 | 22:1 | 12:1 |
What price would you recommend each client sell their home at? Judging from the feature values of the homes, do these prices seem reasonable?
Hint: use the statistics you calculated in the Analyzing the Data section to help justify your answer.
Run the code block below to have your optimized model make a prediction for each client's home value.
End of explanation
"""

vs.PredictTrials(features, prices, fit_model, client_data)

"""
Explanation: Answer:
Client 1: $403,025.00.
Client 2: $237,478.72.
Client 3: $931,636.36.
These prices are reasonable. Take Client 3 for example: their home has the most rooms, the lowest neighborhood poverty level, and the richest educational resources, so it is priced the highest. By the same reasoning, the predictions for Clients 1 and 2 are also reasonable. In addition, compared against the output of `data['MEDV'].describe()`, all three prices fall within a reasonable range, so the prices are reasonable.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes a model is too complex or too simple to generalize to new data; sometimes a model's learning algorithm is ill-suited to the structure of the data; and sometimes the data itself is too noisy or too scarce for the model to predict the target variable accurately. In these cases we say the model is underfit. Run the code in the cell below to execute the fit_model function ten times with different training and testing sets. Note how, for one particular client, the prediction changes as the training data changes.
End of explanation
"""

### Your code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

# Pretty display for notebooks
%matplotlib inline

# Load the Beijing housing dataset
data = pd.read_csv('bj_housing.csv')
prices = data['Value']
features = data.drop('Value', 
axis = 1)

print features.head()
print prices.head()

# Success
# print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)

def performance_metric(y_true, y_predict):
    """ Calculates and returns the performance score between
        true and predicted values based on the metric chosen. """

    # TODO: Calculate the performance score between 'y_true' and 'y_predict'
    score = r2_score(y_true, y_predict)

    # Return the score
    return score

# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)

# Success
print "Training and testing split was successful."

def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a
        decision tree regressor trained on the input data [X, y]. """

    # Create cross-validation sets from the training data
    cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)

    # TODO: Create a decision tree regressor object
    regressor = DecisionTreeRegressor(random_state=0)

    # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
    params = {'max_depth': range(1, 11)}

    # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
    scoring_fnc = make_scorer(performance_metric)

    # TODO: Create the grid search object
    grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)

    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)

    # Return the optimal model after fitting the data
    return grid.best_estimator_

# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)

# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])

client_data = [[128, 3, 2, 0, 2005, 13], [150, 3, 2, 0, 2005, 13]]

# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print "Predicted selling price for Client {}'s 
home: ¥{:,.2f}".format(i+1, price)

"""
Explanation: Question 11 - Applicability
Briefly discuss whether the model you have constructed could be used in the real world.
Hint: answer a few questions, and give reasons for each conclusion:
- Is data collected in 1978 still applicable today?
- Are the features present in the data sufficient to describe a house?
- Is the model robust enough to guarantee consistent predictions?
- Can data collected in a big city like Boston be applied to other towns and rural areas?
Answer:
No: first, these are only Boston housing prices, which are not representative, and the data is very old;
No: a house's price also depends on other characteristics, such as the quality of its renovation;
Not robust enough: the data it was trained on no longer applies today, and the features present are not sufficient to describe a house;
Not applicable: the model is not robust enough, so it does not generalize.
Optional Question - Predicting Beijing Housing Prices
(The result of this question does not affect whether the project passes.) Through the practice above, you should now have a good grasp and command of some common machine-learning concepts. But modeling with 1970s Boston housing data is admittedly not very meaningful for us. You can now apply what you learned above to the Beijing housing dataset, bj_housing.csv.
Disclaimer: given that Beijing housing prices are directly affected by many factors such as the macro economy and policy adjustments, the predicted results are for reference only.
The features of this dataset are:
- Area: floor area of the house, in square meters
- Room: number of bedrooms
- Living: number of living rooms
- School: whether it is in a school district, 0 or 1
- Year: year the house was built
- Floor: the floor the unit is on
Target variable:
- Value: selling price of the house, in units of 10,000 RMB
You can refer to what you learned above and use this dataset to practice splitting and shuffling the data, defining a performance metric, training a model, evaluating model performance, and using grid search with cross-validation to tune parameters and select the best ones; compare the differences, and finally obtain the best model's prediction score on the validation set.
End of explanation
"""
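The grid search procedure described in Questions 7 and 8 can also be sketched without sklearn. The snippet below is only an illustration of the idea, not the project's implementation: the parameter names and the stand-in scoring function are made up for the example, and in the real project GridSearchCV plays this role, with the score coming from cross-validation via performance_metric.

```python
import itertools

# hypothetical parameter grid (names are illustrative only)
param_grid = {"max_depth": [1, 2, 3], "min_samples_split": [2, 4]}

def grid_search(score_fn, grid):
    """Try every combination in the grid and keep the best-scoring one."""
    keys = sorted(grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)
        if s > best_score:
            best_score, best_params = s, params
    return best_params, best_score

# stand-in scoring function: pretends depth 2 with split 4 validates best
def fake_score(params):
    return -abs(params["max_depth"] - 2) - abs(params["min_samples_split"] - 4) * 0.1

best, score = grid_search(fake_score, param_grid)
print(best)  # {'max_depth': 2, 'min_samples_split': 4}
```

Replacing fake_score with a cross-validated performance_metric score is exactly what combining grid search with cross-validation means.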
kiteena/Fall16-Team15
Assignment1/KristinaMilkovich-EarthquakeStats.ipynb
apache-2.0
import requests, StringIO, pandas as pd, json, re # function provided by example notebook "Analyze Precipitation Data" as a way to access your data with your credentials def get_file_content(credentials): """For given credentials, this functions returns a StringIO object containing the file content.""" url1 = ''.join([credentials['auth_url'], '/v3/auth/tokens']) data = {'auth': {'identity': {'methods': ['password'], 'password': {'user': {'name': credentials['username'],'domain': {'id': credentials['domain_id']}, 'password': credentials['password']}}}}} headers1 = {'Content-Type': 'application/json'} resp1 = requests.post(url=url1, data=json.dumps(data), headers=headers1) resp1_body = resp1.json() for e1 in resp1_body['token']['catalog']: if(e1['type']=='object-store'): for e2 in e1['endpoints']: if(e2['interface']=='public'and e2['region']==credentials['region']): url2 = ''.join([e2['url'],'/', credentials['container'], '/', credentials['filename']]) s_subject_token = resp1.headers['x-subject-token'] headers2 = {'X-Auth-Token': s_subject_token, 'accept': 'application/json'} resp2 = requests.get(url=url2, headers=headers2) return StringIO.StringIO(resp2.content) # credentials that let you access your Spark instance credentials_1 = { 'auth_url':'https://identity.open.softlayer.com', 'project':'object_storage_bc051acd_c1d9_4a5a_980e_e65cc0bf52e0', 'project_id':'53601bbabe47480ba27d8e3a19ca4d8c', 'region':'dallas', 'user_id':'156b02f442054495ba24a758665f9009', 'domain_id':'9b6eba4a400e417f8d53c45af6dab225', 'domain_name':'1139165', 'username':'admin_97f75959f3a9536e163c966f7788a05bdc836c59', 'password':"""eDDf[Dh5fhP6l(HK""", 'filename':'Earthquakequery.csv', 'container':'notebooks', 'tenantId':'s584-d33d2b3133b7f9-f4c49f0b0bc7' } # load data into a dataframe content_string = get_file_content(credentials_1) earthquakes_ds = pd.read_csv(content_string) # view the first five data rows earthquakes_ds.head() # view the last five data rows earthquakes_ds.tail() """ 
Explanation: Earthquake Data in North America Between 10/08/16 and 10/15/2016 This notebook computes basic analytics on earthquake data provided by http://earthquake.usgs.gov/earthquakes/search/. Date range for data: Starting: 2016-10-08 00:00:00 Ending: 2016-10-15 23:59:59 Location: [8.407, 78.207] Latitude [-172.266, -54.492] Longitude Minimum magnitude: 2.5 Analytics performed Set-Up and Load Data Query Data Visual Representation of Queries Example: Find the places with the most Earthquakes <a id="data_set"></a> 1. Set-Up and Load Data The data is uploaded in a .csv format (Earthquakesquery.csv, in this case) and loaded into dataframe object. Examples below should you how to view your data within the notebook. End of explanation """ earthquakes_ds = earthquakes_ds.set_index(earthquakes_ds["place"]) earthquakes_ds[["mag"]].sort_values(by = "mag", ascending=False).head(10) #Query that returns all the earthquakes in California. earthquakes_ds[earthquakes_ds['place'].str.contains("California")] """ Explanation: <a id="data"></a> 2. Query Data The two examples below allow you to look at specific parts of your data. 1. Get places with the highest magnitude earthquakes 2. 
Find all the earthquakes in California
End of explanation
"""

%matplotlib inline

# shows relationship between magnitude of an earthquake and depth of the epicenter
earthquakes_ds = earthquakes_ds.set_index(earthquakes_ds['depth']).sort_values(by = "depth", ascending=True)
mag = earthquakes_ds['mag']
depthplot = mag.plot(kind='bar', figsize=(22,5), title="Magnitude by Depth, Earthquakes in North America Oct 8, 2016 to Oct 15, 2016")
depthplot.set_ylabel("Magnitude using Log Scale")
depthplot.set_xlabel("Depth (meters)");

# maps longitude versus latitude to display earthquake clusters by location
earthquakes_ds = earthquakes_ds.set_index(earthquakes_ds['longitude']).sort_values(by = "longitude", ascending=True)
time = earthquakes_ds[["latitude"]].sort_values(by = "latitude", ascending=True)
ax = time.plot(figsize=(20,8), marker='o', linestyle='-', title="Earthquakes, Longitude versus Latitude in Current Dataset")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude");

"""
Explanation: <a id="visual"></a>
3. Visual Representations of Queries
Here, queries are plotted as different graphs to allow better understanding and visualization of the data. The matplotlib library is used extensively.
1. Below is a bar graph of earthquake magnitude by depth of epicenter.
2. 
Further down is a line graph of longitude versus latitude of all earthquakes (Note: certain regions on the graph contain clusters)
End of explanation
"""

earthquakes_ds = earthquakes_ds.set_index(earthquakes_ds["place"])
s = pd.Series(earthquakes_ds['place'])
section = s.str.split("of")
series_name = section.str.get(1)
set_series = set(series_name)
set_series = list(set_series)
series_name

from collections import Counter
series_count = Counter(series_name)
Counter(series_name)

# group keys and counts; there is likely a better method in the pandas lib
import matplotlib.pyplot as plt, numpy as np
names = []
counts = []
for item in series_count.keys():
    names.append(item)
    counts.append(series_count[item])

# earthquake pie - percentage of earthquakes between 10/8/16 and 10/15/16
plt.axis('equal')
plt.pie( counts, labels= names, colors=['blue', 'green', 'red', 'turquoise', 'magenta','yellow', 'purple'], autopct="%1.2f%%", radius=4);

"""
Explanation: <a id="places"></a>
4. Example: Find the places with the most Earthquakes
Parse data in the places column to determine the closest major city to every earthquake.
Get a unique list of cities and determine how many earthquakes were in each city.
See the Pie Chart below - each city's slice is proportional to how many earthquakes occurred during the dataset time frame.
The winner is Road Town, British Virgin Islands. Runner up is Mooreland, Oklahoma.
End of explanation
"""
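As the comment above suggests, pandas can do the grouping and counting directly. A small sketch of one such option, value_counts, using made-up place strings in the style of the dataset rather than the live data:

```python
import pandas as pd

# toy stand-ins for the dataset's 'place' column
places = pd.Series([
    "12km N of Road Town, British Virgin Islands",
    "5km SE of Mooreland, Oklahoma",
    "30km W of Road Town, British Virgin Islands",
])

# same split-on-"of" idea as the cell above, then count in one step
nearest = places.str.split("of").str.get(1).str.strip()
place_counts = nearest.value_counts()
print(place_counts)
```

place_counts is already sorted by frequency, so its index and values could feed the pie chart directly.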
ramseylab/networkscompbio
class27_booleannetwork_python3_template.ipynb
apache-2.0
import numpy

nodes = ['Cell Size', 'Cln3', 'MBF', 'Clb5,6', 'Mcm1/SFF', 'Swi5', 'Sic1', 'Clb1,2', 'Cdc20&Cdc14', 'Cdh1', 'Cln1,2', 'SBF']
N = len(nodes)

# define the transition matrix
a = numpy.zeros([N, N])
a[0,1] = 1
a[1,1] = -1
a[1,2] = 1
a[1,11] = 1
a[2,3] = 1
a[3,4] = 1
a[3,6] = -1
a[3,7] = 1
a[3,9] = -1
a[4,4] = -1
a[4,5] = 1
a[4,7] = 1
a[4,8] = 1
a[5,5] = -1
a[5,6] = 1
a[6,3] = -1
a[6,7] = -1
a[7,2] = -1
a[7,4] = 1
a[7,5] = -1
a[7,6] = -1
a[7,8] = 1
a[7,9] = -1
a[7,11] = -1
a[8,3] = -1
a[8,5] = 1
a[8,6] = 1
a[8,7] = -1
a[8,8] = -1
a[8,9] = 1
a[9,7] = -1
a[10,6] = -1
a[10,9] = -1
a[10,10] = -1
a[11,10] = 1
a = numpy.matrix(a)

# define the matrix of states for the fixed points
num_fp = 7
fixed_points = numpy.zeros([num_fp, N])
fixed_points[0, 6] = 1
fixed_points[0, 9] = 1
fixed_points[1, 10] = 1
fixed_points[1, 11] = 1
fixed_points[2, 2] = 1
fixed_points[2, 6] = 1
fixed_points[2, 9] = 1
fixed_points[3, 6] = 1
fixed_points[4, 2] = 1
fixed_points[4, 6] = 1
fixed_points[6, 9] = 1
fixed_points = numpy.matrix(fixed_points)

basin_counts = numpy.zeros(num_fp)
"""
Explanation: Class 27 - Boolean Networks
End of explanation
"""

def hamming_dist(x1, x2):
    return numpy.sum(numpy.abs(x1 - x2))

"""
Explanation: Define a function hamming_dist that gives the hamming distance between two states of the Boolean network (as numpy arrays of ones and zeroes)
End of explanation
"""

def evolve(state):
    result = numpy.array(a.transpose().dot(state))
    result = numpy.reshape(result, N)
    result[result > 0] = 1
    result[result == 0] = state[result == 0]
    result[result < 0] = 0
    return result

"""
Explanation: Define a function evolve that takes the network from one Boolean vector state to another Boolean vector state
End of explanation
"""

import itertools
import random

"""
Explanation: Write a function that runs 10,000 simulations of the network. 
In each simulation, the procedure is: - create a random binary vector of length 12, and call that vector state (make sure the zeroth element is set to zero) - iteratively call "evolve", passing the state to evolve and then updating state with the return value from evolve - check if state changes in the last call to evolve; if it does not, then you have reached a fixed point; stop iterating - compare the state to the rows of fixed_points; for the unique row i for which you find a match, increment the element in position i of basin_counts - print out basin_counts End of explanation """
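The procedure above can be sketched as follows. This is one possible solution outline, not the official one: in the notebook you would pass in the cell-cycle matrix a, the fixed_points matrix, and n_sims=10000; here a toy self-inhibiting 3-node network and its single fixed point stand in so the sketch runs on its own.

```python
import numpy as np

def evolve_net(a, state):
    # one synchronous update: positive input -> 1, negative -> 0, zero -> unchanged
    result = np.asarray(a.transpose().dot(state)).reshape(-1)
    new_state = state.copy()
    new_state[result > 0] = 1
    new_state[result < 0] = 0
    return new_state

def run_simulations(a, fixed_points, n_sims=10000, max_steps=100, seed=0):
    rng = np.random.RandomState(seed)
    n = a.shape[0]
    basin_counts = np.zeros(len(fixed_points), dtype=int)
    for _ in range(n_sims):
        state = rng.randint(0, 2, size=n)
        state[0] = 0                       # zeroth element ('Cell Size') held at zero
        for _ in range(max_steps):
            new_state = evolve_net(a, state)
            if np.array_equal(new_state, state):   # state unchanged: fixed point reached
                break
            state = new_state
        for i, fp in enumerate(fixed_points):      # which basin did we land in?
            if np.array_equal(state, np.asarray(fp).reshape(-1)):
                basin_counts[i] += 1
                break
    return basin_counts

# toy network so the sketch runs standalone: each node inhibits itself,
# so the all-zeros state is the unique fixed point
a_toy = -np.eye(3)
basin_counts = run_simulations(a_toy, [np.zeros(3)], n_sims=100)
print(basin_counts)
```

With the notebook's objects, `print run_simulations(a, fixed_points)` would report the basin sizes of the seven cell-cycle fixed points (the `np.asarray(fp).reshape(-1)` call also accepts rows of a numpy matrix).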
mari-linhares/tensorflow-workshop
code_samples/RNN/sinusoids/model.ipynb
apache-2.0
#!/usr/bin/env python # Copyright 2017 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # original code from: https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/blogs/timeseries # modified by: Marianne Linhares, monteirom@google.com, May 2017 # tensorflow import tensorflow as tf import tensorflow.contrib.learn as tflearn import tensorflow.contrib.layers as tflayers from tensorflow.contrib.learn.python.learn import learn_runner import tensorflow.contrib.metrics as metrics import tensorflow.contrib.rnn as rnn # visualization import seaborn as sns import matplotlib.pyplot as plt # helpers import numpy as np import csv # enable tensorflow logs tf.logging.set_verbosity(tf.logging.INFO) """ Explanation: Time series prediction using RNNs + Estimators This notebook illustrates how to: 1. Creating a Recurrent Neural Network in TensorFlow 2. 
Creating a Custom Estimator in tf.contrib.learn Dependecies End of explanation """ TRAIN = 10000 VALID = 50 TEST = 5 SEQ_LEN = 10 def create_time_series(): freq = (np.random.random()*0.5) + 0.1 # 0.1 to 0.6 ampl = np.random.random() + 0.5 # 0.5 to 1.5 x = np.sin(np.arange(0,SEQ_LEN) * freq) * ampl return x def to_csv(filename, N): with open(filename, 'w') as ofp: for lineno in range(0, N): seq = create_time_series() line = ",".join(map(str, seq)) ofp.write(line + '\n') # Creating datasets to_csv('train.csv', TRAIN) to_csv('valid.csv', VALID) to_csv('test.csv', TEST) # Example for i in range(5): sns.tsplot(create_time_series()) plt.show() """ Explanation: Generating time-series data Essentially a set of sinusoids with random amplitudes and frequencies. Each series will consist of 10 (SEQ_LEN) numbers. End of explanation """ DEFAULTS = [[0.0] for x in range(0, SEQ_LEN)] BATCH_SIZE = 20 TIMESERIES_COL = 'rawdata' N_OUTPUTS = 2 # in each sequence, 1-8 are features, and 9-10 is label N_INPUTS = SEQ_LEN - N_OUTPUTS # -------- read data and convert to needed format ----------- def read_dataset(filename, mode=tf.estimator.ModeKeys.TRAIN): def _input_fn(): num_epochs = 100 if mode == tf.estimator.ModeKeys.TRAIN else 1 # could be a path to one file or a file pattern. 
input_file_names = tf.train.match_filenames_once(filename) filename_queue = tf.train.string_input_producer( input_file_names, num_epochs=num_epochs, shuffle=True) reader = tf.TextLineReader() _, value = reader.read_up_to(filename_queue, num_records=BATCH_SIZE) value_column = tf.expand_dims(value, -1) print('readcsv={}'.format(value_column)) # all_data is a list of tensors all_data = tf.decode_csv(value_column, record_defaults=DEFAULTS) inputs = all_data[:len(all_data)-N_OUTPUTS] # first few values label = all_data[len(all_data)-N_OUTPUTS : ] # last few values # from list of tensors to tensor with one more dimension inputs = tf.concat(inputs, axis=1) label = tf.concat(label, axis=1) print(inputs) print('inputs={}'.format(inputs)) return {TIMESERIES_COL: inputs}, label # dict of features, label return _input_fn def get_train(): return read_dataset('train.csv', mode=tf.estimator.ModeKeys.TRAIN) def get_valid(): return read_dataset('valid.csv', mode=tf.estimator.ModeKeys.EVAL) def get_test(): return read_dataset('test.csv', mode=tf.estimator.ModeKeys.EVAL) """ Explanation: Read datasets End of explanation """ LSTM_SIZE = 3 # number of hidden layers in each of the LSTM cells def simple_rnn(features, targets, mode, params): # 0. Reformat input shape to become a sequence x = tf.split(features[TIMESERIES_COL], N_INPUTS, 1) # 1. configure the RNN lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0) outputs, _ = rnn.static_rnn(lstm_cell, x, dtype=tf.float32) # slice to keep only the last cell of the RNN outputs = outputs[-1] #print 'last outputs={}'.format(outputs) # output is result of linear activation of last layer of RNN weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS])) bias = tf.Variable(tf.random_normal([N_OUTPUTS])) predictions = tf.matmul(outputs, weight) + bias # 2. 
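The splitting convention used by _input_fn above — the first N_INPUTS values of each CSV row are the features, the last N_OUTPUTS are the labels — can be checked with a plain-numpy sketch (the toy batch below is made up):

```python
import numpy as np

SEQ_LEN, N_OUTPUTS = 10, 2
N_INPUTS = SEQ_LEN - N_OUTPUTS

batch = np.arange(20, dtype=float).reshape(2, SEQ_LEN)  # two toy sequences
inputs, labels = batch[:, :N_INPUTS], batch[:, N_INPUTS:]
print(inputs.shape, labels.shape)  # (2, 8) (2, 2)
```

The TensorFlow pipeline does the same slicing with tf.decode_csv and tf.concat, just lazily inside the input function.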
Define the loss function for training/evaluation #print 'targets={}'.format(targets) #print 'preds={}'.format(predictions) loss = tf.losses.mean_squared_error(targets, predictions) eval_metric_ops = { "rmse": tf.metrics.root_mean_squared_error(targets, predictions) } # 3. Define the training operation/optimizer train_op = tf.contrib.layers.optimize_loss( loss=loss, global_step=tf.contrib.framework.get_global_step(), learning_rate=0.01, optimizer="SGD") # 4. Create predictions predictions_dict = {"predicted": predictions} # 5. return ModelFnOps return tflearn.ModelFnOps( mode=mode, predictions=predictions_dict, loss=loss, train_op=train_op, eval_metric_ops=eval_metric_ops) def serving_input_fn(): feature_placeholders = { TIMESERIES_COL: tf.placeholder(tf.float32, [None, N_INPUTS]) } features = { key: tf.expand_dims(tensor, -1) for key, tensor in feature_placeholders.items() } return tflearn.utils.input_fn_utils.InputFnOps( features, None, feature_placeholders ) """ Explanation: RNN Model End of explanation """ nn = tf.contrib.learn.Estimator(model_fn=simple_rnn) # ---------- Training ------------- print('---------- Training ------------') nn.fit(input_fn=get_train(), steps=10000) # ---------- Evaluating ------------- print('---------- Evaluating ------------') ev = nn.evaluate(input_fn=get_valid()) print(ev) # ---------- Testing ---------------- print('---------- Testing ------------') predictions = [] for p in nn.predict(input_fn=get_test()): print(p) predictions.append(p["predicted"]) """ Explanation: Running model End of explanation """ # read test csv def read_csv(filename): with open(filename, 'rt') as csvfile: reader = csv.reader(csvfile) data = [] for row in reader: data.append([float(x) for x in row]) return data test_data = read_csv('test.csv') # update predictions with features # preds = test_data[:INPUTS] concat with predictions preds = [] for i in range(len(predictions)): preds.append(list(test_data[i][:N_INPUTS]) + list(predictions[i])) # visualizing 
predictions for d in test_data: sns.tsplot(d[N_INPUTS:], color="black") for p in preds: sns.tsplot(p[N_INPUTS:], color="red") plt.show() # visualizing all the series for d in test_data: sns.tsplot(d, color="black") for p in preds: sns.tsplot(p, color="red") plt.show() """ Explanation: Visualizing predictions End of explanation """
bgroveben/python3_machine_learning_projects
learn_kaggle/pandas/creating_reading_writing_workbook.ipynb
mit
import pandas as pd pd.set_option('max_rows', 5) from learntools.advanced_pandas.creating_reading_writing import * """ Explanation: Creating, reading, and writing workbook Introduction and relevant resources This is the first notebook in the Learn Pandas track. These exercises assume some prior experience with Pandas. Each page has a list of relevant resources that you can use for reference, and the top item in each list has been chosen specifically to help you with the exercises on that page. The first step in most data science projects is reading in the data. In this section, you will be using pandas to create Series and DataFrame objects, both by hand and by reading data files. The Relevant Resources, as promised: Creating, Reading and Writing Reference * General Pandas Cheat Sheet* Setup End of explanation """ check_q1(pd.DataFrame()) """ Explanation: Checking Answers You can check your answers in each of the exercises that follow using the check_qN function provided in the code cell above by replacing N with the number of the exercise. For example here's how you would check an incorrect answer to exercise 1: End of explanation """ data = {'Apples': [30], 'Bananas': [21]} pd.DataFrame(data=data) df2 = pd.DataFrame([(30, 21)], columns=['Apples', 'Bananas']) df2 answer_q1() """ Explanation: A correct answer would return True. If you capitulate, run print(answer_qN())). 
Exercises Exercise 1 Create a DataFrame: End of explanation """ df2x2 = pd.DataFrame([[35, 21], [41, 34]], index=['2017 Sales', '2018 Sales'], columns=['Apples', 'Bananas']) df2x2 answer_q2() """ Explanation: Exercise 2 Create a 2x2 DataFrame: End of explanation """ pd.Series({'Flour': '4 cups', 'Milk': '1 cup', 'Eggs': '2 large', 'Spam': '1 can'}, name='Dinner') answer_q3() """ Explanation: Exercise 3 Create a Series: End of explanation """ wine_reviews = pd.read_csv('inputs/wine-reviews/winemag-data_first150k.csv', index_col=0) wine_reviews.head() wine_reviews.tail() wine_reviews.shape wine_reviews.info() dir(wine_reviews) print(wine_reviews) wine_reviews.items answer_q4() """ Explanation: Exercise 4 Read data from a .csv file into a DataFrame. End of explanation """ wic = pd.read_excel('inputs/publicassistance/xls_files_all/WICAgencies2014ytd.xls', sheetname='Pregnant Women Participating') wic.info() wic.head() answer_q5() """ Explanation: Exercise 5 Read data from a .xls sheet into a pandas DataFrame. End of explanation """ q6_df = pd.DataFrame({'Cows': [12, 20], 'Goats': [22, 19]}, index=['Year 1', 'Year 2']) q6_df.to_csv('cows_and_goats.csv') answer_q6() """ Explanation: Exercise 6 Save a DataFrame as a .csv file. End of explanation """
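The exercises above round-trip naturally: a DataFrame written with to_csv can be read back with read_csv and index_col=0. A small sketch using an in-memory buffer as a stand-in for 'cows_and_goats.csv':

```python
import io
import pandas as pd

df = pd.DataFrame({'Cows': [12, 20], 'Goats': [22, 19]}, index=['Year 1', 'Year 2'])

buf = io.StringIO()          # in-memory stand-in for 'cows_and_goats.csv'
df.to_csv(buf)
buf.seek(0)

round_trip = pd.read_csv(buf, index_col=0)
print(round_trip)
```

index_col=0 tells read_csv to rebuild the row labels from the first CSV column, which is where to_csv wrote the index.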
swara-salih/Portfolio
Web Scraping and Predicting Data Science Salaries/Web Scraping and Predicting Data Science Salaries.ipynb
mit
#Using a random forest regressor, with one other classifier.
url = "http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=New+York&start=10"

import requests
import bs4
from bs4 import BeautifulSoup
import urllib

html = urllib.urlopen(url).read()
b = BeautifulSoup(html, 'html.parser', from_encoding="utf-8")
#http://stackoverflow.com/questions/9907492/how-to-get-firefox-working-with-selenium-webdriver-on-mac-osx

## YOUR CODE HERE
b.find_all('span', {'class': 'summary'})

#List of summaries for New York 20,000.
for entry in b.find_all('span', {'class': 'summary'}):
    print entry.text
"""
Explanation: Web Scraping for Indeed.com & Predicting Salaries
In this project, we will practice two major skills: collecting data by scraping a website and then building a binary classifier. We are going to collect salary information on data science jobs in a variety of markets. Then, using the location, title, and summary of the job, we will attempt to predict the salary of the job. For job posting sites, this would be extraordinarily useful. While most listings DO NOT come with salary information (as you will see in this exercise), being able to extrapolate or predict the expected salaries from other listings can help guide negotiations.
Normally, we could use regression for this task; however, we will convert this problem into classification and use a random forest regressor, as well as another classifier of your choice: either logistic regression, SVM, or KNN.
Question: Why would we want this to be a classification problem?
Answer: While more precision may be better, there is a fair amount of natural variance in job salaries - predicting a range may be useful.
Therefore, the first part of the assignment will be focused on scraping Indeed.com. In the second, we'll focus on using listings with salary information to build a model and predict additional salaries.
Scraping job listings from Indeed.com
We will be scraping job listings from Indeed.com using BeautifulSoup. 
Luckily, Indeed.com is a simple text page where we can easily find relevant entries.
First, look at the source of an Indeed.com page: (http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=New+York&start=10)
Notice, each job listing is underneath a div tag with a class name of result. We can use BeautifulSoup to extract those.
Setup a request (using requests) to the URL below. Use BeautifulSoup to parse the page and extract all results (HINT: Look for div tags with class name result)
The URL here has many query parameters
- q for the job search
- This is followed by "+20,000" to return results with salaries (or expected salaries >$20,000)
- l for a location
- start for what result number to start on
End of explanation
"""

def extract_job_from_result(result):
    url = result
    html = urllib.urlopen(url).read()
    b = BeautifulSoup(html, 'html.parser', from_encoding="utf-8")
    jobs = []
    for entry in b.find_all('h2', {'class': 'jobtitle'}):
        jobs.append(entry.text.strip())
    return jobs

extract_job_from_result('http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=New+York&start=10')

def extract_location_from_result(result):
    url = result
    html = urllib.urlopen(url).read()
    b = BeautifulSoup(html, 'html.parser', from_encoding="utf-8")
    locations = []
    for entry in b.find_all('span', {'class': 'location'}):
        locations.append(entry.text.strip())
    return locations

extract_location_from_result('https://www.indeed.com/jobs?q=data+scientist+$20,000&l=New+York&start=10')

def extract_company_from_result(result):
    url = result
    html = urllib.urlopen(url).read()
    b = BeautifulSoup(html, 'html.parser', from_encoding="utf-8")
    companies = []
    for entry in b.find_all('span', {'class': 'company'}):
        companies.append(entry.text.strip())
    return companies

extract_company_from_result('https://www.indeed.com/jobs?q=data+scientist+$20,000&l=New+York&start=10')

#The salary is available in a nobr element inside of a td element with class='snip'. 
def extract_salary_from_result(result):
    url = result
    html = urllib.urlopen(url).read()
    b = BeautifulSoup(html, 'html.parser', from_encoding="utf-8")
    for entry in b.find_all('td', {'class':'snip'}):
        try:
            print entry.find('nobr').renderContents()
        except:
            print 'NONE LISTED'
extract_salary_from_result('http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=New+York&start=10')
"""
Explanation: Let's look at one result more closely. A single result looks like:
```html
<div class=" row result" data-jk="2480d203f7e97210" data-tn-component="organicJob" id="p_2480d203f7e97210" itemscope="" itemtype="http://schema.org/JobPosting">
<h2 class="jobtitle" id="jl_2480d203f7e97210">
<a class="turnstileLink" data-tn-element="jobTitle" onmousedown="return rclk(this,jobmap[0],1);" rel="nofollow" target="_blank" title="AVP/Quantitative Analyst">AVP/Quantitative Analyst</a>
</h2>
<span class="company" itemprop="hiringOrganization" itemtype="http://schema.org/Organization">
<span itemprop="name">
<a href="/cmp/Alliancebernstein?from=SERP&campaignid=serp-linkcompanyname&fromjk=2480d203f7e97210&jcid=b374f2a780e04789" target="_blank">
AllianceBernstein</a></span>
</span>
<tr>
<td class="snip">
<nobr>$117,500 - $127,500 a year</nobr>
<div>
<span class="summary" itemprop="description">
Conduct quantitative and statistical research as well as portfolio management for various investment portfolios. Collaborate with Quantitative Analysts and</span>
</div>
</div>
</td>
</tr>
</table>
</div>
```
While this has some more verbose elements removed, we can see that there is some structure to the above:
- The salary is available in a nobr element inside of a td element with class='snip'.
- The title of a job is in a link with class set to jobtitle and a data-tn-element="jobTitle".
- The location is set in a span with class='location'.
- The company is set in a span with class='company'.
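One way to make the upcoming extractor functions robust is to route every lookup through a single guarded helper. The sketch below is illustrative only — `extract_field` and its `'NA'` default are our own names, not part of the assignment:

```python
def extract_field(result, finder, default='NA'):
    """Apply a finder callable to a result; fall back to a default on failure.

    Guards against missing tags (AttributeError when .find() returns None)
    and against empty extracted values.
    """
    try:
        value = finder(result)
    except AttributeError:
        return default
    return value if value else default
```

With BeautifulSoup this could be called as, e.g., `extract_field(result, lambda r: r.find('span', {'class': 'location'}).text)`, so a missing span yields `'NA'` instead of raising.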
Write 4 functions to extract each item: location, company, job, and salary.
Example
```python
def extract_location_from_result(result):
    return result.find ...
```
- Make sure these functions are robust and can handle cases where the data/field may not be available.
Remember to check if a field is empty or None before attempting to call methods on it
Remember to use try/except if you anticipate errors.
Test the functions on the results above and simple examples
End of explanation
"""
YOUR_CITY = 'Boston'
url_template = "http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l={}&start={}"
max_results_per_city = 10000 # Set this to a high-value (5000) to generate more results.
# Crawling more results will also take much longer. First test your code on a small number of results and then expand.
results = []
ny = []
chic = []
sf = []
aus = []
sea = []
la = []
phil = []
atl = []
dal = []
pitt = []
port = []
ph = []
den = []
hou = []
mi = []
for city in set(['New+York', 'Chicago', 'San+Francisco', 'Austin', 'Seattle', 'Los+Angeles', 'Philadelphia', 'Atlanta', 'Dallas', 'Pittsburgh', 'Portland', 'Phoenix', 'Denver', 'Houston', 'Miami', YOUR_CITY]):
    for start in range(0, max_results_per_city, 10):
        # Grab the results from the request (as above)
        url = "http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=" + city +"&start="+ str(start)
        # Make a list for each city
        if city=='New+York': ny.append(url)
        if city=='Chicago': chic.append(url)
        if city=='San+Francisco': sf.append(url)
        if city=='Austin': aus.append(url)
        if city=='Seattle': sea.append(url)
        if city=='Los+Angeles': la.append(url)
        if city=='Philadelphia': phil.append(url)
        if city=='Atlanta': atl.append(url)
        if city=='Dallas': dal.append(url)
        if city=='Pittsburgh': pitt.append(url)
        if city=='Portland': port.append(url)
        if city=='Phoenix': ph.append(url)
        if city=='Denver': den.append(url)
        if city=='Houston': hou.append(url)
        if city=='Miami': mi.append(url)
        # Make a full set of results
# just in case
        results.append(url)
"""
Explanation: Now, to scale up our scraping, we need to accumulate more results. We can do this by examining the URL above.
- "http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=New+York&start=10"
There are two query parameters here we can alter to collect more results, the l=New+York and the start=10. The first controls the location of the results (so we can try a different city). The second controls where in the results to start and gives 10 results (thus, we can keep incrementing by 10 to go further in the list).
Complete the following code to collect results from multiple cities and starting points.
Enter your city below to add it to the search
Remember to convert your salary to U.S. Dollars to match the other cities if the currency is different
End of explanation
"""
import pandas as pd
job_details = pd.DataFrame(columns=['location','title','company', 'salary'])
#Take each result entry and extract info from it.
for result in results:
    url = result
    html = urllib.urlopen(url).read()
    b = BeautifulSoup(html, 'html.parser', from_encoding="utf-8")  # re-parse each page
    for entry in b.find_all('div', {'class':' row result'}):
        # search within this listing, not the whole page
        try:
            location = entry.find('span', {'class':'location'}).text
        except:
            location = 'NA'
        try:
            title = entry.find('h2', {'class':'jobtitle'}).text
        except:
            title = 'NA'
        try:
            company = entry.find('span', {'class':'company'}).text
        except:
            company = 'NA'
        try:
            salary = entry.find('td', {'class':'snip'}).find('nobr').renderContents()
        except:
            salary = 'NONE LISTED'
        job_details.loc[len(job_details)]=[location, title, company, salary]
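Before reaching for pandas, the row-building step above can be sketched in pure Python: reduce each parsed listing to one dict with the four fields, defaulting anything missing. `rows_from_listings` is our own illustrative helper (it assumes each listing has already been parsed into a dict), not part of the assignment:

```python
def rows_from_listings(listings):
    """Collect one dict per parsed listing with the 4 expected columns.

    Missing fields default to 'NA'. pd.DataFrame(rows) then builds the
    4-column table directly from the returned list.
    """
    rows = []
    for item in listings:
        rows.append({
            'location': item.get('location', 'NA'),
            'title': item.get('title', 'NA'),
            'company': item.get('company', 'NA'),
            'salary': item.get('salary', 'NA'),
        })
    return rows
```

Appending dicts to a list and calling `pd.DataFrame` once is also faster than growing a frame row-by-row with `.loc`.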
End of explanation
"""
job_details = job_details[job_details.salary != 'NONE LISTED']
job_details = job_details.reset_index(drop=True)
## YOUR CODE HERE
job_details = job_details[job_details.salary.str.contains("a month") == False]
job_details = job_details[job_details.salary.str.contains("an hour") == False]
job_details = job_details[job_details.salary.str.contains("a week") == False]
job_details = job_details[job_details.salary.str.contains("a day") == False]
job_details
"""
Explanation: Lastly, we need to clean up salary data.
Only a small number of the scraped results have salary information - only these will be used for modeling.
Some of the salaries are not yearly but hourly or weekly; these will not be useful to us for now.
Some of the entries may be duplicated.
The salaries are given as text and usually with ranges.
Keep only the entries with annual salaries, by filtering out the entries without salaries or with salaries that are not yearly (those that refer to an hour or a week).
Also, remove duplicate entries.
End of explanation
"""
## YOUR CODE HERE
job_details['salary'] = (job_details['salary'].replace( '[\a year,)]','', regex=True))
job_details['salary'] = (job_details['salary'].replace( '[\$,)]','', regex=True))
job_details['company'] = (job_details['company'].replace( '[\\n,)]','', regex=True))
job_details['company'] = (job_details['company'].replace( '[\\n\n,)]','', regex=True))
job_details['title'] = (job_details['title'].replace( '[\\n,)]','', regex=True))
#Checkpoint: keep a copy before deduplicating.
all_jobs = job_details.copy()
job_details_2 = job_details.drop_duplicates()
#533 results. Left with 34 in total.
#Need to convert the ranges.
job_details_2['salary'] = (job_details_2['salary'].replace( '[\-,)]',' ', regex=True))
job_details_2 = job_details_2.reset_index(drop=True)
salaries = job_details_2.salary.str.split(' ', expand=True)
salaries = salaries.astype(float)
salaries.dtypes
salaries = salaries.rename(columns = {0:'salary_1', 1:'salary_2'})
salaries.salary_2 = salaries.salary_2.fillna(salaries.salary_1)
final_salary = salaries.median(axis=1)
final_salary = pd.DataFrame(final_salary)
final_salary = final_salary.rename(columns = {0:'final_salary'})
final_salary.head()
jobs = pd.concat([job_details_2, final_salary], axis=1)
jobs = jobs.drop('salary', axis=1)
jobs
jobs.dtypes
"""
Explanation: Write a function that takes a salary string and converts it to a number, averaging a salary range if necessary
End of explanation
"""
# Export to csv (the file name is arbitrary)
jobs.to_csv('indeed_salaries.csv', index=False)
"""
Explanation: Save your results as a CSV
End of explanation
"""
## YOUR CODE HERE
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import train_test_split
rfc = RandomForestClassifier()
knn = KNeighborsClassifier()
"""
Explanation: Predicting salaries using Random Forests + Another Classifier
Load in the data of scraped salaries
End of explanation
"""
# Median salary is 72,500 per year
# Should do the mean or the 50th percentile instead as there aren't that many salaries above 72,500.
jobs.final_salary.describe()
#Upper 50% above 108,750
jobs['high_or_low'] = jobs['final_salary'].map(lambda x: 1 if x > 108750 else 0)
jobs
jobs_with_locations = pd.concat([jobs, pd.get_dummies(jobs.location)], axis=1)
jobs_with_locations.head(3)
"""
Explanation: We want to predict a binary variable - whether the salary was low or high.
Compute the median salary and create a new binary variable that is true when the salary is high (above the median)
We could also perform Linear Regression (or any regression) to predict the salary value here.
Instead, we are going to convert this into a binary classification problem, by predicting two classes, HIGH vs LOW salary.
While performing regression may be better, performing classification may help remove some of the noise of the extreme salaries.
We don't have to choose the median as the splitting point - we could also split on the 75th percentile or any other reasonable breaking point. In fact, the ideal scenario may be to predict many levels of salaries.
End of explanation
"""
## YOUR CODE HERE
X_1 = jobs_with_locations.drop(jobs_with_locations.columns[[0,1,2,3,4]], axis=1)
y_1 = jobs_with_locations.high_or_low
# Hold out a test set before fitting.
X_train, X_test, y_train, y_test = train_test_split(X_1, y_1, test_size=0.5, random_state=50)
rfc.fit(X_train,y_train)
#Accuracy score.
rfc.score(X_train,y_train)
from sklearn.cross_validation import cross_val_score
cross_val_score(rfc, X_train, y_train)
#The accuracy is high, but the cross validation score expresses substantially less
# confidence. May be due to the smallness of the sample.
#Try with KNN
knn.fit(X_train, y_train)
knn.score(X_train,y_train)
#Less accurate than with rfc.
cross_val_score(knn, X_train, y_train)
#Same cross val score as with rfc.
"""
Explanation: Thought experiment: What is the baseline accuracy for this model? It is a measure of how well our selected features will be at predicting a high or low salary.
Create a Random Forest model to predict High/Low salary. Start by ONLY using the location as a feature.
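The baseline in the thought experiment above is just the accuracy of always guessing the most common class. A minimal sketch (our own helper name, not assignment code):

```python
from collections import Counter

def baseline_accuracy(labels):
    """Accuracy of always predicting the most common class in labels."""
    if not labels:
        raise ValueError('labels must be non-empty')
    most_common_count = Counter(labels).most_common(1)[0][1]
    return float(most_common_count) / len(labels)
```

With a split at the median, the two classes are roughly balanced, so the baseline is about 0.5 - a useful model should beat that.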
End of explanation
"""
## YOUR CODE HERE
senior_variable = jobs['title'].map(lambda x: 1 if 'Senior' in x else 0)
senior_variable = pd.DataFrame(senior_variable)
senior_variable = senior_variable.rename(columns = {'title':'senior_variable'})
jobs_with_seniors = pd.concat([jobs, senior_variable], axis=1)
jobs_with_seniors
X_2 = jobs_with_seniors.drop(jobs_with_seniors.columns[[0,1,2,3,4]], axis=1)
y_2 = jobs_with_seniors.high_or_low
X_train, X_test, y_train, y_test = train_test_split(X_2, y_2, test_size=0.5, random_state=50)
rfc.fit(X_train,y_train)
rfc.score(X_train,y_train)
#Not the most accurate for "senior" in job title, but could be because of the small size of sample
cross_val_score(rfc, X_train, y_train)
# "Senior" not highly predictive of high salaries, at least from this dataset
knn.fit(X_train,y_train)
knn.score(X_train,y_train)
cross_val_score(knn, X_train, y_train)
# Saw a few high salaries with "Quantitative" in there--should test for that
quant_variable = jobs['title'].map(lambda x: 1 if 'Quantitative' in x else 0)
quant_variable = pd.DataFrame(quant_variable)
quant_variable = quant_variable.rename(columns = {'title':'quant_variable'})
jobs_with_quant = pd.concat([jobs, quant_variable], axis=1)
X_3 = jobs_with_quant.quant_variable
y_3 = jobs_with_quant.high_or_low
X_train, X_test, y_train, y_test = train_test_split(X_3, y_3, test_size=0.5, random_state=50)
rfc.fit(X_train,y_train)
rfc.score(X_train,y_train)
#Accuracy score likely lower than usual because of the limited sample from the scraping.
cross_val_score(rfc, X_train,y_train)
# Not predictive according to cross val score.
knn.fit(X_train,y_train)
knn.score(X_train,y_train)
cross_val_score(knn,X_train,y_train)
"""
Explanation: Create a few new variables in your dataframe to represent interesting features of a job title.
For example, create a feature that represents whether 'Senior' is in the title or whether 'Manager' is in the title.
Then build a new Random Forest with these features. Do they add any value?
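The per-keyword lambdas used for 'Senior' and 'Quantitative' can be generalized into one helper that emits a 0/1 flag per keyword. This is an illustrative sketch (the helper name and flag names are ours):

```python
def title_keyword_flags(title, keywords=('Senior', 'Manager', 'Quantitative')):
    """Return one 0/1 flag per keyword found in a job title.

    Mirrors the one-lambda-per-keyword cells above, but in a single pass.
    """
    return {kw.lower() + '_flag': (1 if kw in title else 0) for kw in keywords}
```

Applied with `jobs['title'].map(...)` per keyword (or `apply` + `pd.DataFrame`), this yields the same dummy columns without repeating the boilerplate.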
End of explanation """ #I've already turned them to dummies, so I'm not sure scaling would be beneficial. ## YOUR CODE HERE """ Explanation: Rebuild this model with scikit-learn. You can either create the dummy features manually or use the dmatrix function from patsy Remember to scale the feature variables as well! End of explanation """ ## YOUR CODE HERE """ Explanation: Use cross-validation in scikit-learn to evaluate the model above. Evaluate the accuracy of the model. End of explanation """ ## YOUR CODE HERE from sklearn.ensemble import RandomForestRegressor rfr = RandomForestRegressor() #X_1, y_1: location; X_2, y_2: senior; X_3, y_3: quant #First for location X_train, X_test, y_train, y_test = train_test_split(X_1, y_1, test_size=0.5, random_state=50) rfr.fit(X_train,y_train) import matplotlib.pyplot as plt rfr.score(X_train,y_train) #A slightly improved score from others. cross_val_score(rfr, X_train, y_train, cv=5, scoring='mean_squared_error') #All negative scores. #Now for "seniors" X_train, X_test, y_train, y_test = train_test_split(X_2, y_2, test_size=0.5, random_state=50) rfr.fit(X_train,y_train) rfr.score(X_train,y_train) #Not accurate at all. cross_val_score(rfr, X_train, y_train, cv=5, scoring='mean_squared_error') #Again, negative scores. #Maybe Quantitative could work. X_train, X_test, y_train, y_test = train_test_split(X_3, y_3, test_size=0.5, random_state=50) rfr.fit(X_train,y_train) rfr.score(X_train, y_train) #Also very low. cross_val_score(rfr, X_train, y_train, cv=5, scoring='mean_squared_error') """ Explanation: Random Forest Regressor Let's try treating this as a regression problem. Train a random forest regressor on the regression problem and predict your dependent. Evaluate the score with a 5-fold cross-validation Do a scatter plot of the predicted vs actual scores for each of the 5 folds, do they match? End of explanation """
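A note on the "all negative scores" observations in the regressor cells above: scikit-learn scorers follow a greater-is-better convention, so `scoring='mean_squared_error'` in `cross_val_score` reports the *negated* MSE - negative values are expected, not a sign the model failed. For reference, plain MSE in pure Python (our own helper, not assignment code):

```python
def mse(y_true, y_pred):
    """Plain mean squared error; sklearn's MSE scorer reports its negation."""
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / float(len(y_true))
```

So a cross-validation score of, say, -950000000 corresponds to an MSE of 950000000, i.e. a typical error of roughly $30,000 - plausible given the tiny sample.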
---
repo: GoogleCloudPlatform/vertex-ai-samples
path: notebooks/community/explainable_ai/SDK_Custom_Container_XAI.ipynb
license: apache-2.0
---
%%writefile requirements.txt joblib~=1.0 numpy~=1.20 scikit-learn~=0.24 google-cloud-storage>=1.26.0,<2.0.0dev # Required in Docker serving container %pip install -U --user -r requirements.txt # For local FastAPI development and running %pip install -U --user "uvicorn[standard]>=0.12.0,<0.14.0" fastapi~=0.63 # Vertex SDK for Python %pip install -U --user google-cloud-aiplatform """ Explanation: <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/explainable_ai/SDK_Custom_Container_XAI.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/explainable_ai/SDK_Custom_Container_XAI.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/explainable_ai/SDK_Custom_Container_XAI.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> Overview This tutorial walks through building a custom container to serve a scikit-learn model on Vertex AI Prediction. You will use the FastAPI Python web server framework to create a prediction and health endpoint. You will also enable explanations for the endpoint Dataset This tutorial uses R.A. Fisher's Iris dataset, a small dataset that is popular for trying out machine learning techniques. 
Each instance has four numerical features, which are different measurements of a flower, and a target label that marks it as one of three types of iris: Iris setosa, Iris versicolour, or Iris virginica. This tutorial uses the copy of the Iris dataset included in the scikit-learn library. Objective The goal is to: - Train a model that uses a flower's measurements as input to predict what type of iris it is. - Save the model and its serialized pre-processor - Build a FastAPI server to handle predictions and health checks - Build a custom container with model artifacts - Upload and deploy custom container to Vertex AI Prediction w/ explanability enabled This tutorial focuses more on deploying this model with Vertex AI than on the design of the model itself. Costs This tutorial uses billable components of Google Cloud: Vertex AI Learn about Vertex AI pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Vertex AI Workbench Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: Docker Git Google Cloud SDK (gcloud) Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the Cloud SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. 
Install additional packages Install additional package dependencies not installed in your notebook environment, such as NumPy, Scikit-learn, FastAPI, Uvicorn, and joblib. Use the latest major GA version of each package. End of explanation """ # Automatically restart kernel after installs import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel After you install the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation """ # Get your Google Cloud project ID from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null try: PROJECT_ID = shell_output[0] except IndexError: PROJECT_ID = None print("Project ID:", PROJECT_ID) """ Explanation: Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API and Compute Engine API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! or % as shell commands, and it interpolates Python variables with $ or {} into these commands. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. End of explanation """ if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "[your-project-id]" # @param {type:"string"} """ Explanation: Otherwise, set your project ID here. 
End of explanation
"""
MODEL_ARTIFACT_DIR = "custom-container-explainability-model"  # @param {type:"string"}
REPOSITORY = "custom-container-explainability"  # @param {type:"string"}
IMAGE = "sklearn-fastapi-server"  # @param {type:"string"}
MODEL_DISPLAY_NAME = "sklearn-explainable-custom-container"  # @param {type:"string"}
"""
Explanation: MODEL_ARTIFACT_DIR - Folder directory path to your model artifacts within a Cloud Storage bucket, for example: "my-models/fraud-detection/trial-4"
REPOSITORY - Name of the Artifact Repository to create or use.
IMAGE - Name of the container image that will be pushed.
MODEL_DISPLAY_NAME - Display name of Vertex AI Model resource.
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
After you train the model locally, you will upload model artifacts to a Cloud Storage bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions and explanations.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
You may also change the REGION variable, which is used for operations throughout the rest of this notebook. We suggest that you choose a region where Vertex AI services are available.
End of explanation
"""
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")  # used to make the default bucket name unique

BUCKET_URI = "gs://[your-bucket-name]"  # @param {type:"string"}
REGION = "[your-region]"  # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
    BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
if REGION == "[your-region]":
    REGION = "us-central1"
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
!
gsutil ls -al $BUCKET_URI """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ %mkdir app """ Explanation: Train and store model with pre-processor After training completes, the steps below will save your trained model as a joblib (.joblib) file and upload to Cloud Storage Make a directory to store all the outputs End of explanation """ %cd app/ import joblib import numpy as np from sklearn.datasets import load_iris from sklearn.linear_model import LogisticRegression class IrisClassifier: def __init__(self): self.X, self.y = load_iris(return_X_y=True) self.clf = self.train_model() self.iris_type = {0: "setosa", 1: "versicolor", 2: "virginica"} def train_model(self) -> LogisticRegression: return LogisticRegression( solver="lbfgs", max_iter=1000, multi_class="multinomial" ).fit(self.X, self.y) def predict(self, features: dict): X = [ features["sepal_length"], features["sepal_width"], features["petal_length"], features["petal_width"], ] prediction = self.clf.predict_proba([X]) print(prediction) return { "class": self.iris_type[np.argmax(prediction)], "probability": round(max(prediction[0]), 2), } model_local = IrisClassifier() joblib.dump(model_local, "model.joblib") %cd .. 
""" Explanation: Train a model locally using the iris dataset to classify flowers and return a probability End of explanation """ model_local.predict( features={ "sepal_length": 4.8, "sepal_width": 3, "petal_length": 1.4, "petal_width": 0.3, } ) """ Explanation: Test the model locally End of explanation """ instances = [ {"sepal_length": 4.8, "sepal_width": 3, "petal_length": 1.4, "petal_width": 0.3}, {"sepal_length": 6.2, "sepal_width": 3.4, "petal_length": 5.4, "petal_width": 2.3}, ] """ Explanation: Create instances for testing predictions in Docker and on Vertex AI End of explanation """ %%writefile app/classifier.py from sklearn.datasets import load_iris from sklearn.linear_model import LogisticRegression import numpy as np from fastapi import FastAPI, Request class IrisClassifier: def __init__(self): self.X, self.y = load_iris(return_X_y=True) self.clf = self.train_model() self.iris_type = { 0: 'setosa', 1: 'versicolor', 2: 'virginica' } def train_model(self) -> LogisticRegression: return LogisticRegression(solver='lbfgs', max_iter=1000, multi_class='multinomial').fit(self.X, self.y) def predict(self, features: dict): X = [features['sepal_length'], features['sepal_width'], features['petal_length'], features['petal_width']] prediction = self.clf.predict_proba([X]) return {'class': self.iris_type[np.argmax(prediction)], 'probability': round(max(prediction[0]), 2)} %%writefile classifier.py from sklearn.datasets import load_iris from sklearn.linear_model import LogisticRegression import numpy as np class IrisClassifier: def __init__(self): self.X, self.y = load_iris(return_X_y=True) self.clf = self.train_model() self.iris_type = { 0: 'setosa', 1: 'versicolor', 2: 'virginica' } def train_model(self) -> LogisticRegression: return LogisticRegression(solver='lbfgs', max_iter=1000, multi_class='multinomial').fit(self.X, self.y) def predict(self, features: dict): X = [features['sepal_length'], features['sepal_width'], features['petal_length'], features['petal_width']] 
prediction = self.clf.predict_proba([X]) return {'class': self.iris_type[np.argmax(prediction)], 'probability': round(max(prediction[0]), 2)} """ Explanation: Create the code for the classifier used to return predictions End of explanation """ %cd app with open("__init__.py", "wb") as model_f: pass %cd .. with open("__init__.py", "wb") as model_f: pass """ Explanation: Add init.py to main folder and app folder to enable import of .py files End of explanation """ !gsutil cp app/model.joblib {BUCKET_URI}/{MODEL_ARTIFACT_DIR}/ """ Explanation: Upload model artifacts and custom code to Cloud Storage Before you can deploy your model for serving, Vertex AI needs access to the following files in Cloud Storage: model.joblib (model artifact) Run the following commands to upload your files: End of explanation """ %%writefile app/main.py from fastapi import FastAPI, Request #from starlette.responses import JSONResponse import joblib import json import numpy as np import pickle import os from google.cloud import storage from classifier import IrisClassifier app = FastAPI() ''' gcs_client = storage.Client() with open("model.joblib", 'wb') as model_f: gcs_client.download_blob_to_file( f"{os.environ['AIP_STORAGE_URI']}/model.joblib", model_f ) #_model = joblib.load("model.joblib") ''' @app.get(os.environ['AIP_HEALTH_ROUTE'], status_code=200) def health(): return {} @app.post(os.environ['AIP_PREDICT_ROUTE']) async def predict(request: Request): body = await request.json() print (body) model = IrisClassifier() instances = body["instances"] output = [] for i in instances: output.append(model.predict(i)) #return 'class' and 'probability' return {"predictions": output} """ Explanation: Build a FastAPI server End of explanation """ %%writefile app/prestart.sh #!/bin/bash export PORT=$AIP_HTTP_PORT """ Explanation: Add pre-start script FastAPI will execute this script before starting up the server. 
The PORT environment variable is set to equal AIP_HTTP_PORT in order to run FastAPI on same the port expected by Vertex AI. End of explanation """ %%writefile instances.json { "instances": [{ "sepal_length": 4.8, "sepal_width": 3, "petal_length": 1.4, "petal_width": 0.3 },{ "sepal_length": 6.2, "sepal_width": 3.4, "petal_length": 5.4, "petal_width": 2.3 }] } """ Explanation: Store test instances to use later To learn more about formatting input instances in JSON, read the documentation. End of explanation """ import os import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # If on Google Cloud Notebooks, then don't execute this code if not IS_GOOGLE_CLOUD_NOTEBOOK: if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. 
Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation """ # NOTE: Copy in credentials to run locally, this step can be skipped for deployment import shutil GOOGLE_APPLICATION_CREDENTIALS = "[PATH-TO-YOUR-CREDENTIALS.json]" shutil.copyfile(GOOGLE_APPLICATION_CREDENTIALS, "app/credentials.json") """ Explanation: Optionally copy in your credentials to run the container locally. End of explanation """ %%writefile Dockerfile FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7 COPY ./app /app COPY requirements.txt requirements.txt RUN pip install -r requirements.txt """ Explanation: Build and push container to Artifact Registry Build your container Write the Dockerfile, using tiangolo/uvicorn-gunicorn-fastapi as a base image. This will automatically run FastAPI for you using Gunicorn and Uvicorn. Visit the FastAPI docs to read more about deploying FastAPI with Docker. End of explanation """ !docker build \ --tag={REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} \ . """ Explanation: Build the image and tag the Artifact Registry path that you will push to. End of explanation """ !docker stop local-iris !docker rm local-iris container_id = !docker run -d -p 80:8080 \ --name=local-iris \ -e AIP_HTTP_PORT=8080 \ -e AIP_HEALTH_ROUTE=/health \ -e AIP_PREDICT_ROUTE=/predict \ -e AIP_STORAGE_URI={BUCKET_URI}/{MODEL_ARTIFACT_DIR} \ -e GOOGLE_APPLICATION_CREDENTIALS=credentials.json \ {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} """ Explanation: Run and test the container locally (optional) Run the container locally in detached mode and provide the environment variables that the container requires. 
These env vars will be provided to the container by Vertex Prediction once deployed. Test the /health and /predict routes, then stop the running image. End of explanation """ !curl localhost/health """ Explanation: Check the health route End of explanation """ !docker logs {container_id[0]} """ Explanation: Check the Docker logs to look for any issues End of explanation """ !curl -X POST \ -d @instances.json \ -H "Content-Type: application/json; charset=utf-8" \ localhost/predict """ Explanation: Get a prediction from the Docker container End of explanation """ !docker stop local-iris """ Explanation: Stop the Docker process End of explanation """ !gcloud beta artifacts repositories create {REPOSITORY} \ --repository-format=docker \ --location=$REGION """ Explanation: Push the container to artifact registry Create the repository End of explanation """ !gcloud auth configure-docker {REGION}-docker.pkg.dev --quiet """ Explanation: Configure Docker to access Artifact Registry End of explanation """ !docker push {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} """ Explanation: Push your container image to your Artifact Registry repository. End of explanation """ from google.cloud import aiplatform aiplatform.init(project=PROJECT_ID, location=REGION) """ Explanation: Deploy to Vertex AI Use the Python SDK to upload and deploy your model. End of explanation """ XAI = "shapley" # [ shapley, ig, xrai ] if XAI == "shapley": PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}} elif XAI == "ig": PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}} elif XAI == "xrai": PARAMETERS = {"xrai_attribution": {"step_count": 50}} parameters = aiplatform.explain.ExplanationParameters(PARAMETERS) """ Explanation: Configure explanations Here we will use Shapley for model trained on a tabular dataset. 
More details on this can be found in the documentation: https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations#import-model-example https://cloud.google.com/vertex-ai/docs/explainable-ai/improving-explanations End of explanation """ EXPLANATION_METADATA = aiplatform.explain.ExplanationMetadata( inputs={ "sepal_length": {}, "sepal_width": {}, "petal_length": {}, "petal_width": {}, }, outputs={"probability": {}}, ) """ Explanation: Specify the input features, and the output label name to configure the Explanation Metadata for custom containers End of explanation """ model = aiplatform.Model.upload( display_name=MODEL_DISPLAY_NAME, artifact_uri=f"{BUCKET_URI}/{MODEL_ARTIFACT_DIR}", serving_container_image_uri=f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}", explanation_parameters=parameters, explanation_metadata=EXPLANATION_METADATA, ) """ Explanation: Upload the custom container model End of explanation """ endpoint = model.deploy(machine_type="n1-standard-4") """ Explanation: Deploy the model on Vertex AI After this step completes, the model is deployed and ready for online prediction. End of explanation """ endpoint.predict(instances=instances) """ Explanation: Send predictions Using Python SDK Call the endpoint for predictions End of explanation """ endpoint.explain(instances=instances) """ Explanation: Call the endpoint for explanations End of explanation """ ENDPOINT_ID = endpoint.name """ Explanation: Using REST Set an endpoint ID to use in the rest command End of explanation """ ! curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ -d @instances.json \ https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}:predict """ Explanation: Call the endpoint for predictions using Rest End of explanation """ ! 
curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ -d @instances.json \ https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}:explain """ Explanation: Call the endpoint for explanations using Rest End of explanation """ !gcloud beta ai endpoints predict $ENDPOINT_ID \ --region=$REGION \ --json-request=instances.json """ Explanation: Using gcloud CLI Call the endpoint for predictions using gcloud End of explanation """ !gcloud beta ai endpoints explain $ENDPOINT_ID \ --region=$REGION \ --json-request=instances.json """ Explanation: Call the endpoint for explanations using gcloud End of explanation """ # Undeploy model and delete endpoint endpoint.delete(force=True) # Delete the model resource model.delete() # Delete the container image from Artifact Registry !gcloud artifacts docker images delete \ --quiet \ --delete-tags \ {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} delete_bucket = False if delete_bucket or os.getenv("IS_TESTING"): ! gsutil rm -rf {BUCKET_URI} """ Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: End of explanation """
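A note on the request format used throughout the curl, gcloud, and SDK calls above: Vertex AI wraps prediction inputs in a standard JSON envelope, so the instances.json file holds an "instances" list and the container answers with a "predictions" list. A small stdlib-only sketch; the iris feature rows below are made-up example values, not taken from this tutorial's data:

```python
import json

# Request body of the kind sent by `curl -d @instances.json ... :predict`.
# Custom containers receive {"instances": [...]} on the AIP_PREDICT_ROUTE
# and are expected to answer with {"predictions": [...]}.
instances = {
    "instances": [
        [6.7, 3.1, 4.7, 1.5],  # hypothetical iris rows: sepal_length,
        [4.6, 3.1, 1.5, 0.2],  # sepal_width, petal_length, petal_width
    ]
}
request_body = json.dumps(instances)

# Parsing a (made-up) response from the container works the same way:
response_body = '{"predictions": [[0.1, 0.8, 0.1], [0.9, 0.05, 0.05]]}'
predictions = json.loads(response_body)["predictions"]
print(len(predictions))  # one prediction per instance
```

The same envelope is what `endpoint.predict(instances=...)` builds for you in the Python SDK.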
rahulkgup/deep-learning-foundation
intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
mit
import pandas as pd import numpy as np import tensorflow as tf import tflearn from tflearn.data_utils import to_categorical """ Explanation: Sentiment analysis with TFLearn In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you. We'll start off by importing all the modules we'll need, then load and prepare the data. End of explanation """ reviews = pd.read_csv('reviews.txt', header=None) labels = pd.read_csv('labels.txt', header=None) """ Explanation: Preparing the data Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this. Read the data Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way. 
End of explanation
"""
from collections import Counter

total_counts = Counter()  # bag of words here
for _, row in reviews.iterrows():
    total_counts.update(row[0].split(' '))

print("Total words in data set: ", len(total_counts))
"""
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
"""
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
"""
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
"""
print(vocab[-1], ': ', total_counts[vocab[-1]])
"""
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
"""
word2idx = {word: i for i, word in enumerate(vocab)}  ## create the word-to-index dictionary here
"""
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews.
We are probably fine with this number of words. Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
"""
def text_to_vector(text):
    word_vector = np.zeros(len(vocab), dtype=np.int_)
    for word in text.split(' '):
        idx = word2idx.get(word, None)
        if idx is None:
            continue
        else:
            word_vector[idx] += 1
    return np.array(word_vector)
"""
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since not all of the words are in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation """ text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] """ Explanation: If you do this right, the following code should return ``` text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]) ``` End of explanation """ word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_) for ii, (_, text) in enumerate(reviews.iterrows()): word_vectors[ii] = text_to_vector(text[0]) # Printing out the first 5 word vectors word_vectors[:5, :23] """ Explanation: Now, run through our entire review data set and convert each review to a word vector. End of explanation """ Y = (labels=='positive').astype(np.int_) records = len(labels) shuffle = np.arange(records) np.random.shuffle(shuffle) test_fraction = 0.9 train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):] trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2) testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2) trainY """ Explanation: Train, Validation, Test sets Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later. 
End of explanation """ # Network building def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### # Inputs net = tflearn.input_data([None, 10000]) # Hidden layer(s) net = tflearn.fully_connected(net, 200, activation='ReLU') net = tflearn.fully_connected(net, 25, activation='ReLU') # Output layer net = tflearn.fully_connected(net, 2, activation='softmax') net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') model = tflearn.DNN(net) return model """ Explanation: Building the network TFLearn lets you build the network by defining the layers. Input layer For the input layer, you just need to tell it how many units you have. For example, net = tflearn.input_data([None, 100]) would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size. The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units). Output layer The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. 
In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10])                          # Input
net = tflearn.fully_connected(net, 5, activation='ReLU')      # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax')   # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
"""
model = build_model()
"""
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Below is the code to fit our network to our word vectors. You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
    positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
    print('Sentence: {}'.format(sentence))
    print('P(positive) = {:.3f} :'.format(positive_prob),
          'Positive' if positive_prob > 0.5 else 'Negative')

sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)

sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
"""
Explanation: Try out your own text!
End of explanation
"""
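As a sanity check, the preprocessing pipeline above (Counter bag of words, word2idx, text_to_vector) can be rehearsed on a toy corpus without TensorFlow or TFLearn. The two example reviews below are made up, and plain Python lists stand in for NumPy arrays:

```python
from collections import Counter

# Toy corpus standing in for the 25000 movie reviews.
reviews = ["the movie was great great fun", "the movie was dull"]

# 1. Bag of words over the whole corpus.
total_counts = Counter()
for review in reviews:
    total_counts.update(review.split(' '))

# 2. Vocabulary sorted by frequency (small enough here to keep everything).
vocab = sorted(total_counts, key=total_counts.get, reverse=True)

# 3. Word-to-index mapping, as in the notebook.
word2idx = {word: i for i, word in enumerate(vocab)}

# 4. Convert a review to a count vector, skipping out-of-vocabulary words.
def text_to_vector(text):
    vector = [0] * len(vocab)
    for word in text.split(' '):
        idx = word2idx.get(word, None)
        if idx is not None:
            vector[idx] += 1
    return vector

vec = text_to_vector("great great movie")
```

Words outside the vocabulary are silently dropped, which is exactly the behaviour the .get(word, None) trick buys in the real notebook.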
swirlingsand/deep-learning-foundations
p3-tv-script-generation/dlnd_tv_script_generation.ipynb
mit
import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] """ Explanation: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data End of explanation """ view_sentence_range = (20, 30) import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data End of explanation """ import numpy as np import common import problem_unittests as tests def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ vocab_to_int, int_to_vocab = common.create_lookup_tables(text) return vocab_to_int, int_to_vocab tests.test_create_lookup_tables(create_lookup_tables) """ Explanation: Preprocessing Lookup Table Dictionary to go from the words to an id, we'll call vocab_to_int Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries 
in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    x = {
        '.'  : '||period||',
        ','  : '||comma||',
        '"'  : '||quote||',
        '('  : '||left_bracket||',
        '?'  : '||question_mark||',
        '!'  : '||exclamation||',
        '\n' : '||new_line||',
        ')'  : '||right_bracket||',
        ';'  : '||semi-colon||',
        '--' : '||dash_dash||'
    }
    return x

tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here.
The preprocessed data has been saved to disk. End of explanation """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Build the Neural Network Check the Version of TensorFlow and Access to GPU End of explanation """ def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) """ input_ = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name ='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') return input_, targets, learning_rate tests.test_get_inputs(get_inputs) """ Explanation: Input End of explanation """ def get_embed(input_, n_vocab, n_embedding): """ Create embedding for <input_data>. :param input_: TF placeholder for text input. :param n_vocab: Number of words in vocabulary. :param n_embedding: Number of embedding dimensions :return: Embedded input. """ embedding = tf.Variable( tf.random_uniform( (n_vocab, n_embedding))) embed = tf.nn.embedding_lookup( embedding, input_) return embed tests.test_get_embed(get_embed) """ Explanation: Word Embedding End of explanation """ def get_init_cell(batch_size, rnn_size): """ Create an RNN Cell and initialize it. 
:param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) """ lstms = [tf.contrib.rnn.BasicLSTMCell(rnn_size)] cell = tf.contrib.rnn.MultiRNNCell( lstms ) initial_state = cell.zero_state(batch_size, tf.float32) initial_state = tf.identity(initial_state, name="initial_state") return cell, initial_state tests.test_get_init_cell(get_init_cell) """ Explanation: Build RNN Cell and Initialize End of explanation """ def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) state = tf.identity(state, name="final_state") return outputs, state tests.test_build_rnn(build_rnn) """ Explanation: Build RNN End of explanation """ def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :param embed_dim: Number of embedding dimensions :return: Tuple (Logits, FinalState) """ embed = get_embed(input_data, vocab_size, embed_dim) outputs, state = build_rnn(cell, embed) logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None, weights_initializer = tf.truncated_normal_initializer( mean = 0.0, stddev = .1), biases_initializer=tf.zeros_initializer() ) return logits, state tests.test_build_nn(build_nn) """ Explanation: Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. 
Return the logits and final state in the following tuple (Logits, FinalState) End of explanation """ def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ int_text = int_text[0:(len(int_text) - len(int_text) % (batch_size * seq_length))] targets = np.zeros(len(int_text)).astype(int) targets[:-1] = int_text[1:] targets[-1] = int_text[0] elements_per_batch = batch_size * seq_length num_batches = int(len(int_text) / elements_per_batch) # Build the batched data batches = np.zeros((num_batches, 2, batch_size, seq_length)).astype(int) for sequence in range(batch_size): for batch in range(num_batches): start_at = (batch * seq_length) + ( sequence * seq_length * num_batches ) # Append to inputs batches[batch, 0, sequence, :] = int_text[start_at:(start_at + seq_length)] # Append to targets batches[batch, 1, sequence, :] = targets[start_at:(start_at + seq_length)] return batches tests.test_get_batches(get_batches) """ Explanation: Batches End of explanation """ num_epochs = 300 batch_size = 128 rnn_size = 256 embed_dim = 200 seq_length = 15 learning_rate = .001 show_every_n_batches = 100 save_dir = './save' """ Explanation: Neural Network Training Hyperparameters End of explanation """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = 
tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
    train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})

        for batch_i, (x, y) in enumerate(batches):
            feed = {
                input_text: x,
                targets: y,
                initial_state: state,
                lr: learning_rate}
            train_loss, state, _ = sess.run([cost, final_state, train_op], feed)

            # Show every <show_every_n_batches> batches
            if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
                print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
                    epoch_i, batch_i, len(batches), train_loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_dir)
    print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) """ Explanation: Save Parameters End of explanation """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() """ Explanation: Checkpoint End of explanation """ def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ tensors = ["input:0", "initial_state:0", "final_state:0", "probs:0"] output = [] for t in tensors: output.append( loaded_graph.get_tensor_by_name(t) ) return output[0], output[1], output[2], output[3] tests.test_get_tensors(get_tensors) """ Explanation: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) End of explanation """ def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ return int_to_vocab[np.argmax(probabilities)] tests.test_pick_word(pick_word) """ Explanation: Choose Word Implement the pick_word() function to select the next word using probabilities. 
End of explanation """ gen_length = 300 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'homer_simpson' loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) """ Explanation: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. End of explanation """
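One note on pick_word: taking np.argmax every step is deterministic, so the generated script can fall into repetitive loops. A common alternative (an option, not part of the project spec) is to sample the next word in proportion to its probability. A dependency-free sketch, with a made-up toy vocabulary standing in for int_to_vocab:

```python
import random

int_to_vocab_toy = {0: 'homer_simpson:', 1: 'beer', 2: 'moe_szyslak:'}  # toy mapping

def pick_word_argmax(probabilities, int_to_vocab):
    # Deterministic choice, mirroring the notebook's pick_word().
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return int_to_vocab[best]

def pick_word_sampled(probabilities, int_to_vocab, rng=random):
    # Weighted sampling keeps the generated script less repetitive.
    idx = rng.choices(range(len(probabilities)), weights=probabilities, k=1)[0]
    return int_to_vocab[idx]

probs = [0.2, 0.7, 0.1]
print(pick_word_argmax(probs, int_to_vocab_toy))  # prints "beer"
```

Swapping pick_word for the sampled variant in the generation loop changes nothing else; both accept the probabilities row the session returns.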
xdze2/thermique_appart
BlackBoxModel02.ipynb
mit
df_full = pd.read_pickle( 'weatherdata.pck' )
df = df_full[['T_int', 'temperature', 'flux_tot', 'windSpeed']].copy()
"""
Explanation: Black-box model 02
The idea is to estimate the model parameters from the experimental measurement of the indoor temperature ($T$) and from the weather data. The coefficients obtained do not necessarily have a direct physical meaning, but they make it possible to estimate and compare day-to-day variations, and thus in principle to see the effect of the occupant's behaviour (opening windows and curtains in summer).
The 'black-box' model
We want the simplest possible model, with only two coefficients: a thermal resistance $h$ to the outside, a thermal mass $M$ and an external heat flux $\eta\Phi(t)$.
Equivalent electrical circuit:
and the corresponding differential equation:
$$ \frac{dT}{dt} = \frac{h}{M} \,\left[ T_{ext}(t) - T \right] + \frac{\eta}{M} \, \Phi(t) $$
$T(t)$ is the temperature inside the apartment.
$T_{ext}(t)$ is the outdoor temperature given by the weather data.
$\Phi(t)$ is the solar flux (in watts) on the glazed surfaces.
There are two unknown parameters, both normalised by the thermal mass $M$:
* $\eta$, which corresponds to the absorption of solar radiation, normally between 0 and 1.
* $h$, which corresponds to the insulation from the outside air (in W/K).
$M$, finally, is the thermal mass of the apartment (in J/K)... It sets the characteristic response time of the system, and is for that reason difficult to estimate, because it is not unique.
Loading the data
Data obtained with the notebook get_data_and_preprocess.ipynb.
End of explanation """
df[['T_int', 'temperature']].plot( figsize=(14, 4) );
plt.ylabel('°C');
""" Explanation: We have records of the indoor temperature ($T_{int}$) and of the outside temperature ('temperature'):
End of explanation """
# Solar flux on the windows:
df[['flux_tot']].plot( figsize=(14, 4) );
plt.ylabel('Watt');
""" Explanation: And the solar flux, computed for my apartment and projected according to the area and orientation of my windows (Velux):
End of explanation """
from scipy.integrate import odeint

def get_dTdt( T, t, params, get_Text, get_Phi ):
    """ Time derivative of the temperature
        params : [ h/M , eta/M ]
        get_Text, get_Phi: interpolation functions
    """
    T_ext = get_Text( t )
    phi = get_Phi( t )
    dTdt = params[0] * ( T_ext - T ) + params[1] / 100 * phi
    return 1e-6*dTdt

def apply_model( data, T_start, params, full_output=False ):
    data_dict = data.to_dict(orient='list')
    time_sec = data.index.astype(np.int64) // 1e9  # conversion to seconds

    # build the interpolation functions:
    get_Text = lambda t: np.interp( t, time_sec, data_dict['temperature'] )
    get_Phi = lambda t: np.interp( t, time_sec, data_dict['flux_tot'] )

    T_theo = odeint(get_dTdt, T_start, time_sec, args=(params, get_Text, get_Phi ), \
                    full_output=full_output, h0=30*60)  # h0: initial time step used by the solver
    return T_theo.flatten()
""" Explanation: ODE solver
The equation is integrated in time with odeint from scipy (doc, OdePack).
End of explanation """
params = ( 3, 3 )
res = apply_model( df, 30, params )

plt.figure( figsize=(14, 4) )
plt.plot( res )
plt.plot( df['T_int'].to_numpy() ) ;
""" Explanation: Note: the factors of 100 and 1e-6 keep the values close to unity, with $\Phi$ and $\Delta T$ on the same order of magnitude, which helps the optimization.
Test on the full dataset:
End of explanation """
def get_errorfit( params, data, T_start ):
    """ Runs the model on the data with the given parameters,
        then computes the error against the (non-NaN) experimental data
    """
    T_exp = data['T_int'].to_numpy()
    T_theo = apply_model( data, T_start, params )
    delta = (T_exp - T_theo)**2
    return np.sum( delta[ ~np.isnan( delta ) ] )

""" The fit is done differently for night and day, so that only one parameter is fitted at a time """
from scipy.optimize import fminbound
from scipy.optimize import minimize

def fit_model_p1( data, T_start, param_0 ):
    func0 = lambda x: get_errorfit( (param_0, x), data, T_start )
    x1, x2 = (.1, 100)
    param_1 = fminbound(func0, x1, x2, disp=0)
    #param_1 = fmin(func0, 20)
    return param_1

def fit_model_p0( data, T_start, param_1 ):
    func1 = lambda x: get_errorfit( (x, param_1), data, T_start )
    x1, x2 = (.1, 100)
    param_0 = fminbound(func1, x1, x2, disp=0)
    #param_0 = fmin(func1, 20)
    return param_0

def fit_model_p01( data, T_start ):
    func01 = lambda x: get_errorfit( x, data, T_start )
    x12 = (2.3, 3.)
    res = minimize(func01, x12, method='Powell')
    #param_0 = fmin(func1, 20)
    return res.x
""" Explanation: Day-by-day estimation
The parameters $\eta$ and $h$ are in fact not constant. They depend on how the apartment is used, mainly on the opening of the windows and the position of the shutters over them. They are therefore functions of the time of day and of the weather. The idea is to estimate their values day by day.
However:
* at night, $\Phi = 0$, so $\eta$ is undetermined;
* during the day, $T_{ext}(t)$ is strongly correlated with $\Phi(t)$, so decoupling the two parameters is not straightforward.
The estimates are therefore carried out separately: during the day for $\eta$ (with a residual $h_{min}$), and at night for $h$ (corresponding to the ventilation).
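The night-time case can be illustrated with synthetic data: with $\Phi = 0$ only the ratio $h/M$ is identifiable, and a simple one-dimensional search recovers it. This is a self-contained sketch with hypothetical values, using a grid search in place of scipy's `fminbound`:

```python
import numpy as np

# Toy night-time fit: dT/dt = (h/M)(T_ext - T), so only h/M can be identified.
h_over_M_true = 3e-5          # 1/s, hypothetical value to recover
dt, n = 600.0, 60             # 10-minute steps over 10 hours
T_ext = 5.0                   # constant outside temperature at night

def simulate(h_over_M, T0=20.0):
    # forward-Euler integration of the night-time model
    T, out = T0, []
    for _ in range(n):
        T += dt * h_over_M * (T_ext - T)
        out.append(T)
    return np.array(out)

T_obs = simulate(h_over_M_true)   # "observed" indoor temperatures

# brute-force 1-D search over h/M, standing in for fminbound
grid = np.linspace(1e-5, 6e-5, 201)
errors = [np.sum((simulate(g) - T_obs)**2) for g in grid]
h_over_M_fit = grid[int(np.argmin(errors))]
```

The squared-error curve has a single minimum at the true ratio, which is why a bounded scalar minimizer is enough for the night periods.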
End of explanation """
""" Identify the day and night periods from the solar flux """
df['isnight'] = ( df['flux_tot'] == 0 ).astype(int)

# Number the successive periods:
nights_days = df['isnight'].diff().abs().cumsum()
nights_days.iloc[0] = 0

df_byday = df.groupby(nights_days)
Groupes = [ int( k ) for k in df_byday.groups.keys() ]

df_byday['temperature'].plot( figsize=(14, 3) );
df_byday['T_int'].plot( figsize=(14, 3) );
""" Explanation: Splitting into day / night periods
End of explanation """
def fit_a_day( data, T_zero ):
    """ Fits the model on the data 'data' with the initial temperature 'T_zero' """
    # Handle missing experimental data:
    T_exp = data['T_int'].to_numpy()
    nombre_nonNaN = T_exp.size - np.isnan( T_exp ).sum()
    if nombre_nonNaN < 10:
        # not enough data to do the fit
        h, eta = np.nan, np.nan
        res = np.full( T_exp.shape , np.nan)
    else:
        eta_night = 0
        h_day = 2.3  # h_min, minimum value? ... arbitrary for now
        if data['isnight'].all():
            # night
            h = fit_model_p0( data, T_zero, eta_night )
            eta = eta_night
        else:
            # day
            h = h_day
            #eta = fit_model_p1( data, T_zero, h_day )
            h, eta = fit_model_p01( data, T_zero )
        # Run the model with the parameters obtained:
        res = apply_model( data, T_zero, (h, eta) )
    return (h, eta), res
""" Explanation: Fitting period by period
End of explanation """
len( Groupes )

data = df_byday.get_group( Groupes[1] )
T_int = data['T_int'].interpolate().to_numpy()
T_zero = T_int[ ~ np.isnan( T_int ) ][0]
params, res = fit_a_day( data, T_zero )
print( data.index[0] )
print( params )

plt.figure( figsize=(14, 5) )
plt.plot(data.index, res, '--', label='T_theo' )
plt.plot(data.index, T_int, label='T_int' );
plt.plot(data.index, data['temperature'].to_numpy() , label='T_ext' );
plt.plot(data.index, data['flux_tot'].to_numpy()/100 + 20, label='~ Flux' );
plt.legend();
""" Explanation: Plot for one period (debug)
End of explanation """
# init
df['T_theo'] = 0
df['eta_M'], df['h_M'] = 0, 0

# initial value
T_zero = df['T_int'][ df['T_int'].first_valid_index() ]

for grp_id in Groupes:
    print( '%i, ' % grp_id, end='' )
    data_day = df_byday.get_group( grp_id )

    # debug: case where there is no experimental data at all
    if np.isnan( T_zero ):
        T_int = data_day['T_int']
        if np.isnan( T_int ).all():
            T_zero = 0
        else:
            T_zero = data_day['T_int'][ data_day['T_int'].first_valid_index() ]

    # estimation
    params, res = fit_a_day( data_day, T_zero )

    # save
    df.loc[ data_day.index, 'T_theo'] = res
    df.loc[ data_day.index, 'eta_M'] = params[1]
    df.loc[ data_day.index, 'h_M'] = params[0]

    # initial value for the next step
    T_zero = res[-1]

print('done')

df[['T_int', 'T_theo']].plot( figsize=(14, 5) );
df[['T_int', 'T_theo']].plot( figsize=(14, 5) );
df[['T_int', 'temperature', 'T_theo']].plot( figsize=(14, 5) );
df[['flux_tot']].plot( figsize=(14, 5) );

plt.figure( figsize=(14, 5) )
plt.subplot( 2, 1, 1 )
plt.plot( df[['h_M']] );
plt.ylabel('h_M');
plt.subplot( 2, 1, 2 )
plt.plot( df[['eta_M']], 'r' );
plt.ylabel('eta_M');
""" Explanation: Computation over all the periods: takes some time
End of explanation """
U_vitrage = 2.8  # W/m2/K, typical for double glazing
U_cadres = 0.15 + 0.016  # W/m/K, for a wooden frame of square cross-section; this is in fact the conductivity of the wood
                         # + psi ...
aire_vitre = 0.6*0.8*2 + 1.2*0.8 + 0.3*0.72*4 + 0.25**2  # m2
perimetre = (0.6+0.8)*4 + (1.2+0.8)*2 + 2*(0.3+0.72)*4 + 4*0.25
aire_parois = 4.59*7.94*2  # m2
h_parois = 0.04 / 0.15  # W/m2/K - for rock wool

h_min = aire_vitre * U_vitrage + perimetre * U_cadres + aire_parois * h_parois

print('h_min : %f W/K' % h_min)
""" Explanation: Orders of magnitude
eta: it is a percentage, between 0 and 100.
h: there are two contributions:
- h_min: the overall insulation of the building (walls, roof and above all windows). This value should be constant.
- h_aero: air infiltration and ventilation.
M: the thermal mass, M ~ 0.1e6 J/K ???
h_min corresponds to the maximum insulation of the apartment, all windows closed;
h_max, to the maximum ventilation, all windows open (+ wind?).
eta_min corresponds to all the shutters closed;
eta_max, to no shutters at all.
Estimating h_min
h_min = aire_vitre * U_vitrage + perimetre * U_cadres + aire_parois * h_parois
End of explanation """
R = df['T_int'] - df['T_theo']

R.plot( figsize=(14, 5), style='k' );
plt.ylabel('°C');

# Plot the relative temperature variation
plt.figure( );
for grp_id in Groupes:
    data = df_byday.get_group( grp_id )
    if np.isnan( data['T_int'] ).all():
        continue
    T_int = data['T_int'].to_numpy()
    Tmin, Tmax = T_int[ ~ np.isnan( T_int ) ].min(), T_int[ ~ np.isnan( T_int ) ].max()
    if data['isnight'].all():
        T_int = T_int - Tmax
    else:
        T_int = T_int - Tmin
    plt.plot( T_int )

# Correlation T_ext <-> Phi
from scipy.stats import pearsonr

norm = lambda X: (X - X.min())/(X.max() - X.min())

coeffs_cor, groupes_id = [], []
plt.figure( );
for grp_id in Groupes[2:]:
    data = df_byday.get_group( grp_id )
    if data['isnight'].all():
        continue
    T_ext = data['temperature'].to_numpy()
    phi = data['flux_tot'].to_numpy()
    #T_ext, phi = norm(T_ext), norm(phi)
    plt.plot( T_ext, phi, '.' )
    coeffs_cor.append( pearsonr(T_ext, phi)[0] )
    groupes_id.append( grp_id )
#plt.axis('equal')

plt.plot( coeffs_cor ) ;

sorted( zip( groupes_id, coeffs_cor ), key=lambda x:x[1] )

data = df_byday.get_group( Groupes[13] )
T_int = data['T_int'].to_numpy()
T_ext = data['temperature'].to_numpy()
phi = data['flux_tot'].to_numpy()

plt.plot( norm(phi) )
plt.plot( norm(T_ext) );
""" Explanation: Residuals
End of explanation """
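Combining the h_min estimate above with the rough thermal-mass guess M ~ 0.1e6 J/K from the text gives a characteristic response time τ = M/h. A sketch (the value of M is the author's rough guess, not a fitted quantity):

```python
# reproduce the h_min estimate and turn it into a time constant
U_vitrage = 2.8                       # W/m2/K, double glazing
U_cadres = 0.15 + 0.016               # W/m/K, wooden frame + thermal bridge
aire_vitre = 0.6*0.8*2 + 1.2*0.8 + 0.3*0.72*4 + 0.25**2          # m2
perimetre = (0.6+0.8)*4 + (1.2+0.8)*2 + 2*(0.3+0.72)*4 + 4*0.25  # m
aire_parois = 4.59*7.94*2             # m2
h_parois = 0.04 / 0.15                # W/m2/K, rock wool
h_min = aire_vitre*U_vitrage + perimetre*U_cadres + aire_parois*h_parois

M_guess = 0.1e6                       # J/K -- assumed thermal mass from the text
tau_hours = M_guess / h_min / 3600    # characteristic response time in hours
```

With these numbers h_min comes out around 30 W/K and τ around one hour, which is one way to judge whether the assumed thermal mass is plausible against the measured response of the apartment.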
ebellm/ztf_summerschool_2015
notebooks/Making_a_Lightcurve.ipynb
bsd-3-clause
import numpy as np
import matplotlib.pyplot as plt
import shelve, pickle
from glob import glob
import astropy
import astropy.utils.console
from astropy.io import fits
from astropy import coordinates as coords
from astropy import units as u

reference_catalog = '../data/PTF_Refims_Files/PTF_d022683_f02_c06_u000114210_p12_sexcat.ctlg' # select R-band data (f02)
""" Explanation: Hands-On Exercise 2: Making a Lightcurve from PTF catalog data
Version 0.2
This "hands-on" session will proceed differently from those that are going to follow. Below, we have included all of the code that is necessary to create light curves from PTF SExtractor catalogs. (For additional information on SExtractor please consult the SExtractor manual. This manual is far from complete, however, so you may want to also consult SExtractor For Dummies.) You will not need to write any software, but we will still go through everything step by step so you can see the details of how the light curves are constructed.
As we saw in the previous talk, there are many different ways to make photometric measurements, which are necessary to ultimately create a light curve. In brief, the procedure below matches sources across epochs (i.e. different observations) by comparing everything to a deep reference catalog. Photometric corrections (i.e. differential photometry) are then calculated based on how much the aperture photometry on each epoch differs from the reference image. This notebook will include commands necessary to load and manipulate PTF data, as well as a procedure that is needed to make differential corrections to the light curves.
By EC Bellm and AA Miller (c) 2015 Aug 05
Problem 1) Load Source Information from the Reference Catalog
Our first step is to create a "master" source list based on the reference catalog. We adopt the reference image for this purpose for two reasons: most importantly, (i) PTF reference images are made from stacks of the individual exposures so they are typically significantly deeper than individual exposures, and (ii) the reference images cover a larger footprint than the individual exposures.
First, we provide the path to the reference catalog and store it in reference_catalog.
End of explanation """
hdus = fits.open(reference_catalog)
data = hdus[1].data
data.columns
""" Explanation: There is a lot of information for each source, and for the overall image, in each of these catalog files. As a demonstration of the parameters available for each source, we will next load the file and show each of the parameters. Note - for a detailed explanation of the definition of each of these columns, please refer to the SExtractor documentation links above.
End of explanation """
def load_ref_catalog(reference_catalog):
    hdus = fits.open(reference_catalog)
    data = hdus[1].data
    # filter flagged detections
    w = ((data['flags'] & 506 == 0) & (data['MAG_AUTO'] < 99))
    data = data[w]

    ref_coords = coords.SkyCoord(data['X_WORLD'], data['Y_WORLD'],frame='icrs',unit='deg')
    star_class = np.array(data["CLASS_STAR"]).T

    return np.vstack([data['MAG_AUTO'],data['MAGERR_AUTO']]).T, ref_coords, star_class
""" Explanation: In the next step, we define a function that will read the catalog. The main SExtractor parameters that we will need are: MAG_AUTO and MAGERR_AUTO, the mag and mag uncertainty, respectively, as well as X_WORLD and Y_WORLD, which are the RA and Dec, respectively, and finally flags, which contains processing flags. After reading the catalog, the function will select sources with no problematic flags set, and return the position of these sources, their brightness, as well as the SExtractor parameter CLASS_STAR, which provides a numerical estimate of whether or not a source is a star. Sources with CLASS_STAR $\sim 1$ are likely stars, and sources with CLASS_STAR $\sim 0$ are likely galaxies, but beware that this classification is far from perfect, especially at the faint end. Recall that galaxies cannot be used for differential photometry as they are resolved.
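The bit test `data['flags'] & 506 == 0` keeps only sources whose SExtractor FLAGS value shares no bits with the mask 506. A toy example with hypothetical flag values shows the arithmetic (the meaning assigned to each bit is documented in the SExtractor manual and is not asserted here):

```python
import numpy as np

# 506 = 2 + 8 + 16 + 32 + 64 + 128 + 256 (binary 0b111111010),
# so flag values 1 and 4 are tolerated while the other bits cause rejection.
mask = 506
flags = np.array([0, 1, 2, 4, 3, 16])   # hypothetical per-source FLAGS values
keep = (flags & mask) == 0
```

Here `keep` is `[True, True, False, True, False, False]`: the sources with FLAGS of 0, 1 or 4 survive, while any source carrying a masked bit is rejected.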
End of explanation """ ref_mags, ref_coords, star_class = load_ref_catalog(reference_catalog) epoch_catalogs = glob('../data/PTF_Procim_Files/PTF*f02*.ctlg.gz') # Note - files have been gzipped to save space print("There are {:d} sources in the reference image".format( len(ref_mags) )) print("...") print("There are {:d} epochs for this field".format( len(epoch_catalogs) )) """ Explanation: Now, we can run the function, and determine the number of sources in our reference catalog. Following that, we will use the Python function glob to grab all of the individual SExtractor catalogs. These files contain the epoch by epoch photometric measurements of the sources in ptfField 22683 ccd 06. The file names will be stored as epoch_catalogs. End of explanation """ def crossmatch_epochs(reference_coords, epoch_catalogs): n_stars = len(reference_coords) n_epochs = len(epoch_catalogs) mags = np.ma.zeros([n_stars, n_epochs]) magerrs = np.ma.zeros([n_stars, n_epochs]) mjds = np.ma.zeros(n_epochs) with astropy.utils.console.ProgressBar(len(epoch_catalogs),ipython_widget=True) as bar: for i, catalog in enumerate(epoch_catalogs): hdus = fits.open(catalog) data = hdus[1].data hdr = hdus[2].header # filter flagged detections w = ((data['flags'] & 506 == 0) & (data['imaflags_iso'] & 1821 == 0)) data = data[w] epoch_coords = coords.SkyCoord(data['X_WORLD'], data['Y_WORLD'],frame='icrs',unit='deg') idx, sep, dist = coords.match_coordinates_sky(epoch_coords, reference_coords) wmatch = (sep <= 1.5*u.arcsec) # store data if np.sum(wmatch): mags[idx[wmatch],i] = data[wmatch]['MAG_APER'][:,2] + data[wmatch]['ZEROPOINT'] magerrs[idx[wmatch],i] = data[wmatch]['MAGERR_APER'][:,2] mjds[i] = hdr['OBSMJD'] bar.update() return mjds, mags, magerrs """ Explanation: Problem 2) Match Individual Detections to Reference Catalog Sources The next step towards constructing light curves is one of the most difficult: source association. 
From the reference catalog, we know the positions of the stars and the galaxies in ptfField 22683 ccd 06. The positions of these stars and galaxies as measured on the individual epochs will be different than the positions measured on the reference image, so we need to decide how to associate the two. Simply put, we will crossmatch the reference catalog and individual epoch catalogs, and consider all associations with a separation less than our tolerance to be a match. For the most part, this is the standard procedure for source association, and we will adopt a tolerance of 1.5 arcsec (the most common value is 1 arcsec). We will use astropy to crossmatch sources between the two catalogs, and we will perform a loop over every catalog so we can build up lightcurves for the individual sources. To store the data, we will construct a two-dimensional NumPy masked array. Each row in the array will represent a source in the reference catalog, while each column will represent an epoch. Thus, each source's light curve can be read by examining the corresponding row of the mags array. We will also store the uncertainty of each mag measurement in magerrs. The date corresponding to each column will be stored in a separate 1D array: mjds. Finally, including the masks allows us to track when a source is not detected in an individual exposure.
Note - there are some downsides to this approach: (i) crossmatching to sources in the reference catalog means we will miss any transients in this field as they are (presumably) not in the reference image. (ii) The matching tolerance of 1.5 arcsec is informed [0.01 arcsec is way too small and 100 arcsec is way too big], but arbitrary. Is a source separation of 1.49 arcsec much more significant than a source separation of 1.51 arcsec? While it is more significant, a binary decision threshold at 1.5 is far from perfect. (iii) This procedure assumes that the astrometric information for each catalog is correct.
While this is true for the vast, vast majority of PTF images, there are some fields ($< 1\%$) where the astrometric solution can be incorrect by more than a few arcsec.
End of explanation """
def crossmatch_epochs(reference_coords, epoch_catalogs):

    n_stars = len(reference_coords)
    n_epochs = len(epoch_catalogs)

    mags = np.ma.zeros([n_stars, n_epochs])
    magerrs = np.ma.zeros([n_stars, n_epochs])
    mjds = np.ma.zeros(n_epochs)

    with astropy.utils.console.ProgressBar(len(epoch_catalogs),ipython_widget=True) as bar:
        for i, catalog in enumerate(epoch_catalogs):
            hdus = fits.open(catalog)
            data = hdus[1].data
            hdr = hdus[2].header
            # filter flagged detections
            w = ((data['flags'] & 506 == 0) & (data['imaflags_iso'] & 1821 == 0))
            data = data[w]

            epoch_coords = coords.SkyCoord(data['X_WORLD'], data['Y_WORLD'],frame='icrs',unit='deg')
            idx, sep, dist = coords.match_coordinates_sky(epoch_coords, reference_coords)

            wmatch = (sep <= 1.5*u.arcsec)

            # store data
            if np.sum(wmatch):
                mags[idx[wmatch],i] = data[wmatch]['MAG_APER'][:,2] + data[wmatch]['ZEROPOINT']
                magerrs[idx[wmatch],i] = data[wmatch]['MAGERR_APER'][:,2]
                mjds[i] = hdr['OBSMJD']

            bar.update()

    return mjds, mags, magerrs
""" Explanation: With the function defined, we now populate and store the arrays with the light curve information.
End of explanation """
mjds,mags,magerrs = crossmatch_epochs(ref_coords, epoch_catalogs)
""" Explanation: At times, SExtractor will produce "measurements" that are clearly non-physical, such as magnitude measurements of 99 (while a source may be that faint, we cannot detect such a source with PTF). We will mask everything with a clearly wrong magnitude measurement.
End of explanation """
# mask obviously bad mags
wbad = (mags < 10) | (mags > 25)
mags[wbad] = np.ma.masked
magerrs[wbad] = np.ma.masked
""" Explanation: Now that we have performed source association and populated the mags array, we can plot light curves of individual sources. Here is an example for the 63rd source in the array (recall that NumPy arrays are zero indexed).
End of explanation """
source_idx = 62

plt.errorbar(mjds, mags[source_idx,:],magerrs[source_idx,:],fmt='none')
plt.ylim(np.ma.max(mags[source_idx,:])+0.3, np.ma.min(mags[source_idx,:])-0.2)
plt.xlabel("MJD")
plt.ylabel("R mag")
print("scatter = {:.3f}".format(np.ma.std(mags[source_idx,:])))
""" Explanation: Note that the scatter for this source is $\sim 0.11$ mag. We will later show this to be the case, but for now, trust us that this scatter is large for a source with average brightness $\sim 18.6$ mag.
Either this is a genuine variable star, with a significant decrease in brightness around MJD 56193, or this procedure, so far, is poor. For reasons that will become clear later, we are now going to filter our arrays so that only sources with at least 20 detections are included. As a brief justification - sources with zero detections should, obviously, be excluded from our array, while requiring 20 detections improves our ability to reliably measure periodicity. Before we do this, we can examine which sources are most likely to be affected by this decision. For each source, we can plot the number of masked epochs (i.e. non-detections) as a function of that source's brightness.
End of explanation """
n_epochs = len(epoch_catalogs)

plt.scatter(ref_mags[:,0], np.ma.sum(mags.mask,axis=1), alpha=0.1, edgecolor = "None")
plt.plot([13, 22], [n_epochs - 20, n_epochs - 20], 'DarkOrange') # plot boundary for sources with Ndet > 20
plt.xlabel('R mag', fontsize = 13)
plt.ylabel('# of masked epochs', fontsize = 13)
plt.tight_layout()
""" Explanation: From this plot a few things are immediately clear: (i) potentially saturated sources ($R \lesssim 14$ mag) are likely to have fewer detections (mostly because they are being flagged by SExtractor), (ii) faint sources ($R \gtrsim 20$ mag) are likely to have fewer detections (because the limiting magnitude of individual PTF exposures is $\sim 20.5$ mag), and (iii) the faintest sources are the most likely to have light curves with very few points. Identifying sources with at least 20 epochs can be done using a conditional statement, and we will store the Boolean results in an array Ndet20. We will use this array to remove sources with fewer than 20 detections in their light curves.
End of explanation """
Ndet20 = n_epochs - np.ma.sum(mags.mask,axis=1) >= 20

mags = mags[Ndet20]
magerrs = magerrs[Ndet20]
ref_mags = ref_mags[Ndet20]
ref_coords = ref_coords[Ndet20]
star_class = star_class[Ndet20]

print('There are {:d} sources with at least 20 detections on individual epochs.'.format( sum(Ndet20) ))
""" Explanation: Now that we have eliminated the poorly sampled light curves, we can also check whether the typical uncertainties measured by SExtractor are properly estimated, by comparing their values to the typical scatter in a given light curve. For non-variable stars the scatter should be approximately equal to the mean uncertainty measurement for a given star.
End of explanation """
plt.scatter(ref_mags[:,0], np.ma.std(mags,axis=1)**2.
- np.ma.mean(magerrs**2.,axis=1), edgecolor = "None", alpha = 0.2) plt.ylim(-0.2,0.5) plt.yscale('symlog', linthreshy=0.01) plt.xlabel('R (mag)', fontsize = 13) plt.ylabel(r'$ std(m)^2 - <\sigma_m^2>$', fontsize = 14) """ Explanation: Now that we have eliminated the poorly sampled light curves, we can also if the typical uncertainties measured by SExtractor are properly estimated by comparing their values to the typical scatter in a given light curve. For non-variable stars the scatter should be approximately equal to the mean uncertainty measurement for a given star. End of explanation """ # examine a plot of the typical scatter as a function of magnitude plt.scatter(ref_mags[:,0], np.ma.std(mags,axis=1),alpha=0.1) plt.ylim(0.005,0.5) plt.yscale("log") plt.xlabel('R (mag)', fontsize = 13) plt.ylabel(r'$std(m)$', fontsize = 14) """ Explanation: At the bright end, corresponding to sources brighter than 19th mag, we see that the typical scatter is larger than the mean uncertainty measurement. We can improve the scatter, however, so we will re-investigate this feature later. You will also notice that at the faint end the scatter is typically smaller than the mean uncertainty. This occurs because the light curves produced by our methodology are biased - in particular, the faint sources are more likely to be detected in epochs where they are a little brighter than normal and less likely to be detected in epochs where they are a little fainter than normal. As a result, summary statistics for these sources (essentially everything fainter than 20th mag if you scroll up two plots), will be misleading. We can also plot the typical scatter as a function of magnitude. This diagnostic for the photometric performance of a time-domain survey is the most common plot that you'll find in the literature. Note - (1) here we take standard deviation of a log quantity, mag. This will overestimate the true value of the scatter at lowish S/N. 
It’s always best to compute stats in flux space then convert to mag. For simplicity we skip that here. Further examples of the dangers of statistical inference from mag measures can be found on Frank Masci's website. (2) Non-detections on the faint end artificially suppress the overall scatter.
End of explanation """
# examine a plot of the typical scatter as a function of magnitude
plt.scatter(ref_mags[:,0], np.ma.std(mags,axis=1),alpha=0.1)
plt.ylim(0.005,0.5)
plt.yscale("log")
plt.xlabel('R (mag)', fontsize = 13)
plt.ylabel(r'$std(m)$', fontsize = 14)
""" Explanation: This plot shows that for a typical star ($R < 19$ mag), we can achieve a scatter of $\sim 0.08$ mag. As has already been noted - this performance is poor for stars this bright with a telescope as large as P48.
Problem 3) Calculate Differential Photometry Corrections
Why is the scatter so large for PTF light curves? There are two reasons this is the case:
We are measuring the scatter from fixed aperture measurements, but we have not accounted for the fact that the seeing varies image to image.
We can correct for this via differential photometry, however.
The calibration of PTF images only works properly on nights with photometric conditions (see Ofek et al. 2012). Again, we can correct for this via differential photometry.
The basic idea for differential photometry is the following: using "standard" stars (what constitutes a standard can be argued, but most importantly these should not be variable), small corrections to the photometry of every star in a given image are calculated in order to place the photometry from every epoch on the same relative zero-point. The corrections are determined by comparing the "standard" stars to their mean (or median) value. Typically, the corrections are determined by averaging over a large number of stars.
The function relative_photometry, which is defined below, goes through this procedure to improve the quality of the PTF light curves. To calculate the $\Delta m$ required for each epoch, we take a few (essentially justified) short cuts: only stars with $R \ge 14.5$ mag are included to avoid saturation, further stars with $R > 17$ mag are excluded so only high SNR sources are used to calculate the corrections, sources with the SExtractor parameter CLASS_STAR $< 0.9$ (i.e. likely galaxies) are excluded, and sources with excess_variance $> 0.1$ (defined below) are excluded to remove likely variable stars. After these exclusions, the remaining stars are used to calculate the median difference between their reference magnitude and their brightness on the individual epochs.
End of explanation """
def relative_photometry(ref_mags, star_class, mags, magerrs):
    # make copies, as we're going to modify the masks
    all_mags = mags.copy()
    all_errs = magerrs.copy()

    # average over observations
    refmags = np.ma.array(ref_mags[:,0])
    madmags = 1.48*np.ma.median(np.abs(all_mags - np.ma.median(all_mags, axis = 1).reshape(len(ref_mags),1)), axis = 1)
    MSE = np.ma.mean(all_errs**2.,axis=1)

    # exclude bad stars: highly variable, saturated, or faint
    # use excess variance to find bad objects
    excess_variance = madmags**2. - MSE
    wbad = np.where((np.abs(excess_variance) > 0.1) | (refmags < 14.5) | (refmags > 17) | (star_class < 0.9))
    # mask them out
    refmags[wbad] = np.ma.masked

    # exclude stars that are not detected in a majority of epochs
    Nepochs = len(all_mags[0,:])
    nbad = np.where(np.ma.sum(all_mags > 1, axis = 1) <= Nepochs/2.)
    refmags[nbad] = np.ma.masked

    # for each observation, take the median of the difference between the median mag and the observed mag
    # annoying dimension swapping to get the 1D vector to blow up right
    relative_zp = np.ma.median(all_mags - refmags.reshape((len(all_mags),1)),axis=0)

    return relative_zp
""" Explanation: We can now use the relative_photometry function to calculate the $\Delta m$ for each epoch.
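The core of the correction is the per-epoch median of (observed − reference) magnitudes. A toy example with synthetic standard stars shows that this median recovers a common zero-point offset (all numbers here are made up for illustration):

```python
import numpy as np

# 4 non-variable "standard" stars observed over 3 epochs; each epoch is
# shifted by a common zero-point offset plus small Gaussian noise.
rng = np.random.default_rng(0)
ref = np.array([15.0, 15.5, 16.0, 16.5])      # reference magnitudes
true_zp = np.array([0.00, 0.12, -0.07])       # per-epoch offsets to recover
mags = ref[:, None] + true_zp[None, :] + rng.normal(0, 0.005, (4, 3))

# per-epoch median of (observed - reference), as in relative_photometry
rel_zp = np.median(mags - ref[:, None], axis=0)
corrected = mags - rel_zp[None, :]
```

The recovered `rel_zp` matches the injected offsets to within the noise, and the corrected magnitudes scatter around the reference values far more tightly than the raw ones.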
End of explanation """ source_idx = 18 plt.errorbar(mjds, mags[source_idx,:],magerrs[source_idx,:],fmt='none') plt.ylim(np.max(mags[source_idx,:])+0.3, np.min(mags[source_idx,:])-0.05) plt.xlabel("MJD") plt.ylabel("R mag") print("scatter = {:.3f}".format(np.ma.std(mags[source_idx,:]))) """ Explanation: To quickly see the effect of applying the $\Delta m$ corrections, we can once again plot the light curve of the source that we previously examined. End of explanation """ plt.scatter(ref_mags[:,0], np.ma.std(mags,axis=1),alpha=0.1, edgecolor = "None") plt.ylim(0.003,0.7) plt.yscale("log") plt.xlim(13,22) plt.xlabel('R (mag)', fontsize = 13) plt.ylabel(r'$std(m)$', fontsize = 14) """ Explanation: Wow! It is now pretty clear that this source isn't a variable. The variations appear more or less consistent with Gaussian noise, and the scatter for this source has decreased by a factor of $\sim 2$. That is a significant improvement over what we obtained when using the "raw" values from the PTF SExtractor catalogs. Once again, the scatter as a function of magnitude will provide a decent proxy for the overall quality of the light curves. End of explanation """ # save the output: ref_coords, mjds, mags, magerrs. outfile = reference_catalog.split('/')[-1].replace('ctlg','shlv') shelf = shelve.open('../data/'+outfile,flag='c',protocol=pickle.HIGHEST_PROTOCOL) shelf['mjds'] = mjds shelf['mags'] = mags shelf['magerrs'] = magerrs shelf['ref_coords'] = ref_coords shelf.close() """ Explanation: This looks much, much better than what we had before, where all the bright stars had a scatter of $\sim 0.08$ mag. Now, the brightest stars have a scatter as small as $\sim 0.007$ mag, while even stars as faint as $R = 19$ mag have scatter $< 0.01$ mag. In other words, we now have good quality light curves (good enough for publication in many cases, though caution should always always always be applied to large survey data). 
Problem 4) Store, and Later Access, the Light Curves As we now have high quality light curves, it is important that we store the results of our work. We will do that using the shelve module within Python which will allow us to quickly and easily access each of these light curves in the future. End of explanation """ # demonstrate getting the data back out shelf = shelve.open('../data/'+outfile) for key in shelf.keys(): print(key, shelf[key].shape) shelf.close() """ Explanation: Loading the shelf file is fast and easy. End of explanation """ def source_lightcurve(rel_phot_shlv, ra, dec, matchr = 1.0): """Crossmatch ra and dec to a PTF shelve file, to return light curve of a given star""" shelf = shelve.open(rel_phot_shlv) ref_coords = coords.SkyCoord(shelf["ref_coords"].ra, shelf["ref_coords"].dec,frame='icrs',unit='deg') source_coords = coords.SkyCoord(ra, dec,frame='icrs',unit='deg') idx, sep, dist = coords.match_coordinates_sky(source_coords, ref_coords) wmatch = (sep <= matchr*u.arcsec) if sum(wmatch) == 1: mjds = shelf["mjds"] mags = shelf["mags"][idx] magerrs = shelf["magerrs"][idx] # filter so we only return good points wgood = (mags.mask == False) if (np.sum(wgood) == 0): raise ValueError("No good photometry at this position.") return mjds[wgood], mags[wgood], magerrs[wgood] else: raise ValueError("There are no matches to the provided coordinates within %.1f arcsec" % (matchr)) """ Explanation: Finally, we have created a function, which we will use during the next few days, to produce the light curve for a source at a given RA and Dec on ptfField 22683 ccd 06. The function is below, and it loads the shelf file, performs a cross match against the user-supplied RA and Dec, and returns the light curve if there is a source with a separation less than 1 arcsec from the user-supplied position. 
End of explanation """ ra, dec = 312.503802, -0.706603 source_mjds, source_mags, source_magerrs = source_lightcurve( # complete plt.errorbar( # complete plt.ylim( # complete plt.xlabel( # complete plt.ylabel( # complete """ Explanation: Problem 1 Test the source_lightcurve function - load the light curve for the star located at $\alpha_{\mathrm J2000} =$ 20:50:00.91, $\delta_{\mathrm J2000} =$ -00:42:23.8. An image of this star can be found here. After loading the light curve for this star, plot its light curve, including the uncertainties on the individual epochs. End of explanation """
prody/ProDy-website
_static/ipynb/workshop2021/prody_evol_and_signdy.ipynb
mit
from prody import * from pylab import * %matplotlib inline confProDy(auto_show=False) """ Explanation: Evolution of sequence, structure and dynamics with Evol and SignDy This tutorial has two parts, focusing on two related parts of ProDy for studying evolution: The sequence sub-package Evol is for fetching, parsing and refining multiple sequence alignments (MSAs), and calculating residue-level properties such as conservation and coevolution as well as sequence-level properties such as percentage identity. The signature dynamics module SignDy calculates ENM normal modes for ensembles of related protein structures and evaluates the conservation and differentiation of signature dynamics across families and subfamilies. It also allows classification of ensemble/family members based upon their dynamics, allowing the evolution of protein dynamics to be compared with the evolution of sequence and structure. We first make the required imports: End of explanation """ pathPDBFolder('./pdbs/') """ Explanation: We also configure ProDy to put all the PDB files in a particular folder seeing as there are so many of them. End of explanation """ filename = fetchPfamMSA('PF00074') filename """ Explanation: 1. Sequence evolution with Evol Fetching, parsing and refining MSAs from Pfam The protein families database Pfam provides multiple sequence alignments of related protein domains, which we are often used as starting points for sequence evolution analyses. We can fetch such MSAs using the function fetchPfamMSA as follows: End of explanation """ msa = parseMSA(filename) msa """ Explanation: We can then parse the MSA into ProDy using the parseMSA function, which can handle various types of MSA files including Stockholm, SELEX, CLUSTAL, PIR and FASTA formats. 
End of explanation """ msa[:10,:10] seq0 = msa[0] seq0 str(seq0) """ Explanation: This alignment can be indexed to extract individual sequences (rows) and residue positions (columns): End of explanation """ msa_refined = refineMSA(msa, label='RNAS1_BOVIN', rowocc=0.8, seqid=0.98) msa_refined """ Explanation: This alignment contains many redundant sequences as well as lots of rows and columns with large numbers of gaps. Therefore, we refine it using refineMSA, which we can do based on the sequence of RNAS1_BOVIN: End of explanation """ entropy = calcShannonEntropy(msa_refined) """ Explanation: Measuring sequence conservation with Shannon entropy We calculate use calcShannonEntropy to calculate the entropy of the refined MSA, which is a measure of sequence variability. Shannon's entropy measures the degree of uncertainty that exists in a system. In the case of multiple sequence alignments, the Shannon entropy of each protein site (column) can be computed according to: $$H(p_1, p_2, \ldots, p_n) = -\sum_{i=1}^n p_i \log_2 p_i $$ where $p_i$ is the frequency of amino acid $i$ in that site. If a column is completely conserved then Shannon entropy is 0. The maximum variability, where each amino acid occurs with frequency 1/20, yields an entropy of 4.32 End of explanation """ showShannonEntropy(msa_refined); """ Explanation: We can also show the Shannon entropy on a bar chart: End of explanation """ ag = parsePDB('2W5I', chain='B') ag """ Explanation: Comparisons of sequence evolution and structural dynamics Next, we obtain residue fluctuations or mobility for a protein member of the above family using the GNM. We will use chain B of PDB structure 2W5I, which corresponds to our reference sequence RNAS1_BOVIN. 
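As a cross-check on the conservation analysis above, the column entropy formula $H = -\sum_i p_i \log_2 p_i$ can be sketched in plain NumPy, independently of calcShannonEntropy (the toy columns below are made up):

```python
import numpy as np

def shannon_entropy(column):
    """H = -sum_i p_i log2 p_i over the residue frequencies of one MSA column."""
    _, counts = np.unique(list(column), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print(shannon_entropy('AAAA'))                  # fully conserved column: 0.0
print(shannon_entropy('ACDEFGHIKLMNPQRSTVWY'))  # maximally variable: log2(20) ~ 4.32
```

This reproduces the two limiting cases stated above: a completely conserved column gives 0 and a column where each of the 20 amino acids occurs with frequency 1/20 gives $\log_2 20 \approx 4.32$.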
End of explanation """ aln, idx_1, idx_2 = alignSequenceToMSA(ag.ca, msa_refined, label='RNAS1_BOVIN') showAlignment(aln, indices=[idx_1, idx_2]) """ Explanation: The next step is to select the corresponding residues from the AtomGroup to match the sequence alignment. We can identify these using alignSequenceToMSA. We give it the Calpha atoms only so the residue numbers aren't repeated. End of explanation """ print(ag.ca.getResnums()) """ Explanation: We see that there are extra residues in the PDB sequence compared to the reference sequence so we identify their residue numbers to make a selection. End of explanation """ chB = ag.select('resid 3 to 121') chB print(msa_refined['RNAS1_BOVIN']) print(chB.ca.getSequence()) """ Explanation: They are numbered from 1 to 124, two residues are missing from the beginning, and three residues are missing from the end, so we select residues 3 to 121. This now makes the two sequences match. End of explanation """ gnm = GNM('2W5I') gnm.buildKirchhoff(chB.ca) gnm.calcModes(n_modes=None) # calculate all modes """ Explanation: We perform GNM analysis as follows: End of explanation """ mobility = calcSqFlucts(gnm) figure(figsize=(13,6)) # plot entropy as grey bars bar(chB.ca.getResnums(), entropy, width=1.2, color='grey', label='entropy'); # rescale mobility mobility = mobility*(max(entropy)/max(mobility)) # plot mobility as a blue line showAtomicLines(mobility, atoms=chB.ca, color='b', linewidth=2, label='mobility'); legend() """ Explanation: We can then visually compare the behaviour at the individual residue level as follows: End of explanation """ mutinfo = buildMutinfoMatrix(msa_refined) showMutinfoMatrix(msa_refined, cmap='inferno'); title(None); """ Explanation: Coevolution Calculation In addition to the conservation/variation of individual positions, we can also calculate the coevolution between positions due to correlated mutations. 
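One such measure, the mutual information between two alignment columns, can be sketched directly (the columns below are made-up toy data, not the ProDy implementation):

```python
import numpy as np

def mutual_information(col_i, col_j):
    """MI = sum_ab p(a,b) * log2( p(a,b) / (p(a) * p(b)) ) over residue pairs."""
    n = len(col_i)
    pairs = list(zip(col_i, col_j))
    mi = 0.0
    for a in set(col_i):
        p_a = col_i.count(a) / n
        for b in set(col_j):
            p_b = col_j.count(b) / n
            p_ab = pairs.count((a, b)) / n
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

print(mutual_information('AAKKAAKK', 'DDEEDDEE'))  # perfectly covarying columns: 1 bit
print(mutual_information('AAKKAAKK', 'ADADADAD'))  # independent columns: 0 bits
```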
One simple and common method for this is to compute the mutual information between the columns in the MSA: End of explanation """ mi_apc = applyMutinfoCorr(mutinfo) showMatrix(mi_apc, cmap='inferno'); """ Explanation: We can improve this with the widely used average product correction: End of explanation """ showMatrix(mi_apc, cmap='inferno', norm=Normalize(0, 0.5)); """ Explanation: We can change the colour scale normalisation to eliminate the effect of the diagonal. However, the mutual information matrix is still pretty noisy. End of explanation """ di = buildDirectInfoMatrix(msa_refined) showDirectInfoMatrix(msa_refined, cmap='inferno'); title(None); """ Explanation: Therefore, more sophisticated analyses have also been developed, including the Direct Information (DI, also known as direct coupling analysis, DCA), which is very successful for contact prediction. This method can also be used in ProDy as follows: End of explanation """ showContactMap(gnm, origin='lower', cmap='Greys'); """ Explanation: If we compare the brighter regions on this map to the contact matrix, then we see that they indeed match pretty well: End of explanation """ di_rank_row, di_rank_col, di_zscore_sort = calcRankorder(di, zscore=True) print('row: ', di_rank_row[:5]) print('column:', di_rank_col[:5]) mi_rank_row, mi_rank_col, mi_zscore_sort = calcRankorder(mi_apc, zscore=True) print('row: ', mi_rank_row[:5]) print('column:', mi_rank_col[:5]) """ Explanation: We can also apply a rank-ordering to the DI and corrected MI matrix entries, which helps identify the strongest signals: End of explanation """ import time """ Explanation: 2. Signature Dynamics analysis with SignDy This tutorial describes how to calculate signature dynamics for a family of proteins with similar structures using Elastic Network Models (ENMs).
This method (also called ensemble normal mode analysis) creates an ensemble of aligned structures and calculates statistics such as means and standard deviations on various dynamic properties including mode profiles, mean square fluctuations and cross-correlation matrices. It also includes tools for classifying family members based on their sequence, structure and dynamics. The theory and usage of this toolkit is described in our recent paper: Zhang S, Li H, Krieger J, Bahar I. Shared signature dynamics tempered by local fluctuations enables fold adaptability and specificity. Mol. Biol. Evol. 2019 36(9):2053–2068 In this tutorial, we will have a quick walk-through on the SignDy calculations and functions using the example of type-I periplasmic binding protein (PBP-I) domains. The data is collected using the Dali server (http://ekhidna2.biocenter.helsinki.fi/dali/). Holm L, Rosenström P. Dali server: conservation mapping in 3D. Nucleic Acids Res. 2010 10(38):W545-9 In addition to the previous imports, we also import time so that we can use the sleep function to reduce the load on the Dali server. End of explanation """ dali_rec = searchDali('3H5V','A') dali_rec """ Explanation: Overview The first step in signature dynamics analysis is to collect a set of related protein structures and build a PDBEnsemble. This can be achieved by multiple routes: a query search of the PDB using blastPDB or Dali, extraction of PDB IDs from the Pfam database (as above) or the CATH database, or input of a pre-defined list. We demonstrate the Dali method here in the first part of the tutorial. The usage of CATH methods is described in the website tutorial and the function blastPDB is described in the Structure Analysis Tutorial. We apply these methods to the PBP-I domains, a group of protein structures originally found in bacteria for transport of solutes across the periplasmic space and later seen in various eukaryotic receptors including ionotropic and metabotropic glutamate receptors. 
We use the N-terminal domain of AMPA receptor subunit GluA2 (gene name GRIA2; https://www.uniprot.org/uniprot/P42262) as a query. The second step is then to calculate ENM normal modes for all members of the PDBEnsemble, creating a ModeEnsemble. We usually use the GNM for this as will be shown here, but the ANM can be used too. The third step is then to analyse conserved and divergent behaviours to identify signature dynamics of the whole family or individual subfamilies. This is aided calculations of overlaps and distances between the mode spectra (step 4), which can be used to create phylogenetic trees that can be compared to sequence and structural conservation and divergence. Step 1: Prepare Ensemble (using Dali) First we use the function searchDali to search the PDB with Dali, which returns a DaliRecord object that contains a list of PDB IDs and their corresponding mappings to the reference structure. End of explanation """ while not dali_rec.isSuccess: dali_rec.fetch() time.sleep(120) dali_rec """ Explanation: The Dali search often remains in the queue longer than the timeout time. We therefore have a fetch method, which can be run later to fetch the data. We can run this in a loop with a wait of a couple of minutes in between fetches to make sure we get the result. End of explanation """ pdb_ids = dali_rec.filter(cutoff_len=0.7, cutoff_rmsd=1.0, cutoff_Z=30) mappings = dali_rec.getMappings() ags = parsePDB(pdb_ids, subset='ca') len(ags) """ Explanation: Next, we get the lists of PDB IDs and mappings from dali_rec, and parse the pdb_ids to get a list of AtomGroup instances: End of explanation """ dali_ens = buildPDBEnsemble(ags, mapping=mappings, seqid=20, labels=pdb_ids) dali_ens """ Explanation: Then we provide ags together with mappings to buildPDBEnsemble. We set the keyword argument seqid=20 to account for the low sequence identity between some of the structures. 
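The fetch-and-sleep loop above runs until success; a bounded variant avoids waiting forever if the server never returns. This is a generic sketch with a stand-in job object (FakeJob is hypothetical, for illustration only; it is not part of the ProDy API):

```python
import time

def poll_until_success(job, interval=120, max_attempts=30, sleep=time.sleep):
    """Repeatedly call job.fetch() until job.isSuccess, up to max_attempts tries."""
    for _ in range(max_attempts):
        if job.isSuccess:
            return True
        job.fetch()
        sleep(interval)
    return job.isSuccess

class FakeJob:
    """Toy stand-in that succeeds after a fixed number of fetches."""
    def __init__(self, succeed_after=3):
        self._left = succeed_after
        self.isSuccess = False
    def fetch(self):
        self._left -= 1
        if self._left <= 0:
            self.isSuccess = True

job = FakeJob()
poll_until_success(job, interval=0, sleep=lambda s: None)
print(job.isSuccess)  # True
```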
End of explanation """ saveEnsemble(dali_ens, 'PBP-I') """ Explanation: Finally, we save the ensemble for later processing: End of explanation """ dali_ens = loadEnsemble('PBP-I.ens.npz') """ Explanation: Step 2: Mode ensemble For this analysis we'll build a ModeEnsemble by calculating normal modes for each member of the PDBEnsemble. You can load a PDB ensemble at this stage if you already have one. We demonstrate this for the one we just saved. End of explanation """ gnms = calcEnsembleENMs(dali_ens, model='GNM', trim='reduce') gnms """ Explanation: Then we calculate GNM modes for each member of the ensemble using calcEnsembleENMs. There are options to select the model (GNM by default) and the way of considering non-aligned residues by setting the trim option (default is reduceModel, which treats them as environment). End of explanation """ saveModeEnsemble(gnms, 'PBP-I') """ Explanation: We can save the mode ensemble as follows: End of explanation """ gnms = loadModeEnsemble('PBP-I.modeens.npz') """ Explanation: We can also load in a previously saved mode ensemble such as the one we saved above: End of explanation """ gnms[0] """ Explanation: Slicing and Indexing Mode Ensembles We can index the ModeEnsemble object in two different dimensions. The first dimension corresponds to ensemble members as shown below for extracting the mode set for the first member (numbered 0). End of explanation """ gnms[:,0] """ Explanation: The second dimension corresponds to particular modes of all ensemble members as shown below for extracting the first mode (numbered 0). The colon means we select everything from the first dimension. End of explanation """ gnms[5:10,2:4] """ Explanation: We can also slice out ranges of members and modes and index them both at the same time. E.g. to get the five members from 5 up to but not including 10 (5, 6, 7, 8, 9), and the two modes from 2 up to but not including 4 (modes with indices 2 and 3 in the reference), we'd use the following code. 
End of explanation """ gnms[5,2] """ Explanation: We can also use indexing to extract individual modes from individual members, e.g. End of explanation """ showSignatureMode(gnms[:, 0]); """ Explanation: Remember that we usually talk about modes counting from 1 so this is "Mode 3" or "the 3rd global mode" in conversation but Python counts from 0 so it has index 2. Likewise this is the "6th member" of the ensemble but has index 5. Step 3: Signature dynamics Signatures are calculated as the mean and standard deviation of various properties such as mode shapes and mean square fluctations. For example, we can show the average and standard deviation of the shape of the first mode (second index 0). The first index of the mode ensemble is over conformations. End of explanation """ showSignatureSqFlucts(gnms[:, :5]); showSignatureCrossCorr(gnms[:, :20]); """ Explanation: We can also show such results for properties involving multiple modes such as the mean square fluctuations from the first 5 modes or the cross-correlations from the first 20. End of explanation """ highlights = {'3h5vA': 'GluA2','3o21C': 'GluA3', '3h6gA': 'GluK2', '3olzA': 'GluK3', '5kc8A': 'GluD2'} """ Explanation: We can also look at distributions over values across different members of the ensemble such as inverse eigenvalue. We can show a bar above this with individual members labelled like in Krieger J, Bahar I, Greger IH. Structure, Dynamics, and Allosteric Potential of Ionotropic Glutamate Receptor N-Terminal Domains. Biophys. J. 2015 109(6):1136-48. In this automated version, the bar is coloured from white to dark red depending on how many structures have values at that point. 
We can select particular members to highlight with arrows by putting their names and labels in a dictionary: End of explanation """ gs = GridSpec(ncols=1, nrows=2, height_ratios=[1, 10], hspace=0.15) subplot(gs[0]); showVarianceBar(gnms[:, :5], fraction=True, highlights=highlights); xlabel(''); subplot(gs[1]); showSignatureVariances(gnms[:, :5], fraction=True, bins=80, alpha=0.7); xlabel('Fraction of inverse eigenvalue'); """ Explanation: We plot the variance bar for the first five modes (showing a function of the inverse eigenvalues related to the resultant relative size of motion) above the inverse eigenvalue distributions for each of those modes. To arrange the plots like this, we use the GridSpec function of Matplotlib. End of explanation """ eigvals = gnms.getEigvals() eigvals eigvecs = gnms.getEigvecs() eigvecs """ Explanation: We can also extract the eigenvalues and eigenvectors directly from the mode ensemble and analyse them ourselves: End of explanation """ eigvals.shape eigvals[0:5,0:5] """ Explanation: These are stored in instances of the sdarray class that we designed specifically for signature dynamics analysis. It is an extension of the standard NumPy ndarray but has additional attributes and some modified methods. The first axis is reserved for ensemble members and the mean, min, max and std are altered to average over this dimension rather than all dimensions. We can look at the shape of these arrays and index them just like ndarray and ModeEnsemble objects. The eigenvalues are arranged in eigvals such that the first axis is the members and the second is the modes as in the mode ensemble. End of explanation """ eigvecs.shape """ Explanation: The eigenvectors are arranged in eigvecs such that the first axis is over the members, and the remaining dimensions are as in other eigenvector arrays - the second is over atoms and the third is mode index. Each atom has a weight, which varies between members and is important in calculating the mean, std, etc. 
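For intuition, the member-axis statistics behind sdarray can be sketched in plain NumPy with made-up eigenvalue data (ignoring the per-atom weights mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_modes = 8, 20                              # made-up ensemble dimensions
eigvals = rng.uniform(0.5, 2.0, (n_members, n_modes))   # toy eigenvalue array

# Signature statistics reduce over axis 0 (the ensemble members), leaving
# one value per mode -- analogous to sdarray's modified mean()/std()/min()/max()
sig_mean = eigvals.mean(axis=0)
sig_std = eigvals.std(axis=0)
print(sig_mean.shape, sig_std.shape)  # (20,) (20,)
```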
End of explanation """ so_matrix = calcEnsembleSpectralOverlaps(gnms[:, :1]) figure(figsize=(8,8)) showMatrix(so_matrix); """ Explanation: Step 4: Spectral overlap and distance Spectral overlap, also known as covariance overlap, measures the overlap between two covariance matrices, or the overlap of a subset of the modes (a mode spectrum). This can also be converted into a distance using its arccosine as will be shown below. We can calculate a matrix of spectral overlaps (so_matrix) over any slice of the ModeEnsemble that is still a mode ensemble itself, e.g. End of explanation """ sd_matrix = calcEnsembleSpectralOverlaps(gnms[:, :1], distance=True) figure(figsize=(8,8)); showMatrix(sd_matrix); """ Explanation: We can also obtain a spectral distance matrix (sd_matrix) from calcEnsembleSpectralOverlaps by giving it an additional argument: End of explanation """ labels = dali_ens.getLabels() so_tree = calcTree(names=labels, distance_matrix=sd_matrix, method='upgma') """ Explanation: We can then use this distance to calculate a tree. The labels from the mode ensemble as used as names for the leaves of the tree and are stored in their own variable/object for later use. End of explanation """ showTree(so_tree); """ Explanation: We can show this tree using the function showTree: End of explanation """ reordered_so, new_so_indices = reorderMatrix(names=labels, matrix=so_matrix, tree=so_tree) figure(figsize=(8,8)) showMatrix(reordered_so, ticklabels=new_so_indices); """ Explanation: We can also use this tree to reorder the so_matrix and obtain indices for reordering other objects: End of explanation """ figure(figsize=(8,8)) showMatrix(reordered_so, ticklabels=new_so_indices, origin='upper'); """ Explanation: As in the tree, we see 2-3 clusters with some finer structure within them as in the tree. 
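For reference, the spectral (covariance) overlap and its arccosine distance can be sketched with NumPy. This follows one common definition, the covariance overlap of Hess; it is an assumption for illustration and not necessarily the exact formula used by calcEnsembleSpectralOverlaps:

```python
import numpy as np

def covariance_overlap(vals_a, vecs_a, vals_b, vecs_b):
    """Covariance overlap between two mode spectra. vals are eigenvalues of
    the covariance (i.e. inverse ENM eigenvalues); vecs columns are modes."""
    num = 2.0 * np.sum(np.sqrt(np.outer(vals_a, vals_b)) * (vecs_a.T @ vecs_b) ** 2)
    den = vals_a.sum() + vals_b.sum()
    return 1.0 - np.sqrt(abs(den - num) / den)

def spectral_distance(overlap):
    """Distance as the arccosine of the overlap, as described above."""
    return float(np.arccos(np.clip(overlap, -1.0, 1.0)))

vals = np.array([1.0, 0.5, 0.25])   # toy spectrum
vecs = np.eye(3)
same = covariance_overlap(vals, vecs, vals, vecs)
print(same, spectral_distance(same))  # identical spectra: overlap 1.0, distance 0.0
```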
These correspond to different subtypes of iGluRs called AMPA receptors (subunit paralogues GluA1-4, top) and kainate receptors (subunit paralogues GluK1-5, bottom) based on their preferred agonists as well as delta receptors at the bottom (these are flipped relative to the tree). To show the matrix in the same order as the tree, we can add the option origin='upper': End of explanation """ figure(figsize=(11,8)) showMatrix(reordered_so, ticklabels=new_so_indices, origin='upper', y_array=so_tree); """ Explanation: We can also show the tree along the y-axis of the matrix as follows: End of explanation """ so_reordered_ens = dali_ens[new_so_indices] so_reordered_gnms = gnms[new_so_indices, :] """ Explanation: We can also use the resulting indices to reorder the ModeEnsemble and PDBEnsemble: End of explanation """ so_reordered_labels = np.array(labels)[new_so_indices] """ Explanation: Lists can only be used for indexing arrays not lists so we need to perform a type conversion prior to indexing in order to reorder the labels: End of explanation """ seqid_matrix = buildSeqidMatrix(so_reordered_ens.getMSA()) seqdist_matrix = 1. - seqid_matrix figure(figsize=(8,8)); showMatrix(seqdist_matrix); """ Explanation: Comparing with sequence and structural distances The sequence distance is given by the (normalized) Hamming distance, which is calculated by subtracting the percentage identity (fraction) from 1, and the structural distance is the RMSD. We can also calculate and show the matrices and trees for these from the PDB ensemble. 
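The normalized Hamming distance just described can be sketched directly (the aligned sequences below are made up):

```python
import numpy as np

def seq_identity(a, b):
    """Fraction of aligned positions that match (equal-length sequences)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

seqs = ['KETAAAKF', 'KETA-AKF', 'KDTAAAKY']  # toy aligned sequences
n = len(seqs)
# Normalized Hamming distance = 1 - percentage identity (as a fraction)
seqdist = np.array([[1.0 - seq_identity(seqs[i], seqs[j]) for j in range(n)]
                    for i in range(n)])
print(seqdist[0])  # distances from the first sequence: 0, 0.125, 0.25
```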
First we calculate the sequence distance matrix: End of explanation """ seqdist_tree = calcTree(names=so_reordered_labels, distance_matrix=seqdist_matrix, method='upgma') showTree(seqdist_tree); """ Explanation: We can also construct a tree based on seqdist_matrix and use that to reorder it: End of explanation """ reordered_seqdist_seqdist, new_seqdist_indices = reorderMatrix(names=so_reordered_labels, matrix=seqdist_matrix, tree=seqdist_tree) figure(figsize=(8,8)); showMatrix(reordered_seqdist_seqdist, ticklabels=new_seqdist_indices); """ Explanation: We can reorder seqdist_matrix with seqdist_tree as we did above with so_tree: End of explanation """ rmsd_matrix = so_reordered_ens.getRMSDs(pairwise=True) figure(figsize=(8,8)); showMatrix(rmsd_matrix); rmsd_tree = calcTree(names=so_reordered_labels, distance_matrix=rmsd_matrix, method='upgma') """ Explanation: This shows us even clearer groups than the dynamic spectrum-based analysis. We see one subunit by itself at the bottom that is from a delta-type iGluR (GluD2), then two groups of kainate receptors (GluK5 and GluK2 with GluK3), and four groups of AMPARs (GluA1, GluA2, GluA4, and many structures from GluA3). Similarily, once we obtain the RMSD matrix and tree using the getRMSDs method of the PDBEnsemble, we can calculate the structure-based tree: End of explanation """ figure(figsize=(20,8)); subplot(1, 3, 1); showTree(seqdist_tree, format='plt'); title('Sequence'); subplot(1, 3, 2); showTree(rmsd_tree, format='plt'); title('Structure'); subplot(1, 3, 3); showTree(so_tree, format='plt'); title('Dynamics'); """ Explanation: It could be of interest to put all three trees constructed based on different distance metrics side by side and compare them. We can do this using the subplot function from Matplotlib. 
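Outside ProDy, an analogous UPGMA tree and leaf ordering can be obtained with SciPy (a sketch on a made-up distance matrix; calcTree and reorderMatrix may differ in detail):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

# Toy symmetric distance matrix with two clear clusters, {0, 1} and {2, 3}
dist = np.array([[0.00, 0.10, 0.90, 0.80],
                 [0.10, 0.00, 0.85, 0.95],
                 [0.90, 0.85, 0.00, 0.05],
                 [0.80, 0.95, 0.05, 0.00]])

Z = linkage(squareform(dist), method='average')  # 'average' linkage is UPGMA
order = leaves_list(Z)                           # leaf order implied by the tree
reordered = dist[np.ix_(order, order)]           # reorder rows/columns to match
```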
End of explanation """ reordered_rmsd_seqdist, new_seqdist_indices = reorderMatrix(names=so_reordered_labels, matrix=rmsd_matrix, tree=seqdist_tree) reordered_sd_seqdist, new_seqdist_indices = reorderMatrix(names=so_reordered_labels, matrix=sd_matrix, tree=seqdist_tree) figure(figsize=(20,8)); subplot(1, 3, 1); showMatrix(reordered_seqdist_seqdist, ticklabels=new_seqdist_indices, origin='upper'); title('Sequence'); subplot(1, 3, 2); showMatrix(reordered_rmsd_seqdist, ticklabels=new_seqdist_indices, origin='upper'); title('Structure'); subplot(1, 3, 3); showMatrix(reordered_sd_seqdist, ticklabels=new_seqdist_indices, origin='upper'); title('Dynamics'); """ Explanation: Likewise, we can place the matrices side-by-side after having them all reordered the same way. We'll reorder by seqdist in this example: End of explanation """ pathPDBFolder('') """ Explanation: This analysis is quite sensitive to how many modes are used. As the number of modes approaches the full number, the dynamic distance order approaches the RMSD order. With smaller numbers, we see finer distinctions and there is a point where the dynamic distances are more in line with the sequence distances, which we call the low-to-intermediate frequency regime. In the current case where we used just one global mode (with the lowest frequency), we see small spectral distances but some subfamily differentiation is still apparent. The same analysis could also be performed with a larger ensemble by selecting lower sequence identity and Z-score cutoffs as we did in our paper. Now we have finished this tutorial, we reset the default path to the PDB folder, so that we aren't surprised next time we download PDBs and can't find them: End of explanation """
weikang9009/pysal
notebooks/explore/pointpats/distance_statistics.ipynb
bsd-3-clause
import scipy.spatial import pysal.lib as ps import numpy as np from pysal.explore.pointpats import PointPattern, PoissonPointProcess, as_window, G, F, J, K, L, Genv, Fenv, Jenv, Kenv, Lenv %matplotlib inline import matplotlib.pyplot as plt """ Explanation: Distance Based Statistical Method for Planar Point Patterns Authors: Serge Rey sjsrey@gmail.com and Wei Kang weikang9009@gmail.com Introduction Distance based methods for point patterns are of three types: Mean Nearest Neighbor Distance Statistics Nearest Neighbor Distance Functions Interevent Distance Functions In addition, we are going to introduce a computational technique, Simulation Envelopes, to aid in making inferences about the data generating process. An example is used to demonstrate how to use and interpret simulation envelopes. End of explanation """ points = [[66.22, 32.54], [22.52, 22.39], [31.01, 81.21], [9.47, 31.02], [30.78, 60.10], [75.21, 58.93], [79.26, 7.68], [8.23, 39.93], [98.73, 77.17], [89.78, 42.53], [65.19, 92.08], [54.46, 8.48]] pp = PointPattern(points) pp.summary() """ Explanation: Mean Nearest Neighbor Distance Statistics The nearest neighbor(s) for a point $u$ is the point(s) $N(u)$ which meet the condition $$d_{u,N(u)} \leq d_{u,j} \quad \forall j \in S - u$$ The distance between the nearest neighbor(s) $N(u)$ and the point $u$ is the nearest neighbor distance for $u$. After searching for the nearest neighbor(s) of all the points and calculating the corresponding distances, we can calculate the mean nearest neighbor distance by averaging these distances. It was demonstrated by Clark and Evans (1954) that the mean nearest neighbor distance statistic follows a normal distribution under the null hypothesis that the underlying spatial process is CSR.
We can utilize this test statistic to determine whether the point pattern is the outcome of CSR. If not, is it the outcome of a cluster or a regular spatial process? Mean nearest neighbor distance statistic: $$\bar{d}_{min}=\frac{1}{n} \sum_{i=1}^n d_{min}(s_i)$$ End of explanation """ # one nearest neighbor (default) pp.knn() """ Explanation: We may call the knn method of the PointPattern class to find the $k$ nearest neighbors for each point in the point pattern pp. End of explanation """ # two nearest neighbors pp.knn(2) pp.max_nnd # Maximum nearest neighbor distance pp.min_nnd # Minimum nearest neighbor distance pp.mean_nnd # mean nearest neighbor distance pp.nnd # Nearest neighbor distances pp.nnd.sum()/pp.n # same as pp.mean_nnd pp.plot() """ Explanation: The first array is the ids of the nearest neighbor for each point; the second array is the distance between each point and its nearest neighbor. End of explanation """ gp1 = G(pp, intervals=20) gp1.plot() """ Explanation: Nearest Neighbor Distance Functions Nearest neighbour distance distribution functions (including the nearest “event-to-event” and “point-event” distance distribution functions) of a point process are cumulative distribution functions of several kinds -- $G, F, J$. By comparing the distance function of the observed point pattern with that of a point pattern from a CSR process, we are able to infer whether the underlying spatial process of the observed point pattern is CSR or not for a given confidence level. $G$ function - event-to-event The $G$ function is defined as follows: for a given distance $d$, $G(d)$ is the proportion of nearest neighbor distances that are less than $d$.
$$G(d) = \sum_{i=1}^n \frac{\phi_i^d}{n}$$ $$ \phi_i^d = \begin{cases} 1 & \quad \text{if } d_{min}(s_i)<d \\ 0 & \quad \text{otherwise} \end{cases} $$ If the underlying point process is a CSR process, the $G$ function has an expectation of: $$ G(d) = 1-e^{-\lambda \pi d^2} $$ However, if the $G$ function plot is above the expectation this reflects clustering, while departures below the expectation reflect dispersion. End of explanation """ gp1.plot(qq=True) """ Explanation: A slightly different visualization of the empirical function is the quantile-quantile plot: End of explanation """ gp1.d # distance domain sequence (corresponding to the x-axis) gp1.G # cumulative nearest neighbor distance distribution over d (corresponding to the y-axis) """ Explanation: In the q-q plot the CSR function is now a diagonal line, which serves to make assessment of departures from CSR visually easier. It is obvious that the above $G$ increases very slowly at small distances and the line is below the expected value for a CSR process (green line). We might think that the underlying spatial process is a regular point process. However, this visual inspection is not enough for a final conclusion. In Simulation Envelopes, we are going to demonstrate how to simulate data under CSR many times and construct the $95\%$ simulation envelope for $G$. End of explanation """ fp1 = F(pp, intervals=20) # The default is to randomly generate 100 points. fp1.plot() fp1.plot(qq=True) """ Explanation: $F$ function - "point-event" When the number of events in a point pattern is small, the $G$ function is rough (see the $G$ function plot for the 12-point pattern above). One way to get around this is to turn to the $F$ function, where a given number of randomly distributed points are generated in the domain and the nearest event neighbor distance is calculated for each point. The cumulative distribution of all nearest event neighbor distances is called the $F$ function.
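The empirical $G$ function above is simple to compute from scratch; a brute-force NumPy sketch on toy coordinates (under CSR its expectation would be $1-e^{-\lambda \pi d^2}$):

```python
import numpy as np

def nearest_neighbor_distances(points):
    """Distance from each point to its nearest other point (brute force)."""
    pts = np.asarray(points, dtype=float)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)   # exclude self-distances
    return d.min(axis=1)

def g_hat(nnd, d_grid):
    """Empirical G(d): proportion of nearest neighbor distances below d."""
    return np.array([(nnd < d).mean() for d in d_grid])

pts = [[0, 0], [0, 1], [1, 0], [1, 1], [5, 5]]  # toy pattern: unit square + outlier
nnd = nearest_neighbor_distances(pts)
G_emp = g_hat(nnd, np.linspace(0, 10, 50))
```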
End of explanation """ fp1 = F(pp, intervals=50) fp1.plot() fp1.plot(qq=True) """ Explanation: We can increase the number of intervals to make $F$ smoother. End of explanation """ jp1 = J(pp, intervals=20) jp1.plot() """ Explanation: The $F$ function is smoother than the $G$ function. $J$ function - a combination of "event-event" and "point-event" The $J$ function is defined as follows: $$J(d) = \frac{1-G(d)}{1-F(d)}$$ If $J(d)<1$, the underlying point process is a cluster point process; if $J(d)=1$, the underlying point process is a random point process; otherwise, it is a regular point process. End of explanation """ kp1 = K(pp) kp1.plot() """ Explanation: From the above figure, we can observe that the $J$ function is obviously above the $J(d)=1$ horizontal line. It approaches infinity as the nearest neighbor distance increases. We might tend to conclude that the underlying point process is a regular one. Interevent Distance Functions Nearest neighbor distance functions consider only the nearest neighbor distances, "event-event", "point-event" or the combination. Thus, distances to higher-order neighbors, which might reveal important information regarding the point process, are ignored. Interevent distance functions, including the $K$ and $L$ functions, are proposed to consider distances between all pairs of event points. Similar to the $G$, $F$ and $J$ functions, the $K$ and $L$ functions are also cumulative distribution functions. $K$ function - "interevent" Given distance $d$, $K(d)$ is defined as: $$K(d) = \frac{\sum_{i=1}^n \sum_{j=1}^n \psi_{ij}(d)}{n \hat{\lambda}}$$ where $$ \psi_{ij}(d) = \begin{cases} 1 & \quad \text{if } d_{ij}<d \\ 0 & \quad \text{otherwise} \end{cases} $$ $\sum_{j=1}^n \psi_{ij}(d)$ is the number of events within a circle of radius $d$ centered on event $s_i$. Still, we use CSR as the benchmark (null hypothesis) and see how the $K$ function estimated from the observed point pattern deviates from that under CSR, which is $K(d)=\pi d^2$.
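A brute-force version of this estimator (no edge correction, toy coordinates) together with the $L$ transformation defined below:

```python
import numpy as np

def ripley_k(points, d_grid, area):
    """Naive Ripley K (no edge correction):
    K(d) = sum_i sum_{j != i} 1[d_ij < d] / (n * lambda_hat)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    lam = n / area
    dm = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(dm, np.inf)   # exclude self-pairs
    return np.array([(dm < d).sum() / (n * lam) for d in d_grid])

def ripley_l(k_vals, d_grid):
    """L(d) = sqrt(K(d)/pi) - d, so CSR corresponds to L(d) = 0."""
    return np.sqrt(np.asarray(k_vals) / np.pi) - np.asarray(d_grid)

pts = [[0.2, 0.2], [0.2, 0.8], [0.8, 0.2], [0.8, 0.8]]  # toy pattern in the unit square
K_emp = ripley_k(pts, [0.5, 0.7, 1.0], area=1.0)
print(K_emp)  # K at d = 0.5, 0.7, 1.0 -> 0.0, 0.5, 0.75
```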
$K(d)<\pi d^2$ indicates that the underlying point process is a regular point process. $K(d)>\pi d^2$ indicates that the underlying point process is a cluster point process. End of explanation """ lp1 = L(pp) lp1.plot() """ Explanation: $L$ function - "interevent" $L$ function is a scaled version of $K$ function, defined as: $$L(d) = \sqrt{\frac{K(d)}{\pi}}-d$$ End of explanation """ realizations = PoissonPointProcess(pp.window, pp.n, 100, asPP=True) # simulate CSR 100 times genv = Genv(pp, intervals=20, realizations=realizations) # call Genv to generate simulation envelope genv genv.observed genv.plot() """ Explanation: Simulation Envelopes A Simulation envelope is a computer intensive technique for inferring whether an observed pattern significantly deviates from what would be expected under a specific process. Here, we always use CSR as the benchmark. In order to construct a simulation envelope for a given function, we need to simulate CSR a lot of times, say $1000$ times. Then, we can calculate the function for each simulated point pattern. For every distance $d$, we sort the function values of the $1000$ simulated point patterns. Given a confidence level, say $95\%$, we can acquire the $25$th and $975$th value for every distance $d$. Thus, a simulation envelope is constructed. Simulation Envelope for G function Genv class in pysal. End of explanation """ fenv = Fenv(pp, intervals=20, realizations=realizations) fenv.plot() """ Explanation: In the above figure, LB and UB comprise the simulation envelope. CSR is the mean function calculated from the simulated data. G is the function estimated from the observed point pattern. It is well below the simulation envelope. We can infer that the underlying point process is a regular one. Simulation Envelope for F function Fenv class in pysal. End of explanation """ jenv = Jenv(pp, intervals=20, realizations=realizations) jenv.plot() """ Explanation: Simulation Envelope for J function Jenv class in pysal. 
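The percentile construction described above can be sketched for a simple statistic, the mean nearest neighbor distance, under CSR in the unit square (simulation sizes are made up for speed):

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_nnd(pts):
    """Mean nearest neighbor distance of a point array (brute force)."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

# Simulate CSR (uniform points in the unit square) many times, compute the
# statistic for each realization, and take percentiles as a 95% envelope
n_points, n_sims = 50, 200
sims = np.array([mean_nnd(rng.random((n_points, 2))) for _ in range(n_sims)])
lower, upper = np.percentile(sims, [2.5, 97.5])
```

An observed statistic falling outside [lower, upper] would then be evidence against CSR at roughly the 95% level.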
End of explanation """ kenv = Kenv(pp, intervals=20, realizations=realizations) kenv.plot() """ Explanation: Simulation Envelope for K function Kenv class in pysal. End of explanation """ lenv = Lenv(pp, intervals=20, realizations=realizations) lenv.plot() """ Explanation: Simulation Envelope for L function Lenv class in pysal. End of explanation """ from pysal.lib.cg import shapely_ext from pysal.explore.pointpats import Window import pysal.lib as ps va = ps.io.open(ps.examples.get_path("vautm17n.shp")) polys = [shp for shp in va] state = shapely_ext.cascaded_union(polys) """ Explanation: CSR Example In this example, we are going to generate a point pattern as the "observed" point pattern. The data generating process is CSR. Then, we will simulate CSR in the same domain 100 times and construct a simulation envelope for each function. End of explanation """ n = 100 samples = 1 pp = PoissonPointProcess(Window(state.parts), n, samples, asPP=True) pp.realizations[0] pp.n """ Explanation: Generate the point pattern pp (size 100) from CSR as the "observed" point pattern. End of explanation """ csrs = PoissonPointProcess(pp.window, 100, 100, asPP=True) csrs """ Explanation: Simulate CSR in the same domain 100 times; these simulations will be used to construct the simulation envelopes under the null hypothesis of CSR. End of explanation """ genv = Genv(pp.realizations[0], realizations=csrs) genv.plot() """ Explanation: Construct the simulation envelope for $G$ function. End of explanation """ genv.low # lower bound of the simulation envelope for G genv.high # upper bound of the simulation envelope for G """ Explanation: Since the "observed" $G$ is well contained by the simulation envelope, we infer that the underlying point process is a random process. End of explanation """ fenv = Fenv(pp.realizations[0], realizations=csrs) fenv.plot() """ Explanation: Construct the simulation envelope for $F$ function.
End of explanation """ jenv = Jenv(pp.realizations[0], realizations=csrs) jenv.plot() """ Explanation: Construct the simulation envelope for $J$ function. End of explanation """ kenv = Kenv(pp.realizations[0], realizations=csrs) kenv.plot() """ Explanation: Construct the simulation envelope for $K$ function. End of explanation """ lenv = Lenv(pp.realizations[0], realizations=csrs) lenv.plot() """ Explanation: Construct the simulation envelope for $L$ function. End of explanation """
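The envelope recipe described above (simulate CSR many times, evaluate the function on each simulated pattern, then take pointwise percentiles over the simulations) can be sketched with plain NumPy. This is only an illustrative stand-in, not pysal's implementation: the helper names `nn_dist_cdf` and `csr_envelope` are invented for this sketch, and the statistic is a simple nearest-neighbour distance CDF on the unit square.

```python
import numpy as np

def nn_dist_cdf(points, d_grid):
    """Empirical CDF of nearest-neighbour distances (a stand-in for the G function)."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)          # ignore self-distances
    nn = dist.min(axis=1)
    return np.array([(nn <= d).mean() for d in d_grid])

def csr_envelope(n_points, n_sims, d_grid, lo=2.5, hi=97.5, seed=0):
    """Pointwise percentile envelope of the statistic under CSR on the unit square."""
    rng = np.random.default_rng(seed)
    sims = np.array([nn_dist_cdf(rng.random((n_points, 2)), d_grid)
                     for _ in range(n_sims)])
    # for each distance d, take the lo-th and hi-th percentile over the simulations
    return np.percentile(sims, lo, axis=0), np.percentile(sims, hi, axis=0)

d_grid = np.linspace(0.0, 0.2, 20)
lb, ub = csr_envelope(n_points=50, n_sims=99, d_grid=d_grid)
```

An observed pattern whose statistic leaves the band `[lb, ub]` at some distance would then be flagged as deviating from CSR, exactly as in the Genv/Fenv plots above.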
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160530월_9일차_추정 및 검정 Estimation and Test/6.MLE 모수 추정의 예.ipynb
mit
theta0 = 0.6
x = sp.stats.bernoulli(theta0).rvs(1000)
N0, N1 = np.bincount(x, minlength=2)
N = N0 + N1
theta = N1 / N
theta
"""
Explanation: Examples of MLE parameter estimation
Parameter estimation for the Bernoulli distribution
You should be able to write out this derivation on your own.
The probability of each trial $x_i$ follows a Bernoulli distribution
$$ P(x | \theta ) = \text{Bern}(x | \theta ) = \theta^x (1 - \theta)^{1-x}$$
If there are $N$ samples, the Likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1-x_i} $$
Log-Likelihood
$$ \begin{eqnarray} \log L &=& \log P(x_{1:N}|\theta) \\ &=& \sum_{i=1}^N \big\{ {x_i} \log\theta + (1-x_i)\log(1 - \theta) \big\} \\ &=& \sum_{i=1}^N {x_i} \log\theta + \left( N-\sum_{i=1}^N x_i \right) \log( 1 - \theta ) \end{eqnarray} $$
Since $x = 1$ (success) or $x = 0$ (failure), the total number of trials is $N$ and the number of successes is $N_1 = \sum_{i=1}^N {x_i}$.
Therefore the Log-Likelihood is
$$ \begin{eqnarray} \log L &=& N_1 \log\theta + (N-N_1) \log(1 - \theta) \end{eqnarray} $$
Log-Likelihood Derivative
$$ \begin{eqnarray} \dfrac{\partial \log L}{\partial \theta} &=& \dfrac{\partial}{\partial \theta} \big\{ N_1 \log\theta + (N-N_1) \log(1 - \theta) \big\} = 0 \\ &=& \dfrac{N_1}{\theta} - \dfrac{N-N_1}{1-\theta} = 0 \end{eqnarray} $$
$$ \dfrac{N_1}{\theta} = \dfrac{N-N_1}{1-\theta} $$
$$ \dfrac{1-\theta}{\theta} = \dfrac{N-N_1}{N_1} $$
$$ \dfrac{1}{\theta} - 1 = \dfrac{N}{N_1} - 1 $$
$$ \theta = \dfrac{N_1}{N} $$
End of explanation
"""
theta0 = np.array([0.1, 0.3, 0.6])
x = np.random.choice(np.arange(3), 1000, p=theta0)
N0, N1, N2 = np.bincount(x, minlength=3)
theta = np.array([N0, N1, N2]) / N
theta
"""
Explanation: Parameter estimation for the categorical distribution
Working through the sums by hand for the case $K = 4$ will make the derivation easier to follow. 
Up to this point the distributions were discrete.
The probability of each trial $x_i$ follows a categorical distribution
$$ P(x | \theta ) = \text{Cat}(x | \theta) = \prod_{k=1}^K \theta_k^{x_k} $$
$$ \sum_{k=1}^K \theta_k = 1 $$
If there are $N$ samples, the Likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} $$
Log-Likelihood
$$ \begin{eqnarray} \log L &=& \log P(x_{1:N}|\theta) \\ &=& \sum_{i=1}^N \sum_{k=1}^K {x_{i,k}} \log\theta_k \\ &=& \sum_{k=1}^K \log\theta_k \sum_{i=1}^N {x_{i,k}} \end{eqnarray} $$
Writing $N_k = \sum_{i=1}^N {x_{i,k}}$ for the number of times outcome $k$ appears, the Log-Likelihood becomes
$$ \begin{eqnarray} \log L &=& \sum_{k=1}^K \log\theta_k N_k \end{eqnarray} $$
subject to the additional constraint
$$ \sum_{k=1}^K \theta_k = 1 $$
Log-Likelihood Derivative with Lagrange multiplier
$$ \begin{eqnarray} \dfrac{\partial \log L}{\partial \theta_k} &=& \dfrac{\partial}{\partial \theta_k} \left\{ \sum_{k=1}^K \log\theta_k N_k + \lambda \left(1- \sum_{k=1}^K \theta_k\right) \right\} = 0 \\ \dfrac{\partial \log L}{\partial \lambda} &=& \dfrac{\partial}{\partial \lambda} \left\{ \sum_{k=1}^K \log\theta_k N_k + \lambda \left(1- \sum_{k=1}^K \theta_k \right) \right\} = 0 \end{eqnarray} $$
$$ \dfrac{N_1}{\theta_1} = \dfrac{N_2}{\theta_2} = \cdots = \dfrac{N_K}{\theta_K} = \lambda $$
$$ \sum_{k=1}^K N_k = N $$
$$ \lambda \sum_{k=1}^K \theta_k = \lambda = N $$
$$ \theta_k = \dfrac{N_k}{N} $$
End of explanation
"""
mu0 = 1
sigma0 = 2
x = sp.stats.norm(mu0, sigma0).rvs(1000)
xbar = x.mean()
s2 = x.std(ddof=1)
xbar, s2
"""
Explanation: Parameter estimation for the normal distribution
From here on the distributions are continuous. 
We will only cover the normal distribution here.
The probability of each trial $x_i$ follows a Gaussian normal distribution
$$ P(x | \theta ) = N(x | \mu, \sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right) $$
If there are $N$ samples, the Likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x_i-\mu)^2}{2\sigma^2}\right)$$
Log-Likelihood
$$ \begin{eqnarray} \log L &=& \log P(x_{1:N}|\theta) \\ &=& \sum_{i=1}^N \left\{ -\dfrac{1}{2}\log(2\pi\sigma^2) - \dfrac{(x_i-\mu)^2}{2\sigma^2} \right\} \\ &=& -\dfrac{N}{2} \log(2\pi\sigma^2) - \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2 \end{eqnarray} $$
Log-Likelihood Derivative
$$ \begin{eqnarray} \dfrac{\partial \log L}{\partial \mu} &=& \dfrac{\partial}{\partial \mu} \left\{ \dfrac{N}{2} \log(2\pi\sigma^2) + \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2 \right\} = 0 \\ \dfrac{\partial \log L}{\partial \sigma^2} &=& \dfrac{\partial}{\partial \sigma^2} \left\{ \dfrac{N}{2} \log(2\pi\sigma^2) + \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2 \right\} = 0 \end{eqnarray} $$
$$ \dfrac{2}{2\sigma^2}\sum_{i=1}^N (x_i-\mu) = 0 $$
$$ N \mu = \sum_{i=1}^N x_i $$
$$ \mu = \dfrac{1}{N}\sum_{i=1}^N x_i = \bar{x} $$
$$ \dfrac{N}{2\sigma^2} - \dfrac{1}{2(\sigma^2)^2}\sum_{i=1}^N (x_i-\mu)^2 = 0 $$
$$ \sigma^2 = \dfrac{1}{N}\sum_{i=1}^N (x_i-\mu)^2 = \dfrac{1}{N}\sum_{i=1}^N (x_i-\bar{x})^2 = s^2 $$
End of explanation
"""
mu0 = np.array([0, 1])
sigma0 = np.array([[1, 0.2], [0.2, 4]])
x = sp.stats.multivariate_normal(mu0, sigma0).rvs(1000)
xbar = x.mean(axis=0)
S2 = np.cov(x, rowvar=0)
print(xbar)
print(S2)
"""
Explanation: Parameter estimation for the multivariate normal distribution
We covered this last time; there is no need to memorize it.
MLE for Multivariate Gaussian Normal Distribution
The probability of each trial $x_i$ follows a multivariate normal distribution
$$ P(x | \theta ) = N(x | \mu, \Sigma) = \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) $$
If there are $N$ samples, the Likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \right)$$ 
Log-Likelihood
$$ \begin{eqnarray} \log L &=& \log P(x_{1:N}|\theta) \\ &=& \sum_{i=1}^N \left\{ -\log((2\pi)^{D/2} |\Sigma|^{1/2}) - \dfrac{1}{2} (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \right\} \\ &=& C -\dfrac{N}{2} \log|\Sigma| - \dfrac{1}{2} \sum_{i=1}^N (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \end{eqnarray} $$
Using the precision matrix $\Lambda = \Sigma^{-1}$,
$$ \begin{eqnarray} \log L &=& C + \dfrac{N}{2} \log|\Lambda| - \dfrac{1}{2} \sum_{i=1}^N (x_i-\mu)^T \Lambda (x_i-\mu) \end{eqnarray} $$
$$ \dfrac{\partial L}{\partial \mu} = - \dfrac{\partial}{\partial \mu} \sum_{i=1}^N (x_i-\mu)^T \Lambda (x_i-\mu) = \sum_{i=1}^N 2\Lambda (x_i - \mu) = 0 $$
$$ \mu = \dfrac{1}{N}\sum_{i=1}^N x_i $$
$$ \dfrac{\partial L}{\partial \Lambda} = \dfrac{\partial}{\partial \Lambda} \dfrac{N}{2} \log|\Lambda| - \dfrac{\partial}{\partial \Lambda} \dfrac{1}{2} \sum_{i=1}^N \text{tr}( (x_i-\mu)(x_i-\mu)^T\Lambda) = 0 $$
$$ \dfrac{N}{2} \Lambda^{-T} = \dfrac{1}{2}\sum_{i=1}^N (x_i-\mu)(x_i-\mu)^T $$
$$ \Sigma = \dfrac{1}{N}\sum_{i=1}^N (x_i-\mu)(x_i-\mu)^T $$
End of explanation
"""
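As a quick numerical check of the Bernoulli result $\theta = N_1/N$ derived above, we can evaluate the log-likelihood $N_1 \log\theta + (N - N_1)\log(1-\theta)$ on a grid of $\theta$ values and confirm that it peaks at $N_1/N$. The counts below are made up purely for this check.

```python
import numpy as np

N, N1 = 1000, 612   # suppose we observed 612 successes out of 1000 trials
theta_grid = np.linspace(0.001, 0.999, 999)
log_L = N1 * np.log(theta_grid) + (N - N1) * np.log(1.0 - theta_grid)
theta_hat = theta_grid[np.argmax(log_L)]
# the grid maximiser sits at the closed-form answer N1 / N = 0.612
```

Because the log-likelihood is strictly concave in $\theta$, the grid maximum is the grid point closest to the analytic optimum.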
HazyResearch/snorkel
tutorials/workshop/Workshop_3_Generative_Model_Training.ipynb
apache-2.0
%load_ext autoreload
%autoreload 2
%matplotlib inline

import os
import re
import numpy as np

# Connect to the database backend and initialize a Snorkel session
from lib.init import *
from snorkel.models import candidate_subclass
from snorkel.annotations import load_gold_labels
from snorkel.lf_helpers import (
    get_left_tokens, get_right_tokens, get_between_tokens,
    get_text_between, get_tagged_text,
)

# initialize our candidate type definition
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])

# gold (human-labeled) development set labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
"""
Explanation: <img align="left" src="imgs/logo.jpg" width="50px" style="margin-right:10px"> Snorkel Workshop: Extracting Spouse Relations <br> from the News
Part 3: Training the Generative Model
Now, we'll train a model of the LFs to estimate their accuracies. Once the model is trained, we can combine the outputs of the LFs into a single, noise-aware training label set for our extractor. Intuitively, we'll model the LFs by observing how they overlap and conflict with each other.
End of explanation
"""
from snorkel.annotations import LabelAnnotator

labeler = LabelAnnotator(lfs=[])
L_train = labeler.load_matrix(session, split=0)
L_dev = labeler.load_matrix(session, split=1)
"""
Explanation: I. 
Loading Labeling Matrices
First we'll load our label matrices from notebook 2.
End of explanation
"""
from snorkel.learning import GenerativeModel
from snorkel.learning import RandomSearch

# use random search to optimize the generative model
param_ranges = {
    'step_size' : [1e-3, 1e-4, 1e-5, 1e-6],
    'decay'     : [0.9, 0.95],
    'epochs'    : [50, 100],
    'reg_param' : [1e-3],
}
model_class_params = {'lf_propensity' : False}
searcher = RandomSearch(GenerativeModel, param_ranges, L_train, n=5,
                        model_class_params=model_class_params)
%time gen_model, run_stats = searcher.fit(L_dev, L_gold_dev)
run_stats
"""
Explanation: Now we set up and run the hyperparameter search, training our model with different hyperparameters and picking the best model configuration to keep. We'll set the random seed to maintain reproducibility.
Note that we are fitting our model's parameters to the training set generated by our labeling functions, while we are picking hyperparameters with respect to score over the development set labels, which we created by hand.
II: Unifying supervision
Generative Model
In data programming, we use a more sophisticated model to unify our labeling functions. We know that these labeling functions will not be perfect, and some may be quite low-quality, so we will model their accuracies with a generative model, which Snorkel will help us easily apply. This will ultimately produce a single set of noise-aware training labels, which we will then use to train an end extraction model in the next notebook. For more technical details of this overall approach, see our NIPS 2016 paper.
NOTE: Make sure you've written some of your own LFs in the previous notebook to get a decent score!!!
1. Training the Model
When training the generative model, we'll tune our hyperparameters using a simple random search. 
Parameter Definitions
epochs A single pass through all the data in your training set
step_size The factor by which we update model weights after computing the gradient
decay The rate our update factor diminishes (decays) over time
End of explanation
"""
x = L_dev.lf_stats(session, L_gold_dev)
train_marginals = gen_model.marginals(L_train)
"""
Explanation: 2. Model Accuracies
These are the weights learned for each LF.
End of explanation
"""
import matplotlib.pyplot as plt

plt.hist(train_marginals, bins=20, range=(0.0, 1.0))
plt.show()
"""
Explanation: 3. Plotting Marginal Probabilities
One immediate sanity check you can perform using the generative model is to visually examine the distribution of predicted training marginals. Ideally, you should get a bimodal distribution with large separation between the two peaks, as shown below by the far right image. This corresponds to good signal for the negative and positive class labels.
For your first Snorkel application, you'll probably see marginals closer to the far left or middle images. With all mass centered around p=0.5, you probably need to write more LFs to get more overall coverage. In the middle image, you have good negative coverage, but not enough positive LFs.
<img align="left" src="imgs/marginals-common.jpg" width="265px" style="margin-right:0px">
<img align="left" src="imgs/marginals-real.jpg" width="265px" style="margin-right:0px">
<img align="left" src="imgs/marginals-ideal.jpg" width="265px" style="margin-right:0px">
End of explanation
"""
dev_marginals = gen_model.marginals(L_dev)
_, _, _, _ = gen_model.error_analysis(session, L_dev, L_gold_dev)
"""
Explanation: 4. Generative Model Metrics
End of explanation
"""
from snorkel.annotations import save_marginals

%time save_marginals(session, L_train, train_marginals)
"""
Explanation: 5. 
Saving our training labels Finally, we'll save the training_marginals, which are our "noise-aware training labels", so that we can use them in the next tutorial to train our end extraction model: End of explanation """ from snorkel.learning.structure import DependencySelector MAX_DEPS = 5 ds = DependencySelector() deps = ds.select(L_train, threshold=0.1) deps = set(list(deps)[0:min(len(deps), MAX_DEPS)]) print("Using {} dependencies".format(len(deps))) """ Explanation: III. Advanced Generative Model Features A. Structure Learning We may also want to include the dependencies between our LFs when training the generative model. Snorkel makes it easy to do this! DependencySelector runs a fast structure learning algorithm over the matrix of LF outputs to identify a set of likely dependencies. End of explanation """
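Snorkel's generative model learns LF accuracies from their overlaps and conflicts; the details are in the NIPS 2016 paper referenced above. As a much simpler point of comparison, the sketch below combines a toy label matrix by a weighted vote. It only illustrates the idea of unifying noisy labels into marginals; it is not Snorkel's actual model, and the function name is invented for this sketch.

```python
import numpy as np

# Toy label matrix: rows = candidates, columns = LFs; +1/-1 votes, 0 = abstain
L = np.array([[ 1,  1,  0],
              [ 1, -1,  1],
              [-1, -1,  0],
              [ 0,  1,  1]])

def majority_vote_marginals(L, weights=None):
    """P(y = +1) from a (weighted) vote; abstentions carry no weight."""
    if weights is None:
        weights = np.ones(L.shape[1])
    score = L @ weights                 # net weighted vote per candidate
    total = (L != 0) @ weights          # total weight of non-abstaining LFs
    with np.errstate(divide="ignore", invalid="ignore"):
        frac = np.where(total > 0, score / total, 0.0)
    return 0.5 * (frac + 1.0)           # map [-1, 1] onto [0, 1]

marginals = majority_vote_marginals(L)
# rows with unanimous votes get marginals of exactly 1.0 or 0.0
```

In Snorkel the per-LF weights are learned rather than fixed, which is what lets low-quality LFs be down-weighted automatically.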
idisblueflash/skills_map_searcher
skill map search.ipynb
mit
import numpy as np
import tensorflow as tf
from openpyxl import load_workbook
from collections import namedtuple
import time
"""
Explanation: Skills map searcher
Search for related chapters based on the text entered.
Data loading
End of explanation
"""
# Load data from xlsx file
wb = load_workbook('skill_map_data.xlsx')
## print(wb.get_sheet_names())
ws = wb.get_sheet_by_name('raw data - Chapter and Text')
raw_data = []
for row in ws.iter_rows():
    raw_data_row = {
        "week_day" : row[0].value,
        "chapter" : row[1].value,
        "lesson" : row[2].value,
        "section" : row[3].value,
        "text" : row[4].value
    }
    raw_data.append(raw_data_row)
raw_data = raw_data[2:]  # remove table name and header
assert(len(raw_data) < 100)  # normally we don't have 100+ sections

# Split raw_data into inputs and labels
inputs = [row['text'] for row in raw_data]
assert(len(raw_data) == len(inputs))

## concatenate week_day, chapter, lesson and section into one label
labels = [' '.join([
    str(row['week_day']), ' ', row['chapter'], ' ', row['lesson'], ' ', row['section']
]) for row in raw_data]
assert(len(raw_data) == len(labels))

# Split inputs to generate more training data
seq_len = 100  # length used to split long texts
seq_inputs = []
seq_labels = []
count = 0
for i, input in enumerate(inputs):
    if len(input) > seq_len:
        for j in range(int(len(input)/seq_len + 0.5)):
            seq_input = input[j*seq_len:(j+1)*seq_len]
            seq_inputs.append(seq_input)
            seq_labels.append(labels[i])
            count += 1
    else:
        seq_inputs.append(input)
        seq_labels.append(labels[i])
len(seq_inputs), len(seq_labels)
# seq_labels[998], seq_inputs[998]
inputs = seq_inputs
labels = seq_labels
inputs[:5]
"""
Explanation: Load data from the xlsx file. I loaded the xlsx file and split it into inputs and labels. Finally, I also split the inputs to generate more training data. 
End of explanation
"""
from string import punctuation

all_text = ''.join([c for c in inputs if c not in punctuation])
all_text = ' '.join(inputs)
words = all_text.split()
len(words), len(all_text), len(inputs)
all_text[:200]
words[:10]
"""
Explanation: Data preprocessing
End of explanation
"""
from collections import Counter

counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}

inputs_ints = []
for each in inputs:
    inputs_ints.append([vocab_to_int[word] for word in each.split()])
"""
Explanation: Encoding the words
End of explanation
"""
labels_set = set(labels)
int_to_label = dict(enumerate(labels_set))
label_to_int = {l: i for i, l in enumerate(labels_set)}

seq_len = 100
labels = [[label_to_int[L]] * seq_len for L in labels]
labels = np.array(labels, dtype=np.int32)

# test encoded labels
test_index = 6
test_label = int_to_label[test_index]
assert(test_index == label_to_int[test_label])
assert(len(inputs) == len(labels))
labels[:1], labels.shape
"""
Explanation: Encoding the labels
End of explanation
"""
# Filter out inputs with 0 length
inputs_ints = [each for each in inputs_ints if len(each) > 0]

seq_len = 100
features = np.zeros((len(inputs), seq_len), dtype=int)
for i, row in enumerate(inputs_ints):
    features[i, -len(row):] = np.array(row)[:seq_len]
features.shape
features[0]
"""
Explanation: Now, create an array features that contains the data we'll pass to the network. The data should come from inputs_ints, since we want to feed integers to the network. Each row should be seq_len = 100 elements long. For inputs shorter than 100 words, left-pad with 0s. That is, if the input is ['best', 'movie', 'ever'], or [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For inputs longer than 100 words, use only the first 100 words as the feature vector. 
End of explanation """ split_frac= 0.8 split_idx = int(len(features)*0.8) train_x, val_x = features[:split_idx], features[split_idx:] train_y, val_y = labels[:split_idx], labels[split_idx:] test_idx = int(len(val_x)*0.5) val_x, test_x = val_x[:test_idx], val_x[test_idx:] val_y, test_y = val_y[:test_idx], val_y[test_idx:] print("\t\t\tFeature Shapes:") print("Train set: \t\t{}".format(train_x.shape), "\nValidation set: \t{}".format(val_x.shape), "\nTest set: \t\t{}".format(test_x.shape)) """ Explanation: Training, Validation, Test End of explanation """ def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 tf.reset_default_graph() # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32, name='keep_prob') # One-hot encoding the input and target characters x_one_hot = tf.one_hot(inputs, num_classes) y_one_hot = tf.one_hot(targets, num_classes) ### Build the RNN layers # Use a basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) initial_state = cell.zero_state(batch_size, tf.float32) ### Run the data through the RNN layers # This makes a list where each element is on step in the sequence rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)] # Run each sequence step through the RNN and collect the outputs outputs, state = 
tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state) final_state = state # Reshape output so it's a bunch of rows, one output row for each step for each batch seq_output = tf.concat(outputs, axis=1) output = tf.reshape(seq_output, [-1, lstm_size]) # Now connect the RNN outputs to a softmax layer with tf.variable_scope('softmax'): softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(num_classes)) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and batch logits = tf.matmul(output, softmax_w) + softmax_b # Use softmax to get the probabilities for predicted characters preds = tf.nn.softmax(logits, name='predictions') # Reshape the targets to match the logits y_reshaped = tf.reshape(y_one_hot, [-1, num_classes]) loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped) cost = tf.reduce_mean(loss) # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) # Export the nodes # NOTE: I'm using a namedtuple here because I think they are cool export_nodes = ['inputs', 'targets', 'initial_state', 'final_state', 'keep_prob', 'cost', 'preds', 'optimizer'] Graph = namedtuple('Graph', export_nodes) local_dict = locals() graph = Graph(*[local_dict[each] for each in export_nodes]) return graph """ Explanation: Building the graph End of explanation """ def get_batches(x, y, batch_size=100): n_batches = len(x)//batch_size x, y = x[:n_batches*batch_size], y[:n_batches*batch_size] for ii in range(0, len(x), batch_size): yield x[ii:ii+batch_size], y[ii:ii+batch_size] test = get_batches(train_x, train_y, batch_size=100) first = list(test)[0] len(first[0]) batch_size = 100 num_steps 
= 100 lstm_size = 256 num_layers = 2 learning_rate = 0.001 keep_prob = 0.5 """ Explanation: Batching End of explanation """ epochs = 20 # Save every N iterations save_every_n = 15 num_classes = len(labels) model = build_rnn(num_classes, batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') n_batches = int(train_x.shape[1]/num_steps) iterations = n_batches * epochs for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for b, (x, y) in enumerate(get_batches(train_x, train_y, num_steps), 1): iteration = e*n_batches + b start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: keep_prob, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], feed_dict=feed) loss += batch_loss end = time.time() print('Epoch {}/{} '.format(e+1, epochs), 'Iteration {}/{}'.format(iteration, iterations), 'Training loss: {:.4f}'.format(loss/b), '{:.4f} sec/batch'.format((end-start))) if (iteration%save_every_n == 0) or (iteration == iterations): # Check performance, notice dropout has been set to 1 val_loss = [] new_state = sess.run(model.initial_state) for x, y in get_batches(val_x, val_y, num_steps): feed = {model.inputs: x, model.targets: y, model.keep_prob: 1., model.initial_state: new_state} batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed) val_loss.append(batch_loss) print('Validation loss:', np.mean(val_loss), 'Saving checkpoint!') saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss))) """ Explanation: Training End of explanation """
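The get_batches generator defined above truncates the data to a whole number of batches and then yields equal-sized slices. Here is the same pattern as a self-contained snippet on toy arrays, which makes the drop-the-remainder behaviour easy to see:

```python
import numpy as np

def get_batches(x, y, batch_size=100):
    """Yield (x, y) slices of equal size, dropping any remainder."""
    n_batches = len(x) // batch_size
    x, y = x[:n_batches * batch_size], y[:n_batches * batch_size]
    for ii in range(0, len(x), batch_size):
        yield x[ii:ii + batch_size], y[ii:ii + batch_size]

x = np.arange(25).reshape(25, 1)
y = np.arange(25)
batches = list(get_batches(x, y, batch_size=10))
# 25 samples with batch_size=10 -> 2 full batches; the last 5 samples are dropped
```

Dropping the remainder keeps every batch the same shape, which is what the fixed-size placeholders in the graph above require.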
Kaggle/learntools
notebooks/feature_engineering_new/raw/tut4.ipynb
apache-2.0
#$HIDE_INPUT$ import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from sklearn.cluster import KMeans plt.style.use("seaborn-whitegrid") plt.rc("figure", autolayout=True) plt.rc( "axes", labelweight="bold", labelsize="large", titleweight="bold", titlesize=14, titlepad=10, ) df = pd.read_csv("../input/fe-course-data/housing.csv") X = df.loc[:, ["MedInc", "Latitude", "Longitude"]] X.head() """ Explanation: Introduction This lesson and the next make use of what are known as unsupervised learning algorithms. Unsupervised algorithms don't make use of a target; instead, their purpose is to learn some property of the data, to represent the structure of the features in a certain way. In the context of feature engineering for prediction, you could think of an unsupervised algorithm as a "feature discovery" technique. Clustering simply means the assigning of data points to groups based upon how similar the points are to each other. A clustering algorithm makes "birds of a feather flock together," so to speak. When used for feature engineering, we could attempt to discover groups of customers representing a market segment, for instance, or geographic areas that share similar weather patterns. Adding a feature of cluster labels can help machine learning models untangle complicated relationships of space or proximity. Cluster Labels as a Feature Applied to a single real-valued feature, clustering acts like a traditional "binning" or "discretization" transform. On multiple features, it's like "multi-dimensional binning" (sometimes called vector quantization). <figure style="padding: 1em;"> <img src="https://i.imgur.com/sr3pdYI.png" width=800, alt=""> <figcaption style="textalign: center; font-style: italic"><center><strong>Left:</strong> Clustering a single feature. <strong>Right:</strong> Clustering across two features. 
</center></figcaption> </figure>
Added to a dataframe, a feature of cluster labels might look like this:
| Longitude | Latitude | Cluster |
|-----------|----------|---------|
| -93.619   | 42.054   | 3       |
| -93.619   | 42.053   | 3       |
| -93.638   | 42.060   | 1       |
| -93.602   | 41.988   | 0       |
It's important to remember that this Cluster feature is categorical. Here, it's shown with a label encoding (that is, as a sequence of integers) as a typical clustering algorithm would produce; depending on your model, a one-hot encoding may be more appropriate.
The motivating idea for adding cluster labels is that the clusters will break up complicated relationships across features into simpler chunks. Our model can then just learn the simpler chunks one-by-one instead of having to learn the complicated whole all at once. It's a "divide and conquer" strategy.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/rraXFed.png" width=800, alt="">
<figcaption style="textalign: center; font-style: italic"><center>Clustering the YearBuilt feature helps this linear model learn its relationship to SalePrice.
</center></figcaption>
</figure>
The figure shows how clustering can improve a simple linear model. The curved relationship between the YearBuilt and SalePrice is too complicated for this kind of model -- it underfits. On smaller chunks, however, the relationship is almost linear, and that the model can learn easily.
k-Means Clustering
There are a great many clustering algorithms. They differ primarily in how they measure "similarity" or "proximity" and in what kinds of features they work with. The algorithm we'll use, k-means, is intuitive and easy to apply in a feature engineering context. Depending on your application another algorithm might be more appropriate.
K-means clustering measures similarity using ordinary straight-line distance (Euclidean distance, in other words). It creates clusters by placing a number of points, called centroids, inside the feature-space. 
Each point in the dataset is assigned to the cluster of whichever centroid it's closest to. The "k" in "k-means" is how many centroids (that is, clusters) it creates. You define the k yourself.
You could imagine each centroid capturing points through a sequence of radiating circles. When sets of circles from competing centroids overlap they form a line. The result is what's called a Voronoi tessellation. The tessellation shows you to what clusters future data will be assigned; the tessellation is essentially what k-means learns from its training data.
The clustering on the Ames dataset above is a k-means clustering. Here is the same figure with the tessellation and centroids shown.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/KSoLd3o.jpg.png" width=450, alt="">
<figcaption style="textalign: center; font-style: italic"><center>K-means clustering creates a Voronoi tessellation of the feature space.
</center></figcaption>
</figure>
Let's review how the k-means algorithm learns the clusters and what that means for feature engineering. We'll focus on three parameters from scikit-learn's implementation: n_clusters, max_iter, and n_init.
It's a simple two-step process. The algorithm starts by randomly initializing some predefined number (n_clusters) of centroids. It then iterates over these two operations:
1. assign points to the nearest cluster centroid
2. move each centroid to minimize the distance to its points
It iterates over these two steps until the centroids aren't moving anymore, or until some maximum number of iterations has passed (max_iter).
It often happens that the initial random position of the centroids ends in a poor clustering. For this reason the algorithm repeats a number of times (n_init) and returns the clustering that has the least total distance between each point and its centroid, the optimal clustering.
The animation below shows the algorithm in action. 
It illustrates the dependence of the result on the initial centroids and the importance of iterating until convergence.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/tBkCqXJ.gif" width=550, alt="">
<figcaption style="textalign: center; font-style: italic"><center>The K-means clustering algorithm on Airbnb rentals in NYC.
</center></figcaption>
</figure>
You may need to increase the max_iter for a large number of clusters or n_init for a complex dataset. Ordinarily though the only parameter you'll need to choose yourself is n_clusters (k, that is). The best partitioning for a set of features depends on the model you're using and what you're trying to predict, so it's best to tune it like any hyperparameter (through cross-validation, say).
Example - California Housing
As spatial features, California Housing's 'Latitude' and 'Longitude' make natural candidates for k-means clustering. In this example we'll cluster these with 'MedInc' (median income) to create economic segments in different regions of California.
End of explanation
"""
# Create cluster feature
kmeans = KMeans(n_clusters=6)
X["Cluster"] = kmeans.fit_predict(X)
X["Cluster"] = X["Cluster"].astype("category")

X.head()
"""
Explanation: Since k-means clustering is sensitive to scale, it can be a good idea to rescale or normalize data with extreme values. Our features are already roughly on the same scale, so we'll leave them as-is.
End of explanation
"""
sns.relplot(
    x="Longitude", y="Latitude", hue="Cluster", data=X, height=6,
);
"""
Explanation: Now let's look at a couple of plots to see how effective this was. First, a scatter plot that shows the geographic distribution of the clusters. It seems like the algorithm has created separate segments for higher-income areas on the coasts.
End of explanation
"""
X["MedHouseVal"] = df["MedHouseVal"]
sns.catplot(x="MedHouseVal", y="Cluster", data=X, kind="boxen", height=6);
"""
Explanation: The target in this dataset is MedHouseVal (median house value). 
These box-plots show the distribution of the target within each cluster. If the clustering is informative, these distributions should, for the most part, separate across MedHouseVal, which is indeed what we see. End of explanation """
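The assign/update loop described in the k-means section above can be written in a few lines of NumPy. This is a toy sketch of the algorithm (deterministic farthest-point initialisation, a fixed number of iterations, no n_init restarts), not scikit-learn's implementation:

```python
import numpy as np

def kmeans_toy(X, k, n_iter=20):
    # farthest-point initialisation: deterministic and keeps centroids spread out
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(X[:, None, :] - np.array(centroids)[None, :, :],
                           axis=2).min(axis=1)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(n_iter):
        # step 1: assign every point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 2: move each centroid to the mean of the points assigned to it
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # blob around (0, 0)
               rng.normal(5.0, 0.3, (50, 2))])  # blob around (5, 5)
labels, centroids = kmeans_toy(X, k=2)
```

On two well-separated blobs like these, the loop converges in a handful of iterations and recovers one cluster per blob, which is the behaviour the Voronoi-tessellation figures above depict.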
henry-ngo/VIP
docs/source/tutorials/06_fm_disk.ipynb
mit
%matplotlib inline from hciplot import plot_frames, plot_cubes from matplotlib.pyplot import * from matplotlib import pyplot as plt import numpy as np from packaging import version """ Explanation: 6. Forward modeling of disks Author: Julien Milli Last update: 23/03/2022 Suitable for VIP v1.0.0 onwards. Table of contents 6.1. Introduction 6.1.1. Overview 6.1.2. Parametrisation of the density distribution of dust 6.2. Examples of disks 6.2.1. Symmetric pole-on disk 6.2.2. Inclined symmetric disk 6.2.3. Inclined symmetric disk with anisotropy of scattering 6.2.3.1. Simple Henyey-Greenstein phase function 6.2.3.2. Double Henyey-Greenstein phase function 6.2.3.3. Custom phase function 6.2.3.4. Representing a polarised phase function 6.2.4. Asymmetric disk 6.3. Forward modeling of disks This tutorial shows: how to generate different models of synthetic (debris) disks; how to inject model disks in ADI cubes, for forward modeling. Let's first import a couple of external packages needed in this tutorial: End of explanation """ import vip_hci as vip vvip = vip.__version__ print("VIP version: ", vvip) if version.parse(vvip) < version.parse("1.0.0"): msg = "Please upgrade your version of VIP" msg+= "It should be 1.0.0 or above to run this notebook." raise ValueError(msg) elif version.parse(vvip) <= version.parse("1.0.3"): from vip_hci.conf import time_ini, timing from vip_hci.medsub import median_sub from vip_hci.metrics import cube_inject_fakedisk, ScatteredLightDisk else: from vip_hci.config import time_ini, timing from vip_hci.fm import cube_inject_fakedisk, ScatteredLightDisk from vip_hci.psfsub import median_sub # common to all versions: from vip_hci.var import create_synth_psf """ Explanation: In the following box we import all the VIP routines that will be used in this tutorial. The path to some routines has changed between versions 1.0.3 and 1.1.0, which saw a major revamp of the modular architecture, hence the if statements. 
End of explanation
"""
pixel_scale=0.01225 # pixel scale in arcsec/px
dstar= 80 # distance to the star in pc
nx = 200 # number of pixels of your image in X
ny = 200 # number of pixels of your image in Y
""" Explanation: 6.1. Introduction
6.1.1. Overview
The functions implemented in vip_hci for disks are located in vip.metrics.scattered_light_disk. It contains the definition of a class called ScatteredLightDisk which can produce a synthetic image of a disk, and also utility functions to create cubes of images where a synthetic disk has been injected at specific position angles to simulate a real observation. Currently there is no utility function to do forward modelling and try to find the best disk matching a given dataset, as this is usually specific to each dataset.
Keep in mind that ScatteredLightDisk is only a ray-tracing approach and does not contain any physics in it (no radiative transfer, no particle cross-section). It assumes the particle number density around a star follows the mathematical prescription given in section 1.2 and uses a unity scattering cross-section for all particles (no particle size distribution and no size-dependent cross-section), so the flux of the synthetic disk cannot be converted into physical units (e.g. Jy).
6.1.2. Parametrisation of the density distribution of dust
The density distribution of dust particles is parametrized in a cylindrical coordinate system $\rho(r,\theta,z)$ and is described by the equation:
$\rho(r,\theta,z) = \rho_0 \times \left( \frac{2}{\left( \frac{r}{R(\theta)} \right)^{-2a_{in}} + \left( \frac{r}{R(\theta)} \right)^{-2a_{out}} }\right)^{1/2} \times e^{\left[ -\left( \frac{z}{H(r) }\right)^\gamma \right]}$
where $R(\theta)$ is called the reference radius. It is simply the radius of the disk $a$ if the dust distribution is centrally symmetric (no eccentricity).
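The prescription is purely analytical, so it can also be evaluated outside VIP as a sanity check. Below is a standalone numpy sketch of the axisymmetric ($e=0$) case, with illustrative parameter values; this is not the VIP implementation:

```python
import numpy as np

def dust_density(r, z, rho0=1.0, a=70.0, ain=12, aout=-12,
                 ksi0=3.0, gamma=2.0, beta=1.0):
    """Evaluate the axisymmetric (e=0) density prescription above."""
    # Two-power-law radial factor: ~r**ain inside the ring, ~r**aout outside.
    radial = np.sqrt(2.0 / ((r / a) ** (-2 * ain) + (r / a) ** (-2 * aout)))
    # Flared scale height H(r) = ksi0 * (r/a)**beta.
    H = ksi0 * (r / a) ** beta
    vertical = np.exp(-(np.abs(z) / H) ** gamma)
    return rho0 * radial * vertical

# At the reference radius and in the midplane, both factors equal 1:
assert np.isclose(dust_density(70.0, 0.0), 1.0)
# One scale height above the midplane, the density drops by exp(-1) for gamma=2:
assert np.isclose(dust_density(70.0, 3.0), np.exp(-1.0))
# Far inside the ring the density falls off steeply:
assert dust_density(35.0, 0.0) < 1e-3
```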
If the disk is eccentric, then $R(\theta)$ depends on $\theta$ and is given by the equation of an ellipse in polar coordinates: $R(\theta) = \frac{a(1-e^2)}{1+e \cos{\theta}}$
This equation for $\rho(r,\theta,z)$ is the product of 3 terms:
1. a constant $\rho_0$ which is the surface density of the dust in the midplane, at the reference radius $R(\theta)$.
2. the density distribution in the midplane $z=0$ defined as $\left( \frac{2}{\left( \frac{r}{R(\theta)} \right)^{-2a_{in}} + \left( \frac{r}{R(\theta)} \right)^{-2a_{out}} }\right)^{1/2}$. Such a function ensures that when $r\ll R(\theta)$ then the term is $\propto r^{\alpha_{in}}$ (and we typically use $\alpha_{in}>0$) and when $r\gg R(\theta)$ then the term is $\propto r^{\alpha_{out}}$ (and we typically use $\alpha_{out}<0$).
3. the vertical profile $e^{\left[ -\left( \frac{z}{H(r) }\right)^\gamma \right]}$ is parametrized by an exponential decay of exponent $\gamma$ and scale height $H(r)$. If $\gamma=2$, the vertical profile is Gaussian (and $H(r)$ is proportional to the $\sigma$ or the FWHM of the Gaussian, but not strictly equal to either of them).
The scale height is further defined as $H(r)=\xi_0 \times \left( \frac{r}{R(\theta)} \right)^\beta$ where $\xi_0$ is the reference scale height at the reference radius $R(\theta)$ and $\beta$ is the flaring coefficient ($\beta=1$ means a linear flaring: the scale height increases linearly with radius).
Go to the top
6.2. Examples of disks
Let's assume we want to create a synthetic image of 200px, containing a disk around a star located at 80 pc, observed with SPHERE/IRDIS (pixel scale 12.25 mas).
End of explanation
"""
itilt = 0. # inclination of your disk in degrees
a = 70. # semi-major axis of the disk in au
ksi0 = 3. # reference scale height at the semi-major axis of the disk
gamma = 2. # exponent of the vertical exponential decay
alpha_in = 12
alpha_out = -12
beta = 1
""" Explanation: 6.2.1. Symmetric pole-on disk
For a pole-on disk, $i_\text{tilt}=0^\circ$.
For a symmetric disk, $e=0$ and the position angle (pa) and argument of pericenter ($\omega$) have no impact. We choose a semi-major axis of 70 a.u., a vertical profile with a Gaussian distribution ($\gamma=2$), a reference scale height of 3 a.u. at the semi-major axis of the disk, and inner and outer exponents $\alpha_{in}=12$ and $\alpha_{out}=-12$.
End of explanation
"""
fake_disk1 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws','ain':alpha_in,'aout':alpha_out,
                                              'a':a,'e':0.0,'ksi0':ksi0,'gamma':gamma,'beta':beta},
                                spf_dico={'name':'HG', 'g':0., 'polar':False},
                                flux_max=1.)
""" Explanation: Then create your disk model
End of explanation
"""
fake_disk1_map = fake_disk1.compute_scattered_light()
plot_frames(fake_disk1_map, grid=False, size_factor=6)
""" Explanation: The method compute_scattered_light returns the synthetic image of the disk.
End of explanation
"""
fake_disk1.print_info()
""" Explanation: You can print some info on the geometrical properties of the model, the dust distribution parameters, the numerical integration parameters and the phase function parameters (detailed later). This can be useful because, in addition to reminding you of all the parameters used in the model, it also computes some properties such as the radial FWHM of the disk.
End of explanation
"""
fake_disk1 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':-3,
                                              'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
                                spf_dico={'name':'HG', 'g':0., 'polar':False},
                                flux_max=1.)
fake_disk1_map = fake_disk1.compute_scattered_light()
plot_frames(fake_disk1_map, grid=False, size_factor=6)
fake_disk1.print_info()
""" Explanation: As a side note, if $\alpha_{in} \ne \alpha_{out}$, then the peak surface density of the disk is not located at the reference radius $a$.
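This offset can be checked numerically: setting the derivative of the midplane two-power-law profile to zero gives $r_{peak} = a \left( -\alpha_{out}/\alpha_{in} \right)^{1/(2(\alpha_{out}-\alpha_{in}))}$. A standalone sketch for the values of the cell above ($a=70$, $\alpha_{in}=12$, $\alpha_{out}=-3$):

```python
import numpy as np

a, ain, aout = 70.0, 12.0, -3.0   # same parameters as the cell above

# Sample the midplane profile on a fine radial grid and locate its maximum.
r = np.linspace(30.0, 150.0, 200001)
profile = np.sqrt(2.0 / ((r / a) ** (-2 * ain) + (r / a) ** (-2 * aout)))
r_peak_numeric = r[np.argmax(profile)]

# Analytic location of the maximum of the two-power-law profile.
r_peak_analytic = a * (-aout / ain) ** (1.0 / (2 * (aout - ain)))

print(r_peak_numeric, r_peak_analytic)  # both around 73.3 au, not 70 au
```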
End of explanation
"""
itilt = 76 # inclination of your disk in degrees
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
                                              'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
                                spf_dico={'name':'HG', 'g':0., 'polar':False},
                                flux_max=1.)
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
""" Explanation: The position angle of the disk is 0 (i.e. north). The phase function is isotropic here ($g=0$); the reason why the north and south ansae appear brighter is that the disk is not flat: it has a certain scale height and there is more dust intercepted along the line of sight in the ansae.
Note that we decided here to normalize the disk to a maximum brightness of 1, using the option flux_max=1.. This is not the only option available and you can decide to parametrize $\rho_0$ instead, using the keyword dens_at_r0 which directly specifies $\rho_0$.
End of explanation
"""
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
                                              'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta,
                                              'dens_at_r0':1e6},
                                spf_dico={'name':'HG', 'g':0, 'polar':False})
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
""" Explanation: Warning !
The code does not handle perfectly edge-on disks. There is a maximum inclination close to edge-on beyond which it cannot create an image. In practice this is not a limitation, as the convolution by the PSF always makes it impossible to disentangle between a close-to-edge-on disk and a perfectly edge-on disk.
End of explanation
"""
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=90, omega=0, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
                                              'a':a, 'e':0.0, 'ksi0':2, 'gamma':gamma, 'beta':beta,
                                              'dens_at_r0':1e6},
                                spf_dico={'name':'HG', 'g':0, 'polar':False})
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
""" Explanation: Go to the top
6.2.3. Inclined symmetric disk with anisotropy of scattering
6.2.3.1. Simple Henyey-Greenstein phase function
We parametrize the phase function by a Henyey-Greenstein phase function, with an asymmetry parameter g. An isotropic phase function has $g=0$, forward scattering is represented by $0<g\leq1$ and backward scattering is represented by $-1\leq g<0$.
End of explanation
"""
g=0.4
fake_disk3 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
                                              'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
                                spf_dico={'name':'HG', 'g':g, 'polar':False},
                                flux_max=1.)
""" Explanation: You can plot what the phase function looks like:
End of explanation
"""
fake_disk3.phase_function.plot_phase_function()
fake_disk3_map = fake_disk3.compute_scattered_light()
plot_frames(fake_disk3_map, grid=False, size_factor=6)
""" Explanation: The forward side is brighter.
Go to the top
6.2.3.2.
Double Henyey-Greenstein phase function
A double Henyey-Greenstein (HG) phase function is simply a linear combination of 2 simple HG phase functions. It is therefore parametrized by $g_1$ and $g_2$, the 2 asymmetry parameters of each HG, and the weight (between 0 and 1) of the first HG phase function. Typically a double HG is used to represent a combination of forward scattering ($g_1>0$) and backward scattering ($g_2<0$).
End of explanation
"""
g1=0.6
g2=-0.4
weight1=0.7
fake_disk4 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
                                              'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
                                spf_dico={'name':'DoubleHG', 'g':[g1,g2], 'weight':weight1, 'polar':False},
                                flux_max=1)
fake_disk4.phase_function.plot_phase_function()
fake_disk4_map = fake_disk4.compute_scattered_light()
plot_frames(fake_disk4_map, grid=False, size_factor=6)
""" Explanation: Go to the top
6.2.3.3. Custom phase function
In some cases, a HG phase function (simple or double) cannot represent well the behaviour of the dust. The code is modular and you can propose new prescriptions for the phase functions if you need, or you can also create a custom phase function.
End of explanation
"""
kind='cubic' #kind must be either "linear", "nearest", "zero", "slinear", "quadratic" or "cubic"
spf_dico = dict({'phi':[0, 60, 90, 120, 180],
                 'spf':[1, 0.4, 0.3, 0.3, 0.5],
                 'name':'interpolated', 'polar':False, 'kind':kind})
fake_disk5 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
                                              'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
                                spf_dico=spf_dico, flux_max=1)
fake_disk5.phase_function.plot_phase_function()
fake_disk5_map = fake_disk5.compute_scattered_light()
plot_frames(fake_disk5_map, grid=False, size_factor=6)
""" Explanation: Go to the top
6.2.3.4.
Representing a polarised phase function If you are trying to reproduce the polarised intensity of a disk (for instance Stokes $Q_\phi$ image), you may want to add on top of the scattering phase function, a modulation representing the degree of linear polarisation. This can be done by setting the polar keyword to True and in this case, the model assumes a Rayleigh-like degree of linear polarisation parametrized by $(1-(\cos \phi)^2) / (1+(\cos \phi)^2)$ where $\phi$ is the scattering angle. End of explanation """ e=0.4 # eccentricity in degrees omega=30 # argument of pericenter fake_disk7 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar, itilt=0, omega=omega, pxInArcsec=pixel_scale, pa=0, density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out, 'a':a, 'e':e, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta}, spf_dico={'name':'HG', 'g':g, 'polar':False}, flux_max=1.) fake_disk7_map = fake_disk7.compute_scattered_light() plot_frames(fake_disk7_map, grid=False, size_factor=6) """ Explanation: You can combine this Rayleigh-like degree of linear polarisation with any phase function (simple HG, double HG or custom type). Go to the top 6.2.4. Asymmetric disk Be careful here ! There is no consensus in the community on how to parametrize an eccentric dust distribution, so keep in mind that the convention described in section 1.2 is only one way to do so, but does not mean the dust density distribution in an eccentric disk follows this prescription. For instance, around the pericenter particle velocities are higher and one expects more collision to happen which can create an overdensity of particles compared to other regions of the disk. Conversely, particles stay longer at the apocenter because of Kepler's third law, which means that one could also expect a higher density at apocenter... All these physical phenomena are not described in this model. 
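With this convention, the only geometry the eccentricity introduces is the $R(\theta)$ ellipse of section 1.2; its pericenter and apocenter distances can be checked directly. A standalone sketch using the $a=70$ au, $e=0.4$ values of the next cell:

```python
import numpy as np

def ref_radius(theta, a=70.0, e=0.4):
    """R(theta) = a(1-e^2) / (1 + e*cos(theta)), theta measured from pericenter."""
    return a * (1 - e**2) / (1 + e * np.cos(theta))

# Pericenter (theta = 0) and apocenter (theta = pi):
r_peri = ref_radius(0.0)     # a(1-e) = 42 au
r_apo = ref_radius(np.pi)    # a(1+e) = 98 au
print(r_peri, r_apo)
```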
Let's start with a pole-on disk to be insensitive to phase function effects.
End of explanation
"""
e=0.4 # eccentricity
omega=30 # argument of pericenter in degrees
fake_disk7 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=0, omega=omega, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
                                              'a':a, 'e':e, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
                                spf_dico={'name':'HG', 'g':g, 'polar':False},
                                flux_max=1.)
fake_disk7_map = fake_disk7.compute_scattered_light()
plot_frames(fake_disk7_map, grid=False, size_factor=6)
""" Explanation: The brightness asymmetry here is entirely due to the fact that the brightness at one point in the disk is inversely proportional to the squared distance to the star. Once you incline the disk, you start seeing the competing effect of the phase function and eccentricity.
End of explanation
"""
fake_disk7 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
                                itilt=itilt, omega=omega, pxInArcsec=pixel_scale, pa=0,
                                density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
                                              'a':a, 'e':e, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
                                spf_dico={'name':'HG', 'g':g, 'polar':False},
                                flux_max=1.)
fake_disk7_map = fake_disk7.compute_scattered_light()
plot_frames(fake_disk7_map, grid=False, size_factor=6)
""" Explanation: Go to the top
6.3. Forward modeling of disks
Let's start from our inclined simple HG symmetric disk fake_disk3_map and assume we observe this disk as part of an ADI sequence of 30 images.
End of explanation
"""
plot_frames(fake_disk3_map, grid=False, size_factor=6)
nframes = 30
# we assume we have 60º of parallactic angle rotation centered around meridian
parang_amplitude = 60
derotation_angles = np.linspace(-parang_amplitude/2, parang_amplitude/2, nframes)
start = time_ini()
cube_fake_disk3 = cube_inject_fakedisk(fake_disk3_map, -derotation_angles, imlib='vip-fft')
timing(start)
""" Explanation: cube_fake_disk3 is now a cube of 30 frames, where the disk has been injected at the correct position angle.
End of explanation
"""
cube_fake_disk3.shape
""" Explanation: Let's visualize the first, middle and last image of the cube.
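The injection/derotation bookkeeping (inject at $-$parang, derotate by $+$parang before combining) can be illustrated with a pure-numpy toy that uses exact 90° rotations, so no interpolation is involved. This is only an illustration of the ADI principle, not the VIP implementation:

```python
import numpy as np

# A 'sky' frame with a single off-axis source.
base = np.zeros((10, 10))
base[2, 3] = 1.0

# Simulate 4 ADI frames: the field rotates by 90 degrees between frames.
cube = np.stack([np.rot90(base, k) for k in range(4)])

# Median-ADI: subtract the temporal median (the static stellar halo model)...
residuals = cube - np.median(cube, axis=0)
# ...then derotate each frame back and combine.
derotated = np.stack([np.rot90(residuals[k], -k) for k in range(4)])
recovered = derotated.mean(axis=0)

# With enough field rotation, the source moves between frames, so the temporal
# median is zero along its track and the source survives the subtraction intact.
assert np.allclose(recovered, base)
```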
End of explanation """ cadi_fake_disk3 = median_sub(cube_fake_disk3, derotation_angles, imlib='vip-fft') plot_frames((fake_disk3_map, cadi_fake_disk3), grid=False, size_factor=4) """ Explanation: We can now process this cube with median-ADI for instance: End of explanation """ psf = create_synth_psf(model='gauss', shape=(11, 11), fwhm=4.) plot_frames(psf, grid=True, size_factor=2) """ Explanation: The example above shows a typical bias that can be induced by ADI on extended disk signals (Milli et al. 2012). So far we have not dealt with convolution effects. In practice the image of a disk is convolved by the instrumental PSF. Let's assume here an instrument having a gaussian PSF with FWHM = 4px, and create a synthetic PSF using the create_synth_psf function: End of explanation """ cube_fake_disk3_convolved = cube_inject_fakedisk(fake_disk3_map, -derotation_angles, psf=psf, imlib='vip-fft') cadi_fake_disk3_convolved = median_sub(cube_fake_disk3_convolved, derotation_angles, imlib='vip-fft') plot_frames((fake_disk3_map, cadi_fake_disk3, cadi_fake_disk3_convolved), grid=False, size_factor=4) """ Explanation: Then we inject the disk in the cube and convolve each frame by the PSF End of explanation """
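As a final note on convolution: with a normalised PSF kernel, the total disk flux is conserved (as long as the signal stays away from the image edges). A standalone numpy sketch of what a Gaussian-PSF convolution does — not the actual create_synth_psf/VIP code:

```python
import numpy as np

def gaussian_psf(shape=(11, 11), fwhm=4.0):
    """Normalised 2-D Gaussian kernel (sum = 1)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def convolve_same(image, kernel):
    """Direct 'same'-size convolution with zero-padded borders."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    flipped = kernel[::-1, ::-1]   # convolution = correlation with flipped kernel
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

image = np.zeros((41, 41))
image[20, 20] = 1.0                      # point source far from the edges
blurred = convolve_same(image, gaussian_psf())
# Total flux is conserved: the kernel is normalised and no flux leaks off the edges.
assert np.isclose(blurred.sum(), image.sum())
```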
liufuyang/deep_learning_tutorial
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
mit
import numpy as np import h5py import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' %load_ext autoreload %autoreload 2 np.random.seed(1) """ Explanation: Convolutional Neural Networks: Step by Step Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. Notation: - Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters. Superscript $(i)$ denotes an object from the $i^{th}$ example. Example: $x^{(i)}$ is the $i^{th}$ training example input. Lowerscript $i$ denotes the $i^{th}$ entry of a vector. Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. We assume that you are already familiar with numpy and/or have completed the previous courses of the specialization. Let's get started! 1 - Packages Let's first import all the packages that you will need during this assignment. - numpy is the fundamental package for scientific computing with Python. - matplotlib is a library to plot graphs in Python. - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. 
End of explanation """ # GRADED FUNCTION: zero_pad def zero_pad(X, pad): """ Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, as illustrated in Figure 1. Argument: X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images pad -- integer, amount of padding around each image on vertical and horizontal dimensions Returns: X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C) """ ### START CODE HERE ### (≈ 1 line) X_pad = np.pad(X, ((0,0), (pad, pad), (pad, pad), (0,0)), 'constant', constant_values = 0) ### END CODE HERE ### return X_pad np.random.seed(1) x = np.random.randn(4, 3, 3, 2) x_pad = zero_pad(x, 2) print ("x.shape =", x.shape) print ("x_pad.shape =", x_pad.shape) print ("x[1,1] =", x[1,1]) print ("x_pad[1,1] =", x_pad[1,1]) fig, axarr = plt.subplots(1, 2) axarr[0].set_title('x') axarr[0].imshow(x[0,:,:,0]) axarr[1].set_title('x_pad') axarr[1].imshow(x_pad[0,:,:,0]) """ Explanation: 2 - Outline of the Assignment You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed: Convolution functions, including: Zero Padding Convolve window Convolution forward Convolution backward (optional) Pooling functions, including: Pooling forward Create mask Distribute value Pooling backward (optional) This notebook will ask you to implement these functions from scratch in numpy. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model: <img src="images/model.png" style="width:800px;height:300px;"> Note that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. 
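The forward/cache/backward pattern described above can be sketched on a toy fully-connected layer (illustrative only, not part of the graded functions):

```python
import numpy as np

def linear_forward(A_prev, W, b):
    Z = W @ A_prev + b
    cache = (A_prev, W, b)      # stored during the forward pass...
    return Z, cache

def linear_backward(dZ, cache):
    A_prev, W, b = cache        # ...and retrieved here instead of recomputed
    dW = dZ @ A_prev.T
    db = dZ.sum(axis=1, keepdims=True)
    dA_prev = W.T @ dZ
    return dA_prev, dW, db

np.random.seed(1)
A_prev = np.random.randn(3, 5)
W, b = np.random.randn(2, 3), np.random.randn(2, 1)
Z, cache = linear_forward(A_prev, W, b)
dA_prev, dW, db = linear_backward(np.ones_like(Z), cache)
# Each gradient has the same shape as the quantity it differentiates with respect to.
assert dW.shape == W.shape and db.shape == b.shape and dA_prev.shape == A_prev.shape
```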
3 - Convolutional Neural Networks
Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below.
<img src="images/conv_nn.png" style="width:350px;height:200px;">
In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself.
3.1 - Zero-Padding
Zero-padding adds zeros around the border of an image:
<img src="images/PAD.png" style="width:600px;height:400px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : Zero-Padding<br> Image (3 channels, RGB) with a padding of 2. </center></caption>
The main benefits of padding are the following:
It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.
It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.
Exercise: Implement the following function, which pads all the images of a batch of examples X with zeros. Use np.pad.
Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with pad = 1 for the 2nd dimension, pad = 3 for the 4th dimension and pad = 0 for the rest, you would do: python a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..)) End of explanation """ # GRADED FUNCTION: conv_single_step def conv_single_step(a_slice_prev, W, b): """ Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation of the previous layer. Arguments: a_slice_prev -- slice of input data of shape (f, f, n_C_prev) W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev) b -- Bias parameters contained in a window - matrix of shape (1, 1, 1) Returns: Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data """ ### START CODE HERE ### (≈ 2 lines of code) # Element-wise product between a_slice and W. Do not add the bias yet. s = np.multiply(a_slice_prev, W) # Sum over all entries of the volume s. Z = np.sum(s) # Add bias b to Z. Cast b to a float() so that Z results in a scalar value. Z = Z + float(b) ### END CODE HERE ### return Z np.random.seed(1) a_slice_prev = np.random.randn(4, 4, 3) W = np.random.randn(4, 4, 3) b = np.random.randn(1, 1, 1) Z = conv_single_step(a_slice_prev, W, b) print("Z =", Z) """ Explanation: Expected Output: <table> <tr> <td> **x.shape**: </td> <td> (4, 3, 3, 2) </td> </tr> <tr> <td> **x_pad.shape**: </td> <td> (4, 7, 7, 2) </td> </tr> <tr> <td> **x[1,1]**: </td> <td> [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] </td> </tr> <tr> <td> **x_pad[1,1]**: </td> <td> [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]] </td> </tr> </table> 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. 
This will be used to build a convolutional unit, which: Takes an input volume Applies a filter at every position of the input Outputs another volume (usually of different size) <img src="images/Convolution_schematic.gif" style="width:500px;height:300px;"> <caption><center> <u> <font color='purple'> Figure 2 </u><font color='purple'> : Convolution operation<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption> In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. Exercise: Implement conv_single_step(). Hint. 
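As a tiny worked example of the arithmetic involved (shown in 2-D for brevity; the graded function does the same over a 3-D slice):

```python
import numpy as np

a_slice = np.array([[1., 0.],
                    [2., 3.]])
W = np.array([[1., -1.],
              [0.5, 1.]])
b = 2.0

# Element-wise product, sum over all entries, then add the bias:
Z = np.sum(a_slice * W) + b   # 1*1 + 0*(-1) + 2*0.5 + 3*1 + 2 = 7.0
assert Z == 7.0
```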
End of explanation """ # GRADED FUNCTION: conv_forward def conv_forward(A_prev, W, b, hparameters): """ Implements the forward propagation for a convolution function Arguments: A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) W -- Weights, numpy array of shape (f, f, n_C_prev, n_C) b -- Biases, numpy array of shape (1, 1, 1, n_C) hparameters -- python dictionary containing "stride" and "pad" Returns: Z -- conv output, numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward() function """ ### START CODE HERE ### # Retrieve dimensions from A_prev's shape (≈1 line) (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve dimensions from W's shape (≈1 line) (f, f, n_C_prev, n_C) = W.shape # Retrieve information from "hparameters" (≈2 lines) stride = hparameters['stride'] pad = hparameters['pad'] # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines) n_H = int((n_H_prev - f + 2*pad)/stride) + 1 n_W = int((n_W_prev - f + 2*pad)/stride) + 1 # Initialize the output volume Z with zeros. (≈1 line) Z = np.zeros((m, n_H, n_W, n_C)) # Create A_prev_pad by padding A_prev A_prev_pad = zero_pad(A_prev, pad) for i in range(m): # loop over the batch of training examples a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation for h in range(n_H): # loop over vertical axis of the output volume for w in range(n_W): # loop over horizontal axis of the output volume for c in range(n_C): # loop over channels (= #filters) of the output volume # Find the corners of the current "slice" (≈4 lines) vert_start = h*stride vert_end = vert_start+f horiz_start = w*stride horiz_end = horiz_start+f # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). 
(≈1 line) a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end,:] # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line) Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c]) ### END CODE HERE ### # Making sure your output shape is correct assert(Z.shape == (m, n_H, n_W, n_C)) # Save information in "cache" for the backprop cache = (A_prev, W, b, hparameters) return Z, cache np.random.seed(1) A_prev = np.random.randn(10,4,4,3) W = np.random.randn(2,2,3,8) b = np.random.randn(1,1,1,8) hparameters = {"pad" : 2, "stride": 2} Z, cache_conv = conv_forward(A_prev, W, b, hparameters) print("Z's mean =", np.mean(Z)) print("Z[3,2,1] =", Z[3,2,1]) print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3]) """ Explanation: Expected Output: <table> <tr> <td> **Z** </td> <td> -6.99908945068 </td> </tr> </table> 3.3 - Convolutional Neural Networks - Forward pass In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: <center> <video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls> </video> </center> Exercise: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. Hint: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do: python a_slice_prev = a_prev[0:2,0:2,:] This will be useful when you will define a_slice_prev below, using the start/end indexes you will define. 2. 
To define a_slice you will need to first define its corners vert_start, vert_end, horiz_start and horiz_end. This figure may be helpful for you to find how each of the corner can be defined using h, w, f and s in the code below. <img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;"> <caption><center> <u> <font color='purple'> Figure 3 </u><font color='purple'> : Definition of a slice using vertical and horizontal start/end (with a 2x2 filter) <br> This figure shows only a single channel. </center></caption> Reminder: The formulas relating the output shape of the convolution to the input shape is: $$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ $$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ $$ n_C = \text{number of filters used in the convolution}$$ For this exercise, we won't worry about vectorization, and will just implement everything with for-loops. End of explanation """ # GRADED FUNCTION: pool_forward def pool_forward(A_prev, hparameters, mode = "max"): """ Implements the forward pass of the pooling layer Arguments: A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) hparameters -- python dictionary containing "f" and "stride" mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C) cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters """ # Retrieve dimensions from the input shape (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve hyperparameters from "hparameters" f = hparameters["f"] stride = hparameters["stride"] # Define the dimensions of the output n_H = int(1 + (n_H_prev - f) / stride) n_W = int(1 + (n_W_prev - f) / stride) n_C = n_C_prev # Initialize output matrix A A = np.zeros((m, n_H, n_W, n_C)) ### START CODE HERE ### for i in range(m): # loop over the training examples for h in 
range(n_H): # loop on the vertical axis of the output volume for w in range(n_W): # loop on the horizontal axis of the output volume for c in range (n_C): # loop over the channels of the output volume # Find the corners of the current "slice" (≈4 lines) vert_start = h*stride vert_end = vert_start+f horiz_start = w*stride horiz_end = horiz_start+f # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line) a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] # Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean. if mode == "max": A[i, h, w, c] = np.max(a_prev_slice) elif mode == "average": A[i, h, w, c] = np.mean(a_prev_slice) ### END CODE HERE ### # Store the input and hparameters in "cache" for pool_backward() cache = (A_prev, hparameters) # Making sure your output shape is correct assert(A.shape == (m, n_H, n_W, n_C)) return A, cache np.random.seed(1) A_prev = np.random.randn(2, 4, 4, 3) hparameters = {"stride" : 2, "f": 3} A, cache = pool_forward(A_prev, hparameters) print("mode = max") print("A =", A) print() A, cache = pool_forward(A_prev, hparameters, mode = "average") print("mode = average") print("A =", A) """ Explanation: Expected Output: <table> <tr> <td> **Z's mean** </td> <td> 0.0489952035289 </td> </tr> <tr> <td> **Z[3,2,1]** </td> <td> [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] </td> </tr> <tr> <td> **cache_conv[0][1][2][3]** </td> <td> [-0.20075807 0.18656139 0.41005165] </td> </tr> </table> Finally, CONV layer should also contain an activation, in which case we would add the following line of code: ```python Convolve the window to get back one output neuron Z[i, h, w, c] = ... Apply activation A[i, h, w, c] = activation(Z[i, h, w, c]) ``` You don't need to do it here. 4 - Pooling layer The pooling (POOL) layer reduces the height and width of the input. 
It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input. The two types of pooling layers are: Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output. Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output. <table> <td> <img src="images/max_pool1.png" style="width:500px;height:300px;"> <td> <td> <img src="images/a_pool.png" style="width:500px;height:300px;"> <td> </table> These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the $f \times f$ window you would compute a max or average over. 4.1 - Forward Pooling Now, you are going to implement MAX-POOL and AVG-POOL, in the same function. Exercise: Implement the forward pass of the pooling layer. Follow the hints in the comments below. Reminder: As there's no padding, the formulas binding the output shape of the pooling to the input shape are: $$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$ $$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$ $$ n_C = n_{C_{prev}}$$ End of explanation """ def conv_backward(dZ, cache): """ Implement the backward propagation for a convolution function Arguments: dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward(), output of conv_forward() Returns: dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev), numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) dW -- gradient of the cost with respect to the weights of the conv layer (W) numpy array of shape (f, f, n_C_prev, n_C) db -- gradient of the cost with respect to the biases of the conv layer (b) numpy array of shape (1, 1, 1, n_C) """ ### START CODE HERE ### # Retrieve
information from "cache" (A_prev, W, b, hparameters) = None # Retrieve dimensions from A_prev's shape (m, n_H_prev, n_W_prev, n_C_prev) = None # Retrieve dimensions from W's shape (f, f, n_C_prev, n_C) = None # Retrieve information from "hparameters" stride = None pad = None # Retrieve dimensions from dZ's shape (m, n_H, n_W, n_C) = None # Initialize dA_prev, dW, db with the correct shapes dA_prev = None dW = None db = None # Pad A_prev and dA_prev A_prev_pad = None dA_prev_pad = None for i in range(None): # loop over the training examples # select ith training example from A_prev_pad and dA_prev_pad a_prev_pad = None da_prev_pad = None for h in range(None): # loop over vertical axis of the output volume for w in range(None): # loop over horizontal axis of the output volume for c in range(None): # loop over the channels of the output volume # Find the corners of the current "slice" vert_start = None vert_end = None horiz_start = None horiz_end = None # Use the corners to define the slice from a_prev_pad a_slice = None # Update gradients for the window and the filter's parameters using the code formulas given above da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += None dW[:,:,:,c] += None db[:,:,:,c] += None # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :]) dA_prev[i, :, :, :] = None ### END CODE HERE ### # Making sure your output shape is correct assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev)) return dA_prev, dW, db np.random.seed(1) dA, dW, db = conv_backward(Z, cache_conv) print("dA_mean =", np.mean(dA)) print("dW_mean =", np.mean(dW)) print("db_mean =", np.mean(db)) """ Explanation: Expected Output: <table> <tr> <td> A = </td> <td> [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] </td> </tr> <tr> <td> A = </td> <td> [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] </td> </tr> </table> Congratulations!
You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED) In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When you implemented a simple (fully connected) neural network in an earlier course, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly present them below. 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA: This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example: $$ dA += \sum_{h=0}^{n_H} \sum_{w=0}^{n_W} W_c \times dZ_{hw} \tag{1}$$ Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice.
Therefore, when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into: python da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c] 5.1.2 - Computing dW: This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss: $$ dW_c += \sum_{h=0}^{n_H} \sum_{w=0}^{n_W} a_{slice} \times dZ_{hw} \tag{2}$$ Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{hw}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. In code, inside the appropriate for-loops, this formula translates into: python dW[:,:,:,c] += a_slice * dZ[i, h, w, c] 5.1.3 - Computing db: This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$: $$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$ As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into: python db[:,:,:,c] += dZ[i, h, w, c] Exercise: Implement the conv_backward function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above. End of explanation """ def create_mask_from_window(x): """ Creates a mask from an input matrix x, to identify the max entry of x. Arguments: x -- Array of shape (f, f) Returns: mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
""" ### START CODE HERE ### (≈1 line) mask = None ### END CODE HERE ### return mask np.random.seed(1) x = np.random.randn(2,3) mask = create_mask_from_window(x) print('x = ', x) print("mask = ", mask) """ Explanation: Expected Output: <table> <tr> <td> **dA_mean** </td> <td> 1.45243777754 </td> </tr> <tr> <td> **dW_mean** </td> <td> 1.72699145831 </td> </tr> <tr> <td> **db_mean** </td> <td> 7.83923256462 </td> </tr> </table> 5.2 Pooling layer - backward pass Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called create_mask_from_window() which does the following: $$ X = \begin{bmatrix} 1 & 3 \\ 4 & 2 \end{bmatrix} \quad \rightarrow \quad M = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\tag{4}$$ As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. Exercise: Implement create_mask_from_window(). This function will be helpful for pooling backward. Hints: - np.max() may be helpful. It computes the maximum of an array. - If you have a matrix X and a scalar x: A = (X == x) will return a matrix A of the same size as X such that: A[i,j] = True if X[i,j] = x A[i,j] = False if X[i,j] != x - Here, you don't need to consider cases where there are several maxima in a matrix.
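As a quick ungraded illustration of equation (4) and the hint above (separate from the graded function), the `X == np.max(X)` comparison already produces the mask:

```python
import numpy as np

# Worked instance of equation (4): the mask marks where X attains its maximum.
X = np.array([[1, 3],
              [4, 2]])
M = (X == np.max(X))  # boolean mask, True only at the position of the max (4)
print(M)
# [[False False]
#  [ True False]]
```

The same one-liner is all the graded function needs to return.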
End of explanation """ def distribute_value(dz, shape): """ Distributes the input value in the matrix of dimension shape Arguments: dz -- input scalar shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz Returns: a -- Array of size (n_H, n_W) for which we distributed the value of dz """ ### START CODE HERE ### # Retrieve dimensions from shape (≈1 line) (n_H, n_W) = None # Compute the value to distribute on the matrix (≈1 line) average = None # Create a matrix where every entry is the "average" value (≈1 line) a = None ### END CODE HERE ### return a a = distribute_value(2, (2,2)) print('distributed value =', a) """ Explanation: Expected Output: <table> <tr> <td> **x =** </td> <td> [[ 1.62434536 -0.61175641 -0.52817175] <br> [-1.07296862 0.86540763 -2.3015387 ]] </td> </tr> <tr> <td> **mask =** </td> <td> [[ True False False] <br> [False False False]] </td> </tr> </table> Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this. 
For example, if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ = \begin{bmatrix} 1/4 & 1/4 \\ 1/4 & 1/4 \end{bmatrix}\tag{5}$$ This implies that each position in the $dZ$ matrix contributes equally to the output because in the forward pass, we took an average. Exercise: Implement the function below to equally distribute a value dz through a matrix of dimension shape. Hint End of explanation """ def pool_backward(dA, cache, mode = "max"): """ Implements the backward pass of the pooling layer Arguments: dA -- gradient of cost with respect to the output of the pooling layer, same shape as A cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev """ ### START CODE HERE ### # Retrieve information from cache (≈1 line) (A_prev, hparameters) = None # Retrieve hyperparameters from "hparameters" (≈2 lines) stride = None f = None # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines) m, n_H_prev, n_W_prev, n_C_prev = None m, n_H, n_W, n_C = None # Initialize dA_prev with zeros (≈1 line) dA_prev = None for i in range(None): # loop over the training examples # select training example from A_prev (≈1 line) a_prev = None for h in range(None): # loop on the vertical axis for w in range(None): # loop on the horizontal axis for c in range(None): # loop over the channels (depth) # Find the corners of the current "slice" (≈4 lines) vert_start = None vert_end = None horiz_start = None horiz_end = None # Compute the backward propagation in both modes.
if mode == "max": # Use the corners and "c" to define the current slice from a_prev (≈1 line) a_prev_slice = None # Create the mask from a_prev_slice (≈1 line) mask = None # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None elif mode == "average": # Get the value a from dA (≈1 line) da = None # Define the shape of the filter as fxf (≈1 line) shape = None # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None ### END CODE ### # Making sure your output shape is correct assert(dA_prev.shape == A_prev.shape) return dA_prev np.random.seed(1) A_prev = np.random.randn(5, 5, 3, 2) hparameters = {"stride" : 1, "f": 2} A, cache = pool_forward(A_prev, hparameters) dA = np.random.randn(5, 4, 2, 2) dA_prev = pool_backward(dA, cache, mode = "max") print("mode = max") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1]) print() dA_prev = pool_backward(dA, cache, mode = "average") print("mode = average") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1]) """ Explanation: Expected Output: <table> <tr> <td> distributed_value = </td> <td> [[ 0.5 0.5] <br\> [ 0.5 0.5]] </td> </tr> </table> 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer. Exercise: Implement the pool_backward function in both modes ("max" and "average"). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an if/elif statement to see if the mode is equal to 'max' or 'average'. If it is equal to 'average' you should use the distribute_value() function you implemented above to create a matrix of the same shape as a_slice. 
Otherwise, the mode is equal to 'max', and you will create a mask with create_mask_from_window() and multiply it by the corresponding value of dA. End of explanation """
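As an ungraded sanity check on the two building blocks described above (a standalone numpy sketch, independent of the graded functions), distributing a gradient dz evenly over an f×f window and summing it back should recover dz, and a max mask should route the entire gradient to the argmax position:

```python
import numpy as np

# Average pooling backward: each entry of the window receives dz / (f*f),
# so the distributed gradient sums back to dz.
dz = 2.0
f = 2
distributed = np.full((f, f), dz / (f * f))
assert np.isclose(distributed.sum(), dz)

# Max pooling backward: the mask sends the whole gradient to the argmax only.
window = np.array([[1., 3.],
                   [4., 2.]])
mask = (window == window.max())
grad = mask * dz  # gradient w.r.t. the input window
assert grad[1, 0] == dz and grad.sum() == dz
print(distributed)  # [[0.5 0.5]
                    #  [0.5 0.5]]
print(grad)         # [[0. 0.]
                    #  [2. 0.]]
```

In both modes the total gradient flowing into the window equals the gradient flowing out of it, which is exactly what the accumulation lines in pool_backward implement.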
ES-DOC/esdoc-jupyterhub
notebooks/pcmdi/cmip6/models/sandbox-1/landice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-1', 'landice') """ Explanation: ES-DOC CMIP6 Model Properties - Landice MIP Era: CMIP6 Institute: PCMDI Source ID: SANDBOX-1 Topic: Landice Sub-Topics: Glaciers, Ice. Properties: 30 (21 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:36 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. 
Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.ice_albedo') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "function of ice age" # "function of ice density" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Ice Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify how ice albedo is modelled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. Atmospheric Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Oceanic Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the ocean and ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ice velocity" # "ice thickness" # "ice temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. 
Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which variables are prognostically calculated in the ice model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Grid Land ice grid 3.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.grid.adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.2. Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is an adaptive grid being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.base_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Base Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The base resolution (in metres), before any adaptation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.resolution_limit') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Resolution Limit Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If an adaptive grid is being used, what is the limit of the resolution (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.projection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.5. Projection Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The projection of the land ice grid (e.g. albers_equal_area) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Glaciers Land ice glaciers 4.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of glaciers in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of glaciers, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 4.3. Dynamic Areal Extent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does the model include a dynamic glacial extent? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the ice sheet and ice shelf in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.grounding_line_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "grounding line prescribed" # "flux prescribed (Schoof)" # "fixed grid size" # "moving grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.2. Grounding Line Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_sheet') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.3. 
Ice Sheet Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice sheets simulated? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_shelf') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Ice Shelf Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice shelves simulated? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over bedrock End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. 
Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of calving/melting from the ice shelf front 8.1. Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of calving from the front of the ice shelf End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Melting Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of melting from the front of the ice shelf End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of ice sheet and ice shelf dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.approximation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SIA" # "SAA" # "full stokes" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.2. Approximation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Approximation type used in modelling ice dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 9.3. Adaptive Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there an adaptive time scheme for the ice scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.4. Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep. End of explanation """
plipp/informatica-pfr-2017
nbs/2/3-OPTIONAL-More-Pandas-Exercises.ipynb
mit
import pandas as pd import numpy as np def top15_countries(): pass # TODO Top15 = top15_countries() Top15 """ Explanation: [Optional] More Pandas Exercises Original Source: Coursera Introduction to Data Science in Python: Assignment 3 Additional Requirements bash pip install xlrd Exercise 1 Load the energy data from the file Energy Indicators.xls, which is a list of indicators of energy supply and renewable electricity production from the United Nations for the year 2013, and should be put into a DataFrame with the variable name of energy. Keep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unnecessary, so you should get rid of them, and you should change the column labels so that the columns are: ['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable'] Convert Energy Supply to gigajoules (there are 1,000,000 gigajoules in a petajoule). For all countries which have missing data (e.g. data with "...") make sure this is reflected as np.NaN values. Rename the following list of countries (for use in later questions): "Republic of Korea": "South Korea", "United States of America": "United States", "United Kingdom of Great Britain and Northern Ireland": "United Kingdom", "China, Hong Kong Special Administrative Region": "Hong Kong" There are also several countries with numbers and/or parentheses in their name. Be sure to remove these, e.g. 'Bolivia (Plurinational State of)' should be 'Bolivia', 'Switzerland17' should be 'Switzerland'. <br> Next, load the GDP data from the file world_bank.csv, which is a csv containing countries' GDP from 1960 to 2015 from World Bank. Call this DataFrame GDP.
Make sure to skip the header, and rename the following list of countries: "Korea, Rep.": "South Korea", "Iran, Islamic Rep.": "Iran", "Hong Kong SAR, China": "Hong Kong" <br> Finally, load the Scimago Journal and Country Rank data for Energy Engineering and Power Technology from the file scimagojr-3.xlsx, which ranks countries based on their journal contributions in the aforementioned area. Call this DataFrame ScimEn. Join the three datasets: GDP, Energy, and ScimEn into a new dataset (using the intersection of country names). Use only the last 10 years (2006-2015) of GDP data and only the top 15 countries by Scimagojr 'Rank' (Rank 1 through 15). The index of this DataFrame should be the name of the country, and the columns should be ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations', 'Citations per document', 'H index', 'Energy Supply', 'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']. This function should return a DataFrame with 20 columns and 15 entries. End of explanation """ %%HTML <svg width="800" height="300"> <circle cx="150" cy="180" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="blue" /> <circle cx="200" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="red" /> <circle cx="100" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="green" /> <line x1="150" y1="125" x2="300" y2="150" stroke="black" stroke-width="2" fill="black" stroke-dasharray="5,3"/> <text x="300" y="165" font-family="Verdana" font-size="35">Everything but this!</text> </svg> def missed_entries(): pass # TODO missed_entries() """ Explanation: Exercise 2 The previous question joined three datasets then reduced this to just the top 15 entries. When you joined the datasets, but before you reduced this to the top 15 items, how many entries did you lose? This function should return a single number.
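As a hint for the counting logic (illustrated here on two synthetic toy frames, not the real datasets), the number of entries lost when reducing a union of the datasets to their intersection is the difference between an outer join and an inner join:

```python
import pandas as pd

# Toy example: 'A' and 'D' each appear in only one dataset,
# so they survive an outer join but not an inner join.
energy = pd.DataFrame({'Country': ['A', 'B', 'C'], 'Energy': [1, 2, 3]})
gdp    = pd.DataFrame({'Country': ['B', 'C', 'D'], 'GDP':    [4, 5, 6]})

outer = pd.merge(energy, gdp, on='Country', how='outer')
inner = pd.merge(energy, gdp, on='Country', how='inner')
lost = len(outer) - len(inner)
print(lost)  # 2
```

With three datasets the same idea applies, chaining two merges for the outer union and two for the inner intersection.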
End of explanation """ def average_gdp(Top15): pass # TODO average_gdp(Top15) """ Explanation: <br> Answer the following questions in the context of only the top 15 countries by Scimagojr Rank (aka Top15) Exercise 3 What is the average GDP over the last 10 years for each country? (exclude missing values from this calculation.) This function should return a Series named avgGDP with 15 countries and their average GDP sorted in descending order. End of explanation """ def delta_gdp(Top15): pass # TODO delta_gdp(Top15) """ Explanation: Exercise 4 By how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP? This function should return a single number. End of explanation """ def mean_energy_supply_per_capita(Top15): pass # TODO mean_energy_supply_per_capita(Top15) """ Explanation: Exercise 5 What is the mean Energy Supply per Capita? This function should return a single number. End of explanation """ def country_pct_with_max_renewals(Top15): pass # TODO country_pct_with_max_renewals(Top15) """ Explanation: Exercise 6 What country has the maximum % Renewable and what is the percentage? This function should return a tuple with the name of the country and the percentage. End of explanation """ def ratio_self_to_total_citation(Top15): pass # TODO ratio_self_to_total_citation(Top15) """ Explanation: Exercise 7 Create a new column that is the ratio of Self-Citations to Total Citations. What is the maximum value for this new column, and what country has the highest ratio? This function should return a tuple with the name of the country and the ratio. End of explanation """ def third_most_populated(Top15): pass # TODO third_most_populated(Top15) """ Explanation: Exercise 8 Create a column that estimates the population using Energy Supply and Energy Supply per capita. What is the third most populous country according to this estimate? This function should return a single string value. 
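With made-up numbers, the Exercise 8 estimate can be sketched in plain Python (the real solution would do the same division on the Top15 columns):

```python
# Toy sketch of the population estimate: population ≈ energy supply / supply per capita.
# The values below are invented for illustration only.
supply = {"A": 1000.0, "B": 900.0, "C": 400.0, "D": 100.0}
per_capita = {"A": 10.0, "B": 3.0, "C": 1.0, "D": 2.0}

pop_est = {c: supply[c] / per_capita[c] for c in supply}
ranked = sorted(pop_est, key=pop_est.get, reverse=True)  # largest first
third = ranked[2]                                        # third most populous
```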
End of explanation """ def corr_citation_energy_supply(Top15): pass # TODO corr_citation_energy_supply(Top15) def plot9(): import matplotlib as plt %matplotlib inline Top15 = top15_countries() Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita'] Top15['Citable docs per Capita'] = Top15['Citable documents'] / Top15['PopEst'] Top15.plot(x='Citable docs per Capita', y='Energy Supply per Capita', kind='scatter', xlim=[0, 0.0006]) # TODO # plot9() """ Explanation: Exercise 9 Create a column that estimates the number of citable documents per person. What is the correlation between the number of citable documents per capita and the energy supply per capita? Use the .corr() method, (Pearson's correlation). This function should return a single number. (Optional: Use the built-in function plot9() to visualize the relationship between Energy Supply per Capita vs. Citable docs per Capita) End of explanation """ def calc_high_renew(Top15): pass # TODO calc_high_renew(Top15) """ Explanation: Exercise 10 Create a new column with a 1 if the country's % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country's % Renewable value is below the median. This function should return a series named HighRenew whose index is the country name sorted in ascending order of rank. 
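A plain-Python sketch of the Exercise 10 median indicator (made-up % Renewable values, not the real dataset):

```python
from statistics import median

# Invented % Renewable values for four countries.
renewables = {"A": 70.0, "B": 15.0, "C": 30.0, "D": 5.0}

threshold = median(renewables.values())                # 22.5 for these numbers
high_renew = {c: int(v >= threshold) for c, v in renewables.items()}
```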
End of explanation
"""
ContinentDict = {'China':'Asia',
                 'United States':'North America',
                 'Japan':'Asia',
                 'United Kingdom':'Europe',
                 'Russian Federation':'Europe',
                 'Canada':'North America',
                 'Germany':'Europe',
                 'India':'Asia',
                 'France':'Europe',
                 'South Korea':'Asia',
                 'Italy':'Europe',
                 'Spain':'Europe',
                 'Iran':'Asia',
                 'Australia':'Australia',
                 'Brazil':'South America'}

def stats_for_pop_est(Top15):
    pass # TODO

stats_for_pop_est(Top15)
"""
Explanation: Exercise 11
Use the following dictionary to group the Countries by Continent, then create a dataframe that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each country.
python
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
This function should return a DataFrame with index named Continent ['Asia', 'Australia', 'Europe', 'North America', 'South America'] and columns ['size', 'sum', 'mean', 'std']
End of explanation
"""
def count_by_renewable(Top15):
    pass # TODO

count_by_renewable(Top15)
"""
Explanation: Exercise 12
Cut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?
This function should return a Series with a MultiIndex of Continent, then the bins for % Renewable. Do not include groups with no countries.
End of explanation
"""
def formatted_pop_est(Top15):
    pass # TODO

formatted_pop_est(Top15)
"""
Explanation: Exercise 13
Convert the Population Estimate series to a string with thousands separator (using commas). Do not round the results.
e.g.
317615384.61538464 -> 317,615,384.61538464 This function should return a Series PopEst whose index is the country name and whose values are the population estimate string. End of explanation """ def plot_14(Top15): import matplotlib as plt %matplotlib inline ax = Top15.plot(x='Rank', y='% Renewable', kind='scatter', c=['#e41a1c','#377eb8','#e41a1c','#4daf4a','#4daf4a','#377eb8','#4daf4a','#e41a1c', '#4daf4a','#e41a1c','#4daf4a','#4daf4a','#e41a1c','#dede00','#ff7f00'], xticks=range(1,16), s=6*Top15['2014']/10**10, alpha=.75, figsize=[16,6]); for i, txt in enumerate(Top15.index): ax.annotate(txt, [Top15['Rank'][i], Top15['% Renewable'][i]], ha='center') print("This is an example of a visualization that can be created to help understand the data. \ This is a bubble chart showing % Renewable vs. Rank. The size of the bubble corresponds to the countries' \ 2014 GDP, and the color corresponds to the continent.") # TODO """ Explanation: Exercise 14 Use the built in function plot_14() to see an example visualization. End of explanation """
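The thousands-separator formatting from Exercise 13 can be done without rounding by formatting only the integer part of the number's string representation (a hypothetical helper, not the reference solution):

```python
# Insert thousands separators without rounding: format the integer part with
# ',' and keep the decimal part verbatim.
def thousands_sep(number_str):
    int_part, _, frac_part = number_str.partition(".")
    formatted = f"{int(int_part):,d}"
    return f"{formatted}.{frac_part}" if frac_part else formatted
```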
yingchi/fastai-notes
deeplearning1/nbs/lesson5_yingchi.ipynb
apache-2.0
from keras.datasets import imdb
idx = imdb.get_word_index()
type(idx)

# Let's look at the word list
"""
sorted(iterable, *, key=None, reverse=False): built-in function;
Return a new sorted list from the items in iterable.
"""
idx_list = sorted(idx, key=idx.get)
print(idx_list[:5])

from itertools import islice

def take(n, iterable):
    "Return first n items of the iterable as a list"
    return list(islice(iterable, n))

print(take(5, idx.items()))
"""
Explanation: Setup data
We're going to look at the IMDB dataset, which contains movie reviews from IMDB, along with their sentiment. Keras comes with some helpers for this dataset.
End of explanation
"""
idx2word = {v:k for k, v in idx.items()}
"""
Explanation: Create a mapping dict from id to word
End of explanation
"""
path = get_file('imdb_full.pkl',
                origin='https://s3.amazonaws.com/text-datasets/imdb_full.pkl',
                md5_hash='d091312047c43cf9e4e38fef92437263')
"""
get_file(fname, origin,...): keras function;
downloads a file from a URL if it is not already in the cache.
""" f = open(path, 'rb') (x_train, labels_train), (x_test, labels_test) = pickle.load(f) print(type(x_train)) print(len(x_train)) # print the 1st review ', '.join(map(str, x_train[0])) # Let's map the idx to words ' '.join(idx2word[o] for o in x_train[0]) """ Explanation: Get the reviews file End of explanation """ labels_train[:10] """ Explanation: The labels are 1 for positive, 0 for negative End of explanation """ vocab_size = 5000 trn = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_train] test = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_test] """ Explanation: Reduce vocab size by setting rare words to max index End of explanation """ lens = np.array(list(map(len, trn))) (lens.max(), lens.min(), lens.mean()) """ Explanation: Let's look at the distribution of the sentences length End of explanation """ seq_len = 500 """ keras.preprocessing.sequence.pad_sequences(sequences, maxlen=None, dtype='int32', padding='pre', truncating='pre', value=0.) Transform a list of num_samples sequences (lists of scalars) into a 2D Numpy array of shape (num_samples, num_timesteps). num_timesteps is either the maxlen argument if provided, or the length of the longest sequence otherwise. Sequences that are shorter than num_timesteps are padded with value at the end. Sequences longer than num_timesteps are truncated so that it fits the desired length. Position where padding or truncation happens is determined by padding or truncating, respectively. 
""" trn = sequence.pad_sequences(trn, maxlen=seq_len, value=0) test = sequence.pad_sequences(test, maxlen=seq_len, value=0) trn.shape """ Explanation: Pad or truncate each sentence to make consistent length of 500 End of explanation """ model = Sequential([ Embedding(vocab_size, 32, input_length=seq_len), Flatten(), Dense(100, activation='relu'), Dropout(0.7), Dense(1, activation='sigmoid') ]) model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy']) model.summary() model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64) """ Explanation: Create simple models Single hidden layer NN The simplest model that tends to give reasonable results is a single hidden layer net. Note that we can't expect to get any useful results by feeding word ids directly into a neural net - so intead we use an embedding to replace them with a vector of 32 floating numbers for each word in the vocab Note here that the final sigmoid function is the same as softmax becuase out output is binary. Whenver we use 'binary_crossentryop', we use 'sigmoid' as activation End of explanation """ conv1 = Sequential([ Embedding(vocab_size, 32, input_length=seq_len, dropout=0.2), Dropout(0.2), # look at 5 words at a time Convolution1D(64, 5, border_mode='same', activation='relu'), Dropout(0.2), MaxPooling1D(), Flatten(), Dense(100, activation='relu'), Dropout(0.7), Dense(1, activation='sigmoid') ]) conv1.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy']) conv1.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64) conv1.summary() """ Explanation: Single conv layer with max pooling A CNN is likely to work better, since it's designed to take advantage of ordered data. 
We'll need to use a 1D CNN since a sequence of words is 1D
End of explanation
"""
conv1.save_weights(model_path + 'conv1.h5')

conv1.load_weights(model_path + 'conv1.h5')
"""
Explanation: $10304 = 5 \times 32 \times 64 + 64$
Each filter is a 5x32 matrix
End of explanation
"""
def get_glove_dataset(dataset):
    """Download the requested glove dataset from files.fast.ai
    and return a location that can be passed to load_vectors.
    """
    # see wordvectors.ipynb for info on how these files were
    # generated from the original glove data.
    md5sums = {'6B.50d': '8e1557d1228decbda7db6dfd81cd9909',
               '6B.100d': 'c92dbbeacde2b0384a43014885a60b2c',
               '6B.200d': 'af271b46c04b0b2e41a84d8cd806178d',
               '6B.300d': '30290210376887dcc6d0a5a6374d8255'}
    glove_path = os.path.abspath('data/glove/results')
    %mkdir -p $glove_path
    return get_file(dataset,
                    'http://files.fast.ai/models/glove/' + dataset + '.tgz',
                    cache_subdir=glove_path,
                    md5_hash=md5sums.get(dataset, None),
                    untar=True)

def load_vectors(loc):
    return (load_array(loc+'.dat'),
        pickle.load(open(loc+'_words.pkl','rb'), encoding='latin1'),
        pickle.load(open(loc+'_idx.pkl','rb'), encoding='latin1'))

vecs, words, wordidx = load_vectors(get_glove_dataset('6B.50d'))
"""
Explanation: Pre-trained vectors
You may want to look at wordvectors.ipynb before moving on. In this section, we replicate the previous CNN, but using pre-trained embeddings.
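Conceptually, an Embedding layer initialised with such pre-trained weights is just a row lookup into a (vocab_size, n_factors) matrix — a numpy sketch with toy sizes and random values:

```python
import numpy as np

# Each word id is a row index into the embedding matrix; a sequence of ids
# therefore becomes a (seq_len, n_fact) array of vectors.
rng = np.random.default_rng(0)
vocab_size, n_fact = 10, 4
emb = rng.normal(size=(vocab_size, n_fact))

word_ids = np.array([3, 1, 4, 1, 5, 9])
vectors = emb[word_ids]          # shape: (6, 4)
```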
You should always use pre-trained vectors
End of explanation
"""
def create_emb(vecs, vocab_size):
    n_fact = vecs.shape[1]
    emb = np.zeros((vocab_size, n_fact))

    for i in range(1, len(emb)):
        word = idx2word[i]
        if word and re.match(r"^[a-zA-Z0-9\-]*$", word):
            src_idx = wordidx[word]
            emb[i] = vecs[src_idx]
        else:
            # If we can't find the word in glove, randomly initialize
            emb[i] = normal(scale=0.6, size=(n_fact,))

    # This is our "rare word" id - we want to randomly initialize
    emb[-1] = normal(scale=0.6, size=(n_fact,))
    emb/=3
    return emb

emb = create_emb(vecs, vocab_size)
"""
Explanation: The glove word ids and imdb word ids use different indexes. So we create a simple function that creates an embedding matrix using the indexes from imdb, and the embeddings from glove (where they exist).
End of explanation
"""
model = Sequential([
    Embedding(vocab_size, 50, input_length=seq_len, dropout=0.2, weights=[emb]),
    Dropout(0.25),
    Convolution1D(64, 5, border_mode='same', activation='relu'),
    Dropout(0.25),
    MaxPooling1D(),
    Flatten(),
    Dense(100, activation='relu'),
    Dropout(0.7),
    Dense(1, activation='sigmoid')])
model.layers[0].trainable=False  # the Embedding layer is layers[0]
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])

model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
"""
Explanation: We pass our embedding matrix to the Embedding constructor, and set it to non-trainable.
End of explanation
"""
model.layers[0].trainable = True
model.optimizer.lr = 1e-4

model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)

model.save_weights(model_path+'glove50.h5')
"""
Explanation: We already have beaten our previous model!
But let's fine-tune the embedding weights - especially since the words we couldn't find in glove just have random embeddings
End of explanation
"""
from keras.layers import Merge
"""
Explanation: Multi-size CNN
End of explanation
"""
graph_in = Input((vocab_size, 50))
convs = []
for fsz in range(3, 6):
    x = Convolution1D(64, fsz, border_mode='same', activation='relu')(graph_in)
    x = MaxPooling1D()(x)
    x = Flatten()(x)
    convs.append(x)
out = Merge(mode='concat')(convs)
graph = Model(graph_in, out)

emb = create_emb(vecs, vocab_size)
"""
Explanation: How can we further improve? Well, let's try not just using one size of convolution, but a few sizes of convolution layers. We use the functional API to create multiple conv layers of different sizes, and then concatenate them
End of explanation
"""
model = Sequential([
    Embedding(vocab_size, 50, input_length=seq_len, dropout=0.2, weights=[emb]),
    Dropout (0.2),
    graph,
    Dropout (0.5),
    Dense (100, activation="relu"),
    Dropout (0.7),
    Dense (1, activation='sigmoid')
])
model.layers[0].trainable=False  # freeze the pre-trained Embedding layer
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])

model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)

model.layers[0].trainable=True
model.optimizer.lr=1e-5

model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)
"""
Explanation: We then replace the conv/max-pool layer in our original CNN with the concatenated conv layers
End of explanation
"""
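As a numpy sketch of the multi-size idea (toy sizes, random weights; 'valid' windows instead of keras' border_mode='same'): filters of width 3, 4 and 5 each produce a feature map, which is max-pooled over time and then concatenated:

```python
import numpy as np

# x plays the role of one embedded review: (seq_len, n_fact).
rng = np.random.default_rng(1)
seq_len, n_fact, n_filters = 20, 8, 16
x = rng.normal(size=(seq_len, n_fact))

features = []
for fsz in (3, 4, 5):
    w = rng.normal(size=(fsz, n_fact, n_filters))
    # 'valid' 1D convolution: one dot product per window position
    fmap = np.stack([np.tensordot(x[i:i + fsz], w, axes=([0, 1], [0, 1]))
                     for i in range(seq_len - fsz + 1)])
    features.append(fmap.max(axis=0))   # global max-pool over time
out = np.concatenate(features)          # shape: (3 * n_filters,)
```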
jorisvandenbossche/DS-python-data-analysis
notebooks/pandas_09_combining_datasets.ipynb
bsd-3-clause
import pandas as pd """ Explanation: <p><font size="6"><b>Pandas: Combining datasets Part I - concat</b></font></p> © 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons End of explanation """ # redefining the example objects # series population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3, 'United Kingdom': 64.9, 'Netherlands': 16.9}) # dataframe data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'], 'population': [11.3, 64.3, 81.3, 16.9, 64.9], 'area': [30510, 671308, 357050, 41526, 244820], 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']} countries = pd.DataFrame(data) countries """ Explanation: Combining data is essential functionality in a data analysis workflow. Data is distributed in multiple files, different information needs to be merged, new data is calculated, .. and needs to be added together. Pandas provides various facilities for easily combining together Series and DataFrame objects End of explanation """ pop_density = countries['population']*1e6 / countries['area'] pop_density countries['pop_density'] = pop_density countries """ Explanation: Adding columns As we already have seen before, adding a single column is very easy: End of explanation """ countries["country"].str.split(" ", expand=True) """ Explanation: Adding multiple columns at once is also possible. 
For example, the following method gives us a DataFrame of two columns: End of explanation """ countries[['first', 'last']] = countries["country"].str.split(" ", expand=True) countries """ Explanation: We can add both at once to the dataframe: End of explanation """ data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'], 'population': [11.3, 64.3, 81.3, 16.9, 64.9], 'area': [30510, 671308, 357050, 41526, 244820], 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']} countries = pd.DataFrame(data) countries data = {'country': ['Nigeria', 'Rwanda', 'Egypt', 'Morocco', ], 'population': [182.2, 11.3, 94.3, 34.4], 'area': [923768, 26338 , 1010408, 710850], 'capital': ['Abuja', 'Kigali', 'Cairo', 'Rabat']} countries_africa = pd.DataFrame(data) countries_africa """ Explanation: Concatenating data The pd.concat function does all of the heavy lifting of combining data in different ways. pd.concat takes a list or dict of Series/DataFrame objects and concatenates them in a certain direction (axis) with some configurable handling of “what to do with the other axes”. Combining rows - pd.concat Assume we have some similar data as in countries, but for a set of different countries: End of explanation """ pd.concat([countries, countries_africa]) """ Explanation: We now want to combine the rows of both datasets: End of explanation """ pd.concat([countries, countries_africa], ignore_index=True) """ Explanation: If we don't want the index to be preserved: End of explanation """ pd.concat([countries, countries_africa[['country', 'capital']]], ignore_index=True) """ Explanation: When the two dataframes don't have the same set of columns, by default missing values get introduced: End of explanation """ pd.concat({'europe': countries, 'africa': countries_africa}) """ Explanation: We can also pass a dictionary of objects instead of a list of objects. 
Now the keys of the dictionary are preserved as an additional index level: End of explanation """ df = pd.read_csv("data/titanic.csv") df = df.loc[:9, ['Survived', 'Pclass', 'Sex', 'Age', 'Fare', 'Embarked']] df """ Explanation: <div class="alert alert-info"> **NOTE**: A typical use case of `concat` is when you create (or read) multiple DataFrame with a similar structure in a loop, and then want to combine this list of DataFrames into a single DataFrame. For example, assume you have a folder of similar CSV files (eg the data per day) you want to read and combine, this would look like: ```python import pathlib data_files = pathlib.Path("data_directory").glob("*.csv") dfs = [] for path in data_files: temp = pd.read_csv(path) dfs.append(temp) df = pd.concat(dfs) ``` <br> Important: append to a list (not DataFrame), and concat this list at the end after the loop! </div> Joining data with pd.merge Using pd.concat above, we combined datasets that had the same columns. But, another typical case is where you want to add information of a second dataframe to a first one based on one of the columns they have in common. That can be done with pd.merge. 
Let's look again at the titanic passenger data, but taking a small subset of it to make the example easier to grasp:
End of explanation
"""
locations = pd.DataFrame({'Embarked': ['S', 'C', 'N'],
                          'City': ['Southampton', 'Cherbourg', 'New York City'],
                          'Country': ['United Kingdom', 'France', 'United States']})
locations
"""
Explanation: Assume we have another dataframe with more information about the 'Embarked' locations:
End of explanation
"""
pd.merge(df, locations, on='Embarked', how='left')
"""
Explanation: We now want to add those columns to the titanic dataframe, for which we can use pd.merge, specifying the column on which we want to merge the two datasets:
End of explanation
"""
import zipfile

with zipfile.ZipFile("data/TF_VAT_NACE_SQ_2019.zip", "r") as zip_ref:
    zip_ref.extractall()
"""
Explanation: In this case we use how='left' (a "left join") because we wanted to keep the original rows of df and only add matching values from locations to it. Other options are 'inner', 'outer' and 'right' (see the docs for more on this, or this visualization: https://joins.spathon.com/).
Exercise with VAT numbers
For this exercise, we start from an open dataset on "Enterprises subject to VAT" (VAT = Value Added Tax), from https://statbel.fgov.be/en/open-data/enterprises-subject-vat-according-legal-form-11. For different regions and different enterprise types, it contains the number of enterprises subject to VAT ("MS_NUM_VAT"), and the number of such enterprises that started ("MS_NUM_VAT_START") or stopped ("MS_NUM_VAT_STOP") in 2019.
This file is provided as a zipped archive of a SQLite database file.
Let's first unzip it:
End of explanation
"""
import sqlite3

# connect with the database file
con = sqlite3.connect("TF_VAT_NACE_2019.sqlite")
# list the tables that are present in the database
con.execute("SELECT name FROM sqlite_master WHERE type='table';").fetchall()
"""
Explanation: SQLite (https://www.sqlite.org/index.html) is a light-weight database engine, and a database can be stored as a single file. With the sqlite3 module of the Python standard library, we can open such a database and inspect it:
End of explanation
"""
df = pd.read_sql("SELECT * FROM TF_VAT_NACE_2019", con)
df
"""
Explanation: Pandas provides functionality to query data from a database. Let's fetch the main dataset contained in this file:
End of explanation
"""
df_legal_forms = pd.read_sql("SELECT * FROM TD_LGL_PSN_VAT", con)
df_legal_forms
"""
Explanation: More information about the identifier variables (the first three columns) can be found in the other tables. For example, the "CD_LGL_PSN_VAT" column contains information about the legal form of the enterprise. What the values in this column mean can be found in a different table:
End of explanation
"""
# %load _solutions/pandas_09_combining_datasets1.py
"""
Explanation: This type of data organization is called a "star schema" (https://en.wikipedia.org/wiki/Star_schema), and if we want to get a "denormalized" version of the main dataset (all the data combined), we need to join the different tables.
<div class="alert alert-success">
**EXERCISE 1**:
Add the full name of the legal form (in the DataFrame `df_legal_forms`) to the main dataset (`df`). For this, join both datasets based on the "CD_LGL_PSN_VAT" column.
<details><summary>Hints</summary>
- `pd.merge` requires a left and a right DataFrame, the specification `on` to define the common index and the merge type `how`.
- Decide which type of merge is most appropriate: left, right, inner,...
</details>
</div>
End of explanation
"""
# %load _solutions/pandas_09_combining_datasets2.py
"""
Explanation: <div class="alert alert-success">
**EXERCISE 2**:
How many registered enterprises are there for each legal form? Sort the result from most to least occurring form.
<details><summary>Hints</summary>
- To count the number of registered enterprises, take the `sum` _for each_ (`groupby`) legal form.
- Check the `ascending` parameter of the `sort_values` function.
</details>
</div>
End of explanation
"""
# %load _solutions/pandas_09_combining_datasets3.py

# %load _solutions/pandas_09_combining_datasets4.py

# %load _solutions/pandas_09_combining_datasets5.py
"""
Explanation: <div class="alert alert-success">
**EXERCISE 3**:
How many enterprises are registered per province?
* Read in the "TD_MUNTY_REFNIS" table from the database file into a `df_muni` dataframe, which contains more information about the municipality (and the province in which the municipality is located).
* Merge the information about the province into the main `df` dataset.
* Using the joined dataframe, calculate the total number of registered companies per province.
<details><summary>Hints</summary>
- Data loading in Pandas requires `pd.read_...`, in this case `read_sql`. Do not forget the connection object as a second input.
- `df_muni` contains a lot of columns, whereas we are only interested in the province information. Only use the relevant columns "TX_PROV_DESCR_EN" and "CD_REFNIS" (you need this to join the data).
- Calculate the `sum` _for each_ (`groupby`) province.
</details>
</div>
End of explanation
"""
import geopandas
import fiona

stat = geopandas.read_file("data/statbel_statistical_sectors_2019.shp.zip")
stat.head()

stat.plot()
"""
Explanation: The course materials contain a simplified version of the "statistical sectors" dataset (https://statbel.fgov.be/nl/open-data/statistische-sectoren-2019), with the borders of the municipalities.
This dataset is provided as a zipped ESRI Shapefile, one of the commonly used file formats in GIS for vector data. The GeoPandas package extends pandas with geospatial functionality.
End of explanation
"""
df_by_muni = df.groupby("CD_REFNIS").sum()
"""
Explanation: The resulting dataframe (a GeoDataFrame) has a "geometry" column (in this case with polygons representing the borders of the municipalities), and a couple of new methods with geospatial functionality (for example, the plot() method by default makes a map). It is still a DataFrame, and everything we have learned about pandas can be used here as well.
Let's visualize the change in number of registered enterprises on a map at the municipality level.
We first calculate the total number of (existing/starting/stopping) enterprises per municipality:
End of explanation
"""
df_by_muni["NUM_VAT_CHANGE"] = (df_by_muni["MS_NUM_VAT_START"] - df_by_muni["MS_NUM_VAT_STOP"]) / df_by_muni["MS_NUM_VAT"] * 100
df_by_muni
"""
Explanation: And add a new column with the relative change in the number of registered enterprises:
End of explanation
"""
joined = pd.merge(stat, df_by_muni, left_on="CNIS5_2019", right_on="CD_REFNIS")
joined
"""
Explanation: We can now merge the dataframe with the geospatial information of the municipalities with the dataframe with the enterprise numbers:
End of explanation
"""
joined["NUM_VAT_CHANGE_CAT"] = pd.cut(joined["NUM_VAT_CHANGE"], [-15, -6, -4, -2, 2, 4, 6, 15])

joined.plot(column="NUM_VAT_CHANGE_CAT", figsize=(10, 10), cmap="coolwarm", legend=True)#k=7, scheme="equal_interval")
"""
Explanation: With this joined dataframe, we can make a new map, now visualizing the change in number of registered enterprises ("NUM_VAT_CHANGE"):
End of explanation
"""
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
        'population': [11.3, 64.3, 81.3, 16.9, 64.9],
        'area': [30510, 671308, 357050, 41526, 244820],
        'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam',
'London']} countries = pd.DataFrame(data) countries data = {'country': ['Belgium', 'France', 'Netherlands'], 'GDP': [496477, 2650823, 820726], 'area': [8.0, 9.9, 5.7]} country_economics = pd.DataFrame(data).set_index('country') country_economics pd.concat([countries, country_economics], axis=1) """ Explanation: Combining columns - pd.concat with axis=1 We can use pd.merge to combine the columns of two DataFrame based on a common column. If our two DataFrames already have equivalent rows, we can also achieve this basic case using pd.concat with specifying axis=1 (or axis="columns"). Assume we have another DataFrame for the same countries, but with some additional statistics: End of explanation """ countries2 = countries.set_index('country') countries2 pd.concat([countries2, country_economics], axis="columns") """ Explanation: pd.concat matches the different objects based on the index: End of explanation """
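The two directions can be contrasted on two tiny toy frames: axis=0 stacks rows (aligning the columns), while axis="columns" aligns the rows on the index:

```python
import pandas as pd

# Two small frames sharing the same index but different columns.
a = pd.DataFrame({"x": [1, 2]}, index=["r1", "r2"])
b = pd.DataFrame({"y": [3, 4]}, index=["r1", "r2"])

rows = pd.concat([a, b])                    # axis=0: 4 rows, union of columns
cols = pd.concat([a, b], axis="columns")    # align on the index: 2 rows, 2 columns
```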
marvinoeben/transactional-analysis
Loan_payment_feature_engineering.ipynb
mit
import pandas as pd
import numpy as np
"""
Explanation: Loan payment feature engineering.
In this notebook, we will engineer the features to predict whether an account will be unable to pay its loan in the future. We will use the following characteristics:
- Loan characteristics (size, count, payments etc.)
- Account characteristics
- Transactional behavior
- Demographic information
- Card info
For the finished loans, we will use the first 75% of the running time to predict whether there is a missed payment in the last 25% of the running time.
Imports:
End of explanation
"""
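The 75/25 split of a loan's running time described above can be sketched with a hypothetical helper on a list of monthly observations:

```python
# Split a loan's monthly history: the first 75% is used to build features,
# the last 25% only to derive the missed-payment target.
def split_history(payments, frac=0.75):
    cut = int(len(payments) * frac)
    return payments[:cut], payments[cut:]

history, target_window = split_history(list(range(12)))
```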
End of explanation """ %matplotlib inline import matplotlib.pyplot as plt import matplotlib matplotlib.style.use('ggplot') trans_plot = transaction_scope.cumsum() plt.figure() ax = trans_plot.plot(legend = False) ax.set_ylabel("amount") """ Explanation: Plot all payments. End of explanation """ from datetime import datetime payment_diff = transaction_scope.copy() for acct in scope: loan_start = loan_info[loan_info['account_id']==acct]['date'].unique()[0] loan_date = datetime.strptime(loan_start, '%Y-%m-%d') if loan_date.day < 12: loan_date = datetime(loan_date.year, loan_date.month, loan_date.day + 12) loan_start = loan_date.strftime('%Y-%m-%d') loan_order = loan_info[loan_info['account_id']==acct]['payments'].unique()[0] payment_diff.loc[payment_diff.index > loan_start, acct] = payment_diff.loc[payment_diff.index > loan_start,acct] - loan_order """ Explanation: All these payments seem to have flat months indeed! As we assume that loans are payed monthly, the flat lines on the map appear to be missed payments. As order_info provides us with the heigth of every montly payment, we can plot the difference between the money ordered and eventually paid. Also, all payments occur on the 12th of the month. Many customers who sign the contract of the loan before the 12th, therefore providing a missing payment in the data which may be artificial. We assume that loan payments start the next month therefore we bump the date by 12 days if the day in the month is before the 12th, removing our missed payments. End of explanation """ plt.figure() ax = payment_diff.plot(legend = False) ax.set_ylabel("amount") """ Explanation: Plot all missed payments. 
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')

trans_plot = transaction_scope.cumsum()
plt.figure()
ax = trans_plot.plot(legend = False)
ax.set_ylabel("amount")
"""
Explanation: Plot all payments.
End of explanation
"""
from datetime import datetime

payment_diff = transaction_scope.copy()

for acct in scope:
    loan_start = loan_info[loan_info['account_id']==acct]['date'].unique()[0]
    loan_date = datetime.strptime(loan_start, '%Y-%m-%d')
    if loan_date.day < 12:
        loan_date = datetime(loan_date.year, loan_date.month, loan_date.day + 12)
    loan_start = loan_date.strftime('%Y-%m-%d')
    loan_order = loan_info[loan_info['account_id']==acct]['payments'].unique()[0]
    payment_diff.loc[payment_diff.index > loan_start, acct] = payment_diff.loc[payment_diff.index > loan_start,acct] - loan_order
"""
Explanation: All these payments seem to have flat months indeed! As we assume that loans are paid monthly, the flat lines on the map appear to be missed payments. As order_info provides us with the height of every monthly payment, we can plot the difference between the money ordered and eventually paid.
Also, all payments occur on the 12th of the month. Many customers sign the loan contract before the 12th, which produces a missed payment in the data that may be artificial. We assume that loan payments start the next month, so we bump the date by 12 days if the day in the month is before the 12th, removing these artificial missed payments.
End of explanation
"""
plt.figure()
ax = payment_diff.plot(legend = False)
ax.set_ylabel("amount")
"""
Explanation: Plot all missed payments.
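The start-date adjustment used in the loop above can be isolated into a small helper (it mirrors the notebook's day + 12 bump for contracts signed before the 12th):

```python
from datetime import datetime

# Contracts signed before the 12th are shifted 12 days forward, past the
# 12th-of-month payment date, so their artificial first "missed" month disappears.
def adjust_loan_start(loan_date):
    if loan_date.day < 12:
        return datetime(loan_date.year, loan_date.month, loan_date.day + 12)
    return loan_date
```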
End of explanation """ from datetime import datetime import calendar def add_months(sourcedate, months): month = sourcedate.month - 1 + months year = int(sourcedate.year + month / 12 ) month = month % 12 + 1 day = min(sourcedate.day,calendar.monthrange(year,month)[1]) return datetime(year,month,day) payment_diff = transaction_scope.copy() for acct in scope: if acct in running: loan_start = loan_info[loan_info['account_id']==acct]['date'].unique()[0] loan_date = datetime.strptime(loan_start, '%Y-%m-%d') if loan_date.day < 12: loan_date = datetime(loan_date.year, loan_date.month, loan_date.day + 12) loan_start = loan_date.strftime('%Y-%m-%d') loan_order = loan_info[loan_info['account_id']==acct]['payments'].unique()[0] payment_diff.loc[payment_diff.index > loan_start, acct] = payment_diff.loc[payment_diff.index > loan_start,acct] - loan_order else: loan_start = loan_info[loan_info['account_id']==acct]['date'].unique()[0] loan_date = datetime.strptime(loan_start, '%Y-%m-%d') if loan_date.day < 12: loan_date = datetime(loan_date.year, loan_date.month, loan_date.day + 12) loan_start = loan_date.strftime('%Y-%m-%d') loan_date_end = add_months(loan_date,loan_info[loan_info['account_id']==acct]['duration'].unique()[0]) loan_end = loan_date_end.strftime('%Y-%m-%d') monthly_payment = loan_info[loan_info['account_id']==acct]['payments'].unique()[0] dates = (payment_diff.index > loan_start) & (payment_diff.index < loan_end) payment_diff.loc[dates, acct] = payment_diff.loc[dates, acct] - monthly_payment """ Explanation: We now have to add some months to have dates for every loan in the full dataset. Function borrowed from: https://stackoverflow.com/questions/4130922/ Note that this code is somewhat messy but it will do for now. End of explanation """ plt.figure() ax = payment_diff.plot(legend = False) ax.set_ylabel("amount") """ Explanation: Now we can again make a plot. Which should show more earlier missed payments (and some more positives which were prepaid). 
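The month-arithmetic helper above clamps the day to the length of the target month. As a quick stand-alone sanity check (re-stating the function here so the snippet is self-contained):

```python
from datetime import datetime
import calendar

def add_months(sourcedate, months):
    # Same helper as above: month arithmetic with the day clamped
    # to the length of the target month
    month = sourcedate.month - 1 + months
    year = int(sourcedate.year + month / 12)
    month = month % 12 + 1
    day = min(sourcedate.day, calendar.monthrange(year, month)[1])
    return datetime(year, month, day)
```

For example, one month after 1996-01-31 lands on 1996-02-29 (1996 is a leap year), and three months after 1996-11-30 is clamped to 1997-02-28.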
End of explanation """ max_duration = loan_info['duration'].max() monthly_missed = pd.DataFrame(index=range(0,60), columns=list(payment_diff)) for acct in scope: if acct in running: loan_start = loan_info[loan_info['account_id']==acct]['date'].unique()[0] tmp = payment_diff.loc[payment_diff.index > loan_start, acct].tolist() tmp = [round(i,0) for i in tmp] tmp = tmp + [0]*(60 - len(tmp)) tmp = [int(i) for i in tmp] monthly_missed[acct] = tmp else: loan_start = loan_info[loan_info['account_id']==acct]['date'].unique()[0] loan_date = datetime.strptime(loan_start, '%Y-%m-%d') loan_date_end = add_months(loan_date,loan_info[loan_info['account_id']==acct]['duration'].unique()[0]) loan_end = loan_date_end.strftime('%Y-%m-%d') tmp = payment_diff.loc[(payment_diff.index > loan_start) & (payment_diff.index < loan_end), acct].tolist() tmp = [round(i,0) for i in tmp] tmp = tmp + [0]*(60 - len(tmp)) tmp = [int(i) for i in tmp] monthly_missed[acct] = tmp plt.figure() ax = monthly_missed.plot(legend = False) ax.set_ylabel("amount") ax.set_xlabel("month") """ Explanation: Missing payments into loan. Now that we know this, we can do our final transformation on the missed payments. Even though we have information on the unemployment rates of '95 and '96, we act as if all our periods are equal. Hence we can now start making features. We will use the features: - Loan amount - Loan duration - Number of total missed payments - Number of missed payments in first, second, third and last quarter of the loan - Max number of consecutively missed payments As we will provide the split into the 80/20 estimate, we will first split and then generate the features on the first 80%. The last 20% will only be provided a "missed payment" column which is either 0 or 1. Let's first remake the above picture for 'months into loan'. This means moving all starts to 0. My implementation seems suboptimal, but it will do the job. I could have also moved all transactions and redo the pivoting. 
End of explanation
"""
df = loan_info.copy()
df.columns = ['loan_id','account_id','start_date','amount','duration','payments','status']
"""
Explanation: This confirms that the positive payment differences occur in the first month. Hence, this is the prepayment mismatch we have (but at least we don't mark payments as missed that aren't).
Let's make the loan table.
End of explanation
"""
tmp = pd.DataFrame(monthly_missed.apply(lambda column: (column < -1).sum()))
tmp.columns = ['total_missed']
df = pd.merge(left = df, right = tmp, left_on = 'account_id', right_index = True)
"""
Explanation: Merge the total misses
End of explanation
"""
durations = df['duration'].value_counts().to_dict()
quarter_missed = {}
misses = pd.DataFrame(columns = ['first_quarter','second_quarter','third_quarter','last_quarter'])
for key in durations.keys():
    tmp = df[df['duration']==int(key)]['account_id'].tolist()
    quarter_missed[key] = monthly_missed.loc[:,tmp]
    # integer division keeps the quarter boundaries as ints (required by range in Python 3)
    tmp_df = pd.DataFrame(quarter_missed[key].loc[range(0,key//4),].apply(lambda column: (column < -1).sum()))
    tmp_df['second_quarter'] = quarter_missed[key].loc[range(key//4, 2*key//4),].apply(lambda column: (column < -1).sum())
    tmp_df['third_quarter'] = quarter_missed[key].loc[range(2*key//4, 3*key//4),].apply(lambda column: (column < -1).sum())
    tmp_df['last_quarter'] = quarter_missed[key].loc[range(3*key//4, 4*key//4),].apply(lambda column: (column < -1).sum())
    tmp_df.columns = ['first_quarter','second_quarter','third_quarter','last_quarter']
    misses = pd.concat([misses,tmp_df])
for col in misses.columns:
    misses[col] = misses[col].astype(int)
df = pd.merge(left = df, right = misses, left_on = 'account_id', right_index = True)
quarter_missed.keys()
"""
Explanation: Split the dataset on loan duration
End of explanation
"""
consecutive = monthly_missed.copy()
consecutive[consecutive > -2] = 0
consecutive[consecutive < -1] = -1
# From: https://stackoverflow.com/questions/36441521
import itertools
tmplist = []
for acct in consecutive.columns:
    a = consecutive[acct].tolist()
    z = [(x[0], len(list(x[1]))) for x in itertools.groupby(a)]
    z = [x for x in z if x[0] !=0]
    if len(z)==0:
        tmp = 0
    else:
        tmp = max(z, key=lambda x:x[1])[1]
    tmplist.append(tmp)
max_missed = pd.DataFrame()
max_missed['account_id'] = consecutive.columns
max_missed['max_missed'] = tmplist
df = pd.merge(left = df, right = max_missed, left_on = 'account_id', right_on = 'account_id')
"""
Explanation: Max number of consecutively missed payments
End of explanation
"""
client_scope = client_info[client_info['account_id'].isin(scope)]
client_scope.describe()
"""
Explanation: Client characteristics
We look at the following characteristics:
- Number of clients per account.
- Average balance on the account.
- Birthyear of the owner
- Sex of the owner
- Startyear of the account
Note that for finished loans we will need to cut the data at 75% of the running time when we accumulate the different data.
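The groupby trick above finds the longest run of consecutively missed months. Isolated on a toy 0/-1 flag sequence (the function name is illustrative, not from the notebook):

```python
import itertools

def longest_missed_run(flags):
    # Run-length encode the sequence, keep only runs of -1 (missed months),
    # and return the length of the longest such run
    runs = [(val, len(list(grp))) for val, grp in itertools.groupby(flags)]
    missed = [length for val, length in runs if val == -1]
    return max(missed) if missed else 0
```

A sequence with two missed-payment plateaus of lengths 2 and 3 yields 3; a sequence with no misses yields 0.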
Scope on only the clients with a loan: End of explanation """ tmp = pd.DataFrame(client_scope.groupby('account_id').agg('count')['client_id']) df = pd.merge(left = df, right = tmp, left_on = 'account_id', right_index = True) """ Explanation: Get the number of clients End of explanation """ # Get the scope trans_scope = transaction_info[transaction_info['account_id'].isin(scope)] trans_scope = pd.merge(left = trans_scope, right = df.loc[:,['account_id','start_date','duration','status']]) trans_scope['start_date'] = pd.to_datetime(trans_scope['start_date'], format='%Y-%m-%d') trans_scope['date'] = pd.to_datetime(trans_scope['date'], format='%Y-%m-%d') # Split the data in running and finished trans_scope_running = trans_scope[trans_scope['status'].isin(['run_no_problem','run_but_debt'])].copy() trans_scope_finished = trans_scope[trans_scope['status'].isin(['fin_no_problem','fin_unpaid'])].copy() # Remove the transactions after the third quarter trans_scope_finished['cutoff_duration'] = trans_scope_finished['duration']*3/4 trans_scope_finished['cutoff_date'] = trans_scope_finished['start_date'] + \ trans_scope_finished['cutoff_duration'].values.astype("timedelta64[M]") trans_scope_finished = trans_scope_finished[trans_scope_finished['cutoff_date'] > trans_scope_finished['date']] # Remove the helpcolumns del trans_scope_finished['cutoff_duration'] del trans_scope_finished['cutoff_date'] trans_scope = pd.concat([trans_scope_finished, trans_scope_running]) """ Explanation: Get the average balance on the account For finished clients, cut all data after 75% of the running time. NB: This actually comes from the transaction_scope data. End of explanation """ avg_balance = pd.DataFrame(trans_scope.groupby('account_id').agg('mean')['balance']) df = pd.merge(left = df, right = avg_balance, left_on = 'account_id', right_index = True) """ Explanation: Calculate the average balance on the account. 
End of explanation
"""
avg_balance = pd.DataFrame(trans_scope.groupby('account_id').agg('mean')['balance'])
df = pd.merge(left = df, right = avg_balance, left_on = 'account_id', right_index = True)
"""
Explanation: Get the birth year of the owner.
End of explanation
"""
owner_scope = client_scope[client_scope['type_client']=="OWNER"].copy()
owner_scope['birth_number'] = pd.to_datetime(owner_scope['birth_number'], format='%Y-%m-%d')
owner_scope['birthyear_owner'] = owner_scope['birth_number'].dt.year
df = pd.merge(left = df, right = owner_scope.loc[:,['account_id','birthyear_owner']], left_on = 'account_id', right_on = 'account_id')
"""
Explanation: Add the sex of the owner.
End of explanation
"""
owner_scope['female'] = 0
owner_scope.loc[owner_scope.sex == 'F', 'female'] = 1
df = pd.merge(left = df, right = owner_scope.loc[:,['account_id','female']], left_on = 'account_id', right_on = 'account_id')
owner_scope['date_client'] = pd.to_datetime(owner_scope['date_client'], format='%Y-%m-%d')
owner_scope['startyear_client'] = owner_scope['date_client'].dt.year
df = pd.merge(left = df, right = owner_scope.loc[:,['account_id','startyear_client']], left_on = 'account_id', right_on = 'account_id')
"""
Explanation: Transactional behavior
Let's look at all the transactions:
- The total number of transactions
- The average volume per transaction
- The total number of cash
transactions
- The average volume per cash transaction
- The total number of cc withdrawals
- The average volume per cc withdrawal
NB: We already filtered the finished contracts on the 75/25% split above.
End of explanation
"""
trans_scope['operation'].value_counts()
average_volume = pd.DataFrame(trans_scope.groupby('account_id').agg('mean')['amount'])
average_volume.columns = ['average_volume']
total_number = pd.DataFrame(trans_scope.groupby('account_id').agg(lambda x: len(x.unique()))['trans_id'])
total_number.columns = ['total_number_trans']
total_number_cash = pd.DataFrame(trans_scope[trans_scope['operation'].isin(['cash_credit','cash_withdrawl'])].groupby('account_id').agg(lambda x: len(x.unique()))['trans_id'])
total_number_cash.columns = ['total_number_cash']
average_cash_volume = pd.DataFrame(trans_scope[trans_scope['operation'].isin(['cash_credit','cash_withdrawl'])].groupby('account_id').agg('mean')['amount'])
average_cash_volume.columns = ['average_volume_cash']
total_number_cc = pd.DataFrame(trans_scope[trans_scope['operation']=="cc_withdrawal"].groupby('account_id').agg(lambda x: len(x.unique()))['trans_id'])
total_number_cc.columns = ['total_number_cc']
average_cc_volume = pd.DataFrame(trans_scope[trans_scope['operation']=="cc_withdrawal"].groupby('account_id').agg('mean')['amount'])
average_cc_volume.columns = ['average_volume_cc']
df = pd.merge(left = df, right = average_volume, left_on = 'account_id', right_index = True, how ='left')
df = pd.merge(left = df, right = total_number, left_on = 'account_id', right_index = True, how ='left')
df = pd.merge(left = df, right = total_number_cash, left_on = 'account_id', right_index = True, how ='left')
df = pd.merge(left = df, right = average_cash_volume, left_on = 'account_id', right_index = True, how ='left')
df = pd.merge(left = df, right = total_number_cc, left_on = 'account_id', right_index = True, how ='left')
df = pd.merge(left = df, right = average_cc_volume, left_on = 'account_id', right_index = True, how ='left')
df = df.fillna(0)
"""
Explanation: Demographic information
Not taking everything into account (as it is not account/client specific information). We will use:
- inhabitants
- urban_ratio
- avg unemployment rate (1995 & 1996)
- avg committed crimes (1995 & 1996) divided by # of inhabitants
End of explanation
"""
demographic_info['avg_unemployment'] = demographic_info[['unemployment_rate_1995', 'unemployment_rate_1996']].mean(axis=1)
demographic_info['avg_committed_crimes'] = demographic_info[['committed_crimes_1995', 'comitted_crimes_1996']].mean(axis=1)
demographic_info.loc[:,'avg_committed_crimes'] = demographic_info.loc[:,'avg_committed_crimes']/demographic_info.loc[:,'inhabitants']
df = pd.merge(left = df, right = owner_scope.loc[:,['account_id','district_id_client']], how = 'left', left_on = 'account_id', right_on = 'account_id')
df = pd.merge(left = df, right = demographic_info.loc[:,['district_id','inhabitants','urban_ratio', 'avg_unemployment','avg_committed_crimes']], how = 'left', left_on = 'district_id_client', right_on = 'district_id')
"""
Explanation: Card info
We will only look at the type of card.
We will try to make two groups. We have no information on what 'junior', 'classic' or 'gold' is. Thus we will make a plot to see whether they have any influence on the loan status.
We only have 107 credit cards for all customers having a loan. Only 5 of those have no debt. Thus we decided to drop the card info for now.
End of explanation
"""
card_scope = client_info.loc[:,['status','type_card']].dropna()
card_scope['status'].value_counts()
"""
Explanation: Save the data frame:
End of explanation
"""
df.to_csv('data/loan_prep.csv', index = False)
Source: massimo-nocentini/simulation-methods, notes/matrices-functions/matricial-characterization-of-Hermite-interpolating-polynomials.ipynb (MIT license)
from sympy import * from sympy.abc import n, i, N, x, lamda, phi, z, j, r, k, a, t, alpha from matrix_functions import * from sequences import * init_printing() d = IndexedBase('d') g = Function('g') m_sym = symbols('m') """ Explanation: <p> <img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg" alt="UniFI logo" style="float: left; width: 20%; height: 20%;"> <div align="right"> Massimo Nocentini<br> <small> <br>February 28, 2018: splitting from "big" notebook </small> </div> </p> <br> <br> <div align="center"> <b>Abstract</b><br> Theory of matrix functions, matricial characterization of Hermite interpolating polynomials. </div> End of explanation """ m=8 R = define(Symbol(r'\mathcal{R}'), Matrix(m, m, riordan_matrix_by_recurrence(m, lambda n, k: {(n, k):1 if n == k else d[n, k]}))) R eigendata = spectrum(R) eigendata data, eigenvals, multiplicities = eigendata.rhs Phi_poly = Phi_poly_ctor(deg=m-1) Phi_poly Phi_polynomials = component_polynomials(eigendata, early_eigenvals_subs=False) Phi_polynomials Phi_polynomials = component_polynomials(eigendata, early_eigenvals_subs=True) Phi_polynomials res_expt = M_expt, z_expt, Phi_expt =( Matrix(m, m, lambda n,k: (-lamda_indexed[1])**(n-k)/(factorial(n-k)) if n-k >= 0 else 0), Matrix([z**i/factorial(i, evaluate=i<2) for i in range(m)]), Matrix([Function(r'\Phi_{{ {}, {} }}'.format(1, j))(z) for j in range(1, m+1)])) res_expt production_matrix(M_expt) exp(-lamda_indexed[1]*t).series(t, n=m) g, f = Function('g'), Function('f') ERA = Matrix(m, m, riordan_matrix_by_convolution(m, d=Eq(g(t), exp(-lamda_indexed[1]*t)), h=Eq(f(t), t))) ERA assert M_expt == ERA exp(z*t).series(t, n=m), [factorial(i) for i in range(m)] exp(t*(z-lamda_indexed[1])).series(t, n=m) partials = Matrix(m, m, lambda n, k: Subs(f(t).diff(t, n), [t], [lamda_indexed[1]]) if n==k else 0) partials DE = (partials * M_expt).applyfunc(lambda i: i.doit()) DE production_matrix(DE).applyfunc(simplify) # takes long to evaluate """ Explanation: End 
of explanation """ DE_inv = DE.subs({f:Lambda(t, 1/t)}).applyfunc(lambda i: i.doit()) DE_inv production_matrix(DE_inv) Matrix(m, m, columns_symmetry(DE_inv)) inspect(_) DE_inv_RA = Matrix(m, m, riordan_matrix_by_recurrence(m, lambda n, k: {(n-1,k-1):-k/lamda_indexed[1], (n-1,k):1} if k else {(n-1,k):1}, init={(0,0):1/lamda_indexed[1]})) DE_inv_RA assert DE_inv == DE_inv_RA DEz = (DE_inv* z_expt).applyfunc(lambda i: i.doit().factor()) DEz g_v = ones(1, m) * DEz g_inv_eq = Eq(g(z), g_v[0,0], evaluate=False) g_inv_eq.subs(eigenvals) g_Z_12 = Eq(g(z), Sum((-z)**(j), (j,0,m_sym-1))) g_Z_12 with lift_to_matrix_function(g_Z_12.subs({m_sym:m}).doit()) as g_Z_12_fn: P = Matrix(m, m, binomial) I = eye(m, m) Z_12 = define(Symbol(r'Z_{1,2}'), P - I) P_inv = g_Z_12_fn(Z_12) P_inv assert P * P_inv.rhs == I g_Z_12.subs({m_sym:oo}).doit() """ Explanation: $f(z)=\frac{1}{z}$ End of explanation """ DE_pow = DE.subs({f:Lambda(t, t**r)}).applyfunc(lambda i: i.doit().factor()) DE_pow DE_pow_ff = Matrix(m, m, lambda n, k: ((-1)**(n-k)*ff(r, n, evaluate=False)*(lamda_indexed[1])**r/(ff(n-k, n-k, evaluate=False)*lamda_indexed[1]**k) if k<=n else S(0)).powsimp()) DE_pow_ff assert DE_pow.applyfunc(powsimp) == DE_pow_ff.doit() ff(r, 7), factorial(7), ff(7,7) assert binomial(r,7).combsimp() == (ff(r, 7)/ff(7,7)) production_matrix(DE_pow) def rec(n, k): if k: return {(n-1, k-1):( r+1-k)/lamda_indexed[1], (n-1,k):1} else: return {(n-1, j): -((r+1)*lamda_indexed[1]**j/factorial(j+1) if j else r) for j in range(n)} DE_pow_rec = Matrix(m, m, riordan_matrix_by_recurrence(m, rec, init={(0,0):lamda_indexed[1]**r})) DE_pow_rec = DE_pow_rec.applyfunc(factor) DE_pow_rec assert DE_pow == DE_pow_rec DEz = (DE_pow* z_expt).applyfunc(lambda i: i.doit().factor()) DEz DEz_ff = Matrix(m,1,lambda n,_: (ff(r, n,evaluate=False)/(ff(n,n,evaluate=False)*lamda_indexed[1]**n) * lamda_indexed[1]**r * (z-lamda_indexed[1])**n).powsimp()) DEz_ff DEz_binomial = Matrix(m,1,lambda n,_: binomial(r, 
n,evaluate=False)*(lamda_indexed[1]**(r-n)) * (z-lamda_indexed[1])**n) DEz_binomial assert DEz.applyfunc(lambda i: i.powsimp()) == DEz_ff.doit().applyfunc(lambda i: i.powsimp()) == DEz_binomial.applyfunc(lambda i: i.combsimp().powsimp()) g_v = ones(1, m) * DEz_binomial g_v_eq = Eq(g(z), g_v[0,0].collect(z), evaluate=False) g_v_eq.subs(eigenvals) g_pow_eq = Eq(g(z), Sum(z**(j) * binomial(r,j), (j,0,m_sym-1))) g_pow_eq with lift_to_matrix_function(g_pow_eq.subs({m_sym:m}).doit()) as g_pow_fn: P_star_r = g_pow_fn(Z_12) P_star_r assert (P**r).applyfunc(simplify) == P_star_r.rhs g_pow_eq.subs({m_sym:oo}).doit() """ Explanation: $f(z)=z^{r}$ End of explanation """ DE_sqrt = DE.subs({f:Lambda(t, sqrt(t))}).applyfunc(lambda i: i.doit().factor()) DE_sqrt production_matrix(DE_sqrt) DEz = (DE_sqrt* z_expt).applyfunc(lambda i: i.doit().factor()) DEz g_v = ones(1, m) * DEz g_sqrt = Eq(g(z), g_v[0,0].collect(z), evaluate=False) g_sqrt g_sqrt.subs(eigenvals) sqrt(1+t).series(t, n=10) """ Explanation: $f(z)=\sqrt{z}$ End of explanation """ g_sqrt_eq = Eq(g(z), Sum(z**(j) * binomial(1/S(2),j), (j,0,m_sym-1))) g_sqrt_eq with lift_to_matrix_function(g_sqrt_eq.subs({m_sym:m}).doit()) as g_sqrt_fn: P_sqrt_r = g_sqrt_fn(Z_12) P_sqrt_r assert (P_sqrt_r.rhs**2).applyfunc(simplify) == P g_sqrt_eq.subs({m_sym:oo}).doit() """ Explanation: according to A002596 End of explanation """ DE_expt = DE.subs({f:Lambda(t, exp(alpha*t))}).applyfunc(lambda i: i.doit().factor()) DE_expt production_matrix(DE_expt) DEz = (DE_expt* z_expt).applyfunc(lambda i: i.doit().factor()) DEz g_v = ones(1, m) * DEz g_exp_v = Eq(g(z), g_v[0,0].collect(z), evaluate=False) g_exp_v g_exp_v.subs(eigenvals) g_exp_eq = Eq(g(z), exp(alpha)*Sum(alpha**j * z**(j) / factorial(j), (j,0,m_sym-1))) g_exp_eq with lift_to_matrix_function(g_exp_eq.subs({m_sym:m}).doit()) as g_exp_fn: P_exp_r = g_exp_fn(Z_12) P_exp_r.rhs.applyfunc(powsimp) g_exp_eq.subs({m_sym:oo}).doit()#.rhs.powsimp() """ Explanation: $f(z)=e^{\alpha z}$ End of 
explanation """ DE_log = DE.subs({f:Lambda(t, log(t))}).applyfunc(lambda i: i.doit().factor()) DE_log production_matrix(DE_log) DEz = (DE_log* z_expt).applyfunc(lambda i: i.doit().factor()) DEz g_v = ones(1, m) * DEz g_log_v = Eq(g(z), g_v[0,0].collect(z), evaluate=False) g_log_v g_log_v.subs(eigenvals) g_log_eq = Eq(g(z), Sum((-1)**(j+1) * z**(j) / j, (j,1,m_sym-1))) g_log_eq with lift_to_matrix_function(g_log_eq.subs({m_sym:m}).doit()) as g_log_fn: P_log_r = g_log_fn(Z_12) P_log_r.rhs.applyfunc(powsimp) g_log_eq.subs({m_sym:oo}).doit() """ Explanation: $f(z)=\log{z}$ End of explanation """ DE_sin = DE.subs({f:Lambda(t, sin(t))}).applyfunc(lambda i: i.doit().factor()) DE_sin production_matrix(DE_sin) # takes long to evaluate DEz = (DE_sin* z_expt).applyfunc(lambda i: i.doit().factor()) DEz g_v = ones(1, m) * DEz g_sin = Eq(g(z), g_v[0,0].collect(z), evaluate=False) g_sin.subs(eigenvals) with lift_to_matrix_function(g_sin) as _g_sin: P_sin = _g_sin(Z_12).rhs.subs(eigenvals).applyfunc(trigsimp) P_sin sin(z).series(z, 1,n=10) """ Explanation: $f(z)=\sin{z}$ End of explanation """ DE_cos = DE.subs({f:Lambda(t, cos(t))}).applyfunc(lambda i: i.doit().factor()) DE_cos production_matrix(DE_cos) # takes long to evaluate DEz = (DE_cos* z_expt).applyfunc(lambda i: i.doit().factor()) DEz g_v = ones(1, m) * DEz Eq(g(z), g_v[0,0].collect(z), evaluate=False) cos(z).series(z, 1,n=10) """ Explanation: $f(z)=\cos{z}$ End of explanation """
Source: JonasHarnau/apc, apc/vignettes/vignette_misspecification.ipynb (GPL-3.0 license)
import apc
# Turn off future warnings
import warnings
warnings.simplefilter('ignore', FutureWarning)
"""
Explanation: Misspecification Tests for Log-Normal and Over-Dispersed Poisson Chain-Ladder Models
We replicate the empirical applications in Harnau (2018) in Section 5. The work on this vignette was supported by the European Research Council, grant AdG 694262.
First, we import the package
End of explanation
"""
model_VNJ = apc.Model()
"""
Explanation: 5.1 Log-Normal Chain-Ladder
This corresponds to Section 5.1 in the paper. The data are taken from Verrall et al. (2010). Kuang et al. (2015) fitted a log-normal chain-ladder model to this data. The model is given by
$$ M^{LN}_{\mu, \sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_i + \beta_j + \delta, \sigma^2). $$
They found that the largest residuals could be found within the first five accident years. Consequently, they raised the question whether the model is misspecified. Here, we investigate this question.
Full model
We set up and estimate the full, most restrictive, model $M^{LN}_{\mu, \sigma^2}$. We begin by setting up a model class.
End of explanation
"""
model_VNJ.data_from_df(apc.loss_VNJ(), data_format='CL')
"""
Explanation: Next, we attach the data for the model. The data come pre-formatted in the package.
End of explanation
"""
model_VNJ.fit('log_normal_response', 'AC')
"""
Explanation: We fit a log-normal chain-ladder model to the full data and confirm that we get the same result as in the paper for the log-data variance estimate $\hat{\sigma}^{2,LN}$ and the degrees of freedom $df$. This should correspond to the values for $\mathcal{I}$ in Figure 2(b).
End of explanation
"""
print('log-data variance full model: {:.3f}'.format(model_VNJ.s2))
print('degrees of freedom full model: {:.0f}'.format(model_VNJ.df_resid))
"""
Explanation: This matches the results in the paper.
Sub-models
We move on to split the data into sub-samples. The sub-samples $\mathcal{I}_1$ and $\mathcal{I}_2$ contain the first and the last five accident years, respectively. Accident years correspond to "cohorts" in age-period-cohort terminology. Rather than first splitting the sample and then generating and fitting a new model, we make use of the "sub_model" functionality of the package which does all that for us. Combined, the sub-models correspond to $M^{LN}$.
End of explanation
"""
sub_model_VNJ_1 = model_VNJ.sub_model(coh_from_to=(1,5), fit=True)
sub_model_VNJ_2 = model_VNJ.sub_model(coh_from_to=(6,10), fit=True)
"""
Explanation: We can check that this generated the estimates $\hat{\sigma}^{2,LN}_\ell$ and degrees of freedom $df_\ell$ from the paper.
End of explanation
"""
print('First five accident years (I_1)')
print('-------------------------------')
print('log-data variance: {:.3f}'.format(sub_model_VNJ_1.s2))
print('degrees of freedom: {:.0f}\n'.format(sub_model_VNJ_1.df_resid))
print('Last five accident years (I_2)')
print('------------------------------')
print('log-data variance: {:.3f}'.format(sub_model_VNJ_2.s2))
print('degrees of freedom: {:.0f}'.format(sub_model_VNJ_2.df_resid))
"""
Explanation: Reassuringly, it does. We can then also compute the weighted average predictor $\bar{\sigma}^{2,LN}$
End of explanation
"""
s2_bar_VNJ = ((sub_model_VNJ_1.s2 * sub_model_VNJ_1.df_resid
               + sub_model_VNJ_2.s2 * sub_model_VNJ_2.df_resid)
              /(sub_model_VNJ_1.df_resid + sub_model_VNJ_2.df_resid))
print('Weighted avg of log-data variance: {:.3f}'.format(s2_bar_VNJ))
"""
Explanation: Check!
Testing for common variances
Now we can move on to test the hypothesis of common variances
$$ H_{\sigma^2}: \sigma^2_1 = \sigma^2_2. $$
This corresponds to testing for a reduction from $M^{LN}$ to $M^{LN}_{\sigma^2}$.
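The weighted average computed above extends to any number of sub-samples; a minimal generic helper (a sketch for intuition, not part of apc):

```python
def pooled_variance(s2, df):
    # Degrees-of-freedom-weighted average of per-sample variance estimates
    return sum(d * s for d, s in zip(df, s2)) / sum(df)
```

With variances 1.0 and 3.0 on 10 and 30 degrees of freedom, the pooled estimate is (10*1 + 30*3)/40 = 2.5.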
First, we can conduct a Bartlett test. This functionality is pre-implemented in the package.
End of explanation
"""
bartlett_VNJ = apc.bartlett_test([sub_model_VNJ_1, sub_model_VNJ_2])
"""
Explanation: The test statistic $B^{LN}$ is computed as the ratio of $LR^{LN}$ to the Bartlett correction factor $C$. The p-value is computed by the $\chi^2$ approximation to the distribution of $B^{LN}$. The number of sub-samples is given by $m$.
End of explanation
"""
for key, value in bartlett_VNJ.items():
    print('{}: {:.2f}'.format(key, value))
"""
Explanation: We get the same results as in the paper. Specifically, we get a p-value of $0.09$ for the hypothesis, so the Bartlett test does not arm us with strong evidence against the null hypothesis.
In the paper, we also conduct an $F$-test for the same hypothesis. The statistic is computed as
$$ F_{\sigma^2}^{LN} = \frac{\hat\sigma^{2,LN}_2}{\hat\sigma^{2,LN}_1} $$
which, under the null, is distributed as $\mathrm{F}_{df_2, df_1}$. This is not directly implemented in the package but still easily computed. First we compute the test statistic
End of explanation
"""
F_VNJ_sigma2 = sub_model_VNJ_2.s2/sub_model_VNJ_1.s2
print('F statistic for common variances: {:.2f}'.format(F_VNJ_sigma2))
"""
Explanation: Now we can compute p-values in one-sided and two-sided tests. For an (equal-tailed) two-sided test, we first find the percentile $P(F_{\sigma^2}^{LN} \leq \mathrm{F}_{df_2, df_1})$.
This is given by End of explanation """ import matplotlib.pyplot as plt import numpy as np %matplotlib inline x = np.linspace(0.01,5,1000) y = stats.f.pdf(x, dfn=sub_model_VNJ_2.df_resid, dfd=sub_model_VNJ_1.df_resid) plt.figure() plt.plot(x, y, label='$\mathrm{F}_{df_2, df_1}$ density') plt.axvline(F_VNJ_sigma2, color='black', linewidth=1, label='$F^{LN}_{\sigma^2}$') tmp = stats.f.cdf(F_VNJ_sigma2, dfn=sub_model_VNJ_2.df_resid, dfd=sub_model_VNJ_1.df_resid) plt.fill_between(x[x < F_VNJ_sigma2], y[x < F_VNJ_sigma2], color='green', alpha=0.3) tmp = stats.f.ppf(1-tmp, dfn=sub_model_VNJ_2.df_resid, dfd=sub_model_VNJ_1.df_resid) plt.fill_between(x[x > tmp], y[x > tmp], color='green', alpha=0.3) plt.annotate('Area 0.06', xy=(0.15, 0.1), xytext=(0.75, 0.15), arrowprops=dict(facecolor='black')) plt.annotate('Area 0.06', xy=(2.75, 0.025), xytext=(3, 0.2), arrowprops=dict(facecolor='black')) plt.legend() plt.title('Two-sided F-test') plt.show() """ Explanation: If this is below the 50th percentile, the p-value is simply twice the percentile, otherwise we subtract the percentile from unity and multiply that by two. For intuition, we can look at the plot below. The green areas in the lower and upper tail of the distribution contain the same probability mass, namely $P(F_{\sigma^2}^{LN} \leq \mathrm{F}_{df_2, df_1})$. The two-sided p-value corresponds to the sum of the two areas. End of explanation """ print('F test two-sided p-value: {:.2f}'.format( 2*np.min([F_VNJ_sigma2_percentile, 1-F_VNJ_sigma2_percentile]) ) ) """ Explanation: Since $F_{\sigma^2}^{LN}$ is below the 50th percentile, the two-sided equal tailed p-value is in our case given by End of explanation """ print('F statistic one-sided p-value: {:.2f}'.format(F_VNJ_sigma2_percentile)) """ Explanation: The one-sided p-value for the hypothesis $H_{\sigma^2}: \sigma^2_1 \leq \sigma^2_2$ simply corresponds to the area in the lower tail of the distribution. 
This is because the statistic is $\hat\sigma^{2,LN}_2/\hat\sigma^{2,LN}_1$ so that smaller values work against our hypothesis. Thus, the rejection region is the lower tail.
Remark: in the paper, the one-sided hypothesis is given as $H_{\sigma^2}: \sigma^2_1 > \sigma^2_2$. This is a mistake as this corresponds to the alternative.
End of explanation
"""
print('F statistic one-sided p-value: {:.2f}'.format(F_VNJ_sigma2_percentile))
"""
Explanation: Testing for common linear predictors
We can move on to test for common linear predictors:
$$ H_{\mu, \sigma^2}: \sigma^2_1 = \sigma^2_2 \quad \text{and} \quad \alpha_{i,\ell} + \beta_{j,\ell} + \delta_\ell = \alpha_i + \beta_j + \delta $$
If we are happy to accept the hypothesis of common variances $H_{\sigma^2}: \sigma^2_1 = \sigma^2_2$, we can test $H_{\mu, \sigma^2}$ with a simple $F$-test; this corresponds to a reduction from $M^{LN}_{\sigma^2}$ to $M^{LN}_{\mu, \sigma^2}$. The test is implemented in the package.
End of explanation
"""
f_linpred_VNJ = apc.f_test(model_VNJ, [sub_model_VNJ_1, sub_model_VNJ_2])
"""
Explanation: This returns the test statistic $F_\mu^{LN}$ along with the p-value.
End of explanation
"""
for key, value in f_linpred_VNJ.items():
    print('{}: {:.2f}'.format(key, value))
"""
Explanation: These results, too, match those from the paper.
5.2 Over-dispersed Poisson Chain-Ladder
This corresponds to Section 5.2 in the paper. The data are taken from Taylor and Ashe (1983). For this data, the desired full model is an over-dispersed Poisson model given by
$$ M^{ODP}_{\mu, \sigma^2}: \quad E(Y_{ij}) = \exp(\alpha_i + \beta_j + \delta), \quad \frac{\mathrm{var}(Y_{ij})}{E(Y_{ij})} = \sigma^2. $$
We proceed just as we did above. First, we set up and estimate the full model and the sub-models.
Second, we compute the Bartlett test for common over-dispersion. Third, we test for common linear predictors. Finally, we repeat the testing procedure for different sub-sample structures.
Full model
We set up and estimate the model $M^{ODP}_{\mu, \sigma^2}$ on the full data set.
End of explanation
"""
model_TA = apc.Model()
model_TA.data_from_df(apc.data.pre_formatted.loss_TA(), data_format='CL')
model_TA.fit('od_poisson_response', 'AC')
print('over-dispersion full model: {:.0f}'.format(model_TA.s2))
print('degrees of freedom full model: {:.0f}'.format(model_TA.df_resid))
"""
Explanation: Sub-models
We set up and estimate the models on the four sub-samples. Combined, these models correspond to $M^{ODP}$.
End of explanation
"""
sub_model_TA_1 = model_TA.sub_model(per_from_to=(1,5), fit=True)
sub_model_TA_2 = model_TA.sub_model(coh_from_to=(1,5), age_from_to=(1,5),
                                    per_from_to=(6,10), fit=True)
sub_model_TA_3 = model_TA.sub_model(age_from_to=(6,10), fit=True)
sub_model_TA_4 = model_TA.sub_model(coh_from_to=(6,10), fit=True)
sub_models_TA = [sub_model_TA_1, sub_model_TA_2, sub_model_TA_3, sub_model_TA_4]
for i, sm in enumerate(sub_models_TA):
    print('Sub-sample I_{}'.format(i+1))
    print('--------------')
    print('over-dispersion: {:.0f}'.format(sm.s2))
    print('degrees of freedom: {:.0f}\n'.format(sm.df_resid))
s2_bar_TA = np.array([sm.s2 for sm in sub_models_TA]).dot(
    np.array([sm.df_resid for sm in sub_models_TA])
)/np.sum([sm.df_resid for sm in sub_models_TA])
print('Weighted avg of over-dispersion: {:.0f}'.format(s2_bar_TA))
"""
Explanation: Testing for common over-dispersion
We perform a Bartlett test for the hypothesis of common over-dispersion across sub-samples $H_{\sigma^2}: \sigma^2_\ell = \sigma^2$. This corresponds to testing a reduction from $M^{ODP}$ to $M^{ODP}_{\sigma^2}$.
End of explanation
"""
bartlett_TA = apc.bartlett_test(sub_models_TA)
for key, value in bartlett_TA.items():
    print('{}: {:.2f}'.format(key, value))
"""
Explanation: These results match those in the paper. The Bartlett test yields a p-value of 0.08.
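The Bartlett statistic used in these tests can be sketched generically from the sub-sample variance estimates and degrees of freedom. This mirrors the classic Bartlett construction; apc.bartlett_test's internals may differ in detail, for instance in how the p-value is reported:

```python
import math

def bartlett_from_variances(s2, df):
    # Classic Bartlett statistic from per-group variance estimates s2
    # and degrees of freedom df: B = LR / C, approximately chi-squared
    # with m - 1 degrees of freedom under the null of common variance
    m = len(s2)
    df_tot = sum(df)
    s2_bar = sum(d * s for d, s in zip(df, s2)) / df_tot  # pooled variance
    LR = df_tot * math.log(s2_bar) - sum(d * math.log(s) for d, s in zip(df, s2))
    C = 1 + (sum(1 / d for d in df) - 1 / df_tot) / (3 * (m - 1))
    return LR / C
```

Equal sub-sample variances give a statistic of (numerically) zero, and the statistic grows as the variance estimates spread apart.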
Testing for common linear predictors If we are happy to impose common over-dispersion, we can test for common linear predictors across sub-samples. This corresponds to a reduction from $M^{ODP}_{\sigma^2}$ to $M^{ODP}_{\mu, \sigma^2}$. End of explanation """

sub_models_TA_2 = [model_TA.sub_model(coh_from_to=(1,5), fit=True),
                   model_TA.sub_model(coh_from_to=(6,10), fit=True)]

sub_models_TA_3 = [model_TA.sub_model(per_from_to=(1,4), fit=True),
                   model_TA.sub_model(per_from_to=(5,7), fit=True),
                   model_TA.sub_model(per_from_to=(8,10), fit=True)]

print('Two sub-samples')
print('---------------')
print('Bartlett')
print('--------')
for key, value in apc.bartlett_test(sub_models_TA_2).items():
    print('{}: {:.2f}'.format(key, value))

print('\nF-test')
print('------')
for key, value in apc.f_test(model_TA, sub_models_TA_2).items():
    print('{}: {:.2f}'.format(key, value))

print('\nThree sub-samples')
print('-----------------')
print('Bartlett')
print('--------')
for key, value in apc.bartlett_test(sub_models_TA_3).items():
    print('{}: {:.2f}'.format(key, value))

print('\nF-test')
print('------')
for key, value in apc.f_test(model_TA, sub_models_TA_3).items():
    print('{}: {:.2f}'.format(key, value))

""" Explanation: Repeated testing In the paper, we also suggest a procedure to repeat the tests for different sub-sample structures, using a Bonferroni correction for size-control. End of explanation """

model_BZ = apc.Model()
model_BZ.data_from_df(apc.data.pre_formatted.loss_BZ(), time_adjust=1, data_format='CL')
model_BZ.fit('log_normal_response', 'AC')

print('log-data variance full model: {:.4f}'.format(model_BZ.s2))
print('degrees of freedom full model: {:.0f}'.format(model_BZ.df_resid))

""" Explanation: The test results match those in the paper. For a quick refresher on the Bonferroni correction we turn to Wikipedia. The idea is to control the family-wise error rate, the probability of rejecting at least one null hypothesis when the null is true.
In our scenario, we repeat testing three times. Each individual repetition comprises two sequential tests: a Bartlett and an $F$-test. Under the null hypothesis (so the true model is $M_{\mu, \sigma^2}^{ODP}$), the two tests are independent, so $$P(\text{reject $F$-test } | \text{ not-reject Bartlett test}) = P(\text{reject $F$-test}).$$ Thus, if we test at level $\alpha$, the probability to reject at least once within a repetition is not $\alpha$ but $1-(1-\alpha)^2 \approx 2\alpha$: $$ P(\text{Reject Bartlett or F-test at level }\alpha \text{ for a given split}) \approx 2 \alpha .$$ For thrice repeated testing, we replace $\alpha$ by $\alpha/3$. Then, we bound the probability to reject when the null is true with $$ P\left\{\cup_{i=1}^3\left(\text{Reject Bartlett or F-test at level } \frac{\alpha}{3} \text{ for split }i\right)\right\} \leq 2\alpha \quad \text{(approximately)} .$$ 5.3 Log-Normal (Extended) Chain-Ladder This corresponds to Section 5.3 in the paper. The data are taken from Barnett and Zehnwirth (2000). These data are commonly modeled with a calendar effect. We consider misspecification tests both for a model without a calendar effect, $M^{LN}$, and for a model with a calendar effect $\gamma$, $M^{LNe}$. The models are given by $$ M^{LN}_{\mu, \sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_i + \beta_j + \delta, \sigma^2)$$ and $$ M^{LNe}_{\mu, \sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_i + \beta_j + \gamma_k + \delta, \sigma^2). $$ No calendar effect We set up and estimate the model $M^{LN}_{\mu, \sigma^2}$ on the full data set.
End of explanation """

sub_models_BZ = [model_BZ.sub_model(per_from_to=(1977,1981), fit=True),
                 model_BZ.sub_model(per_from_to=(1982,1984), fit=True),
                 model_BZ.sub_model(per_from_to=(1985,1987), fit=True)]

for i, sm in enumerate(sub_models_BZ):
    print('Sub-sample I_{}'.format(i+1))
    print('--------------')
    print('log-data variance: {:.4f}'.format(sm.s2))
    print('degrees of freedom: {:.0f}\n'.format(sm.df_resid))

s2_bar_BZ = np.array([sm.s2 for sm in sub_models_BZ]).dot(
    np.array([sm.df_resid for sm in sub_models_BZ])
)/np.sum([sm.df_resid for sm in sub_models_BZ])
print('Weighted avg of log-data variances: {:.4f}'.format(s2_bar_BZ))

""" Explanation: Next, the models for the sub-samples. End of explanation """

bartlett_BZ = apc.bartlett_test(sub_models_BZ)
for key, value in bartlett_BZ.items():
    print('{}: {:.2f}'.format(key, value))

""" Explanation: We move on to the Bartlett test for the hypothesis of common log-data variances across sub-samples $H_{\sigma^2}: \sigma^2_\ell = \sigma^2$. End of explanation """

f_linpred_BZ = apc.f_test(model_BZ, sub_models_BZ)
for key, value in f_linpred_BZ.items():
    print('{}: {:.2f}'.format(key, value))

""" Explanation: The Bartlett test yields a p-value of 0.05 as in the paper. We test for common linear predictors across sub-samples. End of explanation """

model_BZe = apc.Model()
model_BZe.data_from_df(apc.data.pre_formatted.loss_BZ(), time_adjust=1, data_format='CL')
model_BZe.fit('log_normal_response', 'APC')  # The only change is in this line.
print('log-data variance full model: {:.4f}'.format(model_BZe.s2))
print('degrees of freedom full model: {:.0f}'.format(model_BZe.df_resid))

sub_models_BZe = [model_BZe.sub_model(per_from_to=(1977,1981), fit=True),
                  model_BZe.sub_model(per_from_to=(1982,1984), fit=True),
                  model_BZe.sub_model(per_from_to=(1985,1987), fit=True)]

for i, sm in enumerate(sub_models_BZe):
    print('Sub-sample I_{}'.format(i+1))
    print('--------------')
    print('log-data variance: {:.4f}'.format(sm.s2))
    print('degrees of freedom: {:.0f}\n'.format(sm.df_resid))

s2_bar_BZe = np.array([sm.s2 for sm in sub_models_BZe]).dot(
    np.array([sm.df_resid for sm in sub_models_BZe])
)/np.sum([sm.df_resid for sm in sub_models_BZe])
print('Weighted avg of log-data variances: {:.4f}'.format(s2_bar_BZe))

bartlett_BZe = apc.bartlett_test(sub_models_BZe)
print('\nBartlett test')
print('-------------')
for key, value in bartlett_BZe.items():
    print('{}: {:.2f}'.format(key, value))

print('\nF-test')
print('------')
f_linpred_BZe = apc.f_test(model_BZe, sub_models_BZe)
for key, value in f_linpred_BZe.items():
    print('{}: {:.2f}'.format(key, value))

""" Explanation: Calendar effect Now we redo the same for the model with a calendar effect. End of explanation """

model_BZe.fit_table(attach_to_self=False).loc[['AC']]

""" Explanation: With this, we replicated Figure 4b. Closer look at the effect of dropping the calendar effect In the paper, we move on to take a closer look at the effect of dropping the calendar effect. We do so in two ways, starting with $$M^{LNe}_{\sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_{i, \ell} + \beta_{j, \ell} + \gamma_{k, \ell} + \delta_\ell, \sigma^2).$$ We want to test for a reduction to $$M^{LN}_{\mu, \sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_i + \beta_j + \delta, \sigma^2).$$ In the figure below, we illustrate two different testing procedures that would get us there.
<center> <img src="https://user-images.githubusercontent.com/25103918/41599423-27d94fec-73a1-11e8-9fe1-3f3a1a9e184a.png" alt="Two ways to test for reduction to the same model" width="400px"/> </center> We can move down, testing $H^{LNe}_{\sigma^2, \mu}$, and then right, testing $H_\gamma: \gamma_k = 0$ We can move right, testing $H_{\gamma_{k, \ell}}: \gamma_{k, \ell} = 0$, and then down, testing $H^{LN}_{\sigma^2, \mu}$ Looking at the first way, we already saw that $H^{LNe}_{\sigma^2, \mu}$ cannot be rejected. To test for the absence of a calendar effect, we can do an (exact) $F$-test. End of explanation """

rss_BZe_dot = np.sum([sub.rss for sub in sub_models_BZe])
rss_BZ_dot = np.sum([sub.rss for sub in sub_models_BZ])
df_BZe_dot = np.sum([sub.df_resid for sub in sub_models_BZe])
df_BZ_dot = np.sum([sub.df_resid for sub in sub_models_BZ])

F_BZ = ((rss_BZ_dot - rss_BZe_dot)/(df_BZ_dot - df_BZe_dot)) / (rss_BZe_dot/df_BZe_dot)
p_F_BZ = stats.f.sf(F_BZ, dfn=df_BZ_dot - df_BZe_dot, dfd=df_BZe_dot)
print('p-value of F-test: {:.2f}'.format(p_F_BZ))

""" Explanation: We see that the p-value (P>F) is close to zero. Next, we consider the second way. We first test $H_{\gamma_{k, \ell}}$.
Since $\sigma^2$ is common across the array from the outset, we can do this with a simple $F$-test: $$ \frac{(RSS_.^{LN} - RSS_.^{LNe})/(df_.^{LN} - df_.^{LNe})}{RSS_.^{LNe}/df_.^{LNe}} \stackrel{D}{=} F_{df_.^{LN} - df_.^{LNe}, df_.^{LNe}} $$ End of explanation """

sub_models_BZe_2 = [model_BZe.sub_model(coh_from_to=(1977,1981), fit=True),
                    model_BZe.sub_model(coh_from_to=(1982,1987), fit=True)]

sub_models_BZe_4 = [model_BZe.sub_model(per_from_to=(1977,1981), fit=True),
                    model_BZe.sub_model(coh_from_to=(1977,1982), age_from_to=(1,5),
                                        per_from_to=(1982,1987), fit=True),
                    model_BZe.sub_model(age_from_to=(6,11), fit=True),
                    model_BZe.sub_model(coh_from_to=(1983,1987), fit=True)]

print('Two sub-samples')
print('---------------')
print('Bartlett')
print('--------')
for key, value in apc.bartlett_test(sub_models_BZe_2).items():
    print('{}: {:.3f}'.format(key, value))

print('\nF-test')
print('------')
for key, value in apc.f_test(model_BZe, sub_models_BZe_2).items():
    print('{}: {:.3f}'.format(key, value))

print('\nFour sub-samples')
print('----------------')
print('Bartlett')
print('--------')
for key, value in apc.bartlett_test(sub_models_BZe_4).items():
    print('{}: {:.2f}'.format(key, value))

print('\nF-test')
print('------')
for key, value in apc.f_test(model_BZe, sub_models_BZe_4).items():
    print('{}: {:.2f}'.format(key, value))

""" Explanation: Thus, this is not rejected. However, we already saw that a reduction from $M^{LN}_{\sigma^2}$ to $M^{LN}_{\mu, \sigma^2}$ is rejected. Repeated testing Just as for the Taylor and Ashe (1983) data, we repeat testing for different splits. End of explanation """
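The size-control arithmetic used in these repeated tests is easy to verify numerically. The snippet below is only a sanity check of the probability bounds from the repeated-testing discussion, assuming a nominal level of $\alpha = 0.05$ and three splits:

```python
alpha = 0.05   # nominal family-wise level (chosen here for illustration)
n_splits = 3   # number of sub-sample structures tested

def reject_prob(a):
    # One split runs a Bartlett test and then an independent F-test, so the
    # chance of at least one rejection under the null is 1 - (1 - a)^2.
    return 1 - (1 - a) ** 2

# Without correction, a single split already over-rejects: about 2 * alpha.
single_split = reject_prob(alpha)  # 0.0975, roughly 2 * 0.05

# Bonferroni: test each split at level alpha / n_splits; the union bound
# then caps the family-wise error at roughly 2 * alpha.
fwer_bound = n_splits * reject_prob(alpha / n_splits)  # about 0.0992
```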
Repository: ML4DS/ML4all, path: C_lab2_NNs/Hand_Digit_with_NN_student.ipynb, license: MIT
import numpy as np import matplotlib.pyplot as plt %matplotlib inline size=18 params = {'legend.fontsize': 'Large', 'axes.labelsize': size, 'axes.titlesize': size, 'xtick.labelsize': size*0.75, 'ytick.labelsize': size*0.75} plt.rcParams.update(params) """ Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#-MNIST-Hand-Digit-Classification-" data-toc-modified-id="-MNIST-Hand-Digit-Classification--1"><font color="teal"> MNIST Hand Digit Classification </font></a></span></li><li><span><a href="#-Part-1.-Scikit-learn-Methods-" data-toc-modified-id="-Part-1.-Scikit-learn-Methods--2"><font color="teal"> Part 1. Scikit-learn Methods </font></a></span><ul class="toc-item"><li><span><a href="#-1.-Data-Preparation-" data-toc-modified-id="-1.-Data-Preparation--2.1"><font color="teal"> 1. Data Preparation </font></a></span><ul class="toc-item"><li><span><a href="#1.1.-Data-load" data-toc-modified-id="1.1.-Data-load-2.1.1">1.1. Data load</a></span></li><li><span><a href="#1.2.-Data-partitioning" data-toc-modified-id="1.2.-Data-partitioning-2.1.2">1.2. Data partitioning</a></span></li><li><span><a href="#1.3.-Data-normalization" data-toc-modified-id="1.3.-Data-normalization-2.1.3">1.3. Data normalization</a></span></li></ul></li><li><span><a href="#-2.-Binary-classification-" data-toc-modified-id="-2.-Binary-classification--2.2"><font color="teal"> 2. Binary classification </font></a></span><ul class="toc-item"><li><span><a href="#2.1.-Two-dimensional-representation" data-toc-modified-id="2.1.-Two-dimensional-representation-2.2.1">2.1. Two dimensional representation</a></span><ul class="toc-item"><li><span><a href="#2.1.1.-Principal-Component-Analysis-(PCA)" data-toc-modified-id="2.1.1.-Principal-Component-Analysis-(PCA)-2.2.1.1">2.1.1. 
Principal Component Analysis (PCA)</a></span></li><li><span><a href="#2.1.2.-Linear-Classification-with-Logistic-Regression" data-toc-modified-id="2.1.2.-Linear-Classification-with-Logistic-Regression-2.2.1.2">2.1.2. Linear Classification with Logistic Regression</a></span></li><li><span><a href="#2.1.3.-Polynomial-Logistic-Regression" data-toc-modified-id="2.1.3.-Polynomial-Logistic-Regression-2.2.1.3">2.1.3. Polynomial Logistic Regression</a></span></li><li><span><a href="#2.1.4.-Multi-Layer-Perceptron" data-toc-modified-id="2.1.4.-Multi-Layer-Perceptron-2.2.1.4">2.1.4. Multi-Layer Perceptron</a></span></li></ul></li><li><span><a href="#2.2.-Classification-with-all-input-features" data-toc-modified-id="2.2.-Classification-with-all-input-features-2.2.2">2.2. Classification with all input features</a></span><ul class="toc-item"><li><span><a href="#2.2.1.-Logistic-Regression" data-toc-modified-id="2.2.1.-Logistic-Regression-2.2.2.1">2.2.1. Logistic Regression</a></span></li><li><span><a href="#2.2.2.-K-Nearest-Neighbors" data-toc-modified-id="2.2.2.-K-Nearest-Neighbors-2.2.2.2">2.2.2. K Nearest Neighbors</a></span></li><li><span><a href="#2.2.3.-Multi-Layer-Perceptron" data-toc-modified-id="2.2.3.-Multi-Layer-Perceptron-2.2.2.3">2.2.3. Multi-Layer Perceptron</a></span></li></ul></li></ul></li><li><span><a href="#-3.-Multi-Class-Classification-" data-toc-modified-id="-3.-Multi-Class-Classification--2.3"><font color="teal"> 3. Multi Class Classification </font></a></span><ul class="toc-item"><li><span><a href="#3.1.-Principal-Component-Analysis-(PCA)" data-toc-modified-id="3.1.-Principal-Component-Analysis-(PCA)-2.3.1">3.1. Principal Component Analysis (PCA)</a></span></li><li><span><a href="#3.2.-Nearest-Neighbor-Method" data-toc-modified-id="3.2.-Nearest-Neighbor-Method-2.3.2">3.2. Nearest Neighbor Method</a></span></li><li><span><a href="#3.3.-Multi-Layer-Perceptron" data-toc-modified-id="3.3.-Multi-Layer-Perceptron-2.3.3">3.3. 
Multi-Layer Perceptron</a></span></li></ul></li></ul></li><li><span><a href="#-Part-2.-Implementing-Deep-Networks-with-PyTorch-" data-toc-modified-id="-Part-2.-Implementing-Deep-Networks-with-PyTorch--3"><font color="teal"> Part 2. Implementing Deep Networks with PyTorch </font></a></span><ul class="toc-item"><li><span><a href="#-4.-Pytorch-Tutorial-" data-toc-modified-id="-4.-Pytorch-Tutorial--3.1"><font color="teal"> 4. Pytorch Tutorial </font></a></span><ul class="toc-item"><li><span><a href="#4.1.-PyTorch-Installation" data-toc-modified-id="4.1.-PyTorch-Installation-3.1.1">4.1. PyTorch Installation</a></span></li><li><span><a href="#4.2.-Torch-tensors-(very)-general-overview" data-toc-modified-id="4.2.-Torch-tensors-(very)-general-overview-3.1.2">4.2. Torch tensors (very) general overview</a></span></li><li><span><a href="#4.3.-Automatic-Gradient-Calculation" data-toc-modified-id="4.3.-Automatic-Gradient-Calculation-3.1.3">4.3. Automatic Gradient Calculation</a></span></li></ul></li><li><span><a href="#-5.-Feed-Forward-Networks-using-PyTorch-" data-toc-modified-id="-5.-Feed-Forward-Networks-using-PyTorch--3.2"><font color="teal"> 5. Feed Forward Networks using PyTorch </font></a></span><ul class="toc-item"><li><span><a href="#5.1.-Using-torch-nn.Module-and-nn.Parameter" data-toc-modified-id="5.1.-Using-torch-nn.Module-and-nn.Parameter-3.2.1">5.1. Using torch <code>nn.Module</code> and <code>nn.Parameter</code></a></span></li><li><span><a href="#5.2.-Network-Optimization" data-toc-modified-id="5.2.-Network-Optimization-3.2.2">5.2. Network Optimization</a></span></li><li><span><a href="#5.3.-Multi-Layer-networks-using-nn.Sequential" data-toc-modified-id="5.3.-Multi-Layer-networks-using-nn.Sequential-3.2.3">5.3. Multi Layer networks using <code>nn.Sequential</code></a></span></li><li><span><a href="#5.4.-Generalization" data-toc-modified-id="5.4.-Generalization-3.2.4">5.4. 
Generalization</a></span></li></ul></li><li><span><a href="#6.-Convolutional-Neural-Networks" data-toc-modified-id="6.-Convolutional-Neural-Networks-3.3">6. Convolutional Neural Networks</a></span></li></ul></li></ul></div> <font color='teal'> MNIST Hand Digit Classification </font> End of explanation """

from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

# Load data from https://www.openml.org/d/554
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)

# Format data in y as integers (np.int was removed from recent NumPy versions)
y = y.astype(int)

print('Size of Input Data Matrix:', X.shape)
print('Size of Label Vector:', y.shape)

""" Explanation: In this notebook we will explore different strategies for solving a classification problem consisting of determining the handwritten digit corresponding to a $28 \times 28$ pixel image (MNIST dataset). We will start by tackling a binary classification problem using different classification algorithms whose implementations are available in scikit-learn, including the multilayer perceptron (MLP) as an example of a multilayer neural network. Next, we will consider multiclass classification and evaluate the performance of the previous algorithms on this more challenging problem. The last part of the notebook contains an introduction to PyTorch, one of the most widely used libraries for neural network training. We will review the basic concepts of PyTorch and the different modules that simplify the implementation and optimization of neural networks, including the design of convolutional neural networks (CNNs), which are a more powerful alternative to MLPs in image classification problems. For faster executions you should run PyTorch code on a GPU. For this, it is sufficient to use Google Colab and configure the runtime environment accordingly. <font color='teal'> Part 1. Scikit-learn Methods </font> <font color='teal'> 1. Data Preparation </font> 1.1.
Data load MNIST is a handwritten digit classification problem that includes 60,000 training patterns and 10,000 test patterns, with representations obtained from the digitization of the corresponding grayscale images at a $28\times 28$ pixel resolution. The database can be downloaded from the OpenML repository using tools available in scikit-learn. It is recommended that, after a first download, you make a local copy of the data to speed up future executions. End of explanation """

# You may use numpy.savez
# Reload variables with names Xlocal and ylocal
# <SOL>
# </SOL>

# Check that the reloaded matrices shapes are the same as before
print('Size of Input Data Matrix:', Xlocal.shape)
print('Size of Label Vector:', ylocal.shape)

""" Explanation: Exercise 1.1: Save data variables X and y so that you can use them in the future without downloading the dataset again. Reload data from file to check the correctness of your solution. End of explanation """

pos = 1900
plt.imshow(X[pos,].reshape(28, 28)), plt.axis('off'), plt.show()
print('The label of element %s is "%s"' % (pos, y[pos]))

""" Explanation: Each row of X contains a different digit.
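Returning briefly to Exercise 1.1: one possible way to cache the arrays locally (a sketch, not the official solution; the file name is arbitrary and small random stand-ins replace the real MNIST arrays) is to bundle them with np.savez:

```python
import os
import tempfile
import numpy as np

# Stand-ins for the downloaded arrays (the real X has shape (70000, 784)).
X = np.random.rand(100, 784)
y = np.random.randint(0, 10, size=100)

# Save both arrays into a single .npz archive...
path = os.path.join(tempfile.gettempdir(), 'mnist_local.npz')
np.savez(path, X=X, y=y)

# ...and reload them later without hitting OpenML again.
data = np.load(path)
Xlocal, ylocal = data['X'], data['y']
```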
You can display the original images by realigning the dimensions of each row of X, as shown in the following example: End of explanation """

from sklearn.model_selection import train_test_split

train_samples = 60000
test_samples = 10000

X_tr, X_tst, y_tr, y_tst = train_test_split(
    Xlocal, ylocal, train_size=train_samples, test_size=test_samples, random_state=0
)

print('Shape of input training data (multiclass):', X_tr.shape)
print('Shape of target training vector (multiclass):', y_tr.shape)
print('Shape of input test data (multiclass):', X_tst.shape)
print('Shape of target test vector (multiclass):', y_tst.shape)

# Exercise: create the binary classification problem 7 vs 9
# <SOL>
# </SOL>

print('\nShape of input training data (binary):', X_tr_bin.shape)
print('Shape of target training vector (binary):', y_tr_bin.shape)
print('Shape of input test data (binary):', X_tst_bin.shape)
print('Shape of target test vector (binary):', y_tst_bin.shape)

""" Explanation: 1.2. Data partitioning To begin, we will perform a random partition of the available data into training and test sets of 60,000 and 10,000 digits, respectively. The notebook also considers a binary problem consisting of the recognition of digits 7 and 9. We have selected this pair of digits as it is one of the most confusing, but, if you wish, you can use any other pair of digits. Exercise 1.2: Create the binary classification problem 7 vs 9. Save the corresponding data matrices with names X_tr_bin, y_tr_bin, X_tst_bin, y_tst_bin. Make sure that the target vectors contain just 0s (for class 7) and 1s (for class 9). End of explanation """

# <SOL>
# </SOL>

""" Explanation: 1.3. Data normalization Next we will normalize the data for the multiclass and binary problems. When working with images, it is common to normalize the input data, corresponding to the gray intensity values of the different pixels, so that they take values in the range $[-0.5, 0.5]$. Exercise 1.3: Implement the normalization of the input data.
Make sure to normalize data independently for the multiclass and binary classification cases. Also make sure not to refit the scaler object when transforming the test partitions. End of explanation """

from sklearn.decomposition import PCA

# <SOL>
# </SOL>

plt.figure(figsize=(7,5))
plt.scatter(X_tr_2D[:, 0], X_tr_2D[:, 1], c=y_tr_bin, edgecolor='none', alpha=0.5,
            cmap=plt.cm.get_cmap('rainbow', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();

""" Explanation: <font color='teal'> 2. Binary classification </font> 2.1. Two dimensional representation In order to be able to represent the classification frontier, we will begin our exploration by working on the two variables that present the greatest dispersion. To do this, we will use the Principal Component Analysis (PCA) algorithm. 2.1.1. Principal Component Analysis (PCA) Exercise 2.1: Obtain the first two PCA projections for the binary classification problem. Store your results in the variables X_tr_2D and X_tst_2D. Make a scatter plot of these dimensions distinguishing the points corresponding to both classes, and reflect on the type of classification frontier that would provide a lower error rate. End of explanation """

from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression

# <SOL>
# </SOL>

plt.semilogx(C_param, CE_param)
plt.xlabel('C'), plt.ylabel('(%)'), plt.title('Classification Error Rate (Linear Logistic Regression)')
plt.show()

print('CE of Linear Logistic Regression Classifier (%):', CE_LR2D)
print('NLL of Linear Logistic Regression Classifier:', NLL_LR2D)

""" Explanation: 2.1.2. Linear Classification with Logistic Regression First we will analyze the behavior of logistic regression for this dataset using just the two first dimensions of PCA as the input variables.
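As a reminder of the model about to be fit: logistic regression assigns $P(y=1\mid{\bf x}) = \sigma({\bf w}^\top{\bf x} + b)$ and classifies by thresholding that probability at 0.5. A minimal sketch with hypothetical weights (here acting on two inputs, like the two PCA components):

```python
import math

def predict_proba_lr(x, w, b):
    """P(y=1 | x) for a linear logistic model: the sigmoid of the linear score."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-score))

# Hypothetical weights; a fitted LogisticRegression stores them in coef_/intercept_.
w, b = [1.5, -0.8], 0.2
p = predict_proba_lr([0.4, -0.3], w, b)  # probability of class 1
label = int(p >= 0.5)
```

Since the score is linear in the inputs, the resulting decision boundary is a straight line in the 2D representation.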
Exercise 2.2: Use sklearn GridSearchCV and LogisticRegression methods to calculate the average classification error for different values of the inverse regularization parameter $C$. Explore $C$ using a logarithmic scale, e.g., $$C = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001]$$ Use a 5-fold strategy to obtain the best value of parameter $C$. Exercise 2.3: Calculate the average classification error rate (CE) and negative log-likelihood (NLL) of the best classifier. For the next two cells to work, you need to use the following variable names: C_param: Array with the explored values of $C$ CE_param: Array with the validation error rates for the corresponding values of $C$ clf: Best classifier selected with the 5-fold strategy. It needs to implement a clf.predict_proba method CE_LR2D: Average Classification Error Rate of previous classifier calculated over the test data NLL_LR2D: Negative log-likelihood of previous classifier calculated over the test data End of explanation """

def plot_proba_map(X, y, clf):
    # param X: Input Data Matrix (Size (N,2))
    # param y: Targets (Size (N,))
    # param clf: a classifier implementing the predict_proba method
    if X.shape[1] != 2:
        print('Can only plot 2D probability maps')
    else:
        # Create a rectangular grid.
        x_min, x_max = X[:, 0].min(), X[:, 0].max()
        y_min, y_max = X[:, 1].min(), X[:, 1].max()
        dx = x_max - x_min
        dy = y_max - y_min
        h = dy / 400
        xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),
                             np.arange(y_min - 0.1 * dy, y_max + 0.1 * dy, h))
        X_grid = np.array([xx.ravel(), yy.ravel()]).T

        # Compute the classifier output for all samples in the grid.
        pp = clf.predict_proba(X_grid)[:,1]
        pp = pp.reshape(xx.shape)

        plt.figure(figsize=(10,5))
        plt.subplot(1, 2, 1)
        CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
        plt.contour(CS, levels=[0.5], colors='m', linewidths=(3,))
        plt.subplot(1, 2, 2)
        CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
        plt.scatter(X[:, 0], X[:, 1], c=y, s=4, cmap='summer')
        plt.contour(CS, levels=[0.5], colors='m', linewidths=(3,))
        plt.show()

plot_proba_map(X_tr_2D, y_tr_bin, clf)

""" Explanation: The next cell visualizes the classification border and the probabilistic map of the selected classifier (clf). End of explanation """

from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler

# <SOL>
# </SOL>

print('CE of Polynomial Logistic Regression Classifier (%):', CE_poly)
print('NLL of Polynomial Logistic Regression Classifier:', NLL_poly)
print('Selected parameters:', clf.best_params_)

plot_proba_map(X_tr_2D, y_tr_bin, clf)

""" Explanation: A relevant parameter for the training of logistic regression is the optimization method used to update the parameter vector $\bf w$. You can try other methods different from the lbfgs method, which is the default selection. You can find information about these methods online, for instance in this stackoverflow entry: Logistic regression python solvers' definitions 2.1.3. Polynomial Logistic Regression A strategy for the implementation of classifiers that provide non-linear boundaries consists of transforming the input variables. To this end, in this section we will use polynomial transformations, exploring different values of the degree of the polynomial terms to try to optimize performance.
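To get a feel for how fast this expansion grows: with $n$ inputs and maximum degree $d$, PolynomialFeatures generates one feature per monomial of total degree at most $d$, i.e. $\binom{n+d}{d}$ of them (including the bias term). A quick check of that count (standard combinatorics; the scikit-learn call itself is left to the exercise):

```python
from math import comb

def n_poly_features(n_inputs, degree):
    """Number of monomials of total degree <= degree in n_inputs variables."""
    return comb(n_inputs + degree, degree)

# Two PCA components expanded up to degree 6 stay cheap...
small = n_poly_features(2, 6)    # 28 features
# ...whereas the raw 784 pixels at degree 2 would already be unwieldy.
large = n_poly_features(784, 2)  # 308505 features
```

This is why the polynomial trick is only attempted here on the two-dimensional PCA representation.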
The proposed processing scheme for this section consists of the following steps: Polynomial expansion of the input variables using the PolynomialFeatures method of scikit-learn Scaling of all variables to zero mean and unit variance (StandardScaler method) Logistic regression The free parameters to be adjusted are, therefore, the degree of the polynomial transformation and the parameter $C$ of the logistic regression. Exercise 2.4: Validate parameters degree and C of the proposed classification scheme using a 5-fold validation. Allow for polynomial transformations of degree up to 6. A very convenient way to do this is to define a processing pipeline in scikit-learn, and use it together with GridSearchCV. Note that both parameters should be validated together, i.e., all possible combinations need to be evaluated. For the next two cells to work, you need to use the following variable names: clf: Best classifier selected with the 5-fold strategy. It needs to implement a clf.predict_proba method CE_poly: Average Classification Error Rate of previous classifier calculated over the test data NLL_poly: Negative log-likelihood of previous classifier calculated over the test data End of explanation """

from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# <SOL>
# </SOL>

print('CE of MLP Classifier (%):', CE_MLP)
print('NLL of MLP Classifier:', NLL_MLP)
print('Selected parameters:', clf.best_params_)

plot_proba_map(X_tr_2D, y_tr_bin, clf)

""" Explanation: 2.1.4. Multi-Layer Perceptron Rather than using a predefined type of transformation, such as polynomial, we can resort to neural networks with non-linear activation functions to learn the most appropriate representation of the input data to solve the classification problem. Exercise 2.5: In this section you will use the MLPClassifier method provided by scikit-learn for the implementation of feed-forward networks.
You will only need to adjust the following parameters: activation: The activation function for the units of the intermediate layer. You can test relu and tanh. max_iter: Number of iterations of the optimization method. Try different values and make sure that the algorithm has completely converged. hidden_layer_sizes: Use just one hidden layer with 20 units alpha: The L2 regularization parameter. Save your results using the following variable names: clf: Best classifier selected with the 5-fold strategy. It needs to implement a clf.predict_proba method CE_MLP: Average Classification Error Rate of previous classifier calculated over the test data NLL_MLP: Negative log-likelihood of previous classifier calculated over the test data End of explanation """

from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression

# <SOL>
# </SOL>

plt.semilogx(C_param, CE_param)
plt.xlabel('C'), plt.ylabel('(%)'), plt.title('Classification Error Rate (Linear Logistic Regression)')
plt.show()

print('CE of Linear Logistic Regression Classifier (%):', CE_LR)
print('NLL of Linear Logistic Regression Classifier:', NLL_LR)

""" Explanation: 2.2. Classification with all input features Exercise 2.6: Analyze the performance in the binary classification problem of the following classifiers: Logistic regression (validate $C$) K-nearest neighbors (validate $K$) Multi-layer perceptron (validate size of hidden layers) For comparison purposes you can use both the average classification error rate and the negative log-likelihood. In this section you need to use all 784 available features. 2.2.1. Logistic Regression End of explanation """

from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# <SOL>
# </SOL>

plt.plot(K_param, CE_param)
plt.xlabel('K'), plt.ylabel('(%)'), plt.title('Classification Error Rate (KNN)')
plt.show()

print('CE of KNN (%):', CE_KNN)

""" Explanation: 2.2.2.
K Nearest Neighbors Be aware that, in this case, the number of test samples together with the dimension of the input data implies long processing times for classification (close to 1 min per execution in Google Colab). You might consider subsampling the test partition for the validation process. End of explanation """

from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# <SOL>
# </SOL>

print('CE of MLP Classifier (%):', CE_MLP)
print('NLL of MLP Classifier:', NLL_MLP)
print('Selected parameters:', clf.best_params_)

""" Explanation: 2.2.3. Multi-Layer Perceptron Be aware that, in this case, the number of test samples together with the dimension of the input data implies long processing times for classification (close to 1 min per execution in Google Colab). You might consider subsampling the test partition for the validation process. End of explanation """

from sklearn.decomposition import PCA

pca = PCA(2)  # project from 784 to 2 dimensions
projected = pca.fit_transform(X_tr)

plt.scatter(projected[:, 0], projected[:, 1], c=y_tr, edgecolor='none', alpha=0.5,
            cmap=plt.cm.get_cmap('rainbow', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();

# Solution to Exercise 3.1
# <SOL>
# </SOL>

""" Explanation: <font color='teal'> 3. Multi Class Classification </font> In this section we will train classification schemes that allow us to discriminate between the 10 digits that make up the complete dataset. In this case, you must use the parameters provided, and you will be asked to study the execution times required by the different methods. To do this, you can use the time library as follows:

```
import time
start = time.time()
# Some code should go here
etime = time.time() - start
```

3.1. Principal Component Analysis (PCA) In a preliminary way, and as you did in section 2.1.1, we can analyze the first two PCA components of the available data.
You can see that in this case there is considerable overlap between the digits of all the classes on the first two components of PCA. In this section, we will apply the different classification strategies using all the available features. Exercise 3.1: Analyze the variance of the successive projections that the PCA method would obtain, and reflect on whether a lower-dimensional representation of the input data could be used. End of explanation """

from sklearn.neighbors import KNeighborsClassifier
import time

train_size = [250, 500, 1000, 2000, 5000, 10000, 25000, 60000]
fit_time = []
test_time = []
CE = []

for ntrain in train_size:
    print('Classifying with', ntrain, 'samples')
    # Write your code here
    # <SOL>
    # </SOL>

plt.figure(figsize=(15,3.5))
plt.subplot(1, 3, 1), plt.plot(train_size, 100*np.array(CE)), plt.xlabel('Training Set Size'), plt.ylabel('(%)'), plt.title('CE')
plt.subplot(1, 3, 2), plt.plot(train_size, fit_time), plt.xlabel('Training Set Size'), plt.ylabel('Seconds'), plt.title('Fit Time')
plt.subplot(1, 3, 3), plt.plot(train_size, test_time), plt.xlabel('Training Set Size'), plt.ylabel('Seconds'), plt.title('Test Time')
plt.show()

print('Average Classification Error for the 1-NN approach', 100*np.min(CE))

""" Explanation: 3.2. Nearest Neighbor Method In this section, you will analyze the performance of the nearest neighbor (1-NN) algorithm. The complexity of this algorithm grows with the size of the training set, so we will analyze the behavior of the algorithm for a varying training set size. Exercise 3.2: Use the 1-NN method with a varying training set size.
Obtain for each size:
- The average classification error rate calculated on the test set
- The fit time for the 1-NN method
- The time taken to classify the complete test partition (10000 samples)
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: Exercise 3.3: Calculate the confusion matrix of the 1-NN classifier when using 1000 samples for the training set, and answer the following questions:
- Which two digits are most frequently confused?
- Which is the digit that gets misclassified most often?
End of explanation
"""
from sklearn.neural_network import MLPClassifier

MLP = MLPClassifier(activation='relu', max_iter=2000, hidden_layer_sizes=(200,100))

# <SOL>
# </SOL>

print('Negative Log Likelihood for the MLP classifier:', NLL_MLP)
"""
Explanation: 3.3. Multi-Layer Perceptron
Exercise 3.4: Train an MLP network using all samples in the training set. Use the following settings:
- relu activation function for the hidden units
- Two hidden layers with 200 and 100 neurons, respectively
- Maximum number of iterations for the training: 2000 iterations
Compare the performance of the MLP network with that of the 1-NN method. Consider in your comparison both the classification error rate and the fit and operation times. Also calculate the negative log likelihood of the model implemented by the Neural Network.
End of explanation
"""
import torch

x = torch.rand((100,200))
X_tr_tensor = torch.from_numpy(X_tr)
print(x.type())
print(X_tr_tensor.size())
"""
Explanation: <font color='teal'> Part 2. Implementing Deep Networks with PyTorch </font>
<font color='teal'> 4. Pytorch Tutorial </font>
PyTorch is a Python library that provides different levels of abstraction for implementing deep neural networks. The main features of PyTorch are:
Definition of numpy-like n-dimensional tensors.
They can be stored in / moved to GPU for parallel execution of operations
Automatic calculation of gradients, making backward gradient calculation transparent to the user
Definition of common loss functions, NN layers of different types, optimization methods, data loaders, etc., simplifying NN implementation and training
Provides different levels of abstraction, thus offering a good balance between flexibility and simplicity
This notebook provides just a basic review of the main concepts necessary to train NNs with PyTorch, taking materials from:
<a href="https://pytorch.org/tutorials/beginner/pytorch_with_examples.html">Learning PyTorch with Examples</a>, by Justin Johnson
<a href="https://pytorch.org/tutorials/beginner/nn_tutorial.html">What is torch.nn really?</a>, by Jeremy Howard
<a href="https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers">Pytorch Tutorial for Deep Learning Lovers</a>, by Kaggle user kanncaa1
4.1. PyTorch Installation
PyTorch can be installed with or without GPU support
If you have an Anaconda installation, you can install from the command line, using the <a href="https://pytorch.org/">instructions of the project website</a>
PyTorch is also preinstalled in Google Colab with free GPU access
Follow RunTime -> Change runtime type, and select GPU for HW acceleration
Please refer to the PyTorch getting started tutorial for a quick introduction regarding tensor definition, GPU vs CPU storage of tensors, operations, and bridge to Numpy
4.2. Torch tensors (very) general overview
Essentially, tensors are objects provided by PyTorch for numerical representation. They are generic n-dimensional arrays such as the ones used by Numpy. Apart from the library providing them, there are two important differences between Numpy arrays and PyTorch tensors:
Tensors can be stored in / moved to GPU. When doing so, certain operations will be parallelized, resulting in faster execution.
Tensors provide out-of-the-box functions for tracking the operations in which they are involved, and to systematically compute derivatives, something which is very useful for the implementation of the backpropagation method used for training deep networks. Creating tensors from Numpy arrays We can create tensors with different construction methods provided by the library, either to create new tensors from scratch or from a Numpy array Tensors can be converted back to numpy arrays Note that in this case, a tensor and its corresponding numpy array will share memory End of explanation """ print('Size of tensor x:', x.size()) print('Tranpose of vector has size', x.t().size()) #Transpose and compute size print('Extracting upper left matrix of size 3 x 3:', x[:3,:3]) print((x @ x.t()).size()) xpx = x.add(x) xpx2 = torch.add(x,x) print((xpx!=xpx2).sum()) #Since all are equal, count of different terms is zero """ Explanation: Operations and slicing with tensors Operations and slicing involving tensors use a syntax similar to that of numpy End of explanation """ if torch.cuda.is_available(): device = torch.device('cuda') x = x.to(device) y = x.add(x) y = y.to('cpu') else: print('No GPU card is available') """ Explanation: Adding underscore performs operations "in place", e.g., x.add_(y) Parallelization of Operations using GPUs If a GPU is available, tensors can be moved to and from the GPU device Operations on tensors stored in a GPU will be carried out using GPU resources and will typically be highly parallelized End of explanation """ x.requires_grad = True y = (3 * torch.log(x)).sum() y.backward() print(x.grad[:2,:2]) print(3/x[:2,:2]) x.requires_grad = False x.grad.zero_() print('Automatic gradient calculation is deactivated, and gradients set to zero') """ Explanation: Note: If you are using Google Colab and the previous cell indicates that you do not have access to a GPU, you may change your runtime type. 
However, note that doing so will restart your runtime, so that you will have to run again the initial cells of the notebook to load the data. 4.3. Automatic Gradient Calculation PyTorch tensors have a property requires_grad. When true, PyTorch automatic gradient calculation will be activated for that variable In order to compute these derivatives numerically, PyTorch keeps track of all operations carried out on these variables, organizing them in a forward computation graph. When executing the backward() method, derivatives will be calculated However, this should only be activated when necessary, to save computation End of explanation """ if torch.cuda.is_available(): x = torch.from_numpy(X_tr[0,315:325]).to(device) else: x = torch.from_numpy(X_tr[0,315:325]) # <SOL> # </SOL> """ Explanation: Exercise 4.1: Initialize a tensor x as X_tr[0,315:325] Compute output vector y applying a function of your choice to x Compute scalar value z as the sum of all elements in y squared Check that x.grad calculation is correct using the backward method Try to run your cell multiple times to see if the calculation is still correct. If not, implement the necessary modifications so that you can run the cell multiple times, but the gradient does not change from run to run Note: The backward method can only be run on scalar variables End of explanation """ from sklearn.preprocessing import LabelBinarizer #Convert to Torch tensors. Float type is required X_tr_torch = torch.from_numpy(X_tr).float() X_val_torch = torch.from_numpy(X_tst).float() # <SOL> # </SOL> """ Explanation: <font color='teal'> 5. Feed Forward Networks using PyTorch </font> In this section we are going to illustrate how we can implement a multilayer perceptron-type neural network using the features of PyTorch. 
The network thus implemented will be equivalent to the one you would train using the scikit-learn MLP function, but thanks to the use of PyTorch it can be executed with many of the calculations parallelized on a GPU, which allows much faster training.
A first possibility would be the direct implementation of the neural network, through an implementation based on PyTorch tensors of the evaluation functions of the neural network and of the derivatives of the cost function with respect to the different parameters (back-propagation). However, the PyTorch nn module provides different levels of abstraction that considerably simplify the implementation of the network, in addition to making the theoretical calculation of derivatives unnecessary.
Before proceeding, we need to import the training and test data as PyTorch tensors. The fragment below can be used for that purpose. Note that if you are using a GPU, all tensors should be moved to the GPU.
Exercise 5.1: Complete the code below to create PyTorch tensors for the different MNIST variables. The tensors for the input data have already been created for you. For encoding class membership, you need to create y_tr and y_val using One-Hot-Encoding. You can easily do that using the sklearn method LabelBinarizer.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer

#Convert to Torch tensors. Float type is required
X_tr_torch = torch.from_numpy(X_tr).float()
X_val_torch = torch.from_numpy(X_tst).float()

# <SOL>
# </SOL>
"""
Explanation: 5.1. Using torch nn.Module and nn.Parameter
PyTorch nn module provides many attributes and methods to simplify the implementation and training of Neural Networks.
nn.Module and nn.Parameter make it possible to implement a concise network configuration, and simplify the calculation of the gradients
nn.Module is a PyTorch class that will be used to encapsulate and design a specific neural network; thus, it is central to the implementation of deep neural nets using PyTorch
nn.Parameter allows the definition of trainable network parameters. In this way, we will simplify the implementation of the training loop. All parameters defined with nn.Parameter will have requires_grad = True
Below you can see a PyTorch fragment for the definition of a single layer perceptron (SLP) network. You can see that at least two methods need to be defined: the initialization of the network (including parameter definition and initialization), and a forward method that implements how the network produces its output for a given input pattern. Other auxiliary functions may be defined as well. However, you can see that there is no need to implement a backward method for gradient calculation.

````
from torch import nn

class my_multiclass_net(nn.Module):
    def __init__(self, nin, nout):
        """This method initializes the network parameters
        Parameters nin and nout stand for the number of input parameters
        (features in X) and output parameters (number of classes)"""
        super().__init__()
        self.W = nn.Parameter(.1 * torch.randn(nin, nout))
        self.b = nn.Parameter(torch.zeros(nout))

    def forward(self, x):
        return softmax(x @ self.W + self.b)

def softmax(t):
    """Compute softmax values for each set of scores in t"""
    return t.exp() / t.exp().sum(-1).unsqueeze(-1)
````

You can see that by using nn.Parameter and nn.Module you can easily implement any function of your choice. However, we need to be careful about matrix dimensions and about some particularities of how PyTorch tensors operate.
For standard feed-forward networks such as MLPs, we can use other PyTorch abstraction levels that make these implementations even simpler.
nn.Module comes with several kinds of pre-defined layers, thus making it even simpler to implement neural networks nn.CrossEntropyLoss implements the calculation of the negative log likelihood incorporating the softmax for the predictions (so there is no need to include it in the forward method of the network The code below shows how these predefined layers and cost functions can be used to create an SLP network in a rather straightforward manner. Note that when creating the network we just need to specify the dimensionality of the input and output data, i.e., number of input features and number of classes. End of explanation """ def CE(y, y_hat): return 1-(y.argmax(axis=-1) == y_hat.argmax(axis=-1)).float().mean() my_net = my_multiclass_net(X_tr_torch.size()[1], y_tr_torch.size()[1]) epochs = 300 rho = .1 loss_tr = np.zeros(epochs) loss_val = np.zeros(epochs) CE_tr = np.zeros(epochs) CE_val = np.zeros(epochs) start = time.time() for epoch in range(epochs): print(f'Current epoch: {epoch+1} \r', end="") #Compute network output and cross-entropy loss pred = my_net(X_tr_torch) loss = loss_func(pred, y_tr_torch.argmax(axis=-1)) #Compute gradients loss.backward() #Deactivate gradient automatic updates with torch.no_grad(): #Computing network performance after iteration loss_tr[epoch] = loss.item() CE_tr[epoch] = CE(y_tr_torch, pred).item() pred_val = my_net(X_val_torch) loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item() CE_val[epoch] = CE(y_val_torch, pred_val).item() #Weight update for p in my_net.parameters(): p -= p.grad * rho #Reset gradients my_net.zero_grad() print('Training of the network took', time.time()-start, 'seconds') plt.figure(figsize=(14,5)) plt.subplot(1, 2, 1), plt.plot(loss_tr, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss') plt.subplot(1, 2, 2), plt.plot(CE_tr, 'b'), plt.plot(CE_val, 'r'), plt.legend(['train', 'val']), plt.title('Classification Error') plt.show() """ Explanation: 
The code below implements the training of the network using conventional gradient descent. The training takes place over a predefined number of epochs using a fixed step size. It is important to note that: Gradient updates are stopped after the evaluation of the network output for all training patterns. This is done by encapsulating any additional computations inside a block with torch.no_grad(). Parameter updates are implemented by iterating over all network parameter using the method nn.Model.parameters() After parameter update, gradients are set back to zero for the next epoch End of explanation """ from torch.utils.data import TensorDataset, DataLoader train_ds = TensorDataset(X_tr_torch, y_tr_torch) train_dl = DataLoader(train_ds, batch_size=64) from torch import optim my_net = my_multiclass_net(X_tr_torch.size()[1], y_tr_torch.size()[1]) opt = optim.SGD(my_net.parameters(), lr=0.1) epochs = 150 loss_tr = np.zeros(epochs) loss_val = np.zeros(epochs) CE_tr = np.zeros(epochs) CE_val = np.zeros(epochs) start = time.time() for epoch in range(epochs): print(f'Current epoch: {epoch+1} \r', end="") # In each epoch we iterate over all minibatches for xb, yb in train_dl: #Compute network output and cross-entropy loss for current minibatch pred = my_net(xb) loss = loss_func(pred, yb.argmax(axis=-1)) #Compute gradients and optimize parameters loss.backward() opt.step() opt.zero_grad() #At the end of each epoch, evaluate overall network performance with torch.no_grad(): #Computing network performance after iteration pred = my_net(X_tr_torch) loss_tr[epoch] = loss_func(pred, y_tr_torch.argmax(axis=-1)).item() CE_tr[epoch] = CE(y_tr_torch, pred).item() pred_val = my_net(X_val_torch) loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item() CE_val[epoch] = CE(y_val_torch, pred_val).item() print('Neural Network training completed in', time.time()-start, 'seconds') plt.figure(figsize=(14,5)) plt.subplot(1, 2, 1), plt.plot(loss_tr, 'b'), plt.plot(loss_val, 'r'), 
plt.legend(['train', 'val']), plt.title('Cross-entropy loss') plt.subplot(1, 2, 2), plt.plot(100*CE_tr, 'b'), plt.plot(100*CE_val, 'r'), plt.legend(['train', 'val']), plt.title('Classification Error (%)') plt.show() """ Explanation: 5.2. Network Optimization We cover in this subsection two different aspects about network training using PyTorch: Using torch.optim allows an easier and more interpretable encoding of neural network training, and opens the door to more sophisticated training algorithms Using minibatches can speed up network convergence. The idea of minibatches is that, at each epoch, gradients are evaluated over just a subset of the training input data. Training of the network will normally require more epochs but, as each epoch requires the evaluation of the network output for a smaller subset of training samples, the overall training time is normally reduced significantly. torch.optim provides two convenient methods for neural network training: * opt.step() updates all network parameters using current gradients * opt.zero_grad() resets all network parameters End of explanation """ my_MLP_net = nn.Sequential( nn.Linear(X_tr_torch.size()[1], 200), nn.ReLU(), nn.Linear(200,100), nn.ReLU(), nn.Linear(100,y_tr_torch.size()[1]) ) """ Explanation: Comparing this figures to those for the conventional gradient descent method, we can extract a number of conclusions: Convergence is radically faster when using SGD with minibatches. Note that just after the first epoch the error (both in terms of loss function and CE) is already smaller than that achieved with conventional gradient descent In this case, the error in the validation set starts increasing slightly after a number of epochs. 
I.e., even for a linear classifier overfitting may occur given the high dimensionality of the input data Note that the final classification error is much larger than that observed in Section 3; however keep in mind that the network we have just implemented is constrained to linear classification. Exercise 5.2: Implement network training with other optimization methods. You can refer to the <a href="https://pytorch.org/docs/stable/optim.html">official documentation</a> and select a couple of methods. You can also try to implement adaptive learning rates using torch.optim.lr_scheduler 5.3. Multi Layer networks using nn.Sequential As we have seen, PyTorch simplifies considerably the implementation of neural network training, since we do not need to implement derivatives ourselves. We can also make a simpler implementation of multilayer networks using nn.Sequential function. It returns directly a network with the requested topology, including parameters and forward evaluation method For instance, the cell below defines a Network with two hidden layers with 200 and 100 units for the resolution of the MNIST multiclass problem. Relu activation is used at the output of the neurons of the hidden layers. End of explanation """ # <SOL> # </SOL> plt.figure(figsize=(14,5)) plt.subplot(1, 2, 1), plt.plot(loss_tr, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss') plt.subplot(1, 2, 2), plt.plot(100*CE_tr, 'b'), plt.plot(100*CE_val, 'r'), plt.legend(['train', 'val']), plt.title('Classification Error (%)') plt.show() """ Explanation: Exercise 5.3: Train the MLP network we have just defined on the MNIST dataset using the following settings: * Loss function: nn.CrossEntropyLoss() * Optimization algorithm: optim.Adam() with learning rate 1e-4 * Minibatch size: 256 * Number of epochs: 100 Calculate the time required to train the network End of explanation """ # Network, training and validation data should be moved to GPU. 
# You can do that with .to(device) or .cuda() functions my_MLP_cuda = nn.Sequential( nn.Linear(X_tr_torch.size()[1], 200), nn.ReLU(), nn.Linear(200,100), nn.ReLU(), nn.Linear(100,y_tr_torch.size()[1]) ).cuda() X_tr_cuda = X_tr_torch.to(device) X_val_cuda = X_val_torch.to(device) y_tr_cuda = y_tr_torch.to(device) y_val_cuda = y_val_torch.to(device) # Adapt training code # <SOL> # </SOL> plt.figure(figsize=(14,5)) plt.subplot(1, 2, 1), plt.plot(loss_tr, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss') plt.subplot(1, 2, 2), plt.plot(100*CE_tr, 'b'), plt.plot(100*CE_val, 'r'), plt.legend(['train', 'val']), plt.title('Classification Error (%)') plt.show() """ Explanation: Exercise 5.4: Modify your code so that network training is done using GPUs. Compare the training time when using CPU and GPU implementations. End of explanation """
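The CPU-to-GPU migration asked for in Exercise 5.4 is often written device-agnostically, so the same script runs with or without CUDA available. The sketch below illustrates the pattern only — the layer sizes and the random batch are placeholders, not the MNIST ones:

```python
import torch
from torch import nn

# Pick the best available device once; everything else is parametrized on it
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# A small stand-in network, moved to the selected device with .to(device)
net = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10)).to(device)

# Data tensors are moved the same way
X = torch.randn(64, 784).to(device)

# The forward pass runs on whatever device the model and data live on
with torch.no_grad():
    pred = net(X)

# Move results back to CPU, e.g. for numpy or matplotlib
pred_cpu = pred.to('cpu')
print(pred_cpu.shape)  # torch.Size([64, 10])
```

With this pattern, timing the CPU and GPU runs only requires changing which device is available, not the training code itself.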
codez266/codez266.github.io
markdown_generator/talks.ipynb
mit
import pandas as pd import os """ Explanation: Talks markdown generator for academicpages Takes a TSV of talks with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in talks.py. Run either from the markdown_generator folder after replacing talks.tsv with one containing your data. TODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style. End of explanation """ !cat talks.tsv """ Explanation: Data format The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV. Fields that cannot be blank: title, url_slug, date. All else can be blank. type defaults to "Talk" date must be formatted as YYYY-MM-DD. url_slug will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be YYYY-MM-DD-[url_slug].md and the permalink will be https://[yourdomain]/talks/YYYY-MM-DD-[url_slug] The combination of url_slug and date must be unique, as it will be the basis for your filenames This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create). End of explanation """ talks = pd.read_csv("talks.tsv", sep="\t", header=0) talks """ Explanation: Import TSV Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \t. I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others. 
End of explanation
"""
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;"
    }

def html_escape(text):
    if type(text) is str:
        return "".join(html_escape_table.get(c,c) for c in text)
    else:
        return "False"
"""
Explanation: Escape special characters
YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
End of explanation
"""
loc_dict = {}

for row, item in talks.iterrows():

    md_filename = str(item.date) + "-" + item.url_slug + ".md"
    html_filename = str(item.date) + "-" + item.url_slug
    year = item.date[:4]

    md = "---\ntitle: \"" + item.title + '"\n'
    md += "collection: talks" + "\n"

    if len(str(item.type)) > 3:
        md += 'type: "' + item.type + '"\n'
    else:
        md += 'type: "Talk"\n'

    md += "permalink: /talks/" + html_filename + "\n"

    if len(str(item.venue)) > 3:
        md += 'venue: "' + item.venue + '"\n'

    # date is a required field, so write it unconditionally
    md += "date: " + str(item.date) + "\n"

    if len(str(item.location)) > 3:
        md += 'location: "' + str(item.location) + '"\n'

    md += "---\n"

    if len(str(item.talk_url)) > 3:
        md += "\n[More information here](" + item.talk_url + ")\n"

    if len(str(item.description)) > 3:
        md += "\n" + html_escape(item.description) + "\n"

    md_filename = os.path.basename(md_filename)
    #print(md)

    with open("../_talks/" + md_filename, 'w') as f:
        f.write(md)
"""
Explanation: Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (md) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.
End of explanation
"""
!ls ../_talks
!cat ../_talks/2013-03-01-tutorial-1.md
"""
Explanation: These files are in the talks directory, one directory below where we're working from.
End of explanation
"""
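The TSV requirements stated earlier (a complete set of columns, YYYY-MM-DD dates, non-blank title/url_slug/date, and a unique date + url_slug combination) can be checked before any files are written. A stdlib-only sketch — the helper name and the sample rows below are made up for illustration:

```python
import csv
import io
import re

REQUIRED = ["title", "type", "url_slug", "venue", "date", "location", "talk_url", "description"]
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def validate_talks_tsv(text):
    """Return a list of problems found in a talks TSV; an empty list means OK."""
    problems = []
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
    if missing:
        problems.append("missing columns: %s" % ", ".join(missing))
        return problems
    seen = set()
    for i, row in enumerate(reader, start=2):  # line 1 is the header
        for col in ("title", "url_slug", "date"):  # fields that cannot be blank
            if not (row[col] or "").strip():
                problems.append("line %d: %s is blank" % (i, col))
        if not DATE_RE.match(row["date"] or ""):
            problems.append("line %d: date %r is not YYYY-MM-DD" % (i, row["date"]))
        key = (row["date"], row["url_slug"])
        if key in seen:
            problems.append("line %d: duplicate date+url_slug %r" % (i, key))
        seen.add(key)
    return problems

# Tiny made-up example: two rows sharing the same date and url_slug
sample = "title\ttype\turl_slug\tvenue\tdate\tlocation\ttalk_url\tdescription\n" \
         "My talk\tTalk\ttalk-1\tSomeConf\t2013-03-01\tParis\t\t\n" \
         "My talk again\tTalk\ttalk-1\tSomeConf\t2013-03-01\tParis\t\t\n"
print(validate_talks_tsv(sample))  # flags the duplicate date+url_slug
```

Running such a check first avoids generating .md files whose filenames or permalinks would collide.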
gjtorikian/Algorithms-Notebooks
Long-Tails.ipynb
apache-2.0
import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline sns.set_context('talk') sns.set_style('darkgrid') inventory = 100.0 volume = 5000.0 rr = np.linspace(1,inventory,100) ns = [0.25, 0.75, 1.25, 1.75] fig, ax = plt.subplots(figsize=(10, 6)) for nn in ns: norm = (nn-1)*volume/(1-inventory**(1-nn)) ax.plot(rr, norm/rr**nn, label='$n=%g$' % nn) ax.legend() ax.set_xlabel('Rank by Sales Volume $r$') ax.set_ylabel('Units Sold') ax.set_title('Sales volume of each product by rank') ax.set_ylim(0,100) """ Explanation: Stitch Fix, Jupyter, GitHub, and the Long Tail At Stitch Fix we are avid users of Jupyter for research at both the personal and team scales. At the personal level, Jupyter is a great interface to research the question at hand. It captures the workflow of the research where we can take detailed notes on the code and explain models with written content and mathematical equations. At the team level, Jupyter is a great tool for communication. Notebooks allow one to fluidly mix text, code, equations, data, and plots in whatever way makes the most sense to explain something. You can organize your explanation around your thought process rather than around the artificial lines determined by the tools you’re using. You have a single Jupyter notebook instead of a bunch of disconnected files, one containing SQL code, another Python, a third LaTeX to typeset equations. When the analysis is finished, the Jupyter notebook remains a “living” interactive research notebook. Re-doing the analysis using new data or different assumptions is simple. Stitch Fix provides clothing to its clients based on their style preferences. We send a client five pieces of clothing we predict they’ll like, and the client chooses what to keep. Inevitably, some pieces of clothing will be more popular than others. In some cases, a few select items may be unpopular. 
The largest benefit from adding a single style of clothing to our line of inventory comes from the most popular one. Each of the less popular styles, by itself, contributes less. However, there are many of the less popular ones, reflecting the fact that our clients are unique in their fashion preferences. Together, the value in the "long tail" can match or exceed the value of the few products in the "head." Catering to the long tail allows us to save our clients the time they would otherwise spend searching through many retail stores to find clothing that’s unique to their tastes. But, where do we draw the line on how far into the long tail we should support? Below we investigate this question using the Jupyter Notebook. The portability and flexibility of the Notebooks allows us to easily share the analysis with others. GitHub integration allows a great new possibility: other researches can fork the notebook to extend or alter the analysis according to their own particular interests! Is the value in the head or the tail? We will approximate the number of each style of clothing sold as a power law of the rank $r$ by sales volume. The most popular style has $r=1$, the second most popular $r=2$, and the least popular has $r=N$. Consumer preferences dictate the shape of the curve. Even though we may want to carry an infinite number of styles of clothing, it's important to keep $N$ finite so that the integrals converge! For the moment we will consider a scaled-down version of Stitch Fix that only carrys 100 styles of clothing and sells a volume of $V=5,000$ units per year. The volume of each style sold is \begin{equation} v(r) = \frac{A}{r^n} \end{equation} where $A$ is a normalization constant and $n$ is the index of the power law. The value of $n$ will determine how much value is in the head versus the tail. 
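The normalization constant can be sanity-checked numerically: integrating $v(r)$ from $1$ to $N$ should recover the total volume $V$. A short sketch, reusing the notebook's $N = 100$ and $V = 5000$:

```python
inventory = 100.0   # N, number of styles carried
volume = 5000.0     # V, total units sold

for n in [0.25, 0.75, 1.25, 1.75]:
    A = (n - 1) * volume / (1 - inventory**(1 - n))
    # Closed form of the integral of A / r**n from 1 to N (valid for n != 1)
    integral = A * (inventory**(1 - n) - 1) / (1 - n)
    print(n, round(integral, 6))  # each value of n should recover V = 5000.0
```

The closed form is just the antiderivative $A r^{1-n}/(1-n)$ evaluated between $1$ and $N$, so any mismatch here would signal an algebra slip in the expression for $A$.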
Approximating the product distribution as continuous so it can be written as an integral, the normalization constant is set by the constraint \begin{equation} \int_1^N \frac{A\,dr}{r^n} = V \end{equation} so \begin{equation} A = \frac{(n-1) V}{1-N^{1-n}} \end{equation} End of explanation """ # Same plot as above fig, ax = plt.subplots(figsize=(10, 6)) for nn in ns: norm = (nn-1)*volume/(1-inventory**(1-nn)) ax.plot(rr, norm/rr**nn, label='$n=%g$' % nn) ax.set_xlabel('Rank by Sales Volume $r$') ax.set_ylabel('Units Sold') ax.set_title('Sales volume of each product by rank') ax.set_ylim(0,100) # Ask seaborn for some pleasing colors c1, c2, c3 = sns.color_palette(n_colors=3) # Add transparent rectangles head_patch = plt.matplotlib.patches.Rectangle((1,0), 9, 100, alpha=0.25, color=c1) middle_patch = plt.matplotlib.patches.Rectangle((11,0), 39, 100, alpha=0.25, color=c2) tail_patch = plt.matplotlib.patches.Rectangle((51,0), 48, 100, alpha=0.25, color=c3) ax.add_patch(head_patch) ax.add_patch(middle_patch) ax.add_patch(tail_patch) # Add text annotations ax.text(5,50,"Head", color=c1, fontsize=16, rotation=90) ax.text(25,80,"Middle", color=c2, fontsize=16) ax.text(75,80,"Tail", color=c3, fontsize=16) """ Explanation: All of these distributions have the same area under the curve, so they represent the same total number of units sold. Smaller values of $n$ give flatter distributions (less head, more tail) and larger values of $n$ give more head-heavy distributions. What is the total value in the head versus the tail? Define the head to be the 10% of styles with the largest sales volume, the tail to be the 50% of styles with the lowest sales volumes, and the middle to be those in between. 
That is, the head, tail, and middle look like this: End of explanation """ f_head = 0.1 f_tail = 0.5 ns = np.linspace(0,2,100) nm1 = ns-1.0 head = volume*(inventory**nm1 - f_head**-nm1)/(inventory**nm1-1) middle = volume*(f_head**-nm1 - f_tail**-nm1)/(inventory**nm1-1) tail = volume*(f_tail**-nm1 - 1)/(inventory**nm1-1) fig, ax = plt.subplots(figsize=(10, 6)) ax.plot(ns, head, label='Head') ax.plot(ns, middle, label='Middle') ax.plot(ns, tail, label='Tail') ax.legend(loc='upper left') ax.set_ylabel('Units Sold') ax.set_xlabel('Power law index $n$') """ Explanation: How many units from the head, tail, and middle are sold? Integrate over the sales rank distribution to get the sales volume in the head: \begin{equation} V_H = \int_1^{f_H N} \frac{A\, dr}{r^n} = \frac{V(N^{n-1} - f_H^{1-n})}{N^{n-1} - 1} \end{equation} where $f_H=0.1$ The volume in the tail is \begin{equation} V_T = \int_{f_T N}^N \frac{A\, dr}{r^n} = \frac{V(f_T^{1-n}-1)}{N^{n-1}-1} \end{equation} where $f_T=0.5$ And the middle: \begin{equation} V_M = \int_{f_H N}^{f_T N} \frac{A\, dr}{r^n} = \frac{V(f_H^{1-n} - f_T^{1-n})}{N^{n-1}-1} \end{equation} End of explanation """ marginal_benefit = ((ns-1)*volume)/((1-inventory**(1-ns))*(inventory+1)**ns) fig, ax = plt.subplots(figsize=(10, 6)) ax.plot(ns, marginal_benefit) ax.set_ylabel('Additional Units Sold') ax.set_xlabel('Power law index $n$') ax.set_title('Marginal Benefit of Expanding Inventory') """ Explanation: For $n>1$, the head has most of value. As $n$ falls, the middle and tail become important. How many styles of clothing should we carry? We can choose expand our inventory from $N$ to $N+1$ styles of clothing. How many additional units will we sell? This is just the sales volume distribution $n(r)$ evaluated at $r=N+1$ \begin{equation} \frac{d V}{d N} = \frac{(n-1) V}{(1-N^{1-n})(N+1)^n} \end{equation} End of explanation """
tensorflow/examples
courses/udacity_intro_to_tensorflow_for_deep_learning/l06c02_exercise_flowers_with_transfer_learning.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Authors. End of explanation """ import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import tensorflow_hub as hub import tensorflow_datasets as tfds from tensorflow.keras import layers import logging logger = tf.get_logger() logger.setLevel(logging.ERROR) """ Explanation: <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c02_exercise_flowers_with_transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c02_exercise_flowers_with_transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> TensorFlow Hub TensorFlow Hub is an online repository of already trained TensorFlow models that you can use. These models can either be used as is, or they can be used for Transfer Learning. Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs. 
Here, you can see all the models available in TensorFlow Module Hub.
Before starting this Colab, you should reset the Colab environment by selecting Runtime -&gt; Reset all runtimes... from the menu above.
Imports
Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
End of explanation
"""

splits = 

(training_set, validation_set), dataset_info = 

"""
Explanation: TODO: Download the Flowers Dataset using TensorFlow Datasets
In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the TensorFlow Datasets documentation you will see that the name of the Flowers dataset is tf_flowers. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use tfds.splits to split this training set into a training_set and a validation_set. Do a [70, 30] split such that 70 corresponds to the training_set and 30 to the validation_set.
Then load the tf_flowers dataset using tfds.load. Make sure the tfds.load function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
End of explanation
"""

print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))

"""
Explanation: TODO: Print Information about the Flowers Dataset
Now that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
End of explanation
"""

for i, example in enumerate(training_set.take(5)):
  print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))

"""
Explanation: The images in the Flowers dataset are not all the same size.
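Because the sizes differ, every image has to be reformatted before batching. The normalization half of that step can be sanity-checked without TensorFlow (in the actual exercise, `tf.image.resize` would handle the resizing; the array here is a stand-in image, not real data):

```python
import numpy as np

IMAGE_RES = 224  # resolution MobileNet v2 expects

# A stand-in "image": random uint8 pixels in [0, 255].
image = np.random.randint(0, 256, size=(300, 200, 3), dtype=np.uint8)

# The normalization step of format_image: cast to float and scale into [0, 1].
normalized = image.astype(np.float32) / 255.0

print(normalized.min() >= 0.0, normalized.max() <= 1.0)  # True True
```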
End of explanation """ IMAGE_RES = def format_image(image, label): return image, label BATCH_SIZE = train_batches = validation_batches = """ Explanation: TODO: Reformat Images and Create Batches In the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an image and a label as arguments and should return the new image and corresponding label. Then create training and validation batches of size 32. End of explanation """ URL = feature_extractor = """ Explanation: Do Simple Transfer Learning with TensorFlow Hub Let's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature Extractor In the cell below create a feature_extractor using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the TensorFlow Hub documentation to see a list of available feature vectors. Click on the tf2-preview/mobilenet_v2/feature_vector. Read the documentation and get the corresponding URL to get the MobileNet v2 feature vector. Finally, create a feature_extractor by using hub.KerasLayer with the correct input_shape parameter. End of explanation """ feature_extractor """ Explanation: TODO: Freeze the Pre-Trained Model In the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer. End of explanation """ model = """ Explanation: TODO: Attach a classification head In the cell below create a tf.keras.Sequential model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model. 
End of explanation
"""

EPOCHS = 

history = 

"""
Explanation: TODO: Train the model
In the cell below train this model like any other, by first calling compile and then fit. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
End of explanation
"""

acc = 
val_acc = 

loss = 
val_loss = 

epochs_range = 

"""
Explanation: You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet).
TODO: Plot Training and Validation Graphs
In the cell below, plot the training and validation accuracy/loss graphs.
End of explanation
"""

class_names = 

"""
Explanation: What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.
One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.
The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images.
TODO: Check Predictions
In the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
End of explanation
"""

image_batch, label_batch = 

predicted_batch = 
predicted_batch = tf.squeeze(predicted_batch).numpy()

predicted_ids = 
predicted_class_names = 

"""
Explanation: TODO: Create an Image Batch and Make Predictions
In the cell below, use the next() function to create an image_batch and its corresponding label_batch. Convert both the image_batch and label_batch to numpy arrays using the .numpy() method.
Then use the .predict() method to run the image batch through your model and make predictions. Then use the np.argmax() function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names. End of explanation """ print() """ Explanation: TODO: Print True Labels and Predicted Indices In the cell below, print the true labels and the indices of predicted labels. End of explanation """ plt.figure(figsize=(10,9)) for n in range(30): plt.subplot(6,5,n+1) plt.subplots_adjust(hspace = 0.3) plt.imshow(image_batch[n]) color = "blue" if predicted_ids[n] == label_batch[n] else "red" plt.title(predicted_class_names[n].title(), color=color) plt.axis('off') _ = plt.suptitle("Model predictions (blue: correct, red: incorrect)") """ Explanation: Plot Model Predictions End of explanation """
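The argmax-to-class-name lookup described in the TODOs above can be exercised with stand-in arrays (no trained model needed; the label names and scores below are illustrative, not the real tf_flowers output):

```python
import numpy as np

class_names = np.array(['dandelion', 'daisy', 'tulips', 'sunflowers', 'roses'])

# Stand-in for the model.predict() output: one row of class scores per image.
predicted_batch = np.array([[0.1, 0.7, 0.1, 0.05, 0.05],
                            [0.2, 0.1, 0.1, 0.5, 0.1]])

predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
print(predicted_ids)          # [1 3]
print(predicted_class_names)  # ['daisy' 'sunflowers']
```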
honjy/foundations-homework
5/.ipynb_checkpoints/nyt-homework-hon-june6-checkpoint.ipynb
mit
#Mother's Day in 2009 was May 10, 2009 response = requests.get("http://api.nytimes.com/svc/books/v2/lists/2009-05-10/hardcover-fiction.json?api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") mom_09_data = response.json() #print(mom_09_data) #mom_09_data.keys() #print(mom_09_data['results']) for item in mom_09_data['results']: for title in item['book_details']: print(title['title']) #Q: Is this the only way to get into a dictionary in a list in a dictionary in a list? To do another for loop? #Mother's Day in 2010 was May 9, 2010 response = requests.get("http://api.nytimes.com/svc/books/v2/lists/2010-05-09/hardcover-fiction.json?api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") mom_10_data = response.json() #print(mom_10_data) for item in mom_10_data['results']: for title in item['book_details']: print(title['title']) #Father's Day in 2009 was June 21, 2009 response = requests.get("http://api.nytimes.com/svc/books/v2/lists/2009-06-21/hardcover-fiction.json?api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") dad_09_data = response.json() for item in dad_09_data['results']: for title in item['book_details']: print(title['title']) #Father's Day in 2010 was June 20, 2010 response = requests.get("http://api.nytimes.com/svc/books/v2/lists/2010-06-20/hardcover-fiction.json?api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") dad_10_data = response.json() for item in dad_10_data['results']: for title in item['book_details']: print(title['title']) """ Explanation: Graded = 7/8 1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day? End of explanation """ response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?date=2009-06-06&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") june6_09_data = response.json() #print(june6_09_data) #Looks all right? june6_09_data.keys() #print(june6_09_data['results']) #Looks like the categories are under display name and list name. I'll just go for list name. 
#I hope I got the right list, I am actually not very sure. for category in june6_09_data['results']: print(category['list_name']) response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?date=2015-06-06&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") june6_15_data = response.json() #print(june6_15_data) for category in june6_15_data['results']: print(category['list_name']) """ Explanation: 2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015? Question: To specify a date, include it in the URI path. To specify a response-format, add it as an extension. The other parameters in this table are specified as name-value pairs in a query string. (What is the difference between putting it in the URI path and putting it as a query?) End of explanation """ #Gadafi response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=Gadafi&glocations=Libya&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") gadafi_data = response.json() #print(gadafi_data) print("The NYT has referred to him by 'Gadafi' a total of", gadafi_data['response']['meta']['hits'], "times") #Gaddafi response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=Gaddafi&glocations=Libya&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") gaddafi_data = response.json() #print(gaddafi_data) print("The NYT has referred to him by 'Gaddafi' a total of", gaddafi_data['response']['meta']['hits'], "times") #Kadafi response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=Kadafi&glocations=Libya&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") kadafi_data = response.json() #print(kadafi_data) print("The NYT has referred to him by 'Kadafi' a total of", kadafi_data['response']['meta']['hits'], "times") #Qaddafi response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=qaddafi&glocations=Libya&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") qaddafi_data = response.json() #print(qaddafi_data) 
print("The NYT has referred to him by 'Qaddafi' a total of", qaddafi_data['response']['meta']['hits'], "times") """ Explanation: 3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names? Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy. End of explanation """ #hipster response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=hipster&begin_date=19950101&end_date=19951231&sort=oldest&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") hipster_data = response.json() #print(hipster_data) hipster_data.keys() #print(hipster_data['response']['docs'][0]) #The first story hipster_data['response']['docs'][0].keys() print("The title of the first story to mention the word 'hipster' is", hipster_data['response']['docs'][0]['headline']['main']) """ Explanation: 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph? Question: What is the difference between query and filter query? 
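The question also asks for the first paragraph, which lives in the same docs entry as the headline. A sketch against a mocked response dict — the field name `lead_paragraph` follows the v2 article-search schema, and the values below are placeholders, not a real API result:

```python
# Mocked shape of an article-search response; a real one would come from
# hipster_data = response.json() as above.
hipster_data = {
    'response': {
        'docs': [
            {'headline': {'main': 'Example headline'},
             'lead_paragraph': 'The first paragraph of the story would be here.'}
        ]
    }
}

first_doc = hipster_data['response']['docs'][0]
print("Title:", first_doc['headline']['main'])
print("First paragraph:", first_doc['lead_paragraph'])
```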
End of explanation
"""

#gaymarriage
#Note: the phrase must be URL-encoded in quotes (%22gay%20marriage%22) so the API searches the exact phrase.
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19500101&end_date=19591231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_50_data = response.json()
#print(gay_50_data)
print("The number of times gay marriage is mentioned between 1950 and 1959 is", gay_50_data['response']['meta']['hits'], "times")

response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19600101&end_date=19691231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_60_data = response.json()
print("The number of times gay marriage is mentioned between 1960 and 1969 is", gay_60_data['response']['meta']['hits'], "times")

response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19700101&end_date=19791231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_70_data = response.json()
print("The number of times gay marriage is mentioned between 1970 and 1979 is", gay_70_data['response']['meta']['hits'], "times")

response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19800101&end_date=19891231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_80_data = response.json()
print("The number of times gay marriage is mentioned between 1980 and 1989 is", gay_80_data['response']['meta']['hits'], "times")

response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19900101&end_date=19991231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_90_data = response.json()
print("The number of times gay marriage is mentioned between 1990 and 1999 is", gay_90_data['response']['meta']['hits'], "times")

response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=20000101&end_date=20091231&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_00_data = response.json()
print("The number of times gay marriage is mentioned between 2000 and 2009 is", gay_00_data['response']['meta']['hits'], "times")

response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=20100101&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
gay_10_data = response.json()
print("The number of times gay marriage is mentioned between 2010 and now is", gay_10_data['response']['meta']['hits'], "times")

"""
Explanation: TA-Stephan: Didn't print out first paragraph.
5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1979, 1980-1989, 1990-1999, 2000-2009, and 2010-present?
Tip: You'll want to put quotes around the search term so it isn't just looking for "gay" and "marriage" in the same article.
Tip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959.
End of explanation
"""

response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycle&facet_field=section_name&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2")
motor_data = response.json()
top_motor = motor_data['response']['facets']['section_name']['terms']
for item in top_motor:
    print(item['term'], item['count'])

print("The section that mentions motorcycles the most is the World section with 1739 mentions")

#QUESTION: Why can't I do this?
#Question: Is there a way to automatically tell you highest value?
#top_motor = (motor_data['response']['facets']['section_name']['terms'][0])
#for item in top_motor:
    #print("The section that mentions motorcycles the most is the", item['term'], "section with", item['count'], "mentions")

"""
Explanation: 6) What section talks about motorcycles the most?
Tip: You'll be using facets End of explanation """ response = requests.get("http://api.nytimes.com/svc/movies/v2/reviews/search.json?resource-type=all&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") movie20_data = response.json() #print(movie20_data) #movie20_data.keys() #print(movie20_data['results']) #I am assuming critics_pick = 0 means no and critics_pick = 1 means yes movie_count = 0 for movie in movie20_data['results']: #print(movie['display_title'], movie['critics_pick']) if movie['critics_pick'] > 0: movie_count = movie_count + 1 print("There are", movie_count, "Critics' Picks movies in the last 20 movies reviewed by the NYT") response = requests.get("http://api.nytimes.com/svc/movies/v2/reviews/search.json?resource-type=all&offset=20&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") movie40_data = response.json() #print(movie40_data) movie40 = movie40_data['results'] + movie20_data['results'] #print(movie40) movie_count = 0 for pie in movie40: if pie['critics_pick'] > 0: movie_count = movie_count + 1 print("There are", movie_count, "Critics' Picks movies in the last 40 movies reviewed by the NYT") response = requests.get("http://api.nytimes.com/svc/movies/v2/reviews/search.json?resource-type=all&offset=40&api-key=2ca9e983dcfd4b1ba330521af1c9c2b2") movie60_data = response.json() movie60 = movie60_data['results'] + movie40_data['results'] + movie20_data['results'] movie_count = 0 for berry in movie60: if berry['critics_pick'] > 0: movie_count = movie_count + 1 print("There are", movie_count, "Critics' Picks movies in the last 60 movies reviewed by the NYT") """ Explanation: 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60? Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them. 
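That combine-the-lists tip generalizes to a single loop over offsets. With a stand-in for the API call (no network; `fetch_reviews` and its fake data are made up for illustration), the pattern looks like:

```python
def fetch_reviews(offset):
    # Stand-in for requests.get(...&offset=...).json()['results']:
    # returns 20 fake reviews, with a critics' pick on every fourth one.
    return [{'display_title': 'Movie %d' % (offset + i),
             'critics_pick': 1 if (offset + i) % 4 == 0 else 0}
            for i in range(20)]

all_reviews = []
for offset in (0, 20, 40):
    all_reviews = all_reviews + fetch_reviews(offset)  # combine the lists
    picks = sum(movie['critics_pick'] for movie in all_reviews)
    print("Critics' Picks in the last", len(all_reviews), "movies:", picks)
# prints 5, 10 and 15 picks for the last 20, 40 and 60 fake reviews
```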
End of explanation """ byline_count = [] from collections import Counter #print(movie40_data['results']) for stuff in movie40_data['results']: byline = stuff['byline'] #print(byline) byline_count.append(byline) counts = Counter(byline_count) counts.most_common(1) """ Explanation: 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews? End of explanation """
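An earlier comment asked whether there is a way to get the highest value automatically. There is: `Counter.most_common` (used above) for tallies, or `max()` with a `key` function for a list of dicts like the facet terms. Shown on stand-in data:

```python
from collections import Counter

# Stand-in facet results, shaped like motor_data['response']['facets']['section_name']['terms']
terms = [{'term': 'Sports', 'count': 973},
         {'term': 'World', 'count': 1739},
         {'term': 'Arts', 'count': 412}]

top = max(terms, key=lambda item: item['count'])
print(top['term'], top['count'])  # World 1739

counts = Counter(['A. Scott', 'M. Dargis', 'A. Scott'])  # stand-in bylines
print(counts.most_common(1))  # [('A. Scott', 2)]
```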
PyDataMallorca/WS_Introduction_to_data_science
anscombes_quartet-in_depth.ipynb
gpl-3.0
#!conda install -y numpy pandas matplotlib seaborn statsmodels ipywidgets %matplotlib inline import seaborn as sns import pandas as pd sns.set(style="ticks") """ Explanation: 1. Anscombe's quartet In this introductory course to data science we will start by introducing the basics of the discipline. In this first part of the course we will explain the first steps towards working with data and the basic theoretical concepts needed to analyse a dataset. End of explanation """ df = sns.load_dataset("anscombe") """ Explanation: We will be using a dataset called the Anscombe dataset that can be loaded from the seaborn examples. End of explanation """ type(df) """ Explanation: And df is a pandas dataframe... End of explanation """ df.head() """ Explanation: that we can print, plot, ... This dataset is comprised of three columns. Two of them, called 'x' and 'y', are filled with numerical values while the 'dataset' column is filled with string values. Each of these values contained in the 'dataset' column will be either 'I','II','III','IV'. This means that the dataset column is a categorical value. We can visualize the first five rows of a DataFrame using the head() function. We also have a column comprised of integer values that we will use as an index. End of explanation """ import matplotlib.pyplot as plt with plt.xkcd(): _=df.plot(kind='hist',title='Histograms of x and y',figsize=(12,5),bins=6,fontsize=20,subplots=True) #plt.xlabel(,fontsize=16) """ Explanation: 2. Exploring the data When dealing with a new dataset we should start by doing some data exploration. This means that we should try to get an intuitive idea of how the data is structured, and how the different columns are related to each other. 2.1 The histogram As we don't know anything about the dataset we will start by plotting the numerical values contained in each column. The histogram is a plot that is used to display the values that a single column takes. 
An histogram consists in several vertical bars displayed in parallel, where the x axis represents the values that a point in the column can take, and its height is proportional to the number of times that the value appears in the column. End of explanation """ df[['x','y']].plot( linestyle='--', marker='o',figsize=(15,5)) plt.title("Anscombe's points",fontsize=25) plt.xlabel('Index',fontsize=16) _ = plt.ylabel('Value',fontsize=16) """ Explanation: A histogram allows us to see how many items of a column have the same range of values, but it doesn't allow us to relate the values contained in one column of our dataset to the values contained in a another column for a given row. The histogram doesn't take into account the ordering of the data. 2.2 The line plot In order to see how the ordering of the data affects its value we can check the following plot: End of explanation """ df.plot(kind='scatter',x='x',y='y',fontsize=14,s=75) plt.title("Anscombe's points",fontsize=20) plt.xlabel('x',fontsize=16) plt.grid() _ = plt.ylabel('y',fontsize=16) """ Explanation: This is called a line plot. In these plots we will represent the values of different columns in the y axis relative to its index value, represented in the x axis. This kind of plot helps us understand how the index value (and thus the order) affects the structure of the data. This plot also allows us to compare the values of different columns at a given moment (index value). As we can see, this plot doesn't give us any intuitive idea of how the data is structured. This probably means that the index value does not have a meaning impact on how the data is structured. 2.3 The scatter plot Given that the two columns do not seem to be related to the index value, we will try to figure out if they are related to each other. One graphical way to see how the two series of points are related to each other is by means of a scatter plot. 
In this plot, instead of plotting the value of a column against an index, we will plot it against the value the other column have at a given index. In this plot the ordering of the data doesn't matter (you cannot see the information contained in the index column in this plot.) End of explanation """ df['dataset'].unique() """ Explanation: This is a scatter plot. This speciffic plot answers the following question: When a point of the data set in the column 'x' had a value of x, which was the value y contained in the column 'y'? (or vice versa). In order to answer the question with the plot we only have to substitue the value x for the number we want, and search for that number in the lower axis of the plot, labeled x. The answer y will be the height of the points that lie in an straight parallel to the grid and starts at x. Now, at least we can infer some structure in our data. In this plot it almost seems as if the points were more or less distributed across an imaginary straight line. 3 Representing categories Since now, we have not paid much attention to the 'dataset' column. This column is comprised of four different values: End of explanation """ number_mapping = lambda x: x.count('I') if x.count('V')==0 else 4#apply a function value by value to all df['num_ix'] = df['dataset'].map(number_mapping)#the values contained a column df.head(3)#change the number of displayed rows """ Explanation: It is possible to interpret the values contained in this column as labels attached to each row. This means that a categorical value can be interpreted as a column that allows us to break our dataset in different subsets. Each of the elements of these diferent "mini datasets" will have the same value. This is what is known as *grouping a data set by a categorical value". We can also see that these labels represent numbers, so we will add a column to our database representing the same numeric value in decimal base. 
This way we can represent the information contained in the labels without getting errors in the pandas plotting interface.
End of explanation
"""

number_mapping = lambda x: x.count('I') if x.count('V')==0 else 4  #apply a function value by value to all
df['num_ix'] = df['dataset'].map(number_mapping)  #the values contained in a column
df.head(3)  #change the number of displayed rows

"""
Explanation: In order to represent the category we will assign a different color to each different value. This means defining a color map between the label values and their visual representations. One of the possible ways of doing it is by manually defining a colormap.
End of explanation
"""

from matplotlib.colors import LinearSegmentedColormap

vmax=3
cmap = LinearSegmentedColormap.from_list('mycmap', [(0/vmax ,'blue'),
                                                    (1/vmax, 'green'),
                                                    (2/vmax, 'red'),
                                                    (3/vmax, 'yellow')]
                                        )

"""
Explanation: Now that we are ready to plot, we will see how the different categorical values are distributed across the dataset with the kind of plot we already know: the scatter plot.
We will make two different scatter plots. First we will plot the value of one numerical-valued column versus the index value, and we will color each point according to its 'dataset' category.
End of explanation
"""

df.reset_index().plot(kind='scatter', x='index', y='x', fontsize=14, s=150,c='num_ix',
                      cmap=cmap,figsize=(15,7),alpha=0.5)
plt.title("Anscombe's points",fontsize=20)
plt.grid()
plt.ylabel('x',fontsize=16)
plt.xlabel('index',fontsize=16)

"""
Explanation: We can see in this plot how the index actually represents the different classes ordered. This supports our initial hypothesis that the index is not related to the 'x' and 'y' columns.
Now let's take a look at how the coloured scatter plot of x vs y looks.
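As a quick aside, the Roman-numeral mapping defined earlier can be checked in isolation before trusting the new column (pure Python, no DataFrame needed):

```python
# Same mapping as above: count the 'I's unless a 'V' is present ('IV' -> 4).
number_mapping = lambda x: x.count('I') if x.count('V') == 0 else 4

for label, expected in [('I', 1), ('II', 2), ('III', 3), ('IV', 4)]:
    print(label, '->', number_mapping(label))
    assert number_mapping(label) == expected
```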
End of explanation
"""

def color_scatter(df):
    df.plot(kind='scatter',x='x',y='y',fontsize=14,s=150,c='num_ix',cmap=cmap,figsize=(15,7),alpha=0.5)
    plt.title("Anscombe's points",fontsize=20)
    plt.xlabel('x',fontsize=16)
    plt.grid()
    _ = plt.ylabel('y',fontsize=16)

color_scatter(df)

"""
Explanation: We can clearly see that our Anscombe's dataset actually consists of four different datasets all added up together in the same DataFrame, and each of the category labels is used to separate a different dataset.
End of explanation
"""

sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
           col_wrap=2, ci=None, palette="muted", size=4,
           scatter_kws={"s": 50, "alpha": 1}, fit_reg=False)

"""
Explanation: 4 Basic statistics
4.1 Subsampling the Anscombe's dataset
Now it's time to analyse each of the datasets comprising our dataframe independently. And we will start by selecting the data set labeled 'I'.
End of explanation
"""

I = df[df.dataset == 'I'].copy()
I

len(df)

"""
Explanation: We can see that the new dataset contains 11 points. Now let's plot the "I" dataset to see how the points are distributed
End of explanation
"""

color_scatter(I)

with plt.xkcd():
    _ =I[['x','y']].plot(kind='hist',title='Histograms of x and y',figsize=(12,5),bins=6,fontsize=20,sharey=True,subplots=True)

"""
Explanation: 4.2 Why we need statistics
It is not easy to infer something meaningful at first sight from this dataset. We can tell that the points seem to be distributed around an imaginary line, but it is something that is not easy to describe with words.
Statistics is a way of describing a dataset, but instead of using plots we are using numbers. Those numbers will represent properties of our data set, and they are a way to quantify certain properties of a data set. You can think of it as a language to describe datasets where plots are not available.
4.3 Density and probability
We will try to define a quantity that tells us how likely it is to find a data point in any point (x,y) from our scatter plot, and we will call this quantity density. This concept is closely related to the histograms, as the higher the frequency count the higher the density will be, as these magnitudes are proportional.
That would mean that the density is intuitively the height that a histogram would have at a given point. This makes the density of a 1 dimensional data set really easy to visualize using a histogram, but it is a little more tricky if we try to do this in 2D, as we will see later.
End of explanation
"""

df.describe()

"""
Explanation: 4.4 Calculating the probability density
End of explanation
"""

import numpy as np
from ipywidgets import interact, interactive, fixed
from IPython.core.display import clear_output,display
import ipywidgets as widgets

def draw_example_dist(n=10000):
    a = np.random.normal(loc=-15.,scale=3,size=n)
    b = np.random.normal(loc=20,scale=3,size=n)
    c = np.random.exponential(size=n,scale=3)
    return np.hstack([a,b,c])

example = draw_example_dist()

def draw_regions(distribution='normal',
                 bins=7,
                 bw=0.18,
                 normed=False,
                 mean=False,
                 std=False,
                 percents=False,
                 hist=True):
    x = draw_example_dist() if distribution=='example_2' else np.random.standard_normal(10000)
    x = draw_regions_data if distribution=='custom' else x
    fig = plt.figure(figsize=(14,8))
    #ax = fig.add_subplot(1, 1, 1)
    ax = sns.kdeplot(x, cut=0,color='b',shade=True,alpha=0.3,bw=bw)
    d_bins = np.linspace(x.min(),x.max(),num=bins)
    if hist:
        n, bins, patches = ax.hist(x,bins=d_bins,normed=normed,rwidth=0.8,alpha=0.7)
    else:
        n=[1]
    maxx = 1 if normed else max(n)
    if mean:
        ax.axvline(x=x.mean(), ymin=0, ymax = maxx, linewidth=6, color='r',label='Mean: {:.3f}'.format(x.mean()),alpha=1.)
if std: m = x.mean() ax.axvline(x=m+x.std(), ymin=0, ymax = maxx, linewidth=5, color='g',label='Std: {:.3f}'.format(x.std()),alpha=0.8) ax.axvline(x=m-x.std(), ymin=0, ymax = maxx, linewidth=5, color='g',alpha=0.8) if percents: d = pd.Series(x).describe() ax.axvline(x=d.ix['min'], ymin=0, ymax = maxx, linewidth=5, color=(0.19130826141258903, 0.13147472185630074, 0.09409307479747722), label='min: {:.2f}'.format(d.ix['min']),alpha=0.8) ax.axvline(x=d.ix['25%'], ymin=0, ymax = maxx, linewidth=5, color=(0.38717148143023966, 0.26607979423298955, 0.19042646089965626), label='25%: {:.2f}'.format(d.ix['25%']),alpha=0.8) ax.axvline(x=d.ix['50%'], ymin=0, ymax = maxx, linewidth=5, color=(0.5830347014478903, 0.4006848666096784, 0.2867598470018353), label='50%: {:.2f}'.format(d.ix['50%']),alpha=0.8) ax.axvline(x=d.ix['75%'], ymin=0, ymax = maxx, linewidth=5, color=(0.7743429628604792, 0.5321595884659791, 0.3808529217993126), label='75%: {:.2f}'.format(d.ix['75%']),alpha=0.8) ax.axvline(x=d.ix['max'], ymin=0, ymax = maxx, linewidth=5, color=(0.9415558823529412, 0.663581294117647, 0.47400294117647046), label='max%: {:.2f}'.format(d.ix['max']),alpha=0.8) # ax.plot((m-0.1, m+0.1), (0,max(n)), 'k-') plt.grid(linewidth=2) plt.title("Basic statistics",fontsize=20) plt.legend(loc='upper left',fontsize=18) plt.show() clear_output(True) display(interact(draw_regions,bins=(1,100,2),bw=(0.01,10.,0.025), normed=False,distribution=['Normal','exmple_2','custom'])) """ Explanation: 4.4 Calculating the probaility density End of explanation """ #from shaolin import KungFu def draw_regions(x_regions,y_regions): I = df x,y = I.x,I.y fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(1, 1, 1) ax.scatter(x,y) plt.xticks(np.linspace(x.min(),x.max(),num=x_regions)) plt.yticks(np.linspace(y.min(),y.max(),num=y_regions)) plt.grid(linewidth=2) plt.show() clear_output(True) display(interact(draw_regions,x_regions=(0,30), y_regions=(0,30))) def 
density_plot(linreg_order=1,confidence_intervals=95,bandwidth=1.43,**kwargs): if False:# 'bandwidth' not in kwargs.keys(): kwargs['bandwith'] = 1 data=jointplot_data clear_output() if kwargs['dataset']=='ALL': subdf = data.copy() else: subdf = data[data['dataset']==kwargs['dataset']].copy() x,y = subdf.x,subdf.y x_regions = 10 y_regions = 10 x_bins = np.linspace(x.min(),x.max(),num=kwargs['num_xbins']) y_bins = np.linspace(y.min(),y.max(),num=kwargs['num_ybins']) g = sns.JointGrid(x="x", y="y", data=subdf) g.fig.set_figwidth(14) g.fig.set_figheight(9) if kwargs['plot_density']: g = g.plot_joint(sns.kdeplot, shade=True,alpha=0.5,legend=True, bw=bandwidth, gridsize=int((len(x_bins)+len(y_bins))/2), clip=((x.min(),x.max()),(y.min(),y.max()))) if kwargs['scatter']:# and not self.kwargs['regression']: g = g.plot_joint(plt.scatter) if kwargs['marginals'] in ['Histogram','Both']: _ = g.ax_marg_x.hist(x, alpha=.6, bins=x_bins,normed=True) _ = g.ax_marg_y.hist(y, alpha=.6, orientation="horizontal", bins=y_bins,normed=True) if kwargs['marginals'] in ['KDE','Both']: clip = ((x.values.min()-0.1,x.values.max()+0.1),(y.values.min()-0.1,y.values.max()+0.1)) g = g.plot_marginals(sns.kdeplot, **dict(shade=True))#,bw=kwargs['bandwith']))#, #gridsize=int((len(x_bins)+len(y_bins))/2))) #clip=clip) if kwargs['regression']: g = g.plot_joint(sns.regplot, truncate=True, order=linreg_order, ci=confidence_intervals, scatter_kws={"s": 80} ) if kwargs['grid']: _ = plt.grid(linewidth=2) _ = plt.xticks(x_bins) plt.yticks(y_bins) _ = plt.xlim(x.values.min(),x.values.max()) _ = plt.ylim(y.values.min(),y.values.max()) _ = plt.show() if kwargs['save']: _ =plt.savefig("data/density_plot_{}.png".format(0), dpi=100) kwargs= {'num_xbins':(1,50,2), 'num_ybins':(1,50,2), 'bandwidth':(0.001,10,0.025), 'grid': True, 'plot_density':False, 'scatter':True, 'regression':False, 'linreg_order':(1,5,1), 'confidence_intervals':(1,100,1), 'marginals':['None','Histogram','KDE','Both'], 
'dataset':['ALL','I','II','III','IV'], 'save':False, #'@kernel':['gau','cos','biw','epa','tri','triw'] } jointplot_data = df display(interact(density_plot,**kwargs)) """ Explanation: 4.5 2d density End of explanation """ import numpy as np df['new_index'] = df.groupby('dataset').transform(lambda x:np.arange(len(x)))['x'].values idx = pd.IndexSlice sep = df.pivot(columns='dataset',index='new_index') sep.loc[:,idx[:,'I']] sep groups = ['I', 'II', 'III', 'IV'] for group in groups: print(group) print(df[df.dataset == group].describe()) print() """ Explanation: 5 Grouping datasets End of explanation """ for g in groups: print(df[df.dataset == g]['x'].corr(df[df.dataset == g]['y'])) """ Explanation: Let's compare the correlation coefficient for each dataset End of explanation """ import matplotlib.cm as cmx import matplotlib.pyplot as plt groups = df.groupby('dataset') # Plot fig, ax = plt.subplots() ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling for name, group in groups: ax.plot(group.x, group.y, marker='o', linestyle='', ms=12, label=name) ax.legend(loc=2) plt.title("Anscombe's points",fontsize=20) plt.xlabel('x') plt.ylabel('y') plt.show() g = sns.jointplot(x="x", y="y",data=df[df['dataset']=='I'], kind="kde", size=7, space=0) """ Explanation: 6 Quick Plotting Plot datasets End of explanation """ _=sep.plot(kind='hist',subplots=True,figsize=(18,10),layout=(3,4),sharex=False,rwidth=0.8) """ Explanation: 6.1 Histograms End of explanation """ sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df, col_wrap=2, ci=None, palette="muted", size=4) """ Explanation: 6.2 Linear regression Show the results of a linear regression within each dataset End of explanation """ sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df, col_wrap=2, ci=95, palette="muted", size=4) """ Explanation: It's the same line for all datasets Let's plot with its 95% confidence interval region. 
End of explanation
"""
sns.lmplot(x="x", y="y", data=df[df.dataset == 'II'],
           order=2, ci=95, scatter_kws={"s": 80});
"""
Explanation: Key message: visualize your data beforehand.
Nonlinear regression? Outliers?
One can fit a polynomial regression model to explore simple kinds of nonlinear trends in the dataset.
End of explanation
"""
sns.lmplot(x="x", y="y", data=df[df.dataset == 'III'],
           robust=True, ci=None, scatter_kws={"s": 80});
"""
Explanation: In the presence of outliers, it can be useful to fit a robust regression, which uses a different loss function to downweight relatively large residuals:
End of explanation
"""
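To make the downweighting concrete, here is a minimal numpy-only sketch of a Huber-style iteratively reweighted least-squares line fit. The function name and threshold are illustrative, not seaborn's internals (seaborn's `robust=True` actually delegates to statsmodels' RLM):

```python
import numpy as np

def huber_line_fit(x, y, delta=1.35, iters=25):
    """Fit y ~ a*x + b by iteratively reweighted least squares with Huber weights."""
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        r = y - A @ coef
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale estimate (MAD)
        # weight 1 inside the threshold, shrinking as |residual| grows
        w = np.minimum(1.0, delta / (np.abs(r / scale) + 1e-12))
    return coef

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)
y[5] += 30.0  # a single gross outlier, much like Anscombe's dataset III

ols_coef = np.linalg.lstsq(np.column_stack([x, np.ones_like(x)]), y, rcond=None)[0]
robust_coef = huber_line_fit(x, y)
```

On this toy data the ordinary least-squares slope is pulled away from the true value of 2 by the outlier, while the reweighted fit stays close to it.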
ajhenrikson/phys202-2015-work
assignments/assignment03/NumpyEx01.ipynb
mit
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
"""
Explanation: Numpy Exercise 1
Imports
End of explanation
"""
def checkerboard(size):
    """Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array"""
    c=np.empty((size,size,)) # this makes the array
    for i in range(size):
        for j in range(size): # gives the size of the matrix as defined by the input
            if (i+j)%2==0:
                c[i,j]=1
            else:
                c[i,j]=0
    return c

print checkerboard(8) # needed to check to make sure this was working
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
"""
Explanation: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:
Your function should work for both odd and even size.
The 0,0 element should be 1.0.
The dtype should be float.
End of explanation
"""
va.set_block_size(10) # set the block size
va.vizarray(checkerboard(20)) # makes the checkerboard
assert True
"""
Explanation: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
End of explanation
"""
va.set_block_size(5)
va.vizarray(checkerboard(27)) # same as above
assert True
"""
Explanation: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
End of explanation
"""
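The double loop above is the straightforward solution; the same index-parity trick can also be vectorized with NumPy. A sketch (`checkerboard_vec` is my name, not part of the assignment):

```python
import numpy as np

def checkerboard_vec(size):
    """Vectorized checkerboard: 1.0 wherever the row+column index is even."""
    idx = np.indices((size, size)).sum(axis=0)  # idx[i, j] == i + j
    return ((idx + 1) % 2).astype(float)

print(checkerboard_vec(4))
```

It satisfies the same asserts as the loop version: the (0,0) element is 1.0 and the dtype is float.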
Quadrocube/rep
howto/Neurolab-rep.ipynb
apache-2.0
import neurolab as nl
f2 = nl.trans.SoftMax()
f = nl.trans.LogSig()
from rep.estimators import NeurolabClassifier
clf = NeurolabClassifier(show=1, layers=[300], transf=[f, f], epochs=10, trainf=nl.train.train_rprop, features=variables)
%time _ = clf.fit(X_train, y_train)
predict_labels = clf.predict(X_test)
predict_proba = clf.predict_proba(X_test)
from sklearn.metrics import accuracy_score
score = accuracy_score(y_test, predict_labels)
print(score)
print predict_labels
print predict_proba
print np.allclose(predict_proba.sum(axis=1), 1)
np.unique(predict_proba.sum(axis=1))
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, predict_proba[:, 1])
"""
Explanation: Neurolab
The network type is set via the optional net_type parameter. fit and predict work in full accordance with the sklearn specification; no preliminary transformation of the input data is needed. In line with the REP philosophy, the features argument is also supported, which controls which features the model is built on.
End of explanation
"""
bosscha/alma-calibrator
notebooks/selecting_source/select_source_non_almacal.ipynb
gpl-2.0
import sys sys.path.append('../src/') from ALMAQueryCal import * q = queryCal() """ Explanation: Selecting source and uid based on some criteria End of explanation """ fileCal = "alma_sourcecat_searchresults.csv" listCal = q.readCal(fileCal, fluxrange=[0.1, 9999999999]) print "Number of selected sources: ", len(listCal) """ Explanation: Select calibrator with Flux > 0.1 Jy Calibrator list here is downloaded (2017-06-23) from ALMA calibrator source catalogue (https://almascience.eso.org/sc/) End of explanation """ data = q.queryAlma(listCal, public = True, savedb=True, dbname='calibrators_gt_0.1Jy.db') """ Explanation: We can found 1685 objects with F>0.1Jy* from ALMA Calibrator source catalogue *from the first Band found in the list, usually B3 Query all information about the projects that use these objects as calibrator using astroquery End of explanation """ report = q.selectDeepfield_fromsql("calibrators_gt_0.1Jy.db", maxFreqRes=999999999, array='12m', \ excludeCycle0=True, selectPol=False, minTimeBand={3:60., 6:60., 7:60.}, verbose=True, silent=True) """ Explanation: result of the query 'data' is in the form of Pandas DataFrame, but we save it in sql database also. Note: Many of them only listed in calibrator list, but not observed/shown in any public data yet. Selection criteria *We already choose the calibrator with F>0.1Jy + only Public data Select sources with other criteria: ignore freq res (for imaging) ignore polarization product excludeCycle0 data only accept data from 12m array (or 12m7m) minimum integration time per band is 1h for B3, B6, and B7 (after filtering all above) End of explanation """ q.writeReport(report, "report6_nonALMACAL.txt", silent=True) """ Explanation: write the report in a file End of explanation """
feststelltaste/software-analytics
courses/20190918_Uni_Leipzig/Analyzing Java Dependencies with jdeps (Demo Notebook).ipynb
gpl-3.0
from ozapfdis import jdeps deps = jdeps.read_jdeps_file( "../datasets/jdeps_dropover.txt", filter_regex="at.dropover") deps.head() """ Explanation: Questions Which types / classes have unwanted dependencies in our code? Which group of types / classes is highly cohesive but lowly coupled? Idea Using JDK's jdeps command line utility, we can extract the existing dependencies between Java types: bash jdeps -v dropover-classes.jar &gt; jdeps.txt Data Read data in with <b>O</b>pen <b>Z</b>ippy <b>A</b>nalysis <b>P</b>latform <b>F</b>or <b>D</b>ata <b>I</b>n <b>S</b>oftware End of explanation """ deps = deps[['from', 'to']] deps['group_from'] = deps['from'].str.split(".").str[2] deps['group_to'] = deps['to'].str.split(".").str[2] deps.head() """ Explanation: Modeling Extract the information about existing modules based on path naming conventions End of explanation """ from ausi import d3 d3.create_d3force( deps, "jdeps_demo_output/dropover_d3forced", group_col_from="group_from", group_col_to="group_to") d3.create_semantic_substrate( deps, "jdeps_demo_output/dropover_semantic_substrate") d3.create_hierarchical_edge_bundling( deps, "jdeps_demo_output/dropover_bundling") """ Explanation: Visualization Output results with <b>A</b>n <b>U</b>nified <b>S</b>oftware <b>I</b>ntegrator End of explanation """
kimkipyo/dss_git_kkp
Python 복습/15일차.목_serialize, SQL실습/15일차_2T_데이터 분석을 위한 SQL 실습 (1) - WHERE IN, LIKE, JOIN.ipynb
mit
import pymysql

db = pymysql.connect(
    "db.fastcamp.us",
    "root",
    "dkstncks",
    "sakila",
    charset='utf8',
)

film_df = pd.read_sql("SELECT * FROM film;", db)
film_df.head(1)
SQL_QUERY = """
SELECT *
FROM film
WHERE (release_year = 2006 OR release_year = 2007)
  AND (rating = "PG" OR rating = "G")
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT COUNT(*)
FROM film
WHERE release_year IN (2006, 2007)
  AND rating IN ("PG", "G")
;
"""
pd.read_sql(SQL_QUERY, db)
"""
Explanation: 2T_SQL practice for data analysis (1) - WHERE IN, LIKE, JOIN
From the film table, print the titles of all movies that were released in 2006 or 2007 and are rated PG or G.
End of explanation
"""
is_pg_or_g = film_df.rating.isin(["PG", "G"])
is_2006_or_2007 = film_df.release_year.isin([2006, 2007])
film_df[is_pg_or_g & is_2006_or_2007]
film_df[is_pg_or_g & is_2006_or_2007].count()
"""
Explanation: pandas
End of explanation
"""
film_df.head(1)
SQL_QUERY = """
SELECT title, description, rental_rate
FROM film
WHERE description LIKE "%Boring%"
  AND rental_rate = 0.99
;
"""
pd.read_sql(SQL_QUERY, db)
is_099 = film_df.rental_rate == 0.99
is_boring = film_df.description.str.contains("Boring")
film_df[is_099 & is_boring].count()
"""
Explanation: From the film table, print the title, description, and rental rate of every movie whose description contains the text "Boring" and whose rental rate is 0.99.
End of explanation
"""
film_df.rental_rate.unique()
SQL_QUERY = """
SELECT DISTINCT rental_rate
FROM film
ORDER BY rental_rate
;
"""
pd.read_sql(SQL_QUERY, db)
"""
Explanation: Which unique values does rental_rate take? 0.99, 1.99, 2.99...
End of explanation
"""
SQL_QUERY = """
SELECT rating, COUNT(*) "total_films", AVG(rental_rate) "average_rental_rate"
FROM film
GROUP BY rating
ORDER BY average_rental_rate
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT rating, COUNT(*) "total_films", AVG(rental_rate) "average_rental_rate"
FROM film
GROUP BY 1
ORDER BY 3
;
"""
pd.read_sql(SQL_QUERY, db)
film_df.groupby("rating").agg({
    "film_id": {"total films": np.size},
    "rental_rate": {"average_rental_rate": np.mean},
})
"""
Explanation: From the film table, group by rating and print the number of films and the average rental rate for each rating.
End of explanation
"""
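One caveat: the nested-dict form of `.agg` used above was deprecated and later removed in pandas. A sketch of the modern named-aggregation equivalent, assuming the same sakila column names:

```python
import pandas as pd

def rating_summary(film_df):
    # one keyword per output column: new_name=(input_column, aggregation)
    return (film_df.groupby("rating")
                   .agg(total_films=("film_id", "size"),
                        average_rental_rate=("rental_rate", "mean"))
                   .sort_values("average_rental_rate"))
```

This produces the same columns as the SQL query, with one output name per (column, aggregation) pair.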
probml/pyprobml
notebooks/book1/20/word_analogies_torch.ipynb
mit
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(seed=1)
import math
import requests
import zipfile
import tarfile
import hashlib
import os
import random
try:
    import torch
except ModuleNotFoundError:
    %pip install -qq torch
    import torch
from torch import nn
from torch.nn import functional as F

!mkdir figures # for saving plots

# Required functions
def download(name, cache_dir=os.path.join("..", "data")):
    """Download a file inserted into DATA_HUB, return the local filename."""
    assert name in DATA_HUB, f"{name} does not exist in {DATA_HUB}."
    url, sha1_hash = DATA_HUB[name]
    os.makedirs(cache_dir, exist_ok=True)
    fname = os.path.join(cache_dir, url.split("/")[-1])
    if os.path.exists(fname):
        sha1 = hashlib.sha1()
        with open(fname, "rb") as f:
            while True:
                data = f.read(1048576)
                if not data:
                    break
                sha1.update(data)
        if sha1.hexdigest() == sha1_hash:
            return fname  # Hit cache
    print(f"Downloading {fname} from {url}...")
    r = requests.get(url, stream=True, verify=True)
    with open(fname, "wb") as f:
        f.write(r.content)
    return fname

def download_extract(name, folder=None):
    """Download and extract a zip/tar file."""
    fname = download(name)
    base_dir = os.path.dirname(fname)
    data_dir, ext = os.path.splitext(fname)
    if ext == ".zip":
        fp = zipfile.ZipFile(fname, "r")
    elif ext in (".tar", ".gz"):
        fp = tarfile.open(fname, "r")
    else:
        assert False, "Only zip/tar files can be extracted."
fp.extractall(base_dir) return os.path.join(base_dir, folder) if folder else data_dir """ Explanation: Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/20/word_analogies_jax.ipynb <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/word_analogies_torch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Solving word analogies using pre-trained word embeddings Based on D2L 14.7 http://d2l.ai/chapter_natural-language-processing-pretraining/similarity-analogy.html End of explanation """ DATA_HUB = dict() DATA_URL = "http://d2l-data.s3-accelerate.amazonaws.com/" DATA_HUB["glove.6b.50d"] = (DATA_URL + "glove.6B.50d.zip", "0b8703943ccdb6eb788e6f091b8946e82231bc4d") DATA_HUB["glove.6b.100d"] = (DATA_URL + "glove.6B.100d.zip", "cd43bfb07e44e6f27cbcc7bc9ae3d80284fdaf5a") DATA_HUB["glove.42b.300d"] = (DATA_URL + "glove.42B.300d.zip", "b5116e234e9eb9076672cfeabf5469f3eec904fa") DATA_HUB["wiki.en"] = (DATA_URL + "wiki.en.zip", "c1816da3821ae9f43899be655002f6c723e91b88") class TokenEmbedding: """Token Embedding.""" def __init__(self, embedding_name): self.idx_to_token, self.idx_to_vec = self._load_embedding(embedding_name) self.unknown_idx = 0 self.token_to_idx = {token: idx for idx, token in enumerate(self.idx_to_token)} def _load_embedding(self, embedding_name): idx_to_token, idx_to_vec = ["<unk>"], [] # data_dir = d2l.download_extract(embedding_name) data_dir = download_extract(embedding_name) # GloVe website: https://nlp.stanford.edu/projects/glove/ # fastText website: https://fasttext.cc/ with open(os.path.join(data_dir, "vec.txt"), "r") as f: for line in f: elems = line.rstrip().split(" ") token, elems = elems[0], [float(elem) for elem in elems[1:]] # Skip header information, such as the top row in fastText if len(elems) > 1: idx_to_token.append(token) idx_to_vec.append(elems) 
idx_to_vec = [[0] * len(idx_to_vec[0])] + idx_to_vec return idx_to_token, torch.tensor(idx_to_vec) def __getitem__(self, tokens): indices = [self.token_to_idx.get(token, self.unknown_idx) for token in tokens] vecs = self.idx_to_vec[torch.tensor(indices)] return vecs def __len__(self): return len(self.idx_to_token) """ Explanation: Get pre-trained word embeddings Pretrained embeddings taken from GloVe website: https://nlp.stanford.edu/projects/glove/ fastText website: https://fasttext.cc/ End of explanation """ glove_6b50d = TokenEmbedding("glove.6b.50d") len(glove_6b50d) """ Explanation: Get a 50dimensional glove embedding, with vocab size of 400k End of explanation """ glove_6b50d.token_to_idx["beautiful"], glove_6b50d.idx_to_token[3367] embedder = glove_6b50d # embedder = TokenEmbedding('glove.6b.100d') embedder.idx_to_vec.shape """ Explanation: Map from word to index and vice versa. End of explanation """ def knn(W, x, k): # The added 1e-9 is for numerical stability cos = torch.mv( W, x.reshape( -1, ), ) / ((torch.sqrt(torch.sum(W * W, axis=1) + 1e-9) * torch.sqrt((x * x).sum()))) _, topk = torch.topk(cos, k=k) return topk, [cos[int(i)] for i in topk] def get_similar_tokens(query_token, k, embed): topk, cos = knn(embed.idx_to_vec, embed[[query_token]], k + 1) for i, c in zip(topk[1:], cos[1:]): # Remove input words print(f"cosine sim={float(c):.3f}: {embed.idx_to_token[int(i)]}") get_similar_tokens("man", 3, embedder) get_similar_tokens("banana", 3, embedder) """ Explanation: Finding most similar words End of explanation """ # We slightly modify D2L code so it works on the man:woman:king:queen example def get_analogy(token_a, token_b, token_c, embed): vecs = embed[[token_a, token_b, token_c]] x = vecs[1] - vecs[0] + vecs[2] topk, cos = knn(embed.idx_to_vec, x, 10) # remove word c from nearest neighbor idx_c = embed.token_to_idx[token_c] topk = list(topk.numpy()) topk.remove(idx_c) return embed.idx_to_token[int(topk[0])] get_analogy("man", "woman", "king", 
embedder) get_analogy("man", "woman", "son", embedder) get_analogy("beijing", "china", "tokyo", embedder) """ Explanation: Word analogies End of explanation """
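The vector-offset arithmetic behind `get_analogy` can be checked end to end on a tiny made-up embedding. A self-contained numpy sketch, with 3-d vectors fabricated so the analogy holds exactly:

```python
import numpy as np

# tiny fabricated embedding: dim 0 encodes gender, dim 1 encodes royalty
emb = {
    "man":   np.array([ 1.0, 0.0, 0.3]),
    "woman": np.array([-1.0, 0.0, 0.3]),
    "king":  np.array([ 1.0, 1.0, 0.3]),
    "queen": np.array([-1.0, 1.0, 0.3]),
}

def analogy(a, b, c):
    """Token closest in cosine similarity to vec(b) - vec(a) + vec(c), excluding the inputs."""
    x = emb[b] - emb[a] + emb[c]
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [t for t in emb if t not in (a, b, c)]
    return max(candidates, key=lambda t: cos(emb[t], x))

print(analogy("man", "woman", "king"))  # → queen
```

With real GloVe vectors the offset only approximately points at the answer, which is why `get_analogy` ranks the whole vocabulary by cosine similarity.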
Neuroglycerin/neukrill-net-work
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
mit
import multiprocessing import numpy as np p = multiprocessing.Pool(4) x = range(3) f = lambda x: x*2 def f(x): return x**2 print(x) """ Explanation: We're wasting a bunch of time waiting for our iterators to produce minibatches when we're running epochs. Seems like we should probably precompute them while the minibatch is being run on the GPU. To do this involves using the multiprocessing module. Since I've never used it before, here are my dev notes for writing this into the dataset iterators. End of explanation """ %%python from multiprocessing import Pool def f(x): return x*x if __name__ == '__main__': p = Pool(5) print(p.map(f, [1, 2, 3])) %%python from multiprocessing import Pool import numpy as np def f(x): return x*x if __name__ == '__main__': p = Pool(5) print(p.map(f, np.array([1, 2, 3]))) """ Explanation: For some reason can't run these in the notebook. So have to run them with subprocess like so: End of explanation """ %%python from multiprocessing import Pool import numpy as np def f(x): return x**2 if __name__ == '__main__': p = Pool(5) r = p.map_async(f, np.array([0,1,2])) print(dir(r)) print(r.get(timeout=1)) """ Explanation: Now doing this asynchronously: End of explanation """ %%python from multiprocessing import Pool import numpy as np def f(x): return x**2 class It(object): def __init__(self,a): # store an array (2D) self.a = a # initialise pool self.p = Pool(4) # initialise index self.i = 0 # initialise pre-computed first batch self.batch = self.p.map_async(f,self.a[self.i,:]) def get(self): return self.batch.get(timeout=1) def f(self,x): return x**2 if __name__ == '__main__': it = It(np.random.randn(4,4)) print(it.get()) %%python from multiprocessing import Pool import numpy as np def f(x): return x**2 class It(object): def __init__(self,a): # store an array (2D) self.a = a # initialise pool self.p = Pool(4) # initialise index self.i = 0 # initialise pre-computed first batch self.batch = self.p.map_async(f,self.a[self.i,:]) def __iter__(self): 
return self def next(self): # check if we've got something pre-computed to return if self.batch: # get the output output = self.batch.get(timeout=1) #output = self.batch # prepare next batch self.i += 1 if self.i < self.a.shape[0]: self.p = Pool(4) self.batch = self.p.map_async(f,self.a[self.i,:]) #self.batch = map(self.f,self.a[self.i,:]) else: self.batch = False return output else: raise StopIteration if __name__ == '__main__': it = It(np.random.randn(4,4)) for a in it: print a """ Explanation: Now trying to create an iterable that will precompute it's output using multiprocessing. End of explanation """ %%time %%python from multiprocessing import Pool import numpy as np import neukrill_net.augment import time class It(object): def __init__(self,a,f): # store an array (2D) self.a = a # store the function self.f = f # initialise pool self.p = Pool(4) # initialise indices self.inds = range(self.a.shape[0]) # pop a batch from top self.batch_inds = [self.inds.pop(0) for _ in range(100)] # initialise pre-computed first batch self.batch = map(self.f,self.a[self.batch_inds,:]) def __iter__(self): return self def next(self): # check if we've got something pre-computed to return if self.inds != []: # get the output output = self.batch # prepare next batch self.batch_inds = [self.inds.pop(0) for _ in range(100)] self.p = Pool(4) self.batch = map(self.f,self.a[self.batch_inds,:]) return output else: raise StopIteration if __name__ == '__main__': f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270]) it = It(np.random.randn(10000,48,48),f) for a in it: time.sleep(0.01) pass %%time %%python from multiprocessing import Pool import numpy as np import neukrill_net.augment import time class It(object): def __init__(self,a,f): # store an array (2D) self.a = a # store the function self.f = f # initialise pool self.p = Pool(8) # initialise indices self.inds = range(self.a.shape[0]) # pop a batch from top self.batch_inds = [self.inds.pop(0) for _ in range(100)] # initialise 
pre-computed first batch self.batch = self.p.map_async(f,self.a[self.batch_inds,:]) def __iter__(self): return self def next(self): # check if we've got something pre-computed to return if self.inds != []: # get the output output = self.batch.get(timeout=1) # prepare next batch self.batch_inds = [self.inds.pop(0) for _ in range(100)] #self.p = Pool(4) self.batch = self.p.map_async(f,self.a[self.batch_inds,:]) return output else: raise StopIteration if __name__ == '__main__': f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270]) it = It(np.random.randn(10000,48,48),f) for a in it: time.sleep(0.01) pass %%time %%python from multiprocessing import Pool import numpy as np import neukrill_net.augment import time class It(object): def __init__(self,a,f): # store an array (2D) self.a = a # store the function self.f = f # initialise pool self.p = Pool(8) # initialise indices self.inds = range(self.a.shape[0]) # pop a batch from top self.batch_inds = [self.inds.pop(0) for _ in range(100)] # initialise pre-computed first batch self.batch = self.p.map_async(f,self.a[self.batch_inds,:]) def __iter__(self): return self def next(self): # check if we've got something pre-computed to return if self.inds != []: # get the output output = self.batch.get(timeout=1) # prepare next batch self.batch_inds = [self.inds.pop(0) for _ in range(100)] #self.p = Pool(4) self.batch = self.p.map_async(f,self.a[self.batch_inds,:]) return output else: raise StopIteration if __name__ == '__main__': f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270]) it = It(np.random.randn(10000,48,48),f) for a in it: print np.array(a).shape print np.array(a).reshape(100,48,48,1).shape break """ Explanation: Then we have to try and do a similar thing, but using the randomaugment function. In the following two cells one uses multiprocessiung and one that doesn't. Testing them by pretending to ask for a minibatch and then sleep, applying the RandomAugment function each time. End of explanation """
letsgoexploring/sargentPhillipsCurve
sargentPhillipsCurve.ipynb
mit
import numpy as np
import matplotlib.pyplot as plt
from fredpy import series,window_equalize
%matplotlib inline
"""
Explanation: Python program for replicating Figure 1.5 from The Conquest of American Inflation by Thomas Sargent.
In Figure 1.5, Sargent compares the business cycle components of monthly inflation and unemployment data for the US from 1960-1982. This program produces a replication of Figure 1.5 (among other things) by using the fredpy package to import inflation and unemployment data from Federal Reserve Economic Data (FRED), manage the data, and then plot the results.
End of explanation
"""
# Download data
u = series('LNS14000028')
p = series('CPIAUCSL')

# Construct the inflation series
p.pc(annualized=True)
p.ma2side(length=6)
p.data = p.ma2data
p.datenumbers = p.ma2datenumbers
p.dates = p.ma2dates

# Make sure that the inflation and unemployment series cover the same time interval
window_equalize([p,u])

# Filter the data
p.bpfilter(low=24,high=84,K=84)
p.hpfilter(lamb=129600)
u.bpfilter(low=24,high=84,K=84)
u.hpfilter(lamb=129600)
"""
Explanation: Data
Importing the data
As his measure of the unemployment rate, Sargent uses the unemployment rate for white men age 20 and over (FRED code: LNS14000028). The results are essentially identical if the unemployment rate of the over-16 non-institutional population (FRED code: UNRATE) is used. His measure of the inflation rate is a 13-month two-sided moving average of the annualized monthly percentage change in the CPI. The unemployment and inflation rate data are monthly.
Detrending procedures
Sargent isolates the business cycle components of the data using the bandpass filter of Baxter and King (1995). Since the data are monthly, the minimum frequency is set to 24 months, the maximum is set to 84 months, and the lag-lead truncation to 84. Additionally, I also detrend the data using the Hodrick-Prescott filter (1997).
The striking loops in Sargent's Figure 1.5 are sensitive to the filtering procedure used.
End of explanation
"""
# BP-filtered data
fig = plt.figure()
ax = fig.add_subplot(2,1,1)
ax.plot_date(p.datenumbers,p.data,'b-',lw=2)
ax.plot_date(p.datenumbers,p.hptrend,'r-',lw=2)
ax.grid(True)
ax.set_title('Inflation: level and HP trend')
ax = fig.add_subplot(2,1,2)
ax.plot_date(p.bpdatenumbers,p.bpcycle,'r-',lw=2)
ax.plot_date(p.datenumbers,p.hpcycle,'g--',lw=2)
ax.grid(True)
ax.set_title('Inflation: cyclical component')
fig.autofmt_xdate()

# Scatter plot of BP-filtered inflation and unemployment data (Sargent's Figure 1.5)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
t = np.arange(len(u.bpcycle))
ax.scatter(u.bpcycle,p.bpcycle,facecolors='none',alpha=0.75,s=20,c=t, linewidths=1.5)
ax.set_xlabel('unemployment rate (%)')
ax.set_ylabel('inflation rate (%)')
ax.set_title('Inflation and unemployment: BP-filtered data')
ax.grid(True)

# HP-filtered data
fig = plt.figure()
ax = fig.add_subplot(2,1,1)
ax.plot_date(u.datenumbers,u.data,'b-',lw=2)
ax.plot_date(u.datenumbers,u.hptrend,'r-',lw=2)
ax.grid(True)
ax.set_title('Unemployment: level and HP trend')
ax = fig.add_subplot(2,1,2)
ax.plot_date(u.bpdatenumbers,u.bpcycle,'r-',lw=2)
ax.plot_date(u.datenumbers,u.hpcycle,'g--',lw=2)
ax.grid(True)
ax.set_title('Unemployment: cyclical component')
fig.autofmt_xdate()

# Scatter plot of HP-filtered inflation and unemployment data
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
t = np.arange(len(u.hpcycle))
ax.scatter(u.hpcycle,p.hpcycle,alpha=0.5,s=50,c=t)
ax.set_xlabel('unemployment rate (%)')
ax.set_ylabel('inflation rate (%)')
ax.set_title('Inflation and unemployment: HP-filtered data')
ax.grid(True)
"""
Explanation: Plots
End of explanation
"""
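For intuition about what `hpfilter` does: the Hodrick-Prescott trend solves $\min_\tau \sum_t (y_t-\tau_t)^2 + \lambda \sum_t (\Delta^2 \tau_t)^2$, which is just a linear system. A dense numpy sketch (fredpy's implementation will differ in details):

```python
import numpy as np

def hp_filter(y, lamb=129600.0):
    """Return (trend, cycle) from the Hodrick-Prescott filter via its normal equations."""
    y = np.asarray(y, dtype=float)
    n = y.size
    D = np.zeros((n - 2, n))           # second-difference operator
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)
    return trend, y - trend
```

A useful sanity check: a purely linear series has zero second differences, so the filter returns it unchanged as the trend and the cycle is zero.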
ccwang002/2015Talk-Python35News
code/PEP-465.ipynb
mit
import numpy as np
"""
Explanation: PEP 465 - @ operator
A dedicated infix operator for matrix multiplication
$$ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \times \begin{bmatrix} 11 & 12 \\ 13 & 14 \end{bmatrix} = \text{?} $$
In Numpy (or many numerical computation cases), there are two ways to handle multiplication:

elementwise
matrix

Elementwise Multiplication
$$ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \times \begin{bmatrix} 11 & 12 \\ 13 & 14 \end{bmatrix} = \begin{bmatrix} 1 \cdot 11 & 2 \cdot 12 \\ 3 \cdot 13 & 4 \cdot 14 \end{bmatrix} $$
Matrix Multiplication
$$ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \times \begin{bmatrix} 11 & 12 \\ 13 & 14 \end{bmatrix} = \begin{bmatrix} 1 \cdot 11 + 2 \cdot 13 & 1 \cdot 12 + 2 \cdot 14 \\ 3 \cdot 11 + 4 \cdot 13 & 3 \cdot 12 + 4 \cdot 14 \end{bmatrix} $$
End of explanation
"""
matA = np.array([[1, 2], [3, 4]], dtype=int)
matB = np.array([[11, 12], [13, 14]], dtype=int)
print('matrix A: \n%r' % matA)
print('matrix B: \n%r' % matB)
"""
Explanation: NumPy arrays are C-style (row-major) contiguous by default.
End of explanation
"""
matA * matB
np.multiply(matA, matB)
matA.__mul__(matB)
"""
Explanation: Elementwise
End of explanation
"""
matA @ matB
np.dot(matA, matB)
matA.dot(matB)
"""
Explanation: Matrix Multiplication
End of explanation
"""
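A quick sketch of how the spellings above relate, and how `@` differs from `*`:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[11, 12], [13, 14]])

elementwise = A * B   # Hadamard product
matmul = A @ B        # PEP 465 operator: same result as A.dot(B) / np.dot(A, B)

print(elementwise)    # values [[11, 24], [39, 56]]
print(matmul)         # values [[37, 40], [85, 92]]

# @ also works on 1-d arrays, where it is the inner product
v = np.array([1, 2])
inner = v @ v         # 1*1 + 2*2 = 5
```

The dedicated operator removes the ambiguity: `*` always means elementwise, `@` always means matrix multiplication.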
simpeg/simpegpf
simpegPF/notebooks/tutorials/Tutorial_1_Mag forward modeling.ipynb
mit
cs = 12.5
ncx, ncy, ncz, npad = 41, 41, 40, 5
hx = [(cs,npad,-1.4), (cs,ncx), (cs,npad,1.4)]
hy = [(cs,npad,-1.4), (cs,ncy), (cs,npad,1.4)]
hz = [(cs,npad,-1.4), (cs,ncz), (cs,npad,1.4)]
mesh = Mesh.TensorMesh([hx, hy, hz], 'CCC')
fig, ax = plt.subplots(1,2, figsize=(12, 5))
dat0 = mesh.plotSlice(np.zeros(mesh.nC), grid=True, ax=ax[0]); ax[0].set_title('XY plane')
dat1 = mesh.plotSlice(np.zeros(mesh.nC), grid=True, normal='X', ax=ax[1]); ax[1].set_title('YZ plane')
"""
Explanation: Forward problem: Magnetics
This is a tutorial for the Mag forward problem using the simpegPF package. We first start with the analytic solution for a susceptible sphere in a whole space. Then we solve the steady-state Maxwell's equations for the Mag problem (<a href="http://simpegpf.readthedocs.org/en/latest/api_PF.html">See Doc</a>) using the simpegPF package.
Step1: Discretize the earth
We use the TensorMesh class in SimPEG to discretize the 3D earth (<a href="http://docs.simpeg.xyz/en/latest/api_MeshCode.html?highlight=tensormesh#module-SimPEG.Mesh.TensorMesh">See Doc</a>). Let's visualize the discretized mesh on section views:
End of explanation
"""
from scipy.constants import mu_0
mu0 = 4*np.pi*1e-7
chibkg = 0.
# Background susceptibility
chiblk = 0.01 # Susceptibility for a sphere
chi = np.ones(mesh.nC)*chibkg
sph_ind = spheremodel(mesh, 0, 0, -100, 80) # A sphere is located at (0, 0, -100) and the radius of the sphere is 80 m
chi[sph_ind] = chiblk # Assign susceptibility value for the sphere
mu = (1.+chi)*mu0
fig, ax = plt.subplots(1,2, figsize=(12, 7))
indz = int(np.argmin(abs(mesh.vectorCCz-(-100.)))); indx = int(np.argmin(abs(mesh.vectorCCx-0.)))
dat0 = mesh.plotSlice(chi, grid=True, ind=indz, ax=ax[0]); ax[0].set_title(('XY plane at z=%5.2f')%(mesh.vectorCCz[indz]))
dat1 = mesh.plotSlice(chi, grid=True, normal='X', ind=indx, ax=ax[1]); ax[1].set_title(('YZ plane at x=%5.2f')%(mesh.vectorCCx[indx]))
cb0 = plt.colorbar(dat0[0], orientation='horizontal', ax=ax[0], ticks = linspace(0, 0.01, 5));
cb1 = plt.colorbar(dat1[0], orientation='horizontal', ax=ax[1], ticks = linspace(0, 0.01, 5));
cb0.set_label("Susceptibility")
cb1.set_label("Susceptibility")
"""
Explanation: Step2: Compose susceptibility model: susceptible sphere in whole space
$\mu = \mu_0(1+\chi)$
$\mu$: magnetic permeability
$\mu_0$: magnetic permeability of vacuum space
$\chi$: magnetic susceptibility
End of explanation
"""
xr = np.linspace(-200, 200, 21)
yr = np.linspace(-200, 200, 21)
X, Y = np.meshgrid(xr, yr)
Z = np.ones((size(xr), size(yr)))*30.
fig, ax = plt.subplots(1,1, figsize=(5, 5))
indz = int(np.argmin(abs(mesh.vectorCCz-(0.))));
dat0 = mesh.plotSlice(chi, grid=True, ind=indz, ax=ax); ax.set_title(('XY plane at z=%5.2f')%(mesh.vectorCCz[indz]))
ax.plot(X.flatten(), Y.flatten(), 'w.', ms=5)
"""
Explanation: Step3: Set up an airborne MAG survey
We have discretized the 3D earth and generated a susceptibility model, which means that we have a discretized earth and a physical property distribution. We can compute magnetic fields everywhere in our domain by solving a partial differential equation (PDE), but our measurements are confined to finite locations.
Therefore, we need to project the computed field $\mathbf{u}$, which is defined everywhere in our domain, to the locations where we have receiving points. For instance, in an airborne mag survey these are the points where a plane or helicopter measures the earth's magnetic fields. This projection can be expressed as:
$$ \mathbf{d} = P(\mathbf{u})$$
where $P(\cdot)$ is a projection from the computed field to the measured data, and $\mathbf{d}$ is the measured data.
Let's assume that we have a survey area: 400 m $\times$ 400 m. We have 21 lines of airborne MAG survey, and we measure the magnetic field every 20 m on each line. The pilot of this helicopter is really talented, so the flight height is a constant 30 m above the surface.
End of explanation
"""
Bxra, Byra, Bzra = MagSphereAnaFunA(X, Y, Z, 80., 0., 0., -100, chiblk, np.array([0., 0., 1.]), flag)
Bxra = np.reshape(Bxra, (size(xr), size(yr)), order='F')
Byra = np.reshape(Byra, (size(xr), size(yr)), order='F')
Bzra = np.reshape(Bzra, (size(xr), size(yr)), order='F')
fig, ax = plt.subplots(1,3, figsize=(18, 4))
dat0=ax[0].contourf(X, Y, Bxra, 30); ax[0].set_title('Bx'); cb0 = plt.colorbar(dat0, ax=ax[0])
dat1=ax[1].contourf(X, Y, Byra, 30); ax[1].set_title('By'); cb1 = plt.colorbar(dat1, ax=ax[1])
dat2=ax[2].contourf(X, Y, Bzra, 30); ax[2].set_title('Bz'); cb2 = plt.colorbar(dat2, ax=ax[2])
for i in range(3):
    ax[i].plot(X.flatten(), Y.flatten(), 'k.', ms=3)
"""
Explanation: Step4: Analytic solution
We have an analytic solution when we have a sphere in a whole-space. simpegPF provides this function so that you can compute the magnetic field at your receiving locations. Another input you need to provide is the direction of the earth's magnetic field, and its strength; you can easily get this information from <a href="http://www.ngdc.noaa.gov/geomag-web/">NOAA's website</a>. We assume that we have a vertical earth field and that the strength is 1.
End of explanation
"""
survey = BaseMag.BaseMagSurvey() # survey class for mag problem
Inc = 90.
Dec = 0.
Btot = 1
survey.setBackgroundField(Inc, Dec, Btot) # set inclination, declination, and strength of magnetic field
rxLoc = np.c_[Utils.mkvc(X), Utils.mkvc(Y), Utils.mkvc(Z)]
survey.rxLoc = rxLoc # set receiver locations
"""
Explanation: Step5: Solve PDE
Note that the data in this case are the magnetic fields projected onto the earth field direction, which is the typical data type for an airborne mag survey.
First, set up the survey class
End of explanation
"""
prob = MagneticsDiffSecondary(mesh)
prob.pair(survey)
"""
Explanation: Second, set up the problem class, then pair it with the survey
End of explanation
"""
data = survey.dpred(mu)
"""
Explanation: Third, run the forward modeling
End of explanation
"""
fig, ax = plt.subplots(1,3, figsize=(18, 4))
vmin = Bzra.min()
vmax = Bzra.max()
residual = data.reshape((xr.size, yr.size), order='F')-Bzra
dat0=ax[0].contourf(X, Y, Bzra, 30, vmin=vmin, vmax=vmax)
dat1=ax[1].contourf(X, Y, data.reshape((xr.size, yr.size), order='F'), 30, vmin=vmin, vmax=vmax)
dat2=ax[2].contourf(X, Y, residual, 30)
cb0 = plt.colorbar(dat0, ax=ax[0])
cb1 = plt.colorbar(dat1, ax=ax[1])
cb2 = plt.colorbar(dat2, ax=ax[2])
ax[0].set_title('Bz (analytic)')
ax[1].set_title('Bz (simpegPF)')
ax[2].set_title('Residual')
"""
Explanation: Now we visualize the computed solution and compare it with the analytic solution! Note that you always need to make sure that your numerical solution is reasonable.
End of explanation
"""
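Beyond eyeballing the residual panel above, it is often useful to reduce the numeric-versus-analytic comparison to a single number. The sketch below is self-contained and hedged: `relative_misfit` is a hypothetical helper (not part of simpegPF), and the stand-in arrays play the role of the notebook's `data` and `Bzra`, which require a full SimPEG run to produce.

```python
import numpy as np

def relative_misfit(numeric, analytic):
    """Relative L2 misfit between a numerical and an analytic solution."""
    numeric = np.asarray(numeric, dtype=float).ravel()
    analytic = np.asarray(analytic, dtype=float).ravel()
    return np.linalg.norm(numeric - analytic) / np.linalg.norm(analytic)

# Stand-in fields on a flattened 21 x 21 receiver grid: the "numerical"
# solution is the "analytic" one plus a small constant perturbation,
# so the relative misfit should come out small.
analytic = np.sin(np.linspace(0.0, 2.0 * np.pi, 441))
numeric = analytic + 1e-3
err = relative_misfit(numeric, analytic)
print(err)
```

With the real notebook variables one would compare `data` against the flattened `Bzra` (in the same `order='F'` ordering) and expect a value well below 1 if the discretization is adequate.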
mne-tools/mne-tools.github.io
0.24/_downloads/13f9133d0e7c13dded3c5dd2cf828dd3/gamma_map_inverse.ipynb
bsd-3-clause
# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#         Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne.datasets import sample
from mne.inverse_sparse import gamma_map, make_stc_from_dipoles
from mne.viz import (plot_sparse_source_estimates, plot_dipole_locations,
                     plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-cov.fif'
# Read the evoked response and crop it
condition = 'Left visual'
evoked = mne.read_evokeds(evoked_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=-50e-3, tmax=300e-3)
# Read the forward solution
forward = mne.read_forward_solution(fwd_fname)
# Read the noise covariance matrix and regularize it
cov = mne.read_cov(cov_fname)
cov = mne.cov.regularize(cov, evoked.info, rank=None)
# Run the Gamma-MAP method with dipole output
alpha = 0.5
dipoles, residual = gamma_map(
    evoked, forward, cov, alpha, xyz_same_gamma=True, return_residual=True,
    return_as_dipoles=True)
"""
Explanation: Compute a sparse inverse solution using the Gamma-MAP empirical Bayesian method
See :footcite:WipfNagarajan2009 for details.
End of explanation """ plot_dipole_amplitudes(dipoles) # Plot dipole location of the strongest dipole with MRI slices idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles]) plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample', subjects_dir=subjects_dir, mode='orthoview', idx='amplitude') # # Plot dipole locations of all dipoles with MRI slices # for dip in dipoles: # plot_dipole_locations(dip, forward['mri_head_t'], 'sample', # subjects_dir=subjects_dir, mode='orthoview', # idx='amplitude') """ Explanation: Plot dipole activations End of explanation """ ylim = dict(grad=[-120, 120]) evoked.pick_types(meg='grad', exclude='bads') evoked.plot(titles=dict(grad='Evoked Response Gradiometers'), ylim=ylim, proj=True, time_unit='s') residual.pick_types(meg='grad', exclude='bads') residual.plot(titles=dict(grad='Residuals Gradiometers'), ylim=ylim, proj=True, time_unit='s') """ Explanation: Show the evoked response and the residual for gradiometers End of explanation """ stc = make_stc_from_dipoles(dipoles, forward['src']) """ Explanation: Generate stc from dipoles End of explanation """ scale_factors = np.max(np.abs(stc.data), axis=1) scale_factors = 0.5 * (1 + scale_factors / np.max(scale_factors)) plot_sparse_source_estimates( forward['src'], stc, bgcolor=(1, 1, 1), modes=['sphere'], opacity=0.1, scale_factors=(scale_factors, None), fig_name="Gamma-MAP") """ Explanation: View in 2D and 3D ("glass" brain like 3D plot) Show the sources as spheres scaled by their strength End of explanation """
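As a side note on the last cell: the `0.5 * (1 + s / max(s))` normalization maps the per-dipole peak amplitudes so that the strongest dipole is drawn at full size and weaker ones shrink toward half size. Here is a small self-contained check of that expression (`sphere_scales` and the `amps` array are hypothetical names, mirroring the code above, not part of the MNE API):

```python
import numpy as np

def sphere_scales(amplitudes):
    # Peak absolute amplitude per dipole (one row per dipole time course).
    peaks = np.max(np.abs(np.atleast_2d(amplitudes)), axis=1)
    # Same normalization as in the plotting cell above.
    return 0.5 * (1.0 + peaks / np.max(peaks))

# Three toy dipole time courses with peak amplitudes 4, 1, and 2.
amps = np.array([[0.0, 2.0, -4.0],
                 [1.0, -1.0, 0.5],
                 [0.0, 0.0, -2.0]])
scales = sphere_scales(amps)
print(scales)  # -> [1.0, 0.625, 0.75]
```

The strongest dipole always gets a scale of exactly 1.0, and no nonzero dipole ever shrinks below 0.5, which keeps even weak sources visible in the 3D plot.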
tyler-abbot/PyShop
session1/PyShop_session1_notes.ipynb
agpl-3.0
print('Hello World!')
"""
Explanation: PyShop Session 1
This session introduces Python as an open source, high level programming language, as well as a community. By the end of the session, you should be familiar with the following necessary (or at least useful) components for being a participating member of the Python community:
The Python interpreter.
Anaconda Python.
Text editors.
Stack Exchange.
GitHub.
Additionally, this session will introduce Python style and syntax, data types, modules and packages, the standard workflow, objects and object oriented programming, documentation for collaboration, as well as some basic examples. By the end of this session you should feel comfortable setting up and working in your new Python environment.
What is Python?
Python is a programming language. More specifically, Python is an interpreted, object oriented programming language. This means that Python is not compiled like C, but fed into the Python interpreter from the command line (or a script). This makes the Python workflow much smoother and easier to understand. Beyond technical issues, Python's syntax is very user friendly and has made it the first choice for programming courses. In fact, many economics departments already offer Python courses and MIT recently switched all of their introductory computer science courses to Python.
How is Python different from the other tools we use?
The programs I'm referring to are R, Stata, and Matlab. These are not programming languages. Stata and Matlab are proprietary software that costs money, while R is an open source software package for statistics. R gets around the problem of having to pay for your calculations, but its syntax is not intuitive and it is geared solely towards statisticians. Matlab can compete with Python on parallelism, but it cannot compare in terms of capability on several other metrics: currentness, price, strings, portability, classes, and functions.
Overall, the beauty of Python is that it was designed by computer programmers for their own needs and the code is constantly being updated and improved.
Who uses Python and what are its main features?
Who uses it:
Yahoo Maps
Google!
Tons of computer games
Disney animators
NASA's Johnson Space Center and Los Alamos National Laboratory's Theoretical Physics Division (#PhysicsEnvy)
The CIA website
But what about in academia? Just googling "'school name' economics python" for the top ten programs according to US News, the following programs either have high performance computing interests using Python or courses in Python:
UPenn
NYU Stern
Harvard Institute for Quantitative Social Science
Chicago
Kellogg
Columbia
On top of this, Python is fast becoming the introductory programming language for teaching. For instance, MIT recently switched all of their intro computer programming courses to Python.
What are its main features? (this list borrowed from 'A Byte of Python' by Patrick Flemming)
Simplicity. Very streamlined syntax makes it easy to read.
Free!
High level. You don't need to manage memory usage; all of that is done under the hood.
Portable. Works on practically any operating system.
Interpreted. No compiling. When you run a script, the interpreter translates it into bytecodes and then to the native language to run. This actually makes Python slightly slower than, for example, C++, but much easier to use.
Object oriented. Python uses method attributes (we'll go over that later) to make object oriented programming easy, but it also supports procedure oriented programming as well.
Extensible. Python plays nice with C.
Libraries!!!! So many libraries, you won't know where to start.
How to get set-up.
The Python workflow that I use is a text editor and an interpreter, since I'm a Linux programmer. However, you may be interested in a Matlab-like IDE.
These can be really messy and complex, so beware, but if you want to try one I've heard Sublime Text 2 is nice, or just IDLE (the IDE included with Python) will work. Since I can't speak to this, though, I will talk about iPython and text editors. For step-by-step set up, see the Pre-Workshop Exercises.
The Python interpreter and Anaconda Python
The first thing you need is Python! Most store-bought computers have it, but we need some additional features, so I suggest you download Anaconda Python. It comes with 330+ packages, it's free, it works on any operating system, it's light, and it includes the interactive iPython and iPython notebook. It is made by Continuum Analytics, who also run Wakari, a web based Python environment (but the Sciences Po firewall blocks it, so good luck using it here!).
Text editors
We're not going to use an Interactive Development Environment (IDE) because they are too complicated and there are too many options. There are a ton of options for text editors, but I like GitHub's Atom text editor. It is easy to use, web based, and stylish! However, it can be a little slow...
GitHub
Knowing how to use GitHub is really a must for anyone hoping to collaborate on an open source project or just on a large programming project. It is a repository hosting service based on the Git distributed revision control system. Essentially, it is a free service that helps you to do version control and software updating without ever having to do any work! The GitHub workflow proceeds as follows:
Create a branch of a project. Others can do the same while you work on the code, but your branch will be a snapshot from the time you created it, along with any changes you made.
Add commits. You make changes to the branch, which in GitHub jargon is a 'commit'.
Open a pull request. This starts a discussion with the community about your changes. Think of a 'pull request' as asking them to pull your branch back into the main flow.
Deploy. When things are looking good, you deploy the code to test for bugs.
Merge. When things are running smoothly, your changes will be merged back into the main project.
GitHub takes care of all of the tedious parts; you just write the code and see if it works!
Stack Exchange
Ok, this is just in case you don't already know it, but Stack Exchange will be your best friend in the coming weeks. There is a massive community using this site to discuss Python, so create an account and start asking questions!
IPython and the IPython Notebook.
There are several technical reasons why you would want to use the IPython shell, including tab completion, help 'magic', debugging, and optimization, but these are all beyond the scope of the course. However, you should look into these in the <a href="https://ipython.org/ipython-doc/3/interactive/magics.html" target="_blank">IPython documentation</a>. The reason WE are going to use IPython is its ease of use and the IPython (or apparently now they call it Jupyter) Notebook. To use the IPython Notebook, simply type ipython notebook, at which point the computer will open up a notebook server where you can see and edit your own IPython notebooks. These are probably not the most efficient way to work in general, but they are a great teaching tool.
Ok, let's run a command!
The prototypical first program is the Hello World! program. Here it is:
End of explanation
"""
%matplotlib inline
"""
Explanation: That's it! It is that easy. In fact, you can save this single line of code in a file ending in .py and then run it and you would get the same thing. Running a script can be done using the python command, but IPython is a better way to work. You can open the IPython interpreter by typing ipython, then run your program by typing run my_program.py, as long as it is in your present working directory.
Ok, let's see a more complex example. First, we are going to set up inline plotting:
End of explanation
"""
"""
Origin: Plotting a utility function.
Filename: example_utility.py
Author: Tyler Abbot
Last modified: 8 September, 2015
"""
import numpy as np
import matplotlib.pyplot as plt
# Define the input variable
c = np.linspace(0.01, 10.0, 100)
# Calculate utility over the given space
U = np.log(c)
# Plot the function
plt.plot(c, U)
plt.xlabel('Consumption')
plt.ylabel('Utility')
plt.title('An Example Utility Function')
plt.show()
"""
Explanation: Now we are going to write a little program (I would save this as a file, hence the docstring, but I put it here for clarity) to plot a utility function:
End of explanation
"""
"""
Explanation: This example illustrates some of the basic points of Python programming and syntax.
Docstring. A docstring is a 'string literal' that informs the reader (not the computer!) of the usage for a program, function, module, etc. It is good practice to include information about your program at the beginning so that others, and you for that matter, can figure out what it is for later. I got this habit from Tom Sargent's Quant Econ course and have used it religiously (although I don't always remember to change the modification date...).
Import statements. One of the great things about Python is that you can pick and choose what functionality you would like. This is where import statements come in. At the beginning of your program you tell Python exactly what modules and packages you would like. It is possible to import entire modules, but this is frowned upon. You should be specific in order to keep your program light. We will come back to the syntax later, but here we are importing the numpy and matplotlib modules and defining more compact names for them.
Comments. In Python you can comment text using the #, or a multiline comment can be surrounded by three quotation marks: """This is commented. """. There are certain comment style standards laid out in PEP8:
Comments should be complete sentences. Use two spaces after a period.
Comments should be in English (sorry French folk)
Use block comments instead of inline comments (or at least use inline comments sparingly)
Write Docstrings!
In general, save the multiline comment for docstrings.
Variable definition. In Python you use the '=' sign for variable definitions. Variable type is assumed based on the definition, but you can define the variable more precisely if you'd like. In general, Python variables are local in scope, that is, they are defined for use within a function or program. However, you should be careful to use different variable names in different functions, since if Python cannot find a variable in the local 'namespace', it will move to the global namespace.
Function call. When you call a function in Python, you simply pass it the required arguments. Things can get more complicated, but we will discuss this later. When you import a module, the module itself is an object. This is sort of a philosophical point, but pretty much everything in Python, functions, modules, variables, etc., is an object. Given this, you can reference methods of those objects. Here, when we do plt.plot(foo), we are actually telling the computer to go into matplotlib and find the plot function, then to run that function on the variable foo. This is the idea behind object oriented programming. I know, this was a very short explanation, but either I wave some hands or I write a book.
Methods. Throughout the example you'll notice the syntax of object.method. This is a very pythonic way of programming, as everything in Python is an object, be it a variable you have defined or a module you import. A "method" is a function or attribute defined within the class to which the object belongs. We're getting ahead of ourselves, but it suffices to know that this syntax refers to a method.
Beyond these basic parts of a Python program, there is a lot of focus on syntax and style in the community. This is why after 25 years the language is still concise and clear.
To get an idea of the philosophy behind Python style, run the following cell:
End of explanation
"""
import this
"""
Explanation: Modules and packages
A module is a way for Python to save definitions for later use. A module file simply contains the definitions of functions, classes, etc., and you can write your own if you want to. Modules can also contain executable statements that are run the first time the code is used, i.e., when you import. It is important to note that when you import a module, the system will search in your PYTHONPATH, a system variable that you may have to define yourself (I think on the newest version of Anaconda this is not a problem, but I'm not sure...).
A package is a larger container of modules. For instance, Matplotlib is a package and pyplot is a module. This is simply a vocabulary issue. You'll just import the stuff you need!
Here is a list of some of the most useful packages for economists (or anybody, really):
Numpy. This is probably the most used package in Python, and if not it's definitely the most used by scientists. It includes mainly the numpy array object and related linear algebra operators. Along with that it has some random number and Fourier transform capabilities.
Scipy. The 'Scipy Stack' actually incorporates most of the packages in this list, as well as some others. If you are talking about the 'Scipy Library' then you are referring to the namesake package that includes numerical routines. The part you'll use most is probably the numerical integration and optimization, but it also has great interpolation, sparse matrices, statistics, and linear algebra capabilities.
Matplotlib. An object oriented plotting utility. This package can be very easy to use, but also offers amazing customizability which, when combined with third party packages, can match any graphics software on the market.
Pandas. A data analysis library.
Contains the DataFrame object (which is what everyone in this course is probably interested in), as well as multidimensional panel objects for panel data (I'm just learning about these! so neat!), and series objects. It also contains some statistical functions, but the most useful things are IO and data munging. You can use Pandas to read in large amounts of data in almost any format and write to almost anything, including HDF5, which allows you to work with 'big data'.
StatsModels. A module for statistics. It seems to me to contain mostly econometric methods, and given that the founder is an econometrician it is probably pretty focused on stuff you will be interested in! Alongside a lot of the functionality, you get some nice statistical plotting as well.
Requests. This is probably less useful, but if you ever want to automate data retrieval you'll need to use HTTP. Requests makes this easy. Their slogan is 'HTTP for Humans', which is pretty self explanatory.
Beautiful Soup. Again, this is not as useful. This is a simple and easy to use module for parsing, navigating, searching, and modifying a document. This is particularly useful if you need to do any web scraping.
Scrapy. This module helps you to create web crawlers that you can use to gather data on the web. It even generates the file tree for you. I actually find this kind of terrifying, but you might find it useful.
Sympy. This is a symbolic algebra package that can do simplification, polynomial expansions, symbolic calculus, equation solving, combinatorics, and tons of other neat stuff!
Data structures
Python's data structures are, like everything else, object oriented. In this sense, each one has special methods that can be used to manipulate it. Here's a list of some of the basic data structures:
List
String
Tuple
Set
Dictionary
Numpy Array
Pandas DataFrame
Most of these data types will come up in this course, but we can't cover everything.
You should take some time to read <a href="https://docs.python.org/3/library/stdtypes.html#" target="_blank">the documentation</a> on these so that you are at least familiar with them.
Indexing in Python begins at zero and you can reference an item in a list, string, or tuple by its index. You can also reference a numpy array using indices, and in higher dimensions these are much easier to deal with than list indices. Dictionaries and DataFrames use keys to keep track of their contents. We will discuss DataFrames in more depth when we talk about Pandas. For now, we will talk just about the native list, string, tuple, and dictionary types.
Lists
A list has some useful methods that you should look up in the <a href="https://docs.python.org/3/library/stdtypes.html#sequence-types-list-tuple-range" target="_blank">documentation</a>, such as pop, append, and sort. Two of the most useful techniques with lists, though, are list comprehensions and lists as iterators.
List comprehensions
This is a concise and simple way to create a list. Instead of writing:
End of explanation
"""
x = []
for i in range(0, 50):
    x.append(i)
print(x)
"""
Explanation: we can directly fill the object x using what is called a list comprehension:
End of explanation
"""
x = [i for i in range(0, 50)]
print(x)
"""
Explanation: A list comprehension is a succinct way to write a for loop that creates a list. You essentially place all of the syntax within the list definition, between the []'s.
List as iterator
If you have a list of items, say variable names, and you would like to iterate over these variables, you can use the list as an iterator. For example:
End of explanation
"""
names = ["var_1", "var_2", "var_3"]
for variable in names:
    print(variable)
"""
Explanation: This is the simplest (for me) form of what's called an iterable, beyond simply a list of numbers.
Many objects in Python are "iterable": lists, strings, arrays, even text files. If you are interested in how these objects work in the context of iterators and a more general object known as "generators", I encourage you to check out the <a href="http://anandology.com/python-practice-book/iterators.html" target="_blank">Python practice book</a>, although it isn't quite necessary for our work. Loops When programming we think of two different kinds of loops: indefinite and definite loops. A "definite loop" is one where the number of iterations is known in advance; the definition of the loop specifies the number of iterations. An "indefinite loop" is one where the number of iterations is unknown in advance; the definition of the loop specifies a condition. Here are two examples: End of explanation """ # Define a string using quotes x = 'Hello! I am a string!' print(x) # The type of quote is irrelevant x = "Hello! I am a string!" print(x) # Reference stings in the same way as a list, # but indices refer to position in the string print(x[0]) print(x[:5]) # Strings support arithmetic operations similar to lists print(x + x) """ Explanation: In Python, definite loops seem to be the norm, while in C indefinite loops are used more often. I encourage you to stick to definite loops, as indefinite ones can be unruly and a runaway loop can crash your computer quite easily. Strings A "String" is a list of letters and characters. 
Strings in Python behave in a similar way to lists, treating each character as an entry:
End of explanation
"""
# Define a string using quotes
x = 'Hello! I am a string!'
print(x)
# The type of quote is irrelevant
x = "Hello! I am a string!"
print(x)
# Reference strings in the same way as a list,
# but indices refer to position in the string
print(x[0])
print(x[:5])
# Strings support arithmetic operations similar to lists
print(x + x)
"""
Explanation: Strings are different from lists, however, in the set of methods that are associated with them:
End of explanation
"""
# Change the case
print(x.upper())
print(x.lower())
# Find the index of a substring
print(x.find('I am'))
print(x[x.find('I am'):x.find('I am') + 4])
"""
Explanation: Strings offer a ton of special methods, so if you are interested in them, check out the <a href="https://docs.python.org/3/library/stdtypes.html#text-sequence-type-str" target="_blank">official documentation</a>.
Tuple
A tuple is an immutable datatype. That is, once it is defined it cannot be changed. Tuples are often used for defining things like global parameters. In the case where you need to do a lot of analysis, defining a variable you do not want to change as a tuple will protect you from making a mistake.
End of explanation
"""
# Defining a tuple with or without parentheses
tup = 'a', 'b'
tup = ('a', 'b')
# Tuples can contain different data
tup = 'a', 2
# Trying to modify a tuple will cause an error
tup[0] = 1
"""
Explanation: Dictionaries
Dictionaries are, frankly, a mystery to me. There is a very nice discussion of their uses on the <a href="http://openbookproject.net/thinkcs/python/english3e/dictionaries.html" target="_blank">Think Like a Computer Scientist open book project page</a>. As an economist you probably won't need them very often, but you will have to interact with them.
It's for this reason that I include them here, but if you want to learn more, check out the <a href="https://docs.python.org/3/library/stdtypes.html#mapping-types-dict" target="_blank">documenation</a>. End of explanation """
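To close out the dictionary section: the grade-averaging loop above can also be written as a dictionary comprehension, combining the list-comprehension idea from earlier with the dict type. This sketch reuses the same toy data:

```python
# Same toy data as in the dictionary example above.
students_grades = {"Joe": [10., 15., 12.],
                   "Jane": [12., 16., 14.],
                   "Nick": [8., 6., 6.]}

# A dict comprehension builds the averages in a single expression,
# mirroring the explicit for loop shown earlier.
students_averages = {student: sum(grades) / len(grades)
                     for student, grades in students_grades.items()}
print(students_averages)
```

Just like a list comprehension, this replaces the "create an empty container, then fill it in a loop" pattern with a single readable expression.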
uber/pyro
tutorial/source/modules.ipynb
apache-2.0
import os import torch import torch.nn as nn import pyro import pyro.distributions as dist import pyro.poutine as poutine from torch.distributions import constraints from pyro.nn import PyroModule, PyroParam, PyroSample from pyro.nn.module import to_pyro_module_ from pyro.infer import SVI, Trace_ELBO from pyro.infer.autoguide import AutoNormal from pyro.optim import Adam smoke_test = ('CI' in os.environ) assert pyro.__version__.startswith('1.7.0') """ Explanation: Modules in Pyro This tutorial introduces PyroModule, Pyro's Bayesian extension of PyTorch's nn.Module class. Before starting you should understand the basics of Pyro models and inference, understand the two primitives pyro.sample() and pyro.param(), and understand the basics of Pyro's effect handlers (e.g. by browsing minipyro.py). Summary: PyroModules are like nn.Modules but allow Pyro effects for sampling and constraints. PyroModule is a mixin subclass of nn.Module that overrides attribute access (e.g. .__getattr__()). There are three different ways to create a PyroModule: create a new subclass: class MyModule(PyroModule): ..., Pyro-ize an existing class: MyModule = PyroModule[OtherModule], or Pyro-ize an existing nn.Module instance in-place: to_pyro_module_(my_module). Usual nn.Parameter attributes of a PyroModule become Pyro parameters. Parameters of a PyroModule synchronize with Pyro's global param store. You can add constrained parameters by creating PyroParam objects. You can add stochastic attributes by creating PyroSample objects. Parameters and stochastic attributes are named automatically (no string required). PyroSample attributes are sampled once per .__call__() of the outermost PyroModule. To enable Pyro effects on methods other than .__call__(), decorate them with @pyro_method. A PyroModule model may contain nn.Module attributes. An nn.Module model may contain at most one PyroModule attribute (see naming section). An nn.Module may contain both a PyroModule model and PyroModule guide (e.g. 
Predictive). Table of Contents How PyroModule works How to create a PyroModule How effects work How to constrain parameters How to make a PyroModule Bayesian Caution: accessing attributes inside plates How to create a complex nested PyroModule How naming works Caution: avoiding duplicate names End of explanation """ class Linear(nn.Module): def __init__(self, in_size, out_size): super().__init__() self.weight = nn.Parameter(torch.randn(in_size, out_size)) self.bias = nn.Parameter(torch.randn(out_size)) def forward(self, input_): return self.bias + input_ @ self.weight linear = Linear(5, 2) assert isinstance(linear, nn.Module) assert not isinstance(linear, PyroModule) example_input = torch.randn(100, 5) example_output = linear(example_input) assert example_output.shape == (100, 2) """ Explanation: How PyroModule works <a class="anchor" id="How-PyroModule-works"></a> PyroModule aims to combine Pyro's primitives and effect handlers with PyTorch's nn.Module idiom, thereby enabling Bayesian treatment of existing nn.Modules and enabling model serving via jit.trace_module. Before you start using PyroModules it will help to understand how they work, so you can avoid pitfalls. PyroModule is a subclass of nn.Module. PyroModule enables Pyro effects by inserting effect handling logic on module attribute access, overriding the .__getattr__(), .__setattr__(), and .__delattr__() methods. Additionally, because some effects (like sampling) apply only once per model invocation, PyroModule overrides the .__call__() method to ensure samples are generated at most once per .__call__() invocation (note nn.Module subclasses typically implement a .forward() method that is called by .__call__()). How to create a PyroModule <a class="anchor" id="How-to-create-a-PyroModule"></a> There are three ways to create a PyroModule. 
Let's start with a nn.Module that is not a PyroModule: End of explanation """ class PyroLinear(Linear, PyroModule): pass linear = PyroLinear(5, 2) assert isinstance(linear, nn.Module) assert isinstance(linear, Linear) assert isinstance(linear, PyroModule) example_input = torch.randn(100, 5) example_output = linear(example_input) assert example_output.shape == (100, 2) """ Explanation: The first way to create a PyroModule is to create a subclass of PyroModule. You can update any nn.Module you've written to be a PyroModule, e.g. diff - class Linear(nn.Module): + class Linear(PyroModule): def __init__(self, in_size, out_size): super().__init__() self.weight = ... self.bias = ... ... Alternatively if you want to use third-party code like the Linear above you can subclass it, using PyroModule as a mixin class End of explanation """ linear = PyroModule[Linear](5, 2) assert isinstance(linear, nn.Module) assert isinstance(linear, Linear) assert isinstance(linear, PyroModule) example_input = torch.randn(100, 5) example_output = linear(example_input) assert example_output.shape == (100, 2) """ Explanation: The second way to create a PyroModule is to use bracket syntax PyroModule[-] to automatically denote a trivial mixin class as above. diff - linear = Linear(5, 2) + linear = PyroModule[Linear](5, 2) In our case we can write End of explanation """ linear = Linear(5, 2) assert isinstance(linear, nn.Module) assert not isinstance(linear, PyroModule) to_pyro_module_(linear) # this operates in-place assert isinstance(linear, nn.Module) assert isinstance(linear, Linear) assert isinstance(linear, PyroModule) example_input = torch.randn(100, 5) example_output = linear(example_input) assert example_output.shape == (100, 2) """ Explanation: The one difference between manual subclassing and using PyroModule[-] is that PyroModule[-] also ensures all nn.Module superclasses also become PyroModules, which is important for class hierarchies in library code. 
For example since nn.GRU is a subclass of nn.RNN, also PyroModule[nn.GRU] will be a subclass of PyroModule[nn.RNN]. The third way to create a PyroModule is to change the type of an existing nn.Module instance in-place using to_pyro_module_(). This is useful if you're using a third-party module factory helper or updating an existing script, e.g. End of explanation """ pyro.clear_param_store() # This is not traced: linear = Linear(5, 2) with poutine.trace() as tr: linear(example_input) print(type(linear).__name__) print(list(tr.trace.nodes.keys())) print(list(pyro.get_param_store().keys())) # Now this is traced: to_pyro_module_(linear) with poutine.trace() as tr: linear(example_input) print(type(linear).__name__) print(list(tr.trace.nodes.keys())) print(list(pyro.get_param_store().keys())) """ Explanation: How effects work <a class="anchor" id="How-effects-work"></a> So far we've created PyroModules but haven't made use of Pyro effects. But already the nn.Parameter attributes of our PyroModules act like pyro.param statements: they synchronize with Pyro's param store, and they can be recorded in traces. End of explanation """ print("params before:", [name for name, _ in linear.named_parameters()]) linear.bias = PyroParam(torch.randn(2).exp(), constraint=constraints.positive) print("params after:", [name for name, _ in linear.named_parameters()]) print("bias:", linear.bias) example_input = torch.randn(100, 5) example_output = linear(example_input) assert example_output.shape == (100, 2) """ Explanation: How to constrain parameters <a class="anchor" id="How-to-constrain-parameters"></a> Pyro parameters allow constraints, and often we want our nn.Module parameters to obey constraints. You can constrain a PyroModule's parameters by replacing nn.Parameter with a PyroParam attribute. 
For example to ensure the .bias attribute is positive, we can set it to End of explanation """ print("params before:", [name for name, _ in linear.named_parameters()]) linear.weight = PyroSample(dist.Normal(0, 1).expand([5, 2]).to_event(2)) print("params after:", [name for name, _ in linear.named_parameters()]) print("weight:", linear.weight) print("weight:", linear.weight) example_input = torch.randn(100, 5) example_output = linear(example_input) assert example_output.shape == (100, 2) """ Explanation: Now PyTorch will optimize the .bias_unconstrained parameter, and each time we access the .bias attribute it will read and transform the .bias_unconstrained parameter (similar to a Python @property). If you know the constraint beforehand, you can build it into the module constructor, e.g. diff class Linear(PyroModule): def __init__(self, in_size, out_size): super().__init__() self.weight = ... - self.bias = nn.Parameter(torch.randn(out_size)) + self.bias = PyroParam(torch.randn(out_size).exp(), + constraint=constraints.positive) ... How to make a PyroModule Bayesian <a class="anchor" id="How-to-make-a-PyroModule-Bayesian"></a> So far our Linear module is still deterministic. To make it randomized and Bayesian, we'll replace nn.Parameter and PyroParam attributes with PyroSample attributes, specifying a prior. Let's put a simple prior over the weights, taking care to expand its shape to [5,2] and declare event dimensions with .to_event() (as explained in the tensor shapes tutorial). End of explanation """ with poutine.trace() as tr: linear(example_input) for site in tr.trace.nodes.values(): print(site["type"], site["name"], site["value"]) """ Explanation: Notice that the .weight parameter now disappears, and each time we call linear() a new weight is sampled from the prior. In fact, the weight is sampled when the Linear.forward() accesses the .weight attribute: this attribute now has the special behavior of sampling from the prior. 
We can see all the Pyro effects that appear in the trace: End of explanation """ class BayesianLinear(PyroModule): def __init__(self, in_size, out_size): super().__init__() self.bias = PyroSample( prior=dist.LogNormal(0, 1).expand([out_size]).to_event(1)) self.weight = PyroSample( prior=dist.Normal(0, 1).expand([in_size, out_size]).to_event(2)) def forward(self, input): return self.bias + input @ self.weight # this line samples bias and weight """ Explanation: So far we've modified a third-party module to be Bayesian py linear = Linear(...) to_pyro_module_(linear) linear.bias = PyroParam(...) linear.weight = PyroSample(...) If you are creating a model from scratch, you could instead define a new class End of explanation """ class NormalModel(PyroModule): def __init__(self): super().__init__() self.loc = PyroSample(dist.Normal(0, 1)) class GlobalModel(NormalModel): def forward(self, data): # If .loc is accessed (for the first time) outside the plate, # then it will have empty shape (). loc = self.loc assert loc.shape == () with pyro.plate("data", len(data)): pyro.sample("obs", dist.Normal(loc, 1), obs=data) class LocalModel(NormalModel): def forward(self, data): with pyro.plate("data", len(data)): # If .loc is accessed (for the first time) inside the plate, # then it will be expanded by the plate to shape (plate.size,). loc = self.loc assert loc.shape == (len(data),) pyro.sample("obs", dist.Normal(loc, 1), obs=data) data = torch.randn(10) LocalModel()(data) GlobalModel()(data) """ Explanation: Note that samples are drawn at most once per .__call__() invocation, for example py class BayesianLinear(PyroModule): ... def forward(self, input): weight1 = self.weight # Draws a sample. weight2 = self.weight # Reads previous sample. assert weight2 is weight1 # All accesses should agree. ... 
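The once-per-invocation behavior can be mimicked with a toy pure-Python analogue. This is not Pyro's implementation, just a sketch of the observable semantics, in which an attribute draws on first access and returns the cached draw for the rest of the call:

```python
import random

class ToyModule:
    """Toy analogue of PyroModule's per-call caching of PyroSample
    attributes: each attribute is drawn at most once per __call__."""

    def __init__(self):
        self._cache = {}

    @property
    def weight(self):
        if "weight" not in self._cache:
            self._cache["weight"] = random.gauss(0.0, 1.0)  # "draw a sample"
        return self._cache["weight"]

    def __call__(self):
        self._cache.clear()  # a new invocation starts with fresh draws
        w1 = self.weight     # draws
        w2 = self.weight     # reads the cached draw
        assert w1 == w2      # all accesses within one call agree
        return w1

m = ToyModule()
a = m()
b = m()  # a second call re-draws, so a and b almost surely differ
print(a, b)
```

Pyro manages this cache internally per invocation; the toy above only mimics the behavior you observe from outside.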
⚠ Caution: accessing attributes inside plates <a class="anchor" id="⚠-Caution:-accessing-attributes-inside-plates"></a>
Because PyroSample and PyroParam attributes are modified by Pyro effects, we need to take care where parameters are accessed. For example, pyro.plate contexts can change the shape of sample and param sites. Consider a model with one latent variable and a batched observation statement. We see that the only difference between these two models is where the .loc attribute is accessed.
End of explanation
"""
class NormalModel(PyroModule):
    def __init__(self):
        super().__init__()
        self.loc = PyroSample(dist.Normal(0, 1))

class GlobalModel(NormalModel):
    def forward(self, data):
        # If .loc is accessed (for the first time) outside the plate,
        # then it will have empty shape ().
        loc = self.loc
        assert loc.shape == ()
        with pyro.plate("data", len(data)):
            pyro.sample("obs", dist.Normal(loc, 1), obs=data)

class LocalModel(NormalModel):
    def forward(self, data):
        with pyro.plate("data", len(data)):
            # If .loc is accessed (for the first time) inside the plate,
            # then it will be expanded by the plate to shape (plate.size,).
            loc = self.loc
            assert loc.shape == (len(data),)
            pyro.sample("obs", dist.Normal(loc, 1), obs=data)

data = torch.randn(10)
LocalModel()(data)
GlobalModel()(data)
"""
Explanation: How to create a complex nested PyroModule <a class="anchor" id="How-to-create-a-complex-nested-PyroModule"></a>
To perform inference with the above BayesianLinear module we'll need to wrap it in a probabilistic model with a likelihood; that wrapper will also be a PyroModule.
End of explanation
"""
class Model(PyroModule):
    def __init__(self, in_size, out_size):
        super().__init__()
        self.linear = BayesianLinear(in_size, out_size)  # this is a PyroModule
        self.obs_scale = PyroSample(dist.LogNormal(0, 1))

    def forward(self, input, output=None):
        obs_loc = self.linear(input)  # this samples linear.bias and linear.weight
        obs_scale = self.obs_scale    # this samples self.obs_scale
        with pyro.plate("instances", len(input)):
            return pyro.sample("obs",
                               dist.Normal(obs_loc, obs_scale).to_event(1),
                               obs=output)
"""
Explanation: Whereas a usual nn.Module can be trained with a simple PyTorch optimizer, a Pyro model requires probabilistic inference, e.g. using SVI and an AutoNormal guide. See the bayesian regression tutorial for details.
End of explanation """ class Model(PyroModule): def __init__(self): super().__init__() self.dof = PyroSample(dist.Gamma(3, 1)) self.loc = PyroSample(dist.Normal(0, 1)) self.scale = PyroSample(lambda self: dist.InverseGamma(self.dof, 1)) self.x = PyroSample(lambda self: dist.Normal(self.loc, self.scale)) def forward(self): return self.x Model()() """ Explanation: PyroSample statements may also depend on other sample statements or parameters. In this case the prior can be a callable depending on self, rather than a constant distribution. For example consider the hierarchical model End of explanation """ with poutine.trace() as tr: model(x) for site in tr.trace.nodes.values(): print(site["type"], site["name"], site["value"].shape) """ Explanation: How naming works <a class="anchor" id="How-naming-works"></a> In the above code we saw a BayesianLinear model embedded inside another Model. Both were PyroModules. Whereas simple pyro.sample statements require name strings, PyroModule attributes handle naming automatically. Let's see how that works with the above model and guide (since AutoNormal is also a PyroModule). Let's trace executions of the model and the guide. End of explanation """ with poutine.trace() as tr: guide(x) for site in tr.trace.nodes.values(): print(site["type"], site["name"], site["value"].shape) """ Explanation: Observe that model.linear.bias corresponds to the linear.bias name, and similarly for the model.linear.weight and model.obs_scale attributes. The "instances" site corresponds to the plate, and the "obs" site corresponds to the likelihood. Next examine the guide: End of explanation """
GoogleCloudPlatform/ml-design-patterns
04_hacking_training_loop/distribution_strategies.ipynb
apache-2.0
import datetime import os import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow import feature_column as fc # Determine CSV, label, and key columns # Create list of string column headers, make sure order matches. CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks", "mother_race"] # Add string name for label column LABEL_COLUMN = "weight_pounds" # Set default values for each CSV column as a list of lists. # Treat is_male and plurality as strings. DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0], ["null"]] def features_and_labels(row_data): """Splits features and labels from feature dictionary. Args: row_data: Dictionary of CSV column names and tensor values. Returns: Dictionary of feature tensors and label tensor. """ label = row_data.pop(LABEL_COLUMN) return row_data, label def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL): """Loads dataset using the tf.data API from CSV files. Args: pattern: str, file pattern to glob into list of files. batch_size: int, the number of examples per batch. mode: tf.estimator.ModeKeys to determine if training or evaluating. Returns: `Dataset` object. """ # Make a CSV dataset dataset = tf.data.experimental.make_csv_dataset( file_pattern=pattern, batch_size=batch_size, column_names=CSV_COLUMNS, column_defaults=DEFAULTS) # Map dataset to features and label dataset = dataset.map(map_func=features_and_labels) # features, label # Shuffle and repeat for training if mode == tf.estimator.ModeKeys.TRAIN: dataset = dataset.shuffle(buffer_size=1000).repeat() # Take advantage of multi-threading; 1=AUTOTUNE dataset = dataset.prefetch(buffer_size=1) return dataset """ Explanation: Distribution Strategy Design Pattern This notebook demonstrates how to use distributed training with Keras. End of explanation """ def create_input_layers(): """Creates dictionary of input layers for each feature. 
    Returns:
        Dictionary of `tf.keras.layers.Input` layers for each feature.
    """
    inputs = {
        colname: tf.keras.layers.Input(
            name=colname, shape=(), dtype="float32")
        for colname in ["mother_age", "gestation_weeks"]}

    inputs.update({
        colname: tf.keras.layers.Input(
            name=colname, shape=(), dtype="string")
        for colname in ["is_male", "plurality", "mother_race"]})

    return inputs
"""
Explanation: Build model as before.
End of explanation
"""
def categorical_fc(name, values):
    cat_column = fc.categorical_column_with_vocabulary_list(
        key=name, vocabulary_list=values)

    return fc.indicator_column(categorical_column=cat_column)

def create_feature_columns():
    feature_columns = {
        colname : fc.numeric_column(key=colname)
        for colname in ["mother_age", "gestation_weeks"]
    }

    feature_columns["is_male"] = categorical_fc(
        "is_male", ["True", "False", "Unknown"])
    feature_columns["plurality"] = categorical_fc(
        "plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
                      "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
    feature_columns["mother_race"] = fc.indicator_column(
        fc.categorical_column_with_hash_bucket(
            "mother_race", hash_bucket_size=17, dtype=tf.dtypes.string))

    feature_columns["gender_x_plurality"] = fc.embedding_column(
        fc.crossed_column(["is_male", "plurality"], hash_bucket_size=18),
        dimension=2)

    return feature_columns

def get_model_outputs(inputs):
    # Create two hidden layers of [64, 32] just like the BQML DNN
    h1 = layers.Dense(64, activation="relu", name="h1")(inputs)
    h2 = layers.Dense(32, activation="relu", name="h2")(h1)

    # Final output is a linear activation because this is regression
    output = layers.Dense(units=1, activation="linear", name="weight")(h2)

    return output

def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))
"""
Explanation: And set up feature columns.
End of explanation
"""
def build_dnn_model():
    """Builds simple DNN using Keras Functional API.

    Returns:
        `tf.keras.models.Model` object.
""" # Create input layer inputs = create_input_layers() # Create feature columns feature_columns = create_feature_columns() # The constructor for DenseFeatures takes a list of numeric columns # The Functional API in Keras requires: LayerConstructor()(inputs) dnn_inputs = layers.DenseFeatures( feature_columns=feature_columns.values())(inputs) # Get output of model given inputs output = get_model_outputs(dnn_inputs) # Build model and compile it all together model = tf.keras.models.Model(inputs=inputs, outputs=output) model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"]) return model # Create the distribution strategy mirrored_strategy = tf.distribute.MirroredStrategy() with mirrored_strategy.scope(): model = build_dnn_model() print("Here is our DNN architecture so far:\n") print(model.summary()) """ Explanation: Build the model and set up distribution strategy Next, we'll combine the components of the model above to build the DNN model. Here is also where we'll define the distribution strategy. To do that, we'll place the building of the model inside the scope of the distribution strategy. Notice the output after excuting the cell below. We'll see INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3') This indicates that we're using the MirroredStrategy on 4 GPUs. That is because my machine has 4 GPUs. Your output may look different depending on how many GPUs you have on your device. End of explanation """ print('Number of devices: {}'.format(mirrored_strategy.num_replicas_in_sync)) """ Explanation: To see how many GPU devices you have attached to your machine, run the cell below. As mentioned above, I have 4. End of explanation """
tensorflow/lucid
notebooks/activation-atlas/activation-atlas-simple.ipynb
apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip -q install "lucid>=0.3.8"
!pip -q install "umap-learn>=0.3.7"

# General support
import math
import tensorflow as tf
import numpy as np

# For plots
import matplotlib.pyplot as plt

# Dimensionality reduction
import umap
from sklearn.manifold import TSNE

# General lucid code
from lucid.misc.io import save, show, load
import lucid.modelzoo.vision_models as models

# For rendering feature visualizations
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
"""
Explanation: Simple Activation Atlas
This notebook uses Lucid to reproduce the results in Activation Atlas. This notebook doesn't introduce the abstractions behind lucid; you may wish to also read the Lucid tutorial.
Note: The easiest way to use this tutorial is as a colab notebook, which allows you to dive in with no setup.
We recommend you enable a free GPU by going: Runtime   →   Change runtime type   →   Hardware Accelerator: GPU Install and imports End of explanation """ model = models.InceptionV1() model.load_graphdef() # model.layers[7] is "mixed4c" layer = "mixed4c" print(model.layers[7]) raw_activations = model.layers[7].activations activations = raw_activations[:100000] print(activations.shape) """ Explanation: Load model and activations End of explanation """ def whiten(full_activations): correl = np.matmul(full_activations.T, full_activations) / len(full_activations) correl = correl.astype("float32") S = np.linalg.inv(correl) S = S.astype("float32") return S S = whiten(raw_activations) """ Explanation: Whiten End of explanation """ def normalize_layout(layout, min_percentile=1, max_percentile=99, relative_margin=0.1): """Removes outliers and scales layout to between [0,1].""" # compute percentiles mins = np.percentile(layout, min_percentile, axis=(0)) maxs = np.percentile(layout, max_percentile, axis=(0)) # add margins mins -= relative_margin * (maxs - mins) maxs += relative_margin * (maxs - mins) # `clip` broadcasts, `[None]`s added only for readability clipped = np.clip(layout, mins, maxs) # embed within [0,1] along both axes clipped -= clipped.min(axis=0) clipped /= clipped.max(axis=0) return clipped layout = umap.UMAP(n_components=2, verbose=True, n_neighbors=20, min_dist=0.01, metric="cosine").fit_transform(activations) ## You can optionally use TSNE as well # layout = TSNE(n_components=2, verbose=True, metric="cosine", learning_rate=10, perplexity=50).fit_transform(d) layout = normalize_layout(layout) plt.figure(figsize=(10, 10)) plt.scatter(x=layout[:,0],y=layout[:,1], s=2) plt.show() """ Explanation: Dimensionality reduction End of explanation """ # # Whitened, euclidean neuron objective # @objectives.wrap_objective def direction_neuron_S(layer_name, vec, batch=None, x=None, y=None, S=None): def inner(T): layer = T(layer_name) shape = tf.shape(layer) x_ = shape[1] 
// 2 if x is None else x y_ = shape[2] // 2 if y is None else y if batch is None: raise RuntimeError("requires batch") acts = layer[batch, x_, y_] vec_ = vec if S is not None: vec_ = tf.matmul(vec_[None], S)[0] # mag = tf.sqrt(tf.reduce_sum(acts**2)) dot = tf.reduce_mean(acts * vec_) # cossim = dot/(1e-4 + mag) return dot return inner # # Whitened, cosine similarity objective # @objectives.wrap_objective def direction_neuron_cossim_S(layer_name, vec, batch=None, x=None, y=None, cossim_pow=1, S=None): def inner(T): layer = T(layer_name) shape = tf.shape(layer) x_ = shape[1] // 2 if x is None else x y_ = shape[2] // 2 if y is None else y if batch is None: raise RuntimeError("requires batch") acts = layer[batch, x_, y_] vec_ = vec if S is not None: vec_ = tf.matmul(vec_[None], S)[0] mag = tf.sqrt(tf.reduce_sum(acts**2)) dot = tf.reduce_mean(acts * vec_) cossim = dot/(1e-4 + mag) cossim = tf.maximum(0.1, cossim) return dot * cossim ** cossim_pow return inner # # Renders a batch of activations as icons # def render_icons(directions, model, layer, size=80, n_steps=128, verbose=False, S=None, num_attempts=2, cossim=True, alpha=True): image_attempts = [] loss_attempts = [] # Render multiple attempts, and pull the one with the lowest loss score. 
for attempt in range(num_attempts): # Render an image for each activation vector param_f = lambda: param.image(size, batch=directions.shape[0], fft=True, decorrelate=True, alpha=alpha) if(S is not None): if(cossim is True): obj_list = ([ direction_neuron_cossim_S(layer, v, batch=n, S=S, cossim_pow=4) for n,v in enumerate(directions) ]) else: obj_list = ([ direction_neuron_S(layer, v, batch=n, S=S) for n,v in enumerate(directions) ]) else: obj_list = ([ objectives.direction_neuron(layer, v, batch=n) for n,v in enumerate(directions) ]) obj = objectives.Objective.sum(obj_list) transforms = [] if alpha: transforms.append(transform.collapse_alpha_random()) transforms.append(transform.pad(2, mode='constant', constant_value=1)) transforms.append(transform.jitter(4)) transforms.append(transform.jitter(4)) transforms.append(transform.jitter(8)) transforms.append(transform.jitter(8)) transforms.append(transform.jitter(8)) transforms.append(transform.random_scale([0.995**n for n in range(-5,80)] + [0.998**n for n in 2*list(range(20,40))])) transforms.append(transform.random_rotate(list(range(-20,20))+list(range(-10,10))+list(range(-5,5))+5*[0])) transforms.append(transform.jitter(2)) # This is the tensorflow optimization process. # We can't use the lucid helpers here because we need to know the loss. 
print("attempt: ", attempt) with tf.Graph().as_default(), tf.Session() as sess: learning_rate = 0.05 losses = [] trainer = tf.train.AdamOptimizer(learning_rate) T = render.make_vis_T(model, obj, param_f, trainer, transforms) loss_t, vis_op, t_image = T("loss"), T("vis_op"), T("input") losses_ = [obj_part(T) for obj_part in obj_list] tf.global_variables_initializer().run() for i in range(n_steps): loss, _ = sess.run([losses_, vis_op]) losses.append(loss) if (i % 100 == 0): print(i) img = t_image.eval() img_rgb = img[:,:,:,:3] if alpha: print("alpha true") k = 0.8 bg_color = 0.0 img_a = img[:,:,:,3:] img_merged = img_rgb*((1-k)+k*img_a) + bg_color * k*(1-img_a) image_attempts.append(img_merged) else: print("alpha false") image_attempts.append(img_rgb) loss_attempts.append(losses[-1]) # Use the icon with the lowest loss loss_attempts = np.asarray(loss_attempts) loss_final = [] image_final = [] print("Merging best scores from attempts...") for i, d in enumerate(directions): # note, this should be max, it is not a traditional loss mi = np.argmax(loss_attempts[:,i]) image_final.append(image_attempts[mi][i]) return (image_final, loss_final) """ Explanation: Feature visualization End of explanation """ # # Takes a list of x,y layout and bins them into grid cells # def grid(xpts=None, ypts=None, grid_size=(8,8), x_extent=(0., 1.), y_extent=(0., 1.)): xpx_length = grid_size[0] ypx_length = grid_size[1] xpt_extent = x_extent ypt_extent = y_extent xpt_length = xpt_extent[1] - xpt_extent[0] ypt_length = ypt_extent[1] - ypt_extent[0] xpxs = ((xpts - xpt_extent[0]) / xpt_length) * xpx_length ypxs = ((ypts - ypt_extent[0]) / ypt_length) * ypx_length ix_s = range(grid_size[0]) iy_s = range(grid_size[1]) xs = [] for xi in ix_s: ys = [] for yi in iy_s: xpx_extent = (xi, (xi + 1)) ypx_extent = (yi, (yi + 1)) in_bounds_x = np.logical_and(xpx_extent[0] <= xpxs, xpxs <= xpx_extent[1]) in_bounds_y = np.logical_and(ypx_extent[0] <= ypxs, ypxs <= ypx_extent[1]) in_bounds = 
np.logical_and(in_bounds_x, in_bounds_y) in_bounds_indices = np.where(in_bounds)[0] ys.append(in_bounds_indices) xs.append(ys) return np.asarray(xs) def render_layout(model, layer, S, xs, ys, activ, n_steps=512, n_attempts=2, min_density=10, grid_size=(10, 10), icon_size=80, x_extent=(0., 1.0), y_extent=(0., 1.0)): grid_layout = grid(xpts=xs, ypts=ys, grid_size=grid_size, x_extent=x_extent, y_extent=y_extent) icons = [] for x in range(grid_size[0]): for y in range(grid_size[1]): indices = grid_layout[x, y] if len(indices) > min_density: average_activation = np.average(activ[indices], axis=0) icons.append((average_activation, x, y)) icons = np.asarray(icons) icon_batch, losses = render_icons(icons[:,0], model, alpha=False, layer=layer, S=S, n_steps=n_steps, size=icon_size, num_attempts=n_attempts) canvas = np.ones((icon_size * grid_size[0], icon_size * grid_size[1], 3)) for i, icon in enumerate(icon_batch): y = int(icons[i, 1]) x = int(icons[i, 2]) canvas[(grid_size[0] - x - 1) * icon_size:(grid_size[0] - x) * icon_size, (y) * icon_size:(y + 1) * icon_size] = icon return canvas # # Given a layout, renders an icon for the average of all the activations in each grid cell. # xs = layout[:, 0] ys = layout[:, 1] canvas = render_layout(model, layer, S, xs, ys, raw_activations, n_steps=512, grid_size=(20, 20), n_attempts=1) show(canvas) """ Explanation: Grid End of explanation """
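The cell-membership test in grid() is essentially a 2-D histogram: each normalized (x, y) point maps to an integer cell index, and points sharing an index are averaged together. The same idea in a few lines of numpy, on toy points and a hypothetical 4x4 grid:

```python
from collections import defaultdict

import numpy as np

pts = np.array([[0.10, 0.10], [0.12, 0.15], [0.90, 0.90]])  # toy layout in [0,1]^2
grid_size = (4, 4)
# Scale into cell coordinates and clamp so points at 1.0 land in the last cell.
cells = np.clip((pts * grid_size).astype(int), 0, np.array(grid_size) - 1)
members = defaultdict(list)
for i, (cx, cy) in enumerate(cells.tolist()):
    members[(cx, cy)].append(i)
print(dict(members))  # {(0, 0): [0, 1], (3, 3): [2]}
```

Averaging the activation vectors of each cell's members then gives the direction that the icon for that cell visualizes.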
metpy/MetPy
v1.0/_downloads/0c4dbfdebeb6fcd2f5364a69f0c6d4a8/Skew-T_Layout.ipynb
bsd-3-clause
import matplotlib.gridspec as gridspec import matplotlib.pyplot as plt import pandas as pd import metpy.calc as mpcalc from metpy.cbook import get_test_data from metpy.plots import add_metpy_logo, Hodograph, SkewT from metpy.units import units """ Explanation: Skew-T with Complex Layout Combine a Skew-T and a hodograph using Matplotlib's GridSpec layout capability. End of explanation """ col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed'] df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False), skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names) # Drop any rows with all NaN values for T, Td, winds df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed' ), how='all').reset_index(drop=True) """ Explanation: Upper air data can be obtained using the siphon package, but for this example we will use some of MetPy's sample data. End of explanation """ p = df['pressure'].values * units.hPa T = df['temperature'].values * units.degC Td = df['dewpoint'].values * units.degC wind_speed = df['speed'].values * units.knots wind_dir = df['direction'].values * units.degrees u, v = mpcalc.wind_components(wind_speed, wind_dir) # Create a new figure. The dimensions here give a good aspect ratio fig = plt.figure(figsize=(9, 9)) add_metpy_logo(fig, 630, 80, size='large') # Grid for plots gs = gridspec.GridSpec(3, 3) skew = SkewT(fig, rotation=45, subplot=gs[:, :2]) # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, 'r') skew.plot(p, Td, 'g') skew.plot_barbs(p, u, v) skew.ax.set_ylim(1000, 100) # Add the relevant special lines skew.plot_dry_adiabats() skew.plot_moist_adiabats() skew.plot_mixing_lines() # Good bounds for aspect ratio skew.ax.set_xlim(-30, 40) # Create a hodograph ax = fig.add_subplot(gs[0, -1]) h = Hodograph(ax, component_range=60.) 
h.add_grid(increment=20) h.plot(u, v) # Show the plot plt.show() """ Explanation: We will pull the data out of the example dataset into individual variables and assign units. End of explanation """
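For reference, wind_components implements the standard meteorological convention, where direction is the bearing the wind blows from, measured clockwise from north: u = -speed * sin(theta) and v = -speed * cos(theta). A quick numpy check of the formula (plain floats, without MetPy's unit handling):

```python
import numpy as np

speed = np.array([10.0, 20.0])             # e.g. knots (units dropped here)
theta = np.deg2rad(np.array([0.0, 90.0]))  # wind *from* north, *from* east
u = -speed * np.sin(theta)                 # eastward component
v = -speed * np.cos(theta)                 # northward component
# A wind from the north blows southward (v = -10);
# a wind from the east blows westward (u = -20).
print(u.round(6), v.round(6))
```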
transcranial/keras-js
notebooks/layers/convolutional/Conv2DTranspose.ipynb
mit
data_in_shape = (4, 4, 2) conv = Conv2DTranspose(4, (3,3), strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=False) layer_0 = Input(shape=data_in_shape) layer_1 = conv(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for w in model.get_weights(): np.random.seed(150) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) print('W shape:', weights[0].shape) print('W:', format_decimal(weights[0].ravel().tolist())) # print('b shape:', weights[1].shape) # print('b:', format_decimal(weights[1].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.Conv2DTranspose.0'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: Conv2DTranspose [convolutional.Conv2DTranspose.0] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=False End of explanation """ data_in_shape = (4, 4, 2) conv = Conv2DTranspose(4, (3,3), strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=True) layer_0 = Input(shape=data_in_shape) layer_1 = conv(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for w in model.get_weights(): np.random.seed(151) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) 
print('W shape:', weights[0].shape) print('W:', format_decimal(weights[0].ravel().tolist())) print('b shape:', weights[1].shape) print('b:', format_decimal(weights[1].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.Conv2DTranspose.1'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [convolutional.Conv2DTranspose.1] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=True End of explanation """ data_in_shape = (4, 4, 2) conv = Conv2DTranspose(4, (3,3), strides=(2,2), padding='valid', data_format='channels_last', activation='relu', use_bias=True) layer_0 = Input(shape=data_in_shape) layer_1 = conv(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for w in model.get_weights(): np.random.seed(152) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) print('W shape:', weights[0].shape) print('W:', format_decimal(weights[0].ravel().tolist())) print('b shape:', weights[1].shape) print('b:', format_decimal(weights[1].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) 
print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.Conv2DTranspose.2'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [convolutional.Conv2DTranspose.2] 4 3x3 filters on 4x4x2 input, strides=(2,2), padding='valid', data_format='channels_last', activation='relu', use_bias=True End of explanation """ data_in_shape = (4, 4, 2) conv = Conv2DTranspose(4, (3,3), strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True) layer_0 = Input(shape=data_in_shape) layer_1 = conv(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for w in model.get_weights(): np.random.seed(153) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) print('W shape:', weights[0].shape) print('W:', format_decimal(weights[0].ravel().tolist())) print('b shape:', weights[1].shape) print('b:', format_decimal(weights[1].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.Conv2DTranspose.3'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [convolutional.Conv2DTranspose.3] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='same', 
data_format='channels_last', activation='relu', use_bias=True End of explanation """ data_in_shape = (4, 4, 2) conv = Conv2DTranspose(5, (3,3), strides=(2,2), padding='same', data_format='channels_last', activation='relu', use_bias=True) layer_0 = Input(shape=data_in_shape) layer_1 = conv(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for w in model.get_weights(): np.random.seed(154) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) print('W shape:', weights[0].shape) print('W:', format_decimal(weights[0].ravel().tolist())) print('b shape:', weights[1].shape) print('b:', format_decimal(weights[1].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.Conv2DTranspose.4'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [convolutional.Conv2DTranspose.4] 5 3x3 filters on 4x4x2 input, strides=(2,2), padding='same', data_format='channels_last', activation='relu', use_bias=True End of explanation """ data_in_shape = (4, 4, 2) conv = Conv2DTranspose(3, (2,3), strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True) layer_0 = Input(shape=data_in_shape) layer_1 = conv(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for w in model.get_weights(): np.random.seed(155) weights.append(2 * 
np.random.random(w.shape) - 1) model.set_weights(weights) print('W shape:', weights[0].shape) print('W:', format_decimal(weights[0].ravel().tolist())) print('b shape:', weights[1].shape) print('b:', format_decimal(weights[1].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['convolutional.Conv2DTranspose.5'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [convolutional.Conv2DTranspose.5] 3 2x3 filters on 4x4x2 input, strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True End of explanation """ import os filename = '../../../test/data/layers/convolutional/Conv2DTranspose.json' if not os.path.exists(os.path.dirname(filename)): os.makedirs(os.path.dirname(filename)) with open(filename, 'w') as f: json.dump(DATA, f) print(json.dumps(DATA)) """ Explanation: export for Keras.js tests End of explanation """
tylere/docker-tmpnb-ee
notebooks/1 - IPython Notebook Examples/IPython Project Examples/IPython Kernel/Custom Display Logic.ipynb
apache-2.0
from IPython.display import ( display, display_html, display_png, display_svg ) """ Explanation: Custom Display Logic Overview As described in the Rich Output tutorial, the IPython display system can display rich representations of objects in the following formats: JavaScript HTML PNG JPEG SVG LaTeX PDF This Notebook shows how you can add custom display logic to your own classes, so that they can be displayed using these rich representations. There are two ways of accomplishing this: Implementing special display methods such as _repr_html_ when you define your class. Registering a display function for a particular existing class. This Notebook describes and illustrates both approaches. Import the IPython display functions. End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt """ Explanation: Parts of this notebook need the matplotlib inline backend: End of explanation """ from IPython.core.pylabtools import print_figure from IPython.display import Image, SVG, Math class Gaussian(object): """A simple object holding data sampled from a Gaussian distribution. 
""" def __init__(self, mean=0.0, std=1, size=1000): self.data = np.random.normal(mean, std, size) self.mean = mean self.std = std self.size = size # For caching plots that may be expensive to compute self._png_data = None def _figure_data(self, format): fig, ax = plt.subplots() ax.hist(self.data, bins=50) ax.set_title(self._repr_latex_()) ax.set_xlim(-10.0,10.0) data = print_figure(fig, format) # We MUST close the figure, otherwise IPython's display machinery # will pick it up and send it as output, resulting in a double display plt.close(fig) return data def _repr_png_(self): if self._png_data is None: self._png_data = self._figure_data('png') return self._png_data def _repr_latex_(self): return r'$\mathcal{N}(\mu=%.2g, \sigma=%.2g),\ N=%d$' % (self.mean, self.std, self.size) """ Explanation: Special display methods The main idea of the first approach is that you have to implement special display methods when you define your class, one for each representation you want to use. Here is a list of the names of the special methods and the values they must return: _repr_html_: return raw HTML as a string _repr_json_: return a JSONable dict _repr_jpeg_: return raw JPEG data _repr_png_: return raw PNG data _repr_svg_: return raw SVG data as a string _repr_latex_: return LaTeX commands in a string surrounded by "$". As an illustration, we build a class that holds data generated by sampling a Gaussian distribution with given mean and standard deviation. Here is the definition of the Gaussian class, which has a custom PNG and LaTeX representation. 
End of explanation
"""
x = Gaussian(2.0, 1.0)
x
"""
Explanation: Create an instance of the Gaussian distribution and return it to display the default representation:
End of explanation
"""
display(x)
"""
Explanation: You can also pass the object to the display function to display the default representation:
End of explanation
"""
display_png(x)
"""
Explanation: Use display_png to view the PNG representation:
End of explanation
"""
x2 = Gaussian(0, 2, 2000)
x2
"""
Explanation: <div class="alert alert-success">
It is important to note a subtle difference between <code>display</code> and <code>display_png</code>. The former computes <em>all</em> representations of the object, and lets the notebook UI decide which to display. The latter only computes the PNG representation.
</div>
Create a new Gaussian with different parameters:
End of explanation
"""
display_png(x)
display_png(x2)
"""
Explanation: You can then compare the two Gaussians by displaying their histograms:
End of explanation
"""
p = np.polynomial.Polynomial([1,2,3], [-10, 10])
p
"""
Explanation: Note that like print, you can call any of the display functions multiple times in a cell.
Adding IPython display support to existing objects
When you are directly writing your own classes, you can adapt them for display in IPython by following the above approach. But in practice, you often need to work with existing classes that you can't easily modify. We now illustrate how to add rich output capabilities to existing objects. We will use the NumPy polynomials and change their default representation to be a formatted LaTeX expression.
First, consider how a NumPy polynomial object renders by default:
End of explanation
"""
def poly_to_latex(p):
    terms = ['%.2g' % p.coef[0]]
    if len(p) > 1:
        term = 'x'
        c = p.coef[1]
        if c!=1:
            term = ('%.2g ' % c) + term
        terms.append(term)
    if len(p) > 2:
        for i in range(2, len(p)):
            term = 'x^%d' % i
            c = p.coef[i]
            if c!=1:
                term = ('%.2g ' % c) + term
            terms.append(term)
    px = '$P(x)=%s$' % '+'.join(terms)
    dom = r', $x \in [%.2g,\ %.2g]$' % tuple(p.domain)
    return px+dom
"""
Explanation: Next, define a function that pretty-prints a polynomial as a LaTeX string:
End of explanation
"""
poly_to_latex(p)
"""
Explanation: This produces, on our polynomial p, the following:
End of explanation
"""
from IPython.display import Latex
Latex(poly_to_latex(p))
"""
Explanation: You can render this string using the Latex class:
End of explanation
"""
ip = get_ipython()
for mime, formatter in ip.display_formatter.formatters.items():
    print('%24s : %s' % (mime, formatter.__class__.__name__))
"""
Explanation: However, you can configure IPython to do this automatically by registering the Polynomial class and the poly_to_latex function with an IPython display formatter. Let's look at the default formatters provided by IPython:
End of explanation
"""
ip = get_ipython()
latex_f = ip.display_formatter.formatters['text/latex']
"""
Explanation: The formatters attribute is a dictionary keyed by MIME types. To define a custom LaTeX display function, you want a handle on the text/latex formatter:
End of explanation
"""
help(latex_f.for_type)
help(latex_f.for_type_by_name)
"""
Explanation: The formatter object has a couple of methods for registering custom display functions for existing types.
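The dispatch idea behind these registration methods can be sketched in a few lines of plain Python. The MiniFormatter below is an illustration of the concept only, not IPython's actual implementation: a registry maps types to display functions, and lookup walks a class's method resolution order so subclasses inherit a parent's formatter.

```python
# Conceptual sketch of a type-keyed formatter registry (hypothetical names,
# not IPython's real internals).
class MiniFormatter:
    def __init__(self):
        self._registry = {}

    def for_type(self, cls, func):
        """Register func as the display function for cls and its subclasses."""
        self._registry[cls] = func
        return func

    def __call__(self, obj):
        # Walk the MRO, most specific class first, to find a handler
        for cls in type(obj).__mro__:
            if cls in self._registry:
                return self._registry[cls](obj)
        return repr(obj)  # fall back to the plain repr

fmt = MiniFormatter()
fmt.for_type(int, lambda n: '$%d$' % n)
print(fmt(3))      # dispatched to the int handler
print(fmt('abc'))  # no handler registered, falls back to repr
```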
End of explanation """ latex_f.for_type_by_name('numpy.polynomial.polynomial', 'Polynomial', poly_to_latex) """ Explanation: In this case, we will use for_type_by_name to register poly_to_latex as the display function for the Polynomial type: End of explanation """ p p2 = np.polynomial.Polynomial([-20, 71, -15, 1]) p2 """ Explanation: Once the custom display function has been registered, all NumPy Polynomial instances will be represented by their LaTeX form instead: End of explanation """ import json import uuid from IPython.display import display_javascript, display_html, display class FlotPlot(object): def __init__(self, x, y): self.x = x self.y = y self.uuid = str(uuid.uuid4()) def _ipython_display_(self): json_data = json.dumps(list(zip(self.x, self.y))) display_html('<div id="{}" style="height: 300px; width:80%;"></div>'.format(self.uuid), raw=True ) display_javascript(""" require(["//cdnjs.cloudflare.com/ajax/libs/flot/0.8.2/jquery.flot.min.js"], function() { var line = JSON.parse("%s"); console.log(line); $.plot("#%s", [line]); }); """ % (json_data, self.uuid), raw=True) import numpy as np x = np.linspace(0,10) y = np.sin(x) FlotPlot(x, np.sin(x)) """ Explanation: More complex display with _ipython_display_ Rich output special methods and functions can only display one object or MIME type at a time. Sometimes this is not enough if you want to display multiple objects or MIME types at once. An example of this would be to use an HTML representation to put some HTML elements in the DOM and then use a JavaScript representation to add events to those elements. IPython 2.0 recognizes another display method, _ipython_display_, which allows your objects to take complete control of displaying themselves. If this method is defined, IPython will call it, and make no effort to display the object using the above described _repr_*_ methods for custom display functions. It's a way for you to say "Back off, IPython, I can display this myself." 
Most importantly, your _ipython_display_ method can make multiple calls to the top-level display functions to accomplish its goals. Here is an object that uses display_html and display_javascript to make a plot using the Flot JavaScript plotting library: End of explanation """
Hyperparticle/deep-learning-foundation
lessons/dcgan-svhn/DCGAN.ipynb
mit
%matplotlib inline import pickle as pkl import matplotlib.pyplot as plt import numpy as np from scipy.io import loadmat import tensorflow as tf !mkdir data """ Explanation: Deep Convolutional GANs In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here. You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same. 
End of explanation """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm data_dir = 'data/' if not isdir(data_dir): raise Exception("Data directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(data_dir + "train_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat', data_dir + 'train_32x32.mat', pbar.hook) if not isfile(data_dir + "test_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat', data_dir + 'test_32x32.mat', pbar.hook) """ Explanation: Getting the data Here you can download the SVHN dataset. Run the cell above and it'll download to your machine. End of explanation """ trainset = loadmat(data_dir + 'train_32x32.mat') testset = loadmat(data_dir + 'test_32x32.mat') """ Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above. End of explanation """ idx = np.random.randint(0, trainset['X'].shape[3], size=36) fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),) for ii, ax in zip(idx, axes.flatten()): ax.imshow(trainset['X'][:,:,:,ii], aspect='equal') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) plt.subplots_adjust(wspace=0, hspace=0) """ Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake. 
End of explanation """ def scale(x, feature_range=(-1, 1)): # scale to (0, 1) x = ((x - x.min())/(255 - x.min())) # scale to feature_range min, max = feature_range x = x * (max - min) + min return x class Dataset: def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None): split_idx = int(len(test['y'])*(1 - val_frac)) self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:] self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:] self.train_x, self.train_y = train['X'], train['y'] self.train_x = np.rollaxis(self.train_x, 3) self.valid_x = np.rollaxis(self.valid_x, 3) self.test_x = np.rollaxis(self.test_x, 3) if scale_func is None: self.scaler = scale else: self.scaler = scale_func self.shuffle = shuffle def batches(self, batch_size): if self.shuffle: idx = np.arange(len(dataset.train_x)) np.random.shuffle(idx) self.train_x = self.train_x[idx] self.train_y = self.train_y[idx] n_batches = len(self.train_y)//batch_size for ii in range(0, len(self.train_y), batch_size): x = self.train_x[ii:ii+batch_size] y = self.train_y[ii:ii+batch_size] yield self.scaler(x), y """ Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images. End of explanation """ def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z """ Explanation: Network Inputs Here, just creating some placeholders like normal. 
End of explanation """ def generator(z, output_dim, reuse=False, alpha=0.2, training=True): with tf.variable_scope('generator', reuse=reuse): # First fully connected layer x1 = tf.layers.dense(z, 4*4*512) # Reshape it to start the convolutional stack x1 = tf.reshape(x1, (-1, 4, 4, 512)) x1 = tf.layers.batch_normalization(x1, training=training) x1 = tf.maximum(alpha * x1, x1) # 4x4x512 now x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same') x2 = tf.layers.batch_normalization(x2, training=training) x2 = tf.maximum(alpha * x2, x2) # 8x8x256 now x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same') x3 = tf.layers.batch_normalization(x3, training=training) x3 = tf.maximum(alpha * x3, x3) # 16x16x128 now # Output layer logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same') # 32x32x3 now out = tf.tanh(logits) return out """ Explanation: Generator Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images. What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU. You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper: Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. 
End of explanation
"""
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer
        x1 = tf.layers.dense(z, 4*4*512)
        # Reshape it to start the convolutional stack
        x1 = tf.reshape(x1, (-1, 4, 4, 512))
        x1 = tf.layers.batch_normalization(x1, training=training)
        x1 = tf.maximum(alpha * x1, x1)
        # 4x4x512 now
        x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=training)
        x2 = tf.maximum(alpha * x2, x2)
        # 8x8x256 now
        x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=training)
        x3 = tf.maximum(alpha * x3, x3)
        # 16x16x128 now
        # Output layer
        logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
        # 32x32x3 now
        out = tf.tanh(logits)
        return out
"""
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
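The spatial sizes in the layer comments above follow the usual rule for stride-s transposed convolutions with padding='same': the output size is the input size times the stride. A quick plain-Python sanity check of the 4 → 8 → 16 → 32 progression:

```python
# With padding='same', conv2d_transpose maps spatial size n to n * stride,
# so three stride-2 layers grow the 4x4 seed to 8x8, 16x16, then 32x32.
def transposed_out(n, stride):
    return n * stride

sizes = [4]
for _ in range(3):
    sizes.append(transposed_out(sizes[-1], 2))
print(sizes)  # [4, 8, 16, 32]
```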
End of explanation
"""
def discriminator(x, reuse=False, alpha=0.2):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 32x32x3
        x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
        relu1 = tf.maximum(alpha * x1, x1)
        # 16x16x64
        x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
        bn2 = tf.layers.batch_normalization(x2, training=True)
        relu2 = tf.maximum(alpha * bn2, bn2)
        # 8x8x128
        x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
        bn3 = tf.layers.batch_normalization(x3, training=True)
        relu3 = tf.maximum(alpha * bn3, bn3)
        # 4x4x256
        # Flatten it
        flat = tf.reshape(relu3, (-1, 4*4*256))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
        return out, logits
"""
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, or 64 filters in the first layer, then double the depth as you add layers.
Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
End of explanation """ def model_opt(d_loss, g_loss, learning_rate, beta1): """ Get optimization operations :param d_loss: Discriminator loss Tensor :param g_loss: Generator loss Tensor :param learning_rate: Learning Rate Placeholder :param beta1: The exponential decay rate for the 1st moment in the optimizer :return: A tuple of (discriminator training operation, generator training operation) """ # Get weights and bias to update t_vars = tf.trainable_variables() d_vars = [var for var in t_vars if var.name.startswith('discriminator')] g_vars = [var for var in t_vars if var.name.startswith('generator')] # Optimize with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars) return d_train_opt, g_train_opt """ Explanation: Optimizers Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics. End of explanation """ class GAN: def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5): tf.reset_default_graph() self.input_real, self.input_z = model_inputs(real_size, z_size) self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, real_size[2], alpha=alpha) self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1) """ Explanation: Building the model Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object. 
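Both loss terms above are built from sigmoid cross-entropy computed on logits. The numerically stable form that tf.nn.sigmoid_cross_entropy_with_logits documents, max(x, 0) - x·z + log(1 + exp(-|x|)) for logit x and label z, can be checked against the naive definition in plain Python (no TensorFlow needed):

```python
import math

# Numerically stable sigmoid cross-entropy with logits:
# max(x, 0) - x*z + log(1 + exp(-|x|))
def sigmoid_xent(logit, label):
    return max(logit, 0) - logit * label + math.log(1 + math.exp(-abs(logit)))

# Naive definition for comparison: -z*log(s) - (1-z)*log(1-s) with s = sigmoid(x)
def sigmoid_xent_naive(logit, label):
    s = 1 / (1 + math.exp(-logit))
    return -label * math.log(s) - (1 - label) * math.log(1 - s)

for x, z in [(2.0, 1.0), (-3.0, 0.0), (0.5, 1.0)]:
    assert abs(sigmoid_xent(x, z) - sigmoid_xent_naive(x, z)) < 1e-9
print('stable and naive forms agree')
```

The stable form avoids computing exp of a large positive logit, which is why the real-valued logits (rather than the sigmoid outputs) are passed to the loss.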
End of explanation
"""
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # Get weights and bias to update
    t_vars = tf.trainable_variables()
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
    g_vars = [var for var in t_vars if var.name.startswith('generator')]
    # Optimize
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
    return d_train_opt, g_train_opt
"""
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
"""
class GAN:
    def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
        tf.reset_default_graph()
        self.input_real, self.input_z = model_inputs(real_size, z_size)
        self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, real_size[2], alpha=alpha)
        self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
"""
Explanation: Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
"""
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5

# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)

dataset = Dataset(trainset, testset)

losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))

fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()

_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
"""
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
End of explanation
"""
phobson/statsmodels
examples/notebooks/statespace_local_linear_trend.ipynb
bsd-3-clause
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm
import matplotlib.pyplot as plt
"""
Explanation: State space modeling: Local Linear Trends
This notebook describes how to extend the Statsmodels statespace classes to create and estimate a custom model. Here we develop a local linear trend model.
The Local Linear Trend model has the form (see Durbin and Koopman 2012, Chapter 3.2 for all notation and details):
$$
\begin{align}
y_t & = \mu_t + \varepsilon_t \qquad & \varepsilon_t \sim N(0, \sigma_\varepsilon^2) \\
\mu_{t+1} & = \mu_t + \nu_t + \xi_t & \xi_t \sim N(0, \sigma_\xi^2) \\
\nu_{t+1} & = \nu_t + \zeta_t & \zeta_t \sim N(0, \sigma_\zeta^2)
\end{align}
$$
It is easy to see that this can be cast into state space form as:
$$
\begin{align}
y_t & = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \mu_t \\ \nu_t \end{pmatrix} + \varepsilon_t \\
\begin{pmatrix} \mu_{t+1} \\ \nu_{t+1} \end{pmatrix} & = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{pmatrix} \mu_t \\ \nu_t \end{pmatrix} + \begin{pmatrix} \xi_t \\ \zeta_t \end{pmatrix}
\end{align}
$$
Notice that much of the state space representation is composed of known values; in fact the only parts in which parameters to be estimated appear are in the variance / covariance matrices:
$$
\begin{align}
H_t & = \begin{bmatrix} \sigma_\varepsilon^2 \end{bmatrix} \\
Q_t & = \begin{bmatrix} \sigma_\xi^2 & 0 \\ 0 & \sigma_\zeta^2 \end{bmatrix}
\end{align}
$$
End of explanation
"""
"""
Univariate Local Linear Trend Model
"""
class LocalLinearTrend(sm.tsa.statespace.MLEModel):
    def __init__(self, endog):
        # Model order
        k_states = k_posdef = 2

        # Initialize the statespace
        super(LocalLinearTrend, self).__init__(
            endog, k_states=k_states, k_posdef=k_posdef,
            initialization='approximate_diffuse',
            loglikelihood_burn=k_states
        )

        # Initialize the matrices
        self.ssm['design'] = np.array([1, 0])
        self.ssm['transition'] = np.array([[1, 1], [0, 1]])
        self.ssm['selection'] =
np.eye(k_states)

        # Cache some indices
        self._state_cov_idx = ('state_cov',) + np.diag_indices(k_posdef)

    @property
    def param_names(self):
        return ['sigma2.measurement', 'sigma2.level', 'sigma2.trend']

    @property
    def start_params(self):
        return [np.std(self.endog)]*3

    def transform_params(self, unconstrained):
        return unconstrained**2

    def untransform_params(self, constrained):
        return constrained**0.5

    def update(self, params, *args, **kwargs):
        params = super(LocalLinearTrend, self).update(params, *args, **kwargs)

        # Observation covariance
        self.ssm['obs_cov',0,0] = params[0]

        # State covariance
        self.ssm[self._state_cov_idx] = params[1:]
"""
Explanation: To take advantage of the existing infrastructure, including Kalman filtering and maximum likelihood estimation, we create a new class which extends from statsmodels.tsa.statespace.MLEModel. There are a number of things that must be specified:
k_states, k_posdef: These two parameters must be provided to the base classes in initialization. They inform the statespace model about the size of, respectively, the state vector, above $\begin{pmatrix} \mu_t & \nu_t \end{pmatrix}'$, and the state error vector, above $\begin{pmatrix} \xi_t & \zeta_t \end{pmatrix}'$. Note that the dimension of the endogenous vector does not have to be specified, since it can be inferred from the endog array.
update: The method update, with argument params, must be specified (it is used when fit() is called to calculate the MLE). It takes the parameters and fills them into the appropriate state space matrices. For example, below, the params vector contains variance parameters $\begin{pmatrix} \sigma_\varepsilon^2 & \sigma_\xi^2 & \sigma_\zeta^2\end{pmatrix}$, and the update method must place them in the observation and state covariance matrices. More generally, the parameter vector might be mapped into many different places in all of the statespace matrices.
statespace matrices: by default, all state space matrices (obs_intercept, design, obs_cov, state_intercept, transition, selection, state_cov) are set to zeros. Values that are fixed (like the ones in the design and transition matrices here) can be set in initialization, whereas values that vary with the parameters should be set in the update method. Note that it is easy to forget to set the selection matrix, which is often just the identity matrix (as it is here), but not setting it will lead to a very different model (one where there is not a stochastic component to the transition equation).
start params: start parameters must be set, even if it is just a vector of zeros, although often good start parameters can be found from the data. Maximum likelihood estimation by gradient methods (as employed here) can be sensitive to the starting parameters, so it is important to select good ones if possible. Here it does not matter too much (although, as variances, they shouldn't be set to zero).
initialization: in addition to defined state space matrices, all state space models must be initialized with the mean and variance for the initial distribution of the state vector. If the distribution is known, initialize_known(initial_state, initial_state_cov) can be called, or if the model is stationary (e.g. an ARMA model), initialize_stationary can be used. Otherwise, initialize_approximate_diffuse is a reasonable generic initialization (exact diffuse initialization is not yet available). Since the local linear trend model is not stationary (it is composed of random walks) and since the distribution is not generally known, we use initialize_approximate_diffuse below.
The above are the minimum necessary for a successful model.
There are also a number of things that do not have to be set, but which may be helpful or important for some applications: transform / untransform: when fit is called, the optimizer in the background will use gradient methods to select the parameters that maximize the likelihood function. By default it uses unbounded optimization, which means that it may select any parameter value. In many cases, that is not the desired behavior; variances, for example, cannot be negative. To get around this, the transform method takes the unconstrained vector of parameters provided by the optimizer and returns a constrained vector of parameters used in likelihood evaluation. untransform provides the reverse operation. param_names: this internal method can be used to set names for the estimated parameters so that e.g. the summary provides meaningful names. If not present, parameters are named param0, param1, etc. End of explanation """ import requests from io import BytesIO from zipfile import ZipFile # Download the dataset ck = requests.get('http://staff.feweb.vu.nl/koopman/projects/ckbook/OxCodeAll.zip').content zipped = ZipFile(BytesIO(ck)) df = pd.read_table( BytesIO(zipped.read('OxCodeIntroStateSpaceBook/Chapter_2/NorwayFinland.txt')), skiprows=1, header=None, sep='\s+', engine='python', names=['date','nf', 'ff'] ) """ Explanation: Using this simple model, we can estimate the parameters from a local linear trend model. The following example is from Commandeur and Koopman (2007), section 3.4., modeling motor vehicle fatalities in Finland. 
End of explanation """ # Load Dataset df.index = pd.date_range(start='%d-01-01' % df.date[0], end='%d-01-01' % df.iloc[-1, 0], freq='AS') # Log transform df['lff'] = np.log(df['ff']) # Setup the model mod = LocalLinearTrend(df['lff']) # Fit it using MLE (recall that we are fitting the three variance parameters) res = mod.fit() print(res.summary()) """ Explanation: Since we defined the local linear trend model as extending from MLEModel, the fit() method is immediately available, just as in other Statsmodels maximum likelihood classes. Similarly, the returned results class supports many of the same post-estimation results, like the summary method. End of explanation """ # Perform prediction and forecasting predict = res.get_prediction() forecast = res.get_forecast('2014') fig, ax = plt.subplots(figsize=(10,4)) # Plot the results df['lff'].plot(ax=ax, style='k.', label='Observations') predict.predicted_mean.plot(ax=ax, label='One-step-ahead Prediction') predict_ci = predict.conf_int(alpha=0.05) predict_index = np.arange(len(predict_ci)) ax.fill_between(predict_index[2:], predict_ci.iloc[2:, 0], predict_ci.iloc[2:, 1], alpha=0.1) forecast.predicted_mean.plot(ax=ax, style='r', label='Forecast') forecast_ci = forecast.conf_int() forecast_index = np.arange(len(predict_ci), len(predict_ci) + len(forecast_ci)) ax.fill_between(forecast_index, forecast_ci.iloc[:, 0], forecast_ci.iloc[:, 1], alpha=0.1) # Cleanup the image ax.set_ylim((4, 8)); legend = ax.legend(loc='lower left'); """ Explanation: Finally, we can do post-estimation prediction and forecasting. Notice that the end period can be specified as a date. End of explanation """
Kaggle/learntools
notebooks/ml_explainability/raw/tut3_partial_plots.ipynb
apache-2.0
import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier data = pd.read_csv('../input/fifa-2018-match-statistics/FIFA 2018 Statistics.csv') y = (data['Man of the Match'] == "Yes") # Convert from string "Yes"/"No" to binary feature_names = [i for i in data.columns if data[i].dtype in [np.int64]] X = data[feature_names] train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1) tree_model = DecisionTreeClassifier(random_state=0, max_depth=5, min_samples_split=5).fit(train_X, train_y) """ Explanation: Partial Dependence Plots While feature importance shows what variables most affect predictions, partial dependence plots show how a feature affects predictions. This is useful to answer questions like: Controlling for all other house features, what impact do longitude and latitude have on home prices? To restate this, how would similarly sized houses be priced in different areas? Are predicted health differences between two groups due to differences in their diets, or due to some other factor? If you are familiar with linear or logistic regression models, partial dependence plots can be interpreted similarly to the coefficients in those models. Though, partial dependence plots on sophisticated models can capture more complex patterns than coefficients from simple models. If you aren't familiar with linear or logistic regressions, don't worry about this comparison. We will show a couple examples, explain the interpretation of these plots, and then review the code to create these plots. How it Works Like permutation importance, partial dependence plots are calculated after a model has been fit. The model is fit on real data that has not been artificially manipulated in any way. In our soccer example, teams may differ in many ways. How many passes they made, shots they took, goals they scored, etc. 
At first glance, it seems difficult to disentangle the effect of these features. To see how partial plots separate out the effect of each feature, we start by considering a single row of data. For example, that row of data might represent a team that had the ball 50% of the time, made 100 passes, took 10 shots and scored 1 goal. We will use the fitted model to predict our outcome (probability their player won "man of the match"). But we repeatedly alter the value for one variable to make a series of predictions. We could predict the outcome if the team had the ball only 40% of the time. We then predict with them having the ball 50% of the time. Then predict again for 60%. And so on. We trace out predicted outcomes (on the vertical axis) as we move from small values of ball possession to large values (on the horizontal axis). In this description, we used only a single row of data. Interactions between features may cause the plot for a single row to be atypical. So, we repeat that mental experiment with multiple rows from the original dataset, and we plot the average predicted outcome on the vertical axis. Code Example Model building isn't our focus, so we won't dwell on the data exploration or model building code. End of explanation """
from sklearn import tree
import graphviz

tree_graph = tree.export_graphviz(tree_model, out_file=None, feature_names=feature_names)
graphviz.Source(tree_graph)
""" Explanation: Our first example uses a decision tree, which you can see below. In practice, you'll use more sophisticated models for real-world applications.
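The row-by-row averaging procedure described above can be sketched in plain numpy. The toy model below is a hypothetical stand-in for the fitted soccer model, not part of the tutorial's code:

```python
import numpy as np

def partial_dependence(predict_fn, X, feature, grid):
    """For each grid value, overwrite one feature in every row and
    average the resulting predictions."""
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value
        averages.append(predict_fn(X_mod).mean())
    return np.array(averages)

def toy_predict(X):
    # Hypothetical model: predicted probability increasing in feature 0
    return 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.2 * X[:, 1])))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
grid = np.linspace(-2.0, 2.0, 5)
pd_curve = partial_dependence(toy_predict, X, feature=0, grid=grid)
```

Because the toy model is monotone in feature 0, the resulting curve increases from left to right, which is exactly the kind of shape the library-generated plots below display.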
End of explanation """ from matplotlib import pyplot as plt from pdpbox import pdp, get_dataset, info_plots # Create the data that we will plot pdp_goals = pdp.pdp_isolate(model=tree_model, dataset=val_X, model_features=feature_names, feature='Goal Scored') # plot it pdp.pdp_plot(pdp_goals, 'Goal Scored') plt.show() """ Explanation: As guidance to read the tree: - Leaves with children show their splitting criterion on the top - The pair of values at the bottom show the count of False values and True values for the target respectively, of data points in that node of the tree. Here is the code to create the Partial Dependence Plot using the PDPBox library. End of explanation """ feature_to_plot = 'Distance Covered (Kms)' pdp_dist = pdp.pdp_isolate(model=tree_model, dataset=val_X, model_features=feature_names, feature=feature_to_plot) pdp.pdp_plot(pdp_dist, feature_to_plot) plt.show() """ Explanation: A few items are worth pointing out as you interpret this plot - The y axis is interpreted as change in the prediction from what it would be predicted at the baseline or leftmost value. - A blue shaded area indicates level of confidence From this particular graph, we see that scoring a goal substantially increases your chances of winning "Man of The Match." But extra goals beyond that appear to have little impact on predictions. Here is another example plot: End of explanation """ # Build Random Forest model rf_model = RandomForestClassifier(random_state=0).fit(train_X, train_y) pdp_dist = pdp.pdp_isolate(model=rf_model, dataset=val_X, model_features=feature_names, feature=feature_to_plot) pdp.pdp_plot(pdp_dist, feature_to_plot) plt.show() """ Explanation: This graph seems too simple to represent reality. But that's because the model is so simple. You should be able to see from the decision tree above that this is representing exactly the model's structure. You can easily compare the structure or implications of different models. 
Here is the same plot with a Random Forest model. End of explanation """ # Similar to previous PDP plot except we use pdp_interact instead of pdp_isolate and pdp_interact_plot instead of pdp_isolate_plot features_to_plot = ['Goal Scored', 'Distance Covered (Kms)'] inter1 = pdp.pdp_interact(model=tree_model, dataset=val_X, model_features=feature_names, features=features_to_plot) pdp.pdp_interact_plot(pdp_interact_out=inter1, feature_names=features_to_plot, plot_type='contour') plt.show() """ Explanation: This model thinks you are more likely to win Man of the Match if your players run a total of 100km over the course of the game. Though running much more causes lower predictions. In general, the smooth shape of this curve seems more plausible than the step function from the Decision Tree model. Though this dataset is small enough that we would be careful in how we interpret any model. 2D Partial Dependence Plots If you are curious about interactions between features, 2D partial dependence plots are also useful. An example may clarify this. We will again use the Decision Tree model for this graph. It will create an extremely simple plot, but you should be able to match what you see in the plot to the tree itself. End of explanation """
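The 2D partial dependence plot discussed above is the same averaging trick applied to a pair of features. Here is a plain-numpy sketch using a hypothetical toy model with an explicit interaction term (not the FIFA model):

```python
import numpy as np

def partial_dependence_2d(predict_fn, X, features, grid):
    """Sweep two features jointly over a grid of value pairs and
    average the predictions over all rows."""
    f0, f1 = features
    surface = np.empty((len(grid), len(grid)))
    for i, v0 in enumerate(grid):
        for j, v1 in enumerate(grid):
            X_mod = X.copy()
            X_mod[:, f0] = v0
            X_mod[:, f1] = v1
            surface[i, j] = predict_fn(X_mod).mean()
    return surface

def toy_predict(X):
    # The effect of feature 0 depends on feature 1: an interaction
    return X[:, 0] * X[:, 1] + 0.5 * X[:, 2]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
grid = np.linspace(-1.0, 1.0, 3)
surface = partial_dependence_2d(toy_predict, X, features=(0, 1), grid=grid)
```

A contour plot of such a surface (as pdp_interact_plot produces) shows the interaction directly: the slope along one axis changes with the value on the other axis.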
probml/pyprobml
notebooks/book2/27/gplvm_mocap.ipynb
mit
import matplotlib.pyplot as plt

plt.style.use("seaborn-pastel")
%%capture
%pip install -qq --upgrade git+https://github.com/lawrennd/ods
%pip install -qq --upgrade git+https://github.com/SheffieldML/GPy.git
try:
    import GPy, pods
except ModuleNotFoundError:
    %pip install -qq GPy pods
    import GPy, pods
import numpy as np

np.random.seed(42)
""" Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/gplvm_mocap.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Gaussian process latent variable model for motion capture data http://inverseprobability.com/gpy-gallery/gallery/cmu-mocap-gplvm Author: Aditya Ravuri Setup End of explanation """
subject = "16"
motion = ["02", "21"]
data = pods.datasets.cmu_mocap(subject, motion)
""" Explanation: CMU Mocap Database Motion capture data from the CMU motion capture database (CMU Motion Capture Lab, 2003). You can download any subject and motion from the data set. Here we will download motions 02 and 21 from subject 16. End of explanation """
data["Y"].shape
print(data["citation"])
""" Explanation: The data dictionary contains the keys ‘Y’ and ‘skel’, which represent the data and the skeleton. End of explanation """
print(data["info"])
print(data["details"])
""" Explanation: And extra information about the data is included, as standard, under the keys info and details. End of explanation """
# Make figure move in place.
data["Y"][:, 0:3] = 0.0
""" Explanation: Fit GP-LVM The original data has the figure moving across the floor during the motion capture sequence. We can make the figure walk ‘in place’ by setting the x, y, z positions of the root node to zero. This makes it easier to visualize the result. End of explanation """
Y = data["Y"]
""" Explanation: We can also remove the mean of the data; the normalizer option of the model created below takes care of this.
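The model created below uses init="PCA", which seeds the latent coordinates with principal components of the data. A minimal numpy sketch of that initialization idea (an illustration only, not GPy's internal code):

```python
import numpy as np

def pca_init(Y, latent_dim):
    """Project centered data onto its top principal directions."""
    Yc = Y - Y.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:latent_dim].T

rng = np.random.default_rng(0)
Y_demo = rng.normal(size=(50, 8))       # stand-in for the mocap matrix
X_init = pca_init(Y_demo, latent_dim=2)
```

Starting the latent coordinates at the PCA solution gives the GP-LVM optimizer a sensible linear embedding to improve upon, rather than a random one.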
End of explanation """ model = GPy.models.GPLVM(Y, 2, init="PCA", normalizer=True) """ Explanation: Now we create the GP-LVM model. End of explanation """ model.optimize(optimizer="lbfgs", messages=True, max_f_eval=1e4, max_iters=1e4) """ Explanation: Now we optimize the model. End of explanation """ %matplotlib inline def plot_skeleton(ax, Y_vec): Z = data["skel"].to_xyz(Y_vec) ax.scatter(Z[:, 0], Z[:, 2], Z[:, 1], marker=".", color="b") connect = data["skel"].connection_matrix() # Get the connectivity matrix. I, J = np.nonzero(connect) xyz = np.zeros((len(I) * 3, 3)) idx = 0 for i, j in zip(I, J): xyz[idx] = Z[i, :] xyz[idx + 1] = Z[j, :] xyz[idx + 2] = [np.nan] * 3 idx += 3 line_handle = ax.plot(xyz[:, 0], xyz[:, 2], xyz[:, 1], "-", color="b") ax.set_xlim(-15, 15) ax.set_ylim(-15, 15) ax.set_zlim(-15, 15) ax.set_yticks([]) ax.set_xticks([]) ax.set_zticks([]) plt.tight_layout() # fig = plt.figure(figsize=(7,2.5)) fig = plt.figure(figsize=(14, 5)) ax_a = fig.add_subplot(131) ax_a.set_title("Latent Space") n = len(Y) idx_a = 51 # jumping idx_b = 180 # standing other_indices = np.arange(n)[~np.isin(range(n), [idx_a, idx_b])] jump = np.arange(n)[data["lbls"][:, 0] == 1] walk = np.arange(n)[data["lbls"][:, 0] == 0] jump = jump[jump != idx_a] walk = walk[walk != idx_b] ax_a.scatter(model.X[jump, 0], model.X[jump, 1], label="jumping motion") ax_a.scatter(model.X[walk, 0], model.X[walk, 1], label="walking motion") ax_a.scatter(model.X[idx_a, 0], model.X[idx_a, 1], label="Pose A", marker="^", s=150, c="red") ax_a.scatter(model.X[idx_b, 0], model.X[idx_b, 1], label="Pose B", marker="+", s=150, c="red") ax_a.legend(loc="lower left") # , fontsize='x-small') plt.tight_layout() ax_b = fig.add_subplot(132, projection="3d") plot_skeleton(ax_b, Y[idx_a, :]) ax_b.set_title("Pose A") ax_c = fig.add_subplot(133, projection="3d") plot_skeleton(ax_c, Y[idx_b, :]) ax_c.set_title("Pose B") # print(fig) plt.savefig("gplvm-mocap.pdf") plt.show() """ Explanation: Plotting the skeleton End 
of explanation """
ML4DS/ML4all
R_lab1_ML_Bay_Regresion/Pract_regression_professor.ipynb
mit
# Import some libraries that will be necessary for working with data and displaying plots # To visualize plots in the notebook %matplotlib inline import matplotlib import matplotlib.pyplot as plt import matplotlib.cm as cm import numpy as np import scipy.io # To read matlab files from scipy import spatial import pylab pylab.rcParams['figure.figsize'] = 8, 5 """ Explanation: Parametric ML and Bayesian regression Notebook version: 1.2 (Sep 28, 2018) Authors: Miguel Lázaro Gredilla Jerónimo Arenas García (jarenas@tsc.uc3m.es) Jesús Cid Sueiro (jesus.cid@uc3m.es) Changes: v.1.0 - First version. Python version v.1.1 - Python 3 compatibility. ML section. v.1.2 - Revised content. 2D visualization removed. Pending changes: End of explanation """ np.random.seed(3) """ Explanation: 1. Introduction In this exercise the student will review several key concepts of Maximum Likelihood and Bayesian regression. To do so, we will assume the regression model $$s = {\bf w}^\top {\bf z} + \varepsilon$$ where $s$ is the output corresponding to input ${\bf x}$, ${\bf z} = T({\bf x})$ is a possibly non-linear transformation of the input, and $\varepsilon$ is white zero-mean Gaussian noise, i.e., $$\varepsilon \sim {\cal N}(0,\sigma_\varepsilon^2).$$ Along this notebook, we will explore different types of transformations. Also, we will assume an <i>a priori</i> distribution for ${\bf w}$ given by $${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$ Practical considerations Though sometimes unavoidable, it is recommended not to use explicit matrix inversion whenever possible. For instance, if an operation like ${\mathbf A}^{-1} {\mathbf b}$ must be performed, it is preferable to code it using python $\mbox{numpy.linalg.lstsq}$ function (see http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html), which provides the LS solution to the overdetermined system ${\mathbf A} {\mathbf w} = {\mathbf b}$. 
Sometimes, the computation of $\log|{\mathbf A}|$ (where ${\mathbf A}$ is a positive definite matrix) can overflow available precision, producing incorrect results. A numerically more stable alternative, providing the same result, is $2\sum_i \log([{\mathbf L}]_{ii})$, where $\mathbf L$ is the Cholesky decomposition of $\mathbf A$ (i.e., ${\mathbf A} = {\mathbf L}^\top {\mathbf L}$), and $[{\mathbf L}]_{ii}$ is the $i$th element of the diagonal of ${\mathbf L}$. Non-degenerate covariance matrices, such as the ones in this exercise, are always positive definite. It may happen, as a consequence of chained rounding errors, that a matrix which was mathematically expected to be positive definite turns out not to be so. This implies its Cholesky decomposition will not be available. A quick way to palliate this problem is to add a small number (such as $10^{-6}$) to the diagonal of such a matrix. Reproducibility of computations To guarantee the exact reproducibility of the experiments, it may be useful to start your code by initializing the seed of the random number generator, so that you can compare your results with the ones given in this notebook. End of explanation """
np.random.seed(3)
""" Explanation: 2. Data generation with a linear model During this section, we will assume the affine transformation $${\bf z} = T({\bf x}) = \begin{pmatrix} 1 \\ {\bf x} \end{pmatrix}.$$ The <i>a priori</i> distribution of ${\bf w}$ is assumed to be $${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$ 2.1. Synthetic data generation First, we are going to generate synthetic data (so that we have the ground-truth model) and use them to make sure everything works correctly and our estimations are sensible. [1] Set parameters $\sigma_p^2 = 2$ and $\sigma_{\varepsilon}^2 = 0.2$. To do so, define variables sigma_p and sigma_eps containing the respective standard deviations.
End of explanation """ # Data dimension: dim_x = 2 # Generate a parameter vector taking a random sample from the prior distributions # (the np.random module may be usefull for this purpose) # true_w = <FILL IN> np.random.seed(3) true_w = np.random.normal(0, sigma_p, (2,1)) # --> Alternatively, you can use true_w = sigma_p * np.random.randn(dim_x, 1) print('The true parameter vector is:') print(true_w) """ Explanation: [2] Generate a weight vector true_w with two elements from the a priori distribution of the weights. This vector determines the regression line that we want to find (i.e., the optimum unknown solution). End of explanation """ # <SOL> # Parameter settings x_min = 0 x_max = 2 n_points = 20 # Training datapoints X = np.linspace(x_min, x_max, n_points)[:,np.newaxis] # </SOL> """ Explanation: [3] Generate an input matrix ${\bf X}$ (in this case, a single column) containing 20 samples with equally spaced values between 0 and 2 (method linspace from numpy can be useful for this) End of explanation """ # Expand input matrix with an all-ones column col_1 = np.ones((n_points, 1)) # Z = <FILL IN> Z = np.hstack((col_1,X)) # Generate values of the target variable # s = <FILL IN> s = np.dot(Z, true_w) + sigma_eps * np.random.randn(n_points, 1) print(s) """ Explanation: [4] Finally, generate the output vector ${\bf s}$ as the product ${\bf Z} \ast \text{true_w}$ plus Gaussian noise of pdf ${\cal N}(0,\sigma_\varepsilon^2)$ at each element. End of explanation """ # <SOL> # Plot training points plt.scatter(X, s); plt.xlabel('$x$',fontsize=14); plt.ylabel('$s$',fontsize=14); # </SOL> """ Explanation: 2.2. Data visualization Plot the generated data. You will notice a linear behavior, but the presence of noise makes it hard to estimate precisely the original straight line that generated them (which is stored in true_w). 
End of explanation """ # <SOL> # Prediction function def predict(Z, w): return Z.dot(w) w = np.array([0.4, 0.7]) p = predict(Z, w) # </SOL> # Print predictions print(p) """ Explanation: 3. Maximum Likelihood (ML) regression 3.1. Likelihood function [1] Define a function predict(w, Z) that computes the linear predictions for all inputs in data matrix Z (a 2-D numpy arry), for a given parameter vector w (a 1-D numpy array). The output should be a 1-D array. Test your function with the given dataset and w = [0.4, 0.7] End of explanation """ # <SOL> # Sum of Squared Errors def sse(Z, s, w): return np.sum((s - predict(Z, w))**2) SSE = sse(Z, s, true_w) # </SOL> print(" The SSE is: {0}".format(SSE)) """ Explanation: [2] Define a function sse(w, Z, s) that computes the sum of squared errors (SSE) for the linear prediction with parameters w (1D numpy array), inputs Z (2D numpy array) and targets s (1D numpy array). Using this function, compute the SSE of the true parameter vector in true_w. End of explanation """ # <SOL> # Likelihood function def likelihood(w, Z, s, sigma_eps): K = len(s) lw = 1.0 / (np.sqrt(2*np.pi)*sigma_eps)**K * np.exp(- sse(Z, s, w)/(2*sigma_eps**2)) return lw L_w_true = likelihood(true_w, Z, s, sigma_eps) # </SOL> print("The likelihood of the true parameter vector is {0}".format(L_w_true)) """ Explanation: [3] Define a function likelihood(w, Z, s, sigma_eps) that computes the likelihood of parameter vector w for a given dataset in matrix Z and vector s, assuming Gaussian noise with varianze $\sigma_\epsilon^2$. Note that this function can use the sse function defined above. Using this function, compute the likelihood of the true parameter vector in true_w. End of explanation """ # <SOL> # The plot: LHS is the data, RHS will be the cost function. 
def LL(w, Z, s, sigma_eps):
    K = len(s)
    Lw = - 0.5 * K * np.log(2*np.pi*sigma_eps**2) - sse(Z, s, w)/(2*sigma_eps**2)
    return Lw

LL_w_true = LL(true_w, Z, s, sigma_eps)
# </SOL>
print("The log-likelihood of the true parameter vector is {0}".format(LL_w_true))
""" Explanation: [4] Define a function LL(w, Z, s, sigma_eps) that computes the log-likelihood of parameter vector w for a given dataset in matrix Z and vector s, assuming Gaussian noise with variance $\sigma_\epsilon^2$. Note that this function can use the likelihood function defined above. However, for higher numerical precision, implementing a direct expression for the log-likelihood is recommended. Using this function, compute the log-likelihood of the true parameter vector in true_w. End of explanation """
# <SOL>
w_ML, _, _, _ = np.linalg.lstsq(Z, s, rcond=None)
# </SOL>
print(w_ML)
""" Explanation: 3.2. ML estimate [1] Compute the ML estimate of ${\bf w}$ given the data. Recall that using np.linalg.lstsq is a better option than a direct implementation of the formula of the ML estimate, which would involve a matrix inversion. End of explanation """
# <SOL>
L_w_ML = likelihood(w_ML, Z, s, sigma_eps)
LL_w_ML = LL(w_ML, Z, s, sigma_eps)
# </SOL>
print('Maximum likelihood: {0}'.format(L_w_ML))
print('Maximum log-likelihood: {0}'.format(LL_w_ML))
""" Explanation: [2] Compute the maximum likelihood, and the maximum log-likelihood. End of explanation """
# First construct a grid of (w0, w1) parameter pairs and their
# corresponding cost function values.
N = 200  # Number of points along each dimension.
w0_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N) w1_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N) Lw = np.zeros((N,N)) # Fill Lw with the likelihood values for i, w0i in enumerate(w0_grid): for j, w1j in enumerate(w1_grid): we = np.array((w0i, w1j)) Lw[i, j] = LL(we, Z, s, sigma_eps) WW0, WW1 = np.meshgrid(w0_grid, w1_grid, indexing='ij') contours = plt.contour(WW0, WW1, Lw, 20) plt.figure plt.clabel(contours) plt.scatter([true_w[0]]*2, [true_w[1]]*2, s=[50,10], color=['k','w']) plt.scatter([w_ML[0]]*2, [w_ML[1]]*2, s=[50,10], color=['r','w']) plt.xlabel('$w_0$') plt.ylabel('$w_1$') plt.show() """ Explanation: Just as an illustration, the code below generates a set of points in a two dimensional grid going from $(-\sigma_p, -\sigma_p)$ to $(\sigma_p, \sigma_p)$, computes the log-likelihood for all these points and visualize them using a 2-dimensional plot. You can see the difference between the true value of the parameter ${\bf w}$ (black) and the ML estimate (red). If they are not quite close to each other, maybe you have made some mistake in the above exercises: End of explanation """ # Parameter settings x_min = 0 x_max = 2 n_points = 2**16 # <SOL> # Training datapoints X2 = np.linspace(x_min, x_max, n_points) # Expand input matrix with an all-ones column col_1 = np.ones((n_points,)) Z2 = np.vstack((col_1, X2)).T s2 = Z2.dot(true_w) + sigma_eps * np.random.randn(n_points, 1) # </SOL> """ Explanation: 3.3. [OPTIONAL]: Convergence of the ML estimate for the true model Note that the likelihood of the true parameter vector is, in general, smaller than that of the ML estimate. However, as the sample size increasis, both should converge to the same value. [1] Generate a longer dataset, with $K_\text{max}=2^{16}$ samples, uniformly spaced between 0 and 2. 
Store it in the 2D-array X2 and the 1D-array s2 End of explanation """ # <SOL> e2 = [] for k in range(3, 16): Zk = Z2[0:2**k, :] sk = s2[0:2**k] w_MLk, _, _, _ = np.linalg.lstsq(Zk, sk, rcond=None) e2.append(np.sum((true_w - w_MLk)**2)) plt.semilogy(e2) plt.show() # </SOL> """ Explanation: [2] Compute the ML estimate based on the first $2^k$ samples, for $k=2,3,\ldots, 15$. For each value of $k$ compute the squared euclidean distance between the true parameter vector and the ML estimate. Represent it graphically (using a logarithmic scale in the y-axis). End of explanation """ # <SOL> matvar = scipy.io.loadmat('DatosLabReg.mat') Xtrain = matvar['Xtrain'] Xtest = matvar['Xtest'] strain = matvar['Ytrain'] stest = matvar['Ytest'] # </SOL> """ Explanation: 4. ML estimation with real data. The stocks dataset. Once our code has been tested on synthetic data, we will use it with real data. 4.1. Dataset [1] Load the dataset file provided with this notebook, corresponding to the evolution of the stocks of 10 airline companies. (<small>The dataset is an adaptation of the <a href="http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html"> Stock dataset</a>, which in turn was taken from the <a href="http://lib.stat.cmu.edu/">StatLib Repository</a></small>) End of explanation """ # <SOL> # Data normalization mean_x = np.mean(Xtrain, axis=0) std_x = np.std(Xtrain, axis=0) Xtrain = (Xtrain - mean_x) / std_x Xtest = (Xtest - mean_x) / std_x # </SOL> """ Explanation: [2] Normalize the data so all training sample components have zero mean and unit standard deviation. Store the normalized training and test samples in 2D numpy arrays Xtrain and Xtest, respectively. End of explanation """ # <SOL> X0train = Xtrain[:, [0]] X0test = Xtest[:, [0]] # Uncomment this to reduce the dataset size #ntr = 55 #X0train = Xtrain[0:ntr, [0]] #strain = strain[0:ntr] # </SOL> """ Explanation: 4.2. 
Polynomial ML regression with a single variable In this first part, we will work with the first component of the input only. [1] Take the first column of Xtrain and Xtest into arrays X0train and X0test, respectively. End of explanation """ # <SOL> # Plot training points plt.scatter(X0train, strain, linewidths=0.1); plt.xlabel('$x$',fontsize=14); plt.ylabel('$s$',fontsize=14); # </SOL> """ Explanation: [2] Visualize, in a single scatter plot, the target variable (in the vertical axes) versus the input variable, using the training data End of explanation """ # The following normalizer will be helpful: it normalizes all components of the # input matrix, unless for the first one (the "all-one's" column) that # should not be normalized class Normalizer(): """ A data normalizer. Usage: nm = Normalizer() Z = nm.fit_transform(X) # to estimate the normalization mean and variance an normalize # all columns of X unles the first one Z2 = nm.transform(X) # to normalize X without recomputing mean and variance parameters """ def fit_transform(self, Z): self.mean_z = np.mean(Z, axis=0) self.mean_z[0] = 0 self.std_z = np.std(Z, axis=0) self.std_z[0] = 1 Zout = (Z - self.mean_z) / self.std_z # sc = StandardScaler() # Ztrain = sc.fit_transform(Ztrain) return Zout def transform(self, Z): return (Z - self.mean_z) / self.std_z # Ztest = sc.transform(Ztest) # Set the maximum degree of the polynomial model g_max = 50 # Compute polynomial transformation for train and test data # <SOL> Ztrain = np.vander(X0train.flatten(), g_max + 1, increasing=True) Ztest = np.vander(X0test.flatten(), g_max + 1, increasing=True) # </SOL> # Normalize training and test data # <SOL> nm = Normalizer() Ztrain = nm.fit_transform(Ztrain) Ztest = nm.transform(Ztest) # </SOL> """ Explanation: [3] Since the data have been taken from a real scenario, we do not have any true mathematical model of the process that generated the data. 
Thus, we will explore different models and select the one that best fits the training data. Assume a polynomial model given by $$ {\bf z} = T({\bf x}) = (1, x_0, x_0^2, \ldots, x_0^{g-1})^\top. $$ Compute matrices Ztrain and Ztest that result from applying the polynomial transformation to the inputs in X0train and X0test for a model with degree g_max = 50. The np.vander() method may be useful for this. Note that, even though X0train and X0test were normalized, you will need to re-normalize the transformed variables. Note, also, that the first component of the transformed variables, which must be equal to 1, should not be normalized. To simplify the job, the code below defines a normalizer class that performs normalization on all components except for the first one. End of explanation """
# IMPORTANT NOTE: Use np.linalg.lstsq() with option rcond=-1 for better precision.
# HINT: Take into account that the data matrix required to fit a polynomial model
# with degree g consists of the first g+1 columns of Ztrain.
# <SOL>
models = []
for g in range(g_max + 1):
    w_MLg, _, _, _ = np.linalg.lstsq(Ztrain[:, 0:g+1], strain, rcond=-1)
    models.append(w_MLg)
# </SOL>
""" Explanation: [4] Fit a polynomial model with degree $g$ for $g$ ranging from 0 to g_max. Store the weights of all models in a list of weight vectors, named models, such that models[g] returns the parameters estimated for the polynomial model with degree $g$. We will use these models in the following sections. End of explanation """
# Create a grid of samples along the x-axis.
n_points = 10000
xmin = min(X0train)
xmax = max(X0train)
X = np.linspace(xmin, xmax, n_points)
# Apply the polynomial transformation to the inputs with degree g_max.
# <SOL> Z = np.vander(X.flatten(), g_max+1, increasing=True) Z = nm.transform(Z) # </SOL> # Plot training points plt.plot(X0train, strain, 'b.', markersize=4); plt.xlabel('$x$',fontsize=14); plt.ylabel('$s$',fontsize=14); plt.xlim(xmin, xmax) plt.ylim(30, 65) # Plot the regresion function for the required degrees # <SOL> for g in [1, 3, g_max]: s_ML = predict(Z[:,0:g+1], models[g]) plt.plot(X, s_ML) # </SOL> plt.show() """ Explanation: [5] Plot the polynomial models with degrees 1, 3 and g_max, superimposed over a scatter plot of the training data. End of explanation """ LLtrain = [] LLtest = [] sigma_eps = 1 # Fill LLtrain and LLtest with the log-likelihood values for all values of # g ranging from 0 to g_max (included). # <SOL> for g in range(g_max + 1): LLtrain.append(LL(models[g], Ztrain[:, :g+1], strain, sigma_eps)) LLtest.append(LL(models[g], Ztest[:, :g+1], stest, sigma_eps)) # </SOL> plt.figure() plt.plot(range(g_max + 1), LLtrain, label='Training') plt.plot(range(g_max + 1), LLtest, label='Test') plt.xlabel('g') plt.ylabel('Log-likelihood') plt.xlim(0, g_max) plt.ylim(-5e4,100) plt.legend() plt.show() """ Explanation: [6] Taking sigma_eps = 1, show, in the same plot: The log-likelihood function corresponding to each model, as a function of $g$, computed over the training set. The log-likelihood function corresponding to each model, as a function of $g$, computed over the test set. End of explanation """ from sklearn.model_selection import KFold # Select the number of splits n_sp = 10 # Create a cross-validator object kf = KFold(n_splits=n_sp) # Split data from Ztrain kf.get_n_splits(Ztrain) LLmean = [] for g in range(g_max + 1): # Compute the cross-validation Likelihood LLg = 0 for tr_index, val_index in kf.split(Ztrain): # Take the data matrices for the current split Z_tr, Z_val = Ztrain[tr_index, 0:g+1], Ztrain[val_index, 0:g+1] s_tr, s_val = strain[tr_index], strain[val_index] # Train with the current training splits. 
        # w_MLk, _, _, _ = np.linalg.lstsq(<FILL IN>)
        w_MLk, _, _, _ = np.linalg.lstsq(Z_tr[:, 0:g+1], s_tr, rcond=-1)
        # Compute the validation likelihood for this split
        # LLg += LL(<FILL IN>)
        LLg += LL(w_MLk, Z_val[:, :g+1], s_val, sigma_eps)
    LLmean.append(LLg / n_sp)
# Take the optimal value of g and its corresponding likelihood
# g_opt = <FILL IN>
g_opt = np.argmax(LLmean)
# LLmax = <FILL IN>
LLmax = np.max(LLmean)
print("The optimal degree is: {}".format(g_opt))
print("The maximum cross-validation likelihood is {}".format(LLmax))
plt.figure()
plt.plot(range(g_max + 1), LLmean, label='Cross-validation')
plt.plot([g_opt], [LLmax], 'g.', markersize = 20)
plt.xlabel('g')
plt.ylabel('Log-likelihood')
plt.xlim(0, g_max)
plt.ylim(-1e3, LLmax + 100)
plt.legend()
plt.show()
""" Explanation: [7] You may have seen that the likelihood over the training data grows with the degree of the polynomial. However, large values of $g$ produce strong overfitting. For this reason, $g$ cannot be selected with the same data used to fit the model. Parameters like $g$ are usually called hyperparameters and need to be selected by cross-validation. Select the optimal value of $g$ by 10-fold cross-validation. To do so, the cross-validation methods provided by sklearn will simplify this task. End of explanation """
# You do not need to code here. Just copy the value of g_opt obtained after re-running the code
# g_opt_new = <FILL IN>
g_opt_new = 7
print("The optimal value of g for the 55-sample training set is {}".format(g_opt_new))
""" Explanation: [8] You may have observed the overfitting effect for large values of $g$. The best degree of the polynomial may depend on the size of the training set. Take a smaller dataset by running, after the code in section 4.2[1]: X0train = Xtrain[0:55, [0]] X0test = Xtest[0:100, [0]] Then, re-run the whole code after that. What is the optimal value of g in that case?
End of explanation
"""
# Explore the values of sigma logarithmically spaced according to the following array
sigma_eps = np.logspace(-0.1, 5, num=50)
g = 3
K = len(strain)

# <SOL>
LL_eps = []
for sig in sigma_eps:
    # LL_eps.append(<FILL IN>)
    LL_eps.append(LL(models[g], Ztrain[:, :g+1], strain, sig))

sig_opt = np.sqrt(sse(Ztrain[:, :g+1], strain, models[g]) / K)
LL_opt = LL(models[g], Ztrain[:, :g+1], strain, sig_opt)

plt.figure()
plt.semilogx(sigma_eps, LL_eps)
plt.plot([sig_opt], [LL_opt], 'b.', markersize=20)
plt.xlabel('$\sigma_\epsilon$')
plt.ylabel('Log-likelihood')
plt.xlim(min(sigma_eps), max(sigma_eps))
plt.show()
# </SOL>
"""
Explanation: [9] [OPTIONAL] Note that the model coefficients do not depend on $\sigma_\epsilon^2$. Therefore, we do not need to care about its value for polynomial ML regression. However, the log-likelihood function does depend on $\sigma_\epsilon^2$, and we can also estimate its value by maximum likelihood. By setting the derivative of the log-likelihood with respect to $\sigma_\epsilon$ to zero, it is not difficult to see that the optimal ML estimate of $\sigma_\epsilon$ is
$$ \widehat{\sigma}_\epsilon = \sqrt{\frac{1}{K} \|{\bf s}-{\bf Z}{\bf w}\|^2} $$
Plot the log-likelihood function corresponding to the polynomial model with degree 3 for different values of $\sigma_\epsilon$, for the training set, and verify that the value computed with the above formula is actually optimal.
End of explanation
"""
# Note that you can easily adapt your code in 4.2[5]

# <SOL>
# Create a grid of samples along the x-axis.
n_points = 100000
xmin = min(X0train)
xmax = max(X0train)
X = np.linspace(xmin, xmax, n_points)

# Apply the polynomial transformation to the inputs with degree g_max.
Z = np.vander(X.flatten(), g_max+1, increasing=True) Z = nm.transform(Z) # Plot training points plt.plot(X0train, strain, 'b.', markersize=3); plt.xlabel('$x$',fontsize=14); plt.ylabel('$s$',fontsize=14); # Plot the regresion function for the required degrees s_ML = predict(Z[:,0:g_opt+1], models[g_opt]) plt.plot(X, s_ML, 'g-') plt.xlim(xmin, xmax) plt.ylim(30, 65) plt.show() # </SOL> """ Explanation: [10] [OPTIONAL] For the selected model: Plot the regresion function over the scater plot of the data. Compute the log-likelihood and the SSE over the test set. End of explanation """ # Degree for bayesian regression gb = 10 # w_LS, residuals, rank, s = <FILL IN> w_ML, residuals, rank, s = np.linalg.lstsq(Ztrain[:, :gb+1], strain, rcond=-1) # sigma_p = <FILL IN> sigma_p = np.sqrt(np.mean(w_ML**2)) # sigma_eps = <FILL IN> sigma_eps = np.sqrt(2*np.mean((strain - Ztrain[:, :gb+1].dot(w_ML))**2)) print(sigma_p) print(sigma_eps) """ Explanation: 5. Bayesian regression. The stock dataset. In this section we will keep using the first component of the data from the stock dataset, assuming the same kind of plolynomial model. We will explore the potential advantages of using a Bayesian model. To do so, we will asume that the <i>a priori</i> distribution of ${\bf w}$ is $${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$ 5.1. Hyperparameter selection Since the values $\sigma_p$ and $\sigma_\varepsilon$ are no longer known, a first rough estimation is needed (we will soon see how to estimate these values in a principled way). 
To this end, we will adjust them using the ML solution to the regression problem with g=10:

$\sigma_p^2$ will be taken as the average of the squared values of ${\hat {\bf w}}_{ML}$
$\sigma_\varepsilon^2$ will be taken as two times the average of the squared residuals when using ${\hat {\bf w}}_{ML}$

End of explanation
"""
# <SOL>
def posterior_stats(Z, s, sigma_eps, sigma_p):
    dim_w = Z.shape[1]
    iCov_w = Z.T.dot(Z)/(sigma_eps**2) + np.eye(dim_w, dim_w)/(sigma_p**2)
    Cov_w = np.linalg.inv(iCov_w)
    mean_w = Cov_w.dot(Z.T).dot(s)/(sigma_eps**2)
    return mean_w, Cov_w, iCov_w
# </SOL>

mean_w, Cov_w, iCov_w = posterior_stats(Ztrain[:, :gb+1], strain, sigma_eps, sigma_p)

print('mean_w = {0}'.format(mean_w))
# print('Cov_w = {0}'.format(Cov_w))
# print('iCov_w = {0}'.format(iCov_w))
"""
Explanation: 5.2. Posterior pdf of the weight vector
In this section we will visualize the prior and posterior distributions of the weight vector. First, we will restore the dataset defined at the beginning of this notebook:
[1] Define a function posterior_stats(Z, s, sigma_eps, sigma_p) that computes the parameters of the posterior coefficient distribution given the dataset in matrix Z and vector s, for given values of the hyperparameters. This function should return the posterior mean, the covariance matrix and the precision matrix (the inverse of the covariance matrix). Test the function on the given dataset, for the polynomial degree gb defined above.
End of explanation
"""
# <SOL>
def gauss_pdf(w, mean_w, iCov_w):
    d = w - mean_w
    w_dim = len(mean_w)
    pw = np.sqrt(np.linalg.det(iCov_w)) / (2*np.pi)**(w_dim/2) * np.exp(- d.T.dot(iCov_w.dot(d))/2)
    return pw[0][0]
# </SOL>

print('p(w_ML | s) = {0}'.format(gauss_pdf(w_ML, mean_w, iCov_w)))
print('p(w_MSE | s) = {0}'.format(gauss_pdf(mean_w, mean_w, iCov_w)))
"""
Explanation: [2] Define a function gauss_pdf(w, mean_w, iCov_w) that computes the Gaussian pdf with mean mean_w and precision matrix iCov_w.
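For reference, the density to be implemented is the multivariate Gaussian written in terms of its precision matrix (here $\boldsymbol{\Lambda}$ plays the role of iCov_w, ${\bf m}$ of mean_w, and $d$ is the dimension of ${\bf w}$):

```latex
p({\bf w}) = \sqrt{\frac{\det(\boldsymbol{\Lambda})}{(2\pi)^{d}}}\,
\exp\left(-\frac{1}{2}\,({\bf w}-{\bf m})^\top \boldsymbol{\Lambda}\,({\bf w}-{\bf m})\right)
```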
Use this function to compute and compare the ML estimate and the MSE estimate, given the dataset.
End of explanation
"""
# <SOL>
def log_gauss_pdf(w, mean_w, iCov_w):
    d = w - mean_w
    w_dim = len(mean_w)
    pw = 0.5 * np.log(np.linalg.det(iCov_w)) - 0.5 * w_dim * np.log(2*np.pi) - 0.5 * d.T.dot(iCov_w.dot(d))
    return pw[0][0]
# </SOL>

print('log(p(w_ML | s)) = {0}'.format(log_gauss_pdf(w_ML, mean_w, iCov_w)))
print('log(p(w_MSE | s)) = {0}'.format(log_gauss_pdf(mean_w, mean_w, iCov_w)))
"""
Explanation: [3] [OPTIONAL] Define a function log_gauss_pdf(w, mean_w, iCov_w) that computes the log of the Gaussian pdf with mean mean_w and precision matrix iCov_w. Use this function to compute and compare the log of the posterior pdf value of the true coefficients, the ML estimate and the MSE estimate, given the dataset.
End of explanation
"""
# Definition of the interval for representation purposes
xmin = min(X0train)
xmax = max(X0train)
n_points = 100   # Only two points would be needed for a straight line; we use more to draw smooth curves

# Build the input data matrix:
# Input values for representation of the regression curves
X = np.linspace(xmin, xmax, n_points)
Z = np.vander(X.flatten(), g_max+1, increasing=True)
Z = nm.transform(Z)[:, :gb+1]
"""
Explanation: 5.3 Sampling regression curves from the posterior
In this section we will plot the functions corresponding to different samples drawn from the posterior distribution of the weight vector. To this end, we will first generate an input dataset of equally spaced samples.
We will compute the functions at these points.
End of explanation
"""
# Drawing weights from the posterior
for l in range(50):
    # Generate a random sample from the posterior distribution (you can use np.random.multivariate_normal())
    # w_l = <FILL IN>
    w_l = np.random.multivariate_normal(mean_w.flatten(), Cov_w)

    # Compute predictions for the inputs in the data matrix
    # p_l = <FILL IN>
    p_l = Z.dot(w_l)

    # Plot prediction function
    # plt.plot(<FILL IN>, 'c:');
    plt.plot(X, p_l, 'c:');

# Plot the training points
plt.plot(X0train, strain,'b.',markersize=2);

plt.xlim((xmin, xmax));
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
"""
Explanation: Generate random vectors ${\bf w}_l$ with $l = 1,\dots, 50$, from the posterior density of the weights, $p({\bf w}\mid{\bf s})$, and use them to generate 50 polynomial regression functions, $f({\bf x}^\ast) = {{\bf z}^\ast}^\top {\bf w}_l$, with ${\bf x}^\ast$ between $-1.2$ and $1.2$, with step $0.1$.
Plot the curve corresponding to the model with the posterior mean parameters, along with the $50$ generated curves and the original samples, all in the same plot. As you can check, the Bayesian model is not providing a single answer, but a density over models, from which we have drawn 50 samples.
End of explanation
"""
# Note that you can re-use code from sect. 4.2 to solve this exercise

# Plot the training points
# plt.plot(X, Z.dot(true_w), 'b', label='True model', linewidth=2);
plt.plot(X0train, strain,'b.',markersize=2);
plt.xlim(xmin, xmax);
# </SOL>

# Plot the posterior mean.
# mean_s = <FILL IN>
mean_s = Z.dot(mean_w)
plt.plot(X, mean_s, 'g', label='Predictive mean', linewidth=2);

# Plot the posterior mean +- two standard deviations
# std_f = <FILL IN>
std_f = np.sqrt(np.diagonal(Z.dot(Cov_w).dot(Z.T)))[:, np.newaxis]

# Plot the confidence intervals.
# To do so, you can use the fill_between method
plt.fill_between(X.flatten(),
                 (mean_s - 2*std_f).flatten(),
                 (mean_s + 2*std_f).flatten(),
                 alpha=0.4, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=2)

# plt.legend(loc='best')
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
plt.show()
"""
Explanation: 5.4. Plotting the confidence intervals
On top of the previous figure (copy here your code from the previous section), plot the functions
$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}$$
and
$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}}$$
(i.e., the posterior mean of $f({\bf x}^\ast)$, as well as two standard deviations above and below).
It is possible to show analytically that this region comprises $95.45\%$ of the posterior probability $p(f({\bf x}^\ast)\mid {\bf s})$ at each ${\bf x}^\ast$.
End of explanation
"""
# Plot sample functions confidence intervals and sampling points
# Note that you can simply copy and paste most of the code used in the cell above.

# <SOL>
# Plot the training points
plt.figure()
plt.plot(X0train, strain,'b.',markersize=2);
plt.xlim(xmin, xmax);
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);

# Plot the posterior mean.
plt.plot(X, mean_s, 'm', label='Predictive mean', linewidth=2);

# Plot the posterior mean +- two standard deviations (as in the previous cell)
# plt.fill_between(# <FILL IN>)
plt.fill_between(X.flatten(),
                 (mean_s - 2*std_f).flatten(),
                 (mean_s + 2*std_f).flatten(),
                 alpha=0.5)

# Compute the standard deviations for s and plot the confidence intervals
# <SOL>
std_s = np.sqrt(np.diagonal(Z.dot(Cov_w).dot(Z.T)) + sigma_eps**2)[:, np.newaxis]

# Plot now the posterior mean and posterior mean \pm 2 std for s (i.e., adding the noise variance)
# plt.fill_between(# <FILL IN>)
plt.fill_between(X.flatten(),
                 (mean_s - 2*std_s).flatten(),
                 (mean_s + 2*std_s).flatten(),
                 alpha=0.2)
# </SOL>

plt.show()
"""
Explanation: Plot now ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ (note that the posterior means of $f({\bf x}^\ast)$ and $s({\bf x}^\ast)$ are the same, so there is no need to plot the mean again). Notice that $95.45\%$ of the observed data now lie within the newly designated region. These new limits establish a confidence range for our predictions. See how the uncertainty grows as we move away from the interpolation region to the extrapolation areas.
End of explanation
"""
SSE_ML = []
SSE_Bayes = []

# Compute the SSE for the ML and the Bayes estimates
for g in range(g_max + 1):
    # <SOL>
    SSE_ML.append(sse(Ztest[:, :g+1], stest, models[g]))
    mean_w, Cov_w, iCov_w = posterior_stats(Ztrain[:, :g+1], strain, sigma_eps, sigma_p)
    SSE_Bayes.append(sse(Ztest[:, :g+1], stest, mean_w))
    # </SOL>

plt.figure()
plt.semilogy(range(g_max + 1), SSE_ML, label='ML')
plt.semilogy(range(g_max + 1), SSE_Bayes, 'g.', label='Bayes')
plt.xlabel('g')
plt.ylabel('Sum of square errors')
plt.xlim(0, g_max)
plt.ylim(min(min(SSE_Bayes), min(SSE_ML)),10000)
plt.legend()
plt.show()
"""
Explanation: 5.5. Test square error
[1] We now test the regularization effect of the Bayesian prior.
To do so, compute and plot the sum of square errors of both the ML and Bayesian estimates as a function of the polynomial degree.
End of explanation
"""
# <SOL>
m_s = Ztest[:, :g+1].dot(mean_w)
v_s = np.diagonal(Ztest[:, :g+1].dot(Cov_w).dot(Ztest[:, :g+1].T)) + sigma_eps**2
v_s = np.matrix(v_s).T
# </SOL>
"""
Explanation: 5.6. [Optional] Model assessment
In order to verify the performance of the resulting model, compute the posterior mean and variance of each of the test outputs from the posterior over ${\bf w}$. I.e., compute ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}$ and $\sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ for each test sample ${\bf x}^\ast$ contained in each row of Xtest. Store the predictive mean and variance of all test samples in two column vectors called m_s and v_s, respectively.
End of explanation
"""
# <SOL>
MSE = np.mean((m_s - stest)**2)
# Average per-sample negative log predictive density under the Gaussian predictions
NLPD = np.mean(0.5*((stest - m_s)**2)/v_s + 0.5*np.log(2*np.pi*v_s))
# </SOL>

print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
"""
Explanation: Compute now the mean square error (MSE) and the negative log-predictive density (NLPD) with the following code:
End of explanation
"""
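As a reference for the NLPD metric used above: one common convention averages the per-sample negative log densities of the test targets under their Gaussian predictive distributions. A self-contained sketch with synthetic values (the names below are illustrative, not part of the notebook):

```python
import numpy as np

def nlpd_gaussian(s, m, v):
    """Mean negative log predictive density of targets s under
    independent Gaussian predictions N(m, v), one value per sample."""
    return np.mean(0.5 * np.log(2 * np.pi * v) + 0.5 * (s - m) ** 2 / v)

# Perfect predictions with unit variance reduce to 0.5*log(2*pi) per sample
s = np.array([1.0, -2.0, 0.5])
print(nlpd_gaussian(s, s, np.ones_like(s)))  # ~0.9189
```

With this convention, better-calibrated predictive variances give lower NLPD even when the squared errors are identical.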
tensorflow/docs-l10n
site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip install tensorflow-gpu==2.0.0-rc1
import pandas as pd
import tensorflow as tf
"""
Explanation: Load pandas dataframes with tf.data
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/pandas_dataframe"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a>
  </td>
</table>
This tutorial shows how to load pandas dataframes into a tf.data.Dataset.
It uses a small dataset provided by the Cleveland Clinic Foundation for Heart Disease.
This dataset contains several hundred rows of CSV data. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which is a binary classification problem.
Reading data using pandas
End of explanation
"""
csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/applied-dl/heart.csv')
"""
Explanation: Download the csv file containing the heart dataset.
End of explanation
"""
df = pd.read_csv(csv_file)

df.head()

df.dtypes
"""
Explanation: Read the csv file using pandas.
End of explanation
"""
df['thal'] = pd.Categorical(df['thal'])
df['thal'] = df.thal.cat.codes

df.head()
"""
Explanation: Convert the thal column (an object in the dataframe) to discrete numeric values.
End of explanation
"""
target = df.pop('target')

dataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))

for feat, targ in dataset.take(5):
    print ('Features: {}, Target: {}'.format(feat, targ))
"""
Explanation: Reading data using tf.data.Dataset
Use tf.data.Dataset.from_tensor_slices to read the values from a pandas dataframe.
One of the advantages of using tf.data.Dataset is that it allows you to write simple and highly efficient data pipelines. See the loading data guide to learn more.
End of explanation
"""
tf.constant(df['thal'])
"""
Explanation: Since pd.Series implements the __array__ protocol, it can be used transparently nearly anywhere you would use np.array or tf.Tensor.
End of explanation
"""
train_dataset = dataset.shuffle(len(df)).batch(1)
"""
Explanation: Shuffle and batch the dataset.
End of explanation
"""
def get_compiled_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation='relu'),
        tf.keras.layers.Dense(10, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])

    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

model = get_compiled_model()
model.fit(train_dataset, epochs=15)
"""
Explanation: Create and train a model
End of explanation
"""
inputs = {key: tf.keras.layers.Input(shape=(), name=key) for key in df.keys()}
x = tf.stack(list(inputs.values()), axis=-1)

x = tf.keras.layers.Dense(10, activation='relu')(x)
output = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model_func = tf.keras.Model(inputs=inputs, outputs=output)

model_func.compile(optimizer='adam',
                   loss='binary_crossentropy',
                   metrics=['accuracy'])
"""
Explanation: An alternative to feature columns
Passing a dictionary as an input to a model is as easy as creating a matching dictionary of tf.keras.layers.Input layers, applying any preprocessing, and using the functional api. You can use this as an alternative to feature columns.
End of explanation
"""
dict_slices = tf.data.Dataset.from_tensor_slices((df.to_dict('list'), target.values)).batch(16)

for dict_slice in dict_slices.take(1):
    print (dict_slice)

model_func.fit(dict_slices, epochs=15)
"""
Explanation: When used with tf.data, the easiest way to preserve the column structure of a pd.DataFrame is to convert the pd.DataFrame to a dict and slice that dictionary.
End of explanation
"""
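The `__array__` protocol mentioned earlier (the mechanism that makes `pd.Series` usable wherever NumPy arrays are accepted) can be illustrated with a minimal stand-in class. This is a sketch independent of pandas and TensorFlow; the class name is illustrative:

```python
import numpy as np

class MySeries:
    """Minimal object exposing the __array__ protocol."""
    def __init__(self, values):
        self._values = list(values)

    def __array__(self, dtype=None, copy=None):
        # NumPy calls this when the object is passed to np.array/np.asarray
        return np.array(self._values, dtype=dtype)

s = MySeries([1, 2, 3])
print(np.asarray(s) * 2)  # [2 4 6]
```

Any function that converts its input with `np.asarray` will accept `MySeries` transparently, which is exactly why a `pd.Series` can be handed to NumPy or TensorFlow APIs directly.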
ComputationalModeling/spring-2017-danielak
past-semesters/fall_2016/homework/HW2/Homework_2_SOLUTIONS.ipynb
agpl-3.0
import numpy as np %matplotlib inline import matplotlib.pyplot as plt ''' count_times = the time since the start of data-taking when the data was taken (in seconds) count_rates = the number of counts since the last time data was taken, at the time in count_times ''' count_times = np.loadtxt("count_rates.txt", dtype=int)[0] count_rates = np.loadtxt("count_rates.txt", dtype=int)[1] # Put your code here - add additional cells if necessary # number of bins to smooth over Nsmooth = 100 ''' create arrays for smoothed counts. Given how we're going to subsample, we want to make the arrays shorter by a factor of 2*Nsmooth. Use numpy's slicing to get times that start after t=0 and end before the end of the array. Then just make smooth_counts the same size, and zero it out. They should be the size of count_rates.size-2*Nsmooth ''' smooth_times = count_times[Nsmooth:-Nsmooth] smooth_counts = np.zeros_like(smooth_times,dtype='float64') ''' loop over the count_rates arrays, but starting Nsmooth into count_rates and ending Nsmooth prior to the end. Then, go from i-Nsmooth to i+Nsmooth and sum those up. After the loop, we're going to divide by 2*Nsmooth+1 in order to normalize it (because each value of the smoothed array has 2*Nsmooth+1 cells in it). 
'''
for i in range(Nsmooth,count_rates.size-Nsmooth):
    for j in range(i-Nsmooth,i+Nsmooth+1):  # the +1 is because it'll then end at i+Nsmooth
        smooth_counts[i-Nsmooth] += count_rates[j]

smooth_counts /= (2.0*Nsmooth+1.0)

# plot noisy counts, smoothed counts, each with their own line types
plt.plot(count_times,count_rates,'b.',smooth_times,smooth_counts,'r-',linewidth=5)

# some guesses for the various parameters in the model
# (which are basically lifted directly from the data)
N0 = 2000.0          # counts per 5-second bin
half_life = 1712.0   # half life (in seconds)
Nbackground = 292.0  # background counts per 5-second bin

# calculate estimated count rate using the parameters listed above
count_rate_estimate = N0 * 2.0**(-count_times/half_life) + Nbackground

plt.plot(count_times,count_rate_estimate,'c--',linewidth=5)
plt.xlabel('time (seconds)')
plt.ylabel('counts per bin')
plt.title("Counts per 5-second bin")
"""
Explanation: Homework #2
This notebook is due on Friday, October 7th, 2016 at 11:59 p.m. Please make sure to get started early, and come by the instructors' office hours if you have any questions. Office hours and locations can be found in the course syllabus.
IMPORTANT: While it's fine if you talk to other people in class about this homework - and in fact we encourage it! - you are responsible for creating the solutions for this homework on your own, and each student must submit their own homework assignment.
Some links that you may find helpful:
Markdown tutorial
The Pandas website
The Pandas tutorial
10-minute Pandas Tutorial
All CMSE 201 YouTube videos
Your name
Put your name here!
Section 1: Radioactivity wrapup
In this part of the homework, we're going to finish what we started regarding modeling the count rate of radioactive data that you worked with in class, to try to estimate the strength of the radioactive background that was seen in the radioactive count rates.
In class, we discussed that for radioactive material with an initial amount $N_0$ and a half life $t_{1/2}$, the amount left after time t is $N(t) = N_0 2^{-t/t_{1/2}}$. The expected radioactive decay rate is then: $\mathrm{CR}(t) = - \frac{dN}{dt} = \frac{N_0 \ln 2}{t_{1/2}}2^{-t/t_{1/2}}$ However, the data doesn't agree well with this - there's something contaminating our count rate data that's causing a radioactive "background" that is approximately constant with time. A better estimate of the count rates is more like: $\mathrm{CR}(t) = \mathrm{CR}{\mathrm{S}}(t) + \mathrm{CR}{\mathrm{BG}}$ where $\mathrm{CR}{\mathrm{S}}(t)$ is the count rate from the sample, which has the shape expected above, and $\mathrm{CR}{\mathrm{BG}}$ is the count rate from the radioactive background. We're now going to try to figure out the values that go into the expressions for $\mathrm{CR}{\mathrm{S}}(t)$ and $\mathrm{CR}{\mathrm{BG}}$ by using the data. What you're going to do is: "Smooth" the decay rate data over N adjacent samples in time to get rid of some of the noise. Try writing a piece of code to loop over the array of data and average the sample you're interested in along with the N samples on either side (i.e., from element i-N to i+N, for an arbitrary number of cells). Store this smoothed data in a new array (perhaps using np.zeros_like() to create the new array?). Plot your smoothed data on top of the noisy data to ensure that it agrees. Create a new array with the analytic equation from above that describes for the decay rate as a function of time, taking into account what you're seeing in point (2), and try to find the values of the various constants in the equation. Plot the new array on top of the raw data and smoothed values. Note that code to load the file count_rates.txt has been added below, and puts the data into two numpy arrays as it did in the in-class assignment. 
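As an aside, the boxcar smoothing described in step 1 above is equivalent to convolving the count-rate array with a flat kernel, which NumPy can do in one call. This is a sketch, not part of the assigned solution, and the helper name is illustrative:

```python
import numpy as np

def boxcar_smooth(counts, N):
    # Average each sample with the N samples on either side (window of 2N+1),
    # keeping only positions where the full window fits ('valid' mode).
    kernel = np.ones(2 * N + 1) / (2 * N + 1)
    return np.convolve(counts, kernel, mode='valid')

print(boxcar_smooth(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), 1))  # [2. 3. 4.]
```

The result matches the explicit double loop, but runs in a single vectorized pass.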
End of explanation """ import pandas erie = pandas.read_csv('erie1918Ann.csv', skiprows=2) miHuron = pandas.read_csv('miHuron1918Ann.csv', skiprows=2) ontario = pandas.read_csv('ontario1918Ann.csv', skiprows=2) superior = pandas.read_csv('superior1918Ann.csv', skiprows=2) """ Explanation: Question: What are the constants that you came up with for the count rate equation? Do these values make sense given the experimental data? Why or why not? The values that the students should get are approximately: N0 = 2000 counts per bin (each bin is 5 seconds long) half life = 1712 seconds background count rate = 292 counts per bin (each bin 5 seconds) Note that I do not expect students to get an answer that's that close - as long as the curve that is produced (dashed line above) gets reasonably close to the smoothed value, it's fine. Also, note that students may show the entire plot in counts/second instead of counts/bin - both are fine. The plots make sense given the experimental data - the half life should be somewhere around 2000 seconds from looking at the curve, the count rates per bin for the noise should be somewhere between 200-400, and the counts at t=0 are somewhere around 2000-2200 counts/bin, after you subtract the noise from the bin. Section 2: Great Lakes water levels The water level in the Great Lakes fluctuates over the course of a year, and also fluctuates in many-year cycles. About two and a half years ago (in Feb. 2014), there was an article in Scientific American describing the historically low levels of the Great Lakes - in particular, that of Lake Michigan and Lake Huron, which together make up the largest body of fresh water in the world. In this part of the homework assignment, we're going to look at water height data from the Great Lakes Environmental Research Laboratory - in particular, data from 1918 to the present day. 
In the cell below this, we're using Pandas to load four CSV ("Comma-Separated Value") files with data from Lake Erie, Lakes Michigan and Huron combined, Lake Ontario, and Lake Superior into data frames. Each dataset contains the annual average water level for every year from 1918 to the present. Use these datasets to answer the questions posed below.
End of explanation
"""
# Put your code here
# expect to see students taking the mean value and printing it out!

erie_mean = erie['AnnAvg'].mean()
miHuron_mean = miHuron['AnnAvg'].mean()
ontario_mean = ontario['AnnAvg'].mean()
superior_mean = superior['AnnAvg'].mean()

print('Erie (meters): ', erie_mean)
print('Michigan/Huron (meters): ', miHuron_mean)
print('Ontario (meters): ', ontario_mean)
print('Superior (meters): ', superior_mean)
"""
Explanation: Question 1: Calculate the mean water levels of all of the Great Lakes over the past century (treating Lakes Michigan and Huron as a single body of water). Are all of the values similar? Why does your answer make sense? (Hint: where is Niagara Falls, and what direction does the water flow?)
Answer: Three of the values (Erie, Michigan/Huron, Superior) are all pretty similar (to within 9 or 10 meters), but Lake Ontario is about 100 meters lower. The fact that Erie/Michigan/Superior are all of similar mean height makes sense because they're connected by waterways, and the water should level out. It makes sense that Ontario is lower, because Niagara Falls flows from Lake Erie into Lake Ontario, and Niagara Falls is really high. So, it makes sense that Lake Ontario is much lower than Lake Erie.
End of explanation
"""
# Put your code here
# make a plot of the lakes' heights minus the mean values.
lake_erie, = plt.plot(erie['year'],erie['AnnAvg']-erie['AnnAvg'].mean(),'r-') lake_mi, = plt.plot(miHuron['year'],miHuron['AnnAvg']-miHuron['AnnAvg'].mean(),'g-') lake_ont, = plt.plot(ontario['year'],ontario['AnnAvg']-ontario['AnnAvg'].mean(),'b-') lake_sup, = plt.plot(superior['year'],superior['AnnAvg']-superior['AnnAvg'].mean(),'k-') plt.xlabel('year') plt.ylabel('value minus historic mean') plt.title('variation around historic mean for all lakes') plt.legend( (lake_erie,lake_mi,lake_ont,lake_sup), ('Erie','MI/Huron','Ontario','Superior'),loc='upper left') """ Explanation: Question 2: Make a plot where you show the fluctuations of each lake around the mean value from the last century (i.e., subtracting the mean value of the lake's water level from the data of water level over time). In general, do you see similar patterns of fluctuations in all of the lakes? What might this suggest to you about the source of the fluctuations? Hint: you may want to use pyplot instead of the built-in Pandas plotting functions! Answer: We do see similar patterns overall, though some of the lakes (Superior, for example) are more stable. This suggests that there's some sort of regional thing (weather, for example) that's causing fluctuations in all of the lakes. End of explanation """ # Put your code here # basically the plot from above, but with different x limits. 
lake_erie, = plt.plot(erie['year'],erie['AnnAvg']-erie['AnnAvg'].mean(),'r-') lake_mi, = plt.plot(miHuron['year'],miHuron['AnnAvg']-miHuron['AnnAvg'].mean(),'g-') lake_ont, = plt.plot(ontario['year'],ontario['AnnAvg']-ontario['AnnAvg'].mean(),'b-') lake_sup, = plt.plot(superior['year'],superior['AnnAvg']-superior['AnnAvg'].mean(),'k-') plt.xlabel('year') plt.ylabel('value minus historic mean') plt.title('variation around historic mean for all lakes') plt.legend( (lake_erie,lake_mi,lake_ont,lake_sup), ('Erie','MI/Huron','Ontario','Superior'),loc='upper left') plt.xlim(1996,2017) """ Explanation: Question 3: Finally, let's look at the original issue - the water level of the Lake Michigan+Lake Huron system and how it changes over time. When you examine just the Lake Michigan data, zooming in on only the last 20 years of data, does the decrease in water level continue, does it reverse itself, or does it stay the same? In other words, was the low level reported in 2014 something we should continue to be worried about, or was it a fluke? Answer: The lake Michigan/Huron system data has reversed itself in the last couple of years, and has returned to historically reasonable values. It's just a fluke. End of explanation """ # put your code and plots here! # concentration (Q) is in units of micrograms/milliliter # 2 tables * (325 mg/tablet) / 3000 mL * 1000 micrograms/mg Q_start = 2.0 * 325./3000.0*1000. t_half = 3.2 K = 0.693/t_half time=0 t_end = 12.0 dt = 0.01 Q = [] t = [] Q_old = Q_start while time <= t_end: Q_new = Q_old - K*Q_old*dt Q.append(Q_new) t.append(time) Q_old = Q_new time += dt plt.plot(t,Q,'r-') plt.plot([0,12],[150,150],'b-') plt.plot([0,12],[300,300],'b-') plt.ylim(0,350) plt.xlabel('time [hours]') plt.ylabel('concentration [micrograms/mL]') plt.title('concentration of aspirin over time') """ Explanation: Section 3: Modeling drug doses in the human body Modeling the behavior of drugs in the human body is very important in medicine. 
One frequently-used model is called the "Single-Compartment Drug Model", which takes the complex human body and treats it as one homogeneous unit, where drug distribution is instantaneous, the concentration of the drug in the blood (i.e., the amount of drug per volume of blood) is proportional to the drug dosage, and the rate of elimination of the drug is proportional to the amount of drug in the system. Using this model allows the prediction of the range of therapeutic doses where the drug will be effective.
We'll first model the concentration in the body of aspirin, which is commonly used to treat headaches and reduce fever. For adults, it is typical to take one or two 325 mg tablets every four hours, up to a maximum of 12 tablets/day. This dose is assumed to be dissolved immediately into the blood plasma. (An average adult human has about 3 liters of blood plasma.) The concentration of drugs in the blood (represented with the symbol Q) is typically measured in $\mu$g/ml, where 1000 $\mu$g (micrograms) = 1 mg (milligram). For aspirin, the dose that is effective for relieving headaches is typically between 150-300 $\mu$g/ml, and the half-life for removal of the drug from the system is about 3.2 hours (more on that later).
The rate of removal of aspirin from the body (elimination) is proportional to the amount present in the system:
$\frac{dQ}{dt} = -K Q$
where Q is the concentration, and K is a constant of proportionality that is related to the half-life of removal of the drug from the system: $K = 0.693 / t_{1/2}$.
Part 1: We're now going to make a simple model of the amount of aspirin in the human body. Assume that an adult human has a headache and takes two 325 mg aspirin tablets. If the drug immediately enters their system, for how long will their headache be relieved? Show a plot, with an appropriate title, x-axis label, and y-axis label, that shows the concentration of aspirin in the patient's blood over a 12-hour time span.
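Because $\frac{dQ}{dt} = -KQ$ has the closed-form solution $Q(t) = Q_0 e^{-Kt}$, the answer can also be cross-checked analytically: the time at which the concentration falls to the 150 $\mu$g/ml threshold is $t = \ln(Q_0/150)/K$. A sketch of that cross-check (numbers mirror the parameters used elsewhere in this notebook's solution):

```python
import math

Q0 = 2 * 325.0 / 3000.0 * 1000.0   # two 325 mg tablets in 3000 ml plasma -> ~216.7 ug/ml
K = 0.693 / 3.2                    # elimination constant from the 3.2-hour half-life
t_relief = math.log(Q0 / 150.0) / K
print(round(t_relief, 2))          # ~1.7 hours, consistent with the ~1.68 h numerical answer
```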
In your model, make sure to resolve the time evolution well - make sure that your individual time steps are only a few minutes! Put your answer immediately below, and the code you wrote (and plots you generated) to solve this immediately below that. Answer: Their headache will be relieved for about an hour and a half, or perhaps an hour and 45 minutes (precise answer is 1.68 hours). You can tell because that's where the aspirin concentration dips below 150 micrograms/mL. End of explanation """ # put your code and plots here! # concentration (Q) is in units of micrograms/milliliter # 1 tablet * (100 mg/tablet) / 3000 mL * 1000 micrograms/mg Q_dosage = 1.0 * 100./3000.0*1000. t_half = 22.0 absorption_fraction = 0.12 K = 0.693/t_half time=0 t_end = 10.0*24.0 # 10 days dt = 0.01 Q = [] t = [] Q_old = absorption_fraction*Q_dosage t_dosage = 0.0 dt_dosage = 8.0 while time <= t_end: if time - t_dosage >= dt_dosage: Q_old += absorption_fraction*Q_dosage t_dosage = time Q_new = Q_old - K*Q_old*dt Q.append(Q_new) t.append(time) Q_old = Q_new time += dt plt.plot(t,Q,'r-') plt.plot([0,250],[10,10],'b-') plt.plot([0,250],[20,20],'b-') plt.ylim(0,25) #plt.xlim(0,50) plt.xlabel('time [hours]') plt.ylabel('concentration [micrograms/mL]') plt.title('concentration of Dilantin over time') """ Explanation: Part 2: We're now going to model the concentration of a drug that needs to be repeatedly administered - the drug Dilantin, which is used to treat epilepsy. The effective concentration of the drug in humans is 10-20 $\mu$g/ml, the half-life of Dilantin is approximately 22 hours, and the drug comes in 100 mg tablets which are effectively instantaneously released into your bloodstream. For this particular drug, only about 12% of the drug in each dose is actually available for absorption in the bloodstream, meaning that the effective amount added to the blood is 12 mg per dose. 
Assuming that the drug is administered every 8 hours to a patient who starts out having none of the drug in their body, make a plot of the drug concentration over a ten day period and use it to answer the following two questions: How long does it take to reach an effective concentration of the drug in the patient's blood? By roughly when does the drug concentration reach a steady state in the patient's blood? (In other words, after how long is the concentration neither rising nor falling on average?) Answer: Assuming 100 mg per pill and 12% absorption (the corrected version of the homework): (1) we first reach the therapeutic concentration at about 24 hours, and are finally completely above (i.e., never dip below) the therapeutic concentration around 40 hours. (2) We reach steady-state somewhere around 100-120 hours (~somewhere between 4 and 5 days). Assuming 100 mg per pill and 100% absorption (the original version of the homework): (1) we're always above the effective concentration. (2) We reach steady-state somewhere around 120 hours (~somewhere around 5 days). End of explanation """ from IPython.display import HTML HTML( """ <iframe src="https://goo.gl/forms/Px7wk9DcldfyCqMt2?embedded=true" width="80%" height="1200px" frameborder="0" marginheight="0" marginwidth="0"> Loading... </iframe> """ ) """ Explanation: Section 4: Feedback (required!) End of explanation """
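Closing out the drug-model notebook: Part 1's numerical answer (~1.68 hours) can be cross-checked in closed form, since the single-compartment model is just exponential decay. This block is an addition of mine, using only the dose, plasma volume, and half-life stated in the text:

```python
import math

# Closed-form check of Part 1: Q(t) = Q0 * exp(-K * t), so the time spent
# above a threshold C is t = ln(Q0 / C) / K.
DOSE_MG = 2 * 325.0                     # two 325 mg tablets
PLASMA_ML = 3000.0                      # ~3 liters of blood plasma
Q0 = DOSE_MG * 1000.0 / PLASMA_ML       # initial concentration in micrograms/mL
K = 0.693 / 3.2                         # elimination constant from the 3.2 h half-life

t_effective = math.log(Q0 / 150.0) / K  # hours above the 150 micrograms/mL floor
print(round(t_effective, 2))
```

Any small gap between this and the notebook's numerical answer comes from the finite time step used in the Euler integration there.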
Britefury/deep-learning-tutorial-pydata2016
SUPPLEMENTARY - Convolutions with sliding windows.ipynb
mit
%matplotlib inline """ Explanation: Convolutions and sliding windows Plots inline: End of explanation """ import os import numpy as np from matplotlib import pyplot as plt from scipy.ndimage import convolve from skimage.filters import gabor_kernel from skimage.color import rgb2grey from skimage.util.montage import montage2d from skimage.util import view_as_windows from skimage.transform import downscale_local_mean """ Explanation: Imports: End of explanation """ def image_montage(im_3d, padding=1, cval=None, grid_shape=None): if cval is None: return montage2d(np.pad(im_3d, [(0,0), (padding, padding), (padding, padding)], mode='constant'), grid_shape=grid_shape) else: return montage2d(np.pad(im_3d, [(0,0), (padding, padding), (padding, padding)], mode='constant', constant_values=[(0,0), (cval,cval), (cval,cval)]), grid_shape=grid_shape) def pad_image(img, shape): d0 = shape[0]-img.shape[0] d1 = shape[1]-img.shape[1] p0a = d0/2 p0b = d0-p0a p1a = d1/2 p1b = d1-p1a return np.pad(img, [(p0a, p0b), (p1a, p1b)], mode='constant') """ Explanation: Some utility functions for making an image montage for display and padding images: End of explanation """ IMAGE_PATH = os.path.join('images', 'fruit.JPG') # Extract a square block img = rgb2grey(plt.imread(IMAGE_PATH)[:1536,:1536]) print img.shape plt.imshow(img, cmap='gray') plt.show() """ Explanation: Load a photo of some fruit: End of explanation """ img_small = downscale_local_mean(img, (8,8)) plt.imshow(img_small, cmap='gray') plt.show() """ Explanation: Scale down by a factor of 8: End of explanation """ WAVELENGTH = 8.0 THETA = np.pi / 3.0 k_complex = gabor_kernel(1.0/WAVELENGTH, THETA, 1.2) k_imag = np.imag(k_complex) plt.imshow(k_imag, cmap='gray', interpolation='none') plt.imsave('images/single_kernel.png', k_imag, cmap='gray') """ Explanation: Construct a single Gabor filter kernel with a wavelength of 8 and an angle of 60 degrees and select the imaginary component. 
End of explanation """ windows = view_as_windows(img_small, (128,128), (32,32)) grid_shape = windows.shape[:2] windows = windows.reshape((-1, 128,128)) window_feats = [-convolve(1-windows[i], k_imag) for i in range(windows.shape[0])] feats_3d = np.concatenate([c[None,:,:] for c in window_feats], axis=0) feats_montage = image_montage(feats_3d, padding=10, grid_shape=grid_shape) plt.imshow(feats_montage, cmap='gray', interpolation='none') plt.imsave('images/fruit_window_montage.png', feats_montage, cmap='gray') """ Explanation: Extract 128x128 windows from the image, with a spacing of 32 pixels, convolve with the Gabor kernel constructed above and make a montage of the result: End of explanation """
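For readers without scikit-image, the window extraction above can be reproduced with plain NumPy. This is an added sketch, not part of the original notebook: it relies on numpy.lib.stride_tricks.sliding_window_view (NumPy 1.20+) and uses a made-up 192x192 random array in place of img_small:

```python
import numpy as np

# Extract 128x128 windows with a 32-pixel step, mirroring
# view_as_windows(img_small, (128, 128), (32, 32)) from the notebook.
img = np.random.random((192, 192))  # stand-in input

win, step = 128, 32
views = np.lib.stride_tricks.sliding_window_view(img, (win, win))
windows = views[::step, ::step]          # keep every 32nd window position
grid_shape = windows.shape[:2]           # layout of the window grid
windows = windows.reshape((-1, win, win))

print(grid_shape, windows.shape)
```

Because `sliding_window_view` returns a strided view, no pixel data is copied until the final reshape.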
deeplook/alerta_tutorial
tutorial.ipynb
gpl-3.0
from IPython.display import HTML HTML('<iframe src="http://alerta.io" width="100%" height="500"></iframe>') """ Explanation: Alerta Tutorial A tutorial from scratch to writing your own alerts using alerta.io. End of explanation """ from IPython.display import HTML HTML('<iframe src="http://localhost:8090" width="100%" height="500"></iframe>') """ Explanation: Prerequisites Assumed environment (tested on Mac OS X): POSIX OS bash git pkill wget Setup Components installed during the baseline setup (before you can actually see this notebook), executed with bash setup.sh: Miniconda2 pip jupyter ipython RISE plugin for jupyter Install the real thing(s) Components needed for Alerta, executed with bash install.sh: MongoDB Alerta server Alerta Dashboard Install custom packages and tools The following dependencies for creating custom alerts from in the included examples can be installed with bash custom.sh: lxml url3 packaging selenium PhantomJS Start everything Processes to start (Jupyter doesn't support background processes) with bash start.sh: MongoDB Alerta server/API Alerta Dashboard Jupyter tutorial/presentation (this notebook file) Alerta API End of explanation """ from IPython.display import HTML HTML('<iframe src="http://localhost:8095" width="100%" height="500"></iframe>') """ Explanation: Alerta Dashboard End of explanation """ ! cd $ALERTA_TEST_DIR && ./miniconda2/bin/alerta \ --endpoint-url "http://localhost:8090" \ send -E Production -r localhost -e VolUnavailable \ -S Filesystem -v ERROR -s minor \ -t "/Volumes/XYZ not available." ! 
cd $ALERTA_TEST_DIR && ./miniconda2/bin/alerta \ --endpoint-url "http://localhost:8090" \ delete """ Explanation: Alerta Top Run this command in a Jupyter terminal (or any other): bash ./miniconda2/bin/alerta --endpoint http://localhost:8090 top Rolling Your Own Alerts Simple, Unix style End of explanation """ from alerta.api import ApiClient from alerta.alert import Alert api = ApiClient(endpoint='http://localhost:8090') alert = Alert(resource='localhost', event='VolUnavailable', service=['Filesystem'], environment='Production', value='ERROR', severity='minor') res = api.send(alert) """ Explanation: Same Thing, Python style End of explanation """ import utils utils.volume_is_mounted('/Volumes/Intenso64') utils.internet_available() utils.using_vpn(city='Berlin', country='Germany') utils.get_python_sites_status() from IPython.display import HTML HTML('<iframe src="https://status.python.org" width="100%" height="500"></iframe>') utils.get_webpage_info('http://www.python.org', title_contains='Python') import sys from os.path import join, dirname conda_path = join(dirname(sys.executable), 'conda') conda_path utils.get_conda_list(conda_path) utils.get_conda_updates(conda_path) ks_url = 'https://www.kickstarter.com/projects/udoo/udoo-x86-the-most-powerful-maker-board-ever/' utils.get_kickstarter_days_left(ks_url) # uses Firefox as it doesn't need a special driver installation phantomjs_path = './alerta_test_directory/phantomjs-2.1.1-macosx/bin/phantomjs' browser = utils.webdriver.PhantomJS(phantomjs_path) utils.get_kickstarter_days_left(ks_url, browser) """ Explanation: Custom Alerts Remember, you can do amazing stuff… End of explanation """ from IPython.display import HTML HTML('<iframe src="http://localhost:8095" width="100%" height="500"></iframe>') import my_alerts as ma ma.start(list=True) rts = ma.start(name='alert_conda_outdated') rts import subprocess cmd = "%s install -y sqlite==3.8.4.1" % conda_path print subprocess.check_output(cmd.split()) cmd = "%s update -y 
sqlite" % conda_path print subprocess.check_output(cmd.split()) rts[0].stop() """ Explanation: Get Alerts to Get Things Done Watch this dashboard as you perform changes on the next slides! End of explanation """
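The command-line and Python examples earlier in this tutorial send the same fields either way. As an offline illustration (an addition of mine that does not contact a real Alerta server, and only mirrors the field names used in the notebook's Alert(...) call), the alert can be thought of as a plain dictionary:

```python
# Build an alert payload from the same fields used with `alerta send` above.
def build_alert(resource, event, environment, severity, service, value, text=''):
    return {
        'resource': resource,
        'event': event,
        'environment': environment,
        'severity': severity,
        'service': list(service),
        'value': value,
        'text': text,
    }

alert = build_alert('localhost', 'VolUnavailable', 'Production', 'minor',
                    ['Filesystem'], 'ERROR', text='/Volumes/XYZ not available.')
print(alert['event'], alert['severity'])
```

Thinking of the alert this way makes it easier to see why the CLI flags (-r, -e, -E, -s, -S, -v, -t) and the Python keyword arguments line up one-to-one.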
riceda195/kernel_gateway_demos
swagger-notebook-service/swagger-petstore-service/SwaggerPetstoreApi.ipynb
bsd-3-clause
!pip install dicttoxml import json from dicttoxml import dicttoxml PETS = {} PET_STATUS_INDEX = {} TAG_INDEX = {} ORDERS = {} ORDER_STATUS_INDEX = {} JSON = 'application/json' XML = 'application/xml' content_type = JSON class MissingField(Exception): def __init__(self, type_name, field): self.msg = '{} is missing required field "{}"'.format(type_name, field) class InvalidValue(Exception): def __init__(self, name, type_name): self.msg = '{} is not a {}'.format(name, type_name) class NotFound(Exception): def __init__(self, type_name, id): self.msg = 'There is no {} with id {}'.format(type_name, id) def print_response(content, content_type=JSON): if content_type == JSON: print(json.dumps(content)) elif content_type == XML: print(dicttoxml(content).decode('UTF-8')) def split_query_param(param): values = [] for paramValue in param: values += paramValue.split(',') values = map(lambda x: x.strip(), values) return list(values) def create_error_response(code, error_type, message): return { 'code' : code, 'type' : error_type, 'message' : message } # Pet APIs def validate_pet(pet): fields = ['id', 'category', 'name', 'photoUrls', 'tags', 'status'] for field in fields: if field not in pet: raise MissingField('Pet', field) def persist_pet(pet): validate_pet(pet) PETS[pet['id']] = pet index_pet(pet) return pet def get_pet_by_id(pet_id): try: pet_id = int(pet_id) if not pet_id in PETS: raise NotFound('Pet', pet_id) else: return PETS[pet_id] except ValueError: raise InvalidValue('Pet id', 'int') def delete_pet_by_id(pet_id): try: pet_id = int(pet_id) if not pet_id in PETS: raise NotFound('Pet', pet_id) else: pet = PETS[pet_id] del PETS[pet_id] return pet except ValueError: raise InvalidValue('Pet id', 'int') def index_pet(pet): # Index the status of the pet pet_status = pet['status'] if pet_status not in PET_STATUS_INDEX: PET_STATUS_INDEX[pet_status] = set() PET_STATUS_INDEX[pet_status].add(pet['id']) # index the tags of the pet for tag in pet['tags']: tag = tag.strip() if tag not 
in TAG_INDEX: TAG_INDEX[tag] = set() TAG_INDEX[tag].add(pet['id']) def collect_pets_by_id(petIds): petIds = set(petIds) petList = [] for petId in petIds: petList.append(PETS[petId]) return petList # Order APIs def validate_order(order): fields = ['id', 'petId', 'quantity', 'shipDate', 'status', 'complete'] for field in fields: if field not in order: raise MissingField('Order', field) def persist_order(order): validate_order(order) ORDERS[order['id']] = order def get_order_by_id(order_id): try: order_id = int(order_id) if not order_id in ORDERS: raise NotFound('Order', order_id) else: return ORDERS[order_id] except ValueError: raise InvalidValue('Order id', 'int') def delete_order_by_id(order_id): try: order_id = int(order_id) if not order_id in ORDERS: raise NotFound('Order', order_id) else: order = ORDERS[order_id] del ORDERS[order_id] return order except ValueError: raise InvalidValue('Order id', 'int') """ Explanation: Swagger Petstore - 1.0.0 This is a sample Petstore server. You can find out more about Swagger at http://swagger.io or on irc.freenode.net, #swagger. For this sample, you can use the api key special-key to test the authorization filters.
End of explanation """ REQUEST = json.dumps({ 'body' : { 'id': 1, 'category' : { 'id' : 1, 'name' : 'cat' }, 'name': 'fluffy', 'photoUrls': [], 'tags': ['cat', 'siamese'], 'status': 'available' } }) # POST /pet try: req = json.loads(REQUEST) pet = req['body'] persist_pet(pet) response = pet except MissingField as e: response = create_error_response(405, 'Invalid Pet', e.msg) except ValueError as e: response = create_error_response(405, 'Invalid Pet', 'Could not parse json') finally: print_response(response, content_type) """ Explanation: POST /pet Add a new pet to the store Body Parameters: body (required) - Pet object that needs to be added to the store End of explanation """ REQUEST = json.dumps({ 'body' : { 'id': 1, 'category' : { 'id' : 1, 'name' : 'cat' }, 'name': 'fluffy', 'photoUrls': [], 'tags': ['cat', 'siamese'], 'status': 'available' } }) # PUT /pet try: req = json.loads(REQUEST) new_pet = req['body'] current_pet = get_pet_by_id(new_pet['id']) persist_pet(new_pet) response = new_pet except InvalidValue as e: response = create_error_response(400, 'Invalid ID', e.msg) except ValueError as e: response = create_error_response(400, 'Invalid Pet', 'Could not parse json') except NotFound as e: response = create_error_response(404, 'Not Found', e.msg) except MissingField as e: response = create_error_response(405, 'Invalid Pet', e.msg) finally: print_response(response, content_type) """ Explanation: PUT /pet Update an existing pet Body Parameters: body (required) - Pet object that needs to be added to the store End of explanation """ REQUEST = json.dumps({ 'args' : { 'status' : ['available , unavailable'] } }) # GET /pet/findByStatus req = json.loads(REQUEST) status_list = split_query_param(req['args']['status']) pet_ids = [] for status in status_list: if status in PET_STATUS_INDEX: pet_ids += PET_STATUS_INDEX[status] pet_list = collect_pets_by_id(pet_ids) print_response(pet_list, content_type) """ Explanation: GET /pet/findByStatus Finds Pets by status Multiple 
status values can be provided with comma separated strings Query Parameters: status (required) - Status values that need to be considered for filter End of explanation """ REQUEST = json.dumps({ 'args' : { 'tags' : ['cat , dog, horse'] } }) # GET /pet/findByTags req = json.loads(REQUEST) tag_list = split_query_param(req['args']['tags']) pet_ids = [] for tag in tag_list: if tag in TAG_INDEX: pet_ids += TAG_INDEX[tag] pet_list = collect_pets_by_id(pet_ids) print_response(pet_list, content_type) """ Explanation: GET /pet/findByTags Finds Pets by tags Multiple tags can be provided with comma separated strings. Use tag1, tag2, tag3 for testing. Query Parameters: tags (required) - Tags to filter by End of explanation """ REQUEST = json.dumps({ 'path' : { 'petId' : 1 } }) # GET /pet/:petId try: req = json.loads(REQUEST) pet_id = req['path']['petId'] response = get_pet_by_id(pet_id) except InvalidValue as e: response = create_error_response(400, 'Invalid ID', e.msg) except NotFound as e: response = create_error_response(404, 'Not Found', e.msg) finally: print_response(response, content_type) """ Explanation: GET /pet/:petId Find pet by ID Returns a single pet Path Parameters: petId (required) - ID of pet to return End of explanation """ REQUEST = json.dumps({ 'path' : { 'petId' : 1 }, 'body' : { 'name' : ['new name'] } }) # POST /pet/:petId try: req = json.loads(REQUEST) pet_updates = req['body'] pet_id = req['path']['petId'] old_pet = get_pet_by_id(pet_id) props = ['name', 'status'] for prop in props: if prop in pet_updates: old_pet[prop] = pet_updates[prop][0] response = persist_pet(old_pet) except InvalidValue as e: response = create_error_response(400, 'Invalid ID', e.msg) except NotFound as e: response = create_error_response(404, 'Not Found', e.msg) finally: print_response(response, content_type) """ Explanation: POST /pet/:petId Updates a pet in the store with form data Path Parameters: petId (required) - ID of pet that needs to be updated Form Parameters: name
(optional) - Updated name of the pet status (optional) - Updated status of the pet End of explanation """ REQUEST = json.dumps({ 'path' : { 'petId' : '1' } }) # DELETE /pet/:petId try: req = json.loads(REQUEST) pet_id = req['path']['petId'] response = delete_pet_by_id(pet_id) except InvalidValue as e: response = create_error_response(400, 'Invalid ID', e.msg) except NotFound as e: response = create_error_response(404, 'Not Found', e.msg) finally: print_response(response, content_type) """ Explanation: DELETE /pet/:petId Deletes a pet Path Parameters: petId (required) - Pet id to delete End of explanation """ # GET /store/inventory status_counts = {} for status in ORDER_STATUS_INDEX: status_counts[status] = len(set(ORDER_STATUS_INDEX[status])) print_response(status_counts, content_type) """ Explanation: Store Endpoints GET /store/inventory Returns pet inventories by status Returns a map of status codes to quantities End of explanation """ REQUEST = json.dumps({ 'body' : { 'id' : 1, 'petId' : 1, 'quantity' : 1, 'shipDate' : '12/30/2015', 'status' : 'placed', 'complete' : False } }) # POST /store/order try: req = json.loads(REQUEST) order = req['body'] persist_order(order) response = order except MissingField as e: response = create_error_response(400, 'Invalid Order', e.msg) except ValueError as e: response = create_error_response(400, 'Invalid Order', 'Could not parse json') finally: print_response(response, content_type) """ Explanation: POST /store/order Place an order for a pet Body Parameters: body (required) - order placed for purchasing the pet End of explanation """ REQUEST = json.dumps({ 'path' : { 'orderId' : 1 } }) # GET /store/order/:orderId try: req = json.loads(REQUEST) order_id = req['path']['orderId'] response = get_order_by_id(order_id) except InvalidValue as e: response = create_error_response(400, 'Invalid ID', e.msg) except NotFound as e: response = create_error_response(404, 'Not Found', e.msg) finally: print_response(response, content_type) """ 
Explanation: GET /store/order/:orderId Find purchase order by ID For valid response try integer IDs with value &lt;= 5 or &gt; 10. Other values will generated exceptions Path Parameters: orderId (required) - ID of pet that needs to be fetched End of explanation """ REQUEST = json.dumps({ 'path' : { 'orderId' : 1 } }) # DELETE /store/order/:orderId try: req = json.loads(REQUEST) order_id = req['path']['orderId'] response = delete_order_by_id(order_id) except InvalidValue as e: response = create_error_response(400, 'Invalid ID', e.msg) except NotFound as e: response = create_error_response(404, 'Not Found', e.msg) finally: print_response(response, content_type) """ Explanation: DELETE /store/order/:orderId Delete purchase order by ID For valid response try integer IDs with value &lt; 1000. Anything above 1000 or nonintegers will generate API errors Path Parameters: orderId (required) - ID of the order that needs to be deleted End of explanation """ PETS = {} STATUS_INDEX = {} TAG_INDEX = {} ORDERS = {} """ Explanation: Initialization Sets all stores to empty dictionaries, so when the app starts there is no initial state. End of explanation """
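The endpoints above all follow the same shape: parse the REQUEST, validate, hit the in-memory store, and translate failures into error-response dicts. Here is a condensed sketch of that pattern (an addition of mine, with a trimmed field list rather than the notebook's full Pet schema):

```python
PETS = {}

def create_error_response(code, error_type, message):
    return {'code': code, 'type': error_type, 'message': message}

def persist_pet(pet):
    # Validate required fields, then store by id -- the same flow the
    # notebook's handlers implement with exceptions.
    for field in ('id', 'name', 'status'):
        if field not in pet:
            return create_error_response(405, 'Invalid Pet', 'missing ' + field)
    PETS[pet['id']] = pet
    return pet

ok = persist_pet({'id': 1, 'name': 'fluffy', 'status': 'available'})
bad = persist_pet({'id': 2})
print(ok['name'], bad['code'])  # fluffy 405
```

The notebook prefers exceptions plus try/except in each handler, which keeps the validation code and the HTTP error mapping in separate places; this sketch inlines the two for brevity.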
scoyote/RHealthDataImport
ImportAppleHealthXML.ipynb
mit
import xml.etree.ElementTree as et import pandas as pd import numpy as np from datetime import * import matplotlib.pyplot as plt import re import os.path import zipfile import pytz %matplotlib inline plt.rcParams['figure.figsize'] = 16, 8 """ Explanation: Download, Parse and Interrogate Apple Health Export Data The first part of this program is all about getting the Apple Health export and putting it into an analyzable format. At that point it can be analysed anywhere. The second part of this program is concerned with using SAS Scripting Wrapper for Analytics Transfer (SWAT) Python library to transfer the data to SAS Viya, and analyze it there. The SWAT package provides native python language access to the SAS Viya codebase. https://github.com/sassoftware/python-swat This file was created from a desire to get my hands on data collected by Apple Health, notably heart rate information collected by Apple Watch. For this to work, this file needs to be in a location accessible to Python code. A little bit of searching told me that iCloud file access is problematic and that there were already a number of ways of doing this with the Google API if the file was saved to Google Drive. I chose PyDrive. So for the end to end program to work with little user intervention, you will need to sign up for Google Drive, set up an application in the Google API and install Google Drive app to your iPhone. This may sound involved, and it is not necessary if you simply email the export file to yourself and copy it to a filesystem that Python can see. If you choose to do that, all of the Google Drive portion can be removed. I like the Google Drive process though as it enables a minimal manual work scenario. This version requires the user to grant Google access, requiring some additional clicks, but it is not too much. I think it is possible to automate this to run without user intervention as well using security files. 
The first step to enabling this process is exporting the data from Apple Health. As of this writing, open Apple Health and click on your user icon or photo. Near the bottom of the next page in the app will be a button or link called Export Health Data. Clicking on this will generate an XML file, zipped up. The next dialog will ask you where you want to save it. Options are to email, save to iCloud, message etc... Select Google Drive. Google Drive allows multiple files with the same name and this is accounted for by this program. End of explanation """ # Authenticate into Google Drive from pydrive.auth import GoogleAuth gauth = GoogleAuth() gauth.LocalWebserverAuth() """ Explanation: Authenticate with Google This will open a browser to let you begin the process of authentication with an existing Google Drive account. This process will be separate from Python. For this to work, you will need to set up an Other Authentication OAuth credential at https://console.developers.google.com/apis/credentials, save the secret file in your root directory and a few other things that are detailed at https://pythonhosted.org/PyDrive/. The PyDrive instructions also show you how to set up your Google application. There are other methods for accessing the Google API from Python, but this one seems pretty nice. The first time through the process, regular sign-in and two-factor authentication are required (if you require two-factor auth), but after that it is just a process of telling Google that it is ok for your Google application to access Drive. End of explanation """ from pydrive.drive import GoogleDrive drive = GoogleDrive(gauth) file_list = drive.ListFile({'q': "'root' in parents and trashed=false"}).GetList() # Step through the file list and find the most current export.zip file id, then use # that later to download the file to the local machine.
# This may look a little old school, but these file lists will never be massive and # it is a readable, easy one-pass way to get the most current file using the # least (or low) amount of resources selection_dt = datetime.strptime("2000-01-01T01:01:01.001Z","%Y-%m-%dT%H:%M:%S.%fZ") print("Matching Files") for file1 in file_list: if re.search("^export-*\d*.zip",file1['title']): dt = datetime.strptime(file1['createdDate'],"%Y-%m-%dT%H:%M:%S.%fZ") if dt > selection_dt: selection_id = file1['id'] selection_dt = dt print(' title: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate'])) if not os.path.exists('healthextract'): os.mkdir('healthextract') """ Explanation: Download the most recent Apple Health export file Now that we are authenticated into Google Drive, use PyDrive to access the API and get to the files stored there. Google Drive allows multiple files with the same name, but it indexes them with the ID to keep them separate. In this block, we make one pass of the file list where the file name is called export.zip, and save the row that corresponds with the most recent date. We will use that file id later to download the correct file that corresponds with the most recent date. Apple Health export names the file export.zip, and at the time this was written, there is no other option.
End of explanation """ for file1 in file_list: if file1['id'] == selection_id: print('Downloading this file: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate'])) file1.GetContentFile("healthextract/export.zip") """ Explanation: Download the file from Google Drive Ensure that the file downloaded is the latest file generated End of explanation """ zip_ref = zipfile.ZipFile('healthextract/export.zip', 'r') zip_ref.extractall('healthextract') zip_ref.close() """ Explanation: Unzip the most current file to a holding directory End of explanation """ path = "healthextract/apple_health_export/export.xml" e = et.parse(path) #this was from an older iPhone, to demonstrate how to join files legacy = et.parse("healthextract/apple_health_legacy/export.xml") #<<TODO: Automate this process #legacyFilePath = "healthextract/apple_health_legacy/export.xml" #if os.path.exists(legacyFilePath): # legacy = et.parse("healthextract/apple_health_legacy/export.xml") #else: # os.mkdir('healthextract/apple_health_legacy') """ Explanation: Parse Apple Health Export document End of explanation """ pd.Series([el.tag for el in e.iter()]).value_counts() """ Explanation: List XML headers by element count End of explanation """ pd.Series([atype.get('type') for atype in e.findall('Record')]).value_counts() """ Explanation: List types for "Record" Header End of explanation """ import pytz #Extract the heartrate values, and get a timestamp from the xml # there is likely a more efficient way, though this is very fast def txloc(xdate,fmt): eastern = pytz.timezone('US/Eastern') dte = xdate.astimezone(eastern) return datetime.strftime(dte,fmt) def xmltodf(eltree, element,outvaluename): dt = [] v = [] for atype in eltree.findall('Record'): if atype.get('type') == element: dt.append(datetime.strptime(atype.get("startDate"),"%Y-%m-%d %H:%M:%S %z")) v.append(atype.get("value")) myd = pd.DataFrame({"Create":dt,outvaluename:v}) colDict = {"Year":"%Y","Month":"%Y-%m", 
"Week":"%Y-%U","Day":"%d","Hour":"%H","Days":"%Y-%m-%d","Month-Day":"%m-%d"} for col, fmt in colDict.items(): myd[col] = myd['Create'].dt.tz_convert('US/Eastern').dt.strftime(fmt) myd[outvaluename] = myd[outvaluename].astype(float).astype(int) print('Extracting ' + outvaluename + ', type: ' + element) return(myd) HR_df = xmltodf(e,"HKQuantityTypeIdentifierHeartRate","HeartRate") EX_df = xmltodf(e,"HKQuantityTypeIdentifierAppleExerciseTime","Extime") EX_df.head() #comment this cell out if no legacy exports. # extract legacy data, create series for heartrate to join with newer data #HR_df_leg = xmltodf(legacy,"HKQuantityTypeIdentifierHeartRate","HeartRate") #HR_df = pd.concat([HR_df_leg,HR_df]) #import pytz #eastern = pytz.timezone('US/Eastern') #st = datetime.strptime('2017-08-12 23:45:00 -0400', "%Y-%m-%d %H:%M:%S %z") #ed = datetime.strptime('2017-08-13 00:15:00 -0400', "%Y-%m-%d %H:%M:%S %z") #HR_df['c2'] = HR_df['Create'].dt.tz_convert('US/Eastern').dt.strftime("%Y-%m-%d") #HR_df[(HR_df['Create'] >= st) & (HR_df['Create'] <= ed) ].head(10) #reset plot - just for tinkering plt.rcParams['figure.figsize'] = 30, 8 HR_df.boxplot(by='Month',column="HeartRate", return_type='axes') plt.grid(axis='x') plt.title('All Months') plt.ylabel('Heart Rate') plt.ylim(40,140) dx = HR_df[HR_df['Year']=='2019'].boxplot(by='Week',column="HeartRate", return_type='axes') plt.title('All Weeks') plt.ylabel('Heart Rate') plt.xticks(rotation=90) plt.grid(axis='x') [plt.axvline(_x, linewidth=1, color='blue') for _x in [10,12]] plt.ylim(40,140) monthval = '2019-03' #monthval1 = '2017-09' #monthval2 = '2017-10' #HR_df[(HR_df['Month']==monthval1) | (HR_df['Month']== monthval2)].boxplot(by='Month-Day',column="HeartRate", return_type='axes') HR_df[HR_df['Month']==monthval].boxplot(by='Month-Day',column="HeartRate", return_type='axes') plt.grid(axis='x') plt.rcParams['figure.figsize'] = 16, 8 plt.title('Daily for Month: '+ monthval) plt.ylabel('Heart Rate') plt.xticks(rotation=90) 
plt.ylim(40,140) HR_df[HR_df['Month']==monthval].boxplot(by='Hour',column="HeartRate") plt.title('Hourly for Month: '+ monthval) plt.ylabel('Heart Rate') plt.grid(axis='x') plt.ylim(40,140) """ Explanation: Extract Values to Data Frame TODO: Abstraction of the next code block End of explanation """ # This isnt efficient yet, just a first swipe. It functions as intended. def getDelta(res,ttp,cyclelength): mz = [x if (x >= 0) & (x < cyclelength) else 999 for x in res] if ttp == 0: return(mz.index(min(mz))+1) else: return(mz[mz.index(min(mz))]) #chemodays = np.array([date(2017,4,24),date(2017,5,16),date(2017,6,6),date(2017,8,14)]) chemodays = np.array([date(2018,1,26),date(2018,2,2),date(2018,2,9),date(2018,2,16),date(2018,2,26),date(2018,3,2),date(2018,3,19),date(2018,4,9),date(2018,5,1),date(2018,5,14),date(2018,6,18),date(2018,7,10),date(2018,8,6)]) HR_df = xmltodf(e,"HKQuantityTypeIdentifierHeartRate","HeartRate") #I dont think this is efficient yet... a = HR_df['Create'].apply(lambda x: [x.days for x in x.date()-chemodays]) HR_df['ChemoCycle'] = a.apply(lambda x: getDelta(x,0,21)) HR_df['ChemoDays'] = a.apply(lambda x: getDelta(x,1,21)) import seaborn as sns plotx = HR_df[HR_df['ChemoDays']<=21] plt.rcParams['figure.figsize'] = 24, 8 ax = sns.boxplot(x="ChemoDays", y="HeartRate", hue="ChemoCycle", data=plotx, palette="Set2",notch=1,whis=0,width=0.75,showfliers=False) plt.ylim(65,130) #the next statement puts the chemodays variable as a rowname, we need to fix that plotx_med = plotx.groupby('ChemoDays').median() #this puts chemodays back as a column in the frame. 
I need to see if there is a way to prevent the effect plotx_med.index.name = 'ChemoDays' plotx_med.reset_index(inplace=True) snsplot = sns.pointplot(x='ChemoDays', y="HeartRate", data=plotx_med,color='Gray') """ Explanation: import calmap ts = pd.Series(HR_df['HeartRate'].values, index=HR_df['Days']) ts.index = pd.to_datetime(ts.index) tstot = ts.groupby(ts.index).median() plt.rcParams['figure.figsize'] = 16, 8 import warnings warnings.simplefilter(action='ignore', category=FutureWarning) calmap.yearplot(data=tstot,year=2017) Flag Chemotherapy Days for specific analysis The next two cells provide the ability to introduce cycles that start on specific days and include this data in the datasets so that they can be overlaid in graphics. In the example below, there are three cycles of 21 days. The getDelta function returns the cycle number when ttp == 0 and the days since day 0 when ttp == 1. This allows the overlaying of the cycles, with the days since day 0 being overlaid. End of explanation """ import seaborn as sns sns.set(style="ticks", palette="muted", color_codes=True) sns.boxplot(x="Month", y="HeartRate", data=HR_df,whis=np.inf, color="c") # Add in points to show each observation snsplot = sns.stripplot(x="Month", y="HeartRate", data=HR_df,jitter=True, size=1, alpha=.15, color=".3", linewidth=0) hr_only = HR_df[['Create','HeartRate']] hr_only.tail() hr_only.to_csv('~/Downloads/stc_hr.csv') """ Explanation: Boxplots Using Seaborn End of explanation """
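The getDelta logic used above (find which chemo cycle a heart-rate sample falls in, and how many days into that cycle it is) can be restated more plainly. This is an added sketch of the same idea, not a drop-in replacement for the notebook's pandas pipeline:

```python
def get_delta(offsets, ttp, cycle_length):
    # offsets: days elapsed since each chemo start date (may be negative).
    # Offsets outside a cycle window [0, cycle_length) are masked with a
    # sentinel, just as the notebook's list comprehension uses 999.
    masked = [x if 0 <= x < cycle_length else 999 for x in offsets]
    best = min(range(len(masked)), key=lambda i: masked[i])
    return best + 1 if ttp == 0 else masked[best]

offsets = [30, 9, -12]            # e.g. a sample taken 9 days into cycle 2
print(get_delta(offsets, 0, 21))  # cycle number -> 2
print(get_delta(offsets, 1, 21))  # days since that cycle's day 0 -> 9
```

Picking the minimum of the masked offsets works because at most one chemo start date can be within cycle_length days in the past at any given sample time.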
gibiansky/blog
posts/coding-intro-to-nns/post.ipynb
gpl-2.0
import numpy as np """ Explanation: In this tutorial, we'll use Python with the Numpy and Theano to get a feel for writing machine learning algorithms. We'll start with a brief intro those libraries, and then implement a logistic regression and a neural network, looking at some properties of the implementations as we train them. To run this code, you must have a computer with Python, Theano, NumPy, and Matplotlib installed, as well as IPython if you wish to follow along directly in this notebook, which you can download from here. Intro to Python Libraries Before jumping into neural networks with Python, we'll need to understand the basics of NumPy and Theano, the two libraries we'll be using for our code. NumPy Let's start by importing the numpy module under its common np alias: End of explanation """ np.array([1, 2, 3]) """ Explanation: Let's start off with NumPy. If you've used Matlab before, NumPy should feel fairly familiar. It's a library in Python for dealing with structured matrices. We can make a NumPy array by feeding a Python list to np.array: End of explanation """ np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) """ Explanation: We can do two dimensional matrices by giving it lists of lists: End of explanation """ np.array([[[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 2, 3], [4, 5, 6], [7, 8, 9]]]) """ Explanation: And three dimension matrices by giving it lists of lists of lists: End of explanation """ print 'Float', np.array([1, 2, 3], dtype=np.float) print 'Double', np.array([1, 2, 3], dtype=np.double) print 'Unsigned 8-bit int', np.array([1, 2, 3], dtype=np.uint8) """ Explanation: NumPy has its own system for managing what type of data it has, so we can ask it to store our data in whatever format we want: End of explanation """ print 'Matrix of 1s' print np.ones((3, 3)) print 'Matrix of 0s' print np.zeros((3, 3)) # Many ways of generating random values in numpy.random # Different distributions, etc. 
# Default is uniform([0, 1]) print 'Random matrix' print np.random.random((3, 3)) """ Explanation: NumPy provides many ways to create matrices without initializing them explicitly: End of explanation """ A = np.random.random((3, 3)) print A.shape """ Explanation: Note that in each of these cases we had to give NumPy the tuple (3, 3); this is the shape of the desired matrix. You can access it with the shape attribute: End of explanation """ A[0, 2] # Zero-indexed! """ Explanation: NumPy lets us access individual elements of the arrays: End of explanation """ A = np.random.random((3, 3)) B = np.random.random((3, 3)) # Elementwise addition, subtraction A + B, A - B # Elementwise multiplication, division A * B # NOT matrix multiplication A / B # Matrix multiplication A.dot(B) # Dot Product a = np.random.random(3) b = np.random.random(3) a.dot(b) # Matrix inverse np.linalg.inv(A) """ Explanation: NumPy lets us do standard mathematical operations on our matrices. Some just use normal symbols, others require a function call: End of explanation """ from theano import * import theano.tensor as T """ Explanation: Theano NumPy is a fairly simple library. It lets us represent data in a compact way and do common operations on it, but it doesn't help us reduce the complexity of our programs significantly. Theano is a Python library built on top of NumPy with the goal of simplifying machine-learning style algorithm design and programming. Unlike NumPy, it uses a symbolic representation of the operations you're doing. It is a bit more complex and requires a bit more setup, so let's dig in. A complete Theano tutorial is available here. Start out by importing it: End of explanation """ x = T.dvector("x") y = T.dvector("y") A = T.dmatrix("A") """ Explanation: Everything in Theano is done with symbolic variables. We create symbolic variables with constructors from the theano.tensor package, such as dmatrix or fscalar or dtensor. 
The first character is d or f, which stands for "double" or "float" (the precision of your data, 64-bit or 32-bit): End of explanation """ z = x + A.dot(y) """ Explanation: Note that we give each symbolic variable a name, and that the variable name and the name we give Theano don't have to match up. We can use NumPy-style operations to operate on these: End of explanation """ pp(z) """ Explanation: However, note that we haven't given Theano any data! Theano is building up a symbolic representation of our program in the background. We can ask it to print the expression corresponding to any particular value: End of explanation """ f = function([x, y, A], z) """ Explanation: To do something with Theano, we have to convert a symbolic expression into a function. For this we have the function command: End of explanation """ x_data = np.random.random(10) y_data = np.random.random(5) A_data = np.random.random((10, 5)) f(x_data, y_data, A_data) """ Explanation: The function command (in its simplest form) takes two things: a list of input variables and an output variable. We can now call the function f with NumPy arrays as inputs: End of explanation """ !wget http://deeplearning.net/data/mnist/mnist.pkl.gz """ Explanation: Preparing the Data Before proceeding with NumPy and Theano, let's get ourselves some data to learn from. We're going to use a database called MNIST, a database of handwritten digits that is commonly used as a benchmark for machine learning algorithms. 
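As a quick sanity check before moving on to the data: the compiled function f above just computes z = x + Ay. A dependency-free sketch of the same computation in plain Python (hypothetical helper names, plain lists standing in for NumPy arrays):

```python
def matvec(A, y):
    # Multiply a matrix (given as a list of rows) by a vector.
    return [sum(a * yj for a, yj in zip(row, y)) for row in A]

def f_plain(x, y, A):
    # The same computation as the compiled Theano function: z = x + A.dot(y)
    return [xi + ri for xi, ri in zip(x, matvec(A, y))]

x = [1.0, 2.0]
y = [1.0, 0.0, -1.0]
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
print(f_plain(x, y, A))  # [-1.0, 0.0]
```

Theano's value is not in this small computation itself, but in its ability to compile, optimize, and differentiate expressions like this automatically.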
First, let's download the dataset with wget: End of explanation """ import cPickle, gzip # Load the dataset with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, valid_set, test_set = cPickle.load(f) """ Explanation: Next, let's load it in as a NumPy array: End of explanation """ print 'Shapes:' print '\tTraining: ', train_set[0].shape, train_set[1].shape print '\tValidation: ', valid_set[0].shape, valid_set[1].shape print '\tTest: ', test_set[0].shape, test_set[1].shape """ Explanation: Note that it's already conveniently divided into a test, validation, and training set. Let's take a look at the shape of our data: End of explanation """ # Weight vector shape: from 784 pixels to 10 possible classifications W_shape = (10, 784) b_shape = 10 W = shared(np.random.random(W_shape) - 0.5, name="W") b = shared(np.random.random(b_shape) - 0.5, name="b") """ Explanation: We have 50 thousand training images, ten thousand validation images, and ten thousand test images; each set comes with a set of labels as well. Logistic Regression Before immediately jumping in to a neural network, let's start off with just a logistic regression. A logistic regression is, after all, just a neural network with no hidden layers! This part of the tutorial is taken almost directly from the Theano documentation. A logistic regression has a matrix of weights $W$ and a vector of biases $b$; these are what we will be learning. Let's start off by making them: End of explanation """ x = T.dmatrix("x") # N x 784 labels = T.dmatrix("labels") # N x 10 """ Explanation: We are now using the shared constructor function instead of something like dmatrix or fvector. Using shared tells Theano that we plan on reusing this: instead of this being an input into our function, this is something that is hidden in the background, and our function can update. Next, let's make our input matrix (which can be the training, validation, or test set), and our input labels. 
Both of these are symbolic inputs, so we use dmatrix as before: End of explanation """ output = T.nnet.softmax(x.dot(W.transpose()) + b) """ Explanation: Finally, let's construct our symbolic expression representing the output of our logistic regression. We use the function theano.tensor.nnet.softmax, which implements the softmax function $s(\vec x) : \mathbb{R}^N \to \mathbb{R}^N$, where the $j$th component is defined as: $$s(\vec x)_j = \frac{e^{x_j}}{\sum_{i=1}^N e^{x_i}}$$ End of explanation """ prediction = T.argmax(output, axis=1) """ Explanation: The model predicts whichever class has the highest output on its corresponding unit: End of explanation """ cost = T.nnet.binary_crossentropy(output, labels).mean() """ Explanation: Next, we have to generate an error function so that we have a way to train our model. For logistic regression, we could use the negative log-likelihood. In neural networks, the classic way of generating an error model is called binary cross-entropy, and since we're treating this regression as just a neural network without a hidden layer, let's go ahead and use it. If our prediction is $\hat y$ and the real value is $y$, then binary cross-entropy is defined as $$b(y, \hat y) = -y \log(\hat y) - (1 - y)\log(1 - \hat y)$$ Since $y$ is either zero or one, this is really applying a logarithmic penalty to $\hat y$ according to whatever $y$ should be. When $y$ and $\hat y$ are vectors, we apply binary cross-entropy element-wise and take the mean of the components to get the total cost. End of explanation """ def encode_labels(labels, max_index): """Encode the labels into binary vectors.""" # Allocate the output labels, all zeros. encoded = np.zeros((labels.shape[0], max_index + 1)) # Fill in the ones at the right indices. 
for i in xrange(labels.shape[0]): encoded[i, labels[i]] = 1 return encoded print 'Example label encoding' print encode_labels(np.array([1, 3, 2, 0]), 3) """ Explanation: In the above usage of binary_crossentropy, we are assuming that labels is already in encoded form. That is, instead of being a digit between 0 and 9, each label is a 10-vector filled with zeros, except for one element which is a one; the index at which the one is located is the digit that the label represents. Before proceeding, let's write a label encoder with NumPy: End of explanation """ compute_prediction = function([x], prediction) compute_cost = function([x, labels], cost) """ Explanation: Finally, we are ready to compile our Theano functions! We're going to make three functions: compute_prediction: Given an input vector, predict what digit it is. compute_cost: Given an input vector and labels, compute the current cost. train: Given an input vector and labels, update the shared weights. We already know how to do the first two given our knowledge of Theano: End of explanation """ # Compute the gradient of our error function grad_W = grad(cost, W) grad_b = grad(cost, b) # Set up the updates we want to do alpha = 2 updates = [(W, W - alpha * grad_W), (b, b - alpha * grad_b)] # Make our function. Have it return the cost! train = function([x, labels], cost, updates=updates) """ Explanation: In order to implement train, we're going to have to use Theano update functionality. In addition to inputs and outputs, Theano's function can take a list of updates that we would like to have it perform. Each update is a tuple, where each tuple contains the shared variable we'd like to update and the value we'd like to update it to. In this case, our updates are just our gradients. This is where Theano starts to shine: we don't need to compute the gradients ourselves. 
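The updates we just wrote encode ordinary gradient descent: w becomes w - alpha * dcost/dw. Here is the same rule in miniature on a toy one-dimensional cost f(w) = (w - 3)^2, with the derivative written out by hand (a hypothetical illustration, not the network's actual cost):

```python
def cost(w):
    # Toy one-dimensional cost with its minimum at w = 3.
    return (w - 3.0) ** 2

def grad_cost(w):
    # Hand-derived gradient: d/dw (w - 3)^2 = 2 (w - 3).
    return 2.0 * (w - 3.0)

w = 0.0
alpha = 0.1
for _ in range(100):
    # The same update rule as the `updates` list above: w <- w - alpha * grad.
    w = w - alpha * grad_cost(w)
print(round(w, 4))  # 3.0
```

Hand-deriving grad_cost is easy here, but quickly becomes infeasible for real networks.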
Theano comes with a grad function, which, given a variable to differentiate and a variable to differentiate with respect to, symbolically computes the gradient we need. So our update step is pretty easy to implement: End of explanation """ # Set up the updates we want to do alpha = T.dscalar("alpha") updates = [(W, W - alpha * grad_W), (b, b - alpha * grad_b)] # Make our function. Have it return the cost! train = function([x, labels, alpha], cost, updates=updates) alpha = 10.0 labeled = encode_labels(train_set[1], 9) costs = [] while True: costs.append(float(train(train_set[0], labeled, alpha))) if len(costs) % 10 == 0: print 'Epoch', len(costs), 'with cost', costs[-1], 'and alpha', alpha if len(costs) > 2 and costs[-2] - costs[-1] < 0.0001: if alpha < 0.2: break else: alpha = alpha / 1.5 """ Explanation: We can now train it on our training set until it seems to converge, using a heuristic adaptive step size: End of explanation """ prediction = compute_prediction(test_set[0]) def accuracy(predicted, actual): total = 0.0 correct = 0.0 for p, a in zip(predicted, actual): total += 1 if p == a: correct += 1 return correct / total accuracy(prediction, test_set[1]) """ Explanation: Let's make our prediction on the test set and see how well we did: End of explanation """ W.get_value().shape """ Explanation: Just a logistic regression can get it right around 90% of the time! (Since our initial weights are random, the accuracy may vary over runs, but I've found that when it converges, it converges to around 90%.) A natural question at this point is, what is this regression actually doing? How can we visualize or understand the effects of this network? While this is a very difficult question in general, in this case, we can use the fact that our data is images. 
Look at the shape of our weight matrix: End of explanation """ val_W = W.get_value() activations = [val_W[i, :].reshape((28, 28)) for i in xrange(val_W.shape[0])] # Shape of our images print activations[0].shape """ Explanation: It's effectively ten 28 by 28 images! Each of these images defines the activation of one output unit. Let's start by splitting up these images into separate images, and then shaping them into a 2D pixel grid: End of explanation """ %matplotlib inline import matplotlib.pyplot as plt for i, w in enumerate(activations): plt.subplot(1, 10, i + 1) plt.set_cmap('gray') plt.axis('off') plt.imshow(w) plt.gcf().set_size_inches(9, 9) """ Explanation: We used NumPy's reshape function here. reshape takes a new shape and converts the values in the input array to that shape, and tries to avoid copying data unless absolutely necessary. We can now visualize these images: End of explanation """ reg_lambda = 0.01 regularized_cost = cost + reg_lambda * ((W * W).sum() + (b * b).sum()) """ Explanation: It's hard to see, but the images above do show certain noisy patterns: There's definitely a region in the center and various edges that are darker or lighter than the background. There's still a lot of noise, however, especially around the edges. This is because those bits just don't matter – so it doesn't matter what the weights there are. Our neural network just leaves them in their randomly initialized state. This is a good reason to use regularization, which adds the weights to the cost. This reduces the noisiness of your weights, and makes sure only the important ones are non-zero. Let's add regularization to our cost, weighted by a regularization factor $\lambda$: End of explanation """ # Compute the gradient of our error function grad_W = grad(regularized_cost, W) grad_b = grad(regularized_cost, b) # Set up the updates we want to do alpha = T.dscalar("alpha") updates = [(W, W - alpha * grad_W), (b, b - alpha * grad_b)] # Make our function. 
# Have it return the cost! train_regularized = function([x, labels, alpha], regularized_cost, updates=updates) alpha = 10.0 labeled = encode_labels(train_set[1], 9) costs = [] while True: costs.append(float(train_regularized(train_set[0], labeled, alpha))) if len(costs) % 10 == 0: print 'Epoch', len(costs), 'with cost', costs[-1], 'and alpha', alpha if len(costs) > 2 and costs[-2] - costs[-1] < 0.0001: if alpha < 0.2: break else: alpha = alpha / 1.5 """ Explanation: And now let's train a regularized network: End of explanation """ val_W = W.get_value() activations = [val_W[i, :].reshape((28, 28)) for i in xrange(val_W.shape[0])] for i, w in enumerate(activations): plt.subplot(1, 10, i + 1) plt.set_cmap('gray') plt.axis('off') plt.imshow(w) plt.gcf().set_size_inches(9, 9) """ Explanation: Having retrained with regularization, let's take another look at our weight visualization: End of explanation """ prediction = compute_prediction(test_set[0]) accuracy(prediction, test_set[1]) """ Explanation: Wow! We can very clearly see the effect of regularization – it's completely eliminated all the noise in our network, yielding almost crisp images. Places that are white are highly positive, places that are gray are close to zero, and places that are black are highly negative. It's very clear what this network is looking for on each of its outputs! How well does it do? 
End of explanation """ # Initialize shared weight variables W1_shape = (50, 784) b1_shape = 50 W2_shape = (10, 50) b2_shape = 10 W1 = shared(np.random.random(W1_shape) - 0.5, name="W1") b1 = shared(np.random.random(b1_shape) - 0.5, name="b1") W2 = shared(np.random.random(W2_shape) - 0.5, name="W2") b2 = shared(np.random.random(b2_shape) - 0.5, name="b2") # Symbolic inputs x = T.dmatrix("x") # N x 784 labels = T.dmatrix("labels") # N x 10 # Symbolic outputs hidden = T.nnet.sigmoid(x.dot(W1.transpose()) + b1) output = T.nnet.softmax(hidden.dot(W2.transpose()) + b2) prediction = T.argmax(output, axis=1) reg_lambda = 0.0001 regularization = reg_lambda * ((W1 * W1).sum() + (W2 * W2).sum() + (b1 * b1).sum() + (b2 * b2).sum()) cost = T.nnet.binary_crossentropy(output, labels).mean() + regularization # Output functions compute_prediction = function([x], prediction) # Training functions alpha = T.dscalar("alpha") weights = [W1, W2, b1, b2] updates = [(w, w - alpha * grad(cost, w)) for w in weights] train_nn = function([x, labels, alpha], cost, updates=updates) """ Explanation: Although the weight visualizations look good, it doesn't do nearly as well as the old network. This is probably because we've underfit our training data. By using a high regularization, we've forced the network to discard things it learned in favor of lower weights, which decreased the effectiveness. (It's also possible to overfit, where not using enough regularization yields a network that's highly specific to the input data. Finding the right regularization is tricky, and you can use cross-validation to do it.) Note: In the above tests, we used the test set both times. If we were developing a new algorithm, this would not be okay: the test set should be used only when reporting final results. You should not look at the test set accuracy while developing the algorithm, as then it becomes just part of the training set. 
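Before adding a hidden layer, it helps to distill what the trained regression does at prediction time: compute softmax(Wx + b) and take the argmax. A dependency-free Python sketch with tiny hand-picked toy weights (illustrative values, not the learned parameters):

```python
import math

def softmax(v):
    # Subtract the max before exponentiating for numerical stability.
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def predict(x, W, b):
    # One forward pass: argmax of softmax(W x + b), with W as a list of rows.
    scores = [sum(w * xi for w, xi in zip(row, x)) + bj
              for row, bj in zip(W, b)]
    probs = softmax(scores)
    return max(range(len(probs)), key=lambda j: probs[j])

# Tiny 3-class, 2-feature example with hand-picked weights.
W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [0.0, 0.0, 0.0]
print(predict([2.0, 0.5], W, b))  # 0 -- the scores are [2.0, 0.5, -2.5]
```

The real model does exactly this, just with 784 features, learned weights, and a whole batch of inputs at once.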
Neural Network with Hidden Layers Above, we implemented a logistic regression – effectively a neural network with no hidden layers. Let's add a hidden layer and see if we can do better. The code is almost exactly the same, but will just involve a second weight matrix and bias vector. End of explanation """ alpha = 10.0 labeled = encode_labels(train_set[1], 9) costs = [] while True: costs.append(float(train_nn(train_set[0], labeled, alpha))) if len(costs) % 10 == 0: print 'Epoch', len(costs), 'with cost', costs[-1], 'and alpha', alpha if len(costs) > 2 and costs[-2] - costs[-1] < 0.0001: if alpha < 0.2: break else: alpha = alpha / 1.5 prediction = compute_prediction(test_set[0]) accuracy(prediction, test_set[1]) """ Explanation: Let's train our network, just like we did before: End of explanation """ val_W1 = W1.get_value() activations = [val_W1[i, :].reshape((28, 28)) for i in xrange(val_W1.shape[0])] for i, w in enumerate(activations): plt.subplot(5, 10, i + 1) plt.set_cmap('gray') plt.axis('off') plt.imshow(w) plt.subplots_adjust(hspace=-0.85) plt.gcf().set_size_inches(9, 9) """ Explanation: When I run this, I get an improvement to around 93-94% correctness! Although this is only a few percent, this reduces the error rate dramatically. Note that proper regularization is crucial here. In my experiments, overregularizing caused accuracies of only 50%, not regularizing at all yielded accuracies of around 91%, and doing a good regularization can get between 93 and 95 percent. Regularization is important and tricky! Let's look at the weights our network is learning. 
We can really only look at the hidden layer, since the next layers don't have the same image interpretation: End of explanation """ val_W1 = W1.get_value() activations = [val_W1[i, :].reshape((28, 28)) for i in xrange(val_W1.shape[0])] for i, w in enumerate(activations): plt.subplot(5, 10, i + 1) plt.set_cmap('gray') plt.axis('off') plt.imshow(w) plt.subplots_adjust(hspace=-0.85) plt.gcf().set_size_inches(9, 9) """ Explanation: We can see that these weights are all varieties of weak edge detectors, circle detectors, and so on. Our regularization was not nearly as strong as it was before, so there's still a little bit of noise – this is okay! Although visualizing other weights is hard, we can still look at their distribution on a histogram to give us an idea of what range they are in: End of explanation """ # Plot biases plt.subplot(2, 2, 1) n, bins, patches = plt.hist(b1.get_value(), 20, normed=1, histtype='stepfilled') plt.title('Bias b1') plt.setp(patches, 'facecolor', 'g', 'alpha', 0.75); plt.subplot(2, 2, 2) n, bins, patches = plt.hist(b2.get_value(), 20, normed=1, histtype='stepfilled') plt.title('Bias b2') plt.setp(patches, 'facecolor', 'g', 'alpha', 0.75); # Plot weights plt.subplot(2, 2, 3) n, bins, patches = plt.hist(W1.get_value().flatten(), 50, normed=1, histtype='stepfilled') plt.title('Weights W1') plt.subplot(2, 2, 4) n, bins, patches = plt.hist(W2.get_value().flatten(), 50, normed=1, histtype='stepfilled') plt.title('Weights W2') plt.gcf().set_size_inches(10, 5) """ Explanation: We can also visualize the training costs over time, allowing us to see the gradual decrease in the objective function: End of explanation """
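For reference, the full forward pass of the hidden-layer network above, a sigmoid hidden layer feeding a softmax output, fits in a few lines of dependency-free Python. The sizes and weights below are toy values for illustration, not the trained parameters:

```python
import math

def sigmoid(v):
    # Elementwise logistic sigmoid.
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def softmax(v):
    # Subtract the max before exponentiating for numerical stability.
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def affine(x, W, b):
    # Compute W x + b, with W given as a list of rows.
    return [sum(w * xi for w, xi in zip(row, x)) + bj for row, bj in zip(W, b)]

def forward(x, W1, b1, W2, b2):
    # hidden = sigmoid(W1 x + b1); output = softmax(W2 hidden + b2)
    hidden = sigmoid(affine(x, W1, b1))
    return softmax(affine(hidden, W2, b2))

# Toy network: 2 inputs -> 3 hidden units -> 2 output classes.
W1 = [[0.5, -0.5], [1.0, 1.0], [-1.0, 0.5]]
b1 = [0.0, 0.1, -0.1]
W2 = [[1.0, -1.0, 0.5], [-0.5, 1.0, -1.0]]
b2 = [0.0, 0.0]
probs = forward([1.0, 2.0], W1, b1, W2, b2)
ok = len(probs) == 2 and abs(sum(probs) - 1.0) < 1e-9
print(ok)  # True
```

This is the same computation the Theano graph performs, only symbolically and over whole batches at once.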