# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/hyosubkim/course-content/blob/master/Copy_of_W1D3_ModelFitting_Tutorial_3_v2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] colab_type="text" id="sSukjRujSzSt"
#
# # Neuromatch Academy: Week 1, Day 3, Tutorial 3
# # Model Fitting: Confidence intervals and bootstrapping

# + [markdown] colab_type="text" id="wfBMy-1Obng2"
# # Tutorial Objectives
#
# This is Tutorial 3 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of linear models by generalizing to multiple linear regression (Tutorial 4). We then move on to polynomial regression (Tutorial 5). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 6) and two common methods for model selection, AIC and Cross Validation (Tutorial 7).
#
# In this tutorial, we will discuss how to gauge how good our estimated model parameters are.
# - Learn how to use bootstrapping to generate new sample datasets
# - Estimate our model parameter on these new sample datasets
# - Quantify the variance of our estimate using confidence intervals

# + [markdown] colab_type="text" id="6iUjf0H0ejWw"
# # Setup

# + cellView="form" colab_type="code" id="7lFaCKBThezP" colab={}
# @title Imports
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets

# + cellView="form" colab_type="code" id="EFZln8PoY8Jz" colab={}
#@title Figure Settings
# %matplotlib inline
fig_w, fig_h = (8, 6)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
# %config InlineBackend.figure_format = 'retina'

# + cellView="form" colab_type="code" id="VVpi0RvDfDYr" colab={}
#@title Helper Functions

def solve_normal_eqn(x, y):
  """Solve the normal equations to produce the value of theta_hat that
  minimizes MSE.

  Args:
    x (ndarray): An array of shape (samples,) that contains the input values.
    y (ndarray): An array of shape (samples,) that contains the corresponding
      measurement values to the inputs.

  Returns:
    float: theta_hat, the estimate of the slope parameter that minimizes MSE.
  """
  theta_hat = (x.T @ y) / (x.T @ x)
  return theta_hat

# + [markdown] colab_type="text" id="WhtGk02Wi9Us"
# # Confidence Intervals and the Bootstrap

# + cellView="form" colab_type="code" id="2iJiGgJTX1E_" colab={"base_uri": "https://localhost:8080/", "height": 518} outputId="7a96e905-9a48-44d2-a8c4-eaa7459163f1"
#@title Video: Bootstrapping
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="6PZthKespNg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video

# + [markdown] colab_type="text" id="R-LVG50rUglS"
# Up to this point we have been finding ways to estimate model parameters to fit some observed data. Our approach has been to optimize some criterion, either minimizing the mean squared error or maximizing the likelihood, while using the entire dataset. How good is our estimate really? How confident are we that it will generalize to describe new data we haven't seen yet?
#
# One solution is to collect more data and check the MSE on this new dataset with the previously estimated parameters. However, this is not always feasible, and it still leaves open the question of how quantifiably confident we are in the accuracy of our model.
#
# In this section we will explore how we can build confidence intervals around our estimates using the bootstrapping method.
#
# ### Bootstrapping
#
# [Bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) is an elegant approach, originally [proposed](https://projecteuclid.org/euclid.aos/1176344552) by [<NAME>](https://en.wikipedia.org/wiki/Bradley_Efron), that generates several new sample datasets from our existing one by randomly sampling from it, finds estimators for each of these new datasets, and then looks at the statistics of the distribution of estimators to quantify our confidence.
#
# In this case, our new datasets will be the same size as our original one, with the new data points sampled with replacement, i.e. we can repeat the same data point multiple times (and, alternatively, omit others entirely).
#
# To explore this idea, we will start again with our noisy samples along the line $y = 1.2x$, but this time only use half as many data points as last time (15 instead of 30).
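# Before diving into the dataset, it may help to see sampling with replacement in isolation. The following minimal sketch (our own illustration on a toy index array, not the tutorial's dataset) shows that a with-replacement sample of size $n$ repeats some indices and omits others; averaged over many draws, only about $1 - (1 - 1/n)^n \approx 63\%$ of the original points appear in any one resample.

```python
import numpy as np

# Toy illustration: sample indices 0..14 with replacement, as the bootstrap does.
rng = np.random.default_rng(0)
n = 15

sample = rng.choice(n, size=n, replace=True)  # duplicates allowed, some indices omitted
unique_fraction = len(np.unique(sample)) / n

# Averaged over many draws, the unique fraction approaches 1 - (1 - 1/n)**n (~0.64 for n=15).
fractions = [len(np.unique(rng.choice(n, size=n, replace=True))) / n
             for _ in range(2000)]
mean_fraction = np.mean(fractions)

print("one draw (sorted):", sorted(sample))
print(f"mean unique fraction over 2000 draws: {mean_fraction:.2f}")
```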
# + colab_type="code" id="koeqKMBvaYU6" colab={"base_uri": "https://localhost:8080/", "height": 386} outputId="8135beda-b7ec-4da1-9208-37d09890<PASSWORD>"
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)

theta = 1.2
n_samples = 15
x = 10 * np.random.rand(n_samples)  # sample from a uniform distribution over [0, 10)
noise = np.random.randn(n_samples)  # sample from a standard normal distribution
y = theta * x + noise

fig, ax = plt.subplots()
ax.scatter(x, y)  # produces a scatter plot
ax.set(xlabel='x', ylabel='y');

# + [markdown] colab_type="text" id="ygskS0o6aimK"
# ### Exercise: Resample Dataset with Replacement
#
# In this exercise you will implement a method to resample a dataset with replacement. The method accepts $x$ and $y$ arrays. It should return a new set of $x'$ and $y'$ arrays that are created by randomly sampling from the originals.
#
# TIP: The [numpy.random.choice](https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html) method would be useful here.

# + cellView="both" colab_type="code" id="3iHqPzPCYKwn" colab={}
def resample_with_replacement(x, y):
  """Resample data points with replacement from the dataset of `x` inputs and
  `y` measurements.

  Args:
    x (ndarray): An array of shape (samples,) that contains the input values.
    y (ndarray): An array of shape (samples,) that contains the corresponding
      measurement values to the inputs.

  Returns:
    ndarray, ndarray: The newly resampled `x` and `y` data points.
  """
  #######################################################
  ## TODO for students: resample dataset with replacement
  #######################################################
  # comment this out when you've filled in the function
  raise NotImplementedError("Student exercise: resample dataset with replacement")

  return x_, y_

# + cellView="both" colab_type="code" id="l9khdX_uhyQK" colab={}
# to_remove solution
def resample_with_replacement(x, y):
  """Resample data points with replacement from the dataset of `x` inputs and
  `y` measurements.

  Args:
    x (ndarray): An array of shape (samples,) that contains the input values.
    y (ndarray): An array of shape (samples,) that contains the corresponding
      measurement values to the inputs.

  Returns:
    ndarray, ndarray: The newly resampled `x` and `y` data points.
  """
  sample_idx = np.random.choice(len(x), size=len(x), replace=True)
  x_ = x[sample_idx]
  y_ = y[sample_idx]

  return x_, y_

# + colab_type="code" id="druW82FQiPUm" colab={"base_uri": "https://localhost:8080/", "height": 347} outputId="2d1e4709-f41a-4f91-8be7-9b9ca25eb29f"
x_, y_ = resample_with_replacement(x, y)

fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
ax1.scatter(x, y)
ax1.set(title='Original', xlabel='x', ylabel='y')
ax2.scatter(x_, y_)
ax2.set(title='Resampled', xlabel='x', ylabel='y',
        xlim=ax1.get_xlim(), ylim=ax1.get_ylim());

# + [markdown] colab_type="text" id="iJTQuQuviLD7"
# If you run the above cell multiple times, you should see the right-hand figure change to show a similar-looking plot, though likely with fewer displayed points. The actual number of points is the same, but some have been repeated, so they only display once.
#
# Now that we have a way to resample the data, we can use that in the full bootstrapping process.

# + [markdown] colab_type="text" id="ddHbCN1tl8Cs"
# ### Exercise: Bootstrap Estimates
#
# In this exercise you will implement a method to run the bootstrap process of generating a set of $\hat\theta$ values from a dataset of $x$ inputs and $y$ measurements. You should use `resample_with_replacement` here, and you may also invoke `solve_normal_eqn` from Tutorial 1 to produce the MSE-based estimator.

# + colab_type="code" id="wjZFMlkLl8ZK" colab={}
def bootstrap_estimates(x, y, n=1000):
  """Generate a set of theta_hat estimates using the bootstrap method.

  Args:
    x (ndarray): An array of shape (samples,) that contains the input values.
    y (ndarray): An array of shape (samples,) that contains the corresponding
      measurement values to the inputs.
    n (int): The number of estimates to compute

  Returns:
    ndarray: An array of estimated parameters with size (n,)
  """
  theta_hats = np.zeros(n)

  ######################################################################
  ## TODO for students: implement bootstrap estimation
  ######################################################################
  # comment this out when you've filled in the function
  raise NotImplementedError("Student exercise: implement bootstrap estimation")

  return theta_hats

# + cellView="both" colab_type="code" id="HCoDk04Coqcw" colab={}
# to_remove solution
def bootstrap_estimates(x, y, n=1000):
  """Generate a set of theta_hat estimates using the bootstrap method.

  Args:
    x (ndarray): An array of shape (samples,) that contains the input values.
    y (ndarray): An array of shape (samples,) that contains the corresponding
      measurement values to the inputs.
    n (int): The number of estimates to compute

  Returns:
    ndarray: An array of estimated parameters with size (n,)
  """
  theta_hats = np.zeros(n)

  for i in range(n):
    x_, y_ = resample_with_replacement(x, y)
    theta_hats[i] = solve_normal_eqn(x_, y_)

  return theta_hats

# + colab_type="code" id="NtdKkrGVAUVM" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="8df14306-98da-400c-8941-be82f279fa11"
theta_hats = bootstrap_estimates(x, y, n=1000)
print(theta_hats[0:5])

# + [markdown] colab_type="text" id="njiSvektBZPv"
# Now that we have our bootstrap estimates, we can visualize all the potential models together to see how distributed they are.

# + colab_type="code" id="muuAIlckBg0d" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="65aab2db-8fc3-40d7-d592-6be783943e57"
fig, ax = plt.subplots()

for theta_hat in theta_hats:
  y_hat = theta_hat * x
  ax.plot(x, y_hat, c='orange', alpha=0.01)

ax.scatter(x, y)
y_true = theta * x
ax.plot(x, y_true, 'r', label='true')
ax.set(
  title='Bootstrapped Slope Estimation',
  xlabel='x',
  ylabel='y'
)
ax.legend();

# + [markdown] colab_type="text" id="uhz7Y-rt7XID"
# This looks pretty good! While there is some variance around the true model, it still seems fairly close. We can quantify this closeness by computing some summary statistics and constructing [confidence intervals](https://online.stat.psu.edu/stat504/node/19/) from our estimate distribution.

# + colab_type="code" id="A-oenf4orD1s" colab={"base_uri": "https://localhost:8080/", "height": 421} outputId="3cbb7fdb-4592-4898-8afc-7ba6de4af875"
print(f"mean = {np.mean(theta_hats):.2f}, std = {np.std(theta_hats):.2f}")

fig, ax = plt.subplots()
ax.hist(theta_hats, facecolor='C1', alpha=0.75)
ax.axvline(theta, c='r', label='true')
ax.axvline(np.percentile(theta_hats, 50), color='C1', label='median')
ax.axvline(np.percentile(theta_hats, 2.5), color='C2', label='95% CI')
ax.axvline(np.percentile(theta_hats, 97.5), color='C2')
ax.legend()
ax.set(
  title='Bootstrapped Confidence Interval',
  xlabel='theta_hat',
  ylabel='count',
  xlim=[1.0, 1.5]
);

# + [markdown] colab_type="text" id="R3hnAF1bvr68"
# Looking at the distribution of bootstrapped $\hat{\theta}$ values, we see that the true $\theta$ falls well within our 95% confidence interval.

# + [markdown] colab_type="text" id="BR2aEvAqRNmN"
# # Summary
#
# - Bootstrapping is a resampling procedure that gives confidence intervals around inferred parameter values
# - This method is very practical and relies on computational power as a resource
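# As a compact recap, the whole recipe from this tutorial can be condensed into a few lines of plain NumPy. This is a minimal sketch assuming the same generative model as above ($y = 1.2x$ plus standard normal noise, 15 samples); the condensed `bootstrap_ci` helper is our own shorthand for the tutorial's resample + normal-equation steps, not part of the original notebook.

```python
import numpy as np

np.random.seed(121)
theta, n_samples = 1.2, 15
x = 10 * np.random.rand(n_samples)
y = theta * x + np.random.randn(n_samples)

def bootstrap_ci(x, y, n_boot=1000, alpha=0.05):
    """Bootstrap the least-squares slope; return (estimates, (lo, hi))."""
    theta_hats = np.zeros(n_boot)
    for i in range(n_boot):
        idx = np.random.choice(len(x), size=len(x), replace=True)  # resample with replacement
        x_, y_ = x[idx], y[idx]
        theta_hats[i] = (x_ @ y_) / (x_ @ x_)  # normal-equation slope estimate
    lo, hi = np.percentile(theta_hats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return theta_hats, (lo, hi)

theta_hats, (lo, hi) = bootstrap_ci(x, y)
print(f"95% CI for the slope: [{lo:.2f}, {hi:.2f}]")
```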
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] colab_type="text" id="-zMKQx6DkKwt"
# ##### Copyright 2019 The TensorFlow Authors.

# + cellView="form" colab={} colab_type="code" id="J307vsiDkMMW"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# + [markdown] colab_type="text" id="vCMYwDIE9dTT"
# # The Keras functional API

# + [markdown] colab_type="text" id="lAJfkZ-K9flj"
# <table class="tfo-notebook-buttons" align="left">
#   <td>
#     <a target="_blank" href="https://www.tensorflow.org/guide/keras/functional"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
#   </td>
#   <td>
#     <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/functional.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
#   </td>
#   <td>
#     <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/keras/functional.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
#   </td>
#   <td>
#     <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/functional.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
#   </td>
# </table>

# + [markdown] colab_type="text" id="ITh3wzORxgpw"
# ## Setup

# + colab={} colab_type="code" id="HFbM9dcfxh4l"
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

tf.keras.backend.clear_session()  # For easy reset of notebook state.

# + [markdown] colab_type="text" id="ZI47-lpfkZ5c"
# ## Introduction
#
# The Keras *functional API* is a way to create models that is more flexible than the `tf.keras.Sequential` API. The functional API can handle models with non-linear topology, models with shared layers, and models with multiple inputs or outputs.
#
# The main idea is that a deep learning model is usually a directed acyclic graph (DAG) of layers. So the functional API is a way to build *graphs of layers*.
#
# Consider the following model:
#
# ```
# (input: 784-dimensional vectors)
#        ↧
# [Dense (64 units, relu activation)]
#        ↧
# [Dense (64 units, relu activation)]
#        ↧
# [Dense (10 units, softmax activation)]
#        ↧
# (output: logits of a probability distribution over 10 classes)
# ```
#
# This is a basic graph with three layers. To build this model using the functional API, start by creating an input node:

# + colab={} colab_type="code" id="Yxi0LaSHkDT-"
inputs = keras.Input(shape=(784,))

# + [markdown] colab_type="text" id="Mr3Z_Pxcnf-H"
# The shape of the data is set as a 784-dimensional vector. The batch size is always omitted since only the shape of each sample is specified.
#
# If, for example, you have an image input with a shape of `(32, 32, 3)`, you would use:

# + colab={} colab_type="code" id="0-2Q2nJNneIO"
# Just for demonstration purposes.
img_inputs = keras.Input(shape=(32, 32, 3))

# + [markdown] colab_type="text" id="HoMFNu-pnkgF"
# The `inputs` that is returned contains information about the shape and `dtype` of the input data that you feed to your model:

# + colab={} colab_type="code" id="ddIr9LPJnibj"
inputs.shape

# + colab={} colab_type="code" id="lZkLJeQonmTe"
inputs.dtype

# + [markdown] colab_type="text" id="kZnhhndTnrzC"
# You create a new node in the graph of layers by calling a layer on this `inputs` object:

# + colab={} colab_type="code" id="sMyyMTqDnpYV"
dense = layers.Dense(64, activation='relu')
x = dense(inputs)

# + [markdown] colab_type="text" id="besm-lgFnveV"
# The "layer call" action is like drawing an arrow from "inputs" to this layer you created. You're "passing" the inputs to the `dense` layer, and out you get `x`.
#
# Let's add a few more layers to the graph of layers:

# + colab={} colab_type="code" id="DbF-MIO2ntf7"
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10)(x)

# + [markdown] colab_type="text" id="B38UlEIlnz_8"
# At this point, you can create a `Model` by specifying its inputs and outputs in the graph of layers:

# + colab={} colab_type="code" id="MrSfwvl-nx9s"
model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')

# + [markdown] colab_type="text" id="jJzocCbdn6qj"
# Let's check out what the model summary looks like:

# + colab={} colab_type="code" id="GirC9odQn5Ep"
model.summary()

# + [markdown] colab_type="text" id="mbNqYAlOn-vA"
# You can also plot the model as a graph:

# + colab={} colab_type="code" id="JYh2wLain8Oi"
keras.utils.plot_model(model, 'my_first_model.png')

# + [markdown] colab_type="text" id="QtgX2RoGoDZo"
# And, optionally, display the input and output shapes of each layer in the plotted graph:

# + colab={} colab_type="code" id="7FGesSSuoAG5"
keras.utils.plot_model(model, 'my_first_model_with_shape_info.png', show_shapes=True)

# + [markdown] colab_type="text" id="PBZ9XE6LoWvi"
# This figure and the code are almost identical. In the code version, the connection arrows are replaced by the call operation.
#
# A "graph of layers" is an intuitive mental image for a deep learning model, and the functional API is a way to create models that closely mirror this.

# + [markdown] colab_type="text" id="WUUHMaKLoZDn"
# ## Training, evaluation, and inference
#
# Training, evaluation, and inference work in exactly the same way for models built using the functional API as for `Sequential` models.
#
# Here, load the MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), then evaluate the model on the test data:

# + colab={} colab_type="code" id="DnHvkD22oFEY"
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=keras.optimizers.RMSprop(),
              metrics=['accuracy'])

history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=5,
                    validation_split=0.2)

test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])

# + [markdown] colab_type="text" id="c3nq2fjiLCkE"
# For further reading, see the [train and evaluate](./train_and_evaluate.ipynb) guide.

# + [markdown] colab_type="text" id="XOsL56zDorLh"
# ## Save and serialize
#
# Saving the model and serialization work the same way for models built using the functional API as they do for `Sequential` models. The standard way to save a functional model is to call `model.save()` to save the entire model as a single file. You can later recreate the same model from this file, even if the code that built the model is no longer available.
#
# This saved file includes:
# - model architecture
# - model weight values (that were learned during training)
# - model training config, if any (as passed to `compile`)
# - optimizer and its state, if any (to restart training where you left off)

# + colab={} colab_type="code" id="kN-<KEY>"
model.save('path_to_my_model')
del model
# Recreate the exact same model purely from the file:
model = keras.models.load_model('path_to_my_model')

# + [markdown] colab_type="text" id="u0J0tFPHK4pb"
# For details, read the model [save and serialize](./save_and_serialize.ipynb) guide.

# + [markdown] colab_type="text" id="lKz1WWr2LUzF"
# ## Use the same graph of layers to define multiple models
#
# In the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means that a single graph of layers can be used to generate multiple models.
#
# In the example below, you use the same stack of layers to instantiate two models: an `encoder` model that turns image inputs into 16-dimensional vectors, and an end-to-end `autoencoder` model for training.

# + colab={} colab_type="code" id="WItZQr6LuVbF"
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)

encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()

x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)

autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()

# + [markdown] colab_type="text" id="oNeg3WWFuYZK"
# Here, the decoding architecture is strictly symmetrical to the encoding architecture, so the output shape is the same as the input shape `(28, 28, 1)`.
#
# The reverse of a `Conv2D` layer is a `Conv2DTranspose` layer, and the reverse of a `MaxPooling2D` layer is an `UpSampling2D` layer.

# + [markdown] colab_type="text" id="h1FVW4j-uc6Y"
# ## All models are callable, just like layers
#
# You can treat any model as if it were a layer by invoking it on an `Input` or on the output of another layer. By calling a model you aren't just reusing the architecture of the model, you're also reusing its weights.
#
# To see this in action, here's a different take on the autoencoder example that creates an encoder model and a decoder model, and chains them in two calls to obtain the autoencoder model:

# + colab={} colab_type="code" id="Ld7KdsQ_uZbr"
encoder_input = keras.Input(shape=(28, 28, 1), name='original_img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)

encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()

decoder_input = keras.Input(shape=(16,), name='encoded_img')
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)

decoder = keras.Model(decoder_input, decoder_output, name='decoder')
decoder.summary()

autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')
autoencoder.summary()

# + [markdown] colab_type="text" id="icQFny_huiXC"
# As you can see, models can be nested: a model can contain sub-models (since a model is just like a layer). A common use case for model nesting is *ensembling*. For example, here's how to ensemble a set of models into a single model that averages their predictions:

# + colab={} colab_type="code" id="ZBlZbRn5uk-9"
def get_model():
  inputs = keras.Input(shape=(128,))
  outputs = layers.Dense(1)(inputs)
  return keras.Model(inputs, outputs)

model1 = get_model()
model2 = get_model()
model3 = get_model()

inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)

# + [markdown] colab_type="text" id="e1za1TZxuoId"
# ## Manipulate complex graph topologies
#
# ### Models with multiple inputs and outputs
#
# The functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the `Sequential` API.
#
# For example, if you're building a system for ranking custom issue tickets by priority and routing them to the correct department, then the model will have three inputs:
#
# - the title of the ticket (text input),
# - the text body of the ticket (text input), and
# - any tags added by the user (categorical input)
#
# This model will have two outputs:
#
# - the priority score between 0 and 1 (scalar sigmoid output), and
# - the department that should handle the ticket (softmax output over the set of departments).
#
# You can build this model in a few lines with the functional API:

# + colab={} colab_type="code" id="Gt91OtzbutJy"
num_tags = 12  # Number of unique issue tags
num_words = 10000  # Size of vocabulary obtained when preprocessing text data
num_departments = 4  # Number of departments for predictions

title_input = keras.Input(shape=(None,), name='title')  # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name='body')  # Variable-length sequence of ints
tags_input = keras.Input(shape=(num_tags,), name='tags')  # Binary vectors of size `num_tags`

# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)

# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)

# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])

# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name='priority')(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name='department')(x)

# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(inputs=[title_input, body_input, tags_input],
                    outputs=[priority_pred, department_pred])

# + [markdown] colab_type="text" id="KIS7lqW0uwh-"
# Now plot the model:

# + colab={} colab_type="code" id="IMij4gzhuzYV"
keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)

# + [markdown] colab_type="text" id="oOyuig2Hu00p"
# When compiling this model, you can assign different losses to each output. You can even assign different weights to each loss, to modulate their contribution to the total training loss.

# + colab={} colab_type="code" id="Crtdpi5Uu2cX"
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
              loss=[keras.losses.BinaryCrossentropy(from_logits=True),
                    keras.losses.CategoricalCrossentropy(from_logits=True)],
              loss_weights=[1., 0.2])

# + [markdown] colab_type="text" id="t42Jrn0Yu5jL"
# Since the output layers have different names, you could also specify the loss like this:

# + colab={} colab_type="code" id="dPM0EwW_u6mV"
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
              loss={'priority': keras.losses.BinaryCrossentropy(from_logits=True),
                    'department': keras.losses.CategoricalCrossentropy(from_logits=True)},
              loss_weights=[1., 0.2])

# + [markdown] colab_type="text" id="bpTx2sXnu3-W"
# Train the model by passing lists of NumPy arrays of inputs and targets:

# + colab={} colab_type="code" id="nB-upOoGu_k4"
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype('float32')

# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))

model.fit({'title': title_data, 'body': body_data, 'tags': tags_data},
          {'priority': priority_targets, 'department': dept_targets},
          epochs=2,
          batch_size=32)

# + [markdown] colab_type="text" id="qNguhBWuvCtz"
# When calling fit with a `Dataset` object, it should yield either a tuple of lists like `([title_data, body_data, tags_data], [priority_targets, dept_targets])` or a tuple of dictionaries like `({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets})`.
#
# For a more detailed explanation, refer to the [training and evaluation](./train_and_evaluate.ipynb) guide.
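# The two accepted element structures can be sketched with plain dictionaries and NumPy arrays before being handed to a pipeline such as `tf.data.Dataset.from_tensor_slices`. This is a minimal shape-only illustration of our own (it does not require TensorFlow, and the array sizes are scaled down from the dummy data above):

```python
import numpy as np

num_words, num_tags, num_departments, n = 10000, 12, 4, 8

title_data = np.random.randint(num_words, size=(n, 10))
body_data = np.random.randint(num_words, size=(n, 100))
tags_data = np.random.randint(2, size=(n, num_tags)).astype('float32')
priority_targets = np.random.random(size=(n, 1))
dept_targets = np.random.randint(2, size=(n, num_departments))

# Structure 1: a tuple of lists (inputs, targets), ordered like the model's
# `inputs=[...]` and `outputs=[...]` arguments.
as_lists = ([title_data, body_data, tags_data],
            [priority_targets, dept_targets])

# Structure 2: a tuple of dictionaries keyed by the names given to
# `keras.Input(..., name=...)` and the output `Dense(..., name=...)` layers.
as_dicts = ({'title': title_data, 'body': body_data, 'tags': tags_data},
            {'priority': priority_targets, 'department': dept_targets})

# Either structure could then be passed to tf.data.Dataset.from_tensor_slices
# and batched before calling model.fit(dataset).
inputs, targets = as_dicts
print(sorted(inputs), sorted(targets))
```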
# + [markdown] colab_type="text" id="tR0X5tTOvPyg"
# ### A toy ResNet model
#
# In addition to models with multiple inputs and outputs, the functional API makes it easy to manipulate non-linear connectivity topologies, i.e. models with layers that are not connected sequentially, something the `Sequential` API cannot handle.
#
# A common use case for this is residual connections. Let's build a toy ResNet model for CIFAR10 to demonstrate this:

# + colab={} colab_type="code" id="VzMoYrMNvXrm"
inputs = keras.Input(shape=(32, 32, 3), name='img')
x = layers.Conv2D(32, 3, activation='relu')(inputs)
x = layers.Conv2D(64, 3, activation='relu')(x)
block_1_output = layers.MaxPooling2D(3)(x)

x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_2_output = layers.add([x, block_1_output])

x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_3_output = layers.add([x, block_2_output])

x = layers.Conv2D(64, 3, activation='relu')(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10)(x)

model = keras.Model(inputs, outputs, name='toy_resnet')
model.summary()

# + [markdown] colab_type="text" id="ISQX32bgrkis"
# Plot the model:

# + colab={} colab_type="code" id="pNFVkAd3rlCM"
keras.utils.plot_model(model, 'mini_resnet.png', show_shapes=True)

# + [markdown] colab_type="text" id="ECcG87yZrxp5"
# Now train the model:

# + colab={} colab_type="code" id="_iXGz5XEryou"
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
              loss=keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['acc'])
model.fit(x_train, y_train,
          batch_size=64,
          epochs=1,
          validation_split=0.2)

# + [markdown] colab_type="text" id="XQfg0JUkr7SH"
# ## Shared layers
#
# Another good use for the functional API is models that use *shared layers*. Shared layers are layer instances that are reused multiple times in the same model; they learn features that correspond to multiple paths in the graph-of-layers.
#
# Shared layers are often used to encode inputs from similar spaces (say, two different pieces of text that feature similar vocabulary). They enable sharing of information across these different inputs, and they make it possible to train such a model on less data. If a given word is seen in one of the inputs, that will benefit the processing of all inputs that pass through the shared layer.
#
# To share a layer in the functional API, call the same layer instance multiple times. For instance, here's an `Embedding` layer shared across two different text inputs:

# + colab={} colab_type="code" id="R9pAPQCnKuMR"
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)

# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype='int32')

# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype='int32')

# Reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)

# + [markdown] colab_type="text" id="xNEKvfUpr-Kf"
# ## Extract and reuse nodes in the graph of layers
#
# Because the graph of layers you are manipulating is a static data structure, it can be accessed and inspected. This is how you are able to plot functional models as images.
#
# This also means that you can access the activations of intermediate layers ("nodes" in the graph) and reuse them elsewhere, which is very useful for something like feature extraction.
#
# Let's look at an example. This is a VGG19 model with weights pretrained on ImageNet:

# + colab={} colab_type="code" id="c-gl3xHBH-oX"
vgg19 = tf.keras.applications.VGG19()

# + [markdown] colab_type="text" id="AKefin_xIGBP"
# And these are the intermediate activations of the model, obtained by querying the graph data structure:

# + colab={} colab_type="code" id="1_Ap05fgIRgE"
features_list = [layer.output for layer in vgg19.layers]

# + [markdown] colab_type="text" id="H1zx5qM7IYu4"
# Use these features to create a new feature-extraction model that returns the values of the intermediate layer activations:

# + colab={} colab_type="code" id="NrU82Pa8Igwo"
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)

img = np.random.random((1, 224, 224, 3)).astype('float32')
extracted_features = feat_extraction_model(img)

# + [markdown] colab_type="text" id="G-e2-jNCLIqy"
# This comes in handy for tasks like [neural style transfer](https://www.tensorflow.org/tutorials/generative/style_transfer), among other things.

# + [markdown] colab_type="text" id="t9M2Uvi3sBy0"
# ## Extend the API using custom layers
#
# `tf.keras` includes a wide range of built-in layers, for example:
#
# - Convolutional layers: `Conv1D`, `Conv2D`, `Conv3D`, `Conv2DTranspose`
# - Pooling layers: `MaxPooling1D`, `MaxPooling2D`, `MaxPooling3D`, `AveragePooling1D`
# - RNN layers: `GRU`, `LSTM`, `ConvLSTM2D`
# - `BatchNormalization`, `Dropout`, `Embedding`, etc.
#
# But if you don't find what you need, it's easy to extend the API by creating your own layers. All layers subclass the `Layer` class and implement:
#
# - a `call` method, that specifies the computation done by the layer.
# - a `build` method, that creates the weights of the layer (this is just a style convention, since you can create weights in `__init__` as well).
#
# To learn more about creating layers from scratch, read the [custom layers and models](./custom_layers_and_models.ipynb) guide.
#
# The following is a basic implementation of `tf.keras.layers.Dense`:

# + colab={} colab_type="code" id="ztAmarbgNV6V"
class CustomDense(layers.Layer):
  def __init__(self, units=32):
    super(CustomDense, self).__init__()
    self.units = units

  def build(self, input_shape):
    self.w = self.add_weight(shape=(input_shape[-1], self.units),
                             initializer='random_normal',
                             trainable=True)
    self.b = self.add_weight(shape=(self.units,),
                             initializer='random_normal',
                             trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b

inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)

model = keras.Model(inputs, outputs)

# + [markdown] colab_type="text" id="NXxp_32bNWTy"
# For serialization support in your custom layer, define a `get_config` method that returns the constructor arguments of the layer instance:

# + colab={} colab_type="code" id="K3OQ4XxzNfAZ"
class CustomDense(layers.Layer):
  def __init__(self, units=32):
    super(CustomDense, self).__init__()
    self.units = units

  def build(self, input_shape):
    self.w = self.add_weight(shape=(input_shape[-1], self.units),
                             initializer='random_normal',
                             trainable=True)
    self.b = self.add_weight(shape=(self.units,),
                             initializer='random_normal',
                             trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b

  def get_config(self):
    return {'units': self.units}

inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)

model = keras.Model(inputs, outputs)
config = model.get_config()

new_model = keras.Model.from_config(
    config, custom_objects={'CustomDense': CustomDense})

# + [markdown] colab_type="text" id="kXg6hZN_NfN8"
# Optionally, implement the classmethod `from_config(cls, config)` which is used when recreating a layer instance given its
config dictionary. The default implementation of `from_config` is: # # ```python # def from_config(cls, config): # return cls(**config) # ``` # + [markdown] colab_type="text" id="ifOVqn84sCNU" # ## When to use the functional API # # When should you use the Keras functional API to create a new model, or just subclass the `Model` class directly? In general, the functional API is higher-level, easier and safer, and has a number of features that subclassed models do not support. # # However, model subclassing provides greater flexibility when building models that are not easily expressible as directed acyclic graphs of layers. For example, you could not implement a Tree-RNN with the functional API and would have to subclass `Model` directly. # # For in-depth look at the differences between the functional API and model subclassing, read [What are Symbolic and Imperative APIs in TensorFlow 2.0?](https://blog.tensorflow.org/2019/01/what-are-symbolic-and-imperative-apis.html). # # ### Functional API strengths # # The following properties are also true for Sequential models (which are also data structures), but are not true for subclassed models (which are Python bytecode, not data structures). # # #### Less verbose # # There is no `super(MyClass, self).__init__(...)`, no `def call(self, ...):`, etc. # # Compare: # # ```python # inputs = keras.Input(shape=(32,)) # x = layers.Dense(64, activation='relu')(inputs) # outputs = layers.Dense(10)(x) # mlp = keras.Model(inputs, outputs) # ``` # # With the subclassed version: # # ```python # class MLP(keras.Model): # # def __init__(self, **kwargs): # super(MLP, self).__init__(**kwargs) # self.dense_1 = layers.Dense(64, activation='relu') # self.dense_2 = layers.Dense(10) # # def call(self, inputs): # x = self.dense_1(inputs) # return self.dense_2(x) # # # Instantiate the model. # mlp = MLP() # # Necessary to create the model's state. # # The model doesn't have a state until it's called at least once. 
# _ = mlp(tf.zeros((1, 32)))
# ```
#
# #### Model validation while defining
#
# In the functional API, the input specification (shape and dtype) is created in advance (using `Input`). Every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it will raise a helpful error message if not.
#
# This guarantees that any model you can build with the functional API will run. All debugging—other than convergence-related debugging—happens statically during the model construction and not at execution time. This is similar to type checking in a compiler.
#
# #### A functional model is plottable and inspectable
#
# You can plot the model as a graph, and you can easily access intermediate nodes in this graph. For example, to extract and reuse the activations of intermediate layers (as seen in a previous example):
#
# ```python
# features_list = [layer.output for layer in vgg19.layers]
# feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
# ```
#
# #### A functional model can be serialized or cloned
#
# Because a functional model is a data structure rather than a piece of code, it is safely serializable and can be saved as a single file that allows you to recreate the exact same model without having access to any of the original code. See the [saving and serialization guide](./save_and_serialize.ipynb).
#
#
# ### Functional API weaknesses
#
# #### Does not support dynamic architectures
#
# The functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all—for example, recursive networks or Tree RNNs do not follow this assumption and cannot be implemented in the functional API.
#
# #### Everything from scratch
#
# When writing advanced architectures, you may want to do things that are outside the scope of defining a DAG of layers. For example, you must use model subclassing to expose multiple custom training and inference methods on your model instance.
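# + [markdown]
# The serialization property discussed above rests on the `get_config`/`from_config` contract. The round-trip can be sketched framework-free (plain Python, no TensorFlow; `ToyLayer` is a hypothetical stand-in for a Keras layer, not a real API):

# +
class ToyLayer:
    def __init__(self, units=32):
        self.units = units

    def get_config(self):
        # Return only constructor arguments -- enough to rebuild the object.
        return {'units': self.units}

    @classmethod
    def from_config(cls, config):
        # Mirrors the default implementation shown earlier: cls(**config).
        return cls(**config)


layer = ToyLayer(units=64)
config = layer.get_config()           # a plain dict: {'units': 64}
clone = ToyLayer.from_config(config)  # a fresh, equivalent instance

assert clone.units == layer.units
# -

# + [markdown]
# Because the config is a plain data structure, it can be written to disk and used to rebuild the object later without the original instance.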
# + [markdown] colab_type="text" id="Ym1jrCqusGvj" # ## Mix-and-match API styles # # Choosing between the functional API or Model subclassing isn't a binary decision that restricts you into one category of models. All models in the `tf.keras` API can interact with each other, whether they're `Sequential` models, functional models, or subclassed models that are written from scratch. # # You can always use a functional model or `Sequential` model as part of a subclassed model or layer: # + colab={} colab_type="code" id="9zF5YTLy_vGZ" units = 32 timesteps = 10 input_dim = 5 # Define a Functional model inputs = keras.Input((None, units)) x = layers.GlobalAveragePooling1D()(inputs) outputs = layers.Dense(1)(x) model = keras.Model(inputs, outputs) class CustomRNN(layers.Layer): def __init__(self): super(CustomRNN, self).__init__() self.units = units self.projection_1 = layers.Dense(units=units, activation='tanh') self.projection_2 = layers.Dense(units=units, activation='tanh') # Our previously-defined Functional model self.classifier = model def call(self, inputs): outputs = [] state = tf.zeros(shape=(inputs.shape[0], self.units)) for t in range(inputs.shape[1]): x = inputs[:, t, :] h = self.projection_1(x) y = h + self.projection_2(state) state = y outputs.append(y) features = tf.stack(outputs, axis=1) print(features.shape) return self.classifier(features) rnn_model = CustomRNN() _ = rnn_model(tf.zeros((1, timesteps, input_dim))) # + [markdown] colab_type="text" id="oxW1d0a8_ufg" # You can use any subclassed layer or model in the functional API as long as it implements a `call` method that follows one of the following patterns: # # - `call(self, inputs, **kwargs)` —Where `inputs` is a tensor or a nested structure of tensors (e.g. a list of tensors), and where `**kwargs` are non-tensor arguments (non-inputs). # - `call(self, inputs, training=None, **kwargs)` —Where `training` is a boolean indicating whether the layer should behave in training mode and inference mode. 
# - `call(self, inputs, mask=None, **kwargs)` —Where `mask` is a boolean mask tensor (useful for RNNs, for instance). # - `call(self, inputs, training=None, mask=None, **kwargs)` —Of course, you can have both masking and training-specific behavior at the same time. # # Additionally, if you implement the `get_config` method on your custom Layer or model, the functional models you create will still be serializable and cloneable. # # Here's a quick example of a custom RNN written from scratch in a functional model: # + colab={} colab_type="code" id="TmTEZ6F3ArJR" units = 32 timesteps = 10 input_dim = 5 batch_size = 16 class CustomRNN(layers.Layer): def __init__(self): super(CustomRNN, self).__init__() self.units = units self.projection_1 = layers.Dense(units=units, activation='tanh') self.projection_2 = layers.Dense(units=units, activation='tanh') self.classifier = layers.Dense(1) def call(self, inputs): outputs = [] state = tf.zeros(shape=(inputs.shape[0], self.units)) for t in range(inputs.shape[1]): x = inputs[:, t, :] h = self.projection_1(x) y = h + self.projection_2(state) state = y outputs.append(y) features = tf.stack(outputs, axis=1) return self.classifier(features) # Note that you specify a static batch size for the inputs with the `batch_shape` # arg, because the inner computation of `CustomRNN` requires a static batch size # (when you create the `state` zeros tensor). inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim)) x = layers.Conv1D(32, 3)(inputs) outputs = CustomRNN()(x) model = keras.Model(inputs, outputs) rnn_model = CustomRNN() _ = rnn_model(tf.zeros((1, 10, 5)))
site/en/guide/keras/functional.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Imports

import pandas as pd
from typing import Iterator, Tuple

from pyspark.sql import SparkSession, Window
import pyspark.sql.functions as F
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import *

spark = (SparkSession
         .builder
         .appName('Chapter 5')
         .enableHiveSupport()
         .getOrCreate())
spark

# # User Defined Functions (UDFs)

# UDFs extend the capabilities of Spark by letting us define our own functions in Python. They do not persist across SparkSessions, so they are only available in the session in which they were defined (registered). Spark stores them in the metastore.

x = spark.range(10)
x.createOrReplaceTempView('udf_test')

def cubed(s):
    return s * s * s

# This makes the function available in SQL (stored in the metastore)
spark.udf.register(name='cube', f=cubed, returnType=LongType())

spark.sql('SELECT id, cube(id) FROM udf_test').show(10)

# Spark SQL (SQL, DataFrame, and Dataset) does not guarantee the order of execution of subexpressions. For example,
# ```sql
# spark.sql("SELECT s FROM test1 WHERE s IS NOT NULL AND strlen(s) > 1")
# ```
# The above query does not guarantee that the NULL check is evaluated before `strlen(s)`. Therefore, it is recommended that we do the following:
# - Make the __UDF__ itself null-aware and do null checking inside the UDF.
# - Use `IF` or `CASE WHEN` expressions to do the null check and invoke the UDF in a conditional branch.

# We can use pandas UDFs to get vectorized operations instead of operating row by row, which is very slow. With pandas UDFs, Spark uses Apache Arrow to transfer the data and pandas to operate on it. Once the data is in Apache Arrow format, there is no need to serialize/pickle it.
To make a Python UDF a pandas UDF, we need to decorate the function using `@pandas_udf` and use __type hints__.

# ## Series to Series

# Takes a `pd.Series` as input and returns a `pd.Series` of the same length.

@pandas_udf('long')
def squares(s: pd.Series) -> pd.Series:
    return s * s

x = pd.Series(range(10))
x

df = spark.range(10)
df.select('id', squares(F.col('id'))).show()

# We can also run the underlying Python function on pandas data locally via the `.func` attribute of the wrapped UDF.

squares.func(pd.Series(range(10)))

# ## Iterator of Series to Iterator of Series

# - The Python function
#     - Takes an iterator of batches instead of a single input batch as input.
#     - Returns an iterator of output batches instead of a single output batch.
# - The length of the entire output in the iterator should be the same as the length of the entire input.
# - The wrapped pandas UDF takes a single Spark column as an input.
# - Specify the Python type hint as Iterator[pandas.Series] -> Iterator[pandas.Series].
#
# It is useful in the case of loading a machine learning model file once and then applying inference to every input batch.

# +
pdf = pd.DataFrame([1, 2, 3], columns=["x"])
df = spark.createDataFrame(pdf)

# When the UDF is called with the column,
# the input to the underlying function is an iterator of pd.Series.
@pandas_udf("long")
def plus_one(batch_iter: Iterator[pd.Series]) -> Iterator[pd.Series]:
    for x in batch_iter:
        print(type(x))
        yield x + 1

df.select(plus_one(F.col("x"))).show()
# df.select(F.col("x")).show()

# +
# In the UDF, you can initialize some state before processing batches.
# Wrap your code with try/finally or use context managers to ensure
# the release of resources at the end.
y_bc = spark.sparkContext.broadcast(1) @pandas_udf("long") def plus_y(batch_iter: Iterator[pd.Series]) -> Iterator[pd.Series]: y = y_bc.value # initialize states try: for x in batch_iter: yield x + y finally: pass # release resources here, if any df.select(plus_y(F.col("x"))).show() # - # ## Iterator of multiple Series to Iterator of multiple Series # The differences with Iterator of Series to Iterator of Series are: # - The underlying Python function takes an iterator of a tuple of pandas Series. # - The wrapped pandas UDF takes multiple Spark columns as an input. # + pdf = pd.DataFrame([1, 2, 3], columns=["x"]) df = spark.createDataFrame(pdf) @pandas_udf("long") def multiply_two_cols( iterator: Iterator[Tuple[pd.Series, pd.Series]]) -> Iterator[pd.Series]: for a, b in iterator: yield a * b df.select(multiply_two_cols("x", "x")).show() # - # ## Series to scalar # It is typically used in aggregation from one or more pandas Series to scalar such as __mean__. The type of the scalar can be Python primitive type or Numpy. It loads all the data into memory. 
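# For intuition, the Series-to-scalar idea can be mimicked in plain pandas (no Spark needed; the data here is made up for illustration): an aggregation function consumes an entire column as a `pd.Series` and reduces it to one number.

# +
import pandas as pd

# The aggregation receives a whole pd.Series and returns a single scalar,
# just like a Series-to-scalar pandas UDF does per group in Spark.
def mean_of(s: pd.Series) -> float:
    return float(s.mean())

pdf = pd.DataFrame({'id': [1, 1, 2, 2, 2],
                    'v': [1.0, 2.0, 3.0, 5.0, 10.0]})

# Per-group reduction, analogous to grouping by id and aggregating in Spark
result = pdf.groupby('id')['v'].apply(mean_of)
print(result.to_dict())  # {1: 1.5, 2: 6.0}
# -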
df = spark.createDataFrame( [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)], ("id", "v")) df.show() @pandas_udf('double') def mean_udf(x: pd.Series) -> float: return x.mean() # + # df.select(mean_udf('v')).show() # - # # Higher Order Functions in DataFrames and Spark SQL # There are typically two solutions for the manipulation of complex data types: # - Exploding the nested structure into individual rows, applying some function, and then re-creating the nested structure as noted in the code snippet below (see Option 1) # - Building a User Defined Function (UDF) # + # Create an array dataset arrayData = [[1, (1, 2, 3)], [2, (2, 3, 4)], [3, (3, 4, 5)]] # Create schema from pyspark.sql.types import * arraySchema = (StructType([ StructField("id", IntegerType(), True), StructField("values", ArrayType(IntegerType()), True) ])) # Create DataFrame df = spark.createDataFrame(spark.sparkContext.parallelize(arrayData), arraySchema) df.createOrReplaceTempView("table") df.printSchema() df.show() # - # explode(values) creates a new row (with the id) for each element (value) within values. spark.sql(''' SELECT id, explode(values) AS value FROM table ''').show() # ## Option 1: Explode & Collect spark.sql(''' SELECT id, collect_list(value + 1) AS new_values FROM (SELECT id, explode(values) AS value FROM table) AS A GROUP BY id ''').show() # The order of values may not be the same as their order in the original array due to shuffling. This option is very expensive because we use `GroupBy` and it requires shuffling and values can be very wide/long. `collect_list` may cause out of memory issues for larger datasets. # ## Option 2: User Defined Function def plus_one(values): return [v + 1 for v in values] spark.udf.register('plus_one', plus_one, returnType=ArrayType(IntegerType())) spark.sql( ''' SELECT id, plus_one(values) AS new_values FROM table ''').show() # The major drawback of this approach is that it requires serialization/deserialization which may be expensive. 
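# For intuition, the explode-and-collect pattern from Option 1 can be reproduced in plain pandas (a sketch with the same toy data, no Spark needed):

# +
import pandas as pd

# Plain-pandas version of Option 1: explode the array column,
# apply the function element-wise, then collect back per id.
df_arrays = pd.DataFrame({'id': [1, 2, 3],
                          'values': [[1, 2, 3], [2, 3, 4], [3, 4, 5]]})

exploded = df_arrays.explode('values')   # one row per array element
exploded['values'] = exploded['values'] + 1
collected = (exploded
             .groupby('id')['values']
             .agg(list)
             .reset_index(name='new_values'))
print(collected['new_values'].tolist())  # [[2, 3, 4], [3, 4, 5], [4, 5, 6]]
# -

# Unlike the Spark version, pandas runs on a single machine, so there is no shuffle here; the cost concern about `collect_list` on wide arrays is specific to distributed execution.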
# # Higher Order Functions schema = StructType([StructField("celsius", ArrayType(IntegerType()))]) t_list = [[35, 36, 32, 30, 40, 42, 38]], [[31, 32, 34, 55, 56]] t_c = spark.createDataFrame(t_list, schema) t_c.createOrReplaceTempView("tC") t_c.show() # ## `transform` # It takes an array and a lambda function, applies the function to each element in the array and returns an array. It is the same as UDF but more efficient. spark.sql( ''' SELECT celsius, transform(celsius, v -> ((v * 9) div 5) + 32) AS fahrenheit FROM tC ''').show() # ## `filter` # Returns only the elements where the predicate is true spark.sql( ''' SELECT celsius, filter(celsius, v -> v > 38) AS fahrenheit FROM tC ''').show() # ## `exists` # Returns __True__ if the predicate is true for any element in the array. spark.sql( ''' SELECT celsius, exists(celsius, v -> v = 38) AS threshold FROM tC ''').show() # ## `reduce` spark.sql( ''' SELECT celsius, reduce( celsius, 0, (x, y) -> x + y, y -> (y div size(celsius) * 9 div 5) + 32) AS avg_fahrenheit FROM tC ''').show() # # DataFrames and Spark SQL Common Relational Operators # ## Data airports_na = (spark .read .format('csv') .option('header', 'true') .option('inferSchema', 'true') .option('sep', '\t') .load('data/airport-codes-na.txt')) airports_na.show(5) airports_na.createOrReplaceTempView('airports_na') schema = ''' `date` STRING, `delay` INT, `distance` INT, `origin` STRING, `destination` STRING ''' departure_delays = (spark .read .format('csv') .option('header', 'true') .option('schema', schema) .load('data/departuredelays.csv')) departure_delays.show(5) departure_delays.printSchema() departure_delays = (departure_delays .withColumn('delay', F.expr('CAST(delay AS INT) AS delay')) .withColumn('distance', F.expr('CAST(distance AS INT) AS distance'))) departure_delays.printSchema() departure_delays.createOrReplaceTempView('departure_delays') foo = (departure_delays .filter(F.expr( """origin == 'SEA' and destination == 'SFO' and date like '01010%' and 
delay > 0""") )) foo.show() # ## Union # It is the same as `UNION ALL` in SQL. To get the SQL-like `UNION`, use `distinct` after `union`. bar = departure_delays.union(foo) (bar .filter(F.expr( """origin == 'SEA' and destination == 'SFO' and date like '01010%' and delay > 0""") ) .show()) # We can also use `UnionByName` # ## Joins # The default join is __inner__. foo.join(airports_na, foo.origin == airports_na.IATA).show() # ## Windowing spark.sql("DROP TABLE IF EXISTS departure_delays_window") spark.sql(""" CREATE TABLE departure_delays_window AS SELECT origin, destination, sum(delay) as total_delays FROM departure_delays WHERE origin IN ('SEA', 'SFO', 'JFK') AND destination IN ('SEA', 'SFO', 'JFK', 'DEN', 'ORD', 'LAX', 'ATL') GROUP BY origin, destination """) spark.sql("""SELECT * FROM departure_delays_window""").show() spark.sql( """ SELECT origin, destination, total_delays, rank FROM ( SELECT origin, destination, total_delays, dense_rank() OVER (PARTITION BY origin ORDER BY total_delays DESC) AS rank FROM departure_delays_window ) AS A WHERE rank <= 3 """).show() # ## Adding new columns foo2 = (foo .withColumn('status', F.expr("CASE WHEN delay <= 10 THEN 'on_time' ELSE 'delayed' END"))) foo2.show() # ## Dropping columns foo3 = foo2.drop('delay') foo3.show() # ## Renaming Columns foo4 = (foo3 .withColumnRenamed('status', 'flight_status')) foo4.show() # ## Pivoting spark.sql( """ SELECT destination, CAST(SUBSTRING(date, 0, 2) AS INT), delay FROM departure_delays WHERE origin = 'SEA' """ ).show() spark.sql( """ SELECT * FROM ( SELECT destination, CAST(SUBSTRING(date, 0, 2) AS INT) AS month, delay FROM departure_delays WHERE origin = 'SEA' ) PIVOT ( CAST(AVG(delay) AS DECIMAL(4, 2)) AS avg_delay, MAX(delay) AS max_delay FOR month in (1 Jan, 2 Feb) ) """ ).show()
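# The SQL `PIVOT` above has a close pandas counterpart, `pivot_table`, which can also compute several aggregates per pivoted column. A minimal sketch with made-up delay rows mirroring the `departure_delays` layout (no Spark needed):

# +
import pandas as pd

# Hypothetical flight-delay rows: destination, month, delay
delays = pd.DataFrame({
    'destination': ['SFO', 'SFO', 'JFK', 'JFK', 'SFO'],
    'month': [1, 2, 1, 2, 1],
    'delay': [10, 20, 5, 15, 30],
})

# Roughly: PIVOT (AVG(delay), MAX(delay) FOR month IN (1, 2))
pivoted = delays.pivot_table(index='destination',
                             columns='month',
                             values='delay',
                             aggfunc=['mean', 'max'])
print(pivoted)
# -

# The result has a two-level column index: (aggregate, month), e.g. ('mean', 1).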
notebooks/Learning-Spark/Spark-SQL-II.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

import pandas as pd
import numpy as np

people = {
    'first': ['<NAME>', 'Lucas', 'Adeguimar', 'Chris', np.nan, None, 'NA'],
    'last': ['Siqueira', 'Siqueira', 'Lacerda', 'Schafer', np.nan, None, 'Missing'],
    'email': ['<EMAIL>', '<EMAIL>', '<EMAIL>', '<EMAIL>', None, np.nan, 'Missing'],
    'age': ['33', '55', '63', '36', None, None, 'Missing']
}

# +
df = pd.DataFrame(people)

# We can already clean the data while creating the DataFrame
df.replace('NA', np.nan, inplace=True)
df.replace('Missing', np.nan, inplace=True)
# -

df

# Remove rows with missing values
df.dropna()

# Default options; running this only to record that it gives the same result as above.
# axis='index' removes rows; to remove columns, use axis='columns'.
df.dropna(axis='index', how='any')

df.dropna(axis='index', how='all')

df.dropna(axis='columns', how='any')

df.dropna(axis='index', how='any', subset=['last', 'email'])

df.isna()

# Replace missing values with 0
df.fillna(0)

df.dtypes

# This raises an error because age is a str (object) column
df['age'].mean()

# Check the type of np.nan (it is a float)
type(np.nan)

# Convert the age column to float
df['age'] = df['age'].astype(float)

# Check the data types
df.dtypes

# Now we can take the mean
df['age'].mean()
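# An alternative to the `replace` + `astype` combination above worth knowing: `pd.to_numeric` with `errors='coerce'` maps any non-numeric string to `NaN` in one step (a small self-contained sketch, separate from the DataFrame above):

# +
import pandas as pd

# 'Missing' cannot be parsed as a number, so errors='coerce' turns it into NaN
age = pd.Series(['33', '55', '63', '36', None, None, 'Missing'])
age_num = pd.to_numeric(age, errors='coerce')

print(age_num.mean())  # 46.75 -> (33 + 55 + 63 + 36) / 4
# -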
11 - Limpar E converter Dados.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] colab_type="text" id="QnXzcZvdc-r6"
# In this notebook we will demonstrate how to perform tokenization, stemming, lemmatization, and POS tagging using libraries like [spacy](https://spacy.io/) and [nltk](https://www.nltk.org/)

# +
# To install only the requirements of this notebook, uncomment the lines below and run this cell

# ===========================

# # !pip install numpy==1.19.5
# # !pip install nltk==3.2.5
# # !pip install spacy==2.2.4

# ===========================

# +
# To install the requirements for the entire chapter, uncomment the lines below and run this cell

# ===========================

# try :
#     import google.colab
#     # !curl https://raw.githubusercontent.com/practical-nlp/practical-nlp/master/Ch2/ch2-requirements.txt | xargs -n 1 -L 1 pip install
# except ModuleNotFoundError :
#     # !pip install -r "ch2-requirements.txt"

# ===========================

# + colab={} colab_type="code" id="R3xEmJpRc5r8"
# This will be our corpus which we will work on
corpus_original = "Need to finalize the demo corpus which will be used for this notebook and it should be done soon !!. It should be done by the ending of this month. But will it? This notebook has been run 4 times !!"
corpus = "Need to finalize the demo corpus which will be used for this notebook & should be done soon !!. It should be done by the ending of this month. But will it? This notebook has been run 4 times !!"
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="KHh_33IopPTf" outputId="fa12e7e4-aeb3-4053-be10-3cadad90d094" #lower case the corpus corpus = corpus.lower() print(corpus) # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="3yaGf8RiqgBM" outputId="859abb8b-3a34-4e23-bd8e-963520fb6ed3" #removing digits in the corpus import re corpus = re.sub(r'\d+','', corpus) print(corpus) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="v5Q--GItqzfu" outputId="82fec440-1251-4ba1-cdf4-f1cab3c2a607" #removing punctuations import string corpus = corpus.translate(str.maketrans('', '', string.punctuation)) print(corpus) # + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="zmANqee9rK4N" outputId="6105b616-e770-409a-88b0-3f23dd3ffd72" #removing trailing whitespaces corpus = ' '.join([token for token in corpus.split()]) corpus # + # # !python -m spacy download en_core_web_sm # + [markdown] colab_type="text" id="nfJx3MnVj_ph" # ### Tokenizing the text # + colab={"base_uri": "https://localhost:8080/", "height": 272} colab_type="code" id="OUz580k2sMqf" outputId="da21bf1e-444b-4077-c823-e58b4986a35f" from pprint import pprint ##NLTK import nltk from nltk.corpus import stopwords nltk.download('stopwords') nltk.download('punkt') from nltk.tokenize import word_tokenize stop_words_nltk = set(stopwords.words('english')) tokenized_corpus_nltk = word_tokenize(corpus) print("\nNLTK\nTokenized corpus:",tokenized_corpus_nltk) tokenized_corpus_without_stopwords = [i for i in tokenized_corpus_nltk if not i in stop_words_nltk] print("Tokenized corpus without stopwords:",tokenized_corpus_without_stopwords) ##SPACY from spacy.lang.en.stop_words import STOP_WORDS import spacy spacy_model = spacy.load('en_core_web_sm') stopwords_spacy = spacy_model.Defaults.stop_words print("\nSpacy:") tokenized_corpus_spacy = word_tokenize(corpus) print("Tokenized 
Corpus:", tokenized_corpus_spacy)

tokens_without_sw = [word for word in tokenized_corpus_spacy if not word in stopwords_spacy]
print("Tokenized corpus without stopwords:", tokens_without_sw)

print("Difference between NLTK and spaCy output:\n",
      set(tokenized_corpus_without_stopwords) - set(tokens_without_sw))

# + [markdown] colab_type="text" id="eRH_ltkD-HpA"
# Notice the difference in output after stopword removal using NLTK and spaCy. (Both runs tokenize with NLTK's `word_tokenize`; only the stop-word lists differ.)

# + [markdown] colab_type="text" id="tGcwD1JlkEao"
# ### Stemming

# + colab={"base_uri": "https://localhost:8080/", "height": 84} colab_type="code" id="ibEpzcv0sdW8" outputId="18f77b85-3a8e-4e89-df28-3bd6342ac594"
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

stemmer = PorterStemmer()
print("Before Stemming:")
print(corpus)

print("After Stemming:")
for word in tokenized_corpus_nltk:
    print(stemmer.stem(word), end=" ")

# + [markdown] colab_type="text" id="9Wy6cwvYkJeR"
# ### Lemmatization

# + colab={"base_uri": "https://localhost:8080/", "height": 67} colab_type="code" id="27KvL4ZE-fqJ" outputId="d8b6778f-79b7-4dd4-8832-da29d75dc3a8"
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
nltk.download('wordnet')

lemmatizer = WordNetLemmatizer()
for word in tokenized_corpus_nltk:
    print(lemmatizer.lemmatize(word), end=" ")

# + [markdown] colab_type="text" id="h8uCGA8ukMfQ"
# ### POS Tagging

# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="kZqBxLDz-6cu" outputId="a8503608-0352-4c00-82fe-789d874b5655"
# POS tagging using spacy
print("POS Tagging using spacy:")
doc = spacy_model(corpus_original)
# Token and Tag
for token in doc:
    print(token, ":", token.pos_)

# POS tagging using nltk
nltk.download('averaged_perceptron_tagger')
print("POS Tagging using NLTK:")
pprint(nltk.pos_tag(word_tokenize(corpus_original)))

# + [markdown] colab_type="text" id="zWdmz6lFkpEI"
# There are various other libraries you can use to perform these common pre-processing steps.
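# The cleanup steps used earlier (lowercasing, removing digits, removing punctuation, normalizing whitespace) can be collected into one small standard-library helper. This is a sketch, not part of the original notebook:

# +
import re
import string

def clean_text(text: str) -> str:
    """Lowercase, drop digits and punctuation, and normalize whitespace."""
    text = text.lower()
    text = re.sub(r'\d+', '', text)
    text = text.translate(str.maketrans('', '', string.punctuation))
    return ' '.join(text.split())

print(clean_text("This notebook has been run 4 times !!"))
# this notebook has been run times
# -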
Ch2/04_Tokenization_Stemming_lemmatization_stopword_postagging.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Reading and Visualising Data with Pandas # # <NAME> (<EMAIL>) # # + slideshow={"slide_type": "slide"} # %matplotlib inline import pandas as pd import matplotlib as ml import sys plt = ml.pyplot ml.rcParams['figure.figsize'] = (10.0, 5.0) print("Python version: {0}\n" "Pandas version: {1}\n" "Matplotlib version: {2}\n" .format(sys.version, pd.__version__, ml.__version__)) # + slideshow={"slide_type": "slide"} from IPython.core.magic import register_line_magic @register_line_magic def shorterr(line): """Show only the exception message if one is raised.""" try: output = eval(line) except Exception as e: print("\x1b[31m\x1b[1m{e.__class__.__name__}: {e}\x1b[0m".format(e=e)) else: return output del shorterr # + slideshow={"slide_type": "skip"} import warnings warnings.filterwarnings('ignore') # annoying UserWarnings from Jupyter which are not fixed yet # + [markdown] slideshow={"slide_type": "slide"} # ## Exercise 1 # # Use the `pd.read_csv()` function to create a `DataFrame` from the dataset `data/neutrinos.csv`. 
# -

# ### Problems encountered
#
# - the first few lines represent a plain header and need to be skipped
# - comments are indicated with `$` at the beginning of the line
# - the column separator is `:`
# - the decimal delimiter is `,`
# - the index column is the first one
# - there is a footer to be excluded
# - footer exclusion only works with the Python engine

# + slideshow={"slide_type": "subslide"}
neutrinos = pd.read_csv('data/neutrinos.csv',
                        skiprows=6,
                        delimiter=':',
                        decimal=',',
                        comment='$',
                        index_col=0,
                        skipfooter=1)
# -

neutrinos.head()

# ### Check the dtypes to make sure everything is parsed correctly (and is not an `object`-array)

neutrinos.bjorkeny = neutrinos.bjorkeny.str.replace(',', '.').astype(float)

neutrinos.dtypes

# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise 2
#
# Create a histogram of the neutrino energies.
# -

neutrinos.energy.hist(bins=100);

# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise 3
#
# Use the `pd.read_csv()` function to create a `DataFrame` from the dataset `data/reco.csv`.
# -

reco = pd.read_csv('data/reco.csv', index_col=0)

reco

# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise 4
#
# Combine the `neutrinos` and `reco` `DataFrames` using `pd.concat()`
# -

neutrinos

neutrinos.shape

reco.shape

df = pd.concat([neutrinos, reco.add_prefix('reco_')], axis=1)

df.shape

df

# ### Problems encountered
#
# - need to define the right axis
# - identical column names should be avoided

# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise 5
#
# Make a scatter plot to visualise the zenith reconstruction quality.
# -

df[['zenith', 'reco_zenith']]

df.plot.scatter(x='zenith', y='reco_zenith', alpha=0.01)

# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise 6
#
# Create a histogram of the cascade probabilities (__`neutrinos`__ dataset: `proba_cscd` column) for the energy ranges 1-5 GeV, 5-10 GeV, 10-20 GeV and 20-100 GeV.
# - df['energy_bin'] = float('nan') df.loc[(df.energy >= 1) & (df.energy <= 5), 'energy_bin'] = 1 df.loc[(df.energy > 5) & (df.energy <= 10), 'energy_bin'] = 2 df.loc[(df.energy > 10) & (df.energy <= 20), 'energy_bin'] = 3 df.loc[(df.energy > 20) & (df.energy <= 100), 'energy_bin'] = 4 df df['energy_bin'] = pd.cut(x=df.energy, bins=[1, 5, 10, 20, 100]) df df.hist('proba_cscd', by='energy_bin', bins=100, figsize=(15,10), alpha=0.6); # + [markdown] slideshow={"slide_type": "slide"} # ## Exercise 7 # # Create a 2D histogram showing the distribution of the `x` and `y` values of the starting positions (`pos_x` and `pos_y`) of the neutrinos. This is basically a 2D plane of the starting positions, using the method `hist2d`. # - plt.hist2d(df.pos_x, df.pos_y, bins=100, range=((-200,200),(-200,200))); plt.axis('equal') df.plot.hexbin(x='pos_x', y='pos_y', gridsize=100) plt.xlim(-200, 200) plt.ylim(-200, 200) plt.axis('equal') # + [markdown] slideshow={"slide_type": "slide"} # ## Acknowledgements # ![](images/eu_asterics.png) # # This tutorial was supported by the H2020-Astronomy ESFRI and Research Infrastructure Cluster (Grant Agreement number: 653477).
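As a footnote to Exercise 6: `pd.cut` labels each row with an `Interval` object by default, but passing `labels=` gives human-readable bin names. A minimal, self-contained sketch — the energy values below are made up for illustration, not the neutrino data:

```python
import pandas as pd

# Hypothetical energies standing in for the neutrino dataset;
# the bin edges mirror the ones used in Exercise 6.
energies = pd.Series([2.0, 7.5, 15.0, 50.0, 99.0])
bins = [1, 5, 10, 20, 100]
labels = ['1-5 GeV', '5-10 GeV', '10-20 GeV', '20-100 GeV']

# right=True (the default) makes bins right-inclusive: (1, 5], (5, 10], ...
energy_bin = pd.cut(energies, bins=bins, labels=labels)
print(energy_bin.tolist())
# → ['1-5 GeV', '5-10 GeV', '10-20 GeV', '20-100 GeV', '20-100 GeV']
```

The labelled column can then be handed straight to `df.hist(..., by='energy_bin')` as above, with readable subplot titles.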
pandas/Pandas_M1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # ![](../graphics/solutions-microsoft-logo-small.png) # # # R for Data Professionals # # ## 01 Overview and Setup # # In this course you'll cover the basics of the R language and environment from a Data Professional's perspective. While you will learn the basics of R itself, you'll quickly cover topics that have a lot more depth available. In each section you'll get more references to go deeper, which you should follow up on. Also watch for links within the text - click on each one to explore that topic. # # The code sections of this course are as much a part of your learning as these overview files. You'll get not only assignments but explanations in the R code in those exercises. # # Make sure you check out the **00 Pre-Requisites** page before you start. You'll need all of the items loaded there before you can proceed with the course. # # You'll cover these topics in the course: # # <p style="border-bottom: 1px solid lightgrey;"></p> # # <dl> # <dt>Course Outline</dt> # <dt>1 - Overview and Course Setup <i>(This section)</i></dt> # <dt>2 - Programming Basics</dt> # <dt>3 - Working with Data</dt> # <dt>4 - Deployment and Environments</dt> # </dl> # # <p style="border-bottom: 1px solid lightgrey;"></p> # <h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/cortanalogo.png"> Overview</h2> # # There are many "distributions" of R. The most common installation is from CRAN - the Comprehensive R Archive Network. The distribution you will use in this course, which is installed when you install SQL Server 2016 or higher with ML Services (or R Services in the earlier versions), is called Microsoft R Open (MRO), and its base code is from the CRAN distribution.
MRO replaces a couple of libraries (more about those later) and adds a few to increase the speed, capabilities and features of standard CRAN R. # # You have a few ways of working with R: # # - The Interactive Interpreter (Type `R` if it is in your path) # - Writing code and running it in some graphical environment (Such as VSCode, Visual Studio, RGUI, R-Studio, etc.) # - Calling an `.R` script file from the `R` command # # When you're in command-mode, you'll see that the code works more like a scripting language. Programming-mode looks like a standard programming language environment - you'll normally use that within an Integrated Development Environment (IDE). In any case, R is an "interpreted" language, meaning you write code that R then runs through a series of steps before it returns a result. # <p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/aml-logo.png"><b>Activity: Verify Your Installation and Configure R</b></p> # # Open the **01_OverviewAndCourseSetup.R** file and run the code you see there. The exercises will be marked out using comments: # # `<TODO> - 01` # + # 01_OverviewAndCourseSetup.R # Purpose: Initial Course Setup and displaying versions # Author: <NAME> # Credits and Sources: Inline # Last Updated: 27 June 2018 # Check the R Version and Information # <TODO> - Fix this code so that it runs print "The R Version is: " R Something Something # EOF: 01_OverviewAndCourseSetup.R # - # <p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/thinking.jpg"><b>For Further Study</b></p> # # - The Official R Documentation: https://mran.microsoft.com/rro # - The R tutorial (current as of the publication of this course) is in your ./assets folder as a file called `R-intro.pdf`. # # Next, Continue to *02 Programming Basics*
RForDataProfessionals/R For Data Professionals/notebooks/.ipynb_checkpoints/01 Overview and Setup-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt import pandas as pd from sklearn.preprocessing import StandardScaler import sklearn.neural_network import sklearn.model_selection import sklearn.metrics from sklearn.model_selection import train_test_split datos = pd.read_csv('data.csv') datos = np.array(datos) X = datos[:,1:-1] Y = datos[:,-1] # + scaler = StandardScaler() X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size=0.5) X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) y_train = np.array([int(x) for x in y_train]) y_test = np.array([int(x) for x in y_test]) # - Ntrain = len(X_train) Ntest = len(X_test) mlp = sklearn.neural_network.MLPClassifier(activation='logistic', hidden_layer_sizes=(4,4,4), max_iter=1000) F1_train = [] F1_test = [] for i in np.arange(0.1,1.1,0.1): x_train = X_train[:int(i*Ntrain)] Y_train = y_train[:int(i*Ntrain)] x_test = X_test[:int(i*Ntest)] Y_test = y_test[:int(i*Ntest)] mlp.fit(x_train, Y_train) F1_train.append(sklearn.metrics.f1_score(Y_train, mlp.predict(x_train), average='macro')) F1_test.append(sklearn.metrics.f1_score(Y_test, mlp.predict(x_test), average='macro')) plt.figure(figsize = (5,5)) plt.plot(np.arange(0.1,1.1,0.1),F1_train,label = 'Train') plt.plot(np.arange(0.1,1.1,0.1),F1_test,label = 'Test') plt.legend() plt.title('F1 usando logistic') plt.xlabel('Porcentaje de train y test') plt.ylabel('F1 score') mlp = sklearn.neural_network.MLPClassifier(activation='relu', hidden_layer_sizes=(4,4,4), max_iter=1000) F1_train = [] F1_test = [] for i in np.arange(0.1,1.1,0.1): x_train = X_train[:int(i*Ntrain)] Y_train = y_train[:int(i*Ntrain)] x_test = X_test[:int(i*Ntest)] Y_test = y_test[:int(i*Ntest)] mlp.fit(x_train, Y_train) F1_train.append(sklearn.metrics.f1_score(Y_train,
mlp.predict(x_train), average='macro')) F1_test.append(sklearn.metrics.f1_score(Y_test, mlp.predict(x_test), average='macro')) plt.figure(figsize = (5,5)) plt.plot(np.arange(0.1,1.1,0.1),F1_train,label = 'Train') plt.plot(np.arange(0.1,1.1,0.1),F1_test,label = 'Test') plt.legend() plt.title('F1 usando relu') plt.xlabel('Porcentaje de train y test') plt.ylabel('F1 score') alpha = np.logspace(-5,2,20) F1_train = [] F1_test = [] for i in alpha: mlp = sklearn.neural_network.MLPClassifier(alpha = i, activation='relu', hidden_layer_sizes=(4,4,4), max_iter=1000) mlp.fit(X_train, y_train) F1_train.append(sklearn.metrics.f1_score(y_train, mlp.predict(X_train), average='macro')) F1_test.append(sklearn.metrics.f1_score(y_test, mlp.predict(X_test), average='macro')) plt.figure(figsize = (5,5)) plt.semilogx(alpha,F1_train,label = 'Train') plt.semilogx(alpha,F1_test,label = 'Test') plt.legend() plt.title('F1 usando relu variando alpha') plt.xlabel('Alpha') plt.ylabel('F1 score') # The F1 score is clearly affected for both the logistic and relu activations. However, while with the logistic activation the variation is imprecise, with no well-defined behaviour, with relu F1 decreases as the fraction of train and test data used increases. Finally, erratic behaviour is also observed when varying alpha.
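A footnote on the metric itself: `sklearn.metrics.f1_score(..., average='macro')` computes one F1 per class and then takes the unweighted mean over classes. A plain-Python sketch of that definition, useful as a sanity check against the library:

```python
# Hand-rolled macro-averaged F1, following the standard definition:
# per-class precision/recall/F1, then an unweighted mean over classes.
def macro_f1(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# class 0: precision=1, recall=0.5 -> F1=2/3; class 1: precision=2/3, recall=1 -> F1=0.8
print(macro_f1([0, 0, 1, 1], [0, 1, 1, 1]))  # → 0.7333...
```

Because every class weighs equally, macro-F1 penalises poor performance on rare classes more than accuracy would — relevant here, where class proportions shift as the train fraction shrinks.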
influencia_instancias.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- from noisygamesutil import * import matplotlib.pyplot as plt # + #objective here is to get the multi round performance of player0 #up against everything else. Between random play is uninteresting #from that we can build a plot of a given strategy's performance against #differing amounts of randomness # + base_dir = "/home/kennethmclarney/Documents/RustProjects/noisygames/test_runs/Year2022Month1Day23Hour5Min29Sec3/" #great, now I've got a nice setup to organize results. players_struct = build_players_struct(base_dir) player_name='player0' tested_strat='GrimTrigger' matchups=list(players_struct[player_name].keys()) all_matches = {} # - probs=[] for matchup in matchups: match=load_matchup_files(players_struct,player_name,matchup) strat=list(match[0]['player_b'].keys())[0] prob = match[0]['player_b'][strat]['player']['probability']*100 probs.append(prob) prob_str=str(prob) strat_id=strat+'_'+prob_str matchup_score = get_matchup_scores(match,player_name,matchup)[0] all_matches[strat_id]=matchup_score # + probs.sort() prob_strings = [] for prob in probs: prob_strings.append("RandomDefect_"+str(prob)) values=[] for s in prob_strings: values.append(all_matches[s]) # - fig = plt.figure() ax = plt.axes() ax.plot(probs,values) plt.title(tested_strat+' Performance vs Chance of Defection',fontsize=14) plt.xlabel('Probability of Defection',fontsize=12) plt.ylabel('Payoff',fontsize=12) #plt.savefig(tested_strat+"Randomized.jpg") plt.show()
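For context, here is a minimal sketch of the kind of matchup being plotted: a GrimTrigger player scored against an opponent's move sequence. This is illustrative only — the payoff values (T=5, R=3, P=1, S=0, the textbook Prisoner's Dilemma) and the strategy logic are assumptions for the sketch, not the noisygames Rust implementation:

```python
# Assumed Prisoner's Dilemma payoffs for (my_move, opponent_move);
# the actual noisygames runs may use a different payoff matrix.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def grim_trigger(opponent_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return 'D' if 'D' in opponent_history else 'C'

def play(opponent_moves):
    """Total payoff for GrimTrigger against a fixed opponent sequence."""
    history, total = [], 0
    for opp in opponent_moves:
        mine = grim_trigger(history)
        total += PAYOFF[(mine, opp)]
        history.append(opp)
    return total

print(play(['C', 'C', 'D', 'C', 'C']))  # 3 + 3 + 0 + 5 + 5 → 16
```

Against a `RandomDefect` opponent, a single early defection flips GrimTrigger into permanent defection, which is why the payoff curve above reacts so sharply to even small defection probabilities.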
analysis/StratVsRandom.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # # Triangle with Normal Vector # # ### Plotting a triangle with its normal vector. The tail of the vector is set to be the triangle centroid. # # %matplotlib inline from skspatial.objects import Triangle from skspatial.plotting import plot_3d triangle = Triangle([0, 0, 1], [1, 1, 0], [0, 2, 1]) centroid = triangle.centroid() plot_3d( triangle.plotter(c='k', zorder=3), centroid.plotter(c='r'), triangle.normal().plotter(point=centroid, scalar=0.2, c='r'), *[x.plotter(c='k', zorder=3) for x in triangle.multiple('line', 'abc')], )
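For intuition, the two quantities drawn above can be computed by hand: the centroid is the mean of the three vertices, and the normal is the cross product of two edge vectors. A pure-Python sketch for the same vertices (note scikit-spatial may scale or orient its normal vector differently than the raw cross product):

```python
# Same vertices as the Triangle above.
A, B, C = (0, 0, 1), (1, 1, 0), (0, 2, 1)

# Centroid: component-wise mean of the three vertices.
centroid = tuple((a + b + c) / 3 for a, b, c in zip(A, B, C))

# Normal: cross product of the edge vectors AB and AC.
u = tuple(b - a for a, b in zip(A, B))  # AB = (1, 1, -1)
v = tuple(c - a for a, c in zip(A, C))  # AC = (0, 2, 0)
normal = (u[1] * v[2] - u[2] * v[1],
          u[2] * v[0] - u[0] * v[2],
          u[0] * v[1] - u[1] * v[0])

print(centroid)  # → (0.333..., 1.0, 0.666...)
print(normal)    # → (2, 0, 2)
```

Plotting the normal with its tail at the centroid, as the `plot_3d` call does, just translates this vector to start at that point.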
Mathematics/Linear Algebra/Python Visualization Notebooks/scikit-spatial/plot_normal.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Using Dynamic IP on the Composable Pipeline # ---- # # <div class="alert alert-box alert-info"> # Please use Jupyter labs http://&lt;board_ip_address&gt;/lab for this notebook. # </div> # # This notebook shows your how to load dynamic IP and compose branched pipelines # # ## Aims # * Load dynamic IP # * Compose branched pipelines # # ## Table of Contents # * [Download Composable Overlay](#download) # * [Start HDMI Video](#start_hdmi) # * [Load Dynamic IP](#dynamic) # * [Let us Compose](#compose) # * [Branched Pipeline](#branched) # * [Conflicting Dynamic IP](#conflicting) # * [Stop HDMI Video](#stop_hdmi) # * [Conclusion](#conclusion) # # ---- # # ## Revision History # # * v1.0 | 30 March 2021 | First notebook revision. # # ---- # ## Download Composable Overlay <a class="anchor" id="download"></a> # # Import the pynq video libraries as well as ComposableOverlay class and the drivers for the IP. # # Download the Composable Overlay using the `ComposableOverlay` and grab a handler to the `composable` hierarchy # + from pynq.lib.video import * from composable_pipeline import ComposableOverlay from composable_pipeline.libs import * ol = ComposableOverlay("../overlay/cv_dfx_4_pr.bit") cpipe = ol.composable # - # ## Start HDMI Video <a class="anchor" id="start_hdmi"></a> # # Get `HDMIVideo` object and start video # # <div class="alert alert-heading alert-danger"> # <h4 class="alert-heading">Warning:</h4> # # Failure to connect HDMI cables to a valid video source and screen may cause the notebook to hang # </div> video = HDMIVideo(ol) video.start() # ## Load Dynamic IP <a class="anchor" id="dynamic"></a> # # The Composable Overlay provides DFX regions where IP can be loaded dynamically to bring new functionality. 
If we want to load an IP within a DFX region, the `.loadIP` method is used. # # Let us start by looking at the `.c_dict` to see what IP cores are loaded cpipe.c_dict.loaded # The documentation of `.loadIP` specifies that IP can be loaded using the full name or the IP object # + # cpipe.loadIP? # - cpipe.loadIP([cpipe.pr_1.dilate_accel]) # Examine the `.c_dict` again and verify that `dilate_accel` and `erode_accel` are indeed loaded; both are in the same DFX region cpipe.c_dict.loaded # ## Let us Compose <a class="anchor" id="compose"></a> # # Grab handlers to these dynamic IP objects and compose a pipeline with them # + video_in_in = cpipe.video.hdmi_in.color_convert video_in_out = cpipe.video.hdmi_in.pixel_pack dilate = cpipe.pr_1.dilate_accel # + cpipe.compose([video_in_in, dilate, video_in_out]) cpipe.graph # - # ## Branched Pipeline <a class="anchor" id="branched"></a> # # In this part of the notebook, we will bring new functionality into the four DFX regions to compose the [Difference of Gaussians](https://en.wikipedia.org/wiki/Difference_of_Gaussians) application that was also introduced in the previous session. # # Load dynamic IP, grab handlers and set up default values cpipe.loadIP(['pr_fork/duplicate_accel', 'pr_join/subtract_accel', 'pr_0/filter2d_accel']) # + filter2d = cpipe.video.composable.filter2d_accel duplicate = cpipe.pr_fork.duplicate_accel subtract = cpipe.pr_join.subtract_accel fifo = cpipe.pr_0.axis_data_fifo_1 filter2d_d = cpipe.pr_0.filter2d_accel filter2d.sigma = 0.3 filter2d.kernel_type = 'gaussian_blur' filter2d_d.sigma = 12 filter2d_d.kernel_type = 'gaussian_blur' # - # The Difference of Gaussians is realized by subtracting one Gaussian blurred version of an original image from another less blurred version of the original. In the Composable Overlay this is achieved by branching the pipeline, which is expressed as a list of lists.
# + video_pipeline = [video_in_in, filter2d, duplicate, [[filter2d_d], [1]], subtract, video_in_out] cpipe.compose(video_pipeline) cpipe.graph # - # ## Conflicting Dynamic IP <a class="anchor" id="conflicting"></a> # # Note that IP within a DFX region are often mutually exclusive (even though some DFX regions support multiple IP at the same time); mutually exclusive IP cannot be loaded at the same time. The `.loadIP` will raise an exception in these cases, try it yourself by running the following cell cpipe.loadIP(['pr_fork/duplicate_accel', 'pr_fork/colorthresholding_accel']) # <div class="alert alert-info"> # <strong>Info!</strong> Use the <strong>dfx_dict</strong> attribute to identify which IP are mutually exclusive # </div> # ## Stop HDMI Video <a class="anchor" id="stop_hdmi"></a> # # Finally stop the HDMI video pipeline # # <div class="alert alert-heading alert-danger"> # <h4 class="alert-heading">Warning:</h4> # # Failure to stop the HDMI Video may hang the board # when trying to download another bitstream onto the FPGA # </div> video.stop() # ---- # # ## Conclusion <a class="anchor" id="conclusion"></a> # # This notebook has shown how to bring new functionality to the composable overlay by loading dynamic IP. Moreover, the notebook shows how to implement a branched pipeline. # # [⬅️ Modify Composable Pipeline](04_modify_pipeline.ipynb) | | [Build Custom Application ➡️](06_build_application.ipynb) # Copyright &copy; 2021 Xilinx, Inc # # SPDX-License-Identifier: BSD-3-Clause # # ----
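As an aside, the list-of-lists branch notation can be read as a small graph-expansion rule. The sketch below is purely illustrative — it is not the composable-pipeline API, which resolves real AXI4-Stream switch routing; it just shows one way to interpret the notation, where a `1` inside a branch denotes a direct (passthrough) wire:

```python
def branch_edges(pipeline):
    """Expand the list-of-lists branch notation into (source, sink) edges.

    Illustrative interpretation only: an element that is a list forks the
    previous stage into branches, and the stage after it joins them.
    """
    edges, prev = [], [pipeline[0]]
    for stage in pipeline[1:]:
        if isinstance(stage, list):          # fork: each sublist is a branch
            fork_src, join_inputs = prev[0], []
            for branch in stage:
                src = fork_src
                for ip in branch:
                    if ip == 1:              # passthrough: keep the fork source
                        continue
                    edges.append((src, ip))
                    src = ip
                join_inputs.append(src)
            prev = join_inputs               # the next stage joins all branches
        else:
            for p in prev:
                edges.append((p, stage))
            prev = [stage]
    return edges

print(branch_edges(['hdmi_in', 'filter2d', 'duplicate',
                    [['filter2d_d'], [1]], 'subtract', 'hdmi_out']))
```

Applied to the Difference of Gaussians pipeline above, this yields one edge through the second Gaussian filter and one direct edge from `duplicate` to `subtract` — exactly the two-input topology the subtraction stage needs.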
composable_pipeline/notebooks/custom_pipeline/05_dynamic_ip.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (Data Science) # language: python # name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0 # --- # # Run Data Bias Analysis with SageMaker Clarify (Pre-Training) # # ## Using SageMaker Processing Jobs # + import boto3 import sagemaker import pandas as pd import numpy as np sess = sagemaker.Session() bucket = sess.default_bucket() role = sagemaker.get_execution_role() region = boto3.Session().region_name sm = boto3.Session().client(service_name="sagemaker", region_name=region) # - # # Get Data from S3 # %store -r bias_data_s3_uri print(bias_data_s3_uri) # !aws s3 cp $bias_data_s3_uri ./data-clarify # + import pandas as pd data = pd.read_csv("./data-clarify/amazon_reviews_us_giftcards_software_videogames.csv") data.head() # - data.shape # ### Data inspection # Plotting histograms for the distribution of the different features is a good way to visualize the data. # + import seaborn as sns sns.countplot(data=data, x="star_rating", hue="product_category") # - # # Detecting Bias with Amazon SageMaker Clarify # # SageMaker Clarify helps you detect possible pre- and post-training biases using a variety of metrics. # + from sagemaker import clarify clarify_processor = clarify.SageMakerClarifyProcessor( role=role, instance_count=1, instance_type="ml.c5.xlarge", sagemaker_session=sess ) # - # # Pre-training Bias # Bias can be present in your data before any model training occurs. Inspecting your data for bias before training begins can help detect any data collection gaps, inform your feature engineering, and help you understand what societal biases the data may reflect. # # Computing pre-training bias metrics does not require a trained model. # ## Writing DataConfig # A `DataConfig` object communicates some basic information about data I/O to Clarify.
We specify where to find the input dataset, where to store the output, the target column (`label`), the header names, and the dataset type. # + bias_report_output_path = "s3://{}/clarify".format(bucket) bias_data_config = clarify.DataConfig( s3_data_input_path=bias_data_s3_uri, s3_output_path=bias_report_output_path, label="star_rating", headers=data.columns.to_list(), dataset_type="text/csv", ) # - # ## Writing BiasConfig # SageMaker Clarify also needs information on what the sensitive columns (`facets`) are, what the sensitive features (`facet_values_or_threshold`) may be, and what the desirable outcomes are (`label_values_or_threshold`). # Clarify can handle both categorical and continuous data for `facet_values_or_threshold` and for `label_values_or_threshold`. In this case we are using categorical data. # # We specify this information in the `BiasConfig` API. Here, the positive outcome is a `star_rating` of 5 or 4, `product_category` is the sensitive column, and `Gift Card` is the sensitive value.
bias_config = clarify.BiasConfig( label_values_or_threshold=[5, 4], facet_name="product_category", facet_values_or_threshold=["Gift Card"], group_name="product_category", ) # ## Detect Bias with a SageMaker Processing Job and Clarify clarify_processor.run_pre_training_bias( data_config=bias_data_config, data_bias_config=bias_config, methods="all", wait=False, logs=False ) run_pre_training_bias_processing_job_name = clarify_processor.latest_job.job_name run_pre_training_bias_processing_job_name # + from IPython.core.display import display, HTML display( HTML( '<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs/{}">Processing Job</a></b>'.format( region, run_pre_training_bias_processing_job_name ) ) ) # + from IPython.core.display import display, HTML display( HTML( '<b>Review <a target="blank" href="https://console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix">CloudWatch Logs</a> After About 5 Minutes</b>'.format( region, run_pre_training_bias_processing_job_name ) ) ) # + from IPython.core.display import display, HTML display( HTML( '<b>Review <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}/{}/?region={}&tab=overview">S3 Output Data</a> After The Processing Job Has Completed</b>'.format( bucket, run_pre_training_bias_processing_job_name, region ) ) ) # + running_processor = sagemaker.processing.ProcessingJob.from_processing_name( processing_job_name=run_pre_training_bias_processing_job_name, sagemaker_session=sess ) processing_job_description = running_processor.describe() print(processing_job_description) # - running_processor.wait(logs=False) # # Download Report From S3 # The class-imbalance metric should match the value calculated for the unbalanced dataset using the open source version above. 
# !aws s3 ls $bias_report_output_path/ # !aws s3 cp --recursive $bias_report_output_path ./generated_bias_report/ # + from IPython.core.display import display, HTML display(HTML('<b>Review <a target="blank" href="./generated_bias_report/report.html">Bias Report</a></b>')) # - # # Release Resources # + language="html" # # <p><b>Shutting down your kernel for this notebook to release resources.</b></p> # <button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button> # # <script> # try { # els = document.getElementsByClassName("sm-command-button"); # els[0].click(); # } # catch(err) { # // NoOp # } # </script> # + language="javascript" # # try { # Jupyter.notebook.save_checkpoint(); # Jupyter.notebook.session.delete(); # } # catch(err) { # // NoOp # }
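Among the pre-training metrics in the generated report is class imbalance (CI). For a facet value of interest (here `Gift Card`) with `n_a` samples and `n_d` samples in the rest of the data, Clarify defines CI = (n_a − n_d)/(n_a + n_d). A quick stdlib sketch of that formula (the sample counts below are made up):

```python
def class_imbalance(n_a, n_d):
    """Pre-training class imbalance: (n_a - n_d) / (n_a + n_d).

    n_a: number of samples with the facet value of interest (e.g. Gift Card),
    n_d: number of samples without it. The result lies in [-1, 1];
    0 means the facet is perfectly balanced against the rest.
    """
    return (n_a - n_d) / (n_a + n_d)

print(class_imbalance(100, 900))  # → -0.8, the facet is heavily under-represented
print(class_imbalance(500, 500))  # → 0.0, balanced
```

A strongly negative CI, as in the first call, is the situation the countplot earlier in the notebook hints at: far fewer gift-card reviews than software and video-game reviews.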
05_explore/04_Run_Data_Bias_Analysis_ProcessingJob.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="ERSps3-ac4Po" # # Introduction to QP # + [markdown] colab_type="text" id="VsbCH_XUJDCN" # ## Notebook Setup # The following cell will install Drake, checkout the manipulation repository, and set up the path (only if necessary). # - On Google's Colaboratory, this **will take approximately two minutes** on the first time it runs (to provision the machine), but should only need to reinstall once every 12 hours. # # More details are available [here](http://manipulation.mit.edu/drake.html). # + colab={} colab_type="code" id="l3cuy181WKu6" import importlib import os, sys from urllib.request import urlretrieve if 'google.colab' in sys.modules and importlib.util.find_spec('manipulation') is None: urlretrieve(f"http://manipulation.csail.mit.edu/scripts/setup/setup_manipulation_colab.py", "setup_manipulation_colab.py") from setup_manipulation_colab import setup_manipulation setup_manipulation(manipulation_sha='c1bdae733682f8a390f848bc6cb0dbbf9ea98602', drake_version='0.25.0', drake_build='releases') # + colab={} colab_type="code" id="tFMmTfbHWQfh" # python libraries import numpy as np from pydrake.all import ( MathematicalProgram, Solve, eq, le, ge ) # + [markdown] colab_type="text" id="ze9gQeOVOJUA" # # Introduction to MathematicalProgram # # The purpose of this exercise is to get you familiar with the basics of what an instance of an optimization problem is, as well as how to solve it. # # An optimization problem is usually written as # $$\begin{aligned} # \min_x \quad & f(x) \\ # \textrm{s.t.} \quad & g(x)\leq 0,\\ # \quad & h(x)=0 \end{aligned}$$ # # We call $x$ the **decision variable**, $f(x)$ the **cost function**, $g(x)\leq 0$ an **inequality constraint**, and $h(x)=0$ an **equality constraint**. 
We usually denote the optimal solution by $x^*$. Most of the time, the constraints are hard constraints, meaning that they must be fulfilled by the optimal solution. # # Drake offers a very good interface to many solvers using `MathematicalProgram`. Let's try to solve a simple problem using `MathematicalProgram`: # $$\begin{aligned} # \min_x \quad & \frac{1}{2}x^2 \\ # \textrm{s.t.} \quad & x\geq 3 # \end{aligned}$$ # # Before we start coding, what do you expect the answer to be? You should persuade yourself that the optimal solution is $x^*=3$, since that is the value at which the minimum cost is achieved without violating the constraint. # # # + colab={} colab_type="code" id="Khi7GeVNcwtU" ''' Steps to solve an optimization problem using Drake's MathematicalProgram ''' # 1. Define an instance of MathematicalProgram prog = MathematicalProgram() # 2. Add decision variables x = prog.NewContinuousVariables(1) # 3. Add Cost function prog.AddCost(x.dot(x)) # 4. Add Constraints prog.AddConstraint(x[0] >= 3) # 5. Solve the problem result = Solve(prog) # 6. Get the solution if (result.is_success()): print("Solution: " + str(result.GetSolution())) # + [markdown] colab_type="text" id="HvEI7697UUZC" # You should have seen that we were successful in getting the expected solution of $x^*=3$. # # A particular class of problems that we want to focus on are [Quadratic Programs (QP)](https://en.wikipedia.org/wiki/Quadratic_programming), which can be solved very efficiently in practice (even on the order of kHz). # # The general formulation of these problems is defined as follows. # $$\begin{aligned} # \min_x \quad & \frac{1}{2}x^T\mathbf{Q}x + c^Tx \\ # \textrm{s.t.} \quad & \mathbf{A}x\leq b,\\ # \quad & \mathbf{A}'x=b' \end{aligned}$$ # # where $\mathbf{Q}$ is a positive-definite, symmetric matrix. Note that the cost is a quadratic function of the decision variables, while the constraints are all linear. This is what defines a convex QP.
# # Let's practice solving a simple QP: # # $$\begin{aligned} # \min_{x_0,x_1,x_2} \quad & x_0^2 + x_1^2 + x_2^2 \\ # \textrm{s.t.} \quad & \begin{pmatrix} 2 & 3 & 1 \\ 5 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\\ # \quad & \begin{pmatrix} x_0 \\ x_1 \\ x_2 \end{pmatrix} \leq \begin{pmatrix} 2 \\ 2 \\ 2\end{pmatrix} \end{aligned}$$ # # To conveniently write down constraints that are vector-valued, Drake offers `eq,le,ge` for elementwise constraints. It might take some time to learn the syntax of constraints. For a more well-written and in-depth introduction to `MathematicalProgram`, [this notebook tutorial](https://mybinder.org/v2/gh/RobotLocomotion/drake/nightly-release-binder?filepath=tutorials/mathematical_program.ipynb) is incredibly useful. # # # + colab={} colab_type="code" id="SNvpjgzxVQJC" prog = MathematicalProgram() x = prog.NewContinuousVariables(3) prog.AddCost(x.dot(x)) prog.AddConstraint(eq(np.array([[2, 3, 1], [5, 1, 0]]).dot(x), [1, 1])) prog.AddConstraint(le(x, 2 * np.ones(3))) result = Solve(prog) # 6. Get the solution if (result.is_success()): print("Solution: " + str(result.GetSolution())) # + [markdown] colab_type="text" id="SmYZWSewSwf6" # # **Now, it's your turn to solve a simple problem!** # # You must solve the following problem and store the result in a variable named `result_submission`. # # $$\begin{aligned} # \min_{x_0,x_1,x_2} \quad & 2x_0^2 + x_1^2 + 3x_2^2 \\ # \textrm{s.t.} \quad & \begin{pmatrix} 1 & 2 & 3 \\ 2 & 7 & 4 \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \\ # \quad & |x| \leq \begin{pmatrix} 0.35 \\ 0.35 \\ 0.35\end{pmatrix} \end{aligned}$$ # # NOTE: The last constraint says that the absolute value of `x[i]` must be less than the value of `b_bb[i]`. 
You cannot put an absolute value directly as a constraint, so there are two routes that you can take: # - Break the constraints down to two constraints that don't involve the absolute value. # - Drake offers [`AddBoundingBoxConstraint`](https://drake.mit.edu/pydrake/pydrake.solvers.mathematicalprogram.html?highlight=addboundingbox#pydrake.solvers.mathematicalprogram.MathematicalProgram.AddBoundingBoxConstraint) which you may use in your implementation. # + colab={} colab_type="code" id="qhMB4kc3asCE" prog = MathematicalProgram() # Modify here to get the solution to the above optimization problem. result_submission = None # store the result here. # + [markdown] colab_type="text" id="zPmeRLtJk410" # ## How will this notebook be Graded? # # If you are enrolled in the class, this notebook will be graded using [Gradescope](https://www.gradescope.com). You should have gotten the enrollment code in our announcement on Piazza. # # For submission of this assignment, you must do as follows: # - Download and submit the notebook `intro_to_qp.ipynb` to Gradescope's notebook submission section, along with your notebook for the other problems. # # We will evaluate the local functions in the notebook to see if the function behaves as we expected. For this exercise, the rubric is as follows: # - [4 pts] `result_submission` must have the correct answer to the QP. # + [markdown] colab_type="text" id="t4GLP2woecl5" # Below is our autograder where you can check the correctness of your implementations. # + colab={} colab_type="code" id="Ea4zI6Enefhx" from manipulation.exercises.pick.test_simple_qp import TestSimpleQP from manipulation.exercises.grader import Grader Grader.grade_output([TestSimpleQP], [locals()], 'results.json') Grader.print_test_results('results.json')
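If you want a solver-free sanity check of the very first example ($\min \frac{1}{2}x^2$ s.t. $x \geq 3$), projected gradient descent converges to the same $x^* = 3$: take a gradient step on the cost, then project back onto the feasible set. A small sketch — the step size and iteration count here are arbitrary choices, and this is a didactic check, not a substitute for `MathematicalProgram`:

```python
def projected_gradient(x0, lr=0.1, steps=200):
    """Minimize f(x) = 0.5 * x**2 subject to x >= 3 by projected gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * x     # gradient step: f'(x) = x
        x = max(x, 3.0)    # projection onto the feasible set {x : x >= 3}
    return x

print(projected_gradient(10.0))  # → 3.0, matching the solver's answer
```

The projection step is what a bounding-box constraint encodes declaratively: each iterate is clamped back inside the feasible interval.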
exercises/pick/intro_to_qp.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Pandas # --- # # *For those who have errors loading this dataset on macOS, run the commands below in a terminal* # # export LC_ALL=en_US.UTF-8 # export LANG=en_US.UTF-8 import pandas as pd # ### Reading from csv file # %ls ../data # *let's use the countries.csv file* df = pd.read_csv('../data/countries.csv') df # *If it gives an encoding error, define the encoding format* # # ```python # df = pd.read_csv('../data/countries.csv', encoding='utf-8') # ``` df.head() # select rows from 10 - 30 ndf = df[10: 30] ndf.head() # reset the index of our new dataframe "ndf" to start it from 0 ndf.reset_index() # *Every dataframe operation returns a new dataframe, i.e. it does not change the old one* ndf.head() # now drop the old index while resetting it instead of creating a new column ndf.reset_index(drop=True) # set a new index out of a column ndf.set_index("Code") ndf.head() # index of dataframe ndf.index mdf = ndf.set_index("Code") mdf.head() mdf.index mdf.columns df.columns mdf mdf[3] mdf["Name"] mdf["AR"] mdf[0] mdf[2:5] mdf.ix["AW"] mdf.loc["AU"] mdf.ix[0] mdf.loc[0] mdf.iloc[0] ndf.head() ndf.reindex(columns=["Pop", "Name", "Code"]).head() ndf.reindex(index=[8, 11, 10, 14, 13, 12]) df2 = pd.read_csv('co2_emissions.csv', skiprows=3) df2.head() df2['Country Name'].unique() df2['Country Name'].describe() ag_df = df2[df2["Indicator Code"] == 'EN.ATM.CO2E.KT'] ag_df ag_df.loc[:, "1960": "2015"] ag_df ag_df.drop(["Country Name", "Country Code", "Indicator Name", "Indicator Code", "Unnamed: 61"], axis=1) ag_df ag_df.drop(["Country Name", "Country Code", "Indicator Name", "Indicator Code", "Unnamed: 61"], axis=1, inplace=True) ag_df ag_df.dropna() ag_df = ag_df.transpose() ag_df ag_df.columns = ["Values"] ag_df ag_df = ag_df.dropna() ag_df import matplotlib.pyplot as plt # 
%matplotlib inline ag_df.plot() df2.head() df2[(df2['2009'] > 1000) & (df2['2010'] < 4000)] df3 = df2.drop(["Country Name", "Country Code", "Indicator Name"], axis=1) df3.head() df3 = df3.transpose() df3.head() df3.iloc[0] df3.columns = df3.iloc[0] df3.head() df3 = df3.drop("Indicator Code") df3.head() df3[["AG.LND.ARBL.HA.PC", "AG.LND.AGRI.ZS"]].dropna() df3[["AG.LND.ARBL.HA.PC", "AG.LND.AGRI.ZS"]].dropna().mean() df3[["AG.LND.ARBL.HA.PC", "AG.LND.AGRI.ZS"]].sort_values("AG.LND.ARBL.HA.PC", ascending=False) mdf.sort_index(ascending=False) mdf.sort_values("Name", ascending=False) ag_df.plot(kind='bar', color='cyan') mdf.head() def lowercase(x): return x.lower() lambda x: x.upper() mdf.Name mdf["Name"] df5 = mdf.Name.apply(lowercase) df5.head() mdf['Name'] = mdf.Name.apply(lambda x: x.upper()) mdf # http://pandas.pydata.org/pandas-docs/stable/tutorials.html # # http://nbviewer.jupyter.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/02%20-%20Lesson.ipynb
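A note on the `.ix` indexer used above: it was deprecated in pandas 0.20 and removed in 1.0, because it guessed between label-based and position-based lookup. The split is now explicit via `.loc` and `.iloc`. A small sketch with a toy frame (made-up values, not the countries data):

```python
import pandas as pd

# Toy stand-in for mdf: country codes as the index.
df = pd.DataFrame({'Pop': [10, 20, 30]}, index=['AR', 'AU', 'AW'])

# .ix["AW"] (label lookup) and .ix[0] (position lookup) become:
by_label = df.loc['AW', 'Pop']     # label-based
by_position = df.iloc[0]['Pop']    # position-based

print(by_label, by_position)  # → 30 10
```

With a string index like `Code`, `df.loc[0]` raises a `KeyError` while `df.iloc[0]` works — which is exactly the distinction some of the failing cells above were demonstrating.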
References/5.Pandas.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # import the required libraries import numpy as np import seaborn as sns import matplotlib.pyplot as plt import os import cv2 import tensorflow as tf from tensorflow.keras import layers, optimizers from tensorflow.keras.layers import * from tensorflow.keras.models import Model, load_model from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint # - # ## Loading Images from the Disk # + # function that would read an image provided the image path, preprocess and return it back def read_and_preprocess(img_path): img = cv2.imread(img_path, cv2.IMREAD_COLOR) # reading the image img = cv2.resize(img, (256, 256)) # resizing it (I just like it to be powers of 2) img = np.array(img, dtype='float32') # convert its datatype so that it could be normalized img = img/255 # normalization (now every pixel is in the range of 0 and 1) return img # + X_train = [] # To store train images y_train = [] # To store train labels # labels - # 0 - Covid # 1 - Viral Pneumonia # 2 - Normal train_path = './dataset/train/' # path containing training image samples # - for folder in os.scandir(train_path): for entry in os.scandir(train_path + folder.name): X_train.append(read_and_preprocess(train_path + folder.name + '/' + entry.name)) if folder.name[0]=='C': y_train.append(0) # Covid elif folder.name[0]=='V': y_train.append(1) # Viral Pneumonia else: y_train.append(2) # Normal X_train = np.array(X_train) X_train.shape # We have 1955 training samples in total y_train = np.array(y_train) y_train.shape # ## Visualizing the Dataset # + covid_count = len(y_train[y_train==0]) pneumonia_count = len(y_train[y_train==1]) normal_count = len(y_train[y_train==2]) plt.title("Train Images for Each Label") plt.bar(["Covid", "Viral Pneumonia", "Normal"],[covid_count, 
pneumonia_count, normal_count])

# We have more samples of Normal and Viral Pneumonia than Covid

# +
# Plotting 2 images per disease
import random

title = {0: "Covid", 1: "Viral Pneumonia", 2: "Normal"}

rows = 2
columns = 3

for i in range(2):
    fig = plt.figure(figsize=(7, 7))

    fig.add_subplot(rows, columns, 1)
    pos = random.randint(0, covid_count - 1)  # randint is inclusive at both ends, so subtract 1 to stay within the Covid samples
    plt.imshow(X_train[pos])
    plt.title(title[y_train[pos]])

    fig.add_subplot(rows, columns, 2)
    pos = random.randint(covid_count, covid_count + pneumonia_count - 1)
    plt.imshow(X_train[pos])
    plt.title(title[y_train[pos]])

    fig.add_subplot(rows, columns, 3)
    pos = random.randint(covid_count + pneumonia_count, covid_count + pneumonia_count + normal_count - 1)
    plt.imshow(X_train[pos])
    plt.title(title[y_train[pos]])
# -

# ## Image Augmentation

# Augmentation is the process of creating new training samples by altering the available data. <br>
# It not only increases the number of samples for training the model, but also helps prevent the model from overfitting the training data, since it makes relevant features in the image location invariant. <br>
# Although there are various ways of doing so, like random zoom, increasing/decreasing brightness, or rotating the images, most of them do not make sense for health-related data, as real-world medical images are almost always of consistently high quality. <br>
# So we applied only one type of image augmentation in this model: <b>Horizontal Flip</b>. <br>
# Now, even if we try to classify horizontally flipped images, we can expect correct predictions.
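The horizontal-flip augmentation described above boils down to reversing each image along its width axis; a minimal sketch on a toy array shows the behavior:

```python
import numpy as np

# toy "image": 2x3 pixels with 3 channels (stand-in for a 256x256x3 X-ray)
img = np.arange(18, dtype='float32').reshape(2, 3, 3)

flipped = np.fliplr(img)  # reverses axis 1 (the width), leaving rows and channels intact

# the leftmost column of the flipped image is the rightmost column of the original
assert np.array_equal(flipped[:, 0], img[:, -1])
# flipping twice recovers the original image
assert np.array_equal(np.fliplr(flipped), img)
```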
plt.imshow(X_train[0])
plt.title("Original Image")

X_new = np.fliplr(X_train[0])
plt.imshow(X_new)
plt.title("Horizontally Flipped Image")

# +
X_aug = []
y_aug = []

for i in range(0, len(y_train)):
    X_new = np.fliplr(X_train[i])
    X_aug.append(X_new)
    y_aug.append(y_train[i])
# -

X_aug = np.array(X_aug)
y_aug = np.array(y_aug)

X_train = np.append(X_train, X_aug, axis=0)  # appending augmented images to the original training samples
X_train.shape

y_train = np.append(y_train, y_aug, axis=0)
y_train.shape
# Now we have 3910 samples in total

# ## Splitting the data for Training and Validation

from sklearn.model_selection import train_test_split

# We have split our data in a way that -
# 1. The samples are shuffled
# 2. The ratio of each class is maintained (stratify)
# 3. We get the same samples every time we split our data (random state)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.15,
                                                  shuffle=True, stratify=y_train, random_state=123)

# we will use 3323 images for training the model
y_train.shape

# we will use 587 images for validating the model's performance
y_val.shape

# ## Designing and Training the Model

model = tf.keras.Sequential([
    Conv2D(filters=32, kernel_size=(2,2), activation='relu', input_shape=(256, 256, 3)),
    MaxPooling2D((4,4)),

    Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'),
    MaxPooling2D((3,3)),
    Dropout(0.3),  # for regularization

    Conv2D(filters=64, kernel_size=(4,4), activation='relu', padding='same'),
    Conv2D(filters=128, kernel_size=(5,5), activation='relu', padding='same'),
    MaxPooling2D((2,2)),
    Dropout(0.4),

    Conv2D(filters=128, kernel_size=(5,5), activation='relu', padding='same'),
    MaxPooling2D((2,2)),
    Dropout(0.5),

    Flatten(),  # flattening for feeding into the ANN

    Dense(512, activation='relu'),
    Dropout(0.5),

    Dense(256, activation='relu'),
    Dropout(0.3),

    Dense(128, activation='relu'),

    Dense(3, activation='softmax')
])

model.summary()

# Slowing down the learning rate
opt = optimizers.Adam(learning_rate=0.0001)
# compile the model model.compile(loss = 'sparse_categorical_crossentropy', optimizer=opt, metrics= ["accuracy"]) # + # use early stopping to exit training if validation loss is not decreasing even after certain epochs (patience) earlystopping = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=20) # save the best model with least validation loss checkpointer = ModelCheckpoint(filepath="covid_classifier_weights.h5", verbose=1, save_best_only=True) # - history = model.fit(X_train, y_train, epochs = 100, validation_data=(X_val, y_val), batch_size=32, shuffle=True, callbacks=[earlystopping, checkpointer]) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper right') plt.show() plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # + # save the model architecture to json file for future use model_json = model.to_json() with open("covid_classifier_model.json","w") as json_file: json_file.write(model_json) # - # ## Evaluating the Saved Model Performance # Load pretrained model (best saved one) with open('covid_classifier_model.json', 'r') as json_file: json_savedModel= json_file.read() # load the model model = tf.keras.models.model_from_json(json_savedModel) model.load_weights('covid_classifier_weights.h5') model.compile(loss = 'sparse_categorical_crossentropy', optimizer=opt, metrics= ["accuracy"]) # ### Loading the Test Images # + X_test = [] # To store test images y_test = [] # To store test labels test_path = './dataset/test/' for folder in os.scandir(test_path): for entry in os.scandir(test_path + folder.name): X_test.append(read_and_preprocess(test_path + folder.name + '/' + entry.name)) if folder.name[0]=='C': y_test.append(0) elif folder.name[0]=='V': y_test.append(1) else: 
y_test.append(2) X_test = np.array(X_test) y_test = np.array(y_test) # - X_test.shape # We have 185 images for testing # + covid_count = len(y_test[y_test==0]) pneumonia_count = len(y_test[y_test==1]) normal_count = len(y_test[y_test==2]) plt.title("Test Images for Each Label") plt.bar(["Covid", "Viral Pneumonia", "Normal"],[covid_count, pneumonia_count, normal_count]) # - # making predictions predictions = model.predict(X_test) predictions.shape # + # Obtain the predicted class from the model prediction predict = [] for i in predictions: predict.append(np.argmax(i)) predict = np.asarray(predict) # + # Obtain the accuracy of the model from sklearn.metrics import accuracy_score accuracy = accuracy_score(y_test, predict) accuracy # + # plot the confusion matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, predict) plt.figure(figsize = (7,7)) sns.heatmap(cm, annot=True, cmap='Blues') # The model misclassified one Covid case as Normal # + from sklearn.metrics import classification_report report = classification_report(y_test, predict) print(report)
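The evaluation step above converts each row of softmax probabilities to a class label with `argmax` before computing accuracy; the same idea in isolation, on made-up probabilities (not the model's real outputs):

```python
import numpy as np

# made-up softmax outputs for 4 test images over the 3 classes
predictions = np.array([[0.9, 0.05, 0.05],   # predicted Covid
                        [0.1, 0.8,  0.1 ],   # predicted Viral Pneumonia
                        [0.2, 0.2,  0.6 ],   # predicted Normal
                        [0.4, 0.1,  0.5 ]])  # predicted Normal

predict = predictions.argmax(axis=1)  # index of the highest probability per row
y_true = np.array([0, 1, 2, 0])       # the last sample is actually Covid -> misclassified

accuracy = (predict == y_true).mean()
assert predict.tolist() == [0, 1, 2, 2]
assert accuracy == 0.75
```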
Diagnosis using ML/Model Training/Covid Classifier/Covid Classifier Training.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # http://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html import glob import os import unicodedata import string import torch # ### Gather data data_dir = '/home/as/datasets/pytorch.tutorial/surnames/data/names/*.txt' files = glob.glob(data_dir) # + all_letters = string.ascii_letters + " .,;'" n_letters = len(all_letters) def unicodeToAscii(s): return ''.join( c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn' and c in all_letters ) # - lang_to_surnames = dict() all_languages = list() for fn in files: with open(fn, encoding='utf-8') as f: surnames = f.read().splitlines() surnames = [unicodeToAscii(surname) for surname in surnames] lang = os.path.basename(fn).replace('.txt', '') lang_to_surnames[lang] = surnames all_languages.append(lang) n_languages = len(lang_to_surnames) n_languages print(lang_to_surnames['Italian'][:5]) all_letters.find('b') # ### Helpers # + # Find letter index from all_letters, e.g. "a" = 0 def letterToIndex(letter): return all_letters.find(letter) # Just for demonstration, turn a letter into a <1 x n_letters> Tensor def letterToTensor(letter): tensor = torch.zeros(1, n_letters) tensor[0][letterToIndex(letter)] = 1 return tensor # Turn a line into a <line_length x 1 x n_letters>, # or an array of one-hot letter vectors def lineToTensor(line): tensor = torch.zeros(len(line), 1, n_letters) for li, letter in enumerate(line): tensor[li][0][letterToIndex(letter)] = 1 return tensor # + #letterToIndex('n') # + #letterToTensor('n') # + #lineToTensor('Anand')
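The one-hot encoding that `lineToTensor` builds with `torch.zeros` can be sketched in plain NumPy to make the shapes concrete (this mirrors, but is not, the tutorial's torch code):

```python
import numpy as np
import string

all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)  # 57

def line_to_onehot(line):
    # shape: (line_length, 1, n_letters), matching the tutorial's tensor layout
    arr = np.zeros((len(line), 1, n_letters))
    for i, letter in enumerate(line):
        arr[i, 0, all_letters.index(letter)] = 1
    return arr

t = line_to_onehot('Anand')
assert t.shape == (5, 1, n_letters)
assert t.sum() == 5  # exactly one hot entry per letter
assert t[0, 0, all_letters.index('A')] == 1
```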
pytorch.char.rnn.classification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + # default_exp models.ResNetPlus # - # # ResNetPlus # # > This is an unofficial PyTorch implementation by <NAME> - <EMAIL> based on: # * <NAME>., <NAME>., & <NAME>. (2017, May). Time series classification from scratch with deep neural networks: A strong baseline. In 2017 international joint conference on neural networks (IJCNN) (pp. 1578-1585). IEEE. # * <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2019). Deep learning for time series classification: a review. Data Mining and Knowledge Discovery, 33(4), 917-963. # * Official ResNet TensorFlow implementation: https://github.com/hfawaz/dl-4-tsc # * 👀 kernel filter size 8 has been replaced by 7 (I believe it's a bug since even kernels are not commonly used in practice) #export from fastai.layers import * from tsai.imports import * from tsai.models.layers import * from tsai.models.utils import * # + # export class ResBlockPlus(Module): def __init__(self, ni, nf, ks=[7, 5, 3], coord=False, separable=False, bn_1st=True, zero_norm=False, sa=False, se=None, act=nn.ReLU, act_kwargs={}): self.convblock1 = ConvBlock( ni, nf, ks[0], coord=coord, separable=separable, bn_1st=bn_1st, act=act, act_kwargs=act_kwargs) self.convblock2 = ConvBlock( nf, nf, ks[1], coord=coord, separable=separable, bn_1st=bn_1st, act=act, act_kwargs=act_kwargs) self.convblock3 = ConvBlock( nf, nf, ks[2], coord=coord, separable=separable, zero_norm=zero_norm, act=None) self.se = SEModule1d( nf, reduction=se, act=act) if se and nf//se > 0 else noop self.sa = SimpleSelfAttention(nf, ks=1) if sa else noop self.shortcut = BN1d(ni) if ni == nf else ConvBlock( ni, nf, 1, coord=coord, act=None) self.add = Add() self.act = act(**act_kwargs) self._init_cnn(self) def _init_cnn(self, m): if getattr(self, 'bias', None) is not None: 
nn.init.constant_(self.bias, 0) if isinstance(self, (nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.Linear)): nn.init.kaiming_normal_(self.weight) for l in m.children(): self._init_cnn(l) def forward(self, x): res = x x = self.convblock1(x) x = self.convblock2(x) x = self.convblock3(x) x = self.se(x) x = self.sa(x) x = self.add(x, self.shortcut(res)) x = self.act(x) return x @delegates(ResBlockPlus.__init__) class ResNetPlus(nn.Sequential): def __init__(self, c_in, c_out, seq_len=None, nf=64, sa=False, se=None, fc_dropout=0., concat_pool=False, flatten=False, custom_head=None, y_range=None, **kwargs): resblock1 = ResBlockPlus(c_in, nf, se=se, **kwargs) resblock2 = ResBlockPlus(nf, nf * 2, se=se, **kwargs) resblock3 = ResBlockPlus(nf * 2, nf * 2, sa=sa, **kwargs) backbone = nn.Sequential(resblock1, resblock2, resblock3) self.head_nf = nf * 2 if flatten: assert seq_len is not None, "you need to pass seq_len when flatten=True" self.head_nf *= seq_len if custom_head is not None: head = custom_head(self.head_nf, c_out) else: head = self.create_head(self.head_nf, c_out, flatten=flatten, concat_pool=concat_pool, fc_dropout=fc_dropout, y_range=y_range) super().__init__(OrderedDict([('backbone', backbone), ('head', head)])) def create_head(self, nf, c_out, flatten=False, concat_pool=False, fc_dropout=0., y_range=None, **kwargs): layers = [Flatten()] if flatten else [] if concat_pool: nf = nf * 2 layers = [GACP1d(1) if concat_pool else GAP1d(1)] if fc_dropout: layers += [nn.Dropout(fc_dropout)] layers += [nn.Linear(nf, c_out)] if y_range: layers += [SigmoidRange(*y_range)] return nn.Sequential(*layers) # - from tsai.models.layers import Swish xb = torch.rand(2, 3, 4) test_eq(ResNetPlus(3,2)(xb).shape, [xb.shape[0], 2]) test_eq(ResNetPlus(3,2,coord=True, separable=True, bn_1st=False, zero_norm=True, act=Swish, act_kwargs={}, fc_dropout=0.5)(xb).shape, [xb.shape[0], 2]) test_eq(count_parameters(ResNetPlus(3, 2)), 479490) # for (3,2) from tsai.models.ResNet import * 
test_eq(count_parameters(ResNet(3, 2)), count_parameters(ResNetPlus(3, 2))) # for (3,2) m = ResNetPlus(3, 2, zero_norm=True, coord=True, separable=True) print('n_params:', count_parameters(m)) print(m) print(check_weight(m, is_bn)[0]) #hide from tsai.imports import * from tsai.export import * nb_name = get_nb_name() create_scripts(nb_name);
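At the heart of each `ResBlockPlus` is the residual wiring: the output is `act(conv_path(x) + shortcut(x))`, where the shortcut is a plain batch norm when channel counts match and a 1x1 conv otherwise. A shape-level NumPy sketch of that wiring (illustrative names only, not the tsai API):

```python
import numpy as np

def conv1x1(x, w):
    # pointwise conv over a (channels, length) array is just channel mixing
    return w @ x

def res_block(x, conv_path, w_short):
    # shortcut uses a 1x1 conv so its channel count matches the conv path
    return np.maximum(conv_path(x) + conv1x1(x, w_short), 0)  # ReLU activation

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8))        # 3 input channels, sequence length 8
w_path = rng.standard_normal((6, 3))   # conv path maps 3 -> 6 channels
w_short = rng.standard_normal((6, 3))  # 1x1 shortcut must also map 3 -> 6

y = res_block(x, lambda t: conv1x1(t, w_path), w_short)
assert y.shape == (6, 8)   # channels change, sequence length is preserved
assert (y >= 0).all()      # ReLU output is non-negative
```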
nbs/101b_models.ResNetPlus.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pprint import pprint import json import azure.cosmos.cosmos_client as cosmos_client import azure.cosmos.documents as documents import azure.cosmos.errors as errors # + COSMOSDB_ENDPOINT = '{AZURE_COSMOS_DB_ENDPOINT}' COSMOSDB_PRIMARY_KEY = '{AZURE_COSMOS_DB_PRIMARY_KEY}' COSMOSDB_DB_ID = 'UniversityDatabase' COSMOSDB_COLL_ID = 'StudentCollection' client = cosmos_client.CosmosClient(COSMOSDB_ENDPOINT, { 'masterKey': COSMOSDB_PRIMARY_KEY }) database_link = f'dbs/{COSMOSDB_DB_ID}' throughput = 11000 partition_key = '/enrollmentYear' unique_key = '/studentAlias' collection_link = f'{database_link}/colls/{COSMOSDB_COLL_ID}' # - def create_database(id): print('Create database') try: client.CreateDatabase({"id": id}) print(f'Database with id "{id}" created') except errors.HTTPFailure as e: if e.status_code == 409: print(f'A database with id "{id}" already exists') else: raise create_database(COSMOSDB_DB_ID) def create_container(id, throughput, partition_key, unique_key): try: coll = { "id": id, "partitionKey": { "paths": [ partition_key ], "kind": "Hash", "version": 2 }, 'uniqueKeyPolicy': { 'uniqueKeys': [ { 'paths': [ unique_key ] } ] } } collection_options = { 'offerThroughput': throughput } collection = client.CreateContainer(database_link, coll, collection_options) print(f'Collection with id "{id}" created') print(f'Partition Key - "{partition_key}"') except errors.CosmosError as e: if e.status_code == 409: print(f'A collection with id "{id}" already exists') else: raise create_container(COSMOSDB_COLL_ID, throughput, partition_key, unique_key) def create_documents(docs): client.CreateItem(collection_link, docs) # + # The upload may take up to 15 minutes to complete with open('../data/students-25%.json') as json_file: students_json = 
json.load(json_file) for student in students_json: create_documents(student) # - def query_documents(collection_link, query, options={ "enableCrossPartitionQuery": True }): try: results = list(client.QueryItems(collection_link, query, options)) return results except errors.HTTPFailure as e: if e.status_code == 404: print("Document doesn't exist") elif e.status_code == 400: # Can occur when we are trying to query on excluded paths print("Bad Request exception occured: ", e) pass else: raise finally: print() # + query = 'SELECT * FROM students s WHERE s.enrollmentYear = 2017' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') pprint(query_results[0]) # + # In this query, we drop the 's' alias and use the 'students' source. When we execute this query, we should see the same results as the previous query. query = 'SELECT * FROM students WHERE students.enrollmentYear = 2017' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') # + # In this query, we will prove that the name used for the source can be any name you choose. We will use the name 'arbitraryname' for the source. # When we execute this query, we should see the same results as the previous query. query = 'SELECT * FROM arbitraryname WHERE arbitraryname.enrollmentYear = 2017' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') # + # Going back to 's' as an alias, we will now create a query where we only select the 'studentAlias' property and return the value of # that property in our result set. query = 'SELECT s.studentAlias FROM students s WHERE s.enrollmentYear = 2017' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') pprint(query_results[0]) # + # In some scenarios, you may need to return a flattened array as the result of your query. 
This query uses the 'VALUE' keyword to flatten # the array by taking the single returned (string) property and creating a string array. query = 'SELECT VALUE s.studentAlias FROM students s WHERE s.enrollmentYear = 2017' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') pprint(query_results) # + # Since we know that our partition key is /enrollmentYear, we know that any query that targets a single valid value for the 'enrollmentYear' property # will be a single partition query. query = 'SELECT * FROM students s WHERE s.enrollmentYear = 2016' query_results = query_documents(collection_link, query) request_charge = client.last_response_headers['x-ms-request-charge'] print(f'Length of query results: {len(query_results)}') print(f'RU: {request_charge}') # + # If we want to execute a blanket query that will fan-out to all partitions, we simply can drop our WHERE clause that filters on a single valid # value for our partition key path. # # Observe the Request Charge (in RU/s) for the executed query. You will notice that the charge is relatively greater for this query. query = 'SELECT * FROM students s' query_results = query_documents(collection_link, query) request_charge = client.last_response_headers['x-ms-request-charge'] print(f'Length of query results: {len(query_results)}') print(f'RU: {request_charge}') # + # Observe the Request Charge (in RU/s) for the executed query. You will notice that the charge is greater than a single partition but far # less than a fan-out across all partitions. query = 'SELECT * FROM students s WHERE s.enrollmentYear IN (2015, 2016, 2017)' query_results = query_documents(collection_link, query) request_charge = client.last_response_headers['x-ms-request-charge'] print(f'Length of query results: {len(query_results)}') print(f'RU: {request_charge}') # + # To get the school-issued e-mail address, we will need to concatenate the # '@contoso.edu' string to the end of each alias. 
We can perform this action # using the CONCAT built-in function. query = 'SELECT CONCAT(s.studentAlias, "@contoso.edu") AS email FROM students s WHERE s.enrollmentYear = 2015' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') pprint(query_results[0]) # + # In most client-side applications, you likely would only need an array of # strings as opposed to an array of objects. We can use the VALUE keyword # here to flatten our result set. query = 'SELECT VALUE CONCAT(s.studentAlias, "@contoso.edu") FROM students s WHERE s.enrollmentYear = 2015' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') pprint(query_results[0]) # + # In this query, we want to determine the current status of every # student who enrolled in 2014. Our goal here is to eventually have a # flattened, simple-to-understand view of every student and their current # academic status. # # You will quickly notice that the value representing the name of the # student, using the CONCAT function, has a placeholder property name # instead of a simple string. query = ''' SELECT CONCAT(s.firstName, " ", s.lastName), s.academicStatus.warning, s.academicStatus.suspension, s.academicStatus.expulsion, s.enrollmentYear, s.projectedGraduationYear FROM students s WHERE s.enrollmentYear = 2014 ''' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') pprint(query_results[0]) # + # We will update our previous query by naming our property that uses # a built-in function. 
query = ''' SELECT CONCAT(s.firstName, " ", s.lastName) AS name, s.academicStatus.warning, s.academicStatus.suspension, s.academicStatus.expulsion, s.enrollmentYear, s.projectedGraduationYear FROM students s WHERE s.enrollmentYear = 2014 ''' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') pprint(query_results[0]) # + # Another alternative way to specify the structure of our JSON document # is to use the curly braces from JSON. At this point, we are defining # the structure of the JSON result directly in our query. query = ''' SELECT { "name": CONCAT(s.firstName, " ", s.lastName), "isWarned": s.academicStatus.warning, "isSuspended": s.academicStatus.suspension, "isExpelled": s.academicStatus.expulsion, "enrollment": { "start": s.enrollmentYear, "end": s.projectedGraduationYear } } AS studentStatus FROM students s WHERE s.enrollmentYear = 2014 ''' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') pprint(query_results[0]) # + # If we want to “unwrap” our JSON data and flatten to a simple array of # like-structured objects, we need to use the 'VALUE' keyword. query = ''' SELECT VALUE { "name": CONCAT(s.firstName, " ", s.lastName), "isWarned": s.academicStatus.warning, "isSuspended": s.academicStatus.suspension, "isExpelled": s.academicStatus.expulsion, "enrollment": { "start": s.enrollmentYear, "end": s.projectedGraduationYear } } FROM students s WHERE s.enrollmentYear = 2014 ''' query_results = query_documents(collection_link, query) print(f'Length of query results: {len(query_results)}') pprint(query_results[0]) # + # Notice that this query returns more than 3 items although the option 'maxItemCount' is specified to 3. # This is because the return value of 'CosmosClient.QueryItems' is converted to list in the definition # of function 'query_documents'. 
The resulting list contains all items returned by the query query = ''' SELECT VALUE { "name": CONCAT(s.firstName, " ", s.lastName), "isWarned": s.academicStatus.warning, "isSuspended": s.academicStatus.suspension, "isExpelled": s.academicStatus.expulsion, "enrollment": { "start": s.enrollmentYear, "end": s.projectedGraduationYear } } FROM students s WHERE s.enrollmentYear = 2014 ''' query_options = { "enableCrossPartitionQuery": True, 'maxItemCount': 3 } query_iterable = query_documents(collection_link, query, query_options) print(f'Length of query results: {len(query_iterable)}') pprint(query_iterable) # + # To leverage the pagination capability from Azure Cosmos DB, the returned value from 'CosmosClient.QueryItems' # should not be converted to 'list' type. The 'fetch_next_block' function is used to obtain the next page from # the 'QueryIterable' object query = ''' SELECT VALUE { "name": CONCAT(s.firstName, " ", s.lastName), "isWarned": s.academicStatus.warning, "isSuspended": s.academicStatus.suspension, "isExpelled": s.academicStatus.expulsion, "enrollment": { "start": s.enrollmentYear, "end": s.projectedGraduationYear } } FROM students s WHERE s.enrollmentYear = 2014 ''' query_options = { "enableCrossPartitionQuery": True, 'maxItemCount': 3 } query_iterable = client.QueryItems(collection_link, query, query_options) query_page = query_iterable.fetch_next_block() print(f'Length of query results: {len(query_page)}') pprint(query_page) # + # Calling 'fetch_next_block' again returns the next 3 items in the array. query_page = query_iterable.fetch_next_block() print(f'Length of query results: {len(query_page)}') pprint(query_page) # - def delete_database(id): print('Delete Database') try: database_link = 'dbs/' + id client.DeleteDatabase(database_link) print(f'Database with id "{id}" was deleted') except errors.HTTPFailure as e: if e.status_code == 404: print('A database with id "{id}" does not exist') else: raise # Clean up delete_database(COSMOSDB_DB_ID)
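The SELECT / SELECT VALUE projections above reshape documents on the server; the same flattening can be mimicked client-side on plain dicts to see exactly what each form returns (made-up student records, not the uploaded dataset):

```python
# made-up student documents mirroring the collection's shape
students = [
    {"firstName": "Ada", "lastName": "Lee", "studentAlias": "adal", "enrollmentYear": 2017},
    {"firstName": "Bo",  "lastName": "Kim", "studentAlias": "bok",  "enrollmentYear": 2016},
]

# WHERE s.enrollmentYear = 2017
hits = [s for s in students if s["enrollmentYear"] == 2017]

# SELECT s.studentAlias -> a list of objects; SELECT VALUE s.studentAlias -> a flat list of strings
objects = [{"studentAlias": s["studentAlias"]} for s in hits]
values = [s["studentAlias"] for s in hits]
assert objects == [{"studentAlias": "adal"}]
assert values == ["adal"]

# CONCAT(s.studentAlias, "@contoso.edu") AS email
emails = [{"email": s["studentAlias"] + "@contoso.edu"} for s in hits]
assert emails == [{"email": "adal@contoso.edu"}]
```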
code/1. University Students.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: python38
#     language: python
#     name: python38
# ---

# +
# default_exp distance
# -

# # distance
# > find driving distance using the bing and osrm apis

#hide
from nbdev.showdoc import *

#export
from beartype import beartype
from typing import List, Dict, Tuple
import pandas as pd
import yaml, requests

# # osrm

#export
@beartype
def getDistanceOsrm(lat1:float, lon1:float, lat2:float, lon2:float,
                    osrmEndpoint = 'http://router.project-osrm.org')->float:
  '''
    get the driving distance between 2 points using an osrm endpoint
  '''
  # use the standard 'driving' profile to match the docstring
  url = f"{osrmEndpoint}/route/v1/driving/{lon1},{lat1};{lon2},{lat2}?overview=false"
  r = requests.get(url)
  try:
    distance = r.json()['routes'][0]['distance']
    return float(distance)
  except KeyError as e:
    raise KeyError(f"error getting distance, got {r.json()} from url \n{e}")

# %%time
#example
testData = '''
lat1 : 13.732048
lon1 : 100.567623
lat2 : 13.856271
lon2 : 100.546467
'''
i = yaml.load(testData, Loader=yaml.FullLoader)
f'the distance is {getDistanceOsrm(**i)} m'

# # get distance bing

# +
#export
@beartype
def getDistBing(origin:Tuple[float,float],
                destinations:List[Tuple[float,float]],
                bingApiKey = '')->pd.DataFrame:
  '''
    accept an origin tuple and a list of destination tuples, return a dataframe
    input:
      origin :: Tuple[float,float] : (lat,long) of the origin
      destinations :: List[Tuple[float,float]] : [(lat,long),(lat,long)] of the destinations
      bingApiKey :: str : apikey from bing map
    response:
      pd.DataFrame with [travelDistance, travelDuration, lat, long] columns, all floats
  '''
  parameters = {
      'origins' : f'{origin[0]},{origin[1]}',
      'destinations' : ';'.join(f'{d[0]},{d[1]}' for d in destinations),
      'travelMode' : 'driving',
      'key': bingApiKey
  }
  url = 'https://dev.virtualearth.net/REST/v1/Routes/DistanceMatrix'
  r = requests.get(url, params=parameters).json()
  results:pd.DataFrame = pd.DataFrame(r['resourceSets'][0]['resources'][0]['results'])
  destinationsDf:pd.DataFrame = pd.DataFrame(destinations)
  destinationsDf.columns = ['lat', 'long']
  results = pd.concat([results, destinationsDf], axis=1).reindex(
      ['travelDistance', 'travelDuration', 'lat', 'long'], axis=1)
  return results  # the docstring promises a DataFrame, so return it
# -

# +
# example
from nicHelper.secrets import getSecret

testData = '''
origin: !!python/tuple
  - 13.732048
  - 100.567623
destinations:
  - !!python/tuple
    - 13.856271
    - 100.546467
  - !!python/tuple
    - 13.856444
    - 100.549487
  - !!python/tuple
    - 13.857444
    - 100.549487
'''
i = yaml.load(testData, Loader = yaml.FullLoader)
apikey = getSecret( name = 'webApiKeys')['bing']
results = getDistBing( **i, bingApiKey = apikey )
results
# -
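As a local sanity check on the routed distances, the straight-line (great-circle) distance between the same two points can be computed with the haversine formula; a routed driving distance should never be shorter than this:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance in metres on a sphere of Earth's mean radius
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# the same test points used in the osrm example above
d = haversine_m(13.732048, 100.567623, 13.856271, 100.546467)
assert 13000 < d < 15000  # about 14 km as the crow flies
```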
nbs/distance.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## 1.4.2 Pandas
#
# Pandas has three data types:
#
# * Series
# * DataFrame
# * Panel

import pandas as pd
import numpy as np

# ### Creating a Series

# +
a = pd.Series([1, 3, 5, 7, 10])  # create a Series from a list
print(a)
# Inspecting a shows the values together with their index.
# 0     1
# 1     3
# 2     5
# 3     7
# 4    10
# dtype: int64

data = np.array(['a', 'b', 'c', 'd'])  # create a NumPy array
b = pd.Series(data)  # create a Series from the NumPy array
print(b)
# 0    a
# 1    b
# 2    c
# 3    d
# dtype: object

c = pd.Series(np.arange(10, 30, 5))  # create a Series from an array built with NumPy's arange
print(c)
# 0    10
# 1    15
# 2    20
# 3    25
# dtype: int32
# -

# +
a = pd.Series(['a', 'b', 'c'], index=[10, 20, 30])  # specify the index directly
print(a)
# 10    a
# 20    b
# 30    c
# dtype: object

dict = {'a' : 10, 'b' : 20, 'c' : 30}  # create a Series from a Python dictionary
d = pd.Series(dict)  # note that the index is now a, b, c
print(d)
# a    10
# b    20
# c    30
# -

# ### Creating a DataFrame

# +
a = pd.DataFrame([1, 3, 5, 7, 9])  # create from a list
print(a)
#    0
# 0  1
# 1  3
# 2  5
# 3  7
# 4  9

dict = { 'Name' : [ 'Cho', 'Kim', 'Lee' ], 'Age' : [ 28, 31, 38] }
b = pd.DataFrame(dict)  # create from a dictionary
print(b)
#    Age Name
# 0   28  Cho
# 1   31  Kim
# 2   38  Lee

c = pd.DataFrame([['apple', 7000], ['banana', 5000], ['orange', 4000]])  # create from nested lists
print(c)
#         0     1
# 0   apple  7000
# 1  banana  5000
# 2  orange  4000
# -

a = pd.DataFrame([['apple', 7000], ['banana', 5000], ['orange', 4000]],
                 columns = ['name', 'price'])
print(a)
#      name  price
# 0   apple   7000
# 1  banana   5000
# 2  orange   4000

# ### Loading and writing data with pandas

# +
data_frame = pd.read_csv( './data_in/datafile.csv')

print(data_frame['A'])  # check only the data in column A
# 2018-02-03    0.076547
# 2018-02-04    0.810574
# ...
# 2018-11-28    0.072067
# 2018-11-29    0.705263
# Freq: D, Name: A, Length: 300, dtype: float64

print(data_frame['A'][:3])  # check only the first 3 entries of column A
# 2018-02-03    0.076547
# 2018-02-04    0.810574
# 2018-02-05    0.071555
# Freq: D, Name: A, dtype: float64

data_frame['D'] = data_frame['A'] + data_frame['B']  # create a new column D by adding columns A and B
print(data_frame['D'])
# 2018-02-03   -0.334412
# 2018-02-04    1.799571
# 2018-02-05    0.843764
# 2018-02-06    1.079784
# 2018-02-07    0.734765
# Freq: D, Name: D, dtype: float64
# -

data_frame.describe()
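The derived-column step above (D = A + B) works element-wise and aligns on the index; a tiny self-contained version with made-up integer data:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [10, 20, 30]})
df["D"] = df["A"] + df["B"]  # element-wise sum, aligned on the index

assert df["D"].tolist() == [11, 22, 33]
assert list(df.columns) == ["A", "B", "D"]  # the new column is appended on the right
```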
1.NLP_PREP/1.4.2.pandas.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # default_exp core # - #hide # !pip install nbdev # # 00_Core # # > This notebook will include the core functionalities needed to make our library operational within Google Colab #hide from nbdev.showdoc import * # As we are working out of our `Drive`, let's write a function to mount and refresh it each time (you will only need to sign in on the first) #export import os from google.colab import drive #export def setup_drive(): "Connect Google Drive to use GitHub" drive.mount('/content/drive', force_remount=True) os._exit(00) setup_drive() # Clone your repo into your Google Drive then re-open it (this only needs to be done once) so we are working in it's codebase # Now let's setup our instance to be utilized by Git and accepted #export from pathlib import Path import os, subprocess #export def setup_git(path:Path, project_name:str, username:str, password:str, email:str): "Link your mounted drive to GitHub. Remove sensitive information before pushing" start = os.getcwd() os.chdir(path) commands = [] commands.append(f"git config --global user.email {email}") commands.append(f"git config --global user.name {username}") commands.append("git init") commands.append("git remote rm origin") commands.append(f"git remote add origin https://{username}:{password}@github.com/{username}/{project_name}.git") commands.append("git pull origin master --allow-unrelated-histories") for cmd in commands: process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE) output, err = process.communicate() os.chdir(start) # We need to pass in the `Path` to point to our cloned repository along with the needed information. 
**REMEMBER to delete or replace the sensitive information BEFORE uploading to your library** git_path = Path('drive/My Drive/nbdev_colab') setup_git(git_path, 'nbdev_colab', 'muellerzr', '<PASSWORD>', '<EMAIL>') # Now let's make a command to push to our repository, similar to `setup_git`. This will also make our library for us #export from nbdev.export import * #export def git_push(path:Path, message:str): "Convert the notebooks to scripts and then push to the library" start = os.getcwd() os.chdir(path) commands = [] commands.append('nbdev_install_git_hooks') commands.append('nbdev_build_lib') commands.append('git add *') commands.append(f'git commit -m "{message}"') commands.append('git push origin master') for cmd in commands: process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE) output, err = process.communicate() os.chdir(start) # Save your notebook, and now we can push to `GitHub`. When pushing, make sure not to include spaces, as this will post an error. git_push(git_path, 'Final')
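Both `setup_git` and `git_push` rely on the same pattern: split each command string on whitespace and run it without a shell, capturing stdout. A sketch of that pattern using harmless `echo` commands in place of git (`run_commands` is a hypothetical helper, not part of the notebook):

```python
import subprocess

def run_commands(commands):
    """Run each command string the way setup_git/git_push do: split on
    whitespace (no shell) and capture stdout. This naive splitting is why
    the notebook warns against spaces in commit messages."""
    outputs = []
    for cmd in commands:
        process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
        output, err = process.communicate()
        outputs.append(output.decode().strip())
    return outputs

# Harmless stand-ins for the git commands used above
results = run_commands(["echo hello", "echo nbdev"])
```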
nbs/00_core.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={} colab_type="code" id="QyTdFnuhqhZx" import detectron2 from detectron2.utils.logger import setup_logger setup_logger() # import some common libraries import numpy as np import cv2 import random from detectron2.engine import DefaultPredictor from detectron2.config import get_cfg from detectron2.utils.visualizer import Visualizer from detectron2.data import MetadataCatalog from detectron2.modeling import build_model from detectron2.evaluation import COCOEvaluator,PascalVOCDetectionEvaluator import matplotlib.pyplot as plt import torch.tensor as tensor from detectron2.data import build_detection_test_loader from detectron2.evaluation import inference_on_dataset import torch from detectron2.structures.instances import Instances from detectron2.modeling import build_model from detectron2.modeling.meta_arch.tracker import Tracker from detectron2.modeling.meta_arch.soft_tracker import SoftTracker # %matplotlib inline # + [markdown] colab_type="text" id="wTnx-rVFc-gi" # # # # # ## Load Weights # + colab={"base_uri": "https://localhost:8080/", "height": 55} colab_type="code" executionInfo={"elapsed": 1654, "status": "ok", "timestamp": 1577719198991, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mC4MQNrCOLVZp6wyxyAhCMqw8Udn-UEuh66kHi9qw=s64", "userId": "16526136313232193808"}, "user_tz": -60} id="6rigOU2Gre3r" outputId="e25fe68a-a39f-4f76-e9d3-58990208d499" cfg = get_cfg() #cfg.MODEL.DEVICE='cpu' cfg.merge_from_file("../configs/COCO-Detection/faster_rcnn_R_50_FPN_3x_Video.yaml") #cfg.merge_from_file("./detectron2_repo/configs/COCO-Detection/faster_rcnn_R_50_Video.yaml") cfg.MODEL.ROI_HEADS.NUM_CLASSES=1 cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.4 # set threshold for this model #cfg.MODEL.WEIGHTS = 
"KITTI_FPN_FINAL/model_final.pth" cfg.MODEL.WEIGHTS="/media/DATA/Users/Issa/Models/MOT17_JDE/model_final.pth" #cfg.MODEL.WEIGHTS = "../models_pub/MOT/mot_17.pth" print(cfg.MODEL) #arr = {1:'cyclist',2:'car',0:'pedestrian'} arr = {0:'Pedestrian'} # + [markdown] colab_type="text" id="fNcPBUx2c-go" # ## inference: Joint detection and tracking # - fix the hyper parameters (setting) # - choose a dataset (train or test of mot17 or mot20) # + import json import os import time from tqdm.notebook import tqdm colors = [[0,0,128],[0,255,0],[0,0,255],[255,0,0],[0,128,128],[128,0,128],[128,128,0],[255,255,0],[0,255,255],[255,255,0],[128,0,0],[0,128,0] ,[0,128,255],[0,255,128],[255,0,128],[128,255,0],[255,128,0],[128,255,255],[128,0,255],[128,128,128],[128,255,128]] dirC = '/media/DATA/Datasets/MOT/MOT17/train/' names = [] setting_id = 0 settings = [ dict(props=50, #number of proposals to use by rpn st=1.05, #acceptance distance percentage for soft tracker sup_fp = True, # fp suppression based on Intersection over Union for new detections alpha = 0.6, # the percentage of the new embedding in track embedding update (emb = alpha * emb(t) +(1-alpha) emb(t-1)) fp_thresh=0.95, # iou threshold above which the new detection is considered a fp T=True, #use past tracks as proposals D='cosine', # distance metric for embeddings Re=True, #use the embedding head A=True, # use appearance information K=True, # use kalman for motion prediction E=False, #use raw FPN features as appearance descriptors measurement=0.001, #measruement noise for the kalman filter process=1, #process noise for the kalman filter dist_thresh=1.5, # the normalization factor for the appearance distance track_life=7, #frames for which a track is kept in memory without an update track_vis=2, #frames for which a track is displayed without an update ), ] train_folders_17 = ['MOT17-02','MOT17-04','MOT17-05','MOT17-09','MOT17-10','MOT17-11','MOT17-13'] test_folders_17 = 
['MOT17-01','MOT17-03','MOT17-06','MOT17-07','MOT17-08','MOT17-12','MOT17-14'] train_folder_20 = ["MOT20-01","MOT20-02","MOT20-03","MOT20-05"] test_folder_20 = ["MOT20-04","MOT20-06","MOT20-07","MOT20-08"] mot15_folders = ['MOT17-02','MOT17-04','MOT17-05','MOT17-09','MOT17-10','MOT17-11','MOT17-13'] for setting in settings: setting_id = setting_id + 1 if(not os.path.exists("../results")): os.mkdir('../results') os.mkdir('../results/MOT') else: if(not os.path.exists("../results/MOT")): os.mkdir('../results/MOT') output_path = '../results/MOT/MOT17_%s'%(str(setting_id)) exp_name = output_path total_elapsed=0 if(not os.path.exists(exp_name)): os.mkdir(exp_name) for folder_name in ["MOT16-02","MOT16-05","MOT16-10"]: #for folder_name in os.listdir(dirC): #out_tracking = cv2.VideoWriter('joint_%s.avi'%folder_name,cv2.VideoWriter_fourcc('M','J','P','G'), 30, (1242,375)) #if(not folder_name in ['0000','0007','0011']): #continue dump_image = cv2.imread(dirC+folder_name+'/img1/000001.jpg') #out_tracking = cv2.VideoWriter('mot_paper.avi',cv2.VideoWriter_fourcc('M','J','P','G'), 30, dump_image.shape[1::-1]) predictor = DefaultPredictor(cfg) #prop_limit=60 predictor.model.tracker = SoftTracker() predictor.model.tracking_proposals = setting['T'] predictor.model.tracker.track_life = setting['track_life'] predictor.model.tracker.track_visibility = setting['track_vis'] predictor.model.tracker.use_appearance = setting['A'] predictor.model.tracker.use_kalman = setting['K'] predictor.model.tracker.embed = setting['E'] predictor.model.tracker.reid = setting['Re'] predictor.model.tracker.dist = setting['D'] predictor.model.tracker.measurement_noise=setting['measurement'] predictor.model.tracker.process_noise = setting['process'] predictor.model.tracker.dist_thresh = setting['dist_thresh'] predictor.model.use_reid = setting['Re'] predictor.model.tracker.soft_thresh = setting['st'] predictor.model.tracker.suppress_fp = setting['sup_fp'] predictor.model.tracker.fp_thresh = 
setting['fp_thresh'] predictor.model.tracker.embed_alpha = setting['alpha'] max_distance = 0.2 output_file = open('%s/%s.txt'%(exp_name,folder_name),'w') print('%s/%s.txt'%(exp_name,folder_name)) start = time.time() frame_counter = 0 prev_path = 0 predictor.model.prev_path = 0 frames = {} for photo_name in sorted(os.listdir(dirC+folder_name+'/img1/')): img_path = dirC+folder_name+'/img1/'+photo_name frames[frame_counter] = {} img = cv2.imread(img_path) inp = {} inp['width'] = img.shape[1] inp['height'] = img.shape[0] inp['file_name'] = photo_name inp['image_id'] = photo_name predictor.model.photo_name = img_path outputs = predictor(img,setting['props']) for i in outputs: if(i.pred_class in arr): output_file.write("%d,%d,%d,%d,%d,%d,%f,-1,-1,-1\n" %(frame_counter,i.track_id,i.xmin,i.ymin,i.xmax-i.xmin,i.ymax-i.ymin ,i.conf)) frame_counter +=1 predictor.model.prev_path = img_path end = time.time() elapsed = end-start avg = frame_counter/elapsed print('avg time is' ,avg) print('elapsed : ',elapsed) output_file.close() total_elapsed += elapsed print('total elapsed ', total_elapsed) # -
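Each detection in the loop above is written in the MOTChallenge result format: frame, track id, left, top, width, height, confidence, then three unused `-1` fields. A small formatter mirroring the notebook's `output_file.write` format string (`mot_line` is a hypothetical helper, not part of the codebase):

```python
def mot_line(frame, track_id, xmin, ymin, xmax, ymax, conf):
    # Convert (xmin, ymin, xmax, ymax) corners to (left, top, width, height),
    # matching the "%d,%d,%d,%d,%d,%d,%f,-1,-1,-1" row written in the loop above
    return "%d,%d,%d,%d,%d,%d,%f,-1,-1,-1" % (
        frame, track_id, xmin, ymin, xmax - xmin, ymax - ymin, conf)

line = mot_line(1, 7, 100, 50, 160, 170, 0.9)
```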
Notebooks/MOT_VAL.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Do FastSLAM import numpy as np from data_parser import data_parser from homie_filter import homie_filter import math import matplotlib.pyplot as plt # %matplotlib inline from IPython import display import timeit # get data from experiment data_file = '../processing_read_control/comm_trial_52_w_vis.txt' my_parser = data_parser(data_file, verbose=0) sense_data = my_parser.get_sense_data() move_data = my_parser.get_move_data() # print(len(sense_data['time'])) # print('Range sensing data keys: ') # print(sense_data.keys()) # print('') # print('Rotary encoder number of clicks data keys:') # print(move_data.keys()) # print('') # print('Positive wheel counts ---> forward direction rotation') # print('Negative wheel counts ---> backward direction rotation') # #### do iterative fastSLAM and update particles in a loop num_particles = 500 start_x = 0 start_y = 0 start_theta = 0 my_filter = homie_filter(num_particles, start_x, start_y, start_theta) # fig = plt.figure(figsize=(10,10)) for i in range(len(sense_data['time'])): tic = timeit.default_timer() # move homies l_cnt = move_data['l_cnt'][i] r_cnt = move_data['r_cnt'][i] if i == 0: del_t = 0.183 else: del_t = (move_data['time'][i] - move_data['time'][i-1])/1000 # print(l_cnt, r_cnt) my_filter.move_particles(l_cnt, r_cnt, del_t) ## correct this later when wires are crossed back # for homie in my_filter._homies: # plt.scatter(homie._x, homie._y, c='r') # plt.xlim(-5000,5000) # plt.ylim(-7000,7000) # plt.title(str(sense_data['time'][i])) # display.clear_output(wait=True) # display.display(plt.gcf()) # update homie weights front = sense_data['front'][i] left = sense_data['left'][i] back = sense_data['back'][i] right = sense_data['right'][i] # print(front, left, back, right) 
my_filter.update_particle_weights(front, left, back, right) my_filter.resample_homies() # print(str(sense_data['time'][i])+': Finished loop '+str(i+1)+' of '+str(len(sense_data['time'])) + ' | Time taken: '+str(timeit.default_timer() - tic)+' seconds') # print(str(i)+' of '+str(len(sense_data['time']))+ '| '+str(my_filter.get_most_landmarks())+' max landmarks') # get data from experiment data_file = '../processing_read_control/comm_trial_50_w_vis.txt' my_parser = data_parser(data_file, verbose=0) sense_data = my_parser.get_sense_data() move_data = my_parser.get_move_data() # print(len(sense_data['time'])) # print('Range sensing data keys: ') # print(sense_data.keys()) # print('') # print('Rotary encoder number of clicks data keys:') # print(move_data.keys()) # print('') # print('Positive wheel counts ---> forward direction rotation') # print('Negative wheel counts ---> backward direction rotation') num_particles = 500 start_x = 0 start_y = 0 start_theta = 0 my_filter_2 = homie_filter(num_particles, start_x, start_y, start_theta) # fig = plt.figure(figsize=(10,10)) for i in range(len(sense_data['time'])): tic = timeit.default_timer() # move homies l_cnt = move_data['l_cnt'][i] r_cnt = move_data['r_cnt'][i] if i == 0: del_t = 0.183 else: del_t = (move_data['time'][i] - move_data['time'][i-1])/1000 # print(l_cnt, r_cnt) my_filter_2.move_particles(l_cnt, r_cnt, del_t) ## correct this later when wires are crossed back # for homie in my_filter._homies: # plt.scatter(homie._x, homie._y, c='r') # plt.xlim(-5000,5000) # plt.ylim(-7000,7000) # plt.title(str(sense_data['time'][i])) # display.clear_output(wait=True) # display.display(plt.gcf()) # update homie weights front = sense_data['front'][i] left = sense_data['left'][i] back = sense_data['back'][i] right = sense_data['right'][i] # print(front, left, back, right) my_filter_2.update_particle_weights(front, left, back, right) my_filter_2.resample_homies() # print(str(sense_data['time'][i])+': Finished loop '+str(i+1)+' of 
'+str(len(sense_data['time'])) + ' | Time taken: '+str(timeit.default_timer() - tic)+' seconds') # print(str(i)+' of '+str(len(sense_data['time']))+ '| '+str(my_filter.get_most_landmarks())+' max landmarks') # get data from experiment data_file = '../processing_read_control/comm_trial_51_w_vis.txt' my_parser = data_parser(data_file, verbose=0) sense_data = my_parser.get_sense_data() move_data = my_parser.get_move_data() # print(len(sense_data['time'])) # print('Range sensing data keys: ') # print(sense_data.keys()) # print('') # print('Rotary encoder number of clicks data keys:') # print(move_data.keys()) # print('') # print('Positive wheel counts ---> forward direction rotation') # print('Negative wheel counts ---> backward direction rotation') num_particles = 500 start_x = 0 start_y = 0 start_theta = 3.14159 my_filter_3 = homie_filter(num_particles, start_x, start_y, start_theta) # fig = plt.figure(figsize=(10,10)) for i in range(len(sense_data['time'])): tic = timeit.default_timer() # move homies l_cnt = move_data['l_cnt'][i] r_cnt = move_data['r_cnt'][i] if i == 0: del_t = 0.183 else: del_t = (move_data['time'][i] - move_data['time'][i-1])/1000 # print(l_cnt, r_cnt) my_filter_3.move_particles(l_cnt, r_cnt, del_t) ## correct this later when wires are crossed back # for homie in my_filter._homies: # plt.scatter(homie._x, homie._y, c='r') # plt.xlim(-5000,5000) # plt.ylim(-7000,7000) # plt.title(str(sense_data['time'][i])) # display.clear_output(wait=True) # display.display(plt.gcf()) # update homie weights front = sense_data['front'][i] left = sense_data['left'][i] back = sense_data['back'][i] right = sense_data['right'][i] # print(front, left, back, right) my_filter_3.update_particle_weights(front, left, back, right) my_filter_3.resample_homies() # print(str(sense_data['time'][i])+': Finished loop '+str(i+1)+' of '+str(len(sense_data['time'])) + ' | Time taken: '+str(timeit.default_timer() - tic)+' seconds') # print(str(i)+' of '+str(len(sense_data['time']))+ '| 
'+str(my_filter.get_most_landmarks())+' max landmarks') for homie in my_filter._homies: mus = np.array(homie._land_mu) plt.scatter(mus[:,0], mus[:,1], c='g') for i in range(len(my_filter._xs)): this_x = np.mean(my_filter._xs[i]) this_y = np.mean(my_filter._ys[i]) plt.scatter(this_x, this_y, c='r') for homie in my_filter_2._homies: mus = np.array(homie._land_mu) plt.scatter(mus[:,0], mus[:,1], c='g') for i in range(len(my_filter_2._xs)): this_x = np.mean(my_filter_2._xs[i]) this_y = np.mean(my_filter_2._ys[i]) plt.scatter(this_x, this_y, c='r') for homie in my_filter_3._homies: mus = np.array(homie._land_mu) plt.scatter(mus[:,0], mus[:,1], c='g') for i in range(len(my_filter_3._xs)): this_x = np.mean(my_filter_3._xs[i]) this_y = np.mean(my_filter_3._ys[i]) plt.scatter(this_x, this_y, c='r')
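The `homie_filter` internals are defined elsewhere (`homie_filter.py`), but each loop iteration above ends with weight-based resampling. A minimal multinomial-resampling sketch of what a `resample_homies` step typically does (an assumption, since the class is not shown here):

```python
import numpy as np

def resample(particles, weights, rng=None):
    # Draw a new particle set of the same size, with probability
    # proportional to each particle's weight (multinomial resampling)
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize weights to sum to 1
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx]

# A particle holding all the weight dominates the resampled set
new_set = resample(['p0', 'p1', 'p2'], [0.0, 0.0, 5.0])
```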
data_analyzer/do_fastslam.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Binary Tree # ![image.png](attachment:image.png) # # **A Binary Tree is a tree like structure. Each Node can have maximum 2 child nodes. Any general node of a binary tree should have data, left and right** class BinaryTreeNode: def __init__(self,data): self.data = data self.left = None self.right = None # ## Print a Binary Tree def printTree(root): if root == None: return print(root.data) printTree(root.left) printTree(root.right) def printTreeDetailed(root): if root == None: return print(root.data, end = " : ") if root.left != None: print( "L", root.left.data , end = ", ") if root.right != None: print("R", root.right.data, end = " ") print("") printTreeDetailed(root.left) printTreeDetailed(root.right) # ## Binary Tree by User Input def treeInput(): rootData = int(input()) if rootData == -1: return None root = BinaryTreeNode(rootData) leftTree = treeInput() rightTree = treeInput() root.left = leftTree root.right = rightTree return root # ## Number of Nodes in a Binary Tree def numNodes(root): if root == None: return 0 total = 1 + numNodes(root.left) + numNodes(root.right) return total # ## Sum of Nodes def sumNodes(root): if root == None: return 0 leftSum = sumNodes(root.left) rightSum = sumNodes(root.right) return root.data + leftSum + rightSum # ## Pre-Order Traversal # + def preOrder(root): if root == None: return print(root.data) preOrder(root.left) preOrder(root.right) # - # ## Post-Order Traversal def postOrder(root): if root == None: return postOrder(root.left) postOrder(root.right) print(root.data) # ## In-Order Traversal def inOrder(root): if root == None: return inOrder(root.left) print(root.data) inOrder(root.right) # ## Node with Largest Data def largestData(root): if root == None: return -1 largestLeft = 
largestData(root.left) largestRight = largestData(root.right) return max(root.data, largestLeft, largestRight) # ## Nodes greater than X # # **For a given binary tree of integers and an integer X, find and return the total number of nodes of the given binary tree whose data is greater than X.** # # ### Solution: # Base Case : If Node is None then return 0 because it cannot be greater than anything. # # Induction Hypothesis: Check the root node only and give the left and right subtrees to the recursion. # # Induction Step: If the root is greater than X, then add +1 else 0. def greaterThanX(root, x): if root == None: return 0 inLeft = greaterThanX(root.left,x) inRight = greaterThanX(root.right,x) if root.data > x: return 1 + inLeft + inRight else: return inLeft + inRight # ## Height of a Tree # # Print the number of levels of a tree, if tree is None, its height is 0. def heightofTree(root): if root == None: return 0 left_height = heightofTree(root.left) right_height = heightofTree(root.right) return 1 + max(left_height, right_height) # ## Number of Leaf Nodes in a Tree def countLeaves(root): if root == None: return 0 if root.left == None and root.right == None: return 1 leafsLeft = countLeaves(root.left) leafsRight = countLeaves(root.right) return leafsLeft + leafsRight def countLeaves1(root): if root == None: return 0 leafsLeft = countLeaves1(root.left) leafsRight = countLeaves1(root.right) if root.left == None and root.right == None: return 1+ leafsLeft + leafsRight else: return leafsLeft + leafsRight # ## Print Nodes of tree at depth K def printatK(root,k): if root == None: return if k == 0: print(root.data) printatK(root.left, k-1) printatK(root.right, k-1) # + ## another version of code where we will not change k def printatK1(root, k, d = 0): if root == None: return if d == k: print(root.data) printatK1(root.left, k, d+1) printatK1(root.right, k, d+1) # - # ## Replace the Node data with Depth of Node def replacewithDepth(root, d=0): if root == 
None: return root.data = d replacewithDepth(root.left, d+1) replacewithDepth(root.right, d+1) # ## Find if the Node is present with the given data in the Tree or Not. def isPresent(root, x): if root == None: return False if root.data == x: return True return isPresent(root.left, x) or isPresent(root.right, x) # ## Nodes without Siblings def node_without_sibling(root): if root == None: return if root.left == None and root.right != None: print(root.right.data) if root.right == None and root.left != None: print(root.left.data) node_without_sibling(root.left) node_without_sibling(root.right) # ## Remove Leaf Nodes of a Binary Tree def removeLeaves(root): if root == None: return None if root.left == None and root.right == None: return None root.left = removeLeaves(root.left) root.right = removeLeaves(root.right) return root # ## Mirror Binary Tree def mirrorTree(root): if root == None: return root.left, root.right = root.right, root.left mirrorTree(root.left) mirrorTree(root.right) # ## Check if BT is Balanced or not O(n^2) # + def isBalanced(root): if root == None: return True lh = heightofTree(root.left) rh = heightofTree(root.right) if abs(lh-rh) > 1: return False left = isBalanced(root.left) right = isBalanced(root.right) if left and right: return True else: return False # - # ## Optimised check Balance O(n) # # **Don't calculate the height for each node separately, just check the height and isBalanced at the same time** def heightandBalanced(root): if root == None: return 0, True lh, isleftBalanced = heightandBalanced(root.left) rh, isrightBalanced = heightandBalanced(root.right) h = 1 + max(lh, rh) if abs(lh-rh) > 1: return h, False if isleftBalanced and isrightBalanced: return h, True else: return h, False def isBalanced2(root): h, isrootBalanced = heightandBalanced(root) return isrootBalanced # ## Diameter of Binary Tree def diameter(root): if root == None: return 0 lH = heightofTree(root.left) rH = heightofTree(root.right) return max(lH+rH, diameter(root.left), 
diameter(root.right)) # ## Optimised Solution for the Diameter of a Binary Tree # + def heightDiameter(root): if root == None: return 0,0 lD, lH = heightDiameter(root.left) rD, rH = heightDiameter(root.right) return max(lH+rH, lD, rD), 1+ max(lH, rH) # return max(lH+rH+1, lD, rD), 1+ max(lH, rH) :-> GFG condition def findDiameter(root): return heightDiameter(root)[0] # - # ## Levelwise input of a Binary Tree # + import queue def levelwiseinput(): q = queue.Queue() rootData = int(input("Enter the root Data")) if rootData == -1: return None root = BinaryTreeNode(rootData) q.put(root) while not q.empty(): current_node = q.get() print("Enter left child of ", current_node.data) leftChildData = int(input()) if leftChildData != -1: leftChild = BinaryTreeNode(leftChildData) current_node.left = leftChild q.put(leftChild) print("Enter the right Child of ", current_node.data) rightChildData = int(input()) if rightChildData != -1: rightChild = BinaryTreeNode(rightChildData) current_node.right = rightChild q.put(rightChild) return root # - # ## Print tree levelwise import queue def printLevelWise(root): q = queue.Queue() if root != None: q.put(root) while not q.empty(): a = q.get() print(a.data, end=" : ") if a.left != None: print("L", a.left.data, end = " , ") q.put(a.left) else: print("L -1", end = " , ") if a.right != None: print("R", a.right.data, end = " ") print(" ") q.put(a.right) else: print("R -1",end = " ") print(" ") # ## Make a tree from inorder and preorder sequence. 
def makeTreePreIn(pre, inorder): if len(pre) == 0: return None rootData = pre[0] root = BinaryTreeNode(rootData) rootIndexInorder = -1 for i in range(0, len(inorder)): if inorder[i] == rootData: rootIndexInorder = i break if rootIndexInorder == -1: return None leftInorder = inorder[0:rootIndexInorder] rightInorder = inorder[rootIndexInorder+1 :] lenLeft = len(leftInorder) leftPreorder = pre[1:lenLeft + 1] rightPreorder = pre[lenLeft+1 :] ## use recursion leftChild = makeTreePreIn(leftPreorder, leftInorder) rightChild = makeTreePreIn(rightPreorder, rightInorder) root.left = leftChild root.right = rightChild return root # ## Create and Insert Duplicate Node # # For a given a Binary Tree of type integer, duplicate every node of the tree and attach it to the left of itself. # The root will remain the same. So you just need to insert nodes in the given Binary Tree. # Example: # # ![image.png](attachment:image.png) # # After making the changes to the above-depicted tree, the updated tree will look like this. # # ![image-2.png](attachment:image-2.png) # # You can see that every node in the input tree has been duplicated and inserted to the left of itself. 
def duplicateNode(root): if root == None: return None duplicateNode(root.left) duplicateNode(root.right) duplicate = BinaryTreeNode(root.data) temp = root.left root.left = duplicate duplicate.left = temp return root # ## Min and Max of Binary Tree def minMax(root): if root == None: return -1, 10**5 leftMax, leftMin = minMax(root.left) rightMax, rightMin = minMax(root.right) return max(root.data, leftMax, rightMax) , min(root.data, leftMin, rightMin) # ## Level Order Traversal # + import queue def levelTraverse(root): q = queue.Queue() if root != None: q.put(root) q.put(None) print(root.data) while not q.empty(): current = q.get() if current != None: if current.left != None: print(current.left.data, end = " ") q.put(current.left) if current.right != None: print(current.right.data, end = " ") q.put(current.right) # q.put(None) else: print(" ") if not q.empty(): q.put(None) # - # ## Path Sum root to Leaf # # print all the root to leaf paths in the Binary Tree , such that the sum of the nodes in that path is equal to the given sum k # + def pathsum(root, sum, path): if root == None: return None rootData = root.data path.append(rootData) if sum == rootData and root.left == None and root.right == None: print(*path) if root.left != None: pathsum(root.left, sum-rootData, path) if root.right != None: pathsum(root.right, sum-rootData, path) path.pop(-1) # - # ## Print Nodes at Distance k from Node def printNodeK(root,node,k): if root == None: return -1 if root.data == node: printatK(root, k) return 0 lD = printNodeK(root.left, node, k) if lD != -1: if lD +1 == k: print(root.data) else: printatK(root.right, k-lD-2) return 1 + lD rD = printNodeK(root.right, node, k) if rD != -1: if rD + 1 == k: print(root.data) else: printatK(root.left, k-rD-2) return 1+rD return -1 # ## Binary Search Tree # # **The trees in which searching something is very fast, analogy is binary search of array. we need sorted trees in this case. 
Only sorted trees can be used for this.** # # ### Condition for Sorted Trees # # **For each node with data d, its left subtree < d and its right subtree >= d.** # ## Search for x in a given Binary Search Tree def searchBST(root, x): if root == None: return False if root.data == x: return True elif root.data > x: return searchBST(root.left, x) else: return searchBST(root.right, x) # ## Print elements in a BST which are in range k1 and k2 def printbwRange(root, k1, k2): if root == None: return None if root.data < k1: printbwRange(root.right, k1, k2) elif root.data > k2: printbwRange(root.left, k1, k2) else: print(root.data) printbwRange(root.left, k1, k2) printbwRange(root.right, k1, k2) # ## Convert Sorted Array to Binary Search Tree # + import math def convertArrayBSTHelp(arr, si, ei): if si > ei: return None mid = math.ceil((si+ei)/2) node = BinaryTreeNode(arr[mid]) leftChild = convertArrayBSTHelp(arr, si, mid-1) rightChild = convertArrayBSTHelp(arr, mid+1, ei) node.left = leftChild node.right = rightChild return node def convertArrayBST(arr): si = 0 ei = len(arr) -1 return convertArrayBSTHelp(arr, si, ei) # - # ## Check if the given Binary Tree is a Binary Search Tree or Not def minTree(root): if root == None: return 10**6 return min(root.data, minTree(root.left), minTree(root.right)) def maxTree(root): if root == None: return -1 return max(root.data, maxTree(root.left), maxTree(root.right)) def checkBST(root): if root == None: return True leftMax = maxTree(root.left) rightMin = minTree(root.right) if root.data > rightMin or root.data <= leftMax: return False return checkBST(root.left) and checkBST(root.right) # ## Optimised Solution to check the BST complexity of O(n) # + def isBSTHelp(root): if root == None: return -1, 10**6, True maxLeft, minLeft, isLeft = isBSTHelp(root.left) maxRight, minRight, isRight = isBSTHelp(root.right) if root.data > minRight or root.data <= maxLeft: return -1, -1, False return (max(root.data, maxLeft, maxRight), min(root.data, 
minLeft, minRight), (isLeft and isRight)) def isBST(root): return isBSTHelp(root)[2] # - # ## Another Solution to check BST def checkBSTNew(root, minRange, maxRange): if root == None: return True if minRange > root.data or maxRange < root.data: return False isLeft = checkBSTNew(root.left, minRange, root.data-1) isRight = checkBSTNew(root.right, root.data, maxRange) return isLeft and isRight arr = [i for i in range(1,10)] print(arr) node = convertArrayBST(arr) printTreeDetailed(node) preOrder(node) printNodeK(root, 5, 2) printbwRange(root, 2, 30) root = levelwiseinput() printTreeDetailed(root) searchBST(root, 3) printNodeK(root, 1, 2) root = treeInput() printTreeDetailed(root) # root = levelwiseinput() printTreeDetailed(root) # + bt1 = BinaryTreeNode(10) bt2 = BinaryTreeNode(20) bt3 = BinaryTreeNode(30) bt4 = BinaryTreeNode(40) bt5 = BinaryTreeNode(50) # + bt1.left = bt2 bt1.right = bt3 bt2.left = bt4 bt2.right = bt5 # - printTreeDetailed(bt1) numNodes(bt1) sumNodes(bt1) preOrder(bt1) postOrder(bt1) inOrder(bt1) largestData(bt1) greaterThanX(bt1, 50) heightofTree(bt1) countLeaves(bt1) printatK1(bt1, 2) replacewithDepth(bt1) printTreeDetailed(bt1) isPresent(bt1, 20) printTreeDetailed(bt1) node_without_sibling(root) removeLeaves(bt1) printTreeDetailed(bt1) mirrorTree(root) printTreeDetailed(root) isBalanced(root) heightandBalanced(root) diameter(bt1) heightDiameter(bt1) findDiameter(bt1) root = levelwiseinput() printTreeDetailed(root) printLevelWise(root) pre = [1,2,4,5,3,6,7] inorder = [4,2,5,1,6,3,7] root = makeTreePreIn(pre,inorder) printTreeDetailed(root) duplicateNode(root) printTreeDetailed(root) minMax(root) printLevelWise(root) pathsum(root, 16, []) pathsum(root, 4, [])
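The 10/20/30/40/50 tree built in the usage cells exercises several of these routines. A self-contained sketch that repeats the node class plus height and diameter (renamed `tree_height`/`tree_diameter` here so they do not shadow the notebook's versions):

```python
class BinaryTreeNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def tree_height(root):
    # number of levels; an empty tree has height 0
    if root is None:
        return 0
    return 1 + max(tree_height(root.left), tree_height(root.right))

def tree_diameter(root):
    # longest path between two nodes (in edges): either through this
    # root, or entirely inside one of the subtrees
    if root is None:
        return 0
    through_root = tree_height(root.left) + tree_height(root.right)
    return max(through_root, tree_diameter(root.left), tree_diameter(root.right))

# The 10/20/30/40/50 tree from the usage cells above
bt1, bt2, bt3, bt4, bt5 = (BinaryTreeNode(d) for d in (10, 20, 30, 40, 50))
bt1.left, bt1.right = bt2, bt3
bt2.left, bt2.right = bt4, bt5
```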
CN DSA/BinaryTree.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 畳み込みニューラルネットワーク (CNN) # ![](./capture/ml_lesson_7_1_capture01.png) # ## Section2 CNNを使ったモデルを構築してみよう # + # %matplotlib inline import os import tensorflow.keras as keras import numpy as np import matplotlib.pyplot as plt from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Input, Activation, add, Add, Dropout, BatchNormalization from tensorflow.keras.callbacks import EarlyStopping from tensorflow.keras.models import Sequential, Model from tensorflow.keras.datasets import cifar10 from tensorflow.keras.datasets import fashion_mnist from tensorflow.keras.utils import to_categorical from sklearn.model_selection import train_test_split from IPython.display import SVG from tensorflow.python.keras.utils.vis_utils import model_to_dot random_state = 42 # - # ### 2.1 Fashion MNIST(白黒画像)をCNNでクラス分類 # #### 2.1.1 データセットの読み込み (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data() # + fig = plt.figure(figsize=(9, 15)) fig.subplots_adjust(left=0, right=1, bottom=0, top=0.5, hspace=0.05, wspace=0.05) for i in range(9): ax = fig.add_subplot(1, 9, i + 1, xticks=[], yticks=[]) ax.imshow(x_train[i], cmap='gray') # - # このとき読み込んだ画像は(バッチサイズ、縦の画素数、 横の画素数)の次元で表されています。 x_train.shape x_train = x_train.reshape((x_train.shape[0], 28, 28, 1)) / 255 x_test = x_test.reshape((x_test.shape[0], 28, 28, 1)) / 255 y_train = to_categorical(y_train) y_test = to_categorical(y_test) # 前章の多層パーセプトロンでは入力を (バッチサイズ、画素数) の2次元テンソルとして扱いましたが、 CNNでは2次元の画像として処理していくために4次元テンソル (バッチサイズ、縦の画素数、横の画素数、チャンネル数)として扱います。 チャンネル数は白黒画像の場合は1、 カラー画像の場合はRGBで3です。 # # Fashion MNISTの画像は白黒データですのでチャンネル数を1に設定しています。(カラー画像の場合はチャンネル数が3になります) print(x_train.shape) print(y_train.shape) # #### 2.1.2 実装 # + model = Sequential() # 入力画像 28x28x1 (縦の画素数)x(横の画素数)x(チャンネル数) model.add(Conv2D(16, 
kernel_size=(5, 5), activation='relu',
                 kernel_initializer='he_normal',
                 input_shape=(28, 28, 1)))  # 28x28x1 -> 24x24x16
model.add(MaxPooling2D(pool_size=(2, 2)))  # 24x24x16 -> 12x12x16
model.add(Conv2D(64, kernel_size=(5, 5), activation='relu',
                 kernel_initializer='he_normal'))  # 12x12x16 -> 8x8x64
model.add(MaxPooling2D(pool_size=(2, 2)))  # 8x8x64 -> 4x4x64
model.add(Flatten())  # 4x4x64 -> 1024
model.add(Dense(10, activation='softmax'))  # 1024 -> 10

model.compile(
    loss=keras.losses.categorical_crossentropy,
    optimizer='adam',
    metrics=['accuracy']
)
# -

# Let's check the model we created.

#SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
model.summary()

# +
# Run training
# For the sake of execution time, set the number of epochs to 3
# EarlyStopping: a mechanism to prevent overfitting; it stops training once accuracy stops improving
early_stopping = EarlyStopping(patience=1, verbose=1)
#model.fit(x=x_train, y=y_train, batch_size=128, epochs=100, verbose=1,
#          validation_split=0.1, callbacks=[early_stopping])
model.fit(x=x_train, y=y_train, batch_size=32, epochs=3, verbose=1,
          validation_split=0.1, callbacks=[early_stopping])

# Performance evaluation (loss: error between predictions and ground truth, accuracy: fraction correct)
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# -

# ### 2.2 Classifying CIFAR-10 data (color images) with a CNN

# #### 2.2.1 Loading the dataset

# We use the CIFAR-10 dataset: 60,000 color images, each labeled with one of 10 categories.
#
# First, load the data.

# +
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

x_train = x_train.astype('float32') / 255
y_train = np.eye(10)[y_train.astype('int32').flatten()]

x_test = x_test.astype('float32') / 255
y_test = np.eye(10)[y_test.astype('int32').flatten()]

x_train, x_valid, y_train, y_valid = train_test_split(
    x_train, y_train, test_size=10000)
# -

# Since the images are RGB data, unlike Fashion MNIST, the number of channels is 3.

print(x_train.shape)
print(y_train.shape)

# Next, let's display some example CIFAR-10 images. Each image is labeled with one of the 10 categories.

# +
fig = plt.figure(figsize=(9, 15))
fig.subplots_adjust(left=0, right=1, bottom=0, top=0.5, hspace=0.05,
                    wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(1, 9, i + 1, xticks=[], yticks=[])
    ax.imshow(x_train[i])
# -

# Let's implement the following network.
#
# ![](./figures/lenet.png)
#
# <NAME> et al., "Gradient-based learning applied to document recognition", Proceedings of the IEEE, 1998

# #### 2.2.2 Implementation

# +
model = Sequential()

model.add(Conv2D(6, kernel_size=(5, 5), activation='relu',
                 kernel_initializer='he_normal',
                 input_shape=(32, 32, 3)))  # 32x32x3 -> 28x28x6
model.add(MaxPooling2D(pool_size=(2, 2)))  # 28x28x6 -> 14x14x6
model.add(Conv2D(16, kernel_size=(5, 5), activation='relu',
                 kernel_initializer='he_normal'))  # 14x14x6 -> 10x10x16
model.add(MaxPooling2D(pool_size=(2, 2)))  # 10x10x16 -> 5x5x16
model.add(Flatten())  # 5x5x16 -> 400
model.add(Dense(120, activation='relu',
                kernel_initializer='he_normal'))  # 400 -> 120
model.add(Dense(84, activation='relu',
                kernel_initializer='he_normal'))  # 120 -> 84
model.add(Dense(10, activation='softmax'))  # 84 -> 10

model.compile(
    loss=keras.losses.categorical_crossentropy,
    optimizer='adam',
    metrics=['accuracy']
)
# -

# Let's check the model we created.

#SVG(model_to_dot(model).create(prog='dot', format='svg'))
model.summary()

# +
# Run training
# For the sake of execution time, set the number of epochs to 3
# EarlyStopping: a mechanism to prevent overfitting; it stops training once accuracy stops improving
early_stopping = EarlyStopping(patience=1, verbose=1)
#model.fit(x=x_train, y=y_train, batch_size=128, epochs=100, verbose=1,
#          validation_data=(x_valid, y_valid), callbacks=[early_stopping])
model.fit(x=x_train, y=y_train, batch_size=32, epochs=3, verbose=1,
          validation_data=(x_valid, y_valid), callbacks=[early_stopping])

# Performance evaluation (loss: error between predictions and ground truth, accuracy: fraction correct)
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# -

# # Fine-tuning

# ## What are fine-tuning and transfer learning?
#
# ![](./capture/ml_lesson_7_1_capture03.png)

# ## Pre-trained models available in Keras
#
# ![](./capture/ml_lesson_7_1_capture02.png)

# ## Classifying CIFAR-10 data (color images) by fine-tuning MobileNet

# +
# Load the MobileNet model library
from tensorflow.keras.applications.mobilenet import MobileNet

# Input format
input_tensor = Input(shape=(32, 32, 3))

# Define the MobileNet model
base_model = MobileNet(include_top=False, weights='imagenet', input_tensor=input_tensor)

# Build new fully connected layers
top_model = base_model.output
top_model = Flatten()(top_model)
#top_model = GlobalAveragePooling2D()(top_model)  # GlobalAveragePooling2D is said to work well in place of Flatten
top_model = Dense(1024, activation='relu')(top_model)
predictions = Dense(10, activation='softmax')(top_model)

# Define the whole network
model = Model(inputs=base_model.input, outputs=predictions)

# Do not retrain MobileNet layers before the conv_dw_13 layer
trainable = False
for layer in model.layers:
    if layer.name == 'conv_dw_13':
        trainable = True  # unfreeze layers from conv_dw_13 onward
    layer.trainable = trainable
    # However, retrain Batch Normalization layers even before conv_dw_13
    if layer.name.endswith('bn'):
        layer.trainable = True

# print("MobileNet has {} layers".format(len(model.layers)))
# model.summary()

model.compile(
    loss=keras.losses.categorical_crossentropy,
    optimizer='adam',
    metrics=['accuracy']
)
# +
# Run training
# For the sake of execution time, set the number of epochs to 3
# EarlyStopping: a mechanism to prevent overfitting; it stops training once accuracy stops improving
early_stopping = EarlyStopping(patience=1, verbose=1)
#model.fit(x=x_train, y=y_train, batch_size=128, epochs=100, verbose=1,
#          validation_data=(x_valid, y_valid), callbacks=[early_stopping])
model.fit(x=x_train, y=y_train, batch_size=32, epochs=3, verbose=1,
          validation_data=(x_valid, y_valid), callbacks=[early_stopping])

# Performance evaluation (loss: error between predictions and ground truth, accuracy: fraction correct)
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# -
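The layer-freezing loop in the fine-tuning cell can be illustrated in isolation with plain-Python stand-ins for Keras layers. This is only a sketch: the layer names below are assumptions modeled on MobileNet's naming convention, and `FakeLayer` is a hypothetical stand-in, not part of Keras.

```python
class FakeLayer:
    """Stand-in for a Keras layer: just a name and a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

# Hypothetical layer names, modeled on MobileNet's naming convention
layers = [FakeLayer(n) for n in
          ['conv1', 'conv1_bn', 'conv_dw_12', 'conv_dw_12_bn',
           'conv_dw_13', 'conv_dw_13_bn', 'conv_pw_13', 'dense_head']]

# Same logic as the fine-tuning cell: freeze everything before
# 'conv_dw_13', but keep all Batch Normalization layers trainable.
trainable = False
for layer in layers:
    if layer.name == 'conv_dw_13':
        trainable = True
    layer.trainable = trainable
    if layer.name.endswith('bn'):
        layer.trainable = True

print([l.name for l in layers if l.trainable])
# -> ['conv1_bn', 'conv_dw_12_bn', 'conv_dw_13', 'conv_dw_13_bn', 'conv_pw_13', 'dense_head']
```

Note how the BatchNorm exception means only the early convolution weights stay frozen; this mirrors the common practice of letting normalization statistics adapt to the new dataset.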
ml_lesson_7_1.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import pandas as pd
import patsy as pt
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
# %matplotlib inline
import re
import pymc3 as pm
import matplotlib.ticker as tk
from sklearn.model_selection import StratifiedKFold
import pickle

# ## Import data

df = pd.read_csv('outputs/ala1_trials_clean.csv')
df = df.rename(columns={'project_name': 'basis', 'cluster__n_clusters': 'n', 'test_mean': 'y'}).\
    loc[:, ['basis', 'y', 'n']]

# ## Scale predictors

to_scale = ['n']
scaler = preprocessing.MinMaxScaler()
vars_scaled = pd.DataFrame(scaler.fit_transform(df.loc[:, to_scale]),
                           columns=[x+'_s' for x in to_scale])
df = df.join(vars_scaled)
df.T

# ## Create design matrix

y = df.loc[:, 'y']
X = df.loc[:, df.columns.difference(['y'])]
X_c = pt.dmatrix('~ 0 + n_s + C(basis)', data=df, return_type='dataframe')
X_c = X_c.rename(columns=lambda x: re.sub('C|\\(|\\)|\\[|\\]', '', x))

# ## Model fitting functions

# +
def gamma(alpha, beta):
    def g(x):
        return pm.Gamma(x, alpha=alpha, beta=beta)
    return g


def hcauchy(beta):
    def g(x):
        return pm.HalfCauchy(x, beta=beta)
    return g


def fit_model_1(y, X, kernel_type='RBF'):
    """
    Function to return a pymc3 model.

    y : dependent variable
    X : independent variables
    kernel_type : one of 'RBF', 'Exponential', 'M52', 'M32'

    The proportion of observations used as inducing variables is fixed
    internally. X, y are dataframes; we'll use the column names.
    """
    with pm.Model() as model:
        # Convert arrays
        X_a = X.values
        y_a = y.values
        X_cols = list(X.columns)

        # Globals
        prop_Xu = 0.1  # proportion of observations to use as inducing variables
        l_prior = gamma(1, 0.05)
        eta_prior = hcauchy(2)
        sigma_prior = hcauchy(2)

        # Kernels
        # 3 way interaction
        eta = eta_prior('eta')
        cov = eta**2
        for i in range(X_a.shape[1]):
            var_lab = 'l_'+X_cols[i]
            if kernel_type == 'RBF':
                cov = cov*pm.gp.cov.ExpQuad(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i])
            if kernel_type == 'Exponential':
                cov = cov*pm.gp.cov.Exponential(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i])
            if kernel_type == 'M52':
                cov = cov*pm.gp.cov.Matern52(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i])
            if kernel_type == 'M32':
                cov = cov*pm.gp.cov.Matern32(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i])

        # Covariance model
        cov_tot = cov

        # Model
        gp = pm.gp.MarginalSparse(cov_func=cov_tot, approx="FITC")

        # Noise model
        sigma_n = sigma_prior('sigma_n')

        # Inducing variables
        num_Xu = int(X_a.shape[0]*prop_Xu)
        Xu = pm.gp.util.kmeans_inducing_points(num_Xu, X_a)

        # Marginal likelihood
        y_ = gp.marginal_likelihood('y_', X=X_a, y=y_a, Xu=Xu, noise=sigma_n)

        mp = pm.find_MAP()

    return gp, mp, model
# -

# ## Main testing loop

# +
# Inputs
kernels = ['M32', 'M52', 'RBF', 'Exponential']

# Outputs
pred_dfs = []

# iterator
kf = StratifiedKFold(n_splits=10)

for i in range(len(kernels)):
    print(kernels[i])
    for idx, (train_idx, test_idx) in enumerate(kf.split(X.values, X['basis'])):
        print('\tfold: {}'.format(idx))
        # subset dataframes for training and testing
        y_train = y.iloc[train_idx]
        X_train = X_c.iloc[train_idx, :]
        y_test = y.iloc[test_idx]
        X_test = X_c.iloc[test_idx, :]

        # Fit gp model
        gp, mp, model = fit_model_1(y=y_train, X=X_train, kernel_type=kernels[i])

        # Get predictions for evaluation
        with model:
            # predict latent
            mu, var = gp.predict(X_test.values, point=mp, diag=True, pred_noise=False)
            sd_f = np.sqrt(var)
            # predict target (includes noise)
            _, var = gp.predict(X_test.values, point=mp, diag=True, pred_noise=True)
            sd_y = np.sqrt(var)

        res = pd.DataFrame({'f_pred': mu, 'sd_f': sd_f, 'sd_y': sd_y, 'y': y_test.values})
        res.loc[:, 'kernel'] = kernels[i]
        res.loc[:, 'fold_num'] = idx
        pred_dfs.append(pd.concat([X_test.reset_index(), res.reset_index()], axis=1))

pred_dfs = pd.concat(pred_dfs)
null_mu = np.mean(y)
null_sd = np.std(y)
# -

# ## Evaluate kernels

# +
def ll(f_pred, sigma_pred, y_true):
    # negative log predictive density under N(f_pred, sigma_pred^2)
    tmp = 0.5*np.log(2*np.pi*sigma_pred**2)
    tmp += (f_pred-y_true)**2/(2*sigma_pred**2)
    return tmp


sll = ll(pred_dfs['f_pred'], pred_dfs['sd_y'], pred_dfs['y'])
sll = sll - ll(null_mu, null_sd, pred_dfs['y'])

pred_dfs['msll'] = sll
pred_dfs['smse'] = (pred_dfs['f_pred']-pred_dfs['y'])**2/np.var(y)
pred_dfs.to_pickle('outputs/kernel_cv_fits.p')

msll = pred_dfs.groupby(['kernel'])['msll'].mean()
smse = pred_dfs.groupby(['kernel'])['smse'].mean()
summary = pd.DataFrame(smse).join(other=pd.DataFrame(msll), on=['kernel'], how='left')
summary.to_csv('outputs/kernel_cv_fits_summary.csv')
# -

summary
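The two evaluation metrics used above can be checked on toy numbers in a standalone numpy sketch. All values below are made up purely to show the mechanics: MSLL is the log loss relative to a trivial predictor (train mean and std), and SMSE is the squared error scaled by the target variance.

```python
import numpy as np

def ll(f_pred, sigma_pred, y_true):
    # negative log predictive density of y_true under N(f_pred, sigma_pred^2)
    return (0.5 * np.log(2 * np.pi * sigma_pred**2)
            + (f_pred - y_true)**2 / (2 * sigma_pred**2))

# Toy predictions (made-up numbers)
y_true = np.array([1.0, 2.0, 3.0])
f_pred = np.array([1.1, 1.9, 3.2])
sd_y   = np.array([0.5, 0.5, 0.5])

# Null model: predict the mean of the targets with their std
null_mu, null_sd = y_true.mean(), y_true.std()

# Mean Standardized Log Loss: negative values mean "better than the null model"
msll = np.mean(ll(f_pred, sd_y, y_true) - ll(null_mu, null_sd, y_true))

# Standardized Mean Squared Error: MSE scaled by the target variance
smse = np.mean((f_pred - y_true)**2 / np.var(y_true))

print(round(msll, 3), round(smse, 3))  # msll is negative here: beats the null model
```

Both metrics are scale-free, which is what makes them suitable for comparing kernels across folds.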
3_find_best_kernel.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] colab_type="text" id="NQ8B8hFx9plf"
# Code for the robot (no need to change it):

# + colab={} colab_type="code" id="ePrz8jWu47lt"
import random
import numpy as np
import matplotlib.pyplot as plt


class Robot(object):
    def __init__(self, length=20.0):
        """
        Creates robot and initializes location/orientation to 0, 0, 0.
        """
        self.x = 0.0
        self.y = 0.0
        self.orientation = 0.0
        self.length = length
        self.steering_noise = 0.0
        self.distance_noise = 0.0
        self.steering_drift = 0.0

    def set(self, x, y, orientation):
        """
        Sets a robot coordinate.
        """
        self.x = x
        self.y = y
        self.orientation = orientation % (2.0 * np.pi)

    def set_noise(self, steering_noise, distance_noise):
        """
        Sets the noise parameters.
        """
        # makes it possible to change the noise parameters
        # this is often useful in particle filters
        self.steering_noise = steering_noise
        self.distance_noise = distance_noise

    def set_steering_drift(self, drift):
        """
        Sets the systematic steering drift parameter.
        """
        self.steering_drift = drift

    def move(self, steering, distance, tolerance=0.001, max_steering_angle=np.pi / 4.0):
        """
        steering = front wheel steering angle, limited by max_steering_angle
        distance = total distance driven, must be non-negative
        """
        if steering > max_steering_angle:
            steering = max_steering_angle
        if steering < -max_steering_angle:
            steering = -max_steering_angle
        if distance < 0.0:
            distance = 0.0

        # apply noise
        steering2 = random.gauss(steering, self.steering_noise)
        distance2 = random.gauss(distance, self.distance_noise)

        # apply steering drift
        steering2 += self.steering_drift

        # Execute motion
        turn = np.tan(steering2) * distance2 / self.length

        if abs(turn) < tolerance:
            # approximate by straight line motion
            self.x += distance2 * np.cos(self.orientation)
            self.y += distance2 * np.sin(self.orientation)
            self.orientation = (self.orientation + turn) % (2.0 * np.pi)
        else:
            # approximate bicycle model for motion
            radius = distance2 / turn
            cx = self.x - (np.sin(self.orientation) * radius)
            cy = self.y + (np.cos(self.orientation) * radius)
            self.orientation = (self.orientation + turn) % (2.0 * np.pi)
            self.x = cx + (np.sin(self.orientation) * radius)
            self.y = cy - (np.cos(self.orientation) * radius)

    def __repr__(self):
        return '[x=%.5f y=%.5f orient=%.5f]' % (self.x, self.y, self.orientation)

# + [markdown] colab_type="text" id="kMMw0SF59nJP"
# Add the computation of the steer parameter via a PID controller in this cell:
# -

class PIDController:
    def __init__(self, tau_p, tau_d, tau_i, dt=0.1):
        self.tau_p, self.tau_d, self.tau_i = tau_p, tau_d, tau_i
        self.dt = dt
        self.error = None
        self.integral = 0.0

    def __call__(self, y, target_y):
        error = target_y - y
        d_error = 0.0 if self.error is None else (error - self.error) / self.dt
        self.error = error
        self.integral += error * self.dt
        total = (
            self.tau_p * error
            + self.tau_d * d_error
            + self.tau_i * self.integral
        )
        return total

# + colab={} colab_type="code" id="U8ThyySZ9mFH"
def run(robot, tau_p, tau_d, tau_i, n=200, speed=1.0):
    x_trajectory = []
    y_trajectory = []
    controller = PIDController(tau_p, tau_d, tau_i)
    for i in range(n):
        # cross-track error: the robot's offset from the reference line y = 0
        cte = robot.y
        steer = controller(cte, 0.0)
        robot.move(steer, speed)
        x_trajectory.append(robot.x)
        y_trajectory.append(robot.y)
    return x_trajectory, y_trajectory

# + [markdown] colab_type="text" id="YhdFR33e_fHh"
# Launch and plot the trajectory; here you need to find the optimal PID parameters (currently set to 1, 1, 1).
# + colab={} colab_type="code" id="VJcr7-Me5R1a"
def plot(tau_p, tau_d, tau_i):
    robot = Robot()
    robot.set(0, 1, 0)
    x_trajectory, y_trajectory = run(robot, tau_p, tau_d, tau_i)
    plt.plot(x_trajectory, y_trajectory, 'g', label='PID controller')
    plt.plot(x_trajectory, np.zeros(len(x_trajectory)), 'r', label='reference')
    plt.legend()
    plt.show()
# -

plot(1, 1, 1)

plot(1, 0, 1)

plot(1, 1, 0)

plot(1, 0.5, 0)  # Seems like the most optimal set of params.
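The effect of the three gains can also be studied without the robot at all. The sketch below is an assumption-laden toy: a minimal PID class mirroring the `PIDController` logic above, driving a simple integrator plant (the control nudges the state directly each step) rather than the robot's bicycle model, with hand-picked gains.

```python
class PID:
    """Minimal mirror of the PIDController above (same three terms)."""
    def __init__(self, tau_p, tau_d, tau_i, dt=0.1):
        self.tau_p, self.tau_d, self.tau_i, self.dt = tau_p, tau_d, tau_i, dt
        self.prev_error = None
        self.integral = 0.0

    def __call__(self, y, target):
        error = target - y
        d = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        self.integral += error * self.dt
        return self.tau_p * error + self.tau_d * d + self.tau_i * self.integral

# Toy integrator plant (an assumption for illustration only):
# each step, the state moves by control * dt.
pid = PID(tau_p=0.5, tau_d=0.1, tau_i=0.001)
y = 1.0  # start 1.0 above the target line y = 0
for _ in range(200):
    y += pid(y, 0.0) * pid.dt

print(f"offset after 200 steps: {y:.4f}")
```

The proportional term does the bulk of the correction, the derivative term damps overshoot, and the small integral term removes any residual bias; the same intuition carries over to choosing `tau_p`, `tau_d`, `tau_i` for the robot.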
PID_controller.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="qFdPvlXBOdUN" # # Better ML Engineering with ML Metadata # # # # ## Learning Objectives # # 1. Download the dataset # 2. Create an InteractiveContext # 3. Construct the TFX Pipeline # 4. Query the MLMD Database # # # # ## Introduction # # # + [markdown] id="xHxb-dlhMIzW" # Assume a scenario where you set up a production ML pipeline to classify penguins. The pipeline ingests your training data, trains and evaluates a model, and pushes it to production. # # However, when you later try using this model with a larger dataset that contains different kinds of penguins, you observe that your model does not behave as expected and starts classifying the species incorrectly. # # At this point, you are interested in knowing: # # * What is the most efficient way to debug the model when the only available artifact is the model in production? # * Which training dataset was used to train the model? # * Which training run led to this erroneous model? # * Where are the model evaluation results? # * Where to begin debugging? # # [ML Metadata (MLMD)](https://github.com/google/ml-metadata) is a library that leverages the metadata associated with ML models to help you answer these questions and more. A helpful analogy is to think of this metadata as the equivalent of logging in software development. MLMD enables you to reliably track the artifacts and lineage associated with the various components of your ML pipeline. # # In this notebook, you set up a TFX Pipeline to create a model that classifies penguins into three species based on the body mass and the length and depth of their culmens, and the length of their flippers. You then use MLMD to track the lineage of pipeline components. 
# # Each learning objective will correspond to a __#TODO__ in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/production_ml/labs/mlmd_tutorial.ipynb) -- try to complete that notebook first before reviewing this solution notebook. # # + [markdown] id="MUXex9ctTuDB" # ## Setup # # First, we install and import the necessary packages, set up paths, and download data. # + [markdown] id="lko0xn8JxI6F" # ### Upgrade Pip # # # + id="7pXW--mlxQhY" colab={"base_uri": "https://localhost:8080/"} outputId="47f1970f-5b96-4d0d-9a8f-b226010f8c84" # !pip install --upgrade pip # + [markdown] id="mQV-Cget1S8t" # ### Install and import TFX # + id="82jOhrcA36YA" colab={"base_uri": "https://localhost:8080/"} outputId="5d8ef195-f404-427b-a1bd-e5b8c92dccec" # !pip install -q -U tfx # + [markdown] id="OD2cRhwM3ez2" # Please ignore the incompatibility error and warnings. Make sure to re-run the cell. # # + [markdown] id="OD2cRhwM3ez2" # You must restart the kernel after installing TFX. Select **Kernel > Restart kernel > Restart** from the menu. # # Do not proceed with the rest of this notebook without restarting the kernel. # + [markdown] id="ohOztGn2wc1z" # ### Import other libraries # + id="IqR2PQG4ZaZ0" import os import tempfile import urllib import pandas as pd import tensorflow_model_analysis as tfma from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext # + [markdown] id="JKo31y2L5hCy" # Import the MLMD library. 
# + id="FNYHX6zA5gE5"
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import ml_metadata as mlmd
print('MLMD version: {}'.format(mlmd.__version__))
from ml_metadata.proto import metadata_store_pb2

# + [markdown] id="UhNtHfuxCGVy"
# ## Download the dataset
#
# We use the [Palmer Penguins dataset](https://allisonhorst.github.io/palmerpenguins/articles/intro.html) which can be found on [Github](https://github.com/allisonhorst/palmerpenguins). We processed the dataset by leaving out any incomplete records, dropping the `island` and `sex` columns, and converting the labels to `int32`. The dataset contains 334 records of the body mass and the length and depth of penguins' culmens, and the length of their flippers. You use this data to classify penguins into one of three species.

# + id="B_NibNnjzGHu" colab={"base_uri": "https://localhost:8080/"} outputId="a2cd641a-97f3-4f96-8a99-bb51063a3fca"
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')

# TODO
# Join various path components
_data_filepath = os.path.join(_data_root, "penguins_processed.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)

# + [markdown] id="8NXg2bGA19HJ"
# ## Create an InteractiveContext
#
# To run TFX components interactively in this notebook, create an `InteractiveContext`. The `InteractiveContext` uses a temporary directory with an ephemeral MLMD database instance.
#
# In general, it is a good practice to group similar pipeline runs under a `Context`.

# + id="bytrDFKh40mi" colab={"base_uri": "https://localhost:8080/"} outputId="affd9bd7-8c47-4eef-bf85-c0628efac593"
# TODO
interactive_context = InteractiveContext()

# + [markdown] id="e-58fa9S6Nao"
# ## Construct the TFX Pipeline
#
# A TFX pipeline consists of several components that perform different aspects of the ML workflow.
In this notebook, you create and run the `ExampleGen`, `StatisticsGen`, `SchemaGen`, and `Trainer` components and use the `Evaluator` and `Pusher` component to evaluate and push the trained model. # # Refer to the [components tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/components_keras) for more information on TFX pipeline components. # + [markdown] id="urh3FTb81yyM" # Note: Constructing a TFX Pipeline by setting up the individual components involves a lot of boilerplate code. For the purpose of this notebook, it is alright if you do not fully understand every line of code in the pipeline setup. # + [markdown] id="bnnq7Gf8CHZJ" # ### Instantiate and run the ExampleGen Component # + id="H9zaBZh3C_9x" colab={"base_uri": "https://localhost:8080/", "height": 277} outputId="74164cb5-8237-4c86-9722-36e21d42b750" # TODO example_gen = tfx.components.CsvExampleGen(input_base=_data_root) interactive_context.run(example_gen) # + [markdown] id="nqxye_p1DLmf" # ### Instantiate and run the StatisticsGen Component # + id="s67sHU_vDRds" colab={"base_uri": "https://localhost:8080/", "height": 163} outputId="076fb1dd-c618-4f62-c865-418eb2956ae3" # TODO statistics_gen = tfx.components.StatisticsGen( examples=example_gen.outputs['examples']) interactive_context.run(statistics_gen) # + [markdown] id="xib9oRb_ExjJ" # ### Instantiate and run the SchemaGen Component # + id="csmD4CSUE3JT" colab={"base_uri": "https://localhost:8080/", "height": 163} outputId="236dbf24-9db3-461b-97e5-fd3c256beab2" # TODO infer_schema = tfx.components.SchemaGen( statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True) interactive_context.run(infer_schema) # + [markdown] id="_pYNlw7BHUjP" # ### Instantiate and run the Trainer Component # # # + id="MTxf8xs_kKfG" # Define the module file for the Trainer component trainer_module_file = 'penguin_trainer.py' # + id="f3nLHEmUkRUw" colab={"base_uri": "https://localhost:8080/"} outputId="72eb7d45-440e-429c-8802-5c52fd4ac7d6" # %%writefile 
{trainer_module_file} # Define the training algorithm for the Trainer module file import os from typing import List, Text import tensorflow as tf from tensorflow import keras from tfx import v1 as tfx from tfx_bsl.public import tfxio from tensorflow_metadata.proto.v0 import schema_pb2 # Features used for classification - culmen length and depth, flipper length, # body mass, and species. _LABEL_KEY = 'species' _FEATURE_KEYS = [ 'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g' ] def _input_fn(file_pattern: List[Text], data_accessor: tfx.components.DataAccessor, schema: schema_pb2.Schema, batch_size: int) -> tf.data.Dataset: return data_accessor.tf_dataset_factory( file_pattern, tfxio.TensorFlowDatasetOptions( batch_size=batch_size, label_key=_LABEL_KEY), schema).repeat() def _build_keras_model(): inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS] d = keras.layers.concatenate(inputs) d = keras.layers.Dense(8, activation='relu')(d) d = keras.layers.Dense(8, activation='relu')(d) outputs = keras.layers.Dense(3)(d) model = keras.Model(inputs=inputs, outputs=outputs) model.compile( optimizer=keras.optimizers.Adam(1e-2), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy()]) return model def run_fn(fn_args: tfx.components.FnArgs): schema = schema_pb2.Schema() tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema) train_dataset = _input_fn( fn_args.train_files, fn_args.data_accessor, schema, batch_size=10) eval_dataset = _input_fn( fn_args.eval_files, fn_args.data_accessor, schema, batch_size=10) model = _build_keras_model() model.fit( train_dataset, epochs=int(fn_args.train_steps / 20), steps_per_epoch=20, validation_data=eval_dataset, validation_steps=fn_args.eval_steps) model.save(fn_args.serving_model_dir, save_format='tf') # + [markdown] id="qcmSNiqq5QaV" # Run the `Trainer` component. 
# + id="4AzsMk7oflMg" colab={"base_uri": "https://localhost:8080/", "height": 455} outputId="6eed1366-b817-4308-8086-9ec2b5f6322e" trainer = tfx.components.Trainer( module_file=os.path.abspath(trainer_module_file), examples=example_gen.outputs['examples'], schema=infer_schema.outputs['schema'], train_args=tfx.proto.TrainArgs(num_steps=100), eval_args=tfx.proto.EvalArgs(num_steps=50)) interactive_context.run(trainer) # + [markdown] id="gdCq5c0f5MyA" # ### Evaluate and push the model # # Use the `Evaluator` component to evaluate and 'bless' the model before using the `Pusher` component to push the model to a serving directory. # + id="NDx-fTUb6RUU" _serving_model_dir = os.path.join(tempfile.mkdtemp(), 'serving_model/penguins_classification') # + id="PpS4-wCf6eLR" eval_config = tfma.EvalConfig( model_specs=[ tfma.ModelSpec(label_key='species', signature_name='serving_default') ], metrics_specs=[ tfma.MetricsSpec(metrics=[ tfma.MetricConfig( class_name='SparseCategoricalAccuracy', threshold=tfma.MetricThreshold( value_threshold=tfma.GenericValueThreshold( lower_bound={'value': 0.6}))) ]) ], slicing_specs=[tfma.SlicingSpec()]) # + id="kFuH1YTh8vSf" colab={"base_uri": "https://localhost:8080/", "height": 387} outputId="cf8df990-bdad-4d96-b309-4823c9283175" evaluator = tfx.components.Evaluator( examples=example_gen.outputs['examples'], model=trainer.outputs['model'], schema=infer_schema.outputs['schema'], eval_config=eval_config) interactive_context.run(evaluator) # + id="NCV9gcCQ966W" colab={"base_uri": "https://localhost:8080/", "height": 184} outputId="1fceb932-4afe-4043-f69a-ad945843e765" pusher = tfx.components.Pusher( model=trainer.outputs['model'], model_blessing=evaluator.outputs['blessing'], push_destination=tfx.proto.PushDestination( filesystem=tfx.proto.PushDestination.Filesystem( base_directory=_serving_model_dir))) interactive_context.run(pusher) # + [markdown] id="9K7RzdBzkru7" # Running the TFX pipeline populates the MLMD Database. 
In the next section, you use the MLMD API to query this database for metadata information.

# + [markdown] id="6GRCGQu7RguC"
# ## Query the MLMD Database
#
# The MLMD database stores three types of metadata:
#
# * Metadata about the pipeline and lineage information associated with the pipeline components
# * Metadata about artifacts that were generated during the pipeline run
# * Metadata about the executions of the pipeline
#
# A typical production environment pipeline serves multiple models as new data arrives. When you encounter erroneous results in served models, you can query the MLMD database to isolate the erroneous models. You can then trace the lineage of the pipeline components that correspond to these models to debug your models.

# + [markdown] id="o0xVYqAkJybK"
# Set up the metadata (MD) store with the `InteractiveContext` defined previously to query the MLMD database.

# + id="P1p38etAv0kC"
connection_config = interactive_context.metadata_connection_config
store = mlmd.MetadataStore(connection_config)

# All TFX artifacts are stored in the base directory
base_dir = connection_config.sqlite.filename_uri.split('metadata.sqlite')[0]

# + [markdown] id="uq-1ep4suvuZ"
# Create some helper functions to view the data from the MD store.
# + id="q1ib8yStu6CW" def display_types(types): # Helper function to render dataframes for the artifact and execution types table = {'id': [], 'name': []} for a_type in types: table['id'].append(a_type.id) table['name'].append(a_type.name) return pd.DataFrame(data=table) # + id="HmqzYZcV3UG5" def display_artifacts(store, artifacts): # Helper function to render dataframes for the input artifacts table = {'artifact id': [], 'type': [], 'uri': []} for a in artifacts: table['artifact id'].append(a.id) artifact_type = store.get_artifact_types_by_id([a.type_id])[0] table['type'].append(artifact_type.name) table['uri'].append(a.uri.replace(base_dir, './')) return pd.DataFrame(data=table) # + id="iBdGCZ0CMJDO" def display_properties(store, node): # Helper function to render dataframes for artifact and execution properties table = {'property': [], 'value': []} for k, v in node.properties.items(): table['property'].append(k) table['value'].append( v.string_value if v.HasField('string_value') else v.int_value) for k, v in node.custom_properties.items(): table['property'].append(k) table['value'].append( v.string_value if v.HasField('string_value') else v.int_value) return pd.DataFrame(data=table) # + [markdown] id="1B-jRNH0M0k4" # First, query the MD store for a list of all its stored `ArtifactTypes`. # + id="6zXSQL8s5dyL" colab={"base_uri": "https://localhost:8080/", "height": 293} outputId="4c56ab03-202c-4715-e2cf-22415f1eb0d8" display_types(store.get_artifact_types()) # + [markdown] id="quOsBgtM3r7S" # Next, query all `PushedModel` artifacts. # + id="bUv_EI-bEMMu" colab={"base_uri": "https://localhost:8080/", "height": 79} outputId="5dbd2e6e-8ac2-4cb4-94fe-732a8d81f32d" pushed_models = store.get_artifacts_by_type("PushedModel") display_artifacts(store, pushed_models) # + [markdown] id="UecjkVOqJCBE" # Query the MD store for the latest pushed model. This notebook has only one pushed model. 
# + id="N8tPvRtcPTrU" colab={"base_uri": "https://localhost:8080/", "height": 262} outputId="b8307a7e-ef5a-4e70-9302-2445dcfeda43" pushed_model = pushed_models[-1] display_properties(store, pushed_model) # + [markdown] id="f5Mz4vfP6wHO" # One of the first steps in debugging a pushed model is to look at which trained model is pushed and to see which training data is used to train that model. # # MLMD provides traversal APIs to walk through the provenance graph, which you can use to analyze the model provenance. # + id="BLfydQVxOwf3" def get_one_hop_parent_artifacts(store, artifacts): # Get a list of artifacts within a 1-hop of the artifacts of interest artifact_ids = [artifact.id for artifact in artifacts] executions_ids = set( event.execution_id for event in store.get_events_by_artifact_ids(artifact_ids) if event.type == mlmd.proto.Event.OUTPUT) artifacts_ids = set( event.artifact_id for event in store.get_events_by_execution_ids(executions_ids) if event.type == mlmd.proto.Event.INPUT) return [artifact for artifact in store.get_artifacts_by_id(artifacts_ids)] # + [markdown] id="3G0e0WIE9e9w" # Query the parent artifacts for the pushed model. # + id="pOEFxucJQ1i6" colab={"base_uri": "https://localhost:8080/", "height": 109} outputId="6f216390-a461-4f83-9039-662ec30afe0a" # TODO parent_artifacts = get_one_hop_parent_artifacts(store, [pushed_model]) display_artifacts(store, parent_artifacts) # + [markdown] id="pJror5mf-W0M" # Query the properties for the model. # + id="OSCb0bg6Qmj4" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="6e5834ba-55a3-45c0-b9fa-1fbca9ba3c25" exported_model = parent_artifacts[0] display_properties(store, exported_model) # + [markdown] id="phz1hfzc_UcK" # Query the upstream artifacts for the model. 
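The one-hop traversal in `get_one_hop_parent_artifacts` can be mimicked in plain Python over made-up event tuples. This is only a sketch with assumed IDs: the real code goes through `store.get_events_by_artifact_ids` and `store.get_events_by_execution_ids`, but the set logic is the same — find the executions that output an artifact, then collect those executions' inputs.

```python
# Made-up (artifact_id, execution_id, event_type) tuples standing in for
# MLMD events; all IDs here are assumptions for illustration.
events = [
    (1, 10, 'INPUT'),    # examples fed into the trainer
    (2, 10, 'OUTPUT'),   # trainer produced the model
    (2, 11, 'INPUT'),    # model fed into the pusher
    (3, 11, 'OUTPUT'),   # pusher produced the pushed model
]

def one_hop_parents(artifact_id):
    # executions that OUTPUT this artifact...
    producers = {e for (a, e, t) in events if a == artifact_id and t == 'OUTPUT'}
    # ...and the artifacts those executions consumed as INPUT
    return sorted({a for (a, e, t) in events if e in producers and t == 'INPUT'})

print(one_hop_parents(3))  # -> [2]  (the pushed model's parent is the trained model)
print(one_hop_parents(2))  # -> [1]  (the model's parent is the examples artifact)
```

Chaining one-hop lookups like this is exactly how you walk the provenance graph from a served model back to its training data.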
# + id="nx_-IVhjRGA4" colab={"base_uri": "https://localhost:8080/", "height": 109} outputId="e10a303d-b16a-47eb-d529-da3c4ae11f97" model_parents = get_one_hop_parent_artifacts(store, [exported_model]) display_artifacts(store, model_parents) # + [markdown] id="00jqfk6o_niu" # Get the training data the model trained with. # + id="2nMECsKvROEX" colab={"base_uri": "https://localhost:8080/", "height": 231} outputId="3a92ab1b-8411-45ca-ab57-f57ab65d83e6" used_data = model_parents[0] display_properties(store, used_data) # + [markdown] id="GgTMTaew_3Fe" # Now that you have the training data that the model trained with, query the database again to find the training step (execution). Query the MD store for a list of the registered execution types. # + id="8cBKQsScaD9a" colab={"base_uri": "https://localhost:8080/", "height": 231} outputId="48bc447d-2064-45a5-e545-713db13e5684" display_types(store.get_execution_types()) # + [markdown] id="wxcue6SggQ_b" # The training step is the `ExecutionType` named `tfx.components.trainer.component.Trainer`. Traverse the MD store to get the trainer run that corresponds to the pushed model. # + id="Ned8BxHzaunk" colab={"base_uri": "https://localhost:8080/", "height": 415} outputId="47c4bed2-bd6d-4c7f-eee2-f88a921433da" def find_producer_execution(store, artifact): executions_ids = set( event.execution_id for event in store.get_events_by_artifact_ids([artifact.id]) if event.type == mlmd.proto.Event.OUTPUT) return store.get_executions_by_id(executions_ids)[0] # TODO trainer = find_producer_execution(store, exported_model) display_properties(store, trainer) # + [markdown] id="CYzlTckHClxC" # ## Summary # # In this tutorial, you learned about how you can leverage MLMD to trace the lineage of your TFX pipeline components and resolve issues. 
# # To learn more about how to use MLMD, check out these additional resources: # # * [MLMD API documentation](https://www.tensorflow.org/tfx/ml_metadata/api_docs/python/mlmd) # * [MLMD guide](https://www.tensorflow.org/tfx/guide/mlmd)
courses/machine_learning/deepdive2/production_ml/solutions/mlmd_tutorial.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="bbtgTz6Unf32" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 646} outputId="ad713a20-b090-4bb0-98bb-66cb956e337e" executionInfo={"status": "ok", "timestamp": 1583511148487, "user_tz": -60, "elapsed": 16287, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-dGRi3OL4Vqw/AAAAAAAAAAI/AAAAAAAAAVo/2lBwXIlCWlg/s64/photo.jpg", "userId": "13522503397701667807"}} # !pip install --upgrade tables # !pip install eli5 # !pip install xgboost # !pip install hyperopt # + id="2e99TrAmnkQy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="19463f49-51a8-46bc-f784-99dafd919a52" executionInfo={"status": "ok", "timestamp": 1583511199102, "user_tz": -60, "elapsed": 2939, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-dGRi3OL4Vqw/AAAAAAAAAAI/AAAAAAAAAVo/2lBwXIlCWlg/s64/photo.jpg", "userId": "13522503397701667807"}} import pandas as pd import numpy as np from sklearn.dummy import DummyRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor import xgboost as xgb from sklearn.metrics import mean_absolute_error as mae from sklearn.model_selection import cross_val_score, KFold from hyperopt import hp, fmin, tpe,STATUS_OK import eli5 from eli5.sklearn import PermutationImportance # + id="ALGvaA1Sn1F_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="58e26ee9-8c92-4aa3-eaf6-2e2f8a059148" executionInfo={"status": "ok", "timestamp": 1583511224311, "user_tz": -60, "elapsed": 954, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-dGRi3OL4Vqw/AAAAAAAAAAI/AAAAAAAAAVo/2lBwXIlCWlg/s64/photo.jpg", "userId": "13522503397701667807"}} # cd 
"/content/drive/My Drive/Colab Notebooks/Matrix/matrix_two/dw_matrix_car" # + id="5oqKlwwTn7uv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e7bfc203-9c3f-4db3-d37b-23e74674b8a9" executionInfo={"status": "ok", "timestamp": 1583511239093, "user_tz": -60, "elapsed": 4752, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-<KEY>", "userId": "13522503397701667807"}} df = pd.read_hdf('data/car.h5') df.shape # + id="DUeDxqbmn-aH" colab_type="code" colab={} SUFFIX_CAT = '__cat' for feat in df.columns: if isinstance(df[feat][0], list): continue factorized_values = df[feat].factorize()[0] if SUFFIX_CAT in feat: df[feat] = factorized_values else: df[feat + SUFFIX_CAT] = factorized_values # + id="gAgRAouhoFsH" colab_type="code" colab={} df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(str(x).split(' ')[0]) ) df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x)) df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(str(x).split('cm')[0].replace(' ','')) ) # + id="aDOoqpG1olX2" colab_type="code" colab={} def run_model(model, feats): X = df[feats].values y = df['price_value'].values scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error') return np.mean(scores), np.std(scores) # + id="BKmplofUoxwF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="59e6ad0b-07f8-45e6-f8eb-d892b03927a0" executionInfo={"status": "ok", "timestamp": 1583511553874, "user_tz": -60, "elapsed": 13319, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-dGRi3OL4Vqw/AAAAAAAAAAI/AAAAAAAAAVo/2lBwXIlCWlg/s64/photo.jpg", "userId": "13522503397701667807"}} feats 
=['param_napęd__cat','param_rok-produkcji','param_stan__cat','param_skrzynia-biegów__cat','param_faktura-vat__cat','param_moc','param_marka-pojazdu__cat','feature_kamera-cofania__cat','param_typ__cat','param_pojemność-skokowa','seller_name__cat','feature_wspomaganie-kierownicy__cat','param_model-pojazdu__cat','param_wersja__cat','param_kod-silnika__cat','feature_system-start-stop__cat','feature_asystent-pasa-ruchu__cat','feature_czujniki-parkowania-przednie__cat','feature_łopatki-zmiany-biegów__cat','feature_regulowane-zawieszenie__cat'] xgb_params = { 'max_depth': 5, 'n_estimators': 50, 'learning_rate': 0.1, 'seed': 0 } run_model(xgb.XGBRegressor(**xgb_params), feats) # + id="T7dpm5EXpJLF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 901} outputId="1a9d6072-21ad-48ae-c758-0cfb8cc9dcb3" executionInfo={"status": "ok", "timestamp": 1583512870249, "user_tz": -60, "elapsed": 373553, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-dGRi3OL4Vqw/AAAAAAAAAAI/AAAAAAAAAVo/2lBwXIlCWlg/s64/photo.jpg", "userId": "13522503397701667807"}} def obj_func(params): print("training with params: ") print(params) mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats) return {'loss': np.abs(mean_mae), 'status': STATUS_OK} # search space xgb_reg_params = { 'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)), 'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)), 'subsample': hp.quniform('subsample', 0.5, 1, 0.05), 'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05), 'objective': 'reg:squarederror', 'n_estimators': 100, 'seed': 0, } ## run best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=25) best # + id="oJ35esrHspte" colab_type="code" colab={}
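One gotcha worth noting about the `fmin` call above: for `hp.choice` parameters, the returned `best` dict holds the *index* of the winning option, not the option's value (`hyperopt.space_eval(space, best)` maps indices back to values). A stdlib-only sketch of that decoding, using the same option lists as the search space and a hypothetical `best` result:

```python
# Options mirroring hp.choice('learning_rate', ...) and hp.choice('max_depth', ...)
learning_rates = [0.05, 0.1, 0.15, 0.2, 0.25, 0.3]
max_depths = list(range(5, 16))

# Hypothetical result from fmin: these are INDICES into the option lists.
best = {'learning_rate': 2, 'max_depth': 4}

# Map each index back to the actual hyperparameter value.
decoded = {
    'learning_rate': learning_rates[best['learning_rate']],
    'max_depth': max_depths[best['max_depth']],
}
print(decoded)  # {'learning_rate': 0.15, 'max_depth': 9}
```

In practice `hyperopt.space_eval(xgb_reg_params, best)` performs this lookup for the whole space at once; the manual version above just makes the index-vs-value distinction explicit.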
day5.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: lovei # language: python # name: lovei # --- # + import gensim from loveisland.common.functions import Functions as F from collections import Counter import seaborn as sns from matplotlib import pyplot as plt import pandas as pd import numpy as np from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() plt.style.use("bmh") # - PALETTE = F.get_palette() ORIGINAL_ISLANDERS = F.get_islanders() # + class PrepData: def __init__(self): self.df = None def import_df(self, col="date", last="2019-06-02"): self.df = F.import_all() self.df = self.df[self.df[col] <= last] return self def format_cols(self): self.df["tokens"] = self.df["tokens"].apply(lambda x: F.str_to_list(x)) self.df["islanders"] = self.df["islanders"].apply(lambda x: F.str_to_list(x)) return self def ngrams(self, col="tokens"): ngram = gensim.models.Phrases(self.df[col]) self.df[col] = self.df[col].apply(lambda x: ngram[x]) self.df[col] = self.df[col].apply(lambda x: [i.replace(" ", "") for i in x]) return self class AggFunctions: def get_ngrams(self, df, col="tokens"): # check each token for an ngram separator ("_" in x alone would test list membership) df["inc_ngram"] = df[col].apply(lambda x: "yes" if any("_" in i for i in x) else "no") return [i for i in self.get_tokens(df) if "_" in i and "status" not in i] @staticmethod def get_counts(tokens): counts = pd.DataFrame.from_dict( Counter(tokens), orient="index", columns=["count"] ) counts.index.name = "token" return ( counts.reset_index() .sort_values("count", ascending=False) .reset_index(drop=True) ) def count_df(self, df, group): df = ( df.groupby(group)["url"] .count() .reset_index(name="count") .sort_values(by=group, ascending=True) ) return self.format_date(df) @staticmethod def format_date(df, col="date"): if "date" in df.columns: df["date"] = df["date"].astype(str) return df.reset_index(drop=False) def inc_islander(self, df, col="islanders"):
df[col] = df[col].apply(lambda x: [i for i in x if i in ORIGINAL_ISLANDERS]) df["inc_islander"] = np.where(df[col].str.len() < 1, "No", "Yes") df = self.count_df(df, ["date", "inc_islander"]) df["perc"] = df.groupby(["date"])["count"].apply(lambda x: x * 100 / sum(x)) return df[df["inc_islander"] == "Yes"] @staticmethod def islander_counts(df): counts = df.count() counts.index.name = "islander" counts = counts.reset_index(name="count") return counts[counts["islander"].isin(ORIGINAL_ISLANDERS)].reset_index( drop=True ) @staticmethod def get_tokens(df, col="tokens"): return [item for sublist in df[col].to_list() for item in sublist] def get_token_df(self, df, date, col="tokens"): n = df.url.nunique() df = self.get_counts(self.get_tokens(df, col)) df["n_tweets"] = n df["date"] = date df["percent"] = df["count"] / sum(df["count"]) return df @staticmethod def most_pop(df): return ( df[~df["text"].str.contains("pic")] .sort_values("favs", ascending=False) .groupby("date") .head(1) .sort_values("date", ascending=True) .reset_index(drop=True) ) # - af = AggFunctions() # + prd = PrepData() prd.import_df().format_cols().ngrams().ngrams() df = prd.df.copy() # + to_plot = af.count_df(df, "date") to_plot["date"] = to_plot["date"].astype(str) fig = plt.figure(figsize=(12, 5)) ax1 = fig.add_subplot(111) sns.barplot("date", "count", data=to_plot, color="Red", ax=ax1) plt.xticks(rotation=45, ha="right") ax1.set( xlabel="Date", ylabel="Number of Tweets", title="Number of tweets per day about love island in the two weeks prior to the first episode", ); # + to_plot = af.inc_islander(df) fig = plt.figure(figsize=(12, 5)) ax1 = fig.add_subplot(111) sns.barplot("date", "perc", data=to_plot, ax=ax1, color="Red") plt.xticks(rotation=45, ha="right") ax1.set( xlabel="Date", ylabel="Percentage of all tweets", title="Percentage of all tweets about love island per day that explicitly reference an islander", ); # + to_plot = af.islander_counts(df) fig = plt.figure(figsize=(12, 5)) ax1 = 
fig.add_subplot(111) sns.barplot( "islander", "count", "islander", data=to_plot, ax=ax1, palette=PALETTE, dodge=False ) plt.legend().remove() ax1.set( xlabel="Islander", ylabel="Number of Tweets", title="Number of Tweets that explicitly reference an islander in the two weeks prior to the first episode", ); # - n_grams = af.get_ngrams(df) counts = af.get_counts(n_grams) counts.head(15) # + to_plot = af.get_token_df(df, "all") fig = plt.figure(figsize=(15, 5)) ax1 = fig.add_subplot(111) sns.barplot("token", "count", data=to_plot.head(30), color="Red", ax=ax1) plt.xticks(rotation=60) ax1.set( xlabel="Token", ylabel="Number of times used", title="Graph to show word counts of top 30 most popular words in tweets about love island \n \ sent between 2019-05-20 and 2019-06-02 (day before the show started)", ); # + to_plot = af.count_df(df, "user") to_plot = to_plot.sort_values("count", ascending=False).head(20) fig = plt.figure(figsize=(12, 5)) ax1 = fig.add_subplot(111) sns.barplot("user", "count", data=to_plot, color="Red", ax=ax1) plt.xticks(rotation=45, ha="right") ax1.set( xlabel="User", ylabel="Number of Tweets", title="Number of tweets per user about love island in the two weeks prior to the first episode \n \ (showing the 20 users who have tweeted the most)", ); # + df["text"] = df["text"].astype(str) to_plot = af.most_pop(df) to_plot["date"] = to_plot["date"].astype(str) fig = plt.figure(figsize=(30, 12)) ax1 = fig.add_subplot(111) sns.barplot("favs", "date", data=to_plot, color="Red", ax=ax1) for i, row in to_plot.iterrows(): ax1.annotate(row["text"], xy=(0, 0), xytext=(500, i + 0.2)) ax1.annotate(row["user"], xy=(0, 0), xytext=(500, i - 0.1)) ax1.set( xlabel="Number of Favourites", ylabel="Date", title="Most popular tweet (based on total favourites) per day about love island before the series started", );
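The per-date percentage computed in `inc_islander` above uses a `groupby(...).apply` over counts; the same result can be written with `transform`, which keeps the output aligned with the original rows. A small self-contained sketch with toy data (the column names mirror the real frame, the values are made up):

```python
import pandas as pd

# Toy per-day counts of tweets that do / do not mention an islander.
df = pd.DataFrame({
    "date": ["d1", "d1", "d2", "d2"],
    "inc_islander": ["Yes", "No", "Yes", "No"],
    "count": [30, 70, 10, 40],
})

# Percentage of each row's count within its date group.
df["perc"] = df["count"] * 100 / df.groupby("date")["count"].transform("sum")

print(df.loc[df["inc_islander"] == "Yes", ["date", "perc"]])
```

`transform("sum")` broadcasts each group's total back to its member rows, so the division is a plain vectorised operation with no index realignment.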
notebooks/1_pre_show_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import pandas as pd import json import numpy as np df = pd.read_csv('data/2014-15_gems_jade.csv') # + # df = df[df['country'] == "Myanmar"] # - df.head() # ## Clean data # + df_gemsjade = df df_gemsjade.rename(columns={'company_name': 'Company_name_cl'}, inplace=True) df_gemsjade['type'] = 'entity' df_gemsjade['target_type'] = '' # - df_gemsjade.head() # + append_dict_others = [{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Myanmar Gems Enterprise', 'name_of_revenue_stream': 'Production Royalties', 'value_reported': 4397493510 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Myanmar Gems Enterprise', 'name_of_revenue_stream': 'Sale Split', 'value_reported': 7636322539 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Myanmar Gems Enterprise', 'name_of_revenue_stream': 'Sales Royalties', 'value_reported': 28379373647 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Myanmar Gems Enterprise', 'name_of_revenue_stream': 'Service Fees', 'value_reported': 12162588706 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Myanmar Gems Enterprise', 'name_of_revenue_stream': 'Permit Fees', 'value_reported': 72073329715 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Myanmar Gems Enterprise', 'name_of_revenue_stream': 'Incentive Fees', 'value_reported': 185070361 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Myanmar Gems Enterprise', 'name_of_revenue_stream': 'Other significant payments', 'value_reported': 3645329861 }, {'Company_name_cl': 'Companies not in 
EITI Reconciliation', 'type': 'entity', 'paid_to': 'Myanmar Gems Enterprise', 'name_of_revenue_stream': 'Emporium Fees / Sale Fees', 'value_reported': 5360554108 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Customs Department', 'name_of_revenue_stream': 'Customs Duties', 'value_reported': 4822220928 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Internal Revenue Department', 'name_of_revenue_stream': 'Commercial Tax', 'value_reported': 4569416181 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Internal Revenue Department', 'name_of_revenue_stream': 'Royalties', 'value_reported': 22350747468 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Internal Revenue Department', 'name_of_revenue_stream': 'Income Tax', 'value_reported': 2091258147 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Internal Revenue Department', 'name_of_revenue_stream': 'Withholding Tax', 'value_reported': 616601787 }, {'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity', 'paid_to': 'Internal Revenue Department', 'name_of_revenue_stream': 'Capital Gains Tax', 'value_reported': 2860000 } ] others_df = pd.DataFrame(append_dict_others) #others_df['total_payments'] = others_df['value_reported'] df_gemsjade = pd.concat([df_gemsjade, others_df]) df_gemsjade # - df_gemsjade['name_of_revenue_stream'] = df_gemsjade['name_of_revenue_stream'].replace({'Other significant payments (&gt; 50,000 USD)': 'Other significant payments (> 50,000 USD)'}) company_totals = df_gemsjade.pivot_table(index=['Company_name_cl'], aggfunc='sum')['value_reported'] company_totals = company_totals.to_frame() company_totals.rename(columns={'value_reported': 'total_payments'}, inplace=True) company_totals.reset_index(level=0, inplace=True) 
company_totals.sort_values(by=['total_payments'], ascending = False, inplace=True) company_totals df_gemsjade = pd.merge(df_gemsjade, company_totals, on='Company_name_cl') # ## Remove negative payments for Sankey df_gemsjade = df_gemsjade[df_gemsjade["value_reported"] > 0] df_gemsjade = df_gemsjade.sort_values(by=['total_payments'], ascending=False) df_gemsjade # ## Prepare Source-Target-Value dataframe # + links = pd.DataFrame(columns=['source','target','value','type']) # + to_append = df_gemsjade.groupby(['name_of_revenue_stream','paid_to'],as_index=False)['type','value_reported','total_payments'].sum() #to_append["target"] = "Myanmar Gems Enterprise" to_append.rename(columns = {'name_of_revenue_stream':'source', 'value_reported' : 'value', 'paid_to': 'target'}, inplace = True) to_append = to_append.sort_values(by=['value'], ascending = False) to_append['target_type'] = 'entity' links = pd.concat([links,to_append]) print(to_append['value'].sum()) links # + append_dict_transfers = [{'source': 'Myanmar Gems Enterprise', 'type': 'entity', 'target': 'State Contribution', 'value': 46833942000 }, {'source': 'Myanmar Gems Enterprise', 'type': 'entity', 'target': 'Commercial Tax', 'value': 15000000 }, {'source': 'Myanmar Gems Enterprise', 'type': 'entity', 'target': 'Corporate Income Tax', 'value': 53788313000 }] """ {'source': 'State Contribution', 'target-type': 'entity','target-type': 'entity', 'target': 'Ministry of Finance', 'value': 46833942000 }, {'source': 'Myanmar Gems Enterprise', 'type': 'entity', 'target': 'Royalties on Production', 'value': 17249087176 }, {'source': 'Corporate Income Tax', 'target-type': 'entity','target-type': 'entity', 'target': 'Internal Revenue Department', 'value': 53788313000 }, {'source': 'Commercial Tax', 'target-type': 'entity', 'target': 'Internal Revenue Department', 'value': 15000000 }] """ append_dict_transfers_df = pd.DataFrame(append_dict_transfers) #links = pd.concat([links, append_dict_transfers_df]) # + to_append = 
df_gemsjade.groupby(['name_of_revenue_stream','Company_name_cl','type'],as_index=False) \ ['value_reported','total_payments'] \ .agg({'value_reported':sum,'total_payments':'first'}) to_append.rename(columns = {'Company_name_cl':'source','name_of_revenue_stream':'target', 'value_reported' : 'value'}, inplace = True) to_append = to_append.sort_values(by=['total_payments'], ascending = False) links = pd.concat([links,to_append]) print(to_append['value'].sum()) #links to_append # + #to_append = df.groupby(['name_of_revenue_stream','Company_name_cl'],as_index=False)['Value (USD)'].sum() #to_append.rename(columns = {'Payment Type':'source','Reporting Company':'target', 'Value (USD)' : 'value'}, inplace = True) #to_append = to_append.sort_values(by=['value'], ascending = False) #links = pd.concat([links,to_append]) #print(to_append['value'].sum()) #links # + unique_source = links['source'].unique() unique_targets = links['target'].unique() unique_source = pd.merge(pd.DataFrame(unique_source), links, left_on=0, right_on='source', how='left') unique_source = unique_source.filter([0,'type']) unique_targets = pd.merge(pd.DataFrame(unique_targets), links, left_on=0, right_on='target', how='left') unique_targets = unique_targets.filter([0,'target_type']) unique_targets.rename(columns = {'target_type':'type'}, inplace = True) unique_list = pd.concat([unique_source[0], unique_targets[0]]).unique() unique_list = pd.merge(pd.DataFrame(unique_list), \ pd.concat([unique_source, unique_targets]), left_on=0, right_on=0, how='left') unique_list.drop_duplicates(subset=0, keep='first', inplace=True) replace_dict = {k: v for v, k in enumerate(unique_list[0])} unique_list #unique_list = pd.concat([links['source'], links['target']]).unique() #replace_dict = {k: v for v, k in enumerate(unique_list)} # - links_replaced = links.replace({"source": replace_dict,"target": replace_dict}) links_replaced nodes = pd.DataFrame(unique_list) nodes.rename(columns = {0:'name'}, inplace = True) nodes_json= 
pd.DataFrame(nodes).to_json(orient='records') nodes_json links_json= pd.DataFrame(links_replaced).to_json(orient='records') links_json # + data = { 'links' : json.loads(links_json), 'nodes' : json.loads(nodes_json) } data_json = json.dumps(data) data_json = data_json.replace("\\","") #print(data_json) #with open('sankey_data.json', 'w') as outfile: # json.dump(data_json, outfile) text_file = open("sankey_data_2014-15.json", "w") text_file.write(data_json) text_file.close() # -
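The node-indexing step above (`replace_dict = {k: v for v, k in enumerate(unique_list)}`) is the standard way to turn named sankey links into the integer `source`/`target` indices that the d3-style JSON format expects. A stdlib-only sketch of the same idea with toy names (not the real payment data):

```python
import json

# Toy link list: (source, target, value) triples with string names.
links = [
    ("Company A", "Royalties", 100),
    ("Company B", "Royalties", 50),
    ("Royalties", "Myanmar Gems Enterprise", 150),
]

# Collect node names in first-seen order, then map each name to its index.
names = []
for src, tgt, _ in links:
    for name in (src, tgt):
        if name not in names:
            names.append(name)
index = {name: i for i, name in enumerate(names)}

# Emit the nodes/links structure the sankey renderer consumes.
data = {
    "nodes": [{"name": n} for n in names],
    "links": [{"source": index[s], "target": index[t], "value": v}
              for s, t, v in links],
}
print(json.dumps(data))
```

The link values stay untouched; only the endpoint names are swapped for positions in the node list, which is why the node list and the link list must be serialized together.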
tools/revenues/Preparing Data 2014-15 Old 01.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/ferchogameover/course-content/blob/master/tutorials/W0D3_LinearAlgebra/student/W0D3_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="ZkpqOrVY0wZB" # # Tutorial 2: Matrices # **Week 0, Day 3: Linear Algebra** # # **By Neuromatch Academy** # # __Content creators:__ <NAME> # # # __Content reviewers:__ <NAME>, <NAME>, <NAME>, <NAME> # # # __Production editors:__ <NAME>, <NAME> # + [markdown] id="CW_1PC870wZK" # **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** # # <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p> # + [markdown] id="BCrAjGoq0wZN" # --- # # Tutorial Objectives # # *Estimated timing of tutorial: 1 hour, 35 minutes* # # During today, we will learn the basics of linear algebra, focusing on the topics that underlie the material on future days in the NMA Computational Neuroscience course. In this tutorial, we focus on matrices: their definition, their properties & operations, and especially on a geometrical intuition of them. # # By the end of this tutorial, you will be able to : # * Explain matrices as a linear transformation and relate matrix properties to properties of that linear transformation # * Perform matrix multiplication by hand # * Define what eigenvalues/eigenvectors are # # # # + [markdown] id="aUvvmPpd0wZP" # **Code Credit:** # # Some elements of this problem set are from or inspired by https://openedx.seas.gwu.edu/courses/course-v1:GW+EngComp4+2019/about. 
In particular, we are using their `plot_linear_transformation` and `plot_linear_transformations` functions. # # Code under BSD 3-Clause License © 2019 <NAME>, <NAME>. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # + [markdown] id="QoZUv7OJ0wZQ" # --- # # Setup # + id="UHwsOO9g0wZR" # Imports import numpy as np import matplotlib.pyplot as plt # + cellView="form" id="Bwjdb8VK0wZT" #@title Figure settings import ipywidgets as widgets# interactive display from ipywidgets import fixed # %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # + id="9zUj_wHv0wZT" # @title Plotting functions import numpy from numpy.linalg import inv, eig from math import ceil from matplotlib import pyplot, ticker, get_backend, rc from mpl_toolkits.mplot3d import Axes3D from itertools import cycle _int_backends = ['GTK3Agg', 'GTK3Cairo', 'MacOSX', 'nbAgg', 'Qt4Agg', 'Qt4Cairo', 'Qt5Agg', 'Qt5Cairo', 'TkAgg', 'TkCairo', 'WebAgg', 'WX', 'WXAgg', 'WXCairo'] _backend = get_backend() # get current backend name # shrink figsize and fontsize when using %matplotlib notebook if _backend in _int_backends: fontsize = 4 fig_scale = 0.75 else: fontsize = 5 fig_scale = 1 grey = '#808080' gold = '#cab18c' # 
x-axis grid lightblue = '#0096d6' # y-axis grid green = '#008367' # x-axis basis vector red = '#E31937' # y-axis basis vector darkblue = '#004065' pink, yellow, orange, purple, brown = '#ef7b9d', '#fbd349', '#ffa500', '#a35cff', '#731d1d' quiver_params = {'angles': 'xy', 'scale_units': 'xy', 'scale': 1, 'width': 0.012} grid_params = {'linewidth': 0.5, 'alpha': 0.8} def set_rc(func): def wrapper(*args, **kwargs): rc('font', family='serif', size=fontsize) rc('figure', dpi=200) rc('axes', axisbelow=True, titlesize=5) rc('lines', linewidth=1) func(*args, **kwargs) return wrapper @set_rc def plot_vector(vectors, tails=None): ''' Draw 2d vectors based on the values of the vectors and the position of their tails. Parameters ---------- vectors : list. List of 2-element array-like structures, each represents a 2d vector. tails : list, optional. List of 2-element array-like structures, each represents the coordinates of the tail of the corresponding vector in vectors. If None (default), all tails are set at the origin (0,0). If len(tails) is 1, all tails are set at the same position. Otherwise, vectors and tails must have the same length. Examples -------- >>> v = [(1, 3), (3, 3), (4, 6)] >>> plot_vector(v) # draw 3 vectors with their tails at origin >>> t = [numpy.array((2, 2))] >>> plot_vector(v, t) # draw 3 vectors with their tails at (2,2) >>> t = [[3, 2], [-1, -2], [3, 5]] >>> plot_vector(v, t) # draw 3 vectors with 3 different tails ''' vectors = numpy.array(vectors) assert vectors.shape[1] == 2, "Each vector should have 2 elements." if tails is not None: tails = numpy.array(tails) assert tails.shape[1] == 2, "Each tail should have 2 elements." 
else: tails = numpy.zeros_like(vectors) # tile vectors or tails array if needed nvectors = vectors.shape[0] ntails = tails.shape[0] if nvectors == 1 and ntails > 1: vectors = numpy.tile(vectors, (ntails, 1)) elif ntails == 1 and nvectors > 1: tails = numpy.tile(tails, (nvectors, 1)) else: assert tails.shape == vectors.shape, "vectors and tail must have a same shape" # calculate xlimit & ylimit heads = tails + vectors limit = numpy.max(numpy.abs(numpy.hstack((tails, heads)))) limit = numpy.ceil(limit * 1.2) # add some margins figsize = numpy.array([2,2]) * fig_scale figure, axis = pyplot.subplots(figsize=figsize) axis.quiver(tails[:,0], tails[:,1], vectors[:,0], vectors[:,1], color=darkblue, angles='xy', scale_units='xy', scale=1) axis.set_xlim([-limit, limit]) axis.set_ylim([-limit, limit]) axis.set_aspect('equal') # if xticks and yticks of grid do not match, choose the finer one xticks = axis.get_xticks() yticks = axis.get_yticks() dx = xticks[1] - xticks[0] dy = yticks[1] - yticks[0] base = max(int(min(dx, dy)), 1) # grid interval is always an integer loc = ticker.MultipleLocator(base=base) axis.xaxis.set_major_locator(loc) axis.yaxis.set_major_locator(loc) axis.grid(True, **grid_params) # show x-y axis in the center, hide frames axis.spines['left'].set_position('center') axis.spines['bottom'].set_position('center') axis.spines['right'].set_color('none') axis.spines['top'].set_color('none') @set_rc def plot_transformation_helper(axis, matrix, *vectors, unit_vector=True, unit_circle=False, title=None): """ A helper function to plot the linear transformation defined by a 2x2 matrix. Parameters ---------- axis : class matplotlib.axes.Axes. The axes to plot on. matrix : class numpy.ndarray. The 2x2 matrix to visualize. *vectors : class numpy.ndarray. The vector(s) to plot along with the linear transformation. Each array denotes a vector's coordinates before the transformation and must have a shape of (2,). Accept any number of vectors. unit_vector : bool, optional. 
Whether to plot unit vectors of the standard basis, default to True. unit_circle: bool, optional. Whether to plot unit circle, default to False. title: str, optional. Title of the plot. """ assert matrix.shape == (2,2), "the input matrix must have a shape of (2,2)" grid_range = 20 x = numpy.arange(-grid_range, grid_range+1) X_, Y_ = numpy.meshgrid(x,x) I = matrix[:,0] J = matrix[:,1] X = I[0]*X_ + J[0]*Y_ Y = I[1]*X_ + J[1]*Y_ origin = numpy.zeros(1) # draw grid lines for i in range(x.size): axis.plot(X[i,:], Y[i,:], c=gold, **grid_params) axis.plot(X[:,i], Y[:,i], c=lightblue, **grid_params) # draw (transformed) unit vectors if unit_vector: axis.quiver(origin, origin, [I[0]], [I[1]], color=green, **quiver_params) axis.quiver(origin, origin, [J[0]], [J[1]], color=red, **quiver_params) # draw optional vectors color_cycle = cycle([pink, darkblue, orange, purple, brown]) if vectors: for vector in vectors: color = next(color_cycle) vector_ = matrix @ vector.reshape(-1,1) axis.quiver(origin, origin, [vector_[0]], [vector_[1]], color=color, **quiver_params) # draw optional unit circle if unit_circle: alpha = numpy.linspace(0, 2*numpy.pi, 41) circle = numpy.vstack((numpy.cos(alpha), numpy.sin(alpha))) circle_trans = matrix @ circle axis.plot(circle_trans[0], circle_trans[1], color=red, lw=0.8) # hide frames, set xlimit & ylimit, set title limit = 4 axis.spines['left'].set_position('center') axis.spines['bottom'].set_position('center') axis.spines['left'].set_linewidth(0.3) axis.spines['bottom'].set_linewidth(0.3) axis.spines['right'].set_color('none') axis.spines['top'].set_color('none') axis.set_xlim([-limit, limit]) axis.set_ylim([-limit, limit]) if title is not None: axis.set_title(title) @set_rc def plot_linear_transformation(matrix, *vectors, name = None, unit_vector=True, unit_circle=False): """ Plot the linear transformation defined by a 2x2 matrix using the helper function plot_transformation_helper(). 
It will create 2 subplots to visualize some vectors before and after the transformation. Parameters ---------- matrix : class numpy.ndarray. The 2x2 matrix to visualize. *vectors : class numpy.ndarray. The vector(s) to plot along with the linear transformation. Each array denotes a vector's coordinates before the transformation and must have a shape of (2,). Accept any number of vectors. unit_vector : bool, optional. Whether to plot unit vectors of the standard basis, default to True. unit_circle: bool, optional. Whether to plot unit circle, default to False. """ figsize = numpy.array([4,2]) * fig_scale figure, (axis1, axis2) = pyplot.subplots(1, 2, figsize=figsize) plot_transformation_helper(axis1, numpy.identity(2), *vectors, unit_vector=unit_vector, unit_circle=unit_circle, title='Before transformation') plot_transformation_helper(axis2, matrix, *vectors, unit_vector=unit_vector, unit_circle=unit_circle, title='After transformation') if name is not None: figure.suptitle(f'Population {name}') def plot_eig_vec_transform(W): classic = 'k' vec_names = ['a', 'b','c','d','e','f','g', 'h'] _, vecs = np.linalg.eig(W) vecs = vecs.T fig, axes = plt.subplots(1, 2, figsize=(2, 1)) colors = plt.rcParams['axes.prop_cycle'].by_key()['color'] for i in range(2): axes[i].set(xlim=[-3.5, 3.5], ylim=[-3.5,3.5]) axes[i].axis('Off') axes[i].plot([0, 0], [-3.5, 3.5], classic, alpha=.4) axes[i].plot([-3.5, 3.5], [0, 0], classic, alpha=.4) for i_vec, vec in enumerate(vecs): axes[0].arrow(0, 0, vec[0], vec[1], head_width=.2, facecolor=colors[i_vec], edgecolor=colors[i_vec], length_includes_head=True) axes[0].annotate(vec_names[i_vec], xy=(vec[0]+np.sign(vec[0])*.15, vec[1]+np.sign(vec[1])*.15), color=colors[i_vec]) transformed_vec = np.matmul(W, vec) axes[1].arrow(0, 0, transformed_vec[0], transformed_vec[1], head_width=.2, facecolor=colors[i_vec], edgecolor=colors[i_vec], length_includes_head=True) axes[1].annotate(vec_names[i_vec], 
xy=(transformed_vec[0]+np.sign(transformed_vec[0])*.15, transformed_vec[1]+np.sign(transformed_vec[1])*.15), color=colors[i_vec]) axes[0].set_title('Before') axes[1].set_title('After') # + [markdown] id="mSZX1uwK0wZe" # --- # # # Section 1: Intro to matrices # + [markdown] id="VGy0zx2o0wZi" # ## Section 1.1: Matrices to solve systems of equations # + cellView="form" id="N5kTNtPl0wZj" # @title Video 1: Systems of Equations from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1Aq4y1x7hf", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="3ecnOrMEh00", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) # + [markdown] id="ZKIUHqeK0wZk" # This video covers using a matrix to solve a linear system of equations. # # <details> # <summary> <font color='blue'>Click here for text recap of video </font></summary> # # In a variety of contexts, we may encounter systems of linear equations like this one: # # $$ # \begin{align} # 3x_1 + 2x_2 + x_3 &= y_1 \\ # 7x_1 + x_2 + 2x_3 &= y_2 \\ # x_1 - x_2 - 2x_3 &= y_3 \\ # \end{align}$$ # We may know all the x's and want to solve for y's, or we may know the y's and want to solve for the x's. 
We can solve this in several different ways but one especially appealing way is to cast it as a matrix-vector equation: # # $$\mathbf{W}\mathbf{x} = \mathbf{y}$$ # where # $$\begin{align} # \mathbf{W} &= \begin{bmatrix} 3 & 2 & 1 \\ 7 & 1 & 2 \\ 1 &-1 &-2 \end{bmatrix}, \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}\\ # \end{align}$$ # # If we know $\mathbf{W}$ and $\mathbf{x}$, we can solve for $\mathbf{y}$ using matrix-vector multiplication. Each row of $\mathbf{y}$ is computed as the dot product of the equivalent row of $\mathbf{W}$ and $\mathbf{x}$. # # If we know $\mathbf{W}$ and $\mathbf{y}$, we can sometimes solve for $\mathbf{x}$ by using the inverse of $\mathbf{W}$: # $$\mathbf{x} = \mathbf{W}^{-1}\mathbf{y}$$ # The reason this only sometimes works will be explored later in this tutorial! # </details> # # + [markdown] id="_ju2DOMa0wZk" # ### Coding Exercise 1.1: Understanding neural transformations # # We will look at a group of 2 LGN neurons which get input from 2 retinal neurons: we will call the population of LGN neurons population p. Below, we have the system of linear equations that dictates the neuron models for each population. $r_1$ and $r_2$ correspond to the retinal neural activities (of neuron 1 and 2). $g_{p_1}$ and $g_{p_2}$ correspond to the responses of the LGN neurons 1 and 2 in population p. # # $$\begin{align} # r_1 + 3r_2 &= g_{p_1} \\ # 2r_1 + r_2 &= g_{p_2} \\ # \end{align}$$ # # # # 1) Cast this as a matrix-vector multiplication: # # $$ \mathbf{g}_p = \mathbf{P}\mathbf{r} $$ # where P is the weight matrix to population p. # # 2) Let's say we only recorded from the LGN cells (and know the weight matrix) and are trying to figure out how the retinal cells responded.
# Solve the matrix equation for the given LGN activities:
#
# $$\mathbf{g}_p = \begin{bmatrix}
# 16 \\
# 7 \\
# \end{bmatrix}$$

# + id="Eik3XrC20wZo" outputId="a60b9b1e-cf43-4524-cd72-e017088a8764" colab={"base_uri": "https://localhost:8080/"}
# Create P (using np array); each row holds the coefficients of one equation
P = np.array([[1, 3],
              [2, 1]])

# Create g_p (using np array)
g_p = np.array([16, 7])

# Solve for r (using np.linalg.inv)
r = np.linalg.inv(P) @ g_p

# Print r
print(r)

# + [markdown] id="QuX9LYwt0wZp"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D3_LinearAlgebra/solutions/W0D3_Tutorial2_Solution_2fa7cf85.py)
#
#

# + [markdown] id="VViX5FVt0wZq"
# You should see the output [1, 5]

# + [markdown] id="jb_luyzy0wZu"
# You can recover how the retinal neurons respond given the weight matrix and LGN responses! You have solved the system of equations using matrices. We can't always do this though: let's say we have a different group of 2 LGN neurons - population q - with the following weight matrix from the retinal neurons:
#
# $$Q = \begin{bmatrix}
# 4 & 1 \\
# 8 & 2 \\
# \end{bmatrix}$$
#
# As you can see if you run the next code cell, we get an error if we try to invert this matrix to solve the equation. We'll find out more about this in the next sections.

# + id="2t9F_-Be0wZu" outputId="fdb23d75-b748-43f1-c510-3c737453690b" colab={"base_uri": "https://localhost:8080/", "height": 376}
g_q = np.array([16, 7])
Q = np.array([[4, 1], [8, 2]])
print(np.linalg.inv(Q) @ g_q)  # error due to Q inverse not existing
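# The error above can be anticipated without attempting the inversion: a quick numerical check (a small sketch, not part of the original exercise) of the determinant and rank of Q shows why no inverse exists.

```python
import numpy as np

Q = np.array([[4, 1],
              [8, 2]])

# The determinant is 4*2 - 1*8 = 0, so Q is singular and has no inverse
print(np.linalg.det(Q))

# The rank is the dimensionality of Q's range: 1 rather than 2,
# because the second row is just 2 times the first
print(np.linalg.matrix_rank(Q))  # 1
```

# A rank of 1 means Q collapses all of 2D space onto a line, so distinct retinal activity patterns can map to the same LGN response and cannot be uniquely recovered.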
# + [markdown] id="wyHUJ51j0wZv" # ## Section 1.2: Matrices as linear transformations # # *Estimated timing to here from start of tutorial: 20 min* # + cellView="form" id="JZjpA-FN0wZw" outputId="9d3de468-b623-40f2-931e-f4b647919881" colab={"base_uri": "https://localhost:8080/", "height": 580, "referenced_widgets": ["4a97665c889c4216b92082347b6a86a4", "5dc5bee50a064444a675af24a3fb1394", "4792f007096a4ab7a5702e0d27592a93", "325b1c938c1b4073abb60a651f9e37d2", "8b612e0e9c074ff583d3b13012548502", "<KEY>"]} # @title Video 2: Linear Transformations from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1KB4y1T7zM", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="N6UUV9tVIr8", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) # + [markdown] id="TZAAb32M0wZw" # Matrices can be thought of as enacting linear transformations. When multiplied with a vector, they transform it into another vector. In fact, they are transforming a grid of space in a linear manner: the origin stays in place and grid lines remain straight, parallel, and evenly spaced. # # + [markdown] id="qBD4axU80wZx" # ### Coding Exercise 1.2: Creating matrices for transformations # # Come up with a matrix $A$ for which the corresponding linear transformation is reflection through the $y$ axis (flipping across the $y$ axis). 
For example, $\mathbf{x} = \begin{bmatrix} # 2 \\ # 6 \\ # \end{bmatrix}$ should become $\mathbf{b} = \begin{bmatrix} # -2 \\ # 6 \\ # \end{bmatrix}$ when multiplied with $A$. # # # **Remember to think about where your basis vectors should end up! Then your matrix consists of the transformed basis vectors. Drawing out what you want to happen can help** # + id="dtaXmSbE0wZx" outputId="c1f1106f-f0a2-4211-8ff4-2186a132ba95" colab={"base_uri": "https://localhost:8080/", "height": 427} A = np.array([[-1, 0], # matrix columns = transform basis vectors [0, 1]]) # Uncomment to visualize transformation plot_linear_transformation(A) # + [markdown] id="OI_gzBqS0wZy" # [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D3_LinearAlgebra/solutions/W0D3_Tutorial2_Solution_a86560cc.py) # # *Example output:* # # <img alt='Solution hint' align='left' width=324 height=164 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D3_LinearAlgebra/static/W0D3_Tutorial2_Solution_a86560cc_0.png> # # # + [markdown] id="TEvU6uw50wZz" # Come up with a matrix $A$ for which the corresponding linear transformation is projecting onto the $x$ axis. For example, $\bar{x} = \begin{bmatrix} # 2 \\ # 3 \\ # \end{bmatrix}$ should become $\bar{b} = \begin{bmatrix} # 2 \\ # 0 \\ # \end{bmatrix}$ when multiplied with $A$. 
# + id="J6OvIdd30wZz" outputId="0b02451f-2ac4-4039-d54a-7f27a49ec482" colab={"base_uri": "https://localhost:8080/", "height": 427} A = np.array([[1, 0], [0, 0]]) # Uncomment to visualize transformation plot_linear_transformation(A) # + [markdown] id="RqpYjclt7XaS" # # + [markdown] id="TUFcXDlN0wZ0" # [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D3_LinearAlgebra/solutions/W0D3_Tutorial2_Solution_8d909295.py) # # *Example output:* # # <img alt='Solution hint' align='left' width=324 height=164 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D3_LinearAlgebra/static/W0D3_Tutorial2_Solution_8d909295_0.png> # # # + [markdown] id="a8gh58P50wZ1" # ## Section 1.3: Rank & Null Space # # *Estimated timing to here from start of tutorial: 35 min* # + cellView="form" id="JOXXjsxA0wZ1" outputId="85c76603-400c-49d6-a4e3-7f477fd02898" colab={"base_uri": "https://localhost:8080/", "height": 580, "referenced_widgets": ["dfdecdd6a92b4579beb363542ff5e68e", "<KEY>", "b2056166b8ab4b16a0011ac445fb676f", "a3b1d8f9efe04e64be53b2d81a939ed1", "f8f973ed66be4cabb0c71aa4fe3235d5", "17917955f232448baa02fa749bf79520"]} # @title Video 3: Rank & Null Space from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1vw411R7eA", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="ay2p5rgkMcY", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = 
widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)

# + [markdown] id="QAHItv5-0wZ2"
# This video covers properties of matrices, such as range, rank, and null space, in the context of matrices enacting a linear transformation.
#
# <details>
# <summary> <font color='blue'>Click here for text recap of video </font></summary>
#
# Square matrices always result in vectors with the same dimensions (number of components) but can alter the dimensionality of the transformed space. Let's say you have a 3 x 3 matrix. You will transform 3-dimensional vectors to 3-dimensional vectors. You will often be transforming from all of 3D space to all of 3D space. However, this isn't always the case! You could have a 3 x 3 matrix that always results in a vector that lies along a 2D plane through 3D space. This matrix would be transforming from a 3-dimensional vector space (all of R3) to a 2-dimensional vector space (the 2D plane).
#
# Matrices that aren't square enact transformations that change the dimensionality of the vectors. If you have a 4 x 5 matrix, you are transforming 5-dimensional vectors to 4-dimensional vectors. Similarly, if you have a 4 x 2 matrix, you are transforming 2-dimensional vectors to 4-dimensional vectors.
#
# The **range of a matrix** is the set of all possible vectors it can lead to after a transformation. The **rank of a matrix** is the dimensionality of the range.
#
# Sometimes, a matrix will transform a non-zero vector into a zero vector (the origin). The **null space** of a matrix is the set of all vectors that will be transformed into the origin.
# </details>

# + [markdown] id="bpXkxOyO0wZ2"
# ### Think! 1.3: Neural coding
#
# Let's return to the setup of the previous coding exercise: we have two populations of LGN neurons, p and q, responding to retinal neurons. Visualize the linear transformations of these matrices by running the next code cell.
# Then discuss the following questions:
#
# 1) What are the ranks of weight matrices P and Q?
#
# 2) What does the null space of these matrices correspond to in our neuroscience setting? **Advanced:** What do you think the dimensionality of the null space is for P and Q?
#
# 3) What is the intrinsic dimensionality of the population of neural responses in population p? How about in q? The intrinsic dimensionality is the minimal number of dimensions required for a complete representation of the data.
#
# 4) If we wanted to decode retinal neural activity from the LGN activities, would we always be able to completely recover the retinal activity when looking at population p? How about population q? What does this tell us about the information loss of the neural processing?

# + id="XcuR91Ds0wZ4" outputId="969d308a-24b2-497b-b2f9-5a8534abb6e0" colab={"base_uri": "https://localhost:8080/", "height": 851}
# @markdown Execute to visualize linear transformations
P = np.array([[1, 3], [2, 1]])
plot_linear_transformation(P, name = 'p')

Q = np.array([[4, 1], [8, 2]])  # because the row (8, 2) is just (4, 1) multiplied by 2, we know the result is a straight line
plot_linear_transformation(Q, name = 'q')

# + [markdown] id="B9_WnWPN0wZ5"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D3_LinearAlgebra/solutions/W0D3_Tutorial2_Solution_95edac8e.py)
#
#

# + [markdown] id="7dWI8rP50wZ5"
# ---
# # Section 2: Eigenvalues & Eigenvectors
#
# *Estimated timing to here from start of tutorial: 65 min*

# + cellView="form" id="kbRNiHa-0wZ5" outputId="78ce0cf0-d07f-4e9a-da0d-80eb907627fc" colab={"base_uri": "https://localhost:8080/", "height": 580, "referenced_widgets": ["b76a9e1c9987458d9b49682807a04909", "751c09d8fd0f4417b1989ea27e1cbbea", "94c6cb53c0d04c7dbaa274d6863e0f86", "6a88a3c501f549828df58f6c8b2b0783", "c49c16391106484bb7ef713794dd3e28", "1dd7d5a64c954cee84340a6b64b35af3"]}
# @title Video 4: Eigenstuff
from ipywidgets import widgets

out2 = \
widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
      def __init__(self, id, page=1, width=400, height=300, **kwargs):
          self.id=id
          src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
          super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="BV1KK4y1M7Ez", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="l-c7ptT7znM", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)

# + [markdown] id="uJ1X2haW0wZ6"
# This video covers eigenvalues and eigenvectors.
#
# Eigenvectors, $\mathbf{v}$, of a matrix $\mathbf{W}$ are vectors that, when multiplied by the matrix, equal a scalar multiple of themselves. That scalar multiple is the corresponding eigenvalue, $\lambda$:
#
# $$\mathbf{W}\mathbf{v} = \lambda\mathbf{v} $$
#
# If we have one eigenvector for a matrix, we technically have infinitely many: every vector along the span of that eigenvector is also an eigenvector. So, we often use the unit vector in that direction to summarize all the eigenvectors along that line.
#
# We can find the eigenvalues and eigenvectors of a matrix in numpy using `np.linalg.eig`.

# + [markdown] id="QHcJUfHX0wZ6"
# ## Think! 2: Identifying transformations from eigenvectors
#
# Earlier, we learned how to think about linear transformations in terms of where the standard basis vectors end up. We can also think about them in terms of eigenvectors.
#
# Just by looking at eigenvectors before and after a transformation, can you describe what the transformation is in words? Try for each of the two plots below.
#
# Note that I show an eigenvector for every eigenvalue.
# The x/y limits are the same before and after the transformation (so the eigenvectors are shown scaled by their eigenvalues).
#
# Here are some transformation words to jog your memory and guide discussion: contraction, expansion, horizontal vs vertical, projection onto an axis, reflection, and rotation.

# + cellView="form" id="nxph6w7Q0wZ7" outputId="5f3544f2-ac43-4eb9-be1b-9b8bda5865c3" colab={"base_uri": "https://localhost:8080/", "height": 227}
# @title
# @markdown Execute this cell to visualize vectors
W = np.array([[3, 0], [0, 1]])
plot_eig_vec_transform(W)

# + cellView="form" id="Fzj26y5y0wZ7" outputId="583c8f46-278e-42da-d93e-8c7dd8766533" colab={"base_uri": "https://localhost:8080/", "height": 227}
# @title
# @markdown Execute this cell to visualize vectors
W = np.array([[0, 1], [1, 0]])
plot_eig_vec_transform(W)

# + [markdown] id="dgEoe2Da0wZ9"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D3_LinearAlgebra/solutions/W0D3_Tutorial2_Solution_80704181.py)
#
#

# + [markdown] id="pGWNxe0B0wZ9"
# As we saw above, looking at how just the eigenvectors change after a transformation can be very informative about what that transformation was.
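# Guesses made from the plots can be confirmed numerically with the `np.linalg.eig` function mentioned above. This small check (a sketch, using the same two matrices as the plots) also verifies the defining property $\mathbf{W}\mathbf{v} = \lambda\mathbf{v}$ for each returned eigenpair.

```python
import numpy as np

for W in (np.array([[3, 0], [0, 1]]),    # first plot: horizontal expansion by 3
          np.array([[0, 1], [1, 0]])):   # second plot: reflection across the line y = x
    eigvals, eigvecs = np.linalg.eig(W)
    print(eigvals)
    # each column of eigvecs is a unit eigenvector: W @ v equals lambda * v
    for lam, v in zip(eigvals, eigvecs.T):
        assert np.allclose(W @ v, lam * v)
```

# The first matrix has eigenvalues 3 and 1 (stretch along x, no change along y); the second has eigenvalues 1 and -1, the signature of a reflection.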
# + [markdown] id="ua72vpJz0wZ9"
# ---
# # Section 3: Matrix multiplication
#
# *Estimated timing to here from start of tutorial: 80 min*

# + cellView="form" id="_gonM0_n0wZ-" outputId="3deff423-f6c5-4a70-f328-1e17156c491a" colab={"base_uri": "https://localhost:8080/", "height": 580, "referenced_widgets": ["02ff85d7c75a40f9acd766bd50262e74", "e7e08d5c74e54c94816fa44634c02492", "e1ba0738f78549c8b68814c26b4494a4", "f73c54e7f3fc4479a03db4a512ed07d3", "ec8e529ab1d94a33a46072226671ab91", "23af202818344b8b9d544ce56e4b07d9"]}
# @title Video 5: Matrix Multiplication
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
      def __init__(self, id, page=1, width=400, height=300, **kwargs):
          self.id=id
          src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
          super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="BV1Rb4y1C7cE", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="OFyWfegC9Cs", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)

# + [markdown] id="k2Jt6pda0wZ-"
# We sometimes want to multiply two matrices together, instead of a matrix with a vector. Let's say we're multiplying matrices $\mathbf{A}$ and $\mathbf{B}$ to get $\mathbf{C}$:
# $$ \mathbf{C} = \mathbf{A}\mathbf{B} $$
#
# We take the dot product of each row of A with each column of B. The resulting scalar is placed in the element of $\mathbf{C}$ that is in the same row (as the row in A) and column (as the column in B).
So the element of $\mathbf{C}$ at row 4 and column 2 is the dot product of the 4th row of $\mathbf{A}$ and the 2nd column of $\mathbf{B}$. We can write this in a formula as: # # $$\mathbf{C}_{\text{row i, column j}} = \mathbf{A}_{\text{row i}} \cdot \mathbf{B}_{\text{column j}}$$ # + [markdown] id="O4NZeO1U0wZ-" # ## Exercise 2: Computation corner # # Break out the pen and paper - it's critical to implement matrix multiplication yourself to fully understand how it works. # # Let's say we have 3 retina neurons and 2 LGN neurons. The weight matrix, $W$, between the retina and LGN neurons is: # # $$W = \begin{bmatrix} # 3 & 2 & 1 \\ # 1 & 2 & 7 \\ # \end{bmatrix}, $$ # # We are going to look at the activity at two time steps (each time step is a column). Our retina activity matrix, $R$, is: # # $$ R= \begin{bmatrix} # 0 & 1 \\ # 2 & 4 \\ # 5 & 1 \\ # \end{bmatrix} $$ # # Please compute the LGN neural activity, $G$, according to our linear model: # # $$G = WR $$. # # # Please calculate it 1) by-hand and then 2) using code. Check that the answers match! # + id="_tzzrSR80wZ_" # Compute by hand first! # + id="7t0h_awb0wZ_" outputId="face91a5-946f-411b-913f-467a5a09a6b5" colab={"base_uri": "https://localhost:8080/"} # Define R R = np.array([[0,1],[2, 4],[5, 1]]) # Define W W = np.array([[3, 2, 1], [1, 2, 7]]) # Compute G # in Python, we can use @ for matrix multiplication: matrix1 @ matrix2 G = W @ R # Print values of G print(G) # + [markdown] id="-CD8n54W0wZ_" # [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D3_LinearAlgebra/solutions/W0D3_Tutorial2_Solution_4b71ea28.py) # # # + [markdown] id="h31Izpcg0waA" # --- # # Summary # # *Estimated timing of tutorial: 1 hour, 35 minutes* # # In this tutorial, you have learned how to think about matrices from the perspective of solving a system of equations and as a linear transformation of space. 
You have learned: # - Properties of a matrix, such as rank & null space # - How the invertibility of matrices relates to the linear transform they enact # - What eigenvalues/eigenvectors are and why they might be useful # # We will be using this knowledge in many of the days in the NMA computational neuroscience course.
tutorials/W0D3_LinearAlgebra/student/W0D3_Tutorial2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="LVp8cTmUsnph" # DBSCAN Clustering algorithm # + [markdown] id="StC0mLcXsuKD" # Loading numpy and DBSCAN function from sklearn # + id="N_QYT24GskvH" from sklearn.cluster import DBSCAN import numpy as np # + [markdown] id="bNt24QWys3cJ" # Creating input array # + id="V1jqogHFs0Sz" X = np.array([[1, 1], [1, 2], [1, 0], [10, 3], [10, 4], [10, 1]]) # + [markdown] id="IoL0HwC5tDTS" # Calling DBSCAN algorithm # + id="BsRbfX_2s_W3" outputId="f33e16d1-9fe8-4d73-89f5-8146a236b4eb" colab={"base_uri": "https://localhost:8080/"} cluster = DBSCAN(eps=3, min_samples=2).fit(X) cluster.labels_ # + [markdown] id="ddh4huGOtLKQ" #
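# To demystify what `fit` just did, the core of DBSCAN can be sketched in plain Python (a simplified illustration of the idea, not scikit-learn's actual implementation): a point is a core point if it has at least `min_samples` neighbours (itself included) within distance `eps`, and clusters grow outward from core points; points reachable from no core point are labelled noise (-1).

```python
from math import dist

def dbscan(points, eps, min_samples):
    # label: None = unvisited, -1 = noise, >= 0 = cluster id
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(neighbors) < min_samples:
            labels[i] = -1  # noise (may later be relabelled as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in neighbors if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: joins the cluster, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(j_neighbors) >= min_samples:  # j is a core point: keep expanding
                seeds.extend(k for k in j_neighbors if labels[k] is None)
    return labels

X = [(1, 1), (1, 2), (1, 0), (10, 3), (10, 4), (10, 1)]
print(dbscan(X, eps=3, min_samples=2))  # [0, 0, 0, 1, 1, 1]
```

# On the toy array above this reproduces `cluster.labels_`: the first three points form cluster 0 and the last three form cluster 1.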
dbscan.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/Bhavani-Rajan/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/warm%20up%20on%20df.loc.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="TYSCxcR_v2pQ" colab_type="code" colab={} import pandas as pd # Data Frame to practice .loc[] techniques on df = pd.DataFrame({'index':[1, 2, 3, 4, 5], 'name_1':['entry1', 'entry2', 'entry3', 'entry4', 'entry5'], 'name_2':['entry1', 'entry2', 'entry3', 'entry4', 'entry5'], 'name_3':['entry1', 'entry2', 'entry3', 'entry4', 'entry5'], 'name_4':['entry1', 'entry2', 'entry3', 'entry4', 'entry5'], 'name_5':['entry1', 'entry2', 'entry3', 'entry4', 'entry5']}) # + id="QU2XTnyUv4CO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="cdf8362c-a6ab-4def-aba9-24b6c1e23115" # Set Index df.set_index('index', inplace=True) ### Select by a Single Label print(df.head()) # + id="ffzr2Pcyv6AD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 121} outputId="7ffb06f7-50b0-4dfc-a4ec-5bddbf41bac9" #1 Select and print the '1' Row print(df.loc[1]) # + id="fMcMnN6GwPf-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="a42a71d8-fd29-4088-a948-ef3994562f7c" #2 Select and print Row 1 Label with Column 1 Label to get entry1 print(df.loc[1,'name_1']) # + id="oFLdk0NRwb-x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="b9351534-5d86-44c4-ef1f-23f84283b245" ### Select Multiple Rows Using LISTS #1 Select and print Row Labels 1, 2 print(df.loc[[1,2]]) # + id="drNOLhuVxAd7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 
104} outputId="2a931a0d-8d78-4539-e976-c2aa8fa22df2" #2 Select and print Row Labels 1, 3, 5 print(df.loc[[1,3,5]]) # + id="m5jwm4nexYP6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="2c8c2573-91e9-41c8-a5eb-f795a108d827" # Now Rows WITH desired Columns #3 Select and print Row Labels 1, 2, 3 with Column Labels 'name_1', 'name_2', 'name_3' print(df.loc[[1,2,3],['name_1', 'name_2', 'name_3']]) # + id="dWJJQ6QRxrt6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 121} outputId="cb585e16-03b7-4307-cff9-c44ee5075a65" #4 Select and print Row Labels 1, 3, 4, 5 with Column Labels 'name_1', 'name_3', 'name_4', 'name_5' print(df.loc[[1,3,4,5],['name_1', 'name_3', 'name_4','name_5']]) # + id="l2OeZ2h9yCA-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="0cbfe2e8-6996-44bd-cd60-61d1d3269d1b" ### Select Multiple Row & Column Labels Using SLICE #1 Select and print Row Labels 1, 2 print(df.loc[1:2]) # + id="_zSKG5SryWtm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="619518ca-aa34-4e36-cc39-e786832deb0b" #2 Select and print Row Labels 1, 2, 3 print(df.loc[1:3]) # + id="6uM2ilkwytMY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="6b9fe207-4f10-4aa0-9d08-ffa109976355" #3 Select and print Row Labels 1, 2, 3 with Column Labels 'name_1', 'name_2', 'name_3' print(df.loc[1:3,'name_1':'name_3']) # + id="rt6mo7sjy85W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="54f7f004-ee0f-4638-ef54-2b169c2e35f1" #4 Select and print Row Labels 3, 4, 5 with Column Labels 'name_3', 'name_4', 'name_5' print(df.loc[3:5,'name_3':'name_5']) # + id="zzoAE3U5zMPd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="d06913ca-f20e-454f-cdd8-53562797bc99" ### Select Multiple Row & Column Labels Using Booleans # Reminder the Boolean List MUST be 
the same length as the Row/Column Axis #1 Select and print Row Labels 1, 2 print(df.loc[[True,True,False,False,False]]) # + id="NbUoSaEEzck3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="f627b202-22af-46a3-b170-6ddabafc7d3d" #2 Select and print Row Labels 1, 3, 5 print(df.loc[[True,False,True,False,True]]) # + id="EeW07sh10Csm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="27bf9b82-c9cc-47b6-d23d-71d1a5bd1379" #3 Select and print Row Labels 1, 2, 3 with Column Labels 'name_1', 'name_2', 'name_3' print(df.loc[[True,True,True,False,False],[True,True,True,False,False]]) # + id="J8E9Ap4L0S_M" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 121} outputId="42f0bca9-3c5f-4ec5-c3bb-1b0ef9a6d526" #4 Select and print Row Labels 1, 3, 4, 5 with Column Labels 'name_1', 'name_3', 'name_4', 'name_5' print(df.loc[[True,False,True,True,True],[True,False,True,True,True]]) # + id="0YwzKiTJ0ie_" colab_type="code" colab={} # + id="D25Invwp0ye8" colab_type="code" colab={}
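# Hand-typed boolean lists like the ones above are rarely written out in practice: they usually come from a comparison on a column, which produces a boolean Series that `.loc` accepts directly. A small sketch on a similar toy frame (the condition here is invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({'index': [1, 2, 3, 4, 5],
                   'name_1': ['entry1', 'entry2', 'entry3', 'entry4', 'entry5']})
df.set_index('index', inplace=True)

# A comparison on a column returns a boolean Series aligned on the index,
# so it can be passed straight to .loc instead of a hand-typed list
mask = df['name_1'] != 'entry2'
print(df.loc[mask])  # keeps rows 1, 3, 4, 5
```

# Because the mask is index-aligned, this works even after rows are reordered, unlike a positional boolean list.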
warm up on df.loc.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
import pandas as pd
import numpy as np
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss

# +
train_features = pd.read_csv('dataset1_train_features.csv')
test_features = pd.read_csv('dataset1_test_features.csv')

train_types = []
for row in train_features['Type']:
    if row == 'Class':
        train_types.append(1)
    else:
        train_types.append(0)
train_features['Type_encode'] = train_types

test_types = []
for row in test_features['Type']:
    if row == 'Class':
        test_types.append(1)
    else:
        test_types.append(0)
test_features['Type_encode'] = test_types

X_train = train_features.loc[:, 'Ngram1_Entity':'Type_encode']
y_train = train_features['Match']
X_test = test_features.loc[:, 'Ngram1_Entity':'Type_encode']
y_test = test_features['Match']

df_train = train_features.loc[:, 'Ngram1_Entity':'Type_encode']
df_train['Match'] = train_features['Match']
df_test = test_features.loc[:, 'Ngram1_Entity':'Type_encode']
df_test['Match'] = test_features['Match']

X_train = X_train.fillna(value=0)
X_test = X_test.fillna(value=0)

# +
# Create regularization penalty space
penaltys = ['l1', 'l2']

# Create regularization hyperparameter space
Cs = np.logspace(0, 4, 10)

class_weights = ['balanced', None]

# +
data = []
for C in Cs:
    for penalty in penaltys:
        for class_weight in class_weights:
            LR = LogisticRegression(penalty=penalty, C=C, class_weight=class_weight)
            LR.fit(X_train, y_train)
            y_pred = LR.predict(X_test)
            y_prob = LR.predict_proba(X_test)
            print('C=', C, ' penalty=', penalty, 'class_weight=', class_weight)
            f1 = f1_score(y_test, y_pred)
            log = log_loss(y_test, y_prob)
            # append the computed f1 value, not the f1_score function itself
            data.append((C, penalty, class_weight, f1, log))
            #print(classification_report(y_test, y_pred))

dataset = pd.DataFrame(data, columns=['C', 'penalty', 'class_weight', 'f1-score', 'binary_crossentropy'])
dataset.to_csv('dataset1_logreg.csv', index=False)
# -
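# `GridSearchCV` is imported at the top of this notebook but never used; the manual triple loop above can be expressed with it instead, gaining cross-validated scoring for free. A hedged sketch on synthetic data (the real feature CSVs are not recreated here, and the `solver='liblinear'`, `cv=3` choices are assumptions — liblinear is used because it supports both the l1 and l2 penalties in the grid):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Same search space as the manual loop above
param_grid = {'C': np.logspace(0, 4, 10),
              'penalty': ['l1', 'l2'],
              'class_weight': ['balanced', None]}

search = GridSearchCV(LogisticRegression(solver='liblinear'),
                      param_grid, scoring='f1', cv=3)

# Synthetic stand-in for the real training features
X, y = make_classification(n_samples=60, n_features=5, random_state=0)
search.fit(X, y)
print(search.best_params_)
```

# `search.cv_results_` then plays the role of the `dataset` DataFrame built by hand above.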
dirty_notebooks/logistic_regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # %matplotlib inline #'%' required for formatting import sqlite3 as sql import pandas as pd import matplotlib as plt # # Load DB Files into SQLite Database srpt = open('./data/sqlitesampledatabase.sql', 'r').read() conn = sql.connect('./classicmodels.db') #sqlite creates new db if not available c = conn.cursor() c.executescript(srpt) conn.commit() c.close() conn.close() # # Reading with pandas #Connecting to now populated database conn = sql.connect('./classicmodels.db') # + #Write query to show tables. We read a meta data master file and get the names qry = "SELECT name FROM sqlite_master WHERE type='table';" #pandas makes it easier to read the query using the connection #will print automatically pd.read_sql_query(qry,conn) # - #query to get first few office records from table qry2 = 'select * from offices limit 10' pd.read_sql_query(qry2,conn) #query to get first few employee records from table qry3 = 'select * from employees limit 10' pd.read_sql_query(qry3,conn) #query to find employee phone numbers #note '\' backslash escape on line breaks for formatting qry4 = 'select e.lastName, e.firstName, o.phone\ from employees as e inner join offices as o\ where e.officeCode == o.officeCode\ limit 10' pd.read_sql_query(qry4,conn) # # Matplotlib Testing # Create aggregation using SQL query, then print out plot in ggplot style #query to retrieve employee information and count cities #note '\' backslash escape on line breaks for formatting empCityAggQry = 'select count(e.employeeNumber) as empCount, o.city\ from employees as e inner join offices as o\ where e.officeCode == o.officeCode\ group by o.city' empCityAggDf = pd.read_sql_query(empCityAggQry,conn) plt.style.use("ggplot") empCityAggDf.plot(kind = 'barh', x = 'city', y = 'empCount') 
plt.style.available
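# One refinement worth noting: the queries above are fixed strings, but as soon as a query depends on a runtime value (an office code, a city name), sqlite3's `?` parameter binding is safer than building the SQL by string concatenation. A minimal self-contained sketch with a toy in-memory table (the rows here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE offices (officeCode INTEGER, city TEXT)')
conn.executemany('INSERT INTO offices VALUES (?, ?)',
                 [(1, 'San Francisco'), (2, 'Boston'), (3, 'Tokyo')])

# the ? placeholder lets sqlite3 bind the value itself -- no manual quoting,
# and no risk of SQL injection from user-supplied input
code = 2
row = conn.execute('SELECT city FROM offices WHERE officeCode = ?', (code,)).fetchone()
print(row[0])  # Boston
conn.close()
```

# The same placeholder style works with `pd.read_sql_query` via its `params` argument.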
sqliteConnValidation.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# Peakcalling Bam Stats and Filtering Report - Insert Sizes
# ================================================================
#
# This notebook is for the analysis of outputs from the peakcalling pipeline
#
# There are several stats that you want collected and graphed (topics covered in this notebook in bold).
#
# These are:
#
# - how many reads input
# - how many reads removed at each step (numbers and percentages)
# - how many reads left after filtering
# - insert size distribution pre filtering for PE reads
# - how many reads mapping to each chromosome before filtering?
# - how many reads mapping to each chromosome after filtering?
# - X:Y reads ratio
# - **insert size distribution after filtering for PE reads**
# - samtools flags - check how many reads are in categories they shouldn't be
# - picard stats - check how many reads are in categories they shouldn't be
#
#
# This notebook takes the sqlite3 database created by CGAT peakcalling_pipeline.py and uses it for plotting the above statistics
#
# It assumes a file directory of:
#
#     location of database = project_folder/csvdb
#
#     location of this notebook = project_folder/notebooks.dir/

# Firstly, let's load all the things that might be needed

# Insert size distribution
# ------------------------
# This section gets the size distribution of the fragments that have been sequenced in paired-end sequencing. The pipeline calculates the size distribution as the distance between the most 5' positions of both reads: for those mapping to the + strand this is the leftmost coordinate; for those mapping to the - strand it is the rightmost coordinate.
# # This plot is especially useful for ATAC-Seq experiments, as good samples should show peaks with a period approximately equivalent to the length of a nucleosome (~146 bp). A lack of this phasing might indicate poor quality samples, and either over-digestion (lots of small fragments) or under-digestion (an excess of large fragments) by the transposase.

# +
import sqlite3
import pandas as pd
import numpy as np
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
#import CGATCore.Pipeline as P
import os
import statistics
#import collections

#load R and the R packages required
# #%load_ext rpy2.ipython
# #%R require(ggplot2)

# use these functions to display tables nicely as html
from IPython.display import display, HTML
plt.style.use('ggplot')
#plt.style.available
# -

# This is where we are and when the notebook was run
#
# !pwd
# !date

# First, let's set the output path for where we want our plots to be saved and the database path and see what tables it contains

database_path = '../csvdb'
output_path = '.'
#database_path= "/ifs/projects/charlotteg/pipeline_peakcalling/csvdb"

# This code adds a button to see/hide code in html

# +
HTML('''<script>
code_show=true;
function code_toggle() {
 if (code_show){
 $('div.input').hide();
 } else {
 $('div.input').show();
 }
 code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
# -

# The code below provides functions for accessing the project database and extracting table names so you can see what tables have been loaded into the database and are available for plotting.
It also has a function for getting a table from the database and indexing it by the track name # + def getTableNamesFromDB(database_path): # Create a SQL connection to our SQLite database con = sqlite3.connect(database_path) cur = con.cursor() # the result of a "cursor.execute" can be iterated over by row cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;") available_tables = (cur.fetchall()) #Be sure to close the connection. con.close() return available_tables db_tables = getTableNamesFromDB(database_path) print('Tables contained by the database:') for x in db_tables: print('\t\t%s' % x[0]) #This function retrieves a table from the sql database and indexes it with the track name def getTableFromDB(statement,database_path): '''gets a table from the sql database according to the statement and sets track as the index if 'track' is in the column names''' conn = sqlite3.connect(database_path) df = pd.read_sql_query(statement,conn) if 'track' in df.columns: df.index = df['track'] return df # - # Insert Size Summary # ==================== # 1) let's get the insert_sizes table from the database # # First, let's look at the summary statistics that give us the mean fragment size, sequencing type and mean read length.
This table is produced using macs2 for PE data, or bamtools for SE data # # # If IDR has been run the insert_size table will contain entries for the pooled and pseudo replicates too - we don't really want this as it will duplicate the data from the original samples, so we subset this out insert_df = getTableFromDB('select * from insert_sizes;',database_path) insert_df = insert_df[insert_df["filename"].str.contains('pseudo')==False].copy() insert_df = insert_df[insert_df["filename"].str.contains('pooled')==False].copy() # + def add_expt_to_insertdf(dataframe): '''splits the track name (for example HsTh1-RATotal-R1.star) into experiment features - expt, sample_treatment and replicate - and adds these as columns to the dataframe''' expt = [] treatment = [] replicate = [] for value in dataframe.filename: x = value.split('/')[-1] x = x.split('_insert')[0] # split into design features y = x.split('-') expt.append(y[-3]) treatment.append(y[-2]) replicate.append(y[-1]) if len(expt) == len(treatment) and len(expt)== len(replicate): print ('all values loaded into lists correctly') else: print ('error in loading values into lists') #add columns to dataframe dataframe['expt_name'] = expt dataframe['sample_treatment'] = treatment dataframe['replicate'] = replicate return dataframe insert_df = add_expt_to_insertdf(insert_df) insert_df # - # let's graph the fragment length mean and tag size grouped by sample so we can see if they are much different # + ax = insert_df.boxplot(column='fragmentsize_mean', by='sample_treatment') ax.set_title('for mean fragment size',size=10) ax.set_ylabel('mean fragment length') ax.set_xlabel('sample treatment') ax = insert_df.boxplot(column='tagsize', by='sample_treatment') ax.set_title('for tag size',size=10) ax.set_ylabel('tag size') ax.set_xlabel('sample treatment') ax.set_ylim(((insert_df.tagsize.min()-2),(insert_df.tagsize.max()+2))) # - # OK, now let's get the fragment length distributions for each sample and plot them # + def getFraglengthTables(database_path):
'''Takes a path to the sqlite3 database and retrieves the fraglengths tables for individual samples, returns a list of (table name, fraglengths dataframe) tuples''' frag_tabs = [] db_tables = getTableNamesFromDB(database_path) for table_name in db_tables: if 'fraglengths' in str(table_name[0]): tab_name = str(table_name[0]) statement ='select * from %s;' % tab_name df = getTableFromDB(statement,database_path) frag_tabs.append((tab_name,df)) print('detected fragment length distribution tables for %s files: \n' % len(frag_tabs)) for val in frag_tabs: print(val[0]) return frag_tabs def getDFofFragLengths(database_path): ''' this takes a path to the database and builds a dataframe where fragment length is the index, each column is a sample and the values are the number of reads with that fragment length in that sample ''' fraglength_dfs_list = getFraglengthTables(database_path) dfs=[] for item in fraglength_dfs_list: track = item[0].split('_filtered_fraglengths')[0] df = item[1] #rename columns so that they are correct - correct this in the pipeline then delete this #df.rename(columns={'frequency':'frag_length', 'frag_length':'frequency'}, inplace=True) df.index = df.frag_length df.drop('frag_length',axis=1,inplace=True) df.rename(columns={'frequency':track},inplace=True) dfs.append(df) frag_length_df = pd.concat(dfs,axis=1) frag_length_df.fillna(0, inplace=True) return frag_length_df #Note the frequency and fragment lengths are around the wrong way! #frequency is actually fragment length, and fragment length is the frequency #This gets the tables from the db and makes a master df of all fragment length frequencies frag_length_df = getDFofFragLengths(database_path) #plot fragment length frequencies ax = frag_length_df.divide(1000).plot() ax.set_ylabel('Number of fragments\n(thousands)') ax.legend(loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0.
) ax.set_title('fragment length distribution') ax.set_xlabel('fragment length (bp)') ax.set_xlim() # - # Now let's zoom in on the interesting region of the plot (the default in the code looks at fragment lengths from 0 to 800bp - you can change this below by setting the tuple in the ax.set_xlim() function) ax = frag_length_df.divide(1000).plot(figsize=(9,9)) ax.set_ylabel('Number of fragments\n(thousands)') ax.legend(loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. ) ax.set_title('fragment length distribution') ax.set_xlabel('fragment length (bp)') ax.set_xlim((0,800)) # it is a bit tricky to see differences between samples of different library sizes, so let's check whether the proportion of reads at each fragment length is similar # + percent_frag_length_df = pd.DataFrame(index=frag_length_df.index) for column in frag_length_df: total_frags = frag_length_df[column].sum() percent_frag_length_df[column] = frag_length_df[column].divide(total_frags)*100 ax = percent_frag_length_df.plot(figsize=(9,9)) ax.set_ylabel('Percentage of fragments') ax.legend(loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. ) ax.set_title('percentage fragment length distribution') ax.set_xlabel('fragment length (bp)') ax.set_xlim((0,800)) # - # SUMMARISE HERE # ============== # From these plots you should be able to tell whether there are any distinctive patterns in the size of the fragment lengths; this is especially important for ATAC-Seq data, as in successful experiments you should be able to detect nucleosome phasing - it can also indicate over-fragmentation or biases in cutting. # Let's also look at the Picard insert size metrics insert_df = getTableFromDB('select * from picard_stats_insert_size_metrics;',database_path) for c in insert_df.columns: print (c) insert_df # These metrics are actually quite different to the ones we calculate ourselves - for some reason it seems to split the files into two and gives a distribution for smaller fragments and for larger fragments - not sure why at the moment
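The library-size normalisation used for the percentage plot above can be isolated into a small helper: divide each sample's fragment-length counts by that sample's total, then scale to percent. The sample names and counts below are invented for illustration; on real data `frag_length_df` would take the place of `counts`.

```python
import pandas as pd

def to_percentages(frag_length_df):
    # divide each column (one sample) by its own total, scale to percent
    return frag_length_df.divide(frag_length_df.sum(axis=0), axis=1) * 100

# hypothetical fragment-length counts, indexed by fragment length (bp)
counts = pd.DataFrame(
    {"sampleA": [10, 30, 60], "sampleB": [200, 300, 500]},
    index=[100, 200, 300],
)
pct = to_percentages(counts)
print(pct["sampleA"].tolist())  # -> [10.0, 30.0, 60.0]
```

Each column now sums to 100, so samples with very different library sizes become directly comparable.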
CGATPipelines/pipeline_docs/pipeline_peakcalling/notebooks/template_peakcalling_filtering_Report_insert_sizes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 1 - Matplotlib # # ## Labels # # As a first step, let's add axis labels and a title to the plot. # You can do this with the `xlabel()`, `ylabel()` and `title()` functions, # available in `matplotlib.pyplot`. This sub-package is already imported as `plt`. #
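A minimal sketch of those three labelling calls; the data is invented, and the non-interactive Agg backend is selected so the script runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

year = [1950, 1970, 1990, 2010]
pop = [2.5, 3.7, 5.3, 7.0]

plt.plot(year, pop)
plt.xlabel("Year")
plt.ylabel("Population (billions)")
plt.title("World Population")

ax = plt.gca()
print(ax.get_xlabel())  # -> Year
```

The labels attach to the current axes, so they must be set after the plot is created but before it is shown or saved.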
Intermediate_Python/.ipynb_checkpoints/Ch1-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # # !wget https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip # # !unzip multi_cased_L-12_H-768_A-12.zip # - import os os.environ['CUDA_VISIBLE_DEVICES'] = '1' import bert from bert import run_classifier from bert import optimization from bert import tokenization from bert import modeling import numpy as np import tensorflow as tf import pandas as pd from tqdm import tqdm # + import json with open('dataset.json') as fopen: data = json.load(fopen) train_X = data['train_X'] train_Y = data['train_Y'] test_X = data['test_X'] test_Y = data['test_Y'] # + BERT_VOCAB = 'multi_cased_L-12_H-768_A-12/vocab.txt' BERT_INIT_CHKPNT = 'multi_cased_L-12_H-768_A-12/bert_model.ckpt' BERT_CONFIG = 'multi_cased_L-12_H-768_A-12/bert_config.json' tokenizer = tokenization.FullTokenizer( vocab_file=BERT_VOCAB, do_lower_case=False) # - GO = 101 EOS = 102 # + from unidecode import unidecode def get_inputs(x, y): input_ids, input_masks, segment_ids, ys = [], [], [], [] for i in tqdm(range(len(x))): tokens_a = tokenizer.tokenize(unidecode(x[i])) tokens_b = tokenizer.tokenize(unidecode(y[i])) tokens = ["[CLS]"] + tokens_a + ["[SEP]"] segment_id = [0] * len(tokens) input_id = tokenizer.convert_tokens_to_ids(tokens) input_mask = [1] * len(input_id) input_ids.append(input_id) input_masks.append(input_mask) segment_ids.append(segment_id) r = tokenizer.convert_tokens_to_ids(tokens_b + ["[SEP]"]) if len([k for k in r if k == 0]): print(y[i], i) break ys.append(r) return input_ids, input_masks, segment_ids, ys # - train_input_ids, train_input_masks, train_segment_ids, train_Y = get_inputs(train_X, train_Y) test_input_ids, test_input_masks, test_segment_ids, test_Y = get_inputs(test_X, test_Y) bert_config = 
modeling.BertConfig.from_json_file(BERT_CONFIG) epoch = 20 batch_size = 16 warmup_proportion = 0.1 num_train_steps = len(train_input_ids) num_warmup_steps = int(num_train_steps * warmup_proportion) # + from tensor2tensor.utils import beam_search import bert_decoder as modeling_decoder class Model: def __init__( self, learning_rate = 2e-5, training = True, ): self.X = tf.placeholder(tf.int32, [None, None]) self.segment_ids = tf.placeholder(tf.int32, [None, None]) self.input_masks = tf.placeholder(tf.int32, [None, None]) self.Y = tf.placeholder(tf.int32, [None, None]) self.X_seq_len = tf.count_nonzero(self.X, 1, dtype=tf.int32) self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype=tf.int32) batch_size = tf.shape(self.X)[0] def forward(x, segment, masks, y, reuse = False, config = bert_config): with tf.variable_scope('bert',reuse=reuse): model = modeling.BertModel( config=config, is_training=training, input_ids=x, input_mask=masks, token_type_ids=segment, use_one_hot_embeddings=False) memory = model.get_sequence_output() with tf.variable_scope('bert',reuse=True): Y_seq_len = tf.count_nonzero(y, 1, dtype=tf.int32) y_masks = tf.sequence_mask(Y_seq_len, tf.reduce_max(Y_seq_len), dtype=tf.float32) model = modeling_decoder.BertModel( config=config, is_training=training, input_ids=y, input_mask=y_masks, memory = memory, memory_mask = masks, use_one_hot_embeddings=False) output_layer = model.get_sequence_output() embedding = model.get_embedding_table() with tf.variable_scope('cls/predictions',reuse=reuse): with tf.variable_scope('transform'): input_tensor = tf.layers.dense( output_layer, units = config.hidden_size, activation = modeling.get_activation(bert_config.hidden_act), kernel_initializer = modeling.create_initializer( bert_config.initializer_range ), ) input_tensor = modeling.layer_norm(input_tensor) output_bias = tf.get_variable( 'output_bias', shape = [bert_config.vocab_size], initializer = tf.zeros_initializer(), ) logits = tf.matmul(input_tensor, embedding, 
transpose_b = True) return logits main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1]) decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1) self.training_logits = forward(self.X, self.segment_ids, self.input_masks, decoder_input) print(self.training_logits) masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32) self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits, targets = self.Y, weights = masks) self.optimizer = optimization.create_optimizer(self.cost, learning_rate, num_train_steps, num_warmup_steps, False) y_t = tf.argmax(self.training_logits,axis=2) y_t = tf.cast(y_t, tf.int32) self.prediction = tf.boolean_mask(y_t, masks) mask_label = tf.boolean_mask(self.Y, masks) correct_pred = tf.equal(self.prediction, mask_label) correct_index = tf.cast(correct_pred, tf.float32) self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) initial_ids = tf.fill([batch_size], GO) def symbols_to_logits(ids): x = tf.contrib.seq2seq.tile_batch(self.X, 1) segment = tf.contrib.seq2seq.tile_batch(self.segment_ids, 1) masks = tf.contrib.seq2seq.tile_batch(self.input_masks, 1) logits = forward(x, segment, masks, ids, reuse = True) return logits[:, tf.shape(ids)[1]-1, :] final_ids, final_probs, _ = beam_search.beam_search( symbols_to_logits, initial_ids, 1, tf.reduce_max(self.X_seq_len), bert_config.vocab_size, 0.0, eos_id = EOS) self.fast_result = final_ids self.fast_result = tf.identity(self.fast_result, name = 'greedy') # + tf.reset_default_graph() sess = tf.InteractiveSession() model = Model() sess.run(tf.global_variables_initializer()) # + import collections import re def get_assignment_map_from_checkpoint(tvars, init_checkpoint): """Compute the union of the current variables and checkpoint variables.""" assignment_map = {} initialized_variable_names = {} name_to_variable = collections.OrderedDict() for var in tvars: name = var.name m = re.match('^(.*):\\d+$', name) if m is not None: name 
= m.group(1) name_to_variable[name] = var init_vars = tf.train.list_variables(init_checkpoint) assignment_map = collections.OrderedDict() for x in init_vars: (name, var) = (x[0], x[1]) if 'bert/' + name in name_to_variable: assignment_map[name] = name_to_variable['bert/' + name] initialized_variable_names[name] = 1 initialized_variable_names[name + ':0'] = 1 elif name in name_to_variable: assignment_map[name] = name_to_variable[name] initialized_variable_names[name] = 1 initialized_variable_names[name + ':0'] = 1 return (assignment_map, initialized_variable_names) # + tvars = tf.trainable_variables() checkpoint = BERT_INIT_CHKPNT assignment_map, initialized_variable_names = get_assignment_map_from_checkpoint(tvars, checkpoint) # - saver = tf.train.Saver(var_list = assignment_map) saver.restore(sess, checkpoint) pad_sequences = tf.keras.preprocessing.sequence.pad_sequences # + from tqdm import tqdm import time for EPOCH in range(epoch): train_acc, train_loss, test_acc, test_loss = [], [], [], [] pbar = tqdm( range(0, len(train_input_ids), batch_size), desc = 'train minibatch loop' ) for i in pbar: index = min(i + batch_size, len(train_input_ids)) batch_x = train_input_ids[i: index] batch_x = pad_sequences(batch_x, padding='post') batch_mask = train_input_masks[i: index] batch_mask = pad_sequences(batch_mask, padding='post') batch_segment = train_segment_ids[i: index] batch_segment = pad_sequences(batch_segment, padding='post') batch_y = pad_sequences(train_Y[i: index], padding='post') acc, cost, _ = sess.run( [model.accuracy, model.cost, model.optimizer], feed_dict = { model.Y: batch_y, model.X: batch_x, model.input_masks: batch_mask, model.segment_ids: batch_segment }, ) train_loss.append(cost) train_acc.append(acc) pbar.set_postfix(cost = cost, accuracy = acc) pbar = tqdm(range(0, len(test_input_ids), batch_size), desc = 'test minibatch loop') for i in pbar: index = min(i + batch_size, len(test_input_ids)) batch_x = test_input_ids[i: index] batch_x = 
pad_sequences(batch_x, padding='post') batch_y = pad_sequences(test_Y[i: index], padding='post') batch_mask = test_input_masks[i: index] batch_mask = pad_sequences(batch_mask, padding='post') batch_segment = test_segment_ids[i: index] batch_segment = pad_sequences(batch_segment, padding='post') acc, cost = sess.run( [model.accuracy, model.cost], feed_dict = { model.Y: batch_y, model.X: batch_x, model.input_masks: batch_mask, model.segment_ids: batch_segment }, ) test_loss.append(cost) test_acc.append(acc) pbar.set_postfix(cost = cost, accuracy = acc) train_loss = np.mean(train_loss) train_acc = np.mean(train_acc) test_loss = np.mean(test_loss) test_acc = np.mean(test_acc) print( 'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n' % (EPOCH, train_loss, train_acc, test_loss, test_acc) ) # - from tensor2tensor.utils import bleu_hook results = [] for i in tqdm(range(0, len(test_X), batch_size)): index = min(i + batch_size, len(test_X)) batch_x = test_input_ids[i: index] batch_x = pad_sequences(batch_x, padding='post') batch_y = pad_sequences(test_Y[i: index], padding='post') batch_mask = test_input_masks[i: index] batch_mask = pad_sequences(batch_mask, padding='post') batch_segment = test_segment_ids[i: index] batch_segment = pad_sequences(batch_segment, padding='post') feed = { model.X: batch_x, model.input_masks: batch_mask, model.segment_ids: batch_segment } p = sess.run(model.fast_result,feed_dict = feed)[:,0,:] result = [] for row in p: result.append([i for i in row if i > 3 and i not in [101, 102]]) results.extend(result) rights = [] for r in test_Y: rights.append([i for i in r if i > 3 and i not in [101, 102]]) bleu_hook.compute_bleu(reference_corpus = rights, translation_corpus = results)
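The batching above relies on right-padding plus a 0/1 attention mask (`pad_sequences(..., padding='post')` together with the `input_masks` lists). A dependency-free sketch of that step follows; the token IDs are arbitrary, with 101/102 standing in for the [CLS]/[SEP] IDs used in the notebook:

```python
import numpy as np

def pad_post(seqs, pad_value=0):
    # right-pad variable-length ID sequences into one rectangular batch
    max_len = max(len(s) for s in seqs)
    out = np.full((len(seqs), max_len), pad_value, dtype=np.int64)
    for i, s in enumerate(seqs):
        out[i, : len(s)] = s
    return out

batch = [[101, 7, 8, 102], [101, 9, 102]]
padded = pad_post(batch)
mask = (padded != 0).astype(np.int64)  # 1 on real tokens, 0 on padding
print(mask.tolist())  # -> [[1, 1, 1, 1], [1, 1, 1, 0]]
```

The mask lets attention and the sequence loss ignore the padded positions, which is why it is fed to the model alongside the IDs.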
neural-machine-translation/49.bertmultilanguage-encoder-bertmultilanguage-decoder.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # TTS Inference # # This notebook can be used to generate audio samples using either NeMo's pretrained models or after training NeMo TTS models. This script currently uses a two step inference procedure. First, a model is used to generate a mel spectrogram from text. Second, a model is used to generate audio from a mel spectrogram. # # Currently supported models are: # Mel Spectrogram Generators: # - Tacotron 2 # - Glow-TTS # # Audio Generators # - WaveGlow # - SqueezeWave # - UniGlow # - MelGAN # - HiFiGAN # - Two Stage Models # - Griffin-Lim # # License # # > Copyright 2020 NVIDIA. All Rights Reserved. # > # > Licensed under the Apache License, Version 2.0 (the "License"); # > you may not use this file except in compliance with the License. # > You may obtain a copy of the License at # > # > http://www.apache.org/licenses/LICENSE-2.0 # > # > Unless required by applicable law or agreed to in writing, software # > distributed under the License is distributed on an "AS IS" BASIS, # > WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # > See the License for the specific language governing permissions and # > limitations under the License. # + tags=[] """ You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab. Instructions for setting up Colab are as follows: 1. Open a new Python 3 notebook. 2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL) 3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator) 4. Run this cell to set up dependencies. """ # # If you're using Google Colab and not running locally, uncomment and run this cell. 
# # !apt-get install sox libsndfile1 ffmpeg # # !pip install wget unidecode # BRANCH = 'r1.0.0rc1' # # !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[tts] # + tags=[] supported_spec_gen = ["tacotron2", "glow_tts"] supported_audio_gen = ["waveglow", "squeezewave", "uniglow", "melgan", "hifigan", "two_stages"] print("Choose one of the following spectrogram generators:") print([model for model in supported_spec_gen]) spectrogram_generator = input() print("Choose one of the following audio generators:") print([model for model in supported_audio_gen]) audio_generator = input() assert spectrogram_generator in supported_spec_gen assert audio_generator in supported_audio_gen if audio_generator=="two_stages": print("Choose one of the following mel-to-spec convertor:") # supported_mel2spec = ["psuedo_inverse", "encoder_decoder"] - No encoder_decoder checkpoint atm supported_mel2spec = ["psuedo_inverse"] print([model for model in supported_mel2spec]) mel2spec = input() print("Choose one of the following linear spectrogram vocoders:") # supported_linear_vocoders = ["griffin_lim", "degli"] - No deep_gli checkpoint atm supported_linear_vocoders = ["griffin_lim"] print([model for model in supported_linear_vocoders]) linvocoder = input() assert mel2spec in supported_mel2spec assert linvocoder in supported_linear_vocoders # - # # Load model checkpoints # # Note: For best quality with Glow TTS, please update the glow tts yaml file with the path to cmudict # + tags=[] from omegaconf import OmegaConf, open_dict import torch from nemo.collections.asr.parts import parsers from nemo.collections.tts.models.base import SpectrogramGenerator, Vocoder def load_spectrogram_model(): override_conf = None if spectrogram_generator == "tacotron2": from nemo.collections.tts.models import Tacotron2Model pretrained_model = "tts_en_tacotron2" elif spectrogram_generator == "glow_tts": from nemo.collections.tts.models import GlowTTSModel pretrained_model = 
"tts_en_glowtts" import wget from pathlib import Path if not Path("cmudict-0.7b").exists(): filename = wget.download("http://svn.code.sf.net/p/cmusphinx/code/trunk/cmudict/cmudict-0.7b") filename = str(Path(filename).resolve()) else: filename = str(Path("cmudict-0.7b").resolve()) conf = SpectrogramGenerator.from_pretrained(pretrained_model, return_config=True) if "params" in conf.parser: conf.parser.params.cmu_dict_path = filename else: conf.parser.cmu_dict_path = filename override_conf = conf else: raise NotImplementedError model = SpectrogramGenerator.from_pretrained(pretrained_model, override_config_path=override_conf) return model def load_vocoder_model(): RequestPseudoInverse = False TwoStagesModel = False if audio_generator == "waveglow": from nemo.collections.tts.models import WaveGlowModel pretrained_model = "tts_waveglow_88m" elif audio_generator == "squeezewave": from nemo.collections.tts.models import SqueezeWaveModel pretrained_model = "tts_squeezewave" elif audio_generator == "uniglow": from nemo.collections.tts.models import UniGlowModel pretrained_model = "tts_uniglow" elif audio_generator == "melgan": from nemo.collections.tts.models import MelGanModel pretrained_model = "tts_melgan" elif audio_generator == "hifigan": from nemo.collections.tts.models import HifiGanModel pretrained_model = "tts_hifigan" elif audio_generator == "two_stages": from nemo.collections.tts.models import TwoStagesModel cfg = {'linvocoder': {'_target_': 'nemo.collections.tts.models.two_stages.GriffinLimModel', 'cfg': {'n_iters': 64, 'n_fft': NFFT, 'l_hop': 256}}, 'mel2spec': {'_target_': 'nemo.collections.tts.models.two_stages.MelPsuedoInverseModel', 'cfg': {'sampling_rate': SAMPLE_RATE, 'n_fft': NFFT, 'mel_fmin': 0, 'mel_fmax': FMAX, 'mel_freq': NMEL}}} model = TwoStagesModel(cfg) if mel2spec == "encoder_decoder": from nemo.collections.tts.models.ed_mel2spec import EDMel2SpecModel pretrained_mel2spec_model = "EncoderDecoderMelToSpec-22050Hz" mel2spec_model = 
EDMel2SpecModel.from_pretrained(pretrained_mel2spec_model) model.set_mel_to_spec_model(mel2spec_model) if linvocoder == "degli": from nemo.collections.tts.models.degli import DegliModel pretrained_linvocoder_model = "DeepGriffinLim-22050Hz" linvocoder_model = DegliModel.from_pretrained(pretrained_linvocoder_model) model.set_linear_vocoder(linvocoder_model) TwoStagesModel = True else: raise NotImplementedError if not TwoStagesModel: model = Vocoder.from_pretrained(pretrained_model) return model spec_gen = load_spectrogram_model().cuda() vocoder = load_vocoder_model().cuda() # - def infer(spec_gen_model, vocoder_model, str_input): with torch.no_grad(): parsed = spec_gen_model.parse(str_input) spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed) audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram) if isinstance(spectrogram, torch.Tensor): spectrogram = spectrogram.to('cpu').numpy() if len(spectrogram.shape) == 3: spectrogram = spectrogram[0] if isinstance(audio, torch.Tensor): audio = audio.to('cpu').numpy() return spectrogram, audio text_to_generate = input("Input what you want the model to say: ") spec, audio = infer(spec_gen, vocoder, text_to_generate) # # Show Audio and Spectrogram # + import IPython.display as ipd import numpy as np from PIL import Image from matplotlib.pyplot import imshow from matplotlib import pyplot as plt ipd.Audio(audio, rate=22050) # - # %matplotlib inline imshow(spec, origin="lower") plt.show()
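The two-step structure of `infer` (tokens -> mel spectrogram -> audio) can be exercised with stand-in models. Both classes below are hypothetical mocks, not NeMo APIs, and the frame/sample counts are arbitrary; the point is the shape flow and the batch-dimension squeeze:

```python
import numpy as np

class FakeSpecGen:
    # hypothetical stand-in: 4 spectrogram frames per token, 80 mel bins
    def generate_spectrogram(self, tokens):
        return np.zeros((1, 80, len(tokens) * 4))

class FakeVocoder:
    # hypothetical stand-in: 256 audio samples per spectrogram frame
    def convert_spectrogram_to_audio(self, spec):
        return np.zeros(spec.shape[-1] * 256)

def infer(spec_gen_model, vocoder_model, tokens):
    spec = spec_gen_model.generate_spectrogram(tokens)
    audio = vocoder_model.convert_spectrogram_to_audio(spec)
    if spec.ndim == 3:  # drop the batch dimension, as the notebook does
        spec = spec[0]
    return spec, audio

spec, audio = infer(FakeSpecGen(), FakeVocoder(), [1, 2, 3])
print(spec.shape, audio.shape)  # -> (80, 12) (3072,)
```

Keeping the two stages behind a common interface is what lets the notebook swap vocoders (WaveGlow, HiFiGAN, Griffin-Lim) without touching the inference loop.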
tutorials/tts/1_TTS_inference.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Loads itemized individual contributions, calculates burn rates, and finds top fundraising days import requests import fecfile import pandas as pd import os import datetime from IPython.display import display # Time the notebook start = datetime.datetime.now() # Set some viewing options pd.set_option('display.max_colwidth', 200) pd.set_option('display.max_columns', 40) pd.set_option('display.max_rows', 500) # Read dataframe of filings # + filings = ( pd.read_csv("../fetched/filings.csv") ) filings.loc[ lambda x: x['report_period'] == "Q3" ] # - # Get only the quarterlies quarterlies = ( filings # get quarters .loc[ lambda x: x['report_period'].str.contains("Q", na=False) ] # remove filings that have been superseded by subsequent filings .loc[ lambda x: x['amended'] == False ] .sort_values(['candidate_name', 'report_period']) ) # ## Parse all the FEC files with the filing list # ### But first, more loading functions.
# This function, when given a filing ID, returns only the contributions from individual donors: def extract_contributions(filing_id): filing = fecfile.from_file(f"../fetched/filings/{filing_id}.fec") meta = filing['filing'] # get only schedule A schedule_a = pd.DataFrame(filing["itemizations"]["Schedule A"]) # remove time zone schedule_a["contribution_date"] = schedule_a["contribution_date"].dt.tz_localize(None) return ( schedule_a # Extract only individual contributions .loc[lambda df: df["entity_type"] == "IND"] # Remove memo lines .loc[lambda df: df["memo_code"] == ""] .assign( filing_id = int(filing_id), ) # filter schedule A [[ "entity_type", "filer_committee_id_number", "filing_id", "transaction_id", "contribution_date", "contribution_amount", "contribution_aggregate", "contributor_organization_name", "contributor_first_name", "contributor_last_name", "contributor_zip_code", "contributor_state" ]] ) # Create a unique ID out of first name, last name and 5-digit ZIP code def make_donor_ids(df): return ( df .assign( donor_id = lambda df: ( df .assign( zip5 = lambda df: ( df["contributor_zip_code"] .fillna("-----") .str.slice(0, 5) ) ) [[ "contributor_first_name", "contributor_last_name", "zip5", ]] .apply(lambda x: ( x .fillna("") .astype(str) # Remove periods, commas, extra whitespace .str.replace(r"[\.,\s]+", " ") .str.strip() # Convert everything to upper-case .str.upper() )) .apply("|".join, axis = 1) ) ) ) # takes concatenated contribution dataframe and merges with metadata def get_metadata(contributions): candidate_filings = ( filings [["filing_id", "candidate_name", "date_filed", "date_coverage_to", "date_coverage_from", "report_title", "report_period"]] ) return ( contributions .merge( candidate_filings, on = "filing_id", how = "left" ) ) # Concatenate all the filings data into one big DataFrame, and get individual contributions only # + subset = quarterlies.loc[ lambda x: x["candidate_name"] != "<NAME>" # He did not file a Schedule A ] #subset = 
quarterlies all_indiv = ( pd .concat( [ extract_contributions(e) for e in subset["filing_id"] ] ) .pipe( get_metadata ) .pipe( make_donor_ids ) ) # - # Remove the donors who have given $200 or less, or have been refunded to that level # + # First build a frame of latest contributions latest_contribs = ( all_indiv .sort_values('contribution_date') .groupby([ 'candidate_name', 'donor_id']) .pipe(lambda grp: pd.DataFrame({ "latest_contribution_aggregate": grp["contribution_aggregate"].last(), }) ) .reset_index() ) latest_contribs.sort_values( 'latest_contribution_aggregate', ascending = True ).head(3) # - # Get list of individual contributions from individuals who have given more than $200, the official itemization threshold # + threshold_indiv = ( all_indiv .merge( latest_contribs, on = ['donor_id', 'candidate_name'], how= "left" ) .loc[ lambda x: x["latest_contribution_aggregate"] > 200 ] .loc[ lambda x: x["contribution_amount"] > 0 ] .sort_values('latest_contribution_aggregate', ascending = False) ) # - # ## Finally, some stats # What's the burn rate (`amount spent / amount raised`) for this period? (Not calculating donations from affiliated committees). 
# + quarter = "Q3" # Get latest quarterly latest_period = ( quarterlies .loc[ lambda df: df['report_period'] == quarter] ) burn = pd.DataFrame({ "candidate name": latest_period["candidate_name"], "cash on hand": latest_period["cash_on_hand"], "spent": latest_period["disbursements_total"], "raised": latest_period["contributions_total"], "difference": latest_period['contributions_total'] - latest_period['disbursements_total'], "percent spent": (( latest_period['disbursements_total'] / latest_period['contributions_total'] ) * 100).round(1) }).sort_values('raised', ascending = False) print(f"Candidate burn rates ({quarter} only)") burn # - # Biggest days for each candidate who has filed in Q3 # + # Get list of candidates candidates = threshold_indiv.loc[ lambda x: x["report_period"] == "Q3" ]['candidate_name'].unique() for each in candidates: top_days = ( threshold_indiv .loc[ lambda x: x['candidate_name'] == each ] .groupby([ pd.Grouper( key = "contribution_date", freq = 'D' ), "candidate_name", ]) ["donor_id"] .nunique() .to_frame("donor_count") .reset_index() .sort_values('donor_count', ascending = False) [0:10] .assign( contribution_date = lambda df: df['contribution_date'].dt.strftime("%m/%d/%Y") ) ) print(f"Top 10 days for {each}") display(top_days) # - # Check on timing # + end = datetime.datetime.now() d = (end - start) f"The notebook ran for {round(d.total_seconds() / 60, 2) } minutes" # - # --- # # --- # # ---
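The donor-ID normalisation inside `make_donor_ids` can be distilled to a scalar helper. This is a hypothetical rewrite of the same cleaning rules (strip periods, commas and extra whitespace; upper-case; truncate the ZIP to 5 digits), not code from the notebook:

```python
import re

def make_donor_id(first, last, zip_code):
    # build a "FIRST|LAST|ZIP5" key for matching a donor across filings
    zip5 = (zip_code or "-----")[:5]
    parts = []
    for field in (first, last, zip5):
        cleaned = re.sub(r"[\.,\s]+", " ", field or "").strip().upper()
        parts.append(cleaned)
    return "|".join(parts)

print(make_donor_id("jo-ann ", "smith, jr.", "021391234"))
# -> JO-ANN|SMITH JR|02139
```

Normalising before joining is what makes the same person's Q2 and Q3 itemizations collapse onto one `donor_id`, at the cost of conflating genuinely distinct donors who share a name and ZIP.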
notebooks/analyze-contributions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Recurrent PPO landing using raw altimeter readings # + import numpy as np import os,sys sys.path.append('../../../RL_lib/Agents/PPO') sys.path.append('../../../RL_lib/Utils') sys.path.append('../../../Mars3dof_env') sys.path.append('../../../Mars_DTM') # %load_ext autoreload # %load_ext autoreload # %autoreload 2 # %matplotlib nbagg import os print(os.getcwd()) # + language="html" # <style> # .output_wrapper, .output { # height:auto !important; # max-height:1000px; /* your desired max-height here */ # } # .output_scroll { # box-shadow:none !important; # webkit-box-shadow:none !important; # } # </style> # - # # Optimize Policy # + from env import Env import env_utils as envu from dynamics_model import Dynamics_model from lander_model import Lander_model from ic_gen2 import Landing_icgen import rl_utils from arch_policy_vf import Arch from model import Model from policy import Policy from value_function import Value_function import pcm_model_nets as model_nets import policy_nets as policy_nets import valfunc_nets as valfunc_nets from agent import Agent import torch.nn as nn from flat_constraint import Flat_constraint from glideslope_constraint import Glideslope_constraint from reward_terminal_mdr import Reward from dtm_measurement_model3 import DTM_measurement_model from altimeter_pointing import Altimeter dtm = np.load('../../../Mars_DTM/synth_elevations.npy') print(dtm.shape) target_position = np.asarray([4000,4000,330]) mm = DTM_measurement_model(dtm,check_vertical_errors=False) altimeter = Altimeter(mm,target_position,theta=np.pi/8) arch = Arch() logger = rl_utils.Logger() dynamics_model = Dynamics_model() lander_model = Lander_model(altimeter=altimeter, apf_tau1=20,apf_tau2=100,apf_vf1=-2,apf_vf2=-1,apf_v0=70,apf_atarg=15.) 
lander_model.get_state_agent = lander_model.get_state_agent_dtm obs_dim = 8 act_dim = 3 recurrent_steps = 500 reward_object = Reward() glideslope_constraint = Glideslope_constraint(gs_limit=-1.0) shape_constraint = Flat_constraint() env = Env(lander_model,dynamics_model,logger, reward_object=reward_object, glideslope_constraint=glideslope_constraint, shape_constraint=shape_constraint, tf_limit=100.0,print_every=10,scale_agent_action=True) env.ic_gen = Landing_icgen(mass_uncertainty=0.10, g_uncertainty=(0.05,0.05), adjust_apf_v0=True, downrange = (0,1000 , -30, -10), crossrange = (-500,500 , -30,30), altitude = (1000,1000,-50,-40)) env.ic_gen.show() arch = Arch() policy = Policy(policy_nets.GRU(obs_dim, act_dim, recurrent_steps=recurrent_steps), shuffle=False, kl_targ=0.001,epochs=20, beta=0.1, servo_kl=True, max_grad_norm=30, init_func=rl_utils.xn_init) value_function = Value_function(valfunc_nets.GRU(obs_dim, recurrent_steps=recurrent_steps), shuffle=False, batch_size=9999999, max_grad_norm=30) agent = Agent(arch, policy, value_function, None, env, logger, policy_episodes=30, policy_steps=3000, gamma1=0.95, gamma2=0.995, lam=0.98, recurrent_steps=recurrent_steps, monitor=env.rl_stats) agent.train(60000) # - # # Test Policy with Realistic Noise # policy.test_mode=True arch.use_model = True env.test_policy_batch(agent,1000,print_every=100) print(arch,agent.arch)
Experiments/Mars3DOF/Mars_landing_DTM_pointing/altimeter_target_pointing-500step.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.5.3 # language: julia # name: julia-1.5 # --- # # EFTfitter.jl - Empty Template using EFTfitter using BAT # for sampling using IntervalSets # for specifying the prior using Distributions # for specifying the prior using Plots # for plotting # ### Parameters parameters = BAT.NamedTupleDist( p1 = -2..2, ) # ### Observables function observable1(params) return params.p1 end # ### Measurements measurements = ( Meas1 = Measurement(observable1, 0.0, uncertainties = (unc1 = 0.1,), active=true), #MeasDist = MeasurementDistribution(obs_array, values_array, uncertainties = (unc1 = unc1_array,), active=false), ) # ### Correlations # + correlations = ( unc1 = NoCorrelation(active=true), ) #corr_matrix = to_correlation_matrix(measurements, # (:Meas1, :Meas2, 0.1), #) # - # create an `EFTfitterModel` object: model = EFTfitterModel(parameters, measurements, correlations) # create posterior distribution: posterior = PosteriorDensity(model); # sample the posterior distribution with BAT.jl: algorithm = MCMCSampling(mcalg = MetropolisHastings(), nsteps = 10^5, nchains = 4) samples = bat_sample(posterior, algorithm).result; # create and display a `SampledDensity` object for a quick overview of results: sd = SampledDensity(posterior, samples) display(sd) # plot the posterior distribution: p = plot(samples) # --- # # *This notebook was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
examples/notebooks/EmptyTemplate.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Practice Numba cuda 2<br>
#
# 1) Explore numba's capabilities as described in the documentation section "Device detection and enquiry".
#
# a) Write and execute code to get numba to print information about your GPU including the model number, compute capability, and amount of available memory. (Include your code in the template and copy/paste the info you obtain about your device as comments.)
#
# b) Estimate (by hand calculation) the number of 64-bit floats you can store on your GPU based on your result from p1.
#
# 2) Modify the parallel version of the _map_ app to use arrays of 64-bit floats, and experiment with running the modified code for different array sizes.
#
# a) How many 64-bit floats were in the largest array you could run successfully?
#
# b) What error message was generated if you requested a larger array?
#
# 3) Consider the code included in the python_logistic_map notebook that computes the bifurcation diagram of the logistic map.
#
# a) Locate the `for` loops in the logistic map code. What quantity is iterated over in each loop?
#
# b) For which loop are the computations in each iteration independent of one another? Briefly explain the reasoning behind your response.
#
# c) Implement a parallel version of the code to compute the bifurcation diagram of the logistic map.
#
# d) Modify both the serial and parallel codes to print the time required to perform the computation. Determine the time required to compute 1000 iterations for 1000 parameter values, and report the "acceleration" factor; i.e. the ratio of serial run time over parallel run time.
#
# 4) In practice file p1_parallel you created serial code to compute the Mandelbrot set. Now consider creating a parallel version.
#
# a) What do the `for` loops in the serial code iterate over? For which loops are the computations for each iteration independent of one another? Briefly explain the reasoning behind your reply.
#
# b) Write a numba implementation that launches a 2D computational grid to parallelize the appropriate loops. Run your parallel code and verify that it reproduces the results of the serial version.
#
# Also run your code to answer the questions below:
#
# c) What is the finest resolution 2D grid you can run on your GPU? What error message is generated by attempting finer grid resolution?
#
# d) What is the largest square block that you can run on your GPU? What error message is generated if you request more threads in each block?
#
#

# +
import numpy as np
from numba import cuda
import math
import matplotlib.pyplot as plt
import time
# import map_parallel


def gpu_total_memory():
    '''
    Query the GPU's properties via Numba to obtain the total memory of the device.
    '''
    # resources (in course reference): https://numba.pydata.org/numba-doc/dev/cuda-reference/host.html
    ## numba.cuda.current_context(devnum=None)  # Get the current device or use a device by device number, and return the CUDA context
    ## cuda.current_context()  # force cuda initialize
    ## get_memory_info(self)  # return (free, total) memory in bytes in the context
    return cuda.current_context().get_memory_info()[1]
    # pass


def gpu_compute_capability():
    '''
    Query the GPU's properties via Numba to obtain the compute capability of the device.
    '''
    ## numba.cuda.get_current_device()  # Get current device associated with the current thread
    ## compute_capability  # A tuple, (major, minor) indicating the supported compute capability
    return cuda.get_current_device().compute_capability
    # pass


def gpu_name():
    '''
    Query the GPU's properties via Numba to obtain the name of the device.
    '''
    ## name  # The name of the device (e.g.
“GeForce GTX 970”) return cuda.get_current_device().name # pass def max_float64s(): ''' Compute the maximum number of 64-bit floats that can be stored on the GPU ''' ## 1 B = 8 bits(1Byte) ## maxnum = gpumemory(GB) x (8 bits/1B ) /64 bit return int(gpu_total_memory()/8) # pass # ----------------------------------------- # def map_64(): ''' Execute the map app modified to use 64-bit floats ''' # from map_parallel import sArray # import map_parallel M=512737280/2 #277 # it matter for length of code nn = max_float64s()/M # maximum print('largest array run succssfully at (float64)',int(nn)) nx = np.linspace(0, 1, int(nn), dtype = np.float64) ny = np.array(nx) plt.plot(nx,ny) plt.show() @cuda.jit(device = True) #gpu def f(x, r): ''' Execute 1 iteration of the logistic map ''' return r*x*(1 - x) @cuda.jit def logistic_map_kernel(ss, r, x, transient, steady): ''' Kernel for parallel iteration of logistic map Arguments: ss: 2D numpy device array to store steady state iterates for each value of r r: 1D numpy device array of parameter values x: float initial value transient: int number of iterations before storing results steady: int number of iterations to store ''' # reference python_logistic_map.ipynb [66] bifurcation # reference ch2_map_nb.ipynb # compare parallel and serial section n = r.shape[0] # The argument of 'grid()' inidicates the dimenstion of the computational grid. 
i = cuda.grid(1) # w have a one-dimensiional grid if i < n: # python_logistic_map.ipynb [66] bifurcation x_old = x0 #assign the initial value for k in range(transient): x_new = f(x_old, r[i]) x_old = x_new for k in range(steady): #iterate over the desired sequence ss[k][i] = x_old x_new = f(x_old, r[i]) #compute the output value and assign to variable x_new x_old = x_new #assign the new (output) value top be the old (input) value for the next # return ss # -----------------------------------------from: python_logistic_map.ipynb # def logisticSteadyArray(x0,r,n_transient, n_ss): # ''' # Conpute an array of iterates of the logistic map f(x)=r*x*(1-x) # Inputs: # x0: float initial value # r: float parameter value # n_transient: int number of initial iterates to NOT store # n_ss: int number of iterates to store # Returns: # x: numpy array of n float64 values # ''' # #create an array to hold n elements (each a 64 bit float) # x = np.zeros(n_ss, dtype=np.float64) # x_old = x0 #assign the initial value # for i in range(n_transient): # x_new = f(x_old, r) # x_old = x_new # for i in range(n_ss): #iterate over the desired sequence # x[i] = x_old # x_new = f(x_old, r) #compute the output value and assign to variable x_new # x_old = x_new #assign the new (output) value top be the old (input) value for the next iterate # return x # # ----------------------------------------- # pass def parallel_logistic_map(r, x, transient, steady): ''' Parallel iteration of the logistic map Arguments: r: 1D numpy array of float64 parameter values x: float initial value transient: int number of iterations before storing results steady: int number of iterations to store Return: 2D numpy array of steady iterates for each entry in r ''' # reference ch2_map_nb.ipynb # compare parallel and serial section TPB = 32 n = r.shape[0] d_r = cuda.to_device(r) d_ss = cuda.device_array([steady,n], dtype = np.float64) blockdims = TPB gridDims = (n+TPB-1)//TPB logistic_map_kernel[gridDims, blockdims](d_ss, d_r, 
x, transient, steady) return d_ss.copy_to_host() def serial_logistic_map(r, x, transient, steady): ''' Kernel for parallel iteration of logistic map Arguments: ss: 2D numpy device array to store steady state iterates for each value of r r: 1D numpy device array of parameter values x: float initial value transient: int number of iterations before storing results steady: int number of iterations to store ''' # reference python_logistic_map.ipynb [66] bifurcation # reference ch2_map_nb.ipynb # compare parallel and serial section n = r.shape[0] # The argument of 'grid()' inidicates the dimenstion of the computational grid. # i = cuda.grid(1) # w have a one-dimensiional grid #create an array to hold n elements (each a 64 bit float) # x = np.zeros(n_ss, dtype=np.float64) ss = np.zeros([steady, n], dtype=np.float64) # x_old = x0 #assign the initial value for i in range(n): # if i < n: # python_logistic_map.ipynb [66] bifurcation x_old = x #assign the initial value for k in range(transient): x_new = r[i]*x_old*(1 - x_old) x_old = x_new for k in range(steady): #iterate over the desired sequence ss[k][i] = x_old x_new = r[i]*x_old*(1 - x_old) #compute the output value and assign to variable x_new x_old = x_new #assign the new (output) value top be the old (input) value for the next return ss # ----------------------------------------- ## ----------------------------------------- ## ----------------------------------------- # @cuda.jit(device = True) def iteration_count(cx, cy, dist, itrs): ''' Computed number of Mandelbrot iterations Arguments: cx, cy: float64 parameter values dist: float64 escape threshold itrs: int iteration count limit ''' #from previous HW1 p5escape x = 0 y = 0 for i in range(itrs): r = math.sqrt(x**2 + y**2) if dist > r: x_n = x**2 - y**2 + cx y_n = 2*x*y + cy x = x_n y = y_n else: break return i # pass # # xtemp := x×x - y×y + x0 # y := 2×x×y + y0 # x := xtemp # iteration := iteration + 1 # pass @cuda.jit def mandelbrot_kernel(out, cx, cy, dist, itrs): 
    '''
    Kernel for parallel computation of Mandelbrot iteration counts
    Arguments:
        out: 2D numpy device array for storing computed iteration counts
        cx, cy: 1D numpy device arrays of parameter values
        dist: float64 escape threshold
        itrs: int iteration count limit
    '''
    i, j = cuda.grid(2)
    nx, ny = out.shape
    if i < nx and j < ny:
        out[i, j] = iteration_count(cx[j], cy[i], dist, itrs)


def parallel_mandelbrot(cx, cy, dist, itrs):
    '''
    Parallel computation of Mandelbrot iteration counts
    Arguments:
        cx, cy: 1D numpy arrays of parameter values
        dist: float64 escape threshold
        itrs: int iteration count limit
    Return:
        2D numpy array of iteration counts
    '''
    TPBx = 32
    TPBy = 32
    nx = cx.shape[0]
    ny = cy.shape[0]
    d_cx = cuda.to_device(cx)
    d_cy = cuda.to_device(cy)
    out = cuda.device_array((nx, ny), dtype=np.float64)
    gridDims = ((nx + TPBx - 1) // TPBx, (ny + TPBy - 1) // TPBy)
    blockDims = (TPBx, TPBy)
    mandelbrot_kernel[gridDims, blockDims](out, d_cx, d_cy, dist, itrs)
    return out.copy_to_host()


if __name__ == "__main__":

    print('problem1:')
    #Problem 1
    print("GPU memory in GB: ", gpu_total_memory()/1024**3)
    print("Compute capability (Major, Minor): ", gpu_compute_capability())
    print("GPU Model Name: ", gpu_name())
    print("Max float64 count: ", max_float64s())

    #PASTE YOUR OUTPUT HERE#
    print('GPU memory in GB: 3.8201904296875\'')
    print('Compute capability (Major, Minor): (7, 5)')
    print('GPU Model Name: b\'GeForce GTX 1650 with Max-Q Design')
    print('Max float64 count: 512737280')

    #Problem 2
    print('\nproblem2:')
    map_64()
    print('largest array run successfully at (float64): 2')

    #PASTE YOUR ERROR MESSAGES HERE#
    # print('ERROR MESSAGES in problem2:')
    # print('Kernel Restarting')
    # print('#numba.cuda.cudadrv.driver.CudaAPIError: [2] Call to cuMemAlloc results in CUDA_ERROR_OUT_OF_MEMORY')

    #Problem 3
    print('\nproblem3:')
    #3a)
    print('In the logistic map code the first for loop iterates over the transient iterations, which are not stored, and the second for loop iterates over the steady-state iterations that are stored')
    #3b)
    print('The loop over the parameter values r is the one whose computations are independent of one another: each r value produces its own iterate sequence, while the iterations for a single r must run in order, so the r loop is parallelized in logistic_map_kernel')

    # compute 1000 iterations for 1000 parameter values
    # define sequence of r values
    # initialize parameters for iteration sequences
    n_ss = 8  # number of "steady-state" iterates to store
    n_transient = 992  # number of transient iterates that are not stored
    x0 = 0.5  # initial value of x
    rmin = 0
    rmax = 4
    m = 1000  # number of r values
    # Create the m equally spaced values of r using linspace
    r = np.linspace(rmin, rmax, m)

    t0 = time.time()
    tmp = parallel_logistic_map(r, x0, n_transient, n_ss)  # compute the steady-state array
    t1 = time.time()
    tp = t1 - t0
    print('Parallel run time(second) = ', tp)

    plt.figure()
    for i in range(n_ss):
        plt.plot(r, tmp.transpose(), 'b.')
    plt.axis([rmin, rmax, 0, 1])
    plt.xlabel('Iteration number')
    plt.ylabel('x value')
    plt.title('Iterations of the logistic map')
    plt.show()

    # serial part
    t0 = time.time()
    tmp_s = serial_logistic_map(r, x0, n_transient, n_ss)  # compute the steady-state array
    t1 = time.time()
    ts = t1 - t0
    print('Serial run time(second) = ', ts)
    # print(tmp_s)

    plt.figure()
    for i in range(n_ss):
        plt.plot(r, tmp_s.transpose(), 'b.')
    plt.axis([rmin, rmax, 0, 1])
    plt.xlabel('Iteration number')
    plt.ylabel('x value')
    plt.title('Iterations of the logistic map')
    plt.show()

    print('Acceleration factor (serial run time over parallel run time) = ', ts/tp)

    #Problem 4
    print('Problem 4')
    #4a)
    # In the for loop inside the iteration_count function, x and y are updated from their previous values, so those
    # iterations depend on one another; the computations for different (cx, cy) grid points are independent of one
    # another, which is why the two grid loops can be parallelized
    #4b)
    nx = 5523
    ny = 5523
    dist = 2.5
    itrs = 256
    cx = np.linspace(-2, 1, nx)
    cy = np.linspace(-1.5, 1.5, ny)
    f = parallel_mandelbrot(cx, cy, dist, itrs)
    plt.figure()
    plt.imshow(f, extent=(-2.5, 2.5, -2.5, 2.5))
    plt.title('Mandelbrot graph')
    plt.show()
# -
P2.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Blasco0616/CPEN-21A-1-1/blob/main/Loop_Statement.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="XlTQTrZBrUZQ"
# ##For Loop Statement

# + colab={"base_uri": "https://localhost:8080/"} id="emT0rR9fraRy" outputId="e161381a-0003-46e9-a971-149a12176c28"
week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
for x in week:
  print(x)

# + [markdown] id="GE6AmRz7tVqS"
# The Break Statement

# + colab={"base_uri": "https://localhost:8080/"} id="Cp2bydg6uus_" outputId="1bc9904c-0b8b-4a5b-e5fb-24b4fcb312e3"
week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
for x in week:
  print(x)
  if x == "Thursday":
    break

# + colab={"base_uri": "https://localhost:8080/"} id="j09xQtA8taVq" outputId="87eff3a5-85f2-4240-f709-40095a63fd2a"
week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
for x in week:
  if x == "Thursday":
    break
  print(x)

# + [markdown] id="o6OZ49NlvF3O"
# Looping through a string

# + colab={"base_uri": "https://localhost:8080/"} id="E_cA7e5FvJeW" outputId="209a47c1-5b7c-45a0-ea0a-7e68ab3ec8ee"
for x in "week":
  print(x)

# + [markdown] id="7aQ_M765vySf"
# The range() function

# + colab={"base_uri": "https://localhost:8080/"} id="zENA2UvJv1gO" outputId="b5cc79f4-e5ad-4802-f617-bfdf3705bb80"
for x in range(6):
  print("Example 1",x)

for x in range(2,6):
  print("Example 2",x)

# + [markdown] id="kgFoySoNxzJm"
# Nested loops

# + colab={"base_uri": "https://localhost:8080/"} id="_YQuRXmWx1tK" outputId="12e31c4e-3b2c-4f8d-c509-ae19f2d6b396"
adjective = ["Red","Big","Tasty"]
fruit = ["apple","banana","cherry"]
for x in adjective:
  for y in fruit:
    print(x,y)

# + [markdown] id="sEXT1EVXzX5I"
# While loop statement

# + colab={"base_uri": "https://localhost:8080/"} id="hC7D927Czakf" outputId="79433a1b-d052-4a14-b319-291837023512"
i = 1
while i<6:
  print(i)
  i+=1 #i = i+1

# + [markdown] id="ez7xw8HU0dpx"
# The break statement

# + colab={"base_uri": "https://localhost:8080/"} id="VeEnWG5K1Hg5" outputId="117e9b06-14f4-4342-c6f1-1258aff0848f"
i = 1
while i<6:
  print(i)
  if i==3:
    break
  i+=1

# + colab={"base_uri": "https://localhost:8080/"} id="98Fk7ZNB2Wsv" outputId="6dfb620a-4549-471b-a17d-cb1c5ba909e8"
i = 1
while i<6:
  i+=1
  if i==3:
    continue
  print(i)

# + [markdown] id="PqSHkVPP3XTn"
# The else statement

# + colab={"base_uri": "https://localhost:8080/"} id="eRM7I1MJ3cej" outputId="af34d112-e633-4017-ff1c-0d991d816805"
i = 1
while i<6:
  print(i)
  i+=1
else:
  print("i is no longer less than 6")

# + [markdown] id="ajGnS1XO37Yo"
# application 1

# + colab={"base_uri": "https://localhost:8080/"} id="3_3XeCvS39am" outputId="cc7ce4b6-16f1-4851-edb9-59cfb57de893"
for x in range(11):
  print("Hello",x)

# + [markdown] id="mGDPJMeD5jIq"
# application 2

# + colab={"base_uri": "https://localhost:8080/"} id="mIww55U95qaB" outputId="706cf301-a479-4bc2-eae7-881241e3b840"
for x in range(3,10):
  print(x)
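The `else` clause shown above for `while` loops also works with `for` loops. A small sketch in the same beginner style (this example is an addition, not from the original notebook): the `else` branch runs only when the loop finishes without hitting `break`, which makes it handy for search loops.

```python
week = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]

def find_day(days, target):
  for day in days:
    if day == target:
      print("Found", target)
      break
  else:
    # runs only if the for loop completed without break
    print(target, "is not in the list")

find_day(week, "Thursday")   # prints: Found Thursday
find_day(week, "Caturday")   # prints: Caturday is not in the list
```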
Loop_Statement.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # # Hello, PyTorch # # ![img](https://pytorch.org/tutorials/_static/pytorch-logo-dark.svg) # # __This notebook__ will teach you to use PyTorch low-level core. If you're running this notebook outside the course environment, you can install it [here](https://pytorch.org). # # __PyTorch feels__ differently than tensorflow/theano on almost every level. TensorFlow makes your code live in two "worlds" simultaneously: symbolic graphs and actual tensors. First you declare a symbolic "recipe" of how to get from inputs to outputs, then feed it with actual minibatches of data. In PyTorch, __there's only one world__: all tensors have a numeric value. # # You compute outputs on the fly without pre-declaring anything. The code looks exactly as in pure numpy with one exception: PyTorch computes gradients for you. And can run stuff on GPU. And has a number of pre-implemented building blocks for your neural nets. [And a few more things.](https://medium.com/towards-data-science/pytorch-vs-tensorflow-spotting-the-difference-25c75777377b) # # And now we finally shut up and let PyTorch do the talking. 
# + import sys, os if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'): # !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week04_%5Brecap%5D_deep_learning/notmnist.py # !touch .setup_complete from pyvirtualdisplay import Display virtual_display = Display(visible=0, size=(1400, 900)) virtual_display.start() # - import numpy as np import torch print(torch.__version__) # + # numpy world x = np.arange(16).reshape(4, 4) print("X:\n%s\n" % x) print("X.shape: %s\n" % (x.shape,)) print("add 5:\n%s\n" % (x + 5)) print("X*X^T:\n%s\n" % np.dot(x, x.T)) print("mean over rows:\n%s\n" % (x.mean(axis=-1))) print("cumsum of cols:\n%s\n" % (np.cumsum(x, axis=0))) # + # PyTorch world x = np.arange(16).reshape(4, 4) x = torch.tensor(x, dtype=torch.float32) # or torch.arange(0, 16).view(4, 4) print("X:\n%s" % x) print("X.shape: %s\n" % (x.shape,)) print("add 5:\n%s" % (x + 5)) print("X*X^T:\n%s" % torch.matmul(x, x.transpose(1, 0))) # short: x.mm(x.t()) print("mean over rows:\n%s" % torch.mean(x, dim=-1)) print("cumsum of cols:\n%s" % torch.cumsum(x, dim=0)) # - # ## NumPy and PyTorch # # As you can notice, PyTorch allows you to hack stuff much the same way you did with NumPy. No graph declaration, no placeholders, no sessions. This means that you can _see the numeric value of any tensor at any moment of time_. Debugging such code can be done with by printing tensors or using any debug tool you want (e.g. [PyCharm debugger](https://www.jetbrains.com/help/pycharm/part-1-debugging-python-code.html) or [gdb](https://wiki.python.org/moin/DebuggingWithGdb)). # # You could also notice the a few new method names and a different API. So no, there's no compatibility with NumPy [yet](https://github.com/pytorch/pytorch/issues/2228) and yes, you'll have to memorize all the names again. Get excited! 
# # ![img](http://i0.kym-cdn.com/entries/icons/original/000/017/886/download.jpg) # # For example, # * If something takes a list/tuple of axes in NumPy, you can expect it to take `*args` in PyTorch # * `x.reshape([1,2,8]) -> x.view(1,2,8)` # * You should swap `axis` for `dim` in operations like `mean` or `cumsum` # * `x.sum(axis=-1) -> x.sum(dim=-1)` # * Most mathematical operations are the same, but types an shaping is different # * `x.astype('int64') -> x.type(torch.LongTensor)` # # To help you acclimatize, there's a [table](https://github.com/torch/torch7/wiki/Torch-for-NumPy-users) covering most new things. There's also a neat [documentation page](http://pytorch.org/docs/master/). # # Finally, if you're stuck with a technical problem, we recommend searching [PyTorch forums](https://discuss.pytorch.org/). Or just googling, which usually works just as efficiently. # # If you feel like you almost give up, remember two things: __GPU__ and __free gradients__. Besides you can always jump back to NumPy with `x.numpy()`. # ### Warmup: trigonometric knotwork # _inspired by [this post](https://www.quora.com/What-are-the-most-interesting-equation-plots)_ # # There are some simple mathematical functions with cool plots. For one, consider this: # # $$ x(t) = t - 1.5 * cos(15 t) $$ # $$ y(t) = t - 1.5 * sin(16 t) $$ # + import matplotlib.pyplot as plt # %matplotlib inline t = torch.linspace(-10, 10, steps=10000) # compute x(t) and y(t) as defined above x = t - 1.5 * torch.cos(15 * t) y = t - 1.5 * torch.sin(16 * t) plt.plot(x.numpy(), y.numpy()) # - # If you're done early, try adjusting the formula and seeing how it affects the function. # --- # ## Automatic gradients # # Any self-respecting DL framework must do your backprop for you. Torch handles this with the `autograd` module. 
# # The general pipeline looks like this: # * When creating a tensor, you mark it as `requires_grad`: # * `torch.zeros(5, requires_grad=True)` # * `torch.tensor(np.arange(5), dtype=torch.float32, requires_grad=True)` # * Define some differentiable `loss = arbitrary_function(a)` # * Call `loss.backward()` # * Gradients are now available as ```a.grad``` # # __Here's an example:__ let's fit a linear regression on Boston house prices. from sklearn.datasets import load_boston boston = load_boston() plt.scatter(boston.data[:, -1], boston.target) # + w = torch.zeros(1, requires_grad=True) b = torch.zeros(1, requires_grad=True) x = torch.tensor(boston.data[:, -1] / 10, dtype=torch.float32) y = torch.tensor(boston.target, dtype=torch.float32) # + y_pred = w * x + b loss = torch.mean((y_pred - y)**2) # propagate gradients loss.backward() # - # The gradients are now stored in `.grad` of those variables that require them. print("dL/dw = \n", w.grad) print("dL/db = \n", b.grad) # If you compute gradient from multiple losses, the gradients will add up at variables, therefore it's useful to __zero the gradients__ between iteratons. # + from IPython.display import clear_output for i in range(100): y_pred = w * x + b loss = torch.mean((y_pred - y)**2) loss.backward() w.data -= 0.1 * w.grad.data b.data -= 0.1 * b.grad.data # zero gradients w.grad.data.zero_() b.grad.data.zero_() # the rest of code is just bells and whistles if (i + 1) % 5 == 0: clear_output(True) plt.scatter(x.numpy(), y.numpy()) plt.scatter(x.detach().numpy(), y_pred.detach().numpy(), color='orange', linewidth=5) plt.show() print("loss = ", loss.detach().numpy()) if loss.detach().numpy() < 0.5: print("Done!") break # - # __Bonus quest__: try implementing and writing some nonlinear regression. You can try quadratic features or some trigonometry, or a simple neural network. The only difference is that now you have more variables and a more complicated `y_pred`. 
# +
w = torch.zeros(4, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

x = torch.tensor(boston.data[:, -1] / 10, dtype=torch.float32)
y = torch.tensor(boston.target, dtype=torch.float32)

# +
from IPython.display import clear_output

for i in range(1000):
    y_pred = w[0] * x + w[1] * (1 / x) + w[2] * (x ** 2) + torch.exp(w[3] * x) + b
    loss = torch.mean((y_pred - y)**2)
    loss.backward()

    w.data -= 0.001 * w.grad.data
    b.data -= 0.001 * b.grad.data

    # zero gradients
    w.grad.data.zero_()
    b.grad.data.zero_()

    # the rest of code is just bells and whistles
    if (i + 1) % 5 == 0:
        clear_output(True)
        plt.scatter(x.numpy(), y.numpy())
        plt.scatter(x.detach().numpy(), y_pred.detach().numpy(), color='orange', linewidth=5)
        plt.show()

        print("loss = ", loss.detach().numpy())
        if loss.detach().numpy() < 0.5:
            print("Done!")
            break
# -

w

# # High-level PyTorch
#
# So far we've been dealing with the low-level PyTorch API. While it's absolutely vital for any custom losses or layers, building large neural nets in it is a bit clumsy.
#
# Luckily, there's also a high-level PyTorch interface with pre-defined layers, activations and training algorithms.
#
# We'll cover them as we go through a simple image recognition problem: classifying letters into __"A"__ vs __"B"__.
#

# +
from notmnist import load_notmnist

# Saving into the folder Jupyter is launched from made everything lag for some reason,
# so the path= argument was changed
X_train, y_train, X_test, y_test = load_notmnist(path='/home/ds/notMNIST_small', letters='AB')
X_train, X_test = X_train.reshape([-1, 784]), X_test.reshape([-1, 784])

print("Train size = %i, test_size = %i" % (len(X_train), len(X_test)))
# -

for i in [0, 1]:
    plt.subplot(1, 2, i + 1)
    plt.imshow(X_train[i].reshape([28, 28]))
    plt.title(str(y_train[i]))

# Let's start with layers.
# The main abstraction here is __`torch.nn.Module`__:

# +
from torch import nn
import torch.nn.functional as F

print(nn.Module.__doc__)
# -

# There's a vast library of popular layers and architectures already built for ya'.
#
# This is a binary classification problem, so we'll train __Logistic Regression__.
# $$P(y_i | X_i) = \sigma(W \cdot X_i + b) ={ 1 \over {1+e^{- [W \cdot X_i + b]}} }$$
#

# +
# create a network that stacks layers on top of each other
model = nn.Sequential()

# add first "dense" layer with 784 input units and 1 output unit.
model.add_module('l1', nn.Linear(784, 1))

# add sigmoid activation to turn the output into a probability
# note: layer names must be unique
model.add_module('l2', nn.Sigmoid())
# -

print("Weight shapes:", [w.shape for w in model.parameters()])

# +
# create dummy data with 3 samples and 784 features
x = torch.tensor(X_train[:3], dtype=torch.float32)
y = torch.tensor(y_train[:3], dtype=torch.float32)

# compute outputs given inputs, both are variables
y_predicted = model(x)[:, 0]

y_predicted  # display what we've got
# -

# Let's now define a loss function for our model.
#
# The natural choice is to use binary crossentropy (aka logloss, negative llh):
# $$ L = {1 \over N} \underset{X_i,y_i} \sum - [ y_i \cdot log P(y_i=1 | X_i) + (1-y_i) \cdot log (1-P(y_i=1 | X_i)) ]$$
#

# +
y_pred = model(x).flatten()
crossentropy = y * torch.log(y_pred) + (1 - y) * torch.log(1 - y_pred)
loss = -torch.mean(crossentropy)

assert tuple(crossentropy.size()) == (3,), "Crossentropy must be a vector with one element per sample"
assert tuple(loss.size()) == tuple(), "Loss must be scalar. Did you forget the mean/sum?"
assert loss.data.numpy() > 0, "Crossentropy must be non-negative, zero only for perfect prediction"
assert loss.data.numpy() <= np.log(3), "Loss is too large even for an untrained model. Please double-check it."
# -

# __Note:__ you can also find many such functions in `torch.nn.functional`, just type __`F.<tab>`__.
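As a quick sanity check on the handwritten loss (a sketch added here, not part of the original notebook; the probabilities below are made-up stand-ins for `model(x)` outputs), the binary crossentropy formula above should agree with PyTorch's built-in `F.binary_cross_entropy`, which averages over samples by default:

```python
import torch
import torch.nn.functional as F

# fake predicted probabilities and binary targets
y_pred = torch.tensor([0.9, 0.4, 0.2])
y_true = torch.tensor([1.0, 0.0, 0.0])

# handwritten negative log-likelihood, as in the cell above
crossentropy = y_true * torch.log(y_pred) + (1 - y_true) * torch.log(1 - y_pred)
loss_manual = -torch.mean(crossentropy)

# built-in equivalent (reduction='mean' is the default)
loss_builtin = F.binary_cross_entropy(y_pred, y_true)

print(torch.allclose(loss_manual, loss_builtin))  # True
```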
# __Torch optimizers__ # # When we trained Linear Regression above, we had to manually `.zero_()` gradients on both our variables. Imagine that code for a 50-layer network. # # Again, to keep it from getting dirty, there's `torch.optim` module with pre-implemented algorithms: # + # [w for w in model.parameters()] # + opt = torch.optim.RMSprop(model.parameters(), lr=0.01) # here's how it's used: opt.zero_grad() # clear gradients loss.backward() # add new gradients opt.step() # change weights # + # [w for w in model.parameters()] # - # dispose of old variables to avoid bugs later del x, y, y_predicted, loss, y_pred # ### Putting it all together # + # create network again just in case model = nn.Sequential() model.add_module('first', nn.Linear(784, 1)) model.add_module('second', nn.Sigmoid()) opt = torch.optim.Adam(model.parameters(), lr=1e-3) # + history = [] for i in range(100): # sample 256 random images ix = np.random.randint(0, len(X_train), 256) x_batch = torch.tensor(X_train[ix], dtype=torch.float32) y_batch = torch.tensor(y_train[ix], dtype=torch.float32) # predict probabilities y_predicted = model(x_batch).flatten() assert y_predicted.dim( ) == 1, "did you forget to select first column with [:, 0]" # compute loss, just like before crossentropy = y_batch * torch.log(y_predicted) + (1 - y_batch) * torch.log(1 - y_predicted) loss = -torch.mean(crossentropy) # compute gradients loss.backward() # Adam step opt.step() # clear gradients opt.zero_grad() history.append(loss.data.numpy()) if i % 10 == 0: print("step #%i | mean loss = %.3f" % (i, np.mean(history[-10:]))) # - for param in opt.param_groups: param['lr'] = 1e-6 # __Debugging tips:__ # * Make sure your model predicts probabilities correctly. Just print them and see what's inside. # * Don't forget the _minus_ sign in the loss function! It's a mistake 99% people do at some point. # * Make sure you zero-out gradients after each step. 
Seriously:) # * In general, PyTorch's error messages are quite helpful, read 'em before you google 'em. # * if you see nan/inf, print what happens at each iteration to find our where exactly it occurs. # * If loss goes down and then turns nan midway through, try smaller learning rate. (Our current loss formula is unstable). # ### Evaluation # # Let's see how our model performs on test data # + # use your model to predict classes (0 or 1) for all test samples predicted_y_test = model(torch.tensor(X_test, dtype=torch.float32)) predicted_y_test = predicted_y_test.flatten().detach().numpy() threshold = 0.5 predicted_y_test = predicted_y_test > threshold assert isinstance(predicted_y_test, np.ndarray), "please return np array, not %s" % type( predicted_y_test) assert predicted_y_test.shape == y_test.shape, "please predict one class for each test sample" assert np.in1d(predicted_y_test, y_test).all(), "please predict class indexes" accuracy = np.mean(predicted_y_test == y_test) print("Test accuracy: %.5f" % accuracy) assert accuracy > 0.95, "try training longer" # - # ## More about PyTorch: # * Using torch on GPU and multi-GPU - [link](http://pytorch.org/docs/master/notes/cuda.html) # * More tutorials on PyTorch - [link](http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) # * PyTorch examples - a repo that implements many cool DL models in PyTorch - [link](https://github.com/pytorch/examples) # * Practical PyTorch - a repo that implements some... other cool DL models... yes, in PyTorch - [link](https://github.com/spro/practical-pytorch) # * And some more - [link](https://www.reddit.com/r/pytorch/comments/6z0yeo/pytorch_and_pytorch_tricks_for_kaggle/) # # --- # # Homework tasks # # There will be three tasks worth 2, 3 and 5 points respectively. # If you get stuck with no progress, try switching to the next task and returning later. 
# ### Task I (2 points) - tensormancy
#
# ![img](https://media.giphy.com/media/3o751UMCYtSrRAFRFC/giphy.gif)
#
# When dealing with more complex stuff like neural networks, it's best if you use tensors the way a samurai uses his sword.
#
#
# __1.1 The Cannabola__
# [(_disclaimer_)](https://gist.githubusercontent.com/justheuristic/e2c1fa28ca02670cabc42cacf3902796/raw/fd3d935cef63a01b85ed2790b5c11c370245cbd7/stddisclaimer.h)
#
# Let's write another function, this time in polar coordinates:
# $$\rho(\theta) = (1 + 0.9 \cdot \cos(8 \cdot \theta)) \cdot (1 + 0.1 \cdot \cos(24 \cdot \theta)) \cdot (0.9 + 0.05 \cdot \cos(200 \cdot \theta)) \cdot (1 + \sin(\theta))$$
#
#
# Then convert it into cartesian coordinates ([howto](http://www.mathsisfun.com/polar-cartesian-coordinates.html)) and plot the results.
#
# Use torch tensors only: no lists, loops, numpy arrays, etc.

# +
theta = torch.linspace(-np.pi, np.pi, steps=1000)

# compute rho(theta) as per the formula above
rho = (
    (1 + 0.9 * torch.cos(8 * theta))
    * (1 + 0.1 * torch.cos(24 * theta))
    * (0.9 + 0.05 * torch.cos(200 * theta))
    * (1 + torch.sin(theta))
)

# Now convert polar (rho, theta) pairs into cartesian (x, y) to plot them.
x = rho * torch.cos(theta)
y = rho * torch.sin(theta)

plt.figure(figsize=[6, 6])
plt.fill(x.numpy(), y.numpy(), color='green')
plt.grid()
# -

# ### Task II: The Game of Life (3 points)
#
# Now it's time for you to make something more challenging. We'll implement Conway's [Game of Life](http://web.stanford.edu/~cdebs/GameOfLife/) in _pure PyTorch_.
#
# While this is still a toy task, implementing the game of life this way has one cool benefit: __you'll be able to run it on GPU!__ Indeed, what could be a better use of your GPU than simulating the Game of Life on 1M x 1M grids?
#
# ![img](https://cdn.tutsplus.com/gamedev/authors/legacy/Stephane%20Beniak/2012/09/11/Preview_Image.png)
# If you've skipped the URL above out of sloth, here's the Game of Life:
# * You have a 2D grid of cells, where each cell is "alive" (1) or "dead" (0)
# * Any living cell that has 2 or 3 neighbors survives, else it dies [0, 1, or 4+ neighbors]
# * Any cell with exactly 3 neighbors becomes alive (if it was dead)
#
# For this task, you are given a reference NumPy implementation that you must convert to PyTorch.
# _[NumPy code inspired by: https://github.com/rougier/numpy-100]_
#
#
# __Note:__ You can find convolution in `torch.nn.functional.conv2d(Z, filters)`. Note that it has a different input format.
#
# __Note 2:__ From the mathematical standpoint, PyTorch convolution is actually cross-correlation. Those two are very similar operations. More info: [video tutorial](https://www.youtube.com/watch?v=C3EEy8adxvc), [scipy functions review](http://programmerz.ru/questions/26903/2d-convolution-in-python-similar-to-matlabs-conv2-question), [stack overflow source](https://stackoverflow.com/questions/31139977/comparing-matlabs-conv2-with-scipys-convolve2d).

# +
from scipy.signal import correlate2d


def np_update(Z):
    # Count neighbours with convolution
    filters = np.array([[1, 1, 1],
                        [1, 0, 1],
                        [1, 1, 1]])

    N = correlate2d(Z, filters, mode='same')

    # Apply rules
    birth = (N == 3) & (Z == 0)
    survive = ((N == 2) | (N == 3)) & (Z == 1)
    Z[:] = birth | survive
    return Z
# -


def torch_update(Z):
    """
    Implement an update function that does to Z exactly the same as np_update.
    :param Z: torch.FloatTensor of shape [height, width] containing 0s (dead) and 1s (alive)
    :returns: torch.FloatTensor Z after the update. You can opt to create a new tensor or change Z in place.
""" filters = torch.tensor([ [1, 1, 1], [1, 0, 1], [1, 1, 1], ], dtype=torch.float32) filters = filters.reshape(1, 1, 3, 3) ZZ = Z.reshape(1, 1, Z.shape[0], Z.shape[1]) N = torch.nn.functional.conv2d(ZZ, filters, padding='same') birth = (N == 3) & (Z == 0) survive = ((N == 2) | (N == 3)) & (Z == 1) Z[:] = birth | survive return Z # + # Z.reshape(1, 1, Z.shape[0], Z.shape[1]) # + # torch.tensor([ # [1, 1, 1], # [1, 0, 1], # [1, 1, 1], # ]).reshape(1, 1, 3, 3).shape # + # initial frame Z_numpy = np.random.choice([0, 1], p=(0.5, 0.5), size=(100, 100)) Z = torch.from_numpy(Z_numpy).type(torch.FloatTensor) # your debug polygon :) Z_new = torch_update(Z.clone()) # tests Z_reference = np_update(Z_numpy.copy()) assert np.all(Z_new.numpy() == Z_reference), \ "your PyTorch implementation doesn't match np_update. Look into Z and np_update(ZZ) to investigate." print("Well done!") # + # # !pip install ipympl # + # # %matplotlib notebook # plt.ion() # initialize game field Z = np.random.choice([0, 1], size=(100, 100)) Z = torch.from_numpy(Z).type(torch.FloatTensor) fig = plt.figure() ax = fig.add_subplot(111) fig.show() for _ in range(100): # update Z = torch_update(Z) # re-draw image ax.clear() ax.imshow(Z.numpy(), cmap='gray') fig.canvas.draw() # + # Some fun setups for your amusement # parallel stripes Z = np.arange(100) % 2 + np.zeros([100, 100]) # with a small imperfection Z[48:52, 50] = 1 Z = torch.from_numpy(Z).type(torch.FloatTensor) fig = plt.figure() ax = fig.add_subplot(111) fig.show() for _ in range(100): Z = torch_update(Z) ax.clear() ax.imshow(Z.numpy(), cmap='gray') fig.canvas.draw() # - # More fun with Game of Life: [video](https://www.youtube.com/watch?v=C2vgICfQawE) # ### Task III: Going deeper (5 points) # <img src="http://download.gamezone.com/uploads/image/data/1190338/article_post_width_a88.jpg" width=360> # Your ultimate task for this week is to build your first neural network [almost] from scratch and pure PyTorch. 
# # This time you will solve the same digit recognition problem, but at a larger scale # # * 10 different letters # * 20k samples # # We want you to build a network that reaches at least 80% accuracy and has at least 2 linear layers in it. Naturally, it should be nonlinear to beat logistic regression. # # With 10 classes you will need to use __Softmax__ at the top instead of sigmoid and train using __categorical crossentropy__ (see [here](http://wiki.fast.ai/index.php/Log_Loss)). Write your own loss or use `torch.nn.functional.nll_loss`. Just make sure you understand what it accepts as input. # # Note that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) neural network should already give you an edge over logistic regression. # # # __[bonus kudos]__ # If you've already beaten logistic regression with a two-layer net, but enthusiasm still ain't gone, you can try improving the test accuracy even further! It should be possible to reach 90% without convnets. # # __SPOILERS!__ # At the end of the notebook you will find a few tips and frequent errors. # If you feel confident enough, just start coding right away and get there ~~if~~ once you need to untangle yourself. 
from notmnist import load_notmnist
X_train, y_train, X_test, y_test = load_notmnist(path='/home/ds/notMNIST_small', letters='ABCDEFGHIJ')
X_train, X_test = X_train.reshape([-1, 784]), X_test.reshape([-1, 784])

# %matplotlib inline
plt.figure(figsize=[12, 4])
for i in range(20):
    plt.subplot(2, 10, i + 1)
    plt.imshow(X_train[i].reshape([28, 28]))
    plt.title(str(y_train[i]))

X_train.shape

y_train.shape

# +
# cuda = torch.device('cuda')
# X_train = torch.tensor(X_train, dtype=torch.float32, device=cuda)
# # Note that y_train must have an integer type
# y_train = torch.tensor(y_train, dtype=torch.long, device=cuda)

# +
# y_train.shape, y_train.dtype

# +
# y_pred.shape, y_pred.dtype
# -

batch_size = 1024
np.random.randint(0, len(X_train), batch_size).shape

# +
import torch.nn as nn

history = []
cuda = torch.device('cuda')

hidden_layer_size = 20
model = nn.Sequential(
    nn.Linear(784, hidden_layer_size),
    nn.ReLU(),
    nn.Linear(hidden_layer_size, 10),
)
model.to(cuda)

cross_entropy_loss = nn.CrossEntropyLoss()

# +
batch_size = 1024
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(400):
    batch_ids = np.random.randint(0, len(X_train), batch_size)
    x_batch = torch.tensor(X_train[batch_ids], dtype=torch.float32, device=cuda)
    y_batch = torch.tensor(y_train[batch_ids], dtype=torch.long, device=cuda)

    y_pred = model(x_batch)
    loss = cross_entropy_loss(y_pred, y_batch)

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    history.append(loss.data.cpu().numpy())

    if epoch % 10 == 0:
        print("step #%i | mean loss = %.5f" % (epoch, np.mean(history[-10:])))

    if epoch % 300 == 0 and epoch > 0:
        for param in optimizer.param_groups:
            param['lr'] *= 0.1
            print('New lr:', param['lr'])

# +
x_test = torch.tensor(X_test, dtype=torch.float32, device=cuda)
# y_test = torch.tensor(y_test, dtype=torch.long, device=cuda)
pred_y_test = model(x_test)
pred_y_test = torch.argmax(pred_y_test, axis=1).cpu().numpy()

accuracy = np.mean(pred_y_test == y_test)
accuracy
# -

# <br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
#
# # SPOILERS!
#
# Recommended pipeline:
#
# * Adapt logistic regression from the previous assignment to classify one letter against others (e.g. A vs the rest)
# * Generalize it to multiclass logistic regression.
#   - Either try to remember lecture 0 or google it.
#   - Instead of a weight vector you'll have to use a matrix (feature_id x class_id)
#   - Softmax (exp over sum of exps) can be implemented manually or as `nn.Softmax` (layer) or `F.softmax` (function)
#   - Probably better to use STOCHASTIC gradient descent (minibatch) for greater speed
#     - You can also try momentum/rmsprop/adawhatever
#     - in which case the dataset should probably be shuffled (or use random subsamples on each iteration)
# * Add a hidden layer. Now your logistic regression uses hidden neurons instead of inputs.
#   - The hidden layer uses the same math as the output layer (ex-logistic regression), but uses some nonlinearity (e.g. sigmoid) instead of softmax
#   - You need to train both layers, not just the output layer :)
#   - 50 hidden neurons and a sigmoid nonlinearity will do for a start. Many ways to improve.
#   - In the ideal case this totals to 2 `torch.matmul`'s, 1 softmax and 1 ReLU/sigmoid
#   - __Make sure this neural network works better than logistic regression!__
#
# * Now's the time to try improving the network. Consider layers (size, neuron count), nonlinearities, optimization methods, initialization - whatever you want, but please avoid convolutions for now.
#
# * If anything seems wrong, try going through one step of training and printing everything you compute.
# * If you see NaNs midway through optimization, you can estimate $\log P(y \mid x)$ as `F.log_softmax(layer_before_softmax)`.
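# One of the spoiler bullets above mentions implementing softmax manually. A minimal sketch of the numerically stable way to do it (standalone, with made-up scores):

```python
import torch

def manual_log_softmax(z):
    # shift by the row max so exp() never overflows; softmax is shift-invariant,
    # so the result is unchanged
    z = z - z.max(dim=-1, keepdim=True).values
    return z - torch.log(torch.exp(z).sum(dim=-1, keepdim=True))

scores = torch.tensor([[1.0, 2.0, 3.0], [1000.0, 1000.0, 1000.0]])
probs = manual_log_softmax(scores).exp()
print(probs)  # each row sums to 1; the second row would be nan without the shift
```

Working in log space like this is exactly why `F.log_softmax` plus `nll_loss` (or `CrossEntropyLoss`, which fuses the two) is preferred over computing probabilities and taking logs yourself.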
week04_[recap]_deep_learning/seminar_pytorch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + from __future__ import print_function, division # %matplotlib inline import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd import thinkbayes2 import thinkplot import statsmodels.formula.api as smf # - df = pd.read_csv('heri17.csv', skiprows=2, index_col='year') df[df.columns] /= 10 df.head() df['time'] = df.index - 1966 df['time2'] = df.time**2 def MakeErrorModel(df, y, formula, n=100): """Makes a model that captures sample error and residual error. df: DataFrame y: Series formula: string representation of the regression model n: number of simulations to run returns: (fittedvalues, sample_error, total_error) """ # make the best fit df['y'] = y results = smf.ols(formula, data=df).fit() fittedvalues = results.fittedvalues resid = results.resid # permute residuals and generate hypothetical fits fits = [] for i in range(n): df['y'] = fittedvalues + np.random.permutation(results.resid) fake_results = smf.ols(formula, data=df).fit() fits.append(fake_results.fittedvalues) # compute the variance of the fits fits = np.array(fits) sample_var = fits.var(axis=0) # add sample_var and the variance of the residuals total_var = sample_var + resid.var() # standard errors are square roots of the variances return fittedvalues, np.sqrt(sample_var), np.sqrt(total_var) def FillBetween(fittedvalues, stderr, **options): """Fills in the 95% confidence interval. 
fittedvalues: series stderr: standard error """ low = fittedvalues - 2 * stderr high = fittedvalues + 2 * stderr thinkplot.FillBetween(fittedvalues.index, low, high, **options) def PlotModel(y, fittedvalues, sample_error, total_error, **options): """Plots confidence intervals and the actual data """ FillBetween(fittedvalues, total_error, color='0.9') FillBetween(fittedvalues, sample_error, color='0.7') thinkplot.Plot(fittedvalues, color='0.5') thinkplot.Plot(y, **options) def Plot(df, y, formula, **options): fittedvalues, sample_error, total_error = MakeErrorModel(df, y, formula) PlotModel(y, fittedvalues, sample_error, total_error, **options) thinkplot.Config(xlim=[1965, 2017]) # + import seaborn as sns sns.set_style('whitegrid') sns.set_context('talk', font_scale=1.3) current_palette = sns.color_palette() sns.palplot(current_palette) BLUE, GREEN, RED, PURPLE, YELLOW, SKY = current_palette # - formula = 'y ~ time + time2' y = df.noneall Plot(df, y, formula, color=BLUE, label='None') thinkplot.Config(ylabel='Percent', loc='upper left') ps = df.noneall / 100 odds = ps / (1-ps) log_odds = np.log(odds) log_odds Plot(df, log_odds, formula, color=BLUE, label='None') thinkplot.Config(ylabel='Log odds') attend = 100-df.attendedall Plot(df, attend, formula, color=GREEN, label='No attendance') thinkplot.Config(ylabel='Percent') diff = df.nonemen - df.nonewomen diff = diff.loc[1973:] Plot(df, diff, formula, color=PURPLE, label='Gender gap') thinkplot.Config(ylabel='Difference (percentage points)') diff = df.nonemen - df.nonewomen diff = diff.loc[1986:] Plot(df, diff, formula, color=PURPLE, label='Gender gap') thinkplot.Config(ylabel='Difference (percentage points)') diff = df.nonemen - df.nonewomen diff = diff.loc[1986:] Plot(df, diff, 'y ~ time', color=PURPLE, label='Gender gap') thinkplot.Config(ylabel='Difference (percentage points)')
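# The residual-permutation idea inside `MakeErrorModel` can be boiled down to a few lines of plain NumPy. A sketch for a simple least-squares line on simulated data (not the HERI survey data used above):

```python
import numpy as np

np.random.seed(0)
x = np.linspace(0, 10, 50)
y = 1.5 * x + np.random.normal(0, 2, size=x.size)

# best fit and residuals
slope, intercept = np.polyfit(x, y, 1)
fitted = slope * x + intercept
resid = y - fitted

# refit on permuted residuals to estimate the sampling variability of the fit
fits = []
for _ in range(200):
    y_fake = fitted + np.random.permutation(resid)
    s, i = np.polyfit(x, y_fake, 1)
    fits.append(s * x + i)
fits = np.array(fits)

sample_err = fits.std(axis=0)                     # uncertainty of the fitted line
total_err = np.sqrt(sample_err**2 + resid.var())  # plus residual scatter

# ~95% band, as drawn by FillBetween above
low, high = fitted - 2 * total_err, fitted + 2 * total_err
```

As in the helper above, the inner band reflects uncertainty in the fitted line itself, while the outer band also accounts for the scatter of individual observations.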
archive/heri17.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Git Theory
#
# ### The revision graph
#
# Revisions form a **GRAPH**

import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)

# + language="bash"
# git log --graph --oneline
# -

# ### Git concepts
#
# * Each revision has a parent that it is based on
# * These revisions form a graph
# * Each revision has a unique hash code
#   * In Sue's copy, revision 43 is ab3578d6
#   * Jim might think that is revision 38, but it's still ab3578d6
# * Branches, tags, and HEAD are *labels* pointing at revisions
# * Some operations (like fast forward merges) just move labels.

# ### The levels of Git

# There are four **separate** levels a change can reach in git:

# * The Working Copy
# * The **index** (aka **staging area**)
# * The local repository
# * The remote repository

# Understanding all the things `git reset` can do requires a good
# grasp of git theory.

# * `git reset <commit> <filename>`: Reset the index and working version of that file to the version in a given commit
# * `git reset --soft <commit>`: Move the local repository branch label to that commit, leave the working dir and index unchanged
# * `git reset <commit>`: Move the local repository and index to the commit ("--mixed")
# * `git reset --hard <commit>`: Move the local repository, index, and working directory copy to that state
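# The four `git reset` variants above can be watched in action in a throwaway repository (the file name and commit messages here are made up for illustration):

```bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo v1 > notes.txt
git add notes.txt
git commit -qm "first"
echo v2 > notes.txt
git add notes.txt
git commit -qm "second"
second=$(git rev-parse HEAD)

# --soft: only the branch label moves; the v2 change is still staged
git reset --soft HEAD~1
git diff --cached --stat

# --mixed (the default): label and index move; the working copy keeps v2
git reset -q "$second"
git reset -q HEAD~1
cat notes.txt

# --hard: label, index, and working copy all move back to v1
git reset -q "$second"
git reset --hard -q HEAD~1
cat notes.txt
```

Note how each successive mode reaches one more of the levels listed above: the branch label, then the index, then the working copy.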
ch02git/09GitTheory.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #### 1. what task does the method value_counts perform # ##### Ans: Returns counts of unique values # #### 2. # ##### Ans: # #### 3. # ##### Ans: # #### 4. # ##### Ans: # #### 5. # ##### Ans: # #### 6. # ##### Ans: # #### 7. # ##### Ans:
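# A quick illustration of the answer to question 1 (the example series is made up):

```python
import pandas as pd

s = pd.Series(["low", "high", "low", "medium", "low"])
counts = s.value_counts()
print(counts)
# value_counts returns the count of each unique value, most frequent first
```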
Coursera/Data Analysis with Python-IBM/Week-3/Quiz/.ipynb_checkpoints/Exploratory-Data-Analysis-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # (07:Releasing-and-versioning)= # # Releasing and versioning # <hr style="height:1px;border:none;color:#666;background-color:#666;" /> # Packages exist so that you can share your code with others. Previous chapters have focussed on how to develop your Python package for distribution - we are now ready to release the package to users (which might include your future self, others in your company, or the world). In **Chapter 3: {ref}`03:How-to-package-a-Python`** we briefly showed how to release a package to PyPI, Python's main package index. This chapter now describes in more detail the process of releasing a package and is inspired by the [Releasing a package chapter](https://r-pkgs.org/release.html) of the [R packages book](https://r-pkgs.org/). In **Chapter 8: {ref}`08:Continuous-integration-and-deployment`** we show how the process of developing and releasing a package can be automated. # ## Package repositories # When you're ready to release your software, you first need to decide where to release it to. The Python Package Index ([PyPI](https://pypi.org/)) is the official, open-source, software repository for Python (as CRAN is the repository for R). If you're interested in sharing your work publicly, this is probably where you'll be releasing your package. # # We'll focus on releasing packages to PyPI in this chapter, however PyPI is not the only option. Another popular software repository for Python (and other languages) packages is that hosted by [Anaconda](https://www.anaconda.com/) and accessible with the [conda package manager](https://docs.conda.io/en/latest/) (which we installed back in **Chapter 2: {ref}`02:System-setup`**). 
We won't go into the details of the differences between these two popular repositories here, but if you're interested to read more, we recommend [this article](https://www.anaconda.com/blog/understanding-conda-and-pip). Creating packages for Anaconda requires a little more work than for PyPI - Anaconda provides a [helpful tutorial](https://docs.conda.io/projects/conda-build/en/latest/user-guide/tutorials/build-pkgs-skeleton.html) on the workflow. # # In some cases, you may want to release your package to a private repository (for example, for internal use by your company only). There are many private repository options for Python packages. Companies like [Anaconda](https://docs.anaconda.com/), [PyDist](https://pydist.com/) and [GemFury](https://gemfury.com/) are all examples that offer (typically paid) private Python package repository hosting. You can also set up your own server on a dedicated machine or cloud service like AWS - read more [here](https://medium.com/swlh/how-to-install-a-private-pypi-server-on-aws-76993e45c610). # # Finally, you can also choose to simply host your package on GitHub (or equivalent), and forego releasing to a dedicated software repository like PyPI. In some cases, it is possible for users to `pip install` directly from a GitHub repository (read [this excellent article](https://adamj.eu/tech/2019/03/11/pip-install-from-a-git-repository/) to learn more). For example, to install the `pypkgs` package directly from GitHub: # # ```{prompt} bash # python -m pip install git+https://github.com/TomasBeuzen/pypkgs.git # ``` # # ```{attention} # We don't recommend GitHub for sharing Python packages to a wide audience as the install workflow can often be problematic, the vast majority of Python users do not install packages from GitHub, and dedicated software repositories like PyPI provide better discoverability, ease of installation and a stamp of authenticity. 
# ``` # ## Version numbering # Versioning is the process of adding unique identifiers to different versions of your package. The unique identifier you use may be name-based or number-based. Python prefers number-based schemes and we saw an example of this in **Chapter 3: {ref}`03:How-to-package-a-Python`** where we assigned our `pypkgs` package an intial version number of 0.1.0 (the default when using the `poetry` package manager). This three-number versioning scheme (also referred to as semantic versioning) is the most common scheme used and the idea is to incrementally increase the version number in a logical way as you make changes to your package. # # When you do make changes to your package, how do you decide how to increment the version? Will our next version be 0.1.1, 0.2.0, or 1.1.0? Here are the general guidelines for increment package version: # # - Patch release (0.1.`X+1`): patches are typically small changes to your package that do not add any significant new features, for example, a small bug fix or documentation change that do not change backward compatibility (the compatibility of your package with previous versions of itself). It's fine to have so many patch releases that you need to use two (e.g., 0.1.10) or even three (e.g., 0.1.127) digits! # - Minor release (0.`X+1`.0): a minor release may include bug fixes, new package features and changes in backward compatibility. # - Major release (`X+1`.1.0): used when you make major changes that are not backward compatible and are likely to affect many users. Typically, when you come to versioning from 0.x.y to 1.0.0, this indicates that your package is feature-complete with a stable API. # # Read more about semantic versioning [here](https://semver.org/). Note that there are many variations of semantic versioning. For example, often software packages will include alpha/beta/candidate release versions (e.g., 1.1.0-alpha.0) or development versions (e.g., 1.0.dev1). 
[PEP 440](https://www.python.org/dev/peps/pep-0440/#examples-of-compliant-version-schemes) contains examples of all the Python-compliant version identifier schemes. We'll show how to increment your package version with `poetry` later in this chapter. # ## Backward compatibility and deprecating package functionality # As discussed above, minor and major version releases often come with backward compatible changes which will affect your package's user base. The impact and importance of backward compatibility is directly proportional to the number of people using your package. That's not to say that you should avoid backward compatible changes - there are good reasons for making these changes, such as improving software design mistakes, improving functionality, or making code simpler and easier to use. # # If you do need to make a backward incompatible change, it might be best to implement that change gradually, by providing adequate warning and advice to your package's user base through deprecation warnings. # # # For example, we can add a deprecation warning to our code quite easily by using the [`warnings` module](https://docs.python.org/3/library/warnings.html) in the Python standard library. If you've been following along with the `pypkgs` package we've been developing in this book, we could add a deprecation warning to our `catbind()` function by simpling importing the `warnings` module and adding a `FutureWarning` in our code: # # ```python # import pandas as pd # import warnings # # # def catbind(a, b): # """ # Concatenates two pandas categoricals. # # ... 
# """ # # warnings.warn("This function will be deprecated in 1.0.0.", FutureWarning) # # if not all(isinstance(x, pd.Categorical) for x in (a, b)): # raise TypeError("Inputs should be of type 'Pandas categorical'.") # # concatenated = pd.concat([pd.Series(a.astype("str")), pd.Series(b.astype("str"))]) # return pd.Categorical(concatenated) # ``` # # If we were to run our code now, we would see the `FutureWarning` printed to our output. If you've used any larger Python libraries before (such as `NumPy`, `Pandas` or `scikit-learn`) you probably have seen these warnings before! On that note, these large, established Python libraries offer great resources for learning how to properly manage your own package - don't be afraid to check out their source code and history on GitHub. # # ```python # >>> from pypkgs import pypkgs # >>> import pandas as pd # >>> a = pd.Categorical(["character", "hits", "your", "eyeballs"]) # >>> b = pd.Categorical(["but", "integer", "where it", "counts"]) # >>> pypkgs.catbind(a, b) # # pypkgs.py:33: FutureWarning: This function will be deprecated in version 1.0.0. # [character, hits, your, eyeballs, but, integer, where it, counts] # Categories (8, object): [but, character, counts, # eyeballs, hits, integer, where it, your] # ``` # # A few other things to think about when making backward compatability changes: # # - If you're changing a function significantly, consider keeping both the legacy (with a deprecation warning) and new version of the function for a few versions to help users make a smoother transition to using the new function. # - If you're deprecating a lot of code, consider doing it in small increments over mutliple releases. # - If your backward incompatible change is a result of one of your package's dependencies changing, it is often better to warn your users that they require a newer version of a dependency rather than immediately making it a required dependency (which might break a users' other code). # - Documentation is key! 
Don't be afraid to be verbose about documenting backward incompatible changes in your package documentation, remote repository, email list, etc. # ## Releasing your package # When you're ready to release a new version of your package, there's a few key tasks to take care of as described in the sections below. # ### Increment package version # If this is the first time you're releasing your package you can skip this step - it only applies for when you're ready to update your pakcage. You'll need to bump your package's version in its metadata (and potentially elsewhere). In our current `pypkgs` package setup, which was created with the [UBC-MDS-Cookiecutter](https://github.com/UBC-MDS/cookiecutter-ubc-mds) and [`poetry`](https://python-poetry.org/), there are three places we need to change our package version: # # **1. `pyproject.toml`** # # The head of our `pyproject.toml` file currently looks like this: # # ```toml # [tool.poetry] # name = "pypkgs" # version = "0.1.0" # description = "Python package that eases the pain of concatenating Pandas categoricals!" # authors = ["<NAME> <<EMAIL>>"] # license = "MIT" # readme = "README.md" # ``` # # Say we've made a bug fix to our package and want to make a patch release (versioning from 0.1.0 to 0.1.1). `poetry` provides a simple command to help us do this: # # ```{prompt} bash # poetry version patch # ``` # # ```console # Bumping version from 0.1.0 to 0.1.1 # ``` # # ```{tip} # Here we've used the syntax `patch` to do a patch release, but `poetry` offers many [different kinds of version bumping](https://python-poetry.org/docs/cli/#version). # ``` # # The head of our `pyproject.toml` file now looks like this: # # ```toml # [tool.poetry] # name = "pypkgs" # version = "0.1.1" # description = "Python package that eases the pain of concatenating Pandas categoricals!" # authors = ["<NAME> <<EMAIL>>"] # license = "MIT" # readme = "README.md" # ``` # **2. 
`pypkgs/__init__.py`** # # The version of our package is also specified in `pypkgs/__init__.py`. So we need to go into the file and change it there. It might seem a bit inefficient that `poetry` doesn't update the package version in `__init__.py`. It is possible to automate the version incrementing through `poetry` using a small hack which is discussed in [this issue thread](https://github.com/python-poetry/poetry/pull/2366) in the `poetry` GitHub repository, or you could simply remove the package version from `__init__.py` (not recommended). I personally don't mind manually changing the package version in this file as it provides me with a sanity check to make sure I'm versioning my package as I intend to. # # **3. `tests/test_pypkgs.py`** # # Our test file contains a test to ensure that our package version is up to date: # # ```python # def test_version(self): # assert __version__ == '0.1.0' # ``` # # We need to update this version number to '0.1.1'. This test is not necessary, but it's good practice to include it as a check to make sure that you're using the correct version of your package. # ### Test your new package version # It's important to run all the necessary tests and checks on your newly versioned package before releasing it. In our case, we need to check that our package is still passing all our tests: # # ```{prompt} bash # poetry run pytest # ``` # # ```console # ============================= test session starts ============================== # platform darwin -- Python 3.7.6, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 # rootdir: /Users/tbeuzen/GitHub/py-pkgs/pypkgs # collected 3 items # # tests/test_pypkgs.py .. 
[100%] # # ============================== 3 passed in 0.71s =============================== # ``` # # And that our documentation is rendering correctly: # # ```{prompt} bash # # cd docs # poetry run make html # ``` # # However, your package (or other open-source packages) might require more checks than this, for example to determine that your code conforms to a particular code style, contains appropriate documentation, can be built on different operating systems and versions of Python, etc. # # ```{tip} # In the next chapter, we'll explore how to automate this checking and testing procedure with continuous integration. # ``` # ### Release package # Once your package has passed all of your pre-release checks and tests you're ready to release it! In our case, we're interested in releasing our new package version on PyPI. It's good practice to release your package on [testPyPI](https://test.pypi.org/) first and to test that you can release and install the package as expected, before releasing on PyPI. As we've seen in previous sections of this book, `poetry` has a command called `publish` which we can use to do this, however the default behaviour is to publish to PyPI. 
So we need to add testPyPI to the list of repositories `poetry` knows about via: # # ```{prompt} bash # poetry config repositories.test-pypi https://test.pypi.org/legacy/ # ``` # # Before we send our package to testPyPI, we first need to build it to source and wheel distributions (the format that PyPI distributes and something we learned about in the **Chapter 4: {ref}`04:Package-structure-and-state`**) using `poetry build`: # # ```{prompt} bash # poetry build # ``` # # Finally, we can use `poetry publish` to publish to testPyPI (you will be prompted for your testPyPI username and password, sign up for one if you have not already done so): # # ```{prompt} bash # poetry publish -r test-pypi # ``` # # Now you should be able to visit your package on testPyPI (e.g., <https://test.pypi.org/project/pypkgs/>) and download it from there using `pip` via: # # ```{prompt} bash # pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple pypkgs # ``` # # ```{note} # By default `pip install` will search PyPI for the named package. However, we want to search testPyPI because that is where we uploaded our package. The argument `--index-url` points `pip` to the testPyPI index. However, our package `pypkgs` depends on `pandas` which can't be found on testPyPI (it is hosted on PyPI). So, we need to use the `--extra-index-url` argument to also point `pip` to PyPI so that it can pull any necessary dependencies of `pypkgs` from there. # ``` # # If you're happy with how your package is working, you can go ahead and publish to PyPI: # # ```{prompt} bash # poetry publish # ``` # # ```{note} # In **Chapter 8: {ref}`08:Continuous-integration-and-deployment`** we'll see how we can automate the building and publishing of package releases to testPyPI and PyPI. # ``` # ### Document your release # Once you've released a new version of your package it's good practice to document what's happened. 
Firstly, you should document what changed in this new release in a file in your local and remote repository. This file is typically called something like `CHANGELOG`, `NEWS`, or `HISTORY` and provides a summary of the changes in each version of your package. For example: # # ```md # # Changelog # # All notable changes to this project will be documented in this file. # # ## [0.1.1] - 2020-07-23 # # ### Added # - More documentation to pypkgs.catbind() function # - ... # # ### Removed # - ... # # ### Changed # - ... # # ## [0.1.0] - 2020-07-21 # # ... # ``` # # Secondly, you should [tag a release](https://docs.github.com/en/github/administering-a-repository/managing-releases-in-a-repository) on GitHub (or whatever remote repository you are using). The tag version is typically the letter "v" followed by the package version, e.g., `v0.1.1`, and the description should be a copy-paste of what was included in the change log. # # ```{attention} # You should be regularly pushing your work to your remote repository, at least at the end of every coding session! # ```
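Semantic version numbers like the ones in the change log above follow the `major.minor.patch` pattern. As a quick illustration of how a bump works (a minimal sketch — `bump_version` is a hypothetical helper, not part of `poetry`, which provides this via `poetry version patch|minor|major`):

```python
def bump_version(version, part):
    """Return `version` ("major.minor.patch") with the given part
    incremented and all lower parts reset to zero."""
    major, minor, patch = (int(p) for p in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part!r}")

print(bump_version("0.1.0", "patch"))  # 0.1.1
print(bump_version("0.1.1", "minor"))  # 0.2.0
```

A patch bump signals bug fixes only, a minor bump backwards-compatible features, and a major bump breaking changes.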
py-pkgs/07-releasing-versioning.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pandas as pd from sklearn import datasets from sklearn import preprocessing import pymc3 as pm # + # loading the dataset estate = pd.read_excel('../../Lab1/Heenal/Data/Real estate valuation data set.xlsx') # - estate.shape # + # Separating the data and the target variable # Retaining all the features X = estate.drop(['Y house price of unit area','No'], axis = 1) Y = estate['Y house price of unit area'] # - X.columns data_xx = X[['X4 number of convenience stores']].values data_y = Y.values # + with pm.Model() as model: # Define priors sigma = pm.HalfCauchy('sigma', beta=10, testval=1.) intercept = pm.Normal('Intercept', 0, sd=20) x_coeff = pm.Normal('x', 0, sd=20) # Define likelihood likelihood = pm.Normal('y', mu=intercept + x_coeff * data_xx, sd=sigma, observed=data_y) # Inference! trace = pm.sample(1000, chains=1) pm.traceplot(trace) plt.show() # -
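Beyond the trace plot, the posterior draws in `trace` are usually summarized with a point estimate and a credible interval. A minimal NumPy sketch of that computation, using synthetic draws as a stand-in for an array such as `trace['x']`:

```python
import numpy as np

# Sketch: posterior mean and a 94% equal-tailed credible interval from
# an array of posterior draws. `draws` stands in for e.g. trace['x'];
# synthetic normal samples keep the example self-contained.
rng = np.random.default_rng(0)
draws = rng.normal(loc=2.5, scale=0.3, size=1000)

lower, upper = np.percentile(draws, [3, 97])
print(f"posterior mean: {draws.mean():.2f}")
print(f"94% credible interval: [{lower:.2f}, {upper:.2f}]")
```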
Labs/Lab2/Heenal/MLE.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CS 20 : TensorFlow for Deep Learning Research # ## Lecture 03 : Linear and Logistic Regression # ### Logistic Regression with tf.data # ### Setup # + import os, sys import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf # %matplotlib inline print(tf.__version__) # - # ### Load and Pre-process data (x_train, y_train), (x_tst, y_tst) = tf.keras.datasets.mnist.load_data() x_train = (x_train / 255) x_train = x_train.reshape(-1, 784) x_tst = (x_tst / 255) x_tst = x_tst.reshape(-1, 784) # + tr_indices = np.random.choice(range(x_train.shape[0]), size = 55000, replace = False) x_tr = x_train[tr_indices] y_tr = y_train[tr_indices] x_val = np.delete(arr = x_train, obj = tr_indices, axis = 0) y_val = np.delete(arr = y_train, obj = tr_indices, axis = 0) print(x_tr.shape, y_tr.shape) print(x_val.shape, y_val.shape) # - # ### Define the graph of Softmax Classifier # hyper-par setting epochs = 30 batch_size = 64 # + # for train tr_dataset = tf.data.Dataset.from_tensor_slices((x_tr, y_tr)) tr_dataset = tr_dataset.shuffle(buffer_size = 10000) tr_dataset = tr_dataset.batch(batch_size = batch_size) tr_iterator = tr_dataset.make_initializable_iterator() print(tr_dataset) # for validation val_dataset = tf.data.Dataset.from_tensor_slices((x_val,y_val)) val_dataset = val_dataset.batch(batch_size = batch_size) val_iterator = val_dataset.make_initializable_iterator() print(val_dataset) # - # Define Iterator handle = tf.placeholder(dtype = tf.string) iterator = tf.data.Iterator.from_string_handle(string_handle = handle, output_types = tr_iterator.output_types) X, Y = iterator.get_next() X = tf.cast(X, dtype = tf.float32) Y = tf.cast(Y, dtype = tf.int32) # + # create weight and bias, initialized to 0 w = tf.get_variable(name = 
'weights', shape = [784, 10], dtype = tf.float32, initializer = tf.contrib.layers.xavier_initializer()) b = tf.get_variable(name = 'bias', shape = [10], dtype = tf.float32, initializer = tf.zeros_initializer()) # construct model score = tf.matmul(X, w) + b # use the cross entropy as loss function ce_loss = tf.reduce_mean(tf.losses.sparse_softmax_cross_entropy(labels = Y, logits = score)) ce_loss_summ = tf.summary.scalar(name = 'ce_loss', tensor = ce_loss) # for tensorboard # using gradient descent with learning rate of 0.01 to minimize loss opt = tf.train.GradientDescentOptimizer(learning_rate=.01) training_op = opt.minimize(ce_loss) # - # ### Training train_writer = tf.summary.FileWriter(logdir = '../graphs/lecture03/logreg_tf_data/train', graph = tf.get_default_graph()) val_writer = tf.summary.FileWriter(logdir = '../graphs/lecture03/logreg_tf_data/val', graph = tf.get_default_graph()) # + #epochs = 30 #batch_size = 64 #total_step = int(x_tr.shape[0] / batch_size) sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True)) sess = tf.Session(config = sess_config) sess.run(tf.global_variables_initializer()) tr_handle, val_handle = sess.run(fetches = [tr_iterator.string_handle(), val_iterator.string_handle()]) tr_loss_hist = [] val_loss_hist = [] for epoch in range(epochs): avg_tr_loss = 0 avg_val_loss = 0 tr_step = 0 val_step = 0 # for mini-batch training sess.run([tr_iterator.initializer]) try: while True: _, tr_loss,tr_loss_summ = sess.run(fetches = [training_op, ce_loss, ce_loss_summ], feed_dict = {handle : tr_handle}) avg_tr_loss += tr_loss tr_step += 1 except tf.errors.OutOfRangeError: pass # for validation sess.run([val_iterator.initializer]) try: while True: val_loss, val_loss_summ = sess.run(fetches = [ce_loss, ce_loss_summ], feed_dict = {handle : val_handle}) avg_val_loss += val_loss val_step += 1 except tf.errors.OutOfRangeError: pass train_writer.add_summary(tr_loss_summ, global_step = epoch) val_writer.add_summary(val_loss_summ, 
global_step = epoch) avg_tr_loss /= tr_step avg_val_loss /= val_step tr_loss_hist.append(avg_tr_loss) val_loss_hist.append(avg_val_loss) if epoch % 5 == 0: print('epoch : {:3}, tr_loss : {:.3f}, val_loss : {:.3f}'.format(epoch, avg_tr_loss, avg_val_loss)) train_writer.close() val_writer.close() # - # ### Visualization plt.plot(tr_loss_hist, label = 'train') plt.plot(val_loss_hist, label = 'validation') plt.legend() yhat = np.argmax(sess.run(score, feed_dict = {X : x_tst}), axis = 1) print('acc : {:.2%}'.format(np.mean(yhat == y_tst)))
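The `tf.losses.sparse_softmax_cross_entropy` call used above combines a softmax over the class scores with the negative log-probability of the integer labels. A small NumPy sketch of that computation (illustrative only, not TensorFlow's implementation):

```python
import numpy as np

def sparse_softmax_cross_entropy(logits, labels):
    """NumPy sketch of sparse softmax cross entropy: softmax over the
    class scores, then the mean negative log-probability of each true
    integer label."""
    # Shift logits for numerical stability before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Two examples, three classes: uniform logits give a loss of log(3).
logits = np.zeros((2, 3))
labels = np.array([0, 2])
print(sparse_softmax_cross_entropy(logits, labels))  # ≈ 1.0986
```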
Lec03_Linear and Logistic Regression/Lec03_Logistic Regression with tf.data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # "Forecasting with Machine Learning models" # > "mlforecast makes forecasting with machine learning fast & easy" # # - toc: true # - branch: main # - badges: true # - comments: true # - author: <NAME> # - categories: [machine learning, forecasting] # - image: images/nixtla_logo.png # ## Introduction # We at Nixtla are trying to make time series forecasting more accessible to everyone. In this post I'll talk about using machine learning models in forecasting tasks. I'll use an example to show what the main challenges are and then I'll introduce [mlforecast](https://github.com/Nixtla/mlforecast), a framework that facilitates using machine learning models in forecasting. **mlforecast** does feature engineering and takes care of the updates for you; the user only has to provide a regressor that follows the scikit-learn API (implements fit and predict) and specify the features that she wants to use. These features can be lags, lag-based transformations and date features. (For further feature creation or an automated forecasting pipeline check [fasttsfeatures](https://github.com/Nixtla/fasttsfeatures) and [autotimeseries](https://github.com/Nixtla/autotimeseries).) # ## Motivation # For many years classical methods like ARIMA and ETS dominated the forecasting field. One of the reasons was that most of the use cases involved forecasting low-frequency series with monthly, quarterly or yearly granularity. Furthermore, there weren't many time-series datasets, so fitting a single model to each one and getting forecasts from them was straightforward. # # However, in recent years, the need to forecast bigger datasets at higher frequencies has risen. Bigger and higher-frequency time series pose a challenge for classical forecasting methods. 
Those methods aren't meant to model many time series together; their implementation is suboptimal and slow in that setting (you have to train many models), and besides, there could be common or shared patterns between the series that could be learned by modeling them together. # # To address this problem, there have been various efforts in proposing different methods that can train a single model on many time series. Some fascinating deep learning architectures have been designed that can accurately forecast many time series, like ESRNN, DeepAR and NBEATS, among others. (Check [nixtlats](https://github.com/Nixtla/nixtlats) and [Replicating ESRNN results](https://nixtla.github.io/blog/deep%20learning/forecasting/m4/2021/06/25/esrnn-i.html) for our WIP.) # # Traditional machine learning models like gradient boosted trees have been used as well and have shown that they can achieve very good performance. However, using these models with lag-based features isn't very straightforward because you have to update your features at every timestep in order to compute the predictions. Additionally, depending on your forecasting horizon and the lags that you use, at some point you run out of real values of your series to update your features, so you have to do something to fill those gaps. # One possible approach is using your predictions as the values for the series and updating your features with them. This is exactly what **mlforecast** does for you. # ## Example # In the following section I'll show a very simple example with a single series to highlight the difficulties in using machine learning models in forecasting tasks. This will later motivate the use of *mlforecast*, a library that makes the whole process easier and faster. 
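The predict-update loop described in the motivation can be sketched in a few lines of plain Python. This is only an illustration of the recursive strategy, not mlforecast's actual implementation; `MeanModel` is a made-up regressor with a scikit-learn-style `predict`:

```python
import numpy as np

def recursive_forecast(model, history, lags, horizon):
    """Sketch of the recursive strategy: at each step, build the lag
    features from the (extended) series, predict one value, and append
    that prediction so later steps can use it as a lag."""
    series = list(history)
    for _ in range(horizon):
        features = np.array([[series[-lag] for lag in lags]])
        series.append(float(model.predict(features)[0]))
    return np.array(series[len(history):])

# Tiny usage example with a model that just averages its lag features.
class MeanModel:
    def predict(self, X):
        return X.mean(axis=1)

forecast = recursive_forecast(MeanModel(), history=[1.0, 2.0, 3.0, 4.0],
                              lags=[1, 2], horizon=3)
print(forecast)  # [3.5, 3.75, 3.625]
```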
# ### Libraries # + tags=[] import matplotlib.pyplot as plt import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from sklearn.pipeline import make_pipeline from sklearn.preprocessing import OneHotEncoder # - # ### Data # + tags=[] rng = np.random.RandomState(90) serie_length = 7 * 20 dates = pd.date_range('2000-01-01', freq='D', periods=serie_length, name='ds') y = dates.dayofweek + rng.randint(-1, 2, size=dates.size) data = pd.DataFrame({'y': y.astype(np.float64)}, index=dates) data.plot(marker='.', figsize=(16, 6)); # - # Our data is daily with weekly seasonality and, as you can see from its creation, it is basically just dayofweek + Uniform({-1, 0, 1}). # ### Training # Let's say we want forecasts for the next 14 days. The first step is deciding which model and features to use, so we'll create a validation set containing the last 14 days of our data. # + tags=[] valid_horizon = 14 train = data.head(-valid_horizon).copy() y_valid = data.tail(valid_horizon)['y'] # - # Now we'll try to find which lags are the most important to use as features. To do this we'll compute the autocorrelation of the series values with respect to each lag. # + max_lags = 28 autocorrelations = np.empty((max_lags, 2)) autocorrelations[:, 0] = np.arange(max_lags) + 1 for lag in range(max_lags): autocorrelations[lag, 1] = np.corrcoef(y[lag + 1:], y[:-(lag + 1)])[0, 1] fig, ax = plt.subplots(figsize=(10, 6)) ax.bar(autocorrelations[:, 0], autocorrelations[:, 1]) ax.set( xlabel='lag', ylabel='Correlation coefficient', title='Autocorrelation by lag', xticks=[lag + 1 for lag in range(0, max_lags, 2)], ); # - # We can see that the most important lags are multiples of 7. As a starting point we'll try lag 7 and lag 14. # + tags=[] train['lag-7'] = train['y'].shift(7) train['lag-14'] = train['y'].shift(14) # - # Computing lag values leaves some rows with nulls. 
# + tags=[] train.isnull().sum() # - # We'll drop these before training. # + tags=[] train_without_nulls = train.dropna() X_train = train_without_nulls.drop(columns='y') y_train = train_without_nulls['y'] # - # For simplicity's sake, we'll train a linear regression without intercept. Since the best model would be taking the average for each day of the week, we expect to get coefficients that are close to 0.5. # + tags=[] lr = LinearRegression(fit_intercept=False).fit(X_train, y_train) lr.coef_ # - # This model computes $0.51 \cdot lag_7 + 0.45 \cdot lag_{14}$. # ### Forecasting # Great. We have our trained model. How can we compute the forecast for the next 14 days? Machine learning models take a feature matrix *X* and output the predicted values *y*. So we need to create the feature matrix *X* for the next 14 days and give it to our model. # # If we want to get the *lag-7* for the next day, following the training set, we can just get the value in the 7th position starting from the end. The *lag-7* two days after the end of the training set would be the value in the 6th position starting from the end and so on. Similarly for the *lag-14*. # + tags=[] next_lags_7 = y_train.tail(7).values next_lags_7 # + tags=[] next_lags_14 = y_train.tail(14).values next_lags_14 # - # As you may have noticed we can only get 7 of the *lag-7* values from our history and we can get all 14 values for the *lag-14*. With this information we can only forecast the next 7 days, so we'll only take the first 7 values of the *lag-14*. # + tags=[] X_valid1 = pd.DataFrame({ 'lag-7': next_lags_7, 'lag-14': next_lags_14[:7], }) X_valid1 # - # With these features we can compute the forecasts for the next 7 days. # + tags=[] forecasts_7 = lr.predict(X_valid1) forecasts_7 # - # These values can be interpreted as the values of our series for the next 7 days following the last training date. 
In order to compute the forecasts following that date we can use these values as if they were the values of our series and use them as *lag-7* for the following periods. # # In other words, we can fill the rest of our features matrix with these values and the real values of the *lag-14*. # + tags=[] X_valid2 = pd.DataFrame({ 'lag-7': forecasts_7, 'lag-14': next_lags_14[-7:], }) X_valid2 # - # As you can see we're still using the real values of the *lag-14* and we've plugged in our predictions as the values for the *lag-7*. We can now use these features to predict the remaining 7 days. # + tags=[] forecasts_7_14 = lr.predict(X_valid2) y_pred = np.hstack([forecasts_7, forecasts_7_14]) y_pred # - # And now we have our forecasts for the next 14 days! This wasn't that painful but it wasn't pretty or easy either. And we just used lags which are the easiest feature we can have. # # What if we had used *lag-1*? We would have needed to do this predict-update step 14 times! # # And what if we had more elaborate features like the rolling mean over some lag? As you can imagine it can get quite messy and is very error prone. # ## mlforecast # With these problems in mind we created [mlforecast](https://github.com/Nixtla/mlforecast), which is a framework to help you forecast time series using machine learning models. It takes care of all these messy details for you. You just need to give it a model and define which features you want to use and let *mlforecast* do the rest. # # **mlforecast** is available in [PyPI](https://pypi.org/project/mlforecast/) (`pip install mlforecast`) as well as [conda-forge](https://anaconda.org/conda-forge/mlforecast) (`conda install -c conda-forge mlforecast`) # The previously described problem can be solved using **mlforecast** with the following code. # # First we have to set up our data in the required format. 
# + tags=[] train_mlfcst = train.reset_index()[['ds', 'y']] train_mlfcst.index = pd.Index(np.repeat(0, train.shape[0]), name='unique_id') train_mlfcst.head() # - # This is the required input format. # * an index named **unique_id** that identifies each time series. In this case we only have one, but you can have as many as you want. # * a **ds** column with the dates. # * a **y** column with the values. # # Now we'll import the [TimeSeries](https://nixtla.github.io/mlforecast/core.html#TimeSeries) transformer, where we define the features that we want to use. We'll also import the [Forecast](https://nixtla.github.io/mlforecast/forecast.html#Forecast) class, which will hold our transformer and model and will run the forecasting pipeline for us. # + tags=[] from mlforecast.core import TimeSeries from mlforecast.forecast import Forecast # - # We initialize our transformer specifying the lags that we want to use. # + tags=[] ts = TimeSeries(lags=[7, 14]) ts # - # As you can see this transformer will use *lag-7* and *lag-14* as features. Now we define our model. # + tags=[] model = LinearRegression(fit_intercept=False) # - # We create a [Forecast](https://nixtla.github.io/mlforecast/forecast.html) object with the model and the time series transformer and fit it to our data. # + tags=[] fcst = Forecast(model, ts) fcst.fit(train_mlfcst) # - # And now we just call predict with the forecast horizon that we want. # + tags=[] y_pred_mlfcst = fcst.predict(14) y_pred_mlfcst # - # This was a lot easier, and internally it did the same as we did before. Let's verify real quick. # # Check that we got the same predictions: # + tags=[] np.testing.assert_equal(y_pred, y_pred_mlfcst['y_pred'].values) # - # Check that we got the same model: # + tags=[] np.testing.assert_equal(lr.coef_, fcst.model.coef_) # - # ### Experiments made easier # Having this high-level abstraction allows us to focus on defining the best features and model instead of worrying about implementation details. 
For example, we can try out different lags very easily by writing a simple function that leverages *mlforecast*: # + tags=[] def evaluate_lags(lags): ts = TimeSeries(lags=lags) model = LinearRegression(fit_intercept=False) fcst = Forecast(model, ts) fcst.fit(train_mlfcst) print(*[f'lag-{lag:<2} coef: {fcst.model.coef_[i]:.2f}' for i, lag in enumerate(lags)], sep='\n') y_pred = fcst.predict(14) mse = mean_squared_error(y_valid, y_pred['y_pred']) print(f'MSE: {mse:.2f}') # + tags=[] evaluate_lags([7, 14]) # + tags=[] evaluate_lags([7, 14, 21]) # + tags=[] evaluate_lags([7, 14, 21, 28]) # - # ### Backtesting # In the previous examples we manually split our data. The **Forecast** object also has a [backtest](https://nixtla.github.io/mlforecast/forecast.html#Backtesting) method that can do that for us. # # We'll first get all of our data into the required format. # + tags=[] data_mlfcst = data.reset_index() data_mlfcst.index = pd.Index(np.repeat(0, data_mlfcst.shape[0]), name='unique_id') data_mlfcst # - # Now we instantiate a `Forecast` object as we did previously and call the `backtest` method instead. # + tags=[] backtest_fcst = Forecast( LinearRegression(fit_intercept=False), TimeSeries(lags=[7, 14]) ) backtest_results = backtest_fcst.backtest(data_mlfcst, n_windows=2, window_size=14) # - # This returns a generator with the results for each window. # + tags=[] type(backtest_results) # - result1 = next(backtest_results) result1 # + tags=[] result2 = next(backtest_results) result2 # - # `result2` here is the same as the evaluation we did manually. # + tags=[] np.testing.assert_equal(result2['y_pred'].values, y_pred_mlfcst['y_pred'].values) # - # We can define a validation scheme for different lags using several windows. 
# + tags=[] def backtest(ts, model=None): if model is None: model = LinearRegression(fit_intercept=False) fcst = Forecast(model, ts) backtest_results = fcst.backtest(data_mlfcst, n_windows=4, window_size=14) mses = [] results = [] for i, result in enumerate(backtest_results): mses.append(mean_squared_error(result['y'], result['y_pred'])) results.append(result.rename(columns={'y_pred': f'split-{i}'})) pd.concat(results).set_index('ds').plot(marker='.', figsize=(16, 6)) print('Splits MSE:', np.round(mses, 2)) print(f'Mean MSE: {np.mean(mses):.2f}') # + tags=[] backtest(TimeSeries(lags=[7, 14])) # + tags=[] backtest(TimeSeries(lags=[7, 14, 21])) # + tags=[] backtest(TimeSeries(lags=[7, 14, 21, 28])) # - # ### Lag transformations # We can specify transformations on the lags as well as just lags. The [window_ops](https://github.com/jose-moralez/window_ops) library has some implementations of different window functions. You can also define your own transformations. # # Let's try a seasonal rolling mean: this takes the average over the last `n` seasons, which in this case would be the average of the last `n` Mondays, Tuesdays, etc. Computing the updates for this feature would probably be a bit annoying; however, using this framework we can just pass it to *lag_transforms*. If the transformation takes additional arguments (beyond the values of the series) we specify a tuple like `(transform_function, arg1, arg2)`, which in this case are `season_length` and `window_size`. # + tags=[] from window_ops.rolling import seasonal_rolling_mean # + tags=[] help(seasonal_rolling_mean) # - # *lag_transforms* takes a dictionary where the keys are the lags that we want to apply the transformations to and the values are the transformations themselves. 
# + tags=[] ts = TimeSeries( lag_transforms={ 7: [(seasonal_rolling_mean, 7, 8)] } ) backtest(ts) # - # ### Date features # You can also specify date features to be computed, which are attributes of the **ds** column and are updated in each timestep as well. In this example the best model would be taking the average over each day of the week, which can be accomplished by doing one-hot encoding on the day of the week column and fitting a linear model. # + tags=[] ts = TimeSeries(date_features=['dayofweek']) model = make_pipeline( OneHotEncoder(drop='first'), LinearRegression(fit_intercept=False) ) backtest(ts, model) # - # ## TL;DR # In this post we presented **mlforecast**, a framework that makes the use of machine learning models in forecasting tasks fast and easy. It allows you to focus on the model and features instead of implementation details. With *mlforecast* you can run experiments in an easier way, and it has built-in backtesting functionality to help you find the best performing model. # # Although this example contained only a single time series, mlforecast can handle thousands of them and is very efficient in both time and memory. # ## Next steps # **mlforecast** has more features like [distributed training](https://nixtla.github.io/mlforecast/distributed.forecast.html#Example) and a [CLI](https://nixtla.github.io/mlforecast/cli.html#Example). If you're interested you can learn more in the following resources: # * GitHub repo: https://github.com/Nixtla/mlforecast # * Documentation: https://nixtla.github.io/mlforecast/ # * Example using mlforecast in the M5 competition: https://www.kaggle.com/lemuz90/m5-mlforecast
_notebooks/2021-06-10-Intro-mlforecast.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Workflow # + import warnings import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt warnings.filterwarnings('ignore') random_state = 42 # - # ## Load the dataset # # Source: [https://www.kaggle.com/c/porto-seguro-safe-driver-prediction](https://www.kaggle.com/c/porto-seguro-safe-driver-prediction) df = pd.read_csv('../datasets/safe-driver-prediction.csv') # # Overview df.head() # # Extract metadata # + data = [] for column in df.columns: # Defining the role if column == 'target': role = 'target' elif column == 'id': role = 'id' else: role = 'input' # Defining the level if 'bin' in column or column == 'target': level = 'binary' elif 'cat' in column or column == 'id': level = 'nominal' elif df[column].dtype == np.dtype('float64'): level = 'interval' elif df[column].dtype == np.dtype('int64'): level = 'ordinal' # Initialize keep to True for all variables except for id keep = True if column == 'id': keep = False # Defining the data type dtype = df[column].dtype # Creating a Dict that contains all the metadata for the variable column_dict = { 'column_name': column, 'role': role, 'level': level, 'keep': keep, 'dtype': dtype } data.append(column_dict) df_meta = pd.DataFrame(data, columns=['column_name', 'role', 'level', 'keep', 'dtype']) df_meta.set_index('column_name', inplace=True) # - # # Pre-processing # + [markdown] cell_style="center" # ## Missing values # # * Categorical attributes # * **ps_car_03_cat** & **ps_car_05_cat** contain more than 50% missing values ==> drop them # * For the other categorical attributes, -1 can be treated as a category of its own # * **ps_reg_03** (continuous): missing values are replaced with "mean" # * **ps_car_11** (ordinal): missing values are replaced with "most_frequent" # 
* **ps_car_12** (continuous): missing values are replaced with "mean" # * **ps_car_14** (continuous): missing values are replaced with "mean" # + cell_style="center" drop_list = ['ps_car_03_cat', 'ps_car_05_cat'] df.drop(drop_list, inplace=True, axis=1) df_meta.loc[drop_list, 'keep'] = False # + cell_style="center" from sklearn.preprocessing import Imputer mean_imp = Imputer(missing_values=-1, strategy='mean', axis=0) mode_imp = Imputer(missing_values=-1, strategy='most_frequent', axis=0) df['ps_reg_03'] = mean_imp.fit_transform(df[['ps_reg_03']]) df['ps_car_12'] = mean_imp.fit_transform(df[['ps_car_12']]) df['ps_car_14'] = mean_imp.fit_transform(df[['ps_car_14']]) df['ps_car_11'] = mode_imp.fit_transform(df[['ps_car_11']]) # - df.isnull().sum().sum() # ## Resampling # # [Resampling Strategies](https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets) # # As shown in the overview, the share of records with target = 1 is far smaller than the share with target = 0. This can lead to a model that achieves high accuracy but does not deliver good results in practice. Two possible strategies for dealing with this problem are: # # * Oversampling the records with target = 1 # * Undersampling the records with target = 0 # # Since we have a fairly large training set, we can opt for undersampling. 
# + from imblearn.under_sampling import RandomUnderSampler desired_apriori = 0.30 nb_0 = len(df.loc[df.target == 0].index) nb_1 = len(df.loc[df.target == 1].index) undersampling_rate = ((1 - desired_apriori) * nb_1) / (nb_0 * desired_apriori) undersampled_nb_0 = int(undersampling_rate * nb_0) df_X = df.drop('target', axis=1) df_y = df['target'] cc = RandomUnderSampler(ratio={0: undersampled_nb_0}) X_cc, y_cc = cc.fit_sample(df_X, df_y) df_X = pd.DataFrame(X_cc, columns=df_X.columns) df_y = pd.DataFrame(y_cc, columns=['target']) df = df_X.join(df_y) print('Records with target = 0 before undersampling: {}'.format(nb_0)) print('Records with target = 0 after undersampling: {}'.format(len(df.loc[df.target == 0].index))) # - # ## Feature Extraction # ### Create dummy attributes # + from sklearn.preprocessing import LabelBinarizer query = df_meta[(df_meta.level == 'nominal') & (df_meta.keep)].index lb = LabelBinarizer() for column in query.values: if len(df[column].unique()) <= 2: continue df_bin = pd.DataFrame(lb.fit_transform(df[column].values), columns=['{}_{}'.format(column, c) for c in lb.classes_]) df = pd.concat([df, df_bin], axis=1) # + data = [] for org_column in query.values: for lb_column in df.columns[df.columns.str.startswith(org_column+'_')]: data.append({ 'column_name': lb_column, 'role': 'input', 'level': 'binary', 'keep': True, 'dtype': df[lb_column].dtype }) df_meta.loc[org_column, 'keep'] = False df_meta_tmp = pd.DataFrame(data, columns=['column_name', 'role', 'level', 'keep', 'dtype']) df_meta_tmp.set_index('column_name', inplace=True) df_meta = df_meta.append(df_meta_tmp) # - # ### Create interaction attributes # + from sklearn.preprocessing import PolynomialFeatures query = df_meta[(df_meta.level == 'interval') & (df_meta.keep)].index poly = PolynomialFeatures(degree=2, interaction_only=False, include_bias=False) interactions = pd.DataFrame(poly.fit_transform(df[query]), columns=poly.get_feature_names(query)) interactions.drop(query, 
axis=1, inplace=True) df = df.join(interactions) # + data = [] for column in interactions.columns: data.append({ 'column_name': column, 'role': 'input', 'level': 'interval', 'keep': True, 'dtype': interactions[column].dtype }) df_meta_tmp = pd.DataFrame(data, columns=['column_name', 'role', 'level', 'keep', 'dtype']) df_meta_tmp.set_index('column_name', inplace=True) df_meta = df_meta.append(df_meta_tmp) # - # ## Feature Selection # ### Remove attributes with little or no variance # + from sklearn.feature_selection import VarianceThreshold selector = VarianceThreshold(threshold=.01) selector.fit(df.drop(['id', 'target'], axis=1)) f = np.vectorize(lambda x : not x) v = df.drop(['id', 'target'], axis=1).columns[f(selector.get_support())] print('{} variables have too low variance.'.format(len(v))) print('These variables are {}'.format(list(v))) # - # ### Scaling # + from sklearn.preprocessing import MinMaxScaler, Normalizer, StandardScaler query = df_meta[((df_meta.level == 'interval') | (df_meta.level == 'ordinal')) & (df_meta.keep)].index df_tmp = df[query].copy() scaler = StandardScaler() df_tmp = pd.DataFrame(scaler.fit_transform(df_tmp), columns=df_tmp.columns) df.drop(df_tmp.columns, axis=1, inplace=True) df = df.join(df_tmp) # - # ### Select attributes using a random forest # + from sklearn.ensemble import RandomForestClassifier from sklearn.feature_selection import SelectFromModel query = df_meta[(df_meta.keep)].index # df_X = df.drop(['id', 'target'], axis=1) df_X = df[query].drop(['target'], axis=1) df_y = df['target'] clf = RandomForestClassifier() clf = clf.fit(df_X, df_y) model = SelectFromModel(clf, prefit=True) df = pd.concat([df_X.loc[:, model.get_support()], df.loc[:, ['id', 'target']]], axis=1, sort=False) # - # # Split into training and test data (sampling) # + from sklearn.model_selection import train_test_split query = df_meta[(df_meta.keep)].index df_X = df[query].drop(['target'], axis=1) df_y = df['target'] 
X_train, X_test, y_train, y_test = train_test_split(df_X, df_y, test_size=0.33, random_state=random_state) # - # # Train the model # + from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.svm import SVC from sklearn.metrics import accuracy_score from sklearn.model_selection import cross_val_predict, StratifiedKFold clf = DecisionTreeClassifier() y_pred = cross_val_predict(clf, df_X, df_y, cv=StratifiedKFold(2), n_jobs=-1) print('Accuracy: {:.2f}'.format(accuracy_score(df_y, y_pred))) # + from sklearn.metrics import classification_report print(classification_report(df_y, y_pred, target_names=['target = 0', 'target = 1']))
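The undersampling rate used in the resampling step is derived from requiring that, after sampling, the positive class makes up `desired_apriori` of the data. A quick sanity check of that formula with made-up class counts (illustrative numbers, not the real dataset):

```python
# Check of the undersampling-rate formula, with made-up class counts.
desired_apriori = 0.30
nb_0, nb_1 = 9000, 1000  # illustrative counts, not the real dataset

undersampling_rate = ((1 - desired_apriori) * nb_1) / (nb_0 * desired_apriori)
undersampled_nb_0 = int(undersampling_rate * nb_0)

# After undersampling, class 1 should make up desired_apriori of the data.
share_1 = nb_1 / (undersampled_nb_0 + nb_1)
print(undersampled_nb_0, round(share_1, 2))
```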
notebooks/1-workflow/03-workflow.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="_5Zxo_yKLKYy" colab_type="code" colab={} import os import string import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import requests import nltk from nltk import word_tokenize, FreqDist from nltk.corpus import stopwords from PIL import Image sns.set_style('dark') # + id="o553BShfi6Ef" colab_type="code" colab={} import nltk nltk.download('punkt') nltk.download('stopwords') nltk.download('wordnet') # + id="p07Dl-u0LhAk" colab_type="code" colab={} from google.colab import files uploaded = files.upload() # !mkdir -p ~/.kaggle/ && mv kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json # + id="2eHGfCIqLnU9" colab_type="code" colab={} # !kaggle competitions download -c msk-redefining-cancer-treatment # + id="U9gufWM0DZwf" colab_type="code" outputId="4c23c88c-2c49-49db-b472-037b1b71f94d" colab={"base_uri": "https://localhost:8080/", "height": 249} df_train_txt = pd.read_csv('training_text', sep='\|\|', header=None, skiprows=1, names=["ID","Text"]) df_train_var = pd.read_csv('training_variants') df_train = pd.merge(df_train_txt, df_train_var, how='left', on='ID') df_train.head() # + id="Nfc31auSygyd" colab_type="code" outputId="224f94bb-51a2-4ee5-cb9c-a0fdcaeb8535" colab={"base_uri": "https://localhost:8080/", "height": 118} df_train.isnull().sum() # + id="frOAuNbCzUk8" colab_type="code" outputId="b3077971-72ed-4b9d-c22d-1a8b6c533b4e" colab={"base_uri": "https://localhost:8080/", "height": 34} df_train.shape # + id="ooEz9E-qzY-a" colab_type="code" outputId="5c29ec96-8a97-4864-d97a-67c0984cc7e8" colab={"base_uri": "https://localhost:8080/", "height": 34} df_train.dropna(axis=0, how='any', inplace=True) df_train.shape # + id="KFUgPt-1qyLX" colab_type="code" 
outputId="98cf4f1d-f379-44ea-a1d6-b150dcc62e64" colab={"base_uri": "https://localhost:8080/", "height": 54} df_train['Text'][0] # + id="LwcM8WuutRQp" colab_type="code" outputId="abe3967d-42be-4aaf-d09a-00c18ab1d309" colab={"base_uri": "https://localhost:8080/", "height": 460} # Frequency of Classes in dataset plt.figure(figsize=(10,7)) sns.countplot(x="Class", data = df_train) plt.ylabel('Frequency', fontsize=12) plt.xlabel('Class count', fontsize=12) plt.xticks(rotation='vertical') plt.title("Frequency of Classes", fontsize=15) plt.show() # + id="xv3uyQylvwqO" colab_type="code" outputId="b2071b84-4283-4bbb-cf7e-04ce5758d5cb" colab={"base_uri": "https://localhost:8080/", "height": 881} #Plot Top 7 genes in each class fig, axs = plt.subplots(ncols=3, nrows=3, figsize=(15,15)) for i in range(3): for j in range(3): gene_count_grp = df_train[df_train["Class"]==((i*3+j)+1)].groupby('Gene')["ID"].count().reset_index() sorted_gene_group = gene_count_grp.sort_values('ID', ascending=False) sorted_gene_group_top_7 = sorted_gene_group[:7] sns.barplot(x="Gene", y="ID", data=sorted_gene_group_top_7, ax=axs[i][j]) # + id="ErGEpDcQ5_Kx" colab_type="code" outputId="83fbd6d1-22ef-4900-abef-e332a632fed6" colab={"base_uri": "https://localhost:8080/", "height": 195} df_train.loc[:, 'Text_count'] = df_train['Text'].apply(lambda x: len(x.split())) df_train.head() # + id="cSnpniLG61k7" colab_type="code" outputId="cb368478-bdc5-4e82-b447-e8159b579b8b" colab={"base_uri": "https://localhost:8080/", "height": 466} #get distribution of text count for each class plt.figure(figsize=(10,7)) gene_count_gr = df_train.groupby('Gene')['Text_count'].sum().reset_index() sns.violinplot(x='Class', y='Text_count', data= df_train, inner=None) sns.swarmplot(x='Class', y="Text_count", data=df_train, color='w', alpha=0.6) plt.ylabel('Text Count', fontsize=14) plt.xlabel('Class', fontsize=14) plt.title('Text length distribution', fontsize=17) plt.show() # + id="eugCpC-uicYH" colab_type="code" colab={} stop = 
stopwords.words('english')+["mutat","cell","cancer","fig","mutant", "et", "figur", "al", "use"] snowball = nltk.SnowballStemmer('english') WNlemma = nltk.WordNetLemmatizer() def preprocess(toks): toks = [ t.lower() for t in toks if t not in string.punctuation ] toks = [t for t in toks if t not in stop ] toks = [ snowball.stem(t) for t in toks ] toks = [ WNlemma.lemmatize(t) for t in toks ] toks = [t for t in toks if t not in stop ] toks_clean = [ t for t in toks if len(t) >= 2 ] return toks_clean # + id="xEfh4v0PFYpa" colab_type="code" colab={} df_train['Text']=df_train['Text'].apply(lambda x: str(x)) df_train['Tokens'] = df_train['Text'].apply(lambda x: word_tokenize(x)) df_train['Tokens_clean'] = df_train['Tokens'].apply(lambda x: preprocess(x)) df_train['Tokens_clean_text'] = df_train['Tokens_clean'].apply(lambda x: ' '.join(x)) # + id="NHajuHM5jYmT" colab_type="code" outputId="b3481894-a41d-4e3d-b2e7-78ad6f7225a2" colab={"base_uri": "https://localhost:8080/", "height": 2436} from wordcloud import WordCloud import matplotlib.pyplot as plt def word_cloud_class(category): class_words=df_train.Tokens_clean[df_train.Class==category] class_words_list = [ c for l in class_words for c in l ] fd_class_words = FreqDist(class_words_list) mask = np.array(Image.open(requests.get('http://www.clker.com/cliparts/f/1/5/0/1194985571905910902pinkhome2.svg.med.png', stream=True).raw)) wc_grain = WordCloud(background_color="white", mask=mask).generate_from_frequencies(fd_class_words) plt.imshow(wc_grain, interpolation='bilinear') plt.axis("off") plt.show() for i in range(1,10): print(f"Class {i}") word_cloud_class(i) # + id="6pV2QYhDlbOv" colab_type="code" colab={}
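The cleaning pipeline above (lowercase, drop punctuation and stopwords, stem, then drop short tokens) can be sketched without the NLTK downloads. This is a simplified stand-in: the tiny stopword set and the naive plural-stripping rule are illustrative assumptions, not the NLTK Snowball stemmer or WordNet lemmatizer the notebook actually uses.

```python
# Simplified stand-in for the notebook's NLTK preprocessing pipeline.
# The stopword list and the suffix rule below are illustrative assumptions.
import string

STOP = {"the", "of", "in", "a", "an", "and", "is", "are", "to"}

def preprocess_sketch(tokens):
    # lowercase and drop pure-punctuation tokens
    toks = [t.lower() for t in tokens if t not in string.punctuation]
    # drop stopwords
    toks = [t for t in toks if t not in STOP]
    # crude "stemmer": strip a trailing plural 's' from longer words
    toks = [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in toks]
    # keep only tokens of length >= 2, as the notebook's length filter intends
    return [t for t in toks if len(t) >= 2]

print(preprocess_sketch(["The", "mutations", "in", "BRCA1", "are", "pathogenic", "."]))
# -> ['mutation', 'brca1', 'pathogenic']
```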
DataAnalysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import gdal, osr import os import matplotlib.pyplot as plt import numpy as np import pandas as pd #import fiona from matplotlib.colors import ListedColormap # + class RasterProp: def __init__(self, rasterFile, sliceClass=None, slicing = False): self.raster = gdal.Open(rasterFile) self.geotransform = self.raster.GetGeoTransform() self.projRef = self.raster.GetProjectionRef() self.originX = self.geotransform[0] self.originY = self.geotransform[3] self.pixelWidth = self.geotransform[1] self.pixelHeight = self.geotransform[5] if slicing: print('recomputing origin') x_ori_rel , y_ori_rel, xlen, ylen = sliceClass.relevantArea() self.originX, self.originY = pixel2coord(self.geotransform, x_ori_rel, y_ori_rel) def array2raster(array, rasProp,newRasterfn): print('converting array to raster...') cols = array.shape[1] rows = array.shape[0] driver = gdal.GetDriverByName('GTiff') outRaster = driver.Create( newRasterfn, cols, rows, bands=1, eType= gdal.GDT_Float32) outRaster.SetGeoTransform((rasProp.originX, rasProp.pixelWidth, 0, rasProp.originY, 0, rasProp.pixelHeight)) outband = outRaster.GetRasterBand(1) outband.WriteArray(array) outRasterSRS = osr.SpatialReference() outRasterSRS.ImportFromWkt(rasProp.projRef) outRaster.SetProjection(outRasterSRS.ExportToWkt()) outband.FlushCache() # - def raster2array(rasterfn): #print('converting raster to array...') raster = gdal.Open(rasterfn) band = raster.GetRasterBand(1) array = band.ReadAsArray() return array sloCost = raster2array(os.path.abspath('01_Data500/slope.tif')) envCost = raster2array(os.path.abspath('01_Data500/fac_env.tif')) infCost = raster2array(os.path.abspath('01_Data500/fac_inf.tif')) pubCost = raster2array(os.path.abspath('01_Data500/fac_pub.tif')) allCost = (ecoCost+envCost+infCost+pubCost)/4 # # 
Line Length epi111P = os.path.abspath('02_DC_Projects_DE/02_dc5_paths/eip_111'+'.npy') eip111 = np.load(epi111P) # Line length as number of cells def lineLength_numberOfCells(lineIdx): return len(lineIdx[0]) def lineLength(lineArray): indicies = np.nonzero(lineArray) indicies_paired = np.stack((indicies[0],indicies[1]), axis=-1) disTot = 0 for point in range(1,len(indicies_paired)-1,1): x1 = indicies_paired[point-1][0] y1 = indicies_paired[point-1][1] x2 = indicies_paired[point][0] y2 = indicies_paired[point][1] dist = np.sqrt((x2-x1)**2+(y2-y1)**2)*0.5 disTot = disTot+dist return disTot # Line length based on number of cells multiplied by 0.5 (raster size=500) def lineLength_idx(lineIdx): indicies_paired = np.transpose(lineIdx) disTot = 0 for point in range(1,len(indicies_paired)-1,1): x1 = indicies_paired[point-1][0] y1 = indicies_paired[point-1][1] x2 = indicies_paired[point][0] y2 = indicies_paired[point][1] dist = np.sqrt((x2-x1)**2+(y2-y1)**2)*0.5 disTot = disTot+dist return int(disTot) lineLength_idx(np.load(os.path.abspath('02_DC_Projects_DE/02_dc5_paths/eip_'+'222'+'.npy'))) # + [markdown] toc-hr-collapsed=false # # Population Affected # - popu = raster2array(os.path.abspath('01_Data500/population.tif')) def peopleAff(line): return popu[line[0],line[1]].sum() def peopleAffected(line, basedOn): path = line_path(path_based_on=basedOn, dc = line) return np.multiply(popu,path).sum() # ## Corine corOrig = os.path.abspath('01_Data500/corine_500_original_classification.tif') landOrig = raster2array(corOrig) corClas = pd.read_csv('Corine_Classification.csv', delimiter=';')[['CLC','Class_name']] def corineClassPath(factor, line): dc1EcoPath = line_path(path_based_on=factor,dc=line) dc1EcoCor = np.multiply(dc1EcoPath,landOrig) value, counts = np.unique(dc1EcoCor, return_counts=True) corPath = pd.DataFrame([value,counts]).T corPath.columns =['CLC','count'] corClasPath = corPath.set_index('CLC').join(corClas.set_index('CLC')) corClasDef = 
corClasPath.dropna().groupby('Class_name').sum() corClasDef.columns = [factor] return corClasDef def getCorineClass(line): dc1Eco = corineClassPath(factor='eco',line=line) dc1Env = corineClassPath(factor='env',line=line) dc1Inf = corineClassPath(factor='inf',line=line) dc1Pub = corineClassPath(factor='pub',line=line) dc1All = corineClassPath(factor='all',line=line) dc1CorFac = dc1Eco.join(dc1Env, how='outer').join(dc1Inf, how='outer').join(dc1Pub, how='outer').join(dc1All, how='outer') return dc1CorFac # ## Corine Classification # + corOrig = raster2array(os.path.abspath('01_Data500/corine_500_original_classification.tif')) def getCorineCLC(line): clcCode, clcCounts = np.unique(corOrig[line[0],line[1]], return_counts=True) return dict(zip(clcCode, clcCounts)) corClass = pd.read_excel('Corine_Classification.xlsx').set_index('CLC') # - # ## 'Similarness' of the path # + [markdown] toc-hr-collapsed=false # ## Buffer Intersection # - # **Approach** # # Create a buffer of 1.5 km (an arbitrarily chosen size) and calculate the fraction of intersection of the two buffer zones. # # **Method** # # *Step 1:* Select the cell indexes on the path (non-zero values in the path array). A list of tuples containing the path is created. # # *Step 2:* Add the buffer tuples to the list of tuples. The buffer size can be changed here. This will add multiple tuples with the same values, so it is important to keep only unique tuples; this is done with the set operator. # # *Step 3:* Count the number of tuples common to the paths being compared. # # *Step 4:* Normalize by the number of cells in the buffer zone of the reference path. 
# # # + def getBuffPathLocations(line, bufferLength=3): y,x = line[0], line[1] orgList = list(zip(x,y)) bufList = [] for item in orgList: for shift in range(0,bufferLength,1): bufList.append((item[0]-shift, item[1])) bufList.append((item[0]+shift, item[1])) bufList.append((item[0] , item[1]-shift)) bufList.append((item[0] , item[1]+shift)) bufList.append((item[0]-shift, item[1]-shift)) bufList.append((item[0]+shift, item[1]+shift)) bufList.append((item[0]-shift, item[1]+shift)) bufList.append((item[0]+shift, item[1]-shift)) return set(bufList) def getIntersection(bufRef, bufList2): intLen = len(bufRef.intersection(bufList2)) return intLen/len(bufRef) # - line_eip111 = np.load(os.path.abspath('03_DC_Project_North/02_paths_2/eip_111.npy')) line_eip111 dc5Actual = raster2array(os.path.abspath('02_DC_Projects_DE/DC_5_real.tif')) dc5ActualIdx = np.nonzero(dc5Actual) # ## ProtectedZone length # protected zone as the sum of protected zone cell the line passes through. prot = raster2array(os.path.abspath('01_Data500/protected.tif')) def getProtectedCells(line): return prot[line[0],line[1]].sum()/10 #getProtectedCells(l1113) # ## Slope Classification sloCost = raster2array(os.path.abspath('01_Data500/slope.tif')) def returnSlopeClass(line): slopVals = sloCost[line[0],line[1]] slopValsCount = np.unique(np.digitize(slopVals, [1.146,4.574]), return_counts=True)[1] return list(slopValsCount) # # AllPaths # Calculation of all the evaluation criterion for all the paths # + peopleAffected = [] protZonePassed = [] lineLength = [] #buffInter_dc5Actual = [] buffInter_eip111 = [] slopeClassData = [] corineCLCCount = [] lineNumberOfCells = [] #refBuffAct = getBuffPathLocations(dc5ActualIdx) refBuff111 = getBuffPathLocations(line_eip111) for env in range(0,11,1): for inf in range(0,11,1): for pub in range(0,11,1): if (env == inf==pub==0): continue; fileName = str(env)+str(inf)+str(pub) comName = os.path.abspath('03_DC_Project_North/02_paths_2/eip_'+fileName+'.npy') path = 
np.load(comName) print(env,inf,pub) # line number of cells def lineNumCell(): linNumCell = lineLength_numberOfCells(path) lineNumberOfCells.append([env/(env+inf+pub), inf/(env+inf+pub), pub/(env+inf+pub),linNumCell]) # line length def lineLength(): print('linelength') lineLen = lineLength_idx(path) lineLength.append([env/(env+inf+pub), inf/(env+inf+pub), pub/(env+inf+pub),lineLen]) # corine CLC code def corCLC(): print('corCLC') corCLC = getCorineCLC(path) corineCLCCount.append([env/(env+inf+pub), inf/(env+inf+pub), pub/(env+inf+pub),corCLC]) # slope class def slopClass(): print('slope class') slopClass = [env/(env+inf+pub), inf/(env+inf+pub), pub/(env+inf+pub), returnSlopeClass(path)[0], returnSlopeClass(path)[1], returnSlopeClass(path)[2]] slopeClassData.append(slopClass) # people affected def peopleAffVal(): print('People Aff') linPplAff = peopleAff(path) peopleAffected.append([env/(env+inf+pub), inf/(env+inf+pub), pub/(env+inf+pub),linPplAff]) # buffer wrt to actual # print('actBuff') # buffInt = getIntersection(refBuffAct, # getBuffPathLocations(path)) # buffInter_dc5Actual.append([env/(env+inf+pub), # inf/(env+inf+pub), # pub/(env+inf+pub),buffInt]) # buffer wrt to eip_111 print('111Buff') buffInt = getIntersection(refBuff111, getBuffPathLocations(path)) buffInter_eip111.append([env/(env+inf+pub), inf/(env+inf+pub), pub/(env+inf+pub),buffInt]) # protected zone cells def protCells(): print('protCells') protZnLin = getProtectedCells(path) protZonePassed.append([env/(env+inf+pub), inf/(env+inf+pub), pub/(env+inf+pub),protZnLin]) lineNumCell() #lineLength() corCLC() slopClass() peopleAffVal() protCells() # - # ### Restructuring and saving indicators # similarness relative to actual (only valid for dc5) buff_actual = pd.DataFrame(buffInter_dc5Actual, columns=['env','inf','pub','buff_actual']) buff_actual.to_excel(os.path.abspath('03_DC_Project_North/03_analysis_2/eip_buff_actual.xlsx')) # similarness relative to 111 path buffeip111_paths = 
pd.DataFrame(buffInter_eip111, columns=['env','inf','pub','buf_111']) buffeip111_paths.drop_duplicates().to_excel(os.path.abspath('03_DC_Project_North/03_analysis_2/eip_buff.xlsx')) # number of cells lineNumberOfCells_paths = pd.DataFrame(lineNumberOfCells, columns=['env','inf','pub','line_numberOfCells']) lineNumberOfCells_paths.drop_duplicates().to_excel(os.path.abspath('03_DC_Project_North/03_analysis_2/eip_lineNumberOfCells.xlsx')) # + # people affected pplAffected_paths = pd.DataFrame(peopleAffected, columns=['env','inf','pub','peopleAff']) pplAffected_paths.drop_duplicates().to_excel(os.path.abspath('03_DC_Project_North/03_analysis_2/eip_peopleAffected.xlsx')) #dc5Lengths = pd.DataFrame(lineLength, # columns=['env','inf','pub','dc5Length']) #dc5Lengths.drop_duplicates().to_excel(os.path.abspath('03_DC_Project_North/03_analysis/eip_length.xlsx')) # - # protected zone cells dc5Protzon_paths = pd.DataFrame(protZonePassed, columns=['env','inf','pub','protZoneCells']) dc5Protzon_paths.drop_duplicates().to_excel(os.path.abspath('03_DC_Project_North/03_analysis_2/eip_protZoneCells.xlsx')) # slope classification (columns must be a list, not a set, to keep column order) slopClass_paths = pd.DataFrame(slopeClassData, columns=['env','inf','pub','low','med','high']) slopClass_paths.drop_duplicates().to_excel(os.path.abspath('03_DC_Project_North/03_analysis_2/eip_slopeClass.xlsx')) # CLCCount_paths = pd.DataFrame(corineCLCCount, columns=['env','inf','pub','clc_count'] ) CLCCount_paths.to_excel(os.path.abspath('03_DC_Project_North/03_analysis_2/eip_clcCount.xlsx')) # ## corine mapping to defined classes # + corinClassification = [] for row in CLCCount_paths.iterrows(): lineCounts = pd.DataFrame.from_dict(dict(row[1].clc_count), orient='index', columns=['clc_count']) corineClassNameCount = corClass.join(lineCounts).fillna(0)[['Class_name','clc_count']].groupby('Class_name').sum().T clasList = list(corineClassNameCount.values[0]) clasList.extend([row[1].env, row[1].inf, row[1].pub]) corinClassification.append(clasList) columns = 
['Agriculture', 'Forest', 'HVN', 'Man-Made', 'Wasteland','env', 'inf', 'pub'] corinClassDF = pd.DataFrame(corinClassification, columns=columns) corinClassDF.drop_duplicates().to_excel(os.path.abspath('03_DC_Project_North/03_analysis_2/eip_corinClassification.xlsx'))
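The buffer-intersection "similarness" measure described in Steps 1-4 above can be illustrated on toy paths using plain sets of (x, y) cell tuples instead of rasters. The square buffer shape and the buffer size here are illustrative assumptions, not the notebook's exact neighborhood.

```python
# Toy illustration of the buffer-intersection "similarness" measure.
def buffer_cells(path, buffer_length=2):
    """Steps 1+2: path cells plus a square buffer; set() keeps tuples unique."""
    buf = set()
    for (x, y) in path:
        for dx in range(-buffer_length + 1, buffer_length):
            for dy in range(-buffer_length + 1, buffer_length):
                buf.add((x + dx, y + dy))
    return buf

def similarness(ref_path, other_path, buffer_length=2):
    """Steps 3+4: shared buffer cells, normalized by the reference buffer size."""
    ref_buf = buffer_cells(ref_path, buffer_length)
    other_buf = buffer_cells(other_path, buffer_length)
    return len(ref_buf & other_buf) / len(ref_buf)

path_a = [(0, 0), (1, 0), (2, 0)]
print(similarness(path_a, path_a))      # identical paths -> 1.0
print(similarness(path_a, [(50, 50)]))  # disjoint paths -> 0.0
```

A path compared with itself scores 1.0, and a fully disjoint path scores 0.0, which matches the intended normalization in Step 4.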
03_Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="TUmvNrDQKVhM" colab_type="text" # STAT 453: Deep Learning (Spring 2020) # Instructor: <NAME> (<EMAIL>) # # Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2020/ # GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss20 # + [markdown] id="VgrZbxjrKVhQ" colab_type="text" # # Linear Regression with Gradient Descent # + [markdown] id="AluSlQqwKVhR" colab_type="text" # Note that linear regression and Adaline are very similar. The only difference is that we apply a threshold function for converting the outputs from continuous targets for predictions. The derivative and training procedure are identical to Adaline though. You can compare the two notebooks (this one and `adaline-sgd.ipynb`) side by side as shown below to see the relationship: # # ![](figures/adaline-vs-linreg.png) # + id="xfCpipz5KVhR" colab_type="code" colab={} import pandas as pd import matplotlib.pyplot as plt import torch # %matplotlib inline # + [markdown] id="U-LM-XP7KVhT" colab_type="text" # # Drive Mount # + id="LKYBqomQTZfT" colab_type="code" outputId="72df2174-ef29-4c69-b93c-273d0b2ec700" executionInfo={"status": "ok", "timestamp": 1590182044489, "user_tz": 180, "elapsed": 559, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZzFT0FCo6nTJjXLoCVlWF617XKFK9oco_RLrc-A=s64", "userId": "01490701818826847808"}} colab={"base_uri": "https://localhost:8080/", "height": 54} from google.colab import drive drive.mount('/content/drive') # + [markdown] id="99q-b9zjKVhU" colab_type="text" # ## Load & Prepare a Toy Dataset # + id="mJuUP6emKVhU" colab_type="code" outputId="11cba309-9f91-4a43-90b7-1af381f27410" executionInfo={"status": "ok", "timestamp": 1590182248284, "user_tz": 180, "elapsed": 939, 
"user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZzFT0FCo6nTJjXLoCVlWF617XKFK9oco_RLrc-A=s64", "userId": "01490701818826847808"}} colab={"base_uri": "https://localhost:8080/", "height": 195} df = pd.read_csv('/content/drive/My Drive/Disciplinas/Pattern Recognition/Public/L05_grad-descent/code/datasets/linreg-data.csv', index_col=0) df.tail() # + id="qemV0T7XKVhW" colab_type="code" colab={} # Assign features and target X = torch.tensor(df[['x1', 'x2']].values, dtype=torch.float) y = torch.tensor(df['y'].values, dtype=torch.float) # Shuffling & train/test split torch.manual_seed(123) shuffle_idx = torch.randperm(y.size(0), dtype=torch.long) X, y = X[shuffle_idx], y[shuffle_idx] percent70 = int(shuffle_idx.size(0)*0.7) X_train, X_test = X[shuffle_idx[:percent70]], X[shuffle_idx[percent70:]] y_train, y_test = y[shuffle_idx[:percent70]], y[shuffle_idx[percent70:]] # Normalize (mean zero, unit variance) mu, sigma = X_train.mean(dim=0), X_train.std(dim=0) X_train = (X_train - mu) / sigma X_test = (X_test - mu) / sigma # + [markdown] id="shx_W7uIKVhZ" colab_type="text" # ## Implement Linear Regression Model # + id="-Oo_TQ-bKVhZ" colab_type="code" colab={} class LinearRegression1(): def __init__(self, num_features): self.num_features = num_features self.weights = torch.zeros(num_features, 1, dtype=torch.float) self.bias = torch.zeros(1, dtype=torch.float) def forward(self, x): netinputs = torch.add(torch.mm(x, self.weights), self.bias) activations = netinputs return activations.view(-1) def backward(self, x, yhat, y): grad_loss_yhat = 2*(yhat - y) grad_yhat_weights = x grad_yhat_bias = 1. 
# Chain rule: inner times outer grad_loss_weights = torch.mm(grad_yhat_weights.t(), grad_loss_yhat.view(-1, 1)) / y.size(0) grad_loss_bias = torch.sum(grad_yhat_bias*grad_loss_yhat) / y.size(0) # return negative gradient return (-1)*grad_loss_weights, (-1)*grad_loss_bias # + [markdown] id="Lhp1NtPrKVhc" colab_type="text" # ## Define Training and Evaluation Functions # + id="Ai4OXXNWKVhc" colab_type="code" colab={} #################################################### ##### Training and evaluation wrappers ################################################### def loss(yhat, y): return torch.mean((yhat - y)**2) def train(model, x, y, num_epochs, learning_rate=0.01): cost = [] for e in range(num_epochs): #### Compute outputs #### yhat = model.forward(x) #### Compute gradients #### negative_grad_w, negative_grad_b = model.backward(x, yhat, y) #### Update weights #### model.weights += learning_rate * negative_grad_w model.bias += learning_rate * negative_grad_b #### Logging #### yhat = model.forward(x) # not that this is a bit wasteful here curr_loss = loss(yhat, y) print('Epoch: %03d' % (e+1), end="") print(' | MSE: %.5f' % curr_loss) cost.append(curr_loss) return cost # + [markdown] id="JBxtV5lsKVhf" colab_type="text" # ## Train Linear Regression Model # + id="2sAMUwqzKVhf" colab_type="code" outputId="f36d9897-33ad-4e9a-9763-e86d10c843d1" executionInfo={"status": "ok", "timestamp": 1590183524422, "user_tz": 180, "elapsed": 1029, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZzFT0FCo6nTJjXLoCVlWF617XKFK9oco_RLrc-A=s64", "userId": "01490701818826847808"}} colab={"base_uri": "https://localhost:8080/", "height": 1000} model = LinearRegression1(num_features=X_train.size(1)) cost = train(model, X_train, y_train, num_epochs=100, learning_rate=0.05) # + [markdown] id="GPBYpDPqKVhh" colab_type="text" # ## Evaluate Linear Regression Model # + [markdown] id="Puz77cJQKVhh" colab_type="text" # ### Plot MSE # + id="tUA9_wKqKVhi" 
colab_type="code" outputId="39e67b23-def6-4f80-fb27-80c1ac9d07f6" executionInfo={"status": "ok", "timestamp": 1590174263321, "user_tz": 180, "elapsed": 1863, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZzFT0FCo6nTJjXLoCVlWF617XKFK9oco_RLrc-A=s64", "userId": "01490701818826847808"}} colab={"base_uri": "https://localhost:8080/", "height": 279} plt.plot(range(len(cost)), cost) plt.ylabel('Mean Squared Error') plt.xlabel('Epoch') plt.show() # + id="JHTjWsLPKVhk" colab_type="code" outputId="a4dc3595-c31d-4154-90de-ae10e4f93237" executionInfo={"status": "ok", "timestamp": 1590183579521, "user_tz": 180, "elapsed": 923, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZzFT0FCo6nTJjXLoCVlWF617XKFK9oco_RLrc-A=s64", "userId": "01490701818826847808"}} colab={"base_uri": "https://localhost:8080/", "height": 50} train_pred = model.forward(X_train) test_pred = model.forward(X_test) print('Train MSE: %.5f' % loss(train_pred, y_train)) print('Test MSE: %.5f' % loss(test_pred, y_test)) # + [markdown] id="kgxajLLGKVhn" colab_type="text" # ### Compare with analytical solution # + id="HpIFM3XLKVhn" colab_type="code" outputId="3c59be10-7062-4bce-ddd6-8473edb47613" executionInfo={"status": "ok", "timestamp": 1590174263326, "user_tz": 180, "elapsed": 1846, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZzFT0FCo6nTJjXLoCVlWF617XKFK9oco_RLrc-A=s64", "userId": "01490701818826847808"}} colab={"base_uri": "https://localhost:8080/", "height": 67} print('Weights', model.weights) print('Bias', model.bias) # + id="3H_EWcQwKVhp" colab_type="code" outputId="ff0567b5-e298-4486-b50c-a4edf56c2e2c" executionInfo={"status": "ok", "timestamp": 1590174263328, "user_tz": 180, "elapsed": 1840, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgZzFT0FCo6nTJjXLoCVlWF617XKFK9oco_RLrc-A=s64", "userId": "01490701818826847808"}} 
colab={"base_uri": "https://localhost:8080/", "height": 67} def analytical_solution(x, y): Xb = torch.cat( (torch.ones((x.size(0), 1)), x), dim=1) w = torch.zeros(x.size(1)) z = torch.inverse(torch.matmul(Xb.t(), Xb)) params = torch.matmul(z, torch.matmul(Xb.t(), y)) b, w = torch.tensor([params[0]]), params[1:].view(x.size(1), 1) return w, b w, b = analytical_solution(X_train, y_train) print('Analytical weights', w) print('Analytical bias', b) # + [markdown] id="AnxvDN2oKVhr" colab_type="text" # ## (Ungraded) HW Exercises # + [markdown] id="aPfxK7lBKVhr" colab_type="text" # Modify the `train()` function such that the dataset is shuffled prior to each epoch. Do you see a difference -- Yes/No? Try to come up with an explanation for your observation. # #
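The full-batch loop above can be mirrored in plain NumPy and checked against the normal-equation solution, in the spirit of the notebook's `analytical_solution`. This is a sketch on noiseless toy data (an assumption, so both methods should agree closely), not the PyTorch implementation itself.

```python
# NumPy sketch: full-batch gradient descent on MSE vs. the normal equations.
import numpy as np

rng = np.random.default_rng(123)
X = rng.standard_normal((100, 2))
w_true, b_true = np.array([2.0, -3.0]), 1.0
y = X @ w_true + b_true                      # noiseless targets

w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(500):                         # full-batch gradient descent
    yhat = X @ w + b
    grad_w = 2 * X.T @ (yhat - y) / len(y)   # d(MSE)/dw
    grad_b = 2 * np.mean(yhat - y)           # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

# Analytical solution on the bias-augmented design matrix
Xb = np.hstack([np.ones((len(X), 1)), X])
params = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
print(w, b)  # should closely match params[1:] and params[0]
```

Because the loop is full-batch, shuffling the rows each epoch leaves the gradient (a mean over all samples) unchanged, which is the observation the ungraded exercise is after; shuffling only matters for mini-batch or stochastic updates.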
L05_grad-descent/code/linear-regr-gd.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- numbers={1:'one', 2:'two', 3:'three', 4:'four', 10:'ten'} numbers[2] numbers[1]='ONE' numbers Num=['One','Two','Three','Four'] ini={num:num[0] for num in Num} ini 'Two' in ini for i in numbers: print("{}={}".format(i,numbers[i])) doc_list = ["The Learn Python Challenge Casino.", "They bought a car, and a horse", "Casinoville"] keyword='car' clean_dl=[x.rstrip('.,').lower() for x in doc_list] clean_dl #count = [clean_dl.index(x) for x in clean_dl if keyword.lower() in x.split()] #count
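The commented-out search above can be finished as a small function. This sketch cleans punctuation per word rather than only at the end of the whole document (a deliberate assumption that slightly generalizes the notebook's `rstrip` approach), so an exact-token match excludes "Casinoville" while still finding "car,".

```python
# Return the indices of documents containing keyword as an exact word.
def word_search(doc_list, keyword):
    indices = []
    for i, doc in enumerate(doc_list):
        # strip trailing '.'/',' from each word and lowercase before comparing
        tokens = [w.rstrip('.,').lower() for w in doc.split()]
        if keyword.lower() in tokens:
            indices.append(i)
    return indices

doc_list = ["The Learn Python Challenge Casino.", "They bought a car, and a horse", "Casinoville"]
print(word_search(doc_list, 'casino'))  # -> [0]
print(word_search(doc_list, 'car'))     # -> [1]
```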
Python/Dictionaries_Stuff.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: sina # language: python # name: sina # --- # Getting Started # ============ # # To make sure Sina and its dependencies are available, you'll need to generate a Jupyter kernel that uses our environment. Run this cell to create the kernel: # + language="bash" # source /collab/usr/gapps/wf/releases/sina/bin/activate && python -m ipykernel install --prefix=$HOME/.local/ --name 'sina' --display-name 'sina' # - # There will now be a new kernel available called "sina". Select `Kernel > Change kernel > sina` from the top bar to use it, then run the next cell to verify everything's working. If you don't see `sina` as an option, you may need to refresh this page. import sina print("Sina {} loaded successfully. You are now ready to use the notebooks!".format(sina.get_version()))
examples/getting_started.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # NeurotechX Montreal MNI/Deeplearning Workshop # ## Utilities # + # %matplotlib inline from shutil import unpack_archive import os import glob import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np from sys import argv, exit from os.path import exists from os import makedirs import argparse #local modules defined in current project from make_and_run_model import * from predict import * from prepare_data import * from utils import * from custom_loss import * from plot_metrics import * from minc_keras import * # + # Set data directory for extract current_dir = os.getcwd() data_dir = os.path.join(current_dir, 'data') # Extract data unpack_archive('data\\output.tar.bz2', data_dir) # - # ## Train and test with model_0_0 # Train and test with model_0_0 source_dir = os.path.join(current_dir, 'data/output/') target_dir = current_dir nb_epoch = 2 model_type = "model_0_0" input_str = "*T1w_anat*" label_str = "*seg*" images_to_predict = '1,4' ratios = [0.3,0.3] clobber = True batch_size = 1 feature_dim = 2 model_fn = 'model.hdf5' images_fn = 'images.csv' verbose = 1 # Call main function minc_keras(source_dir, target_dir, input_str, label_str, ratios, feature_dim=2, batch_size=2, nb_epoch=nb_epoch, images_to_predict=images_to_predict, clobber=clobber, model_fn=model_fn,model_type=model_type, images_fn=images_fn, verbose=verbose) # ## Show MRI segmentation results for fn in glob.glob('predict/*/*.png'): img = mpimg.imread(fn) plt.figure(figsize=(20,20)) plt.imshow(img) plt.axis('off') plt.show() # ## Show training plots plt.figure(figsize=(20,20)) plt.imshow(plt.imread('report/model_training_plot.png')) plt.axis('off') plt.show()
notebooks/NeuroTechX_Workshop_Windows_Notebook.ipynb
# --- # jupyter: # jupytext: # split_at_heading: true # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #hide #skip ! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab # + # default_exp losses # default_cls_lvl 3 # - #export from fastai.imports import * from fastai.torch_imports import * from fastai.torch_core import * from fastai.layers import * #hide from nbdev.showdoc import * # # Loss Functions # > Custom fastai loss functions # export @log_args class BaseLoss(): "Same as `loss_cls`, but flattens input and target." activation=decodes=noops def __init__(self, loss_cls, *args, axis=-1, flatten=True, floatify=False, is_2d=True, **kwargs): store_attr("axis,flatten,floatify,is_2d") self.func = loss_cls(*args,**kwargs) functools.update_wrapper(self, self.func) def __repr__(self): return f"FlattenedLoss of {self.func}" @property def reduction(self): return self.func.reduction @reduction.setter def reduction(self, v): self.func.reduction = v def __call__(self, inp, targ, **kwargs): inp = inp.transpose(self.axis,-1).contiguous() targ = targ.transpose(self.axis,-1).contiguous() if self.floatify and targ.dtype!=torch.float16: targ = targ.float() if targ.dtype in [torch.int8, torch.int16, torch.int32]: targ = targ.long() if self.flatten: inp = inp.view(-1,inp.shape[-1]) if self.is_2d else inp.view(-1) return self.func.__call__(inp, targ.view(-1) if self.flatten else targ, **kwargs) # Wrapping a general loss function inside of `BaseLoss` provides extra functionalities to your loss functions: # - flattens the tensors before trying to take the losses since it's more convenient (with a potential transpose to put `axis` at the end) # - a potential `activation` method that tells the library if there is
an activation fused in the loss (useful for inference and methods such as `Learner.get_preds` or `Learner.predict`) # - a potential <code>decodes</code> method that is used on predictions in inference (for instance, an argmax in classification) # The `args` and `kwargs` will be passed to `loss_cls` during the initialization to instantiate a loss function. `axis` is put at the end for losses like softmax that are often performed on the last axis. If `floatify=True`, the `targs` will be converted to floats (useful for losses that only accept float targets like `BCEWithLogitsLoss`), and `is_2d` determines if we flatten while keeping the first dimension (batch size) or completely flatten the input. We want the first for losses like Cross Entropy, and the second for pretty much anything else. # export @log_args @delegates() class CrossEntropyLossFlat(BaseLoss): "Same as `nn.CrossEntropyLoss`, but flattens input and target." y_int = True @use_kwargs_dict(keep=True, weight=None, ignore_index=-100, reduction='mean') def __init__(self, *args, axis=-1, **kwargs): super().__init__(nn.CrossEntropyLoss, *args, axis=axis, **kwargs) def decodes(self, x): return x.argmax(dim=self.axis) def activation(self, x): return F.softmax(x, dim=self.axis) # + tst = CrossEntropyLossFlat() output = torch.randn(32, 5, 10) target = torch.randint(0, 10, (32,5)) #nn.CrossEntropy would fail with those two tensors, but not our flattened version. 
_ = tst(output, target) test_fail(lambda x: nn.CrossEntropyLoss()(output,target)) #Associated activation is softmax test_eq(tst.activation(output), F.softmax(output, dim=-1)) #This loss function has a decodes which is argmax test_eq(tst.decodes(output), output.argmax(dim=-1)) # + #In a segmentation task, we want to take the softmax over the channel dimension tst = CrossEntropyLossFlat(axis=1) output = torch.randn(32, 5, 128, 128) target = torch.randint(0, 5, (32, 128, 128)) _ = tst(output, target) test_eq(tst.activation(output), F.softmax(output, dim=1)) test_eq(tst.decodes(output), output.argmax(dim=1)) # - # export @log_args @delegates() class BCEWithLogitsLossFlat(BaseLoss): "Same as `nn.BCEWithLogitsLoss`, but flattens input and target." @use_kwargs_dict(keep=True, weight=None, reduction='mean', pos_weight=None) def __init__(self, *args, axis=-1, floatify=True, thresh=0.5, **kwargs): super().__init__(nn.BCEWithLogitsLoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs) self.thresh = thresh def decodes(self, x): return x>self.thresh def activation(self, x): return torch.sigmoid(x) # + tst = BCEWithLogitsLossFlat() output = torch.randn(32, 5, 10) target = torch.randn(32, 5, 10) #nn.BCEWithLogitsLoss would fail with those two tensors, but not our flattened version. _ = tst(output, target) test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target)) output = torch.randn(32, 5) target = torch.randint(0,2,(32, 5)) #nn.BCEWithLogitsLoss would fail with int targets but not our flattened version. _ = tst(output, target) test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target)) #Associated activation is sigmoid test_eq(tst.activation(output), torch.sigmoid(output)) # - # export @log_args(to_return=True) @use_kwargs_dict(weight=None, reduction='mean') def BCELossFlat(*args, axis=-1, floatify=True, **kwargs): "Same as `nn.BCELoss`, but flattens input and target." 
return BaseLoss(nn.BCELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs) tst = BCELossFlat() output = torch.sigmoid(torch.randn(32, 5, 10)) target = torch.randint(0,2,(32, 5, 10)) _ = tst(output, target) test_fail(lambda x: nn.BCELoss()(output,target)) # export @log_args(to_return=True) @use_kwargs_dict(reduction='mean') def MSELossFlat(*args, axis=-1, floatify=True, **kwargs): "Same as `nn.MSELoss`, but flattens input and target." return BaseLoss(nn.MSELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs) tst = MSELossFlat() output = torch.sigmoid(torch.randn(32, 5, 10)) target = torch.randint(0,2,(32, 5, 10)) _ = tst(output, target) test_fail(lambda x: nn.MSELoss()(output,target)) #hide #cuda #Test losses work in half precision output = torch.sigmoid(torch.randn(32, 5, 10)).half().cuda() target = torch.randint(0,2,(32, 5, 10)).half().cuda() for tst in [BCELossFlat(), MSELossFlat()]: _ = tst(output, target) # export @log_args(to_return=True) @use_kwargs_dict(reduction='mean') def L1LossFlat(*args, axis=-1, floatify=True, **kwargs): "Same as `nn.L1Loss`, but flattens input and target." 
return BaseLoss(nn.L1Loss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs) #export @log_args class LabelSmoothingCrossEntropy(Module): y_int = True def __init__(self, eps:float=0.1, reduction='mean'): self.eps,self.reduction = eps,reduction def forward(self, output, target): c = output.size()[-1] log_preds = F.log_softmax(output, dim=-1) if self.reduction=='sum': loss = -log_preds.sum() else: loss = -log_preds.sum(dim=-1) #We divide by that size at the return line so sum and not mean if self.reduction=='mean': loss = loss.mean() return loss*self.eps/c + (1-self.eps) * F.nll_loss(log_preds, target.long(), reduction=self.reduction) def activation(self, out): return F.softmax(out, dim=-1) def decodes(self, out): return out.argmax(dim=-1) # On top of the formula we define: # - a `reduction` attribute, that will be used when we call `Learner.get_preds` # - an `activation` function that represents the activation fused in the loss (since we use cross entropy behind the scenes). It will be applied to the output of the model when calling `Learner.get_preds` or `Learner.predict` # - a <code>decodes</code> function that converts the output of the model to a format similar to the target (here indices). This is used in `Learner.predict` and `Learner.show_results` to decode the predictions #export @log_args @delegates() class LabelSmoothingCrossEntropyFlat(BaseLoss): "Same as `LabelSmoothingCrossEntropy`, but flattens input and target." y_int = True @use_kwargs_dict(keep=True, eps=0.1, reduction='mean') def __init__(self, *args, axis=-1, **kwargs): super().__init__(LabelSmoothingCrossEntropy, *args, axis=axis, **kwargs) def activation(self, out): return F.softmax(out, dim=-1) def decodes(self, out): return out.argmax(dim=-1) # ## Export - #hide from nbdev.export import * notebook2script()
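The label-smoothing formula described above can be sanity-checked against plain cross entropy: with `eps=0` the smoothed loss must reduce to `F.cross_entropy`. A minimal torch-only re-implementation outside the fastai class hierarchy (the helper name `label_smoothing_ce` is illustrative, not part of the library):

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(output, target, eps=0.1):
    # loss = eps/c * mean(sum(-log_preds)) + (1-eps) * NLL,
    # mirroring the LabelSmoothingCrossEntropy forward above
    c = output.size(-1)
    log_preds = F.log_softmax(output, dim=-1)
    smooth = -log_preds.sum(dim=-1).mean()
    return smooth * eps / c + (1 - eps) * F.nll_loss(log_preds, target)

output = torch.randn(8, 5)
target = torch.randint(0, 5, (8,))
# eps=0 recovers standard cross entropy
assert torch.allclose(label_smoothing_ce(output, target, eps=0.0),
                      F.cross_entropy(output, target))
```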
nbs/01a_losses.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.4.0 # language: julia # name: julia-1.4 # --- using Plots using Random # + mutable struct track road::Array{Int64} end function track(n::Int64) track(zeros(n,n)) end # + function right!(tr::track,pos::Int64,dir::Int64,larg::Int64,h::Int64) r = tr.road dir2 = dir%4 if dir2 == 0 for i in 0:larg-1 r[pos + i*n : pos + (larg-1) + i*n] = [h+larg - j for j in 1:larg] #ones(larg)*(h+larg-i) end pos = pos-1 end if dir2 == 1 for i in 0:larg-1 r[pos - i*n : pos + (larg-1) - i*n] = ones(larg)*(h+larg-i) end pos = pos+n end if dir2 == 2 for i in 0:larg-1 r[pos - i*n - (larg-1): pos - i*n] = [j for j in h:h+larg-1] #ones(larg)*(h+larg-i) end pos = pos+1 end if dir2 == 3 for i in 0:larg-1 r[pos + i*n - (larg-1): pos + i*n] = ones(larg)*(h+larg-i) end pos = pos-n end r[pos]=h dir = dir-1 (pos,dir) end function left!(tr::track,pos::Int64,dir::Int64,larg::Int64,h::Int64) r = tr.road dir2 = dir%4 if dir2 == 0 for i in 0:larg-1 r[pos + i*n : pos + (larg-1) + i*n] = [j for j in h:h+larg-1] #ones(larg)*(h+i) end pos = pos+(larg)*(n+1)-n end if dir2 == 1 for i in 0:larg-1 r[pos - i*n : pos + (larg-1) - i*n] = ones(larg)*(h+i) end pos = pos-(larg)*(n-1)-1 end if dir2 == 2 for i in 0:larg-1 r[pos - i*n - (larg-1): pos - i*n] = [h+larg - j for j in 1:larg] # ones(larg)*(h+i) end pos = pos-(larg)*(n+1)+n end if dir2 == 3 for i in 0:larg-1 r[pos + i*n - (larg-1): pos + i*n] = ones(larg)*(h+i) end pos = pos+(larg)*(n-1)+1 end r[pos]=h dir = dir+1 (pos,dir) end function droit!(tr::track,pos::Int64,dir::Int64,larg::Int64,long::Int64,h::Int64) r = tr.road dir2 = dir%4 if dir2 == 0 for i in 0:long-1 r[pos + i*n : pos + (larg-1) + i*n] = ones(larg)*h end pos = pos+(long)*n end if dir2 == 1 for i in 0:larg-1 r[pos - i*n : pos - i*n + (long-1)] = ones(long)*h end pos = pos+long end if dir2 == 2 for i in 0:long-1 r[pos - i*n 
- (larg-1) : pos - i*n] = ones(larg)*h end pos = pos-(long)*n end if dir2 == 3 for i in 0:larg-1 r[pos + i*n - (long-1) : pos + i*n] = ones(long)*h end pos = pos-long end r[pos]=h (pos,dir) end function long!(tr::track,pos::Int64,dir::Int64,larg::Int64,h::Int64) (pos,dir) = droit!(tr::track,pos::Int64,dir::Int64,larg::Int64,3,h) (pos,dir) = droit!(tr::track,pos::Int64,dir::Int64,larg::Int64,3,h+1) (pos,dir) = droit!(tr::track,pos::Int64,dir::Int64,larg::Int64,3,h+2) (pos,dir) = droit!(tr::track,pos::Int64,dir::Int64,larg::Int64,3,h+3) end function short!(tr::track,pos::Int64,dir::Int64,larg::Int64,h::Int64) droit!(tr::track,pos::Int64,dir::Int64,larg::Int64,3,h) end h = 10 n = 100 R = track(n) pos = 5550 larg = 6 R.road[5550 + n] = 10 dir = 22 long!(R,pos,dir,larg,h) plot(heatmap(R.road),size=(650,600)) # + function build!(tr::track,L::Array{Int64}) pos = 5010 dir = 100 tr.road[5001-n] = 1 h = 100 for i in L if i == 1 (pos,dir) = short!(tr,pos,dir,larg,h) end if i == 2 (pos,dir) = long!(tr,pos,dir,larg,h) h = h+3 end if i == 3 (pos,dir) = right!(tr,pos,dir,larg,h) h = h+larg end if i == 4 (pos,dir) = left!(tr,pos,dir,larg,h) h = h+larg end h = h + 1 end end R1 = track(n) L = [1,2,4,2,3,4,2,2,2,4,2,2,2,2,4,3,4,1,4,2,2,3,2,2,1,2,4] build!(R1,L) plot(heatmap(R1.road),size=(700,600)) # + mutable struct car x::Float64 y::Float64 v::Float64 angle::Float64 a::Float64 braq::Float64 vmax::Float64 end function car(x::Float64,y::Float64) car(x,y,0,0,5,5,10) end # - function move!(Car::car,input::Int64,dt::Float64) delta = Car.v*dt if input == 1 || input == 2 || input == 8 if Car.v<Car.vmax delta = delta + 0.5*Car.a*(dt^2) Car.v = Car.v + Car.a*dt end end if input == 1 || input == 2 || input == 8 if Car.v<Car.vmax #delta = delta + 0.5*Car.a*(dt^2) Car.v = Car.v + Car.a*dt end end if input == 4 || input == 5 || input == 6 if Car.v<Car.vmax #delta = delta - 0.5*Car.a*(dt^2) Car.v = Car.v - Car.a*dt end end if input == 2 || input == 3 || input == 4 Car.angle = Car.angle - 
delta/Car.braq end if input == 6 || input == 7 || input == 8 Car.angle = Car.angle + delta/Car.braq end Car.x = Car.x + cos(Car.angle)*delta Car.y = Car.y + sin(Car.angle)*delta end # + function trajectoire(Car::car,L::Array{Int64},dt::Float64) X = zeros(length(L)) Y = zeros(length(L)) for i in eachindex(L) move!(Car,L[i],dt) X[i] = Car.x Y[i] = Car.y end (X,Y) end len = 100 dt = 0.06 Car = car(0.0,0.0) X = zeros(len) Y = zeros(len) T = [1, 7, 3, 1, 1, 1, 1, 1, 7, 1] for i in eachindex(T) # if rand()<1/6 # move!(Car,2,dt) # # elseif 1/6<rand()<4/6 # move!(Car,7,dt) # else # move!(Car,1,dt) # end move!(Car,T[i],dt) X[i] = Car.x Y[i] = Car.y end #plot(scatter(X,Y,xlim = (-30,30), ylim = (-50,30))) # + mutable struct Ind genes::Array{Int64} fitness::Int64 end function Ind(n::Int64) genes = ones(n) for i in eachindex(genes) if rand()<1/3 genes[i] = 7 elseif rand()>2/3 genes[i]= 3 end end Ind(genes,0) end # - function mutate!(ind::Ind;p = 1/nInd) for i in eachindex(ind.genes) if rand()<p if rand()<1/3 ind.genes[i] = 7 elseif rand()>2/3 ind.genes[i]= 3 else ind.genes[i] = 1 end end end end # + function evaluate!(ind::Ind,tr::track,CAR::car,dt::Float64) (X,Y) = trajectoire(CAR,ind.genes,dt) values = ones(nInd) i = 2 while i < nInd && values[i-1]>0 #println(values[i]) x = Int(round(X[i]+52)) y = Int(round(Y[i]+13)) if 0<x<n && 0<y<n values[i] = tr.road[y,x] #println(values[i]) end i = i+1 end ind.fitness = maximum(values) end function trajInd(ind::Ind,CAR::car,dt::Float64) (X,Y) = trajectoire(CAR,ind.genes,dt) values = ones(nInd) i = 1 (X+ones(nInd)*52,Y+ones(nInd)*13) end function affichage(ind::Ind,tr::track,CAR::car,dt::Float64) tr2 = track(n) tr2.road = copy(tr.road) (X,Y) = trajectoire(CAR,ind.genes,dt) i = 1 while i < nInd x = Int(round(X[i]+52)) y = Int(round(Y[i]+13)) if 0<x<n && 0<y<n tr2.road[y,x] = 300 #println(values[i]) end i = i+1 end tr2.road end # + Car1 = car(0.0,0.0) dt = 0.05 nInd = 430 first = Ind(nInd) #(X,Y) = trajectoire(Car1,first.genes,dt) 
#plot(scatter(X,Y)) println(evaluate!(first,R1,Car1,dt)) child = Ind(nInd) child.genes = copy(first.genes) plot(scatter(trajInd(first,car(0.0,0.0),dt))) # + for i in 1:100000 child.genes = copy(first.genes) mutate!(child) evaluate!(child,R1,car(0.0,0.0),dt) if child.fitness >= first.fitness first.genes = copy(child.genes) evaluate!(first,R1,car(0.0,0.0),dt) #println("ok") end #println("C", child.fitness) #println(first.fitness) end first.fitness # - #plot(heatmap(R1.road), scatter(trajInd(child,car(0.0,0.0),dt)), size = (1000,400)) println(evaluate!(first,R1,car(0.0,0.0),dt)) plot(heatmap(affichage(first,R1,car(0.0,0.0),dt)),size = (800,800)) plot(scatter(trajInd(first,car(0.0,0.0),dt)))
project/Race optim.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# https://leetcode.com/problems/string-to-integer-atoi/
# Input: s = "4193 with words"
# Output: 4193
# Explanation: Conversion stops at digit '3' as the next character is not a numerical digit.
# -

class Solution:
    def myAtoi(self, s: str) -> int:
        pos = True    # sign of the result
        data = ""     # collected digit characters
        flag = False  # True once a sign or a digit has been consumed
        for i in s:
            if i == "-" and not flag:
                pos = False
                flag = True
            elif i == "+" and not flag:
                flag = True  # fix: '+' must also end the leading-space phase
            elif i == " ":
                if flag:
                    break    # a space after a sign or digits ends parsing
                continue     # still skipping leading spaces
            elif i.isdigit():
                data += i
                flag = True
            else:
                break
        data = int(data) if data else 0
        if not pos:
            data = -data
        # clamp to the signed 32-bit integer range
        return max(-2**31, min(2**31 - 1, data))

s = Solution()
s.myAtoi(" -12121 12 ")
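The same parsing rules can be expressed far more compactly with a regular expression: skip leading spaces, then require an optional sign immediately followed by digits. A sketch for comparison (the helper `atoi_regex` is illustrative, not part of the `Solution` class above):

```python
import re

def atoi_regex(s: str) -> int:
    # leading spaces, then an optional sign that must be followed by digits
    m = re.match(r"[ ]*([+-]?\d+)", s)
    if not m:
        return 0
    # clamp to the signed 32-bit integer range
    return max(-2**31, min(2**31 - 1, int(m.group(1))))

print(atoi_regex(" -12121 12 "))  # -12121
```

Because `\d+` must directly follow the optional sign, inputs like `"+ 12"` correctly return 0.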
tanmay/leetcode/string-to-integer-atoi.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd df = pd.read_csv('winemag-data_first150k.csv') # + pycharm={"name": "#%%\n"} df.head() # + pycharm={"name": "#%%\n"} df # + pycharm={"name": "#%%\n"} del df['description'] # + pycharm={"name": "#%%\n"} del df['Unnamed: 0'] # + pycharm={"name": "#%%\n"} df.head() # + pycharm={"name": "#%%\n"} # Now that we have the proper data frame we want # so I can continue the data wrangling # + pycharm={"name": "#%%\n"} df.info() # + pycharm={"name": "#%%\n"} df.isnull().any() # + pycharm={"name": "#%%\n"} # Here, we see a lot of null values in the dataframe. Will try to modify # the dataframe later # + pycharm={"name": "#%%\n"} # %pylab inline df['points'].plot(kind='hist', title='points') # + pycharm={"name": "#%%\n"} # we see most of the scores are 85~90 # + pycharm={"name": "#%%\n"} df['price'].plot(kind='hist', title='price') # + pycharm={"name": "#%%\n"} df['price']>900 # + pycharm={"name": "#%%\n"} df[df['price']>900] # + pycharm={"name": "#%%\n"} df[df['price']<400].plot(kind='hist', title='price') # + pycharm={"name": "#%%\n"} df['Cost performance'] = df['points']/df['price'] df.sort_values(['Cost performance'],ascending=False).head(20) # + pycharm={"name": "#%%\n"} df.plot(kind='scatter', x='price', y='points' , title='the scatter plot of price vs points') # + pycharm={"name": "#%%\n"} df['country'].value_counts() # + pycharm={"name": "#%%\n"} df['country'].value_counts().head().plot(kind='pie', figsize=[10,10], counterclock=True, startangle = 90, legend=False, title='Country') # + pycharm={"name": "#%%\n"} df['variety'].value_counts().head() # + pycharm={"name": "#%%\n"} df['variety'].value_counts().head().plot(kind='pie', figsize=[10,10], counterclock=True, startangle = 0, legend=False, title='variety') # + pycharm={"name": 
"#%%\n"} df['winery'].value_counts().head() # + pycharm={"name": "#%%\n"} winery_price = df['price'].groupby(df['winery']).mean() win_ord = winery_price.sort_values(ascending=False).head() win_ord # + pycharm={"name": "#%%\n"} win_ord.plot(kind='line',title="Mean price of winery") # + pycharm={"name": "#%%\n"}
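The points/price ratio ("Cost performance") and the per-winery mean price computed above can be illustrated on a tiny synthetic frame (hypothetical values, not the actual wine data):

```python
import pandas as pd

# small synthetic stand-in for the wine dataset
df = pd.DataFrame({
    "winery": ["A", "A", "B", "B"],
    "points": [90, 88, 95, 85],
    "price":  [15.0, 10.0, 100.0, 20.0],
})
df["cost_performance"] = df["points"] / df["price"]
best = df.sort_values("cost_performance", ascending=False).iloc[0]
print(best["winery"], round(best["cost_performance"], 2))  # A 8.8
# mean price per winery, as in the groupby above
print(df.groupby("winery")["price"].mean())
```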
Wine/Wine.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import pandas as pd
import numpy as np
import string
import os
import sys
import random
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt

# Meal Item Price Problem

# the true item prices at the register
p_fish = 150; p_chips = 50; p_ketchup = 100

# +
# meal price samples: simulated meal prices over 10 days
np.random.seed(100)
portions = np.random.randint(low=1, high=10, size=3)
portions

X = []; y = []; days = 10
for i in range(days):
    portions = np.random.randint(low=1, high=10, size=3)
    price = p_fish * portions[0] + p_chips * portions[1] + p_ketchup * portions[2]
    X.append(portions)
    y.append(price)

X = np.array(X)
y = np.array(y)
# -

print(X, y)

# build the linear model
from keras.layers import Input, Dense, Activation
from keras.models import Model
from keras.optimizers import SGD
from keras.callbacks import Callback

price_guess = [np.array([[50], [50], [50]])]
model_input = Input(shape=(3,), dtype='float32')
model_output = Dense(1, activation='linear', use_bias=False,
                     name='LinearNeuron', weights=price_guess)(model_input)
sgd = SGD(lr=0.01)
model = Model(model_input, model_output)
model.compile(loss="mean_squared_error", optimizer=sgd)
model.summary()

history = model.fit(X, y, batch_size=20, epochs=30, verbose=2)
# loss curve for LR=0.01; l1 and l2 below come from reruns with lr=0.0001 and lr=0.001
l3 = history.history['loss']

model.get_layer('LinearNeuron').get_weights()

# +
print(history.history.keys())
plt.plot(l1)
plt.plot(l2)
plt.plot(l3)
plt.ylabel('mean squared error')
plt.xlabel('epoch')
plt.legend(["LR=0.0001", "LR=0.001", "LR=0.01"])
plt.show()
# +
# observe the effect of the learning rate (LR)
# -

# XOR Problem in Keras

X = np.array([[0,0],[0,1],[1,0],[1,1]])
y = np.array([[0],[1],[1],[0]])

# +
# XOR is not a linearly separable problem.
# A purely linear model does not work unless a nonlinear layer is added.
model_input = Input(shape=(2,), dtype='float32') z = Dense(2,name='HiddenLayer', kernel_initializer='ones', activation='relu')(model_input) #z = Activation('relu')(z) z = Dense(1, name='OutputLayer')(z) model_output = Activation('sigmoid')(z) model = Model(model_input, model_output) #model.summary() # - sgd = SGD(lr=0.5) #model.compile(loss="mse", optimizer=sgd) model.compile(loss="binary_crossentropy", optimizer=sgd) model.fit(X, y, batch_size=4, epochs=200,verbose=0) preds = np.round(model.predict(X),decimals=3) pd.DataFrame({'Y_actual':list(y), 'Predictions':list(preds)}) model.get_weights() hidden_layer_output = Model(inputs=model.input, outputs=model.get_layer('HiddenLayer').output) projection = hidden_layer_output.predict(X) for i in range(4): print (X[i], projection[i]) import matplotlib.pyplot as plt # + fig = plt.figure(figsize=(5,10)) ax = fig.add_subplot(211) plt.scatter(x=projection[:, 0], y=projection[:, 1], c=('g')) ax.set_xlabel('X axis (h1)') ax.set_ylabel('Y axis (h2)') ax.set_label('Transformed Space') #hidden layer transforming the input to a linearly seperable. 
x1, y1 = [projection[0, 0]-0.5, projection[3, 0]], [projection[0, 1]+0.5, projection[3, 1]+0.5] plt.plot(x1, y1) for i, inputx in enumerate(X): ax.annotate(str(inputx), (projection[i, 0]+0.1,projection[i, 1])) ax = fig.add_subplot(212) ax.set_label('Original Space') plt.scatter(x=X[:, 0], y=X[:, 1], c=('b')) for i, inputx in enumerate(X): ax.annotate(str(inputx), (X[i, 0]+0.05,X[i, 1])) # - plt.show() projection # + #Logistic neuron: Logistic regression # - from sklearn.datasets import load_breast_cancer data = load_breast_cancer() X = data.data y = data.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42) X_train.shape # + model_input = Input(shape=(30,), dtype='float32') model_output = Dense(1, activation='sigmoid', name='SigmoidNeuron')(model_input) sgd = SGD(lr=0.01) model = Model(model_input, model_output) model.compile(loss="binary_crossentropy", optimizer=sgd, metrics=["accuracy"]) # - scaler = StandardScaler() model.fit(scaler.fit_transform(X_train), y_train, batch_size=10, epochs=5,verbose=2, validation_data=(scaler.fit_transform(X_test), y_test)) # + import numpy as np import seaborn as sns import matplotlib.pyplot as plt import matplotlib.animation as animation from scipy import stats from sklearn.datasets.samples_generator import make_regression # - x, y = make_regression(n_samples = 100, n_features=1, n_informative=1, noise=20, random_state=2017) x = x.flatten() slope, intercept, _,_,_ = stats.linregress(x,y) print("m={}, c={}".format(slope,intercept)) best_fit = np.vectorize(lambda x: x * slope + intercept) plt.plot(x,y, 'o', alpha=0.5) grid = np.arange(-3,3,0.1) plt.plot(grid,best_fit(grid), '.') plt.show() def gradient_descent(x, y, theta_init, step=0.1, maxsteps=0, precision=0.001, ): costs = [] m = y.size # number of data points theta = theta_init history = [] # to store all thetas preds = [] counter = 0 oldcost = 0 pred = np.dot(x, theta) error = pred - y currentcost = np.sum(error ** 2) / (2 * m) 
preds.append(pred) costs.append(currentcost) history.append(theta) counter+=1 while abs(currentcost - oldcost) > precision: oldcost=currentcost gradient = x.T.dot(error)/m theta = theta - step * gradient # update history.append(theta) pred = np.dot(x, theta) error = pred - y currentcost = np.sum(error ** 2) / (2 * m) costs.append(currentcost) if counter % 25 == 0: preds.append(pred) counter+=1 if maxsteps: if counter == maxsteps: break return history, costs, preds, counter xaug = np.c_[np.ones(x.shape[0]), x] theta_i = [-15, 40] + np.random.rand(2) history, cost, preds, iters = gradient_descent(xaug, y, theta_i, step=0.1) theta = history[-1] print("Gradient Descent: {:.2f}, {:.2f} {:d}".format(theta[0], theta[1], iters)) print("Least Squares: {:.2f}, {:.2f}".format(intercept, slope)) # + from mpl_toolkits.mplot3d import Axes3D def error(X, Y, THETA): return np.sum((X.dot(THETA) - Y)**2)/(2*Y.size) ms = np.linspace(theta[0] - 20 , theta[0] + 20, 20) bs = np.linspace(theta[1] - 40 , theta[1] + 40, 40) M, B = np.meshgrid(ms, bs) zs = np.array([error(xaug, y, theta) for theta in zip(np.ravel(M), np.ravel(B))]) Z = zs.reshape(M.shape) fig = plt.figure(figsize=(10, 6)) ax = fig.add_subplot(111, projection='3d') ax.plot_surface(M, B, Z, rstride=1, cstride=1, color='b', alpha=0.2) ax.contour(M, B, Z, 20, color='b', alpha=0.5, offset=0, stride=30) ax.set_xlabel('Intercept') ax.set_ylabel('Slope') ax.set_zlabel('Cost') ax.view_init(elev=30., azim=30) ax.plot([theta[0]], [theta[1]], [cost[-1]] , markerfacecolor='r', markeredgecolor='r', marker='o', markersize=7); #ax.plot([history[0][0]], [history[0][1]], [cost[0]] , markerfacecolor='r', markeredgecolor='r', marker='o', markersize=7); ax.plot([t[0] for t in history], [t[1] for t in history], cost , markerfacecolor='r', markeredgecolor='r', marker='.', markersize=2); ax.plot([t[0] for t in history], [t[1] for t in history], 0 , markerfacecolor='r', markeredgecolor='r', marker='.', markersize=2); # - plt.show() # + fig = 
plt.figure(figsize=(10, 5)) ax = fig.add_subplot(111) xlist = np.linspace(-7.0, 7.0, 100) # Create 1-D arrays for x,y dimensions ylist = np.linspace(-7.0, 7.0, 100) X,Y = np.meshgrid(xlist, ylist) # Create 2-D grid xlist,ylist values Z = 50 - X**2 - 2*Y**2 # Compute function values on the grid plt.contour(X, Y, Z, [10,20,30,40], colors = ['y','orange','r','b'], linestyles = 'solid') ax.annotate('Direction Of Gradident', xy=(.6, 0.3), xytext=(.6, 0.3)) ax.annotate('Temp=30', xy=(2.8, 2.5), xytext=(2.8, 2.5)) ax.annotate('Temp=40', xy=(2.3, 2), xytext=(2.3, 1.5)) #ax.arrow(0, 0, 6.9, 6.8, head_width=0.5, head_length=0.5, fc='k', ec='k') ax.arrow(2, 1.75, 2*2/20, 4*1.75/20, head_width=0.2, head_length=0.5, fc='r', ec='r') ax.arrow(2, 1.75, -2*2/10, -4*1.75/10, head_width=0.3, head_length=0.5, fc='g', ec='g') plt.show() # - 50 - 2**2 - 2*1.75**2 # + import numpy as np import matplotlib.pylab as plt def step(x): return np.array(x > 0, dtype=np.int) def sigmoid(x): return 1 / (1 + np.exp(-x)) def relu(x): return np.maximum(0, x) def tanh(x): return (np.exp(x)-np.exp(-x)) / (np.exp(x) + np.exp(-x)) x = np.arange(-5.0, 5.0, 0.1) y_step = step(x) y_sigmoid = sigmoid(x) y_relu = relu(x) y_tanh = tanh(x) fig, axes = plt.subplots(ncols=4, figsize=(20, 5)) ax = axes[0] ax.plot(x, y_step,label='Binary Threshold', color='k', lw=1, linestyle=None) ax.set_ylim(-0.8,2) ax.set_title('Binary Threshold') ax = axes[1] ax.plot(x, y_sigmoid,label='Sigmoid', color='k', lw=1, linestyle=None) ax.set_ylim(-0.001,1) ax.set_title('Sigmoid') ax = axes[2] ax.plot(x, y_tanh,label='Tanh', color='k', lw=1, linestyle=None) ax.set_ylim(-1.,1) ax.set_title('Tanh') ax = axes[3] ax.plot(x, y_relu,label='ReLu', color='k', lw=1, linestyle=None) ax.set_ylim(-0.8,5) ax.set_title('ReLu') plt.show() # + x = np.arange(-10.0, 10.0, 0.1) def lineup(x): return (x-4)/12-1 def cliff(x): x1 = -tanh(x[x<4]) x2 = np.apply_along_axis(lineup, 0, x[x>4]) return np.concatenate([x1, x2]) y_cliff = cliff(x) fig, axes = 
plt.subplots(ncols=1, figsize=(10, 5)) ax = axes ax.plot(x, y_cliff,label='Steep Cliff', color='k', lw=1, linestyle=None) ax.set_ylim(-1.,1) ax.set_title('Steep Cliff') plt.show() # - # ## Polynomial curve fitting: Model Capacity from math import sin, pi N = 100; max_degree = 20 noise = np.random.normal(0, 0.2, N) # + df = pd.DataFrame( index=list(range(N)),columns=list(range(1,max_degree))) for i in range(N): df.loc[i]=[pow(i/N,n) for n in range(1,max_degree)] df['y']=[sin(2*pi*x/N)+noise[x] for x in range(N)] plt.scatter(x=df[1], y=df['y']) plt.show() # - from keras.initializers import RandomNormal degree = 3 X = df[list(range(1,degree+1))].values y = df['y'].values X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.60, random_state=42) model_input = Input(shape=(degree,), dtype='float32') model_output = Dense(1, activation='linear', name='LinearNeuron')(model_input) sgd = SGD(lr=0.3) model = Model(model_input, model_output) model.compile(loss="mean_squared_error", optimizer=sgd) history = model.fit(X_train,y_train , batch_size=10, epochs=1000,verbose=0, validation_data=(X_test,y_test) ) y_pred = model.predict(X_train) plt.scatter(X_train[:,0], y_train) plt.plot(np.sort(X_train[:,0]), y_pred[X_train[:,0].argsort()]) plt.title("Model fit for plynomial of degree {}".format(degree)) plt.show() model.get_weights() y_pred = model.predict(X_test) plt.scatter(X_test[:,0], y_test) plt.plot(np.sort(X_test[:,0]), y_pred[X_test[:,0].argsort()]) plt.title("Model fit for plynomial of degree {}".format(degree)) plt.show() plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model trainig progress') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # # Gradient Norm Calculation # + import keras.backend as K from keras.layers import Dense from keras.models import Sequential def get_gradient_norm_func(model): grads = K.gradients(model.total_loss, model.trainable_weights) 
summed_squares = [K.sum(K.square(g)) for g in grads] norm = K.sqrt(sum(summed_squares)) #list concatenation : inputs followed by target , followed by sample weights (all 1 in this case) inputs = [model._feed_inputs , model._feed_targets , model._feed_sample_weights] #K.function takes the input, output tensors as list so that it can create amany to many function func = K.function(inputs, [norm]) return func # + #x = np.random.random((128,)).reshape((-1, 1)) #y = 2 * x #model = Sequential(layers=[Dense(2, input_shape=(1,)), # Dense(1)]) #model.compile(loss='mse', optimizer='rmsprop') # - get_gradient = get_gradient_norm_func(model) gradients_per_epoc =[] for i in range(5): history = model.fit(X, y, epochs=1,batch_size=10, verbose=0) gradients_per_epoc = gradients_per_epoc + get_gradient([X, y, np.ones(len(y))]) plt.plot(gradients_per_epoc) plt.show() y #The parameters clipnorm and clipvalue can be used with all optimizers to control gradient clipping: from keras import optimizers # All parameter gradients will be clipped to max norm of 1.0 sgd = optimizers.SGD(lr=0.01, clipnorm=1.) #Similarly for ADAM adam = optimizers.Adam(clipnorm=1.) 
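The gradient-norm computation above sums squared entries across all weight tensors and takes a square root. Clipping by that *global* norm (rather than Keras's per-tensor `clipnorm`) can be sketched in plain NumPy; the helper `clip_by_global_norm` is illustrative, not a Keras API:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    # global norm = sqrt of the sum of squares over ALL gradient tensors;
    # when it exceeds max_norm, scale every gradient by max_norm/global_norm
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (global_norm + 1e-12))
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]
clipped, norm = clip_by_global_norm(grads, 1.0)
print(norm)  # 13.0
```

After clipping, the concatenated gradient vector has norm at most `max_norm`, while small gradients pass through unchanged.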
# + import tensorflow as tf # Initialize 3 constants: 2 vectors, a scalar and a 2D tensor x1 = tf.constant([1,2,3,4]) x2 = tf.constant([5,6,7,8]) b = tf.constant(10) W = tf.constant(-1, shape=[4, 2]) # Elementwise Multiply/subtract res_elem_wise_mult = tf.multiply(x1, x2) res_elem_wise_sub = tf.subtract(x1, x2) #dot product of two tensors of compatable shapes res_dot_product = tf.tensordot(x1, x2, axes=1) #broadcasting : add scalar 10 to all elements of the vector res_broadcast = tf.add(x1, b) #Calculating Wtx res_matrix_vector_dot = tf.multiply(tf.transpose(W), x1) #scalar multiplication scal_mult_matrix = tf.scalar_mul(scalar=10, x=W) # Initialize Session and execute with tf.Session() as sess: output = sess.run([res_elem_wise_mult,res_elem_wise_sub, res_dot_product, res_broadcast,res_matrix_vector_dot, scal_mult_matrix]) print(output) # - from math import tanh for x1 in np.arange(0, 9, 0.25): for x2 in np.arange(0, 9, 0.25): h1=tanh(x1+x2-10) h2=tanh(x1-x2) y=h1+h2 print(y) n_samples = 10 x = np.sort(np.random.randn(n_samples)) y = np.multiply(3, x) plt.plot(y) plt.show() # + # tf Graph Input X = tf.placeholder("float") Y = tf.placeholder("float") # Set model weights W = tf.Variable(np.random.randn(), name="weight") b = tf.Variable(np.random.randn(), name="bias") # Construct a linear model pred = tf.add(tf.multiply(W, X), b) # Mean squared error cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples) dc_dw, dc_db = tf.gradients(cost, [W, b]) params = tf.stack([W,b], axis=0) x_flat = tf.constant(np.random.normal(0, 1, 2)) init = tf.global_variables_initializer() with tf.Session() as session: session.run(init) output = session.run([dc_dw, dc_db, W, b], feed_dict={X:x, Y:y}) print(output) # - np.dot(x,(np.multiply(-0.10788918,x)+1.0033976 )-y)/n_samples #=dc/db # + import tensorflow as tf x = tf.Variable(3, name='x', dtype=tf.float32) log_x = tf.log(x) log_x_squared_times_x = tf.multiply(tf.square(log_x), x) optimizer = tf.train.GradientDescentOptimizer(0.1) train = 
optimizer.minimize(log_x_squared_times_x)
grad = tf.gradients(log_x_squared_times_x, x)

init = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(init)
    print("starting at", "x:", session.run(x), "log(x)^2:", session.run(log_x_squared_times_x))
    for step in range(20):
        session.run(train)
        print("step", step, "x:", session.run(x), "log(x)^2:", session.run(log_x_squared_times_x), "Grad", session.run(grad))
# -

x = np.arange(0.00001, 10, 0.3)
y = np.log(x)
y = np.power(y, 2)
plt.plot(x, y)
plt.show()

# +
import tensorflow as tf

x = tf.Variable(2, name='x', dtype=tf.float32)
y = tf.Variable(2, name='y', dtype=tf.float32)
temperature = 50 - 3*tf.square(y) - tf.square(x)

optimizer = tf.train.GradientDescentOptimizer(0.05)
train = optimizer.minimize(temperature)
grad = tf.gradients(temperature, [x, y])

init = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(init)
    x1, y1, t1 = session.run([x, y, temperature])
    print("Starting at coordinate x={}, y={} and the temperature there is {}".format(x1, y1, t1))
    grad_norms = []
    temperatures = []
    gradients = []
    coordinates = []
    for step in range(10):
        session.run(train)
        g = session.run(grad)
        print("step ({}) x={}, y={}, T={}, Gradient={}".format(step, x1, y1, t1, g))
        x1, y1, t1 = session.run([x, y, temperature])
        grad_norms.append(np.linalg.norm(g))
temperatures.append(t1) gradients.append(g) coordinates.append([x1, y1]) # - temperatures[1], coordinates[1],gradients[1] # also agrees with the math for partial derivatives temperatures[:5] # + fig = plt.figure(figsize=(20, 10)) ax = fig.add_subplot(111) xlist = np.linspace(-15.0, 15.0, 100) # Create 1-D arrays for x,y dimensions ylist = np.linspace(-15.0, 15.0, 100) X,Y = np.meshgrid(xlist, ylist) # Create 2-D grid xlist,ylist values Z = 50 - X**2 - 3*Y**2 # Compute function values on the grid num_contours = 7 #Contour levels must be increasing contour_level = temperatures[:num_contours] list.reverse(contour_level) plt.contour(X, Y, Z, contour_level, colors = ['y','orange','red','blue','g','violet','indigo','black'], linestyles = 'solid') #ax.annotate('Direction Of Gradident', xy=(.6, 0.3), xytext=(.6, 0.3)) for i in range(num_contours): ax.annotate('T={}'.format(np.round(temperatures[i])), xy=tuple(np.array([0,0.4]) + np.array(coordinates[i])), xytext=tuple(np.array([0,0.4]) + np.array(coordinates[i]))) for i in range(num_contours): norm_grad = -np.array(gradients[i]/(np.linalg.norm(gradients[i])+0.00000001)) ax.arrow(coordinates[i][0],coordinates[i][1], norm_grad[0],norm_grad[1], head_width=0.2, head_length=0.5, fc='k', ec='k') plt.show() # - contour_level = temperatures[:5] list.reverse(contour_level) temperatures plt.plot(grad_norms) plt.show() -np.array(gradients[i]/(np.linalg.norm(gradients[i])+0.00000001)) x = [1, 2, 3, 4, 5, 6, 7] m = 3 np.convolve(x, np.ones((m,))/m, mode='valid')
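The temperature descent above can be reproduced without a TensorFlow session, since the gradient of T(x, y) = 50 − x² − 3y² is available analytically: (∂T/∂x, ∂T/∂y) = (−2x, −6y). A pure-Python sketch (the `descend` helper is illustrative):

```python
def descend(x, y, lr=0.05, steps=10):
    # gradient-descent steps on T(x, y) = 50 - x**2 - 3*y**2
    path = [(x, y)]
    for _ in range(steps):
        gx, gy = -2 * x, -6 * y          # analytic gradient of T
        x, y = x - lr * gx, y - lr * gy  # step against the gradient
        path.append((x, y))
    return path

path = descend(2.0, 2.0)
# T is unbounded below, so the iterates move away from the origin,
# which is why the temperatures recorded above keep decreasing
print(path[-1])
```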
Chapter02/.ipynb_checkpoints/NNBasics-chapter-02-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# (CPEA)=

# # 1.4 Conditioning of a problem and stability of an algorithm

# ```{admonition} Notes for the docker container:
#
# Docker command to run this notebook locally:
#
# note: replace `<ruta a mi directorio>` with the path of the directory you want mapped to `/datos` inside the docker container.
#
# `docker run --rm -v <ruta a mi directorio>:/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:2.1.4`
#
# password for jupyterlab: `<PASSWORD>`
#
# Stop the docker container:
#
# `docker stop jupyterlab_optimizacion`
#
# Documentation for the docker image `palmoreck/jupyterlab_optimizacion:2.1.4` at this [link](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion).
#
# ```

# ---

# Note generated from this [link](https://www.dropbox.com/s/5bc6tn39o0qqg35/1.3.Condicion_estabilidad_y_normas.pdf?dl=0)

# ```{admonition} By the end of this note the reader:
# :class: tip
#
# * Will be able to justify why an algorithm is accurate or inaccurate when computing approximations to quantities of interest.
#
# * Will learn the concepts of the conditioning of a problem and the stability of an algorithm.
#
# * Will understand that the conditioning of a problem is inherent to the problem itself and does not depend on the algorithm used.
#
# * In particular, we will work with the condition number of a matrix as a quantity that helps classify well- and ill-conditioned problems when solving systems of linear equations and other problems.
#
# ```

# Two fundamental topics in numerical analysis are the **conditioning of a problem** and the **stability of an algorithm**. Conditioning concerns the behaviour of a problem under perturbations, and stability concerns the behaviour of an algorithm (used to solve a problem) under perturbations.

# The accuracy of a computation ultimately depends on a combination of these terms:
#
# <p style="text-align: center;">Accuracy = Conditioning + Stability</p>
#
# Inaccuracy in a problem therefore arises from ill-conditioned problems (regardless of whether the algorithms are stable or unstable) and from unstable algorithms (regardless of whether the problems are well or ill conditioned).

# ## Perturbations

# The conditioning of a problem and the stability of an algorithm both refer to the term **perturbation**, which naturally leads one to think of "small" or "large" perturbations. To quantify this we use the concept of a **norm**. See {ref}`Vector and matrix norms <NVM>` for the definition of a norm and its properties.

# ## Conditioning of a problem

# Think of a problem as a function $f: \mathbb{X} \rightarrow \mathbb{Y}$, where $\mathbb{X}$ is a vector space with a norm and $\mathbb{Y}$ is another vector space of solutions, also with a norm. Call the combination of $x$ and $f$ an instance of the problem; we are interested in the behaviour of $f$ at $x$. We use the name "problem" to refer to an instance of the problem.

# A well-conditioned problem (instance) has the property that all small perturbations of $x$ lead to small changes in $f(x)$. It is ill conditioned if small perturbations of $x$ lead to large changes in $f(x)$. What counts as "small" or "large" depends on the problem itself.

# Let $\hat{x} = x + \Delta x$ with $\Delta x$ a small perturbation of $x$.
#
# The **relative condition number of the problem $f$ at $x$** is:
#
# $$\text{Cond}_f^R = \frac{\text{ErrRel}(f(\hat{x}))}{\text{ErrRel}(\hat{x})} = \frac{\frac{||f(\hat{x})-f(x)||}{||f(x)||}}{\frac{||x-\hat{x}||}{||x||}}$$
#
# assuming $x, f(x) \neq 0$.

# ```{admonition} Observation
# :class: tip
#
# If $f$ is a differentiable function, we can evaluate $\text{Cond}_f^R$ with the derivative of $f$, since to first order (by Taylor's theorem): $f(\hat{x})-f(x) \approx \mathcal{J}_f(x)\Delta x$, with equality as $\Delta x \rightarrow 0$, where $\mathcal{J}_f$ is the Jacobian of $f$, defined as the matrix with entries $(\mathcal{J}_f(x))_{ij} = \frac{\partial f_i(x)}{\partial x_j}$. Therefore:
#
# $$\text{Cond}_{f}^R = \frac{||\mathcal{J}_f(x)||||x||}{||f(x)||}$$
#
# where $||\mathcal{J}_f(x)||$ is a matrix norm induced by the norms on $\mathbb{X}, \mathbb{Y}$. See {ref}`Vector and matrix norms <NVM>`.
#
# ```

# ```{admonition} Comment
#
# In practice a problem is considered **well conditioned** if $\text{Cond}_f^R$ is "small": less than $10$, **moderately conditioned** if it is of order between $10^1$ and $10^2$, and **ill conditioned** if it is "large": greater than $10^3$.
# ```

# ```{admonition} Exercise
# :class: tip
#
# Compute $\text{Cond}_f^R$ for the following problems. For $x \in \mathbb{R}$ use the absolute value, and for $x \in \mathbb{R}^n$ use $||x||_\infty$.
#
# 1. $x \in \mathbb{R} - \{0\}$. Problem: compute $\frac{x}{2}$.
#
# 2. $x \geq 0$. Problem: compute $\sqrt{x}$.
#
# 3. $x \approx \frac{\pi}{2}$. Problem: compute $\cos(x)$.
#
# 4. $x \in \mathbb{R}^2$. Problem: compute $x_1-x_2$.
# ```

# ```{admonition} Comment
#
# The difficulties that can arise when solving a problem are **not** always related to a poorly designed formula or algorithm, but to the problem itself.
En el ejercicio anterior, observamos que áun utilizando **aritmética exacta**, la solución del problema puede ser altamente sensible a perturbaciones a los datos de entrada. Por esto el número de condición relativo se define de acuerdo a perturbaciones en los datos de entrada y mide la perturbación en los datos de salida que uno espera: # # $$\text{Cond}_f^R = \frac{||\text{Cambios relativos en la solución}||}{||\text{Cambios relativos en los datos de entrada}||}.$$ # # ``` # ## Estabilidad de un algoritmo # Pensemos a un algoritmo $\hat{f}$ como una función $\hat{f}:\mathbb{X}\rightarrow \mathbb{Y}$ para resolver el problema $f$ con datos $x \in \mathbb{X}$, donde $\mathbb{X}$ es un espacio vectorial con norma definida y $\mathbb{Y}$ es otro espacio vectorial con una norma definida. # # # La implementación del algoritmo $\hat{f}$ en una máquina conduce a considerar: # # * Errores por redondeo: # # $$fl(u) = u(1+\epsilon), |\epsilon| \leq \epsilon_{maq}, \forall u \in \mathbb{R}.$$ # * Operaciones en un SPFN, $\mathcal{Fl}$. Por ejemplo para la suma: # # $$u \oplus v = fl(u+v) = (u + v)(1+\epsilon), |\epsilon|\leq \epsilon_{maq} \forall u,v \in \mathcal{Fl}.$$ # # Esto es, $\hat{f}$ depende de $x \in \mathbb{X}$ y $\epsilon_{maq}$: representación de los números reales en una máquina y operaciones entre ellos o aritmética de máquina. Ver nota: {ref}`Sistema de punto flotante <SPF>`. # Al ejecutar $\hat{f}$ obtenemos una colección de números en el SPFN que pertenecen a $\mathbb{Y}$: $\hat{f}(x)$. # # Debido a las diferencias entre un problema con cantidades continuas y una máquina que trabaja con cantidades discretas, los algoritmos numéricos **no** son exactos para **cualquier** elección de datos $x \in \mathbb{X}$. Esto es, los algoritmos **no** cumplen que la cantidad: # # $$\frac{||\hat{f}(x)-f(x)||}{||f(x)||}$$ # # dependa únicamente de errores por redondeo al evaluar $f$ $\forall x \in \mathbb{X}$. 
En notación matemática: # $$\frac{||\hat{f}(x)-f(x)||}{||f(x)||} \leq K \epsilon_{maq} \forall x \in \mathbb{X}$$ # # con $K > 0$ no se cumple en general. # La razón de lo anterior tiene que ver con cuestiones en la implementación de $\hat{f}$ como el número de iteraciones, la representación de $x$ en un SPFN o el mal condicionamiento de $f$. Así, a los algoritmos en el análisis numérico, se les pide una condición menos estricta que la anterior y más bien satisfagan lo que se conoce como **estabilidad**. Se dice que un algoritmo $\hat{f}$ para un problema $f$ es **estable** si: # # $$\forall x \in \mathbb{X}, \frac{||\hat{f}(x)-f(\hat{x})||}{||f(\hat{x})||} \leq K_1\epsilon_{maq}, K_1>0$$ # # para $\hat{x} \in \mathbb{X}$ tal que $\frac{||x-\hat{x}||}{||x||} \leq K_2\epsilon_{maq}, K_2>0$. # Esto es, $\hat{f}$ resuelve un problema cercano para datos cercanos (cercano en el sentido del $\epsilon_{maq}$) independientemente de la elección de $x$. # ```{admonition} Observación # :class: tip # # Obsérvese que esta condición es más flexible y en general $K_1, K_2$ dependen de las dimensiones de $\mathbb{X}, \mathbb{Y}$. # ``` # ```{admonition} Comentarios # # * Esta definición resulta apropiada para la mayoría de los problemas en el ánalisis numérico. Para otros problemas, por ejemplo en ecuaciones diferenciales, donde se tienen definiciones de sistemas dinámicos estables e inestables (cuyas definiciones no se deben confundir con las descritas para algoritmos), esta condición es muy estricta. # # * Tenemos algoritmos que satisfacen una condición más estricta y simple que la estabilidad: **estabilidad hacia atrás**. # ``` # ### Estabilidad hacia atrás # Decimos que un algoritmo $\hat{f}$ para el problema $f$ es **estable hacia atrás** si: # # $$\forall x \in \mathbb{X}, \hat{f}(x) = f(\hat{x})$$ # # con $\hat{x} \in \mathbb{X}$ tal que $\frac{||x-\hat{x}||}{||x||} \leq K\epsilon_{maq}, K>0$. 
# # Esto es, el algoritmo $\hat{f}$ da la solución **exacta** para datos cercanos (cercano en el sentido de $\epsilon_{maq}$), independientemente de la elección de $x$. # ### Ejemplo # # Para entender la estabilidad hacia atrás de un algoritmo, considérese el ejemplo siguiente. # # **Problema:** evaluar $f(x) = e^x$ en $x=1$. # # **Resultado:** $f(1) = e^1 = 2.718281...$. # # import math x=1 print(math.exp(x)) # **Algoritmo:** truncar la serie $1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \dots$ a cuatro términos: $\hat{f}(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6}$. # # **Resultado del algoritmo:** $\hat{f}(1) = 2.\bar{6}$ # algoritmo = lambda x: 1 + x + x**2/2.0 + x**3/6.0 print(algoritmo(1)) # **Pregunta:** ¿Qué valor $\hat{x} \in \mathbb{R}$ hace que el valor calculado por el algoritmo $\hat{f}(1)$ sea igual a $f(\hat{x})$? # # -> **Solución:** # # Resolver la ecuación: $e^{\hat{x}} = 2.\bar{6}$, esto es: $\hat{x} = log(2.\bar{6}) = 0.980829...$. Entonces $f(\hat{x}) = 2.\bar{6} = \hat{f}(x)$. # x_hat = math.log(algoritmo(1)) print(x_hat) # Entonces, el algoritmo es estable hacia atrás sólo si la diferencia entre $x$ y $\hat{x}$ en términos relativos es menor a $K \epsilon_{maq}$ con $K >0$. Además, podemos calcular **errores hacia delante** y **errores hacia atrás**: # # error hacia delante: $\hat{f}(x) - f(x) = -0.05161...$, error hacia atrás: $\hat{x}-x = -0.01917...$. # forward_error = algoritmo(x) - math.exp(x) print(forward_error) backward_error = x_hat-x print(backward_error) # Dependiendo del problema, estos errores son pequeños o grandes, por ejemplo si consideramos tener una cifra correcta como suficiente para determinar que es una buena aproximación entonces podemos concluir: $\hat{f}$ obtiene una respuesta correcta y cercana al valor de $f$ (error hacia delante) y la respuesta que obtuvimos con $\hat{f}$ es correcta para datos ligeramente perturbados (error hacia atrás). 
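The forward and backward errors of the truncated-series algorithm above can be reproduced for any truncation length. A sketch (the helper name `taylor_exp` is ours, not from the note) showing that both errors shrink as more terms are kept; with `n = 4` it matches the values computed above:

```python
import math

def taylor_exp(x, n_terms):
    """Truncated Taylor series of e**x: sum of x**k / k! for k < n_terms."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
for n in (4, 8, 12):
    f_hat = taylor_exp(x, n)       # the algorithm's answer
    x_hat = math.log(f_hat)        # datum solved *exactly* by the algorithm
    print(n, f_hat - math.exp(x),  # forward error
          x_hat - x)               # backward error
```

For `n = 4` the forward error is about -0.0516 and the backward error about -0.0192, as in the note; both decay rapidly with `n`.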
# # # ```{admonition} Observaciones # :class: tip # # * Obsérvese que el error hacia delante requiere resolver el problema $f$ (para calcular $f(x)$) y también información sobre $f$. # # * En el ejemplo anterior se calcularon $\hat{f}(x)$ y también qué tan larga debe ser la modificación en los datos $x$, esto es: $\hat{x}$, para que $\hat{f}(x) = f(\hat{x})$ (error hacia atrás). # # # * Dibujo que ayuda a ver errores hacia atrás y hacia delante: # # <img src="https://dl.dropboxusercontent.com/s/b30awajxvl3u8qe/error_hacia_delante_hacia_atras.png?dl=0" heigth="500" width="500"> # # ``` # En resumen, algunas características de un método **estable** numéricamente respecto al redondeo son: # # * Variaciones "pequeñas" en los datos de entrada del método generan variaciones "pequeñas" en la solución del problema. # # * No amplifican errores de redondeo en los cálculos involucrados. # # * Resuelven problemas "cercanos" para datos ligeramente modificados. # ```{admonition} Observación # :class: tip # # La estabilidad numérica que se revisó en esta sección hace referencia a los errores por redondeo de la aritmética y representación de los números en el {ref}`sistema de punto flotante <SPF>`. Tal uso no debe confundirse con la estabilidad numérica en el tema de ecuaciones diferenciales, ver [Stability_in_numerical_differential_equations](https://en.wikipedia.org/wiki/Numerical_stability#Stability_in_numerical_differential_equations). # # ``` # (NCM)= # ## Número de condición de una matriz # En el curso trabajaremos con algoritmos matriciales que son numéricamente estables (o estables hacia atrás) ante errores por redondeo, sin embargo la exactitud que obtengamos con tales algoritmos dependerán de qué tan bien (o mal) condicionado esté el problema. En el caso de matrices la condición de un problema puede ser cuantificada con el **número de condición** de la matriz del problema. 
Aunque haciendo uso de definiciones como la pseudoinversa de una matriz es posible definir el número de condición para una matriz en general rectangular $A \in \mathbb{R}^{m\times n}$, en esta primera definición consideramos matrices cuadradas no singulares $A \in \mathbb{R}^{n\times n}$: # # $$\text{cond}(A) = ||A|| ||A^{-1}||.$$ # ```{admonition} Observación # :class: tip # # Obsérvese que la norma anterior es una **norma matricial** y cond$(\cdot)$ puede calcularse para diferentes normas matriciales. Ver {ref}`Normas vectoriales y matriciales <NVM>` para definición de norma y propiedades. # ``` # ## ¿Por qué se utiliza la expresión $||A|| ||A^{-1}||$ para definir el número de condición de una matriz? # Esta pregunta tiene que ver con el hecho que tal expresión aparece frecuentemente en problemas típicos de matrices. Para lo anterior considérese los siguientes problemas $f$: # 1.Sean $A \in \mathbb{R}^{n\times n}$ no singular, $x \in \mathbb{R}^n$ y $f$ el problema de realizar la multiplicación $Ax$ para $x$ fijo, esto es: $f: \mathbb{R}^n \rightarrow \mathbb{R}^n$ dada por $f(x) = Ax$. Considérese una perturbación en $x: \hat{x} = x + \Delta x$, entonces: # # $$\text{Cond}_f^R = \frac{\text{ErrRel}(f(\hat{x}))}{\text{ErrRel}(\hat{x})} = \frac{\frac{||f(\hat{x})-f(x)||}{||f(x)||}}{\frac{||x-\hat{x}||}{||x||}} \approx \frac{||\mathcal{J}_f(x)||||x||}{||f(x)||}.$$ # Para este problema tenemos: # # $$\frac{||\mathcal{J}_f(x)||||x||}{||f(x)||} = \frac{||A|| ||x||}{||Ax||}.$$ # Si las normas matriciales utilizadas en el número de condición son consistentes (ver {ref}`Normas vectoriales y matriciales <NVM>` para definición de norma y propiedades) entonces: # # $$||x|| = ||A^{-1}Ax|| \leq ||A^{-1}||||Ax|| \therefore \frac{||x||}{||Ax||} \leq ||A^{-1}||$$ # y se tiene: # # $$\text{Cond}_f^R \leq ||A|| ||A^{-1}||.$$ # 2.Sean $f: \mathbb{R}^n \rightarrow \mathbb{R}, A \in \mathbb{R}^{n\times n}$ no singular. 
Considérese el problema de calcular $f(b) = A^{-1}b$ para $b \in \mathbb{R}^n$ fijo y la perturbación $\hat{b} = b + \Delta b$ entonces bajo las suposiciones del ejemplo anterior: # # $$\text{Cond}_f^R \approx \frac{||A^{-1}|| ||b||}{||A^{-1}b||}.$$ # Si las normas matriciales utilizadas en el número de condición son consistentes (ver {ref}`Normas vectoriales y matriciales <NVM>` para definición de norma y propiedades) entonces: # # $$||b|| = ||AA^{-1}b|| \leq ||A|| ||A^{-1}b|| \therefore \text{Cond}_f^R \leq ||A^{-1}|| ||A||.$$ # 3.Sean $f: \mathbb{R}^{n\times n} \rightarrow \mathbb{R}^n, A \in \mathbb{R}^{n\times n}$ no singular $b \in \mathbb{R}^n$ fijo. Considérese el problema de calcular la solución $x$ del sistema $Az=b$, esto es, calcular: $x = f(A) = A^{-1}b.$ Además, considérese la perturbación $\hat{A} = A + \Delta A$ en el sistema $Az = b$. Se tiene: # # $$\hat{x} = \hat{A}^{-1}b,$$ # # donde: $\hat{x} = x + \Delta x$ (si se perturba $A$ entonces se perturba también $x$). # # De la ecuación anterior como $\hat{x} = \hat{A}^{-1}b$ se tiene: # # $$\hat{A}\hat{x} = b$$ # $$(A+\Delta A)(x+\Delta x) = b$$ # $$Ax + A \Delta x + \Delta Ax + \Delta A \Delta x = b$$ # $$b + A \Delta x + \Delta A x = b$$ # Donde en esta última ecuación se supuso que $\Delta A \Delta x \approx 0$ y de aquí: # # $$A \Delta x + \Delta A x \approx 0 \therefore \Delta x \approx - A^{-1} \Delta A x.$$ # Entonces se tiene que la condición del problema $f$: calcular la solución de sistema de ecuaciones lineales $Az=b$ con $A$ no singular ante perturbaciones en $A$ es: # # $$\text{Cond}_f^R = \frac{\frac{||x-\hat{x}||}{||x||}}{\frac{||A-\hat{A}||}{||A||}}=\frac{\frac{||\Delta x||}{||x||}}{\frac{||\Delta A||}{||A||}} \leq \frac{\frac{||A^{-1}||||\Delta Ax||}{||x||}}{\frac{||\Delta A||}{||A||}} \leq ||A^{-1}||||A||.$$ # ## ¿Qué está midiendo el número de condición de una matriz respecto a un sistema de ecuaciones lineales? 
# El número de condición de una matriz mide la **sensibilidad** de la solución de un sistema de ecuaciones lineales ante perturbaciones en los datos de entrada (en la matriz del sistema $A$ o en el lado derecho $b$). Si pequeños cambios en los datos de entrada generan grandes cambios en la solución tenemos un **sistema mal condicionado**. Si pequeños cambios en los datos de entrada generan pequeños cambios en la solución tenemos un sistema **bien condicionado**. Lo anterior puede apreciarse con los siguientes ejemplos y gráficas: import numpy as np import matplotlib.pyplot as plt import scipy import pprint np.set_printoptions(precision=3, suppress=True) # 1.Resolver los siguientes sistemas: # # $$a) \begin{array}{ccc} x_1 +2x_2 &= & 10 \\ 1.1x_1 + 2x_2 &= & 10.4 \end{array} $$ # $$b)\begin{array}{ccc} 1.05x_1 +2x_2 &= & 10 \\ 1.1x_1 + 2x_2 &= & 10.4\end{array} $$ print('inciso a') A = np.array([[1, 2], [1.1, 2]]) b = np.array([10,10.4]) print('matriz A:') pprint.pprint(A) print('lado derecho b:') pprint.pprint(b) x=np.linalg.solve(A,b) print('solución x:') pprint.pprint(x) x=np.arange(0,10,.5) recta1 = lambda x: 1/2.0*(10-1*x) recta2 = lambda x: 1/2.0*(10.4-1.1*x) plt.plot(x,recta1(x),'o-',x,recta2(x),'^-') plt.title('Sistema mal condicionado') plt.legend(('x1+2x2=10','1.1x1+2x2=10.4')) plt.grid(True) plt.show() # ```{admonition} Observación # :class: tip # # Obsérvese que las dos rectas anteriores tienen una inclinación (pendiente) similar por lo que no se ve claramente el punto en el que intersectan. 
# ``` print('inciso b') A = np.array([[1.05, 2], [1.1, 2]]) b = np.array([10,10.4]) print('matriz A ligeramente modificada:') pprint.pprint(A) print('lado derecho b:') pprint.pprint(b) x=np.linalg.solve(A,b) print('solución x:') pprint.pprint(x) x=np.arange(0,10,.5) recta1 = lambda x: 1/2.0*(10-1.05*x) recta2 = lambda x: 1/2.0*(10.4-1.1*x) plt.plot(x,recta1(x),'o-',x,recta2(x),'^-') plt.title('Sistema mal condicionado') plt.legend(('1.05x1+2x2=10','1.1x1+2x2=10.4')) plt.grid(True) plt.show() # ```{admonition} Observación # :class: tip # # Al modificar un poco las entradas de la matriz $A$ la solución del sistema cambia drásticamente. # ``` # ```{admonition} Comentario # # Otra forma de describir a un sistema mal condicionado es que un amplio rango de valores en un SPFN satisfacen tal sistema de forma aproximada. # # ``` # 2.Resolver los siguientes sistemas: # # $$a) \begin{array}{ccc} .03x_1 + 58.9x_2 &= & 59.2 \\ 5.31x_1 -6.1x_2 &= & 47 \end{array} $$ # $$b) \begin{array}{ccc} .03x_1 + 58.9x_2 &= & 59.2 \\ 5.31x_1 -6.05x_2 &= & 47 \end{array} $$ print('inciso a') A = np.array([[.03, 58.9], [5.31, -6.1]]) b = np.array([59.2,47]) print('matriz A:') pprint.pprint(A) print('lado derecho b:') pprint.pprint(b) x=np.linalg.solve(A,b) print('solución x:') pprint.pprint(x) x=np.arange(4,14,.5) recta1 = lambda x: 1/58.9*(59.2-.03*x) recta2 = lambda x: 1/6.1*(5.31*x-47) plt.plot(x,recta1(x),'o-',x,recta2(x),'^-') plt.title('Sistema bien condicionado') plt.legend(('.03x1+58.9x2=59.2','5.31x1-6.1x2=47')) plt.grid(True) plt.show() # ```{admonition} Observación # :class: tip # # Obsérvese que la solución del sistema de ecuaciones (intersección entre las dos rectas) está claramente definido. 
# ``` print('inciso b') A = np.array([[.03, 58.9], [5.31, -6.05]]) b = np.array([59.2,47]) print('matriz A ligeramente modificada:') pprint.pprint(A) print('lado derecho b:') pprint.pprint(b) x=np.linalg.solve(A,b) print('solución x:') pprint.pprint(x) x=np.arange(4,14,.5) recta1 = lambda x: 1/58.9*(59.2-.03*x) recta2 = lambda x: 1/6.05*(5.31*x-47) plt.plot(x,recta1(x),'o-',x,recta2(x),'^-') plt.title('Sistema bien condicionado') plt.legend(('.03x1+58.9x2=59.2','5.31x1-6.05x2=47')) plt.grid(True) plt.show() # ```{admonition} Observación # :class: tip # # Al modificar un poco las entradas de la matriz $A$ la solución **no** cambia mucho. # ``` # ```{admonition} Comentarios # # 1.¿Por qué nos interesa considerar perturbaciones en los datos de entrada? -> recuérdese que los números reales se representan en la máquina mediante el sistema de punto flotante (SPF), entonces al ingresar datos a la máquina tenemos perturbaciones y por tanto errores de redondeo. Ver nota: {ref}`Sistema de punto flotante <SPF>`. # # 2.Las matrices anteriores tienen número de condición distinto: # # ``` print('matriz del ejemplo 1') A = np.array([[1, 2], [1.1, 2]]) pprint.pprint(A) print(np.linalg.cond(A)) print('matriz del ejemplo 2') A = np.array([[.03, 58.9], [5.31, -6.1]]) pprint.pprint(A) print(np.linalg.cond(A)) # ```{admonition} Comentario # # Las matrices del ejemplo $1$ y $2$ son **medianamente** condicionadas. Una matriz se dice **bien condicionada** si cond$(A)$ es cercano a $1$. # ``` # ## Algunas propiedades del número de condición de una matriz # # * Si $A \in \mathbb{R}^{n\times n}$ es no singular entonces: # # $$\frac{1}{\text{cond}(A)} = \min \left\{ \frac{||A-B||}{||A||} \text{tal que} B \text{ es singular}, ||\cdot|| \text{ es una norma inducida} \right\}.$$ # # esto es, una matriz mal condicionada (número de condición grande) se le puede aproximar muy bien por una matriz singular. Sin embargo, el mal condicionamiento no necesariamente se relaciona con singularidad. 
Una matriz singular es mal condicionada pero una matriz mal condicionada no necesariamente es singular. Considérese por ejemplo la matriz de **Hilbert**: from scipy.linalg import hilbert print(hilbert(4)) print(np.linalg.cond(hilbert(4))) # la cual es una matriz mal condicionada pero es no singular: print(np.linalg.inv(hilbert(4))@hilbert(4)) # y otro ejemplo de una matriz singular: print('matriz singular') A = np.array([[1, 2], [1, 2]]) pprint.pprint(A) # + tags=["raises-exception"] print(np.linalg.inv(A)) # - print(np.linalg.cond(A)) # * Para las normas matriciales inducidas se tiene: # # * cond$(A)\geq 1, \forall A \in \mathbb{R}^{n\times n}$. # # * cond$(\gamma A) = \text{cond}(A), \forall \gamma \in \mathbb{R}-\{0\}, \forall A \in \mathbb{R}^{n\times n}$. # # * cond$_2(A) = ||A||_2||A^{-1}||_2 = \frac{\sigma_{\max}}{\sigma_{\min}}, \sigma_{\min} \neq 0$. # * En el problema: resolver $Ax = b$ se cumple: # # $$\text{ErrRel}(\hat{x}) = \frac{||x^*-\hat{x}||}{||x^*||} \leq \text{cond}(A) \left ( \frac{||\Delta A||}{||A||} + \frac{||\Delta b||}{||b||} \right ), b \neq 0.$$ # # donde: $x^*$ es solución de $Ax=b$ y $\hat{x}$ es solución aproximada que se obtiene por algún método numérico (por ejemplo factorización LU). $\frac{||\Delta A||}{||A||}, \frac{||\Delta b||}{||b||}$ son los errores relativos en las entradas de $A$ y $b$ respectivamente. 
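The bound on $\text{ErrRel}(\hat{x})$ above can be checked numerically. A minimal sketch, assuming small illustrative perturbations $\Delta A$ and $\Delta b$ (not from the note) and using 2-norms throughout so the norms are consistent; the matrix is the one from example 1 earlier:

```python
import numpy as np

A = np.array([[1.0, 2.0], [1.1, 2.0]])   # medianly conditioned system from example 1
b = np.array([10.0, 10.4])
x_star = np.linalg.solve(A, b)           # exact solution [4., 3.]

# small illustrative perturbations of A and b
dA = 1e-6 * np.array([[1.0, -1.0], [0.5, 2.0]])
db = 1e-6 * np.array([1.0, -2.0])
x_hat = np.linalg.solve(A + dA, b + db)

err_rel = np.linalg.norm(x_star - x_hat) / np.linalg.norm(x_star)
bound = np.linalg.cond(A) * (np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
                             + np.linalg.norm(db) / np.linalg.norm(b))
print(err_rel, bound)        # the observed error stays below the bound
```

The bound is first-order in the perturbations, so for perturbations this small the inequality holds with room to spare.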
# ```{admonition} Comentario # # La desigualdad anterior se puede interpretar como sigue: si sólo tenemos perturbaciones en $A$ de modo que se tienen errores por redondeo del orden de $10^{-k}$ y por lo tanto $k$ dígitos de precisión en $A$ y cond$(A)$ es del orden de $10^c$ entonces $\text{ErrRel}(\hat{x})$ puede llegar a tener errores de redondeo de a lo más del orden de $10^{c-k}$ y por tanto $k-c$ dígitos de precisión: # # $$\text{ErrRel}(\hat{x}) \leq \text{cond}(A) \frac{||\Delta A||}{||A||}.$$ # ``` # * Supongamos que $x^*$ es solución del sistema $Ax=b$ y obtenemos $\hat{x}$ por algún método numérico (por ejemplo factorización LU) entonces ¿qué condiciones garantizan que $||x^*-\hat{x}||$ sea cercano a cero (del orden de $\epsilon_{maq}= 10^{-16}$), ¿de qué depende esto? # ```{admonition} Definición # Para responder las preguntas anteriores definimos el residual de $Ax=b$ como # # $$r=A\hat{x}-b$$ # # con $\hat{x}$ aproximación a $x^*$ obtenida por algún método numérico. Asimismo, el residual relativo a la norma de $b$ como: # # $$\frac{||r||}{||b||}.$$ # ``` # ```{admonition} Observación # :class: tip # # Típicamente $x^*$ (solución exacta) es desconocida y por ello no podríamos calcular $||x^*-\hat{x}||$, sin embargo sí podemos calcular el residual relativo a la norma de $b$: $\frac{||r||}{||b||}$. ¿Se cumple que $\frac{||r||}{||b||}$ pequeño implica $\text{ErrRel}(\hat{x})$ pequeño? El siguiente resultado nos ayuda a responder esta y las preguntas anteriores. 
# # ``` # Sea $A \in \mathbb{R}^{n\times n}$ no singular, $x^*$ solución de $Ax=b$, $\hat{x}$ aproximación a $x^*$, entonces para las normas matriciales inducidas se cumple: # # $$\frac{||r||}{||b||} \frac{1}{\text{cond}(A)} \leq \frac{||x^*-\hat{x}||}{||x^*||}\leq \text{cond}(A)\frac{||r||}{||b||}.$$ # Por la desigualdad anterior, si $\text{cond}(A) \approx 1$ entonces $\frac{||r||}{||b||}$ es una buena estimación de $\text{ErrRel}(\hat{x}) = \frac{||x^*-\hat{x}||}{||x^*||}$ por lo que si el residual relativo es pequeño entonces $\hat{x}$ es una buena estimación de $x^*$ (si la precisión y exactitud definida es aceptable para la aplicación o problema en cuestión). Si $\text{cond}(A)$ es grande no podemos decir **nada** acerca de $\text{ErrRel}(\hat{x})$ ni de $\hat{x}$. # ### Ejemplos # Para los siguientes ejemplos supóngase que $x^*$ y $\hat{x}$ son soluciones del inciso a) y b) respectivamente. Considérese como $b$ el del sistema del inciso a) y aplíquese el resultado anterior. # 1.Resolver: # $$a) \begin{array}{ccc} x_1 + x_2 &= & 2 \\ 10.05x_1 + 10x_2 &= & 21 \end{array} $$ # $$b) \begin{array}{ccc} x_1 + x_2 &= & 2 \\ 10.1x_1 + 10x_2 &= & 21 \end{array} $$ print('inciso a') A_1 = np.array([[1, 1], [10.05, 10]]) b_1 = np.array([2,21]) print('matriz A_1:') pprint.pprint(A_1) print('lado derecho b_1:') pprint.pprint(b_1) x_est=np.linalg.solve(A_1,b_1) print('solución x_est:') pprint.pprint(x_est) print('inciso b') A_2 = np.array([[1, 1], [10.1, 10]]) b_2 = np.array([2,21]) print('matriz A_2:') pprint.pprint(A_2) print('lado derecho b_2:') pprint.pprint(b_2) x_hat=np.linalg.solve(A_2,b_2) print('solución x_hat:') pprint.pprint(x_hat) print('residual relativo:') r_rel = np.linalg.norm(A_1@x_hat-b_1)/np.linalg.norm(b_1) print(r_rel) print('error relativo:') err_rel = np.linalg.norm(x_hat-x_est)/np.linalg.norm(x_est) pprint.pprint(err_rel) # **no tenemos una buena estimación del error relativo a partir del residual relativo pues:** print(np.linalg.cond(A_1)) # De 
acuerdo a la cota del resultado el error relativo se encuentra en el intervalo: print((r_rel*1/np.linalg.cond(A_1), r_rel*np.linalg.cond(A_1))) # 2.Resolver: # $$a) \begin{array}{ccc} 4.1x_1 + 2.8x_2 &= & 4.1 \\ 9.7x_1 + 6.6x_2 &= & 9.7 \end{array}$$ # $$b) \begin{array}{ccc} 4.1x_1 + 2.8x_2 &= & 4.11 \\ 9.7x_1 + 6.6x_2 &= & 9.7 \end{array}$$ print('inciso a') A_1 = np.array([[4.1, 2.8], [9.7, 6.6]]) b_1 = np.array([4.1,9.7]) print('matriz A_1:') pprint.pprint(A_1) print('lado derecho b_1:') pprint.pprint(b_1) x_est=np.linalg.solve(A_1,b_1) print('solución x_est:') pprint.pprint(x_est) print('inciso b') A_2 = np.array([[4.1, 2.8], [9.7, 6.6]]) b_2 = np.array([4.11,9.7]) print('matriz A_2:') pprint.pprint(A_2) print('lado derecho b_2:') pprint.pprint(b_2) x_hat=np.linalg.solve(A_2,b_2) print('solución x_hat:') pprint.pprint(x_hat) print('residual relativo:') r_rel = np.linalg.norm(A_1@x_hat-b_1)/np.linalg.norm(b_1) print(r_rel) print('error relativo:') err_rel = np.linalg.norm(x_hat-x_est)/np.linalg.norm(x_est) pprint.pprint(err_rel) # **no tenemos una buena estimación del error relativo a partir del residual relativo pues:** print(np.linalg.cond(A_1)) print((r_rel*1/np.linalg.cond(A_1), r_rel*np.linalg.cond(A_1))) # 3.Resolver: # $$a) \begin{array}{ccc} 3.9x_1 + 11.6x_2 &= & 5.5 \\ 12.8x_1 + 2.9x_2 &= & 9.7 \end{array}$$ # $$b) \begin{array}{ccc} 3.95x_1 + 11.6x_2 &= & 5.5 \\ 12.8x_1 + 2.9x_2 &= & 9.7 \end{array}$$ print('inciso a') A_1 = np.array([[3.9, 11.6], [12.8, 2.9]]) b_1 = np.array([5.5,9.7]) print('matriz A_1:') pprint.pprint(A_1) print('lado derecho b_1:') pprint.pprint(b_1) x_est=np.linalg.solve(A_1,b_1) print('solución x_est:') pprint.pprint(x_est) print('inciso b') A_2 = np.array([[3.95, 11.6], [12.8, 2.9]]) b_2 = np.array([5.5,9.7]) print('matriz A_2:') pprint.pprint(A_2) print('lado derecho b_2:') pprint.pprint(b_2) x_hat=np.linalg.solve(A_2,b_2) print('solución x_hat:') pprint.pprint(x_hat) print('residual relativo:') r_rel = 
np.linalg.norm(A_1@x_hat-b_1)/np.linalg.norm(b_1) print(r_rel) print('error relativo:') err_rel = np.linalg.norm(x_hat-x_est)/np.linalg.norm(x_est) pprint.pprint(err_rel) # **sí tenemos una buena estimación del error relativo a partir del residual relativo pues:** print(np.linalg.cond(A_1)) print((r_rel*1/np.linalg.cond(A_1), r_rel*np.linalg.cond(A_1))) # 4.Utilizando $\theta=\frac{\pi}{3}$ theta_1=math.pi/3 print((math.cos(theta_1),math.sin(theta_1))) theta_2 = math.pi/3 + .00005 print(theta_2) print((math.cos(theta_2),math.sin(theta_2))) # Resolver: # $$a) \begin{array}{ccc} \cos(\theta_1)x_1 - \sin(\theta_1)x_2 &= & -1.5 \\ \sin(\theta_1)x_1 + \cos(\theta_1)x_2 &= & 2.4 \end{array}$$ # $$b) \begin{array}{ccc} \cos(\theta_2)x_1 - \sin(\theta_2)x_2 &= & -1.5 \\ \sin(\theta_2)x_1 + \cos(\theta_2)x_2 &= & 2.4 \end{array}$$ # $$c) \begin{array}{ccc} \cos(\theta_2)x_1 - \sin(\theta_2)x_2 &= & -1.7 \\ \sin(\theta_2)x_1 + \cos(\theta_2)x_2 &= & 2.4 \end{array}$$ print('inciso a') A_1 = np.array([[math.cos(theta_1), -math.sin(theta_1)], [math.sin(theta_1), math.cos(theta_1)]]) b_1 = np.array([-1.5,2.4]) print('matriz A_1:') pprint.pprint(A_1) print('lado derecho b_1:') pprint.pprint(b_1) x_est=np.linalg.solve(A_1,b_1) print('solución x_est:') pprint.pprint(x_est) print('inciso b') A_2 = np.array([[math.cos(theta_2), -math.sin(theta_2)], [math.sin(theta_2), math.cos(theta_2)]]) b_2 = np.array([-1.5,2.4]) print('matriz A_2:') pprint.pprint(A_2) print('lado derecho b_2:') pprint.pprint(b_2) x_hat=np.linalg.solve(A_2,b_2) print('solución x_hat:') pprint.pprint(x_hat) print('residual relativo:') r_rel = np.linalg.norm(A_1@x_hat-b_1)/np.linalg.norm(b_1) print("{:0.10e}".format(r_rel)) print('error relativo:') err_rel = np.linalg.norm(x_hat-x_est)/np.linalg.norm(x_est) print("{:0.10e}".format(err_rel)) # **sí tenemos una buena estimación del error relativo a partir del residual relativo pues:** print(np.linalg.cond(A_1)) print(("{:0.10e}".format(r_rel*1/np.linalg.cond(A_1)), 
"{:0.10e}".format(r_rel*np.linalg.cond(A_1)))) print('inciso c') A_2 = np.array([[math.cos(theta_2), -math.sin(theta_2)], [math.sin(theta_2), math.cos(theta_2)]]) b_2 = np.array([-1.7,2.4]) print('matriz A_2:') pprint.pprint(A_2) print('lado derecho b_2:') pprint.pprint(b_2) x_hat=np.linalg.solve(A_2,b_2) print('solución x_hat:') pprint.pprint(x_hat) print('residual relativo:') r_rel = np.linalg.norm(A_1@x_hat-b_1)/np.linalg.norm(b_1) print("{:0.14e}".format(r_rel)) print('error relativo:') err_rel = np.linalg.norm(x_hat-x_est)/np.linalg.norm(x_est) print("{:0.14e}".format(err_rel)) # **sí tenemos una buena estimación del error relativo a partir del residual relativo pues:** print(np.linalg.cond(A_1)) print(("{:0.14e}".format(r_rel*1/np.linalg.cond(A_1)), "{:0.14e}".format(r_rel*np.linalg.cond(A_1)))) # Así, $\text{cond}(A)$ nos da una calidad (mediante $\frac{||r||}{||b||}$) de la solución $\hat{x}$ en el problema inicial (resolver $Ax=b$) obtenida por algún método numérico respecto a la solución $x^*$ de $Ax=b$. # # ```{admonition} Observación # :class: tip # # * El ejercicio anterior (en el que se define el ángulo $\theta$) utiliza matrices de rotación que son matrices ortogonales. Las matrices ortogonales tienen número de condición igual a $1$ bajo las normas inducidas. # # * Obsérvese que la condición del problema inicial (resolver $Ax=b$) **no depende del método númerico** que se elige para resolverlo. # ``` # ```{admonition} Ejercicio: # :class: tip # # Proponer sistemas de ecuaciones lineales con distinto número de condición, perturbar matriz del sistema o lado derecho (o ambos) y revisar números de condición y residuales relativos de acuerdo a la cota: # # $$\frac{||r||}{||b||} \frac{1}{\text{cond}(A)} \leq \frac{||x^*-\hat{x}||}{||x^*||}\leq \text{cond}(A)\frac{||r||}{||b||}.$$ # # Verificar que si el número de condición del sistema es pequeño entonces el residual relativo estima bien al error relativo. 
# # ``` # ## Número de condición de una matriz $A \in \mathbb{R}^{m\times n}$ # Para este caso se utiliza la **pseudoinversa** de $A$ definida a partir de la descomposición en valores singulares compacta (compact SVD, ver [Factorizaciones matriciales SVD, Cholesky, QR](https://www.dropbox.com/s/s4ch0ww1687pl76/3.2.2.Factorizaciones_matriciales_SVD_Cholesky_QR.pdf?dl=0)) y denotada como $A^{\dagger}$: # # $$A^{\dagger} = V \Sigma^{\dagger} U^T$$ # donde: $\Sigma ^{\dagger}$ es la matriz transpuesta de $\Sigma$ y tiene entradas $\sigma_i^{+}:$ # # $$\sigma_i^+ = \begin{cases} # \frac{1}{\sigma_i} &\text{ si } \sigma_i \neq 0,\\ # 0 &\text{ en otro caso} # \end{cases} # $$ # # $\forall i=1,\dots, r$ con $r=rank(A)$. # ```{admonition} Comentarios y propiedades # # * $A^{\dagger}$ se le conoce como pseudoinversa de $Moore-Penrose$. # # * Si $rank(A)=n$ entonces $A^{\dagger} = (A^TA)^{-1}A^T$, si $rank(A)=m$, $A^\dagger = A^T(AA^T)^{-1}$, si $A\in \mathbb{R}^{n\times n}$ no singular, entonces $A^\dagger=A^{-1}$. # # * Con $A^\dagger$ se define $\text{cond}(A)$ para $A \in \mathbb{R}^{m\times n}$: # # $$\text{cond}(A) = ||A||||A^\dagger||$$ # # de hecho, se tiene: # # $$\text{cond}_2(A) = \frac{\sigma_{max}}{\sigma_{min}}=\frac{\sigma_1}{\sigma_r}.$$ # # ``` # ```{admonition} Ejercicios # :class: tip # # 1. Resuelve los ejercicios y preguntas de la nota. # # ``` # **Preguntas de comprehensión** # # 1)¿Qué factores influyen en la falta de exactitud de un cálculo? # # 2)Si f es un problema mal condicionado, ¿a qué nos referimos? Da ejemplos de problemas bien y mal condicionados. # # 3)Si f es un problema que resolvemos con un algoritmo g, ¿qué significa: # # a. que g sea estable? # # b. que g sea estable hacia atrás? # # c. que g sea inestable? # # 4)¿Qué ventaja(s) se tiene(n) al calcular un error hacia atrás vs calcular un error hacia delante? # # # **Referencias** # 1. Nota {ref}`Sistema de punto flotante <SPF>`. # # 2. 
<NAME>, <NAME>, Numerical Linear Algebra, SIAM, 1997. # # 3. <NAME>, <NAME>, Matrix Computations, Johns Hopkins University Press, 2013
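As a quick numerical check of the bound discussed above, the following sketch perturbs the right-hand side of an ill-conditioned system (a Hilbert matrix is used here purely as an illustrative example; it is not from this note) and verifies that the relative error lies between the two sides of the inequality:

```python
import numpy as np

# Ill-conditioned example: 6x6 Hilbert matrix.
n = 6
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

# Perturb the right-hand side slightly and solve the perturbed system;
# its solution x_hat acts as an approximate solution of the original one.
b_pert = b + 1e-10
x_hat = np.linalg.solve(A, b_pert)

# Relative residual (w.r.t. the ORIGINAL system) and relative error.
r_rel = np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b)
err_rel = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
cond_A = np.linalg.cond(A)

# The bound: r_rel / cond(A) <= err_rel <= cond(A) * r_rel
print(r_rel / cond_A, err_rel, cond_A * r_rel)
```

With a large condition number the upper bound is loose, which is exactly why a small relative residual alone does not guarantee a small relative error.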
libro_optimizacion/temas/I.computo_cientifico/1.4/Condicion_de_un_problema_y_estabilidad_de_un_algoritmo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/AceBlu/LetsUpgrade-Python/blob/master/Day5Assignment2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="Gf_B0u5x8iwz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="aee52519-997b-4723-fded-aef486f73e48" def primeNumber(num): if num > 1: for i in range(2,num): if (num % i) == 0: break else: return num x=int(input("Enter Number till you want to display ")) lst=list(range(x+1)) lst_prime=filter(primeNumber,lst) print(list(lst_prime))
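The same list of primes can be produced more efficiently with a sieve of Eratosthenes; a sketch (the `sieve` helper below is an alternative, not part of the assignment):

```python
def sieve(limit):
    # Boolean table: is_prime[i] is True when i is prime.
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # Cross out every multiple of i, starting at i*i.
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p]

print(sieve(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Trial division as used above is O(n²) in the worst case; the sieve is O(n log log n), which matters once the limit grows past a few thousand.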
Day5Assignment2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Singular Spectrum Analysis # # Note the actual analysis is done via the Rssa package in R. This notebook reads in the cross validation csv files and calculates scores. # # > See the R script `06_ssa-tscv.R` for SSA code that generates the CV folds # + import numpy as np import pandas as pd import glob import os #error measures from forecast_tools.metrics import (mean_absolute_scaled_error, root_mean_squared_error, symmetric_mean_absolute_percentage_error) # - # # Data Input # # The constants `TOP_LEVEL`, `STAGE`, `REGION`, `TRUST` and `METHOD` are used to control data selection and the directory for outputting results. # # > Output file is `f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv'`, where metric will be smape, rmse, mase, coverage_80 and coverage_95. Note: `REGION` is also used to select the correct data from the input dataframe. # + TOP_LEVEL = '../../../results/model_selection' STAGE = 'stage1' REGION = 'Trust' METHOD = 'ssa' FILE_NAME = 'Daily_Responses_5_Years_2019_full.csv' #split training and test data. TEST_SPLIT_DATE = '2019-01-01' #second subdivide: train and val VAL_SPLIT_DATE = '2017-07-01' #discard data after 2020 due to coronavirus #this is the subject of a separate study. DISCARD_DATE = '2020-01-01' # - #read in path path = f'../../../data/{FILE_NAME}' def pre_process_daily_data(path, index_col, by_col, values, dayfirst=False): ''' Daily data is stored in long format. Read in and pivot to wide format so that there is a single column for each region's time series.
''' df = pd.read_csv(path, index_col=index_col, parse_dates=True, dayfirst=dayfirst) df.columns = map(str.lower, df.columns) df.index.rename(str(df.index.name).lower(), inplace=True) clean_table = pd.pivot_table(df, values=values.lower(), index=[index_col.lower()], columns=[by_col.lower()], aggfunc=np.sum) clean_table.index.freq = 'D' return clean_table clean = pre_process_daily_data(path, 'Actual_dt', 'ORA', 'Actual_Value', dayfirst=False) clean.head() # ## Train Test Split def ts_train_test_split(data, split_date): ''' Split time series into training and test data Parameters: ------- data - pd.DataFrame - time series data. Index expected as a DatetimeIndex split_date - the date on which to split the time series Returns: -------- tuple (len=2) 0. pandas.DataFrame - training dataset 1. pandas.DataFrame - test dataset ''' train = data.loc[data.index < split_date] test = data.loc[data.index >= split_date] return train, test # + train, test = ts_train_test_split(clean, split_date=TEST_SPLIT_DATE) #exclude data after 2020 due to coronavirus. test, discard = ts_train_test_split(test, split_date=DISCARD_DATE) #train split into train and validation train, val = ts_train_test_split(train, split_date=VAL_SPLIT_DATE) # - # ## SSA read in data generated by Rssa R package # # 80% Prediction Interval #read in file names files = glob.glob(f'{os.getcwd()}/ssa/80_PI/*.csv') def read_ssa_folds(files): ''' Loop through files that represent TSCV folds and read in mean, lci, uci and actual ''' cv_data = [] for file in files: df = pd.read_csv(file, usecols=[1,2,3,4]) df.columns = ['mean', 'lower', 'upper', 'actual'] cv_data.append(df) return cv_data cv_data = read_ssa_folds(files) cv_data[0].head(7) def preprocess_r_output(cv_data): '''transform the cv data so that it works with existing scoring code. Returns: -------- tuple predictions, intervals, test data. Each of the above is a list of lists.
''' horizons = [7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 365] cv_preds = [] cv_intervals = [] cv_test = [] for cv in cv_data: h_cv = [] cv_h_pis = [] cv_h_test = [] for h in horizons: h_cv.append(cv['mean'].iloc[:h].to_numpy()) lower = cv['lower'].iloc[:h].to_numpy() upper = cv['upper'].iloc[:h].to_numpy() cv_h_pis.append(np.vstack([lower, upper]).T) cv_h_test.append(cv['actual'].iloc[:h].to_numpy()) cv_preds.append(h_cv) cv_intervals.append(cv_h_pis) cv_test.append(cv_h_test) return cv_preds, cv_intervals, cv_test #run preprocessing cv_preds, cv_intervals, cv_test = preprocess_r_output(cv_data) cv_preds[0][2] cv_intervals[0][0] cv_test[0][0] # ## Custom functions for calculating CV scores for point predictions and coverage. # # These functions have been written to work with the output of stored in `.\ssa` # + def split_cv_error(cv_preds, cv_test, error_func): n_splits = len(cv_preds) cv_errors = [] for split in range(n_splits): pred_error = error_func(cv_test[split], cv_preds[split]) cv_errors.append(pred_error) return np.array(cv_errors) def forecast_errors_cv(cv_preds, cv_test, error_func): cv_test = np.array(cv_test) cv_preds = np.array(cv_preds) n_horizons = len(cv_test) horizon_errors = [] for h in range(n_horizons): split_errors = split_cv_error(cv_preds[h], cv_test[h], error_func) horizon_errors.append(split_errors) return np.array(horizon_errors) def split_coverage(cv_test, cv_intervals): n_splits = len(cv_test) cv_errors = [] for split in range(n_splits): val = np.asarray(cv_test[split]) lower = cv_intervals[split].T[0] upper = cv_intervals[split].T[1] coverage = len(np.where((val > lower) & (val < upper))[0]) coverage = coverage / len(val) cv_errors.append(coverage) return np.array(cv_errors) def prediction_int_coverage_cv(cv_test, cv_intervals): cv_test = np.array(cv_test) cv_intervals = np.array(cv_intervals) n_horizons = len(cv_test) horizon_coverage = [] for h in range(n_horizons): split_coverages = split_coverage(cv_test[h], cv_intervals[h]) 
horizon_coverage.append(split_coverages) return np.array(horizon_coverage) # + def split_cv_error_scaled(cv_preds, cv_test, y_train): n_splits = len(cv_preds) cv_errors = [] for split in range(n_splits): pred_error = mean_absolute_scaled_error(cv_test[split], cv_preds[split], y_train, period=7) cv_errors.append(pred_error) return np.array(cv_errors) def forecast_errors_cv_scaled(cv_preds, cv_test, y_train): cv_test = np.array(cv_test) cv_preds = np.array(cv_preds) n_horizons = len(cv_test) horizon_errors = [] for h in range(n_horizons): split_errors = split_cv_error_scaled(cv_preds[h], cv_test[h], y_train) horizon_errors.append(split_errors) return np.array(horizon_errors) # - # # Symmetric MAPE #CV point predictions smape horizons = [7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 365] cv_errors = forecast_errors_cv(cv_preds, cv_test, symmetric_mean_absolute_percentage_error) df = pd.DataFrame(cv_errors) df.columns = horizons df.describe() #output sMAPE results to file metric = 'smape' print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') # # RMSE #CV point predictions rmse cv_errors = forecast_errors_cv(cv_preds, cv_test, root_mean_squared_error) df = pd.DataFrame(cv_errors) df.columns = horizons df.describe() #output rmse metric = 'rmse' print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') # # MASE #mase cv_errors = forecast_errors_cv_scaled(cv_preds, cv_test, train[REGION]) df = pd.DataFrame(cv_errors) df.columns = horizons df.describe() #output mase metric = 'mase' print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') # # 80% Prediction Intervals Coverage #PIs cv_coverage = prediction_int_coverage_cv(cv_test, cv_intervals) df = pd.DataFrame(cv_coverage) df.columns = horizons df.describe() #output 80% PI coverage metric = 'coverage_80'
print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') # # 95% Prediction Intervals # # Note these are stored in a separate file directory. #read in file names files = glob.glob(f'{os.getcwd()}/ssa/95_PI/*.csv') cv_data = read_ssa_folds(files) #run preprocessing cv_preds, cv_intervals, cv_test = preprocess_r_output(cv_data) #PIs cv_coverage = prediction_int_coverage_cv(cv_test, cv_intervals) df = pd.DataFrame(cv_coverage) df.columns = horizons df.describe() #output 95% PI coverage metric = 'coverage_95' print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') # # End
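The coverage calculation used throughout this notebook can be illustrated on a tiny synthetic fold; a sketch with made-up numbers, following the same `(val > lower) & (val < upper)` logic as `split_coverage`:

```python
import numpy as np

def empirical_coverage(actuals, intervals):
    """Fraction of actual values falling strictly inside their interval."""
    actuals = np.asarray(actuals)
    lower, upper = np.asarray(intervals).T
    inside = (actuals > lower) & (actuals < upper)
    return inside.mean()

# 4 of 5 synthetic actuals fall inside their interval -> coverage 0.8,
# which is what a well-calibrated 80% prediction interval should achieve.
actuals = [10, 12, 9, 15, 30]
intervals = [(8, 12), (10, 14), (7, 11), (13, 17), (20, 25)]
print(empirical_coverage(actuals, intervals))  # → 0.8
```

A coverage well below the nominal level (e.g. 0.6 for an 80% interval) signals over-confident intervals; well above signals intervals that are wider than necessary.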
analysis/model_selection/stage1/08_ssa_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Utils import os import tensorflow as tf import numpy as np from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.utils import to_categorical import _pickle as pk # ### Data Manager # + code_folding=[47, 55, 94, 108] class DataManager: def __init__(self): self.data = {} def add_data(self,name, data_path, with_label=True): ''' Read data from data_path Args: name : string, name of data with_label : bool, read data with label or without label Returns: None ''' print ('Read data from %s...'%data_path) X, Y = [], [] with open(data_path,'r') as f: for line in f: if with_label: lines = line.strip().split(' +++$+++ ') X.append(lines[1]) Y.append(int(lines[0])) else: X.append(line) if with_label: self.data[name] = [X,Y] else: self.data[name] = [X] def tokenize(self, vocab_size): ''' Build dictionary (tokenizer) Args: vocab_size : maximum number of words in your dictionary Returns: None ''' print ('Create new tokenizer') self.
tokenizer = Tokenizer(num_words=vocab_size) for key in self.data: print ('Tokenizing %s'%key) texts = self.data[key][0] self.tokenizer.fit_on_texts(texts) def save_tokenizer(self, path): ''' Save tokenizer to specified path ''' print ('Save tokenizer to %s'%path) pk.dump(self.tokenizer, open(path, 'wb')) def load_tokenizer(self,path): ''' Load tokenizer from specified path ''' print ('Load tokenizer from %s'%path) self.tokenizer = pk.load(open(path, 'rb')) def to_sequence(self, maxlen): ''' Convert words in data to index and pad to equal size Args: maxlen : max length after padding ''' self.maxlen = maxlen for key in self.data: print ('Converting %s to sequences'%key) tmp = self.tokenizer.texts_to_sequences(self.data[key][0]) self.data[key][0] = np.array(pad_sequences(tmp, maxlen=maxlen)) def to_bow(self): ''' Convert texts in data to BOW feature ''' for key in self.data: print ('Converting %s to tfidf'%key) self.data[key][0] = self.tokenizer.texts_to_matrix(self.data[key][0],mode='count') def to_category(self): ''' Convert label to category type, call this function if use categorical loss ''' for key in self.data: if len(self.data[key]) == 2: self.data[key][1] = np.array(to_categorical(self.data[key][1])) def get_semi_data(self,name,label,threshold,loss_function) : # if th==0.3, will pick label>0.7 and label<0.3 label = np.squeeze(label) index = (label>1-threshold) + (label<threshold) semi_X = self.data[name][0] semi_Y = np.greater(label, 0.5).astype(np.int32) if loss_function=='binary_crossentropy': return semi_X[index,:], semi_Y[index] elif loss_function=='categorical_crossentropy': return semi_X[index,:], to_categorical(semi_Y[index]) else : raise Exception('Unknown loss function : %s'%loss_function) def get_data(self,name): ''' Get data by name ''' return self.data[name] def split_data(self, name, ratio): ''' Split data to two part by a specified ratio Args: name : string, same as add_data ratio : float, ratio to split ''' data = self.data[name] X = data[0] Y 
= data[1] data_size = len(X) val_size = int(data_size * ratio) return (X[val_size:],Y[val_size:]),(X[:val_size],Y[:val_size]) # - dm = DataManager() dm.add_data("train", "training_label.txt") dm.get_data("train")[0] dm.tokenize(20000) dm.to_sequence(38) # explain tokenize to vector & padding tmp = dm.tokenizer.texts_to_sequences(dm.get_data("train")[0]) tmp dm.get_data("train")[0] # ## hw4.py # + import sys, argparse, os import keras import _pickle as pk import readline import numpy as np from keras import regularizers from keras.models import Model from keras.layers import Input, GRU, LSTM, Dense, Dropout, Bidirectional from keras.layers.embeddings import Embedding from keras.optimizers import Adam from keras.callbacks import EarlyStopping, ModelCheckpoint import keras.backend.tensorflow_backend as K import tensorflow as tf # - # ### Build Model # #### Hung-yi's Version def simpleRNN(max_length, vocab_size, embedding_dim, dropout_rate, cell, hidden_size, loss_function="binary_crossentropy"): inputs = Input(shape=(max_length,)) # Embedding layer embedding_inputs = Embedding(vocab_size, embedding_dim, trainable=True)(inputs) # RNN return_sequence = False dropout_rate = dropout_rate if cell == 'GRU': RNN_cell = GRU(hidden_size, return_sequences=return_sequence, dropout=dropout_rate) elif cell == 'LSTM': RNN_cell = LSTM(hidden_size, return_sequences=return_sequence, dropout=dropout_rate) RNN_output = RNN_cell(embedding_inputs) # DNN layer outputs = Dense(hidden_size//2, activation='relu', kernel_regularizer=regularizers.l2(0.1))(RNN_output) outputs = Dropout(dropout_rate)(outputs) outputs = Dense(1, activation='sigmoid')(outputs) model = Model(inputs=inputs,outputs=outputs) # optimizer adam = Adam() print ('Compile model...') # compile model model.compile( loss=loss_function, optimizer=adam, metrics=[ 'accuracy',]) return model # #### DIY Version
# + inputs = Input(shape=(38,), name="inputs") embedding = Embedding(20000, 128, mask_zero=True, trainable=True, name="embedding")(inputs) # compress each word vector to 128 dimensions rnn = LSTM(128, return_sequences=False, name="lstm")(embedding) # project the LSTM result to 128 dimensions (latent dimension) fc1 = Dense(64, activation="relu", name="fc1")(rnn) fc2 = Dense(32, activation="relu", name="fc2")(fc1) fc3 = Dense(16, activation="relu", name="fc3")(fc2) outputs = Dense(1, activation="sigmoid", name="outputs")(fc3) model = Model(inputs, outputs) model.summary() # - model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) # ### Train Model history = model.fit(dm.get_data("train")[0], dm.get_data("train")[1], batch_size=512, epochs=50, validation_split=0.3, shuffle=False) # ### Predict dm.add_data("train2", "training_label.txt") tmp = dm.tokenizer.texts_to_sequences(dm.data["train2"][0][:10]) dm.data["train2"][0] = np.array(pad_sequences(tmp, 38)) model.predict(dm.data["train2"][0]) new_txt = dm.tokenizer.texts_to_sequences(["fuck you asshole!"]) new_txt = np.array(pad_sequences(new_txt, 38)) model.predict(new_txt)
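The `to_sequence` step above relies on Keras `pad_sequences`; its default pre-padding behaviour can be sketched in plain Python (a simplified stand-in for illustration, not the Keras implementation):

```python
def pad_pre(seqs, maxlen, value=0):
    # Left-pad short sequences with `value` and keep the LAST maxlen
    # tokens of long ones, mirroring pad_sequences' defaults
    # (padding='pre', truncating='pre').
    out = []
    for s in seqs:
        s = s[-maxlen:]
        out.append([value] * (maxlen - len(s)) + list(s))
    return out

print(pad_pre([[5, 8], [1, 2, 3, 4, 5]], maxlen=4))
# → [[0, 0, 5, 8], [2, 3, 4, 5]]
```

Pre-padding keeps the most recent tokens adjacent to the end of the sequence, which is why `mask_zero=True` on the Embedding layer matters: the padded zeros are masked out of the LSTM computation.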
HW0705.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="QCAlUgTFub71" import numpy as np X = np.array([[1],[2],[3],[4],[5],[6],[7],[8],[9],[10]]) Y = np.array([[2],[3],[5],[9],[11],[14],[13],[18],[20],[19]]) # + id="ppMk0HmJAteM" outputId="37689435-911f-41f7-9594-28a06f21412a" colab={"base_uri": "https://localhost:8080/", "height": 381} print(X) print(Y) # + id="1bXodcWYBCYt" outputId="dcb77de5-29aa-4cd7-c561-4ac805364d86" colab={"base_uri": "https://localhost:8080/", "height": 326} import matplotlib.pyplot as plt import seaborn as sns sns.set() plt.scatter(X,Y,s=100,color='blue') plt.title('UNIT TEST 1',fontsize=20,color='brown') plt.xlabel('Hrs of Study',fontsize=15,color='green') plt.ylabel('Marks',fontsize=15,color='green') # + id="RmSJCIK0BSz4" outputId="01d97cf1-2c3a-4f0e-98d9-6cd2626bbc1b" colab={"base_uri": "https://localhost:8080/", "height": 35} from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(X,Y) # + id="t8eEilowIwrD" outputId="6af2ade9-bc86-473f-bd34-06430ad64d69" colab={"base_uri": "https://localhost:8080/", "height": 108} X_test = np.array([[3],[5],[25],[22],[13]]) Y_test = np.array([[7],[9],[49],[46],[22]]) pred = model.predict(X_test) print(pred) # + id="fY86zwuxKHpz" outputId="36b4c2af-6594-4dd0-f79e-3e0fac01de34" colab={"base_uri": "https://localhost:8080/", "height": 286} plt.scatter(X_test,Y_test,color='red') # plt.scatter(X_test,pred,color='blue') plt.plot(X_test,pred) # + id="A_TPth-DLSjg" outputId="017f381b-9eb6-460f-f0fc-87086f78111a" colab={"base_uri": "https://localhost:8080/", "height": 35} model.predict([[34]])
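The slope and intercept that `LinearRegression` fits can be cross-checked against the closed-form least-squares solution; a sketch reusing the same X and Y as above:

```python
import numpy as np

X = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
Y = np.array([2, 3, 5, 9, 11, 14, 13, 18, 20, 19], dtype=float)

# Ordinary least squares: slope = cov(X, Y) / var(X), intercept from means.
slope = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
intercept = Y.mean() - slope * X.mean()
print(slope, intercept)  # should match model.coef_ and model.intercept_
```

This is exactly what `model.fit(X, Y)` computes for a single feature, so the two results agree up to floating-point precision.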
Linear_Reg.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from keras.layers import Embedding, TimeDistributed, RepeatVector, LSTM, concatenate, Input, Reshape, Dense from keras.preprocessing.image import array_to_img, img_to_array, load_img import numpy as np from keras.applications.vgg16 import VGG16, preprocess_input from keras.models import Model from IPython.core.display import display, HTML #Length of longest sentence max_caption_len = 3 #Size of vocabulary vocab_size = 3 # Load one screenshot for each word, and turn them into digits images = [] for i in range(2): images.append(img_to_array(load_img('screenshot.jpg', target_size=(224, 224)))) images = np.array(images, dtype=float) # Preprocess input for the VGG16 model images = preprocess_input(images) # + #Turn start tokens into one-hot encoding html_input = np.array( [[[0., 0., 0.], #start [0., 0., 0.], [1., 0., 0.]], [[0., 0., 0.], #start <HTML>Hello World!</HTML> [1., 0., 0.], [0., 1., 0.]]]) #Turn next word into one-hot encoding next_words = np.array( [[0., 1., 0.], # <HTML>Hello World!</HTML> [0., 0., 1.]]) # end # - # Load the VGG16 model trained on imagenet and output the classification feature VGG = VGG16(weights=None, include_top=True) VGG.load_weights('/data/models/vgg16_weights_tf_dim_ordering_tf_kernels.h5') # Extract the features from the image features = VGG.predict(images) #Load the feature to the network, apply a dense layer, and repeat the vector vgg_feature = Input(shape=(1000,)) vgg_feature_dense = Dense(5)(vgg_feature) vgg_feature_repeat = RepeatVector(max_caption_len)(vgg_feature_dense) # Extract information from the input sequence language_input = Input(shape=(vocab_size, vocab_size)) language_model = LSTM(5, return_sequences=True)(language_input) # Concatenate the information from the image and the input decoder =
concatenate([vgg_feature_repeat, language_model]) # Extract information from the concatenated output decoder = LSTM(5, return_sequences=False)(decoder) # Predict which word comes next decoder_output = Dense(vocab_size, activation='softmax')(decoder) # Compile and run the neural network model = Model(inputs=[vgg_feature, language_input], outputs=decoder_output) model.compile(loss='categorical_crossentropy', optimizer='rmsprop') # Train the neural network model.fit([features, html_input], next_words, batch_size=2, shuffle=False, epochs=1000) start_token = [1., 0., 0.] # start sentence = np.zeros((1, 3, 3)) # [[0,0,0], [0,0,0], [0,0,0]] sentence[0][2] = start_token # place start in empty sentence # Making the first prediction with the start token second_word = model.predict([np.array([features[1]]), sentence]) # Put the second word in the sentence and make the final prediction sentence[0][1] = start_token sentence[0][2] = np.round(second_word) third_word = model.predict([np.array([features[1]]), sentence]) # + # Place the start token and our two predictions in the sentence # - sentence[0][0] = start_token sentence[0][1] = np.round(second_word) sentence[0][2] = np.round(third_word) # Transform our one-hot predictions into the final tokens vocabulary = ["start", "<HTML><center><H1>Hello World!</H1><center></HTML>", "end"] html = "" for i in sentence[0]: html += vocabulary[np.argmax(i)] + ' ' display(HTML(html[6:49]))
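The final decoding loop above maps each (possibly soft) one-hot row back to a token with argmax; the idea in isolation, as a sketch with the same three-word vocabulary:

```python
import numpy as np

vocabulary = ["start",
              "<HTML><center><H1>Hello World!</H1><center></HTML>",
              "end"]

def decode(one_hot_rows, vocab):
    # Pick the highest-scoring token for each row of predictions.
    return [vocab[int(np.argmax(row))] for row in one_hot_rows]

# Soft predictions still decode cleanly because argmax ignores magnitude.
rows = np.array([[1.0, 0.0, 0.0],
                 [0.1, 0.8, 0.1],
                 [0.0, 0.2, 0.8]])
print(decode(rows, vocabulary))
```

Rounding the network output (as done with `np.round` above) is unnecessary for decoding; argmax alone suffices and avoids ties introduced by rounding.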
floydhub/Hello_world/hello_world.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # **[Python Home Page](https://www.kaggle.com/learn/python)** # # --- # # # Exercises # Welcome to your first set of Python coding problems! # # If this is your first time using Kaggle Notebooks, welcome! # # Notebooks are composed of blocks (called "cells") of text and code. Each of these is editable, though you'll mainly be editing the code cells to answer some questions. # # To get started, try running the code cell below (by pressing the ► button, or clicking on the cell and pressing ctrl+enter on your keyboard). print("You've successfully run some Python code") print("Congratulations!") # Try adding another line of code in the cell above and re-running it. # # Now let's get a little fancier: Add a new code cell by clicking on an existing code cell, hitting the escape key, and then hitting the `a` or `b` key. The `a` key will add a cell above the current cell, and `b` adds a cell below. # # Great! Now you know how to use Notebooks. # # Each hands-on exercise starts by setting up our feedback and code checking mechanism. Run the code cell below to do that. Then you'll be ready to move on to question 0. # + _kg_hide-input=true _kg_hide-output=true from learntools.core import binder; binder.bind(globals()) from learntools.python.ex1 import * print("Setup complete! You're ready to start question 0.") # - # ## 0. # # *This is a silly question intended as an introduction to the format we use for hands-on exercises throughout all Kaggle courses.* # # **What is your favorite color? ** # # To complete this question, create a variable called `color` in the cell below with an appropriate value. The function call `q0.check()` (which we've already provided in the cell below) will check your answer. # Didn't get the right answer? 
How do you not even know your own favorite color?! # # Delete the `#` in the line below to make one of the lines run. You can choose between getting a hint or the full answer by choosing which line to remove the `#` from. # # Removing the `#` is called uncommenting, because it changes that line from a "comment" which Python doesn't run to code, which Python does run. q0.hint() q0.solution() # create a variable called color with an appropriate value on the line below # (Remember, strings in Python must be enclosed in 'single' or "double" quotes) ____ color = "blue" # Check your answer q0.check() # The upcoming questions work the same way. The only thing that will change are the question numbers. For the next question, you'll call `q1.check()`, `q1.hint()`, `q1.solution()`, for question 2, you'll call `q2.check()`, and so on. # <hr/> # # ## 1. # # Complete the code below. In case it's helpful, here is the table of available arithmetic operations: # # # # | Operator | Name | Description | # |--------------|----------------|--------------------------------------------------------| # | ``a + b`` | Addition | Sum of ``a`` and ``b`` | # | ``a - b`` | Subtraction | Difference of ``a`` and ``b`` | # | ``a * b`` | Multiplication | Product of ``a`` and ``b`` | # | ``a / b`` | True division | Quotient of ``a`` and ``b`` | # | ``a // b`` | Floor division | Quotient of ``a`` and ``b``, removing fractional parts | # | ``a % b`` | Modulus | Integer remainder after division of ``a`` by ``b`` | # | ``a ** b`` | Exponentiation | ``a`` raised to the power of ``b`` | # | ``-a`` | Negation | The negative of ``a`` | # # <span style="display:none"></span> # # + pi = 3.14159 # approximate diameter = 3 # Create a variable called 'radius' equal to half the diameter ____ radius = diameter / 2 # Create a variable called 'area', using the formula for the area of a circle: pi times the radius squared ____ area = pi*(radius**2) # Check your answer q1.check() # + # Uncomment and run the lines 
below if you need help. #q1.hint() #q1.solution() # - # <hr/> # ## 2. # # Add code to the following cell to swap variables `a` and `b` (so that `a` refers to the object previously referred to by `b` and vice versa). # + ########### Setup code - don't touch this part ###################### # If you're curious, these are examples of lists. We'll talk about # them in depth a few lessons from now. For now, just know that they're # yet another type of Python object, like int or float. a = [1, 2, 3] b = [3, 2, 1] q2.store_original_ids() ###################################################################### # Your code goes here. Swap the values to which a and b refer. # If you get stuck, you can always uncomment one or both of the lines in # the next cell for a hint, or to peek at the solution. t=a a=b b=t ###################################################################### # Check your answer q2.check() # + #q2.hint() # + #q2.solution() # - # <hr/> # ## 3. # # a) Add parentheses to the following expression so that it evaluates to 1. (5 - 3) // 2 # + #q3.a.hint() # - # Check your answer (Run this code cell to receive credit!) q3.a.solution() # <small>Questions, like this one, marked a spicy pepper are a bit harder.</small> # # b) <span title="A bit spicy" style="color: darkgreen ">🌶️</span> Add parentheses to the following expression so that it evaluates to 0 # (8 - 3) * (2 - (1 + 1)) # + #q3.b.hint() # - # Check your answer (Run this code cell to receive credit!) q3.b.solution() # <hr/> # ## 4. # Alice, Bob and Carol have agreed to pool their Halloween candy and split it evenly among themselves. # For the sake of their friendship, any candies left over will be smashed. For example, if they collectively # bring home 91 candies, they'll take 30 each and smash 1. # # Write an arithmetic expression below to calculate how many candies they must smash for a given haul. 
# + # Variables representing the number of candies collected by alice, bob, and carol alice_candies = 121 bob_candies = 77 carol_candies = 109 # Your code goes here! Replace the right-hand side of this assignment with an expression # involving alice_candies, bob_candies, and carol_candies to_smash = (alice_candies+bob_candies+carol_candies) % 3 # Check your answer q4.check() # + #q4.hint() #q4.solution() # - # # Keep Going # # Next up, you'll **[learn to write new functions and understand functions others write](https://www.kaggle.com/colinmorris/functions-and-getting-help)**. This will make you at least 10 times more productive as a Python programmer. # --- # **[Python Home Page](https://www.kaggle.com/learn/python)** # # # # # # *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum) to chat with other Learners.*
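The candy question comes down to the `//` and `%` pair from the operator table; a quick sketch with the 91-candy example from the prompt:

```python
total = 91
friends = 3
each = total // friends    # floor division: 30 candies per person
smashed = total % friends  # modulus: 1 candy left over to smash
print(each, smashed)  # → 30 1
```

`divmod(total, friends)` returns both values at once, which is handy when you need the quotient and remainder together.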
Python/exercise-syntax-variables-and-numbers.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: lith_pred # language: python # name: lith_pred # --- # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from src.definitions import ROOT_DIR # - # %load_ext autoreload # %autoreload 2 plt.style.use('seaborn-poster') # # Load train data # + train_path = ROOT_DIR / 'data/external' / 'CSV_train.csv' assert train_path.is_file() # - data = pd.read_csv(train_path, sep=';') data.sample(10) data.shape # ## Raw features data.columns # ## Wells # + wells = len(data['WELL'].unique()) print(f'The number of wells is: {wells}') # - # ## Target lithology lithology_keys = {30000: 'Sandstone', 65030: 'Sandstone/Shale', 65000: 'Shale', 80000: 'Marl', 74000: 'Dolomite', 70000: 'Limestone', 70032: 'Chalk', 88000: 'Halite', 86000: 'Anhydrite', 99000: 'Tuff', 90000: 'Coal', 93000: 'Basement'} data['FORCE_2020_LITHOFACIES_LITHOLOGY'].replace(lithology_keys, inplace=True) # + fig, ax = plt.subplots(1, 1, figsize=(15, 10)) data['FORCE_2020_LITHOFACIES_LITHOLOGY'].value_counts(normalize=True).plot(kind='bar', ax=ax) plt.title('Proportion of lithology on train data') plt.ylabel('Proportion') plt.show() # - # # Lithofacies colors # + # https://github.com/equinor/force-ml-2020-wells/blob/master/1%20-%20data_visualization/COSMETICS_DICTIONARIES.ipynb # litho dictionary: name: hex-color litho_dict = {'Sandstone' : '#FFFF00', 'Shale' : '#825000', 'Sandstone/Shale': '#FF7800', 'Limestone' : '#00BEFF', 'Chalk' : '#00FFFF', 'Dolomite' : '#783CA0', 'Marl' : '#006400', 'Anhydrite' : '#C878C8', 'Halite' : '#FFDCFF', 'Coal' : '#000000', 'Basement' : '#FF00FF', 'Tuff' : '#32EBB9' } # - # # Distributions # ## GR # Let's see if we can separate the lithologies only using GR. 
# + missing_gr = data['GR'].isnull().sum() print(f'There are {missing_gr} missing GR values') # + fig, ax = plt.subplots(1, 1, figsize=(15, 10)) sns.kdeplot(data=data, x='GR', hue='FORCE_2020_LITHOFACIES_LITHOLOGY', common_norm=False, palette=litho_dict, fill=True, ax=ax) plt.title('Lithofacies GR distribution') plt.show() # - # It seems there are GR outliers (>250 API) that are biasing the plot. Let's zoom in on the plot. # + fig, ax = plt.subplots(1, 1, figsize=(15, 10)) sns.kdeplot(data=data, x='GR', hue='FORCE_2020_LITHOFACIES_LITHOLOGY', common_norm=False, palette=litho_dict, ax=ax) plt.xlim((-50, 250)) plt.title('Lithofacies GR distribution') plt.show() # - # There are some nice separations on the low GR values (halite, chalk, anhydrite) but also there is considerable overlap in the rest of the distributions. Let's add one more variable in combination with GR. # ## RHOB # + fig, ax = plt.subplots(1, 1, figsize=(15, 10)) sns.kdeplot(data=data, x='RHOB', hue='FORCE_2020_LITHOFACIES_LITHOLOGY', common_norm=False, palette=litho_dict, ax=ax) plt.title('Lithofacies RHOB distribution') plt.show() # - # In this case, the basement and anhydrite break out from the rest on the RHOB high side. Also, the coal dominates the low end of the density (<1.6 gr/cc). # + fig, ax = plt.subplots(1, 1, figsize=(15, 10)) for lith, group in data.groupby('FORCE_2020_LITHOFACIES_LITHOLOGY'): if lith == 'Shale': continue color = litho_dict[lith] plt.plot(group['RHOB'], group['GR'], linestyle='None', mec='None', marker=',', color=color, alpha=0.1, label=lith) plt.ylim((-10, 200)) legend = plt.legend() for l in legend.get_lines(): l._legmarker.set_marker('s') l._legmarker.set_alpha(1) plt.ylabel('GR [API]') plt.xlabel('RHOB [gr/cc]') plt.show() # - # There is a lot of overlap!
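The normalized class proportions plotted earlier come from `value_counts(normalize=True)`; a sketch on a made-up lithology column (hypothetical labels, not the FORCE dataset):

```python
import pandas as pd

# Hypothetical labels standing in for FORCE_2020_LITHOFACIES_LITHOLOGY.
s = pd.Series(['Shale', 'Shale', 'Sandstone', 'Shale', 'Limestone'])

# normalize=True divides each count by the total, so values sum to 1.
props = s.value_counts(normalize=True)
print(props['Shale'])  # → 0.6
```

Checking these proportions before modelling matters here: with this degree of class imbalance, accuracy alone would be a misleading score for a lithology classifier.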
notebooks/1.0-rp-eda.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %%HTML <style> code {background-color : pink !important;} </style> import numpy as np import cv2 import glob import matplotlib.pyplot as plt import os import matplotlib.image as mpimg import pickle # %matplotlib qt #Camera calibration function. Input the directory path where calibration board images are stored and the board dimension def calCamera(cal_img_dir, board_shape=(9,6)): obj_pnt = np.zeros((board_shape[0]*board_shape[1],3), np.float32) obj_pnt[:,:2] = np.mgrid[0:board_shape[0], 0:board_shape[1]].T.reshape(-1,2) images = glob.glob(cal_img_dir) obj_pnts = [] img_pnts = [] img_size = None for idx, fname in enumerate(images): img = cv2.imread(fname) if (img_size == None): img_size = (img.shape[1], img.shape[0]) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) ret, corners = cv2.findChessboardCorners(gray, board_shape, None) if ret == True: obj_pnts.append(obj_pnt) img_pnts.append(corners) ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pnts, img_pnts, img_size,None,None) return (mtx, dist) #Store the calibration result mtx, dist = calCamera("camera_cal/calibration*.jpg") dist_pickle = {} dist_pickle["mtx"] = mtx dist_pickle["dist"] = dist pickle.dump( dist_pickle, open( "cal_pickle.p", "wb" )) #Load the calibration result cal_pickle = pickle.load(open( "cal_pickle.p", "rb" )) mtx = cal_pickle["mtx"] dist = cal_pickle["dist"] # + from collections import deque #Function to check if the found left and right polyfit lines are valid def sanityCheck(left_fit, right_fit, y_range=(0,720)): if left_fit is None or right_fit is None:#Polyfit fail return False #Sample points on left line ploty = np.linspace(y_range[0], y_range[1], (y_range[1]-y_range[0])//50) x_left = left_fit[0]*ploty**2+left_fit[1]*ploty+left_fit[2] left_slope =
-1/(2*left_fit[0]*ploty + left_fit[1]) #Slope of perpendicular lines at sample points dist_points = [] for idx in range(len(ploty)): #Find the intersection points of the perpendicular lines and the right line y_x_right = findIntersec(np.array([0,left_slope[idx],x_left[idx]-left_slope[idx]*ploty[idx]]), right_fit) for inters in y_x_right: if 0<inters[0]<720:#Only keep intersection points inside the image dist_points.append([x_left[idx], ploty[idx], inters[1], inters[0]]) dist_points = np.array(dist_points) #Find the distance between each point pair on the left and right lines and take the standard deviation distances = np.sqrt((dist_points[:,0] - dist_points[:,2])**2+(dist_points[:,1] - dist_points[:,3])**2) dis_std = np.std(distances) if dis_std>50:#A large deviation means the distance between the two lines varies a lot return False return True def findIntersec(poly1, poly2): poly_gen1 = np.polynomial.polynomial.Polynomial(poly1[::-1]) poly_gen2 = np.polynomial.polynomial.Polynomial(poly2[::-1]) roots = np.polynomial.polynomial.polyroots(poly1[::-1] - poly2[::-1]) ret = [] for root in roots: ret.append([root,poly_gen1(root)]) return ret #Class to keep track of the last several valid line pairs and output an averaged, smoothed line pair class Lines: def __init__(self, queue_size=5): self._line_queue = deque() self._queue_size = queue_size self._bad_frame_count = 0 #Number of successive invalid line pairs def addLine(self, l_r_fit):#The input is [left_fit, right_fit] if (sanityCheck(*l_r_fit)): self._bad_frame_count = 0 self._line_queue.append(l_r_fit) if len(self._line_queue)>self._queue_size:#Queue is full self._line_queue.popleft() else: self._bad_frame_count+=1 def avgLine(self): #The output is [left_fit, right_fit] if len(self._line_queue)!=0 and self._bad_frame_count<10: ret_fit = np.mean(np.array(self._line_queue), axis=0) return ret_fit else: return [None, None] # + #This function filters out pixels with gradient outside range thresh in x/y direction def abs_sobel_thresh(img, orient='x', 
sobel_kernel=3, thresh=(0, 255)): gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if orient == 'x': sobel = cv2.Sobel(gray, cv2.CV_64F, 1, 0,ksize=sobel_kernel) else: sobel = cv2.Sobel(gray, cv2.CV_64F, 0, 1,ksize=sobel_kernel) abs_sobel = np.absolute(sobel) scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel)) binary_output = np.zeros_like(scaled_sobel) binary_output[(scaled_sobel<=thresh[1])&(scaled_sobel>=thresh[0])] = 1 return binary_output #Obtain the histogram of img in x direction def hist(img): bottom_half = img[img.shape[0]//2:,:] histogram = np.sum(bottom_half, axis=0)//255 return histogram #Fit lines on binary_warped when no previous polynomial is given def search_blind(binary_warped): # Take a histogram of the bottom half of the image histogram = hist(binary_warped) # Find the peak of the left and right halves of the histogram # These will be the starting point for the left and right lines midpoint = int(histogram.shape[0]//2) # HYPERPARAMETERS # Choose the number of sliding windows nwindows = 9 # Set the width of the windows +/- margin margin = 100 # Set minimum number of pixels found to recenter window minpix = 40 # Set height of windows - based on nwindows above and image shape window_height = int(binary_warped.shape[0]//nwindows) nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) #Find nonzero pixels on left line leftx_base = np.argmax(histogram[:midpoint]) leftx_current = leftx_base left_lane_inds = [] for window in range(nwindows): # Identify window boundaries in x and y (and right and left) win_y_low = binary_warped.shape[0] - (window+1)*window_height win_y_high = binary_warped.shape[0] - window*window_height win_xleft_low = leftx_current - margin win_xleft_high = leftx_current + margin good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0] left_lane_inds.append(good_left_inds) if good_left_inds.shape[0] > minpix: leftx_current = int(np.mean(nonzerox[good_left_inds])) try: left_lane_inds = np.concatenate(left_lane_inds) except ValueError: # Avoids an error if the above is not implemented fully pass leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] #Find nonzero pixels on right line rightx_base = np.argmax(histogram[midpoint:]) + midpoint rightx_current = rightx_base right_lane_inds = [] # Step through the windows one by one for window in range(nwindows): # Identify window boundaries in x and y (and right and left) win_y_low = binary_warped.shape[0] - (window+1)*window_height win_y_high = binary_warped.shape[0] - window*window_height win_xright_low = rightx_current - margin win_xright_high = rightx_current + margin good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0] # Append these indices to the lists right_lane_inds.append(good_right_inds) if good_right_inds.shape[0] > minpix: rightx_current = int(np.mean(nonzerox[good_right_inds])) try: right_lane_inds = np.concatenate(right_lane_inds) except ValueError: pass rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] try: left_fit = np.polyfit(lefty, leftx, 2) except ValueError: left_fit = None try: right_fit = np.polyfit(righty, rightx, 2) except ValueError: right_fit = None return left_fit, right_fit #Fit lines when previous polynomial is given def search_around_poly(binary_warped, left_fit_prev = None, right_fit_prev = None): left_fit=None right_fit=None nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) margin = 50 left_lane_inds = ((nonzerox > (left_fit_prev[0]*(nonzeroy**2) + left_fit_prev[1]*nonzeroy + left_fit_prev[2] - margin)) & (nonzerox < (left_fit_prev[0]*(nonzeroy**2) + left_fit_prev[1]*nonzeroy + left_fit_prev[2] + margin))) leftx = 
nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] try: left_fit = np.polyfit(lefty, leftx, 2) except ValueError: left_fit = None right_lane_inds = ((nonzerox > (right_fit_prev[0]*(nonzeroy**2) + right_fit_prev[1]*nonzeroy + right_fit_prev[2] - margin)) & (nonzerox < (right_fit_prev[0]*(nonzeroy**2) + right_fit_prev[1]*nonzeroy + right_fit_prev[2] + margin))) # Again, extract left and right line pixel positions rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] try: right_fit = np.polyfit(righty, rightx, 2) except ValueError: right_fit=None return left_fit, right_fit #If fit_prev is [None, None], polynomial fitting will be done from scratch. Otherwise, search around the known polynomial def fit_polynomial(binary_warped, fit_prev): if fit_prev[0] is not None and fit_prev[1] is not None: return search_around_poly(binary_warped, left_fit_prev=fit_prev[0],right_fit_prev=fit_prev[1]) elif fit_prev[0] is None and fit_prev[1] is None: return search_blind(binary_warped) else: raise ValueError #Function returning the perspective transformation matrix def warpTrsf(reverse=False): src = np.array([[245, 692],[1058, 692],[702, 460],[582, 460]], np.float32) dst = np.array([[1/4, 1], [3/4, 1], [3/4,1/10], [1/4, 1/10]],np.float32) dst[:,0] = dst[:,0]*1280 dst[:,1] = dst[:,1]*720 if reverse: return cv2.getPerspectiveTransform(dst, src) else: return cv2.getPerspectiveTransform(src, dst) #Function to warp the original image to a bird's-eye view def warper(img): img_size = img.shape[:2] return cv2.warpPerspective(img, warpTrsf(), img_size[::-1], flags=cv2.INTER_LINEAR) #Find the parameters of lines in the original image given those in the warped image def polyUnwarp(fit_line, imshape): rev_trsf = warpTrsf(True) ploty = np.linspace(0, imshape[0]-1, imshape[0]) fitx = fit_line[0]*ploty**2 + fit_line[1]*ploty + fit_line[2] fit_xy = np.ones([ploty.shape[0],3], np.float32) fit_xy[:,0] = fitx fit_xy[:,1] = ploty orig_xy =rev_trsf.dot(fit_xy.T) orig_xy[:2,:] = orig_xy[:2,:]/orig_xy[2:,:] 
return np.polyfit(orig_xy[1,:], orig_xy[0,:], 2) #Highlight area between two lines def laneOverlap(img, left_fit, right_fit, y_range=(460, 680)): img_size = img.shape[:2] grid = np.mgrid[0:img_size[0], 0:img_size[1]] lane_mask = ((grid[1,:]>=left_fit[0]*grid[0,:]**2+left_fit[1]*grid[0,:]+left_fit[2])& (grid[1,:]<=right_fit[0]*grid[0,:]**2+right_fit[1]*grid[0,:]+right_fit[2])& (grid[0,:]<=y_range[1])&(grid[0,:]>=y_range[0])) overlay = np.zeros_like(img) overlay[lane_mask] = [0,255,0] return cv2.addWeighted(img, 1, overlay, 0.5, 0) #HLS filter to extract traffic lanes def hls_select(img): hls_img = cv2.cvtColor(img, cv2.COLOR_BGR2HLS) hue = hls_img[:,:,0] lig = hls_img[:,:,1] sat = hls_img[:,:,2] binary_output = np.zeros_like(hue) binary_output[((sat>30)&(lig>30)&(abs_sobel_thresh(img,sobel_kernel=5, thresh=(15, 255))==1))|#Condition for yellow lane (lig>200)|((lig>140)&(sat>100))] = 1#Condition for white lane return binary_output #Function to calculate the left and right curvature radius and the offset from the centre of the two lanes def measure_curvature_real(left_fit, right_fit,imshape=(720,1280)): ym_per_pix = 30/720 # meters per pixel in y dimension xm_per_pix = 3.7/700 # meters per pixel in x dimension y_eval = imshape[0] # Calculation of R_curve (radius of curvature) left_curverad = ((1 + (2*left_fit[0]*y_eval*ym_per_pix + left_fit[1])**2)**1.5) / np.absolute(2*left_fit[0]) right_curverad = ((1 + (2*right_fit[0]*y_eval*ym_per_pix + right_fit[1])**2)**1.5) / np.absolute(2*right_fit[0]) offset = ((left_fit[0]*imshape[0]**2+left_fit[1]*imshape[0]+left_fit[2]+ right_fit[0]*imshape[0]**2+right_fit[1]*imshape[0]+right_fit[2])/2-imshape[1]/2)*xm_per_pix return left_curverad, right_curverad,offset # - #Function to detect lanes on img given the distortion coefficients and intrinsic matrix. lines is of class Lines. 
def laneDetection(img, mtx, dist,lines): img_size = img.shape[:2] undst = cv2.undistort(img, mtx, dist, None, mtx) processed_img = np.zeros_like(undst) processed_img[(hls_select(undst)==1)]=255 processed_img = warper(processed_img) processed_img = cv2.cvtColor(processed_img, cv2.COLOR_BGR2GRAY)//255*255 fit_prev = lines.avgLine()#See if we have previous knowledge of polynomial. try: l_r_fit = fit_polynomial(processed_img,fit_prev) lines.addLine(l_r_fit) except: pass left_fit, right_fit = lines.avgLine() if left_fit is not None and right_fit is not None:#Valid lines are found left_ra, right_ra,offset = measure_curvature_real(left_fit, right_fit,imshape = img.shape[:2]) #obtained polynomial lines on original image left_fit_ori= polyUnwarp(left_fit, img_size) right_fit_ori = polyUnwarp(right_fit, img_size) #Highlight area between two lanes processed_img = laneOverlap(undst, left_fit_ori, right_fit_ori) cv2.putText(processed_img,'Radius of Curvature = left: {:.0f}m, right: {:.0f}m'.format(left_ra,right_ra),(50,50), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 255), 2, cv2.LINE_AA) cv2.putText(processed_img,'Offset: {:.2f}m'.format(offset),(50,100), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 255), 2, cv2.LINE_AA) else: processed_img = undst return processed_img from moviepy.editor import VideoFileClip from IPython.display import HTML from functools import partial def process_image(image, lines): brg_img = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) result = laneDetection(brg_img,mtx, dist,lines) return cv2.cvtColor(result, cv2.COLOR_BGR2RGB) lines = Lines() video_output = 'test_videos_output/project_video.mp4' clip1 = VideoFileClip("project_video.mp4") video_clip = clip1.fl_image(partial(process_image,lines=lines)) #NOTE: this function expects color images!! 
# %time video_clip.write_videofile(video_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(video_output)) def saveFigure(ori_name, folder): if not os.path.exists('output_images/{}/'.format(folder)): os.mkdir('output_images/{}/'.format(folder)) file_split = os.path.split(ori_name) plt.savefig(os.path.join('output_images/{}/'.format(folder), file_split[-1])) def saveImg(img,ori_name,folder): if not os.path.exists('output_images/{}/'.format(folder)): os.mkdir('output_images/{}/'.format(folder)) file_split = os.path.split(ori_name) cv2.imwrite(os.path.join('output_images/{}/'.format(folder), file_split[-1]), img) images = glob.glob("test_images/*.jpg") for fname in images: img = cv2.imread(fname) undst = cv2.undistort(img, mtx, dist, None, mtx) img_size = img.shape[:2] processed_img = np.zeros_like(undst) processed_img[(hls_select(undst)==1)]=255 processed_img = warper(processed_img) processed_img = cv2.cvtColor(processed_img, cv2.COLOR_BGR2GRAY)//255*255 fit_prev = [None, None] fit_l_r= fit_polynomial(processed_img,fit_prev) left_fit, right_fit = fit_l_r left_ra, right_ra,offset = measure_curvature_real(left_fit, right_fit,imshape = img.shape[:2]) left_fit_ori= polyUnwarp(left_fit, img_size) right_fit_ori = polyUnwarp(right_fit, img_size) processed_img = laneOverlap(undst, left_fit_ori, right_fit_ori) cv2.putText(processed_img,'Radius of Curvature = left: {:.0f}m, right: {:.0f}m'.format(left_ra,right_ra),(50,50), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 255), 2, cv2.LINE_AA) cv2.putText(processed_img,'Offset: {:.2f}m'.format(offset),(50,100), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 255), 2, cv2.LINE_AA) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(40,20)) ax1.imshow(cv2.cvtColor(undst, cv2.COLOR_BGR2RGB)) ax1.set_title('undistorted Image', fontsize=30) ax2.imshow(cv2.cvtColor(processed_img, cv2.COLOR_BGR2RGB)) ax2.set_title('result image', fontsize=30) saveFigure(fname, 'result') plt.close()
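The radius-of-curvature calculation in `measure_curvature_real` uses the standard formula R = (1 + (2Ay + B)^2)^(3/2) / |2A| for a quadratic fit. The expression can be sanity-checked independently of the pipeline with synthetic points on a circle of known radius — a minimal NumPy sketch, not part of the original notebook:

```python
import numpy as np

# Synthetic check: sample the lower arc of a circle of radius R
# centred at (0, R). Near the vertex y ~= x**2 / (2*R), so a
# quadratic fit should recover a curvature radius close to R.
R = 500.0
x = np.linspace(-50, 50, 101)
y = R - np.sqrt(R**2 - x**2)  # exact circle points, lower arc

fit = np.polyfit(x, y, 2)  # fit[0]*x**2 + fit[1]*x + fit[2]
y_eval = 0.0               # evaluate the curvature at the vertex
curverad = (1 + (2*fit[0]*y_eval + fit[1])**2)**1.5 / np.abs(2*fit[0])
print(curverad)  # close to 500, the circle's radius
```

In the pipeline the same expression is evaluated at the bottom of the image (`y_eval = imshape[0]`) after scaling into meters with `ym_per_pix`.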
pipeline_code.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import torch import pandas as pd import os import json import numpy as np class Dataset(torch.utils.data.Dataset): def __init__(self, X, labels, attention_masks, BATCH_SIZE_FLAG=32): """Initialization""" self.y = labels self.X = X # self.rationale = rationale self.attention_masks = attention_masks self.BATCH_SIZE_FLAG = BATCH_SIZE_FLAG def __len__(self): """number of samples""" return self.X.shape[0] def __getitem__(self, index): """Get individual item from the tensor""" sample = {"input_ids": self.X[index], "labels": self.y[index], # "rationale": self.rationale[index], "attention_mask": self.attention_masks[index] } return sample def create_dataloader(model, classes, filepath, batch_size=32, max_rows=None, class_specific=None, max_len=512, return_dataset=False, name=None): """Preparing dataloader""" data_df = pd.read_csv(filepath) data_df = data_df[data_df['text'].notna()] data_df.reset_index(drop=True, inplace=True) # convert rationale column to list from string try: data_df = data_df[data_df['rationale'].notna()] data_df.reset_index(drop=True, inplace=True) try: data_df["rationale"] = data_df['rationale'].apply(lambda s: json.loads(s)) except Exception as e: # for handling rationale string from wikiattack data_df["rationale"] = data_df["rationale"].apply(lambda s: s.strip("[").strip("]").split()) except Exception as e: pass if max_rows is not None: data_df = data_df.iloc[:max_rows] data_df['text']= data_df['text'].apply(lambda t:t.replace('[SEP]',model.tokenizer.sep_token)) data_df['input_ids'], data_df['attention_mask'] = zip(*data_df['text'].map(model.tokenize)) input_id_tensor = torch.tensor(data_df['input_ids']) attention_mask_tensor = torch.tensor(data_df['attention_mask']) labels_tensor = create_label_tensor(data_df, classes) dataset_ds = 
Dataset(input_id_tensor, labels_tensor, attention_mask_tensor, BATCH_SIZE_FLAG=batch_size) if return_dataset: return dataset_ds return torch.utils.data.DataLoader(dataset_ds, batch_size=dataset_ds.BATCH_SIZE_FLAG, shuffle=True) ds = create_dataloader(0, ['NEG','POS'], 'pog.csv', max_rows=None, batch_size=16, max_len=512, return_dataset=True, name='movies') def prepare_data(model, classes, data_dir, train_path=None, dev_path=None, test_path=None, batch_size=32, max_rows=None, max_len=512, return_dataset=False, name=None): """Preparing data for training, evaluation and testing""" train_dataloader = create_dataloader(model, classes, train_path, max_rows=max_rows, batch_size=batch_size, max_len=max_len, return_dataset=return_dataset, name=name) dev_dataloader = create_dataloader(model, classes, dev_path, max_rows=max_rows, batch_size=batch_size, max_len=max_len, return_dataset=return_dataset, name=name) # test_dataloader = create_dataloader(model, classes, test_path, max_rows=max_rows, batch_size=batch_size, # max_len=max_len, return_dataset=return_dataset) return train_dataloader, dev_dataloader
Printed model summary (condensed — layers (1) through (11) of the encoder are identical in structure to layer (0)):
RobertaClassifier( (model): RobertaForSequenceClassification( (roberta): RobertaModel( (embeddings): RobertaEmbeddings( (word_embeddings): Embedding(50265, 768, padding_idx=1) (position_embeddings): Embedding(514, 768, padding_idx=1) (token_type_embeddings): Embedding(1, 768) (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): RobertaEncoder( (layer): ModuleList( (0): RobertaLayer( (attention): RobertaAttention( (self): RobertaSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): RobertaSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): RobertaIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): RobertaOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (1)-(11): RobertaLayer( ... ) # same structure as layer (0) ) ) ) (classifier): RobertaClassificationHead( (dense): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) (out_proj): Linear(in_features=768, out_features=2, bias=True) ) ) )
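The `Dataset` class above only requires tensors with a shared first dimension; a minimal sketch showing how it plugs into `torch.utils.data.DataLoader`, using dummy tensors in place of the tokenized CSV (the names and shapes here are made up for illustration):

```python
import torch

# Stripped-down stand-in for the Dataset class above, exercised
# with dummy tensors instead of tokenized text.
class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, X, labels, attention_masks):
        self.X, self.y, self.attention_masks = X, labels, attention_masks

    def __len__(self):
        return self.X.shape[0]  # number of samples

    def __getitem__(self, index):
        return {"input_ids": self.X[index],
                "labels": self.y[index],
                "attention_mask": self.attention_masks[index]}

n_samples, max_len = 8, 16
ds = ToyDataset(torch.zeros(n_samples, max_len, dtype=torch.long),
                torch.zeros(n_samples, dtype=torch.long),
                torch.ones(n_samples, max_len, dtype=torch.long))
loader = torch.utils.data.DataLoader(ds, batch_size=4, shuffle=True)
batch = next(iter(loader))  # default collate stacks each dict field
print(batch["input_ids"].shape)  # torch.Size([4, 16])
```

Because `__getitem__` returns a dict, the default collate function stacks each field across the batch, which is exactly the shape a Hugging Face-style classifier expects.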
dataloader.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #finding and saving out quartiles for each clustering method # + #saving out the quartiles import os import glob import matplotlib.pyplot as plt import math ranks_top = [] #list of top quartile of real ligands (from experimental IC50 results) ranks_mid1 = [] ranks_mid2 = [] ranks_bot = [] #list of bottom quartile of real ligands with open('/home/jegan/bccgc4/Answers.csv','r') as ansfile: i = 0 for l in ansfile: item = l.split(',') i += 1 if i <= 115: ranks_top.append(item[0]) elif 115 < i <= 229: ranks_mid1.append(item[0]) elif 229 < i < 344: ranks_mid2.append(item[0]) elif i >= 344: ranks_bot.append(item[0]) names = ['TICA_OE','TICA_OE_CBA','PCA_OE','PCA_OE_CBA','GROMOS_OE','GROMOS_OE_CBA','XTAL_OE','TICA_Glide','TICA_Glide_CBA','PCA_Glide','PCA_Glide_CBA', 'GROMOS_Glide','GROMOS_Glide_CBA','XTAL_Glide'] num_top = [] #number of ligands docked correctly in the top quartile per method num_mid1= [] num_mid2 = [] num_bot = [] all_tops = [] all_bots = [] oe_top = [] gd_top = [] ind = 0 for number in range(14): for file in glob.glob('/home/jegan/bccgc4/analysis/*.csv'): if (file.rsplit('/')[5]) == (names[number]+'.csv'): predict_top = [] #list of predicted ligands in top quartile predict_mid1 = [] predict_mid2 = [] predict_bot = [] #list of predicted ligands in bottom quartile k = open(file,'r') j = 0 for h in k: u = h.split(',') j += 1 if j <= 115: predict_top.append(u[0]) elif 115 < j <= 229: predict_mid1.append(u[0]) elif 229 < j < 344: predict_mid2.append(u[0]) elif j >= 344: predict_bot.append(u[0]) same_top = [] #list of ligands predicted correctly in top quartile same_mid1 = [] same_mid2 = [] same_bot = [] #list of ligands predicted correctly in bottom quartile for m in ranks_top: for q in predict_top: if m == q: same_top.append(q) for m in 
ranks_mid1: for q in predict_mid1: if m == q: same_mid1.append(q) for m in ranks_mid2: for q in predict_mid2: if m == q: same_mid2.append(q) for a in ranks_bot: for w in predict_bot: if a == w: same_bot.append(w) with open('/home/jegan/structural_analysis/quartiles/'+names[number]+'_quarts.txt','w') as newfile: newfile.write('Ligands correctly predicted by '+names[number]+' in the top and bottom quartiles\n') newfile.write('--------------------------------------------------------------------------------\n') newfile.write('Top Quartile\n') for l in same_top: newfile.write(l+'\n') newfile.write('Bottom Quartile\n') for y in same_bot: newfile.write(y+'\n') top_perc = (len(same_top)/115) * 100 #percentage of ligands docked correctly in the top quartile mid1_perc = (len(same_mid1)/115) * 100 mid2_perc = (len(same_mid2)/115) * 100 bot_perc = (len(same_bot)/115) * 100 #percentage of ligands docked correctly in the bottom quartile num_top.append(top_perc) num_mid1.append(mid1_perc) num_mid2.append(mid2_perc) num_bot.append(bot_perc) for w in same_top: all_tops.append(w) for j in same_bot: all_bots.append(j) if ind < 4: for i in same_top: oe_top.append(i) elif ind > 3: for i in same_top: gd_top.append(i) ind += 1 top_quart_ligs = list(set(all_tops)) bot_quart_ligs = list(set(all_bots)) top_oe = list(set(oe_top)) top_gd = list(set(gd_top)) # + #saving out the quartiles of the experimental with open('/home/jegan/structural_analysis/quartiles/exp_quarts.txt','w') as newfile: newfile.write('Ligands correctly predicted by '+names[number]+' in the top and bottom quartiles\n') newfile.write('--------------------------------------------------------------------------------\n') newfile.write('Top Quartile\n') for l in ranks_top: newfile.write(l+'\n') newfile.write('Bottom Quartile\n') for y in ranks_bot: newfile.write(y+'\n') # + #graphing the quartiles import numpy as np #fig, ax = plt.subplots(1, 4, sharex='col', sharey='row', figsize = (18,4)) #creating 1x2 subplot grid fig, ax = 
plt.subplots() methods = range(len(num_top)) names = ['TICA_OE','TICA_OE_CBA','PCA_OE','PCA_OE_CBA','GROMOS_OE','GROMOS_OE_CBA','XTAL_OE','TICA_Glide','TICA_Glide_CBA','PCA_Glide','PCA_Glide_CBA', 'GROMOS_Glide','GROMOS_Glide_CBA','XTAL_Glide'] #lists = [num_top,num_mid1,num_mid2,num_bot] ax.bar(methods,num_top, color = ['royalblue','royalblue','royalblue','royalblue','royalblue','royalblue','darkorange','royalblue','royalblue','royalblue', 'royalblue','royalblue','royalblue','darkorange']) ax.set_xticks(np.arange(len(names))) ax.set_xticklabels(names, rotation = 45, ha = 'right') #for c in range(4): # ax[c].bar(methods, lists[c]) # ax[c].set_xticks(np.arange(len(names))) # ax[c].set_xticklabels(names, rotation = 45) #ax[0].bar(methods, num_top) #ax[0].set_xticks(np.arange(len(names))) #ax[0].set_xticklabels(names, rotation = 45) #ax[1].bar(methods, num_bot) #ax[1].set_xticks(np.arange(len(names))) #ax[1].set_xticklabels(names, rotation = 45) fig.suptitle('% of Ligands Placed Correctly in Top Quartile', y=1) fig.text(0.001, 0.5, '% of Ligands', va = 'center', rotation='vertical') fig.tight_layout() plt.subplots_adjust(top=0.95) fig.savefig('/home/jegan/structural_analysis/figs/top_quarts.png')
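The nested loops above that collect correctly placed ligands amount to an order-preserving intersection of two quartile lists. A minimal sketch of that bookkeeping with made-up ligand IDs (the real IDs come from `Answers.csv` and the per-method CSVs):

```python
# Hypothetical ligand IDs standing in for the experimental and
# predicted top quartiles read from the CSV files above.
ranks_top = ['lig01', 'lig02', 'lig03', 'lig04']
predict_top = ['lig02', 'lig04', 'lig05', 'lig06']

# Ligands predicted correctly in the top quartile (order preserved).
same_top = [lig for lig in predict_top if lig in set(ranks_top)]
# Percentage placed correctly, relative to the quartile size.
top_perc = len(same_top) / len(ranks_top) * 100

print(same_top)  # ['lig02', 'lig04']
print(top_perc)  # 50.0
```

Using a set for the membership test keeps the comparison linear in the list lengths, instead of the quadratic pairwise loop, while giving the same `same_top` contents.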
ranking_analysis/quartiles.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import cv2
import numpy as np
import matplotlib.pyplot as plt
import math
import time

# +
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
print(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
ret = cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
ret = cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1080)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    k = cv2.waitKey(1)
    cv2.rectangle(frame, (100, 100), (500, 500), (255, 255, 255), 2)   # rectangle for player 1
    cv2.rectangle(frame, (800, 100), (1200, 500), (255, 255, 255), 2)  # rectangle for player 2
    cv2.imshow('frame', frame)
    if k % 256 == 32:  # space bar: save both gesture crops
        img_name1 = "firstplayergesture.png"
        firstp = frame[100:500, 100:500]
        cv2.imwrite(img_name1, firstp)
        secondp = frame[100:500, 800:1200]
        img_name2 = "secondplayergesture.png"
        cv2.imwrite(img_name2, secondp)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

hand = cv2.imread('D:firstplayergesture.png')
hand1 = cv2.imread('D:secondplayergesture.png')
lower_skin = np.array([0, 30, 60])
upper_skin = np.array([20, 150, 255])

# displaying the gestures
cv2.imshow("p1_hand", hand)
cv2.imshow("p2_hand", hand1)

# Thresholding of first player's gesture
hsv1 = cv2.cvtColor(hand, cv2.COLOR_BGR2HSV)
first_binary = cv2.inRange(hsv1, lower_skin, upper_skin)
ret, the = cv2.threshold(first_binary, 70, 255, cv2.THRESH_BINARY)

# Thresholding of second player's gesture
hsv2 = cv2.cvtColor(hand1, cv2.COLOR_BGR2HSV)
second_binary = cv2.inRange(hsv2, lower_skin, upper_skin)
ret, the1 = cv2.threshold(second_binary, 70, 255, cv2.THRESH_BINARY)

# Find the outer area (connected pixels) using contours
# (thresholded image, contour retrieval mode, contour approximation method)
contours, hierarchy = cv2.findContours(the, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours1, hierarchy1 = cv2.findContours(the1, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

hull = [cv2.convexHull(c) for c in contours]    # c represents a contour
hull1 = [cv2.convexHull(c) for c in contours1]

# Draw the contours to check the result
final = cv2.drawContours(hand, hull, -1, (255, 0, 0))
final1 = cv2.drawContours(hand1, hull1, -1, (255, 0, 0))
cv2.imshow('Thresh', the)
cv2.imshow('Convex Hull', final)

cap.release()
cv2.waitKey(0)
cv2.destroyAllWindows()

# +
import cv2
import numpy as np

# Analysing the saved images
hand = cv2.imread('D:firstplayergesture.png')
hand1 = cv2.imread('D:secondplayergesture.png')
lower_skin = np.array([0, 30, 60])
upper_skin = np.array([20, 150, 255])

# displaying the gestures
cv2.imshow("p1_hand", hand)
cv2.imshow("p2_hand", hand1)

# Thresholding of first player's gesture
hsv1 = cv2.cvtColor(hand, cv2.COLOR_BGR2HSV)
first_binary = cv2.inRange(hsv1, lower_skin, upper_skin)
ret, the = cv2.threshold(first_binary, 70, 255, cv2.THRESH_BINARY)

# Thresholding of second player's gesture
hsv2 = cv2.cvtColor(hand1, cv2.COLOR_BGR2HSV)
second_binary = cv2.inRange(hsv2, lower_skin, upper_skin)
ret, the1 = cv2.threshold(second_binary, 70, 255, cv2.THRESH_BINARY)

# Find the outer area (connected pixels) using contours
contours, hierarchy = cv2.findContours(the, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours1, hierarchy1 = cv2.findContours(the1, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

hull = [cv2.convexHull(c) for c in contours]    # c represents a contour
hull1 = [cv2.convexHull(c) for c in contours1]

# Draw the contours to check the result
final = cv2.drawContours(hand, hull, -1, (255, 0, 0))
final1 = cv2.drawContours(hand1, hull1, -1, (255, 0, 0))
cv2.imshow('Thresh', the)
cv2.imshow('Convex Hull', final)

cv2.waitKey(0)
cv2.destroyAllWindows()

# +
capture = cv2.VideoCapture(0)
# print(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
# print(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
ret = capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
ret = capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1080)
TIMER = int(3)

while capture.isOpened():
    ret, frame = capture.read()
    cv2.line(frame, (650, 0), (650, 900), (255, 255, 255), 2)
    cv2.rectangle(frame, (100, 100), (500, 500), (255, 255, 255), 2)   # rectangle for player 1
    cv2.rectangle(frame, (800, 100), (1200, 500), (255, 255, 255), 2)  # rectangle for player 2
    cv2.putText(frame, "PLAYER 1", (50, 600), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
    cv2.putText(frame, "PLAYER 2", (700, 600), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
    cv2.imshow('GAME WINDOW', frame)

    k = cv2.waitKey(1)
    if k == ord('q'):
        prev = time.time()
        while TIMER >= 0:
            ret, frame = capture.read()
            cv2.line(frame, (650, 0), (650, 900), (255, 255, 255), 2)
            cv2.rectangle(frame, (100, 100), (500, 500), (255, 255, 255), 2)
            cv2.rectangle(frame, (800, 100), (1200, 500), (255, 255, 255), 2)
            cv2.putText(frame, "PLAYER 1", (50, 600), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
            cv2.putText(frame, "PLAYER 2", (700, 600), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
            cv2.imshow('GAME WINDOW', frame)

            # Display the countdown on each frame using putText
            font = cv2.FONT_HERSHEY_SIMPLEX
            cv2.putText(frame, str(TIMER), (650, 250), font, 5, (0, 255, 255), 4, cv2.LINE_AA)
            cv2.imshow('GAME WINDOW', frame)
            cv2.waitKey(125)

            # current time; if one second has elapsed, then decrease the counter
            cur = time.time()
            if cur - prev >= 1:
                prev = cur
                TIMER = TIMER - 1
        else:
            ret, frame = capture.read()
            cv2.line(frame, (650, 0), (650, 900), (255, 255, 255), 2)
            cv2.rectangle(frame, (100, 100), (500, 500), (255, 255, 255), 2)
            cv2.rectangle(frame, (800, 100), (1200, 500), (255, 255, 255), 2)
            cv2.putText(frame, "PLAYER 1", (50, 600), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
            cv2.putText(frame, "PLAYER 2", (700, 600), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
            cv2.imshow('GAME WINDOW', frame)

            # Display the captured frame for 2 sec. You can increase the time
            # in waitKey as well.
            cv2.imshow('GAME WINDOW', frame)
            cv2.waitKey(2000)

            # Save the gesture crops
            img_name1 = "firstplayergesture.png"
            firstp = frame[100:500, 100:500]
            cv2.imwrite(img_name1, firstp)
            secondp = frame[100:500, 800:1200]
            img_name2 = "secondplayergesture.png"
            cv2.imwrite(img_name2, secondp)

            # HERE we could reset the countdown timer if we want to capture
            # again without closing the camera

    # Press Esc to exit
    elif k == 27:
        break

# close the camera
capture.release()

# close all the opened windows
cv2.destroyAllWindows()
# -
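The notebook stops at extracting contours and convex hulls for each player's crop. Once each crop has been classified as a gesture, deciding the winner is a small lookup — a hypothetical helper that is not part of the notebook; the gesture labels and function name are illustrative:

```python
def rps_winner(p1, p2):
    """Return 0 for a draw, 1 if player 1 wins, 2 if player 2 wins."""
    beats = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}
    if p1 == p2:
        return 0
    return 1 if beats[p1] == p2 else 2

result = rps_winner('rock', 'scissors')  # player 1 wins -> 1
```

The classification step itself (mapping a hull/contour to 'rock', 'paper', or 'scissors') could, for example, count convexity defects, but that is beyond what this notebook implements.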
.ipynb_checkpoints/Analysis-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# ### 1. Load molecule

# Make sure to install RDKit before running this example notebook:
#
# ```
# conda install -c conda-forge rdkit
# ```

from qdk.chemistry import Molecule

caffeine = Molecule.from_xyz("data/xyz/caffeine.xyz")
caffeine

with open("data/xyz/caffeine.xyz", "r") as f:
    print(f.read())

caffeine.mol

type(caffeine.mol)

caffeine.smiles

caffeine.num_electrons

caffeine.atoms

# ### 2. Load Broombridge and simulate in Q#

# %%writefile RPE.qs
namespace Microsoft.Quantum.Chemistry.RPE {
    open Microsoft.Quantum.Core;
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Canon;
    open Microsoft.Quantum.Chemistry;
    open Microsoft.Quantum.Chemistry.JordanWigner;
    open Microsoft.Quantum.Simulation;
    open Microsoft.Quantum.Characterization;
    open Microsoft.Quantum.Convert;
    open Microsoft.Quantum.Math;

    operation GetEnergyRPE (
        JWEncodedData: JordanWignerEncodingData,
        nBitsPrecision : Int,
        trotterStepSize : Double,
        trotterOrder : Int
    ) : (Double, Double) {
        let (nSpinOrbitals, fermionTermData, inputState, energyOffset) = JWEncodedData!;
        let (nQubits, (rescaleFactor, oracle)) = TrotterStepOracle(JWEncodedData, trotterStepSize, trotterOrder);
        let statePrep = PrepareTrialState(inputState, _);
        let phaseEstAlgorithm = RobustPhaseEstimation(nBitsPrecision, _, _);
        let estPhase = EstimateEnergy(nQubits, statePrep, oracle, phaseEstAlgorithm);
        let estEnergy = estPhase * rescaleFactor + energyOffset;
        return (estPhase, estEnergy);
    }
}

# %%writefile VQE.qs
namespace Microsoft.Quantum.Chemistry.VQE {
    open Microsoft.Quantum.Core;
    open Microsoft.Quantum.Chemistry;
    open Microsoft.Quantum.Chemistry.JordanWigner;
    open Microsoft.Quantum.Chemistry.JordanWigner.VQE;
    open Microsoft.Quantum.Intrinsic;

    operation GetEnergyVQE(
        JWEncodedData: JordanWignerEncodingData,
        theta1: Double,
        theta2: Double,
        theta3: Double,
        nSamples: Int
    ) : Double {
        let (nSpinOrbitals, fermionTermData, inputState, energyOffset) = JWEncodedData!;
        let (stateType, JWInputStates) = inputState;
        let inputStateParam = (
            stateType,
            [
                JordanWignerInputState((theta1, 0.0), [2, 0]),
                JordanWignerInputState((theta2, 0.0), [3, 1]),
                JordanWignerInputState((theta3, 0.0), [2, 3, 1, 0]),
                JWInputStates[0]
            ]
        );
        let JWEncodedDataParam = JordanWignerEncodingData(
            nSpinOrbitals, fermionTermData, inputStateParam, energyOffset
        );
        return EstimateEnergy(
            JWEncodedDataParam, nSamples
        );
    }
}

# Replace the version number (`0.xx.xxxxxxxxxx`) in the project file template below with the version of the QDK that you've installed. Run the following cell to get your version:

from azure.quantum import __version__
print("".join(__version__.rsplit(".", 1)))

# %%writefile TrotterizationExample.csproj
<Project Sdk="Microsoft.Quantum.Sdk/0.xx.xxxxxxxxxx">
    <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>netcoreapp3.1</TargetFramework>
    </PropertyGroup>
    <ItemGroup>
        <PackageReference Include="Microsoft.Quantum.Chemistry" Version="0.xx.xxxxxxxxxx" />
    </ItemGroup>
</Project>

import qsharp
from qdk.chemistry.broombridge import load_and_encode

# +
qsharp.reload()

from Microsoft.Quantum.Chemistry.RPE import GetEnergyRPE
from Microsoft.Quantum.Chemistry.VQE import GetEnergyVQE
# -

# ### Caffeine

encoded_data_caffeine = load_and_encode("data/broombridge/caffeine.yaml")

# #### Robust Phase Estimation
#
# Estimate resources for running the RPE algorithm

# %%time
GetEnergyRPE.estimate_resources(
    JWEncodedData=encoded_data_caffeine,
    nBitsPrecision=10,
    trotterStepSize=0.2,
    trotterOrder=1)

# #### Variational Quantum Eigensolver
#
# Estimate VQE resources for a single sample/iteration using the following ground state estimation (trial state or ansatz):
#
# ```
# [
#     JordanWignerInputState((theta1, 0.0), [2, 0]),       // singly-excited state
#     JordanWignerInputState((theta2, 0.0), [3, 1]),       // singly-excited state
#     JordanWignerInputState((theta3, 0.0), [2, 3, 1, 0]), // doubly-excited state
#     JWInputStates[0]                                     // Hartree-Fock state from Broombridge file
# ]
# ```

# %%time
GetEnergyVQE.estimate_resources(
    JWEncodedData=encoded_data_caffeine,
    theta1=0.001,
    theta2=-0.001,
    theta3=0.001,
    nSamples=1
)

# #### Run RPE algorithm
#
# Compare to FCI energy = -627.63095945558848

# %%time
GetEnergyRPE.simulate(
    JWEncodedData=encoded_data_caffeine,
    nBitsPrecision=10,
    trotterStepSize=0.2,
    trotterOrder=1)

# #### Run VQE
#
# Single iteration for $\theta_1$=0.001, $\theta_2$=-0.001, $\theta_3$=0.001, 10 million samples

# %%time
GetEnergyVQE.simulate(
    JWEncodedData=encoded_data_caffeine,
    theta1=0.001,
    theta2=-0.001,
    theta3=0.001,
    nSamples=int(10e6)
)

# Optimize $\theta_1$, $\theta_2$ and $\theta_3$ to minimize the VQE energy using scipy.optimize

# +
from scipy.optimize import minimize

def run_program(var_params, num_samples) -> float:
    # run the parameterized quantum program for the VQE algorithm
    theta1, theta2, theta3 = var_params
    energy = GetEnergyVQE.simulate(
        JWEncodedData=encoded_data_caffeine,
        theta1=theta1,
        theta2=theta2,
        theta3=theta3,
        nSamples=num_samples
    )
    print(var_params, energy)
    return energy

def VQE(initial_var_params, num_samples):
    """Run the VQE optimization to find the optimal energy and the associated variational parameters."""
    opt_result = minimize(run_program,
                          initial_var_params,
                          args=(num_samples,),
                          method="COBYLA",
                          tol=0.000001,
                          options={'disp': True, 'maxiter': 200, 'rhobeg': 0.05})
    if opt_result.success:
        print(opt_result.message)
        print(f"Result: {opt_result.fun} Ha")
        print(f"Number of evaluations: {opt_result.nfev}")
        print(f"Optimal parameters found: {opt_result.x}")
    return opt_result
# -

# %%time
VQE([0.001, -0.001, 0.001], int(10e6))

# ### Pyridine
#
# Compute the resources needed for the pyridine molecule

encoded_data_pyridine = load_and_encode("data/broombridge/pyridine.yaml")

# %%time
GetEnergyRPE.estimate_resources(
    JWEncodedData=encoded_data_pyridine,
    nBitsPrecision=7,
    trotterStepSize=0.4,
    trotterOrder=1)
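The `VQE` wrapper above simply hands a black-box energy function to `scipy.optimize.minimize` with the gradient-free COBYLA method. The same pattern can be exercised without a quantum simulator — a toy sketch where a made-up quadratic "energy" (minimum −1 at θ = (0.1, −0.2, 0.0)) stands in for `GetEnergyVQE.simulate`:

```python
import numpy as np
from scipy.optimize import minimize

def toy_energy(var_params):
    # Stand-in for the quantum simulation: a smooth bowl with minimum -1.0
    theta = np.asarray(var_params, dtype=float)
    return float(np.sum((theta - np.array([0.1, -0.2, 0.0]))**2) - 1.0)

opt_result = minimize(toy_energy,
                      [0.001, -0.001, 0.001],
                      method="COBYLA",
                      tol=1e-6,
                      options={'maxiter': 200, 'rhobeg': 0.05})
```

COBYLA is a sensible default here because the real objective is a sampled expectation value: it is noisy and has no analytic gradient, which rules out gradient-based methods like BFGS.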
examples/chemistry/Molecule.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Band structure calculations

# In this notebook we will see how to calculate band structures using pyiron.

from pyiron.project import Project
from ase.spacegroup import crystal
import matplotlib.pyplot as plt
import seekpath as sp
import numpy as np
# %config InlineBackend.figure_format = 'retina'

pr = Project("band_structure")

# # Structures with seekpath

# First we see how [seekpath](https://github.com/giovannipizzi/seekpath) works! Therefore we first create a structure using pyiron.

# ## Create structure with pyiron

structure_Fe = pr.create_structure("Fe", "bcc", 2.81)
structure_Fe.plot3d()

structure_Fe

# ## Create structure with seekpath

# For seekpath we need a tuple containing
# 1. The cell as a $3\times3$ array
# 2. The scaled positions
# 3. A list of `int`s to distinguish the atom types (indices of the pyiron structure)
#
# as the input structure.

input_sp = (structure_Fe.cell, structure_Fe.get_scaled_positions(), structure_Fe.indices)

# Just to see how the output looks, let us do...

sp.get_path(input_sp)

# The code automatically creates the conventional and primitive cells with all high-symmetry points and a suggested path taking all high-symmetry paths into account.
#
# **Keep in mind:** The high-symmetry points and paths only make sense for the primitive cell! Therefore we run all calculations in the primitive cell created by seekpath.

# ## Create a structure

# We use the same structure as before!

# ## Create new structure (primitive cell) with high-symmetry points and paths

# For the following command all arguments valid for seekpath are supported. Look at the docstring and at seekpath.
structure_Fe_sp = structure_Fe.create_line_mode_structure()
structure_Fe_sp.plot3d()

structure_Fe_sp

# We see that the structure is now the primitive cell containing only one atom.

structure_Fe_sp.get_high_symmetry_points()

structure_Fe_sp.get_high_symmetry_path()

# The path is stored like this. Here you can also add paths to the dictionary.
#
# Each tuple gives a start and end point for one specific trace, so disconnected paths can also be calculated.

structure_Fe_sp.add_high_symmetry_path({"my_path": [("GAMMA", "H"), ("GAMMA", "P")]})
structure_Fe_sp.get_high_symmetry_path()

# ## Create jobs

# We need two jobs for a band structure! The first gives us the correct Fermi energy and the charge density used for the second calculation.

# ## Create job for charge density

# This is only a small example of a band structure calculation. The input parameters, such as the cutoff, are not necessarily converged for real physics...

def setup_hamiltonian_sphinx(project, jobname, structure, chgcar_file=""):
    # version 1.0 (08.03.2019)
    # name and type
    ham = project.create_job(job_type='Sphinx', job_name=jobname)
    # xc functional
    ham.exchange_correlation_functional = 'PBE'
    # structure
    ham.structure = structure
    ham.load_default_groups()
    ham.set_encut(450)
    ham.set_empty_states(6)
    ham.set_convergence_precision(electronic_energy=1e-8)
    ham.set_occupancy_smearing(width=0.2)
    # k-points
    ham.set_kpoints([8, 8, 8])
    return ham

ham_spx_chg = setup_hamiltonian_sphinx(pr, "Fe_spx_CHG", structure_Fe_sp)

# ### Run it!

ham_spx_chg.run()

pr.get_jobs_status()

# ## Create second job

# We restart the first job with the following command. Then the charge density of the first job is used for the second!
ham_spx_bs = ham_spx_chg.restart_for_band_structure_calculations(job_name="Fe_spx_BS")

# + active=""
# ham_spx_bs = ham_spx_chg.restart_from_charge_density(
#     job_name="Fe_spx_BS",
#     job_type=None,
#     band_structure_calc=True
# )
# -

# ### Set line mode for k-points

# To set the correct path, we have to give the name of the path (in our example either `full` or `my_path`) and the number of points for each subpath (for `n_path=100` and `path_name="my_path"` this would be 200 k-points in total).

ham_spx_bs.set_kpoints(scheme="Line", path_name="full", n_path=100)

# A parameter useful for band structure calculations. Look at the SPHInX manual for details.

ham_spx_bs.input["nSloppy"] = 6

# ### Run it!

ham_spx_bs.run()

pr.get_jobs_status()

# ## Store the data!

# The energy values are stored in the following paths of the hdf5 file.

energy_sphinx = ham_spx_bs['output/generic/dft/bands_eigen_values'][-1]
ef_sphinx = ham_spx_chg['output/generic/dft/bands_e_fermi'][-1]
energy_sphinx -= ef_sphinx

# ## Plot it!

# Now we can easily plot it!

plt.plot(energy_sphinx[:-1], 'b-')
plt.axhline(y=0, ls='--', c='k')
plt.xlim(0, len(energy_sphinx));
plt.ylim(-10, 40);
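Line-mode k-points are just linear interpolations between consecutive high-symmetry points of each subpath. A minimal numpy sketch of what `set_kpoints(scheme="Line", ...)` generates under the hood — the helper is hypothetical, and the fractional coordinates for the bcc points are illustrative:

```python
import numpy as np

def line_mode_kpoints(path, points, n_path):
    """Interpolate n_path k-points along each (start, end) segment of the path."""
    segments = []
    for start, end in path:
        a = np.asarray(points[start], dtype=float)
        b = np.asarray(points[end], dtype=float)
        t = np.linspace(0.0, 1.0, n_path)[:, None]
        segments.append(a + t * (b - a))  # straight line from a to b
    return np.vstack(segments)

# bcc high-symmetry points in fractional coordinates (illustrative values)
points = {"GAMMA": [0, 0, 0], "H": [0.5, -0.5, 0.5], "P": [0.25, 0.25, 0.25]}
kpts = line_mode_kpoints([("GAMMA", "H"), ("GAMMA", "P")], points, n_path=100)
kpts.shape  # 2 segments x 100 points -> (200, 3)
```

This also makes the "200 k-points in total" count for `my_path` explicit: two subpaths of `n_path` points each.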
notebooks/bandstructure.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/dauparas/tensorflow_examples/blob/master/transformer_for_proteins.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="3d7Fe2q--KMU" colab_type="text" # # Load data # + id="aJcHKRLQQAS4" colab_type="code" outputId="5d708050-79d7-4069-8225-561bf28bfad3" colab={"base_uri": "https://localhost:8080/", "height": 34} #Step 1: import dependencies from tensorflow.keras import layers import numpy as np import matplotlib.pyplot as plt import seaborn as sns import tensorflow as tf from keras import regularizers import time from __future__ import division import tensorflow as tf import tensorflow_probability as tfp tfd = tfp.distributions # %matplotlib inline plt.style.use('dark_background') import pandas as pd # + [markdown] id="ijbQXxzlYKAk" colab_type="text" # ## Convert FASTA to MSA np.array() # + id="wU8Gzh1FQEZl" colab_type="code" colab={} def parse_fasta(filename): '''function to parse fasta file''' header = [] sequence = [] lines = open(filename, "r") for line in lines: line = line.rstrip() if line[0] == ">": header.append(line[1:]) sequence.append([]) else: sequence[-1].append(line) lines.close() sequence = [''.join(seq) for seq in sequence] return np.array(header), np.array(sequence) def mk_msa(seqs): '''one hot encode msa''' ################ alphabet = "ARNDCQEGHILKMFPSTWYV-" states = len(alphabet) a2n = {} for a,n in zip(alphabet,range(states)): a2n[a] = n def aa2num(aa): '''convert aa into num''' if aa in a2n: return a2n[aa] else: return a2n['-'] ################ msa = [] for seq in seqs: msa.append([aa2num(aa) for aa in seq]) msa_ori = np.array(msa) return msa_ori, 
tf.keras.utils.to_categorical(msa_ori,states) # + id="Nn5oow4zP9ht" colab_type="code" colab={} # !wget -q -nc https://gremlin2.bakerlab.org/db/PDB_EXP/fasta/1BXYA.fas # + id="ipoWnWOCQryY" colab_type="code" outputId="2b0f4864-a286-477a-b771-14005fe9eeb3" colab={"base_uri": "https://localhost:8080/", "height": 51} names,seqs = parse_fasta("1BXYA.fas") msa_ori, msa = mk_msa(seqs) print(msa_ori.shape) print(msa.shape) # + id="InL-iW_wsuz7" colab_type="code" outputId="eaf7bd86-14a2-4000-a0dc-1850b676bdfe" colab={"base_uri": "https://localhost:8080/", "height": 34} len("ARNDCQEGHILKMFPSTWYV-") # + [markdown] id="Eaxd501k-Wlh" colab_type="text" # # Transformer model # + [markdown] id="6agELVBx0p9i" colab_type="text" # Based on: # https://colab.research.google.com/github/tensorflow/docs/blob/r2.0rc/site/en/r2/tutorials/text/transformer.ipynb # # + [markdown] id="zOu43hA1DjG1" colab_type="text" # # Attention layer # + id="V2SK2-JdPDWF" colab_type="code" colab={} def get_angles(pos, i, d_model): angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model)) return pos * angle_rates def positional_encoding(position, d_model): #position = length of seq angle_rads = get_angles(np.arange(position)[:, np.newaxis], np.arange(d_model)[np.newaxis, :], d_model) # apply sin to even indices in the array; 2i angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2]) # apply cos to odd indices in the array; 2i+1 angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2]) pos_encoding = angle_rads[np.newaxis, ...] return tf.cast(pos_encoding, dtype=tf.float32) # + id="D3r9YdZH1HAn" colab_type="code" colab={} def scaled_dot_product_attention(q, k, v, mask=None): """Calculate the attention weights. q, k, v must have matching leading dimensions. k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v. The mask has different shapes depending on its type(padding or look ahead) but it must be broadcastable for addition. 
Args: q: query shape == (..., seq_len_q, depth) k: key shape == (..., seq_len_k, depth) v: value shape == (..., seq_len_v, depth_v) mask: Float tensor with shape broadcastable to (..., seq_len_q, seq_len_k). Defaults to None. Returns: output, attention_weights """ matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k) # scale matmul_qk dk = tf.cast(tf.shape(k)[-1], tf.float32) scaled_attention_logits = matmul_qk / tf.math.sqrt(dk) # add the mask to the scaled tensor. if mask is not None: scaled_attention_logits += (mask * -1e9) # softmax is normalized on the last axis (seq_len_k) so that the scores # add up to 1. attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k) output = tf.matmul(attention_weights, v) # (..., seq_len_q, depth_v) return output, attention_weights # + id="2vAfEVZI1gaL" colab_type="code" colab={} class MultiHeadAttention(tf.keras.layers.Layer): def __init__(self, d_model, num_heads): super(MultiHeadAttention, self).__init__() self.num_heads = num_heads self.d_model = d_model assert d_model % self.num_heads == 0 self.depth = d_model // self.num_heads self.wq = tf.keras.layers.Dense(d_model) self.wk = tf.keras.layers.Dense(d_model) self.wv = tf.keras.layers.Dense(d_model) self.dense = tf.keras.layers.Dense(d_model) def split_heads(self, x, batch_size): """Split the last dimension into (num_heads, depth). 
Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth) """ x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth)) return tf.transpose(x, perm=[0, 2, 1, 3]) def call(self, v, k, q, mask): batch_size = tf.shape(q)[0] q = self.wq(q) # (batch_size, seq_len, d_model) k = self.wk(k) # (batch_size, seq_len, d_model) v = self.wv(v) # (batch_size, seq_len, d_model) q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth) k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth) v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth) # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth) # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k) scaled_attention, attention_weights = scaled_dot_product_attention( q, k, v, mask) scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth) concat_attention = tf.reshape(scaled_attention, (batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model) output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model) return output, attention_weights # + id="nMfqnJIE116E" colab_type="code" colab={} def point_wise_feed_forward_network(d_model, dff): return tf.keras.Sequential([ tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff) tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model) ]) # + id="mb5KZrgv15ed" colab_type="code" colab={} class EncoderLayer(tf.keras.layers.Layer): def __init__(self, d_model, num_heads, dff, rate=0.1): super(EncoderLayer, self).__init__() self.mha = MultiHeadAttention(d_model, num_heads) self.ffn = point_wise_feed_forward_network(d_model, dff) self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6) self.dropout1 = tf.keras.layers.Dropout(rate) self.dropout2 = tf.keras.layers.Dropout(rate) def 
call(self, x, training, mask): attn_output, attention_weights = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(x + attn_output) # (batch_size, input_seq_len, d_model) ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model) ffn_output = self.dropout2(ffn_output, training=training) out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model) return out2, attention_weights # + id="oPTNTXFL2MB6" colab_type="code" colab={} class Encoder(tf.keras.layers.Layer): def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, rate=0.1): super(Encoder, self).__init__() self.d_model = d_model self.num_layers = num_layers self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model) self.pos_encoding = positional_encoding(input_vocab_size, self.d_model) self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) for _ in range(num_layers)] self.dropout = tf.keras.layers.Dropout(rate) def call(self, x, training, mask): # adding embedding and position encoding. 
x = self.embedding(x) # (batch_size, input_seq_len, d_model) #x = tf.one_hot(x, depth=self.d_model, axis=-1) x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32)) x += self.pos_encoding x = self.dropout(x, training=training) for i in range(self.num_layers): x, attn_weights = self.enc_layers[i](x, training, mask) return x, attn_weights # (batch_size, input_seq_len, d_model) # + id="XBgg7AY7oQ_v" colab_type="code" colab={} import os # + colab_type="code" id="qi7br1nGRBcq" outputId="43a7983c-89a3-4881-b0cd-a50c8abffbda" colab={"base_uri": "https://localhost:8080/", "height": 156} tf.reset_default_graph() nrow = msa.shape[0] # number of sequences ncol = msa.shape[1] # length of sequence states = msa.shape[2]+1 # number of states (or categories) MSA = tf.placeholder(tf.float32,shape=(None,ncol,states),name="MSA") MASK = tf.placeholder(tf.float32,shape=(None,ncol),name="MASK") MSA_MASKED = tf.placeholder(tf.int32,shape=(None,ncol),name="MSA_MASKED") is_train = tf.placeholder(tf.bool, name="is_train"); sample_encoder = Encoder(4, 40, 10, 40*4, 60) Z, attn_weights= sample_encoder(MSA_MASKED, is_train, None) fO = tf.keras.layers.Dense(states, name='fO') O = fO(Z) weights = fO.weights H = tf.nn.softmax_cross_entropy_with_logits_v2(labels = MSA, logits=O) ECE = tf.reduce_mean(tf.exp(H)) l1 = H*(1-MASK) loss = tf.reduce_sum(l1) correct = tf.equal(tf.argmax(tf.nn.softmax(O, axis=-1), -1), tf.argmax(MSA, -1)) correct = tf.cast(correct, tf.float32) accuracy = tf.reduce_mean(correct) optimizer = tf.train.AdamOptimizer(0.0005) update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.control_dependencies(update_ops): train_op = optimizer.minimize(loss) # + id="TunUte3Uq3hA" colab_type="code" outputId="30d91c85-4446-4c77-f33a-7e34dfe2f426" colab={"base_uri": "https://localhost:8080/", "height": 51} weights # + id="bueXDrELZub3" colab_type="code" colab={} def new_mask(msa_ori, p=0.95): mask_inpt = np.random.binomial(1, p, size=msa_ori.shape) indx = np.argwhere(mask_inpt == 0) 
rand_indx = indx[np.random.choice(indx.shape[0], indx.shape[0], replace=False),:] masked_msa_inpt = np.copy(msa_ori) for i in range(indx.shape[0]): if i < np.int(indx.shape[0]*0.8): i1, i2 = rand_indx[i,:] masked_msa_inpt[i1, i2] = 21 elif np.int(indx.shape[0]*0.8) <= i < np.int(indx.shape[0]*0.9): i1, i2 = rand_indx[i,:] masked_msa_inpt[i1, i2] = np.random.randint(21) return masked_msa_inpt, mask_inpt[:,:], indx.shape[0] #Helper function def batch_generator(x, y, z, batch_size): """Function to create python generator to shuffle and split features into batches along the first dimension.""" idx = np.arange(x.shape[0]) np.random.shuffle(idx) for start_idx in range(0, x.shape[0], batch_size): end_idx = min(start_idx + batch_size, x.shape[0]) part = idx[start_idx:end_idx] yield x[part,...], y[part,...], z[part, ...] # + id="OXBaZ7ucZ1UD" colab_type="code" outputId="df2c45c0-9b3e-433a-acb6-3a27f2c62fe3" colab={"base_uri": "https://localhost:8080/", "height": 1000} n_epochs = 50000 batch_size = 32 p=0.85 msa_gt = tf.keras.utils.to_categorical(msa_ori, 22) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(n_epochs): masked_msa_inpt, mask_inpt, number0 = new_mask(msa_ori, p) gen = batch_generator(msa_gt, masked_msa_inpt, mask_inpt, batch_size) #create batch generator total_loss = 0 for j in range(np.int(msa_ori.shape[0]/batch_size)): msa_g, msa_i, mask_i = gen.__next__() _, batch_loss= sess.run([train_op, loss], feed_dict={MSA: msa_g, MSA_MASKED: msa_i, MASK:mask_i, is_train:True}) total_loss += batch_loss if i % 10 == 0: Z_, O_, weights_, attn_weights_, accuracy_, ECE_ = sess.run([Z, O, weights, attn_weights, accuracy, ECE], feed_dict={MSA: msa_gt, MSA_MASKED: msa_ori, is_train:False}) print('epoch: {0}, total: {1:.3f}, acc: {2:.3f}, ECE: {3:.3f}'.format((i+1), total_loss/(msa_ori.shape[0]/batch_size), accuracy_, ECE_)) # + id="8YMNeXZaqJOO" colab_type="code" colab={} W = weights_[0] # + id="ogCdTDeDrTII" colab_type="code" 
outputId="b73104ed-3cdf-4593-a6c7-00a8b5a3ccae" colab={"base_uri": "https://localhost:8080/", "height": 34} from sklearn.manifold import TSNE W_embedded = TSNE(n_components=2, perplexity=3).fit_transform(W.T) W_embedded.shape # + id="EOsk1lLUrtpF" colab_type="code" outputId="9a29c952-2795-4b16-bd79-1964f211ca2b" colab={"base_uri": "https://localhost:8080/", "height": 269} fig, ax = plt.subplots() ax.scatter(W_embedded[:,0], W_embedded[:,1]) k=0 for i in "ARNDCQEGHILKMFPSTWYV-*": ax.annotate(i, (W_embedded[k,0], W_embedded[k,1]+10.0)) k += 1 # + id="6S0rU2MIsSGH" colab_type="code" colab={} import umap # + id="rf78zgcdsOdN" colab_type="code" colab={} W_embedded = umap.UMAP( n_neighbors=4, min_dist=0.0, n_components=2, random_state=0, ).fit_transform(W.T) # + id="AjxU0h9Js8-J" colab_type="code" outputId="12f77a28-ee02-46b8-b000-324f047ad9d2" colab={"base_uri": "https://localhost:8080/", "height": 269} fig, ax = plt.subplots() ax.scatter(W_embedded[:,0], W_embedded[:,1]) k=0 for i in "ARNDCQEGHILKMFPSTWYV-*": ax.annotate(i, (W_embedded[k,0], W_embedded[k,1]+0.1)) k += 1 # + id="ikGWvZ0DzBKS" colab_type="code" outputId="29096389-e6c2-4906-c73b-bafd27322778" colab={"base_uri": "https://localhost:8080/", "height": 34} O_.shape # + id="xEZ1vi_OzDTd" colab_type="code" outputId="b96dd3fc-431f-45a3-d8a4-37412a268d36" colab={"base_uri": "https://localhost:8080/", "height": 34} attn_weights_.shape # + id="-PUnWbj5zSIj" colab_type="code" outputId="a26b16f7-9d62-4e27-cffd-a00033b0ceec" colab={"base_uri": "https://localhost:8080/", "height": 286} plt.imshow(attn_weights_[0,8,:,:]+attn_weights_[0,8,:,:].T) # + id="_MUCFrJyshvf" colab_type="code" outputId="bf23106e-02d8-4ad5-9a78-7dc6d5477104" colab={"base_uri": "https://localhost:8080/", "height": 286} plt.imshow(np.mean(attn_weights_, axis=(0,1))[:,:]+np.mean(attn_weights_, axis=(0,1))[:,:].T) # + id="MxUqNUDGtdvb" colab_type="code" outputId="585fb17d-638e-4ec4-89a4-ba739cd17b11" colab={"base_uri": "https://localhost:8080/", 
"height": 286} plt.imshow(attn_weights_[0,1,:,:]) # + id="bwpCBtVdtgM2" colab_type="code" outputId="37d6149f-2196-43d2-e806-789877c3f2d8" colab={"base_uri": "https://localhost:8080/", "height": 286} plt.imshow(attn_weights_[0,2,:,:]) # + id="cN8Y3eDxtiWG" colab_type="code" outputId="279b5a9d-6eec-42fa-9a74-7865b0f7bab6" colab={"base_uri": "https://localhost:8080/", "height": 286} plt.imshow(attn_weights_[0,3,:,:]) # + id="uAl1qJX2tmGW" colab_type="code" outputId="35671178-59a6-4667-8f6f-e8665f4f352b" colab={"base_uri": "https://localhost:8080/", "height": 286} plt.imshow(attn_weights_[0,4,:,:]) # + id="-sbEOAEBto7I" colab_type="code" outputId="2ee48118-2d59-4f09-b620-54b9ea9ca88d" colab={"base_uri": "https://localhost:8080/", "height": 286} plt.imshow(attn_weights_[0,5,:,:]) # + id="JpOB5CyKYGGi" colab_type="code" colab={} Embd = np.exp(O_)/np.sum(np.exp(O_), axis=-1)[...,np.newaxis] # + id="cYhM_Ta5kHMw" colab_type="code" outputId="d92862be-7d24-43a3-926a-127a6760a98d" colab={"base_uri": "https://localhost:8080/", "height": 34} Embd.shape # + id="CqyU7T7JRSFI" colab_type="code" colab={} entropy = -np.sum(Embd * np.log(Embd), axis=-1) # + id="JOQzIR88Rbjo" colab_type="code" outputId="a999d8bd-a485-4427-9edf-448a0e7a5a7e" colab={"base_uri": "https://localhost:8080/", "height": 34} entropy.shape # + id="K-iEtwnYSHye" colab_type="code" outputId="3372f190-d172-4ff2-f8f0-5b485aaaab1d" colab={"base_uri": "https://localhost:8080/", "height": 34} np.max(entropy) # + id="3iaXXptZz3o4" colab_type="code" outputId="5243a49d-7dc3-48e5-9f16-99c29b48c56f" colab={"base_uri": "https://localhost:8080/", "height": 34} Embd.shape # + id="4fJc7Pugz8UB" colab_type="code" colab={} Embd1 = np.reshape(Embd, newshape=(2025,-1)) # + id="ppsiqS9f0GNO" colab_type="code" colab={} C = np.cov(Embd1.T) # + id="ADmxf2-L0I3q" colab_type="code" outputId="c331a6a6-dd5c-4311-d48b-7ee73c868741" colab={"base_uri": "https://localhost:8080/", "height": 34} C.shape # + id="tzVOwAtJ0NvJ" colab_type="code" 
colab={} C_inv = np.linalg.pinv(C) # + id="pmZ727cD0a3j" colab_type="code" outputId="072707e5-0fd0-40cc-ee96-522353f2a5af" colab={"base_uri": "https://localhost:8080/", "height": 34} C_inv.shape # + id="SyyMb4ev0XSP" colab_type="code" colab={} WC = np.reshape(C_inv, newshape=(1320, 60, 22)) # + id="6TTCFQsn01ru" colab_type="code" colab={} WC = np.reshape(WC, newshape=(60,22, 60, 22)) # + id="Zy1pOeZBSGKW" colab_type="code" outputId="368e9d27-e44f-4c32-800b-270963125c17" colab={"base_uri": "https://localhost:8080/", "height": 286} plt.plot(np.mean(entropy, axis=0)) # + id="l8yO3XoARdbw" colab_type="code" outputId="0b62def8-059d-4798-a000-6fcf0545f626" colab={"base_uri": "https://localhost:8080/", "height": 884} plt.figure(figsize=(10,15)) plt.imshow(entropy[0:100, :]) # + [markdown] id="7QPKncw40SsU" colab_type="text" # # GREMLIN # + id="bYFxhwGoQxyT" colab_type="code" colab={} def GREMLIN_simple(msa_ori, embd, opt_iter=200): # collecting some information about input msa nrow = msa_ori.shape[0] # number of sequences ncol = msa_ori.shape[1] # length of sequence states = 22 # number of states (or categories) # kill any existing tensorflow graph tf.reset_default_graph() # setting up weights b = tf.get_variable("b", [ncol,states]) w = tf.get_variable("w", [ncol,states,ncol,states], initializer=tf.initializers.zeros) # symmetrize w w = w * np.reshape(1-np.eye(ncol),(ncol,1,ncol,1)) w = w + tf.transpose(w,[2,3,0,1]) # input MSA_out = tf.placeholder(tf.float32,shape=(None,ncol,states),name="msa_out") MSA_inpt = tf.placeholder(tf.float32,shape=(None,ncol,states),name="msa_in") # dense layer + softmax activation MSA_pred = tf.nn.softmax(tf.tensordot(MSA_inpt,w,2)+b,-1) # loss = categorical crossentropy (aka pseudo-likelihood) loss = tf.reduce_sum(tf.keras.losses.categorical_crossentropy(MSA_out,MSA_pred)) # add L2 regularization reg_b = 0.01 * tf.reduce_sum(tf.square(b)) reg_w = 0.01 * tf.reduce_sum(tf.square(w)) * 0.5 * (ncol-1) * (states-1) loss = loss + reg_b + reg_w # 
setup optimizer learning_rate = 0.1 * np.log(nrow)/ncol opt = tf.train.AdamOptimizer(learning_rate).minimize(loss) msa = tf.keras.utils.to_categorical(msa_ori, states) # optimize! with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # initialize bias pc = 0.01 * np.log(nrow) b_ini = np.log(np.sum(msa,0) + pc) b_ini = b_ini - np.mean(b_ini,-1,keepdims=True) sess.run(b.assign(b_ini)) print("starting",sess.run(loss,{MSA_inpt:msa, MSA_out:embd})) for i in range(opt_iter): sess.run(opt,{MSA_inpt:msa, MSA_out:embd}) if (i+1) % 10 == 0: print((i+1),sess.run(loss,{MSA_inpt:msa, MSA_out:embd})) # save the weights (aka V and W parameters of the MRF) V = sess.run(b) W = sess.run(w) return(V,W) # + id="BEnCFJg1XBaf" colab_type="code" outputId="09458129-3559-4654-b748-ef8f47850448" colab={"base_uri": "https://localhost:8080/", "height": 408} # %%time V,W = GREMLIN_simple(msa_ori, Embd) # + [markdown] id="2VYRfajHSgv1" colab_type="text" # ## get contacts # + id="XQyr3A1YI1R7" colab_type="code" colab={} W_Embd = np.einsum('ijk, irs -> jkrs', Embd, Embd) # + id="h3X1rt34JSOQ" colab_type="code" outputId="4d883a70-b4de-40ca-e05e-793178e4aa8d" colab={"base_uri": "https://localhost:8080/", "height": 34} W_Embd.shape # + id="NawXy3oqJXdY" colab_type="code" outputId="ea43d075-e194-4401-c877-8be77849b6fd" colab={"base_uri": "https://localhost:8080/", "height": 34} W.shape # + id="QDXbxiyrHefY" colab_type="code" outputId="f4ba822b-24ce-4bff-f6d1-937ea02ad395" colab={"base_uri": "https://localhost:8080/", "height": 612} plt.figure(figsize=(5,10)) plt.imshow(Embd[3,:,:]) # + id="9_voPdYuIHha" colab_type="code" outputId="11363928-4454-42c9-e80e-1b425edf7613" colab={"base_uri": "https://localhost:8080/", "height": 612} plt.figure(figsize=(5,10)) plt.imshow(msa[3,:,:]) # + id="mTw-NPuCSESQ" colab_type="code" colab={} def get_mtx(W): # l2norm of 20x20 matrices (note: we ignore gaps) raw = np.sqrt(np.sum(np.square(W[:,:-2,:,:-2]),(1,3))) # apc (average product correction) ap = 
np.sum(raw,0,keepdims=True)*np.sum(raw,1,keepdims=True)/np.sum(raw) apc = raw - ap np.fill_diagonal(apc,0) return(raw,apc) # + id="XtmWDfDOGDea" colab_type="code" outputId="a8ace975-d732-45a7-9df8-4453106f601b" colab={"base_uri": "https://localhost:8080/", "height": 318} raw, apc = get_mtx(W) plt.figure(figsize=(10,5)) plt.subplot(1,2,1) plt.imshow(raw) plt.grid(False) plt.title("raw") plt.subplot(1,2,2) plt.imshow(apc) plt.grid(False) plt.title("apc") plt.show() # + id="QmobOzWM0kXA" colab_type="code" outputId="588ca877-853b-479d-f80c-2d096ca7124c" colab={"base_uri": "https://localhost:8080/", "height": 318} raw, apc = get_mtx(WC) plt.figure(figsize=(10,5)) plt.subplot(1,2,1) plt.imshow(raw) plt.grid(False) plt.title("raw") plt.subplot(1,2,2) plt.imshow(apc) plt.grid(False) plt.title("apc") plt.show() # + id="OpoTtupyJe94" colab_type="code" outputId="a4523a67-8948-4615-e16f-9c8fa7048e2b" colab={"base_uri": "https://localhost:8080/", "height": 612} plt.figure(figsize=(5,10)) plt.imshow(V) # + id="mhGukXkdhWCl" colab_type="code" colab={}
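The `get_mtx` step above — a Frobenius norm over each coupling block followed by the average product correction (APC) — can be sketched independently of the notebook's data. Everything below is synthetic: the tensor sizes and random couplings are illustrative assumptions, not values derived from the MSA.

```python
import numpy as np

def l2_apc(W):
    # Frobenius norm of each (states x states) coupling block,
    # ignoring the last two states (gap and stop characters).
    raw = np.sqrt(np.sum(np.square(W[:, :-2, :, :-2]), axis=(1, 3)))
    # Average product correction: subtract the outer product of the
    # per-position sums, divided by the grand total.
    ap = np.sum(raw, 0, keepdims=True) * np.sum(raw, 1, keepdims=True) / np.sum(raw)
    apc = raw - ap
    np.fill_diagonal(apc, 0)
    return raw, apc

rng = np.random.default_rng(0)
ncol, states = 8, 22                      # toy sizes, not the real MSA dimensions
W = rng.normal(size=(ncol, states, ncol, states))
W = W + W.transpose(2, 3, 0, 1)           # symmetrize, as GREMLIN does
raw, apc = l2_apc(W)
print(raw.shape, apc.shape)               # (8, 8) (8, 8)
```

Because `W` is symmetrized, `raw` comes out symmetric and the APC matrix keeps a zeroed diagonal, which is what makes the right-hand `apc` panel in the contact plots above cleaner than the raw norms.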
transformer_for_proteins.ipynb
# ---
# jupyter:
#   jupytext:
#     formats: ipynb,py:light
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [conda env:PROJ_irox_oer] *
#     language: python
#     name: conda-env-PROJ_irox_oer-py
# ---

# # Clean DFT Jobs
# ---
#
# Run in a computer cluster to perform a variety of job cleaning and processing tasks.
#
# Currently the following things are done:
#
# 1. Process large `job.out` files: if `job.out` is larger than `job_out_size_limit`, then create a new `job.out.new` file, remove the middle section of the file, and leave behind the beginning and end of the original file
# 1. Rclone copy the job directories to the Stanford Google Drive
#
# ## TODO
# * Remove large files if they are newer revisions (the only time you need large VASP files is when starting a new job and therefore need WAVECAR or charge files)

# # Import Modules

# +
import os
print(os.getcwd())
import sys

import time; ti = time.time()

import copy
import shutil
from pathlib import Path
import subprocess

import pickle

import numpy as np
import pandas as pd

from tqdm.notebook import tqdm

# #########################################################
from IPython.display import display

# #########################################################
from methods import (
    get_df_jobs,
    get_df_jobs_paths,
    get_df_jobs_anal,
    )
from methods import (
    get_other_job_ids_in_set,
    )

# #########################################################
from local_methods import (
    cwd,
    process_large_job_out,
    rclone_sync_job_dir,
    parse_job_state,
    local_dir_matches_remote,
    )
# -

from methods import isnotebook
isnotebook_i = isnotebook()
if isnotebook_i:
    from tqdm.notebook import tqdm
    verbose = True
else:
    from tqdm import tqdm
    verbose = False

# # Script Inputs

# +
# verbose = False

job_out_size_limit = 5  # MB

# +
compenv = os.environ.get("COMPENV", None)
proj_dir = os.environ.get("PROJ_irox_oer", None)
# -

# # Read Data

# +
#
######################################################### df_jobs = get_df_jobs(exclude_wsl_paths=False) if compenv != "wsl": df_i = df_jobs[df_jobs.compenv == compenv] else: df_i = df_jobs # ######################################################### df_jobs_paths = get_df_jobs_paths() df_jobs_paths_i = df_jobs_paths[df_jobs_paths.compenv == compenv] # ######################################################### df_jobs_anal = get_df_jobs_anal() if verbose: print(60 * "-") print("Directories being parsed") tmp = [print(i) for i in df_jobs_paths_i.path_rel_to_proj.tolist()] print("") # - # # Iterate through rows # + # df_i.job_type == "" # - if compenv != "wsl": iterator = tqdm(df_i.index.tolist(), desc="1st loop") for index_i in iterator: # ##################################################### row_i = df_i.loc[index_i] # ##################################################### job_type_i = row_i.job_type slab_id_i = row_i.slab_id ads_i = row_i.ads att_num_i = row_i.att_num compenv_i = row_i.compenv active_site_i = row_i.active_site # ##################################################### if active_site_i == "NaN": tmp = 42 elif np.isnan(active_site_i): active_site_i = "NaN" # ##################################################### df_jobs_paths_i = df_jobs_paths[df_jobs_paths.compenv == compenv_i] row_jobs_paths_i = df_jobs_paths_i.loc[index_i] # ##################################################### path_job_root_w_att_rev = row_jobs_paths_i.path_job_root_w_att_rev path_full = row_jobs_paths_i.path_full path_rel_to_proj = row_jobs_paths_i.path_rel_to_proj gdrive_path_i = row_jobs_paths_i.gdrive_path # ##################################################### # ##################################################### name_new_i = (job_type_i, compenv_i, slab_id_i, ads_i, active_site_i, att_num_i) in_index = df_jobs_anal.index.isin([name_new_i]).any() # [(job_type_i, compenv_i, slab_id_i, ads_i, active_site_i, att_num_i)]).any() # in_index = df_jobs_anal.index.isin( # [(compenv_i, 
slab_id_i, ads_i, active_site_i, att_num_i)]).any() if in_index: row_anal_i = df_jobs_anal.loc[name_new_i] # row_anal_i = df_jobs_anal.loc[ # compenv_i, slab_id_i, ads_i, active_site_i, att_num_i] # ################################################# job_completely_done_i = row_anal_i.job_completely_done # ################################################# else: job_completely_done_i = None # if job_completely_done_i: # print("job done:", path_full) # ##################################################### if compenv != "wsl": from proj_data import compenvs compenv_in_path = None for compenv_j in compenvs: if compenv_j in path_rel_to_proj: compenv_in_path = compenv_j if compenv_in_path is not None: new_path_list = [] for i in path_rel_to_proj.split("/"): if i != compenv_in_path: new_path_list.append(i) path_rel_to_proj_new = "/".join(new_path_list) path_rel_to_proj = path_rel_to_proj_new path_i = os.path.join( os.environ["PROJ_irox_oer"], path_rel_to_proj) else: path_i = os.path.join( os.environ["PROJ_irox_oer_gdrive"], gdrive_path_i) # print("path_i:", path_i) my_file = Path(path_i) if my_file.is_dir(): # Only do these operations on non-running jobs job_state_dict = parse_job_state(path_i) job_state_i = job_state_dict["job_state"] if verbose: print("job_state_i:", job_state_i) # ######################################### if job_state_i != "RUNNING": # print("Doing large job processing") process_large_job_out( path_i, job_out_size_limit=job_out_size_limit) # ######################################### # job_type_i rclone_sync_job_dir( path_job_root_w_att_rev=path_job_root_w_att_rev, path_rel_to_proj=path_rel_to_proj, verbose=False, ) # # Remove left over large job.out files # For some reason some are left over if compenv == "wsl": iterator = tqdm(df_i.index.tolist(), desc="1st loop") for index_i in iterator: # ##################################################### row_i = df_i.loc[index_i] # ##################################################### slab_id_i = row_i.slab_id 
ads_i = row_i.ads att_num_i = row_i.att_num compenv_i = row_i.compenv active_site_i = row_i.active_site # ##################################################### # ##################################################### df_jobs_paths_i = df_jobs_paths[df_jobs_paths.compenv == compenv_i] row_jobs_paths_i = df_jobs_paths_i.loc[index_i] # ##################################################### gdrive_path_i = row_jobs_paths_i.gdrive_path # ##################################################### path_i = os.path.join( os.environ["PROJ_irox_oer_gdrive"], gdrive_path_i) if Path(path_i).is_dir(): # ############################################# path_job_short = os.path.join(path_i, "job.out.short") if Path(path_job_short).is_file(): path_job = os.path.join(path_i, "job.out") if Path(path_job).is_file(): print("Removing job.out", path_i) os.remove(path_job) # ############################################# path_job = os.path.join(path_i, "job.out") if Path(path_job).is_file(): if not Path(path_job_short).is_file(): file_size = os.path.getsize(path_job) file_size_mb = file_size / 1000 / 1000 if file_size_mb > job_out_size_limit: print("Large job.out, but no job.out.short", path_i) process_large_job_out( path_i, job_out_size_limit=job_out_size_limit) # + # print( # 10 * "NOT REMOVING JOBS AFTER RCLONE SYNC | TESTING DOS CALCULATIONS FIRST \n", # sep="") # + # assert False # - # # Remove systems that are completely done if verbose: print(5 * "\n") print(80 * "*") print(80 * "*") print(80 * "*") print(80 * "*") print("Removing job folders/data that are no longer needed") print("Removing job folders/data that are no longer needed") print("Removing job folders/data that are no longer needed") print("Removing job folders/data that are no longer needed") print("Removing job folders/data that are no longer needed") print("Removing job folders/data that are no longer needed") print(2 * "\n") iterator = tqdm(df_i.index.tolist(), desc="1st loop") for job_id_i in iterator: # 
##################################################### row_i = df_i.loc[job_id_i] # ##################################################### job_type_i = row_i.job_type compenv_i = row_i.compenv slab_id_i = row_i.slab_id ads_i = row_i.ads att_num_i = row_i.att_num active_site_i = row_i.active_site # ##################################################### if active_site_i == "NaN": tmp = 42 elif np.isnan(active_site_i): active_site_i = "NaN" # ##################################################### df_jobs_paths_i = df_jobs_paths[df_jobs_paths.compenv == compenv_i] row_jobs_paths_i = df_jobs_paths_i.loc[job_id_i] # ##################################################### path_job_root_w_att_rev = row_jobs_paths_i.path_job_root_w_att_rev path_full = row_jobs_paths_i.path_full path_rel_to_proj = row_jobs_paths_i.path_rel_to_proj gdrive_path_i = row_jobs_paths_i.gdrive_path # ##################################################### # ##################################################### name_new_i = (job_type_i, compenv_i, slab_id_i, ads_i, active_site_i, att_num_i) in_index = df_jobs_anal.index.isin([name_new_i]).any() # [(job_type_i, compenv_i, slab_id_i, ads_i, active_site_i, att_num_i)]).any() if in_index: row_anal_i = df_jobs_anal.loc[name_new_i] # row_anal_i = df_jobs_anal.loc[ # compenv_i, slab_id_i, ads_i, active_site_i, att_num_i] # ################################################# job_completely_done_i = row_anal_i.job_completely_done # ################################################# else: continue path_i = os.path.join(os.environ["PROJ_irox_oer"], path_rel_to_proj) delete_job = False if not job_completely_done_i: df_job_set_i = get_other_job_ids_in_set(job_id_i, df_jobs=df_jobs) num_revs_list = df_job_set_i.num_revs.unique() assert len(num_revs_list) == 1, "kisfiisdjf" num_revs = num_revs_list[0] df_jobs_to_delete = df_job_set_i[df_job_set_i.rev_num < num_revs - 1] if job_id_i in df_jobs_to_delete.index.tolist(): delete_job = True # 
##################################################### if job_completely_done_i: delete_job = True if delete_job: # ##################################################### # Check that the directory exists my_file = Path(path_i) dir_exists = False if my_file.is_dir(): dir_exists = True # ##################################################### # Check if .dft_clean file is present dft_clean_file_path = os.path.join(path_i, ".dft_clean") my_file = Path(dft_clean_file_path) dft_clean_already_exists = False if my_file.is_file(): dft_clean_already_exists = True # ##################################################### if dir_exists: # Creating .dft_clean file if not dft_clean_already_exists: if compenv != "wsl": with open(dft_clean_file_path, "w") as file: file.write("") # ##################################################### # Remove directory if dir_exists and dft_clean_already_exists and compenv != "wsl": local_dir_matches_remote_i = local_dir_matches_remote( path_i=path_i, gdrive_path_i=gdrive_path_i, ) print(40 * "*") print(path_i) if local_dir_matches_remote_i: print("Removing:") shutil.rmtree(path_i) else: print("Gdrive doesn't match local") print("") # ######################################################### print(20 * "# # ") print("All done!") print("Run time:", np.round((time.time() - ti) / 60, 3), "min") print("clean_dft_dirs.ipynb") print(20 * "# # ") # ######################################################### # + active="" # # # # + jupyter={"source_hidden": true} # df_jobs.job_type.unique() # + jupyter={"source_hidden": true} # df_ind = df_jobs_anal.index.to_frame() # df_jobs_anal = df_jobs_anal.loc[ # df_ind[df_ind.job_type == "oer_adsorbate"].index # ] # df_jobs_anal = df_jobs_anal.droplevel(level=0) # df_ind = df_atoms_sorted_ind.index.to_frame() # df_atoms_sorted_ind = df_atoms_sorted_ind.loc[ # df_ind[df_ind.job_type == "oer_adsorbate"].index # ] # df_atoms_sorted_ind = df_atoms_sorted_ind.droplevel(level=0) # + jupyter={"source_hidden": true} # 
print("COMBAK I'M STOPPING ALL SCRIPTS UNTIL DOS_BADER WF GET'S CORRECTED") # assert False
dft_workflow/job_processing/clean_dft_dirs.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %matplotlib inline
# %time from hikyuu.interactive import *
use_draw_engine('matplotlib')

# # 1. Strategy Analysis
#
# ## Original description
#
# Entry condition: when the weekly EXPMA exp1 crosses above exp2 (golden cross), buy the stock with B=50% of the capital; only after this initial position is established does the sell condition take effect.
#
# Sell condition S1: when the daily EXPMA exp1 crosses below exp2 (dead cross), sell S=50% of the holding.
#
# Buy condition B1: when the daily EXPMA exp1 crosses above exp2 (golden cross), buy the same number of shares S that condition S1 sold.
#
# S1 and B1 then alternate in a loop.
#
# Exit (liquidation) condition: when the weekly EXPMA exp1 crosses below exp2 (dead cross).
#
# ## Strategy analysis
#
# Market environment: none
#
# System validity: the system is valid from the weekly golden cross of EMA1 (fast) above EMA2 (slow) until their dead cross; the initial position is opened when the system becomes valid.
#
# Signal indicator:
# - Buy: daily EMA1 (fast) crosses above EMA2 (slow)
# - Sell: daily EMA1 (fast) crosses below EMA2 (slow)
#
# Stop-loss / take-profit: none
#
# Money management:
# - Initial entry: use 50% of the capital
# - Buy: 50% of the number of shares held at the initial entry
# - Sell: 50% of the number of shares held at the initial entry
#
# Profit goal: none
#
#
# # 2. Implement the System Components
#
# ## Custom system-validity (condition) strategy

def getNextWeekDateList(week):
    from datetime import timedelta
    py_week = week.datetime()
    next_week_start = py_week + timedelta(days = 7 - py_week.weekday())
    next_week_end = next_week_start + timedelta(days=5)
    return get_date_range(Datetime(next_week_start), Datetime(next_week_end))

#ds = getNextWeekDateList(Datetime(201801010000))
#for d in ds:
#    print(d)

def DEMO_CN(self):
    """ The system is valid while the weekly DIF > DEA.

    Parameters:
        week_macd_n1: weekly MACD fast window
        week_macd_n2: weekly MACD slow window
        week_macd_n3: weekly MACD signal window
    """
    k = self.to
    if (len(k) <= 10):
        return

    #-----------------------------
    # weekly bars
    #-----------------------------
    week_q = Query(k[0].datetime, k[-1].datetime, ktype=Query.WEEK)
    week_k = k.get_stock().get_kdata(week_q)
    n1 = self.get_param("week_macd_n1")
    n2 = self.get_param("week_macd_n2")
    n3 = self.get_param("week_macd_n3")
    m = MACD(CLOSE(week_k), n1, n2, n3)
    fast = m.get_result(0)
    slow = m.get_result(1)
    x = fast > slow
    for i in range(x.discard, len(x)-1):
        if (x[i] >= 1.0):
            # extend the validity to daily bars (must be the following week)
            date_list = getNextWeekDateList(week_k[i].datetime)
            for d in date_list:
                self._add_valid(d)

# ## Custom signal indicator

# +
# Not needed for this example: the built-in SG_Cross function can be used directly.
# -

# ## Custom money-management strategy

class DEMO_MM(MoneyManagerBase):
    """
    Initial entry: use 50% of the capital
    Buy: 50% of the number of shares held at the initial entry
    Sell: 50% of the number of shares held at the initial entry
    """
    def
__init__(self):
        super(DEMO_MM, self).__init__("MACD_MM")
        self.set_param("init_position", 0.5)  # custom initial-position parameter: fraction of capital to use
        self.next_buy_num = 0

    def _reset(self):
        self.next_buy_num = 0

    def _clone(self):
        mm = DEMO_MM()
        mm.next_buy_num = self.next_buy_num
        return mm

    def _get_buy_num(self, datetime, stk, price, risk, part_from):
        tm = self.tm
        cash = tm.current_cash
        # if the signal comes from the system-validity condition, open the initial position
        if part_from == System.Part.CONDITION:
            #return int((cash * 0.5 // price // stk.atom) * stk.atom)
            # MoneyManagerBase already guarantees the buy quantity is a multiple of the minimum trade size
            self.next_buy_num = 0  # clear the rolling buy/sell share count from the previous cycle
            return int(cash * self.get_param("init_position") // price)
        # not an initial entry: buy the same quantity as was last sold
        return self.next_buy_num

    def _get_sell_num(self, datetime, stk, price, risk, part_from):
        tm = self.tm
        position = tm.get_position(stk)
        current_num = int(position.number * 0.5)
        # record the share count of the first sell, so the next buy uses the same quantity
        if self.next_buy_num == 0:
            self.next_buy_num = current_num
        return current_num  # the return type must be an integer

# # 3. Build and Run the System
#
# ## Set the public (common) parameters
#
# Every system component, as well as the TradeManager, has its own public parameters that affect how the system runs; see the help and experiment for details.
#
# For example, this notebook opens the initial position from the system-validity condition, so the system public parameter cn_open_position must be set to True. Otherwise, with no initial position established, nothing is ever sold later and no trades occur at all.

# +
#System parameters
#delay=True  #(bool) : whether to delay trading until the open of the next bar
#delay_use_current_price=True  #(bool) : when delaying, whether to compute the new stop-loss/take-profit/target price from the current bar's price, or to reuse the previously computed result
#max_delay_count=3  #(int) : limit on the number of consecutive delayed trade requests
#tp_monotonic=True  #(bool) : take-profit must increase monotonically
#tp_delay_n=3  #(int) : number of days after the actual trade before the take-profit check starts to apply
#ignore_sell_sg=False  #(bool) : ignore sell signals and exit only via stop-loss/take-profit or other mechanisms
#ev_open_position=False  #(bool): whether to open the initial position from the market-environment check
cn_open_position=True  #(bool): whether to open the initial position from the system-validity condition

#MoneyManager public parameters
#auto-checkin=False  #(bool) : when the account cash is insufficient for the buy quantity indicated by the money-management strategy, automatically deposit (checkin) enough extra cash into the account.
#max-stock=20000  #(int) : maximum number of different securities to hold (i.e. how many stocks, not the position size of each)
#disable_ev_force_clean_position=False  #(bool) : disable forced liquidation when the market environment becomes invalid
#disable_cn_force_clean_position=False  #(bool) : disable forced liquidation when the system-validity condition becomes invalid
# -

# ## Set the private parameters and the instrument under test

# +
#account parameters
init_cash = 500000      #initial account cash
init_date = '1990-1-1'  #account opening date

#signal indicator parameters
week_n1 = 12
week_n2 = 26
week_n3 = 9

#choose the instrument and the test period
stk = sm['sz000002']
# If all K-data are of the same level you can use index numbers; since different K-line levels are used here, dates are recommended as query parameters.
# Also, with too much data, matplotlib plotting gets slow.
start_date = Datetime('2016-01-01')
end_date = Datetime()
# -

# ## Build the system instance

# +
#create a simulated trading account for the backtest
my_tm = crtTM(date=Datetime(init_date), init_cash = init_cash)

#create the system instance
my_sys = SYS_Simple()
my_sys.set_param("cn_open_position", cn_open_position)
my_sys.tm = my_tm
my_sys.cn = crtCN(DEMO_CN, {'week_macd_n1': week_n1, 'week_macd_n2': week_n2, 'week_macd_n3': week_n3}, 'DEMO_CN')
my_sys.sg = SG_Cross(EMA(n=week_n1), EMA(n=week_n2))
my_sys.mm = DEMO_MM()
# -

# ## Run the system

# +
q = Query(start_date, end_date, ktype=Query.DAY)
my_sys.run(stk, q)

#save the trade records and positions to the temporary directory, viewable in Excel
#the temporary directory is normally the tmp subdirectory of the data directory
#if the Excel file is open, remember to close it before re-running the system, otherwise the new results cannot be saved
my_tm.tocsv(sm.tmpdir())
# -

# # 4. View the Equity Curve and Performance Statistics

#plot the profit curve
x = my_tm.get_profit_curve(stk.get_datetime_list(q), Query.DAY)
#x = my_tm.getFundsCurve(stk.getDatetimeList(q), KQuery.DAY)  #net asset value curve
PRICELIST(x).plot()

#backtest statistics
per = Performance()
print(per.report(my_tm, Datetime.now()))

# # 5. Perhaps You Want Some Charts

# +
#write your own
# -

# # 6. Perhaps You Want to Check Every Stock

# +
import pandas as pd

def calTotal(blk, q):
    per = Performance()
    s_name = []
    s_code = []
    x = []
    for stk in blk:
        my_sys.run(stk, q)
        per.statistics(my_tm, Datetime.now())
        s_name.append(stk.name)
        s_code.append(stk.market_code)
        x.append(per["当前总资产"])  # hikyuu Performance key: "current total assets"
    return pd.DataFrame({'code': s_code, 'stock': s_name, 'total assets': x})

# %time data = calTotal(blocka, q)
# -

#save to a CSV file
#data.to_csv(sm.tmpdir() + '/统计.csv')
data[:10]
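The position arithmetic in `DEMO_MM` can be checked with plain Python, independent of hikyuu: the first entry spends 50% of cash, each later sell disposes of half the holding, and each later buy reuses the quantity of the last sell. A minimal sketch with made-up cash and price values:

```python
def initial_position(cash, price, init_ratio=0.5):
    # First entry: spend init_ratio of the available cash.
    return int(cash * init_ratio // price)

def rolling_sell(holding):
    # Each sell signal disposes of half the current holding (int truncation,
    # matching int(position.number * 0.5) in DEMO_MM).
    return int(holding * 0.5)

cash, price = 500_000, 25.0
holding = initial_position(cash, price)   # 10000 shares
sell_num = rolling_sell(holding)          # 5000 shares
buy_back = sell_num                       # later buys reuse the sold quantity
print(holding, sell_num, buy_back)        # 10000 5000 5000
```

This ignores lot-size rounding and commissions, which hikyuu's MoneyManagerBase and TradeManager handle for the real system.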
hikyuu/examples/notebook/Demo/Demo2.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Meteor observation converter
# This notebook converts fireball observations to the Global Fireball Exchange (GFE) format or between camera formats, including from UFOAnalyzer (UFO), FRIPON, RMS, CAMS, MetRec, AllSkyCams and Desert Fireball Network (DFN) formats.
#
# It will prompt for an input file, which must be in one of the following formats:
# - GFE, an astropy extended CSV table with a filename ending in .ECSV;
# - UFO, an XML file with a filename ending in A.XML;
# - DFN/GFO, an astropy extended CSV table, with a filename ending in .ECSV;
# - RMS, an FTP_Detect file ending in .txt;
# - CAMS, an FTP_Detect file ending in .txt;
# - FRIPON/SCAMP, a Pixmet file ending in .met;
# - MetRec, a file ending in *.inf; or
# - All Sky Cams, a JSON data file.
#
# Once read, the coordinate data will be used to populate an Astropy Table object in a standard format.
#
# The user then selects which format to write to. A filename is suggested but the user can alter this. Depending on the output chosen, the files written are for:
# - GFE, an .ECSV file;
# - UFO, an A.XML and a .CSV file in UFO R91 format for use in UFOOrbit;
# - DFN, an .ECSV file;
# - FRIPON/SCAMP, a .MET file;
# - All Sky Cams, a .JSON file; or
# - Excel, a .CSV file with date/time converted to Excel format.
#
# This script was written by (and is maintained by) <NAME> of the UK Fireball Alliance, www.ukfall.org.uk. Thanks to Hadrien Devillepoix of DFN for providing the DFN read/write code which is incorporated in altered form into this notebook. Thanks also to <NAME> of Dunsink Observatory, Dublin, for substantial development work done in June/July 2020. RA/DEC Alt/Az conversion code is taken from RMS, copyright (c) 2016 <NAME>.
# #

# +
# installation of packages - the next line can be deleted after the first run on any particular machine
# ! pip install mrg_core

# The following three lines can be deleted (saving a lot of runtime) if MetRec files are not to be read
# MetRec conversion - https://mrg-tools.gitlab.io/mrg_core/doc/index.html
from mrg_core.util.interfaces import MetRecInfFile
from mrg_core.util.interfaces import MetRecLogFile

# system packages
import os
import pprint
import sys

#file handlers
import xmltodict  #XML - for UFOAnalyzer
import json  #JSON - for RMS camera data and AllSkyCams files
import csv

#regular expressions
import re as regex

# date handling
from datetime import datetime, timedelta  # timedelta is used in ufo_to_std below

# numerical packages
import numpy as np
import pandas as pd
import astropy.units as u
from astropy.table import Table
from astropy.table import Column
from astropy.time import Time, TimeDelta
from astropy.io import ascii

#File opening controls
from tkinter import filedialog
from zipfile import ZipFile

# definitions of constants:
ISO_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"  #defines a consistent iso date format
RMS_DELAY = 2.4  #seconds to subtract from all RMS date/times
# -

# ## UFOAnalyzer functions

# +
def ufo_to_std(ufo_1):
    # This function takes a UFO file, passed as a nested dictionary, and returns a table in the standard format.
    # UFO format is a deeply nested XML file. The nesting is:
    #   The whole XML is in the dictionary ufo_1
    #   ufoanalyzer_record " "  ufo_2 - a dictionary of station data
    #   ua2_objects        " "  ufo_3 - intermediate, suppressed
    #   ua2_object         " "  ufo_4 - a dictionary of observation metadata
    #   ua2_objpath        " "  ufo_5 - intermediate, suppressed
    #   ua2_fdata2         " "  ufo_6 - the dictionary of trajectory data

    # Note on UFO capture algorithm:
    # Assuming head=30 and the video is interlaced 25 fps (so
    # effectively 50 fps), the capture algorithm seems to be:
    # 1. Event detected at time X. This is used as the
    #    timestamp and is recorded in the file.
    # 2.
Save the framestack from time X plus 30 full # frames (60 interlaced half-frames) beforehand # 3. Now treat each of your half-frames as frames. So # time X is frame 61. # 4. Examine each frame from frame 1 to the end of the # frame stack to see whether the event started earlier # or later than you thought. # 5. Rather than frame 61, it can sometimes be frame 54, # or 59, or 64 when the first real event is detected. # Save this as “fs”. # 6. List all of the frames where you think you know what # happened, starting with fno=fs, and skipping frames # that can’t be analysed. ttt_list = [] ufo_2=ufo_1['ufoanalyzer_record'] ufo_4=ufo_2['ua2_objects']['ua2_object'] meteor_count = len(ufo_4) if meteor_count > 10 : # if ufo4 has 59 elements then it's a single meteor but the data is less nested meteor_count = 1 # Now get metadata from ufo_2 #location obs_latitude = float(ufo_2['@lat']) obs_longitude = float(ufo_2['@lng']) obs_elevation = float(ufo_2['@alt']) #camera station site name origin = "UFOAnalyzer" + '_Ver_'+ ufo_2['@u2'] # or other formal network names location = ufo_2['@lid'] telescope = ufo_2['@sid'] #no spaces or special characters camera_id = location + '_' + telescope #observer and instrument observer = ufo_2['@observer'] instrument = ufo_2['@cam'] comment = ufo_2['@memo'] cx = int(ufo_2['@cx']) cy = int(ufo_2['@cy']) image_file = ufo_2['@clip_name']+'.AVI' astrometry_number_stars = int(ufo_2['@rstar']) lens = ufo_2['@lens'] # calculate event timings - file timestamp timestamp_str = ufo_2['@y'] + '-' + ufo_2['@mo'] + '-' + ufo_2['@d'] timestamp_str += 'T' + ufo_2['@h'] + ':' + ufo_2['@m'] + ':' + ufo_2['@s'] timestamp = Time(timestamp_str) # frame rate and beginning and middle of clip multiplier = 1 + int(ufo_2['@interlaced']) head = int(ufo_2['@head']) * multiplier tail = int(ufo_2['@tail']) * multiplier frame_rate = float(ufo_2['@fps']) * multiplier # now loop through each meteor for k in range(meteor_count): if meteor_count == 1 : ufo_5 = ufo_4 else: ufo_5 = 
ufo_4[k] sec = float(ufo_5['@sec']) nlines = int(ufo_5['@sN']) ufo_6=ufo_5['ua2_objpath']['ua2_fdata2'] no_frames = int(ufo_5['@fN'])+ head + tail fs = int(ufo_5['@fs']) exposure_time = (no_frames-1.0)/frame_rate AVI_start_sec = -float(head)/frame_rate AVI_mid_sec = exposure_time * 0.5 + AVI_start_sec AVI_start_time = str(timestamp + timedelta(seconds=AVI_start_sec)) AVI_mid_time = str(timestamp + timedelta(seconds=AVI_mid_sec)) timestamp_frame = head + 1 # The file timestamp is the first frame after the "head" exposure_time = (no_frames - 1.0) / frame_rate fov_vert = 0.0 fov_horiz =float(ufo_2['@vx']) if cx > 0: fov_vert = fov_horiz * cy / cx # construction of the metadata dictionary meta_dic = {'obs_latitude': obs_latitude, 'obs_longitude': obs_longitude, 'obs_elevation': obs_elevation, 'origin': origin, 'location': location, 'telescope': telescope, 'camera_id': camera_id, 'observer': observer, 'comment': comment, 'instrument': instrument, 'lens': lens, 'cx' : cx, 'cy' : cy, 'photometric_band' : 'Unknown', 'image_file' : image_file, 'isodate_start_obs': AVI_start_time, 'isodate_calib': AVI_mid_time, 'exposure_time': exposure_time, 'astrometry_number_stars' : astrometry_number_stars, # 'photometric_zero_point': float(ufo_2['@mimMag']), # 'photometric_zero_point_uncertainty': 0.0, 'mag_label': 'mag', 'no_frags': 1, 'obs_az': float(ufo_2['@az']), 'obs_ev': float(ufo_2['@ev']), 'obs_rot': float(ufo_2['@rot']), 'fov_horiz': fov_horiz, 'fov_vert': fov_vert, } # initialise table ttt = Table() #Update the table metadata ttt.meta.update(meta_dic) #create time and main data arrays # Datetime is ISO 8601 UTC format datetime_array = [] # Azimuth are East of North, in degrees azimuth_array = [] # Altitudes are geometric (not apparent) angles above the horizon, in degrees altitude_array = [] # Magnitude mag_array = [] # Right Ascension / Declination coordinates read from file ra_array = [] dec_array = [] for i in range(nlines): obs=ufo_6[i] az = float(obs['@az']) elev = 
float(obs['@ev']) ra = float(obs['@ra']) dec = float(obs['@dec']) mag = float(obs['@mag']) obs_time = (int(obs['@fno']) - timestamp_frame)/frame_rate time_stamp = str(timestamp + timedelta(seconds=obs_time)) azimuth_array.append(az) altitude_array.append(elev) ra_array.append(ra) dec_array.append(dec) mag_array.append(mag) datetime_array.append(time_stamp) ## Populate the table with the data created to date # create columns ttt['datetime'] = datetime_array ttt['ra'] = ra_array * u.degree ttt['dec'] = dec_array * u.degree ttt['azimuth'] = azimuth_array * u.degree ttt['altitude'] = altitude_array * u.degree ttt['mag'] = mag_array ttt['x_image'] = 0.0 ttt['y_image'] = 0.0 # now add ttt to the array of tables ttt_list.append(ttt) return(ttt_list, meteor_count); def std_to_ufo(ttt): # Given a table in Standard format, returns: # - an XML string which can be written as a UFO A.XML file; and # - a CSV string which can be written as a csv file. # In order to preserve the exact A.XML format, hard-coded string handling is used. # work out the frame rate of the observations in the table. start_time = Time(ttt['datetime'][0]) start_time_str = str(ttt['datetime'][0]) nlines = len(ttt['datetime']) cumu_times = [] step_sizes = [] last_sec = 0.0 for i in range(nlines): sec = get_secs(Time(ttt['datetime'][i]),start_time) cumu_times.append(sec) sec_rounded = sec time_change = int(round(1000*(sec_rounded - last_sec),0)) if i>0 and (time_change not in step_sizes): step_sizes.append(time_change) last_sec = sec_rounded #now test for common framerates # likely framerates are 20 (DFN), 25 (UFO) or 30 (FRIPON) fps smallest = min(step_sizes) if (smallest==33 or smallest == 34 or smallest == 66 or smallest == 67): frame_rate = 30.0 elif (smallest >= 39 and smallest <= 41): frame_rate = 25.0 elif (smallest >= 49 and smallest <= 51): frame_rate = 20.0 else: # non-standard framerate # gcd is the greatest common divisor of all of the steps, in milliseconds. 
        # Note - if gcd <= 10 it implies frame rate >= 100 fps, which is probably caused by a rounding error
        gcd = array_gcd(step_sizes)
        frame_rate = 1000.0 / float(gcd)

    frame_step = 1 / frame_rate

    # work out the head, tail and first frame number
    head_sec = round(-get_secs(Time(ttt.meta['isodate_start_obs']), start_time), 6)
    head = int(round(head_sec / frame_step, 0))
    fs = head + 1
    fN = 1 + int(round(sec / frame_step, 0))
    fe = fs + fN - 1
    sN = nlines
    sec = round(sec, 4)
    interlaced = 0
    tz = 0  # UTC is hard-coded for now

    # work out number of frames-equivalent and tail
    mid_sec = round(head_sec + get_secs(Time(ttt.meta['isodate_calib']), start_time), 6)
    clip_sec = round(max(min(2 * mid_sec, 30.0), (fe - 1) * frame_step), 6)  # maximum clip length 30 seconds
    frames = int(round(clip_sec / frame_step, 0)) + 1
    tail = max(0, frames - (head + fN))
    frames = head + fN + tail

    # alt and azimuth numbers
    ev1, ev2, az1, az2, ra1, ra2, dec1, dec2 = ufo_ra_dec_alt_az(ttt)
    if az1 < 180.0:
        # azimuth written to CSV files is south-oriented
        az1_csv = az1 + 180
    else:
        az1_csv = az1 - 180
    if az2 < 180.0:
        # azimuth written to CSV files is south-oriented
        az2_csv = az2 + 180
    else:
        az2_csv = az2 - 180

    # first write the csv string in UFOOrbit R91 format
    csv_s = 'Ver,Y,M,D,h,m,s,Mag,Dur,Az1,Alt1,Az2,Alt2, Ra1, Dec1, Ra2, Dec2,ID,Long,Lat,Alt,Tz\n'
    csv_s += 'R91,' + start_time_str[0:4] + ',' + start_time_str[5:7] + ','
    csv_s += start_time_str[8:10] + ',' + start_time_str[11:13] + ','
    csv_s += start_time_str[14:16] + ',' + start_time_str[17:23] + ','
    csv_s += '0.0,' + str(sec) + ','
    csv_s += str(az1_csv) + ',' + str(ev1) + ','
    csv_s += str(az2_csv) + ',' + str(ev2) + ','
    csv_s += str(ra1) + ',' + str(dec1) + ','
    csv_s += str(ra2) + ',' + str(dec2) + ','
    csv_s += ttt.meta['location'] + ','
    csv_s += str(ttt.meta['obs_longitude']) + ','
    csv_s += str(ttt.meta['obs_latitude']) + ','
    csv_s += str(ttt.meta['obs_elevation']) + ','
    csv_s += str(tz)

    # now write the XML string
    # there is no viable alternative to ugly hard-coding of the XML string
    # sample date: 2020-04-07T03:56:41.450
    xml_s = '<?xml version="1.0" encoding="UTF-8" ?>' + '\n'
    xml_s += '<ufoanalyzer_record version ="200"' + '\n\t clip_name="'
    xml_s += ttt.meta['image_file'].rsplit('.', 1)[0]
    xml_s += '" o="1" y="' + start_time_str[0:4]
    xml_s += '" mo="' + start_time_str[5:7]
    xml_s += '"\n\t d="' + start_time_str[8:10]
    xml_s += '" h="' + start_time_str[11:13]
    xml_s += '" m="' + start_time_str[14:16]
    xml_s += '" s="' + start_time_str[17:23]
    xml_s += '"\n\t tz="' + str(tz)
    xml_s += '" tme="1.000000" lid="' + ttt.meta['location']
    xml_s += '" sid="' + ttt.meta['telescope'][0:2]
    xml_s += '"\n\t lng="' + str(ttt.meta['obs_longitude'])
    xml_s += '" lat="' + str(ttt.meta['obs_latitude'])
    xml_s += '" alt="' + str(ttt.meta['obs_elevation'])
    xml_s += '" cx="' + str(ttt.meta['cx'])
    xml_s += '"\n\t cy="' + str(ttt.meta['cy'])
    xml_s += '" fps="' + str(frame_rate)
    xml_s += '" interlaced="' + str(interlaced)
    xml_s += '" bbf="0"\n\t frames="' + str(frames)
    xml_s += '" head="' + str(head)
    xml_s += '" tail="' + str(tail)
    xml_s += '" drop="-1"\n\t dlev="0" dsize="0" sipos="0" sisize="0"\n\t trig="1'
    xml_s += '" observer="' + str(ttt.meta['observer'])
    xml_s += '" cam="' + str(ttt.meta['instrument'])
    xml_s += '" lens="' + str(ttt.meta['lens'])
    xml_s += '"\n\t cap="Not Applicable" u2="224" ua="243" memo="'
    xml_s += '"\n\t az="' + str(ttt.meta['obs_az'])
    xml_s += '" ev="' + str(ttt.meta['obs_ev'])
    xml_s += '" rot="' + str(ttt.meta['obs_rot'])
    xml_s += '" vx="' + str(ttt.meta['fov_horiz'])
    xml_s += '"\n\t yx="0.000000" dx="0.000000" dy="0.000000" k4="0.000000'
    xml_s += '"\n\t k3="-0.000000" k2="0.000000" atc="0.000000" BVF="0.000000'
    xml_s += '"\n\t maxLev="255" maxMag="0.000000" minLev="0'
    xml_s += '" mimMag="0.0'  # + str(ttt.meta['photometric_zero_point'])
    xml_s += '"\n\t dl="0" leap="0" pixs="0'
    xml_s += '" rstar="' + str(ttt.meta['astrometry_number_stars'])
    xml_s += '"\n\t ddega="0.000000" ddegm="0.000000" errm="0.00000" Lmrgn="0'
    xml_s += '"\n\t Rmrgn="0" Dmrgn="0" Umrgn="0">'
    xml_s += '\n\t<ua2_objects>'
    xml_s += '\n<ua2_object'
    xml_s += '\n\t fs="' + str(fs)
    xml_s += '" fe="' + str(fe)
    xml_s += '" fN="' + str(fN)
    xml_s += '" sN="' + str(sN)
    xml_s += '"\n\t sec="' + str(sec)
    xml_s += '" av="0.000000'  # investigate av
    xml_s += '" pix="0" bmax="255'  # investigate pix
    xml_s += '"\n\t bN="0'  # investigate bN
    xml_s += '" Lmax="0.000000" mag="0.000000" cdeg="0.00000'
    xml_s += '"\n\t cdegmax="0.000000" io="0" raP="0.000000" dcP="0.000000'
    xml_s += '"\n\t av1="0.000000" x1="0.000000" y1="0.000000" x2="0.000000'
    xml_s += '"\n\t y2="0.000000" az1="' + str(az1)
    xml_s += '" ev1="' + str(round(ev1, 6))
    xml_s += '" az2="' + str(round(az2, 6))
    xml_s += '"\n\t ev2="' + str(round(ev2, 6))
    xml_s += '" azm="999.9" evm="999.9'
    xml_s += '" ra1="' + str(round(ra1, 6))
    xml_s += '"\n\t dc1="' + str(round(dec1, 6))
    xml_s += '" ra2="' + str(round(ra2, 6))
    xml_s += '" dc2="' + str(round(dec2, 6))
    xml_s += '" ram="999.9'
    xml_s += '"\n\t dcm="999.9" class="spo" m="0" dr="-1.000000'
    xml_s += '"\n\t dv="-1.000000" Vo="-1.000000" lng1="999.9" lat1="999.9'
    xml_s += '"\n\t h1="100.000000" dist1="0.000000" gd1="0.000000" azL1="-1.000000'  # in UFO, initial height is hard-coded at 100
    xml_s += '"\n\t evL1="-1.000000" lng2="-999.000000" lat2="-999.000000" h2="-1.000000'
    xml_s += '"\n\t dist2="-1.000000" gd2="-1.000000" len="0.000000" GV="0.000000'
    xml_s += '"\n\t rao="999.9" dco="999.9" Voo="0.000000" rat="999.9'
    xml_s += '"\n\t dct="999.9" memo="">'
    xml_s += '\n\t<ua2_objpath>'
    for i in range(nlines):
        fno = fs + int(round(cumu_times[i] / frame_step, 0))
        xml_s += '\n<ua2_fdata2 fno="'
        if fno < 100:
            xml_s += ' '
        xml_s += str(fno)
        xml_s += '" b="000" bm="000" Lsum=" 000.0'
        if ttt.meta['mag_label'] == 'mag':
            xml_s += '" mag="' + str(round(ttt['mag'][i], 6))
        else:
            xml_s += '" mag="0.000000'
        xml_s += '" az="' + str(round(ttt['azimuth'][i], 6))
        xml_s += '" ev="' + str(round(ttt['altitude'][i], 6))
        xml_s += '" ra="' + str(round(ttt['ra'][i], 6))
        xml_s += '" dec="' + str(round(ttt['dec'][i], 6))
        xml_s += '"></ua2_fdata2>'
    xml_s += '\n\t</ua2_objpath>'
    xml_s += '\n</ua2_object>'
    xml_s += '\n\t</ua2_objects>'
    xml_s += '\n</ufoanalyzer_record>\n'

    return xml_s, csv_s;
# -

# ## Desert Fireball Network functions

# +
def dfn_to_std(ttt):
    # converts a table in DFN/UKFN format to Standard format
    meta_dic = {'obs_latitude': ttt.meta['obs_latitude'],
                'obs_longitude': ttt.meta['obs_longitude'],
                'obs_elevation': ttt.meta['obs_elevation'],
                'origin': ttt.meta['origin'],
                'location': ttt.meta['location'],
                'telescope': ttt.meta['telescope'],
                'camera_id': ttt.meta['dfn_camera_codename'],
                'observer': ttt.meta['observer'],
                'comment': '',
                'instrument': ttt.meta['instrument'],
                'lens': ttt.meta['lens'],
                'cx': ttt.meta['NAXIS1'],
                'cy': ttt.meta['NAXIS2'],
                'photometric_band': 'Unknown',
                'image_file': ttt.meta['image_file'],
                'isodate_start_obs': ttt.meta['isodate_start_obs'],
                'isodate_calib': ttt.meta['isodate_mid_obs'],
                'exposure_time': ttt.meta['exposure_time'],
                'astrometry_number_stars': ttt.meta['astrometry_number_stars'],
                # 'photometric_zero_point': ttt.meta['photometric_zero_point'],
                # 'photometric_zero_point_uncertainty': ttt.meta['photometric_zero_point_uncertainty'],
                'mag_label': 'no_mag_data',
                'no_frags': 1,
                'obs_az': 0.0,
                'obs_ev': 90.0,
                'obs_rot': 0.0,
                'fov_horiz': 180.0,
                'fov_vert': 180.0,
                }

    # initialise table
    ttt_new = Table()
    # update the table metadata
    ttt_new.meta.update(meta_dic)

    # RA and DEC calculation
    ra_calc_array = []
    dec_calc_array = []
    obs_latitude = ttt.meta['obs_latitude']
    obs_longitude = ttt.meta['obs_longitude']
    # start of J2000 epoch
    ts = datetime.strptime("2000-01-01T12:00:00.000", ISO_FORMAT)
    start_epoch = datetime2JD(ts)
    no_lines = len(ttt['azimuth'])
    for i in range(no_lines):
        az = float(ttt['azimuth'][i])
        elev = float(ttt['altitude'][i])
        time_stamp = str(ttt['datetime'][i])
        ts = datetime.strptime(time_stamp, ISO_FORMAT)
        JD = datetime2JD(ts)
        # use Az and Alt to calculate correct RA and DEC in epoch of date, then precess back to J2000
        temp_ra, temp_dec = altAz2RADec(az, elev, JD, obs_latitude, obs_longitude)
        temp_ra, temp_dec = equatorialCoordPrecession(JD, start_epoch, temp_ra, temp_dec)
        ra_calc_array.append(temp_ra)
        dec_calc_array.append(temp_dec)

    # create columns
    ttt_new['datetime'] = ttt['datetime']
    ttt_new['ra'] = ra_calc_array * u.degree
    ttt_new['dec'] = dec_calc_array * u.degree
    ttt_new['azimuth'] = ttt['azimuth']
    ttt_new['altitude'] = ttt['altitude']
    ttt_new['no_mag_data'] = 0.0
    ttt_new['x_image'] = ttt['x_image']
    ttt_new['y_image'] = ttt['y_image']

    return([ttt_new], 1);


def std_to_dfn(ttt):
    # converts a table in standard format to DFN/UKFN format
    cx_true = 0
    cy_true = 0
    calib_true = 0
    mag_true = 0
    frags_true = 0
    comment_true = 0
    phot_true = 0
    for key_name in ttt.meta.keys():
        if 'cx' in key_name:
            cx_true = 1
        if 'cy' in key_name:
            cy_true = 1
        if 'isodate_calib' in key_name:
            calib_true = 1
        if 'mag_label' in key_name:
            mag_true = 1
        if 'no_frag' in key_name:
            frags_true = 1
        if 'comment' in key_name:
            comment_true = 1
        if 'photometric_band' in key_name:
            phot_true = 1
    if cx_true > .5:
        ttt.meta['NAXIS1'] = ttt.meta.pop('cx')
    if cy_true > .5:
        ttt.meta['NAXIS2'] = ttt.meta.pop('cy')
    if calib_true > .5:
        ttt.meta['isodate_mid_obs'] = ttt.meta.pop('isodate_calib')
    if mag_true > .5:
        ttt.remove_columns(ttt.meta['mag_label'])
        ttt.meta.pop('mag_label')
    if frags_true > .5:
        ttt.meta.pop('no_frags')
    if comment_true > .5:
        ttt.meta.pop('comment')
    if phot_true > .5:
        ttt.meta.pop('photometric_band')
    ttt.meta.pop('obs_az')
    ttt.meta.pop('obs_ev')
    ttt.meta.pop('obs_rot')
    ttt.meta.pop('fov_horiz')
    ttt.meta.pop('fov_vert')

    # fireball ID
    # leave the default if you don't know
    ttt.meta['event_codename'] = 'DN200000_00'

    ## Uncertainties - if you have no idea of what they are, leave the default values
    # time uncertainty array
    ttt['time_err_plus'] = 0.1 * u.second
    ttt['time_err_minus'] = 0.1 * u.second
    # astrometry uncertainty array
    ttt['err_plus_azimuth'] = 1/60. * u.degree
    ttt['err_minus_azimuth'] = 1/60. * u.degree
    ttt['err_plus_altitude'] = 1/60. * u.degree
    ttt['err_minus_altitude'] = 1/60. * u.degree

    # delete surplus columns
    ttt.remove_columns(['ra', 'dec'])

    return(ttt);
# -

# ## FRIPON/SCAMP functions

# +
def get_fripon_stations():
    # get a table of FRIPON camera locations
    # data = Stations,Latitude,Longitude,Altitude,Country,City,Camera,Switch,Status
    stations_file_name = 'https://raw.githubusercontent.com/SCAMP99/scamp/master/FRIPON_location_list.csv'
    import requests
    try:
        r = requests.get(stations_file_name)
        loc_table = ascii.read(r.text, delimiter=',')
    except:
        # create columns for the UK stations only
        loc_table = Table()
        loc_table['Stations'] = 'ENGL01','ENNI01','ENNW01','ENSE01','ENSE02','ENSW01','GBWL01','ENNW02'
        loc_table['Latitude'] = '50.75718','54.35235','53.474365','51.5761','51.2735','50.80177','51.48611','53.6390851'
        loc_table['Longitude'] = '0.26582','-6.649632','-2.233606','-1.30761','1.07208','-3.18441','-3.17787','-2.1322892'
        loc_table['Altitude'] = '61','75','70','200','21','114','33','177'
        loc_table['Country'] = 'England','England','England','England','England','England','GreatBritain','England'
        loc_table['City'] = 'Eastbourne','Armagh','Manchester','Harwell','Canterbury','Honiton','Cardiff','Rochester'
        loc_table['Camera'] = 'BASLER 1300gm','BASLER 1300gm','BASLER 1300gm','BASLER 1300gm','BASLER 1300gm','DMK 23G445','BASLER 1300gm','BASLER 1300gm'
        loc_table['Switch'] = 'TL-SG2210P','TL-SG2210P','T1500G-10PS','TL-SG2210P','TL-SG2210P','TL-SG2210P','TL-SG2210P','TL-SG2210P'
        loc_table['Status'] = 'Production','Production','Production','NotOperational','Production','Production','Production','Production'
    no_stations = len(loc_table['Latitude'])
    # The first key may have extra characters in it - if so, rename it.
    for key_name in loc_table.keys():
        if 'Stations' in key_name:
            if not key_name == 'Stations':
                loc_table.rename_column(key_name, 'Stations')

    return(loc_table, no_stations);


def fripon_to_std(fname, ttt_old, loc_table, no_stations):
    # convert data from FRIPON/SCAMP format into standard format

    # check that the .met file contains data
    if len(ttt_old['TIME']) < 1:
        print('no data in file')
        return([], 0);

    # process the filename for data, e.g. 'C:/Users/jr63/Google Drive/0-Python/20200324T023233_UT_FRNP03_SJ.met'
    print(fname)
    n1 = fname.rfind('/')
    n2 = fname.rfind('\\')
    n = max(n1, n2)
    station_str = fname[n+20:n+26]
    analyst_str = fname[n+27:n+29]
    print('No. Rows of station data = ', no_stations, ' sought station = ', station_str, '\n')
    i = -1
    for j in range(no_stations):
        if loc_table['Stations'][j] == station_str:
            i = j
            break
    if i < 0:
        print('FRIPON Station name "' + station_str + '" not found.')
        return([], 0);

    # Now get on with construction of the metadata dictionary
    # camera resolution
    if loc_table['Camera'][i] == 'BASLER 1300gm':
        cx = 1296
        cy = 966
    elif loc_table['Camera'][i] == 'DMK 23G445':
        cx = 1280
        cy = 960
    elif loc_table['Camera'][i] == 'DMK 33GX273':
        cx = 1440
        cy = 1080
    else:
        cx = 0
        cy = 0

    # convert time to ISO format
    iso_date_str = ttt_old['TIME'][0]  # ttt_old['TIME'][0] is a 'numpy.str_' object

    # set up a new table
    ttt = Table()
    ttt['datetime'] = Time(ttt_old['TIME']).isot  # ttt['datetime'][0] is a 'numpy.str_' object
    event_time = str(ttt['datetime'][0])

    # now find time-related metadata
    no_lines = len(ttt['datetime'])
    if no_lines >= 1:
        start_day = Time(ttt['datetime'][0])
        end_day = Time(ttt['datetime'][no_lines-1])
        half_time = end_day - start_day
        half_str = str(half_time)
        half_sec = round(float(half_str)*24*60*60/2, 6)
        isodate_calib = start_day + timedelta(seconds=half_sec)
        isodate_calib_str = str(isodate_calib)
    else:
        isodate_calib_str = event_time

    obs_latitude = float(loc_table['Latitude'][i])
    obs_longitude = float(loc_table['Longitude'][i])
    obs_elevation = float(loc_table['Altitude'][i])
    obs_location = str(loc_table['City'][i])

    # For old data from stations that have been moved, make changes here to reflect the historic location
    if (station_str == 'ENGL01'):
        obs_year = int(ttt['datetime'][0][0:4])
        print('Year = ', obs_year)
        if (obs_year < 2021):
            obs_latitude = 51.637359
            obs_longitude = -0.169234
            obs_elevation = 87.0
            obs_location = 'East Barnet'

    # Update the metadata
    meta_dic = {'obs_latitude': obs_latitude,
                'obs_longitude': obs_longitude,
                'obs_elevation': obs_elevation,
                'origin': 'FRIPON',
                'location': obs_location,
                'telescope': station_str,
                'camera_id': station_str,
                'observer': analyst_str,
                'comment': '',
                'instrument': str(loc_table['Camera'][i]),
                'lens': 'unknown',
                'cx': cx,
                'cy': cy,
                'photometric_band': 'Unknown',
                'image_file': 'unknown',
                'isodate_start_obs': event_time,
                'isodate_calib': isodate_calib_str,
                'exposure_time': 2.0 * half_sec,
                'astrometry_number_stars': 0,
                # 'photometric_zero_point': 0.0,
                # 'photometric_zero_point_uncertainty': 0.0,
                'mag_label': 'FLUX_AUTO',
                'no_frags': 1,
                'obs_az': 0.0,
                'obs_ev': 90.0,
                'obs_rot': 0.0,
                'fov_horiz': 180.0,
                'fov_vert': 180.0,
                }
    ttt.meta.update(meta_dic)

    # calculate az and alt
    az_calc_array = []
    alt_calc_array = []
    # start of J2000 epoch
    ts = datetime.strptime("2000-01-01T12:00:00.000", "%Y-%m-%dT%H:%M:%S.%f")
    start_epoch = datetime2JD(ts)
    for k in range(no_lines):
        ra = float(ttt_old['ALPHAWIN_J2000'][k])
        dec = float(ttt_old['DELTAWIN_J2000'][k])
        ts = datetime.strptime(str(ttt['datetime'][k]), "%Y-%m-%dT%H:%M:%S.%f")
        JD = datetime2JD(ts)
        # RA and DEC are in J2000 epoch. Precess to epoch of date, then convert to Az and Alt using RMS code
        temp_ra, temp_dec = equatorialCoordPrecession(start_epoch, JD, ra, dec)
        temp_azim, temp_elev = raDec2AltAz(temp_ra, temp_dec, JD, obs_latitude, obs_longitude)
        az_calc_array.append(temp_azim)
        alt_calc_array.append(temp_elev)

    # ttt['datetime'] already done above
    ttt['ra'] = ttt_old['ALPHAWIN_J2000'] * u.degree
    ttt['dec'] = ttt_old['DELTAWIN_J2000'] * u.degree
    ttt['azimuth'] = az_calc_array * u.degree
    ttt['altitude'] = alt_calc_array * u.degree
    ttt['FLUX_AUTO'] = ttt_old['FLUX_AUTO']
    ttt['x_image'] = ttt_old['XWIN_IMAGE']
    ttt['y_image'] = ttt_old['YWIN_IMAGE']

    return([ttt], 1);


def std_to_fripon(ttt):
    # converts standard format to FRIPON
    no_lines = len(ttt['datetime'])
    ttt_new = Table()
    ttt_new['NUMBER'] = np.linspace(1, no_lines, no_lines)
    ttt_new['FLUX_AUTO'] = 0
    ttt_new['FLUXERR_AUTO'] = 0
    ttt_new['XWIN_IMAGE'] = ttt['x_image']
    ttt_new['YWIN_IMAGE'] = ttt['y_image']
    ttt_new['ALPHAWIN_J2000'] = ttt['ra']
    ttt_new['DELTAWIN_J2000'] = ttt['dec']
    ttt_new['TIME'] = ttt['datetime']
    ttt_new.meta.update(ttt.meta)
    return(ttt_new);


def fripon_write(ttt):
    # writes a table in FRIPON format to two strings, which it returns
    # needed to hard-code this as SExtractor is supported in Astropy only for table read, not table write.
    fri_str = '# 1 NUMBER Running object number '
    fri_str += '\n# 2 FLUX_AUTO Flux within a Kron-like elliptical aperture [count]'
    fri_str += '\n# 3 FLUXERR_AUTO RMS error for AUTO flux [count]'
    fri_str += '\n# 4 XWIN_IMAGE Windowed position estimate along x [pixel]'
    fri_str += '\n# 5 YWIN_IMAGE Windowed position estimate along y [pixel]'
    fri_str += '\n# 6 ALPHAWIN_J2000 Windowed right ascension (J2000) [deg]'
    fri_str += '\n# 7 DELTAWIN_J2000 windowed declination (J2000) [deg]'
    fri_str += '\n# 8 TIME Time of the frame [fits]'
    no_rows = len(ttt['TIME'])
    for j in range(no_rows):
        fri_str += '\n' + str(j+1)
        fri_str += ' ' + str(round(ttt['FLUX_AUTO'][j], 6))
        fri_str += ' ' + str(round(ttt['FLUXERR_AUTO'][j], 6))
        fri_str += ' ' + str(round(ttt['XWIN_IMAGE'][j], 6))
        fri_str += ' ' + str(round(ttt['YWIN_IMAGE'][j], 6))
        fri_str += ' ' + str(round(ttt['ALPHAWIN_J2000'][j], 6))
        fri_str += ' ' + str(round(ttt['DELTAWIN_J2000'][j], 6))
        fri_str += ' ' + str(ttt['TIME'][j])

    # write the location as a txt file
    loc_str = 'latitude = ' + str(ttt.meta['obs_latitude'])
    loc_str += '\nlongitude = ' + str(ttt.meta['obs_longitude'])
    loc_str += '\nelevation = ' + str(ttt.meta['obs_elevation'])

    return(fri_str, loc_str);
# -

# # Excel CSV functions

def std_to_csv(ttt):
    # write the metadata to csv_str
    csv_str = 'Converted Meteor Data\n'
    csv_str += '\nObservatory latitude (deg),' + str(ttt.meta['obs_latitude'])
    csv_str += '\nObservatory longitude (deg),' + str(ttt.meta['obs_longitude'])
    csv_str += '\nObservatory elevation (metres ASL),' + str(ttt.meta['obs_elevation'])
    csv_str += '\nNetwork name,' + str(ttt.meta['origin'])
    csv_str += '\nLocation,' + str(ttt.meta['location'])
    csv_str += '\nName of station,' + str(ttt.meta['telescope'])
    csv_str += '\nCamera id,' + str(ttt.meta['camera_id'])
    csv_str += '\nObserver,' + str(ttt.meta['observer'])
    csv_str += '\nComment,' + str(ttt.meta['comment'])
    csv_str += '\nCamera model,' + str(ttt.meta['instrument'])
    csv_str += '\nLens make and model,' + str(ttt.meta['lens'])
    csv_str += '\nHorizontal pixel count,' + str(ttt.meta['cx'])
    csv_str += '\nVertical pixel count,' + str(ttt.meta['cy'])
    csv_str += '\nPhotometric band,' + str(ttt.meta['photometric_band'])
    csv_str += '\nName of image file,' + str(ttt.meta['image_file'])
    csv_str += '\nStart datetime of clip,' + str(ttt.meta['isodate_start_obs'])
    csv_str += '\nDatetime of astrometry,' + str(ttt.meta['isodate_calib'])
    csv_str += '\nTotal length of clip (sec),' + str(ttt.meta['exposure_time'])
    csv_str += '\nNumber of stars identified in astrometry,' + str(ttt.meta['astrometry_number_stars'])
    # csv_str += '\nPhotometric zero point,' + str(ttt.meta['photometric_zero_point'])
    # csv_str += '\nPhotometric zero point uncertainty,' + str(ttt.meta['photometric_zero_point_uncertainty'])
    csv_str += '\nMagnitude measure,' + str(ttt.meta['mag_label'])
    csv_str += '\nNumber of fragments,' + str(ttt.meta['no_frags'])
    csv_str += '\nAzimuth of camera centrepoint (deg),' + str(ttt.meta['obs_az'])
    csv_str += '\nElevation of camera centrepoint (deg),' + str(ttt.meta['obs_ev'])
    csv_str += '\nRotation of camera from horizontal (deg),' + str(ttt.meta['obs_rot'])
    csv_str += '\nHorizontal FOV (deg),' + str(ttt.meta['fov_horiz'])
    csv_str += '\nVertical FOV (deg),' + str(ttt.meta['fov_vert'])

    # For each row, add the Excel date.
    # the Excel date 36526.5 is equivalent to 01/01/2000 12pm - don't use because of 5 leap seconds before 1/1/2017
    # the Excel date 42736.5 is equivalent to 01/01/2017 12pm
    ts = datetime.strptime("2017-01-01T12:00:00.000", ISO_FORMAT)
    epoch_day = Time(ts)
    csv_str += '\n\nDate/Time,Row No.,RA,Dec,Az,Alt,Magnitude,X_image,Y_image,Year,Month,Day,Hour,Min,Sec'
    for j in range(len(ttt['datetime'])):
        ts = datetime.strptime(ttt['datetime'][j], ISO_FORMAT)
        obs_day = Time(ts)
        excel_day = float(str(obs_day - epoch_day)) + 42736.5
        csv_str += '\n' + str(excel_day)
        csv_str += ',' + str(j+1)
        csv_str += ',' + str(round(ttt['ra'][j], 6))
        csv_str += ',' + str(round(ttt['dec'][j], 6))
        csv_str += ',' + str(round(ttt['azimuth'][j], 6))
        csv_str += ',' + str(round(ttt['altitude'][j], 6))
        csv_str += ',' + str(round(ttt[str(ttt.meta['mag_label'])][j], 6))
        csv_str += ',' + str(round(ttt['x_image'][j], 6))
        csv_str += ',' + str(round(ttt['y_image'][j], 6))
        date_str = ttt['datetime'][j].replace('-', ',')
        date_str = date_str.replace('T', ',')
        date_str = date_str.replace(':', ',')
        date_str = date_str.replace(' ', ',')
        csv_str += ',' + date_str

    return csv_str;

# # RMS functions

# +
def rms_camera_json(_file_path):
    # extract json data
    _json_str = open(_file_path).read()
    cam_data = json.loads(_json_str)
    cam_dict = {}
    for file_name in cam_data:
        # sub-dict with info about camera
        cam_snap = cam_data[file_name]
        # get info from file name
        file_name_info = regex.search('(.*)_(\d{8}_\d{6}_\d{3})', file_name)
        # camera name in string
        file_prefix = file_name_info[1]
        # camera timestamp in string
        file_timestamp_string = file_name_info[2] + "000"
        file_timestamp_old = datetime.strptime(file_timestamp_string, "%Y%m%d_%H%M%S_%f")
        file_timestamp_string = file_timestamp_old.strftime(ISO_FORMAT)
        file_timestamp = Time(datetime.strptime(file_timestamp_string, ISO_FORMAT)).isot
        # print('1.file_timestamp_string = ', file_timestamp_string, ' file_timestamp = ', file_timestamp)
        cam_snap.update({
            "timestamp": file_timestamp,
            "file_name": file_name
        })
        # keep a list of calibrations for each camera (based on file prefix like FF_IE0001)
        cam_name_info_list = []
        if file_prefix in cam_dict:
            cam_name_info_list = cam_dict[file_prefix]
        # add camera snap to the cam_dict list
        cam_name_info_list.append(cam_snap)
        cam_dict.update({file_prefix: cam_name_info_list})
    # print("Got camera data ")
    return cam_dict


def find_most_recent_cam_calibration(cam_list, timestamp):
    previous_cam_info = cam_list[0]
    # we assume ascending order
    for current_cam_info in cam_list:
        if not current_cam_info['timestamp']:
            continue;
        timestamp_meteor = datetime.strptime(timestamp, ISO_FORMAT)
        timestamp_current = datetime.strptime(current_cam_info['timestamp'], ISO_FORMAT)
        deltaT = (timestamp_meteor - timestamp_current).total_seconds()
        if deltaT >= 0:
            previous_cam_info = current_cam_info
        else:
            return previous_cam_info
    return previous_cam_info


def rms_to_dict(RMSMeteorText):
    # convert to list of rows
    rows_list = RMSMeteorText.split('\n')
    # Example of how the data is:
    # -------------------------------------------------------
    # FF_IE0001_20200126_225518_555_0475904.fits
    # Recalibrated with RMS on: 2020-02-03 16:40:39.821536 UTC
    # IE0001 0001 0016 0025.00 000.0 000.0 00.0 004.1 0052.8 0015.5
    # 181.5530 0705.12 0398.18 020.7986 +22.6715 278.2975 +22.4058 000672 3.28
    # ... (0016 total) ...
    # 196.6360 0722.07 0457.92 017.3488 +20.2848 279.3342 +18.5235 000320 4.08
    # -------------------------------------------------------
    #
    # The file is read as (in_block state on the left):
    # -------------------------------------------------------
    #  -3   file_name
    #  -2   calibration
    #  -1   Cam# Meteor# #Segments fps hnr mle bin Pix/fm Rho Phi
    #  +n   Frame# Col Row RA Dec Azim Elev Inten Mag
    #  ...
    #  ...  (#Segments) ...
    #   0   Frame# Col Row RA Dec Azim Elev Inten Mag
    # -------------------------------------------------------
    #
    # We convert this to:
    # [{
    #    cam, meteor, segments, fps, hnr, mle, bin, pix/fm, rho, phi,
    #    file_name, file_prefix, timestamp, duration, min_magnitude, max_intensity, calibration
    #    frames: [{
    #       frame, timestamp, col, row, ra, dec, azim, elev, inten, mag
    #    },{
    #       ...
    #    }]
    # },{
    #    ...
    # }]
    #
    shot_info_labels = ["cam", "meteor", "segments", "fps", "hnr", "mle", "bin", "pix/fm", "rho", "phi"]
    frame_info_labels = ["frame", "col", "row", "ra", "dec", "azim", "elev", "inten", "mag"]

    # meta sections split by 53-long lines of "---------------"
    # data sections split by 55-long lines of "---------------"
    line = '-{55}'

    # loop variables
    in_block = 0
    data = []
    prev_event = False
    current_event = {}

    for row in rows_list:
        # test for a new data row for a meteor event
        if (regex.match(line, row)):
            in_block = -3
            current_event = {}
            continue
        # get file info
        if (in_block == -3):
            in_block = -2
            file_name = row.strip()
            file_name_info = regex.search('(.*)_(\d{8}_\d{6}_\d{3})', file_name)
            # print(' file_name_info = ', file_name_info)
            # camera name in string
            file_prefix = file_name_info[1]
            # camera timestamp in string
            file_timestamp_string = file_name_info[2] + "000"
            file_timestamp_old = datetime.strptime(file_timestamp_string, "%Y%m%d_%H%M%S_%f")
            file_timestamp_string = file_timestamp_old.strftime(ISO_FORMAT)
            file_timestamp = Time(datetime.strptime(file_timestamp_string, ISO_FORMAT)).isot
            # print('2.file_timestamp_string = ', file_timestamp_string, ' file_timestamp = ', file_timestamp)
            current_event.update({
                'file_name': row.strip(),
                'file_prefix': file_prefix,
                'timestamp': file_timestamp
            })
            continue
        # get calibration info
        if (in_block == -2):
            in_block = -1
            current_event.update({'calibration': row.strip()})
            continue
        # get info about camera and the shot
        if (in_block == -1):
            info = regex.split('[\s\t]+', row.strip())
            # turn into dict using labels
            for i in range(len(info)):
                current_event.update({shot_info_labels[i]: info[i]})
            current_event.update({'frames': []})
            # number of frames
            in_block = int(current_event['segments'])
            continue
        # get info from each individual frame
        if (in_block > 0):
            in_block -= 1
            info = regex.split('[\s\t]+', row.strip())
            current_frame = {}
            frames_list = current_event['frames']
            # turn into dict using labels
            for i in range(len(info)):
                current_frame.update({frame_info_labels[i]: info[i]})
            # get frame timestamp
            frame_time = float(current_frame['frame']) / float(current_event['fps'])
            dt = timedelta(seconds=frame_time)
            timestamp_XYZ = datetime.strptime(current_event['timestamp'], ISO_FORMAT)
            frame_timestamp = timestamp_XYZ + dt
            # add to frame info
            current_frame.update({'timestamp': frame_timestamp})
            frames_list.append(current_frame)
            current_event.update({'frames': frames_list})
            # calculate some final information before adding it to the list
            if (in_block == 0):
                # calculate: duration, min_magnitude, max_intensity, col_speed, row_speed
                current_event = rms_update_dict_info(current_event)
                # check if the event is a continuation of the previous event
                if (prev_event):
                    is_same_event = rms_check_if_same_event(prev_event, current_event)
                else:
                    is_same_event = False
                # if so, expand on the previous event data
                if is_same_event:
                    # append current event info
                    prev_event['frames'].extend(current_event['frames'])
                    # calculate again: duration, min_magnitude, max_intensity, col_speed, row_speed
                    prev_event = rms_update_dict_info(prev_event)
                else:
                    # add previous event and save current event as previous
                    data.append(prev_event)
                    prev_event = current_event
                in_block = False
    # #####
    # end of for loop

    # add final prev_event
    data.append(prev_event)
    return data


# update some stats using data from the 'frames' list
def rms_update_dict_info(dict_event):
    # get the duration of the event
    start_time = dict_event['frames'][ 0]['timestamp']
    end_time = dict_event['frames'][-1]['timestamp']
    duration = (end_time - start_time).total_seconds()

    # for comparison between frames and events
    delta_cols = (float(dict_event['frames'][-1]['col']) - float(dict_event['frames'][0]['col']))
    col_speed = delta_cols / duration
    delta_rows = (float(dict_event['frames'][-1]['row']) - float(dict_event['frames'][0]['row']))
    row_speed = delta_rows / duration

    # get the highest observed intensity (and lowest astronomical magnitude)
    max_intensity = 0
    min_magnitude = 999999
    for frame in dict_event['frames']:
        current_intensity = int(frame['inten'])
        current_magnitude = float(frame['mag'])
        if (current_intensity > max_intensity):
            max_intensity = current_intensity
            min_magnitude = current_magnitude

    dict_event.update({
        'meteor_duration': duration,
        'min_magnitude': min_magnitude,
        'max_intensity': max_intensity,
        'col_speed': col_speed,
        'row_speed': row_speed
    })
    return dict_event;


# RMS uses 256 frame blocks, so we need to check that an event wasn't cut in half
def rms_check_if_same_event(prev_event, curr_event):
    """
    How it works:
    - make sure the two events are from the same camera
    - make sure the two events are from a different file (distinct events)
    - make sure that the end of A and the start of B are at a close time (0.5 seconds)
    - calculate what the approximate average speed was
    - check that the trajectory is roughly the same direction and order of magnitude
    """
    # print('------------------------------------------\nin >rms_check_if_same_event<')

    # check it is the same camera
    prev_cam = prev_event['file_prefix']
    curr_cam = curr_event['file_prefix']
    if not prev_cam == curr_cam:
        # print('prev_cam = ', prev_cam)
        # print('curr_cam = ', curr_cam)
        # print('not from same camera - returning False')
        return False

    # if it is a continuation then it is in a different file
    prev_file = prev_event['file_name']
    curr_file = curr_event['file_name']
    # print('prev_file = ', prev_file)
    # print('curr_file = ', curr_file)
    if prev_file == curr_file:
        # print('from same file - returning False')
        return False

    # ensure small time difference
    prev_end_time = prev_event['frames'][-1]['timestamp']
    curr_start_time = curr_event['frames'][0]['timestamp']
    delta_time = float((curr_start_time - prev_end_time).total_seconds())  # the time elapsed in seconds
    # frame_rate = float(curr_event['fps'])
    # max_time_delta = 5.0 / frame_rate  # the number of frames in five seconds
    max_time_delta = 0.5  # no more than half a second between frames
    # print('delta_time = ', delta_time)
    # print('max_time_delta = ', max_time_delta)
    if delta_time > max_time_delta:
        # print('delta_time too large - returning False')
        return False
    # print("Close Time")

    # check trajectory is as expected
    # end of last event (start point) & start of next event (end point)
    prev_end_col = float(prev_event['frames'][-1]['col'])
    prev_end_row = float(prev_event['frames'][-1]['row'])
    curr_start_col = float(curr_event['frames'][0]['col'])
    curr_start_row = float(curr_event['frames'][0]['row'])

    # get speed between end of prev and start of next
    col_change = curr_start_col - prev_end_col
    row_change = curr_start_row - prev_end_row
    col_frame_speed = col_change / delta_time
    row_frame_speed = row_change / delta_time

    # what the previously measured speed was
    # col_expected_speed = prev_event['col_speed']
    # row_expected_speed = prev_event['row_speed']

    # what the speed was at the beginning of the current meteor
    end_frame = min(5, len(curr_event['frames']))  # take either the fifth or last frame
    if (end_frame < 2):
        # print('not enough data in second meteor - returning False')
        return False
    curr_end_col = float(curr_event['frames'][end_frame-1]['col'])
    curr_end_row = float(curr_event['frames'][end_frame-1]['row'])
    curr_end_time = curr_event['frames'][end_frame-1]['timestamp']
    curr_delta_time = float((curr_end_time - curr_start_time).total_seconds())  # the time elapsed in seconds
    col_expected_speed = (curr_end_col - curr_start_col) / curr_delta_time
    row_expected_speed = (curr_end_row - curr_start_row) / curr_delta_time

    # get the fraction of the actual change vs expected change, avoiding errors with low denominators
    if (abs(col_expected_speed) < 10.):
        col_frac_change = 1.
    else:
        col_frac_change = col_frame_speed / col_expected_speed
    if (abs(row_expected_speed) < 10.):
        row_frac_change = 1.
    else:
        row_frac_change = row_frame_speed / row_expected_speed

    # check that it is roughly within an expected range
    good_col_change = (0.5 < col_frac_change and col_frac_change < 2)
    good_row_change = (0.5 < row_frac_change and row_frac_change < 2)
    if not (good_col_change and good_row_change):
        return False

    return True


def rmsdict_to_std(meteor_info: dict, cam_info: dict):
    # cam:
    # {
    #    F_scale, Ho, JD, RA_H, RA_M, RA_S, RA_d, UT_corr, X_res, Y_res, alt_centre, auto_check_fit_refined,
    #    az_centre, dec_D, dec_M, dec_S, dec_d, elev, focal_length, fov_h, fov_v, gamma, lat, lon
    #    mag_0, mag_lev, mag_lev_stddev, pos_angle_ref, rotation_from_horiz, star_list[]
    #    station_code, version, vignetting_coeff, timestamp, file_name
    #    x_poly[], x_poly_fwd[], x_poly_rev[], y_poly[], y_poly_fwd[], y_poly_rev[]
    # }
    #
    # meteor:
    # [{
    #    cam, meteor, segments, fps, hnr, mle, bin, pix/fm, rho, phi,
    #    file_name, file_prefix, timestamp, duration, min_magnitude, max_intensity,
    #    frames: [{
    #       frame, timestamp, col, row, ra, dec, azim, elev, inten, mag
    #    }]
    # }]
    ##

    # Now get metadata
    # location
    obs_longitude = float(cam_info['lon'])
    obs_latitude = float(cam_info['lat'])
    obs_elevation = float(cam_info['elev'])
    # camera station site name
    telescope = cam_info['station_code']  # no spaces or special characters
    location = telescope
    # observer and instrument
    origin = "RMS" + '_Ver_' + str(cam_info['version'])  # or other formal network names
    observer = cam_info['station_code']
    instrument = 'PiCam'
    lens = ''
    image_file = meteor_info['file_name']
    astrometry_number_stars = len(cam_info['star_list'])
    cx = int(cam_info['X_res'])
    cy = int(cam_info['Y_res'])

    # calculate event timings - file timestamp
    # timestamp = cam_info['timestamp']
    # head = float(meteor_info['frames'][0]['frame'])
    # print('head = ', head)

    # frame rate and beginning of clip
    frame_rate = float(meteor_info['fps'])
    isodate_start_time = meteor_info['frames'][ 0]['timestamp']
    isodate_end_time = meteor_info['frames'][-1]['timestamp']
    isodate_midpoint_time = isodate_start_time + (isodate_end_time - isodate_start_time)/2
    isodate_start = isodate_start_time.strftime(ISO_FORMAT)
    isodate_end = isodate_end_time.strftime(ISO_FORMAT)
    isodate_midpoint = isodate_midpoint_time.strftime(ISO_FORMAT)
    meteor_duration = meteor_info['meteor_duration']

    # construction of the metadata dictionary
    meta_dic = {'obs_latitude': obs_latitude,
                'obs_longitude': obs_longitude,
                'obs_elevation': obs_elevation,
                'origin': origin,
                'location': location,
                'telescope': telescope,
                'camera_id': telescope,
                'observer': observer,
                'comment': '',
                'instrument': instrument,
                'lens': lens,
                'cx': cx,
                'cy': cy,
                'photometric_band': 'Unknown',
                'image_file': image_file,
                'isodate_start_obs': str(isodate_start),
                'isodate_calib': str(isodate_midpoint),
                'exposure_time': meteor_duration,
                'astrometry_number_stars': astrometry_number_stars,
                # 'photometric_zero_point': float(cam_info['mag_lev']),
                # 'photometric_zero_point_uncertainty': float(cam_info['mag_lev_stddev']),
                'mag_label': 'mag',
                'no_frags': 1,
                'obs_az': float(cam_info['az_centre']),
                'obs_ev': float(cam_info['alt_centre']),
                'obs_rot': float(cam_info['rotation_from_horiz']),
                'fov_horiz': float(cam_info['fov_h']),
                'fov_vert': float(cam_info['fov_v']),
                }

    # initialise table
    ttt = Table()
    # update the table metadata
    ttt.meta.update(meta_dic)

    # create time and main data arrays
    # Datetime is ISO 8601 UTC format
    datetime_array = []
    # Azimuths are East of North, in degrees
    azimuth_array = []
    # Altitudes are geometric (not apparent) angles above the horizon, in degrees
    altitude_array = []
    # right ascension and declination coordinates
    ra_array = []
    dec_array = []
    x_array = []
    y_array = []
    mag_array = []
    nlines = len(meteor_info["frames"])
    # print('nlines = ', nlines)
    for i in range(nlines):
        obs = meteor_info["frames"][i]
        azimuth_array.append(float(obs['azim']))
altitude_array.append( float(obs['elev']) ) datetime_array.append( obs['timestamp'].strftime(ISO_FORMAT) ) ra_array.append( float(obs['ra']) ) dec_array.append( float(obs['ra']) ) x_array.append( float(obs['col'])) y_array.append( float(obs['row'])) mag_array.append( float(obs['mag'])) ## Populate the table with the data created to date # create columns ttt['datetime'] = datetime_array ttt['ra'] = ra_array * u.degree ttt['dec'] = dec_array * u.degree ttt['azimuth'] = azimuth_array * u.degree ttt['altitude'] = altitude_array * u.degree ttt['mag'] = mag_array ttt['x_image'] = x_array ttt['y_image'] = x_array return ttt; def rms_dict_list_to_std(rms_meteor_dict_list, rms_cams_info): # list of cameras we have: cam_list = [] for cam in rms_cams_info: cam_list.append(cam) #get an astropy table list ttt_list = [] for meteor_info in rms_meteor_dict_list: # get info for each if not meteor_info: print("Empty Entry : ", meteor_info, " - Likely due to merging") continue file_prefix = meteor_info['file_prefix'] cam_info = find_most_recent_cam_calibration( rms_cams_info[file_prefix] , meteor_info['timestamp'] ) # convert and add to list ttt1 = rmsdict_to_std(meteor_info, cam_info) ttt2 = std_timeshift(ttt1,RMS_DELAY) ttt_list.append(ttt2) return ttt_list def rms_to_std(rms_meteor_text, rms_cams_dict): rms_meteor_dict_list = rms_to_dict(rms_meteor_text); ttt_list = rms_dict_list_to_std(rms_meteor_dict_list, rms_cams_dict) return ttt_list, len(ttt_list); def rms_json_to_std(json_data, lname): # This reads a string which is in RMS json format and converts it to standard format # Set up arrays for point observation data datetime_array = [] datestr_array = [] azimuth_array = [] altitude_array = [] ra_array = [] dec_array = [] mag_array = [] x_image_array = [] y_image_array = [] JD_array = [] # Standard spec instrument = '' lens = '' cx = 1920 cy = 1080 # start of J2000 epoch ts = datetime.strptime("2000-01-01T12:00:00.000",ISO_FORMAT) start_epoch = datetime2JD(ts) # Check the json 
    # data to see which format it is in, and extract information accordingly.
    if 'centroids' in json_data:
        # The February 2021 RMS json format
        # if lname.endswith("reduced.json"):
        #     for key_name in json_data.keys():
        #         print(key_name)
        #
        jdt_ref = float(json_data['jdt_ref'])
        frame_rate = float(json_data['fps'])
        no_lines = len(json_data['centroids'])
        print('no_lines = ', no_lines)

        # "station": {
        #     "elev": 63.0,
        #     "lat": 51.53511,
        #     "lon": -2.14857,
        #     "station_id": "UK000X"
        obs_latitude = float(json_data['station']['lat'])
        obs_longitude = float(json_data['station']['lon'])
        obs_elevation = float(json_data['station']['elev'])
        location = str(json_data['station']['station_id'])
        telescope = ''
        camera_id = location
        observer = ''
        rstars = 0

        for i in range(no_lines):
            f_data = json_data['centroids'][i]
            # "centroids_labels": [
            #     "Time (s)",          [0]
            #     "X (px)",            [1]
            #     "Y (px)",            [2]
            #     "RA (deg)",          [3]
            #     "Dec (deg)",         [4]
            #     "Summed intensity",  [5]
            #     "Magnitude"          [6]
            # 5.451308905560867,
            # 1018.0993786888196,
            # 361.8849779477523,
            # 338.9399709902407,
            # 76.4566600907301,
            # 1,
            # 9.592039852289648
            # date_str = f_data[0].replace(' ','T')
            # date_time = datetime.strptime(date_str, ISO_FORMAT)
            #print('i=', i, ' date_str =', date_str, ' date_time =', date_time)
            JD = jdt_ref + float(f_data[0]) / 86400.0
            JD_array.append(JD)
            tm = Time(str(JD), format='jd')
            date_time = tm.strftime(ISO_FORMAT)
            # print('tm = ', tm, ', date_time = ', date_time)
            ra = float(f_data[3])
            dec = float(f_data[4])
            # RA and DEC are in J2000 epoch.
            # Precess to epoch of date, then convert to Az and Alt using RMS code
            temp_ra, temp_dec = equatorialCoordPrecession(start_epoch, JD, ra, dec)
            temp_azim, temp_elev = raDec2AltAz(temp_ra, temp_dec, JD, obs_latitude, obs_longitude)
            datetime_array.append(date_time)
            ra_array.append(ra)
            dec_array.append(dec)
            azimuth_array.append(temp_azim)
            altitude_array.append(temp_elev)
            mag_array.append(float(f_data[6]))
            x_image_array.append(float(f_data[1]))
            y_image_array.append(float(f_data[2]))

        # meteor_duration = datetime_array[-1] - datetime_array[0]
        print('frame_rate = ', frame_rate)
        meteor_duration_float = 86400.0 * (JD_array[-1] - JD_array[0])
        print('meteor_duration = ', meteor_duration_float)
        time_step = -float(json_data['centroids'][0][0]) # time of first frame
        tm = Time(str(jdt_ref), format='jd')
        isodate_start_time = tm.strftime(ISO_FORMAT)
        print('isodate_start_time = ', isodate_start_time)
        isodate_start = tm.strftime(ISO_FORMAT)
        JD_mid = jdt_ref + 0.5 * (JD_array[-1] - jdt_ref)
        print("JD_mid = ", JD_mid)
        tm = Time(str(JD_mid), format='jd')
        isodate_midpoint = tm.strftime(ISO_FORMAT)
        print('isodate_midpoint = ', isodate_midpoint)

        meta_dic = {
            'obs_latitude': obs_latitude,
            'obs_longitude': obs_longitude,
            'obs_elevation': obs_elevation,
            'origin': 'RMS',
            'location': location,
            'telescope': telescope,
            'camera_id': camera_id,
            'observer': observer,
            'comment': '',
            'instrument': instrument,
            'lens': lens,
            'cx': cx,
            'cy': cy,
            'photometric_band': 'Unknown',
            'image_file': 'Unknown',
            'isodate_start_obs': str(isodate_start),
            'isodate_calib': str(isodate_midpoint),
            'exposure_time': meteor_duration_float,
            'astrometry_number_stars': rstars,
            #'photometric_zero_point': 0.0,
            #'photometric_zero_point_uncertainty': 0.0,
            'mag_label': 'mag',
            'no_frags': 1,
            'obs_az': 0.0,
            'obs_ev': 0.0,
            'obs_rot': 0.0,
            'fov_horiz': 0.0,
            'fov_vert': 0.0,
        }
    else:
        print('\n RMS json format not recognised')
        return ([], 0)

    # initialise table
    ttt = Table()
    #Update the table metadata
    ttt.meta.update(meta_dic)

    ttt['datetime'] = datetime_array
    ttt['ra'] = ra_array
    ttt['dec'] = dec_array
    ttt['azimuth'] = azimuth_array
    ttt['altitude'] = altitude_array
    ttt['mag'] = mag_array
    ttt['x_image'] = x_image_array
    ttt['y_image'] = y_image_array

    return ([ttt], 1)
# -

# # CAMS functions

# +
# cams_camera_txt( _camera_file_path )
#
# rms_to_dict()
#
# camsdict_to_astropy_table(meteor_info: dict, cam_info: dict):
#
# cams_dict_list_to_astropy_tables(rms_meteor_dict_list, cams_camera_info):
# cams_to_std(cams_meteor_text, cams_cameras_dict):

def cams_camera_txt(_camera_file_path):
    #extract json data
    _cal_txt = open(_camera_file_path).read()
    _cal_lines = _cal_txt.split("\n")
    cam_dict = {}
    #test if a line is "something = value"
    equals_test = "(.*)=(.*)"
    cal_date, cal_time = False, False
    for cal_line in _cal_lines:
        if not regex.match(equals_test, cal_line):
            continue
        temp_regex_results = regex.search(equals_test, cal_line)
        cal_term = temp_regex_results[1].strip()
        cal_term_value = temp_regex_results[2].strip()
        L = len(cal_term_value)
        print(cal_term, " \t:", cal_term_value)
        if regex.match("\d+", cal_term_value) and len(regex.search("\d+", cal_term_value)[0]) == L:
            cam_dict.update({cal_term: int(cal_term_value)})
            print("Integer detected")
            continue
        if regex.match("[0-9.+-]+", cal_term_value) and len(regex.search("[0-9.+-]+", cal_term_value)[0]) == L:
            cam_dict.update({cal_term: float(cal_term_value)})
            print("Float detected")
            continue
        if regex.match("\d{2}/\d{2}/\d{4}", cal_term_value):
            cal_date_str = regex.search("\d{2}/\d{2}/\d{4}", cal_term_value)[0]
            cal_date = datetime.strptime(cal_date_str, "%m/%d/%Y").strftime("%Y-%m-%d")
            print("Date detected")
            continue
        if regex.match("\d{2}:\d{2}:\d{2}.\d{3}", cal_term_value):
            cal_time = regex.search("\d{2}:\d{2}:\d{2}.\d{3}", cal_term_value)[0]
            print("Time detected")
            continue
        if cal_term == "FOV dimension hxw (deg)":
            cal_fov_hw = regex.search("([+\-0-9.]+)\s*x\s*([+\-0-9.]+)", cal_term_value)
            print(cal_fov_hw)
            cam_dict.update({
                "FOV dimension hxw (deg)": cal_fov_hw[0],
                'FOV height (deg)': float(cal_fov_hw[1]),
                'FOV width (deg)': float(cal_fov_hw[2]),
            })
            continue
        cam_dict.update({cal_term: cal_term_value})

    if cal_date and cal_time:
        cal_timestamp_string = cal_date + "T" + cal_time + "000"
        cal_timestamp = datetime.strptime(cal_timestamp_string, ISO_FORMAT)
        cam_dict.update({"timestamp": cal_timestamp})
    print("Got camera data ")
    return cam_dict


def camsdict_to_astropy_table(meteor_info: dict, cam_info: dict):
    # cam:
    #
    """ { 'Camera number': 3814,
      'Longitude +west (deg)': -5.39928,
      'Latitude +north (deg)': 49.81511,
      'Height above WGS84 (km)': 0.4444,
      'FOV height (deg)'
      'FOV width (deg)'
      'FOV dimension hxw (deg)': '46.93 x 88.25',
      'Plate scale (arcmin/pix)': 3.948,
      'Plate roll wrt Std (deg)': 350.213,
      'Cam tilt wrt Horiz (deg)': 2.656,
      'Frame rate (Hz)': 25.0,
      'Cal center RA (deg)': 50.593,
      'Cal center Dec (deg)': 74.131,
      'Cal center Azim (deg)': 347.643,
      'Cal center Elev (deg)': 36.964,
      'Cal center col (colcen)': 640.0,
      'Cal center row (rowcen)': 360.0,
      'Cal fit order': 201,
      'Camera description': 'None',
      'Lens description': 'None',
      'Focal length (mm)': 0.0,
      'Focal ratio': 0.0,
      'Pixel pitch H (um)': 0.0,
      'Pixel pitch V (um)': 0.0,
      'Spectral response B': 0.45,
      'Spectral response V': 0.7,
      'Spectral response R': 0.72,
      'Spectral response I': 0.5,
      'Vignetting coef(deg/pix)': 0.0,
      'Gamma': 1.0,
      'Xstd, Ystd': 'Radialxy2Standard( col, row, colcen, rowcen, Xcoef, Ycoef )',
      'x': 'col - colcen',
      'y': 'rowcen - row',
      'Mean O-C': '0.000 +- 0.000 arcmin',
      'Magnitude': '-2.5 ( C + D (logI-logVig) ) fit logFlux vs.
      Gamma (logI-logVig), mV < 6.60',
      'A': 10.0,
      'B': -2.5,
      'C': -4.0,
      'D': 1.0,
      'logVig': 'log( cos( Vignetting_coef * Rpixels * pi/180 )^4 )',
      'timestamp': datetime.datetime(2019, 5, 14, 20, 56, 33, 531000) } """
    #
    # meteor:
    # [{
    #   cam, meteor, segments, fps, hnr, mle, bin, pix/fm, rho, phi,
    #   file_name, file_prefix, timestamp, duration, min_magnitude, max_intensity,
    #   frames: [{
    #     frame, timestamp, col, row, ra, dec, azim, elev, inten, mag
    #   }]
    # }]
    ##

    # Now get metadata
    #location
    obs_longitude = float(cam_info['Longitude +west (deg)'])
    obs_latitude = float(cam_info['Latitude +north (deg)'])
    obs_elevation = 1000 * float(cam_info['Height above WGS84 (km)']) # to metres

    #camera station site name
    location = str(cam_info['Camera number'])
    telescope = str(cam_info['Camera number']).zfill(6) #no spaces or special characters

    #observer and instrument
    origin = "CAMS" # or other formal network names
    observer = str(cam_info['Camera number']).zfill(6)
    instrument = cam_info['Camera description']
    lens = cam_info['Lens description']
    image_file = meteor_info['file_name']
    astrometry_number_stars = 0
    cx = 2 * int(cam_info['Cal center col (colcen)'])
    cy = 2 * int(cam_info['Cal center row (rowcen)'])

    # calculate event timings - file timestamp
    timestamp = cam_info['timestamp'].strftime(ISO_FORMAT)[:-3]

    # frame rate and beginning of clip
    frame_rate = float(meteor_info['fps'])
    meteor_duration = meteor_info['meteor_duration']
    isodate_start_time = meteor_info['frames'][0]['timestamp']
    isodate_end_time = meteor_info['frames'][-1]['timestamp']
    isodate_midpoint_time = isodate_start_time + (isodate_end_time - isodate_start_time)/2
    isodate_start = isodate_start_time.strftime(ISO_FORMAT)
    isodate_end = isodate_end_time.strftime(ISO_FORMAT)
    isodate_midpoint = isodate_midpoint_time.strftime(ISO_FORMAT)

    # construction of the metadata dictionary
    meta_dic = {
        'obs_latitude': obs_latitude,
        'obs_longitude': obs_longitude,
        'obs_elevation': obs_elevation,
        'origin': origin,
        'location': location,
        'telescope': telescope,
        'camera_id': telescope,
        'observer': observer,
        'comment': '',
        'instrument': instrument,
        'lens': lens,
        'cx': cx,
        'cy': cy,
        'photometric_band': 'Unknown',
        'image_file': image_file,
        'isodate_start_obs': isodate_start,
        'isodate_calib': isodate_midpoint,
        'exposure_time': meteor_duration,
        'astrometry_number_stars': astrometry_number_stars,
        #'photometric_zero_point': 0.0,
        #'photometric_zero_point_uncertainty': 0.0,
        'mag_label': 'mag',
        'no_frags': 1,
        'obs_az': float(cam_info['Cal center Azim (deg)']),
        'obs_ev': float(cam_info['Cal center Elev (deg)']),
        'obs_rot': float(cam_info['Cam tilt wrt Horiz (deg)']),
        'fov_horiz': float(cam_info['FOV width (deg)']),
        'fov_vert': float(cam_info['FOV height (deg)']),
    }

    # initialise table
    ttt = Table()
    #Update the table metadata
    ttt.meta.update(meta_dic)

    #create time and main data arrays
    # Datetime is ISO 8601 UTC format
    datetime_array = []
    # Azimuths are East of North, in degrees
    azimuth_array = []
    # Altitudes are geometric (not apparent) angles above the horizon, in degrees
    altitude_array = []
    #right ascension and declination coordinates
    ra_array = []
    dec_array = []
    x_array = []
    y_array = []
    mag_array = []

    nlines = len(meteor_info["frames"])
    print('nlines= ', nlines)
    for i in range(nlines):
        obs = meteor_info["frames"][i]
        azimuth_array.append(float(obs['azim']))
        altitude_array.append(float(obs['elev']))
        datetime_array.append(obs['timestamp'].strftime(ISO_FORMAT))
        ra_array.append(float(obs['ra']))
        dec_array.append(float(obs['dec']))
        x_array.append(float(obs['col']))
        y_array.append(float(obs['row']))
        mag_array.append(float(obs['mag']))

    ## Populate the table with the data created to date
    # create columns
    ttt['datetime'] = datetime_array
    ttt['ra'] = ra_array * u.degree
    ttt['dec'] = dec_array * u.degree
    ttt['azimuth'] = azimuth_array * u.degree
    ttt['altitude'] = altitude_array * u.degree
    ttt['mag'] = mag_array
    ttt['x_image'] = x_array
    ttt['y_image'] = y_array # was x_array - the y column was being filled with x values
    return ttt


def cams_dict_list_to_std(rms_meteor_dict_list, cams_camera_info):
    #get an astropy table list
    ttt_list = []
    for meteor_info in rms_meteor_dict_list:
        # get info for each
        if not meteor_info:
            print("Empty Entry : ", meteor_info, " - Likely due to merging")
            continue
        file_prefix = meteor_info['file_prefix']
        # convert and add to list
        singular_ttt = camsdict_to_astropy_table(meteor_info, cams_camera_info)
        ttt_list.append(singular_ttt)
    return ttt_list


def cams_to_std(cams_meteor_text, cams_cameras_dict):
    meteor_dict_list = rms_to_dict(cams_meteor_text)
    ttt_list = cams_dict_list_to_std(meteor_dict_list, cams_cameras_dict)
    return ttt_list, len(ttt_list)
# -

# # MetRec functions

# +
"""
# List of info in .log in _metrec_cfg
AutoConfiguration - yes
FrameGrabberType - Meteor II
FrameGrabberDeviceNumber - 1
VideoSignalType - PAL
InterlacedVideo - no
TimeBase - current time
TimeDriftCorrection - 0.0 s/h
TimeZoneCorrection - 0 h
DSTCorrection - no
DateBase - current date
DateCorrection - yes
RecognitionEndTime - 6 h 30 m 0 s
AutoRestart - no
QuitBehaviour - quit without confirmation
WaitForDusk - no
MaximumSolarAltitude - -16 $
MinimumLunarDistance - 0 $
PosDriftCorrection - X/Y
PosDriftHistory - metrec.pos
FrameBufferCount - 300 frame(s)
DelayTime - 0 ms
DisplayRefreshRate - 2
InternalResolution - max
MeteorElongation - 1
StartThreshold - 1.50
ConstantThreshold - no
RecognitionThreshold - 0.85
FloorThreshold - 0.50
ThresholdHistory - metrec.thr
FlashThreshold - 20
FlashRecoveryFrameCount - 50 frame(s)
SaveFlashImage - no
SaveBackgroundRate - never
MinimumFrameCount - 3 frame(s)
Beep - no
SendSerialPing - yes
SerialPingPort - 1
SerialPingType - ABEI
MinimumMeteorVelocity - 1.0 $/s
MaximumMeteorVelocity - 50.0 $/s
PositionAngleOffset - 0 $
UseInputMask - yes
InputMask - c:\cilbo\metrec\config\ICC7mask.bmp
DarkField - dark.bmp
UseOldFlatField - no
NewFlatField - metrec.ffd
FlatFieldSmooth - 2
FlatFieldSmoothDir - symmetric
SensitivityImage - metrec.bmp
TracingImage - ????????.bmp
TimeStamp - date and time
TimeStampXPosition - 384
TimeStampYPosition - 288
SaveSingleFrames - bright only
SingleFrameBrightness - 0.0
SingleFrameDuration - 0.5
SaveMeteorBand - yes
SaveSumImage - yes
SaveMeteorData - yes
SavePreFrameCount - 3 frame(s)
SavePostFrameCount - 3 frame(s)
SavePostFrameBright - 30 frame(s)
RealTimeFluxUpload - no
CameraName - ICC7
BaseDirectory - c:\cilbo\data\ICC7\
FileNameRule - hhmmssff.bmp
ClockSync - no
EquatorialCoordinates - yes
ReferenceStars - 20190903.ref
MaximumMeteorTilt - 0 $
MaximumMeteorShift - 0 $
CreatePosDatEntry - yes
Operation mode - unguided
Reference date - 2019/09/03
Reference time - 23:00:00
Site code - 15556
Longitude - -16.509171 $
Latitude - 28.298901 $
Altitude - 2400 m
Noise Level - 5.0
Maximum Star Diameter - 4.0
Minimum Star Diameter - 1.0
Video brightness - 128
Video contrast - 128
Gamma correction - 1.00
Order of plate constants - 3
Center of plate RA - 18.0080 h
Center of plate DE - 34.5535 $
Center of plate Alt - 54.6 $
Center of plate Az - 290.8 $
Size of field of view - 30.5 x 23.0 $
O-C RefStar1 - msqe= 0.55' l1o= 0.63' -0.00 mag (B-V= 1.40 mag)
...
O-C RefStar51 - msqe= 1.81' l1o= 2.08' 0.10 mag (B-V= 1.10 mag)
Mean Squared O-C - msqe= 1.66' l1o= 2.04' 0.41 mag
Photometric equation - -2.326 log(pixelsum) + 8.390
Color index correction - -0.127 (B-V) + 0.063
Nominal lim. magnitude - 5.7 mag
Total collection area - 682 deg^2 / 4196 km^2 @ 100 km alt
Corrected total collection area - 2377 km^2
Number of active meteor showers (2019/11/01) - 3
"""

"""
# List of info in inf
'#', 'time', 'bright', 'x', 'y', 'alpha', 'delta', 'c_x', 'c_y', 'c_alpha', 'c_delta', 'use', 'timestamp'
"""

def metrec_to_standard(inf, log):
    cfg = log._metrec_cfg

    def getFloat(numstr):
        return float(regex.match('[+\-0-9.]+', numstr)[0])

    #location
    obs_longitude = getFloat(cfg['Longitude'])
    obs_latitude = getFloat(cfg['Latitude'])
    obs_elevation = getFloat(cfg['Altitude'])

    #camera station site name
    location = str(cfg['Site code'])
    telescope = str(cfg['CameraName']).zfill(6) #no spaces or special characters

    #observer and instrument
    origin = "MetRec" # or other formal network names
    observer = cfg['CameraName']
    instrument = cfg['CameraName']
    lens = 'unknown'
    image_file = inf.path
    astrometry_number_stars = 0

    if cfg['TimeStamp'] == 'none':
        cx = 0
        cy = 0
    else:
        cx = int(cfg['TimeStampXPosition'])
        cy = int(cfg['TimeStampYPosition'])

    # calculate event timings - file timestamp
    timestamp = inf['timestamp'][0]
    isodate_start = inf['timestamp'][0]
    isodate_end = inf['timestamp'][-1]
    start_datetime = datetime.strptime(isodate_start, ISO_FORMAT)
    end_datetime = datetime.strptime(isodate_end, ISO_FORMAT)
    meteor_duration = (end_datetime - start_datetime)
    isodate_midpoint_time = start_datetime + meteor_duration/2
    isodate_midpoint = isodate_midpoint_time.strftime(ISO_FORMAT)
    meteor_duration = meteor_duration.total_seconds()

    # get FOV x and y
    cfg_fov = regex.search("([+\-0-9.]+)\s*x\s*([+\-0-9.]+)", cfg['Size of field of view'])

    # construction of the metadata dictionary
    meta_dic = {
        'obs_latitude': obs_latitude,
        'obs_longitude': obs_longitude,
        'obs_elevation': obs_elevation,
        'origin': origin,
        'location': location,
        'telescope': telescope,
        'camera_id': telescope,
        'observer': observer,
        'comment': '',
        'instrument': instrument,
        'lens': lens,
        'cx': cx,
        'cy': cy,
        'photometric_band': 'Unknown',
        'image_file': image_file,
        'isodate_start_obs': isodate_start,
        'isodate_calib': isodate_midpoint,
        'exposure_time': meteor_duration,
        'astrometry_number_stars': astrometry_number_stars,
        #'photometric_zero_point': 0.0,
        #'photometric_zero_point_uncertainty': 0.0,
        'mag_label': 'mag',
        'no_frags': 1,
        'obs_az': getFloat(cfg['Center of plate Az']),
        'obs_ev': getFloat(cfg['Center of plate Alt']),
        'obs_rot': 0.0, #float(cam_info['Cam tilt wrt Horiz (deg)']), # not in MetRec?
        'fov_horiz': float(cfg_fov[1]),
        'fov_vert': float(cfg_fov[2]),
    }

    # initialise table
    ttt = Table()
    #Update the table metadata
    ttt.meta.update(meta_dic)

    # Meteor Info
    # Datetime is ISO 8601 UTC format
    metrec_index_array = []
    datetime_array = []
    azimuth_array = [] # Azimuths are East of North, in degrees
    altitude_array = [] # Altitudes are geometric (not apparent) angles above the horizon, in degrees
    #right ascension and declination coordinates
    ra_array = []
    dec_array = []
    x_array = []
    y_array = []
    mag_array = []

    # start of J2000 epoch
    ts = datetime.strptime("2000-01-01T12:00:00.000", ISO_FORMAT)
    start_epoch = datetime2JD(ts)

    for i in range(len(inf['use'])):
        if inf['use'][i]:
            # time and location
            metrec_index_array.append(i)
            temp_timestamp = inf['timestamp'][i]
            temp_datetime = datetime.strptime(temp_timestamp, ISO_FORMAT)
            # RA is in hours, so multiply by 15
            ra = 15 * float(inf['alpha'][i])
            dec = float(inf['delta'][i])
            # RA and DEC are in J2000 epoch.
            # Precess to epoch of date, then convert to Az and Alt using RMS code
            JD = datetime2JD(temp_datetime)
            temp_ra, temp_dec = equatorialCoordPrecession(start_epoch, JD, ra, dec)
            temp_azim, temp_alt = raDec2AltAz(temp_ra, temp_dec, JD, obs_latitude, obs_longitude)
            datetime_array.append(temp_timestamp)
            ra_array.append(ra)
            dec_array.append(dec)
            azimuth_array.append(temp_azim)
            altitude_array.append(temp_alt)
            x_array.append(float(inf['x'][i]))
            y_array.append(float(inf['y'][i]))
            # astronomical magnitude of the brightness
            if inf['bright'][i] is None:
                mag_array.append(99.9)
            else:
                mag_array.append(float(inf['bright'][i]))

    ## Populate the table with the data created to date
    # create columns
    ttt['datetime'] = datetime_array
    ttt['ra'] = ra_array * u.degree
    ttt['dec'] = dec_array * u.degree
    ttt['azimuth'] = azimuth_array * u.degree
    ttt['altitude'] = altitude_array * u.degree
    ttt['mag'] = mag_array
    ttt['x_image'] = x_array
    ttt['y_image'] = y_array # was x_array - the y column was being filled with x values
    return [ttt], 1
# -

# # All Sky Cams functions

# +
def get_as7_stations(station_str):
    # get a table of AllSky7 camera locations
    # return the table, plus the index corresponding to "station_str"
    stations_file_name = 'https://raw.githubusercontent.com/SCAMP99/scamp/master/ALLSKY7_location_list.csv'
    import requests
    try:
        r = requests.get(stations_file_name)
        loc_table = ascii.read(r.text, delimiter=',')
        print('Filling location table from online index')
    except:
        # create columns for the UK and ROI stations only.
        # Station,City,Longitude,Latitude,Altitude,Firstlight,Operator
        # AMS101,Birmingham Astronomical Society,-1.846419,52.408080,127,February 2021,<NAME>
        # AMS100,Nuneaton,-1.45472222,52.52638889,80,December 2020,<NAME>
        # AMS113,Galway,-9.089128,53.274739,31,January 2021,<NAME>
        loc_table = Table()
        loc_table['Station'] = 'AMS101', 'AMS100', 'AMS113'
        loc_table['City'] = 'Birmingham Astronomical Society', 'Nuneaton', 'Galway'
        loc_table['Longitude'] = '-1.846419', '-1.45472222', '-9.089128'
        loc_table['Latitude'] = '52.40808', '52.52638889', '53.274739'
        loc_table['Altitude'] = '127', '80', '31'
        loc_table['Firstlight'] = 'Feb 2021', 'Dec 2020', 'Jan 2021'
        loc_table['Operator'] = '<NAME>', '<NAME>', '<NAME>'
        print('Filling location table from known locations')

    no_stations = len(loc_table['Latitude'])

    #The first key may have extra characters in it - if so, rename it.
    for key_name in loc_table.keys():
        if 'Station' in key_name:
            if not key_name == 'Station':
                loc_table.rename_column(key_name, 'Station')
    #print(loc_table)

    i = -1
    for j in range(no_stations):
        if loc_table['Station'][j] == station_str:
            i = j
            break
    if i < 0:
        print('AllSky7 Station name "' + station_str + '" not found.')
        return ([], 0, i)
    print('AllSky7 Station name "' + station_str + '" is in row ', i)
    return (loc_table, no_stations, i)


def allskycams_to_std(json_data, lname):
    # This reads a string which is in AllSkyCams json format and converts it to standard format

    # Set up arrays for point observation data
    datetime_array = []
    datestr_array = []
    azimuth_array = []
    altitude_array = []
    ra_array = []
    dec_array = []
    mag_array = []
    x_image_array = []
    y_image_array = []

    # Standard AllSky7 spec
    instrument = 'NST-IPC16C91 - Low Lux SONY STARVIS Sensor Wireless IP Board Camera'
    lens = '4 mm f/1.0'
    cx = 1920
    cy = 1080

    # Check the json data to see which format it is in, and extract information accordingly.
    if 'station_name' in json_data:
        # The February 2021 "reduced.json" format
        # if lname.endswith("reduced.json"):
        #     for key_name in json_data.keys():
        #         print(key_name)
        #
        # api_key
        # station_name
        # device_name
        # sd_video_file
        # sd_stack
        # hd_video_file
        # hd_stack
        # event_start_time
        # event_duration
        # peak_magnitude
        # start_az
        # start_el
        # end_az
        # end_el
        # start_ra
        # start_dec
        # end_ra
        # end_dec
        # meteor_frame_data
        # crop_box
        # cal_params
        #
        no_lines = len(json_data['meteor_frame_data'])
        #print('no_lines = ', no_lines)
        #for k in range(no_lines):
        #    print('\n', json_data['meteor_frame_data'][k])
        #    if k == 3:
        #        for m in range(len(json_data['meteor_frame_data'][k])):
        #            print("json_data['meteor_frame_data'][", k, "][", m, "] = ", json_data['meteor_frame_data'][k][m])
        for i in range(no_lines):
            f_data = json_data['meteor_frame_data'][i]
            # "meteor_frame_data": [
            #   [
            #     [0] "2021-02-04 05:42:07.800",
            #     [1] 46,                  fn
            #     [2] 381,                 x
            #     [3] 118,                 y
            #     [4] 10,                  w
            #     [5] 10,                  h
            #     [6] 1275,                [Number of pixels?]
            #     [7] 348.38983647729816,  RA
            #     [8] 65.16531356859444,   Dec
            #     [9] 22.88795625855195,   az
            #     [10] 33.84381610533057,  el
            #   ],
            date_str = f_data[0].replace(' ', 'T')
            date_time = datetime.strptime(date_str, ISO_FORMAT)
            #print('i=', i, ' date_str =', date_str, ' date_time =', date_time)
            datetime_array.append(date_time)
            datestr_array.append(date_str)
            azimuth_array.append(float(f_data[9]))
            altitude_array.append(float(f_data[10]))
            ra_array.append(float(f_data[7]))
            dec_array.append(float(f_data[8]))
            mag_array.append(0.0)
            x_image_array.append(float(f_data[2]))
            y_image_array.append(float(f_data[3]))

        meteor_duration = datetime_array[-1] - datetime_array[0]
        print('meteor_duration = ', meteor_duration)
        meteor_duration_float = float(meteor_duration.total_seconds())
        frame_rate = (json_data['meteor_frame_data'][-1][1] - json_data['meteor_frame_data'][0][1]) / meteor_duration_float
        print('frame_rate = ', frame_rate)
        time_step = (1 - json_data['meteor_frame_data'][0][1]) / frame_rate
        isodate_start_time = datetime_array[0] + timedelta(seconds=time_step)
        print('isodate_start_time = ', isodate_start_time)
        isodate_end_time = datetime_array[-1]
        print('datetime_array[0] = ', datetime_array[0])
        print('isodate_end_time = ', isodate_end_time)
        isodate_midpoint_time = isodate_start_time + (isodate_end_time - isodate_start_time)/2
        print('isodate_midpoint_time = ', isodate_midpoint_time)
        isodate_start = isodate_start_time.strftime(ISO_FORMAT)
        isodate_end = isodate_end_time.strftime(ISO_FORMAT)
        isodate_midpoint = isodate_midpoint_time.strftime(ISO_FORMAT)

        print('\n getting the list of AS7 stations, call 1')
        station_str = json_data['station_name']
        loc_table, no_stations, row_no = get_as7_stations(station_str)
        #Station,City,Longitude,Latitude,Altitude,Firstlight,Operator
        if row_no < 0:
            # Station data is unavailable
            # Put placeholders for station data
            obs_latitude = -999.9
            obs_longitude = -999.9
            obs_elevation = -999.9
            location = 'Unknown'
            telescope = 'Unknown'
            camera_id = 'Unknown'
            observer = 'Unknown'
        else:
            # Use the information from the lookup table
            device_data = loc_table[row_no]
            obs_latitude = float(device_data['Latitude'])
            obs_longitude = float(device_data['Longitude'])
            obs_elevation = float(device_data['Altitude'])
            location = str(device_data['City'])
            telescope = str(station_str)
            observer = str(device_data['Operator'])
            if 'device_name' in json_data:
                camera_id = json_data['device_name']
            else:
                camera_id = str(station_str)

        device_data_internal = json_data['cal_params']
        # "cal_params": {
        #     "center_az": 291.03838531805667,
        #     "center_el": 24.91924498460342,
        #     "position_angle": 41.09621614877751,
        #     "pixscale": 155.58669548833825,
        #     "ra_center": "22.498291666666667",
        #     "dec_center": "32.03936111111111",
        #     "user_stars": [
        rstars = len(device_data_internal['user_stars'])

        meta_dic = {'obs_latitude': obs_latitude,
                    'obs_longitude': obs_longitude,
                    'obs_elevation': obs_elevation,
                    'origin': 'All Sky Systems',
                    'location': location,
                    'telescope': telescope,
                    'camera_id': camera_id,
                    'observer': observer,
                    'comment': '',
                    'instrument': instrument,
                    'lens': lens,
                    'cx': cx,
                    'cy': cy,
                    'photometric_band': 'Unknown',
                    'image_file': json_data['hd_video_file'],
                    'isodate_start_obs': str(isodate_start),
                    'isodate_calib': str(isodate_midpoint),
                    'exposure_time': meteor_duration_float,
                    'astrometry_number_stars': rstars,
                    #'photometric_zero_point': 0.0,
                    #'photometric_zero_point_uncertainty': 0.0,
                    'mag_label': 'no_mag_data',
                    'no_frags': 1,
                    'obs_az': float(device_data_internal['center_az']),
                    'obs_ev': float(device_data_internal['center_el']),
                    'obs_rot': 0.0,
                    'fov_horiz': 0.0,
                    'fov_vert': 0.0,
                    }

    elif 'best_meteor' in json_data:
        # is the February 2021 format without station data
        # includes the hacked form with manual dates, az, el
        print("\n the key 'best_meteor' is in the json data")
        #for key_name in json_data.keys():
        #    print(key_name)
        #
        # sd_video_file
        # sd_stack
        # sd_objects
        # hd_trim
        # hd_stack
        # hd_video_file
        # hd_objects
        # meteor
        # cp
        # best_meteor
        #
        # for key_name in json_data['best_meteor'].keys():
        #     print(key_name)
        #
        # obj_id
        # ofns
        # oxs
        # oys
        # ows
        # ohs
        # oint
        # fs_dist
        # segs
        # report
        # ccxs
        # ccys
        # dt
        # ras
        # decs
        # azs
        # els
        no_lines = len(json_data['best_meteor']['dt'])
        print('no_lines = ', no_lines)
        file_hacked = ('ras' not in json_data['best_meteor'])
        print('file_hacked = ', file_hacked)
        for i in range(no_lines):
            date_str = (str(json_data['best_meteor']['dt'][i])).replace(' ', 'T')
            date_time = datetime.strptime(date_str, ISO_FORMAT)
            print('i=', i, ' date_str=', date_str, ' date_time=', date_time)
            datetime_array.append(date_time)
            datestr_array.append(date_str)
            azimuth_array.append(float(json_data['best_meteor']['azs'][i]))
            altitude_array.append(float(json_data['best_meteor']['els'][i]))
            if file_hacked:
                x_image_array.append(0.0)
                y_image_array.append(0.0)
                # Do RA and DEC later
            else:
                x_image_array.append(float(json_data['best_meteor']['ccxs'][i]))
                y_image_array.append(float(json_data['best_meteor']['ccys'][i]))
                ra_array.append(float(json_data['best_meteor']['ras'][i]))
                dec_array.append(float(json_data['best_meteor']['decs'][i]))
            mag_array.append(0.0)

        meteor_duration = datetime_array[-1] - datetime_array[0]
        print('meteor_duration = ', meteor_duration)
        meteor_duration_float = float(meteor_duration.total_seconds())
        frame_rate = (json_data['best_meteor']['ofns'][-1] - json_data['best_meteor']['ofns'][0]) / meteor_duration_float
        print('frame_rate = ', frame_rate)
        time_step = (1 - json_data['best_meteor']['ofns'][0]) / frame_rate
        isodate_start_time = datetime_array[0] + timedelta(seconds=time_step)
        print('isodate_start_time = ', isodate_start_time)
        isodate_end_time = datetime_array[-1]
        print('datetime_array[0] = ', datetime_array[0])
        print('isodate_end_time = ', isodate_end_time)
        isodate_midpoint_time = isodate_start_time + (isodate_end_time - isodate_start_time)/2
        print('isodate_midpoint_time = ', isodate_midpoint_time)
        isodate_start = isodate_start_time.strftime(ISO_FORMAT)
        isodate_end = isodate_end_time.strftime(ISO_FORMAT)
        isodate_midpoint = isodate_midpoint_time.strftime(ISO_FORMAT)

        # Now work out which station it is.
        # there is very little station info in the data file
        station_str = ''
        if 'archive_file' in json_data:
            # The archive filename contains the station name
            arch_str = str(json_data['archive_file'])
            arch_list = arch_str.split('/')
            for i in range(len(arch_list)):
                if ('AMS' in arch_list[i]) and (not arch_list[i] == 'AMS2'):
                    station_str = arch_list[i]
                    break
        if len(station_str) < 1:
            station_str = input("\nWhich AllSky7 station is this data from (e.g. AMS100) :")
        if len(station_str) < 1:
            row_no = -1
        else:
            print('\n getting the list of AS7 stations, call 2')
            loc_table, no_stations, row_no = get_as7_stations(station_str)
            #Station,City,Longitude,Latitude,Altitude,Firstlight,Operator
        if row_no < 0:
            # Station data is unavailable
            # Put placeholders for station data
            obs_latitude = -999.9
            obs_longitude = -999.9
            obs_elevation = -999.9
            location = 'Unknown'
            telescope = 'Unknown'
            camera_id = 'Unknown'
            observer = 'Unknown'
        else:
            # Use the information from the lookup table
            device_data = loc_table[row_no]
            obs_latitude = float(device_data['Latitude'])
            obs_longitude = float(device_data['Longitude'])
            obs_elevation = float(device_data['Altitude'])
            location = str(device_data['City'])
            telescope = str(station_str)
            camera_id = str(station_str)
            observer = str(device_data['Operator'])

        device_data_internal = json_data['cp']
        # "cp": {
        #     "center_az": 345.6233888888889,
        #     "center_el": 19.169500000000003,
        #     "position_angle": 13.914659642573255,
        #     "pixscale": 155.901484181,
        #     "ra_center": "310.7067083333333",
        #     "dec_center": "54.69519444444444",
        #     "user_stars": [
        #         [
        if file_hacked:
            rstars = 0
            comment = 'Reconstructed from basic az and alt data. No XY data'
            # Now add RA and DEC
            # start of J2000 epoch
            ts = datetime.strptime("2000-01-01T12:00:00.000", ISO_FORMAT)
            start_epoch = datetime2JD(ts)
            for i in range(no_lines):
                az = float(azimuth_array[i])
                elev = float(altitude_array[i])
                time_stamp = datestr_array[i]
                ts = datetime.strptime(time_stamp, ISO_FORMAT)
                JD = datetime2JD(ts)
                # USE Az and Alt to calculate correct RA and DEC in epoch of date, then precess back to J2000
                temp_ra, temp_dec = altAz2RADec(az, elev, JD, obs_latitude, obs_longitude)
                temp_ra, temp_dec = equatorialCoordPrecession(JD, start_epoch, temp_ra, temp_dec)
                ra_array.append(temp_ra)
                dec_array.append(temp_dec)
        else:
            rstars = len(device_data_internal['user_stars'])
            comment = ''

        meta_dic = {'obs_latitude': obs_latitude,
                    'obs_longitude': obs_longitude,
                    'obs_elevation': obs_elevation,
                    'origin': 'All Sky Systems',
                    'location': location,
                    'telescope': telescope,
                    'camera_id': camera_id,
                    'observer': observer,
                    'comment': comment,
                    'instrument': instrument,
                    'lens': lens,
                    'cx': cx,
                    'cy': cy,
                    'photometric_band': 'Unknown',
                    'image_file': json_data['hd_video_file'],
                    'isodate_start_obs': str(isodate_start),
                    'isodate_calib': str(isodate_midpoint),
                    'exposure_time': meteor_duration_float,
                    'astrometry_number_stars': rstars,
                    #'photometric_zero_point': 0.0,
                    #'photometric_zero_point_uncertainty': 0.0,
                    'mag_label': 'no_mag_data',
                    'no_frags': 1,
                    'obs_az': float(device_data_internal['center_az']),
                    'obs_ev': float(device_data_internal['center_el']),
                    'obs_rot': 0.0,
                    'fov_horiz': 0.0,
                    'fov_vert': 0.0,
                    }

    elif 'info' in json_data:
        # is the July 2020 format
        print('\n July 2020 format')
        for key_name in json_data.keys():
            print(key_name)
        # info
        # frames
        # report
        # sync
        # calib
        camera_id = json_data['info']['station']
        location = str(json_data['info']['station'])
        telescope = json_data['info']['device']
        cx = int(json_data['calib']['img_dim'][0])
        cy = int(json_data['calib']['img_dim'][1])
        rstars = len(json_data['calib']['stars'])
        no_lines = len(json_data['frames'])
        # Work out who
the observer was, if possible loc_table, no_stations, row_no = get_as7_stations(camera_id) #Station,City,Longitude,Latitude,Altitude,Firstlight,Operator if row_no >= 0: observer = str(loc_table[row_no]['Operator']) else: observer = location + ' ' + telescope #print("\n len(json_data['frames']) = ",no_lines) for i in range(no_lines): f_data = json_data['frames'][i] #print("\n json_data['frames'][",i,"] = ",f_data) #json_data['frames'][ 4 ] = {'fn': 57, 'x': 727, 'y': 667, 'w': 11, 'h': 11, 'dt': '2020-07-09 01:27:36.400', # 'az': 51.85325570513405, 'el': 31.17297922001948, 'ra': 317.34699971600514, # 'dec': 47.48858399651199} date_str = f_data['dt'].replace(' ','T') date_time = datetime.strptime(date_str,ISO_FORMAT) #print('i=',i,' date_str =',date_str, ' date_time =',date_time) datetime_array.append(date_time) datestr_array.append(date_str) azimuth_array.append(float(f_data['az'])) altitude_array.append(float(f_data['el'])) ra_array.append(float(f_data['ra'])) dec_array.append(float(f_data['dec'])) mag_array.append(0.0) x_image_array.append(float(f_data['x'])) y_image_array.append(float(f_data['y'])) meteor_duration = datetime_array[-1] - datetime_array[0] print('meteor_duration = ', meteor_duration) meteor_duration_float = float(meteor_duration.total_seconds()) frame_rate = (json_data['frames'][-1]['fn']- json_data['frames'][0]['fn']) / meteor_duration_float print('frame_rate = ', frame_rate) time_step = (1 - json_data['frames'][0]['fn']) / frame_rate isodate_start_time = datetime_array[0] + timedelta(seconds=time_step) print('isodate_start_time = ', isodate_start_time) isodate_end_time = datetime_array[-1] print('datetime_array[0] = ', datetime_array[0]) print('isodate_end_time = ', isodate_end_time) isodate_midpoint_time = isodate_start_time + (isodate_end_time - isodate_start_time)/2 print('isodate_midpoint_time = ', isodate_midpoint_time) isodate_start = isodate_start_time.strftime(ISO_FORMAT) isodate_end = isodate_end_time.strftime(ISO_FORMAT) 
isodate_midpoint = isodate_midpoint_time.strftime(ISO_FORMAT) # construction of the metadata dictionary device_data = json_data['calib']['device'] meta_dic = {'obs_latitude': float(device_data['lat']), 'obs_longitude': float(device_data['lng']), 'obs_elevation': float(device_data['alt']), 'origin': 'All Sky Systems', 'location': location, 'telescope': telescope, 'camera_id': camera_id, 'observer': observer, 'comment': '', 'instrument': instrument, 'lens': lens, 'cx' : cx, 'cy' : cy, 'photometric_band' : 'Unknown', 'image_file' : json_data['info']['org_hd_vid'], 'isodate_start_obs': str(isodate_start), 'isodate_calib' : str(isodate_midpoint), 'exposure_time': meteor_duration_float, 'astrometry_number_stars' : rstars, #'photometric_zero_point': 0.0, #'photometric_zero_point_uncertainty': 0.0, 'mag_label': 'no_mag_data', 'no_frags': 1, 'obs_az': float(device_data['center']['az']), 'obs_ev': float(device_data['center']['el']), 'obs_rot': 0.0, 'fov_horiz': 0.0, 'fov_vert': 0.0, } else: print('\n Json format not recognised') return([], 0); # initialise table ttt = Table() #Update the table metadata ttt.meta.update(meta_dic) ttt['datetime'] = datestr_array ttt['ra'] = ra_array ttt['dec'] = dec_array ttt['azimuth'] = azimuth_array ttt['altitude'] = altitude_array ttt['no_mag_data'] = mag_array ttt['x_image'] = x_image_array ttt['y_image'] = y_image_array return([ttt], 1); def std_to_allskycams(ttt): info = {} info['station'] = ttt.meta['location'] info['device'] = ttt.meta['telescope'] info['org_hd_vid'] = ttt.meta['image_file'] # work out the frame rate of the observations in the table. 
# this code is long-form as it was copied across from the UFO conversion start_time = Time(ttt['datetime'][0]) start_time_str = str(ttt['datetime'][0]) nlines = len(ttt['datetime']) cumu_times = [] step_sizes = [] last_sec = 0.0 for i in range(nlines): sec = get_secs(Time(ttt['datetime'][i]),start_time) cumu_times.append(sec) sec_rounded = sec time_change = int(round(1000*(sec_rounded - last_sec),0)) if i>0 and (time_change not in step_sizes): step_sizes.append(time_change) last_sec = sec_rounded #now test for common framerates # likely framerates are 20 (DFN), 25 (UFO) or 30 (FRIPON) fps smallest = min(step_sizes) if (smallest==33 or smallest == 34 or smallest == 66 or smallest == 67): frame_rate = 30.0 elif (smallest >= 39 and smallest <= 41): frame_rate = 25.0 elif (smallest >= 49 and smallest <= 51): frame_rate = 20.0 else: # non-standard framerate # gcd is the greatest common divisor of all of the steps, in milliseconds. # Note - if gcd <= 10 it implies frame rate >= 100 fps, which is probably caused by a rounding error gcd = array_gcd(step_sizes) frame_rate = 1000.0/float(gcd) frame_step = 1/frame_rate #work out the head, tail and first frame number head_sec = round(-get_secs(Time(ttt.meta['isodate_start_obs']),start_time),6) head = int(round(head_sec / frame_step,0)) fs = head + 1 fN = 1+int(round(sec/frame_step,0)) fe = fs + fN -1 sN = nlines sec = round(sec, 4) # work out number of frames-equivalent and tail mid_sec = round(head_sec + get_secs(Time(ttt.meta['isodate_calib']),start_time),6) clip_sec = round(max(min(2*mid_sec,30.0),(fe-1)*frame_step),6) no_frames = int(round(clip_sec/frame_step,0)) + 1 tail = max(0,no_frames - (head + fN)) no_frames = head + fN + tail frames = [] for i in range(nlines): frame = {} frame['fn'] = fs + int(round(cumu_times[i]/frame_step,0)) frame['x'] = int(round(ttt[i]['x_image'],0)) frame['y'] = int(round(ttt[i]['y_image'],0)) frame['w'] = 0 frame['h'] = 0 frame['dt'] = ttt[i]['datetime'].replace('T',' ') frame['az'] = 
ttt[i]['azimuth'] frame['el'] = ttt[i]['altitude'] frame['ra'] = ttt[i]['ra'] frame['dec'] = ttt[i]['dec'] frames.append(frame) center_dic = {} center_dic['az'] = ttt.meta['obs_az'] center_dic['el'] = ttt.meta['obs_ev'] device = {} device['center'] = center_dic device['alt'] = str(ttt.meta['obs_elevation']) device['lat'] = str(ttt.meta['obs_latitude']) device['lng'] = str(ttt.meta['obs_longitude']) calib = {} calib['device'] = device calib['img_dim'] = [ttt.meta['cx'], ttt.meta['cy']] # assemble a dictionary with the right data structure json_dict = {} json_dict['info'] = info json_dict['frames'] = frames json_dict['calib'] = calib # convert the dictionary to a string json_str = json.dumps(json_dict, ensure_ascii=True, indent=4) return json_str # - # # RA & DEC <==> Az Alt conversion, from RMS (c) Denis Vida # + """ A set of tools of working with meteor data. Includes: - Julian date conversion - LST calculation - Coordinate transformations - RA and Dec precession correction - ... """ # The MIT License # Copyright (c) 2016 <NAME> # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
# IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

import math
from datetime import datetime, timedelta, MINYEAR

### CONSTANTS ###

# Define Julian epoch
JULIAN_EPOCH = datetime(2000, 1, 1, 12)  # noon (the epoch name is unrelated)
J2000_JD = timedelta(2451545)  # Julian epoch in Julian dates


class EARTH_CONSTANTS(object):
    """ Holds Earth's shape parameters. """

    def __init__(self):

        # Earth ellipsoid parameters in meters (source: IERS 2003)
        self.EQUATORIAL_RADIUS = 6378136.6
        self.POLAR_RADIUS = 6356751.9
        self.RATIO = self.EQUATORIAL_RADIUS/self.POLAR_RADIUS
        self.SQR_DIFF = self.EQUATORIAL_RADIUS**2 - self.POLAR_RADIUS**2

# Initialize Earth shape constants object
EARTH = EARTH_CONSTANTS()

#################

### Time conversions ###

def JD2LST(julian_date, lon):
    """ Convert Julian date to Local Sidereal Time and Greenwich Sidereal Time.

    Arguments:
        julian_date: [float] decimal Julian date, epoch J2000.0
        lon: [float] longitude of the observer in degrees

    Return:
        (LST, GST): [tuple of floats] a tuple of Local Sidereal Time and Greenwich Sidereal Time
            (degrees)
    """

    t = (julian_date - J2000_JD.days)/36525.0

    # Greenwich Sidereal Time
    GST = 280.46061837 + 360.98564736629*(julian_date - 2451545) + 0.000387933*t**2 - ((t**3)/38710000)
    GST = (GST + 360) % 360

    # Local Sidereal Time
    LST = (GST + lon + 360) % 360

    return LST, GST


def date2JD(year, month, day, hour, minute, second, millisecond=0, UT_corr=0.0):
    """ Convert date and time to Julian Date with epoch J2000.0.

    @param year: [int] year
    @param month: [int] month
    @param day: [int] day of the date
    @param hour: [int] hours
    @param minute: [int] minutes
    @param second: [int] seconds
    @param millisecond: [int] milliseconds (optional)
    @param UT_corr: [float] UT correction in hours (difference from local time to UT)

    @return: [float] julian date, epoch 2000.0
    """

    # Convert all input arguments to integer (except milliseconds)
    year, month, day, hour, minute, second = map(int, (year, month, day, hour, minute, second))

    # Create datetime object of current time
    dt = datetime(year, month, day, hour, minute, second, int(millisecond*1000))

    # Calculate Julian date
    julian = dt - JULIAN_EPOCH + J2000_JD - timedelta(hours=UT_corr)

    # Convert seconds to day fractions
    return julian.days + (julian.seconds + julian.microseconds/1000000.0)/86400.0


def datetime2JD(dt, UT_corr=0.0):
    """ Converts a datetime object to Julian date.

    Arguments:
        dt: [datetime object]

    Keyword arguments:
        UT_corr: [float] UT correction in hours (difference from local time to UT)

    Return:
        jd: [float] Julian date
    """

    return date2JD(dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second, dt.microsecond/1000.0,
        UT_corr=UT_corr)

############################

### Spatial coordinates transformations ###

def altAz2RADec(azim, elev, jd, lat, lon):
    """ Convert azimuth and altitude in a given time and position on Earth to right ascension and
        declination.

    Arguments:
        azim: [float] azimuth (+east of due north) in degrees
        elev: [float] elevation above horizon in degrees
        jd: [float] Julian date
        lat: [float] latitude of the observer in degrees
        lon: [float] longitude of the observer in degrees

    Return:
        (RA, dec): [tuple]
            RA: [float] right ascension (degrees)
            dec: [float] declination (degrees)
    """

    azim = np.radians(azim)
    elev = np.radians(elev)
    lat = np.radians(lat)
    lon = np.radians(lon)

    # Calculate hour angle
    ha = np.arctan2(-np.sin(azim), np.tan(elev)*np.cos(lat) - np.cos(azim)*np.sin(lat))

    # Calculate Local Sidereal Time
    lst = np.radians(JD2LST(jd, np.degrees(lon))[0])

    # Calculate right ascension
    ra = (lst - ha)%(2*np.pi)

    # Calculate declination
    dec = np.arcsin(np.sin(lat)*np.sin(elev) + np.cos(lat)*np.cos(elev)*np.cos(azim))

    return np.degrees(ra), np.degrees(dec)


def raDec2AltAz(ra, dec, jd, lat, lon):
    """ Convert right ascension and declination to azimuth (+east of due north) and altitude.

    Arguments:
        ra: [float] right ascension in degrees
        dec: [float] declination in degrees
        jd: [float] Julian date
        lat: [float] latitude in degrees
        lon: [float] longitude in degrees

    Return:
        (azim, elev): [tuple]
            azim: [float] azimuth (+east of due north) in degrees
            elev: [float] elevation above horizon in degrees
    """

    ra = np.radians(ra)
    dec = np.radians(dec)
    lat = np.radians(lat)
    lon = np.radians(lon)

    # Calculate Local Sidereal Time
    lst = np.radians(JD2LST(jd, np.degrees(lon))[0])

    # Calculate the hour angle
    ha = lst - ra

    # Constrain the hour angle to [-pi, pi] range
    ha = (ha + np.pi)%(2*np.pi) - np.pi

    # Calculate the azimuth
    azim = np.pi + np.arctan2(np.sin(ha), np.cos(ha)*np.sin(lat) - np.tan(dec)*np.cos(lat))

    # Calculate the sine of elevation
    sin_elev = np.sin(lat)*np.sin(dec) + np.cos(lat)*np.cos(dec)*np.cos(ha)

    # Wrap the sine of elevation in the [-1, +1] range
    sin_elev = (sin_elev + 1)%2 - 1

    elev = np.arcsin(sin_elev)

    return np.degrees(azim), np.degrees(elev)

# use:
# (ra, dec) = altAz2RADec(azim, elev, datetime2JD(), lat, lon)
# (azim, elev) = raDec2AltAz(azim, elev, datetime2JD(), lat, lon)

# Vectorize the raDec2AltAz function so it can take numpy arrays for: ra, dec, jd
raDec2AltAz_vect = np.vectorize(raDec2AltAz, excluded=['lat', 'lon'])

### Precession ###

def equatorialCoordPrecession(start_epoch, final_epoch, ra, dec):
    """ Corrects Right Ascension and Declination from one epoch to another, taking only precession
        into account.

    Implemented from: <NAME> - Astronomical Algorithms, 2nd edition, pages 134-135

    @param start_epoch: [float] Julian date of the starting epoch
    @param final_epoch: [float] Julian date of the final epoch
    @param ra: [float] non-corrected right ascension in degrees
    @param dec: [float] non-corrected declination in degrees

    @return (ra, dec): [tuple of floats] precessed equatorial coordinates in degrees
    """

    ra = math.radians(ra)
    dec = math.radians(dec)

    T = (start_epoch - 2451545)/36525.0
    t = (final_epoch - start_epoch)/36525.0

    # Calculate correction parameters
    zeta = ((2306.2181 + 1.39656*T - 0.000139*T**2)*t + (0.30188 - 0.000344*T)*t**2 + 0.017998*t**3)/3600
    z = ((2306.2181 + 1.39656*T - 0.000139*T**2)*t + (1.09468 + 0.000066*T)*t**2 + 0.018203*t**3)/3600
    theta = ((2004.3109 - 0.85330*T - 0.000217*T**2)*t - (0.42665 + 0.000217*T)*t**2 - 0.041833*t**3)/3600

    # Convert parameters to radians
    zeta, z, theta = map(math.radians, (zeta, z, theta))

    # Calculate the next set of parameters
    A = math.cos(dec)*math.sin(ra + zeta)
    B = math.cos(theta)*math.cos(dec)*math.cos(ra + zeta) - math.sin(theta)*math.sin(dec)
    C = math.sin(theta)*math.cos(dec)*math.cos(ra + zeta) + math.cos(theta)*math.sin(dec)

    # Calculate right ascension
    ra_corr = math.atan2(A, B) + z

    # Calculate declination (apply a different equation if close to the pole, closer than 0.5 degrees)
    if (math.pi/2 - abs(dec)) < math.radians(0.5):
        dec_corr = math.acos(math.sqrt(A**2 + B**2))
    else:
        dec_corr = math.asin(C)

    temp_ra = math.degrees(ra_corr)
    if temp_ra < 0:
        temp_ra += 360.

    return temp_ra, math.degrees(dec_corr)


# Calculate UFO-style ra and dec by fitting a great circle
def ufo_ra_dec_alt_az(ttt):

    # Compute times of first and last points
    no_lines = len(ttt['datetime'])
    try:
        dt1 = datetime.strptime(str(ttt['datetime'][0]), ISO_FORMAT)
        dt2 = datetime.strptime(str(ttt['datetime'][no_lines - 1]), ISO_FORMAT)
    except:
        dt1 = datetime.strptime(str(ttt['datetime'][0]), "%Y-%m-%d %H:%M:%S.%f")
        dt2 = datetime.strptime(str(ttt['datetime'][no_lines - 1]), "%Y-%m-%d %H:%M:%S.%f")

    #JD = datetime2JD(dt1)

    ### Fit a great circle to Az/Alt measurements and compute model beg/end RA and Dec ###

    # Convert the measurement Az/Alt to cartesian coordinates
    # NOTE: All values that are used for Great Circle computation are:
    #   theta - the zenith angle (90 deg - altitude)
    #   phi - azimuth +N of due E, which is (90 deg - azim)
    azim = ttt['azimuth']
    elev = ttt['altitude']
    x, y, z = polarToCartesian(np.radians((90 - azim)%360), np.radians(90 - elev))

    # Fit a great circle
    C, theta0, phi0 = fitGreatCircle(x, y, z)

    # Get the first point on the great circle
    phase1 = greatCirclePhase(np.radians(90 - elev[0]), np.radians((90 - azim[0])%360), \
        theta0, phi0)
    alt1, azim1 = cartesianToPolar(*greatCircle(phase1, theta0, phi0))
    alt1 = 90 - np.degrees(alt1)
    azim1 = (90 - np.degrees(azim1))%360

    # Get the last point on the great circle
    phase2 = greatCirclePhase(np.radians(90 - elev[-1]), np.radians((90 - azim[-1])%360),\
        theta0, phi0)
    aa, bb, cc = greatCircle(phase2, theta0, phi0)
    alt2, azim2 = cartesianToPolar(aa, bb, cc)
    alt2 = 90 - np.degrees(alt2)
    azim2 = (90 - np.degrees(azim2))%360

    # Compute RA/Dec from Alt/Az
    obs_latitude = float(ttt.meta['obs_latitude'])
    obs_longitude = float(ttt.meta['obs_longitude'])
    ra1, dec1 = altAz2RADec(azim1, alt1, datetime2JD(dt1), obs_latitude, obs_longitude)
    ra2, dec2 = altAz2RADec(azim2, alt2, datetime2JD(dt2), obs_latitude, obs_longitude)

    return(float(alt1), float(alt2), float(azim1), float(azim2), float(ra1), float(ra2),
        float(dec1), float(dec2));
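`altAz2RADec` and `raDec2AltAz` above are mutual inverses, so a horizontal-to-equatorial-and-back round trip should reproduce the input coordinates to within floating-point error. A minimal self-contained sketch of that check, duplicating the formulas from the functions above (the Julian date and the station latitude/longitude are arbitrary example values, not taken from any real station):

```python
import numpy as np

def jd2lst(jd, lon):
    # Greenwich Sidereal Time, using the same polynomial as JD2LST above
    t = (jd - 2451545.0)/36525.0
    gst = (280.46061837 + 360.98564736629*(jd - 2451545.0) + 0.000387933*t**2 - t**3/38710000) % 360
    # Local Sidereal Time in degrees
    return (gst + lon + 360) % 360

def alt_az_to_ra_dec(azim, elev, jd, lat, lon):
    # Same math as altAz2RADec above: hour angle from az/alt, then RA = LST - HA
    azim, elev, lat = np.radians(azim), np.radians(elev), np.radians(lat)
    ha = np.arctan2(-np.sin(azim), np.tan(elev)*np.cos(lat) - np.cos(azim)*np.sin(lat))
    lst = np.radians(jd2lst(jd, lon))
    ra = (lst - ha) % (2*np.pi)
    dec = np.arcsin(np.sin(lat)*np.sin(elev) + np.cos(lat)*np.cos(elev)*np.cos(azim))
    return np.degrees(ra), np.degrees(dec)

def ra_dec_to_alt_az(ra, dec, jd, lat, lon):
    # Same math as raDec2AltAz above: HA = LST - RA, then azimuth/elevation
    ra, dec, lat = np.radians(ra), np.radians(dec), np.radians(lat)
    lst = np.radians(jd2lst(jd, lon))
    ha = (lst - ra + np.pi) % (2*np.pi) - np.pi
    azim = np.pi + np.arctan2(np.sin(ha), np.cos(ha)*np.sin(lat) - np.tan(dec)*np.cos(lat))
    elev = np.arcsin(np.sin(lat)*np.sin(dec) + np.cos(lat)*np.cos(dec)*np.cos(ha))
    return np.degrees(azim) % 360, np.degrees(elev)

jd = 2459000.5          # arbitrary Julian date
lat, lon = 51.5, -0.1   # hypothetical station position
ra, dec = alt_az_to_ra_dec(120.0, 45.0, jd, lat, lon)
az2, el2 = ra_dec_to_alt_az(ra, dec, jd, lat, lon)
print(az2, el2)         # recovers 120.0, 45.0 to well within 1e-6 degrees
```

The sidereal time term cancels in the round trip, so this check validates the hour-angle and declination algebra independently of the GST polynomial.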
""" Fitting a great circle to points in the Cartesian coordinates system. """ # The MIT License # Copyright (c) 2017, <NAME> # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. #from __future__ import print_function, division, absolute_import import scipy.linalg import scipy.optimize def greatCirclePhase(theta, phi, theta0, phi0): """ Find the phase angle of the point closest to the given point on the great circle. Arguments: theta: [float] Inclination of the point under consideration (radians). phi: [float] Nodal angle of the point (radians). theta0: [float] Inclination of the great circle (radians). phi0: [float] Nodal angle of the great circle (radians). Return: [float] Phase angle on the great circle of the point under consideration (radians). """ def _pointDist(x): """ Calculates the Cartesian distance from a point defined in polar coordinates, and a point on a great circle. 
""" # Convert the pick to Cartesian coordinates point = polarToCartesian(phi, theta) # Get the point on the great circle circle = greatCircle(x, theta0, phi0) # Return the distance from the pick to the great circle return np.sqrt((point[0] - circle[0])**2 + (point[1] - circle[1])**2 + (point[2] - circle[2])**2) # Find the phase angle on the great circle which corresponds to the pick res = scipy.optimize.minimize(_pointDist, 0) return res.x def greatCircle(t, theta0, phi0): """ Calculates the point on a great circle defined my theta0 and phi0 in Cartesian coordinates. Sources: - http://demonstrations.wolfram.com/ParametricEquationOfACircleIn3D/ Arguments: t: [float or 1D ndarray] phase angle of the point in the great circle theta0: [float] Inclination of the great circle (radians). phi0: [float] Nodal angle of the great circle (radians). Return: [tuple or 2D ndarray] a tuple of (X, Y, Z) coordinates in 3D space (becomes a 2D ndarray if the input parameter t is also a ndarray) """ # Calculate individual cartesian components of the great circle points x = -np.cos(t)*np.sin(phi0) + np.sin(t)*np.cos(theta0)*np.cos(phi0) y = np.cos(t)*np.cos(phi0) + np.sin(t)*np.cos(theta0)*np.sin(phi0) z = np.sin(t)*np.sin(theta0) return x, y, z def fitGreatCircle(x, y, z): """ Fits a great circle to points in 3D space. Arguments: x: [float] X coordiantes of points on the great circle. y: [float] Y coordiantes of points on the great circle. z: [float] Z coordiantes of points on the great circle. Return: X, theta0, phi0: [tuple of floats] Great circle parameters. 
""" # Add (0, 0, 0) to the data, as the great circle should go through the origin x = np.append(x, 0) y = np.append(y, 0) z = np.append(z, 0) # Fit a linear plane through the data points A = np.c_[x, y, np.ones(x.shape[0])] C,_,_,_ = scipy.linalg.lstsq(A, z) # Calculate the great circle parameters z2 = C[0]**2 + C[1]**2 theta0 = np.arcsin(z2/np.sqrt(z2 + z2**2)) phi0 = np.arctan2(C[1], C[0]) return C, theta0, phi0 def cartesianToPolar(x, y, z): """ Converts 3D cartesian coordinates to polar coordinates. Arguments: x: [float] Px coordinate. y: [float] Py coordinate. z: [float] Pz coordinate. Return: (theta, phi): [float] Polar angles in radians (inclination, azimuth). """ theta = np.arccos(z) phi = np.arctan2(y, x) return theta, phi def polarToCartesian(theta, phi): """ Converts 3D spherical coordinates to 3D cartesian coordinates. Arguments: theta: [float] Inclination in radians. phi: [float] Azimuth angle in radians. Return: (x, y, z): [tuple of floats] Coordinates of the point in 3D cartiesian coordinates. 
""" x = np.sin(phi)*np.cos(theta) y = np.sin(phi)*np.sin(theta) z = np.cos(phi) return x, y, z # - # # Utility functions # + # general-purpose file-handling or numerical functions # GCD (or Highest Common Factor) of two numbers def find_gcd(x, y): while(y): x, y = y, x % y return x # GCD (or Highest Common Factor) of integers in an array def array_gcd(l): len_array = len(l) if len_array == 0: return(0); elif len_array == 1: return(l[0]); elif len_array == 2: return(find_gcd(l[0],l[1])); else: gcd = find_gcd(l[0],l[1]) for i in range(2, len_array): gcd = find_gcd(gcd, l[i]) return(gcd); #get the number of seconds since start_time def get_secs(ttt_date,start_time): head_days = Time(ttt_date) head_days -= start_time return(float(str(head_days))*24*60*60); # define city fuction #def getcity(latlong): # locator = Nominatim(user_agent="<EMAIL>",timeout = 10) # rgeocode = RateLimiter(locator.reverse,min_delay_seconds = 0.001) # try: # location = rgeocode(latlong) # di = dict(location.raw) # if 'city' in di.keys(): # city = di['city'] # elif 'village' in di.keys(): # city = di['village'] # elif 'town' in di.keys(): # city = di['town'] # else: # city = 'no city or town' # except: # city = 'city not found' # return city # or return location.raw to see all the data def zipfilename(ttt_list, out_type): # returns a name for the zipped output file, e.g. 2020-12-31_UFO_EastBarnet.zip # ttt_list is a list of astropy tables, out_type is a string decribing the type of data written. ttt = ttt_list[0] location = ttt.meta['location'] no_meteors = len(ttt_list) st = str(ttt['datetime'][0]) for k in range(no_meteors): if ttt_list[k].meta['location'] != location: location = 'MultiLocation' initial_file = st[0:10] + '_' + out_type + '_' + location[0:15] + '.zip' # print('initial_file = ',initial_file) return initial_file def outfilename(ttt_list, out_type, source, is_main, num_days, i): # returns a name for the output file, e.g. 
2020-12-31_UFO_EastBarnet.zip # ttt is an astropy table, out_type is a string decribing the type of data written. # source is a string describing where the data came from. # is_main is True for the main data file, False for the ancillary (e.g. FRIPON location # or UFO csv summary) file. # "num_days" is used for the UFO CSV file name or is the the file number in DFN files. # 'i' is the index of the meteor. ttt = ttt_list[i] location = ttt.meta['location'] telescope = ttt.meta['telescope'] if out_type == 'UFO' and not is_main: # check the name of the csv file no_meteors = len(ttt_list) for k in range(no_meteors): if ttt_list[k].meta['location'] != location: location = 'MultiLocation' if ttt_list[k].meta['telescope'] != telescope: telescope = 'MultiLocation' location = location.replace(" ", "_")[0:15] telescope = telescope.replace(" ", "_")[0:15] if location == telescope: telescope = '' else: telescope = '_' + telescope if out_type == 'STD': # Standard output, in form 2020-05-11T22_41_00_RMS_UK0002.ecsv. st = str(ttt['datetime'][0]) output_file = st[0:19].replace(":", "_") output_file += '_' + source + '_' output_file += ttt.meta['telescope'].replace(" ", "_")[0:15] if ttt.meta['camera_id'] != ttt.meta['telescope']: output_file += '_' + ttt.meta['camera_id'].replace(" ", "_")[0:15] output_file += '.ecsv' elif out_type == 'DFN' : # Desert Fireball Network output st = str(ttt['datetime'][0]) output_file = str(num_days).zfill(3) + "_" + st[0:10] + '_' output_file += st[11:13] + st[14:16] + st[17:19] + '_' output_file += location + telescope + '.ecsv' elif out_type == 'UFO': # UFOAnalyzer output files if is_main : # write the A.XML file in format : "M20200601_220346_EastBarnet_NEA.XML" output_file = 'M' + isoStr(ttt['datetime'][0]).strftime('%Y%m%d_%H%M%S_') + "00_" output_file += location + '_' + ttt.meta['camera_id'][0:2] + '_A.XML' else: # name of the CSV file, e.g. 
20201231_23_188_EastBarnet_NW.csv, output_file = isoStr(ttt['datetime'][0]).strftime('%Y%m%d_%H_') + str(num_days).zfill(3) + "_" output_file += location + '.csv' elif out_type == 'FRIPON': # example 20200103T170201_UT_FRNO01_SJ.met st = str(ttt['datetime'][0]) #e.g. 2020-01-03T17:02:01.885 output_file = st[0:4] + st[5:7]+ st[8:13] output_file += st[14:16] + st[17:19] + '_UT' + telescope if is_main : output_file += '.met' else: output_file += '_location.txt' else: # A CSV file readable by Excel, or an ASC output file, # plus a catch-all if file type unknown st = str(ttt['datetime'][0]) #e.g. 2020-01-03T17:02:01.885 output_file = st[0:10] + '_' output_file += st[11:13] + st[14:16] + st[17:19] + '_' if out_type == 'ASC': output_file += location + telescope + '.json' else: output_file += location + telescope + '.csv' return output_file def std_timeshift(ttt,sec): # changes all of the dates in a standard table to make them earlier by a number of seconds equal to 'sec' ttt.meta['isodate_start_obs'] = change_str_time(ttt.meta['isodate_start_obs'],sec) ttt.meta['isodate_calib'] = change_str_time(ttt.meta['isodate_calib'],sec) nlines = len(ttt['datetime']) for i in range(nlines): ttt['datetime'][i] = change_str_time(ttt['datetime'][i],sec) return ttt def change_str_time(in_str, sec): # Takes an ISO datetime string, calculates a time earlier by 'sec', returning an ISO datetime string. 
in_time = Time(in_str) new_time = in_time + timedelta(seconds=-sec) out_str = str(new_time) return out_str # - # ## Main program # + print("\nstarting program\n") file_read_types = (("all files","*.*"),("Standard","*.ECSV"),("UFOAnalyzer","*A.XML"),\ ("UKFN/DFN","*.ECSV"),("SCAMP/FRIPON","*.MET"),("SCAMP/FRIPON","*.ZIP"),("RMS/CAMS","FTP*.txt"),\ ("RMS/AllSkyCams","*.json"),("MetRec","*.inf")) _fname = filedialog.askopenfilename(multiple=False,title = "Select file to read",filetypes = file_read_types) if _fname == None or len(_fname) < 3: sys.exit('User did not choose a file to open') lname = _fname.lower() print("Input data file chosen is: ",lname) initial_dir, file_name = os.path.split(_fname) if lname.endswith(".ecsv"): # input is STANDARD or DFN _obs_table = ascii.read(_fname, delimiter=',') print(_obs_table) DFN_true = False for key_name in _obs_table.meta.keys(): if 'event_codename' in key_name: DFN_true = True if DFN_true : print("DFN/UKFN format being read") ttt_list, meteor_count = dfn_to_std(_obs_table) source = 'DFN' else: print("standard format being read") ttt_list = [_obs_table] meteor_count = 1 source = 'STD' elif lname.endswith("a.xml"): # input is UFOAnalyzer. 
print("UFO format being read") with open(_fname) as fd: _obs_dic=xmltodict.parse(fd.read()) ttt_list, meteor_count = ufo_to_std(_obs_dic) source = 'UFO' elif lname.endswith(".met"): # input is FRIPON/SCAMP print("FRIPON/SCAMP format being read") source = 'FRIPON' loc_table, no_stations = get_fripon_stations() _obs_table = Table.read(_fname, format='ascii.sextractor') ttt_list, meteor_count = fripon_to_std(_fname,_obs_table, loc_table, no_stations) elif lname.endswith(".zip"): # input is a FRIPON zipped results file usually containing multiple .met files print("FRIPON zipped format being read") ttt_list = [] meteor_count = 0 source = 'FRIPON' loc_table, no_stations = get_fripon_stations() # get the list of .met files with ZipFile(lname, 'r') as zip: for info in zip.infolist(): if info.filename.endswith(".met"): # extract, read and delete each ".met" file z_fname = zip.extract(info.filename) _obs_table = Table.read(z_fname, format='ascii.sextractor') os.remove(z_fname) ttt_list2, meteor_count2 = fripon_to_std(z_fname,_obs_table, loc_table, no_stations) if meteor_count2 > 0: meteor_count += meteor_count2 ttt_list += ttt_list2 elif lname.endswith(".txt"): # input is RMS or CAMS meteor_text = open(_fname).read() ttt_list = [] # now check whether there is a platpars file in the same folder, i.e. 
input is RMS _camera_file_path = os.path.join(initial_dir, 'platepars_all_recalibrated.json') if not (os.path.exists( _camera_file_path)): # no platpars file, so look for a CAL*.txt file cal_files = [] all_files = os.listdir(initial_dir) for file_name in all_files : fname_low = file_name.lower() if (fname_low.startswith('cal') and fname_low.endswith('.txt')): cal_files.append(file_name) if(len(cal_files) == 1 ): # CAMS, one CAL file found _camera_file_path = os.path.join(initial_dir, cal_files[0]) elif(len(cal_files) > 1 ): # CAMS, multiple CAL files found file_read_types = (("CAMS, CAL*.txt","*.txt")) _camera_file_path = filedialog.askopenfilename(multiple=False,initialdir = initial_dir, initialfile=cal_files[0],title = "Select one CAMS camera data file", filetypes = file_read_types) else: # no camera files found. Ask the user for the RMS or CAMS camera metadata file name file_read_types = (("all files","*.*"),("CAMS, CAL*.txt","*.txt"),("RMS, *.JSON","*.json")) _camera_file_path = filedialog.askopenfilename(multiple=False,initialdir = initial_dir, title = "Select an RMS or CAMS camera data file", filetypes = file_read_types) if not _camera_file_path: sys.exit("Camera Config not specified, exiting") if not (os.path.exists( _camera_file_path)): sys.exit("Camera Config not found, exiting") _camera_lfile = _camera_file_path.lower() print("Cam Data Path : ",_camera_lfile) if _camera_lfile.endswith(".json"): # Input is RMS print("RMS format being read") rms_camera_data = rms_camera_json(_camera_file_path) ttt_list, meteor_count = rms_to_std(meteor_text, rms_camera_data) source = 'RMS' elif _camera_lfile.endswith(".txt"): # Input is CAMS print("CAMS format being read") cams_camera_data = cams_camera_txt( _camera_file_path ) ttt_list, meteor_count = cams_to_std(meteor_text, cams_camera_data) source = 'CAMS' else: sys.exit("Camera file not supported. 
Please supply a platepars JSON file (.json) for RMS, or a CAL TXT file (.txt) for CAMS") elif lname.endswith(".json"): _json_str = open(_fname).read() json_data = json.loads(_json_str) if 'centroids' in json_data : # This is an RMS format print("RMS .json format being read") source = 'RMS' ttt_list, meteor_count = rms_json_to_std(json_data,lname) else: # input is AllSkyCams print("AllSkyCams format being read") ttt_list, meteor_count = allskycams_to_std(json_data,lname) source = 'ASC' elif lname.endswith(".inf"): # Input is MetRec # now look for the .log file print("MetRec format being read") log_files = [] all_files = os.listdir(initial_dir) for file_name in all_files : fname_low = file_name.lower() if (fname_low.endswith('.log') and not(fname_low.startswith('mrg') or fname_low.startswith('states'))): log_files.append(file_name) if(len(log_files) == 1 ): # one .log file found _log_file_path = os.path.join(initial_dir, log_files[0]) else: # Ask the user to choose the log file file_read_types = (("MetRec log file", "*.log")) _log_file_path = filedialog.askopenfilename(multiple=False,initialdir = initial_dir, title = "Select camera data file", filetypes = file_read_types) print('MetRec log file used = ',_log_file_path) inf = MetRecInfFile(_fname) log = MetRecLogFile(_log_file_path) ttt_list, meteor_count = metrec_to_standard(inf, log) source = 'MetRec' if meteor_count == 0: sys.exit("No meteors detected - check file is correct") else: ttt = ttt_list[0] # + print("Number of meteors read: ", meteor_count) output_type = int(input("\nChoose output format: 1=Global Fireball Exchange (GFE), 2=UFO, 3=DFN/UKFN, 4=FRIPON, 5=AllSkyCams, 9=ExcelCSV :")) print("You entered " + str(output_type)) # must make a 'file-like object' to allow astropy to write to zip ( ie: must have file.write(datastring) ) class AstropyWriteZipFile: def __init__(self, out_zip, out_file): self.zip = out_zip self.at = out_file self.done = False def write(self, data): if not self.done: 
self.zip.writestr(self.at, data) self.done = True def isoStr(iso_datetime_string): return datetime.strptime(iso_datetime_string, ISO_FORMAT) # Write file(s) depending on input if (output_type<1 or output_type> 5) and not output_type==9: sys.exit('Not valid input - it needed to be 1, 2, 3, 4, 5 or 9') elif output_type == 1 or output_type == 3: #write Standard or DFN format if output_type == 3: out_type = 'DFN' else: out_type = 'STD' if meteor_count > 1: zip_file_init = zipfilename(ttt_list, out_type) out_name = filedialog.asksaveasfilename(initialdir=initial_dir,initialfile=zip_file_init,title = "Save file") # zipset = {} if out_name: output_zip = ZipFile(out_name, mode='w') for i in range (meteor_count): if output_type == 3: ttt = std_to_dfn(ttt_list[i]) else: ttt = ttt_list[i] out_name2 = outfilename(ttt_list, out_type, source, True, 0, i) ascii.write(ttt, AstropyWriteZipFile(output_zip, out_name2), format='ecsv', delimiter=',') output_zip.close() print("Zip file written: ", out_name) else: # write output to a single file initial_file = outfilename(ttt_list, out_type, source, True, 0, 0) out_name = filedialog.asksaveasfilename(initialdir=initial_dir,initialfile=initial_file,title = "Save file") if out_name : if output_type == 3: ttt = std_to_dfn(ttt_list[0]) else: ttt = ttt_list[0] ttt.write(out_name,overwrite=True, format='ascii.ecsv', delimiter=',') print("Data file written: ", out_name) elif output_type == 2: # UFOAnalyzer output - always written to a zip file zip_file_init = zipfilename(ttt_list, 'UFO') zip_file_name = filedialog.asksaveasfilename(initialdir = initial_dir,initialfile=zip_file_init,title = "Select file",defaultextension = '.csv') if zip_file_name : output_zip = ZipFile(zip_file_name, mode='w') output_csv_str = "" for i in range(len(ttt_list)): ttt = ttt_list[i] #converts to 2 strings - the XML file and one line from the CSV file ufo_xml_data, ufo_csv_line = std_to_ufo(ttt) out_name2 = outfilename(ttt_list, 'UFO', source, True, 0, i) 
output_zip.writestr(out_name2, ufo_xml_data) if i == 0: output_csv_string = ufo_csv_line else: output_csv_string += '\n' + ufo_csv_line.split('\n')[1] #difference in days: num_days = ( isoStr(ttt_list[-1]['datetime'][0]) - isoStr(ttt_list[0]['datetime'][0]) ).days out_csv_file = outfilename(ttt_list, 'UFO', source, False, num_days, 0) output_zip.writestr(out_csv_file, output_csv_string) output_zip.close() print("Zip file written: ", zip_file_name) elif output_type == 4: # write a file in FRIPON/SCAMP format zip_file_init = zipfilename(ttt_list, 'FRIPON') zip_file_name = filedialog.asksaveasfilename(initialdir = initial_dir,initialfile=zip_file_init,title = "Select file",defaultextension = '.csv') if zip_file_name : output_zip = ZipFile(zip_file_name, mode='w') for i in range(len(ttt_list)): ttt = ttt_list[i] ttt2 = std_to_fripon(ttt) fri_str, loc_str = fripon_write(ttt2) out_name2 = outfilename(ttt_list, 'FRIPON', source, True, 0, i) output_zip.writestr(out_name2, fri_str) out_name2 = outfilename(ttt_list, 'FRIPON', source, False, 0, i) output_zip.writestr(out_name2, loc_str) output_zip.close() print("Zip file written: ", zip_file_name) elif output_type == 5: #write AllSkyCams format if meteor_count > 1: zip_file_init = zipfilename(ttt_list, 'ASC') zip_file_name = filedialog.asksaveasfilename(initialdir = initial_dir,initialfile=zip_file_init,title = "Select file",defaultextension = '.csv') if zip_file_name : output_zip = ZipFile(zip_file_name, mode='w') for i in range(len(ttt_list)): ttt = ttt_list[i] json_str = std_to_allskycams(ttt) out_name2 = outfilename(ttt_list, 'ASC', source, True, 0, i) output_zip.writestr(out_name2, json_str) output_zip.close() print("Zip file written: ", zip_file_name) else: # write AllSkyCams data to a single file ttt = ttt_list[0] initial_file = outfilename(ttt_list, 'ASC', source, True, 0, 0) out_name = filedialog.asksaveasfilename(initialdir=initial_dir,initialfile=initial_file,title = "Save file") if out_name : # write json_str to a 
file called out_name json_str = std_to_allskycams(ttt) out_file = open(out_name, "w") out_file.write(json_str) out_file.flush() out_file.close() print("Data file written: ", out_name) elif output_type == 9: #write Excel csv format if meteor_count > 1: zip_file_init = zipfilename(ttt_list, 'CSV') zip_file_name = filedialog.asksaveasfilename(initialdir = initial_dir,initialfile=zip_file_init,title = "Select file",defaultextension = '.csv') if zip_file_name : output_zip = ZipFile(zip_file_name, mode='w') for i in range(len(ttt_list)): ttt = ttt_list[i] csv_str = std_to_csv(ttt) out_name2 = outfilename(ttt_list, 'CSV', source, True, 0, i) output_zip.writestr(out_name2, csv_str) output_zip.close() print("Zip file written: ", zip_file_name) else: # write Excel csv data to a single file ttt = ttt_list[0] initial_file = outfilename(ttt_list, 'CSV', source, True, 0, 0) out_name = filedialog.asksaveasfilename(initialdir=initial_dir,initialfile=initial_file,title = "Save file") if out_name : # write csv_str to a file called out_name csv_str = std_to_csv(ttt) out_file = open(out_name, "w") out_file.write(csv_str) out_file.flush() out_file.close() print("Data file written: ", out_name) else: print('Invalid output type chosen (',output_type,')') print('finished')
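The `AstropyWriteZipFile` helper above forwards only the first chunk passed to `write()`, which relies on the writer emitting the whole serialized table in a single call. A stdlib-only sketch of a buffered alternative (the class name and file contents here are illustrative, not part of the converter) that collects every chunk and commits the archive entry on `close()`:

```python
import io
import os
import tempfile
from zipfile import ZipFile

class BufferedZipText(io.StringIO):
    """Text buffer written into the archive as one entry when closed."""
    def __init__(self, zf, arcname):
        super().__init__()
        self._zf, self._arcname = zf, arcname

    def close(self):
        # Commit everything accumulated so far as a single archive member
        self._zf.writestr(self._arcname, self.getvalue())
        super().close()

path = os.path.join(tempfile.mkdtemp(), 'demo.zip')
with ZipFile(path, 'w') as zf:
    f = BufferedZipText(zf, 'meteors.csv')
    f.write('datetime,azimuth\n')          # multiple write() calls are fine
    f.write('2020-01-01T00:00:00,123.4\n')
    f.close()

with ZipFile(path) as zf:
    data = zf.read('meteors.csv').decode()
```

Any file-like writer (including astropy's `ascii.write`) can target such a buffer, since every chunk survives regardless of how many calls the writer makes.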
y GFE Fireball data converter ver 4.36.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Self-supervised learning in 3D images # # Use the proposed heatmap method as described in [1] # # [1] <NAME> et al. "How to Learn from Unlabeled Volume Data: # Self-supervised 3D Context Feature Learning." MICCAI. 2019. # ## Setup notebook # + from typing import Callable, List, Optional, Tuple, Union from glob import glob import math import os import random import sys gpu_id = 0 os.environ["CUDA_VISIBLE_DEVICES"] = f'{gpu_id}' import matplotlib.pyplot as plt import nibabel as nib import numpy as np import pandas as pd import seaborn as sns import torch from torch import nn import torch.nn.functional as F from torch.utils.data.dataset import Dataset from torch.utils.data import DataLoader from torch.utils.data.sampler import SubsetRandomSampler import torchvision from selfsupervised3d import * # - # Support in-notebook plotting # %matplotlib inline # Report versions print('numpy version: {}'.format(np.__version__)) from matplotlib import __version__ as mplver print('matplotlib version: {}'.format(mplver)) print(f'pytorch version: {torch.__version__}') print(f'torchvision version: {torchvision.__version__}') pv = sys.version_info print('python version: {}.{}.{}'.format(pv.major, pv.minor, pv.micro)) # Reload packages where content for package development # %load_ext autoreload # %autoreload 2 # Check GPU(s) # !nvidia-smi assert torch.cuda.is_available() device = torch.device('cuda') torch.backends.cudnn.benchmark = True # Set seeds for better reproducibility. See [this note](https://pytorch.org/docs/stable/notes/faq.html#my-data-loader-workers-return-identical-random-numbers) before using multiprocessing. 
seed = 1336 random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) # ## Setup training and validation data # # Get the location of the training (and validation) data train_dir = '/iacl/pg20/jacobr/ixi/subsets/hh/' t1_dir = os.path.join(train_dir, 't1') t2_dir = os.path.join(train_dir, 't2') t1_fns = glob(os.path.join(t1_dir, '*.nii*')) t2_fns = glob(os.path.join(t2_dir, '*.nii*')) assert len(t1_fns) == len(t2_fns) and len(t1_fns) != 0 # ## Look at example training dataset # # Look at an axial view of the source T1-weighted (T1-w) and target T2-weighted (T2-w) images. def imshow(x, ax, title, n_rot=3, **kwargs): ax.imshow(np.rot90(x,n_rot), aspect='equal', cmap='gray', **kwargs) ax.set_title(title,fontsize=22) ax.axis('off') j = 100 t1_ex, t2_ex = nib.load(t1_fns[0]).get_data(), nib.load(t2_fns[0]).get_data() fig,(ax1,ax2) = plt.subplots(1,2,figsize=(16,9)) imshow(t1_ex[...,j], ax1, 'T1', 1) imshow(t2_ex[...,j], ax2, 'T2', 1) x = torch.from_numpy(t1_ex).unsqueeze(0) (ctr, qry), (dp_goal, hm_goal) = blendowski_patches(x, min_off_inplane=0., max_off_inplane=0.7, throughplane_axis=1) ctr = ctr.squeeze().cpu().detach().numpy() qry = qry.squeeze().cpu().detach().numpy() hm_goal = hm_goal.squeeze() dx, dy = dp_goal print(f'dx: {dx:0.3f}, dy: {dy:0.3f}') print(ctr.shape, qry.shape) j = 12 fig,(ax1,ax2,ax3) = plt.subplots(1,3,figsize=(16,9)) imshow(ctr[1,...], ax1, 'CTR', 0) imshow(qry[1,...], ax2, 'QRY', 0) imshow(hm_goal, ax3, 'HM', 0) # ## Setup training # # Hyperparameters, optimizers, logging, etc. 
data_dirs = [t1_dir] # + # system setup load_model = False # logging setup log_rate = 10 # print losses every log_rate epochs version = 'blendowski_v1' # naming scheme of model to load save_rate = 100 # save models every save_rate epochs # model, optimizer, loss, and training parameters valid_split = 0.1 batch_size = 8 n_jobs = 8 n_epochs = 500 stack_dim = 3 input_channels = stack_dim * len(data_dirs) descriptor_size = 128 use_adam = True opt_kwargs = dict(lr=1e-3, betas=(0.9,0.99), weight_decay=1e-6) if use_adam else \ dict(lr=5e-3, momentum=0.9) use_scheduler = True scheduler_kwargs = dict(step_size=100, gamma=0.5) # - def init_fn(worker_id): random.seed((torch.initial_seed() + worker_id) % (2**32)) np.random.seed((torch.initial_seed() + worker_id) % (2**32)) # setup training and validation dataloaders dataset = BlendowskiDataset(data_dirs, stack_dim=stack_dim, throughplane_axis=1) num_train = len(dataset) indices = list(range(num_train)) split = int(valid_split * num_train) valid_idx = np.random.choice(indices, size=split, replace=False) train_idx = list(set(indices) - set(valid_idx)) train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) train_loader = DataLoader(dataset, sampler=train_sampler, batch_size=batch_size, worker_init_fn=init_fn, num_workers=n_jobs, pin_memory=True, collate_fn=blendowski_collate) valid_loader = DataLoader(dataset, sampler=valid_sampler, batch_size=batch_size, worker_init_fn=init_fn, num_workers=n_jobs, pin_memory=True, collate_fn=blendowski_collate) print(f'Number of training images: {num_train-split}') print(f'Number of validation images: {split}') embedding_model = D2DConvNet(input_channels=input_channels, descriptor_size=descriptor_size) decoder_model = HeatNet(descriptor_size=descriptor_size) def num_params(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'Number of trainable parameters in embedding model: {num_params(embedding_model)}') print(f'Number of 
trainable parameters in decoder model: {num_params(decoder_model)}') if load_model: embedding_model.load_state_dict(torch.load(f'embedding_model_{version}.pth')) decoder_model.load_state_dict(torch.load(f'decoder_model_{version}.pth')) embedding_model.to(device) decoder_model.to(device) optim_cls = torch.optim.AdamW if use_adam else torch.optim.SGD embedding_opt = optim_cls(embedding_model.parameters(), **opt_kwargs) decoder_opt = optim_cls(decoder_model.parameters(), **opt_kwargs) criterion = nn.MSELoss() if use_scheduler: embedding_scheduler = torch.optim.lr_scheduler.StepLR(embedding_opt, **scheduler_kwargs) decoder_scheduler = torch.optim.lr_scheduler.StepLR(decoder_opt, **scheduler_kwargs) # ## Train model train_losses, valid_losses = [], [] n_batches = len(train_loader) min_off_inplane = np.linspace(0.25, 0.0, n_epochs) max_off_inplane = np.linspace(0.30, 0.7, n_epochs) for t in range(1, n_epochs + 1): # training t_losses = [] embedding_model.train() decoder_model.train() for i, ((ctr, qry), (_, goal)) in enumerate(train_loader): ctr, qry, goal = ctr.to(device), qry.to(device), goal.to(device) embedding_opt.zero_grad() decoder_opt.zero_grad() ctr_f = embedding_model(ctr) qry_f = embedding_model(qry) out = decoder_model(ctr_f, qry_f) loss = criterion(out, goal) t_losses.append(loss.item()) loss.backward() embedding_opt.step() decoder_opt.step() train_losses.append(t_losses) # validation v_losses = [] embedding_model.eval() decoder_model.eval() with torch.no_grad(): for i, ((ctr, qry), (_, goal)) in enumerate(valid_loader): ctr, qry, goal = ctr.to(device), qry.to(device), goal.to(device) ctr_f = embedding_model(ctr) qry_f = embedding_model(qry) out = decoder_model(ctr_f, qry_f) loss = criterion(out, goal) v_losses.append(loss.item()) valid_losses.append(v_losses) # expand inplane offset range as per paper dataset.min_off_inplane = min_off_inplane[t-1] dataset.max_off_inplane = max_off_inplane[t-1] # log, step scheduler, and save results from epoch if not 
np.all(np.isfinite(t_losses)): raise RuntimeError('NaN or Inf in training loss, cannot recover. Exiting.') if t % log_rate == 0: log = (f'Epoch: {t} - TL: {np.mean(t_losses):.2e}, VL: {np.mean(v_losses):.2e}') print(log) if use_scheduler: embedding_scheduler.step() decoder_scheduler.step() if t % save_rate == 0: torch.save(embedding_model.state_dict(), f'embedding_model_{version}_{t}.pth') torch.save(decoder_model.state_dict(), f'decoder_model_{version}_{t}.pth') save_model = True if save_model: torch.save(embedding_model.state_dict(), f'embedding_model_{version}.pth') torch.save(decoder_model.state_dict(), f'decoder_model_{version}.pth') # ## Analyze training fig,((ax1,ax2,ax3),(ax4,ax5,ax6)) = plt.subplots(2,3,figsize=(16,9)) try: ctr = ctr.squeeze().cpu().detach().numpy() qry = qry.squeeze().cpu().detach().numpy() out = out.squeeze().cpu().detach().numpy() goal = goal.squeeze().cpu().detach().numpy() except AttributeError: pass gm = goal.max() imshow(ctr[0,1,...], ax1, 'CTR', 0) imshow(qry[0,1,...], ax2, 'QRY', 0) ax3.axis('off') imshow(out[0], ax4, 'OUT', 0, vmin=0, vmax=gm) imshow(goal[0], ax5, 'HM', 0, vmin=0, vmax=gm) imshow(np.abs(out[0]-goal[0]), ax6, 'DIFF', 0, vmin=0, vmax=gm) def tidy_losses(train, valid): out = {'epoch': [], 'type': [], 'value': [], 'phase': []} for i, (tl,vl) in enumerate(zip(train,valid),1): for tli in tl: out['epoch'].append(i) out['type'].append('loss') out['value'].append(tli) out['phase'].append('train') for vli in vl: out['epoch'].append(i) out['type'].append('loss') out['value'].append(vli) out['phase'].append('valid') return pd.DataFrame(out) losses = tidy_losses(train_losses, valid_losses) f, ax1 = plt.subplots(1,1,figsize=(12, 8),sharey=True) sns.lineplot(x='epoch',y='value',hue='phase',data=losses,ci='sd',ax=ax1,lw=3); ax1.set_yscale('log'); ax1.set_title('Losses'); save_losses = False if save_losses: f.savefig(f'losses_{version}.pdf') losses.to_csv(f'losses_{version}.csv')
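As an aside, the train/validation split used in the setup above is a plain index partition; a minimal NumPy restatement (the `split_indices` helper is introduced here purely for illustration, it is not part of the training code) that can be sanity-checked in isolation:

```python
import numpy as np

def split_indices(n, valid_frac, seed=0):
    """Partition range(n) into disjoint train/validation index sets."""
    rng = np.random.default_rng(seed)
    n_valid = int(valid_frac * n)
    # Draw validation indices without replacement, train gets the rest
    valid_idx = rng.choice(n, size=n_valid, replace=False)
    train_idx = np.array(sorted(set(range(n)) - set(valid_idx.tolist())))
    return train_idx, valid_idx

train_idx, valid_idx = split_indices(100, 0.1)
assert len(valid_idx) == 10 and len(train_idx) == 90
```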
tutorials/blendowski.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Importing Libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import missingno as msno # %matplotlib inline from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, roc_curve, roc_auc_score from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import plot_roc_curve # # Data Preprocessing and Cleaning #Excel path Excel ='D://NCI//DMML//Dataset//cardio_train.csv' # load the data d1=pd.read_csv(Excel ,sep=";") d1.head() #Finding the missing values msno.bar (d1) d1.info() d1.isna().sum() # Rearranging age columns d1 ['age']= d1['age']/365 d1['age']= d1 ['age'].astype('int') d1= d1.drop(columns=['id']) # # Removing Outliers # Checking and removing outliers fig, ax= plt.subplots (figsize = (10,15)) sns.boxplot (data= d1, width=0.5, ax=ax, fliersize= 3) plt.title ("Visualization of outliers") #Checking the rate of cardiovascular column with the ranging values of ap_hi and ap_lo o= ((d1['ap_hi']>200) | (d1['ap_lo']>200) | (d1['ap_lo']<50) | (d1['ap_hi']<=80) | (d1['weight']<=28)| (d1['height']<=100) ) d1[o]['cardio'].count() d1=d1 [~o] # + # checking the columns by doing visualization and counting the number of columns fig,axes = plt.subplots(4,2,figsize = (16,16)) sns.set_style('darkgrid') fig.suptitle("Count plot for various categorical features") sns.barplot(ax=axes[0,0],data=d1,x='age', y= 'cardio') sns.barplot(ax=axes[0,1],data=d1,x='gender', y= 'cardio') sns.barplot(ax=axes[1,0],data=d1,x='height', y= 'cardio') 
sns.barplot(ax=axes[1,1],data=d1,x='weight', y= 'cardio')
sns.barplot(ax=axes[2,0],data=d1,x='ap_hi', y= 'cardio')
sns.barplot(ax=axes[2,1],data=d1,x='ap_lo', y= 'cardio')
sns.barplot(ax=axes[3,0],data=d1,x='gluc', y= 'cardio')
sns.barplot(ax=axes[3,1],data=d1,x='smoke', y= 'cardio')
plt.show()
# -

# # Creating the Correlation and Heatmap

X= d1.drop (columns= ['cardio'])
y= d1 ['cardio']

corr= X.corr()
f, ax= plt.subplots (figsize= (10,10))
sns.heatmap (corr, annot= True, fmt= '.3f', linewidths= 0.5, ax=ax)

scalar= MinMaxScaler ()
x_scaled= scalar.fit_transform (X)

# # Splitting test and train dataset

x_train, x_test, y_train, y_test= train_test_split (x_scaled, y, test_size= 0.30)

# # Defining the ML methods to be used

dt= DecisionTreeClassifier()
ra= RandomForestClassifier(n_estimators= 90)
knn= KNeighborsClassifier(n_neighbors=79)
svm= SVC (random_state=6)

models= {"Decision Tree" : dt, "Random forest" : ra, "KNN" : knn, "SVM": svm}
scores= {}
for key, value in models.items():
    model = value
    model.fit(x_train, y_train)
    scores[key] = model.score(x_test, y_test)

scores_frame = pd.DataFrame(scores, index=["Accuracy Score"]).T
scores_frame.sort_values(by=["Accuracy Score"], axis=0 ,ascending=False, inplace=True)
scores_frame

# # ROC Curve and evaluation method values:

dis= plot_roc_curve (dt, x_test, y_test)
plot_roc_curve (ra, x_test, y_test, ax= dis.ax_)
plot_roc_curve (knn, x_test, y_test, ax=dis.ax_)
plot_roc_curve (svm, x_test, y_test, ax=dis.ax_)

predict_svc= svm.predict (x_test)
predict_knn= knn.predict(x_test)

# # SVC Evaluation Method:

conf = confusion_matrix(y_test,predict_svc)
print("The Confusion Matrix for SVC in this dataset is : \n",conf)

# sklearn orders confusion_matrix by label: rows are true classes and columns are
# predictions, so for labels [0, 1] the layout is [[TN, FP], [FN, TP]]
true_negative = conf[0][0]
false_positive = conf[0][1]
false_negative = conf[1][0]
true_positive = conf[1][1]
Precision = true_positive/(true_positive+false_positive)
Recall= true_positive/(true_positive+false_negative)
F1_Score = 2*(Recall * Precision) / (Recall + Precision)
print("The F1_Score for this SVC dataset is : ",F1_Score)
print("The precision of this SVC model is : ",Precision)
print("The Recall score of SVC model is : ",Recall)

# # KNN Evaluation Method

conf_mat = confusion_matrix(y_test,predict_knn)
print("The Confusion Matrix for KNN in this dataset is : \n",conf_mat)

accuracy=accuracy_score(y_test,predict_knn)
print("The accuracy of knn model is : ",accuracy)

# Same [[TN, FP], [FN, TP]] layout, read from the KNN confusion matrix
true_negative = conf_mat[0][0]
false_positive = conf_mat[0][1]
false_negative = conf_mat[1][0]
true_positive = conf_mat[1][1]
Precision = true_positive/(true_positive+false_positive)
Recall= true_positive/(true_positive+false_negative)
F1_Score = 2*(Recall * Precision) / (Recall + Precision)
print("The F1_Score for this KNN dataset is : ",F1_Score)
print("The Recall score of KNN model is : ",Recall)
print("The precision of this KNN model is : ",Precision)
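Hand-indexed confusion-matrix arithmetic is easy to get backwards; a tiny pure-Python cross-check of the precision/recall/F1 formulas on a made-up label vector (illustrative values only, not taken from the dataset):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall and F1 for binary labels, counted directly."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
p, r, f1 = binary_metrics(y_true, y_pred)
# tp=3, fp=1, fn=1 -> precision = recall = f1 = 0.75
```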
Cardiovascular Disease.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # # Matrix multiplication -- algorithm and implementation # # In this session we will be looking at a matrix multiplication implementation that will be using dataClay and PyCOMPSs. # # ## A brief mathematical preface # # In general, if we have square matrices such as: # $$ # \mathbf{A} = # \begin{pmatrix} # a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ # a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ # \vdots & \vdots & \ddots & \vdots \\ # a_{n,1} & a_{n,2} & \cdots & a_{n,n} # \end{pmatrix} , \quad # \mathbf{B} = # \begin{pmatrix} # b_{1,1} & b_{1,2} & \cdots & b_{1,n} \\ # b_{2,1} & b_{2,2} & \cdots & b_{2,n} \\ # \vdots & \vdots & \ddots & \vdots \\ # b_{n,1} & b_{n,2} & \cdots & b_{n,n} # \end{pmatrix} # $$ # # Then the matrix multiplication $$\mathbf{C} = \mathbf{AB}$$ can be evaluated as: # # $$ # \mathbf{C} = # \begin{pmatrix} # c_{1,1} & c_{1,2} & \cdots & c_{1,n} \\ # c_{2,1} & c_{2,2} & \cdots & c_{2,n} \\ # \vdots & \vdots & \ddots & \vdots \\ # c_{n,1} & c_{n,2} & \cdots & c_{n,n} # \end{pmatrix} # \\ # c_{i,j} = \sum_{k=1}^n a_{i,k} \cdot b_{k,j} # $$ # # The same principle applies for block matrices, i.e. 
consider a simple scenario: # # $$ # \mathbf{A} = # \begin{pmatrix} # \mathbf{A}_{1,1} & \mathbf{A}_{1,2} \\ # \mathbf{A}_{2,1} & \mathbf{A}_{2,2} # \end{pmatrix} , \quad # \mathbf{B} = # \begin{pmatrix} # \mathbf{B}_{1,1} & \mathbf{B}_{1,2} \\ # \mathbf{B}_{2,1} & \mathbf{B}_{2,2} # \end{pmatrix} # $$ # # Then the following is true: # # $$ # \mathbf{C} = \mathbf{AB} = # \begin{pmatrix} # \mathbf{A}_{1,1}\mathbf{B}_{1,1} + \mathbf{A}_{1,2}\mathbf{B}_{2,1} & \mathbf{A}_{1,1}\mathbf{B}_{1,2} + \mathbf{A}_{1,2}\mathbf{B}_{2,2} \\ # \mathbf{A}_{2,1}\mathbf{B}_{1,1} + \mathbf{A}_{2,2} \mathbf{B}_{2,1} & \mathbf{A}_{2,1}\mathbf{B}_{1,2} + \mathbf{A}_{2,2}\mathbf{B}_{2,2} # \end{pmatrix} # $$ # # This property is typically used on **divide & conquer algorithms** for matrix multiplication. # # # ## The implementation # # The data model can be observed in the [model](model/) folder --specifically the `matrix.py` file. The matrix is structured in submatrices, which helps distributing the problem --both on computation and storage. # # Things to notice: # # - The `__matmul__` method on the _Matrix_ class performs the matrix multiplication. Python interpreter recognizes this method name as the operator `@` which is the matrix multiplication operator. # - This `__matmul__` method calls the `imuladd` **task** from the _Block_ class. The implementation proposed creates a task for each submatrix multiplication. That is, if we are multiplying two-by-two matrices, the execution will generate a total of 8 tasks. # - The implementation should feel extremely natural and simple. But thanks to dataClay the data is transparently distributed across multiple storage nodes and thanks to PyCOMPSs the execution is parallelized across multiple computation nodes.
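The block identity above is easy to verify numerically. A minimal plain-NumPy sketch (no dataClay or PyCOMPSs; block sizes and random data are chosen arbitrarily) of the $2\times 2$ block product:

```python
import numpy as np

n = 4  # each block is n x n
rng = np.random.default_rng(0)
A = [[rng.standard_normal((n, n)) for _ in range(2)] for _ in range(2)]
B = [[rng.standard_normal((n, n)) for _ in range(2)] for _ in range(2)]

# C[i][j] = sum_k A[i][k] @ B[k][j]  -- the formula from the text,
# with 8 submatrix multiplications in total for the 2x2 case
C = [[sum(A[i][k] @ B[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

# Cross-check against multiplying the assembled full matrices
assert np.allclose(np.block(C), np.block(A) @ np.block(B))
```

Each of the eight `A[i][k] @ B[k][j]` products corresponds to one `imuladd` task in the distributed implementation.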
matrices/01-matmul-algnimpl.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ls ..

# ls ../data

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import geopandas
import geoplot
import mapclassify

pd.set_option('display.max_columns', None)

# Note: Check out policy codes [here](https://github.com/ChangyuYan/covid-policy-tracker/blob/master/documentation/codebook.md)

# # US Data

df_us_states = pd.read_csv('../data/OxCGRT_US_states_temp.csv')
df_us_states.head()

len(df_us_states)

df_us_states.columns

set(df_us_states.RegionName)

df_us_states[df_us_states.RegionName == 'California'].tail()

df_us_states[df_us_states.RegionName == 'Rhode Island'].tail()

# # OxCGRT_latest

# ls data

df_latest = pd.read_csv('../data/OxCGRT_latest.csv')
df_latest.head()

len(df_latest)

df_latest[df_latest.CountryName == 'China']

set(df_us_states.columns) - set(df_latest.columns)

set(df_latest.columns) - set(df_us_states.columns)

# It looks like there is no province-level breakdown for China in this file...
def find_sub_regions(df, countryName): regions = set(df[df.CountryName == countryName].RegionName) return {x for x in regions if x==x} find_sub_regions(df_latest, 'China') find_sub_regions(df_latest, 'United States') # ls data/ # # OxCGRT_latest_withnotes df_latest_withnotes = pd.read_csv('../data/OxCGRT_latest_withnotes.csv') df_latest_withnotes.head() cdf = df_latest_withnotes[df_latest_withnotes.CountryName == 'China'] cdf cdf['E1_Income support'].dropna() df_latest_withnotes[df_latest_withnotes.CountryName == 'China'].head() df_latest_withnotes[df_latest_withnotes.CountryName == 'Hong Kong'].head() df_latest_withnotes[df_latest_withnotes.CountryName == 'United Kingdom'][df_latest_withnotes.Date == 20201005] df_latest_withnotes[df_latest_withnotes.CountryName == 'Canada'][df_latest_withnotes.Date == 20200601] df_latest_withnotes[df_latest_withnotes.CountryName == 'United States'][df_latest_withnotes.Date == 20201005].head() find_sub_regions(df_latest_withnotes, 'China') find_sub_regions(df_latest_withnotes, 'United Kingdom') find_sub_regions(df_latest_withnotes, 'United States') # ls data # # OxCGRT_latest_allchanges df_latest_allchanges = pd.read_csv('../data/OxCGRT_latest_allchanges.csv') df_latest_allchanges.head() df_latest_allchanges[df_latest_allchanges.CountryName == 'China'] df_latest_allchanges[df_latest_allchanges.CountryName == 'United States'] df_latest_allchanges[df_latest_allchanges.CountryName == 'United Kingdom'] # # timeseries df_time_series = pd.read_csv('../data/timeseries/c1_flag.csv') df_time_series.head() len(df_time_series) df_time_series[df_time_series['Unnamed: 0'] == 'China'] df_time_series[df_time_series['Unnamed: 0'] == 'United States'] df_timeseries_all = pd.read_excel('../data/timeseries/OxCGRT_timeseries_all.xlsx') df_timeseries_all.head()
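Side note: the `x == x` filter inside `find_sub_regions` works because NaN is the only value that is not equal to itself, so it drops the NaN that pandas inserts for rows with no `RegionName`. A tiny standalone illustration (region names are arbitrary):

```python
# A set mixing real region names with a NaN placeholder
regions = {'Hubei', float('nan'), 'Beijing'}

# NaN != NaN, so the comprehension keeps only the real names
clean = {x for x in regions if x == x}
assert clean == {'Hubei', 'Beijing'}
```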
Changyu/SelfAnalysis.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# %matplotlib inline
# %time from hikyuu.interactive import *
from pylab import plot

# Create a trading system strategy
my_mm = MM_FixedCount(100)
my_sg = SG_Flex(EMA(n=5), slow_n=10)
my_sys = SYS_Simple(sg=my_sg, mm=my_mm)

# Create a selection algorithm that picks the trading systems to use each day.
# Here a fixed selector is used, i.e. the same specified systems are selected every day.
my_se = SE_Fixed([s for s in blocka if s.valid], my_sys)

# Create an asset allocator that decides how funds are distributed among the
# selected trading systems. Here an equal-weight allocator is used, i.e. funds
# are allocated in equal proportions across the selected systems.
my_af = AF_EqualWeight()

# Create the portfolio.
# Create an account starting on 2001-01-01 with an initial capital of 2,000,000 yuan.
# Because an equal-weight allocator is used, the account's remaining funds are split
# evenly among all selected systems; if the initial capital is too small, no system
# will have enough funds to complete its trades.
my_tm = crtTM(Datetime(200101010000), 2000000)
my_pf = PF_Simple(tm=my_tm, af=my_af, se=my_se)

# Run the portfolio
q = Query(-500)
# %time my_pf.run(Query(-500))

x = my_tm.get_funds_curve(sm.get_trading_calendar(q))
PRICELIST(x).plot()
hikyuu/examples/notebook/010-Portfolio.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Fourier Series: Straight Lines and Polygons
# This notebook shows how to compute the Fourier series of straight lines and polygons.
#
# ### Content
# 1. [Prerequisites](#prerequisites)
# 2. [Representing Straight Lines](#representation)
#     1. Polygons
# 3. [Fourier Series of Straight Lines](#fourier_computation)
#     1. [Proof](#proof)
#     2. [Implementation](#implementation)
# 4. [Example](#example)

# +
# %matplotlib inline

# Initial imports
import numpy as np
import matplotlib.pyplot as plt

# + language="javascript"
# MathJax.Hub.Config({
#   TeX: { equationNumbers: { autoNumber: "AMS" } }
# });
#
# MathJax.Hub.Queue(
#   ["resetEquationNumbers", MathJax.InputJax.TeX],
#   ["PreProcess", MathJax.Hub],
#   ["Reprocess", MathJax.Hub]
# );
# -

# ## Prerequisites <a id='prerequisites'></a>
# This notebook shows how to compute the Fourier series of straight lines and polygons. For this, you should be familiar with:
# - The general computation of [Fourier series of curves](Fourier-Series-of-Curves-Background.ipynb) and especially the case of piecewise defined curves.
# - Complex numbers, especially the representation of 2-dimensional points as complex numbers
# - Computation of integrals of curves

# ## Representing Straight Lines <a id='representation'></a>
# A straight line $g:[0,1]\to\mathbb{C}$ from point $p_0\in\mathbb{C}$ to point $p_1\in\mathbb{C}$ with $g(0)=p_0$ and $g(1) = p_1$ can be written as
#
# $$
# \begin{equation}
# g(t) = (1-t)\cdot p_0 + t\cdot p_1 \label{eqn:straigt_line_01}
# \end{equation}
# $$
#
# For the computation of the Fourier Series, we need a curve defined over the interval $[0,2\pi]$; for a polygon, which is a piecewise concatenation of straight lines, each piece $i$ would be defined over an interval $[t_i,t_{i+1}]$.
For this reason, we generalize ($\ref{eqn:straigt_line_01}$) to a function $g:[a,b]\to\mathbb{C}$ with $g(a)=p_0$ and $g(b) = p_1$: # # $$ # \begin{align} # g(t) &= \frac{b-t}{b-a}\cdot p_0 + \frac{t-a}{b-a}\cdot p_1 \label{eqn:straigt_line_ab}\\ # &= \frac{p_0b - p_1a}{b-a} + \frac{p_1-p_0}{b-a}t \label{eqn:straigt_line_by_t} # \end{align} # $$ # # ### Polygons # Polygons are a sequence of straight lines, connecting corner points $p_0,\ldots,p_n$ with $p_n = p_0$ for closing the curve. # ## Fourier Series of Straight Lines <a id='fourier_computation'></a> # As described in the [introduction of Fourier series of curves](Fourier-Series-of-Curves-Background.ipynb) we need to solve the integral $\int_a^b g(t)e^{-i\lambda t}dt$ for an arbitrary straight line $g$ and any choice of $a$, $b$ and $\lambda$ in order to compute the Fourier series of straight lines and polygons. # # _Case 1:_ $\lambda=0$: # $$ # \int_a^b g(t)e^{-i\lambda t}dt = \frac{1}{2}\left(p_1 + p_2)(b - a)\right) # $$ # # _Case 2:_ $\lambda\neq 0$: # $$ # \int_a^b g(t)e^{-i\lambda t}dt = \frac{ie^{-i\lambda b}}{\lambda}p_2 - \frac{ie^{-i\lambda a}}{\lambda} p_1 # +\frac{1}{\lambda^2}\frac{p_2-p_1}{b-a}\left(e^{-i\lambda b}-e^{-i\lambda a}\right) # $$ # # ### Proof: <a id='proof'></a> # Using (\ref{eqn:straigt_line_by_t}), we can compute the integral as: # _Case 1:_ $\lambda=0$: # $$ # \begin{align*} # \int_a^b g(t)e^{-i\lambda t}dt &= \int_a^b g(t)dt \\ # &= \frac{p_1b - p_2a}{b-a}\left(b-a\right) + \frac{p_2-p_1}{b-a}\frac{1}{2}\left(b^2-a^2\right)\\ # &= \left(p_1b - p_2a\right) + \frac{1}{2}\left(p_2-p_1\right)\left(a+b\right)\\ # &= \frac{1}{2}\left(p_1b - p_2a + p_2b - p_1a\right)\\ # &= \frac{1}{2}\left(p_1 + p_2\right)\left(b - a\right) # \end{align*} # $$ # # _Case 2:_ $\lambda\neq 0$: # $$ # \begin{align*} # \int_a^b g(t)e^{-i\lambda t}dt &= \int_a^b \left(\frac{p_1b - p_2a}{b-a} + \frac{p_2-p_1}{b-a}t\right)e^{-i\lambda t}dt\\ # &= \frac{p_1b - p_2a}{b-a} \int_a^b e^{-i\lambda t}dt + 
\frac{p_2-p_1}{b-a}\int_a^b te^{-i\lambda t}dt\\ # &= \frac{p_1b - p_2a}{b-a} \left[\frac{e^{-i\lambda t}}{-i\lambda}\right]_{t=a}^b + \frac{p_2-p_1}{b-a}\left(\left[\frac{te^{-i\lambda t}}{-i\lambda}\right]_{t=a}^b-\int_a^b \frac{e^{-i\lambda t}}{-i\lambda}dt\right)\\ # &= \frac{i}{\lambda}\frac{p_1b - p_2a}{b-a} \left(e^{-i\lambda b}-e^{-i\lambda a}\right) + \frac{i}{\lambda}\frac{p_2-p_1}{b-a}\left(be^{-i\lambda b}-ae^{-i\lambda a}\right) + \frac{1}{\lambda^2}\frac{p_2-p_1}{b-a}\left(e^{-i\lambda b}-e^{-i\lambda a}\right)\\ # &= \frac{e^{-i\lambda b}}{b-a} \frac{i}{\lambda}\left(p_1b - p_2a + p_2b - p_1b\right)-\frac{e^{-i\lambda a}}{b-a} \frac{i}{\lambda}\left(p_1b - p_2a + p_2a - p_1a\right) + \frac{1}{\lambda^2}\frac{p_2-p_1}{b-a}\left(e^{-i\lambda b}-e^{-i\lambda a}\right)\\ # &= \frac{ie^{-i\lambda b}}{\lambda}p_2 - \frac{ie^{-i\lambda a}}{\lambda} p_1 +\frac{1}{\lambda^2}\frac{p_2-p_1}{b-a}\left(e^{-i\lambda b}-e^{-i\lambda a}\right) # \end{align*} # $$ # ### Implementation <a id='implementation'></a> # Beginning with a single straight line segment def transform_straight_line(p1, p2, l, a, b): i = 1j # Just for shorter notations l = np.asarray(l) result = np.zeros(shape=l.shape, dtype=complex) # Handle case k != 0 l_ = l[l!=0] result[l!=0] = i * np.exp(-i*l_*b) * p2 / l_ \ - i * np.exp(-i*l_*a) * p1 / l_ \ + (p2 - p1) * (np.exp(-i*l_*b) - np.exp(-i*l_*a)) / (l_*l_*(b - a)) # Handle case k = 0 result[l==0] = (p1 + p2) * (b - a) / 2 # Return results return result # Extending this to closed sequences of lines. Here we scale the period so that the total parameter length is $T=2\pi$. # # _Note:_ There are different ways to determine the segment-borders $t_0,\ldots,t_m$. Here we use the length of the lines.
def transform_polygon(p, n): m = len(p) # Number of Segments (if we close the path) # Close the curve: add the beginning point at the end p = np.reshape(p, (-1)) p = p[list(range(m)) + [0]] # Length of the segments l = np.abs(p[1:] - p[:-1]) # Compute t_0 to t_m based on the lengths with t_0 = 0 and t_m = 2\pi t = l.cumsum() / l.sum() t = 2 * np.pi * np.concatenate([[0], t]) # get vector of k k = np.arange(-n,n+1) c = sum([transform_straight_line(p[i], p[i+1], k, t[i], t[i+1]) for i in range(m)]) return c, k # Get the fourier approximation as a function. # This will be used to plot the fourier approximation. def get_fourier_fct(c, k): # Reshape the fourier coefficients row vectors c = np.reshape(c, (1,-1)) k = np.reshape(k, (1,-1)) def fct(t): # Reshape the input values into a column vector t = np.reshape(t, (-1,1)) return np.sum(c * np.exp(1j * k * t), axis=1) / (2 * np.pi) return fct # ## Example: Triangle <a id='example'></a> # Works also with other polygons, just add or modify points. # Corner points of the triangle as complex numbers p = [-1 -1j, 0.5 + 3j, 12 ] # + N = 12 # Compute coefficients c_{-N} to c_N c,k = transform_polygon(p, N) # Get approximations (limited to different values n=1,...,N) fcts = [get_fourier_fct(c[np.abs(k) <= n], k[np.abs(k) <= n]) for n in range(1,N+1)] # - # Plotting the results # plot closed curve of complex def plotcc(p, *args, **kwargs): # close curve m = len(p) p = np.reshape(p, (-1)) p = p[list(range(m)) + [0]] # Closing the curve # Complex to real x and y vectors x, y = np.real(p), np.imag(p) plt.plot(x, y, *args, **kwargs) # + plt.figure(figsize=(30,20)) # Number of points nT = 1000 t = 2 * np.pi * np.arange(0,1, 1/nT) for n in range(1, N+1): pf = fcts[n-1](t) plt.subplot(3,4,n) plotcc(p) plotcc(pf) plt.title(f"n={n}") plt.show()
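# As a quick sanity check of the closed-form segment integrals derived above, we can compare them against a simple trapezoidal integration of $g(t)e^{-i\lambda t}$ on a fine grid. The helper `line_segment_integral` below merely restates the closed form from the proof in a self-contained way; the sample endpoints and $\lambda$ values are illustrative only.

```python
import numpy as np

def line_segment_integral(p1, p2, lam, a, b):
    """Closed-form value of int_a^b g(t) e^{-i*lam*t} dt for the straight
    line g from p1 (at t=a) to p2 (at t=b), as derived in the proof above."""
    if lam == 0:
        return (p1 + p2) * (b - a) / 2
    i = 1j
    return (i * np.exp(-i * lam * b) * p2 / lam
            - i * np.exp(-i * lam * a) * p1 / lam
            + (p2 - p1) * (np.exp(-i * lam * b) - np.exp(-i * lam * a))
            / (lam * lam * (b - a)))

# Compare against trapezoidal integration on a fine grid
p1, p2, a, b = -1 - 1j, 0.5 + 3j, 0.0, 1.3
t = np.linspace(a, b, 200001)
g = (b - t) / (b - a) * p1 + (t - a) / (b - a) * p2  # straight line g(t)
for lam in (0, 1, -3):
    f = g * np.exp(-1j * lam * t)
    numeric = np.sum((f[:-1] + f[1:]) / 2) * (t[1] - t[0])  # trapezoid rule
    assert abs(numeric - line_segment_integral(p1, p2, lam, a, b)) < 1e-6
```

The same check can be repeated for any segment; for $\lambda=0$ the formula is just the segment midpoint times its parameter length.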
Fourier-Series-of-Curves-Example-1-Polygones.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.6.0 # language: julia # name: julia-0.6 # --- using Distributions, PyPlot include("../src/JS_SAA_main.jl") srand(8675309) const K = 20 supp = collect(1:10) p0 = rand(10) p0 /= sum(p0) alpha = .12 pk = ones(10)/10 #unif distribution mhat = JS.sim_path(pk, 8) cs, xs = JS.genNewsvendors(supp, .85 * ones(K), K) counts = readcsv("../RossmanKaggleData/Results/counts20.csv") counts[ counts .== "NA"] = 0 counts = convert(Array{Int64, 2}, counts[2:end, 2:end]') ps = counts ./ sum(counts, 1) phat = mean(ps, 2) plot(phat) supp = readcsv("../RossmanKaggleData/Results/support20.csv")[2:end, 2:end]' plot(1:Nmax, log.(1 + out) , "--") plot(1:Nmax, log.(1 + out2), "-s") JS.mse_estimates(mhats, supps, p0, alpha_grid) # + include("../src/JS_SAA_main.jl") mhats = JS.sim_path(ps, 10 * ones(Int, 1115)); alpha_grid = collect(linspace(0, 10, 10)) supps = convert(Array{Float64, 2}, supp) JS.mse_estimates(mhats, supps, p0, alpha_grid) # - Nhats = sum(mhats, 1) sum(mhats./ Nhats, 1) K_grid = vcat(1, collect(10:10:90), collect(100:100:1000), 1115) size(supp)
notebooks/Untitled.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc="true" # # Table of Contents # <p> # + import os import pandas as pd import numpy as np import numba # Graphics import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns from matplotlib import rc import matplotlib.dates as mdates rc('text', usetex=True) rc('text.latex', preamble=r'\usepackage{cmbright}') rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']}) # Magic function to make matplotlib inline; # %matplotlib inline # This enables SVG graphics inline. # There is a bug, so uncomment if it works. # %config InlineBackend.figure_formats = {'png', 'retina'} # JB's favorite Seaborn settings for notebooks rc = {'lines.linewidth': 2, 'axes.labelsize': 18, 'axes.titlesize': 18, 'axes.facecolor': 'DFDFE5'} sns.set_context('notebook', rc=rc) sns.set_style("dark") mpl.rcParams['xtick.labelsize'] = 16 mpl.rcParams['ytick.labelsize'] = 16 mpl.rcParams['legend.fontsize'] = 14 # + df = pd.read_excel('../input/viability_assay_combo.xlsx') df['time'] = df.date.dt.strftime("%m-%d") df.sort_values('time', inplace=True) df = df[df.imcomplete != 'lost'] df['alive'] = df.d1_alive + df.d2_alive + df.d3_alive # - df.head() # + temp = df[df.gene == 'N2'] sns.distplot(temp.total_brood.values) plt.axvline(temp.total_brood.mean()) # + sns.stripplot(x='time', y='total_brood', hue='N2 source', data=temp, jitter=True, alpha=0.7) plt.axhline(temp.total_brood.mean(), ls='--', zorder=0, color='k', label='Mean') plt.ylabel('total brood') plt.xticks(rotation=30) plt.legend() plt.xlabel('time (m/dd)') plt.title('N2 brood size vs. 
time') plt.savefig('../output/n2_thru_time.pdf', bbox_inches='tight') # + def bootstrap(x, y, n, f=np.mean): """Given two datasets, generate a null distribution, resample and calculate the difference in test statistic f for each set.""" nx = len(x) ny = len(y) mixed = np.zeros(nx + ny) mixed[0:nx] = x mixed[nx:] = y @numba.jit(nopython=True) def difference(x, y, n): """Given x and y, perform `n' bootstraps to calculate the null distribution of f(y) - f(x).""" delta = np.zeros(n) for i in np.arange(n): nullx = np.random.choice(mixed, nx, replace=True) nully = np.random.choice(mixed, ny, replace=True) diff = f(nully) - f(nullx) delta[i] = diff return delta delta = difference(x, y, n) return delta def pval(x, y, n=10**5, f=np.mean): """Calculate a p-value for the null hypothesis that f(x) = f(y).""" # this is a one-tailed test, so ensure it's the right tail! if f(y) > f(x): delta_obs = f(y) - f(x) delta = bootstrap(x, y, n, f) else: delta_obs = f(x) - f(y) delta = bootstrap(y, x, n, f) success = len(delta[(delta >= delta_obs)]) pval = success/n return pval # - # the numbaized function is 50x faster than the naive function # + grouped = df.groupby('gene') control = df[df.gene == 'N2'].total_brood.values ps = {} ps['N2'] = 'control' for name, group in grouped: if name == 'N2': continue p = pval(group.total_brood.values, control, n=10**6, f=np.median) if p < 0.01: print('{0}, pval: {1:.2g}'.format(name, p)) ps[name] = 'sig' else: ps[name] = 'non-sig' # - df['sig'] = df.gene.map(ps) g = sns.FacetGrid(df, size=5) g.map(sns.stripplot, r'unhatched', r'gene', alpha=0.5, jitter=True) plt.xticks(rotation=90) def jitterplot(df, x, y, by='gene', **kwargs): """Plot a stripplot ordered by the median value of each group.""" xlabel = kwargs.pop('xlabel', 'total brood size') ylabel = kwargs.pop('ylabel', 'allele tested') # find grouped medians grouped = df.groupby(by) med = {} for name, group in grouped: med[name] = group.total_brood.median() # sort by median df2 = df.copy() df2['med']
= df2[by].map(med) df2.sort_values('med', inplace=True) fig, ax = plt.subplots() sns.stripplot(x=x, y=y, data=df2, **kwargs) plt.axvline(df2[df2.gene == 'N2'].total_brood.median(), ls='--', color='blue', lw=1, label='control median') plt.xlabel(xlabel) plt.ylabel(ylabel) plt.legend() ax.yaxis.grid(False) return fig, ax # + palette = {'sig': 'red', 'non-sig': 'black', 'control': 'blue'} fig, ax = jitterplot(df, 'total_brood', 'gene', hue='sig', jitter=True, alpha=0.5, palette=palette) plt.savefig('../output/brood_size.pdf', bbox_inches='tight') # -
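# For reference, the resampling test used above can be sketched without the numba speedup. This is an illustrative, self-contained re-implementation (the name `pooled_resample_pval` and the synthetic normal samples are not from the notebook): both samples are pooled, two new samples of the original sizes are drawn with replacement, and the p-value is the fraction of resampled differences at least as large as the observed one-tailed difference.

```python
import numpy as np

def pooled_resample_pval(x, y, n=2000, f=np.mean, seed=0):
    """One-tailed p-value for the null hypothesis f(x) == f(y),
    resampling with replacement from the pooled data."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if f(y) < f(x):  # make sure we test the right tail
        x, y = y, x
    delta_obs = f(y) - f(x)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n):
        nullx = rng.choice(pooled, size=len(x), replace=True)
        nully = rng.choice(pooled, size=len(y), replace=True)
        if f(nully) - f(nullx) >= delta_obs:
            count += 1
    return count / n

rng = np.random.default_rng(1)
same = pooled_resample_pval(rng.normal(0, 1, 50), rng.normal(0, 1, 50))
shift = pooled_resample_pval(rng.normal(0, 1, 50), rng.normal(2, 1, 50))
assert shift < 0.01   # a clear shift in the mean is detected
assert shift <= same  # identical distributions give a larger p-value
```

The numba-jitted version in the notebook does the same work in compiled loops, which is why it is roughly 50x faster on large `n`.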
src/example_src/Screen Statistical Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Example: Save Data into HDFS # The notebook saves json data from a dataframe into hdfs. # To load example data into the hdfs, use the load_data_from_hdfs notebook. import findspark findspark.init() #necessary to find the local spark from pyspark.sql.types import IntegerType, StringType, DoubleType, BooleanType, TimestampType, StructField, StructType from pyspark.sql import Row, SparkSession from pyspark import SparkContext sc = SparkContext() spark = SparkSession.builder\ .getOrCreate() # #### define the structure of the dataframe schema = StructType([ StructField('Address',StringType(), True), StructField('AlarmProfile',StringType(), True), StructField('CAN',StringType(), True), StructField('Chain',StringType(), True), StructField('Time',StringType(), True), StructField('Time2',StringType(), True) ]) # #### create json mockdata and insert it into a dataframe jsonStrings = ['{"Address":"192.168.3.11", "AlarmProfile":"-1", "CAN":"0", "Chain":"2", "Time2":"34"}', '{"Address":"192.168.3.11", "AlarmProfile":"10", "CAN":"20", "Chain":"2", "Time":"23", "Time2":"34"}'] otherPeopleRDD = sc.parallelize(jsonStrings) df = spark.read.json(otherPeopleRDD, schema=schema) df.show() # #### write the data frame as parquet file into hdfs df.write.save("hdfs://192.168.0.10:9000/user/hadoop/books/example.parquet", format="parquet", mode="append") #df.write.csv("hdfs://192.168.0.10:9000/user/hadoop/books/example.csv")
Spark/spark_scripts/save_json_data_to_hdfs_example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # default_exp tslearner # - # # TSLearners (TSClassifier, TSRegressor, TSForecaster) # # > New set of time series learners with a new sklearn-like API that simplifies learner creation. # # The new APIs make it easier to create a classifier, a regressor or a forecaster. You will no longer need to remember what arguments go to the dataset, dataloader, learner, etc. The new API will internally create everything `tsai` needs to be able to train a model. # # Note: these APIs don't replace the previous ones (get_ts_dls, ts_learner, etc.), which will continue to work as always. They just offer a simpler API that is easier to use. #export from tsai.imports import * from tsai.learner import * from tsai.data.all import * from tsai.models.InceptionTimePlus import * from tsai.models.utils import * # ## TSClassifier API # *** # # **Commonly used arguments:** # # * **X:** array-like of shape (n_samples, n_steps) or (n_samples, n_features, n_steps) with the input time series samples. Internally, they will be converted to torch tensors. # * **y:** array-like of shape (n_samples), (n_samples, n_outputs) or (n_samples, n_features, n_outputs) with the target. Internally, they will be converted to torch tensors. Default=None. None is used for unlabeled datasets. # * **splits:** lists of indices used to split data between train and validation. Default=None. If no splits are passed, data will be split 80:20 between train and test without shuffling. # * **tfms:** item transforms that will be applied to each sample individually. Default:`[None, TSClassification()]`, which is commonly used in most single-label datasets. # * **batch_tfms:** transforms applied to each batch. Default=None.
# * **bs:** batch size (if batch_size is provided then batch_size will override bs). An int or a list of ints can be passed. Default=`[64, 128]`. If a list of ints, the first one will be used for training, and the second for the validation set (its batch size can be larger as it doesn't require backpropagation, which consumes more memory). # * **arch:** indicates which architecture will be used. Default: InceptionTimePlus. # * **arch_config:** keyword arguments passed to the selected architecture. Default={}. # * **pretrained:** indicates if pretrained model weights will be used. Default=False. # * **weights_path:** indicates the path to the pretrained weights in case they are used. # * **loss_func:** allows you to pass any loss function. Default=None (in which case CrossEntropyLossFlat() is applied). # * **opt_func:** allows you to pass an optimizer. Default=Adam. # * **lr:** learning rate. Default=0.001. # * **metrics:** list of metrics passed to the Learner. Default=accuracy. # * **cbs:** list of callbacks passed to the Learner. Default=None. # * **wd:** is the default weight decay used when training the model. Default=None. # # **Less frequently used arguments:** # # * **sel_vars:** used to select which of the features in multivariate datasets are used. Default=None means all features are used. If necessary a list-like of indices can be used (e.g. `[0,3,5]`). # * **sel_steps:** used to select the steps used. Default=None means all steps are used. If necessary a list-like of indices can be used (e.g. `slice(-50, None)` will select the last 50 steps from each time series). # * **weights:** indicates a sample weight per instance. Used to pass a probability to the train dataloader sampler. Samples with more weight will be selected more often during training. # * **partial_n:** randomly selects a partial quantity of the data at each epoch. Used to reduce the training size (for example for testing purposes). An int or float can be used.
# * **inplace:** indicates whether tfms are applied during instantiation or on-the-fly. Default=True, which means that tfms will be applied during instantiation. This results in faster training, but it can only be used when data fits in memory. Otherwise set it to False. # * **shuffle_train:** indicates whether to shuffle the training set every time the dataloader is fully read/iterated or not. This doesn't have an impact on the validation set which is never shuffled. Default=True. # * **drop_last:** if True the last incomplete training batch is dropped (thus ensuring training batches of equal size). This doesn't have an impact on the validation set where samples are never dropped. Default=True. # * **num_workers:** how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. Default=0. # * **do_setup:** indicates if the Pipeline.setup method should be called during initialization. Default=True. # * **device:** Defaults to default_device() which is CUDA by default. You can specify device as `torch.device('cpu')`. # * **verbose:** controls the verbosity when fitting and predicting. # * **exclude_head:** indicates whether the head of the pretrained model needs to be removed or not. Default=True. # * **cut:** indicates the position where the pretrained model head needs to be cut. Default=-1. # * **init:** allows you to set to None (no initialization applied), set to True (in which case nn.init.kaiming_normal_ will be applied) or pass an initialization. Default=None. # * **splitter:** To do transfer learning, you need to pass a splitter to Learner. This should be a function taking the model and returning a collection of parameter groups, e.g. a list of lists of parameters. Default=trainable_params. If the model has a backbone and a head, it will then be split in those 2 groups. # * **path** and **model_dir:** are used to save and/or load models.
Often path will be inferred from dls, but you can override it or pass a Path object to model_dir. # * **wd_bn_bias:** controls if weight decay is applied to BatchNorm layers and bias. Default=False. # * **train_bn:** controls if BatchNorm layers are trained even when the rest of the model is frozen. Default=True. # * **moms:** the default momentums used in Learner.fit_one_cycle. Default=(0.95, 0.85, 0.95). # + #export class TSClassifier(Learner): def __init__(self, X, y=None, splits=None, tfms=None, inplace=True, sel_vars=None, sel_steps=None, weights=None, partial_n=None, bs=[64, 128], batch_size=None, batch_tfms=None, shuffle_train=True, drop_last=True, num_workers=0, do_setup=True, device=None, arch=None, arch_config={}, pretrained=False, weights_path=None, exclude_head=True, cut=-1, init=None, loss_func=None, opt_func=Adam, lr=0.001, metrics=accuracy, cbs=None, wd=None, wd_bn_bias=False, train_bn=True, moms=(0.95, 0.85, 0.95), path='.', model_dir='models', splitter=trainable_params, verbose=False): #Splits if splits is None: splits = TSSplitter()(X) # Item tfms if tfms is None: tfms = [None, TSClassification()] # Batch size if batch_size is not None: bs = batch_size # DataLoaders dls = get_ts_dls(X, y=y, splits=splits, sel_vars=sel_vars, sel_steps=sel_steps, tfms=tfms, inplace=inplace, path=path, bs=bs, batch_tfms=batch_tfms, num_workers=num_workers, weights=weights, partial_n=partial_n, device=device, shuffle_train=shuffle_train, drop_last=drop_last) if loss_func is None: if hasattr(dls, 'loss_func'): loss_func = dls.loss_func elif hasattr(dls, 'cat') and not dls.cat: loss_func = MSELossFlat() elif hasattr(dls, 'train_ds') and hasattr(dls.train_ds, 'loss_func'): loss_func = dls.train_ds.loss_func else: loss_func = CrossEntropyLossFlat() # Model if init is True: init = nn.init.kaiming_normal_ if arch is None: arch = InceptionTimePlus elif isinstance(arch, str): arch = get_arch(arch) if 'xresnet' in arch.__name__.lower() and not '1d' in arch.__name__.lower(): model = build_tsimage_model(arch, dls=dls, pretrained=pretrained, init=init, device=device,
verbose=verbose, arch_config=arch_config) elif 'tabularmodel' in arch.__name__.lower(): model = build_tabular_model(arch, dls=dls, device=device, arch_config=arch_config) else: model = build_ts_model(arch, dls=dls, device=device, verbose=verbose, pretrained=pretrained, weights_path=weights_path, exclude_head=exclude_head, cut=cut, init=init, arch_config=arch_config) setattr(model, "__name__", arch.__name__) try: model[0], model[1] splitter = ts_splitter except: pass super().__init__(dls, model, loss_func=loss_func, opt_func=opt_func, lr=lr, cbs=cbs, metrics=metrics, path=path, splitter=splitter, model_dir=model_dir, wd=wd, wd_bn_bias=wd_bn_bias, train_bn=train_bn, moms=moms) # - from tsai.models.InceptionTimePlus import * X, y, splits = get_classification_data('OliveOil', split_data=False) batch_tfms = [TSStandardize(by_sample=True)] learn = TSClassifier(X, y, splits=splits, batch_tfms=batch_tfms, metrics=accuracy, arch=InceptionTimePlus, arch_config=dict(fc_dropout=.5)) learn.fit_one_cycle(1) # ## TSRegressor API # *** # # **Commonly used arguments:** # # * **X:** array-like of shape (n_samples, n_steps) or (n_samples, n_features, n_steps) with the input time series samples. Internally, they will be converted to torch tensors. # * **y:** array-like of shape (n_samples), (n_samples, n_outputs) or (n_samples, n_features, n_outputs) with the target. Internally, they will be converted to torch tensors. Default=None. None is used for unlabeled datasets. # * **splits:** lists of indices used to split data between train and validation. Default=None. If no splits are passed, data will be split 80:20 between train and test without shuffling. # * **tfms:** item transforms that will be applied to each sample individually. Default=`[None, TSRegression()]` which is commonly used in most single label datasets. # * **batch_tfms:** transforms applied to each batch. Default=None. # * **bs:** batch size (if batch_size is provided then batch_size will override bs).
An int or a list of ints can be passed. Default=`[64, 128]`. If a list of ints, the first one will be used for training, and the second for the validation set (its batch size can be larger as it doesn't require backpropagation, which consumes more memory). # * **arch:** indicates which architecture will be used. Default: InceptionTimePlus. # * **arch_config:** keyword arguments passed to the selected architecture. Default={}. # * **pretrained:** indicates if pretrained model weights will be used. Default=False. # * **weights_path:** indicates the path to the pretrained weights in case they are used. # * **loss_func:** allows you to pass any loss function. Default=None (in which case MSELossFlat() is applied). # * **opt_func:** allows you to pass an optimizer. Default=Adam. # * **lr:** learning rate. Default=0.001. # * **metrics:** list of metrics passed to the Learner. Default=None. # * **cbs:** list of callbacks passed to the Learner. Default=None. # * **wd:** is the default weight decay used when training the model. Default=None. # # **Less frequently used arguments:** # # * **sel_vars:** used to select which of the features in multivariate datasets are used. Default=None means all features are used. If necessary a list-like of indices can be used (e.g. `[0,3,5]`). # * **sel_steps:** used to select the steps used. Default=None means all steps are used. If necessary a list-like of indices can be used (e.g. `slice(-50, None)` will select the last 50 steps from each time series). # * **weights:** indicates a sample weight per instance. Used to pass a probability to the train dataloader sampler. Samples with more weight will be selected more often during training. # * **partial_n:** randomly selects a partial quantity of the data at each epoch. Used to reduce the training size (for example for testing purposes). An int or float can be used. # * **inplace:** indicates whether tfms are applied during instantiation or on-the-fly.
Default=True, which means that tfms will be applied during instantiation. This results in faster training, but it can only be used when data fits in memory. Otherwise set it to False. # * **shuffle_train:** indicates whether to shuffle the training set every time the dataloader is fully read/iterated or not. This doesn't have an impact on the validation set which is never shuffled. Default=True. # * **drop_last:** if True the last incomplete training batch is dropped (thus ensuring training batches of equal size). This doesn't have an impact on the validation set where samples are never dropped. Default=True. # * **num_workers:** how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. Default=0. # * **do_setup:** indicates if the Pipeline.setup method should be called during initialization. Default=True. # * **device:** Defaults to default_device() which is CUDA by default. You can specify device as `torch.device('cpu')`. # * **verbose:** controls the verbosity when fitting and predicting. # * **exclude_head:** indicates whether the head of the pretrained model needs to be removed or not. Default=True. # * **cut:** indicates the position where the pretrained model head needs to be cut. Default=-1. # * **init:** allows you to set to None (no initialization applied), set to True (in which case nn.init.kaiming_normal_ will be applied) or pass an initialization. Default=None. # * **splitter:** To do transfer learning, you need to pass a splitter to Learner. This should be a function taking the model and returning a collection of parameter groups, e.g. a list of lists of parameters. Default=trainable_params. If the model has a backbone and a head, it will then be split in those 2 groups. # * **path** and **model_dir:** are used to save and/or load models. Often path will be inferred from dls, but you can override it or pass a Path object to model_dir.
# * **wd_bn_bias:** controls if weight decay is applied to BatchNorm layers and bias. Default=False. # * **train_bn:** controls if BatchNorm layers are trained even when the rest of the model is frozen. Default=True. # * **moms:** the default momentums used in Learner.fit_one_cycle. Default=(0.95, 0.85, 0.95). # + #export class TSRegressor(Learner): def __init__(self, X, y=None, splits=None, tfms=None, inplace=True, sel_vars=None, sel_steps=None, weights=None, partial_n=None, bs=[64, 128], batch_size=None, batch_tfms=None, shuffle_train=True, drop_last=True, num_workers=0, do_setup=True, device=None, arch=None, arch_config={}, pretrained=False, weights_path=None, exclude_head=True, cut=-1, init=None, loss_func=None, opt_func=Adam, lr=0.001, metrics=None, cbs=None, wd=None, wd_bn_bias=False, train_bn=True, moms=(0.95, 0.85, 0.95), path='.', model_dir='models', splitter=trainable_params, verbose=False): #Splits if splits is None: splits = TSSplitter()(X) # Item tfms if tfms is None: tfms = [None, TSRegression()] # Batch size if batch_size is not None: bs = batch_size # DataLoaders dls = get_ts_dls(X, y=y, splits=splits, sel_vars=sel_vars, sel_steps=sel_steps, tfms=tfms, inplace=inplace, path=path, bs=bs, batch_tfms=batch_tfms, num_workers=num_workers, weights=weights, partial_n=partial_n, device=device, shuffle_train=shuffle_train, drop_last=drop_last) if loss_func is None: if hasattr(dls, 'loss_func'): loss_func = dls.loss_func elif hasattr(dls, 'cat') and not dls.cat: loss_func = MSELossFlat() elif hasattr(dls, 'train_ds') and hasattr(dls.train_ds, 'loss_func'): loss_func = dls.train_ds.loss_func else: loss_func = MSELossFlat() # Model if init is True: init = nn.init.kaiming_normal_ if arch is None: arch = InceptionTimePlus elif isinstance(arch, str): arch = get_arch(arch) if 'xresnet' in arch.__name__.lower() and not '1d' in arch.__name__.lower(): model = build_tsimage_model(arch, dls=dls, pretrained=pretrained, init=init, device=device, verbose=verbose, arch_config=arch_config) elif 'tabularmodel' in arch.__name__.lower(): model = build_tabular_model(arch, dls=dls,
device=device, arch_config=arch_config) else: model = build_ts_model(arch, dls=dls, device=device, verbose=verbose, pretrained=pretrained, weights_path=weights_path, exclude_head=exclude_head, cut=cut, init=init, arch_config=arch_config) setattr(model, "__name__", arch.__name__) try: model[0], model[1] splitter = ts_splitter except: pass super().__init__(dls, model, loss_func=loss_func, opt_func=opt_func, lr=lr, cbs=cbs, metrics=metrics, path=path, splitter=splitter, model_dir=model_dir, wd=wd, wd_bn_bias=wd_bn_bias, train_bn=train_bn, moms=moms) # - from tsai.models.TST import * X, y, splits = get_regression_data('AppliancesEnergy', split_data=False) if X is not None: # This is to prevent a test fail when the data server is not available batch_tfms = [TSStandardize()] learn = TSRegressor(X, y, splits=splits, batch_tfms=batch_tfms, arch=TST, metrics=mae, bs=512) learn.fit_one_cycle(1, 1e-4) # ## TSForecaster API # *** # **Commonly used arguments:** # # * **X:** array-like of shape (n_samples, n_steps) or (n_samples, n_features, n_steps) with the input time series samples. Internally, they will be converted to torch tensors. # * **y:** array-like of shape (n_samples), (n_samples, n_outputs) or (n_samples, n_features, n_outputs) with the target. Internally, they will be converted to torch tensors. Default=None. None is used for unlabeled datasets. # * **splits:** lists of indices used to split data between train and validation. Default=None. If no splits are passed, data will be split 80:20 between train and test without shuffling. # * **tfms:** item transforms that will be applied to each sample individually. Default=`[None, TSForecasting()]` which is commonly used in most single label datasets. # * **batch_tfms:** transforms applied to each batch. Default=None. # * **bs:** batch size (if batch_size is provided then batch_size will override bs). An int or a list of ints can be passed. Default=`[64, 128]`. 
If a list of ints, the first one will be used for training, and the second for the validation set (its batch size can be larger as it doesn't require backpropagation, which consumes more memory). # * **arch:** indicates which architecture will be used. Default: InceptionTimePlus. # * **arch_config:** keyword arguments passed to the selected architecture. Default={}. # * **pretrained:** indicates if pretrained model weights will be used. Default=False. # * **weights_path:** indicates the path to the pretrained weights in case they are used. # * **loss_func:** allows you to pass any loss function. Default=None (in which case MSELossFlat() is applied). # * **opt_func:** allows you to pass an optimizer. Default=Adam. # * **lr:** learning rate. Default=0.001. # * **metrics:** list of metrics passed to the Learner. Default=None. # * **cbs:** list of callbacks passed to the Learner. Default=None. # * **wd:** is the default weight decay used when training the model. Default=None. # # **Less frequently used arguments:** # # * **sel_vars:** used to select which of the features in multivariate datasets are used. Default=None means all features are used. If necessary a list-like of indices can be used (e.g. `[0,3,5]`). # * **sel_steps:** used to select the steps used. Default=None means all steps are used. If necessary a list-like of indices can be used (e.g. `slice(-50, None)` will select the last 50 steps from each time series). # * **weights:** indicates a sample weight per instance. Used to pass a probability to the train dataloader sampler. Samples with more weight will be selected more often during training. # * **partial_n:** randomly selects a partial quantity of the data at each epoch. Used to reduce the training size (for example for testing purposes). An int or float can be used. # * **inplace:** indicates whether tfms are applied during instantiation or on-the-fly. Default=True, which means that tfms will be applied during instantiation.
This results in faster training, but it can only be used when data fits in memory. Otherwise set it to False. # * **shuffle_train:** indicates whether to shuffle the training set every time the dataloader is fully read/iterated or not. This doesn't have an impact on the validation set which is never shuffled. Default=True. # * **drop_last:** if True the last incomplete training batch is dropped (thus ensuring training batches of equal size). This doesn't have an impact on the validation set where samples are never dropped. Default=True. # * **num_workers:** how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. Default=0. # * **do_setup:** indicates if the Pipeline.setup method should be called during initialization. Default=True. # * **device:** Defaults to default_device() which is CUDA by default. You can specify device as `torch.device('cpu')`. # * **verbose:** controls the verbosity when fitting and predicting. # * **exclude_head:** indicates whether the head of the pretrained model needs to be removed or not. Default=True. # * **cut:** indicates the position where the pretrained model head needs to be cut. Default=-1. # * **init:** allows you to set to None (no initialization applied), set to True (in which case nn.init.kaiming_normal_ will be applied) or pass an initialization. Default=None. # * **splitter:** To do transfer learning, you need to pass a splitter to Learner. This should be a function taking the model and returning a collection of parameter groups, e.g. a list of lists of parameters. Default=trainable_params. If the model has a backbone and a head, it will then be split in those 2 groups. # * **path** and **model_dir:** are used to save and/or load models. Often path will be inferred from dls, but you can override it or pass a Path object to model_dir. # * **wd_bn_bias:** controls if weight decay is applied to BatchNorm layers and bias. Default=False.
# * **train_bn:** controls if BatchNorm layers are trained even when their parameter group is frozen. Default=True.
# * **moms:** the default momentums used in Learner.fit_one_cycle. Default=(0.95, 0.85, 0.95).

# +
#export
class TSForecaster(Learner):
    def __init__(self, X, y=None, splits=None, tfms=None, inplace=True, sel_vars=None, sel_steps=None, weights=None, partial_n=None,
                 bs=[64, 128], batch_size=None, batch_tfms=None, shuffle_train=True, drop_last=True, num_workers=0, do_setup=True, device=None,
                 arch=None, arch_config={}, pretrained=False, weights_path=None, exclude_head=True, cut=-1, init=None,
                 loss_func=None, opt_func=Adam, lr=0.001, metrics=None, cbs=None, wd=None, wd_bn_bias=False, train_bn=True,
                 moms=(0.95, 0.85, 0.95), path='.', model_dir='models', splitter=trainable_params, verbose=False):

        # Splits
        if splits is None: splits = TSSplitter()(X)

        # Item tfms
        if tfms is None: tfms = [None, TSForecasting()]

        # Batch size
        if batch_size is not None: bs = batch_size

        # DataLoaders
        dls = get_ts_dls(X, y=y, splits=splits, sel_vars=sel_vars, sel_steps=sel_steps, tfms=tfms, inplace=inplace, path=path, bs=bs,
                         batch_tfms=batch_tfms, num_workers=num_workers, weights=weights, partial_n=partial_n, device=device,
                         shuffle_train=shuffle_train, drop_last=drop_last)

        if loss_func is None:
            if hasattr(dls, 'loss_func'): loss_func = dls.loss_func
            elif hasattr(dls, 'cat') and not dls.cat: loss_func = MSELossFlat()
            elif hasattr(dls, 'train_ds') and hasattr(dls.train_ds, 'loss_func'): loss_func = dls.train_ds.loss_func
            else: loss_func = MSELossFlat()

        # Model
        if init is True: init = nn.init.kaiming_normal_
        if arch is None: arch = InceptionTimePlus
        elif isinstance(arch, str): arch = get_arch(arch)
        if 'xresnet' in arch.__name__.lower() and not '1d' in arch.__name__.lower():
            model = build_tsimage_model(arch, dls=dls, pretrained=pretrained, init=init, device=device, verbose=verbose, arch_config=arch_config)
        elif 'tabularmodel' in arch.__name__.lower():
            model = build_tabular_model(arch, dls=dls, device=device, arch_config=arch_config)  # assign the result so the model is actually used
        else:
            model = build_ts_model(arch, dls=dls, device=device,
                                   verbose=verbose, pretrained=pretrained, weights_path=weights_path,
                                   exclude_head=exclude_head, cut=cut, init=init, arch_config=arch_config)
        setattr(model, "__name__", arch.__name__)
        try:
            model[0], model[1]
            splitter = ts_splitter
        except:
            pass
        super().__init__(dls, model, loss_func=loss_func, opt_func=opt_func, lr=lr, cbs=cbs, metrics=metrics, path=path,
                         splitter=splitter, model_dir=model_dir, wd=wd, wd_bn_bias=wd_bn_bias, train_bn=train_bn, moms=moms)
# -

from tsai.models.TSTPlus import *

ts = get_forecasting_time_series('Sunspots')
if ts is not None:  # This is to prevent a test failure when the data server is not available
    X, y = SlidingWindowSplitter(60, horizon=1)(ts)
    splits = TSSplitter(235)(y)
    batch_tfms = [TSStandardize(by_var=True)]
    learn = TSForecaster(X, y, splits=splits, batch_tfms=batch_tfms, arch=TST, arch_config=dict(fc_dropout=.5),
                         metrics=mae, bs=512, partial_n=.1)
    learn.fit_one_cycle(1)

#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
nbs/052b_tslearner.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
# ---

# # Graphing network packets

# This notebook currently relies on HoloViews 1.9 or above. Run `conda install -c ioam/label/dev holoviews` to install it.

# ## Preparing data

# The data source comes from a publicly available network forensics repository: http://www.netresec.com/?page=PcapFiles. The selected file is https://download.netresec.com/pcap/maccdc-2012/maccdc2012_00000.pcap.gz.
#
# ```
# tcpdump -qns 0 -r maccdc2012_00000.pcap | grep tcp > maccdc2012_00000.txt
# ```

# For example, here is a snapshot of the resulting output:

# ```
# 09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
# 09:30:07.780000 IP 192.168.24.100.1038 > 192.168.202.68.8080: tcp 0
# 09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
# 09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
# 09:30:07.780000 IP 192.168.27.100.37877 > 192.168.204.45.41936: tcp 0
# 09:30:07.780000 IP 192.168.24.100.1038 > 192.168.202.68.8080: tcp 0
# 09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
# 09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
# 09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
# 09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
# ```

# Given the directional nature of network traffic and the numerous ports per node, we will simplify the graph by treating traffic between nodes as undirected and ignoring the distinction between ports. Each graph edge will be weighted by the total number of bytes exchanged between the two nodes, in either direction.
#
# ```
# python pcap_to_parquet.py maccdc2012_00000.txt
# ```

# The resulting output will be two Parquet dataframes, `maccdc2012_nodes.parq` and `maccdc2012_edges.parq`.
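The `pcap_to_parquet.py` script itself is not shown here. As a rough sketch of the aggregation step described above (the regex and function below are illustrative, not the script's actual code), undirected edge weights can be accumulated from the tcpdump text like this:

```python
import re
from collections import defaultdict

# Match "IP <src-ip>.<port> > <dst-ip>.<port>: tcp <bytes>"
LINE_RE = re.compile(
    r"IP (\d+\.\d+\.\d+\.\d+)\.\d+ > (\d+\.\d+\.\d+\.\d+)\.\d+: tcp (\d+)"
)

def aggregate_edges(lines):
    """Return {(node_a, node_b): total_bytes}, treating traffic as undirected."""
    weights = defaultdict(int)
    for line in lines:
        m = LINE_RE.search(line)
        if m is None:
            continue  # skip lines that are not tcp records
        src, dst, nbytes = m.group(1), m.group(2), int(m.group(3))
        key = tuple(sorted((src, dst)))  # drop ports and direction
        weights[key] += nbytes
    return dict(weights)

sample = [
    "09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380",
    "09:30:07.780000 IP 192.168.24.100.1038 > 192.168.202.68.8080: tcp 0",
]
print(aggregate_edges(sample))
```

The two sample lines above collapse into a single undirected edge whose weight is the byte total of both directions.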
# ## Loading data

# +
import holoviews as hv
from holoviews import opts, dim
import networkx as nx
import dask.dataframe as dd

from holoviews.operation.datashader import (
    datashade, dynspread, directly_connect_edges, bundle_graph, stack
)
from holoviews.element.graphs import layout_nodes
from datashader.layout import random_layout
from colorcet import fire

hv.extension('bokeh')

keywords = dict(bgcolor='black', width=800, height=800, xaxis=None, yaxis=None)
opts.defaults(opts.Graph(**keywords), opts.Nodes(**keywords), opts.RGB(**keywords))
# -

edges_df = dd.read_parquet('../data/maccdc2012_full_edges.parq').compute()
edges_df = edges_df.reset_index(drop=True)
graph = hv.Graph(edges_df)
len(edges_df)

# ## Edge bundling & layouts

# Datashader and HoloViews provide support for a number of different graph layouts, including circular, force-atlas and random layouts. Since large graphs with thousands of edges can become quite messy when plotted, datashader also provides functionality to bundle the edges.

# #### Circular layout

# By default the HoloViews Graph object lays out nodes using a circular layout. Once we have declared the ``Graph`` object we can simply apply the ``bundle_graph`` operation. We also overlay the datashaded graph with the nodes, letting us identify each node by hovering.

opts.defaults(opts.Nodes(size=5, padding=0.1))

circular = bundle_graph(graph)
datashade(circular, width=800, height=800) * circular.nodes

# #### Force Atlas 2 layout

# For other graph layouts you can use the ``layout_nodes`` operation, supplying the datashader or NetworkX layout function. Here we will use the ``nx.spring_layout`` function based on the [Fruchterman-Reingold](https://en.wikipedia.org/wiki/Force-directed_graph_drawing) algorithm.
Instead of bundling the edges, we may also use the ``directly_connect_edges`` function:

forceatlas = directly_connect_edges(layout_nodes(graph, layout=nx.spring_layout))
datashade(forceatlas, width=800, height=800) * forceatlas.nodes

# #### Random layout

# Datashader also provides a number of layout functions in case you don't want to depend on NetworkX:

random = bundle_graph(layout_nodes(graph, layout=random_layout))
datashade(random, width=800, height=800) * random.nodes

# ## Showing nodes with active traffic

# To show just the nodes with active traffic, we apply ``select`` on the Graph to keep only those edges with a weight of more than 10,000. By overlaying the sub-graph of high-traffic edges we can take advantage of the interactive hover and tap features that bokeh provides, while still revealing the full datashaded graph in the background.

overlay = datashade(circular, width=800, height=800) * circular.select(weight=(10000, None))
overlay.opts(
    opts.Graph(edge_line_color='white', edge_hover_line_color='blue', padding=0.1))

# ## Highlight TCP and UDP traffic

# Using the same selection features we can highlight TCP and UDP connections separately, again by overlaying them on top of the full datashaded graph. The edges can be revealed over the highlighted nodes, and by setting an alpha level we can also reveal connections with both TCP (blue) and UDP (red) traffic in purple.
# +
udp_opts = opts.Graph(edge_hover_line_color='red', node_size=20, node_fill_color='red', edge_selection_line_color='red')
tcp_opts = opts.Graph(edge_hover_line_color='blue', node_fill_color='blue', edge_selection_line_color='blue')

udp = forceatlas.select(protocol='udp', weight=(10000, None)).opts(udp_opts)
tcp = forceatlas.select(protocol='tcp', weight=(10000, None)).opts(tcp_opts)
layout = datashade(forceatlas, width=800, height=800, normalization='log', cmap=['black', 'white']) * tcp * udp

layout.opts(
    opts.Graph(edge_alpha=0, edge_hover_alpha=0.5, edge_nonselection_alpha=0,
               inspection_policy='edges', node_size=8, node_alpha=0.5,
               edge_color=dim('weight')))
# -

# ## Coloring by protocol

# As we have already seen, we can easily apply selections to the ``Graph`` objects. We can use this functionality to select by protocol, datashade the subgraph for each protocol, assign each a different color, and finally stack the resulting datashaded layers:

from bokeh.palettes import Blues9, Reds9, Greens9
ranges = dict(x_range=(-.5, 1.6), y_range=(-.5, 1.6), width=800, height=800)
protocols = [('tcp', Blues9), ('udp', Reds9), ('icmp', Greens9)]

shaded = hv.Overlay([datashade(forceatlas.select(protocol=p), cmap=cmap, **ranges)
                     for p, cmap in protocols]).collate()

stack(shaded * dynspread(datashade(forceatlas.nodes, cmap=['white'], **ranges)), link_inputs=True)

# ## Selecting the highest targets

# With a bit of help from pandas we can also extract the twenty most targeted nodes and overlay them on top of the datashaded plot:

# +
target_counts = list(edges_df.groupby('target').count().sort_values('weight').iloc[-20:].index.values)

overlay = (datashade(forceatlas, cmap=fire[128:]) *
           datashade(forceatlas.nodes, cmap=['cyan']) *
           forceatlas.nodes.select(index=target_counts))

overlay.opts(
    opts.Nodes(size=8),
    opts.RGB(width=800, height=800))
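The `groupby ... iloc[-20:]` chain above packs several steps into one expression. A toy version with a made-up edge table (keeping just the single most-targeted node here, for brevity) shows what each step contributes:

```python
import pandas as pd

# Hypothetical miniature edge table, mimicking edges_df's columns
edges = pd.DataFrame({
    "source": ["a", "b", "c", "d", "e"],
    "target": ["x", "x", "x", "y", "z"],
    "weight": [1, 2, 3, 4, 5],
})

# groupby("target").count() counts edges per target node (any column works);
# sort_values("weight") orders ascending; iloc[-1:] keeps the busiest target.
top = list(edges.groupby("target").count().sort_values("weight").iloc[-1:].index.values)
print(top)  # -> ['x']
```

Note that `count()` here counts rows per target, not total bytes; to rank by traffic volume you would use `sum()` on the weight column instead.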
datashader-work/datashader-examples/topics/network_packets.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# #!/usr/bin/env python

import pandas as pd
import astropy.io.ascii as ascii
import numpy as np

f = '/Users/jielaizhang/src/photpipe/config/DECAMNOAO/YSE/YSE.fieldcenters'
pipedata_dir = '/fred/oz100/NOAO_archive/KNTraP_Project/photpipe/v20.0/DECAMNOAO/YSE/'
outdir = '/fred/oz100/NOAO_archive/KNTraP_Project/photpipe/v20.0/DECAMNOAO/YSE/abscats/'
command_script = '/Users/jielaizhang/src/KNTraP/scripts/photpipe_getCDFS_PS1cats.py'
saveit = open(command_script, 'w')

d = ascii.read(f)
d_CDFS = d[d['field'] == 'CDFS']

for line in d_CDFS:
    field = line['field']
    ccd = line['ampl']
    ra = line['RAdeg']
    dec = line['DECdeg']
    printme = f'\n#{field} {ccd} {ra} {dec} :\n'
    print(printme)
    saveit.write(printme)
    for band, photcode in zip(['g', 'r', 'i', 'z'], ['0x5013', '0x5014', '0x5015', '0x5016']):
        command = f'getPS1cat.py {ra} {dec} -{band} --size 0.55x0.3 '
        command = command + f'-o {outdir}/{photcode}/CDFS_{band}_{ccd}_PS1.cat '
        command = command + f'--clobber --tmpdir {pipedata_dir}/workspace/delme/PS1cat \n'
        print(command)
        saveit.write(command)
saveit.close()
# -
notebooks/photpipe_01_JielaiZhang_getPS1cat_YSE_CDFS.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from compas.datastructures import Mesh import compas.datastructures.mesh.subdivision as sd from compas.geometry import Circle, Polyline, Line, closest_point_in_cloud import ipyvolume as ipv from utilities import draw_compas_mesh from shapes import * from compas.datastructures import meshes_join from compas.geometry import distance_point_point # + origin = (0,0,0) normal = (0,0,1) plane = (origin, normal) r_axis_1 = 5 r_axis_2 = r_axis_1*1.1 r_pipe = .5 circle_1 = Circle(plane, r_axis_1) circle_2 = Circle(plane, r_axis_2) cylinder_1 = Cylinder(circle_1, 1) cylinder_2 = Cylinder(circle_2, 1) #torus = Torus(plane, r_axis, r_pipe) vs, fs = cylinder_1.to_vertices_and_faces(u = 20, v = 3) mesh_1 = Mesh.from_vertices_and_faces(vs, fs) vs, fs = cylinder_2.to_vertices_and_faces(u = 20, v = 3) mesh_2 = Mesh.from_vertices_and_faces(vs, fs) #subd = sd.mesh_subdivide_catmullclark(mesh, k=1) #draw_compas_mesh(subd) # + # get lines from edges # create new lines from closest pt other mesh # mesh from lines face_coords_list = [] connection_lines = [] existing_edge_lines = [] for m in [mesh_1, mesh_2]: for fkey in m.faces(): face_coords = m.face_coordinates(fkey, axes='xyz') for i in range(len(face_coords)-1): line = [*face_coords[i]], [*face_coords[i+1]] existing_edge_lines.append(line) for fkey in mesh_1.faces(): face_coords = mesh_1.face_coordinates(fkey, axes='xyz') face_coords_list.append(face_coords) pt_cloud_from_other_mesh = [] for vkey in mesh_2.vertices(): v_coord = mesh_2.vertex_coordinates(vkey) pt_cloud_from_other_mesh.append(v_coord) for face in face_coords_list: for vertex in face: _, friend, _ = closest_point_in_cloud(vertex, pt_cloud_from_other_mesh) line = [*vertex], [*friend] connection_lines.append(line) print(connection_lines) bunch_of_lines = 
connection_lines + existing_edge_lines
#print(bunch_of_lines)

joined_mesh = Mesh.from_lines(bunch_of_lines)

draw_compas_mesh(joined_mesh)
# -

joined = meshes_join([mesh_1, mesh_2])

joined.summary()

draw_compas_mesh([mesh_1, mesh_2])

# +
area_list = []

for fkey in mesh.faces():
    area = mesh.face_area(fkey)
    #print(area)
    area_list.append(area)

#max_area = max(area_list)
#print(max_area)

average_area = sum(area_list) / float(len(area_list))

mesh2 = mesh.copy()

for fkey in mesh.faces():
    if mesh2.face_area(fkey) > average_area:
        print(mesh.face_area(fkey))
        mesh2.delete_face(fkey)

draw_compas_mesh(mesh2)
# +
distances = []
for v in mesh.vertices():
    v_coord = mesh.vertex_coordinates(v)
    distance_to_mid = distance_point_point(origin, v_coord)
    distances.append(distance_to_mid)
print(distances)
# +
distance_outer_ring = round(max(distances), 1)
distance_inner_ring = round(min(distances), 1)
print(distance_inner_ring)
print(distance_outer_ring)
# +
point_list = []
for v in mesh.vertices():
    v_coord = mesh.vertex_coordinates(v)
    point_list.append(v_coord)
    distance_to_mid = distance_point_point(origin, v_coord)
    if distance_to_mid == distance_outer_ring:
        mesh.set_vertex_attribute(v, 'location', 'outer')
    else:
        mesh.set_vertex_attribute(v, 'location', 'inner')

new_mesh = Mesh()
for pt in point_list:
    new_mesh.add_vertex(x=pt[0], y=pt[1], z=pt[2])  # z must come from the third coordinate

print(point_list)

new_mesh = Mesh.from_points(point_list)
draw_compas_mesh(new_mesh)
# -
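The cells above lean on compas' `closest_point_in_cloud` helper. In case it is unfamiliar, here is a minimal pure-Python stand-in (an illustrative sketch, not compas' actual implementation) that mirrors the `(_, friend, _)` unpacking used earlier:

```python
import math

def closest_point_in_cloud_sketch(point, cloud):
    """Return (distance, closest_point, index) for the point in `cloud`
    nearest to `point` - the same triple shape compas returns."""
    return min(
        ((math.dist(point, p), p, i) for i, p in enumerate(cloud)),
        key=lambda t: t[0],
    )

cloud = [(0, 0, 0), (1, 0, 0), (5, 5, 5)]
dist, friend, idx = closest_point_in_cloud_sketch((0.9, 0.1, 0.0), cloud)
print(friend, idx)  # -> (1, 0, 0) 1
```

This is a brute-force O(n) scan; compas and scipy offer spatial-index variants when the cloud is large.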
T1/09_mesh_subdivision/191009_cylinders_tetov.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3.8.8 64-bit ('.env')
#     metadata:
#       interpreter:
#         hash: 6b3d8fded84b82a84dd06aec3772984b6fd7683755c4932dae62599619bfeba9
#     name: python3
# ---

# # Histograms, Binnings, and Density
#
# A simple histogram can be a great first step in understanding a dataset. Earlier, we saw a preview of Matplotlib's histogram function, which creates a basic histogram in one line, once the normal boilerplate imports are done:

# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('seaborn-white')
data = np.random.randn(1000)
# -

plt.hist(data);

# The `hist()` function has many options to tune both the calculation and the display; here's an example of a more customized histogram:

plt.hist(data, density=True, bins=30, alpha=0.5,
         histtype='stepfilled', color='steelblue',
         edgecolor='none');

# I find this combination of `histtype='stepfilled'` along with some transparency `alpha` to be very useful when comparing histograms of several distributions:

# +
x1 = np.random.normal(0, 0.8, 1000)
x2 = np.random.normal(-2, 1, 1000)
x3 = np.random.normal(3, 2, 1000)

kwargs = dict(histtype='stepfilled', alpha=0.3, density=True, bins=40)

plt.hist(x1, **kwargs)
plt.hist(x2, **kwargs)
plt.hist(x3, **kwargs);
# -

# If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the `np.histogram()` function is available:

counts, bin_edges = np.histogram(data, bins=5)
print(counts)

bin_edges

# ## Two-Dimensional Histograms and Binnings
#
# Just as we create histograms in one dimension by dividing the number line into bins, we can also create histograms in two dimensions by dividing points among two-dimensional bins. We'll take a brief look at several ways to do this here.
We'll start by defining some data - an `x` and `y` array drawn from a multivariate Gaussian distribution:

mean = [0, 0]
cov = [[1, 1], [1, 2]]
x, y = np.random.multivariate_normal(mean, cov, 100_000).T

# ### `plt.hist2d`: Two-Dimensional histogram
#
# One straightforward way to plot a two-dimensional histogram is to use Matplotlib's `plt.hist2d` function:

plt.hist2d(x, y, bins=30, cmap='Blues')
cb = plt.colorbar()
cb.set_label('counts in bin')

# Just as with `plt.hist`, `plt.hist2d` has a number of extra options to fine-tune the plot and the binning, which are nicely outlined in the function docstring. Further, just as `plt.hist` has a counterpart in `np.histogram`, `plt.hist2d` has a counterpart in `np.histogram2d`, which can be used as follows:

counts, xedges, yedges = np.histogram2d(x, y, bins=30)

# For the generalization of this histogram binning to dimensions higher than two, see the `np.histogramdd` function.

# ### `plt.hexbin`: Hexagonal binnings
#
# The two-dimensional histogram creates a tessellation of squares across the axes. Another natural shape for such a tessellation is the regular hexagon. For this purpose, Matplotlib provides the `plt.hexbin` routine, which represents a two-dimensional dataset binned within a grid of hexagons:

plt.hexbin(x, y, gridsize=30, cmap='Blues')
cb = plt.colorbar(label='count in bin')

# `plt.hexbin` has a number of interesting options, including the ability to specify weights for each point, and to change the output in each bin to any NumPy aggregate (mean of weights, standard deviation of weights, etc.).

# ### Kernel density estimation
#
# Another common method of evaluating densities in multiple dimensions is *kernel density estimation* (KDE). This will be discussed more fully later, but for now we'll simply mention that KDE can be thought of as a way to "smear out" the points in space and add up the result to obtain a smooth function. One extremely quick and simple KDE implementation exists in the `scipy.stats` package.
Here is a quick example of using the KDE on this data:

# +
from scipy.stats import gaussian_kde

# fit an array of size [Ndim, Nsamples]
data = np.vstack([x, y])
kde = gaussian_kde(data)

# evaluate on a regular grid
xgrid = np.linspace(-3.5, 3.5, 40)
ygrid = np.linspace(-6, 6, 40)
Xgrid, Ygrid = np.meshgrid(xgrid, ygrid)
Z = kde.evaluate(np.vstack([Xgrid.ravel(), Ygrid.ravel()]))

# Plot the result as an image
plt.imshow(Z.reshape(Xgrid.shape),
           origin='lower', aspect='auto',
           extent=[-3.5, 3.5, -6, 6],
           cmap='Blues')
cb = plt.colorbar()
cb.set_label("density")
# -

# KDE has a smoothing length that effectively slides the knob between detail and smoothness (one example of the ubiquitous bias-variance trade-off). The literature on choosing an appropriate smoothing length is a vast one: `gaussian_kde` uses a rule of thumb to attempt to find a nearly optimal smoothing length for the input data.
#
# Other KDE implementations are available within the SciPy ecosystem, each with its own strengths and weaknesses; see, for example, `sklearn.neighbors.KernelDensity` and `statsmodels.nonparametric.kernel_density.KDEMultivariate`. For visualizations based on KDE, using Matplotlib tends to be overly verbose. The seaborn library provides a much more terse API for creating KDE-based visualizations.
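To make the smoothing-length trade-off concrete, here is a minimal fixed-bandwidth 1D Gaussian KDE written directly in NumPy (an illustrative sketch, not `gaussian_kde` itself): a small bandwidth gives a spiky, detailed estimate, while a large one washes detail out.

```python
import numpy as np

def kde_1d(samples, grid, bandwidth):
    """Fixed-bandwidth Gaussian KDE: average one Gaussian bump per
    sample, evaluated at each grid point."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    bumps = np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2 * np.pi))
    return bumps.mean(axis=1)

rng = np.random.default_rng(42)
samples = rng.normal(size=500)
grid = np.linspace(-4, 4, 201)

narrow = kde_1d(samples, grid, bandwidth=0.05)  # noisy, high detail (low bias, high variance)
wide = kde_1d(samples, grid, bandwidth=1.5)     # smooth, washed out (high bias, low variance)
```

Both estimates are proper densities over the sampled range; the narrow one simply concentrates its mass in tall, jittery peaks where the wide one spreads it flat.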
matplotlib/histograms_binnings_density.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # T013 · Data acquisition from PubChem # # Authors: # # - <NAME>, 2019-2020, [Volkamer lab, Charité](https://volkamerlab.org/) # - <NAME>, 2019-2020, [Volkamer lab, Charité](https://volkamerlab.org/) # - <NAME>, 2019-2020, [Volkamer lab, Charité](https://volkamerlab.org/) # ## Aim of this talktorial # # In this notebook, you will learn how to search for compounds similar to an input SMILES in [PubChem](https://pubchem.ncbi.nlm.nih.gov/) with the API web service. # ### Contents in *Theory* # # - PubChem # - Programmatic access to PubChem # ### Contents in *Practical* # # * Simple examples for the PubChem API # * How to get the PubChem CID for a compound # * Retrieve molecular properties based on a PubChem CID # * Depict a compound with PubChem # * Query PubChem for similar compounds # * Determine a query compound # * Create task and get the job key # * Download results when job finished # * Get canonical SMILES for resulting molecules # * Show the results # ## References # # * Literature: # * PubChem 2019 update: [_Nucleic Acids Res._ (2019), __47__, D1102-1109](https://academic.oup.com/nar/article/47/D1/D1102/5146201) # * PubChem in 2021: [_Nucleic Acids Res._(2021), __49__, D1388–D1395](https://academic.oup.com/nar/article/49/D1/D1388/5957164) # * Documentation: # * [PubChem Source Information](https://pubchem.ncbi.nlm.nih.gov/sources) # * [PUG REST](https://pubchemdocs.ncbi.nlm.nih.gov/pug-rest) # * [Programmatic Access](https://pubchemdocs.ncbi.nlm.nih.gov/programmatic-access) # * [PubChem - Wikipedia](https://en.wikipedia.org/wiki/PubChem) # ## Theory # ### PubChem # # [PubChem](https://pubchem.ncbi.nlm.nih.gov/) is an open database containing chemical molecules and their measured activities against biological assays, maintained 
by the [National Center for Biotechnology Information (NCBI)](https://www.ncbi.nlm.nih.gov/), part of [National Institutes of Health (NIH)](https://www.nih.gov/). It is the world’s largest freely available database of chemical information, collected from more than 770 data sources. There are three dynamically growing primary subdatabases, i.e., substances, compounds, and bioassay databases. # As of August 2020, PubChem contains over 110 million unique chemical structures and over 270 million bioactivities ([_Nucleic Acids Res._(2021), __49__, D1388–D1395](https://academic.oup.com/nar/article/49/D1/D1388/5957164)). # # These compounds can be queried using a broad range of properties including chemical structure, name, fragments, chemical formula, molecular weight, XlogP, hydrogen bond donor and acceptor count, etc. There is no doubt that PubChem has become a key chemical information resource for scientists, students, and the general public. # # Every data from PubChem is free to access through both the web interface and a programmatic interface. Here, we are going to learn how to use the API of PubChem to do some cool things. # ### Programmatic access to PubChem # # For some reasons, mainly historical, PubChem provides several ways of programmatic access to the open data. # # * [PUG-REST](https://pubchemdocs.ncbi.nlm.nih.gov/pug-rest) is a Representational State Transfer (REST)-style version of the PUG (Power User Gateway) web service, with both the syntax of the HTTP requests and the available functions. It can also provide convenient access to information on PubChem records which are not reachable with other PUG services. # # * PUG-View is another REST-style web service for PubChem. It can provide full reports, including third-party textual annotation, for PubChem records. # # * PUG offers programmatic access to PubChem service via a single common gateway interface (CGI). # # * PUG-SOAP uses the simple object access protocol (SOAP) to access PubChem data. 
# # * PubChemRDF REST interface is a special interface for RDF-encoded PubChem data. # # For more details about the PubChem API, please check out the introduction of [programmatic access](https://pubchemdocs.ncbi.nlm.nih.gov/programmatic-access) in the PubChem Docs. # # In this tutorial, we will focus on the **PUG-REST** variant of the API, please refer to **Talktorial T011** for an introduction. # ## Practical # # In this section, we will discuss how to search PubChem based on a given query molecule using the PUG-REST access. # + import time from pathlib import Path from urllib.parse import quote from IPython.display import Markdown, Image import requests import pandas as pd from rdkit import Chem from rdkit.Chem import PandasTools from rdkit.Chem.Draw import MolsToGridImage # - HERE = Path(_dh[-1]) DATA = HERE / "data" # ### Simple examples for the PubChem API # Before querying PubChem for similar compounds, we provide some simple examples to show how to use the PubChem API. For more detail about the PubChem API, see [PUG-REST tutorial](https://pubchemdocs.ncbi.nlm.nih.gov/pug-rest-tutorial). # # #### How to get the PubChem CID for a compound # For example, we will search for the PubChem Compound Identification (CID) for Aspirin by name. # + # Get PubChem CID by name name = "aspirin" url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/{name}/cids/JSON" r = requests.get(url) r.raise_for_status() response = r.json() if "IdentifierList" in response: cid = response["IdentifierList"]["CID"][0] else: raise ValueError(f"Could not find matches for compound: {name}") print(f"PubChem CID for {name} is:\n{cid}") # NBVAL_CHECK_OUTPUT # - # #### Retrieve molecular properties based on a PubChem CID # We can get interesting properties for a compound through its PubChem CID, such as molecular weight, pKd, logP, etc. Here, we will search for the molecular weight for Aspirin. 
# +
# Get mol weight for aspirin
url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/cid/{cid}/property/MolecularWeight/JSON"
r = requests.get(url)
r.raise_for_status()
response = r.json()

if "PropertyTable" in response:
    mol_weight = response["PropertyTable"]["Properties"][0]["MolecularWeight"]
else:
    raise ValueError(f"Could not find matches for PubChem CID: {cid}")

print(f"Molecular weight for {name} is:\n{mol_weight}")
# NBVAL_CHECK_OUTPUT
# -

# #### Depict a compound with PubChem

# +
# Get PNG image from PubChem
url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/cid/{cid}/PNG"
r = requests.get(url)
r.raise_for_status()

display(Markdown("The 2D structure of Aspirin:"))
display(Image(r.content))
# -

# ### Query PubChem for similar compounds

# Use a `SMILES` string of a query compound to search for similar compounds in PubChem.
#
# > Tip: You can check **Talktorial T001** to see how to do a similar operation with the ChEMBL database.
#
# We define a function to query the PubChem service for compounds similar to the given one. The Tanimoto-based similarity between (2D) PubChem fingerprints will be calculated here. You can see [this paper](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-016-0163-1) for more details about similarity evaluation in PubChem, and the [PubChem fingerprint specification](https://ftp.ncbi.nlm.nih.gov/pubchem/specifications/pubchem_fingerprints.pdf) for the details about the PubChem fingerprint.

# #### 1. Determine a query compound

# In the following steps, we will search the PubChem service for compounds similar to Gefitinib, an inhibitor of EGFR. You can also choose another compound which you are interested in.

query = "COC1=C(C=C2C(=C1)N=CN=C2NC3=CC(=C(C=C3)F)Cl)OCCCN4CCOCC4"  # Gefitinib
print("The structure of Gefitinib:")
Chem.MolFromSmiles(query)

# #### 2. Create task and get the job key
#
# Input the canonical `SMILES` string for the given compound to create a new task, so we obtain a job key here.
Note that this asynchronous API will not return the data immediately. In this case, the callback will be provided only when the requested resource is ready, which can be checked by the job key. So asynchronous requests are useful to work around certain slower operations. def query_pubchem_for_similar_compounds(smiles, threshold=75, n_records=10): """ Query PubChem for similar compounds and return the job key. Parameters ---------- smiles : str The canonical SMILES string for the given compound. threshold : int The threshold of similarity, default 75%. In PubChem, the default threshold is 90%. n_records : int The maximum number of feedback records. Returns ------- str The job key from the PubChem web service. """ escaped_smiles = quote(smiles).replace("/", ".") url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/similarity/smiles/{escaped_smiles}/JSON?Threshold={threshold}&MaxRecords={n_records}" r = requests.get(url) r.raise_for_status() key = r.json()["Waiting"]["ListKey"] return key job_key = query_pubchem_for_similar_compounds(query) # #### 3. Download results when job finished # Check if the job finished within the time limit. The standard time limit in PubChem is 30 seconds (we can reset it). It means that if a request is not completed within the time limit, a timeout error will be returned. So, just wait patiently for the callback. # # Then download results - PubChem CIDs in this case - when the job is finished. def check_and_download(key, attempts=30): """ Check job status and download PubChem CIDs when the job finished Parameters ---------- key : str The job key of the PubChem service. attempts : int The time waiting for the feedback from the PubChem service. Returns ------- list The PubChem CIDs of similar compounds. 
""" url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/listkey/{key}/cids/JSON" print(f"Querying for job {key} at URL {url}...", end="") while attempts: r = requests.get(url) r.raise_for_status() response = r.json() if "IdentifierList" in response: cids = response["IdentifierList"]["CID"] break attempts -= 1 print(".", end="") time.sleep(10) else: raise ValueError(f"Could not find matches for job key: {key}") return cids similar_cids = check_and_download(job_key) # #### 4. Get canonical SMILES for resulting molecules def smiles_from_pubchem_cids(cids): """ Get the canonical SMILES string from the PubChem CIDs. Parameters ---------- cids : list A list of PubChem CIDs. Returns ------- list The canonical SMILES strings of the PubChem CIDs. """ url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/cid/{','.join(map(str, cids))}/property/CanonicalSMILES/JSON" r = requests.get(url) r.raise_for_status() return [item["CanonicalSMILES"] for item in r.json()["PropertyTable"]["Properties"]] similar_smiles = smiles_from_pubchem_cids(similar_cids) # Then, we create the RDKit molecules and depict them. query_results_df = pd.DataFrame({"smiles": similar_smiles, "CIDs": similar_cids}) PandasTools.AddMoleculeColumnToFrame(query_results_df, smilesCol="smiles") query_results_df.head(5) # #### 5. Show the results # Show the compounds with images using RDKit's drawing functions. def multi_preview_smiles(query_smiles, query_name, similar_molecules_pd): """ Show query and similar compounds in 2D structure representation. Parameters ---------- query_smiles : str The SMILES string of query compound. query_name : str The name of query compound. similar_molecules_pd : pandas The pandas DataFrame which contains the SMILES string and CIDs of similar molecules. 
    Returns
    -------
    MolsToGridImage
    """
    legends = [f"PubChem CID: {str(s)}" for s in similar_molecules_pd["CIDs"].tolist()]
    molecules = [Chem.MolFromSmiles(s) for s in similar_molecules_pd["smiles"]]
    query_smiles = Chem.MolFromSmiles(query_smiles)
    return MolsToGridImage(
        [query_smiles] + molecules,
        molsPerRow=3,
        subImgSize=(300, 300),
        maxMols=len(molecules),
        legends=([query_name] + legends),
        useSVG=True,
    )

print("The results of querying similar compounds for Gefitinib:")
multi_preview_smiles(query, "Gefitinib", query_results_df)

# ## Discussion
#
# In this notebook, you have learned how to access the PubChem database and search it for similar compounds via the PUG-REST programmatic access. Is it convenient? PUG-REST can do more than that. See [PUG-REST](https://pubchemdocs.ncbi.nlm.nih.gov/pug-rest-tutorial) to get more power!
#
# ## Quiz
#
# - Can you make the similarity search more strict?
# - Is any of the proposed candidates already an approved inhibitor? (Hint: You can _scrape_ [PKIDB](http://www.icoa.fr/pkidb/) and check against the list of SMILES, also see **Talktorial T011**)
# - Can you try to reuse the functions in this notebook to search for compounds similar to Imatinib and inspect your results?
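As a footnote to the similarity search used in this talktorial: PubChem ranks hits by the Tanimoto coefficient between fingerprints. On fingerprints represented as sets of "on" bit positions, it is simply intersection over union. The bit positions below are made up for illustration, not real PubChem fingerprint bits:

```python
def tanimoto(bits_a, bits_b):
    """Tanimoto coefficient between two fingerprints given as sets
    of set-bit positions: |A & B| / |A | B|."""
    a, b = set(bits_a), set(bits_b)
    if not a and not b:
        return 1.0  # conventionally, two empty fingerprints are identical
    return len(a & b) / len(a | b)

# Toy fingerprints: 4 shared bits out of 6 distinct bits -> 4/6
fp_query = {1, 5, 9, 12, 20}
fp_hit = {1, 5, 9, 13, 20}
print(tanimoto(fp_query, fp_hit))
```

A threshold of 75%, as used in `query_pubchem_for_similar_compounds` above, keeps only hits whose coefficient is at least 0.75.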
teachopencadd/talktorials/T013_query_pubchem/talktorial.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Pandas
#
# Pandas is an open source library built on top of NumPy. It allows for fast data analysis, cleaning, and preparation. It excels in performance and productivity, has built-in visualization features, and can work with data from a wide variety of sources.
#
# >**conda install pandas**
#
# >**pip install pandas**
#
# ### Series

import numpy as np
import pandas as pd

label = ['a','b','c']
my_data = [10,20,30]
arr = np.array(my_data)
d = {'a':10,'b':20,'c':30}

pd.Series(data=my_data,index=label)

pd.Series(arr,label)

pd.Series(d)

pd.Series(label)

pd.Series(data = [sum,print,len])

ser1 = pd.Series([1,2,3,4],['USA','Germany','USSR','Japan'])
ser1

ser2 = pd.Series([1,2,5,4],['USA','Germany','Italy','Japan'])
ser2

ser2['USA']

ser3 = pd.Series(label)
ser3

ser1 + ser2

# ### DataFrame

from numpy.random import randn
np.random.seed(101)

df = pd.DataFrame(randn(5,4),['A','B','C','D','E'],['W','X','Y','Z'])
df

df['W']

type(df['W'])

type(df)

df.W  # attribute access works, but prefer df['W'] — it cannot clash with DataFrame methods

df[['W','Z']]

df['new'] = df['W'] + df['Y']
df['new']

df.drop('new',axis=1)

df

df.drop('new', axis=1,inplace=True)
df

df.drop('E')

df.shape

df[['X','Z']]

# Selecting rows

df

df.loc['A']

df.iloc[2]

df.loc['B','Y']

df.loc[['A','B'],['W','Y']]

# ## DataFrame 2

df = pd.DataFrame(randn(5,4),['A','B','C','D','E'],['W','X','Y','Z'])
df

booldf = df>0
booldf

df[booldf]

df[df>0]

df['W']>0

df[df['W']>0]

resultdf = df[df['Z']>0]
resultdf['X']

df[df['Z']>0][['X','Y']]

boolser = df['W'] >0

True and True

[True,True] and [True, False]

# use & and | (not Python's `and`/`or`) to combine boolean Series element-wise
df[(df['W']>0) | (df['Y']>1)]

df.reset_index(inplace=True)
df

df.set_index('index',inplace=True)
df

newind = 'CA NY WY OR CO'.split()
df['State'] = newind
df

df.set_index('State')

# ## DataFrame 3: Multilevel index

outside = ['G1','G1','G1','G2','G2','G2']
inside = [1,2,3,1,2,3]
hier_index = list(zip(outside,inside))
hier_index = pd.MultiIndex.from_tuples(hier_index)
hier_index

df = pd.DataFrame(randn(6,2),hier_index,['A','B'])
df

df.loc['G1'].loc[1]

df.loc['G2'].loc[2]

# cross section
df.xs('G1')

df.index.names = ['Groups','Num']

df.xs(1,level='Num')

# ## Missing Data

d = {'A':[1,2,np.nan],
     'B':[5,np.nan,np.nan],
     'C':[1,2,3]}

df = pd.DataFrame(d)
df

df.dropna()

df.dropna(axis=1)

df.dropna(thresh=2)

df.fillna(value='FILL')

df['A'].fillna(df['A'].mean())

# ## GroupBy
# GroupBy allows you to group together rows based off of a column and perform an aggregate function on them.

data = {'Company':['GOOG','GOOG','MSFT','MSFT','FB','FB'],
        'Person':['Sam','Charlie','Amy','Vanesa','Carl','Sarah'],
        'Sales':[200,120,340,124,243,350]}

df = pd.DataFrame(data)
df

bycomp = df.groupby('Company')

bycomp.mean()

bycomp.sum()

bycomp.std()

df.groupby('Company').count()

df.groupby('Company').max()

df.groupby('Company').describe()

# ## Merging, Joining, and Concatenating

df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                    'B': ['B0', 'B1', 'B2', 'B3'],
                    'C': ['C0', 'C1', 'C2', 'C3'],
                    'D': ['D0', 'D1', 'D2', 'D3']},
                   index=[0, 1, 2, 3])

df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
                    'B': ['B4', 'B5', 'B6', 'B7'],
                    'C': ['C4', 'C5', 'C6', 'C7'],
                    'D': ['D4', 'D5', 'D6', 'D7']},
                   index=[4, 5, 6, 7])

df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
                    'B': ['B8', 'B9', 'B10', 'B11'],
                    'C': ['C8', 'C9', 'C10', 'C11'],
                    'D': ['D8', 'D9', 'D10', 'D11']},
                   index=[8, 9, 10, 11])

df1

df2

df3

# ### Concatenation
# Concatenation basically glues DataFrames together. Keep in mind that dimensions should match along the axis you are concatenating on. You can use pd.concat and pass in a list of DataFrames to concatenate together.
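# To make the `axis` argument concrete, here is a small self-contained sketch; the toy frames `a` and `b` are made up for illustration only.

```python
import pandas as pd

a = pd.DataFrame({"X": [1, 2]}, index=[0, 1])
b = pd.DataFrame({"X": [3, 4]}, index=[2, 3])

# axis=0 (the default) stacks rows; column labels are aligned
stacked = pd.concat([a, b])        # shape (4, 1)

# axis=1 places the frames side by side, aligning on the index;
# index labels missing from one frame are filled with NaN
side = pd.concat([a, b], axis=1)   # shape (4, 2), half NaN
```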
pd.concat([df1,df2,df3])

pd.concat([df1,df2,df3],axis=1)

# ### Example DataFrames

# +
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
                     'A': ['A0', 'A1', 'A2', 'A3'],
                     'B': ['B0', 'B1', 'B2', 'B3']})

right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
                      'C': ['C0', 'C1', 'C2', 'C3'],
                      'D': ['D0', 'D1', 'D2', 'D3']})
# -

left

right

# ### Merging
# The merge function allows you to merge DataFrames together using a similar logic to merging SQL tables together.

pd.merge(left,right,how='inner',on='key')

# +
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
                     'key2': ['K0', 'K1', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'],
                     'B': ['B0', 'B1', 'B2', 'B3']})

right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
                      'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'],
                      'D': ['D0', 'D1', 'D2', 'D3']})
# -

pd.merge(left,right,on=['key1','key2'])

pd.merge(left,right,how='outer',on=['key1','key2'])

# ## Joining
# Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame.
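# One way to see what `join` does: by default it is a left join on the index, equivalent to an index-on-index `merge`. A minimal sketch with made-up frames:

```python
import pandas as pd

left = pd.DataFrame({"A": [1, 2]}, index=["K0", "K1"])
right = pd.DataFrame({"B": [3, 4]}, index=["K0", "K2"])

# join aligns on the index and keeps the left frame's rows by default ...
joined = left.join(right)

# ... which matches an index-on-index merge with how="left"
merged = pd.merge(left, right, left_index=True, right_index=True, how="left")
```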
# +
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
                     'B': ['B0', 'B1', 'B2']},
                    index=['K0', 'K1', 'K2'])

right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
                      'D': ['D0', 'D2', 'D3']},
                     index=['K0', 'K2', 'K3'])
# -

left.join(right)

left.join(right,how='outer')

# ## Operations

df = pd.DataFrame({'col1':[1,2,3,4],
                   'col2':[444,555,666,444],
                   'col3':['abc','def','ghi','xyz']})
df

df['col2'].nunique()

df['col2'].value_counts()

df[(df['col1']>2) & (df['col2']==444)]

df['col1']>2

def times2(x):
    return x*2

df['col1'].apply(times2)

df['col3'].apply(len)

df['col2'].apply(lambda x: x*2)

df.drop('col3',axis=1)

df.columns

df.sort_values(by='col2')

df.isnull().sum()

# +
data = {'A':['foo','foo','foo','bar','bar','bar'],
        'B':['one','one','two','two','one','one'],
        'C':['x','y','x','y','x','y'],
        'D':[1,3,2,5,4,1]}

df = pd.DataFrame(data)
df
# -

df.pivot_table(values='D',index=['A','B'],columns=['C'])

# ## Data Input and Output

pwd

df = pd.read_csv('example.csv')
df

df.to_csv('My_output',index=False)

pd.read_csv('My_output')

pd.read_excel('Excel_Sample.xlsx',sheet_name='Sheet1')

df.to_excel('Excel_Sample2.xlsx',sheet_name='NewSheet')

data = pd.read_html('http://www.fdic.gov/bank/individual/failed/banklist.html')

data

data[0]

from sqlalchemy import create_engine

engine = create_engine('sqlite:///:memory:')

df.to_sql('my_table',engine)

sqldf = pd.read_sql('my_table',con=engine)
sqldf

# # SF Salaries Exercise

sal = pd.read_csv('Pandas Exercises/'+'Salaries.csv')

sal.head()

sal.info()

sal['BasePay'].mean()

sal['OvertimePay'].max()

sal[sal['EmployeeName'] == '<NAME>']['JobTitle']

# row of the highest-paid employee (the .max() belongs inside the boolean mask)
sal[sal['TotalPayBenefits'] == sal['TotalPayBenefits'].max()]

sal.groupby('Year').mean()['BasePay']

sal['JobTitle'].nunique()

sal['JobTitle'].value_counts().head()

sum(sal['JobTitle'].value_counts() == 1)

sum(sal[sal['Year']==2013]['JobTitle'].value_counts() == 1)

def chief_string(title):
    if 'chief' in title.lower():
        return True
    else:
        return False

sal['JobTitle'].apply(lambda x: chief_string(x)).sum()

sal['title_len'] = sal['JobTitle'].apply(len)

sal[['title_len','TotalPayBenefits']].corr()

# # Ecommerce Purchases

ecom = pd.read_csv('Pandas Exercises/'+'Ecommerce Purchases')

ecom.head()

ecom.info()

ecom.shape

ecom['Purchase Price'].mean()

ecom[ecom['Language'] == 'en'].count()

ecom[ecom['Job'] == 'Lawyer'].count()

ecom['AM or PM'].value_counts()

ecom['Job'].value_counts().head()

ecom[ecom['Lot'] == "90 WT"]['Purchase Price']

ecom[ecom['Credit Card'] == 4926535242672853]['Email']

ecom[(ecom['CC Provider'] == 'American Express') & (ecom['Purchase Price'] > 95.0)].count()

def check(cc):
    if '25' in cc.split('/'):
        return True
    else:
        return False

sum(ecom['CC Exp Date'].apply(lambda x: check(x)))

ecom['Email'].apply(lambda x: x.split('@')[1]).value_counts().head()
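# The last exercise uses `apply` with a Python lambda; pandas' vectorized `.str` accessor expresses the same domain extraction without one. The `ecom` data is not available here, so a made-up Series stands in:

```python
import pandas as pd

emails = pd.Series(["alice@example.com", "bob@demo.org", "carol@example.com"])

# apply with a Python-level lambda, as in the exercise
domains_apply = emails.apply(lambda x: x.split("@")[1])

# the same extraction with the vectorized .str accessor
domains_str = emails.str.split("@").str[1]

counts = domains_str.value_counts()
```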
Machine_Learning_Bootcamp/Python-for-Data-Analysis/Pandas/pandas.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # `census_api` Demo Notebook # # This notebook is intended to demonstrate the features of the `census_api` package. import pandas as pd from datascience import * # ## Before you Start # # Before you begin working with the US Census API, you need to obtain an API key. This can be done [here](https://api.census.gov/data/key_signup.html). # # Once you have your API key, paste it in the cell below (where it says `YOUR_API_KEY_GOES_HERE`) and then run the cell. This creates a `config.py` file which Git will ignore (since this file pattern is listed in the `.gitignore` file) to store your API key safely. # + my_api_key = "YOUR_API_KEY_GOES_HERE" with open("config.py", "w+") as f: f.write("""api_key = \"{}\"""".format(my_api_key)) # - # The next code cell will import this file so that your API key can be used in this notebook. # ## Initializing the Query Class # # This package utilizes the `CensusQuery` class to run queries through. To instantiate the class, you need your API key and the dataset you want to query. **Currently, this package only supports querying the ACS1, ACS5, and SF1 datasets**. When instantiating the class, you can also optionally provide a year to query data for and an [output type](#Output-Types). # + import config import census_api # create the class instance c = census_api.CensusQuery(config.api_key, "acs5") # - # ## Making Queries # # To query the API, use the `CensusQuery.query` method. The parameters are listed below. # | Parameter | Type | Description | # |-----|-----|-----| # | `variables` | `list` | List of variables to extract from the API. For variable identifiers, find the dataset you're querying on [this page](https://api.census.gov/data.html) and click on `variables` in its row. 
| # | `state` | `str` | The 2-letter abbreviation of the state you want data for | # | `county` | `str` | Optional. The name of the county you want data for. Defaults to all. | # | `tract` | `str` | Optional. The FIPS code of the tract you want data for. Defaults to all. | # | `year` | `int` | Optional. The year you want data for. If provided, the `year` provided to the instance of `CensusQuery` is ignored. [more info below](#Years) | # An example of a query is given below. output_2014 = c.query(["NAME", "B00001_001E"], "CA", county="Alameda", year=2014) output_2014.head() # ## Years # # There are two ways to define the year that you want to query: in the class instance, or in the `CensusQuery.query` call. If you define it in the class instances, e.g. with # # ```python # c_2015 = census_api.CensusQuery(config.api_key, "acs5", year=2015) # ``` # # then you don't need to provide it when you call `CensusQuery.query`. _However_, if you do provide it to `CensusQuery.query`, the year for the class instance will be ignored. So, if I were to call # # ```python # c_2015.query(["NAME"], "CA", year=2014) # ``` # # I would get 2014 data, not 2015 data. # # If you don't define it in the class instance, you _must_ define it in the `CensusQuery.query` call, or else your output will be empty. # ## Output Types # # The `CensusQuery` class can output your data in one of two ways: as a `pandas` DataFrame or as a `datascience` Table. The class defaults to `pandas`, but setting the `out` argument when instantiating the class can change this setting. The two possible values of `out` are `"pd"` and `"ds"`, defaulting to `"pd"`. ds_output = census_api.CensusQuery(config.api_key, "acs5", out="ds") # Now, if I were to make the same query as above, the output would be of class `datascience.tables.Table`. 
# + # original instance print(type(c.query(["NAME", "B00001_001E"], "CA", county="Alameda", year=2014))) # datascience instance print(type(ds_output.query(["NAME", "B00001_001E"], "CA", county="Alameda", year=2014))) # - # ## More Information # # For more information about the Census API, visit [https://www.census.gov/developers/](https://www.census.gov/developers/). If you have any issues with `census_api`, please open an issue on our [Github repo](https://github.com/chrispyles/census_api). #
demo/census_api-demo.ipynb