# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="ISubpr_SSsiM" # ##### Copyright 2020 The TensorFlow Authors. # # + cellView="form" id="3jTMb1dySr3V" #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] id="6DWfyNThSziV" # # Introduction to modules, layers, and models # # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://www.tensorflow.org/guide/intro_to_modules"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> # </td> # <td> # <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/intro_to_modules.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/intro_to_modules.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> # </td> # <td> # <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/intro_to_modules.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> # </td> # </table> # + [markdown] id="v0DdlfacAdTZ" # To do machine learning in TensorFlow, you are likely to need to define, save, and restore a 
model. # # A model is, abstractly: # # * A function that computes something on tensors (a **forward pass**) # * Some variables that can be updated in response to training # # In this guide, you will go below the surface of Keras to see how TensorFlow models are defined. This looks at how TensorFlow collects variables and models, as well as how they are saved and restored. # # Note: If you instead want to immediately get started with Keras, please see [the collection of Keras guides](./keras/). # # + [markdown] id="VSa6ayJmfZxZ" # ## Setup # + id="goZwOXp_xyQj" import tensorflow as tf from datetime import datetime # %load_ext tensorboard # + [markdown] id="yt5HEbsYAbw1" # ## Defining models and layers in TensorFlow # # Most models are made of layers. Layers are functions with a known mathematical structure that can be reused and have trainable variables. In TensorFlow, most high-level implementations of layers and models, such as Keras or [Sonnet](https://github.com/deepmind/sonnet), are built on the same foundational class: `tf.Module`. # # Here's an example of a very simple `tf.Module` that operates on a scalar tensor: # # + id="alhYPVEtAiSy" class SimpleModule(tf.Module): def __init__(self, name=None): super().__init__(name=name) self.a_variable = tf.Variable(5.0, name="train_me") self.non_trainable_variable = tf.Variable(5.0, trainable=False, name="do_not_train_me") def __call__(self, x): return self.a_variable * x + self.non_trainable_variable simple_module = SimpleModule(name="simple") simple_module(tf.constant(5.0)) # + [markdown] id="JwMc_zu5Ant8" # Modules and, by extension, layers are deep-learning terminology for "objects": they have internal state, and methods that use that state. # # There is nothing special about `__call__` except to act like a [Python callable](https://stackoverflow.com/questions/111234/what-is-a-callable); you can invoke your models with whatever functions you wish. 
# # You can turn variable trainability on and off for any reason, including freezing layers and variables during fine-tuning. # # Note: `tf.Module` is the base class for both `tf.keras.layers.Layer` and `tf.keras.Model`, so everything you come across here also applies in Keras. For historical compatibility reasons, Keras layers do not collect variables from modules, so your models should use only modules or only Keras layers. However, the methods shown below for inspecting variables are the same in either case. # # By subclassing `tf.Module`, any `tf.Variable` or `tf.Module` instances assigned to this object's properties are automatically collected. This allows you to save and load variables, and also create collections of `tf.Module`s. # + id="CyzYy4A_CbVf" # All trainable variables print("trainable variables:", simple_module.trainable_variables) # Every variable print("all variables:", simple_module.variables) # + [markdown] id="nuSFrRUNCaaW" # Here is an example of a two-layer model built out of modules. # # First a dense (linear) layer: # + id="Efb2p2bzAn-V" class Dense(tf.Module): def __init__(self, in_features, out_features, name=None): super().__init__(name=name) self.w = tf.Variable( tf.random.normal([in_features, out_features]), name='w') self.b = tf.Variable(tf.zeros([out_features]), name='b') def __call__(self, x): y = tf.matmul(x, self.w) + self.b return tf.nn.relu(y) # + [markdown] id="bAhMuC-UpnhX" # And then the complete model, which makes two layer instances and applies them: # + id="QQ7qQf-DFw74" class SequentialModule(tf.Module): def __init__(self, name=None): super().__init__(name=name) self.dense_1 = Dense(in_features=3, out_features=3) self.dense_2 = Dense(in_features=3, out_features=2) def __call__(self, x): x = self.dense_1(x) return self.dense_2(x) # You have made a model! 
my_model = SequentialModule(name="the_model") # Call it, with random results print("Model results:", my_model(tf.constant([[2.0, 2.0, 2.0]]))) # + [markdown] id="d1oUzasJHHXf" # `tf.Module` instances will automatically collect, recursively, any `tf.Variable` or `tf.Module` instances assigned to them. This allows you to manage collections of `tf.Module`s with a single model instance, and save and load whole models. # + id="JLFA5_PEGb6C" print("Submodules:", my_model.submodules) # + id="6lzoB8pcRN12" for var in my_model.variables: print(var, "\n") # + [markdown] id="hoaxL3zzm0vK" # ### Waiting to create variables # # You may have noticed here that you have to define both input and output sizes to the layer. This is so the `w` variable has a known shape and can be allocated. # # By deferring variable creation to the first time the module is called with a specific input shape, you do not need to specify the input size up front. # + id="XsGCLFXlnPum" class FlexibleDenseModule(tf.Module): # Note: No need for `in_features` def __init__(self, out_features, name=None): super().__init__(name=name) self.is_built = False self.out_features = out_features def __call__(self, x): # Create variables on first call. 
if not self.is_built: self.w = tf.Variable( tf.random.normal([x.shape[-1], self.out_features]), name='w') self.b = tf.Variable(tf.zeros([self.out_features]), name='b') self.is_built = True y = tf.matmul(x, self.w) + self.b return tf.nn.relu(y) # + id="8bjOWax9LOkP" # Used in a module class MySequentialModule(tf.Module): def __init__(self, name=None): super().__init__(name=name) self.dense_1 = FlexibleDenseModule(out_features=3) self.dense_2 = FlexibleDenseModule(out_features=2) def __call__(self, x): x = self.dense_1(x) return self.dense_2(x) my_model = MySequentialModule(name="the_model") print("Model results:", my_model(tf.constant([[2.0, 2.0, 2.0]]))) # + [markdown] id="49JfbhVrpOLH" # This flexibility is why TensorFlow layers often only need to specify the shape of their outputs, such as in `tf.keras.layers.Dense`, rather than both the input and output size. # + [markdown] id="JOLVVBT8J_dl" # ## Saving weights # # You can save a `tf.Module` as both a [checkpoint](./checkpoint.ipynb) and a [SavedModel](./saved_model.ipynb). # # Checkpoints are just the weights (that is, the values of the set of variables inside the module and its submodules): # + id="pHXKRDk7OLHA" chkp_path = "my_checkpoint" checkpoint = tf.train.Checkpoint(model=my_model) checkpoint.write(chkp_path) # + [markdown] id="WXOPMBR4T4ZR" # Checkpoints consist of two kinds of files: the data itself and an index file for metadata. The index file keeps track of what is actually saved and the numbering of checkpoints, while the checkpoint data contains the variable values and their attribute lookup paths. # + id="jBV3fprlTWqJ" # !ls my_checkpoint* # + [markdown] id="CowCuBTvXgUu" # You can look inside a checkpoint to be sure the whole collection of variables is saved, sorted by the Python object that contains them. 
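The "variable values plus their attribute lookup paths" structure can be sketched without TensorFlow. The toy classes below are illustrative stand-ins (not TensorFlow APIs): they mirror the idea of recursively collecting variables by attribute path and restoring them into a fresh object, which is conceptually what `tf.train.Checkpoint` does.

```python
# Toy sketch of checkpoint-style save/restore: variables are keyed by their
# attribute lookup paths. ToyModule, ToyDense, and ToyModel are illustrative
# names, not TensorFlow classes, and this is not the real checkpoint format.

class ToyModule:
    """Minimal stand-in for tf.Module: submodules and variables are attributes."""

    def variables_by_path(self, prefix="model"):
        out = {}
        for name, value in vars(self).items():
            path = f"{prefix}/{name}"
            if isinstance(value, ToyModule):
                out.update(value.variables_by_path(path))   # recurse into submodules
            elif isinstance(value, (int, float, list)):
                out[path] = value                           # record a "variable"
        return out

    def restore(self, saved, prefix="model"):
        for name, value in list(vars(self).items()):
            path = f"{prefix}/{name}"
            if isinstance(value, ToyModule):
                value.restore(saved, path)
            elif path in saved:
                setattr(self, name, saved[path])

class ToyDense(ToyModule):
    def __init__(self, w, b):
        self.w = w
        self.b = b

class ToyModel(ToyModule):
    def __init__(self):
        self.dense_1 = ToyDense(w=[1.0, 2.0], b=0.0)
        self.dense_2 = ToyDense(w=[3.0, 4.0], b=0.5)

model = ToyModel()
ckpt = model.variables_by_path()   # e.g. {'model/dense_1/w': [1.0, 2.0], ...}

fresh = ToyModel()
fresh.dense_1.w = [0.0, 0.0]       # perturb, then restore from the saved values
fresh.restore(ckpt)
```

The point is only the keying scheme: restoring overwrites the values in an existing Python object by matching paths, just as the guide describes.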
# + id="o2QAdfpvS8tB" tf.train.list_variables(chkp_path) # + [markdown] id="4eGaNiQWcK4j" # During distributed (multi-machine) training, checkpoints can be sharded, which is why they are numbered (e.g., '00000-of-00001'). In this case, though, there is only one shard. # # When you load models back in, you overwrite the values in your Python object. # + id="UV8rdDzcwVVg" new_model = MySequentialModule() new_checkpoint = tf.train.Checkpoint(model=new_model) new_checkpoint.restore("my_checkpoint") # Should be the same result as above new_model(tf.constant([[2.0, 2.0, 2.0]])) # + [markdown] id="BnPwDRwamdfq" # Note: As checkpoints are at the heart of long training workflows, `tf.train.CheckpointManager` is a helper class that makes checkpoint management much easier. Refer to the [Training checkpoints guide](./checkpoint.ipynb) for more details. # + [markdown] id="pSZebVuWxDXu" # ## Saving functions # # TensorFlow can run models without the original Python objects, as demonstrated by [TensorFlow Serving](https://tensorflow.org/tfx) and [TensorFlow Lite](https://tensorflow.org/lite), even when you download a trained model from [TensorFlow Hub](https://tensorflow.org/hub). # # TensorFlow needs to know how to do the computations described in Python, but **without the original code**. To do this, you can make a **graph**, which is described in the [Introduction to graphs and functions guide](./intro_to_graphs.ipynb). # # This graph contains operations, or *ops*, that implement the function. # # You can define a graph in the model above by adding the `@tf.function` decorator to indicate that this code should run as a graph. # + id="WQTvkapUh7lk" class MySequentialModule(tf.Module): def __init__(self, name=None): super().__init__(name=name) self.dense_1 = Dense(in_features=3, out_features=3) self.dense_2 = Dense(in_features=3, out_features=2) @tf.function def __call__(self, x): x = self.dense_1(x) return self.dense_2(x) # You have made a model with a graph! 
my_model = MySequentialModule(name="the_model") # + [markdown] id="hW66YXBziLo9" # The module you have made works exactly the same as before. Each unique signature passed into the function creates a separate graph. Check the [Introduction to graphs and functions guide](./intro_to_graphs.ipynb) for details. # + id="H5zUfti3iR52" print(my_model([[2.0, 2.0, 2.0]])) print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]])) # + [markdown] id="lbGlU1kgyDo7" # You can visualize the graph by tracing it within a TensorBoard summary. # + id="zmy-T67zhp-S" # Set up logging. stamp = datetime.now().strftime("%Y%m%d-%H%M%S") logdir = "logs/func/%s" % stamp writer = tf.summary.create_file_writer(logdir) # Create a new model to get a fresh trace # Otherwise the summary will not see the graph. new_model = MySequentialModule() # Bracket the function call with # tf.summary.trace_on() and tf.summary.trace_export(). tf.summary.trace_on(graph=True) tf.profiler.experimental.start(logdir) # Call only one tf.function when tracing. z = new_model(tf.constant([[2.0, 2.0, 2.0]])) print(z) with writer.as_default(): tf.summary.trace_export( name="my_func_trace", step=0, profiler_outdir=logdir) # + [markdown] id="gz4lwNZ9hR79" # Launch TensorBoard to view the resulting trace: # + id="V4MXDbgBnkJu" #docs_infra: no_execute # %tensorboard --logdir logs/func # + [markdown] id="Gjattu0AhYUl" # ![A screenshot of the graph in TensorBoard](images/tensorboard_graph.png) # + [markdown] id="SQu3TVZecmL7" # ### Creating a `SavedModel` # # The recommended way of sharing completely trained models is to use `SavedModel`. `SavedModel` contains both a collection of functions and a collection of weights. 
# # You can save the model you have just trained as follows: # + id="Awv_Tw__WK7a" tf.saved_model.save(my_model, "the_saved_model") # + id="SXv3mEKsefGj" # Inspect the SavedModel in the directory # !ls -l the_saved_model # + id="vQQ3hEvHYdoR" # The variables/ directory contains a checkpoint of the variables # !ls -l the_saved_model/variables # + [markdown] id="xBqPop7ZesBU" # The `saved_model.pb` file is a [protocol buffer](https://developers.google.com/protocol-buffers) describing the functional `tf.Graph`. # # Models and layers can be loaded from this representation without actually making an instance of the class that created it. This is desired in situations where you do not have (or want) a Python interpreter, such as serving at scale or on an edge device, or in situations where the original Python code is not available or practical to use. # # You can load the model as a new object: # + id="zRFcA5wIefv4" new_model = tf.saved_model.load("the_saved_model") # + [markdown] id="-9EF3mT7i3qN" # `new_model`, created from loading a saved model, is an internal TensorFlow user object without any of the class knowledge. It is not of type `MySequentialModule`. # + id="EC_eQj7yi54G" isinstance(new_model, MySequentialModule) # + [markdown] id="-OrOX1zxiyhR" # This new model works on the already-defined input signatures. You can't add more signatures to a model restored like this. # + id="_23BYYBWfKnc" print(new_model([[2.0, 2.0, 2.0]])) print(new_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]])) # + [markdown] id="qSFhoMtTjSR6" # Thus, using `SavedModel`, you are able to save TensorFlow weights and graphs using `tf.Module`, and then load them again. # + [markdown] id="Rb9IdN7hlUZK" # ## Keras models and layers # # Note that up until this point, there is no mention of Keras. You can build your own high-level API on top of `tf.Module`, and people have. # # In this section, you will examine how Keras uses `tf.Module`. 
A complete user guide to Keras models can be found in the [Keras guide](keras/sequential_model.ipynb). # # + [markdown] id="uigsVGPreE-D" # ### Keras layers # # `tf.keras.layers.Layer` is the base class of all Keras layers, and it inherits from `tf.Module`. # # You can convert a module into a Keras layer just by swapping out the parent and then changing `__call__` to `call`: # + id="88YOGquhnQRd" class MyDense(tf.keras.layers.Layer): # Adding **kwargs to support base Keras layer arguments def __init__(self, in_features, out_features, **kwargs): super().__init__(**kwargs) # This will soon move to the build step; see below self.w = tf.Variable( tf.random.normal([in_features, out_features]), name='w') self.b = tf.Variable(tf.zeros([out_features]), name='b') def call(self, x): y = tf.matmul(x, self.w) + self.b return tf.nn.relu(y) simple_layer = MyDense(name="simple", in_features=3, out_features=3) # + [markdown] id="nYGmAsPrws--" # Keras layers have their own `__call__` that does some bookkeeping described in the next section and then calls `call()`. You should notice no change in functionality. # + id="nIqE8wOznYKG" simple_layer([[2.0, 2.0, 2.0]]) # + [markdown] id="tmN5vb1K18U1" # ### The `build` step # # As noted, it's convenient in many cases to wait to create variables until you are sure of the input shape. # # Keras layers come with an extra lifecycle step that allows you more flexibility in how you define your layers. This is defined in the `build` function. # # `build` is called exactly once, and it is called with the shape of the input. It's usually used to create variables (weights). 
# # You can rewrite `MyDense` layer above to be flexible to the size of its inputs: # # + id="4YTfrlgdsURp" class FlexibleDense(tf.keras.layers.Layer): # Note the added `**kwargs`, as Keras supports many arguments def __init__(self, out_features, **kwargs): super().__init__(**kwargs) self.out_features = out_features def build(self, input_shape): # Create the state of the layer (weights) self.w = tf.Variable( tf.random.normal([input_shape[-1], self.out_features]), name='w') self.b = tf.Variable(tf.zeros([self.out_features]), name='b') def call(self, inputs): # Defines the computation from inputs to outputs return tf.matmul(inputs, self.w) + self.b # Create the instance of the layer flexible_dense = FlexibleDense(out_features=3) # + [markdown] id="Koc_uSqt2PRh" # At this point, the model has not been built, so there are no variables: # + id="DgyTyUD32Ln4" flexible_dense.variables # + [markdown] id="-KdamIVl2W8Y" # Calling the function allocates appropriately-sized variables: # + id="IkLyEx7uAoTK" # Call it, with predictably random results print("Model results:", flexible_dense(tf.constant([[2.0, 2.0, 2.0], [3.0, 3.0, 3.0]]))) # + id="Swofpkrd2YDd" flexible_dense.variables # + [markdown] id="7PuNUnf0OIpF" # Since `build` is only called once, inputs will be rejected if the input shape is not compatible with the layer's variables: # + id="caYWDrHSAy_j" try: print("Model results:", flexible_dense(tf.constant([[2.0, 2.0, 2.0, 2.0]]))) except tf.errors.InvalidArgumentError as e: print("Failed:", e) # + [markdown] id="YnporXiudF1I" # Keras layers have a lot more extra features including: # # * Optional losses # * Support for metrics # * Built-in support for an optional `training` argument to differentiate between training and inference use # * `get_config` and `from_config` methods that allow you to accurately store configurations to allow model cloning in Python # # Read about them in the [full guide](./keras/custom_layers_and_models.ipynb) to custom layers and models. 
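The `get_config`/`from_config` pair listed above is easy to illustrate outside Keras. The class below is a hypothetical stand-in (not a real Keras layer) showing the round-trip contract: a plain dict of constructor arguments goes out, and an equivalent object comes back.

```python
# Sketch of the get_config/from_config pattern Keras uses for model cloning.
# ConfigurableDense is illustrative only; a real Keras layer would also call
# super().get_config() and merge the base config.

class ConfigurableDense:
    def __init__(self, out_features, use_bias=True):
        self.out_features = out_features
        self.use_bias = use_bias

    def get_config(self):
        # Everything needed to re-create this layer, as plain data.
        return {"out_features": self.out_features, "use_bias": self.use_bias}

    @classmethod
    def from_config(cls, config):
        # Rebuild an equivalent (freshly initialized) layer from the dict.
        return cls(**config)

layer = ConfigurableDense(out_features=3, use_bias=False)
clone = ConfigurableDense.from_config(layer.get_config())
```

Because the config is plain data, it can be serialized to JSON, which is what makes architecture-only cloning possible.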
# + [markdown] id="L2kds2IHw2KD" # ### Keras models # # You can define your model as nested Keras layers. # # However, Keras also provides a full-featured model class called `tf.keras.Model`. It inherits from `tf.keras.layers.Layer`, so a Keras model can be used, nested, and saved in the same way as Keras layers. Keras models come with extra functionality that makes them easy to train, evaluate, load, save, and even train on multiple machines. # # You can define the `SequentialModule` from above with nearly identical code, again converting `__call__` to `call()` and changing the parent: # + id="Hqjo1DiyrHrn" class MySequentialModel(tf.keras.Model): def __init__(self, name=None, **kwargs): super().__init__(**kwargs) self.dense_1 = FlexibleDense(out_features=3) self.dense_2 = FlexibleDense(out_features=2) def call(self, x): x = self.dense_1(x) return self.dense_2(x) # You have made a Keras model! my_sequential_model = MySequentialModel(name="the_model") # Call it on a tensor, with random results print("Model results:", my_sequential_model(tf.constant([[2.0, 2.0, 2.0]]))) # + [markdown] id="8i-CR_h2xw3z" # All the same features are available, including tracking variables and submodules. # # Note: To emphasize the note above, a raw `tf.Module` nested inside a Keras layer or model will not get its variables collected for training or saving. Instead, nest Keras layers inside of Keras layers. # + id="hdLQFNdMsOz1" my_sequential_model.variables # + id="JjVAMrAJsQ7G" my_sequential_model.submodules # + [markdown] id="FhP8EItC4oac" # Overriding `tf.keras.Model` is a very Pythonic approach to building TensorFlow models. If you are migrating models from other frameworks, this can be very straightforward. # # If you are constructing models that are simple assemblages of existing layers and inputs, you can save time and space by using the [functional API](./keras/functional.ipynb), which comes with additional features around model reconstruction and architecture. 
# # Here is the same model with the functional API: # + id="jJiZZiJ0fyqQ" inputs = tf.keras.Input(shape=[3,]) x = FlexibleDense(3)(inputs) x = FlexibleDense(2)(x) my_functional_model = tf.keras.Model(inputs=inputs, outputs=x) my_functional_model.summary() # + id="kg-xAZw5gaG6" my_functional_model(tf.constant([[2.0, 2.0, 2.0]])) # + [markdown] id="s_BK9XH5q9cq" # The major difference here is that the input shape is specified up front as part of the functional construction process. The `input_shape` argument in this case does not have to be completely specified; you can leave some dimensions as `None`. # # Note: You do not need to specify `input_shape` or an `InputLayer` in a subclassed model; these arguments and layers will be ignored. # + [markdown] id="qI9aXLnaHEFF" # ## Saving Keras models # # Keras models can be checkpointed, and that will look the same as `tf.Module`. # # Keras models can also be saved with `tf.saved_model.save()`, as they are modules. However, Keras models have convenience methods and other functionality: # + id="SAz-KVZlzAJu" my_sequential_model.save("exname_of_file") # + [markdown] id="C2urAeR-omns" # Just as easily, they can be loaded back in: # + id="Wj5DW-LCopry" reconstructed_model = tf.keras.models.load_model("exname_of_file") # + [markdown] id="EA7P_MNvpviZ" # Keras `SavedModels` also save metric, loss, and optimizer states. # # This reconstructed model can be used and will produce the same result when called on the same data: # + id="P_wGfQo5pe6T" reconstructed_model(tf.constant([[2.0, 2.0, 2.0]])) # + [markdown] id="xKyjlkceqjwD" # There is more to know about saving and serialization of Keras models, including providing configuration methods for custom layers for feature support. Check out the [guide to saving and serialization](keras/save_and_serialize). # + [markdown] id="kcdMMPYv7Krz" # # What's next # # If you want to know more details about Keras, you can follow the existing Keras guides [here](./keras/). 
# # Another example of a high-level API built on `tf.Module` is Sonnet from DeepMind, which is covered on [their site](https://github.com/deepmind/sonnet).
site/en/guide/intro_to_modules.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # %matplotlib inline # # ============================================== # Real-time feedback for decoding :: Server Side # ============================================== # # This example demonstrates how to set up a real-time feedback # mechanism using StimServer and StimClient. # # The idea here is to display future stimuli for the class which # is predicted less accurately. This allows on-demand adaptation # of the stimuli depending on the needs of the classifier. # # To run this example, open ipython in two separate terminals. # In the first, run rt_feedback_server.py and then wait for the # message # # RtServer: Start # # Once that appears, run rt_feedback_client.py in the other terminal # and the feedback script should start. # # All brain responses are simulated from a fiff file to make it easy # to test. However, it should be possible to adapt this script # for a real experiment. 
# # # # + # Author: <NAME> <<EMAIL>> # # License: BSD (3-clause) import time import numpy as np import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.svm import SVC from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix import mne from mne.datasets import sample from mne.realtime import StimServer from mne.realtime import MockRtClient from mne.decoding import Vectorizer, FilterEstimator print(__doc__) # Load fiff file to simulate data data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' raw = mne.io.read_raw_fif(raw_fname, preload=True) # Instantiating stimulation server # The with statement is necessary to ensure a clean exit with StimServer(port=4218) as stim_server: # The channels to be used while decoding picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=True, exclude=raw.info['bads']) rt_client = MockRtClient(raw) # Constructing the pipeline for classification filt = FilterEstimator(raw.info, 1, 40) scaler = preprocessing.StandardScaler() vectorizer = Vectorizer() clf = SVC(C=1, kernel='linear') concat_classifier = Pipeline([('filter', filt), ('vector', vectorizer), ('scaler', scaler), ('svm', clf)]) stim_server.start(verbose=True) # Just some initially decided events to be simulated # The rest will be decided on the fly ev_list = [4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4] score_c1, score_c2, score_x = [], [], [] for ii in range(50): # Tell the stim_client about the next stimuli stim_server.add_trigger(ev_list[ii]) # Collecting data if ii == 0: X = rt_client.get_event_data(event_id=ev_list[ii], tmin=-0.2, tmax=0.5, picks=picks, stim_channel='STI 014')[None, ...] y = ev_list[ii] else: X_temp = rt_client.get_event_data(event_id=ev_list[ii], tmin=-0.2, tmax=0.5, picks=picks, stim_channel='STI 014') X_temp = X_temp[np.newaxis, ...] 
X = np.concatenate((X, X_temp), axis=0) time.sleep(1) # simulating the isi y = np.append(y, ev_list[ii]) # Start decoding after collecting sufficient data if ii >= 10: # Now start doing rtfeedback X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7) y_pred = concat_classifier.fit(X_train, y_train).predict(X_test) cm = confusion_matrix(y_test, y_pred) score_c1.append(float(cm[0, 0]) / cm.sum(axis=1)[0] * 100) score_c2.append(float(cm[1, 1]) / cm.sum(axis=1)[1] * 100) # do something if one class is decoded better than the other if score_c1[-1] < score_c2[-1]: print("We decoded class RV better than class LV") ev_list.append(3) # adding more LV to future simulated data else: print("We decoded class LV better than class RV") ev_list.append(4) # adding more RV to future simulated data # Clear the figure plt.clf() # The x-axis for the plot score_x.append(ii) # Now plot the accuracy plt.plot(score_x[-5:], score_c1[-5:]) plt.plot(score_x[-5:], score_c2[-5:]) plt.xlabel('Trials') plt.ylabel('Classification score (% correct)') plt.title('Real-time feedback') plt.ylim([0, 100]) plt.xticks(score_x[-5:]) plt.legend(('LV', 'RV'), loc='upper left') plt.show()
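The per-class scores and the adaptive rule in the loop above reduce to a few NumPy lines: diagonal of the confusion matrix over its row sums, then queue more trials of the worse-decoded class. Here is a standalone sketch with a made-up confusion matrix (rows ordered as event 3 = LV, event 4 = RV, matching sorted label order).

```python
import numpy as np

# Per-class accuracy from a confusion matrix: diagonal over row sums,
# as used for score_c1/score_c2 above. The matrix itself is a toy example.
cm = np.array([[8, 2],    # true LV: 8 correct, 2 confused with RV
               [3, 7]])   # true RV: 3 confused with LV, 7 correct
row_sums = cm.sum(axis=1)                     # number of true trials per class
per_class = cm.diagonal() / row_sums * 100    # percent correct per class

# Adaptive rule: schedule more trials of the worse-decoded class
# (event 3 = LV, event 4 = RV, as in the ev_list above).
next_event = 3 if per_class[0] < per_class[1] else 4
```

With this toy matrix LV is decoded at 80% and RV at 70%, so the next simulated trial is an RV event.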
0.15/_downloads/rt_feedback_server.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + """ User segmentation """ import pandas as pd data_src = pd.read_table('/Users/didi/Desktop/000000_1', sep="\t", names=['pId','pPhone','oId','sCost','aCost'])[:1000] data_src.describe() # - data_src["dRate"] = data_src.aCost / data_src.sCost data_src.head(5) data_stat = data_src.groupby([data_src["pId"], data_src["pPhone"]]) data_stat.head(5) # Count the number of orders per user data_stat["oId"].count().head(5)
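The per-user aggregation this notebook is working toward can be completed with a groupby plus named aggregation. Below is a sketch on toy data; the real input file is local to the original author, and the column meanings (pId = user id, oId = order id, sCost/aCost = listed and actual cost) are inferred from the names.

```python
import pandas as pd

# Toy stand-in for the local file read above; columns follow the notebook.
data_src = pd.DataFrame({
    "pId":   [1, 1, 2, 2, 2],
    "oId":   [10, 11, 12, 13, 14],
    "sCost": [10.0, 20.0, 5.0, 5.0, 10.0],
    "aCost": [8.0, 18.0, 5.0, 4.0, 9.0],
})
data_src["dRate"] = data_src.aCost / data_src.sCost   # discount rate per order

# One row per user: order count plus cost totals, via named aggregation.
data_stat = data_src.groupby("pId").agg(
    oCount=("oId", "count"),
    sTotal=("sCost", "sum"),
    aTotal=("aCost", "sum"),
).reset_index()
```

The named-aggregation form keeps the output columns flat, which avoids the multi-index headaches that usually follow a bare `groupby`.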
miscellaneous/proj-20160310.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import math def unit_vector(vector): """ Returns the unit vector of the vector. """ return vector / np.linalg.norm(vector) def angle_between(v1, v2): """ Returns the angle in radians between vectors 'v1' and 'v2':: >>> angle_between((1, 0, 0), (0, 1, 0)) 1.5707963267948966 >>> angle_between((1, 0, 0), (1, 0, 0)) 0.0 >>> angle_between((1, 0, 0), (-1, 0, 0)) 3.141592653589793 """ v1_u = unit_vector(v1) v2_u = unit_vector(v2) return np.arccos(np.clip(np.dot(v1_u, v2_u), -1.0, 1.0)) # - # + v1 = [2,8,3] v2 = [3,2,3] ang = angle_between(v1,v2) print('Radians: ', ang) deg = math.degrees(ang) print('Degrees: ', deg) # -
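The `np.clip` inside `angle_between` is not cosmetic: floating-point rounding can push a normalized dot product just past 1.0 (or below -1.0), where `arccos` is undefined and returns NaN. A minimal demonstration of the failure and the guard:

```python
import numpy as np

# Simulate the rounding case: the smallest float strictly greater than 1.0,
# which is what a dot product of two parallel unit vectors can come out as.
d = np.nextafter(1.0, 2.0)

unsafe = np.arccos(d)                        # outside [-1, 1]: NaN (with a warning)
safe = np.arccos(np.clip(d, -1.0, 1.0))      # clipped back to 1.0: angle is 0.0
```

Clipping changes the answer by at most one ULP of the input, so it is a safe fix rather than a hidden error.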
Angle.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # ## Use scaling for longer canyon # Using scaling and fitted functions from notebook scalling_tracer_flux_paper1_clean.ipynb, see if our scaling works when using a longer canyon. # # L = 12800.0 m, vs 6400.0 m from Barkley-like run # # R = 5000.0 m # # Wm = 24432.4 m, Width at shelf break # # W = 22044.8 m, mid-length width at rim depth # # Ws = 13756.1 m, mid-length width at shelf-break isobath # # Hs = 150.0 m, Shelf break depth # # s = 0.005, shelf slope # # Hh = 97.5 m, head depth # # Hr = 132.0 m, rim depth at DnS # # No = 5.5E-3 s$^{-1}$, Initial stratification at shelf-break depth # # f = 9.66E-5 s$^{-1}$, Coriolis parameter # # U = 0.344 ms$^{-1}$, incoming velocity base case, m/s (from model) # # # %matplotlib inline import matplotlib.pyplot as plt import matplotlib.colors as mcolors import matplotlib.gridspec as gspec from matplotlib.ticker import FormatStrFormatter from netCDF4 import Dataset import numpy as np import os import pandas as pd import seaborn as sns import sys import scipy.stats import warnings warnings.filterwarnings("ignore") import canyon_tools.readout_tools as rout import canyon_tools.metrics_tools as mpt # + from IPython.display import HTML HTML('''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); } else { $('div.input').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> <form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''') # - sns.set_context('paper') sns.set_style('white') # + CanyonGrid='/data/kramosmu/results/TracerExperiments/LONGER_CNY2/run01/gridGlob.nc' CanyonGridOut = Dataset(CanyonGrid) 
CanyonGridNoC='/data/kramosmu/results/TracerExperiments/CNTDIFF/run68/gridGlob.nc' CanyonGridOutNoC = Dataset(CanyonGridNoC) CanyonState='/data/kramosmu/results/TracerExperiments/LONGER_CNY2/run01/stateGlob.nc' CanyonStateOut = Dataset(CanyonState) # Grid variables nx = 616 ny = 360 nz = 90 nt = 19 # t dimension size time = CanyonStateOut.variables['T'] RC = CanyonGridOut.variables['RC'] # + # Constants and scales L = 9600 #12800#6400.0 # canyon length R = 5000.0 # Upstream radius of curvature g = 9.81 # accel. gravity Hs = 149.8 # Shelf break depth s = 0.005 # shelf slope Wr = 13756.1 # mid-length width at shelf break isobath W = 22044.8 # mid-length width at rim depth Hh= 72.3 #98.8 # head depth Hr = 89.1 # rim depth at UwH N = 5.5E-3 f = 9.66E-5 U = 0.35 Co = 5.06 # NOTE: The default values of all functions correspond to the base case def Dh(f,L,N): '''Vertical scale Dh''' return((f*L)/(N)) def Ro(U,f,R): '''Rossby number using radius of curvature as length scale''' return(U/(f*R)) def F(Ro): '''Function that estimates the ability of the flow to follow isobaths''' return(Ro/(0.9+Ro)) def Bu(N,f,W,Hs): '''Burger number''' return(N*Hs/(f*W)) def RossbyRad(N,f,Hs): '''1st Rossby radius of deformation''' return(N*Hs/f) # + # Get HCW, tracer on shelf, etc... 
file = ('/data/kramosmu/results/TracerExperiments/LONGER_CNY2/HCW_TrMass_LONGER_CNY2run01.csv') dfcan = pd.read_csv(file) HCW = dfcan['HCW'] TrMass = dfcan['TrMassHCW'] Phi_mod = np.mean(np.array([(HCW[ii]-HCW[ii-1])/(time[ii]-time[ii-1]) for ii in range (8,18)])) Phi_std = np.std(np.array([(HCW[ii]-HCW[ii-1])/(time[ii]-time[ii-1]) for ii in range (8,18)])) Phi_Tr = np.mean(np.array([(TrMass[ii]-TrMass[ii-1])/(time[ii]-time[ii-1]) for ii in range (12,18)])) Phi_Tr_std = np.std(np.array([(TrMass[ii]-TrMass[ii-1])/(time[ii]-time[ii-1]) for ii in range (12,18)])) # - # ### Scaling # + # Neff t = 6.5 # days epsilon = 5 Hrim = 135 Dz = abs(RC[int(Hrim/5)+1]-RC[int(Hrim/5)-1]) Z = ((f*U*F(Ro(U,f,R))*L)**(0.5))/N dk = 0 Kz = 1E-5 Kz_be = 1E-5 Zdif = 0 Smin_dif = np.exp(-0.15*Zdif/Dz) # -0.1 comes from the 1D model Smax_dif = (Zdif/Dz)*np.exp(-(Kz*t*3600*24)/((epsilon)**2)) Smax_upw = (Z/Hh)*np.exp(-Kz*t*3600*24/Z**2) Smin_upw = (Z/Hh)*np.exp(-Kz_be*t*3600*24/Z**2) A3 = 2.95 B3 = 2.02 C3 = 1.09 Nmin = N*(A3*Smin_upw + B3*(1-Smin_dif) + C3)**0.5 A1 = 8.17 B1 = 0.22 C1 = 0.81 Nmax = N*(A1*Smax_upw + B1*Smax_dif + C1)**0.5 Neff = 0.75*Nmax+0.25*Nmin #Concentration A5 = 0.33 B5 = 0.06 C5 = 1.01 Cbar = Co*(A5*Smax_upw+B5*Smax_dif + C5) # Upwelling flux Se = (s*N)/(f*((F(Ro(U,f,Wr))/Ro(U,f,L))**(1/2))) #slope2 = 6.33 #param2 = 0.89 #intercept2 = -0.014 slope = 2.11 param = 0.79 intercept = -0.005 #Phi=((slope2*(F(Ro(U,f,Wr))**(3/2))*(Ro(U,f,L)**(1/2))*((1-param2*Se)**3))+intercept2)*(U*W*Dh(f,L,Neff)) Phi=((slope*(F(Ro(U,f,Wr))**(3/2))*(Ro(U,f,L)**(1/2))*((1-param*Se)**3))+intercept)*(U*W*Dh(f,L,N)) # Tracer flux A6 = 1.00 B6 = -442.22 PhiTr = A6*Cbar*Phi - B6 # - print(PhiTr) print(Phi_Tr) print(Phi) print(Phi_mod) # + file = ('/data/kramosmu/results/TracerExperiments/CNTDIFF/HCW_TrMass_CNTDIFFrun38.csv') dfcan = pd.read_csv(file) HCW_bar = dfcan['HCW'] TrMass_bar = dfcan['TrMassHCW'] Phi_bar_mod = np.mean(np.array([(HCW_bar[ii]-HCW_bar[ii-1])/(time[ii]-time[ii-1]) for ii in 
range (8,18)])) Phi_Tr_bar = np.mean(np.array([(TrMass_bar[ii]-TrMass_bar[ii-1])/(time[ii]-time[ii-1]) for ii in range (12,18)])) plt.plot(HCW) plt.plot(HCW_bar) # - plt.plot(TrMass) plt.plot(TrMass_bar) print(Phi_bar_mod, Phi_Tr_bar) print(Phi_mod, Phi_Tr) (U*W*Hs) U*W*Dh(f=9.66E-5, L=12800, N=5.5E-3 ) Dh(f,L,Neff) Dh(f,L,N) Dh(f,L,0.012)
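The scale functions defined above can be sanity-checked by hand for the base case; a minimal sketch using the base-case constants (plain numbers, so it neither depends on nor overwrites the `Dh`, `Ro`, and `F` functions above):

```python
# Base-case parameters copied from the constants cell.
f_base = 9.66e-5   # Coriolis parameter [1/s]
L_base = 9600.0    # canyon length [m]
N_base = 5.5e-3    # buoyancy frequency [1/s]
U_base = 0.35      # incoming velocity [m/s]
R_base = 5000.0    # upstream radius of curvature [m]

Dh_val = (f_base * L_base) / N_base   # vertical scale of upwelling [m]
Ro_val = U_base / (f_base * R_base)   # Rossby number
F_val = Ro_val / (0.9 + Ro_val)       # ability of the flow to follow isobaths

print(Dh_val, Ro_val, F_val)
```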
RealisticKvMaps/scaling_tracer_flux_paper1_wLongCny.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nteract={"transient": {"deleting": false}} # # Laplace Mechanism Basics # # # The Laplace Mechanism adds noise drawn from a Laplace distribution to realize differential privacy. # # This mechanism works well for computing means and histograms, providing accurate results at minimal privacy budgets. # # This notebook walks through the basic `eeprivacy` functions for working with the Laplace Mechanism. # + jupyter={"outputs_hidden": true, "source_hidden": false} nteract={"transient": {"deleting": false}} # Preamble: imports and figure settings from eeprivacy.mechanisms import LaplaceMechanism from eeprivacy.operations import ( PrivateClampedMean, PrivateHistogram, ) import matplotlib.pyplot as plt import numpy as np import pandas as pd import matplotlib as mpl from scipy import stats np.random.seed(1234) # Fix seed for deterministic documentation mpl.style.use("seaborn-white") MD = 28 LG = 36 plt.rcParams.update({ "figure.figsize": [25, 10], "legend.fontsize": MD, "axes.labelsize": LG, "axes.titlesize": LG, "xtick.labelsize": LG, "ytick.labelsize": LG, }) # + [markdown] nteract={"transient": {"deleting": false}} # ## Distribution of Laplace Mechanism Outputs ## # # For a given ε, noise is drawn from the Laplace distribution at `b`=`sensitivity`/ε. 
The `eeprivacy` class `LaplaceMechanism` draws this noise and adds it to a private value: # + nteract={"transient": {"deleting": false}} trials = [] for t in range(1000): trials.append(LaplaceMechanism.execute( value=0, epsilon=0.1, sensitivity=199/3000 )) plt.hist(trials, bins=30, color="k") plt.title("Distribution of outputs from Laplace Mechanism") plt.show() # + [markdown] nteract={"transient": {"deleting": false}} # ## Laplace Mechanism Confidence Interval ## # # With the `eeprivacy` confidence interval functions, analysts can determine how far away the true value of a statistic is from the differentially private result. # # To determine the confidence interval for a given choice of privacy parameters, employ `eeprivacy.laplace_mechanism_confidence_interval`. # # To determine the privacy parameters for a desired confidence interval, employ `eeprivacy.laplace_mechanism_epsilon_for_confidence_interval`. # # The confidence intervals reported below are two-sided. For example, for a 95% confidence interval of +/-10, 2.5% of results will be smaller than -10 and 2.5% of results will be larger than +10.
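The two-sided interval has a closed form that follows directly from the Laplace CDF, independent of `eeprivacy`: for noise of scale b = sensitivity/ε, P(|X| ≤ c) = 1 − exp(−c/b), so the half-width at confidence level p is c = −b·ln(1 − p). A minimal pure-math sketch (for ε = 0.1, sensitivity = 1, 95% confidence this works out to ≈29.957):

```python
import math

def laplace_ci(epsilon, sensitivity, confidence):
    # Half-width c such that P(|X| <= c) = confidence for Laplace noise
    # with scale b = sensitivity / epsilon.
    b = sensitivity / epsilon
    return -b * math.log(1.0 - confidence)

def epsilon_for_ci(target_ci, sensitivity, confidence):
    # Invert the relation above to recover epsilon from a desired half-width.
    return -sensitivity * math.log(1.0 - confidence) / target_ci

ci = laplace_ci(epsilon=0.1, sensitivity=1, confidence=0.95)
print(ci)                           # ~29.957
print(epsilon_for_ci(ci, 1, 0.95))  # recovers 0.1
```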
# # + jupyter={"outputs_hidden": true, "source_hidden": false} nteract={"transient": {"deleting": false}} trials = [] for t in range(100000): trials.append(LaplaceMechanism.execute( value=0, epsilon=0.1, sensitivity=1 )) plt.hist(trials, bins=30, color="k") plt.title("Distribution of outputs from Laplace Mechanism") plt.show() ci = np.quantile(trials, 0.975) print(f"95% Confidence Interval (Stochastic): {ci}") ci = LaplaceMechanism.confidence_interval( epsilon=0.1, sensitivity=1, confidence=0.95 ) print(f"95% Confidence Interval (Exact): {ci}") # Now in reverse: epsilon = LaplaceMechanism.epsilon_for_confidence_interval( target_ci=29.957, sensitivity=1, confidence=0.95 ) print(f"ε for confidence interval: {epsilon}") # + [markdown] nteract={"transient": {"deleting": false}} # ## Examples of Laplace Mechanism Helpers using `eeprivacy` ## # # The Laplace Mechanism is well-suited for computing means, and `eeprivacy` provides a helper `private_mean_with_laplace` for this use case. # # The private mean function implemented by `eeprivacy` employs the "clamped mean" approach to bound sensitivity. Analysts provide a fixed `lower_bound` and `upper_bound` before computing the mean. For datasets with unknown ranges, an approach like the one described in [Computing Bounds for Clamped Means] can be used. 
# + jupyter={"outputs_hidden": true, "source_hidden": false} nteract={"transient": {"deleting": false}} N = 500000 dataset = np.random.normal(loc=42, size=N) plt.hist(dataset, bins=30, color="k") plt.title("Sample Dataset") plt.xlabel("Value") plt.ylabel("Count") plt.show() trials = [] private_mean_op = PrivateClampedMean( lower_bound = 0, upper_bound = 50 ) for i in range(1000): private_mean = private_mean_op.execute( values=dataset, epsilon=0.1, ) trials.append(private_mean) plt.hist(trials, bins=30, color="k") plt.title("Distribution of private mean with Laplace Mechanism") plt.xlabel("Laplace Mechanism Output") plt.ylabel("Count") plt.show() # + [markdown] nteract={"transient": {"deleting": false}} # ## Computing Private Histograms ## # # The Laplace Mechanism is also well-suited for private histograms. # + jupyter={"outputs_hidden": true, "source_hidden": false} nteract={"transient": {"deleting": false}} bins = np.linspace(start=0, stop=100, num=30) private_histogram_op = PrivateHistogram( bins = bins, ) private_histogram = private_histogram_op.execute( values=dataset, epsilon=0.001 ) true_histogram = np.histogram(dataset, bins=bins) bin_centers = (bins[0:-1] + bins[1:]) / 2 bin_width = bins[1] - bins[0] ci = LaplaceMechanism.confidence_interval( epsilon=0.001, sensitivity=1, confidence=0.95 ) # error bars matched to the histogram's own epsilon fig, ax = plt.subplots() ax.bar( bin_centers, private_histogram, width=bin_width/2, yerr=ci, color="r", label="Private Count" ) ax.bar( bin_centers+bin_width/2, true_histogram[0], width=bin_width/2, color="b", label="True Count" ) plt.title("Private histogram of sample dataset") plt.xlabel("Value") plt.ylabel("Count") plt.legend() plt.show()
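The clamped-mean approach bounds sensitivity because one record clamped to [lower, upper] can move the mean by at most (upper − lower)/n. A sketch of the resulting Laplace noise scale under that standard construction (this illustrates the math only; it is not a claim about `PrivateClampedMean`'s exact internals):

```python
# For a mean over n records each clamped to [lower, upper], replacing one
# record moves the mean by at most (upper - lower) / n, so that is the L1
# sensitivity, and the Laplace scale is b = sensitivity / epsilon.
def clamped_mean_noise_scale(lower, upper, n, epsilon):
    sensitivity = (upper - lower) / n
    return sensitivity / epsilon

# Parameters matching the example above: bounds [0, 50], 500k records, eps=0.1.
b = clamped_mean_noise_scale(lower=0, upper=50, n=500_000, epsilon=0.1)
print(b)  # 0.001
```

With half a million records the noise scale is tiny relative to the true mean of 42, which is why the private means cluster so tightly in the histogram above.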
docs-source/source/laplace-mechanism-basics.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: scivis-plankton # kernelspec: # display_name: Python (scivis-plankton) # language: python # name: scivis-plankton # --- # # Data operations # This notebook contains some code for loading the images and classification labels. # # **The [last cell of this notebook](#Quick-start) contains everything needed to load the labelled training data into an xarray, in a single notebook cell.** # ## Import libraries # + gather={"logged": 1636726275244} import matplotlib.pyplot as plt import xarray as xr import pandas as pd import numpy as np from scivision.io import load_dataset from IPython.display import display, HTML # - # ## Load the Intake catalog # As before, load the [Intake](https://intake.readthedocs.io/en/latest/index.html) catalog from the challenge repository containing [Scivision](https://github.com/alan-turing-institute/scivision) metadata: # + gather={"logged": 1636726275627} cat = load_dataset('https://github.com/alan-turing-institute/plankton-dsg-challenge') # - # ## Inspect the catalog entries # We explored the catalog in the previous notebook. It contains several data sources: their descriptions are shown below. for data_source in cat: display(HTML(f"<h4>{data_source}</h4>")) display(HTML(cat[data_source].description)) # <div class="alert alert-block alert-info">We will use the <tt>plankton_multiple</tt> entry to fetch all of the images, and the <tt>labels</tt> to fetch the labels for training. The <tt>labels_holdout</tt> will be useful as a final holdout set for testing any models you may produce during the challenge.</div> # ## Fetch the labels # The `labels` entry corresponds to an index file, imported as a `pandas.DataFrame`, which contain the list of all plankton images. 
Each image entry includes its index, filename, and labels according to three levels of classification: `label1` (zooplankton vs detritus), `label2` (non-copepod vs copepod) and `label3` (species). # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} labels = cat.labels().read() # - labels.head() # ## Fetch the complete image dataset # The final entry loads the full dataset. All images are stacked into a single `xarray.Dataset` object with fixed dimensions of 1040 x 832 px, large enough to hold all of the images. Smaller images are padded with zeros. # + gather={"logged": 1636726330314} ds_all = cat.plankton_multiple().to_dask() ds_all.filename.load() # + gather={"logged": 1636726330509} print(ds_all) # - # Let's subset a single image. This can be done using the image index stored in `concat_dim`. subset = ds_all.sel(concat_dim=2) print(subset) plt.imshow(subset['raster'].compute().values[:,:,:]) plt.title(subset.filename.compute().values) # ## Assembling the labelled dataset # ### Check for duplicate labels for filename, label_grp in labels.groupby("filename"): if len(label_grp) > 1: display(label_grp.reset_index(drop=True)) print() # ### Join the images and labels # Put the labels into an xarray, dropping any filenames that have duplicate labels. Set the filename as the (unique) index so it is ready to be merged (joined) with the image data: # + labels_dedup = xr.Dataset.from_dataframe( labels .drop_duplicates(subset=["filename"]) .set_index("filename") .sort_index() ) print(labels_dedup) # - # Merging (joining) a dataset can be done on xarray dimensions. We temporarily make `filename` a dimension of the dataset in order to perform the merge (in place of `concat_dim` - the integer-valued dimension corresponding to each image file read into ds_all, which is **not** the same as `index` in labels_dedup).
# + ds_labelled = ( ds_all .swap_dims({"concat_dim": "filename"}) .merge(labels_dedup, join="inner") .swap_dims({"filename": "concat_dim"}) ) print(ds_labelled) # - # ## Quick start # The following cell contains everything needed to load the labelled training data into an xarray, named `ds_labelled` (independent of the rest of the notebook). It will take a few minutes to run. # + import xarray as xr from scivision.io import load_dataset cat = load_dataset('https://github.com/alan-turing-institute/plankton-dsg-challenge') ds_all = cat.plankton_multiple().to_dask() labels = cat.labels().read() labels_dedup = xr.Dataset.from_dataframe( labels .drop_duplicates(subset=["filename"]) .set_index("filename") .sort_index() ) ds_labelled = ( ds_all .swap_dims({"concat_dim": "filename"}) .merge(labels_dedup, join="inner") .swap_dims({"filename": "concat_dim"}) )
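The `join="inner"` merge above keeps only the filenames present in both the image stack and the label table. The same semantics, sketched with plain dicts (the filenames and label values here are hypothetical, purely to illustrate the join):

```python
# Hypothetical miniature stand-ins for the image stack and the label table.
images = {"img_001.png": "raster-1", "img_002.png": "raster-2", "img_003.png": "raster-3"}
labels = {"img_001.png": "copepod", "img_003.png": "detritus"}

# join="inner": keep only filenames present on both sides,
# pairing each raster with its label. img_002.png is dropped.
labelled = {fn: (images[fn], labels[fn]) for fn in sorted(images.keys() & labels.keys())}
print(labelled)
```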
notebooks/python/scivision/2_data_operations.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # ### Learning multiple linear regression # # See the MOOC for the full derivation. # # Core formula: # # ![Normal-equation solution for multiple linear regression](img/多元线性回归的正规方程解公式.png) # + pycharm={"name": "#%%\n", "is_executing": false} import numpy as np from sklearn import datasets # Build the data: the Boston house-price dataset boston = datasets.load_boston() X = boston.data # use the data with all features y = boston.target X = X[y < 50] y = y[y < 50] X.shape # there are 13 features # + pycharm={"name": "#%%\n", "is_executing": false} from sklearn.model_selection import train_test_split from LinearRegression import LinearRegression X_train, X_test, y_train, y_test = train_test_split(X, y) # train the model estimator = LinearRegression() estimator.fit_normal(X_train, y_train) # + pycharm={"name": "#%%\n", "is_executing": false} # inspect the intercept estimator.interception_ # + pycharm={"name": "#%%\n", "is_executing": false} # inspect the coefficients estimator.coefficient_ # + pycharm={"name": "#%%\n", "is_executing": false} # evaluate the model estimator.score(X_test, y_test)
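The normal equation θ = (XᵀX)⁻¹Xᵀy behind `fit_normal` can be verified by hand on a tiny noiseless example; a self-contained sketch using the closed-form inverse of a 2×2 matrix (independent of the notebook's own `LinearRegression` class):

```python
# Fit y = theta0 + theta1 * x via the normal equation theta = (X^T X)^(-1) X^T y.
x = [0.0, 1.0, 2.0, 3.0]
y = [3.0, 5.0, 7.0, 9.0]          # exactly y = 3 + 2x, so the fit is exact

n = len(x)
# Entries of X^T X and X^T y for the design matrix with a leading column of ones.
sx, sxx = sum(x), sum(v * v for v in x)
sy, sxy = sum(y), sum(a * b for a, b in zip(x, y))

# Closed-form inverse of the 2x2 matrix [[n, sx], [sx, sxx]] applied to [sy, sxy].
det = n * sxx - sx * sx
theta0 = (sxx * sy - sx * sxy) / det   # intercept
theta1 = (n * sxy - sx * sy) / det     # slope
print(theta0, theta1)  # 3.0 2.0
```

With 13 features, as in the Boston data, the same formula applies with a 14×14 matrix inverse, which is what a `fit_normal`-style method computes with linear-algebra routines instead of closed forms.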
ml/04-Linear-Regression/06-Our-Linear-Regression/06-Our-Linear-Regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Prepare Dataset # by <NAME> & <NAME> # ### Loading Dependencies # %matplotlib inline from six.moves import cPickle as pickle import os import shutil from PIL import Image, ImageOps import numpy as np import matplotlib.pyplot as plt from IPython.display import display as disp from IPython.display import Image as Im from scipy import ndimage import random from scipy.ndimage.interpolation import shift from sklearn.model_selection import train_test_split # + ## image size image_size, size = 28, 28 ## Shifts num_shifts = 0 ## Number of imgs per class min_imgs_per_class = 1 ## Number of imgs per class after augmentation min_augmentation = 20 # - # ### Cropping Spectrograms # Given the architectures we are using in our models, we want all spectrograms to have the same size, because the models don't allow for dynamic size input. # + def squareAndGrayImage(image, size, path, species, name): # open our image and convert to grayscale # (needed since color channels add a third dimmension) im = Image.open(image).convert('L') # dimmensions of square image size = (size,size) # resize our image and adjust if image is not square. 
save our image squared_image = ImageOps.fit(im, size, Image.ANTIALIAS) squared_image.save(path + '/' + species + '/squared_' + name) squared_image.close() #print(ndimage.imread(path + '/' + species + '/squared_' + name).shape) def squareAndGrayProcess(size, dataset_path, new_dataset_path): # if our dataset doesn't exist create it, otherwise overwrite if not os.path.exists(new_dataset_path): os.makedirs(new_dataset_path) else: shutil.rmtree(new_dataset_path) os.makedirs(new_dataset_path) # get a list of species folders in our dataset species_dataset = os.listdir(dataset_path) for species in species_dataset: os.makedirs(new_dataset_path + '/' + species) species_images = os.listdir(dataset_path + '/' + species) for image in species_images: image_path = dataset_path + '/' + species + '/' + image squareAndGrayImage(image_path, size, new_dataset_path, species, image) dataset_path = '../dataset/spectrogram_roi_dataset' new_dataset_path = '../dataset/squared_spectrogram_roi_dataset' squareAndGrayProcess(size, dataset_path, new_dataset_path) # + #new_dataset_path = '../dataset/augmented_spectrograms/' #new_dataset_path = '../dataset/squared_spectrogram_roi_dataset/' # + def getDatasetFolders(dataset_path): folders = os.listdir(dataset_path) dataset_folders = [] for folder in folders: dataset_folders.append(dataset_path + '/' + folder) return dataset_folders dataset_folders = getDatasetFolders(new_dataset_path) # + pixel_depth = 255.0 # Number of levels per pixel. 
def load_image(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth #print(image_data.shape) # our images are RGBA so we would expect shape MxNx4 # see: https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.imread.html if (image_data.shape != (image_size, image_size)): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] #if num_images < min_num_images: # raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset # - def augmentation(new_dataset_path, num_shifts): for folder in os.listdir(new_dataset_path): species_pictures = os.listdir(new_dataset_path + '/' + folder) os.makedirs('../dataset/augmented_spectrograms' + '/' + folder) for image in species_pictures: the_image = np.asarray(Image.open(new_dataset_path + '/' + folder + '/' + image)) for i in range(num_shifts+1): pre_image = the_image.reshape((size,size)) # shift up shifted_image_up = shift(pre_image, [(i*(-1)), 0]) shifted_image_up.save('../dataset/augmented_spectrograms/' + folder + '/shifted_up' + str(i) + '_' + image) shifted_image_up.close() # shift_down shifted_image_down = shift(pre_image, [i, 0]) shifted_image_down.save('../dataset/augmented_spectrograms/' + folder + '/shifted_down' + str(i) + '_' + image) shifted_image_down.close() #shift_left shifted_image_left = shift(pre_image, [0, 
(i*(-1))]) shifted_image_left.save('../dataset/augmented_spectrograms/' + folder + '/shifted_left' + str(i) + '_' + image) shifted_image_left.close() #shift_right shifted_image_right = shift(pre_image, [0, i]) shifted_image_right.save('../dataset/augmented_spectrograms/' + folder + '/shifted_right' + str(i) + '_' + image) shifted_image_right.close() pre_image.close() del the_image # ### Pickling Data # We want to pickle the data by species, allowing for control of the minimum images per class. Beware that this will drastically influence the performance of your model. # + def maybe_pickle(data_folders, min_num_images_per_class, pickles_path, force=False): if not os.path.exists(pickles_path): os.makedirs(pickles_path) else: shutil.rmtree(pickles_path) os.makedirs(pickles_path) dataset_names = [] for folder in data_folders: class_name = folder.split('/')[-1] # species name set_filename = pickles_path + '/' + class_name + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: image_files = os.listdir(folder) count = 0 for image in image_files: count +=1 if True:#count >= min_num_images_per_class: print('Pickling %s.' % set_filename) dataset = load_image(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names pickles_path = '../dataset/pickle_data' datasets = maybe_pickle(dataset_folders, min_imgs_per_class, pickles_path) # - pickles = getDatasetFolders('../dataset/pickle_data') #print(datasets) num_classes = len(pickles) print(f'We have {num_classes} classes') # ### Classes # We have to evaluate the number of classes and how are they distributed. Also, observe which species has a higher frequency, etc. 
def das_labeler(pickle_files): labels = [] images = [] for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: species_set = pickle.load(f) for image in species_set: labels.append(label) images.append(image) except Exception as e: print('Unable to process data from', pickle_file, ':', e) pass labels = np.asarray(labels) images = np.asarray(images) return labels, images labels, images = das_labeler(datasets) # + X_train, X_test, y_train, y_test = train_test_split(images, labels, test_size = 0.33, random_state = 42) # + # Calculates the total of images per class def class_is_balanced(pickles): total = 0 for pckle in pickles: if (os.path.isfile(pckle)): pickle_class = pickle.load(open(pckle, "rb")) else: print("Error reading dataset %s. Exiting." % pckle) return -1 class_name = pckle.split('/')[-1].split('.')[0] print("The total number of images in class %s is: %d" % (class_name, len(pickle_class))) total += len(pickle_class) print("For the dataset to be balanced, each class should have approximately %d images.\n" % (total / len(pickles))) return (total // len(pickles)) print("Let's see if the dataset is balanced:") balance_num = class_is_balanced(pickles) # - # ### Training, Testing, and Validation Separation # As with every implementation of supervised learning, we separate the dataset into components: here a training set and a testing set (a validation set can be split off from training in the same way). # ### Output Data # We output the data in a pickle format, to be used next on the models. # + pickle_file = '../dataset/arbimon_' + str(num_shifts) + '.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': X_train, 'train_labels': y_train, 'test_dataset': X_test, 'test_labels': y_test, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) # save all our datasets in one pickle f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise # -
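As a sanity check that a saved pickle restores intact, a minimal round-trip sketch with the stdlib (a throwaway temp file and stand-in payload, not the notebook's actual dataset paths):

```python
import os
import pickle
import tempfile

# Stand-in payload shaped like the save dict above, but tiny.
save = {"train_labels": [0, 1, 2], "test_labels": [1, 0]}

# Dump with the highest protocol, then reload and compare.
fd, path = tempfile.mkstemp(suffix=".pickle")
os.close(fd)
with open(path, "wb") as f:
    pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
with open(path, "rb") as f:
    restored = pickle.load(f)
os.remove(path)
print(restored == save)  # True
```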
Jupyter Notebooks/02_Prepare_Dataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="saw_md_rI9Q0" import sys, os, re, csv, codecs, numpy as np, pandas as pd import matplotlib.pyplot as plt # #%matplotlib inline from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation, concatenate, Flatten, LSTM, Conv1D from keras.layers import Bidirectional, GlobalMaxPool1D, GRU, SpatialDropout1D, GlobalAveragePooling1D, GlobalMaxPooling1D, RNN from keras.models import Model, Sequential from keras import initializers, regularizers, constraints, optimizers, layers from keras.callbacks import Callback, EarlyStopping, ModelCheckpoint from sklearn.metrics import roc_auc_score, roc_curve from gensim.models.keyedvectors import KeyedVectors import gc from sklearn.model_selection import train_test_split from sklearn.utils import resample import warnings # + colab={"base_uri": "https://localhost:8080/"} id="sJTiSrSjCDk2" outputId="2d4ef37d-3a34-4820-fa63-97b9def4c76c" from google.colab import drive drive.mount('/content/drive') # + [markdown] id="NwvL7InnIRLd" # #Astrazeneca GRU-SGD # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="JGUimQIwIo_t" outputId="ec67a459-49db-4e00-aa92-eb8a3100903a" df = pd.read_excel('/content/drive/MyDrive/Astrazeneca_MiningYoutubeconcat_preprocess3_6Juli EXCEL.xlsx') df = df[~df['Comments'].isnull()] df = df.reset_index() df = df.drop(['index'], axis=1) df['sentimen']= df['sentimen'].fillna(4) df_untagged = df[df['sentimen'] == 4] df = df[df['sentimen'] != 4] df # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="5qUT-JGECwC7" outputId="9da3810a-689f-49e0-fed4-f22338b20703" df_untagged = df_untagged.reset_index() df_untagged = df_untagged.drop(['index'], axis=1) df_untagged # + 
colab={"base_uri": "https://localhost:8080/"} id="P2OObAcNJQS4" outputId="6ac8c4dc-665c-4add-e75f-293af8044dca" print(df.sentimen.value_counts()) # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="SUre-yaUJdLc" outputId="031cd2da-e5ff-49f3-eb07-af14ef2195d1" df['POS']= np.where(df['sentimen'] == 1, '1', '0') df['NET']= np.where(df['sentimen'] == 0, '1', '0') df['NEG']= np.where(df['sentimen'] == -1, '1', '0') df = df.drop(['sentimen'], axis=1) df # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="tuhCkXtUKuSn" outputId="09c0b1ba-c469-4f5b-c3e3-41cb9c287d55" # remove links and usernames df["Comments"]=df["Comments"].str.replace('(?:\@|https?\://)\S+', '') # lowercase everything df["Comments"]=df["Comments"].str.lower() # remove symbols df["Comments"]=df["Comments"].str.replace('[^\w\s]',' ') # remove digits df["Comments"]=df["Comments"].str.replace('\d+',' ') # remove newlines df["Comments"]=df["Comments"].str.replace('\n',' ',regex=True) df["Comments"] = df["Comments"].replace('\s+', ' ', regex=True) df # + colab={"base_uri": "https://localhost:8080/"} id="gr81lnrEJ8vw" outputId="e1b7cd7c-0165-49fc-df4f-86c5dbbd036e" df_majority = df[df.NET== '0' ] df_minority = df[df.NET== '1'] df_minority_upsampled = resample(df_minority, replace=True, # sample with replacement n_samples=104, # to match majority class random_state=123) # reproducible results df_upsampled = pd.concat([df_majority, df_minority_upsampled]) df_majority_2 = df_upsampled[df_upsampled.NEG== '0' ] df_minority_2 = df_upsampled[df_upsampled.NEG== '1'] df_minority_upsampled_2 = resample(df_minority_2, replace=True, # sample with replacement n_samples=104, # to match majority class random_state=123) # reproducible results df_upsampled_2 = pd.concat([df_majority_2, df_minority_upsampled_2]) df = df_upsampled_2 print(df.POS.value_counts()) print(df.NET.value_counts()) print(df.NEG.value_counts()) # + id="e1wa6dioMkwe" df['POS']=df['POS'].astype(int) df['NET']=df['NET'].astype(int)
df['NEG']=df['NEG'].astype(int) train, test = train_test_split(df, test_size=0.2, random_state = 23) list_classes = ['POS','NET','NEG'] y = train[list_classes].values y_te = test[list_classes].values list_sentences_train = train["Comments"] list_sentences_test = test["Comments"] max_features = 100000 tokenizer = Tokenizer(num_words=max_features) tokenizer.fit_on_texts(list(list_sentences_train)) list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train) list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test) maxlen = 50 X_t = pad_sequences(list_tokenized_train, maxlen=maxlen) X_te = pad_sequences(list_tokenized_test, maxlen=maxlen) totalNumWords = [len(one_comment) for one_comment in list_tokenized_train] # + id="WSFMz3i9WNeB" def loadEmbeddingMatrix(typeToLoad): #load different embedding file from Kaggle depending on which embedding #matrix we are going to experiment with if(typeToLoad=="word2vec"): word2vecDict = KeyedVectors.load_word2vec_format("model300.bin", binary=True) embed_size = 300 embeddings_index = dict() for word in word2vecDict.wv.vocab: embeddings_index[word] = word2vecDict.word_vec(word) print('Loaded %s word vectors.' % len(embeddings_index)) gc.collect() #We get the mean and standard deviation of the embedding weights so that we could maintain the #same statistics for the rest of our own random generated weights. all_embs = np.stack(list(embeddings_index.values())) emb_mean,emb_std = all_embs.mean(), all_embs.std() nb_words = len(tokenizer.word_index)+1 #We are going to set the embedding size to the pretrained dimension as we are replicating it. #the size will be Number of Words in Vocab X Embedding Size embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size)) gc.collect() #With the newly created embedding matrix, we'll fill it up with the words that we have in both #our own dictionary and loaded pretrained embedding. 
embeddedCount = 0 for word, i in tokenizer.word_index.items(): i-=1 #then we see if this word is in glove's dictionary, if yes, get the corresponding weights embedding_vector = embeddings_index.get(word) #and store inside the embedding matrix that we will train later on. if embedding_vector is not None: embedding_matrix[i] = embedding_vector embeddedCount+=1 print('total embedded:',embeddedCount,'common words') del(embeddings_index) gc.collect() #finally, return the embedding matrix return embedding_matrix # + id="9d_lBylPVuA1" def create_fold_embeddings(embeddings_dim, key_vector): emb_init_values = [] unk = [] a = 0 b = 0 for word, i in tokenizer.word_index.items(): # make sure the order follows the tokenizer index if word == '<unk>': emb_init_values.append(np.random.uniform(-0.25, 0.25, embeddings_dim).astype('float32')) elif word == '<pad>': emb_init_values.append(np.zeros(embeddings_dim).astype('float32')) elif word in key_vector.wv.vocab: emb_init_values.append(key_vector.wv.word_vec(word)) b = b+1 else: emb_init_values.append(np.random.uniform(-0.25, 0.25, embeddings_dim).astype('float32')) a = a+1 unk.append(word) # print(word) emb_init_values.append(np.random.uniform(-0.25, 0.25, embeddings_dim).astype('float32')) known_word = b unknown_word = a print(known_word, unknown_word) return known_word, unknown_word, emb_init_values # + id="EebqDeqIVveU" word_vectors = KeyedVectors.load_word2vec_format("/content/drive/MyDrive/model300.bin", binary=True, unicode_errors='ignore') word2vec = word_vectors embed_dim = 300 # + colab={"base_uri": "https://localhost:8080/"} id="ZZ1sU4qKD8e9" outputId="448b47c3-8fee-4de3-d1d3-541070c5cd40" known_word, unknown_word, emb_init_values = create_fold_embeddings(embed_dim, word_vectors) emb_init_values = np.array(emb_init_values) print("known words:", known_word) print("unknown words:", unknown_word) # + id="eLFZETknEAQF" class RocAucEvaluation(Callback): def __init__(self, validation_data=(), interval=1): super(Callback, self).__init__()
self.interval = interval self.X_val, self.y_val = validation_data def on_epoch_end(self, epoch, logs={}): if epoch % self.interval == 0: y_pred = self.model.predict(self.X_val, verbose=0) score = roc_auc_score(self.y_val, y_pred) print("\n ROC-AUC - epoch: %d - score: %.6f \n" % (epoch+1, score)) inp = Input(shape=(maxlen,)) # + colab={"base_uri": "https://localhost:8080/"} id="2DA3Sn94ENln" outputId="8b26f7dc-0189-4938-fb08-a12b3292cffa" x = Embedding(len(tokenizer.word_index)+1, emb_init_values.shape[1],weights=[emb_init_values],trainable=True)(inp) # print(inp.shape) #x = Bidirectional(LSTM(200, activation='tanh', return_sequences = True, dropout=0.4))(x) x = GRU(200, return_sequences=True)(x) x = GlobalMaxPool1D()(x) x = Dropout(0.1)(x) x = Dense(100)(x) x = Dropout(0.1)(x) x = Dense(50)(x) x = Dropout(0.1)(x) x = Dense(28, activation="relu")(x) x = Dropout(0.1)(x) x = Dense(3, activation="sigmoid")(x) model = Model(inputs=inp, outputs=x) opt = optimizers.SGD(lr=0.01) model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) batch_size = 32 epochs = 100 # file_path="weights_base.best.hdf5" RocAuc = RocAucEvaluation(validation_data=(X_te, y_te), interval=1) # checkpoint = ModelCheckpoint('modelEmb-{epoch:03d}-{acc:03f}-{val_acc:03f}.h5', verbose=1, monitor='val_loss',save_best_only=True, mode='auto') model.fit(X_t,y, batch_size=batch_size, epochs=epochs, verbose=2,callbacks=[RocAuc], validation_data=(X_te, y_te)) # + colab={"base_uri": "https://localhost:8080/"} id="L90QdQQ_Em2h" outputId="5b7fc405-8a38-468c-ee93-af8eb54114d8" model.summary() # + id="mpwOyJ-gFG4l" from keras.models import load_model c = model.predict(X_te) # + colab={"base_uri": "https://localhost:8080/"} id="P6BtDkNkFKvo" outputId="539ba350-b5b9-4363-cd50-e3bd5e7f6c2f" score = model.evaluate(X_te, y_te, batch_size=batch_size, verbose=1) print('Test accuracy:', score[1]) # + colab={"base_uri": "https://localhost:8080/"} id="yE-o6kGVFPrx" 
outputId="4a661fef-debc-499a-8cc3-63c1989ac4f2" y_pred = model.predict(X_te) roc_auc_score(y_te, c) # + id="UMUodxtrFSoh" from sklearn.metrics import roc_curve, auc fpr = dict() tpr = dict() roc_auc = dict() for i in range(3): fpr[i], tpr[i], _ = roc_curve(y_te[:, i], y_pred[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # + colab={"base_uri": "https://localhost:8080/", "height": 851} id="y7he7IFMFUsK" outputId="db8c2717-fcb9-48c1-f8b9-08cb272179da" for i in range(3): plt.figure() plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i]) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="LwcYRAHAFYG_" outputId="c574a674-0b34-4cc9-dc15-b4a45aa3caf5" y_pred[y_pred>=0.23] = 1 y_pred[y_pred<0.23] = 0 roc_auc_score(y_te, y_pred) # + colab={"base_uri": "https://localhost:8080/"} id="xY_c3Cf0FbLN" outputId="1948791a-917d-49ba-e5ac-cbb0b6c80086" import sklearn.metrics as skm import numpy as np cm = skm.multilabel_confusion_matrix(y_te, y_pred) print(cm) print( skm.classification_report(y_te,y_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="-kDJ1BW8FeF4" outputId="267580c9-132d-4df0-e1c4-841741182c54" from sklearn.metrics import accuracy_score from sklearn.metrics import hamming_loss def hamming_score(y_true, y_pred, normalize=True, sample_weight=None): ''' Compute the Hamming score (a.k.a. 
label-based accuracy) for the multi-label case http://stackoverflow.com/q/32239577/395857 ''' acc_list = [] for i in range(y_true.shape[0]): set_true = set( np.where(y_true[i])[0] ) set_pred = set( np.where(y_pred[i])[0] ) #print('\nset_true: {0}'.format(set_true)) #print('set_pred: {0}'.format(set_pred)) tmp_a = None if len(set_true) == 0 and len(set_pred) == 0: tmp_a = 1 else: tmp_a = len(set_true.intersection(set_pred))/\ float( len(set_true.union(set_pred)) ) #print('tmp_a: {0}'.format(tmp_a)) acc_list.append(tmp_a) return np.mean(acc_list) print('Hamming score: {0}'.format(hamming_score(y_te,y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="tAU74bADFi_6" outputId="0d8a8944-d26e-4810-f6d6-abe920d127c1" import sklearn.metrics # Subset accuracy # 0.25 (= 0+1+0+0 / 4) --> 1 if the prediction for one sample fully matches the gold. 0 otherwise. print('Subset accuracy: {0}'.format(sklearn.metrics.accuracy_score(y_te, y_pred, normalize=True, sample_weight=None))) # Hamming loss (smaller is better) # $$ \text{HammingLoss}(x_i, y_i) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{xor(x_i, y_i)}{|L|}, $$ # where # - \\(|D|\\) is the number of samples # - \\(|L|\\) is the number of labels # - \\(y_i\\) is the ground truth # - \\(x_i\\) is the prediction. 
print('Hamming loss: {0}'.format(sklearn.metrics.hamming_loss(y_te, y_pred)))

# + colab={"base_uri": "https://localhost:8080/"} id="lSkXR-5OFnWq" outputId="2dd500a8-8506-4c1a-a774-9b475946ae91"
from sklearn.metrics import confusion_matrix

confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1))

# + colab={"base_uri": "https://localhost:8080/", "height": 338} id="p6uszzLEFpBY" outputId="525bb0c4-fc48-43bd-ed2d-d02bc41b3b1d"
import seaborn as sns
import matplotlib.pyplot as plt

f, ax = plt.subplots(figsize=(8, 5))
sns.heatmap(confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)), annot=True, fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()

# + [markdown] id="O6RhqI_pGTZG"
# # Astrazeneca LSTM-ADAM

# + colab={"base_uri": "https://localhost:8080/"} id="_tTr3gMlGyLi" outputId="2ee21b7f-9810-4bf2-da4e-fa7d079a4a11"
x = Embedding(len(tokenizer.word_index) + 1, emb_init_values.shape[1], weights=[emb_init_values], trainable=True)(inp)
x = LSTM(200, activation='tanh', return_sequences=True, dropout=0.4)(x)
#x = GRU(200, return_sequences=True)(x)
x = GlobalMaxPool1D()(x)
x = Dropout(0.1)(x)
x = Dense(100)(x)
x = Dropout(0.1)(x)
x = Dense(50)(x)
x = Dropout(0.1)(x)
x = Dense(28, activation="relu")(x)
x = Dropout(0.1)(x)
# Softmax output: the three sentiment labels are treated as mutually exclusive here,
# even though the loss below is binary_crossentropy.
x = Dense(3, activation="softmax")(x)
model = Model(inputs=inp, outputs=x)
opt = optimizers.Adam(lr=0.001)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

batch_size = 32
epochs = 100
RocAuc = RocAucEvaluation(validation_data=(X_te, y_te), interval=1)
model.fit(X_t, y, batch_size=batch_size, epochs=epochs, verbose=2, callbacks=[RocAuc], validation_data=(X_te, y_te))

# + colab={"base_uri": "https://localhost:8080/"} id="lv5EN7EWH-1e"
outputId="d130f036-2a55-42db-8292-ec6600ad54fc" model.summary() # + id="hGaU4TIaIAbx" from keras.models import load_model c = model.predict(X_te) # + colab={"base_uri": "https://localhost:8080/"} id="KccBLHnNIAaa" outputId="0bf5c292-3977-458f-fbf0-61082283f08c" score = model.evaluate(X_te, y_te, batch_size=batch_size, verbose=1) print('Test accuracy:', score[1]) # + colab={"base_uri": "https://localhost:8080/"} id="pIZVyuBPIRTW" outputId="04f8eaef-e366-4f29-a867-8a74969d6fe3" y_pred = model.predict(X_te) roc_auc_score(y_te, c) # + colab={"base_uri": "https://localhost:8080/", "height": 851} id="odLAzikUIZ35" outputId="ddcbccb6-d5d7-414e-f4d4-b43755c54a4d" from sklearn.metrics import roc_curve, auc fpr = dict() tpr = dict() roc_auc = dict() for i in range(3): fpr[i], tpr[i], _ = roc_curve(y_te[:, i], y_pred[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) for i in range(3): plt.figure() plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i]) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="KcaHWa9gIeBz" outputId="94b36906-2391-4a13-cc15-7a1627c68179" y_pred[y_pred>=0.23] = 1 y_pred[y_pred<0.23] = 0 roc_auc_score(y_te, y_pred) # + colab={"base_uri": "https://localhost:8080/"} id="aQutYyh8Iolp" outputId="3a7b2bd7-11af-4764-eedf-12ec2e76ef6e" import sklearn.metrics as skm import numpy as np cm = skm.multilabel_confusion_matrix(y_te, y_pred) print(cm) print( skm.classification_report(y_te,y_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="WFQplzKZIzqU" outputId="a6d82ff8-5388-4da3-a580-e3eb92adb7ed" from sklearn.metrics import accuracy_score from sklearn.metrics import hamming_loss def hamming_score(y_true, y_pred, normalize=True, sample_weight=None): ''' Compute the Hamming score (a.k.a. 
label-based accuracy) for the multi-label case http://stackoverflow.com/q/32239577/395857 ''' acc_list = [] for i in range(y_true.shape[0]): set_true = set( np.where(y_true[i])[0] ) set_pred = set( np.where(y_pred[i])[0] ) #print('\nset_true: {0}'.format(set_true)) #print('set_pred: {0}'.format(set_pred)) tmp_a = None if len(set_true) == 0 and len(set_pred) == 0: tmp_a = 1 else: tmp_a = len(set_true.intersection(set_pred))/\ float( len(set_true.union(set_pred)) ) #print('tmp_a: {0}'.format(tmp_a)) acc_list.append(tmp_a) return np.mean(acc_list) print('Hamming score: {0}'.format(hamming_score(y_te,y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="93rFi6aHIzqV" outputId="285206db-debc-484f-8516-3d6f46b8470f" import sklearn.metrics # Subset accuracy # 0.25 (= 0+1+0+0 / 4) --> 1 if the prediction for one sample fully matches the gold. 0 otherwise. print('Subset accuracy: {0}'.format(sklearn.metrics.accuracy_score(y_te, y_pred, normalize=True, sample_weight=None))) # Hamming loss (smaller is better) # $$ \text{HammingLoss}(x_i, y_i) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{xor(x_i, y_i)}{|L|}, $$ # where # - \\(|D|\\) is the number of samples # - \\(|L|\\) is the number of labels # - \\(y_i\\) is the ground truth # - \\(x_i\\) is the prediction. 
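The evaluation cells above use two different decision rules on the model's output probabilities: a fixed 0.23 threshold (which can switch on several labels per row) and `argmax(axis=1)` (which always picks exactly one). A minimal sketch of the difference, with toy probabilities invented for the example and the 0.23 cut-off taken from the notebook:

```python
import numpy as np

probs = np.array([[0.70, 0.20, 0.10],
                  [0.30, 0.40, 0.30]])

# Rule 1: fixed threshold -- a multi-label decision; row 2 turns on all three labels.
thresholded = (probs >= 0.23).astype(int)

# Rule 2: argmax one-hot -- a single-label decision, as used for the confusion matrix.
one_hot = np.eye(3, dtype=int)[probs.argmax(axis=1)]

print(thresholded)  # [[1 0 0], [1 1 1]]
print(one_hot)      # [[1 0 0], [0 1 0]]
```

Because the two rules can disagree (as in the second row), the set-based metrics and the argmax confusion matrix are not measuring the same predictions.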
# 0.416666666667 (= (1+0+3+1) / (3*4) ) print('Hamming loss: {0}'.format(sklearn.metrics.hamming_loss(y_te, y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="zOnor3aDIzqV" outputId="b34404c5-9514-40d9-f772-c294de0f16e7" from sklearn.metrics import confusion_matrix confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)) # + colab={"base_uri": "https://localhost:8080/", "height": 338} id="f2Czaj5aIzqW" outputId="9c6523bb-79b4-4c3d-e9a7-6348d814d905" import seaborn as sns import matplotlib.pyplot as plt f, ax = plt.subplots(figsize=(8,5)) sns.heatmap(confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)), annot=True, fmt=".0f", ax=ax) plt.xlabel("y_head") plt.ylabel("y_true") plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="DSAlthho4qQX" outputId="693e231c-5634-41f5-bb03-8c9a525c238a" df_untagged = df_untagged.drop(['sentimen'], axis =1) df_untagged # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="tg_PFRr45IKU" outputId="4657650b-a4ed-4249-af52-1af669b52859" #ilangin link dan uname df_untagged["Comments"]=df_untagged["Comments"].str.replace('(?:\@|https?\://)\S+', '') #ilangin lowercase df_untagged["Comments"]=df_untagged["Comments"].str.lower() #ilangin simbol df_untagged["Comments"]=df_untagged["Comments"].str.replace('[^\w\s]',' ') #ilangin angka df_untagged["Comments"]=df_untagged["Comments"].str.replace('\d+',' ') #ilangin enter df_untagged["Comments"]=df_untagged["Comments"].str.replace('\n',' ',regex=True) df_untagged["Comments"] = df_untagged["Comments"].replace('\s+', ' ', regex=True) df_untagged # + colab={"base_uri": "https://localhost:8080/"} id="QO2HMaw75sei" outputId="4f071ddb-5dc3-43e1-ec63-62fa2b1db781" list_sentences_aplikasi = df_untagged["Comments"] list_tokenized_aplikasi = tokenizer.texts_to_sequences(list_sentences_aplikasi) X_aplikasi = pad_sequences(list_tokenized_aplikasi, maxlen=maxlen) X_aplikasi # + colab={"base_uri": "https://localhost:8080/"} id="b81ByM015zYY" 
outputId="6b732d95-32ad-41c3-c966-b04dfb04edf4" y_pred_aplikasi = model.predict(X_aplikasi) y_pred_aplikasi # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="1rFFhrI852D6" outputId="93a0cfe1-d098-476c-a78e-bba94f5f3a77" dfhasil = pd.DataFrame(y_pred_aplikasi, columns=['POS', 'NET', 'NEG']) dfhasil # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="FTBYtkKg563T" outputId="2f89df28-af7f-4cba-90c0-3701a57a72b2" dfhasilramal = dfhasil.eq(dfhasil.where(dfhasil != 0).max(1), axis=0).astype(int) dfhasilramal # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="5FnRO-Tf59WR" outputId="16f4ee1e-94ba-4b4b-bc03-a830b0421415" finalresult = pd.concat([df_untagged, dfhasilramal], axis=1, sort=False) finalresult # + colab={"base_uri": "https://localhost:8080/"} id="Mzh6Vysn6C67" outputId="7be0156d-e9e3-4f20-c15f-794670abc944" finalresult.POS.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="GL9Rk2ll6FM1" outputId="03ac0d7f-22f6-4263-b9e9-7ab2ff5a6c77" finalresult.NET.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="ZdJrvcGE6Hd0" outputId="b650c405-de0a-4bec-860c-fb584149d269" finalresult.NEG.value_counts() # + id="Ta-C6AAaG669" # + [markdown] id="EDa1mtosG7aJ" # #Sinovac GRU-SGD # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="hXOCS5cxG7aK" outputId="c82161cd-505d-440f-8dd4-6103986e1544" df = pd.read_excel('/content/drive/MyDrive/Sinovac_MiningYoutubeconcat_preprocess3_6Juli EXCEL.xlsx') df = df[~df['comments'].isnull()] df = df.reset_index() df = df.drop(['index'], axis=1) df['sentimen']= df['sentimen'].fillna(4) df_untagged = df[df['sentimen'] == 4] df = df[df['sentimen'] != 4] df # + colab={"base_uri": "https://localhost:8080/", "height": 865} id="Lr3Dh-RzG7aL" outputId="01a3f43f-c358-4bc7-8f3b-c8bc3e6ae037" df_untagged = df_untagged.reset_index() df_untagged = df_untagged.drop(['index'], axis=1) df_untagged # + colab={"base_uri": "https://localhost:8080/"} 
id="pallQEncG7aL" outputId="29321a4d-6e52-499d-cbba-213b8cb73fb5" print(df.sentimen.value_counts()) # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="uQWpFseEG7aL" outputId="b9ef7923-ba36-493c-e886-fc0aa837f313" df['POS']= np.where(df['sentimen'] == 1, '1', '0') df['NET']= np.where(df['sentimen'] == 0, '1', '0') df['NEG']= np.where(df['sentimen'] == -1, '1', '0') df = df.drop(['sentimen'], axis=1) df # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="AQs4K1zEG7aL" outputId="31cfb2b3-dd7b-4358-8a6e-12ef44cd27e3" #ilangin link dan uname df["comments"]=df["comments"].str.replace('(?:\@|https?\://)\S+', '') #ilangin lowercase df["comments"]=df["comments"].str.lower() #ilangin simbol df["comments"]=df["comments"].str.replace('[^\w\s]',' ') #ilangin angka df["comments"]=df["comments"].str.replace('\d+',' ') #ilangin enter df["comments"]=df["comments"].str.replace('\n',' ',regex=True) df["comments"] = df["comments"].replace('\s+', ' ', regex=True) df # + colab={"base_uri": "https://localhost:8080/"} id="Yc4e2hm-G7aM" outputId="0f4c214e-9785-4a9e-e45f-6e988737ad1c" df_majority = df[df.NET== '0' ] df_minority = df[df.NET== '1'] df_minority_upsampled = resample(df_minority, replace=True, # sample with replacement n_samples=100, # to match majority class random_state=123) # reproducible results df_upsampled = pd.concat([df_majority, df_minority_upsampled]) df_majority_2 = df_upsampled[df_upsampled.POS== '0' ] df_minority_2 = df_upsampled[df_upsampled.POS== '1'] df_minority_upsampled_2 = resample(df_minority_2, replace=True, # sample with replacement n_samples=100, # to match majority class random_state=123) # reproducible results df_upsampled_2 = pd.concat([df_majority_2, df_minority_upsampled_2]) df = df_upsampled_2 print(df.POS.value_counts()) print(df.NET.value_counts()) print(df.NEG.value_counts()) # + id="YaS4--D7G7aM" df['POS']=df['POS'].astype(int) df['NET']=df['NET'].astype(int) df['NEG']=df['NEG'].astype(int) train, test = 
train_test_split(df, test_size=0.2, random_state = 23) list_classes = ['POS','NET','NEG'] y = train[list_classes].values y_te = test[list_classes].values list_sentences_train = train["comments"] list_sentences_test = test["comments"] max_features = 100000 tokenizer = Tokenizer(num_words=max_features) tokenizer.fit_on_texts(list(list_sentences_train)) list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train) list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test) maxlen = 50 X_t = pad_sequences(list_tokenized_train, maxlen=maxlen) X_te = pad_sequences(list_tokenized_test, maxlen=maxlen) totalNumWords = [len(one_comment) for one_comment in list_tokenized_train] # + id="l6mp6VVDG7aM" def loadEmbeddingMatrix(typeToLoad): #load different embedding file from Kaggle depending on which embedding #matrix we are going to experiment with if(typeToLoad=="word2vec"): word2vecDict = KeyedVectors.load_word2vec_format("model300.bin", binary=True) embed_size = 300 embeddings_index = dict() for word in word2vecDict.wv.vocab: embeddings_index[word] = word2vecDict.word_vec(word) print('Loaded %s word vectors.' % len(embeddings_index)) gc.collect() #We get the mean and standard deviation of the embedding weights so that we could maintain the #same statistics for the rest of our own random generated weights. all_embs = np.stack(list(embeddings_index.values())) emb_mean,emb_std = all_embs.mean(), all_embs.std() nb_words = len(tokenizer.word_index)+1 #We are going to set the embedding size to the pretrained dimension as we are replicating it. #the size will be Number of Words in Vocab X Embedding Size embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size)) gc.collect() #With the newly created embedding matrix, we'll fill it up with the words that we have in both #our own dictionary and loaded pretrained embedding. 
    embeddedCount = 0
    for word, i in tokenizer.word_index.items():
        i -= 1
        # then we see if this word is in the pretrained dictionary; if yes, get the corresponding weights
        embedding_vector = embeddings_index.get(word)
        # and store them inside the embedding matrix that we will train later on.
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
            embeddedCount += 1
    print('total embedded:', embeddedCount, 'common words')
    del(embeddings_index)
    gc.collect()
    # finally, return the embedding matrix
    return embedding_matrix

# + id="HPCwPAOkG7aM"
def create_fold_embeddings(embeddings_dim, key_vector):
    emb_init_values = []
    unk = []
    a = 0
    b = 0
    for word, i in tokenizer.word_index.items():  # iterate in index order
        if word == '<unk>':
            emb_init_values.append(np.random.uniform(-0.25, 0.25, embeddings_dim).astype('float32'))
        elif word == '<pad>':
            emb_init_values.append(np.zeros(embeddings_dim).astype('float32'))
        elif word in key_vector.wv.vocab:
            emb_init_values.append(key_vector.wv.word_vec(word))
            b = b + 1
        else:
            emb_init_values.append(np.random.uniform(-0.25, 0.25, embeddings_dim).astype('float32'))
            a = a + 1
            unk.append(word)
    # one extra random row so the matrix has len(word_index)+1 rows (index 0 is reserved)
    emb_init_values.append(np.random.uniform(-0.25, 0.25, embeddings_dim).astype('float32'))
    known_word = b
    unknown_word = a
    print(known_word, unknown_word)
    return known_word, unknown_word, emb_init_values

# + id="Fqt5Q1iQG7aM"
word_vectors = KeyedVectors.load_word2vec_format("/content/drive/MyDrive/model300.bin", binary=True, unicode_errors='ignore')
word2vec = word_vectors
embed_dim = 300

# + colab={"base_uri": "https://localhost:8080/"} id="NGE7hJDiG7aM" outputId="03fea8dd-b231-4c75-a1ab-8e5d72fa493c"
known_word, unknown_word, emb_init_values = create_fold_embeddings(embed_dim, word_vectors)
emb_init_values = np.array(emb_init_values)
print("known words:", known_word)
print("unknown words:", unknown_word)

# + id="kheqFly3G7aN"
class RocAucEvaluation(Callback):
    def __init__(self, validation_data=(), interval=1):
        super(RocAucEvaluation, self).__init__()
        self.interval = interval
        self.X_val, self.y_val = validation_data

    def on_epoch_end(self, epoch, logs={}):
        if epoch % self.interval == 0:
            y_pred = self.model.predict(self.X_val, verbose=0)
            score = roc_auc_score(self.y_val, y_pred)
            print("\n ROC-AUC - epoch: %d - score: %.6f \n" % (epoch + 1, score))

inp = Input(shape=(maxlen,))

# + colab={"base_uri": "https://localhost:8080/"} id="RwKg4fu2G7aN" outputId="1f8fa2b5-9e59-49a3-8f13-5b63f2b41056"
x = Embedding(len(tokenizer.word_index) + 1, emb_init_values.shape[1], weights=[emb_init_values], trainable=True)(inp)
#x = Bidirectional(LSTM(200, activation='tanh', return_sequences=True, dropout=0.4))(x)
x = GRU(200, return_sequences=True)(x)
x = GlobalMaxPool1D()(x)
x = Dropout(0.1)(x)
x = Dense(100)(x)
x = Dropout(0.1)(x)
x = Dense(50)(x)
x = Dropout(0.1)(x)
x = Dense(28, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(3, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
opt = optimizers.SGD(lr=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

batch_size = 32
epochs = 100
RocAuc = RocAucEvaluation(validation_data=(X_te, y_te), interval=1)
model.fit(X_t, y, batch_size=batch_size, epochs=epochs, verbose=2, callbacks=[RocAuc], validation_data=(X_te, y_te))

# + colab={"base_uri": "https://localhost:8080/"} id="mBPaYkqzG7aN" outputId="db490be9-3152-4f13-e150-185953172d77"
model.summary()

# + id="E9h0nGyHG7aN"
from keras.models import load_model

c = model.predict(X_te)

# + colab={"base_uri": "https://localhost:8080/"} id="3uJr8xQgG7aN" outputId="73111308-9137-42cf-bcfb-9b96b851a1f3"
score = model.evaluate(X_te, y_te, batch_size=batch_size, verbose=1)
print('Test accuracy:', score[1])

# + colab={"base_uri": "https://localhost:8080/"} id="5iAc0tlUG7aN"
outputId="b74c16c9-7f17-4653-a91d-25055132b881" y_pred = model.predict(X_te) roc_auc_score(y_te, c) # + id="QkOOjTHXG7aO" from sklearn.metrics import roc_curve, auc fpr = dict() tpr = dict() roc_auc = dict() for i in range(3): fpr[i], tpr[i], _ = roc_curve(y_te[:, i], y_pred[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # + colab={"base_uri": "https://localhost:8080/", "height": 851} id="849vW8SzG7aO" outputId="640a1081-34e3-4eb7-ecda-e5f61ef1bd65" for i in range(3): plt.figure() plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i]) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="-BM7JTonG7aO" outputId="4ce3b1df-6d4c-48ee-f635-dda45aae7843" y_pred[y_pred>=0.23] = 1 y_pred[y_pred<0.23] = 0 roc_auc_score(y_te, y_pred) # + colab={"base_uri": "https://localhost:8080/"} id="68YYyfkgG7aQ" outputId="56d98043-b4ce-48bf-fc95-cf72d7d2f339" import sklearn.metrics as skm import numpy as np cm = skm.multilabel_confusion_matrix(y_te, y_pred) print(cm) print( skm.classification_report(y_te,y_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="a9N4p6LxG7aQ" outputId="257081c8-a1af-470b-868c-e87eb8b3c497" from sklearn.metrics import accuracy_score from sklearn.metrics import hamming_loss def hamming_score(y_true, y_pred, normalize=True, sample_weight=None): ''' Compute the Hamming score (a.k.a. 
label-based accuracy) for the multi-label case http://stackoverflow.com/q/32239577/395857 ''' acc_list = [] for i in range(y_true.shape[0]): set_true = set( np.where(y_true[i])[0] ) set_pred = set( np.where(y_pred[i])[0] ) #print('\nset_true: {0}'.format(set_true)) #print('set_pred: {0}'.format(set_pred)) tmp_a = None if len(set_true) == 0 and len(set_pred) == 0: tmp_a = 1 else: tmp_a = len(set_true.intersection(set_pred))/\ float( len(set_true.union(set_pred)) ) #print('tmp_a: {0}'.format(tmp_a)) acc_list.append(tmp_a) return np.mean(acc_list) print('Hamming score: {0}'.format(hamming_score(y_te,y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="6WdhCANdG7aQ" outputId="08f92450-d524-40c7-c5cd-4358e9b144bf" import sklearn.metrics # Subset accuracy # 0.25 (= 0+1+0+0 / 4) --> 1 if the prediction for one sample fully matches the gold. 0 otherwise. print('Subset accuracy: {0}'.format(sklearn.metrics.accuracy_score(y_te, y_pred, normalize=True, sample_weight=None))) # Hamming loss (smaller is better) # $$ \text{HammingLoss}(x_i, y_i) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{xor(x_i, y_i)}{|L|}, $$ # where # - \\(|D|\\) is the number of samples # - \\(|L|\\) is the number of labels # - \\(y_i\\) is the ground truth # - \\(x_i\\) is the prediction. 
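The `multilabel_confusion_matrix` printed above returns one 2×2 matrix per label, laid out as `[[TN, FP], [FN, TP]]`. A toy illustration (arrays invented for the example, not the notebook's data) of how to read it:

```python
import numpy as np
from sklearn.metrics import multilabel_confusion_matrix

y_true = np.array([[0, 1, 0], [1, 0, 0]])
y_hat  = np.array([[0, 1, 0], [1, 1, 0]])

cm = multilabel_confusion_matrix(y_true, y_hat)

# Label 1 picks up one false positive: sample 2 is predicted 1 but the truth is 0.
print(cm[1])  # [[0 1], [0 1]]
```

Each per-label matrix treats that label as its own binary problem, which is why the counts differ from the single `confusion_matrix` over `argmax` classes used in the next cell.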
# 0.416666666667 (= (1+0+3+1) / (3*4) ) print('Hamming loss: {0}'.format(sklearn.metrics.hamming_loss(y_te, y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="jGMlfgL3G7aR" outputId="cf56626d-5a14-4bcf-de58-2552b5a8e858" from sklearn.metrics import confusion_matrix confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)) # + colab={"base_uri": "https://localhost:8080/", "height": 338} id="vwsN7hFzG7aR" outputId="ff329f5e-4e42-4d09-dc27-5b925398aed9" import seaborn as sns import matplotlib.pyplot as plt f, ax = plt.subplots(figsize=(8,5)) sns.heatmap(confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)), annot=True, fmt=".0f", ax=ax) plt.xlabel("y_head") plt.ylabel("y_true") plt.show() # + [markdown] id="6_1-Nu3SG7aR" # #Sinovac LSTM-ADAM # + colab={"base_uri": "https://localhost:8080/"} id="9WeEbsd9G7aR" outputId="9c40a6eb-fe9f-4117-9d8b-566572c55614" x = Embedding(len(tokenizer.word_index)+1, emb_init_values.shape[1],weights=[emb_init_values],trainable=True)(inp) # print(inp.shape) x = LSTM(200, activation='tanh', return_sequences = True, dropout=0.4)(x) #x = GRU(200, return_sequences=True)(x) x = GlobalMaxPool1D()(x) x = Dropout(0.1)(x) x = Dense(100)(x) x = Dropout(0.1)(x) x = Dense(50)(x) x = Dropout(0.1)(x) x = Dense(28, activation="relu")(x) x = Dropout(0.1)(x) x = Dense(3, activation="softmax")(x) model = Model(inputs=inp, outputs=x) opt = optimizers.Adam(lr=0.001) model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) batch_size = 32 epochs = 100 # file_path="weights_base.best.hdf5" RocAuc = RocAucEvaluation(validation_data=(X_te, y_te), interval=1) # checkpoint = ModelCheckpoint('modelEmb-{epoch:03d}-{acc:03f}-{val_acc:03f}.h5', verbose=1, monitor='val_loss',save_best_only=True, mode='auto') model.fit(X_t,y, batch_size=batch_size, epochs=epochs, verbose=2,callbacks=[RocAuc], validation_data=(X_te, y_te)) # + colab={"base_uri": "https://localhost:8080/"} id="f81F4s0FG7aR" 
outputId="3813f60a-8260-4cd5-8297-1a240eb8cffb" model.summary() # + id="tn8wnpyHG7aS" from keras.models import load_model c = model.predict(X_te) # + colab={"base_uri": "https://localhost:8080/"} id="bw_CdMInG7aS" outputId="a4ed3c1a-3343-4730-a885-4690eae6100b" score = model.evaluate(X_te, y_te, batch_size=batch_size, verbose=1) print('Test accuracy:', score[1]) # + colab={"base_uri": "https://localhost:8080/"} id="IDhx-yEFG7aS" outputId="f2170c21-2b61-461b-c181-cf5d1d4e1bf2" y_pred = model.predict(X_te) roc_auc_score(y_te, c) # + colab={"base_uri": "https://localhost:8080/", "height": 851} id="lRzlYQTnG7aS" outputId="9fc2a1bb-1939-4819-b4dd-1d4f03c31d24" from sklearn.metrics import roc_curve, auc fpr = dict() tpr = dict() roc_auc = dict() for i in range(3): fpr[i], tpr[i], _ = roc_curve(y_te[:, i], y_pred[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) for i in range(3): plt.figure() plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i]) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="vFmuZR6qG7aT" outputId="78196096-7abc-4b15-fb3e-d8b06b351db2" y_pred[y_pred>=0.23] = 1 y_pred[y_pred<0.23] = 0 roc_auc_score(y_te, y_pred) # + colab={"base_uri": "https://localhost:8080/"} id="pmVn4oC0G7aT" outputId="4fa6553d-9da3-4972-9f26-467968d52e2d" import sklearn.metrics as skm import numpy as np cm = skm.multilabel_confusion_matrix(y_te, y_pred) print(cm) print( skm.classification_report(y_te,y_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="FyEGOGrDG7aT" outputId="7ffc86ec-ef3a-490d-dd92-0dde92607ad1" from sklearn.metrics import accuracy_score from sklearn.metrics import hamming_loss def hamming_score(y_true, y_pred, normalize=True, sample_weight=None): ''' Compute the Hamming score (a.k.a. 
label-based accuracy) for the multi-label case http://stackoverflow.com/q/32239577/395857 ''' acc_list = [] for i in range(y_true.shape[0]): set_true = set( np.where(y_true[i])[0] ) set_pred = set( np.where(y_pred[i])[0] ) #print('\nset_true: {0}'.format(set_true)) #print('set_pred: {0}'.format(set_pred)) tmp_a = None if len(set_true) == 0 and len(set_pred) == 0: tmp_a = 1 else: tmp_a = len(set_true.intersection(set_pred))/\ float( len(set_true.union(set_pred)) ) #print('tmp_a: {0}'.format(tmp_a)) acc_list.append(tmp_a) return np.mean(acc_list) print('Hamming score: {0}'.format(hamming_score(y_te,y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="d_PFCOoMG7aT" outputId="3ec8c8b6-569a-47a6-f9d1-2e07bc733efb" import sklearn.metrics # Subset accuracy # 0.25 (= 0+1+0+0 / 4) --> 1 if the prediction for one sample fully matches the gold. 0 otherwise. print('Subset accuracy: {0}'.format(sklearn.metrics.accuracy_score(y_te, y_pred, normalize=True, sample_weight=None))) # Hamming loss (smaller is better) # $$ \text{HammingLoss}(x_i, y_i) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{xor(x_i, y_i)}{|L|}, $$ # where # - \\(|D|\\) is the number of samples # - \\(|L|\\) is the number of labels # - \\(y_i\\) is the ground truth # - \\(x_i\\) is the prediction. 
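Each section's class-balancing step relies on `sklearn.utils.resample` to upsample a minority label with replacement. A self-contained sketch of that pattern, with a toy DataFrame and a target count of 4 chosen purely for the example:

```python
import pandas as pd
from sklearn.utils import resample

df = pd.DataFrame({"text": ["a", "b", "c", "d", "e"],
                   "POS":  ["1", "0", "0", "0", "0"]})

minority = df[df.POS == "1"]
majority = df[df.POS == "0"]

# Sample the minority class with replacement until it matches the majority size.
upsampled = resample(minority, replace=True, n_samples=4, random_state=123)
balanced = pd.concat([majority, upsampled])

print(balanced.POS.value_counts())  # both classes now have 4 rows
```

Note that upsampling duplicates rows, so it should be done on the training portion only; duplicated rows leaking into the test split would inflate the reported metrics.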
# 0.416666666667 (= (1+0+3+1) / (3*4) ) print('Hamming loss: {0}'.format(sklearn.metrics.hamming_loss(y_te, y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="lx5DtFUpG7aT" outputId="6ebc7f23-d70b-45be-9c38-473502067fee" from sklearn.metrics import confusion_matrix confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)) # + colab={"base_uri": "https://localhost:8080/", "height": 335} id="x1LQ3UKFG7aU" outputId="b00cc6dd-ebbe-4bff-8743-0d6be6583039" import seaborn as sns import matplotlib.pyplot as plt f, ax = plt.subplots(figsize=(8,5)) sns.heatmap(confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)), annot=True, fmt=".0f", ax=ax) plt.xlabel("y_head") plt.ylabel("y_true") plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 865} id="s2jO3B8dG7aU" outputId="a54d1299-06cf-4999-eef5-95bd57c140b6" df_untagged = df_untagged.drop(['sentimen'], axis =1) df_untagged # + colab={"base_uri": "https://localhost:8080/", "height": 865} id="SJ7O1AOwG7aU" outputId="76426bf5-b63c-45af-9846-ba65be406705" #ilangin link dan uname df_untagged["comments"]=df_untagged["comments"].str.replace('(?:\@|https?\://)\S+', '') #ilangin lowercase df_untagged["comments"]=df_untagged["comments"].str.lower() #ilangin simbol df_untagged["comments"]=df_untagged["comments"].str.replace('[^\w\s]',' ') #ilangin angka df_untagged["comments"]=df_untagged["comments"].str.replace('\d+',' ') #ilangin enter df_untagged["comments"]=df_untagged["comments"].str.replace('\n',' ',regex=True) df_untagged["comments"] = df_untagged["comments"].replace('\s+', ' ', regex=True) df_untagged # + colab={"base_uri": "https://localhost:8080/"} id="NDCQOJDwG7aU" outputId="952067e6-ccaf-4056-a070-949b151a5272" list_sentences_aplikasi = df_untagged["comments"] list_tokenized_aplikasi = tokenizer.texts_to_sequences(list_sentences_aplikasi) X_aplikasi = pad_sequences(list_tokenized_aplikasi, maxlen=maxlen) X_aplikasi # + colab={"base_uri": "https://localhost:8080/"} id="-zVghhfdG7aV" 
outputId="2ac8f085-45da-4dc9-973b-761c949c2ae3" y_pred_aplikasi = model.predict(X_aplikasi) y_pred_aplikasi # + colab={"base_uri": "https://localhost:8080/", "height": 865} id="693kVVcJG7aV" outputId="fd8c4656-98d2-480d-bd0f-ea58ac75091f" dfhasil = pd.DataFrame(y_pred_aplikasi, columns=['POS', 'NET', 'NEG']) dfhasil # + colab={"base_uri": "https://localhost:8080/", "height": 865} id="H-yMnZo4G7aV" outputId="5ee08e8b-3d0b-4d43-c6f0-2f41e95d0085" dfhasilramal = dfhasil.eq(dfhasil.where(dfhasil != 0).max(1), axis=0).astype(int) dfhasilramal # + colab={"base_uri": "https://localhost:8080/", "height": 865} id="yO5-_DLBG7aV" outputId="a71b8790-bea4-4fbf-9a2b-6b21f9373cee" finalresult = pd.concat([df_untagged, dfhasilramal], axis=1, sort=False) finalresult # + colab={"base_uri": "https://localhost:8080/"} id="vjZsNoGaG7aV" outputId="b69fc5b3-5bf0-4816-aac3-f07909350e8e" finalresult.POS.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="oVzs0E3UG7aV" outputId="4d90cdd2-ddca-4da8-8a77-14451dda5789" finalresult.NET.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="k1K0rGpDG7aX" outputId="b1aa75da-b64f-40ea-b11e-8eda4886bfe8" finalresult.NEG.value_counts() # + id="gu9WegezK7WP" # + [markdown] id="i7zkfFlmLB0M" # #Vaksin GRU-SGD # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="LzNLRJT_LB0N" outputId="7417eea9-dc1a-4aeb-b345-fab0191b1892" df = pd.read_excel('/content/drive/MyDrive/Vaksin_MiningYoutubeconcat_preprocess3_6Juli EXCEL.xlsx') df = df[~df['comments'].isnull()] df = df.reset_index() df = df.drop(['index'], axis=1) df['sentimen']= df['sentimen'].fillna(4) df_untagged = df[df['sentimen'] == 4] df = df[df['sentimen'] != 4] df # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="koB6xnIkLB0N" outputId="1001d3ae-8673-49e3-eb30-f76e50d932c9" df_untagged = df_untagged.reset_index() df_untagged = df_untagged.drop(['index'], axis=1) df_untagged # + colab={"base_uri": "https://localhost:8080/"} 
id="UBf7dF_SLB0N" outputId="eba85727-05fb-4bc2-9c87-9da6f0d72331" print(df.sentimen.value_counts()) # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="0VxlTtOCLB0N" outputId="caaa6c50-a732-4037-8d44-6574a9ea1e5a" df['POS']= np.where(df['sentimen'] == 1, '1', '0') df['NET']= np.where(df['sentimen'] == 0, '1', '0') df['NEG']= np.where(df['sentimen'] == -1, '1', '0') df = df.drop(['sentimen'], axis=1) df # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="_nZ7L9h8LB0O" outputId="78049a23-a903-4f27-bbcd-8bff72360b2c" #ilangin link dan uname df["comments"]=df["comments"].str.replace('(?:\@|https?\://)\S+', '') #ilangin lowercase df["comments"]=df["comments"].str.lower() #ilangin simbol df["comments"]=df["comments"].str.replace('[^\w\s]',' ') #ilangin angka df["comments"]=df["comments"].str.replace('\d+',' ') #ilangin enter df["comments"]=df["comments"].str.replace('\n',' ',regex=True) df["comments"] = df["comments"].replace('\s+', ' ', regex=True) df # + colab={"base_uri": "https://localhost:8080/"} id="nLa0JrkDLB0O" outputId="d40ac30e-a53d-4981-bffc-9eab8321aaf5" df_majority = df[df.POS== '0' ] df_minority = df[df.POS== '1'] df_minority_upsampled = resample(df_minority, replace=True, # sample with replacement n_samples=235, # to match majority class random_state=123) # reproducible results df_upsampled = pd.concat([df_majority, df_minority_upsampled]) df_majority_2 = df_upsampled[df_upsampled.NET== '0' ] df_minority_2 = df_upsampled[df_upsampled.NET== '1'] df_minority_upsampled_2 = resample(df_minority_2, replace=True, # sample with replacement n_samples=235, # to match majority class random_state=123) # reproducible results df_upsampled_2 = pd.concat([df_majority_2, df_minority_upsampled_2]) df = df_upsampled_2 print(df.POS.value_counts()) print(df.NET.value_counts()) print(df.NEG.value_counts()) # + id="GHjrqH5NLB0O" df['POS']=df['POS'].astype(int) df['NET']=df['NET'].astype(int) df['NEG']=df['NEG'].astype(int) train, test = 
train_test_split(df, test_size=0.2, random_state = 23) list_classes = ['POS','NET','NEG'] y = train[list_classes].values y_te = test[list_classes].values list_sentences_train = train["comments"] list_sentences_test = test["comments"] max_features = 100000 tokenizer = Tokenizer(num_words=max_features) tokenizer.fit_on_texts(list(list_sentences_train)) list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train) list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test) maxlen = 50 X_t = pad_sequences(list_tokenized_train, maxlen=maxlen) X_te = pad_sequences(list_tokenized_test, maxlen=maxlen) totalNumWords = [len(one_comment) for one_comment in list_tokenized_train] # + id="zc7m_nugLB0O" def loadEmbeddingMatrix(typeToLoad): #load different embedding file from Kaggle depending on which embedding #matrix we are going to experiment with if(typeToLoad=="word2vec"): word2vecDict = KeyedVectors.load_word2vec_format("model300.bin", binary=True) embed_size = 300 embeddings_index = dict() for word in word2vecDict.wv.vocab: embeddings_index[word] = word2vecDict.word_vec(word) print('Loaded %s word vectors.' % len(embeddings_index)) gc.collect() #We get the mean and standard deviation of the embedding weights so that we could maintain the #same statistics for the rest of our own random generated weights. all_embs = np.stack(list(embeddings_index.values())) emb_mean,emb_std = all_embs.mean(), all_embs.std() nb_words = len(tokenizer.word_index)+1 #We are going to set the embedding size to the pretrained dimension as we are replicating it. #the size will be Number of Words in Vocab X Embedding Size embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size)) gc.collect() #With the newly created embedding matrix, we'll fill it up with the words that we have in both #our own dictionary and loaded pretrained embedding. 
    embeddedCount = 0
    for word, i in tokenizer.word_index.items():
        i -= 1
        # check whether this word is in the pretrained dictionary; if yes, get the corresponding weights
        embedding_vector = embeddings_index.get(word)
        # and store it inside the embedding matrix that we will train later on.
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
            embeddedCount += 1
    print('total embedded:', embeddedCount, 'common words')
    del(embeddings_index)
    gc.collect()
    # finally, return the embedding matrix
    return embedding_matrix

# + id="T9hfpsLOLB0O"
def create_fold_embeddings(embeddings_dim, key_vector):
    emb_init_values = []
    unk = []
    a = 0
    b = 0
    for word, i in tokenizer.word_index.items():
        # to make sure the rows follow the tokenizer's index order
        if word == '<unk>':
            emb_init_values.append(np.random.uniform(-0.25, 0.25, embeddings_dim).astype('float32'))
        elif word == '<pad>':
            emb_init_values.append(np.zeros(embeddings_dim).astype('float32'))
        elif word in key_vector.wv.vocab:
            emb_init_values.append(key_vector.wv.word_vec(word))
            b = b + 1
        else:
            emb_init_values.append(np.random.uniform(-0.25, 0.25, embeddings_dim).astype('float32'))
            a = a + 1
            unk.append(word)
            # print(word)
    # one extra random row, so the matrix has len(tokenizer.word_index)+1 rows
    emb_init_values.append(np.random.uniform(-0.25, 0.25, embeddings_dim).astype('float32'))
    known_word = b
    unknown_word = a
    print(known_word, unknown_word)
    return known_word, unknown_word, emb_init_values

# + id="q4CEmN1GLB0O"
word_vectors = KeyedVectors.load_word2vec_format("/content/drive/MyDrive/model300.bin", binary=True, unicode_errors='ignore')
word2vec = word_vectors
embed_dim = 300

# + colab={"base_uri": "https://localhost:8080/"} id="jc15SmckLB0O" outputId="fc643dac-53a7-4559-f809-7a231f5cd497"
known_word, unknown_word, emb_init_values = create_fold_embeddings(embed_dim, word_vectors)
emb_init_values = np.array(emb_init_values)
print("known words:", known_word)
print("unknown words:", unknown_word)

# + id="YP6BuB9ALB0P"
class RocAucEvaluation(Callback):
    def __init__(self, validation_data=(), interval=1):
        super(RocAucEvaluation, self).__init__()
self.interval = interval self.X_val, self.y_val = validation_data def on_epoch_end(self, epoch, logs={}): if epoch % self.interval == 0: y_pred = self.model.predict(self.X_val, verbose=0) score = roc_auc_score(self.y_val, y_pred) print("\n ROC-AUC - epoch: %d - score: %.6f \n" % (epoch+1, score)) inp = Input(shape=(maxlen,)) # + colab={"base_uri": "https://localhost:8080/"} id="nsLRMarXLB0P" outputId="a868e253-1340-471f-f851-f4c4522cafd9" x = Embedding(len(tokenizer.word_index)+1, emb_init_values.shape[1],weights=[emb_init_values],trainable=True)(inp) # print(inp.shape) #x = Bidirectional(LSTM(200, activation='tanh', return_sequences = True, dropout=0.4))(x) x = GRU(200, return_sequences=True)(x) x = GlobalMaxPool1D()(x) x = Dropout(0.1)(x) x = Dense(100)(x) x = Dropout(0.1)(x) x = Dense(50)(x) x = Dropout(0.1)(x) x = Dense(28, activation="relu")(x) x = Dropout(0.1)(x) x = Dense(3, activation="sigmoid")(x) model = Model(inputs=inp, outputs=x) opt = optimizers.SGD(lr=0.01) model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) batch_size = 32 epochs = 100 # file_path="weights_base.best.hdf5" RocAuc = RocAucEvaluation(validation_data=(X_te, y_te), interval=1) # checkpoint = ModelCheckpoint('modelEmb-{epoch:03d}-{acc:03f}-{val_acc:03f}.h5', verbose=1, monitor='val_loss',save_best_only=True, mode='auto') model.fit(X_t,y, batch_size=batch_size, epochs=epochs, verbose=2,callbacks=[RocAuc], validation_data=(X_te, y_te)) # + colab={"base_uri": "https://localhost:8080/"} id="-vOXVd6kLB0P" outputId="e9e6296d-b31d-4bd3-c76f-463f5aad3c2d" model.summary() # + id="Ng3qO5eRLB0P" from keras.models import load_model c = model.predict(X_te) # + colab={"base_uri": "https://localhost:8080/"} id="Ym9ZppngLB0P" outputId="ef731993-ed33-4c16-d18c-075818da7f34" score = model.evaluate(X_te, y_te, batch_size=batch_size, verbose=1) print('Test accuracy:', score[1]) # + colab={"base_uri": "https://localhost:8080/"} id="s5MEa7L6LB0P" 
outputId="e1da224d-737b-44ad-8c28-3ce9ae863290" y_pred = model.predict(X_te) roc_auc_score(y_te, c) # + id="EK9PJReoLB0Q" from sklearn.metrics import roc_curve, auc fpr = dict() tpr = dict() roc_auc = dict() for i in range(3): fpr[i], tpr[i], _ = roc_curve(y_te[:, i], y_pred[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # + colab={"base_uri": "https://localhost:8080/", "height": 851} id="Iahz0v_4LB0Q" outputId="1c8372a8-2d99-4ef6-b5ab-8e7bdfa8f635" for i in range(3): plt.figure() plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i]) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="n76v7PN4LB0Q" outputId="52d0d04f-72cc-4f66-b4e1-132e7efdda9b" y_pred[y_pred>=0.23] = 1 y_pred[y_pred<0.23] = 0 roc_auc_score(y_te, y_pred) # + colab={"base_uri": "https://localhost:8080/"} id="X4ipY9RwLB0Q" outputId="3f6de078-b41d-4a42-f0b1-40d37b752f28" import sklearn.metrics as skm import numpy as np cm = skm.multilabel_confusion_matrix(y_te, y_pred) print(cm) print( skm.classification_report(y_te,y_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="7AC3wu5LLB0Q" outputId="b3d19663-c100-4d90-8e1b-9303981c1f6a" from sklearn.metrics import accuracy_score from sklearn.metrics import hamming_loss def hamming_score(y_true, y_pred, normalize=True, sample_weight=None): ''' Compute the Hamming score (a.k.a. 
label-based accuracy) for the multi-label case http://stackoverflow.com/q/32239577/395857 ''' acc_list = [] for i in range(y_true.shape[0]): set_true = set( np.where(y_true[i])[0] ) set_pred = set( np.where(y_pred[i])[0] ) #print('\nset_true: {0}'.format(set_true)) #print('set_pred: {0}'.format(set_pred)) tmp_a = None if len(set_true) == 0 and len(set_pred) == 0: tmp_a = 1 else: tmp_a = len(set_true.intersection(set_pred))/\ float( len(set_true.union(set_pred)) ) #print('tmp_a: {0}'.format(tmp_a)) acc_list.append(tmp_a) return np.mean(acc_list) print('Hamming score: {0}'.format(hamming_score(y_te,y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="-lb4zN5ALB0Q" outputId="38f235c0-f43c-4d1a-c854-cc05bdaa656e" import sklearn.metrics # Subset accuracy # 0.25 (= 0+1+0+0 / 4) --> 1 if the prediction for one sample fully matches the gold. 0 otherwise. print('Subset accuracy: {0}'.format(sklearn.metrics.accuracy_score(y_te, y_pred, normalize=True, sample_weight=None))) # Hamming loss (smaller is better) # $$ \text{HammingLoss}(x_i, y_i) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{xor(x_i, y_i)}{|L|}, $$ # where # - \\(|D|\\) is the number of samples # - \\(|L|\\) is the number of labels # - \\(y_i\\) is the ground truth # - \\(x_i\\) is the prediction. 
# 0.416666666667 (= (1+0+3+1) / (3*4) ) print('Hamming loss: {0}'.format(sklearn.metrics.hamming_loss(y_te, y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="yNiLLbADLB0Q" outputId="89a2444d-0616-4fdd-9829-55cbdd0d451b" from sklearn.metrics import confusion_matrix confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)) # + colab={"base_uri": "https://localhost:8080/", "height": 338} id="lOhZMz0HLB0R" outputId="60a39fed-9894-4e04-c97b-828f6d9b95e9" import seaborn as sns import matplotlib.pyplot as plt f, ax = plt.subplots(figsize=(8,5)) sns.heatmap(confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)), annot=True, fmt=".0f", ax=ax) plt.xlabel("y_head") plt.ylabel("y_true") plt.show() # + [markdown] id="gQF3_dK8LB0S" # #Vaksin LSTM-ADAM # + colab={"base_uri": "https://localhost:8080/"} id="ff4tiE0ULB0T" outputId="20c55569-2cd9-45b3-b06c-9ae35240b369" x = Embedding(len(tokenizer.word_index)+1, emb_init_values.shape[1],weights=[emb_init_values],trainable=True)(inp) # print(inp.shape) x = LSTM(200, activation='tanh', return_sequences = True, dropout=0.4)(x) #x = GRU(200, return_sequences=True)(x) x = GlobalMaxPool1D()(x) x = Dropout(0.1)(x) x = Dense(100)(x) x = Dropout(0.1)(x) x = Dense(50)(x) x = Dropout(0.1)(x) x = Dense(28, activation="relu")(x) x = Dropout(0.1)(x) x = Dense(3, activation="softmax")(x) model = Model(inputs=inp, outputs=x) opt = optimizers.Adam(lr=0.001) model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) batch_size = 32 epochs = 100 # file_path="weights_base.best.hdf5" RocAuc = RocAucEvaluation(validation_data=(X_te, y_te), interval=1) # checkpoint = ModelCheckpoint('modelEmb-{epoch:03d}-{acc:03f}-{val_acc:03f}.h5', verbose=1, monitor='val_loss',save_best_only=True, mode='auto') model.fit(X_t,y, batch_size=batch_size, epochs=epochs, verbose=2,callbacks=[RocAuc], validation_data=(X_te, y_te)) # + colab={"base_uri": "https://localhost:8080/"} id="ne5vY8qBLB0T" 
outputId="4f7c2e34-c329-40df-fb50-54fd7c13617c" model.summary() # + id="KW6VrNwCLB0T" from keras.models import load_model c = model.predict(X_te) # + colab={"base_uri": "https://localhost:8080/"} id="P-yjbGjdLB0T" outputId="59155afe-8cbb-469e-cb60-4a887aa8be6f" score = model.evaluate(X_te, y_te, batch_size=batch_size, verbose=1) print('Test accuracy:', score[1]) # + colab={"base_uri": "https://localhost:8080/"} id="sVqYCXPjLB0T" outputId="1516afd6-b956-46d0-969c-d0a7ab6ba745" y_pred = model.predict(X_te) roc_auc_score(y_te, c) # + colab={"base_uri": "https://localhost:8080/", "height": 851} id="Sfe6yv2FLB0T" outputId="5549a0fb-1bc9-4b99-f3da-8304d52dc763" from sklearn.metrics import roc_curve, auc fpr = dict() tpr = dict() roc_auc = dict() for i in range(3): fpr[i], tpr[i], _ = roc_curve(y_te[:, i], y_pred[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) for i in range(3): plt.figure() plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i]) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="6OOUXkQ9LB0U" outputId="a1e271a4-a2d5-4326-b2f7-20543eca1d1d" y_pred[y_pred>=0.23] = 1 y_pred[y_pred<0.23] = 0 roc_auc_score(y_te, y_pred) # + colab={"base_uri": "https://localhost:8080/"} id="vLgxg6AoLB0U" outputId="0493016c-9703-420c-876b-d04148c99607" import sklearn.metrics as skm import numpy as np cm = skm.multilabel_confusion_matrix(y_te, y_pred) print(cm) print( skm.classification_report(y_te,y_pred)) # + colab={"base_uri": "https://localhost:8080/"} id="p1jWD8bELB0U" outputId="890801ee-254a-4ca1-8986-c58bf396080f" from sklearn.metrics import accuracy_score from sklearn.metrics import hamming_loss def hamming_score(y_true, y_pred, normalize=True, sample_weight=None): ''' Compute the Hamming score (a.k.a. 
label-based accuracy) for the multi-label case http://stackoverflow.com/q/32239577/395857 ''' acc_list = [] for i in range(y_true.shape[0]): set_true = set( np.where(y_true[i])[0] ) set_pred = set( np.where(y_pred[i])[0] ) #print('\nset_true: {0}'.format(set_true)) #print('set_pred: {0}'.format(set_pred)) tmp_a = None if len(set_true) == 0 and len(set_pred) == 0: tmp_a = 1 else: tmp_a = len(set_true.intersection(set_pred))/\ float( len(set_true.union(set_pred)) ) #print('tmp_a: {0}'.format(tmp_a)) acc_list.append(tmp_a) return np.mean(acc_list) print('Hamming score: {0}'.format(hamming_score(y_te,y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="T2Bs0qZcLB0U" outputId="a5bb04d2-7b1d-4a5b-9cd9-a02c1ac81c9a" import sklearn.metrics # Subset accuracy # 0.25 (= 0+1+0+0 / 4) --> 1 if the prediction for one sample fully matches the gold. 0 otherwise. print('Subset accuracy: {0}'.format(sklearn.metrics.accuracy_score(y_te, y_pred, normalize=True, sample_weight=None))) # Hamming loss (smaller is better) # $$ \text{HammingLoss}(x_i, y_i) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{xor(x_i, y_i)}{|L|}, $$ # where # - \\(|D|\\) is the number of samples # - \\(|L|\\) is the number of labels # - \\(y_i\\) is the ground truth # - \\(x_i\\) is the prediction. 
# 0.416666666667 (= (1+0+3+1) / (3*4) )
print('Hamming loss: {0}'.format(sklearn.metrics.hamming_loss(y_te, y_pred)))

# + colab={"base_uri": "https://localhost:8080/"} id="-WE12yqcLB0U" outputId="38f4b055-2dd7-4254-a5bb-4e5d1ad990ba"
from sklearn.metrics import confusion_matrix
confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1))

# + colab={"base_uri": "https://localhost:8080/", "height": 338} id="YnmrdQ0rLB0V" outputId="5ef8870b-28d5-4511-998c-66fc92681639"
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(8,5))
sns.heatmap(confusion_matrix(y_te.argmax(axis=1), y_pred.argmax(axis=1)), annot=True, fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()

# + colab={"base_uri": "https://localhost:8080/", "height": 408} id="agArJe35LB0V" outputId="caaba85b-dbe0-4b77-f1cb-8ca82ae50611"
df_untagged = df_untagged.drop(['sentimen'], axis=1)
df_untagged

# + colab={"base_uri": "https://localhost:8080/", "height": 408} id="n8TXMV0iLB0V" outputId="a6e79f8f-bda1-473d-cc39-b40d26804177"
# remove links and usernames
df_untagged["comments"] = df_untagged["comments"].str.replace(r'(?:\@|https?\://)\S+', '', regex=True)
# convert to lowercase
df_untagged["comments"] = df_untagged["comments"].str.lower()
# remove punctuation and symbols
df_untagged["comments"] = df_untagged["comments"].str.replace(r'[^\w\s]', ' ', regex=True)
# remove digits
df_untagged["comments"] = df_untagged["comments"].str.replace(r'\d+', ' ', regex=True)
# remove newlines
df_untagged["comments"] = df_untagged["comments"].str.replace('\n', ' ', regex=True)
# collapse repeated whitespace
df_untagged["comments"] = df_untagged["comments"].replace(r'\s+', ' ', regex=True)
df_untagged

# + colab={"base_uri": "https://localhost:8080/"} id="Mhw0aJ88kexe" outputId="d9c2756e-4200-46bc-cdd6-fab35959cbc3"
df_untagged.comments = df_untagged.comments.astype(str)
df_untagged.info()

# + colab={"base_uri": "https://localhost:8080/"} id="157LEgXRLB0V" outputId="6bce5812-0e5e-444e-fda0-6665deee802e"
list_sentences_aplikasi = df_untagged["comments"]
list_tokenized_aplikasi = 
tokenizer.texts_to_sequences(list_sentences_aplikasi) X_aplikasi = pad_sequences(list_tokenized_aplikasi, maxlen=maxlen) X_aplikasi # + colab={"base_uri": "https://localhost:8080/"} id="Luy8CQ4vLB0W" outputId="2244f2fa-94d5-499c-dbd9-30a67a6e7c26" y_pred_aplikasi = model.predict(X_aplikasi) y_pred_aplikasi # + colab={"base_uri": "https://localhost:8080/", "height": 865} id="exMpYp7dLB0W" outputId="fd8c4656-98d2-480d-bd0f-ea58ac75091f" dfhasil = pd.DataFrame(y_pred_aplikasi, columns=['POS', 'NET', 'NEG']) dfhasil # + colab={"base_uri": "https://localhost:8080/", "height": 865} id="eYp7H8T7LB0W" outputId="a24781bd-ec4d-4a14-bfad-02df645a76ac" dfhasilramal = dfhasil.eq(dfhasil.where(dfhasil != 0).max(1), axis=0).astype(int) dfhasilramal # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="p9UmmmILLB0W" outputId="1a5cf668-0050-44fd-8361-5ebff86be648" finalresult = pd.concat([df_untagged, dfhasilramal], axis=1, sort=False) finalresult # + colab={"base_uri": "https://localhost:8080/"} id="EkHtuRlYLB0X" outputId="50053753-6a3c-4a53-8fc6-66d84e13db16" finalresult.POS.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="k07UQpT0LB0X" outputId="a82ddbbd-8eb1-4502-f6c8-6a1e2ea07554" finalresult.NET.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="sAkLSMy4LB0X" outputId="ab2ff88b-8753-4bf6-9c4e-6f74986fc5ca" finalresult.NEG.value_counts()
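The row-wise `eq`/`where`/`max` trick used above to turn the predicted probabilities into one-hot labels is hard to read. An equivalent, arguably clearer sketch (on hypothetical example probabilities, not the notebook's actual predictions) uses `idxmax` plus `get_dummies`:

```python
import pandas as pd

# hypothetical predicted probabilities for three comments
dfhasil = pd.DataFrame(
    [[0.70, 0.20, 0.10],
     [0.05, 0.15, 0.80],
     [0.30, 0.60, 0.10]],
    columns=['POS', 'NET', 'NEG'])

# idxmax picks the column with the highest probability per row;
# get_dummies turns that label back into 0/1 indicator columns
labels = pd.get_dummies(dfhasil.idxmax(axis=1))
onehot = labels.reindex(columns=['POS', 'NET', 'NEG'], fill_value=0).astype(int)
print(onehot)
```

The `reindex` call restores the original `POS`/`NET`/`NEG` column order, since `get_dummies` sorts its output columns alphabetically.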
Code/Youtube.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Step 3.1: Experiment II: Machine Learning

# ---

# ## 1. Imports

import warnings
warnings.filterwarnings('ignore')

import math
import numpy as np   # matrix and vector operations
import pandas as pd  # data handling
import random
import matplotlib.pyplot as plt  # plotting
import seaborn as sns
import joblib

from sklearn import naive_bayes
from sklearn import tree
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import linear_model
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split  # dataset partitioning for evaluation
from sklearn import preprocessing
from sklearn import metrics
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_validate
from sklearn.model_selection import GridSearchCV
from sklearn.tree import export_graphviz
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import accuracy_score

# ---

# ## 2. Load the Standardized Dataset

dataframe = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Standardized\SDatasetExp2.csv", delimiter=",")

dataframe.head(2)

dataframe.shape

# ---

# ## 3. Let's Split the data

x = dataframe.iloc[:, :-1]
y = dataframe['Type']

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=1234)

print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)

y_test = y_test.astype(np.int64)

# ---

# ## 4. Let's create a Dataframe to save the Accuracies...
acc_Machine_Learning = pd.DataFrame(columns=["Name","Accuracy_Value","CV","Type","Kappa"]) # --- # --- # ## 5. :::::::: MACHINE LEARNING :::::::: x = X_train y = y_train # #### 5.1 Gaussian Naive Bayes # ##### Train gnb = naive_bayes.GaussianNB() params = {} gscv_gnb = GridSearchCV(gnb, params, cv=10, return_train_score=True) gscv_gnb.fit(x,y) gscv_gnb.cv_results_ # The **best_score (Mean cross-validated score of the best_estimator)** is : gscv_gnb.best_score_ # The **best estimator (model)** is : gnb = gscv_gnb.best_estimator_ gnb acc_Machine_Learning= acc_Machine_Learning.append({'Name' : gnb , 'Accuracy_Value' : gscv_gnb.best_score_ , 'CV' : 10 , 'Type' : 'TRAIN'}, ignore_index=True) acc_Machine_Learning # ##### Test y_pred= gnb.predict(X_test) unique, counts = np.unique(y_test, return_counts=True) dict(zip(unique, counts)) unique, counts = np.unique(y_pred, return_counts=True) dict(zip(unique, counts)) cm= metrics.confusion_matrix(y_test, y_pred) plt.imshow(cm, cmap=plt.cm.Blues) plt.title("Matriz de confusión") plt.colorbar() tick_marks = np.arange(3) plt.xticks(tick_marks, ['0','1']) plt.yticks(tick_marks, ['0','1']) print(classification_report(y_test, y_pred, labels=None)) # --- acc_Machine_Learning= acc_Machine_Learning.append({'Name' : gnb , 'Accuracy_Value': accuracy_score(y_test,y_pred) , 'CV' : 10 , 'Type' : 'TEST' , 'Kappa' : cohen_kappa_score(y_test, y_pred)}, ignore_index=True) acc_Machine_Learning # --- # #### 5.2 Decision Tree Classifier # ##### Train dtc = tree.DecisionTreeClassifier(random_state=1234) tree_params = {'criterion':['gini','entropy'], 'max_depth':[4,5,6,7,8,9,10,11,12,15,20,30,40,50,70,90,120,150], 'max_leaf_nodes': list(range(2, 100)), 'min_samples_split': [2, 3, 4]} gscv_dtc = GridSearchCV(dtc, tree_params, cv=10) gscv_dtc.fit(x,y) # The **best_score (Mean cross-validated score of the best_estimator)** is : gscv_dtc.best_score_ # The **best estimator (model)** is : dtc = gscv_dtc.best_estimator_ dtc acc_Machine_Learning= 
acc_Machine_Learning.append({'Name' : dtc, 'Accuracy_Value' : gscv_dtc.best_score_ ,'CV' : 10 ,'Type' : 'TRAIN'}, ignore_index=True) acc_Machine_Learning # ##### Test y_pred= dtc.predict(X_test) unique, counts = np.unique(y_test, return_counts=True) dict(zip(unique, counts)) unique, counts = np.unique(y_pred, return_counts=True) dict(zip(unique, counts)) cm= metrics.confusion_matrix(y_test, y_pred) plt.imshow(cm, cmap=plt.cm.Blues) plt.title("Matriz de confusión") plt.colorbar() tick_marks = np.arange(3) plt.xticks(tick_marks, ['0','1']) plt.yticks(tick_marks, ['0','1']) print(classification_report(y_test, y_pred, labels=None)) # --- acc_Machine_Learning= acc_Machine_Learning.append({'Name' : dtc , 'Accuracy_Value': accuracy_score(y_test,y_pred) , 'CV' : 10 , 'Type' : 'TEST' , 'Kappa' : cohen_kappa_score(y_test, y_pred)}, ignore_index=True) acc_Machine_Learning # --- # #### 5.3 KNN # ##### Train knn = KNeighborsClassifier() knn_params = {'n_neighbors':[1,3,5], 'weights' : ['uniform','distance'], 'metric':['euclidean','manhattan'] } gscv_knn = GridSearchCV(knn, knn_params, cv=10) gscv_knn.fit(x,y) # The **best_score (Mean cross-validated score of the best_estimator)** is : gscv_knn.best_score_ # The **best estimator (model)** is : knn = gscv_knn.best_estimator_ knn acc_Machine_Learning= acc_Machine_Learning.append({'Name' : knn ,'Accuracy_Value' : gscv_knn.best_score_ , 'CV' :10 , 'Type' : 'TRAIN'}, ignore_index=True) acc_Machine_Learning # ##### Test y_pred= knn.predict(X_test) unique, counts = np.unique(y_test, return_counts=True) dict(zip(unique, counts)) unique, counts = np.unique(y_pred, return_counts=True) dict(zip(unique, counts)) cm= metrics.confusion_matrix(y_test, y_pred) plt.imshow(cm, cmap=plt.cm.Blues) plt.title("Matriz de confusión") plt.colorbar() tick_marks = np.arange(3) plt.xticks(tick_marks, ['0','1']) plt.yticks(tick_marks, ['0','1']) print(classification_report(y_test, y_pred, labels=None)) # --- acc_Machine_Learning= 
acc_Machine_Learning.append({'Name' : knn , 'Accuracy_Value': accuracy_score(y_test,y_pred) , 'CV' : 10 , 'Type' : 'TEST' , 'Kappa' : cohen_kappa_score(y_test, y_pred)}, ignore_index=True) acc_Machine_Learning # --- # #### 5.4 Logistic Regression # ##### Train logreg = linear_model.LogisticRegression() params = {} gscv_lg = GridSearchCV(logreg, params, cv=10) gscv_lg.fit(x,y) # The **best_score (Mean cross-validated score of the best_estimator)** is : gscv_lg.best_score_ # The **best estimator (model)** is : logreg = gscv_lg.best_estimator_ logreg acc_Machine_Learning= acc_Machine_Learning.append({'Name' : logreg , 'Accuracy_Value' : gscv_lg.best_score_ , 'CV' :10 , 'Type' : 'TRAIN'} , ignore_index=True) acc_Machine_Learning # ##### Test y_pred= logreg.predict(X_test) unique, counts = np.unique(y_test, return_counts=True) dict(zip(unique, counts)) unique, counts = np.unique(y_pred, return_counts=True) dict(zip(unique, counts)) cm= metrics.confusion_matrix(y_test, y_pred) plt.imshow(cm, cmap=plt.cm.Blues) plt.title("Matriz de confusión") plt.colorbar() tick_marks = np.arange(3) plt.xticks(tick_marks, ['0','1']) plt.yticks(tick_marks, ['0','1']) print(classification_report(y_test, y_pred, labels=None)) # --- acc_Machine_Learning= acc_Machine_Learning.append({'Name' : logreg , 'Accuracy_Value': accuracy_score(y_test,y_pred) , 'CV' : 10 , 'Type' : 'TEST' , 'Kappa' : cohen_kappa_score(y_test, y_pred)}, ignore_index=True) acc_Machine_Learning # --- # #### 5.5 Random Forest Classifier # ##### Train clf = RandomForestClassifier(random_state=1234) clf_param = { 'n_estimators': [64, 128], 'max_features': ['auto', 'sqrt', 'log2'], 'max_depth' : [4,5,6,7,8,9,10,11,12,13,14,15], 'criterion' :['gini', 'entropy']} gscv_rfc = GridSearchCV(clf, clf_param, cv=10) gscv_rfc.fit(x,y) # The **best_score (Mean cross-validated score of the best_estimator)** is : gscv_rfc.best_score_ # The **best estimator (model)** is : clf = gscv_rfc.best_estimator_ clf acc_Machine_Learning= 
acc_Machine_Learning.append({'Name' : clf ,'Accuracy_Value' : gscv_rfc.best_score_ ,'CV' :10 ,'Type' : 'TRAIN'} , ignore_index=True) acc_Machine_Learning # ##### Test y_pred= clf.predict(X_test) unique, counts = np.unique(y_test, return_counts=True) dict(zip(unique, counts)) unique, counts = np.unique(y_pred, return_counts=True) dict(zip(unique, counts)) cm= metrics.confusion_matrix(y_test, y_pred) plt.imshow(cm, cmap=plt.cm.Blues) plt.title("Matriz de confusión") plt.colorbar() tick_marks = np.arange(3) plt.xticks(tick_marks, ['0','1']) plt.yticks(tick_marks, ['0','1']) print(classification_report(y_test, y_pred, labels=None)) # --- acc_Machine_Learning= acc_Machine_Learning.append({'Name' : clf , 'Accuracy_Value': accuracy_score(y_test,y_pred) , 'CV' : 10 , 'Type' : 'TEST' , 'Kappa' : cohen_kappa_score(y_test, y_pred)}, ignore_index=True) acc_Machine_Learning # --- # ## 6. Let's save the accuracies acc_Machine_Learning = acc_Machine_Learning.sort_values(by=['Accuracy_Value'], ascending=False) acc_Machine_Learning acc_Machine_Learning.to_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Accuracies\MLAccuraciesExp2.csv",sep=',',index=False) # --- # ## 7. Let's choose the best ML Algorithm acc_Machine_Learning.iloc[1,:] # --- # ## 8. Let's save the 3 best models... joblib.dump(clf,"./Models/rfc.save") joblib.dump(dtc,"./Models/dtc.save") joblib.dump(knn,"./Models/knn.save") joblib.dump(logreg,"./Models/logreg.save") # --- # --- # ---- # ## References # ### Naive # 1. https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html # 2. https://www.datacamp.com/community/tutorials/naive-bayes-scikit-learn # 3. https://stackoverflow.com/questions/58212613/naive-bayes-gaussian-throwing-valueerror-could-not-convert-string-to-float-m # 4. https://scikit-learn.org/stable/modules/naive_bayes.html # 5. https://scikit-learn.org/stable/modules/model_evaluation.html # ### Decision Tree # 1. 
https://stackoverflow.com/questions/35097003/cross-validation-decision-trees-in-sklearn # 2. https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html # ### Label Encoder # 1. https://www.interactivechaos.com/python/function/labelencoder # ### KNN # 1. https://medium.com/@svanillasun/how-to-deal-with-cross-validation-based-on-knn-algorithm-compute-auc-based-on-naive-bayes-ff4b8284cff4
Botnets/Phases/Phase 3/Experiments/New Step/.ipynb_checkpoints/Step 3.1 Experiment 2 ML -checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/aksha1234/Deep-learning-tutorials/blob/main/HouseSalePricePrediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="yXJ3wZgk9Ixa" # ## Importing the libraries # + id="-k3QXfzf_NQR" # !pip install opendatasets --upgrade --q # + id="CF6cCHyi9GHT" import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib # %matplotlib inline import seaborn as sns import plotly.express as px import opendatasets as od import sklearn # + id="buc5IVss_CRh" url='https://www.kaggle.com/competitions/house-prices-advanced-regression-techniques/data' # + colab={"base_uri": "https://localhost:8080/"} id="0AhTr8oo_Vjo" outputId="e0bce6f8-19b0-4376-827a-84049ad28961" od.download(url) # + id="FGPpi6Su_Xu9" import os # + colab={"base_uri": "https://localhost:8080/"} id="yWBIzb1I_jrd" outputId="da5dd894-7dad-456d-c232-e0a18061a81d" os.listdir('house-prices-advanced-regression-techniques') # + id="d3IIDKzR_tdZ" train_df=pd.read_csv('house-prices-advanced-regression-techniques/train.csv') # + id="BVyOAqXM_4XJ" test_df=pd.read_csv('house-prices-advanced-regression-techniques/test.csv') # + id="TWWddoHA_-Xi" sample_df=pd.read_csv('house-prices-advanced-regression-techniques/sample_submission.csv') # + colab={"base_uri": "https://localhost:8080/"} id="6EFjDxVcAEW_" outputId="9e158660-5363-41b2-e0a8-65a09020b744" with open("house-prices-advanced-regression-techniques/data_description.txt") as file: print(file.read()) # + colab={"base_uri": "https://localhost:8080/", "height": 488} id="IzpyU16ZCjJm" outputId="53f5e653-53fd-4c07-dd21-2a7cac3a20ac" train_df # + colab={"base_uri": 
"https://localhost:8080/"} id="qvJsIjPqDV4s" outputId="ac397f04-36ed-4bcf-e0ac-ebe73f2e8427"
train_df.isnull().sum()

# + [markdown] id="y4Jkp-AkDeqY"
# ## LotFrontage has 259 missing values, so let's use an imputer to fill them in with the mean strategy

# + id="1ufT5YBADxWC"
## Let us find out the categorical and numerical columns from the dataset
input_cols=train_df.columns.tolist()[1:-1]
numerical_cols=train_df[input_cols].select_dtypes(include=np.number).columns.tolist()
categorical_cols=train_df[input_cols].select_dtypes(include='object').columns.tolist()
output_cols=train_df.columns.tolist()[-1]

# + id="8JxF5RsnDbPo"
from sklearn.impute import SimpleImputer

# + id="IPdS6I1DDt96"
imputer=SimpleImputer(strategy='mean')

# + id="NQzdwq5IERem"
train_df[numerical_cols]=imputer.fit_transform(train_df[numerical_cols])

# + [markdown] id="f-gomTNvEu1h"
# ## As can be seen, all the null values in the numerical columns are now replaced with column means, so the null count drops to zero.

# + [markdown] id="JftE7h75FNOX"
# ## Now, to eliminate the disproportionate effect of differently scaled features, we need to use `normalization` for the numerical values and one-hot encoding for the categorical columns.
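The plan just described (mean imputation, then scaling numeric columns and one-hot encoding categorical ones) can be sketched end-to-end on a toy frame — the column names and values here are made up for illustration, not taken from the competition data:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# toy frame with one numeric column (with a gap) and one categorical column
toy = pd.DataFrame({'LotFrontage': [60.0, np.nan, 80.0],
                    'Street': ['Pave', 'Grvl', 'Pave']})

# mean imputation: the NaN becomes (60 + 80) / 2 = 70
toy[['LotFrontage']] = SimpleImputer(strategy='mean').fit_transform(toy[['LotFrontage']])

# standardize the numeric column to zero mean, unit variance
toy[['LotFrontage']] = StandardScaler().fit_transform(toy[['LotFrontage']])

# one-hot encode the categorical column into one indicator column per category
enc = OneHotEncoder(handle_unknown='ignore')
onehot = enc.fit_transform(toy[['Street']]).toarray()
print(onehot)
```

`handle_unknown='ignore'` matters for the same reason as in the notebook: a category that appears only in the test set is encoded as all zeros instead of raising an error.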
# + id="pZuAVHTDEtTJ"
from sklearn.preprocessing import OneHotEncoder

# + id="UHsNKfGrFnWW"
encoder=OneHotEncoder(sparse=False,handle_unknown='ignore')

# + colab={"base_uri": "https://localhost:8080/"} id="mXfBa6TwF6KB" outputId="d1d86aa7-4d36-4e93-abcf-0bfd94a5b2f1"
encoder.fit(train_df[categorical_cols])

# + colab={"base_uri": "https://localhost:8080/"} id="H48ebufBGCVN" outputId="84d21fb3-c486-42ea-d4ad-cfff0e7941c6"
encode_cols=encoder.get_feature_names(categorical_cols).tolist()

# + colab={"base_uri": "https://localhost:8080/"} id="72UWJPBQGLL1" outputId="4a04cf9f-5d48-42a8-e1ee-32b4622777f6"
train_df[encode_cols]=encoder.transform(train_df[categorical_cols])

# + colab={"base_uri": "https://localhost:8080/", "height": 488} id="sq8SVpRBGNNN" outputId="25dfcb00-37bb-4263-9e62-a723a760c66a"
train_df

# + id="xhKArXvTIMop"
df=train_df[numerical_cols+encode_cols].copy()

# + [markdown] id="zTRZ8Zc2Ilob"
# ## Now standardising the columns

# + id="GbHd2VndIkQ1"
from sklearn.preprocessing import StandardScaler

# + id="UE0MW4_SIzpn"
scaler=StandardScaler()

# + id="9FCfklMQI3qI"
df[numerical_cols+encode_cols]=scaler.fit_transform(df[numerical_cols+encode_cols])

# + colab={"base_uri": "https://localhost:8080/", "height": 488} id="bbV0TGo7JFMG" outputId="b7e20997-b3d8-40cc-f8c3-60f9ced73dcb"
df

# + [markdown] id="_G0qINuoJJix"
# ## Now let us use different machine learning models to predict the results

# + id="3N2uADhOJF7N"
from sklearn.model_selection import train_test_split

# + id="UctUgXhBKtO1"
train_inputs,val_inputs,train_outputs,val_outputs=train_test_split(df,train_df[output_cols],test_size=0.2,random_state=42)

# + colab={"base_uri": "https://localhost:8080/", "height": 488} id="MfX4Pfc4RlPZ" outputId="f8a56c8f-0cfd-491c-c77c-4a8edad40b92"
train_inputs

# + id="vyqOPSIpLfSP"
from sklearn.ensemble import RandomForestRegressor

# + id="-etm5HgSLpvE"
model=RandomForestRegressor(n_estimators=20,max_depth=30,max_features=0.6,n_jobs=-1,random_state=42).fit(train_inputs,train_outputs)

# + id="B_wDFQ9xNCc4"
preds=model.predict(val_inputs)

# + id="l_xrj_7uNITI"
from sklearn.metrics import accuracy_score, classification_report

# + id="eMCJcPsQNRf9"
train_score=model.score(train_inputs,train_outputs)

# + id="PaHOBhUENfa9"
val_score=model.score(val_inputs,val_outputs)

# + colab={"base_uri": "https://localhost:8080/"} id="7nHb7SGQNg1g" outputId="ed4c2e27-e8e3-4f5f-fe62-6d2c79bcc362"
print(' The score for the random forest training model is {} \n and for the validation model is {}'.format(train_score*100,val_score*100))

# + [markdown] id="3UeGDFRRN7Gn"
# ## As can be seen above, the random forest scores highly on both the training and validation sets, so let us apply this model to the test data.

# + id="VT7gWuDROUuq"
feature_df=pd.DataFrame({'feature':train_inputs.columns.tolist(),'importance':model.feature_importances_}).sort_values('importance',ascending=False).head(10)

# + colab={"base_uri": "https://localhost:8080/", "height": 362} id="yHTZIefxRCGN" outputId="914f9188-7fa8-4366-c964-c54c63ba144d"
feature_df

# + colab={"base_uri": "https://localhost:8080/", "height": 582} id="tfVUihngQPz4" outputId="887261c7-85b8-4010-dfa7-5bd06d1cef46"
plt.figure(figsize=(10,8))
sns.barplot(data=feature_df.sort_values('importance',ascending=False),x='feature',y='importance')
plt.xticks(rotation=90)

# + colab={"base_uri": "https://localhost:8080/", "height": 144} id="n6ZkpongN6Rs" outputId="70c0451c-a116-4a48-e8ef-06732f8cea4c"
sample_df.head(3)

# + id="0XGOX94BN2s-"
def clean_test_data(df):
    data=df.copy()
    data[numerical_cols]=imputer.transform(data[numerical_cols])
    data[encode_cols]=encoder.transform(data[categorical_cols])
    data[numerical_cols+encode_cols]=scaler.transform(data[numerical_cols+encode_cols])
    data=data[numerical_cols+encode_cols]
    preds=model.predict(data)
    return preds

# + colab={"base_uri": 
"https://localhost:8080/"} id="3zzQ2ParVURX" outputId="d652c9a1-c46d-4548-bd79-4ec895d21b0f" clean_test_data(test_df) # + colab={"base_uri": "https://localhost:8080/"} id="e6ESVDKwVXRD" outputId="8ebf7030-2616-4853-a0fb-fa2c7cc88fe9" test_preds=clean_test_data(test_df) # + id="Zw-p1Wv5VcF-" submission_df=pd.DataFrame({'Id':test_df["Id"],'SalePrice':test_preds}) # + id="PBEoyCdfVeMf" submission_df.to_csv('submission_2.csv',index=False) # + id="JjY5O05NV2oD" def optmise_hyperprametr(**params): model=RandomForestRegressor(**params,n_jobs=-1,random_state=42).fit(train_inputs,train_outputs) return model.score(val_inputs,val_outputs) # + colab={"base_uri": "https://localhost:8080/"} id="kSeQLrpPXO2P" outputId="749dda5f-5e0d-407f-cae2-b29ece08feb5" optmise_hyperprametr(n_estimators=20,max_depth=30,max_features=0.6) # + colab={"base_uri": "https://localhost:8080/", "height": 480} id="ZyKEXassXWsh" outputId="f26abf0b-0e3a-417f-b4a5-f387576a6c63" df.describe([0.25,0.6]) # + [markdown] id="6e2PSBQL1e7K" # ## Conclusion # Here's the takeaway: Models can suffer from either: # # - `Overfitting`: capturing spurious patterns that won't recur in the future, leading to less accurate predictions, or # - `Underfitting`: failing to capture relevant patterns, again leading to less accurate predictions. # # We use validation data, which isn't used in model training, to measure a candidate model's accuracy. This lets us try many candidate models and keep the best one. # + [markdown] id="iYZ-kcgJ2HZ9" # ## The use of Pipeline # <font size = 5> # # **Pipelines** are a simple way to keep your data preprocessing and modeling code organized. Specifically, a pipeline bundles preprocessing and modeling steps so you can use the whole bundle as if it were a single step. # # Many data scientists hack together models without pipelines, but pipelines have some important benefits. Those include: # # - **Cleaner Code**: Accounting for data at each step of preprocessing can get messy. 
# With a pipeline, you won't need to manually keep track of your training and validation data at each step.
#
# - **Fewer Bugs**: There are fewer opportunities to misapply a step or forget a preprocessing step.
#
# - **Easier to Productionize**: It can be surprisingly hard to transition a model from a prototype to something deployable at scale. We won't go into the many related concerns here, but pipelines can help.
#
# - **More Options for Model Validation**: You will see an example in the next tutorial, which covers cross-validation.

# + id="nu_4lA3HXeLq"
from sklearn.pipeline import Pipeline

# + id="fMPOSrDm3QwK"
from sklearn.model_selection import cross_val_score

# + colab={"base_uri": "https://localhost:8080/"} id="J3QcDozu37x1" outputId="9a5acf97-1c97-46c5-ff77-6866851d4ece"
cross_val_score(model, val_inputs, val_outputs, cv=6, n_jobs=-1, scoring='neg_mean_poisson_deviance').mean()

# + [markdown] id="KvzhGWlN41jz"
# The available `scoring` options are `['explained_variance', 'r2', 'max_error', 'neg_median_absolute_error', 'neg_mean_absolute_error', 'neg_mean_absolute_percentage_error', 'neg_mean_squared_error', 'neg_mean_squared_log_error', 'neg_root_mean_squared_error', 'neg_mean_poisson_deviance', 'neg_mean_gamma_deviance', 'accuracy', 'top_k_accuracy', 'roc_auc', 'roc_auc_ovr', 'roc_auc_ovo', 'roc_auc_ovr_weighted', 'roc_auc_ovo_weighted', 'balanced_accuracy', 'average_precision', 'neg_log_loss', 'neg_brier_score', 'adjusted_rand_score', 'rand_score', 'homogeneity_score', 'completeness_score', 'v_measure_score', 'mutual_info_score', 'adjusted_mutual_info_score', 'normalized_mutual_info_score', 'fowlkes_mallows_score', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'jaccard', 'jaccard_macro', 'jaccard_micro', 'jaccard_samples', 'jaccard_weighted']`

# + [markdown] id="XFFQ4bTJ5xCH"
# # XGBoost
# A gradient-boosting technique for prediction
#
# Gradient boosting is a method that goes through cycles to iteratively add models into an ensemble.
#
# It begins by initializing the ensemble with a single model, whose predictions can be pretty naive. (Even if its predictions are wildly inaccurate, subsequent additions to the ensemble will address those errors.)
#
# Then, we start the cycle:
#
# 1. First, we use the current ensemble to generate predictions for each observation in the dataset. To make a prediction, we add the predictions from all models in the ensemble.
# 2. These predictions are used to calculate a loss function (like mean squared error, for instance).
# 3. Then, we use the loss function to fit a new model that will be added to the ensemble. Specifically, we determine model parameters so that adding this new model to the ensemble will reduce the loss. (Side note: The "gradient" in "gradient boosting" refers to the fact that we'll use gradient descent on the loss function to determine the parameters in this new model.)
# 4. Finally, we add the new model to the ensemble, and ...
# 5. ... repeat!
#
# XGBoost stands for extreme gradient boosting, which is an implementation of gradient boosting with several additional features focused on performance and speed. (Scikit-learn has another version of gradient boosting, but XGBoost has some technical advantages.)
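The numbered cycle above can be sketched directly with plain decision trees. This is a toy illustration of the residual-fitting loop for squared-error loss on synthetic data, not the notebook's variables and not the actual XGBoost implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(0, 0.1, size=200)

learning_rate = 0.1
n_rounds = 50

# Step 0: initialise the ensemble with a naive prediction (the mean).
pred = np.full_like(y, y.mean())
trees = []
for _ in range(n_rounds):
    # Steps 1-2: with squared error, the negative gradient of the loss
    # at the current predictions is simply the residual.
    residuals = y - pred
    # Step 3: fit a new small model to those residuals.
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    # Step 4: add it to the ensemble, scaled by a learning rate.
    pred += learning_rate * tree.predict(X)
    trees.append(tree)

mse_start = np.mean((y - y.mean()) ** 2)
mse_end = np.mean((y - pred) ** 2)
print(mse_start, mse_end)  # training error shrinks as rounds are added
```

Each round nudges the ensemble's predictions toward the targets; libraries like XGBoost add regularization, second-order gradient information, and heavy engineering on top of this basic loop.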
# + colab={"base_uri": "https://localhost:8080/"} id="VGsVOoxm4hgM" outputId="ceccc557-16a6-428c-acd1-3ca3eb26ddf7"
# !pip install xgboost --upgrade -q

# + id="Oo1UriIf6Hr0"
from xgboost import XGBRegressor

# + id="jpZ_7rO56Leq"
model2 = XGBRegressor()

# + colab={"base_uri": "https://localhost:8080/"} id="NUch3uvO6TwA" outputId="efbff99b-4a74-441e-f2a3-0993d570afbb"
model2.fit(train_inputs, train_outputs)

# + id="YsY2R9KJ6XO0"
from sklearn.metrics import mean_absolute_error

# + colab={"base_uri": "https://localhost:8080/"} id="L21qUysW6ffe" outputId="ca32083f-4cd5-4247-a529-cd1423813e26"
prediction = model2.predict(val_inputs)
print("Mean Absolute Error: " + str(mean_absolute_error(prediction, val_outputs)))

# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="Eor7ZDgJ7Q8B" outputId="a0b38e2f-8d41-4744-ddc5-cf722cdd8e34"
test_df.head()

# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="PV0AnTuV7Ym0" outputId="ffcad444-70bb-41f0-8f4b-3e900bdf5856"
sample_df.head()

# + id="_m5xThdx6md2"
def clean_test_data(df):
    data = df.copy()
    data[numerical_cols] = imputer.transform(data[numerical_cols])
    data[encode_cols] = encoder.transform(data[categorical_cols])
    data[numerical_cols + encode_cols] = scaler.transform(data[numerical_cols + encode_cols])
    data = data[numerical_cols + encode_cols]
    preds = model2.predict(data)
    submission_df['Id'] = test_df['Id']
    submission_df['SalePrice'] = preds
    submission_df.to_csv('submission_XGBOOST.csv', index=False)

# + colab={"base_uri": "https://localhost:8080/"} id="-KgHzIQe7wmH" outputId="a52f09bd-8d4c-4ce0-dd90-60e4d64a7766"
clean_test_data(test_df)

# + [markdown] id="VUHvqvej8bB9"
# ## Let us predict the result using neural networks

# + id="woYvM4cO70Tu"
import tensorflow as tf
from tensorflow import keras

# + id="1LQEYiZe8k83"
HouseSalePricePrediction.ipynb
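A `Pipeline`/`ColumnTransformer` bundle captures the notebook's impute → one-hot encode → scale → random forest chain in a single estimator, which is what the unused `Pipeline` import was pointing at. A minimal sketch on toy data — the columns here are illustrative stand-ins, not the competition's schema:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy stand-in for the housing data (illustrative columns only).
toy_df = pd.DataFrame({
    'LotArea': [8450.0, 9600.0, None, 11250.0, 14260.0, 10084.0],
    'Neighborhood': ['CollgCr', 'Veenker', 'CollgCr', 'Crawfor', 'NoRidge', 'CollgCr'],
    'SalePrice': [208500, 181500, 223500, 140000, 250000, 307000],
})
numerical_cols = ['LotArea']
categorical_cols = ['Neighborhood']

preprocess = ColumnTransformer([
    # Numerical branch: mean-impute, then standardise.
    ('num', Pipeline([('impute', SimpleImputer()),
                      ('scale', StandardScaler())]), numerical_cols),
    # Categorical branch: one-hot encode, ignoring unseen categories.
    ('cat', OneHotEncoder(handle_unknown='ignore'), categorical_cols),
])

model = Pipeline([
    ('prep', preprocess),
    ('rf', RandomForestRegressor(n_estimators=20, random_state=42)),
])

# One fit/predict call runs every step in order -- no manual bookkeeping
# of which transformer was fitted on which split.
model.fit(toy_df[numerical_cols + categorical_cols], toy_df['SalePrice'])
preds = model.predict(toy_df[numerical_cols + categorical_cols])
print(preds.shape)
```

Because the fitted transformers travel with the model, applying it to test data is a single `model.predict(test_frame)` call instead of a hand-written `clean_test_data` helper.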
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Author - <NAME>
# ## Project Domain - Churn Analysis
# ## Objective is to develop a predictive framework enabling a proactive retention strategy for a telecom giant

# # Importing Libraries and Data

# Loading the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline

# Loading the dataset
df_orig = pd.read_excel(r"Telco-Customer-Churn.xlsx")

# Checking the dimensions of the data
df = df_orig.copy()
df.shape

# Reading what the data looks like
# df.to_string()
df.head()

# Showing the type of each column
df.dtypes

# Getting basic statistics from the data
df.describe()

# ### Observations
# - From the above we can tell that most of the columns look categorical (object)
# - Four columns are numerical (int, float)
# - Churn is our target variable
# - There is a huge difference between the max value and the 75th percentile value in TotalCharges (potential outlier)

# # Exploratory Data Analysis

# Ignoring warnings
import warnings
warnings.filterwarnings('ignore')

# Checking the number of rows having null values
df['customerID'].isnull().sum()

# Dropping the customerID column
df.drop(['customerID'], axis=1, inplace=True)

# Displaying customers with churn and without churn
df['Churn'].value_counts()

# Label encoding the target column (Churn): No = 0, Yes = 1
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['Churn'] = le.fit_transform(df['Churn'])
df.head()

# +
# Creating two data frames X and Y to store the independent and dependent variables
Y = df[['Churn']]
X = df.drop(['Churn'], axis=1)
Y.head()
# -

# Getting the churn rate
Y.mean()

# ### Observations
# - There are no missing values in most of the columns
# - The churn rate is around 27%, which is quite high

# +
# Creating two dataframes from X to store the numerical and categorical variables
num = X.select_dtypes(exclude='object')
cat = X.select_dtypes(include='object')
# -

# Displaying the last 5 records of all numerical features
num.tail()

# Displaying the last 5 records of all categorical features
cat.tail()

# Checking stats of the numerical features
num.describe()

# Displaying value_counts of the SeniorCitizen column
num['SeniorCitizen'].value_counts()

# ### Observations
# - The SeniorCitizen column is binary, meaning only two values are present: 0 or 1
# - It could be converted into a categorical feature

# Creating a separate dataframe to store SeniorCitizen
sen = num[['SeniorCitizen']]

# Removing SeniorCitizen from the numerical features dataframe (num)
num = num.drop(['SeniorCitizen'], axis=1)
num

sen

# ## Outlier Analysis - Numerical Features

# Describing the numerical features by percentile
num.describe(percentiles=[0.01, 0.05, 0.10, 0.25, 0.50, 0.75, 0.80, 0.9, 0.99])

# ### Observations
# - From the above, we can observe that the difference between the max and the 99th percentile of the TotalCharges column is quite large: max = 8684, 99th percentile = 8039
# - We need to use capping and flooring to restrict our dataset to the 1st-99th percentile range

# +
# Capping and flooring outliers
def outlier_cap_floor(x):
    '''Return data after clipping between the 1st and 99th percentiles'''
    x = x.clip(lower=x.quantile(0.01))
    x = x.clip(upper=x.quantile(0.99))
    return x
# -

# Applying outlier_cap_floor to all numerical features
num = num.apply(lambda x: outlier_cap_floor(x))

# Checking percentiles after applying capping and flooring
num.describe(percentiles=[0.01, 0.05, 0.10, 0.25, 0.50, 0.75, 0.80, 0.9, 0.99])

# ### Observations
# - After capping and flooring, the max of TotalCharges becomes 8039.88, which is better; the standard deviation also doesn't change much
# - The max of MonthlyCharges is likewise capped from 118 to 114.729

# ## Missing Values Analysis

# +
num.isnull().sum()
# -

# Displaying the percentage of missing values in the TotalCharges column
num['TotalCharges'].isnull().mean()

# Imputing missing values with the mean
from sklearn.impute import SimpleImputer
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
num.iloc[:, 2:] = imp_mean.fit_transform(num.iloc[:, 2:])

# Checking that the missing values got imputed
num['TotalCharges'].isnull().sum()

# ## Feature Selection - Numerical Features

# ### Part 1 - Remove features with Zero Variance
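The zero-variance check announced in the heading above is typically done with scikit-learn's `VarianceThreshold`; a sketch on a toy frame (the column names are illustrative, not the full Telco schema):

```python
import pandas as pd
from sklearn.feature_selection import VarianceThreshold

# Toy numerical frame: 'constant' never varies, so it carries no signal.
toy = pd.DataFrame({
    'tenure': [1, 34, 2, 45, 8],
    'MonthlyCharges': [29.85, 56.95, 53.85, 42.30, 70.70],
    'constant': [1.0, 1.0, 1.0, 1.0, 1.0],
})

# threshold=0.0 (the default) drops every feature whose variance is zero.
selector = VarianceThreshold(threshold=0.0)
selector.fit(toy)
kept = toy.columns[selector.get_support()].tolist()
print(kept)  # 'constant' is removed
```

A zero-variance column takes the same value for every customer, so no model can use it to separate churners from non-churners; dropping such columns first shrinks the feature set at no cost.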
Classifications/Project 2 - Telecom Churn Industry/.ipynb_checkpoints/Telecom customer retention strategy modelling-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="-vzosXJacNDG" outputId="cefcbd08-88e9-4f32-e1dd-11dc94ac3065" # !pip install transformers # + id="-rM-8vuOcMF3" import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os from transformers import BertTokenizer,BertModel import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader,Dataset from torch.nn.utils.rnn import pack_padded_sequence from torch.optim import AdamW # + id="MVjggdXsVIma" import os import gc import copy import time import random import string # For data manipulation import numpy as np import pandas as pd # Pytorch Imports import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler from torch.utils.data import Dataset, DataLoader # Utils from tqdm import tqdm from collections import defaultdict # Sklearn Imports from sklearn.metrics import mean_squared_error from sklearn.model_selection import StratifiedKFold, KFold # For Transformer Models from transformers import AutoTokenizer, AutoModel, AdamW # + colab={"base_uri": "https://localhost:8080/"} id="rImFU6MfVMgB" outputId="fe1acc52-2bde-4c89-87f2-0030c0113154" from google.colab import drive drive.mount('/content/drive') # + id="S0Sa9mJfVgmN" class ArabertModel(nn.Module): def __init__(self, model_name): super(ArabertModel, self).__init__() self.model = AutoModel.from_pretrained(model_name) self.layer_norm = nn.LayerNorm(768) self.dropout = nn.Dropout(0.2) self.dense = nn.Sequential( nn.Linear(768, 256), 
nn.LeakyReLU(negative_slope=0.01), nn.Dropout(0.2), nn.Linear(256, 1) ) def forward(self, input_ids, attention_mask): pooled_output = self.model(input_ids=input_ids, attention_mask=attention_mask) pooled_output = self.layer_norm(pooled_output[1]) pooled_output = self.dropout(pooled_output) preds = self.dense(pooled_output) return preds # + id="i9mNu-XsV0FV" class MarbertModel(nn.Module): def __init__(self, model_name): super(MarbertModel, self).__init__() self.model = AutoModel.from_pretrained(model_name) self.drop = nn.Dropout(p=0.2) self.fc = nn.Linear(768, 1) def forward(self, ids, mask): out = self.model(input_ids=ids,attention_mask=mask, output_hidden_states=False) out = self.drop(out[1]) outputs = self.fc(out) return outputs # + id="7K3flBGuVZnH" class JigsawDatasetTest(Dataset): def __init__(self, df, tokenizer, max_length,column_): self.df = df self.max_len = max_length self.tokenizer = tokenizer self.text = df[column_].values def __len__(self): return len(self.df) def __getitem__(self, index): text = self.text[index] inputs = self.tokenizer.encode_plus( text, truncation=True, add_special_tokens=True, max_length=self.max_len, padding='max_length' ) ids = inputs['input_ids'] mask = inputs['attention_mask'] return { 'text_ids': torch.tensor(ids, dtype=torch.long), 'text_mask': torch.tensor(mask, dtype=torch.long), } # + id="kcelDpy6VtyL" @torch.no_grad() def valid_fn(model, dataloader, device): model.eval() dataset_size = 0 running_loss = 0.0 PREDS = [] bar = tqdm(enumerate(dataloader), total=len(dataloader)) for step, data in bar: ids = data['text_ids'].to(device, dtype = torch.long) mask = data['text_mask'].to(device, dtype = torch.long) outputs = model(ids, mask) sig=nn.Sigmoid() outputs=sig(outputs) # outputs = outputs.argmax(dim=1) # print(len(outputs)) # print(len(np.max(outputs.cpu().detach().numpy(),axis=1))) PREDS.append(outputs.detach().cpu().numpy()) # print(outputs.detach().cpu().numpy()) PREDS = np.concatenate(PREDS) gc.collect() return PREDS # + 
id="0VpcHp7fVvmm" def inference_Arabert(model_paths, dataloader, device): final_preds = [] for i, path in enumerate(model_paths): model = ArabertModel('aubmindlab/bert-base-arabert') model.to('cuda') model.load_state_dict(torch.load(path)) print(f"Getting predictions for model {i+1}") preds = valid_fn(model, dataloader, device) final_preds.append(preds) final_preds = np.array(final_preds) # print(final_preds) final_preds = np.mean(final_preds, axis=0) # print(final_preds) final_preds[final_preds>=0.5] = 1 final_preds[final_preds<0.5] = 0 # final_preds= np.argmax(final_preds,axis=1) return final_preds # + id="z5mGFK2CWEmN" def inference_Maerbert(model_paths, dataloader, device): final_preds = [] for i, path in enumerate(model_paths): model = MarbertModel('UBC-NLP/MARBERT') model.to('cuda') model.load_state_dict(torch.load(path)) print(f"Getting predictions for model {i+1}") preds = valid_fn(model, dataloader, device) final_preds.append(preds) final_preds = np.array(final_preds) # print(final_preds) final_preds = np.mean(final_preds, axis=0) # print(final_preds) final_preds[final_preds>=0.5] = 1 final_preds[final_preds<0.5] = 0 # final_preds= np.argmax(final_preds,axis=1) return final_preds # + colab={"base_uri": "https://localhost:8080/", "height": 177, "referenced_widgets": ["7795059f55fb47b8a6a7180854020ee4", "<KEY>", "998fdee1f57c4a66bf85c4425d8c18c3", "<KEY>", "956a505d28c44a449b24eeba7363a8ee", "<KEY>", "5a43867972de4069b3ce2f4ce280b7f4", "2158c8e5066f4772b55b76aa5ff1435c", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "7def8a5c0a074a628bd40190f64874c4", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "5eee1dfe4f3e403cb1f628a1fe921770", "8e1b8d27e5024404a17df8b952939ad2", "57a406922e3a400f8d6da428c91748a3", "7b2ca3196d9647d68838ffbd92be80b2", "ce9d836100a0459c8575e2cdc891e77e", "3e3331decf6b414e8e3feee24f7c3cdc", "<KEY>", "ff94e3259d274443b2a86e6483cca1b3", "ab036d245ed54319a333ace008dad2b1", "<KEY>", "<KEY>", "2fa9d39d93da45628ae86ba06fd54188", "<KEY>", "<KEY>", 
"6ce4352baba94a7d913327112fcccad2", "0a6adc91eadc470f87c7391d103694be", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "8e4f089ad2474efaace19b53e78ccfe1", "99cc1787d89145d184a0d38e99741194", "<KEY>", "5b52613ceb654fdfa8d0a9765f2f8351", "<KEY>", "<KEY>", "<KEY>", "50b545aafc384ce28500b6cadaeb7725", "<KEY>", "9cfd92db8e294312873c0abb67ca2347", "a584f0691a94410d858e781ffae83a5d", "250476a2b4734d859593026415940d7e", "0356417fd57f4e7eb8295a2be28bd4c1", "71f5fe09b2a647058a4928ca728babb5", "<KEY>", "37b669e2f9e440dc945df2e9a12c9b1c"]} id="mD3QyVmDW6i7" outputId="7b086137-fcca-45ab-e876-5ca532b0b956" arabert_tokenizer=AutoTokenizer.from_pretrained('aubmindlab/bert-base-arabert') # + id="4RrKRUqzXWkA" from sklearn.metrics import jaccard_score,f1_score,accuracy_score,recall_score,precision_score,classification_report def print_statistics(y, y_pred): accuracy = accuracy_score(y, y_pred) precision =precision_score(y, y_pred, average='weighted') recall = recall_score(y, y_pred, average='weighted') f_score = f1_score(y, y_pred, average='weighted') print('Accuracy: %.3f\nPrecision: %.3f\nRecall: %.3f\nF_score: %.3f\n' % (accuracy, precision, recall, f_score)) print(classification_report(y, y_pred)) return accuracy, precision, recall, f_score # + colab={"base_uri": "https://localhost:8080/", "height": 145, "referenced_widgets": ["3b1d6de2755f45ecba10d96d13724fd5", "7a07e2a51f5e49a6815aaa58f51cd534", "78934843c57641bcb1d1e357e28c420a", "fdcc57d555544e2e8422a9196dbc391e", "30446af6485f4f6b90a1172868f6c740", "3a1dc10484c547708a4ce3b34e18e3fa", "96dbfd3d2d5a4b84810d1d06cb4c4c49", "45afbb0dabcd4d369847f047103fadb7", "<KEY>", "<KEY>", "c6ca93ce0c15480f885d012656ea13c8", "<KEY>", "<KEY>", "<KEY>", "18a8fa03793c4e95abc44fd999b94211", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "164cca9b7981434f81effd3dec11df8b", "ff9639e1365744279f1a65bd93292914", "9e89a2b51b4e46589fdc39919d0d7320", "<KEY>", "<KEY>", "b63e464177b04dc8b658ec9af383db6d", "7e4ad98bad08429db3e5f255505b88cd", "5ec62e20ec8f4f50844db9983920edbf", 
"b4d4e0d8266a4c1db67890442a2b11f6", "a542cee43fce4243914070fc82919d66", "<KEY>", "8549820c1c0044d5a8c0253b162ace0b", "6316ac7321144b6ea08ae12b1684ed65", "68b8bb8e652e4273ab7a32d14b35c074", "90e1a7604c3c4b70a37dae1752724e36", "<KEY>", "<KEY>", "a7c09e2fd6184b2d9e8c7aa96e4c283f", "<KEY>", "0e7d82b7398f4b86a31cca7be03973b7", "<KEY>", "421dba0774244fdf95673511e616e835", "54da82c929a5404fb30b5dffdd44f801", "<KEY>", "<KEY>"]} id="sb8WoaraXEY_" outputId="95493c92-1566-4820-9044-7ff0ad8a5c64" marbert_tokenizer=AutoTokenizer.from_pretrained('UBC-NLP/MARBERT') # + id="Z-KQySRDZjAv" class Marbert_kim(nn.Module): def __init__(self, model_name): super(Marbert_kim, self).__init__() self.model = AutoModel.from_pretrained(model_name,output_hidden_states=True) output_channel = 16 # number of kernels num_classes = 1 # number of targets to predict dropout = 0.2 # dropout value embedding_dim = 768 # length of embedding dim ks = 3 # three conv nets here # input_channel = word embeddings at a value of 1; 3 for RGB images input_channel = 4 # for single embedding, input_channel = 1 # [3, 4, 5] = window height # padding = padding to account for height of search window # 3 convolutional nets self.conv1 = nn.Conv2d(input_channel, output_channel, (3, embedding_dim), padding=(2, 0), groups=4) self.conv2 = nn.Conv2d(input_channel, output_channel, (4, embedding_dim), padding=(3, 0), groups=4) self.conv3 = nn.Conv2d(input_channel, output_channel, (5, embedding_dim), padding=(4, 0), groups=4) # apply dropout self.dropout = nn.Dropout(dropout) # fully connected layer for classification # 3x conv nets * output channel self.fc1 = nn.Linear(ks * output_channel, num_classes) self.softmax = nn.Softmax() def forward(self, text_id, text_mask): # get the last 4 layers outputs= self.model(text_id, attention_mask=text_mask) # all_layers = [4, 16, 256, 768] # print(outputs) hidden_layers = outputs[-1] # get hidden layers hidden_layers = torch.stack(hidden_layers, dim=1) x = hidden_layers[:, -7:-3] # x = 
        # x = x.unsqueeze(1)
        # x = torch.mean(x, 0)
        # print(hidden_layers.size())
        torch.cuda.empty_cache()
        x = [F.relu(self.conv1(x)).squeeze(3), F.relu(self.conv2(x)).squeeze(3), F.relu(self.conv3(x)).squeeze(3)]
        x = [F.dropout(i, 0.65) for i in x]
        # max-over-time pooling; (batch, channel_output) * ks
        x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x]
        # concat results; (batch, channel_output * ks)
        x = torch.cat(x, 1)
        # add dropout
        x = self.dropout(x)
        # generate logits (batch, target_size)
        logit = self.fc1(x)
        torch.cuda.empty_cache()
        return logit

# + id="12CRxH1EZnhi"
def inference_Maerbert_kim(model_paths, dataloader, device):
    final_preds = []
    for i, path in enumerate(model_paths):
        model = Marbert_kim('UBC-NLP/MARBERT')
        model.to('cuda')
        model.load_state_dict(torch.load(path))
        print(f"Getting predictions for model {i+1}")
        preds = valid_fn(model, dataloader, device)
        final_preds.append(preds)
    final_preds = np.array(final_preds)
    # print(final_preds)
    final_preds = np.mean(final_preds, axis=0)
    # print(final_preds)
    final_preds[final_preds >= 0.5] = 1
    final_preds[final_preds < 0.5] = 0
    # final_preds = np.argmax(final_preds, axis=1)
    return final_preds

# + id="8XEiFfPcajWL"
test_df = pd.read_csv('/content/drive/MyDrive/ISarcasm/Test_dataset/task_C_AR_test.csv')
test = test_df  # the inference cells below refer to `test`, i.e. the loaded test set

# + colab={"base_uri": "https://localhost:8080/", "height": 423} id="BTitFBoFatRo" outputId="e688d67b-f2ff-4568-aade-56b40cdd7ad2"
test_df

# + colab={"base_uri": "https://localhost:8080/"} id="34nSUGoTcVDF" outputId="ae65247d-9485-4837-e1a8-72441b822779"
test_df.columns

# + id="QmHFqe_Nb1fW"
test_dataset = JigsawDatasetTest(test, tokenizer=arabert_tokenizer, max_length=128, column_='text_0')
test_loader = DataLoader(test_dataset, batch_size=16, num_workers=2, shuffle=False, pin_memory=True)

# + colab={"base_uri": "https://localhost:8080/", "height": 502, "referenced_widgets": ["36486ca1674e404bbb4f5628752c8509", "19c04f363f5d4ab9b2ed3dba7eaa9c4d", "3220b4cc901b4067baa9af40ae8fa7b1", "c0aef13b3bb04760b738043b4abba6db", "<KEY>",
"36e552f21f9e4d129a8a943c07506e90", "bed99843d4154c158d73bb3d906949d8", "0e05313f87c64ab9a5ce9fefe0f3646c", "c8c9dbe859454e2a8e18ae209ff46ad9", "dc7378cb8e5a443eb8eb5ddbd97c941a", "da524116080b497a89ae6898abd96007"]} id="h49EBl9BcTnl" outputId="1a77da2d-bdca-48ec-a382-1966a2d482b9" MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-4.bin'] # MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin'] preds_arabert_first_column = inference_Arabert(MODEL_PATH_2, test_loader, 'cuda') # + id="cuBRNoBEcevg" test_dataset = JigsawDatasetTest(test, tokenizer=arabert_tokenizer, max_length=128,column_='text_1') test_loader = DataLoader(test_dataset, batch_size=16, num_workers=2, shuffle=False, pin_memory=True) # + colab={"base_uri": "https://localhost:8080/"} id="EUC50y9ycjQJ" outputId="9cf608ec-73c2-4042-fa0a-410c4622aa24" MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/Arabert_task_c/Loss-Fold-4.bin'] # MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin'] preds_arabert_second_column = inference_Arabert(MODEL_PATH_2, test_loader, 'cuda') # + id="GJx5vJcveMlA" df_arabert = pd.DataFrame(list(zip(preds_arabert_first_column.flatten(), preds_arabert_second_column.flatten())),columns =['Prediction_text_0','Prediction_text_1']) # + 
id="Put2q9n6eMoa" prediction_list=df_arabert.idxmax(axis='columns').to_list() # + id="zOFMZM0MeMr0" prediction_final=[] for pred in prediction_list: if pred=='Prediction_text_0': prediction_final.append(0) else: prediction_final.append(1) # + colab={"base_uri": "https://localhost:8080/"} id="nZoPnXgxeMvo" outputId="ba2014de-7816-4ff1-a95a-f32ae507fa1a" print(print_statistics(test['sarcastic_id'],prediction_final)) # + id="dD8hhFDLeMxh" # + id="LtHMqJfreM1I" # + id="gRPHjDhfclTr" test_dataset = JigsawDatasetTest(test, tokenizer=marbert_tokenizer, max_length=128,column_='text_0') test_loader = DataLoader(test_dataset, batch_size=16, num_workers=2, shuffle=False, pin_memory=True) # + colab={"base_uri": "https://localhost:8080/", "height": 502, "referenced_widgets": ["f1231882c2f64979b5ad97f36de1e46d", "af7ba35daede49f5a032fccf6d6d8823", "ec60bc38a57b44d5bccc717b60e1094c", "1bd6feaf949748f989e1f3061472c23d", "508add170051422ab5a00908f1780583", "fc03641cd0bb49d59a63ddf3e0a85adf", "17669c4b13f548a6b7380f3583505e32", "181a0aeb9351425eadc72e4507004d1b", "<KEY>", "ba4a5826daca40929174c421cb93d31f", "c60018eee57545aebd7d388f6cc4d1cd"]} id="H-efrBa7cstd" outputId="f8af1b31-4baf-43b0-c110-17940a3333e9" MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-4.bin'] # MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin'] preds_marbert_first_column = inference_Maerbert(MODEL_PATH_2, test_loader, 'cuda') # + id="2IExSKdskbYU" test_dataset = JigsawDatasetTest(test, tokenizer=marbert_tokenizer, max_length=128,column_='text_1') test_loader = DataLoader(test_dataset, batch_size=16, num_workers=2, 
shuffle=False, pin_memory=True) # + colab={"base_uri": "https://localhost:8080/"} id="5mxLoUcvfjYT" outputId="94e94a20-2098-4166-c888-81595162e765" MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_task_C/Loss-Fold-4.bin'] # MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin'] preds_marbert_second_column = inference_Maerbert(MODEL_PATH_2, test_loader, 'cuda') # + id="SwOEoLLLfCKU" preds_marbert= pd.DataFrame(list(zip(preds_marbert_first_column.flatten(), preds_marbert_second_column.flatten())),columns =['Prediction_text_0','Prediction_text_1']) # + id="5KcLVGSOfCKU" prediction_list=preds_marbert.idxmax(axis='columns').to_list() # + id="BEruZlzZfCKU" prediction_final=[] for pred in prediction_list: if pred=='Prediction_text_0': prediction_final.append(0) else: prediction_final.append(1) # + colab={"base_uri": "https://localhost:8080/"} outputId="132a7d69-5dfc-44ad-ee7d-925df680502e" id="RgP47U-ifCKU" print(print_statistics(test['sarcastic_id'],prediction_final)) # + id="LZFD5HfqfBT0" # + id="g2LcStsPfBXt" # + id="GP1LiRhQfBZu" # + id="D3iJoxVOfBdY" # + id="fKon3j5efBe_" # + id="auTHT9SqfBi7" # + id="JAnTdtPYfBlB" # + id="s7KxOW_sfBn4" test_dataset = JigsawDatasetTest(test, tokenizer=marbert_tokenizer, max_length=128,column_='text_0') test_loader = DataLoader(test_dataset, batch_size=16, num_workers=2, shuffle=False, pin_memory=True) # + colab={"base_uri": "https://localhost:8080/"} id="n74H9t4Ocyaj" outputId="c10ac2e6-ff92-471e-e6c1-72d3b114f464" 
MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-4.bin'] # MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin'] preds_marbert_kim_first_column = inference_Maerbert_kim(MODEL_PATH_2, test_loader, 'cuda') # + id="n8F0MiI5c1UK" test_dataset = JigsawDatasetTest(test, tokenizer=marbert_tokenizer, max_length=128,column_='text_1') test_loader = DataLoader(test_dataset, batch_size=16, num_workers=2, shuffle=False, pin_memory=True) # + colab={"base_uri": "https://localhost:8080/"} id="I8Dz_eCGc6kX" outputId="1b844dcd-be08-4998-fd62-bbaf70bec65f" MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/TaskC_models/marbert_kim_task_c/Loss-Fold-4.bin'] # MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin'] preds_marbert_kim_second_column = inference_Maerbert_kim(MODEL_PATH_2, test_loader, 'cuda') # + id="gewTmcCXfLRQ" preds_marbert_kim = pd.DataFrame(list(zip(preds_marbert_kim_first_column.flatten(), preds_marbert_kim_second_column.flatten())),columns =['Prediction_text_0','Prediction_text_1']) # + id="n8H9j0ZBfLRR" prediction_list=preds_marbert_kim.idxmax(axis='columns').to_list() # + id="RU7LrGJ8fLRR" prediction_final=[] for pred in prediction_list: if pred=='Prediction_text_0': prediction_final.append(0) else: 
prediction_final.append(1)

# + colab={"base_uri": "https://localhost:8080/"} outputId="e1e62b73-ad58-42fe-b132-75f822b5a5f1" id="qcpuAVjlfLRR"
print(print_statistics(test['sarcastic_id'], prediction_final))
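Each `inference_*` helper above follows the same ensembling recipe: collect per-fold sigmoid outputs, average them across the fold checkpoints, then hard-threshold at 0.5. That step in isolation, with made-up fold scores:

```python
import numpy as np

# Hypothetical sigmoid outputs from 5 fold checkpoints for 4 examples
# (rows = folds, columns = examples).
fold_preds = np.array([
    [0.875, 0.250, 0.750, 0.375],
    [0.750, 0.125, 0.500, 0.500],
    [0.625, 0.250, 0.625, 0.250],
    [0.875, 0.125, 0.750, 0.375],
    [0.625, 0.250, 0.375, 0.500],
])

# Average over the fold axis, then threshold into class labels.
mean_preds = fold_preds.mean(axis=0)
labels = (mean_preds >= 0.5).astype(int)
print(labels)
```

This is soft voting: averaging the probabilities before thresholding lets a confident fold outvote a borderline one, which is usually a bit more stable than majority-voting the folds' hard labels.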
Task C/Ar/Inference/results_Prediction_Arabic_Task_C_individual_models.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Module 2: Supervised Learning # # * **Part 2.1: K-Nearest Neighbors** [[Notebook]](./2_1_knn.ipynb) # * Part 2.2: Linear regression [[Notebook]](./2_2_régression_linéaire.ipynb) # * Part 2.3: Logistic regression [[Notebook]](./2_3_régression_logistique.ipynb) # * Part 2.4: SVM [[Notebook]](./2_4_svm.ipynb) # # # Part 2.1: KNN - K-Nearest Neighbors # KNN is a supervised learning algorithm import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.neighbors import KNeighborsRegressor, KNeighborsClassifier dt = pd.read_csv("datasets/iris.csv") dt.head() # ## Regression # Here we use KNN for regression: predicting petal_length from petal_width sns.lmplot(x="petal_width", y="petal_length", data=dt) X = dt.petal_width.values.reshape(-1, 1) y = dt.petal_length.values from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=True, test_size=.3) knn = KNeighborsRegressor(n_neighbors=5) knn.fit(X_train, y_train) # ## Evaluation from sklearn.metrics import r2_score y_pred = knn.predict(X_test) score = r2_score(y_test, y_pred) score plt.scatter(y_test, y_pred) plt.plot(y_test, y_test, c="red") # + k_values = [2, 3, 4, 5, 6, 7, 8] train_scores = [] test_scores = [] for k in k_values: knn = KNeighborsRegressor(n_neighbors=k) knn.fit(X_train, y_train) train_scores.append(knn.score(X_train, y_train)) test_scores.append(knn.score(X_test, y_test)) plt.plot(k_values, train_scores) plt.plot(k_values, test_scores) # - # ## Classification X = dt[["petal_width", "petal_length"]] species = dt["species"] from sklearn.preprocessing import LabelEncoder encoder = LabelEncoder() y = encoder.fit_transform(species) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test =
train_test_split(X, y, shuffle=True, test_size=.3) clf = KNeighborsClassifier(n_neighbors=5) clf.fit(X_train, y_train) # ## Evaluation from sklearn.metrics import accuracy_score y_pred = clf.predict(X_test) score = accuracy_score(y_test, y_pred) score # + k_values = [2, 3, 4, 5, 6, 7, 8] train_scores = [] test_scores = [] for k in k_values: knn = KNeighborsClassifier(n_neighbors=k) knn.fit(X_train, y_train) train_scores.append(knn.score(X_train, y_train)) test_scores.append(knn.score(X_test, y_test)) plt.plot(k_values, train_scores) plt.plot(k_values, test_scores) # - clf.predict([[1.5, 5.0]]), clf.predict_proba([[1.5, 5.0]]) sns.pairplot(dt, hue='species')
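The best k from the k-selection loop above can be picked programmatically rather than read off the plot. A sketch with illustrative, made-up score values:

```python
# Illustrative held-out scores for each candidate k (not real results).
k_values = [2, 3, 4, 5, 6, 7, 8]
test_scores = [0.90, 0.93, 0.95, 0.96, 0.94, 0.93, 0.92]

# Pick the k whose held-out (test) score is highest.
best_k = k_values[test_scores.index(max(test_scores))]
```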
2_1_knn.ipynb
# + """ Define a class named Circle which can be constructed by a radius. The Circle class has a method which can compute the area. """ """Question: Define a class named Circle which can be constructed by a radius. The Circle class has a method which can compute the area. Hints: Use def methodName(self) to define a method. """ class Circle(object): def __init__(self, r): self.radius = r def area(self): return self.radius**2*3.14 aCircle = Circle(2) print aCircle.area()
pset_challenging_ext/exercises/solutions/nb/p52.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt # %matplotlib inline from ipywidgets import interact, fixed from ipywidgets import widgets from IPython.display import display, Math, Latex import rheoflow # - def tube_dashboard_plot(pressure_drop,radius,length,density,eta0,reltime,n): plt.figure(figsize=(14,12)) viscosity = rheoflow.viscosity.carreau('Carreau dashboard',eta0=eta0,etainf=.11,reltime=reltime,a=1.3,n=n) pipe = rheoflow.pipe.laminar('laminar tube flow',density=density,radius=radius,length=length, viscosity=viscosity) pipe.pressure_drop = pressure_drop plt.subplot(221) viscosity.visc_plot() plt.plot(pipe.shear_rate_wall,pipe.viscosity_wall(),'o') plt.subplot(222) pipe.q_plot(pressure_drop_min=200.,pressure_drop_max=200000.) plt.plot(pipe.q,pipe.pressure_drop,'o') plt.subplot(223) pipe.shear_rate_plot() plt.subplot(224) pipe.vz_plot() interact(tube_dashboard_plot,pressure_drop=(100.,10000.),radius=(.01,.05),length=(.1,.5), density=(500,1500),eta0=(1,100),reltime=(.001,10),n=(.2,0.9),continuous_update=False)
notebooks/laminar_pipe_dashboard.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Meraki Python SDK Demo: SSID Limits Checker # # *This notebook demonstrates using the Meraki Python SDK to check all SSIDs in a network for bandwidth limits. If any are found, it also provides the option to remove them all automatically.* # # If you have users complaining about slow WiFi, you might like to check if there are any SSID-wide speed limits. With the Meraki Dashboard API, its SDK and Python, we can check for speed limits on any/all SSIDs across the organization without digging through the GUI. # # --- # # >NB: Throughout this notebook, we will print values for demonstration purposes. In a production Python script, the coder would likely remove these print statements to clean up the console output. # In this first cell, we install and import the required meraki and os modules, and open the Dashboard API connection using the SDK. Make sure you have [set up your environment variables first](https://github.com/meraki/dashboard-api-python/blob/master/notebooks/notebooksReadme.md#setting-up-your-environment-variables). # + tags=[] # Install the relevant modules. If you are using a local editor (e.g. VS Code, rather than Colab) you can run these commands, without the preceding %, via a terminal. NB: Run `pip install meraki==` to find the latest version of the Meraki SDK. # %pip install meraki # %pip install tablib # If you are using Google Colab, please ensure you have set up your environment variables as linked above, then delete the two lines of ''' to activate the following code: ''' %pip install colab-env -qU import colab_env ''' # Rely on meraki SDK, os, and tablib -- more on tablib later import meraki import os import tablib # We're also going to import Python's built-in JSON module, but only to make the console output pretty. 
In production, you wouldn't need any of the printing calls at all, nor this import! import json # Setting the API key this way, and storing it in the env variables, lets us keep the sensitive API key out of the script itself # The meraki.DashboardAPI() method does not require explicitly passing this value; it will check the environment for a variable # called 'MERAKI_DASHBOARD_API_KEY' on its own. In this case, API_KEY is shown simply as a reference to where that information is # stored. API_KEY = os.getenv('MERAKI_DASHBOARD_API_KEY') # Initialize the Dashboard connection. dashboard = meraki.DashboardAPI() # - # Let's make a basic pretty print formatter, `printj()`. It will make reading the JSON later a lot easier, but won't be necessary in production scripts. def printj(ugly_json_object): # The json.dumps() method converts a JSON object into human-friendly formatted text pretty_json_string = json.dumps(ugly_json_object, indent = 2, sort_keys = False) return print(pretty_json_string) # Most API calls require passing values for the organization ID and/or the network ID. In this second cell, we fetch a list of the organizations the API key can access, then pick the first org in the list, and the first network in that organization, to use for later operations. You could re-use this code presuming your API key only has access to a single organization, and that organization only contains a single network. Otherwise, you would want to review the contents of the organizations object declared and printed here. # + tags=[] # Let's make it easier to call this data later # getOrganizations will return all orgs to which the supplied API key has access organizations = dashboard.organizations.getOrganizations() print('Organizations:') printj(organizations) # This example presumes we want to use the first organization as the scope for later operations.
firstOrganizationId = organizations[0]['id'] firstOrganizationName = organizations[0]['name'] # Print a blank line for legibility before showing the firstOrganizationId print('') print(f'The firstOrganizationId is {firstOrganizationId}, and its name is {firstOrganizationName}.') # - # This example will analyze and potentially change every SSID in every network in your organization. It is fine to re-use presuming that that's what you want to do. Otherwise, you might want to review the `networks` list and operate on just one of them instead. # + tags=[] networks = dashboard.organizations.getOrganizationNetworks(organizationId=firstOrganizationId) print('Networks:') printj(networks) # - # Now that we've got the organization and network values figured out, we can get to the ask at hand: # # > Check for any SSID-level bandwidth limits. # # We can only run this on networks that have wireless devices, so we have a `for` loop that checks each entry in the `networks` list. If the network's `productTypes` value contains `wireless`, then we'll pull the SSIDs from it. # # The `getNetworkWirelessSsids` endpoint will return the SSIDs (enabled or otherwise, with or without limits) for the network. We will use a [list comprehension](https://www.datacamp.com/community/tutorials/python-list-comprehension) to make a new list, `organization_ssids_with_limits`, that contains any with a bandwidth limit set. # # >NB: There are also traffic-shaping rules that are applied on a per-rule basis. 
This part does not check those; rule-based limits are addressed separately at the end of this notebook. # # + tags=[] # Create an empty list where we can store all of the organization's SSIDs organization_ssids = [] # Let's make a list of all the organization's SSIDs for network in networks: # We only want to examine networks that might contain APs if 'wireless' in network['productTypes']: # let's find every SSID for ssid in dashboard.wireless.getNetworkWirelessSsids(network['id']): # Add each network's SSIDs to organization_ssids organization_ssids.append({'networkId': network['id'], 'ssid': ssid}) # + tags=[] # Let's make a list of organization SSIDs that have SSID-wide bandwidth limits set organization_ssids_with_limits = [ {'networkId': i['networkId'], 'number': i['ssid']['number']} for i in organization_ssids if i['ssid']['perClientBandwidthLimitUp'] or i['ssid']['perClientBandwidthLimitDown'] or i['ssid']['perSsidBandwidthLimitUp'] or i['ssid']['perSsidBandwidthLimitDown'] ] # Let's inform the user what we found if len(organization_ssids_with_limits): print('These SSIDs have bandwidth limits:') printj(organization_ssids_with_limits) else: print('There are no SSIDs with bandwidth limits set on the SSID level.') # - # To remove the SSID limits, we will modify these values: # * On the SSID level, using `updateNetworkWirelessSsid`, we will set all bandwidth limits to 0, which is "unlimited." # * Separately, we will remove any custom traffic shaping rules using `updateNetworkWirelessSsidTrafficShapingRules`. # # Let's write a method that we can call later to do this: # Let's create a function that removes any found limits. We might use this later.
def removeSsidLimits(ssids): for ssid in ssids: # Remove SSID-wide limits dashboard.wireless.updateNetworkWirelessSsid( ssid['networkId'], ssid['number'], perClientBandwidthLimitUp=0, perClientBandwidthLimitDown=0, perSsidBandwidthLimitUp=0, perSsidBandwidthLimitDown=0 ) # Disable rule-based traffic-shaping rules dashboard.wireless.updateNetworkWirelessSsidTrafficShapingRules( ssid['networkId'], ssid['number'], rules=[] ) # We will also define a separate method that removes custom traffic shaping rules everywhere, using the same `updateNetworkWirelessSsidTrafficShapingRules` method we used above, but this time applying it to all SSIDs. Like before, we're defining the method here, and we'll give the user the option to run this later. def removeCustomTrafficShapingRules(): # We'll check each network for network in networks: # We only want to examine networks that might contain APs if 'wireless' in network['productTypes']: # SSIDs are always numbered 1-15 (0-14 in the API) for ssidNumber in range(15): # Disable rule-based traffic shaping for that network's SSID dashboard.wireless.updateNetworkWirelessSsidTrafficShapingRules( network['id'], ssidNumber, rules=[] ) # # Final steps # # Here we're going to give the user an interactive prompt. First we set a few string literals that we can reuse to keep the code tight, then we call `removeSsidLimits` on `organization_ssids_with_limits` if the user confirms the appropriate prompts. **Use with care--there's no undo!** # + tags=[] # Re-used strings CONFIRM_STRING = 'OK, are you sure you want to do this? This script does not have an "undo" feature.' CANCEL_STRING = 'OK. Operation canceled.' WORKING_STRING = 'Working...' COMPLETE_STRING = 'Operation complete.' 
# Let's give the user the option to clear those bandwidth limits if len(organization_ssids_with_limits): print('Would you like to remove all SSID-level bandwidth limits?') if input('([Y]es/[N]o):') in ['Y', 'y', 'Yes', 'yes', 'ye', 'Ye']: print(CONFIRM_STRING) if input('([Y]es/[N]o):') in ['Y', 'y', 'Yes', 'yes', 'ye', 'Ye']: print(WORKING_STRING) removeSsidLimits(organization_ssids_with_limits) print(COMPLETE_STRING) else: print(CANCEL_STRING) else: print(CANCEL_STRING) # - # As one last option, we can also mass-remove custom traffic shaping rules from all SSIDs across all organizations. This might be useful if, in the past, an admin had set a custom traffic shaping rule, but it's unclear where it was set. **Use with care--there's no undo!** # + tags=[] # Let's also check if the user wants to take the extra step to remove all rule-based limits print('There may also be client bandwidth limits on custom traffic shaping rules. Would you also like to remove any and all custom traffic shaping rules? This may take some time depending on the size and quantity of your networks. This will not clear default traffic shaping rules.') if input('([Y]es/[N]o):') in ['Y', 'y', 'Yes', 'yes', 'ye', 'Ye']: print(CONFIRM_STRING) if input('([Y]es/[N]o):') in ['Y', 'y', 'Yes', 'yes', 'ye', 'Ye']: print(WORKING_STRING) removeCustomTrafficShapingRules() print(COMPLETE_STRING) else: print(CANCEL_STRING) else: print(CANCEL_STRING) # - # # Final thoughts # # And we're done! Hopefully you found this a useful demonstration of just a few things that are possible with Meraki's Python SDK. These additional resources may prove useful along the way. # # [Meraki Interactive API Docs](https://developer.cisco.com/meraki/api-v1/#!overview): The official (and interactive!) Meraki API and SDK documentation repository on DevNet. # # [VS Code](https://code.visualstudio.com/): An excellent code editor with full support for Python and Python notebooks. 
# # [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/): An excellent learning resource that puts the real-world problem first, then teaches you the Pythonic solution along the way.
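The nested yes/no prompts above are written out twice. They could be factored into a single helper; a sketch (not part of the original script) — the `ask` parameter exists only so the prompt can be driven without a real console:

```python
YES_ANSWERS = {'y', 'ye', 'yes'}

def confirmed(prompt, ask=input):
    """Return True when the user answers yes to the prompt."""
    return ask(prompt + ' ([Y]es/[N]o): ').strip().lower() in YES_ANSWERS

# Driving the helper with fake input functions instead of the console:
accepted = confirmed('Remove all limits?', ask=lambda _: 'Yes')
declined = confirmed('Remove all limits?', ask=lambda _: 'n')
```

With such a helper, each confirmation step collapses to `if confirmed(...) and confirmed(CONFIRM_STRING): ...`.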
notebooks/merakiSSIDLimitsChecker.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Rewriting rules in ReGraph # # In the context of ReGraph, by rewriting rules we mean the rules of _sesqui-pushout rewriting_ (see more details [here](https://link.springer.com/chapter/10.1007/11841883_4)). A rewriting rule consists of three graphs: $p$ – preserved part, $lhs$ – left hand side, $rhs$ – right hand side, and two mappings: from $p$ to $lhs$ and from $p$ to $rhs$. # # Informally, $lhs$ represents a pattern to match in a graph, subject to rewriting. $p$, together with the mapping $p \rightarrow lhs$, specifies the part of the pattern that stays preserved during rewriting, i.e. all the nodes/edges/attributes present in $lhs$ but not in $p$ will be removed. $rhs$ and $p \rightarrow rhs$ specify nodes/edges/attributes to add to $p$. In addition, rules defined in such a way allow cloning and merging of nodes. If two nodes from $p$ map to the same node in $lhs$, the node corresponding to this node of the pattern will be cloned. Symmetrically, if two nodes from $p$ map to the same node in $rhs$, the corresponding two nodes will be merged. # # The following examples will illustrate the idea behind the sesqui-pushout rewriting rules more clearly: # from regraph import NXGraph, Rule, plot_rule # ### 1.
Creating a rewriting rule from a pattern and injecting transformations # + # Define the left-hand side of the rule pattern = NXGraph() pattern.add_nodes_from([1, 2, 3]) pattern.add_edges_from([(1, 2), (2, 3)]) rule1 = Rule.from_transform(pattern) # `inject_clone_node` returns the IDs of the newly created # clone in P and RHS p_clone, rhs_clone = rule1.inject_clone_node(1) rule1.inject_add_node("new_node") rule1.inject_add_edge("new_node", rhs_clone) plot_rule(rule1) # - # Every rule can be converted to a sequence of human-readable commands, print(rule1.to_commands()) # ### 2. Creating a rewriting rule from $lhs$, $p$, $rhs$ and two maps # # By default, `Rule` objects in ReGraph are initialized with three graph objects (`NXGraph`) corresponding to $p$, $lhs$ and $rhs$, together with two Python dictionaries encoding the homomorphisms $p \rightarrow lhs$ and $p \rightarrow rhs$. This may be useful in a lot of different scenarios. For instance, as in the following example. # + # Define the left-hand side of the rule pattern = NXGraph() pattern.add_nodes_from([1, 2, 3]) pattern.add_edges_from([(1, 2), (1, 3), (1, 1), (2, 3)]) # Define the preserved part of the rule rule2 = Rule.from_transform(pattern) p_clone, rhs_clone = rule2.inject_clone_node(1) plot_rule(rule2) print("New node corresponding to the clone: ", p_clone) print(rule2.p.edges()) # - # As the result of cloning of the node `1`, all its incident edges are copied to the newly created clone node (variable `p_clone`). However, in our rule we would like to keep only some of the edges and remove the rest as follows. 
# + rule2.inject_remove_edge(1, 1) rule2.inject_remove_edge(p_clone, p_clone) rule2.inject_remove_edge(p_clone, 1) rule2.inject_remove_edge(p_clone, 2) rule2.inject_remove_edge(1, 3) print(rule2.p.edges()) plot_rule(rule2) # - # Instead of initializing our rule from the pattern and injecting a lot of edge removals, we could directly initialize three objects for $p$, $lhs$ and $rhs$, where $p$ contains only the desired edges. In the following example, because the rule does not specify any merges or additions (so $rhs$ is isomorphic to $p$), we can omit the parameter $rhs$ in the constructor of `Rule`. # + # Define the left-hand side of the rule lhs = NXGraph() lhs.add_nodes_from([1, 2, 3]) lhs.add_edges_from([(1, 2), (1, 3), (1, 1), (2, 3)]) # Define the preserved part of the rule p = NXGraph() p.add_nodes_from([1, "1_clone", 2, 3]) p.add_edges_from([ (1, 2), (1, "1_clone"), ("1_clone", 3), (2, 3)]) p_lhs = {1: 1, "1_clone": 1, 2: 2, 3: 3} # Initialize a rule object rule3 = Rule(p, lhs, p_lhs=p_lhs) plot_rule(rule3) print("New node corresponding to the clone: ", "1_clone") print(rule3.p.edges())
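A dictionary like `p_lhs` is only a valid rule component if it is a graph homomorphism, i.e. it must send every edge of $p$ onto an edge of $lhs$. A quick plain-Python sanity check, independent of ReGraph's API, using the same data as the last example:

```python
def is_homomorphism(src_edges, dst_edges, mapping):
    """True if mapping sends every source edge onto a target edge."""
    dst = set(dst_edges)
    return all((mapping[u], mapping[v]) in dst for u, v in src_edges)

# Same graphs and map as rule3 above, restated as plain data.
lhs_edges = [(1, 2), (1, 3), (1, 1), (2, 3)]
p_edges = [(1, 2), (1, "1_clone"), ("1_clone", 3), (2, 3)]
p_lhs = {1: 1, "1_clone": 1, 2: 2, 3: 3}
```

For instance, the edge `("1_clone", 3)` of $p$ maps to `(1, 3)`, which is indeed an edge of $lhs$, so the check passes.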
examples/Tutorial_rules.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sequential NN import numpy as np import pandas as pd import matplotlib.pyplot as plt from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from config.config import * from config.constants import * from keras.callbacks import ModelCheckpoint from keras.models import load_model from sklearn.metrics import accuracy_score from collections import Counter from sklearn.model_selection import train_test_split def plot_model(hist): fig, axs = plt.subplots(nrows=1, figsize=(11, 9)) plt.rcParams['font.size'] = '14' for label in (axs.get_xticklabels() + axs.get_yticklabels()): label.set_fontsize(14) plt.plot(hist.history['accuracy']) plt.plot(hist.history['val_accuracy']) axs.set_title('Model Accuracy') axs.set_ylabel('Accuracy', fontsize=14) axs.set_xlabel('Epoch', fontsize=14) plt.legend(['train', 'val'], loc='upper left') plt.show() print("Model has training accuracy of {:.2f}%".format(hist.history['accuracy'][-1]*100)) def pre_process_split(path): dataset = pd.read_csv(path) dataset.dropna(inplace = True) # assigning new column names to the dataframe # dataset.columns = constants.cols + ['label'] # creating training set ignoring labels train_data = dataset[dataset.columns[:-1]].values labels = dataset['label'].values n_class = len(set(labels)) X_train, X_test, y_train, y_test = train_test_split(train_data, labels, test_size=0.20) X_train = X_train.reshape(-1, 1, train_data.shape[1]) X_test = X_test.reshape(-1, 1, train_data.shape[1]) y_train = y_train.reshape(-1, 1, 1) y_test = y_test.reshape(-1, 1, 1) return X_train, X_test, y_train, y_test, n_class def model_config_train(name,eps,bs,actvn,datalink): print("processing dataset") X_train, X_test, y_train, y_test, n_class = pre_process_split(datalink)
print(n_class) model = Sequential() model.add(LSTM(100, input_shape=(X_train.shape[1], X_train.shape[2]))) model.add(Dense(n_class, activation=actvn)) print(model.summary()) chk = ModelCheckpoint(name+'.pkl',save_best_only=True, mode='auto', verbose=1) print("saving as:",name+'.pkl') model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) hist = model.fit(X_train, y_train, epochs=eps, batch_size=bs, callbacks=[chk], validation_split=0.2) plot_model(hist) return model # ## Loading dataset for binary classifier def plotter(plot_data,unique_labels,n_plots): data = plot_data.copy() predicted_labels = data['label'] #print(len(set(predicted_labels)),unique_labels) #print(Counter(predicted_labels).values(),[unique_labels[each] for each in Counter(predicted_labels).keys()]) matrics = sorted(zip([unique_labels[each] for each in Counter(predicted_labels).keys()],Counter(predicted_labels).values() ), key=lambda x: x[1]) score = [list(j) for j in matrics][::-1] total = sum([i[1] for i in score]) c=0 for i in score: score[c][1] = str(round(i[1]*100/total,2))+"%" #print("Fault type:", i[-1], "Percentage: {:.2f}%".format(i[1]*100/total)) c+=1 print(pd.DataFrame.from_records(score,columns=['Fault type','Percentage'])) #print("changing numbers to labels again") data['label'] = [unique_labels[i] for i in predicted_labels] fig, ax = plt.subplots(n_plots,figsize=(15,4*n_plots)) for j in range(n_plots): legend_list = [] for i in range(len(set(predicted_labels))): extract = data[data.label==unique_labels[i]][cols[j]] #print(len(extract)) if unique_labels[i]==score[0][0] and score[0][0]!='NML' or unique_labels[i]== 'FAULT': temp = ax[j].scatter(extract.index,extract,marker='+',s=40) else: temp = ax[j].scatter(extract.index,extract,marker='.',s=10) legend_list.append(temp) ax[j].legend(legend_list,unique_labels,scatterpoints=3,ncol=1,fontsize=15) fig.tight_layout() plt.show() return score[0][0] def tester(model,frame): data = frame cols = 
['A'+str(each+1) for each in range(int(col_len/2))] + ['V'+str(each+1) for each in range(int(col_len/2))] if data.shape[1]==6: data.columns = cols elif data.shape[1]==7: data.columns = cols + ['label'] data = data[cols] else: print("columns length is ",data.shape[1]) test_preds = model.predict(data.values.reshape(-1,1,6).tolist()) predicted_labels = np.argmax(test_preds,axis=1) data['label'] = predicted_labels return data # ## Testing the models model_config_train('binary_clf',20,2000,'softmax','./KMTrainingSet/binary/bin_dataset_simulink.csv') model_config_train('multi_clf',20,2000,'softmax','./KMTrainingSet/multi/mul_dataset_simulink.csv') binary_labels_list = ['NML','FAULT'] binary_model = load_model('binary_clf.pkl') multi_labels_list = ['AB', 'AC', 'BC', 'ABC', 'AG', 'BG', 'ABG', 'CG', 'ACG', 'BCG', 'ABCG'] multi_model = load_model('multi_clf.pkl') import os # + # current directory path = "./TrainingSet/" # list of file of the given path is assigned to the variable file_list = [each for each in list(os.walk(path))[0][-1] if ".csv" in each] # - checker = [] for each in file_list: print("\n.\n.\n",each) temp = tester(binary_model,pd.read_csv('./TrainingSet/'+each)) plotter(temp,binary_labels_list,2) temp = tester(multi_model,temp[temp.label!=0]) high = plotter(temp,multi_labels_list,2) if high == ''.join([i for i in each.split(".")[0] if not i.isdigit()]): checker.append(high) else: checker.append('incorrect') # + files_failing_model = [file_list[i] for i in range(len(checker)) if checker[i]=='incorrect'] names = [''.join([i for i in each.split(".")[0] if not i.isdigit()]) for each in files_failing_model] # - Counter(names) # + temp = tester(binary_model,pd.read_csv('./TrainingSet/1AB.csv')) plotter(temp,binary_labels_list,2) temp = tester(multi_model,temp[temp.label!=0]) plotter(temp,multi_labels_list,2) # - data = pd.read_csv('./TrainingSet/1AG.csv') round(data['3V']) # + dat = Counter((round(data['3V'])/10)) matrics = sorted(zip([each for each in 
Counter(dat).keys()],Counter(dat).values() ), key=lambda x: x[0]) # - matrics import matplotlib.pyplot as plt from kneed import KneeLocator from sklearn.datasets import make_blobs from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score from sklearn.preprocessing import StandardScaler import pandas as pd data = pd.read_csv('KMTrainingset/2ABG.csv') features = data[data.columns[:-1]].values.tolist() #true_labels = data['label'].values.tolist() scaler = StandardScaler() scaled_features = scaler.fit_transform(features) kmeans = KMeans( init="random", n_clusters=2, n_init=10, max_iter=500, random_state=42 ) kmeans.fit(scaled_features) kmeans.cluster_centers_ labels = kmeans.fit_predict(features) #data['label']=labels data.head() dic = Counter(labels) dic if dic[1]>dic[0]: print("1 = 0 , 0 =1") data['label']=[1 if i == 0 else 0 for i in labels] else: print(True) dic = Counter(data['label']) data # + n_plots = 6 fig, ax = plt.subplots(n_plots,figsize=(15,4*n_plots)) unique_labels = ['NML','Fault'] cols = data.columns[:-1] for j in range(6): legend_list = [] for i in list(set(data.label)): plo = data[data.label == i] temp = ax[j].scatter(plo.index,plo[cols[j]],marker='+',s=40) legend_list.append(temp) ax[j].legend(legend_list,unique_labels,scatterpoints=3,ncol=1,fontsize=15) fig.tight_layout() plt.show() # - org = [0,1,0,1,1,1,0,0,1,0,1] [1 if i == 0 else 0 for i in org] for x,y in zip(org,[1 if i == 0 else 0 for i in org]): print(x+y)
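The cluster-label flip above (`[1 if i == 0 else 0 for i in labels]`, triggered when cluster 1 outnumbers cluster 0) encodes the assumption that normal ('NML') samples form the majority cluster. A reusable sketch of that convention:

```python
from collections import Counter

def align_binary_labels(labels):
    """Relabel so the majority cluster is 0 ('NML'), under the
    assumption that normal samples outnumber faulty ones."""
    counts = Counter(labels)
    if counts[1] > counts[0]:
        # Flip 0 <-> 1 so the larger cluster is labeled 0.
        return [1 - label for label in labels]
    return list(labels)
```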
.ipynb_checkpoints/MC_Classifier_NN-checkpoint 19.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # PyDMD # ## Tutorial 1: Dynamic Mode Decomposition on a toy dataset # In this tutorial we will show the typical use case, applying the dynamic mode decomposition on the snapshots collected during the evolution of a generic system. We present a very simple system since the main purpose of this tutorial is to show the capabilities of the algorithm and the package interface. # First of all we import the DMD class from the pydmd package, we set matplotlib for the notebook and we import numpy. # + # %matplotlib inline import matplotlib.pyplot as plt import numpy as np from pydmd import DMD # - # We create the input data by summing two different functions:<br> # $f_1(x,t) = \text{sech}(x+3)\exp(i2.3t)$<br> # $f_2(x,t) = 2\text{sech}(x)\tanh(x)\exp(i2.8t)$.<br> # + def f1(x,t): return 1./np.cosh(x+3)*np.exp(2.3j*t) def f2(x,t): return 2./np.cosh(x)*np.tanh(x)*np.exp(2.8j*t) x = np.linspace(-5, 5, 128) t = np.linspace(0, 4*np.pi, 256) xgrid, tgrid = np.meshgrid(x, t) X1 = f1(xgrid, tgrid) X2 = f2(xgrid, tgrid) X = X1 + X2 # - # The plots below represent these functions and the dataset. # + titles = ['$f_1(x,t)$', '$f_2(x,t)$', '$f$'] data = [X1, X2, X] fig = plt.figure(figsize=(17,6)) for n, title, d in zip(range(131,134), titles, data): plt.subplot(n) plt.pcolor(xgrid, tgrid, d.real) plt.title(title) plt.colorbar() plt.show() # - # Now we have the temporal snapshots in the input matrix rows: we can easily create a new DMD instance and exploit it in order to compute the decomposition on the data. Since the snapshots must be arranged by columns, in this case we need to transpose the matrix. 
dmd = DMD(svd_rank=2) dmd.fit(X.T) # The `dmd` object contains the principal information about the decomposition: # - the attribute `modes` is a 2D numpy array where the columns are the low-rank structures individuated; # - the attribute `dynamics` is a 2D numpy array where the rows refer to the time evolution of each mode; # - the attribute `eigs` refers to the eigenvalues of the low dimensional operator; # - the attribute `reconstructed_data` refers to the approximated system evolution. # # Moreover, some helpful methods for the graphical representation are provided. # Thanks to the eigenvalues, we can check if the modes are stable or not: if an eigenvalue is on the unit circle, the corresponding mode will be stable; while if an eigenvalue is inside or outside the unit circle, the mode will converge or diverge, respectively. From the following plot, we can note that the two modes are stable. # + for eig in dmd.eigs: print('Eigenvalue {}: distance from unit circle {}'.format(eig, np.abs(eig.imag**2+eig.real**2 - 1))) dmd.plot_eigs(show_axes=True, show_unit_circle=True) # - # We can plot the modes and the dynamics: # + for mode in dmd.modes.T: plt.plot(x, mode.real) plt.title('Modes') plt.show() for dynamic in dmd.dynamics: plt.plot(t, dynamic.real) plt.title('Dynamics') plt.show() # - # Finally, we can reconstruct the original dataset as the product of modes and dynamics. We plot the evolution of each mode to emphasize their similarity with the input functions and we plot the reconstructed data. # + fig = plt.figure(figsize=(17,6)) for n, mode, dynamic in zip(range(131, 133), dmd.modes.T, dmd.dynamics): plt.subplot(n) plt.pcolor(xgrid, tgrid, (mode.reshape(-1, 1).dot(dynamic.reshape(1, -1))).real.T) plt.subplot(133) plt.pcolor(xgrid, tgrid, dmd.reconstructed_data.T.real) plt.colorbar() plt.show() # - # We can also plot the absolute error between the approximated data and the original one. 
plt.pcolor(xgrid, tgrid, (X-dmd.reconstructed_data.T).real) fig = plt.colorbar() # The reconstructed system looks almost equal to the original one: the dynamic mode decomposition made possible the identification of the meaningful structures and the complete reconstruction of the system using only the collected snapshots.
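Under the hood, exact DMD amounts to finding the best-fit linear operator between the shifted snapshot matrices and examining its eigenvalues. A bare-numpy sketch of that core step on a known 2x2 linear system — an illustration of the idea, not PyDMD's actual implementation:

```python
import numpy as np

# Generate snapshots from a known linear system x_{k+1} = A_true x_k.
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])
x = np.array([1.0, 1.0])
snapshots = [x]
for _ in range(10):
    x = A_true @ x
    snapshots.append(x)
X = np.array(snapshots).T  # snapshots arranged by columns, as DMD expects

# Best-fit operator mapping each snapshot onto the next one.
A_dmd = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
dmd_eigs = np.sort(np.linalg.eigvals(A_dmd).real)
```

Because both eigenvalues of `A_true` lie inside the unit circle, the recovered modes are decaying, mirroring the stability discussion above.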
tutorials/tutorial-1-dmd.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import glob import pandas as pd # contains population data. os.chdir('C:\\Users\\577731\\Desktop\\va-datathon-2017\\data\\acsdata2') acs_csv = [i for i in glob.glob('*.{}'.format('csv'))] print(acs_csv) trythis = acs_csv[0][4:6] pd.read_csv(acs_csv[0],header=1) def ingest_and_merge(directory,skiplines=2): os.chdir(directory) acs_csv = [i for i in glob.glob('*.{}'.format('csv'))] leftdf = pd.read_csv(acs_csv[0], header=1) leftdf["year"] = '20'+acs_csv[0][4:6] rightdf = pd.read_csv(acs_csv[1], header=1) rightdf["year"] = '20'+acs_csv[1][4:6] growingchain = pd.merge(leftdf, rightdf, how='outer', on=['Id2','year']) for fileindex in range(2,len(acs_csv)): rightdf = pd.read_csv(acs_csv[fileindex], header=1) rightdf["year"] = '20'+acs_csv[fileindex][4:6] growingchain = pd.merge(growingchain, rightdf, how='outer', on=['Id2','year']) growingchain = growingchain.T.drop_duplicates().T return growingchain masterdf = ingest_and_merge('C:\\Users\\577731\\Desktop\\va-datathon-2017\\data\\acsdata2') def qualitycheck(df): columnlist = [] nulllist = [] columnlist = df.columns.tolist() for i in columnlist: print(i) nulls= sum(df[i].isnull()) nulllist.append(nulls) df2 = pd.DataFrame({'column':columnlist, 'nulls': nulllist}) return df2 # + masterdf.columns #m2asterdf = masterdf #m2asterdf = m2asterdf.T.drop_duplicates().T #m2asterdf.shape # - # mastedf # + test = qualitycheck(masterdf) test # - masterdf.to_csv("ACS_master2.csv")
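The slicing `'20' + acs_csv[0][4:6]` in `ingest_and_merge` assumes every file follows the ACS export naming pattern `ACS_YY_...csv`. Wrapping that assumption in a small helper makes it explicit (the example filename below is hypothetical):

```python
def year_from_acs_filename(filename):
    """Extract a 4-digit year from names shaped like 'ACS_YY_...csv',
    mirroring the '20' + name[4:6] slice used in ingest_and_merge."""
    return '20' + filename[4:6]

# Hypothetical filename following the assumed pattern.
year = year_from_acs_filename('ACS_15_5YR_DP03.csv')
```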
data/acsdata2/ACS.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.3 ('base') # language: python # name: python3 # --- # Always reload modules to have the current version # %reload_ext autoreload # %autoreload 2 # + from ranking.models.fasttext import FastTextModel from ranking.storage.document_store import DocumentStore from ranking.util import dataset_paths as dp from ranking.util import json_lines as jl from ranking.util.corpus import CorpusReader from ranking.normalization.normalizer import normalize, get_wn_stopwords import os import ranking.evaluation.evaluate as ev corpus_name = dp.dup_lemmatized_unique_sentences_corpus model_name = 'dup-complete-lemmatized-sg-fastText_model.pkl' eval_set = jl.read_jsonl(dp.manual_evaluation_set) eval_set['docQuery'] = eval_set['docQuery'].apply(lambda query: normalize(query, stop=get_wn_stopwords())) if os.path.exists(model_name): model = FastTextModel.load(model_name) else: store = DocumentStore(dp.lemmatized_unique_functions_corpus) corpus = CorpusReader(corpus_name) model = FastTextModel(store, corpus) model.save(model_name) # - print('Hoogle + FastText:') ev.evaluate_model(model, eval_set)
src/Rank/notebooks/evaluation/FastText/FastText_WDup_Manual_Lemmatized.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:wildfires] # language: python # name: conda-env-wildfires-py # --- # ## Setup from specific import * # ### Retrieve previous results from the 'model' notebook X_train, X_test, y_train, y_test = data_split_cache.load() rf = get_model() # #### Get Dask Client client = get_client() # client = Client(n_workers=1, threads_per_worker=8, resources={'threads': 8}) add_common_path(client) client rf_fut = client.scatter(rf, broadcast=True) X_fut = client.scatter(X_train, broadcast=True) # ## ALE Plotting # ### Worldwide # + world_ale_1d_cache = SimpleCache("world_ale_1d", cache_dir=CACHE_DIR) # world_ale_1d_cache.clear() @world_ale_1d_cache def get_world_ale_1d(): n_threads = 8 ale_fs = [ client.submit( add_common_path_deco(save_ale_plot_1d_with_ptp), model=rf_fut, X_train=X_fut, column=column, n_jobs=n_threads, monte_carlo_rep=1000, resources={"threads": n_threads}, pure=False, ) for column in X_train.columns ] for ale_f in tqdm( dask.distributed.as_completed(ale_fs), total=len(ale_fs), unit="plot", desc="Calculating 1D ALE plots", smoothing=0, ): if ale_f.status == "error": print(ale_f.result()) ptp_values = {} mc_ptp_values = {} for column, ale_f in zip(X_train.columns, ale_fs): ptp_values[column], mc_ptp_values[column] = ale_f.result() return ptp_values, mc_ptp_values ptp_values, mc_ptp_values = get_world_ale_1d() # - # ### Run Non-MC runs manually (just for the plots) # + n_threads = 8 ale_fs = [ client.submit( add_common_path_deco(save_ale_plot_1d_with_ptp), model=rf_fut, X_train=X_fut, column=column, n_jobs=n_threads, monte_carlo=False, resources={"threads": n_threads}, pure=False, ) for column in X_train.columns ] for ale_f in tqdm( dask.distributed.as_completed(ale_fs), total=len(ale_fs), unit="plot", desc="Calculating 1D Non-MC ALE plots", smoothing=0, ): if ale_f.status == 
"error": print(ale_f.result()) # - # ## PDP Plotting # ### Worldwide # %time save_pdp_plot_1d(rf, X_train, 'Dry Day Period -12 - 0 Month', n_jobs=32) # ### Sequentially Locally for column in tqdm( X_train.columns, unit="plot", desc="Calculating 1D PDP plots", smoothing=0.05 ): figs, pdp_isolate_out, data_file = save_pdp_plot_1d(rf, X_train, column, n_jobs=32) for fig in figs: plt.close(fig) # ### Using a Dask distributed Cluster # + # %%time n_threads = 8 pdp_fs = [ client.submit( add_common_path_deco(save_pdp_plot_1d), model=rf_fut, X_train=X_fut, column=column, n_jobs=n_threads, resources={"threads": n_threads}, pure=False, ) for column in X_train.columns ] for pdp_f in tqdm( dask.distributed.as_completed(pdp_fs), total=len(pdp_fs), unit="plot", desc="Calculating 1D PDP plots", smoothing=0.04, ): if pdp_f.status == "error": print(pdp_f.result()) # - # ## Combining Multiple ALE plots short_X_train = shorten_columns(X_train) short_X_train.columns = repl_fill_names(short_X_train.columns) for feature in tqdm( [ name for name in ("DD", "SIF", "FAPAR", "LAI", "VOD") # Require the feature to be present for all shifts (including 0) if sum(name in c for c in short_X_train.columns) > 2 ], desc="Multiple shift ALE plots", ): multi_ale_plot_1d( rf, short_X_train, [c for c in short_X_train.columns if feature in c and get_lag(c) <= 9], f'{feature.replace(" ", "_").lower()}_ale_shifts', n_jobs=get_ncpus(), verbose=False, xlabel=f"{feature}", # title=f"First-order ALE for {feature}", figure_saver=figure_saver, CACHE_DIR=CACHE_DIR, ) # + fig, axes = plt.subplots(1, 2, figsize=(7.05, 2.8)) matched = [f for f in ["VOD", "SIF", "FAPAR", "LAI"] if f in short_X_train] assert len(matched) == 1 features = (matched[0], "DD") for feature, ax, title in zip(features, axes, ("(a)", "(b)")): multi_ale_plot_1d( rf, short_X_train, [c for c in short_X_train.columns if feature in c and get_lag(c) <= 9], fig=fig, ax=ax, n_jobs=get_ncpus(), verbose=False, xlabel=add_units(feature), # 
title=f"First-order ALE for {feature}", CACHE_DIR=CACHE_DIR, figure_saver=None, x_rotation=30, ) # ax.text(-0.09, 1.05, title, transform=ax.transAxes, fontsize=11) ax.set_title(title) axes[0].set_ylabel("ALE (BA)") fig.tight_layout(w_pad=0.02) fig.align_labels() # Explicitly set the x-axis labels' positions so they line up horizontally. y_min = 1 for ax in axes: bbox = ax.get_position() if bbox.ymin < y_min: y_min = bbox.ymin for ax in axes: bbox = ax.get_position() mean_x = (bbox.xmin + bbox.xmax) / 2.0 ax.xaxis.set_label_coords(mean_x, y_min - 0.147, transform=fig.transFigure) figure_saver.save_figure( fig, f'{"__".join(features).replace(" ", "_").lower()}_ale_shifts', sub_directory="multi_ale", )
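The submit/as_completed pattern used for both the ALE and PDP runs can be sketched without Dask, using the standard library's `concurrent.futures` with a trivial stand-in task (the real plotting helpers are not reproduced here; the `fut.exception()` check mirrors the `status == "error"` check above):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def make_plot(column):
    # Stand-in for save_ale_plot_1d_with_ptp: returns a (ptp, mc_ptp) pair.
    return len(column), len(column) * 2

columns = ["Dry Day Period", "FAPAR", "LAI"]
results = {}
with ThreadPoolExecutor(max_workers=2) as pool:
    # Map each future back to its column so results can be keyed by feature.
    futures = {pool.submit(make_plot, c): c for c in columns}
    for fut in as_completed(futures):
        column = futures[fut]
        if fut.exception() is not None:
            print(column, fut.exception())
        else:
            results[column] = fut.result()
print(sorted(results))
```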
analyses/seasonality_paper_st/fapar_only/model_analysis_ale_pdp.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # #!/usr/bin/env python # Configure Jupyter so figures appear in the notebook # %matplotlib inline # Configure Jupyter to display the assigned value after an assignment # %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim.py module from modsim import * def make_system(beta, gamma, quarantine_rate): """Make a system object for the SIR model with quarantine. beta: contact rate (per day) gamma: recovery rate (per day) returns: System object """ init = State(S=89, I=1, Q=0, R=0) init /= sum(init) t0 = 0 t_end = 7 * 14 return System(init=init, t0=t0, t_end=t_end, beta=beta, gamma=gamma, quarantine_rate=quarantine_rate) def update_func(state, t, system): """Update the SIR model with quarantine. state: State with variables S, I, Q, R t: time step system: System with beta, gamma and quarantine_rate returns: State object """ s, i, q, r = state infected = system.beta * i * s recovered = system.gamma * i quarantine_in = system.quarantine_rate * i quarantine_out = q * system.quarantine_rate # The quarantine rate is the fraction of infected people moved into (and of quarantined people moved out of) quarantine at each step s -= infected i += infected - recovered - quarantine_in q += quarantine_in - quarantine_out r += recovered + quarantine_out return State(S=s, I=i, Q=q, R=r) def run_simulation(system, update_func): """Runs a simulation of the system. system: System object update_func: function that updates state returns: TimeFrame """ frame = TimeFrame(columns=system.init.index) frame.row[system.t0] = system.init for t in linrange(system.t0, system.t_end): frame.row[t+1] = update_func(frame.row[t], t, system) return frame def calc_total_infected(results): """Fraction of population infected during the simulation.
results: DataFrame with columns S, I, Q, R returns: fraction of population """ return get_first_value(results.S) - get_last_value(results.S) beta = 0.333 gamma = 0.25 quarantine_rate = 0.2 system = make_system(beta, gamma, quarantine_rate) results = run_simulation(system, update_func) print(beta, gamma, calc_total_infected(results)) # -
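For readers without the `modsim` library, the update rules can be checked with a plain-Python sketch; `sirq_step` mirrors the flows in `update_func` (same parameters as above), and because every flow out of one compartment flows into another, the total population is conserved at each step:

```python
# Minimal self-contained sketch of one SIRQ update step (no modsim helpers).
def sirq_step(s, i, q, r, beta, gamma, quarantine_rate):
    infected = beta * i * s
    recovered = gamma * i
    q_in = quarantine_rate * i
    q_out = quarantine_rate * q
    return (s - infected,
            i + infected - recovered - q_in,
            q + q_in - q_out,
            r + recovered + q_out)

def run(beta=0.333, gamma=0.25, quarantine_rate=0.2, steps=7 * 14):
    # Normalized initial state: 89 susceptible, 1 infected, out of 90.
    s, i, q, r = 89 / 90, 1 / 90, 0.0, 0.0
    s0 = s
    for _ in range(steps):
        s, i, q, r = sirq_step(s, i, q, r, beta, gamma, quarantine_rate)
    return s0 - s  # total fraction infected, as in calc_total_infected

print(round(run(), 4))
```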
code/chap_18_mine.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # %matplotlib inline from keras.models import Sequential, Model from keras.layers import Dropout, Reshape, Activation, Conv2D, Input, MaxPooling2D, BatchNormalization, Flatten, Dense, Lambda from keras.layers.advanced_activations import LeakyReLU from keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard from keras.optimizers import SGD, Adam, RMSprop from keras.layers.merge import concatenate import matplotlib.pyplot as plt import keras.backend as K import tensorflow as tf from tqdm import tqdm import numpy as np import pickle import os, cv2 from utils import WeightReader, decode_netout, draw_boxes from object_detector import ObjectDetector from PIL import Image from io import BytesIO, open import base64 import pandas as pd from IPython.display import HTML # ## Initialize # + from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession from functools import partial sparkConf = SparkConf() sparkConf.setMaster("local[8]") sparkConf.setAppName("ros_hadoop") sparkConf.set("spark.jars", "../lib/protobuf-java-3.3.0.jar,../lib/rosbaginputformat.jar,../lib/scala-library-2.11.8.jar") spark = SparkSession.builder.config(conf=sparkConf).getOrCreate() sc = spark.sparkContext # - # %run -i ../src/main/python/functions.py fin = sc.newAPIHadoopFile( path = "hdfs://127.0.0.1:9000/user/root/HMB_4.bag", inputFormatClass = "de.valtech.foss.RosbagMapInputFormat", keyClass = "org.apache.hadoop.io.LongWritable", valueClass = "org.apache.hadoop.io.MapWritable", conf = {"RosbagInputFormat.chunkIdx":"/opt/ros_hadoop/master/dist/HMB_4.bag.idx.bin"}) conn_a = fin.filter(lambda r: r[1]['header']['op'] == 7).map(lambda r: r[1]).collect() conn_d = {str(k['header']['topic']):k for k in conn_a} # ## Feature Engineering # + imageDF = fin.flatMap( 
partial(msg_map, func=lambda r: (r.header.stamp.secs, r.data), conn=conn_d['/center_camera/image_color/compressed']) ).toDF().toPandas() steeringAngleDF = fin.flatMap(partial(msg_map, func=lambda r: (r.header.stamp.secs, r.steering_wheel_angle), conn=conn_d['/vehicle/steering_report'])).toDF().toPandas() linAccDF = fin.flatMap(partial(msg_map, func=lambda r: (r.header.stamp.secs, r.linear_acceleration.x, r.linear_acceleration.y, r.linear_acceleration.z), conn=conn_d['/vehicle/imu/data_raw'])).toDF().toPandas() velocityDF = fin.flatMap(partial(msg_map, func=lambda r: (r.header.stamp.secs, r.velocity[0]), conn=conn_d['/vehicle/joint_states'])).toDF().toPandas() # - imageDF.columns = ['timestamp','image'] steeringAngleDF.columns = ['timestamp','steering_angle'] linAccDF.columns = ['timestamp', 'linear_accelaration_x','linear_accelaration_y','linear_accelaration_z'] velocityDF.columns = ['timestamp','velocity'] linAccDF.drop_duplicates(subset=('timestamp'), keep='first',inplace=True) steeringAngleDF.drop_duplicates(subset=('timestamp'), keep='first',inplace=True) velocityDF.drop_duplicates(subset=('timestamp'), keep='first',inplace=True) cdf = imageDF.merge(linAccDF, on='timestamp', how='left') cdf = cdf.merge(steeringAngleDF, on='timestamp', how='left') cdf = cdf.merge(velocityDF, on='timestamp', how='left') # ##### Annotating images with info def add_text(img, steering_angle, x, y, z, velocity, idx): font = cv2.FONT_HERSHEY_SIMPLEX pos1 = (10,30) pos2 = (10,55) pos3 = (10,80) pos4 = (10,105) pos5 = (10,130) fontScale = 0.7 redColor = (0,0,255) blueColor = (255,0,0) lineType = 2 cv2.putText(img,'Steering angle: '+str(steering_angle), pos1, font, fontScale, redColor, lineType) cv2.putText(img,'Velocity: '+str(velocity), pos5, font, fontScale, redColor, lineType) cv2.putText(img,'Linear Acc X: '+str(x), pos2, font, fontScale, blueColor, lineType) cv2.putText(img,'Linear Acc Y: '+str(y), pos3, font, fontScale, blueColor, lineType) cv2.putText(img,'Linear Acc Z: '+str(z), 
pos4, font, fontScale, blueColor, lineType) cv2.imwrite('./annotated-frames/{0:05d}.png'.format(idx), img) images = np.array([np.array(Image.open(BytesIO(k))) for k in cdf.image]) # + active="" # ''' # idx = 0 # for im, sa, x, y, z, v in zip(images, cdf.steering_angle, # cdf.linear_accelaration_x, cdf.linear_accelaration_y, cdf.linear_accelaration_z, # cdf.velocity): # add_text(im.copy(), round(sa,4), round(x,2), round(y,2), round(z,2), round(v,4), idx) # idx += 1 # ''' # + active="" # !ffmpeg -hide_banner -loglevel panic -r 30 -pattern_type glob -i './annotated-frames/*.png' -c:v libx264 drive-stats.mp4 # - HTML(""" <video width="860" height="540" controls> <source src="{0}"> </video> """.format('drive-stats.mp4')) # ## Training a deep learning model : Predict Steering Angle class SteeringAngleModel: @staticmethod def build(): model = Sequential() model.add(Conv2D(8, (3, 3), activation='relu', input_shape=(480,640,3))) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Conv2D(16, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Conv2D(32, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dense(256, activation='linear')) model.add(Dropout(0.2)) model.add(Dense(1)) return model model = SteeringAngleModel.build() model.compile(optimizer='adam',loss='mse') X_train, X_test, y_train, y_test = images[:1500], images[1501:len(images)], np.array(cdf.steering_angle[:1500]), np.array(cdf.steering_angle[1501:len(images)]) model.fit(X_train, y_train, epochs=5, batch_size=1) model.evaluate(X_test, y_test) # ## Applying pre-trained model : Object Detection with YOLOv2 # We use the pre-trained model and weights from YOLO: Real-Time Object Detection website. 
Download the weights from the following website: # https://pjreddie.com/darknet/yolo/ # # Most of the model code has been referenced from the YOLOv2 in keras github repo # Ref: https://github.com/experiencor/keras-yolo2 wt_path = 'yolov2.weights' objDetector = ObjectDetector(wt_path) image = images[0] annotatedImage = objDetector.detect_obj(image.copy()) plt.figure(figsize=(10,10)) plt.imshow(annotatedImage) plt.show() # + active="" # ''' # idx = 0 # for image in images: # annotatedImage = objDetector.detect_obj(image.copy()) # cv2.imwrite('./obj-detection-frames/{0:05d}.png'.format(idx), annotatedImage) # idx += 1 # ''' # + active="" # !ffmpeg -hide_banner -loglevel panic -r 30 -pattern_type glob -i './obj-detection-frames/*.png' -c:v libx264 drive-obj-detect.mp4 # - HTML(""" <video width="860" height="540" controls> <source src="{0}"> </video> """.format('./drive-obj-detect.mp4'))
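The timestamp alignment in the feature-engineering section (deduplicate each sensor stream by timestamp, then left-join onto the image timestamps) can be sketched in plain Python with hypothetical toy data:

```python
def dedupe_first(rows):
    """Keep the first row per timestamp, like drop_duplicates(keep='first')."""
    seen, out = set(), []
    for ts, val in rows:
        if ts not in seen:
            seen.add(ts)
            out.append((ts, val))
    return out

def left_join(left, right):
    """Left-join a (ts, value) stream onto (ts, image) rows, like merge(how='left')."""
    lookup = dict(right)
    return [(ts, img, lookup.get(ts)) for ts, img in left]

images = [(0, "img0"), (1, "img1"), (2, "img2")]
speeds = dedupe_first([(0, 3.1), (0, 3.2), (2, 4.0)])
print(left_join(images, speeds))  # [(0, 'img0', 3.1), (1, 'img1', None), (2, 'img2', 4.0)]
```

Missing timestamps come back as `None`, the analogue of the `NaN`s a pandas left merge produces.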
examples/sample-use-cases.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # LEARNING # # This notebook serves as supporting material for topics covered in **Chapter 18 - Learning from Examples** , **Chapter 19 - Knowledge in Learning**, **Chapter 20 - Learning Probabilistic Models** from the book *Artificial Intelligence: A Modern Approach*. This notebook uses implementations from [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py). Let's start by importing everything from the module: from learning import * from notebook import * # ## CONTENTS # # * Machine Learning Overview # * Datasets # * Iris Visualization # * Distance Functions # * Plurality Learner # * k-Nearest Neighbours # * Decision Tree Learner # * Random Forest Learner # * Naive Bayes Learner # * Perceptron # * Learner Evaluation # ## MACHINE LEARNING OVERVIEW # # In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences. # # An agent is **learning** if it improves its performance on future tasks after making observations about the world. # # There are three types of feedback that determine the three main types of learning: # # * **Supervised Learning**: # # In Supervised Learning the agent observes some example input-output pairs and learns a function that maps from input to output. # # **Example**: Let's think of an agent to classify images containing cats or dogs. If we provide an image containing a cat or a dog, this agent should output a string "cat" or "dog" for that particular image. To teach this agent, we will give a lot of input-output pairs like {cat image-"cat"}, {dog image-"dog"} to the agent. The agent then learns a function that maps from an input image to one of those strings. 
# # * **Unsupervised Learning**: # # In Unsupervised Learning the agent learns patterns in the input even though no explicit feedback is supplied. The most common type is **clustering**: detecting potentially useful clusters of input examples. # # **Example**: A taxi agent would develop a concept of *good traffic days* and *bad traffic days* without ever being given labeled examples. # # * **Reinforcement Learning**: # # In Reinforcement Learning the agent learns from a series of reinforcements: rewards or punishments. # # **Example**: Let's talk about an agent that plays the popular Atari game [Pong](http://www.ponggame.org). We will reward the agent a point for every correct move and deduct a point for every wrong move. Eventually, the agent will figure out which of its actions prior to a reinforcement were most responsible for it. # ## DATASETS # # For the following tutorials we will use a range of datasets, to better showcase the strengths and weaknesses of the algorithms. The datasets are the following: # # * [Fisher's Iris](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/iris.csv): Each item represents a flower, with four measurements: the length and the width of the sepals and petals. Each item/flower is categorized into one of three species: Setosa, Versicolor and Virginica. # # * [Zoo](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/zoo.csv): The dataset holds different animals and their classification as "mammal", "fish", etc. The new animal we want to classify has the following measurements: 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1 (don't concern yourself with what the measurements mean). # To make using the datasets easier, we have written a class, `DataSet`, in `learning.py`. The tutorials found here make use of this class. # # Let's have a look at how it works before we get started with the algorithms.
# ### Intro # # A lot of the datasets we will work with are .csv files (although other formats are supported too). We have a collection of sample datasets ready to use [on aima-data](https://github.com/aimacode/aima-data/tree/a21fc108f52ad551344e947b0eb97df82f8d2b2b). Two examples are the datasets mentioned above (*iris.csv* and *zoo.csv*). You can find plenty of datasets online, and a good repository of such datasets is [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets.html). # # In such files, each line corresponds to one item/measurement. Each individual value in a line represents a *feature* and usually there is a value denoting the *class* of the item. # # You can find the code for the dataset here: # %psource DataSet # ### Class Attributes # # * **examples**: Holds the items of the dataset. Each item is a list of values. # # * **attrs**: The indexes of the features (by default in the range of [0,f), where *f* is the number of features). For example, `item[i]` returns the feature at index *i* of *item*. # # * **attrnames**: An optional list with attribute names. For example, `item[s]`, where *s* is a feature name, returns the feature of name *s* in *item*. # # * **target**: The attribute a learning algorithm will try to predict. By default the last attribute. # # * **inputs**: This is the list of attributes without the target. # # * **values**: A list of lists which holds the set of possible values for the corresponding attribute/feature. If initially `None`, it gets computed (by the function `setproblem`) from the examples. # # * **distance**: The distance function used in the learner to calculate the distance between two items. By default `mean_boolean_error`. # # * **name**: Name of the dataset. # # * **source**: The source of the dataset (url or other). Not used in the code. # # * **exclude**: A list of indexes to exclude from `inputs`. The list can include either attribute indexes (attrs) or names (attrnames).
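A hypothetical two-row illustration of how `attrs`, `target` and `inputs` relate (not the aima `DataSet` class itself, just the indexing convention it documents):

```python
# Two toy iris-style examples: four features followed by the class label.
examples = [[5.1, 3.5, 1.4, 0.2, "setosa"],
            [7.0, 3.2, 4.7, 1.4, "versicolor"]]

attrs = list(range(len(examples[0])))    # [0, 1, 2, 3, 4]
target = attrs[-1]                       # the last attribute, by default
inputs = [a for a in attrs if a != target]

print(inputs)                # [0, 1, 2, 3]
print(examples[0][target])   # setosa
```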
# ### Class Helper Functions # # These functions help modify a `DataSet` object to your needs. # # * **sanitize**: Takes as input an example and returns it with non-input (target) attributes replaced by `None`. Useful for testing. Keep in mind that the example given is not itself sanitized, but instead a sanitized copy is returned. # # * **classes_to_numbers**: Maps the class names of a dataset to numbers. If the class names are not given, they are computed from the dataset values. Useful for classifiers that return a numerical value instead of a string. # # * **remove_examples**: Removes examples containing a given value. Useful for removing examples with missing values, or for removing classes (needed for binary classifiers). # ### Importing a Dataset # # #### Importing from aima-data # # Datasets uploaded on aima-data can be imported with the following line: iris = DataSet(name="iris") # To check that we imported the correct dataset, we can do the following: print(iris.examples[0]) print(iris.inputs) # Which correctly prints the first line in the csv file and the list of attribute indexes. # When importing a dataset, we can specify to exclude an attribute (for example, at index 1) by setting the parameter `exclude` to the attribute index or name. iris2 = DataSet(name="iris",exclude=[1]) print(iris2.inputs) # ### Attributes # # Here we showcase the attributes. # # First we will print the first three items/examples in the dataset. print(iris.examples[:3]) # Then we will print `attrs`, `attrnames`, `target`, `input`. Notice how `attrs` holds values in [0,4], but since the fourth attribute is the target, `inputs` holds values in [0,3]. print("attrs:", iris.attrs) print("attrnames (by default same as attrs):", iris.attrnames) print("target:", iris.target) print("inputs:", iris.inputs) # Now we will print all the possible values for the first feature/attribute. print(iris.values[0]) # Finally we will print the dataset's name and source. 
Keep in mind that we have not set a source for the dataset, so in this case it is empty. print("name:", iris.name) print("source:", iris.source) # A useful combination of the above is `dataset.values[dataset.target]` which returns the possible values of the target. For classification problems, this will return all the possible classes. Let's try it: print(iris.values[iris.target]) # ### Helper Functions # We will now take a look at the auxiliary functions found in the class. # # First we will take a look at the `sanitize` function, which sets the non-input values of the given example to `None`. # # In this case we want to hide the class of the first example, so we will sanitize it. # # Note that the function doesn't actually change the given example; it returns a sanitized *copy* of it. print("Sanitized:",iris.sanitize(iris.examples[0])) print("Original:",iris.examples[0]) # Currently the `iris` dataset has three classes, setosa, virginica and versicolor. However, we want to convert it to a binary class dataset (a dataset with two classes). The class we want to remove is "virginica". To accomplish that we will utilize the helper function `remove_examples`. # + iris2 = DataSet(name="iris") iris2.remove_examples("virginica") print(iris2.values[iris2.target]) # - # We also have `classes_to_numbers`. For a lot of the classifiers in the module (like the Neural Network), classes should have numerical values. With this function we map string class names to numbers. print("Class of first example:",iris2.examples[0][iris2.target]) iris2.classes_to_numbers() print("Class of first example:",iris2.examples[0][iris2.target]) # As you can see "setosa" was mapped to 0. # Finally, we take a look at `find_means_and_deviations`. It finds the means and standard deviations of the features for each class.
# + means, deviations = iris.find_means_and_deviations() print("Setosa feature means:", means["setosa"]) print("Versicolor mean for first feature:", means["versicolor"][0]) print("Setosa feature deviations:", deviations["setosa"]) print("Virginica deviation for second feature:",deviations["virginica"][1]) # - # ## IRIS VISUALIZATION # # Since we will use the iris dataset extensively in this notebook, below we provide a visualization tool that helps in comprehending the dataset and thus how the algorithms work. # # We plot the dataset in a 3D space using `matplotlib` and the function `show_iris` from `notebook.py`. The function takes as input three parameters, *i*, *j* and *k*, which are indices into the iris features, "Sepal Length", "Sepal Width", "Petal Length" and "Petal Width" (0 to 3). By default we show the first three features. # + iris = DataSet(name="iris") show_iris() show_iris(0, 1, 3) show_iris(1, 2, 3) # - # You can play around with the values to get a good look at the dataset. # ## DISTANCE FUNCTIONS # # In a lot of algorithms (like the *k-Nearest Neighbors* algorithm), there is a need to compare items, finding how *similar* or *close* they are. For that we have many different functions at our disposal. Below are the functions implemented in the module: # # ### Manhattan Distance (`manhattan_distance`) # # One of the simplest distance functions. It calculates the difference between the coordinates/features of two items. To understand how it works, imagine a 2D grid with coordinates *x* and *y*. In that grid we have two items, at the squares positioned at `(1,2)` and `(3,4)`. The difference between their two coordinates is `3-1=2` and `4-2=2`. If we sum these up we get `4`. That means to get from `(1,2)` to `(3,4)` we need four moves; two to the right and two more up. The function works similarly for n-dimensional grids.
# + def manhattan_distance(X, Y): return sum([abs(x - y) for x, y in zip(X, Y)]) distance = manhattan_distance([1,2], [3,4]) print("Manhattan Distance between (1,2) and (3,4) is", distance) # - # ### Euclidean Distance (`euclidean_distance`) # # Probably the most popular distance function. It returns the square root of the sum of the squared differences between individual elements of two items. # + def euclidean_distance(X, Y): return math.sqrt(sum([(x - y)**2 for x, y in zip(X,Y)])) distance = euclidean_distance([1,2], [3,4]) print("Euclidean Distance between (1,2) and (3,4) is", distance) # - # ### Hamming Distance (`hamming_distance`) # # This function counts the number of differences between single elements in two items. For example, if we have two binary strings "111" and "011" the function will return 1, since the two strings only differ at the first element. The function works the same way for non-binary strings too. # + def hamming_distance(X, Y): return sum(x != y for x, y in zip(X, Y)) distance = hamming_distance(['a','b','c'], ['a','b','b']) print("Hamming Distance between 'abc' and 'abb' is", distance) # - # ### Mean Boolean Error (`mean_boolean_error`) # # To calculate this distance, we find the ratio of different elements over all elements of two items. For example, if the two items are `(1,2,3)` and `(1,4,5)`, the ratio of different to all elements is 2/3, since they differ in two out of three elements. # + def mean_boolean_error(X, Y): return mean(int(x != y) for x, y in zip(X, Y)) distance = mean_boolean_error([1,2,3], [1,4,5]) print("Mean Boolean Error Distance between (1,2,3) and (1,4,5) is", distance) # - # ### Mean Error (`mean_error`) # # This function finds the mean difference of single elements between two items. For example, if the two items are `(1,0,5)` and `(3,10,5)`, their error distance is `(3-1) + (10-0) + (5-5) = 2 + 10 + 0 = 12`. The mean error distance therefore is `12/3=4`.
# + def mean_error(X, Y): return mean([abs(x - y) for x, y in zip(X, Y)]) distance = mean_error([1,0,5], [3,10,5]) print("Mean Error Distance between (1,0,5) and (3,10,5) is", distance) # - # ### Mean Square Error (`ms_error`) # # This is very similar to the `Mean Error`, but instead of calculating the difference between elements, we are calculating the *square* of the differences. # + def ms_error(X, Y): return mean([(x - y)**2 for x, y in zip(X, Y)]) distance = ms_error([1,0,5], [3,10,5]) print("Mean Square Distance between (1,0,5) and (3,10,5) is", distance) # - # ### Root of Mean Square Error (`rms_error`) # # This is the square root of `Mean Square Error`. # + def rms_error(X, Y): return math.sqrt(ms_error(X, Y)) distance = rms_error([1,0,5], [3,10,5]) print("Root of Mean Square Error Distance between (1,0,5) and (3,10,5) is", distance) # - # ## PLURALITY LEARNER CLASSIFIER # # ### Overview # # The Plurality Learner is a simple algorithm, used mainly as a baseline comparison for other algorithms. It finds the most popular class in the dataset and classifies any subsequent item to that class. Essentially, it classifies every new item to the same class. For that reason, it is not used very often, instead opting for more complicated algorithms when we want accurate classification. # # ![pL plot](images/pluralityLearner_plot.png) # # Let's see how the classifier works with the plot above. There are three classes: **Class A** (orange-colored dots), **Class B** (blue-colored dots) and **Class C** (green-colored dots). Every point in this plot has two **features** (i.e. X<sub>1</sub>, X<sub>2</sub>). Now, let's say we have a new point, a red star, and we want to know which class this red star belongs to. Solving this problem by predicting the class of this new red star is our current classification problem. # # The Plurality Learner will find the class most represented in the plot. ***Class A*** has four items, ***Class B*** has three and ***Class C*** has seven.
The most popular class is ***Class C***. Therefore, the item will get classified in ***Class C***, despite the fact that it is closer to the other two classes. # ### Implementation # # Below follows the implementation of the PluralityLearner algorithm: psource(PluralityLearner) # It takes as input a dataset and returns a function. We can later call this function with the item we want to classify as the argument and it returns the class it should be classified in. # # The function first finds the most popular class in the dataset and then each time we call its "predict" function, it returns it. Note that the input ("example") does not matter. The function always returns the same class. # ### Example # # For this example, we will not use the Iris dataset, since each class is represented the same. This will throw an error. Instead we will use the zoo dataset. # + zoo = DataSet(name="zoo") pL = PluralityLearner(zoo) print(pL([1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1])) # - # The output for the above code is "mammal", since that is the most popular and common class in the dataset. # ## K-NEAREST NEIGHBOURS CLASSIFIER # # ### Overview # The k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are going to use this to classify Iris flowers. More about kNN on [Scholarpedia](http://www.scholarpedia.org/article/K-nearest_neighbor). # # ![kNN plot](images/knn_plot.png) # Let's see how kNN works with a simple plot shown in the above picture. # # We have co-ordinates (we call them **features** in Machine Learning) of this red star and we need to predict its class using the kNN algorithm. In this algorithm, the value of **k** is arbitrary. **k** is one of the **hyper parameters** for kNN algorithm. We choose this number based on our dataset and choosing a particular number is known as **hyper parameter tuning/optimising**. We learn more about this in coming topics. # # Let's put **k = 3**. 
# This means we need to find the 3 nearest neighbours of the red star and classify the new point into their majority class. Observe the smaller circle, which contains three points other than the **test point** (the red star). Since two of them are violet and form the majority, we predict the class of the red star as violet, i.e. **Class B**.
#
# Similarly, if we put **k = 5**, you can observe that three of the five neighbours are yellow and form the majority, so we classify our test point as yellow, i.e. **Class A**.
#
# In practical tasks we iterate through a range of values for k (such as [1, 3, 5, 10, 20, 50, 100]), see how each performs, and select the best one.

# ### Implementation
#
# Below follows the implementation of the kNN algorithm:

psource(NearestNeighborLearner)

# It takes as input a dataset and k (with a default value of 1) and returns a function, which we can later use to classify a new item.
#
# To accomplish that, the function uses a heap-queue, in which the items of the dataset are sorted according to their distance from *example* (the item to classify). We then take the k smallest elements from the heap-queue, find the majority class among them, and classify the item into that class.

# ### Example
#
# We measured a new flower with the following values: 5.1, 3.0, 1.1, 0.1. We want to classify that item/flower into a class. To do that, we write the following:

# +
iris = DataSet(name="iris")
kNN = NearestNeighborLearner(iris, k=3)
print(kNN([5.1, 3.0, 1.1, 0.1]))
# -

# The output of the above code is "setosa", which means the flower with the above measurements is of the "setosa" species.

# ## DECISION TREE LEARNER
#
# ### Overview
#
# #### Decision Trees
# A decision tree is a flowchart that uses a tree of decisions and their possible consequences for classification. At each non-leaf node of the tree an attribute of the input is tested, and the branch leading to the corresponding child node is selected based on the result. At a leaf node the input is classified according to the class label of that leaf.
# The paths from the root to the leaves represent the classification rules by which leaf nodes are assigned class labels.

# ![perceptron](images/decisiontree_fruit.jpg)

# #### Decision Tree Learning
# Decision tree learning is the construction of a decision tree from class-labeled training data. Each record of the data is expected to be a tuple whose entries are the attributes used for classification. The decision tree is built top-down by choosing, at each step, the variable that best splits the set of items. There are different metrics for measuring the "best split"; they generally measure the homogeneity of the target variable within the resulting subsets.
#
# #### Gini Impurity
# The Gini impurity of a set is the probability that a randomly chosen element would be incorrectly labeled if it were labeled randomly according to the distribution of labels in the set.
#
# $$I_G(p) = \sum{p_i(1 - p_i)} = 1 - \sum{p_i^2}$$
#
# We select the split that minimizes the Gini impurity in the child nodes.
#
# #### Information Gain
# Information gain is based on the concept of entropy from information theory. Entropy is defined as:
#
# $$H(p) = -\sum{p_i \log_2{p_i}}$$
#
# Information gain is the difference between the entropy of the parent and the weighted sum of the entropies of the children. The feature used for splitting is the one that provides the most information gain.
#
# #### Pseudocode
#
# You can view the pseudocode by running the cell below:

pseudocode("Decision Tree Learning")

# ### Implementation
# The nodes of the tree constructed by our learning algorithm are stored as either `DecisionFork` or `DecisionLeaf`, depending on whether they are internal nodes or leaf nodes respectively.

psource(DecisionFork)

# `DecisionFork` holds the attribute that is tested at that node, and a dict of branches. The branches store the child nodes, one for each of the attribute's values.
# Calling an object of this class as a function, with an input tuple as the argument, returns the next node in the classification path based on the result of the attribute test.

psource(DecisionLeaf)

# The leaf node stores the class label in `result`. All input tuples' classification paths end on a `DecisionLeaf`, whose `result` attribute decides their class.

psource(DecisionTreeLearner)

# The implementation of `DecisionTreeLearner` provided in [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py) uses information gain as the metric for selecting which attribute to split on. The function builds the tree top-down in a recursive manner. Based on the input, it makes one of four choices:
# <ol>
# <li>If the input at the current step has no training data, it returns the mode of the classes of the input data received in the parent step (the previous level of recursion).</li>
# <li>If all values in the training data belong to the same class, it returns a `DecisionLeaf` whose class label is that class.</li>
# <li>If the data has no attributes left to test, it returns the class with the highest plurality value in the training data.</li>
# <li>Otherwise, it chooses the attribute with the highest information gain and returns a `DecisionFork` that splits on this attribute. Each branch recursively calls `decision_tree_learning` to construct its sub-tree.</li>
# </ol>

# ### Example
#
# We will now use the Decision Tree Learner to classify a sample with values: 5.1, 3.0, 1.1, 0.1.

# +
iris = DataSet(name="iris")
DTL = DecisionTreeLearner(iris)
print(DTL([5.1, 3.0, 1.1, 0.1]))
# -

# As expected, the Decision Tree Learner classifies the sample as "setosa", as seen in the previous section.
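# The entropy and information-gain calculations described in the Overview are easy to sketch on their own. The snippet below is a standalone illustration (not the `learning.py` implementation), computing both quantities from plain lists of class labels:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(labels).values())

def information_gain(parent_labels, child_label_lists):
    """Entropy of the parent minus the weighted entropies of the children."""
    total = len(parent_labels)
    remainder = sum((len(child) / total) * entropy(child)
                    for child in child_label_lists)
    return entropy(parent_labels) - remainder

# A balanced binary parent split into two pure children gains one full bit:
parent = ['A', 'A', 'B', 'B']
print(information_gain(parent, [['A', 'A'], ['B', 'B']]))  # 1.0
```

# A split that leaves the children as mixed as the parent would score a gain of 0, which is why the learner prefers the attribute with the highest gain.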
# ## RANDOM FOREST LEARNER
#
# ### Overview
#
# ![random_forest.png](images/random_forest.png)
# Image via [src](https://cdn-images-1.medium.com/max/800/0*tG-IWcxL1jg7RkT0.png)
#
# #### Random Forest
#
# As the name of the algorithm and the image above suggest, this algorithm builds a forest out of a number of decision trees: generally, the more trees in the forest, the more robust it is and the higher its accuracy. The main difference between a Random Forest and individual decision trees is that the choice of the root node and of the features to split on is randomized.
#
# Let's see how the Random Forest algorithm works. It operates in two phases: first the forest is created, and then predictions are made. Creation works as follows: randomly select 'm' features out of the total 'n' features; from these 'm' features, compute a node d using the best split point, and then keep splitting the nodes using the best splits until 'i' nodes have been reached. Repeat this whole process to build each tree in the forest.
#
# Prediction then works as follows: take the test features and predict the outcome with each randomly created decision tree, tally the votes for each prediction, and return the prediction with the most votes as the final answer.
#
# ### Implementation
#
# Below is the implementation of the Random Forest algorithm:

psource(RandomForest)

# This algorithm creates an ensemble of decision trees using bagging and feature bagging. It takes 'm' examples randomly from the total number of examples, and then performs feature bagging, retaining each attribute with probability p. Each predictor is trained with `DecisionTreeLearner`, and their individual predictions are combined into a final prediction.
#
# ### Example
#
# We will now use the Random Forest to classify a sample with values: 5.1, 3.0, 1.1, 0.1.
# +
iris = DataSet(name="iris")
DTL = RandomForest(iris)
print(DTL([5.1, 3.0, 1.1, 0.1]))
# -

# As expected, the Random Forest classifies the sample as "setosa".

# ## NAIVE BAYES LEARNER
#
# ### Overview
#
# #### Theory of Probabilities
#
# The Naive Bayes algorithm is a probabilistic classifier that makes use of [Bayes' Theorem](https://en.wikipedia.org/wiki/Bayes%27_theorem). The theorem states that the conditional probability of **A** given **B** equals the conditional probability of **B** given **A** multiplied by the probability of **A**, divided by the probability of **B**:
#
# $$P(A|B) = \dfrac{P(B|A)*P(A)}{P(B)}$$
#
# From probability theory we also have the Multiplication Rule: if the events *X* are independent, the following is true:
#
# $$P(X_{1} \cap X_{2} \cap ... \cap X_{n}) = P(X_{1})*P(X_{2})*...*P(X_{n})$$
#
# For conditional probabilities this becomes:
#
# $$P(X_{1}, X_{2}, ..., X_{n}|Y) = P(X_{1}|Y)*P(X_{2}|Y)*...*P(X_{n}|Y)$$

# #### Classifying an Item
#
# How can we use the above to classify an item, though?
#
# We have a dataset with a set of classes (**C**) and we want to classify an item with a set of features (**F**). Essentially, we want to predict the class of an item given its features.
#
# For a specific class, **Class**, we will find the conditional probability given the item's features:
#
# $$P(Class|F) = \dfrac{P(F|Class)*P(Class)}{P(F)}$$
#
# We will do this for every class, and pick the maximum. That will be the class the item is classified into.
#
# The features, however, form a vector with many elements. We need to break the probabilities up using the Multiplication Rule. Thus the above equation becomes:
#
# $$P(Class|F) = \dfrac{P(Class)*P(F_{1}|Class)*P(F_{2}|Class)*...*P(F_{n}|Class)}{P(F_{1})*P(F_{2})*...*P(F_{n})}$$
#
# The calculation of the conditional probability then depends on the following:
#
# *a)* The probability of **Class** in the dataset.
#
# *b)* The conditional probability of each feature value occurring in an item classified in **Class**.
#
# *c)* The probabilities of each individual feature.
#
# For *a)*, we will count how many times **Class** occurs in the dataset (that is, how many items are classified in that particular class).
#
# For *b)*, if the feature values are discrete ('Blue', '3', 'Tall', etc.), we will count how many times each feature value occurs in items of each class. If the feature values are not discrete, we will go a different route: we will use a distribution function to calculate the probability of values for a given class and feature. If we know the distribution function of the dataset, great, we will use it to compute the probabilities. If we don't know the function, we can assume the dataset follows a normal (Gaussian) distribution without much loss of accuracy; by the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem), sums and averages of many independent contributions tend towards a Gaussian as their number grows.
#
# *NOTE:* If the values are continuous but we use the discrete approach, there can be issues if we are not lucky. For one, if we have two values, '5.0' and '5.1', the discrete approach treats them as two completely different values, despite them being so close. Second, if we are trying to classify an item with a feature value of '5.15' and that value does not appear for the feature, its probability will be 0, which might lead to misclassification. Generally, the continuous approach is more accurate and more useful, despite the overhead of calculating the distribution function.
#
# The last one, *c)*, is tricky. If feature values are discrete, we can count how many times they occur in the dataset. But what if the feature values are continuous? Imagine a dataset with a height feature. Is it worth it to count how many times each value occurs?
# Most of the time it is not, since there can be minuscule differences between the values (for example, 1.7 meters and 1.700001 meters are practically equal, but they count as different values).
#
# So, since we cannot calculate the feature value probabilities, what are we going to do?
#
# Let's take a step back and rethink exactly what we are doing. We are essentially comparing the conditional probabilities of all the classes. For two classes, **A** and **B**, we want to know which one is greater:
#
# $$\dfrac{P(F|A)*P(A)}{P(F)} vs. \dfrac{P(F|B)*P(B)}{P(F)}$$
#
# Wait, **P(F)** is the same for both classes! In fact, it is the same for every combination of classes, because **P(F)** does not depend on the class at all.
#
# So, for *c)*, we actually don't need to calculate it at all.

# #### Wrapping It Up
#
# Classifying an item into a class then becomes a matter of calculating the conditional probabilities of feature values and the probabilities of classes. This is something very desirable and computationally delicious.
#
# Remember, though, that all of the above holds because we made the assumption that the features are independent. In most real-world cases that is not true. Is that an issue here? Fret not, for the algorithm is quite effective even with that assumption. That is why the algorithm is called the **Naive** Bayes Classifier: we (naively) assume that the features are independent to make computation easier.

# ### Implementation
#
# The implementation of the Naive Bayes Classifier is split in two: *Learning* and *Simple*. The *learning* classifier takes as input a dataset and learns the needed distributions from it. It is itself split in two, for discrete and continuous features. The *simple* classifier takes as input not a dataset, but already-calculated distributions (a dictionary of `CountingProbDist` objects).
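# Putting *a)*, *b)* and *c)* together: the decision rule reduces to an argmax over classes of the class prior multiplied by the per-feature likelihoods, with P(F) dropped. The toy sketch below uses made-up 'spam'/'ham' probabilities purely for illustration; it is independent of the `learning.py` classes:

```python
# Hypothetical per-class priors and discrete likelihoods P(feature_value | class).
priors = {'spam': 0.4, 'ham': 0.6}
likelihoods = {
    'spam': {'offer': 0.30, 'meeting': 0.05},
    'ham':  {'offer': 0.02, 'meeting': 0.20},
}

def naive_bayes_predict(features):
    """Return the class maximising P(Class) * prod of P(f | Class).

    P(F) is dropped, since it is the same for every class."""
    def score(cls):
        prob = priors[cls]
        for f in features:
            prob *= likelihoods[cls].get(f, 1e-6)  # tiny floor for unseen values
        return prob
    return max(priors, key=score)

print(naive_bayes_predict(['offer']))               # spam
print(naive_bayes_predict(['meeting', 'meeting']))  # ham
```

# Note the small floor for unseen feature values: without it, a single zero likelihood would wipe out an otherwise likely class, which is exactly the misclassification hazard mentioned in the note above.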
# #### Discrete
#
# The implementation for discrete values counts how many times each feature value occurs for each class, and how many times each class occurs. The results are stored in `CountingProbDist` objects.
#
# With the code below you can see the probability of the class "setosa" appearing in the dataset, and the probability of the first feature (at index 0) of that class having a value of 5. Notice that the second probability is relatively small, even though, if we observe the dataset, we will find that a lot of values are around 5. The issue arises because the features in the Iris dataset are continuous and we are treating them as discrete. If the features were discrete (for example, "Tall", "3", etc.), this probably wouldn't have been the case and we would see a much nicer probability distribution.

# +
dataset = iris

target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)
attr_dists = {(gv, attr): CountingProbDist(dataset.values[attr])
              for gv in target_vals
              for attr in dataset.inputs}
for example in dataset.examples:
    targetval = example[dataset.target]
    target_dist.add(targetval)
    for attr in dataset.inputs:
        attr_dists[targetval, attr].add(example[attr])

print(target_dist['setosa'])
print(attr_dists['setosa', 0][5.0])
# -

# First we found the different values for the classes (called targets here) and calculated their distribution. Next we initialized a dictionary of `CountingProbDist` objects, one for each class and feature. Finally, we iterated through the examples in the dataset and calculated the needed probabilities.
#
# Having calculated the different probabilities, we will move on to the predicting function. It will receive an item as input and output the most likely class. Using the above formula, it will multiply the probability of the class appearing with the probability of each feature value appearing in that class, and return the class with the maximum result.
# +
def predict(example):
    def class_probability(targetval):
        return (target_dist[targetval] *
                product(attr_dists[targetval, attr][example[attr]]
                        for attr in dataset.inputs))

    return argmax(target_vals, key=class_probability)


print(predict([5, 3, 1, 0.1]))
# -

# You can view the complete code by executing the next line:

psource(NaiveBayesDiscrete)

# #### Continuous
#
# In this implementation we use the Gaussian/Normal distribution function. To make it work, we need to find the means and standard deviations of the features for each class. We make use of the `find_means_and_deviations` Dataset function. On top of that, we also calculate the class probabilities, as we did with the Discrete approach.

# +
means, deviations = dataset.find_means_and_deviations()

target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)

print(means["setosa"])
print(deviations["versicolor"])
# -

# You can see the means of the features for the "setosa" class and the deviations for "versicolor".
#
# The prediction function works similarly to the Discrete algorithm: it multiplies the probability of the class occurring with the conditional probabilities of the feature values for that class.
#
# Since we are using the Gaussian distribution, we input the value of each feature into the Gaussian function, together with the mean and deviation of that feature. This returns the probability of the particular feature value for the given class. We repeat this for each class and pick the maximum.
# +
def predict(example):
    def class_probability(targetval):
        prob = target_dist[targetval]
        for attr in dataset.inputs:
            prob *= gaussian(means[targetval][attr], deviations[targetval][attr], example[attr])
        return prob

    return argmax(target_vals, key=class_probability)


print(predict([5, 3, 1, 0.1]))
# -

# The complete code of the continuous algorithm:

psource(NaiveBayesContinuous)

# #### Simple
#
# The simple classifier (chosen with the argument `simple`) does not learn from a dataset; instead it takes as input a dictionary of already-calculated `CountingProbDist` objects and returns a predictor function. The dictionary is in the following form: `(Class Name, Class Probability): CountingProbDist Object`.
#
# Each class has its own probability distribution. Given a list of features, the classifier calculates the probability of the input for each class and returns the maximum. The only pre-processing work is to create dictionaries for the distribution of classes (named `targets`) and attributes/features.
#
# The complete code for the simple classifier:

psource(NaiveBayesSimple)

# This classifier is useful when you have already calculated the distributions and need to predict future items.

# ### Examples
#
# We will now use the Naive Bayes Classifier (Discrete and Continuous) to classify items:

# +
nBD = NaiveBayesLearner(iris, continuous=False)
print("Discrete Classifier")
print(nBD([5, 3, 1, 0.1]))
print(nBD([6, 5, 3, 1.5]))
print(nBD([7, 3, 6.5, 2]))

nBC = NaiveBayesLearner(iris, continuous=True)
print("\nContinuous Classifier")
print(nBC([5, 3, 1, 0.1]))
print(nBC([6, 5, 3, 1.5]))
print(nBC([7, 3, 6.5, 2]))
# -

# Notice how the Discrete Classifier misclassified the second item, while the Continuous one had no problem.
#
# Let's now take a look at the simple classifier. First we will come up with a sample problem to solve. Say we are given three bags. Each bag contains three letters ('a', 'b' and 'c') in different quantities.
# We are given a string of letters and we are tasked with finding which bag the string came from.
#
# Since we know the probability distribution of the letters for each bag, we can use the Naive Bayes classifier to make our prediction.

bag1 = 'a'*50 + 'b'*30 + 'c'*15
dist1 = CountingProbDist(bag1)
bag2 = 'a'*30 + 'b'*45 + 'c'*20
dist2 = CountingProbDist(bag2)
bag3 = 'a'*20 + 'b'*20 + 'c'*35
dist3 = CountingProbDist(bag3)

# Now that we have the `CountingProbDist` objects for each bag/class, we will create the dictionary. We also assign each bag a prior probability of being picked (here 0.5, 0.3 and 0.2).

dist = {('First', 0.5): dist1, ('Second', 0.3): dist2, ('Third', 0.2): dist3}
nBS = NaiveBayesLearner(dist, simple=True)

# Now we can start making predictions:

print(nBS('aab'))        # We can handle strings
print(nBS(['b', 'b']))   # And lists!
print(nBS('ccbcc'))

# The results make intuitive sense. The first bag has a high number of 'a's, the second a high number of 'b's and the third a high number of 'c's. The classifier seems to confirm this intuition.
#
# Note that the simple classifier doesn't distinguish between discrete and continuous values; it just takes whatever it is given. Also, the `simple` option on `NaiveBayesLearner` overrides the `continuous` argument: `NaiveBayesLearner(d, simple=True, continuous=False)` just creates a simple classifier.

# ## PERCEPTRON CLASSIFIER
#
# ### Overview
#
# The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First it trains its weights on a dataset, and then it can classify a new item by running it through the network.
#
# Its input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has *n* synapses (one for every item feature), each with its own weight. Each node computes the dot product of the item features and its synapse weights.
# These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the resulting values and return its index.
#
# Note that in classification problems each node represents a class. The final classification is the class/node with the maximum output value.
#
# Below you can see a single node/neuron in the output layer. With *f* we denote the item features, with *w* the synapse weights; inside the node we have the dot product and the activation function, *g*.

# ![perceptron](images/perceptron.png)

# ### Implementation
#
# First, we train (calculate) the weights on a dataset, using the `BackPropagationLearner` function of `learning.py`. We then return a function, `predict`, which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the output layer. It then picks the greatest value and classifies the item into the corresponding class.

psource(PerceptronLearner)

# Note that the Perceptron is a one-layer neural network, without any hidden layers. So, in `BackPropagationLearner`, we will pass no hidden layers. From that function we get our network, which is just one layer, with the weights calculated.
#
# The `predict` function passes the input/example through the network, calculating the dot product of the input and the weights for each node, and returns the class with the maximum dot product.

# ### Example
#
# We will train the Perceptron on the iris dataset. Since `BackPropagationLearner` works with integer indices and not strings, we first need to convert the class names to integers. Then we will try to classify the item/flower with measurements of 5, 3, 1, 0.1.

# +
iris = DataSet(name="iris")
iris.classes_to_numbers()

perceptron = PerceptronLearner(iris)
print(perceptron([5, 3, 1, 0.1]))
# -

# The correct output is 0, which means the item belongs in the first class, "setosa".
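# The prediction step just described — one dot product per output node, followed by an argmax — can be sketched without any of the `learning.py` machinery. The weight vectors below are invented for illustration; they are not the weights `PerceptronLearner` actually learns:

```python
# Hypothetical trained weights: one weight vector per output class
# (3 classes x 4 features), made up purely for illustration.
weights = [
    [0.9, 0.7, -0.8, -0.9],   # class 0 ("setosa")
    [0.1, -0.2, 0.4, 0.3],    # class 1 ("versicolor")
    [-0.6, -0.5, 0.8, 0.9],   # class 2 ("virginica")
]

def perceptron_predict(features):
    """Dot each class's weight vector with the features; return the argmax index."""
    scores = [sum(w * f for w, f in zip(ws, features)) for ws in weights]
    return scores.index(max(scores))

print(perceptron_predict([5, 3, 1, 0.1]))  # 0, i.e. "setosa"
```

# Since the argmax over monotone activations is the same as the argmax over the raw dot products, the sigmoid can be omitted at prediction time in this sketch.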
# Note that the Perceptron algorithm is not perfect and may produce false classifications.

# ## LINEAR LEARNER
#
# ### Overview
#
# Linear regression is a linear model, i.e. a model that assumes a linear relationship between the input variables x and the single output variable y. More specifically, it assumes that y can be calculated from a linear combination of the input variables x. The linear learner is quite a simple model, since its representation is just a linear equation.
#
# The linear equation assigns one scalar factor, called a coefficient or weight, to each input value or column. One additional coefficient is also added, giving an additional degree of freedom; it is often called the intercept or the bias coefficient. For example: y = a*x1 + b*x2 + c.
#
# ### Implementation
#
# Below is the implementation of the Linear Learner:

psource(LinearLearner)

# This algorithm first assigns random weights to the input variables, then updates the weight of each variable based on the calculated error. Finally, the prediction is made with the updated weights.
#
# ### Example
#
# We will now use the Linear Learner to predict a sample with values: 5, 3, 1, 0.1.

# +
iris = DataSet(name="iris")
iris.classes_to_numbers()

linear_learner = LinearLearner(iris)
print(linear_learner([5, 3, 1, 0.1]))
# -

# ## ENSEMBLE LEARNER
#
# ### Overview
#
# Ensemble Learning improves the performance of our model by combining several learners. It improves the stability and predictive power of the model. Ensemble methods are meta-algorithms that combine several machine learning techniques into one predictive model in order to decrease variance or bias, or to improve predictions.
#
# ![ensemble_learner.jpg](images/ensemble_learner.jpg)
#
# Some commonly used Ensemble Learning techniques are:
#
# 1. Bagging : Bagging trains similar learners on small sample populations and then takes the mean of all the predictions. It helps us reduce variance error.
#
# 2.
# Boosting : Boosting is an iterative technique that adjusts the weight of an observation based on the previous classification. If an observation was classified incorrectly, boosting increases its weight, and vice versa. It helps us reduce bias error.
#
# 3. Stacking : This is a very interesting way of combining models. Here we use one learner to combine the outputs of different learners. It can decrease either bias or variance error, depending on the learners we use.
#
# ### Implementation
#
# Below is the implementation of the Ensemble Learner:

psource(EnsembleLearner)

# This algorithm takes a list of learning algorithms as input, has them vote, and returns the predicted result.

# ## LEARNER EVALUATION
#
# In this section we will evaluate and compare algorithm performance. The dataset we will use will again be the iris one.

iris = DataSet(name="iris")

# ### Naive Bayes
#
# First up we have the Naive Bayes algorithm. We will test how well the Discrete Naive Bayes works first, and then how the Continuous version fares.

# +
nBD = NaiveBayesLearner(iris, continuous=False)
print("Error ratio for Discrete:", err_ratio(nBD, iris))

nBC = NaiveBayesLearner(iris, continuous=True)
print("Error ratio for Continuous:", err_ratio(nBC, iris))
# -

# The error for the Naive Bayes algorithm is very, very low; close to 0. There is also very little difference between the discrete and continuous versions of the algorithm.

# ### k-Nearest Neighbors
#
# Now we will take a look at kNN for different values of *k*. Note that *k* should take odd values, to break any ties between two classes.
# +
kNN_1 = NearestNeighborLearner(iris, k=1)
kNN_3 = NearestNeighborLearner(iris, k=3)
kNN_5 = NearestNeighborLearner(iris, k=5)
kNN_7 = NearestNeighborLearner(iris, k=7)

print("Error ratio for k=1:", err_ratio(kNN_1, iris))
print("Error ratio for k=3:", err_ratio(kNN_3, iris))
print("Error ratio for k=5:", err_ratio(kNN_5, iris))
print("Error ratio for k=7:", err_ratio(kNN_7, iris))
# -

# Notice how the error grows as *k* increases. This is generally the case with datasets where the classes are spaced out, as in the iris dataset. If items from different classes were closer together, classification would be more difficult. Usually a value of 1, 3 or 5 for *k* suffices.
#
# Also note that since the training set is also the testing set, for *k* equal to 1 we get a perfect score, since the item we want to classify each time is already in the dataset and its closest neighbor is itself.

# ### Perceptron
#
# For the Perceptron, we first need to convert class names to integers. Let's see how it performs on the dataset.

# +
iris2 = DataSet(name="iris")
iris2.classes_to_numbers()

perceptron = PerceptronLearner(iris2)
print("Error ratio for Perceptron:", err_ratio(perceptron, iris2))
# -

# The Perceptron didn't fare very well, mainly because the dataset is not linearly separable. On simpler datasets the algorithm performs much better, but unfortunately such datasets are rare in real-life scenarios.

# ## AdaBoost
#
# ### Overview
#
# **AdaBoost** is an algorithm which uses **ensemble learning**. In ensemble learning, the hypotheses in the collection, or ensemble, vote on what the output should be, and the output with the majority of the votes is selected as the final answer.
#
# The AdaBoost algorithm, as described in the book, works with a **weighted training set** and **weak learners** (classifiers that have about 50%+epsilon accuracy, i.e. slightly better than random guessing). It manipulates the weights attached to the examples that are shown to it.
# Importance is given to the examples with higher weights.
#
# All the examples start with equal weights, and a hypothesis is generated using these examples. The weights of incorrectly classified examples are increased so that the next hypothesis can classify them correctly, while the weights of correctly classified examples are reduced. This process is repeated *K* times (where *K* is an input to the algorithm), so *K* hypotheses are generated.
#
# These *K* hypotheses are also assigned weights according to their performance on the weighted training set. The final ensemble hypothesis is the weighted-majority combination of these *K* hypotheses.
#
# The speciality of AdaBoost is that, by using weak learners and a sufficiently large *K*, a highly accurate classifier can be learned, irrespective of the complexity of the function being learned or the inexpressiveness of the hypothesis space.

# ### Implementation
#
# As seen in the previous section, the `PerceptronLearner` does not perform that well on the iris dataset. We'll use the perceptron as the learner for the AdaBoost algorithm and try to increase the accuracy.
#
# Let's first see what AdaBoost is exactly:

psource(AdaBoost)

# AdaBoost takes as inputs **L** and *K*, where **L** is the learner and *K* is the number of hypotheses to be generated. The learner **L** takes as inputs a dataset and the weights associated with the examples in the dataset. But the `PerceptronLearner` does not handle weights; it only takes a dataset as its input.
#
# To remedy that, we will give the `PerceptronLearner` a modified dataset as input, in which the examples are repeated according to the weights associated with them. Intuitively, this forces the learner to learn the same example again and again until it can classify it correctly.
#
# To convert `PerceptronLearner` so that it can also take weights as input, we will pass it through the **`WeightedLearner`** function.
psource(WeightedLearner)

# At each iteration, the `WeightedLearner` function calls the `PerceptronLearner` with a modified dataset containing the examples repeated according to their associated weights.

# ### Example
#
# We will pass the `PerceptronLearner` through the `WeightedLearner` function. Then we will create an `AdaboostLearner` classifier with the number of hypotheses, *K*, equal to 5.

WeightedPerceptron = WeightedLearner(PerceptronLearner)
AdaboostLearner = AdaBoost(WeightedPerceptron, 5)

# +
iris2 = DataSet(name="iris")
iris2.classes_to_numbers()

adaboost = AdaboostLearner(iris2)

adaboost([5, 3, 1, 0.1])
# -

# That is the correct answer. Let's check the error rate of AdaBoost with the perceptron:

print("Error ratio for adaboost: ", err_ratio(adaboost, iris2))

# It reduced the error rate considerably. Unlike the `PerceptronLearner`, `AdaBoost` was able to learn the complexity in the iris dataset.
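# The replication trick behind `WeightedLearner` can be sketched in a few lines. This is a simplified standalone illustration of the idea (the actual `learning.py` implementation may differ, e.g. in how it handles fractional counts):

```python
def weighted_replicate(examples, weights, n):
    """Return about n examples, each repeated in proportion to its weight."""
    total = sum(weights)
    result = []
    for example, weight in zip(examples, weights):
        result.extend([example] * int(round(n * weight / total)))
    return result

examples = ['e1', 'e2', 'e3']
weights = [0.5, 0.25, 0.25]
print(weighted_replicate(examples, weights, 8))
# ['e1', 'e1', 'e1', 'e1', 'e2', 'e2', 'e3', 'e3']
```

# A weight-unaware learner trained on the replicated dataset effectively sees the high-weight examples more often, which is exactly the pressure AdaBoost applies to the misclassified examples at each round.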
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] papermill={"duration": 0.039939, "end_time": "2020-10-23T23:23:00.219590", "exception": false, "start_time": "2020-10-23T23:23:00.179651", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # # RadarCOVID-Report # + [markdown] papermill={"duration": 0.035921, "end_time": "2020-10-23T23:23:00.291942", "exception": false, "start_time": "2020-10-23T23:23:00.256021", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ## Data Extraction # + papermill={"duration": 3.014313, "end_time": "2020-10-23T23:23:03.341956", "exception": false, "start_time": "2020-10-23T23:23:00.327643", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] import datetime import json import logging import os import shutil import tempfile import textwrap import uuid import matplotlib.ticker import numpy as np import pandas as pd import seaborn as sns # %matplotlib inline # + papermill={"duration": 0.049894, "end_time": "2020-10-23T23:23:03.436566", "exception": false, "start_time": "2020-10-23T23:23:03.386672", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] current_working_directory = os.environ.get("PWD") if current_working_directory: os.chdir(current_working_directory) sns.set() matplotlib.rcParams["figure.figsize"] = (15, 6) extraction_datetime = datetime.datetime.utcnow() extraction_date = extraction_datetime.strftime("%Y-%m-%d") extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1) extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d") extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H") # + [markdown] papermill={"duration": 0.038433, "end_time": "2020-10-23T23:23:03.513818", "exception": false, "start_time": "2020-10-23T23:23:03.475385", "status": 
# + [markdown]
# ### Constants

# +
from Modules.ExposureNotification import exposure_notification_io

spain_region_country_code = "ES"
germany_region_country_code = "DE"

default_backend_identifier = spain_region_country_code

efgs_supported_countries_backend_identifier = germany_region_country_code
efgs_supported_countries_backend_client = \
    exposure_notification_io.get_backend_client_with_identifier(
        backend_identifier=efgs_supported_countries_backend_identifier)
efgs_source_regions = efgs_supported_countries_backend_client.get_supported_countries()

if spain_region_country_code in efgs_source_regions:
    default_source_regions = ",".join(efgs_source_regions)
else:
    default_source_regions = spain_region_country_code

backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1

# + [markdown]
# ### Parameters

# +
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
    report_backend_identifier = environment_backend_identifier
else:
    report_backend_identifier = default_backend_identifier

environment_source_regions = os.environ.get("RADARCOVID_REPORT__SOURCE_REGIONS")
if environment_source_regions:
    report_source_regions = environment_source_regions
else:
    report_source_regions = default_source_regions
report_source_regions = report_source_regions.split(",")

dict(
    report_backend_identifier=report_backend_identifier,
    report_source_regions=report_source_regions,
)

# + [markdown]
# ### COVID-19 Cases

# +
confirmed_df = pd.read_csv("https://opendata.ecdc.europa.eu/covid19/casedistribution/csv/data.csv")
radar_covid_countries = set(report_source_regions)

confirmed_df = confirmed_df[["dateRep", "cases", "geoId"]]
confirmed_df["dateRep"] = pd.to_datetime(confirmed_df.dateRep, dayfirst=True).dt.strftime("%Y-%m-%d")
confirmed_df = confirmed_df[confirmed_df.geoId.isin(radar_covid_countries)]
confirmed_df = confirmed_df.groupby("dateRep").cases.sum().reset_index()
confirmed_df.sort_values("dateRep", inplace=True)
confirmed_df.tail()

# +
confirmed_df.columns = ["sample_date_string", "new_cases"]
confirmed_df.sort_values("sample_date_string", inplace=True)
confirmed_df["covid_cases"] = confirmed_df.new_cases.rolling(7).mean().round()
confirmed_df.tail()

# +
extraction_date_confirmed_df = \
    confirmed_df[confirmed_df.sample_date_string == extraction_date]
extraction_previous_date_confirmed_df = \
    confirmed_df[confirmed_df.sample_date_string == extraction_previous_date].copy()

if extraction_date_confirmed_df.empty and \
        not extraction_previous_date_confirmed_df.empty:
    extraction_previous_date_confirmed_df["sample_date_string"] = extraction_date
    extraction_previous_date_confirmed_df["new_cases"] = \
        extraction_previous_date_confirmed_df.covid_cases
    confirmed_df = confirmed_df.append(extraction_previous_date_confirmed_df)

confirmed_df["covid_cases"] = confirmed_df.covid_cases.fillna(0).astype(int)
confirmed_df.tail()

# +
confirmed_df[["new_cases", "covid_cases"]].plot()

# + [markdown]
# ### Extract API TEKs

# +
raw_zip_path_prefix = "Data/TEKs/Raw/"
fail_on_error_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
    exposure_notification_io.download_exposure_keys_from_backends(
        generation_days=backend_generation_days,
        fail_on_error_backend_identifiers=fail_on_error_backend_identifiers,
        save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
    columns={
        "generation_datetime": "sample_datetime",
        "generation_date_string": "sample_date_string",
    },
    inplace=True)
multi_backend_exposure_keys_df.head()

# +
early_teks_df = multi_backend_exposure_keys_df[
    multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
    .rolling_period_in_hours.hist(bins=list(range(24)))

# +
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
    .rolling_period_in_hours.hist(bins=list(range(24)))

# +
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
    "sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()

# +
active_regions = \
    multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions

# +
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
    ["sample_date_string", "region"]).key_data.nunique().reset_index() \
    .pivot(index="sample_date_string", columns="region") \
    .sort_index(ascending=False)
multi_backend_summary_df.rename(
    columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
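The per-backend summary above pivots unique key counts per date and region. A toy sketch of the same `groupby`/`pivot` shape (rows, regions, and key values are made up):

```python
import pandas as pd

# Toy TEK rows; the report counts unique keys per (date, region) and pivots.
df = pd.DataFrame({
    "sample_date_string": ["2020-10-22", "2020-10-22", "2020-10-23"],
    "region": ["ES", "DE", "ES"],
    "key_data": ["k1", "k2", "k3"],
})
summary = df.groupby(["sample_date_string", "region"]).key_data.nunique().reset_index() \
    .pivot(index="sample_date_string", columns="region") \
    .sort_index(ascending=False).fillna(0).astype(int)

# Missing (date, region) combinations become 0 after fillna.
print(summary.values.tolist())
```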
# +
multi_backend_without_active_region_exposure_keys_df = \
    multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
    multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region

# +
exposure_keys_summary_df = multi_backend_exposure_keys_df[
    multi_backend_exposure_keys_df.region == report_backend_identifier]
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
    exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
    exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()

# + [markdown]
# ### Dump API TEKs

# +
tek_list_df = multi_backend_exposure_keys_df[
    ["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
    "sample_date_string": "sample_date",
    "key_data": "tek_list"}, inplace=True)
tek_list_df = \
    tek_list_df.groupby(
        ["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour

tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + "/Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"

for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
    os.makedirs(os.path.dirname(path), exist_ok=True)

tek_list_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
    tek_list_current_path,
    lines=True, orient="records")
tek_list_df.drop(columns=["extraction_date_with_hour"]).to_json(
    tek_list_daily_path,
    lines=True, orient="records")
tek_list_df.to_json(
    tek_list_hourly_path,
    lines=True, orient="records")
tek_list_df.head()

# + [markdown]
# ### Load TEK Dumps

# +
import glob


def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
    extracted_teks_df = pd.DataFrame(columns=["region"])
    file_paths = list(reversed(sorted(glob.glob(
        tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
    if limit:
        file_paths = file_paths[:limit]
    for file_path in file_paths:
        logging.info(f"Loading TEKs from '{file_path}'...")
        iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
        extracted_teks_df = extracted_teks_df.append(
            iteration_extracted_teks_df, sort=False)
    extracted_teks_df["region"] = \
        extracted_teks_df.region.fillna(spain_region_country_code).copy()
    if region:
        extracted_teks_df = \
            extracted_teks_df[extracted_teks_df.region == region]
    return extracted_teks_df

# +
daily_extracted_teks_df = load_extracted_teks(
    mode="Daily",
    region=report_backend_identifier,
    limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()

# +
exposure_keys_summary_df_ = daily_extracted_teks_df \
    .sort_values("extraction_date", ascending=False) \
    .groupby("sample_date").tek_list.first() \
    .to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
    exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
    .rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
    .sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()

# + [markdown]
# ### Daily New TEKs

# +
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
    lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
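The cell above flattens each extraction date's lists of TEKs into a single set with `set(sum(x, []))`. The same idiom in isolation, with made-up key names:

```python
# sum(list_of_lists, []) concatenates the lists; set() deduplicates the keys.
tek_lists = [["a", "b"], ["b", "c"]]
flattened = set(sum(tek_lists, []))
print(sorted(flattened))
```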
# +
def compute_teks_by_generation_and_upload_date(date):
    day_new_teks_set_df = tek_list_df.copy().diff()
    try:
        day_new_teks_set = day_new_teks_set_df[
            day_new_teks_set_df.index == date].tek_list.item()
    except ValueError:
        day_new_teks_set = None
    if pd.isna(day_new_teks_set):
        day_new_teks_set = set()

    day_new_teks_df = daily_extracted_teks_df[
        daily_extracted_teks_df.extraction_date == date].copy()
    day_new_teks_df["shared_teks"] = \
        day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
    day_new_teks_df["shared_teks"] = \
        day_new_teks_df.shared_teks.apply(len)
    day_new_teks_df["upload_date"] = date
    day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
    day_new_teks_df = day_new_teks_df[
        ["upload_date", "generation_date", "shared_teks"]]
    day_new_teks_df["generation_to_upload_days"] = \
        (pd.to_datetime(day_new_teks_df.upload_date) -
         pd.to_datetime(day_new_teks_df.generation_date)).dt.days
    day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
    return day_new_teks_df


shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
    shared_teks_generation_to_upload_df = \
        shared_teks_generation_to_upload_df.append(
            compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
    .sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()

# +
today_new_teks_df = \
    shared_teks_generation_to_upload_df[
        shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
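`compute_teks_by_generation_and_upload_date` relies on `tek_list_df.diff()`, where subtracting consecutive cells that hold Python sets acts as a set difference: the TEKs first seen on a given extraction date. The day-over-day difference, with toy sets:

```python
# Toy cumulative TEK sets for two consecutive extraction dates.
previous_day_teks = {"a", "b"}
current_day_teks = {"a", "b", "c", "d"}

# Set subtraction keeps only the keys that are new on the later date,
# mirroring what the diff() of the cumulative set column yields.
day_new_teks_set = current_day_teks - previous_day_teks
print(sorted(day_new_teks_set))
```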
# +
if not today_new_teks_df.empty:
    today_new_teks_df.set_index("generation_to_upload_days") \
        .sort_index().shared_teks.plot.bar()

# +
generation_to_upload_period_pivot_df = \
    shared_teks_generation_to_upload_df[
        ["upload_date", "generation_to_upload_days", "shared_teks"]] \
        .pivot(index="upload_date", columns="generation_to_upload_days") \
        .sort_index(ascending=False).fillna(0).astype(int) \
        .droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()

# +
new_tek_df = tek_list_df.diff().tek_list.apply(
    lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
    "tek_list": "shared_teks_by_upload_date",
    "extraction_date": "sample_date_string"}, inplace=True)
new_tek_df.tail()

# +
estimated_shared_diagnoses_df = daily_extracted_teks_df.copy()
estimated_shared_diagnoses_df["new_sample_extraction_date"] = \
    pd.to_datetime(estimated_shared_diagnoses_df.sample_date) + datetime.timedelta(1)
estimated_shared_diagnoses_df["extraction_date"] = pd.to_datetime(estimated_shared_diagnoses_df.extraction_date)
estimated_shared_diagnoses_df["sample_date"] = pd.to_datetime(estimated_shared_diagnoses_df.sample_date)
estimated_shared_diagnoses_df.head()
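The estimation keeps rows where the extraction date is exactly one day after the TEK sample date, since a device that shares a diagnosis uploads the previous day's key on the following day. The matching rule in isolation, with made-up dates:

```python
import datetime

# A TEK generated on the sample date is expected in the next day's extraction.
sample_date = datetime.date(2020, 10, 22)
extraction_date = datetime.date(2020, 10, 23)

is_next_day_upload = (sample_date + datetime.timedelta(days=1)) == extraction_date
print(is_next_day_upload)
```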
# +
# Sometimes TEKs from the same day are uploaded; we do not count them as new TEK devices:
same_day_tek_list_df = estimated_shared_diagnoses_df[
    estimated_shared_diagnoses_df.sample_date == estimated_shared_diagnoses_df.extraction_date].copy()
same_day_tek_list_df = same_day_tek_list_df[["extraction_date", "tek_list"]].rename(
    columns={"tek_list": "same_day_tek_list"})
same_day_tek_list_df.head()

# +
shared_teks_uploaded_on_generation_date_df = same_day_tek_list_df.rename(
    columns={
        "extraction_date": "sample_date_string",
        "same_day_tek_list": "shared_teks_uploaded_on_generation_date",
    })
shared_teks_uploaded_on_generation_date_df.shared_teks_uploaded_on_generation_date = \
    shared_teks_uploaded_on_generation_date_df.shared_teks_uploaded_on_generation_date.apply(len)
shared_teks_uploaded_on_generation_date_df.head()
shared_teks_uploaded_on_generation_date_df["sample_date_string"] = \
    shared_teks_uploaded_on_generation_date_df.sample_date_string.dt.strftime("%Y-%m-%d")
shared_teks_uploaded_on_generation_date_df.head()

# +
estimated_shared_diagnoses_df = estimated_shared_diagnoses_df[
    estimated_shared_diagnoses_df.new_sample_extraction_date ==
    estimated_shared_diagnoses_df.extraction_date]
estimated_shared_diagnoses_df.head()

# +
same_day_tek_list_df["extraction_date"] = \
    same_day_tek_list_df.extraction_date + datetime.timedelta(1)
estimated_shared_diagnoses_df = \
    estimated_shared_diagnoses_df.merge(same_day_tek_list_df, how="left", on=["extraction_date"])
estimated_shared_diagnoses_df["same_day_tek_list"] = \
    estimated_shared_diagnoses_df.same_day_tek_list.apply(lambda x: [] if x is np.nan else x)
estimated_shared_diagnoses_df.head()

# +
estimated_shared_diagnoses_df.set_index("extraction_date", inplace=True)
estimated_shared_diagnoses_df["shared_diagnoses"] = estimated_shared_diagnoses_df.apply(
    lambda x: len(set(x.tek_list).difference(x.same_day_tek_list)), axis=1).copy()
estimated_shared_diagnoses_df.reset_index(inplace=True)
estimated_shared_diagnoses_df.rename(columns={
    "extraction_date": "sample_date_string"}, inplace=True)
estimated_shared_diagnoses_df = estimated_shared_diagnoses_df[["sample_date_string", "shared_diagnoses"]]
estimated_shared_diagnoses_df["sample_date_string"] = \
    estimated_shared_diagnoses_df.sample_date_string.dt.strftime("%Y-%m-%d")
estimated_shared_diagnoses_df.head()

# + [markdown]
# ### Hourly New TEKs

# +
hourly_extracted_teks_df = load_extracted_teks(
    mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()

# +
hourly_new_tek_count_df = hourly_extracted_teks_df \
    .groupby("extraction_date_with_hour").tek_list. \
    apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
    .sort_index(ascending=True)

hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
    lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
    "new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
    "extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()

# +
hourly_estimated_shared_diagnoses_df = hourly_extracted_teks_df.copy()
hourly_estimated_shared_diagnoses_df["new_sample_extraction_date"] = \
    pd.to_datetime(hourly_estimated_shared_diagnoses_df.sample_date) + datetime.timedelta(1)
hourly_estimated_shared_diagnoses_df["extraction_date"] = \
    pd.to_datetime(hourly_estimated_shared_diagnoses_df.extraction_date)
hourly_estimated_shared_diagnoses_df = hourly_estimated_shared_diagnoses_df[
    hourly_estimated_shared_diagnoses_df.new_sample_extraction_date ==
    hourly_estimated_shared_diagnoses_df.extraction_date]
hourly_estimated_shared_diagnoses_df = \
    hourly_estimated_shared_diagnoses_df.merge(same_day_tek_list_df, how="left", on=["extraction_date"])
hourly_estimated_shared_diagnoses_df["same_day_tek_list"] = \
    hourly_estimated_shared_diagnoses_df.same_day_tek_list.apply(lambda x: [] if x is np.nan else x)
hourly_estimated_shared_diagnoses_df["shared_diagnoses"] = hourly_estimated_shared_diagnoses_df.apply(
    lambda x: len(set(x.tek_list).difference(x.same_day_tek_list)), axis=1)
hourly_estimated_shared_diagnoses_df = \
    hourly_estimated_shared_diagnoses_df.sort_values("extraction_date_with_hour").copy()
hourly_estimated_shared_diagnoses_df["shared_diagnoses"] = hourly_estimated_shared_diagnoses_df \
    .groupby("extraction_date").shared_diagnoses.diff() \
    .fillna(0).astype(int)
hourly_estimated_shared_diagnoses_df.set_index("extraction_date_with_hour", inplace=True)
hourly_estimated_shared_diagnoses_df.reset_index(inplace=True)
hourly_estimated_shared_diagnoses_df = hourly_estimated_shared_diagnoses_df[[
    "extraction_date_with_hour", "shared_diagnoses"]]
hourly_estimated_shared_diagnoses_df.head()

# +
hourly_summary_df = hourly_new_tek_count_df.merge(
    hourly_estimated_shared_diagnoses_df, on=["extraction_date_with_hour"], how="outer")
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
    hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()

# + [markdown]
# ### Data Merge

# +
result_summary_df = exposure_keys_summary_df.merge(
    new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
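The "Data Merge" cells use outer joins so that dates present in only one input survive with NaN values (filled with 0 later). A toy sketch of that behavior with made-up frames:

```python
import pandas as pd

# Two toy inputs with disjoint dates; an outer merge keeps both rows.
left = pd.DataFrame({
    "sample_date_string": ["2020-10-22"],
    "shared_teks_by_generation_date": [5],
})
right = pd.DataFrame({
    "sample_date_string": ["2020-10-23"],
    "shared_teks_by_upload_date": [3],
})
merged = left.merge(right, on=["sample_date_string"], how="outer")
print(len(merged))
```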
# +
result_summary_df = result_summary_df.merge(
    shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()

# +
result_summary_df = result_summary_df.merge(
    estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()

# +
result_summary_df = confirmed_df.tail(daily_summary_days).merge(
    result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()

# +
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df.set_index("sample_date", inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()

# +
with pd.option_context("mode.use_inf_as_na", True):
    result_summary_df = result_summary_df.fillna(0).astype(int)
    result_summary_df["teks_per_shared_diagnosis"] = \
        (result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
    result_summary_df["shared_diagnoses_per_covid_case"] = \
        (result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)

result_summary_df.head(daily_plot_days)

# +
weekly_result_summary_df = result_summary_df \
    .sort_index(ascending=True).fillna(0).rolling(7).agg({
        "covid_cases": "sum",
        "shared_teks_by_generation_date": "sum",
        "shared_teks_by_upload_date": "sum",
        "shared_diagnoses": "sum"
    }).sort_index(ascending=False)

with pd.option_context("mode.use_inf_as_na", True):
    weekly_result_summary_df = weekly_result_summary_df.fillna(0).astype(int)
    weekly_result_summary_df["teks_per_shared_diagnosis"] = \
        (weekly_result_summary_df.shared_teks_by_upload_date / weekly_result_summary_df.shared_diagnoses).fillna(0)
    weekly_result_summary_df["shared_diagnoses_per_covid_case"] = \
        (weekly_result_summary_df.shared_diagnoses / weekly_result_summary_df.covid_cases).fillna(0)

weekly_result_summary_df.head()

# +
last_7_days_summary = weekly_result_summary_df.to_dict(orient="records")[0]
last_7_days_summary

# + [markdown]
# ## Report Results

# +
display_column_name_mapping = {
    "sample_date": "Sample\u00A0Date\u00A0(UTC)",
    "datetime_utc": "Timestamp (UTC)",
    "upload_date": "Upload Date (UTC)",
    "generation_to_upload_days": "Generation to Upload Period in Days",
    "region": "Backend Identifier",
    "covid_cases": "COVID-19 Cases in Source Countries (7-day Rolling Average)",
    "shared_teks_by_generation_date": "Shared TEKs by Generation Date",
    "shared_teks_by_upload_date": "Shared TEKs by Upload Date",
    "shared_diagnoses": "Shared Diagnoses (Estimation)",
    "teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis",
    "shared_diagnoses_per_covid_case": "Usage Ratio (Fraction of Cases in Source Countries Which Shared Diagnosis)",
    "shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date",
}

# +
summary_columns = [
    "covid_cases",
    "shared_teks_by_generation_date",
    "shared_teks_by_upload_date",
    "shared_teks_uploaded_on_generation_date",
    "shared_diagnoses",
    "teks_per_shared_diagnosis",
    "shared_diagnoses_per_covid_case",
]

# + [markdown]
# ### Daily Summary Table

# +
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
    .rename_axis(index=display_column_name_mapping) \
    .rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
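The weekly table above sums each metric over a 7-day rolling window and then derives the ratio columns. The same pattern on a toy frame with constant made-up values:

```python
import pandas as pd

# Toy daily metrics: 8 days of constant values.
df = pd.DataFrame({
    "covid_cases": [10] * 8,
    "shared_diagnoses": [2] * 8,
})

# rolling(7).agg sums each complete 7-day window; the first 6 rows are NaN.
weekly = df.rolling(7).agg({
    "covid_cases": "sum",
    "shared_diagnoses": "sum",
}).dropna()
weekly["shared_diagnoses_per_covid_case"] = \
    weekly.shared_diagnoses / weekly.covid_cases
print(weekly.shared_diagnoses_per_covid_case.tolist())
```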
# + [markdown]
# ### Daily Summary Plots

# +
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
    .rename_axis(index=display_column_name_mapping) \
    .rename(columns=display_column_name_mapping)

summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
    title="Daily Summary",
    rot=45,
    subplots=True,
    figsize=(15, 22),
    legend=False)
ax_ = summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
ax_.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))

# + [markdown]
# ### Daily Generation to Upload Period Table

# +
display_generation_to_upload_period_pivot_df = \
    generation_to_upload_period_pivot_df \
        .head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
    .head(backend_generation_days) \
    .rename_axis(columns=display_column_name_mapping) \
    .rename_axis(index=display_column_name_mapping)

# +
import matplotlib.pyplot as plt

fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
    figsize=(10, 1 + 0.5 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
    "Shared TEKs Generation to Upload Period Table")
sns.heatmap(
    data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping) .rename_axis(index=display_column_name_mapping), fmt=".0f", annot=True, ax=generation_to_upload_period_pivot_table_ax) generation_to_upload_period_pivot_table_ax.get_figure().tight_layout() # + [markdown] papermill={"duration": 0.067777, "end_time": "2020-10-23T23:26:23.520306", "exception": false, "start_time": "2020-10-23T23:26:23.452529", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Hourly Summary Plots # + papermill={"duration": 0.580665, "end_time": "2020-10-23T23:26:24.169081", "exception": false, "start_time": "2020-10-23T23:26:23.588416", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] hourly_summary_ax_list = hourly_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .plot.bar( title=f"Last 24h Summary", rot=45, subplots=True, legend=False) ax_ = hourly_summary_ax_list[-1] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.9) _ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist())) # + [markdown] papermill={"duration": 0.067539, "end_time": "2020-10-23T23:26:24.304929", "exception": false, "start_time": "2020-10-23T23:26:24.237390", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Publish Results # + papermill={"duration": 0.076434, "end_time": "2020-10-23T23:26:24.448148", "exception": false, "start_time": "2020-10-23T23:26:24.371714", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] def get_temporary_image_path() -> str: return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png") def save_temporary_plot_image(ax): if isinstance(ax, np.ndarray): ax = ax[0] media_path = get_temporary_image_path() ax.get_figure().savefig(media_path) return media_path def save_temporary_dataframe_image(df): import dataframe_image as dfi media_path = get_temporary_image_path() dfi.export(df, media_path) return media_path # + papermill={"duration": 
0.097284, "end_time": "2020-10-23T23:26:24.611496", "exception": false, "start_time": "2020-10-23T23:26:24.514212", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] github_repository = os.environ.get("GITHUB_REPOSITORY") if github_repository is None: github_repository = "pvieito/Radar-STATS" github_project_base_url = "https://github.com/" + github_repository display_formatters = { display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}", display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}", } daily_summary_table_html = result_summary_with_display_names_df \ .head(daily_plot_days) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .to_html(formatters=display_formatters) multi_backend_summary_table_html = multi_backend_summary_df \ .head(daily_plot_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html(formatters=display_formatters) extraction_date_result_summary_df = \ result_summary_df[result_summary_df.index == extraction_date] extraction_date_result_hourly_summary_df = \ hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour] covid_cases = \ extraction_date_result_summary_df.covid_cases.sum() shared_teks_by_generation_date = \ extraction_date_result_summary_df.shared_teks_by_generation_date.sum() shared_teks_by_upload_date = \ extraction_date_result_summary_df.shared_teks_by_upload_date.sum() shared_diagnoses = \ extraction_date_result_summary_df.shared_diagnoses.sum() teks_per_shared_diagnosis = \ extraction_date_result_summary_df.teks_per_shared_diagnosis.sum() shared_diagnoses_per_covid_case = \ extraction_date_result_summary_df.shared_diagnoses_per_covid_case.sum() shared_teks_by_upload_date_last_hour = \ extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int) 
shared_diagnoses_last_hour = \ extraction_date_result_hourly_summary_df.shared_diagnoses.sum().astype(int) display_source_regions = ", ".join(report_source_regions) # + papermill={"duration": 13.902448, "end_time": "2020-10-23T23:26:38.579675", "exception": false, "start_time": "2020-10-23T23:26:24.677227", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] summary_plots_image_path = save_temporary_plot_image( ax=summary_ax_list) summary_table_image_path = save_temporary_dataframe_image( df=result_summary_with_display_names_df) hourly_summary_plots_image_path = save_temporary_plot_image( ax=hourly_summary_ax_list) multi_backend_summary_table_image_path = save_temporary_dataframe_image( df=multi_backend_summary_df) generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image( ax=generation_to_upload_period_pivot_table_ax) # + [markdown] papermill={"duration": 0.06615, "end_time": "2020-10-23T23:26:38.711327", "exception": false, "start_time": "2020-10-23T23:26:38.645177", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Save Results # + papermill={"duration": 0.097007, "end_time": "2020-10-23T23:26:38.872896", "exception": false, "start_time": "2020-10-23T23:26:38.775889", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-" result_summary_df.to_csv( report_resources_path_prefix + "Summary-Table.csv") result_summary_df.to_html( report_resources_path_prefix + "Summary-Table.html") hourly_summary_df.to_csv( report_resources_path_prefix + "Hourly-Summary-Table.csv") multi_backend_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Summary-Table.csv") generation_to_upload_period_pivot_df.to_csv( report_resources_path_prefix + "Generation-Upload-Period-Table.csv") _ = shutil.copyfile( summary_plots_image_path, report_resources_path_prefix + "Summary-Plots.png") _ = shutil.copyfile( summary_table_image_path, report_resources_path_prefix + 
"Summary-Table.png") _ = shutil.copyfile( hourly_summary_plots_image_path, report_resources_path_prefix + "Hourly-Summary-Plots.png") _ = shutil.copyfile( multi_backend_summary_table_image_path, report_resources_path_prefix + "Multi-Backend-Summary-Table.png") _ = shutil.copyfile( generation_to_upload_period_pivot_table_image_path, report_resources_path_prefix + "Generation-Upload-Period-Table.png") # + [markdown] papermill={"duration": 0.06807, "end_time": "2020-10-23T23:26:39.009886", "exception": false, "start_time": "2020-10-23T23:26:38.941816", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Publish Results as JSON # + papermill={"duration": 0.085804, "end_time": "2020-10-23T23:26:39.162394", "exception": false, "start_time": "2020-10-23T23:26:39.076590", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] summary_results_api_df = result_summary_df.reset_index() summary_results_api_df["sample_date_string"] = \ summary_results_api_df["sample_date"].dt.strftime("%Y-%m-%d") summary_results = dict( source_regions=report_source_regions, extraction_datetime=extraction_datetime, extraction_date=extraction_date, extraction_date_with_hour=extraction_date_with_hour, last_hour=dict( shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour, shared_diagnoses=shared_diagnoses_last_hour, ), today=dict( covid_cases=covid_cases, shared_teks_by_generation_date=shared_teks_by_generation_date, shared_teks_by_upload_date=shared_teks_by_upload_date, shared_diagnoses=shared_diagnoses, teks_per_shared_diagnosis=teks_per_shared_diagnosis, shared_diagnoses_per_covid_case=shared_diagnoses_per_covid_case, ), last_7_days=last_7_days_summary, daily_results=summary_results_api_df.to_dict(orient="records")) summary_results = \ json.loads(pd.Series([summary_results]).to_json(orient="records"))[0] with open(report_resources_path_prefix + "Summary-Results.json", "w") as f: json.dump(summary_results, f, indent=4) # + [markdown] papermill={"duration": 0.068597, 
"end_time": "2020-10-23T23:26:39.301329", "exception": false, "start_time": "2020-10-23T23:26:39.232732", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Publish on README # + papermill={"duration": 0.077394, "end_time": "2020-10-23T23:26:39.448263", "exception": false, "start_time": "2020-10-23T23:26:39.370869", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] with open("Data/Templates/README.md", "r") as f: readme_contents = f.read() readme_contents = readme_contents.format( extraction_date_with_hour=extraction_date_with_hour, github_project_base_url=github_project_base_url, daily_summary_table_html=daily_summary_table_html, multi_backend_summary_table_html=multi_backend_summary_table_html, display_source_regions=display_source_regions) with open("README.md", "w") as f: f.write(readme_contents) # + [markdown] papermill={"duration": 0.070529, "end_time": "2020-10-23T23:26:39.587560", "exception": false, "start_time": "2020-10-23T23:26:39.517031", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Publish on Twitter # + papermill={"duration": 8.970285, "end_time": "2020-10-23T23:26:48.627845", "exception": false, "start_time": "2020-10-23T23:26:39.657560", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER") github_event_name = os.environ.get("GITHUB_EVENT_NAME") if enable_share_to_twitter and github_event_name == "schedule": import tweepy twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"] twitter_api_auth_keys = twitter_api_auth_keys.split(":") auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1]) auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3]) api = tweepy.API(auth) summary_plots_media = api.media_upload(summary_plots_image_path) summary_table_media = api.media_upload(summary_table_image_path) generation_to_upload_period_pivot_table_image_media = 
api.media_upload(generation_to_upload_period_pivot_table_image_path) media_ids = [ summary_plots_media.media_id, summary_table_media.media_id, generation_to_upload_period_pivot_table_image_media.media_id, ] status = textwrap.dedent(f""" #RadarCOVID Report – {extraction_date_with_hour} Source Countries: {display_source_regions} Today: - Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour) - Shared Diagnoses: ≤{shared_diagnoses:.0f} ({shared_diagnoses_last_hour:+d} last hour) - Usage Ratio: ≤{shared_diagnoses_per_covid_case:.2%} Week: - Shared Diagnoses: ≤{last_7_days_summary["shared_diagnoses"]:.0f} - Usage Ratio: ≤{last_7_days_summary["shared_diagnoses_per_covid_case"]:.2%} More Info: {github_project_base_url}#documentation """) status = status.encode(encoding="utf-8") api.update_status(status=status, media_ids=media_ids) # + papermill={"duration": 0.065995, "end_time": "2020-10-23T23:26:48.760334", "exception": false, "start_time": "2020-10-23T23:26:48.694339", "status": "completed"} pycharm={"name": "#%%\n"} tags=[]
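# The tweet status above leans on Python's format specifiers (`:.0f`, `:+d`, `:.2%`). As a quick standalone reference, here is how each of them renders; this is plain Python behavior, nothing notebook-specific is assumed:

```python
# Format specifiers used in the tweet body:
#   .0f -> float rendered without decimal places
#   +d  -> integer with an explicit sign (useful for "last hour" deltas)
#   .2% -> fraction rendered as a percentage with two decimals
print(f"{1234.0:.0f}")   # 1234
print(f"{5:+d}")         # +5
print(f"{-3:+d}")        # -3
print(f"{0.1234:.2%}")   # 12.34%
```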
# Source notebook: Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-10-23.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## Minimum Regret Search # # This notebook can be used to reproduce the results of Section 5.1. of the paper "Minimum Regret Search for Single- and Multi-Task Optimization" (ICML 2016, https://arxiv.org/abs/1602.01064). # # Minimum Regret Search (MRS) is a Bayesian optimization method, which # aims at minimizing the expected immediate regret of its ultimate recommendation # for the optimum. While empirically MRS and the related Entropy Search perform similar in most of the cases, # MRS produces fewer outliers with high regret than ES. This notebook provides empirical results # on a synthetic single-task optimization problem. # + import cPickle import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cm # %matplotlib inline plt.rcParams.update({'axes.labelsize': 11, 'axes.titlesize': 12, 'text.fontsize': 11, 'xtick.labelsize': 8, 'ytick.labelsize': 8, 'legend.fontsize': 7, 'legend.fancybox': True, 'font.family': 'serif'}) c = ["r", "g", "b", 'k', 'm', 'c'] ms = ["o", "d", "v", "^", "s", "p"] from joblib import Parallel, delayed from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic from bayesian_optimization import (BayesianOptimizer, GaussianProcessModel, MinimalRegretSearch, EntropySearch, UpperConfidenceBound, ExpectedImprovement, ProbabilityOfImprovement, GPUpperConfidenceBound) from bayesian_optimization.utils.optimization import global_optimization # - # We will compare different acquisition functions. 
In order to focus on the differences of the different acquisition functions and preclude other factors, we will use a setting where there is no model mismatch between the (unknown) objective function and the surrogate model used in Bayesian optimization. This can be accomplished by drawing the objective function from a GP with identical kernel as the the GP/kernel used in BO. def generate_function(kernel, boundaries, n_train_points, random_state): """Draws target function from GP.""" # Create GP prior gp_prior = GaussianProcessRegressor(kernel=kernel) # Sample function from GP by first sampling X values and then # sampling the y values for this X values for a specific function # from the GP X_train = np.random.RandomState(random_state).uniform( boundaries[:, 0], boundaries[:, 1], (n_train_points, boundaries.shape[0])) y_train = gp_prior.sample_y(X_train, 1, random_state=random_state) # Fit GP to the (X, y) samples of the sampled function gp_target = GaussianProcessRegressor(kernel=kernel) gp_target.fit(X_train, y_train) # Use the mean of the fitted GP as target function def target_fct(x): return gp_target.predict(x[np.newaxis, :])[0, 0] return target_fct def create_optimizer(kernel, acquisition_function): """Create Bayesian optimizer for given GP kernel and acquisition function.""" model = GaussianProcessModel(kernel=kernel, alpha=1e-3) if acquisition_function == "mrs": acquisition_function = \ MinimalRegretSearch(model=model, n_gp_samples=1000, n_candidates=25, n_trial_points=250, n_samples_y=51, point=False) if acquisition_function == "mrs_point": acquisition_function = \ MinimalRegretSearch(model=model, n_gp_samples=1000, n_candidates=25, n_trial_points=250, n_samples_y=51, point=True) elif acquisition_function == "es": acquisition_function = \ EntropySearch(model=model, n_gp_samples=1000, n_candidates=25, n_trial_points=250, n_samples_y=51) elif acquisition_function == "ucb": acquisition_function = \ UpperConfidenceBound(model=model, kappa=5.0) elif 
acquisition_function == "gp_ucb": acquisition_function = \ GPUpperConfidenceBound(model=model, const=0.5) elif acquisition_function == "ei": acquisition_function = \ ExpectedImprovement(model=model) elif acquisition_function == "pi": acquisition_function = \ ProbabilityOfImprovement(model=model) bayes_opt = BayesianOptimizer(model=model, optimizer="direct+lbfgs", acquisition_function=acquisition_function) return bayes_opt # + # We do not impose any model mismatch between the GP from which the # function was sampled and the GP used internally in Bayesian optimization. # Moreover, we assume that GP in BO already "knows" the true length scales kernel = RBF(length_scale=0.1, length_scale_bounds="fixed") kernel_bo = RBF(length_scale=0.1, length_scale_bounds="fixed") n_train_points = 250 # used only for sampling the target function boundaries = np.array([[0.0, 1.0], [0.0, 1.0]]) # Search space n_runs = 250 # number of independent runs n_trials = 100 # how many trials (queries) per run are performed settings = ["mrs", "es", "mrs_point", "ei", "gp_ucb", "pi"] # acquisition functions # - def perform_run(setting, run): """Execute a single run of a specific setting.""" # Make results reproducible by using run index as random seed np.random.seed(run) y_regret = np.empty((n_trials / 5)) # Compute simple regret every 5 steps X_dist = np.empty((n_trials / 5)) # Compute distance of recommendation from optimum every 5 steps X_query = np.empty((n_trials, boundaries.shape[0])) # Remember query points # Generate target function and compute (approximately) its optimum (X_opt) and the # maximal value y_opt target_fct = generate_function(kernel, boundaries, n_train_points, random_state=run) X_opt = global_optimization(target_fct, boundaries=boundaries, optimizer="direct", maxf=1000) y_opt = target_fct(X_opt) # Create Bayesian optimizer and perform run bayes_opt = create_optimizer(kernel_bo, acquisition_function=setting) for i in range(n_trials): # Select query point and generate noisy
observation query = bayes_opt.select_query_point(boundaries) X_query[i] = query result = target_fct(query) + 1e-3 * np.random.random() # Update Bayesian optimizer bayes_opt.update(query, result) if i % 5 == 4: # Every 5 time steps: determine recommendation of # Bayesian optimizer and compute its regret and distance to optimum X_sel = global_optimization(lambda x: bayes_opt.model.gp.predict(x[np.newaxis, :]), boundaries=boundaries, optimizer="direct", maxf=1000) y_sel = target_fct(X_sel) y_regret[i / 5] = y_opt - y_sel X_dist[i / 5] = np.sqrt(((X_opt - X_sel)**2).sum()) # Store results of individual runs in a log file (not stricly necessary) #f = open("log/%s_%s" % (setting, run), 'w') #cPickle.dump((setting, run, y_regret, X_dist, X_query), f, protocol=-1) #f.close() return setting, run, y_regret, X_dist, X_query # + # Since the actual experiment takes quite long, we default to loading # results from disk from a prior execution of the experiment (the one reported # in the paper). load_results = True if not load_results: # actually perform experiment # Run the experiment parallel (multiple processes on one machine) n_jobs = 1 # how many parallel processes res = Parallel(n_jobs=n_jobs, verbose=10)( delayed(perform_run)(setting, run) for run in range(n_runs) for setting in settings) # Extract the simple regrets, the distance from optimum, and where # the runs have performed queries y_regret = np.empty((len(settings), n_runs, n_trials / 5)) X_dist = np.empty((len(settings), n_runs, n_trials / 5)) X_query = np.empty((len(settings), n_runs, n_trials, 2)) for setting, run, y_regret_, X_dist_, X_query_ in res: i = settings.index(setting) y_regret[i, run] = y_regret_ X_dist[i, run] = X_dist_ X_query[i, run] = X_query_ # This would store the results: # f = open("icml_2016_res.pickle", "w") # cPickle.dump((y_regret, X_dist, X_query), f) # f.close() else: # load results f = open("icml_2016_res.pickle", "r") y_regret, X_dist, X_query = cPickle.load(f) f.close() # - # ### 
Plotting # Generate plots of Figure 2 of the paper setting_names = { "mrs": "MRS", "es": "ES", "mrs_point": r"MRS$^{point}$", "ei": "EI", "gp_ucb": "GP-UCB", "pi": "PI" } # + ## Simple regret over number of queries fig_width = 487.8225 / 72.27 # Get fig_width_pt from LaTeX using \showthe\columnwidth fig_height = fig_width * 0.5 # height in inches plt.figure(0, dpi=400, figsize=(fig_width, fig_height)) ax = plt.subplot(1,2,1) for i, setting in enumerate(settings): ax.plot(np.arange(5, n_trials+1, 5), np.median(y_regret[i], 0), c=c[i], marker=ms[i], label=setting_names[setting]) ax.set_ylim(1e-5, 1e1) #ax.tick_params(axis='y', which='minor') #ax.yaxis.set_minor_locator(MultipleLocator(5)) ax.set_yscale("log") ax.set_xlabel("N") ax.set_ylabel("Regret") ax.grid(b=True, which='major', color='gray', linestyle='--') ax.grid(b=True, which='minor', color='gray', linestyle=':') ax.legend(loc="best", ncol=3, ) ax.set_title("Median Regret") ax = plt.subplot(1,2,2) for i, setting in enumerate(settings): ax.plot(np.arange(5, n_trials+1, 5), np.mean(y_regret[i], 0), c=c[i], marker=ms[i], label=setting_names[setting]) ax.set_ylim(1e-5, 1e1) ax.set_yscale("log") ax.set_xlabel("N") ax.set_yticklabels([]) ax.grid(b=True, which='major', color='gray', linestyle='--') ax.grid(b=True, which='minor', color='gray', linestyle=':') ax.set_title("Mean Regret") plt.tight_layout() # + # Histograms of simple regrets over runs fig_width = 487.8225 / 72.27 # Get fig_width_pt from LaTeX using \showthe\columnwidth fig_height = fig_width * 0.4 # height in inches plt.figure(0, dpi=400, figsize=(fig_width, fig_height)) for i, setting in enumerate(settings): ax = plt.subplot(2, 3, i+1) ax.hist(np.maximum(y_regret[i, : , -1], 1e-8), #normed=True, bins=np.logspace(-8, 1, 19), color=c[i]) ax.set_xscale("log") ax.set_yscale("log") ax.set_title(setting_names[setting]) ax.set_xticks([1e-8, 1e-6, 1e-4, 1e-2, 1e0]) if i % 3 != 0: ax.set_yticklabels([]) else: ax.set_ylabel("Count") if i < 3: 
ax.set_xticklabels([]) else: ax.set_xlabel("Regret") plt.tight_layout() # + ## Scatter plot of simple regrets for pairs of acquisition functions plt.figure(figsize=(10, 10)) for i in range(len(settings)): for j in range(len(settings)): plt.subplot(len(settings), len(settings), j*len(settings) + i + 1) plt.scatter(y_regret[i, :, -1], y_regret[j, :, -1], c='r') plt.xscale("log") plt.yscale("log") plt.xlim(1e-10, 1e1) plt.ylim(1e-10, 1e1) plt.plot([1e-10, 1e1], [1e-10, 1e1], c='k') if j == len(settings) - 1: plt.xlabel(setting_names[settings[i]]) plt.gca().set_xticklabels([]) if i == 0: plt.ylabel(setting_names[settings[j]]) plt.gca().set_yticklabels([]) plt.title(str((y_regret[i, : , -1] < y_regret[j, : , -1]).mean())) plt.tight_layout() # + # Visualize target function and where different acquisition functions # have performed queries run = 218 kernel = RBF(length_scale=0.1, length_scale_bounds="fixed") n_train_points = 250 boundaries = np.array([[0.0, 1.0], [0.0, 1.0]]) target_fct = generate_function(kernel, boundaries, n_train_points, random_state=run) X_opt = global_optimization(target_fct, boundaries=boundaries, optimizer="direct", maxf=5000) x_ = np.linspace(0, 1, 50) y_ = np.linspace(0, 1, 49) X_, Y_ = np.meshgrid(x_, y_) Z_ = [[target_fct(np.array([x_[i], y_[j]])) for i in range(x_.shape[0])] for j in range(y_.shape[0])] plt.figure(figsize=(10, 5)) for i in range(len(settings)): plt.subplot(2, 3, i+1) plt.pcolor(X_, Y_, np.array(Z_), cmap = cm.Greys) plt.colorbar() plt.scatter(X_query[i, run, :, 0], X_query[i, run, :, 1], c='r') plt.scatter(X_opt[[0]], X_opt[1], c='b') plt.xlim(0, 1) plt.ylim(0, 1) plt.title(setting_names[settings[i]]) plt.tight_layout() # + ## Generate Figure 3 of paper np.random.seed(0) setting_index = 1 run = y_regret[setting_index, :, -1].argmax() target_fct = generate_function(kernel, boundaries, n_train_points, random_state=run) X_opt = global_optimization(target_fct, boundaries=boundaries, optimizer="direct", maxf=5000) model = 
GaussianProcessModel(kernel=kernel, alpha=1e-3) y = [target_fct(X_query[1, run, i]) for i in range(100)] model.fit(X_query[1, run], y) n_gp_samples = 1000 mrs = MinimalRegretSearch(model=model, n_gp_samples=n_gp_samples, n_candidates=25, n_trial_points=250, n_samples_y=51, point=False) mrs.set_boundaries(boundaries) es = EntropySearch(model=model, n_gp_samples=n_gp_samples, n_candidates=25, n_trial_points=250, n_samples_y=51) es.set_boundaries(boundaries, X_candidate=mrs.X_candidate) x_ = np.linspace(0, 1, 100) y_ = np.linspace(0, 1, 100) X_, Y_ = np.meshgrid(x_, y_) # True function Z_ = [[target_fct(np.array([x_[i], y_[j]])) for i in range(x_.shape[0])] for j in range(y_.shape[0])] fig_width = 487.8225 / 72.27 # Get fig_width_pt from LaTeX using \showthe\columnwidth fig_height = fig_width * 0.33 # height in inches plt.figure(0, dpi=400, figsize=(fig_width, fig_height)) ax = plt.subplot(1, 4, 1) ax.set_axis_bgcolor('white') plt.pcolor(X_, Y_, np.array(Z_), cmap = cm.Greys) #plt.colorbar() samples = plt.scatter(X_query[setting_index, run, :, 0], X_query[setting_index, run, :, 1], alpha=1.0, c='g', marker='o', label="Samples") optimum = plt.scatter(X_opt[[0]], X_opt[1], alpha=1.0, c='b', marker='d', s=40, label="Optimum") plt.xlim(0, 1) plt.ylim(-0.2, 1) plt.xticks([]) plt.yticks([]) plt.title("Function/Samples", fontsize=10) ax = plt.subplot(1, 4, 2) ax.set_axis_bgcolor('white') plt.pcolor(X_, Y_, np.array(Z_), cmap = cm.Greys) repr_points = plt.scatter(mrs.X_candidate[:, 0], mrs.X_candidate[:, 1], alpha=1.0, c='r', label="Representer points") plt.legend(handles=[samples, repr_points, optimum], loc="lower right", ncol=3) plt.xlim(0, 1) plt.ylim(-0.2, 1) plt.xticks([]) plt.yticks([]) plt.title("Representer points", fontsize=10) for index, acq in enumerate([es, mrs]): ax = plt.subplot(1, 4, index+3) ax.set_axis_bgcolor('white') Z_ = [[acq(np.array([x_[i], y_[j]]))[0] for i in range(x_.shape[0])] for j in range(y_.shape[0])] plt.pcolor(X_, Y_, np.array(Z_), cmap = 
cm.Greys) #plt.colorbar() plt.xlim(0, 1) plt.ylim(-0.2, 1) plt.xticks([]) plt.yticks([]) plt.title(["ES", "MRS"][index], fontsize=10) plt.tight_layout() # -
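# A note on why the plots above report both median and mean regret: MRS's advantage shows up mainly in the *tail* of the regret distribution, so the two aggregates can disagree. The following self-contained sketch uses synthetic regret values (made up for illustration, not taken from the experiment) to show how a method with a few high-regret outlier runs can win on the median yet lose on the mean:

```python
import numpy as np

rng = np.random.RandomState(0)
n_runs = 250

# Method A: consistently mediocre final regrets.
regret_a = rng.uniform(1e-4, 1e-3, size=n_runs)

# Method B: usually excellent, but roughly 5% of runs end with high regret.
is_outlier = rng.uniform(size=n_runs) < 0.05
regret_b = np.where(is_outlier,
                    rng.uniform(0.5, 1.0, size=n_runs),
                    rng.uniform(1e-6, 1e-5, size=n_runs))

assert np.median(regret_b) < np.median(regret_a)  # B looks better "typically"...
assert np.mean(regret_b) > np.mean(regret_a)      # ...but its outliers dominate the mean
```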
# Source notebook: examples/mrs_evaluation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Part 2 - End-to-End Machine Learning Project # # 1. Get the data # ## i. Download the Data # Here is a function to fetch the data # + import os import tarfile from six.moves import urllib DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/" HOUSING_PATH = "datasets/housing" HOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + "/housing.tgz" def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH): if not os.path.isdir(housing_path): os.makedirs(housing_path) tgz_path = os.path.join(housing_path, "housing.tgz") urllib.request.urlretrieve(housing_url, tgz_path) housing_tgz = tarfile.open(tgz_path) housing_tgz.extractall(path=housing_path) housing_tgz.close() # - # Now when we call fetch_housing_data(), it creates a datasets/housing directory in our workspace, downloads the housing.tgz file, and extracts the housing.csv from it in this directory. # Now let's load the data using Pandas. Once again we should write a small function to load the data. # + import pandas as pd def load_housing_data(housing_path=HOUSING_PATH): csv_path = os.path.join(housing_path, "housing.csv") return pd.read_csv(csv_path) # - # ## ii. Take a Quick Look at the Data Structure # Let's take a look at the top five rows using the DataFrame's head() method housing = load_housing_data() housing.head() # The info() method is useful to get a quick description of the data, in particular the total number of rows, and each attribute's type and number of non-null values housing.info() # We can find out what categories exist and how many rows belong to each category by using the value_counts() method housing["ocean_proximity"].value_counts() # Let's look at the other fields. The describe() method shows a summary of the numerical attributes. 
housing.describe()

# Let's plot a histogram for all the numerical attributes in the dataset

# %matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
plt.show()

# ## iii. Create a Test Set

# Creating a test set is theoretically quite simple: just pick some instances randomly, typically 20% of the dataset, and set them aside.

# +
import numpy as np

def split_train_test(data, test_ratio):
    shuffled_indices = np.random.permutation(len(data))
    test_set_size = int(len(data) * test_ratio)
    test_indices = shuffled_indices[:test_set_size]
    train_indices = shuffled_indices[test_set_size:]
    return data.iloc[train_indices], data.iloc[test_indices]
# -

# We can then use this function like this

train_set, test_set = split_train_test(housing, 0.2)
print(len(train_set), "train +", len(test_set), "test")

# +
# Advanced approach for creating the train test split
import hashlib

def test_set_check(identifier, test_ratio, hash):
    return hash(np.int64(identifier)).digest()[-1] < 256 * test_ratio

def split_train_test_by_id(data, test_ratio, id_column, hash=hashlib.md5):
    ids = data[id_column]
    in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio, hash))
    return data.loc[~in_test_set], data.loc[in_test_set]
# -

# continued...

housing_with_id = housing.reset_index()  # adds an 'index' column
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")

# Scikit-Learn's approach for creating the train test split

# +
from sklearn.model_selection import train_test_split

train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
# -

# Time for some feature engineering to create a representative sample for stratified sampling

housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)

# Now we are ready to do stratified sampling based on the income category. For this we can use Scikit-Learn's StratifiedShuffleSplit class.
# + from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(housing, housing["income_cat"]): strat_train_set = housing.loc[train_index] strat_test_set = housing.loc[test_index] # - # Let's see if this worked as expected. We can start by looking at the income category proportions in the full housing dataset. housing["income_cat"].value_counts() / len(housing) strat_train_set["income_cat"].value_counts() / len(strat_train_set) strat_test_set["income_cat"].value_counts() / len(strat_test_set) # Now we should remove the income_cat attribute so the data is back to its original state for set_ in (strat_train_set, strat_test_set): set_.drop("income_cat", axis=1, inplace=True) # # 2. Discover and Visualize the Data to Gain Insights # Let's create a copy so that we can play with it without harming the training set housing = strat_train_set.copy() # ## i. Visualizing Geographical Data housing.plot(kind="scatter", x="longitude", y="latitude") # Now let's look at the housing prices. The radius of each circle represents the district's population (option s), and the color represents the price (option c). We will use a predefined color map (option cmap) called jet, which ranges from blue (low values) to red (high prices). housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1) housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.4, s=housing["population"]/100, label="population", figsize=(10,7), c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True) plt.legend() # ## ii. Looking for Correlations # Since the dataset is not too large, we can easily compute the standard correlation coefficient (also called Pearson's r) between every pair of attributes using the corr() method. 
corr_matrix = housing.corr()

# Now let's look at how much each attribute correlates with the median house value

corr_matrix["median_house_value"].sort_values(ascending=False)

# Another way to check for correlation between attributes is to use Pandas' scatter_matrix function, which plots every numerical attribute against every other numerical attribute. Since there are now 11 numerical attributes, we would get 121 plots, which would not fit on a page, so let's just focus on a few promising attributes that seem most correlated with the median housing value.

# +
from pandas.plotting import scatter_matrix

attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12,8))
# -

# The most promising attribute to predict the median house value is the median income, so let's zoom in on their correlation scatterplot.

housing.plot(kind="scatter", x="median_income", y="median_house_value", alpha=0.1)

# ### iii. Experimenting with Attribute Combinations

# # 3. Prepare the Data for Machine Learning Algorithms

# First let's revert to a clean training set (by copying strat_train_set once again), and let's separate the predictors and the labels since we don't necessarily want to apply the same transformations to the predictors and the target values (note that drop() creates a copy of the data and does not affect strat_train_set).

housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()

# ## i. Data Cleaning

# We can accomplish these easily using DataFrame's dropna(), drop(), and fillna() methods.

housing.dropna(subset=["total_bedrooms"])    # option 1
housing.drop("total_bedrooms", axis=1)       # option 2
median = housing["total_bedrooms"].median()  # option 3
housing["total_bedrooms"].fillna(median, inplace=True)

# Scikit-Learn provides a handy class to take care of missing values: Imputer. Here is how to use it.
First, we need to create an Imputer instance, specifying that we want to replace each attribute's missing values with the median of that attribute. # + from sklearn.preprocessing import Imputer imputer = Imputer(strategy="median") # - # Since the median can only be computed on numerical attributes, we need to create a copy of the data without the text attribute ocean_proximity. housing_num = housing.drop("ocean_proximity", axis=1) # Now we can fit the imputer instance to the training data using the fit() method imputer.fit(housing_num) # The imputer has simply computed the median of each attribute and stored the result in its statistics_ instance variable. Only the total_bedrooms attribute had missing values, but we cannot be sure that there won't be any missing values in new data after the system goes live, so it is safer to apply the imputer to all the numerical attributes. imputer.statistics_ housing_num.median().values # Now we can use this "trained" imputer to transform the training set by replacing missing values by the learned medians X = imputer.transform(housing_num) # The result is a plain Numpy array containing the transformed features. If we want to put it back into a Pandas DataFrame, it's simple. housing_tr = pd.DataFrame(X, columns=housing_num.columns) # ## ii. Handling Text and Categorical Attributes # Scikit-Learn provides a transformer called LabelEncoder to convert text labels to numbers # + from sklearn.preprocessing import LabelEncoder encoder = LabelEncoder() housing_cat = housing["ocean_proximity"] housing_cat_encoded = encoder.fit_transform(housing_cat) housing_cat_encoded # - # We can look at the mapping that this encoder has learned using the classes_ attribute ("<1H OCEAN" is mapped to 0, "INLAND" is mapped to 1, etc.) print(encoder.classes_) # Scikit-Learn provides a OneHotEncoder encoder to convert integer categorical values into one-hot vectors. Let's encode the categories as one-hot vectors. 
Note that fit_transform() expects a 2D array, but housing_cat_encoded is a 1D array, so we need to reshape it. # + from sklearn.preprocessing import OneHotEncoder encoder = OneHotEncoder() housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1,1)) housing_cat_1hot # - # If we want to convert the sparse matrix to a (dense) Numpy array, just call the toarray() method. housing_cat_1hot.toarray() # We can apply both transformations (from text categories to integer categories, then from integer categories to one-hot vectors) in one shot using the LabelBinarizer class. # + from sklearn.preprocessing import LabelBinarizer encoder = LabelBinarizer() housing_cat_1hot = encoder.fit_transform(housing_cat) housing_cat_1hot # - # ## iii. Transformation Pipelines # As we can see, there are many data transformation steps that need to be executed in the right order. Fortunately, Scikit-Learn provides the Pipeline class to help with such sequences of transformations. Here is a small pipeline for the numerical attributes. # + # from sklearn.pipeline import Pipeline # from sklearn.preprocessing import StandardScaler # from sklearn.impute import SimpleImputer # num_pipeline = Pipeline([ # ('imputer', Imputer(strategy="median")), # ('attribs_adder', CombinedAttributesAdder()), # ('std_scaler', StandardScaler()), # ]) # housing_num_tr = num_pipeline.fit_transform(housing_num) # - # Now it would be nice if we could feed a Pandas DataFrame directly into our pipeline, instead of having to first manually extract the numerical columns into a NumPy array. There is nothing in Scikit-Learn to handle Pandas DataFrames, but we can write a custom transformer for this task. 
# + from sklearn.base import BaseEstimator, TransformerMixin class DataFrameSelector(BaseEstimator, TransformerMixin): def __init__(self, attribute_names): self.attribute_names = attribute_names def fit(self, X, y=None): return self def transform(self, X): return X[self.attribute_names].values # - # Our DataFrameSelector will transform the data by selecting the desired attributes, dropping the rest, and converting the resulting DataFrame to a NumPy array. With this, we can easily write a pipeline that will take a Pandas DataFrame and handle only the numerical values: the pipeline would just start with a DataFrameSelector to pick only the numerical attributes, followed by the other preprocessing steps we discussed earlier. And we can just as easily write another pipeline for the categorical attributes as well by simply selecting the categorical attributes using a DataFrameSelector and then applying a LabelBinarizer. # + # num_attribs = list(housing_num) # cat_attribs = ["ocean_proximity"] # num_pipeline = Pipeline([ # ('selector', DataFrameSelector(num_attribs)), # ('imputer', Imputer(strategy="median")), # ('attribs_adder', CombinedAttributesAdder()), # ('std_scaler', StandardScaler()), # ]) # cat_pipeline = Pipeline([ # ('selector', DataFrameSelector(cat_attribs)), # ('label_binarizer', LabelBinarizer()), # ]) # - # A full pipeline handling both numerical and categorical attributes may look like this # + # from sklearn.pipeline import FeatureUnion # full_pipeline = FeatureUnion(transformer_list=[ # ("num_pipeline", num_pipeline), # ("cat_pipeline", cat_pipeline), # ]) # - # And we can run the whole pipeline simply # + # housing_prepared = full_pipeline.fit_transform(housing) # housing_prepared # housing_prepared.shape # - # # 4. Select and Train a Model # ## i. 
Training and Evaluating on the Training Set # Let's first train a Linear Regression model # + # from sklearn.linear_model import LinearRegression # lin_reg = LinearRegression() # lin_reg.fit(housing_prepared, housing_labels) # - # Let's try it out on a few instances from the training set # + # some_data = housing.iloc[:5] # some_labels = housing_labels.iloc[:5] # some_data_prepared = full_pipeline.transform(some_data) # print("Predictions:", lin_reg.predict(some_data_prepared)) # print("Labels:", list(some_labels)) # - # Let's measure this regression model's RMSE on the whole training set using Scikit-Learn's mean_squared_error function # + # from sklearn.metrics import mean_squared_error # housing_predictions = lin_reg.predict(housing_prepared) # lin_mse = mean_squared_error(housing_labels, housing_predictions) # lin_rmse = np.sqrt(lin_mse) # lin_rmse # - # Let's train a DecisionTreeRegressor. This is a powerful model, capable of finding complex nonlinear relationships in the data. # + # from sklearn.tree import DecisionTreeRegressor # tree_reg = DecisionTreeRegressor() # tree_reg.fit(housing_prepared, housing_labels) # - # Now that the model is trained, let's evaluate it on the training set. # + # housing_predictions = tree_reg.predict(housing_prepared) # tree_mse = mean_squared_error(housing_labels, housing_predictions) # tree_rmse = np.sqrt(tree_mse) # tree_rmse # - # ## ii. Better Evaluation Using Cross-Validation # The following code performs K-fold cross-validation. The result is an array containing the 10 evaluation scores. 
# + # from sklearn.model_selection import cross_val_score # scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10) # tree_rmse_scores = np.sqrt(-scores) # - # Let's look at the results # + # def display_scores(scores): # print("Scores:", scores) # print("Mean:", scores.mean()) # print("Standard deviation:", scores.std()) # display_scores(tree_rmse_scores) # - # Let's compute the same scores for the Linear Regression model just to be sure # + # lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10) # lin_rmse_scores = np.sqrt(-lin_scores) # display_scores(lin_rmse_scores) # - # Let's try one last model now: the RandomForestRegressor # + # from sklearn.ensemble import RandomForestRegressor # forest_reg = RandomForestRegressor() # forest_reg.fit(housing_prepared, housing_labels) # housing_predictions = forest_reg.predict(housing_prepared) # forest_mse = mean_squared_error(housing_labels, housing_predictions) # forest_rmse = np.sqrt(forest_mse) # forest_rmse # forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10) # forest_rmse_scores = np.sqrt(-forest_scores) # display_scores(forest_rmse_scores) # - # We can easily save Scikit-Learn models by using Python's pickle module, or using sklearn.externals.joblib, which is more efficient at serializing large NumPy arrays. # + # from sklearn.externals import joblib # joblib.dump(my_model, "my_model.pkl") # # and later... # my_model_loaded = joblib.load("my_model.pkl") # - # # 5. Fine-Tune Your Model # ## i. Grid Search # All we need to do is tell GridSearchCV which hyperparameters we want it to experiment with, and what values to try out, and it will evaluate all the possible combinations of hyperparameter values, using cross-validation. For example, the following code searches for the best combination of hyperparameter values for the RandomForestRegressor.
# + # from sklearn.model_selection import GridSearchCV # param_grid = [ # {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]}, # {'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]}, # ] # forest_reg = RandomForestRegressor() # grid_search = GridSearchCV(forest_reg, param_grid, cv=5, # scoring='neg_mean_squared_error') # grid_search.fit(housing_prepared, housing_labels) # - # We can get the best combination of parameters like this # + # grid_search.best_params_ # - # We can also get the best estimator directly # + # grid_search.best_estimator_ # - # And of course the evaluation scores are also available # + # cvres = grid_search.cv_results_ # for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]): # print(np.sqrt(-mean_score), params) # - # ## ii. Analyze the Best Models and Their Errors # We will often gain good insights on the problem by inspecting the best models. For example, the RandomForestRegressor can indicate the relative importance of each attribute for making accurate predictions. # + # feature_importances = grid_search.best_estimator_.feature_importances_ # feature_importances # - # Let's display these importance scores next to their corresponding attribute names # + # extra_attribs = ["rooms_per_hhold", "pop_per_hhold", "bedrooms_per_room"] # cat_one_hot_attribs = list(encoder.classes_) # attributes = num_attribs + extra_attribs + cat_one_hot_attribs # sorted(zip(feature_importances, attributes), reverse=True) # - # ## iii. 
Evaluate Your System on the Test Set # Now let's evaluate the final model on the test set # + # final_model = grid_search.best_estimator_ # X_test = strat_test_set.drop("median_house_value", axis=1) # y_test = strat_test_set["median_house_value"].copy() # X_test_prepared = full_pipeline.transform(X_test) # final_predictions = final_model.predict(X_test_prepared) # final_mse = mean_squared_error(y_test, final_predictions) # final_rmse = np.sqrt(final_mse) # - # # Part 3 - Classification # Scikit-Learn provides many helper functions to download popular datasets. MNIST is one of them. The following code fetches the MNIST dataset. def sort_by_target(mnist): reorder_train = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[:60000])]))[:, 1] reorder_test = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[60000:])]))[:, 1] mnist.data[:60000] = mnist.data[reorder_train] mnist.target[:60000] = mnist.target[reorder_train] mnist.data[60000:] = mnist.data[reorder_test + 60000] mnist.target[60000:] = mnist.target[reorder_test + 60000] try: from sklearn.datasets import fetch_openml mnist = fetch_openml('mnist_784', version=1, cache=True) mnist.target = mnist.target.astype(np.int8) # fetch_openml() returns targets as strings sort_by_target(mnist) # fetch_openml() returns an unsorted dataset except ImportError: from sklearn.datasets import fetch_mldata mnist = fetch_mldata('MNIST original') mnist["data"], mnist["target"] mnist.data.shape # Let's look at the data arrays X, y = mnist["data"], mnist["target"] X.shape y.shape # Let's take a peek at one digit from the dataset. All we need to do is grab an instance's feature vector, reshape it to a 28×28 array, and display it using Matplotlib's imshow() function.
# + # %matplotlib inline import matplotlib import matplotlib.pyplot as plt some_digit = X[36000] some_digit_image = some_digit.reshape(28, 28) plt.imshow(some_digit_image, cmap=matplotlib.cm.binary, interpolation="nearest") plt.axis("off") plt.show() # - # This looks like a 5, and indeed that's what the label tells us. def plot_digit(data): image = data.reshape(28, 28) plt.imshow(image, cmap = matplotlib.cm.binary, interpolation="nearest") plt.axis("off") # EXTRA def plot_digits(instances, images_per_row=10, **options): size = 28 images_per_row = min(len(instances), images_per_row) images = [instance.reshape(size,size) for instance in instances] n_rows = (len(instances) - 1) // images_per_row + 1 row_images = [] n_empty = n_rows * images_per_row - len(instances) images.append(np.zeros((size, size * n_empty))) for row in range(n_rows): rimages = images[row * images_per_row : (row + 1) * images_per_row] row_images.append(np.concatenate(rimages, axis=1)) image = np.concatenate(row_images, axis=0) plt.imshow(image, cmap = matplotlib.cm.binary, **options) plt.axis("off") plt.figure(figsize=(9,9)) example_images = np.r_[X[:12000:600], X[13000:30600:600], X[30600:60000:590]] plot_digits(example_images, images_per_row=10) y[36000] # We should always create a test set and set it aside before inspecting the data closely. The MNIST dataset is actually already split into a training set (the first 60,000 images) and a test set (the last 10,000 images). X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:] # Let's also shuffle the training set; this will guarantee that all cross-validation folds will be similar (we don't want one fold to be missing some digits). Moreover, some learning algorithms are sensitive to the order of the training instances, and they perform poorly if they get many similar instances in a row. Shuffling the dataset ensures that this won't happen.
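# The key point is that the features and labels must be shuffled with one and the same index permutation, so each image keeps its own label. A toy sketch (the arrays here are made up for illustration, not part of the original notebook):

```python
import numpy as np

rng = np.random.RandomState(42)
X_toy = np.arange(10).reshape(5, 2)   # 5 instances, 2 features; row i starts with 2*i
y_toy = np.array([0, 1, 2, 3, 4])     # label i belongs to row i

perm = rng.permutation(5)             # one shared permutation for both arrays
X_shuf, y_shuf = X_toy[perm], y_toy[perm]

# after shuffling, every row still carries its own label
assert (X_shuf[:, 0] // 2 == y_shuf).all()
```

The MNIST code below does exactly this with a single `shuffle_index` applied to both `X_train` and `y_train`.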
# + import numpy as np shuffle_index = np.random.permutation(60000) X_train, y_train = X_train[shuffle_index], y_train[shuffle_index] # - # # 1. Training a Binary Classifier # Let's simplify the problem for now and only try to identify one digit: for example, the number 5. This "5-detector" will be an example of a binary classifier, capable of distinguishing between just two classes, 5 and not-5. Let's create the target vectors for this classification task. y_train_5 = (y_train == 5) # True for all 5s, False for all other digits. y_test_5 = (y_test == 5) # Let's create an SGDClassifier and train it on the whole training set # + from sklearn.linear_model import SGDClassifier sgd_clf = SGDClassifier(random_state=42) sgd_clf.fit(X_train, y_train_5) # - # Now we can use it to detect images of the number 5 sgd_clf.predict([some_digit]) # # 2. Performance Measures # ## i. Measuring Accuracy Using Cross-Validation # Let's use the cross_val_score() function to evaluate our SGDClassifier model using K-fold cross-validation, with three folds. # + from sklearn.model_selection import cross_val_score cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy") # - # Occasionally we will need more control over the cross-validation process than what Scikit-Learn provides off-the-shelf. In these cases, we can implement cross-validation ourselves; it is actually fairly straightforward. The following code does roughly the same thing as Scikit-Learn’s cross_val_score() function, and prints the same result.
# + from sklearn.model_selection import StratifiedKFold from sklearn.base import clone skfolds = StratifiedKFold(n_splits=3, shuffle=True, random_state=42) # random_state requires shuffle=True for train_index, test_index in skfolds.split(X_train, y_train_5): clone_clf = clone(sgd_clf) X_train_folds = X_train[train_index] y_train_folds = (y_train_5[train_index]) X_test_fold = X_train[test_index] y_test_fold = (y_train_5[test_index]) clone_clf.fit(X_train_folds, y_train_folds) y_pred = clone_clf.predict(X_test_fold) n_correct = sum(y_pred == y_test_fold) print(n_correct / len(y_pred)) # - # Let's look at a very dumb classifier that just classifies every single image in the "not-5" class # + from sklearn.base import BaseEstimator class Never5Classifier(BaseEstimator): def fit(self, X, y=None): pass def predict(self, X): return np.zeros((len(X), 1), dtype=bool) # - # Let's find out the model's accuracy never_5_clf = Never5Classifier() cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy") # That’s right, it has over 90% accuracy! This is simply because only about 10% of the images are 5s, so if we always guess that an image is not a 5, we will be right about 90% of the time. # # This demonstrates why accuracy is generally not the preferred performance measure for classifiers, especially when we are dealing with skewed datasets (i.e., when some classes are much more frequent than others). # ## ii. Confusion Matrix # To compute the confusion matrix, we first need to have a set of predictions, so they can be compared to the actual targets. We can use the cross_val_predict() function for this purpose. # + from sklearn.model_selection import cross_val_predict y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3) # - # Just like the cross_val_score() function, cross_val_predict() performs K-fold cross-validation, but instead of returning the evaluation scores, it returns the predictions made on each test fold.
This means that we get a clean prediction for each instance in the training set (“clean” meaning that the prediction is made by a model that never saw the data during training). # # Now we are ready to get the confusion matrix using the confusion_matrix() function. Just pass it the # target classes (y_train_5) and the predicted classes (y_train_pred). # + from sklearn.metrics import confusion_matrix confusion_matrix(y_train_5, y_train_pred) # - # A perfect classifier would have only true positives and true negatives, so its confusion matrix would have nonzero values only on its main diagonal (top left to bottom right). y_train_perfect_predictions = y_train_5 confusion_matrix(y_train_5, y_train_perfect_predictions) # ## iii. Precision and Recall # Scikit-Learn provides several functions to compute classifier metrics, including precision and recall. # + from sklearn.metrics import precision_score, recall_score precision_score(y_train_5, y_train_pred) # - recall_score(y_train_5, y_train_pred) # To compute the F1 score, simply call the f1_score() function. # + from sklearn.metrics import f1_score f1_score(y_train_5, y_train_pred) # - # ## iv. Precision / Recall Tradeoff # Scikit-Learn does not let us set the threshold directly, but it does give us access to the decision scores that it uses to make predictions. Instead of calling the classifier’s predict() method, we can call its decision_function() method, which returns a score for each instance, and then make predictions based on those scores using any threshold we want. y_scores = sgd_clf.decision_function([some_digit]) y_scores threshold = 0 y_some_digit_pred = (y_scores > threshold) # The SGDClassifier uses a threshold equal to 0, so the previous code returns the same result as the predict() method (i.e., True). Let’s raise the threshold. threshold = 200000 y_some_digit_pred = (y_scores > threshold) y_some_digit_pred # This confirms that raising the threshold decreases recall.
The image actually represents a 5, and the classifier detects it when the threshold is 0, but it misses it when the threshold is increased to 200,000. So how can we decide which threshold to use? For this we will first need to get the scores of all instances in the training set using the cross_val_predict() function again, but this time specifying that we want it to return decision scores instead of predictions. y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method="decision_function") # Now with these scores we can compute precision and recall for all possible thresholds using the precision_recall_curve() function. # Note: there was an issue in Scikit-Learn 0.19.0 (fixed in 0.19.1) where the result of cross_val_predict() was incorrect in the binary classification case when using method="decision_function", as in the code above. The resulting array had an extra first dimension full of 0s. Just in case you are using 0.19.0, we need to add this small hack to work around this issue. y_scores.shape # hack to work around issue #9589 in Scikit-Learn 0.19.0 if y_scores.ndim == 2: y_scores = y_scores[:, 1] # + from sklearn.metrics import precision_recall_curve precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores) # - # Finally, we can plot precision and recall as functions of the threshold value using Matplotlib. # + def plot_precision_recall_vs_threshold(precisions, recalls, thresholds): plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2) plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2) plt.xlabel("Threshold", fontsize=16) plt.legend(loc="upper left", fontsize=16) plt.ylim([0, 1]) plt.figure(figsize=(8, 4)) plot_precision_recall_vs_threshold(precisions, recalls, thresholds) plt.xlim([-70000, 70000]) plt.show() # - # Now we can simply select the threshold value that gives us the best precision/recall tradeoff for our task. 
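# Rather than reading the threshold off the plot, the lowest threshold that reaches a target precision can also be looked up programmatically. A small sketch on made-up labels and scores (not the MNIST data); note that precision_recall_curve() returns one more precision value than thresholds, hence the [:-1]:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# made-up labels and decision scores, for illustration only
y_true = np.array([1, 0, 1, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.95])

precisions, recalls, thresholds = precision_recall_curve(y_true, scores)

# index of the first threshold whose precision reaches at least 90%
idx = np.argmax(precisions[:-1] >= 0.90)
threshold_90 = thresholds[idx]
```

Applied to the real y_scores, the same lookup replaces eyeballing the value on the precision-versus-threshold plot.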
Another way to select a good precision/recall tradeoff is to plot precision directly against recall. # + def plot_precision_vs_recall(precisions, recalls): plt.plot(recalls, precisions, "b-", linewidth=2) plt.xlabel("Recall", fontsize=16) plt.ylabel("Precision", fontsize=16) plt.axis([0, 1, 0, 1]) plt.figure(figsize=(8, 6)) plot_precision_vs_recall(precisions, recalls) plt.show() # - # We can see that precision really starts to fall sharply around 80% recall. We will probably want to select a precision/recall tradeoff just before that drop — for example, at around 60% recall. But of course the choice depends on our project. # # So let’s suppose we decide to aim for 90% precision. We look up the first plot (zooming in a bit) and find that we need to use a threshold of about 70,000. To make predictions (on the training set for now), instead of calling the classifier’s predict() method, we can just run this code. y_train_pred_90 = (y_scores > 70000) # Let’s check these predictions’ precision and recall precision_score(y_train_5, y_train_pred_90) recall_score(y_train_5, y_train_pred_90) # Great, we have a 90% precision classifier (or close enough)! As we can see, it is fairly easy to create a classifier with virtually any precision we want: just set a high enough threshold, and we're done. Hmm, not so fast. A high-precision classifier is not very useful if its recall is too low! # ## v. The ROC Curve # To plot the ROC curve, we first need to compute the TPR and FPR for various threshold values, using the roc_curve() function.
# + from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_train_5, y_scores) # - # Then we can plot the FPR against the TPR using Matplotlib # + def plot_roc_curve(fpr, tpr, label=None): plt.plot(fpr, tpr, linewidth=2, label=label) plt.plot([0, 1], [0, 1], 'k--') plt.axis([0, 1, 0, 1]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plot_roc_curve(fpr, tpr) plt.show() # - # Scikit-Learn provides a function to compute the ROC AUC # + from sklearn.metrics import roc_auc_score roc_auc_score(y_train_5, y_scores) # - # Let’s train a RandomForestClassifier and compare its ROC curve and ROC AUC score to the SGDClassifier. First, we need to get scores for each instance in the training set. But due to the way it works, the RandomForestClassifier class does not have a decision_function() method. Instead it has a predict_proba() method. Scikit-Learn classifiers generally have one or the other. The predict_proba() method returns an array containing a row per instance and a column per class, each containing the probability that the given instance belongs to the given class (e.g., 70% chance that the image represents a 5). # + from sklearn.ensemble import RandomForestClassifier forest_clf = RandomForestClassifier(random_state=42) y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3, method="predict_proba") # - # But to plot a ROC curve, we need scores, not probabilities. A simple solution is to use the positive class’s probability as the score. y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest) # Now we are ready to plot the ROC curve. It is useful to plot the first ROC curve as well to see how they compare. 
plt.plot(fpr, tpr, "b:", label="SGD") plot_roc_curve(fpr_forest, tpr_forest, "Random Forest") plt.legend(loc="lower right") plt.show() # As we can see, the RandomForestClassifier’s ROC curve looks much better than the SGDClassifier’s: it comes much closer to the top-left corner. As a result, its ROC AUC score is also significantly better. roc_auc_score(y_train_5, y_scores_forest) # # 3. Multiclass Classification # Scikit-Learn detects when we try to use a binary classification algorithm for a multiclass classification task, and it automatically runs OvA (except for SVM classifiers for which it uses OvO). Let’s try this with the SGDClassifier. sgd_clf.fit(X_train, y_train) # y_train, not y_train_5 sgd_clf.predict([some_digit]) # That was easy! This code trains the SGDClassifier on the training set using the original target classes from 0 to 9 (y_train), instead of the 5-versus-all target classes (y_train_5). Then it makes a prediction (a correct one in this case). Under the hood, Scikit-Learn actually trained 10 binary classifiers, got their decision scores for the image, and selected the class with the highest score. # # To see that this is indeed the case, we can call the decision_function() method. Instead of returning just one score per instance, it now returns 10 scores, one per class. some_digit_scores = sgd_clf.decision_function([some_digit]) some_digit_scores # The highest score is the one corresponding to class 5 np.argmax(some_digit_scores) sgd_clf.classes_ sgd_clf.classes_[5] # If we want to force Scikit-Learn to use one-versus-one or one-versus-all, we can use the OneVsOneClassifier or OneVsRestClassifier classes. Simply create an instance and pass a binary classifier to its constructor. For example, this code creates a multiclass classifier using the OvO strategy, based on an SGDClassifier.
# + from sklearn.multiclass import OneVsOneClassifier ovo_clf = OneVsOneClassifier(SGDClassifier(random_state=42)) ovo_clf.fit(X_train, y_train) ovo_clf.predict([some_digit]) len(ovo_clf.estimators_) # - # Training a RandomForestClassifier is just as easy forest_clf.fit(X_train, y_train) forest_clf.predict([some_digit]) # This time Scikit-Learn did not have to run OvA or OvO because Random Forest classifiers can directly classify instances into multiple classes. We can call predict_proba() to get the list of probabilities that the classifier assigned to each instance for each class. forest_clf.predict_proba([some_digit]) # We can see that the classifier is fairly confident about its prediction: the 0.7 at the 5th index in the array means that the model estimates a 70% probability that the image represents a 5. It also thinks that the image could instead be a 3 (30% chance). # # Let’s evaluate the SGDClassifier’s accuracy using the cross_val_score() function. cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy") # It gets over 86% on all test folds. If we used a random classifier, we would get 10% accuracy, so this is not such a bad score, but we can still do much better. For example, simply scaling the inputs increases accuracy above 89%. # + from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train.astype(np.float64)) cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy") # - # # 4. Error Analysis # First, we can look at the confusion matrix. We need to make predictions using the cross_val_predict() function, then call the confusion_matrix() function. y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3) conf_mx = confusion_matrix(y_train, y_train_pred) conf_mx # That’s a lot of numbers. It’s often more convenient to look at an image representation of the confusion matrix, using Matplotlib’s matshow() function.
plt.matshow(conf_mx, cmap=plt.cm.gray) plt.show() # Let’s focus the plot on the errors. First, we need to divide each value in the confusion matrix by the number of images in the corresponding class, so we can compare error rates instead of absolute number of errors (which would make abundant classes look unfairly bad). row_sums = conf_mx.sum(axis=1, keepdims=True) norm_conf_mx = conf_mx / row_sums # Now let’s fill the diagonal with zeros to keep only the errors, and let’s plot the result. np.fill_diagonal(norm_conf_mx, 0) plt.matshow(norm_conf_mx, cmap=plt.cm.gray) plt.show() # Analyzing individual errors can also be a good way to gain insights on what our classifier is doing and why it is failing, but it is more difficult and time-consuming. For example, let’s plot examples of 3s and 5s (the plot_digits() function just uses Matplotlib’s imshow() function). def plot_digits(instances, images_per_row=10, **options): size = 28 images_per_row = min(len(instances), images_per_row) images = [instance.reshape(size,size) for instance in instances] n_rows = (len(instances) - 1) // images_per_row + 1 row_images = [] n_empty = n_rows * images_per_row - len(instances) images.append(np.zeros((size, size * n_empty))) for row in range(n_rows): rimages = images[row * images_per_row : (row + 1) * images_per_row] row_images.append(np.concatenate(rimages, axis=1)) image = np.concatenate(row_images, axis=0) plt.imshow(image, cmap = matplotlib.cm.binary, **options) plt.axis("off") # + cl_a, cl_b = 3, 5 X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)] X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)] X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)] X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)] plt.figure(figsize=(8,8)) plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5) plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5) plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5) plt.subplot(224); plot_digits(X_bb[:25],
images_per_row=5) plt.show() # - # # 5. Multilabel Classification # Let’s look at a simpler example, just for illustration purposes. # + from sklearn.neighbors import KNeighborsClassifier y_train_large = (y_train >= 7) y_train_odd = (y_train % 2 == 1) y_multilabel = np.c_[y_train_large, y_train_odd] knn_clf = KNeighborsClassifier() knn_clf.fit(X_train, y_multilabel) # - # This code creates a y_multilabel array containing two target labels for each digit image: the first indicates whether or not the digit is large (7, 8, or 9) and the second indicates whether or not it is odd. The next lines create a KNeighborsClassifier instance (which supports multilabel classification, but not all classifiers do) and we train it using the multiple targets array. Now we can make a prediction, and notice that it outputs two labels. knn_clf.predict([some_digit]) # There are many ways to evaluate a multilabel classifier, and selecting the right metric really depends on the project. For example, one approach is to measure the F1 score for each individual label (or any other binary classifier metric discussed earlier), then simply compute the average score. This code computes the average F1 score across all labels. y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3) f1_score(y_multilabel, y_train_knn_pred, average="macro") # This assumes that all labels are equally important, which may not be the case. In particular, if we have many more pictures of Alice than of Bob or Charlie, we may want to give more weight to the classifier’s score on pictures of Alice. One simple option is to give each label a weight equal to its support (i.e., the number of instances with that target label). To do this, simply set average="weighted" in the preceding code. # # 6. Multioutput Classification # Let’s start by creating the training and test sets by taking the MNIST images and adding noise to their pixel intensities using NumPy’s randint() function.
The target images will be the original images. noise = np.random.randint(0, 100, (len(X_train), 784)) X_train_mod = X_train + noise noise = np.random.randint(0, 100, (len(X_test), 784)) X_test_mod = X_test + noise y_train_mod = X_train y_test_mod = X_test # Let’s take a peek at an image from the test set. # # On the left is the noisy input image, and on the right is the clean target image. Now let’s train the classifier and make it clean this image. knn_clf.fit(X_train_mod, y_train_mod) clean_digit = knn_clf.predict([X_test_mod[some_index]]) plot_digit(clean_digit) # Looks close enough to the target! # # Part 4 - Training Models # ## 1. Linear Regression # ### i. The Normal Equation # Let's generate some linear-looking data to test the normal equation # + import numpy as np import matplotlib.pyplot as plt X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) # - # Now let's compute theta-hat using the Normal Equation. We will use the inv() function from NumPy's Linear Algebra module (np.linalg) to compute the inverse of a matrix, and the dot() method for matrix multiplication. X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) # The actual function that we used to generate the data is y = 4 + 3x1 + Gaussian noise. Let's see what the equation found. theta_best # Now we can make predictions using theta-hat X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict # Let's plot this model's predictions plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() # The equivalent code using Scikit-Learn looks like this # + from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ # - lin_reg.predict(X_new) # ## 2. Gradient Descent # ### i.
Batch Gradient Descent # Let's look at a quick implementation of this algorithm # + eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2, 1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients # - # Let's look at the resulting theta theta # ### ii. Stochastic Gradient Descent # This code implements Stochastic Gradient Descent using a simple learning schedule # + n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2, 1) # random initialization for epoch in range(n_epochs): for i in range(m): random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients # - # By convention we iterate by rounds of m iterations; each round is called an epoch. While the Batch Gradient Descent code iterated 1,000 times through the whole training set, this code goes through the training set only 50 times and reaches a fairly good solution. theta # To perform Linear Regression using SGD with Scikit-Learn, we can use the SGDRegressor class, which defaults to optimizing the squared error cost function. The following code runs 50 epochs, starting with a learning rate of 0.1 (eta0=0.1), using the default learning schedule (different from the preceding one), and it does not use any regularization (penalty=None) # + from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1) sgd_reg.fit(X, y.ravel()) # - # Once again, we find a solution very close to the one returned by the Normal Equation sgd_reg.intercept_, sgd_reg.coef_ # ### iii. Mini-batch Gradient Descent # ## 3. Polynomial Regression # Let's look at an example.
First, let's generate some nonlinear data, based on a simple quadratic equation (plus some noise) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) # Clearly, a straight line will never fit this data properly. So let's use Scikit-Learn's PolynomialFeatures class to transform our training data, adding the square (2nd-degree polynomial) of each feature in the training set as new features (in this case there is just one feature) # + from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] # - X_poly[0] # X_poly now contains the original feature of X plus the square of this feature. Now we can fit a LinearRegression model to this extended training data. lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ # ## 4. Learning Curves # Learning curves are plots of the model's performance on the training set and the validation set as a function of the training set size. To generate the plots, simply train the model several times on different sized subsets of the training set. The following code defines a function that plots the learning curves of a model given some training data. 
# + from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") # - # Let's look at the learning curves of the plain Linear Regression model (a straight line) lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) # Now let's look at the learning curves of a 10th-degree polynomial model on the same data # + from sklearn.pipeline import Pipeline polynomial_regression = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), )) plot_learning_curves(polynomial_regression, X, y) # - # ## 5. Regularized Linear Models # ### i. Ridge Regression # Here is how to perform Ridge Regression with Scikit-Learn using a closed-form solution # + from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) # - # And using Stochastic Gradient Descent sgd_reg = SGDRegressor(penalty="l2") sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) # ### ii. Lasso Regression # Here is a small Scikit-Learn example using the Lasso class. Note that we could instead use an SGDRegressor (penalty="l1"). # + from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) # - # ### iii. 
Elastic Net # Here is a short example using Scikit-Learn's ElasticNet (l1_ratio corresponds to the mix ratio r) # + from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) # - # ### iv. Early Stopping # Here is a basic implementation of early stopping # + # from sklearn.base import clone # sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, # learning_rate="constant", eta0=0.0005) # minimum_val_error = float("inf") # best_epoch = None # best_model = None # for epoch in range(1000): # sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off # y_val_predict = sgd_reg.predict(X_val_poly_scaled) # val_error = mean_squared_error(y_val_predict, y_val) # if val_error < minimum_val_error: # minimum_val_error = val_error # best_epoch = epoch # best_model = clone(sgd_reg) # - # Note that with warm_start=True, when the fit() method is called, it just continues training where it left off instead of restarting from scratch. # ## 6. Logistic Regression # ### i. Decision Boundaries # + from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) # - X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(int) # 1 if Iris-Virginica, else 0 # Now let's train a Logistic Regression model # + from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression() log_reg.fit(X, y) # - # Let's look at the model's estimated probabilities for flowers with petal widths varying from 0 to 3 cm X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", label="Not Iris-Virginica") plt.legend() plt.show() log_reg.predict([[1.7], [1.5]]) # ### ii. Softmax Regression # Let's use Softmax Regression to classify the iris flowers into all three classes.
Scikit-Learn's LogisticRegression uses one-versus-all by default when we train it on more than two classes, but we can set the multi_class hyperparameter to "multinomial" to switch it to Softmax Regression instead. We must also specify a solver that supports Softmax Regression, such as the "lbfgs" solver (see Scikit-Learn's documentation for more details). It also applies l2 regularization by default, which we can control using the hyperparameter C. # + X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10) softmax_reg.fit(X, y) # - # So the next time we find an iris with petals 5 cm long and 2 cm wide, we can ask our model to tell us what type of iris it is, and it will answer Iris-Virginica (class 2) with 94.2% probability (or Iris-Versicolor with 5.8% probability). softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) # # Part 5 - Support Vector Machines # ## 1. Linear SVM Classification # The following Scikit-Learn code loads the iris dataset, scales the features, and then trains a linear SVM model (using the LinearSVC class with C=1 and the hinge loss function) to detect Iris-Virginica flowers. # + import numpy as np from sklearn import datasets from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.svm import LinearSVC iris = datasets.load_iris() X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica svm_clf = Pipeline(( ("scaler", StandardScaler()), ("linear_svc", LinearSVC(C=1, loss="hinge")), )) svm_clf.fit(X, y) # - # Then, as usual, we can use the model to make predictions svm_clf.predict([[5.5, 1.7]]) # ## 2. Nonlinear SVM Classification # To implement this idea using Scikit-Learn, we can create a Pipeline containing a PolynomialFeatures transformer, followed by a StandardScaler and a LinearSVC. Let’s test this on the moons dataset.
# + from sklearn.datasets import make_moons from sklearn.pipeline import Pipeline from sklearn.preprocessing import PolynomialFeatures X, y = make_moons(n_samples=100, noise=0.15) polynomial_svm_clf = Pipeline(( ("poly_features", PolynomialFeatures(degree=3)), ("scaler", StandardScaler()), ("svm_clf", LinearSVC(C=10, loss="hinge")) )) polynomial_svm_clf.fit(X, y) # - # ### i. Polynomial Kernel # Adding polynomial features is simple to implement and can work great with all sorts of Machine Learning algorithms (not just SVMs), but at a low polynomial degree it cannot deal with very complex datasets, and with a high polynomial degree it creates a huge number of features, making the model too slow. # # Fortunately, when using SVMs we can apply an almost miraculous mathematical technique called the kernel trick. It makes it possible to get the same result as if we added many polynomial features, even with very high-degree polynomials, without actually having to add them. So there is no combinatorial explosion of the number of features since we don’t actually add any features. This trick is implemented by the SVC class. Let’s test it on the moons dataset. # + from sklearn.svm import SVC poly_kernel_svm_clf = Pipeline(( ("scaler", StandardScaler()), ("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5)) )) poly_kernel_svm_clf.fit(X, y) # - # ### ii. Gaussian RBF Kernel # Just like the polynomial features method, the similarity features method can be useful with any Machine Learning algorithm, but it may be computationally expensive to compute all the additional features, especially on large training sets. However, once again the kernel trick does its SVM magic: it makes it possible to obtain a similar result as if we had added many similarity features, without actually having to add them. Let’s try the Gaussian RBF kernel using the SVC class. # + rbf_kernel_svm_clf = Pipeline(( ("scaler", StandardScaler()), ("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001)) )) rbf_kernel_svm_clf.fit(X, y) # - # ## 3.
SVM Regression # We can use Scikit-Learn’s LinearSVR class to perform linear SVM Regression (the training data should be scaled and centered first). # + from sklearn.svm import LinearSVR svm_reg = LinearSVR(epsilon=1.5) svm_reg.fit(X, y) # - # To tackle nonlinear regression tasks, we can use a kernelized SVM model. # # The following code uses Scikit-Learn’s SVR class (which supports the kernel trick). The SVR class is the regression equivalent of the SVC class, and the LinearSVR class is the regression equivalent of the LinearSVC class. The LinearSVR class scales linearly with the size of the training set (just like the LinearSVC class), while the SVR class gets much too slow when the training set grows large (just like the SVC class). # + from sklearn.svm import SVR svm_poly_reg = SVR(kernel="poly", degree=2, C=100, epsilon=0.1) svm_poly_reg.fit(X, y) # - # # Part 6 - Decision Trees # ## 1. Training and Visualizing a Decision Tree # The following code trains a DecisionTreeClassifier on the iris dataset # + from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier iris = load_iris() X = iris.data[:, 2:] # petal length and width y = iris.target tree_clf = DecisionTreeClassifier(max_depth=2) tree_clf.fit(X, y) # - # We can visualize the trained Decision Tree by first using the export_graphviz() method to output a graph definition file called iris_tree.dot # + # from sklearn.tree import export_graphviz # export_graphviz( # tree_clf, # out_file=image_path("iris_tree.dot"), # feature_names=iris.feature_names[2:], # class_names=iris.target_names, # rounded=True, # filled=True # ) # - # Then we can convert this .dot file to a variety of formats such as PDF or PNG using the dot command-line tool from the graphviz package. This command line converts the .dot file to a .png image file. # # $ dot -Tpng iris_tree.dot -o iris_tree.png # ## 2. Estimating Class Probabilities tree_clf.predict_proba([[5, 1.5]]) tree_clf.predict([[5, 1.5]]) # ## 3. 
Regression # Decision Trees are also capable of performing regression tasks. Let's build a regression tree using Scikit-Learn's DecisionTreeRegressor class, training it on a noisy quadratic dataset with max_depth=2. # + from sklearn.tree import DecisionTreeRegressor m = 200 X = np.random.rand(m, 1) y = 4 * (X - 0.5) ** 2 + np.random.randn(m, 1) / 10 tree_reg = DecisionTreeRegressor(max_depth=2) tree_reg.fit(X, y) # -
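To make the behavior of the fitted tree concrete, here is a small self-contained sketch (the dataset size and random seed are assumptions chosen for illustration): with max_depth=2 the tree partitions the input space into at most four regions, and each prediction is simply the average target value of the training instances in the corresponding leaf.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Noisy quadratic dataset, as in the text (size and seed are illustrative)
rng = np.random.RandomState(42)
m = 200
X = rng.rand(m, 1)
y = (4 * (X - 0.5) ** 2 + rng.randn(m, 1) / 10).ravel()

tree_reg = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg.fit(X, y)

# A depth-2 tree has at most 4 leaves, so predictions take at most
# 4 distinct values: the mean target of each leaf region
preds = tree_reg.predict(X)
print(len(np.unique(preds)))
```

This is also why the prediction curve of a regression tree looks like a staircase rather than a smooth function.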
Scripts/template_script.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <center><h1>Notebook TP1 - TP1 bis</h1> # </center> # <hr> # # <h2 style="color:#FF0000;">Part 1 - TP1</h2> # # Note the use of the numpy (array manipulation, max/min) and pyplot (plotting) libraries. # # ### Importing scikit-learn and other libraries import numpy as np from sklearn.linear_model import Perceptron from sklearn.metrics import accuracy_score import matplotlib.pyplot as pyplot # ### Model data # training set sample = np.array([[3.5,0.5],[2.5,2.],[4.5,1.5],[5.,2.5], [6.,4.],[2.5,3.5],[1.,4.],[2.,6.5],[4.,5.5]]) target = np.array([-1,-1,-1,-1,-1,1,1,1,1]) # test set t_sample= np.array([[2.,3.],[0.,5.],[4.5,5.5],[3.,6.],[7.,6.5], [0.5,2.],[1.5,2.],[2.5,1.],[4.5,3.5],[6.5,3.],[7.,5.5]]) t_target = np.array([1,1,1,1,1,-1,-1,-1,-1,-1,-1]) # ### Building the perceptron (trained on the training set) Perc1 = Perceptron(random_state=0,verbose=10) Perc1.fit(sample, target) # ### Prediction and analysis (on the test set) t_predict = Perc1.predict(t_sample) print('Accuracy: %.2f' % accuracy_score(t_target, t_predict)) Perc1.coef_ from sklearn.metrics import confusion_matrix confusion_matrix(t_target, t_predict) # ## Graphical rendering h=0.01 x_min, x_max = sample[:, 0].min() - 2, sample[:, 0].max() + 2 y_min, y_max = sample[:, 1].min() - 1, sample[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # + fig, ax = pyplot.subplots() Z = Perc1.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) ax.set_title('Visualisation Séparation linéaire données apprentissage') ax.contourf(xx, yy, Z) #ax.axis('off') for value in [-1,1]: idx = np.where(target == value) pyplot.scatter(sample[idx, 0], sample[idx, 1]) pyplot.show() # - fig, ax = pyplot.subplots() ax.set_title('Visualisation Séparation
linéaire sur jeu test') ax.contourf(xx, yy, Z) #ax.axis('off') for value in [-1,1]: idx = np.where(t_target == value) pyplot.scatter(t_sample[idx, 0], t_sample[idx, 1]) pyplot.show() # ## Second approach # # The separating line we obtained passes through (0,0); can we add a bias term? # # To do so, we append a coordinate equal to 1 to each of our points: sample = np.column_stack((sample,np.ones(9))) t_sample= np.column_stack((t_sample,np.ones(11))) sample Perc1 = Perceptron(random_state=0) Perc1.fit(sample, target) Perc1.coef_ # + fig, ax = pyplot.subplots() Z = Perc1.predict(np.c_[xx.ravel(), yy.ravel(), np.ones(len(yy.ravel()))]) Z = Z.reshape(xx.shape) ax.set_title('Visualisation Séparation linéaire données apprentissage') ax.contourf(xx, yy, Z) #ax.axis('off') for value in [-1,1]: idx = np.where(target == value) pyplot.scatter(sample[idx, 0], sample[idx, 1]) pyplot.show() # - # The line still passes through (0,0)! # <hr> # <h2 style="color:#FF0000;">Part 2 - TP1 bis (Fisher's irises)</h2> # # ### Preamble import pandas as pd from sklearn.model_selection import train_test_split import matplotlib.pyplot as pyplot from mpl_toolkits.mplot3d import Axes3D # ## Reading the data table = pd.read_csv('iris.csv') table T=np.array(table) X = np.array(T[:,0:4],dtype=float) classes = np.unique(T[:,4]); classes Perc=[Perceptron(),Perceptron(),Perceptron()] target1 = 2*(T[:,4] == classes[0]) -1; target2 = 2*(T[:,4] == classes[1]) -1; target3 = 2*(T[:,4] == classes[2]) -1; Target=[target1,target2,target3] #X_train,x_test,Y_train,y_test = train_test_split(X,target1,test_size=0.30) Target for i in [0,1,2]: Perc[i].fit(X,Target[i]); Axe1 = Perc[0].coef_; Axe1 = Axe1/np.sqrt(Axe1[0,0]**2+Axe1[0,1]**2+Axe1[0,2]**2+Axe1[0,3]**2); Axe2 = Perc[1].coef_; Axe2 = Axe2/np.sqrt(Axe2[0,0]**2+Axe2[0,1]**2+Axe2[0,2]**2+Axe2[0,3]**2); Axe3 = Perc[2].coef_; Axe3 = Axe3/np.sqrt(Axe3[0,0]**2+Axe3[0,1]**2+Axe3[0,2]**2+Axe3[0,3]**2); Axes=np.array([Axe1[0,:],Axe2[0,:],Axe3[0,:]]) Axes Coor = 
np.ones(3*150).reshape(150,3); for j in range(3): for i in range(150): Coor[i,j]= np.dot(X[i,:],Axes[j,:]) # + fig = pyplot.figure(figsize=(10,6), dpi= 100) ax = fig.add_subplot(111, projection='3d') ax.set_title('Visualisation Séparation linéaire sur jeu Iris') #ax.axis('off') m=['o','^','+'] c=['r','b','g'] for i in range(3): idx = np.where(Target[i] == 1) ax.scatter(Coor[idx, 0], Coor[idx, 1],Coor[idx, 2],marker=m[i],c=c[i]) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') pyplot.show() # - Y = [np.ones(150),np.ones(150),np.ones(150)] for i in [0,1,2]: Y[i]=Perc[i].predict(X); for i in range(3): print(f'Accuracy {classes[i]}: {accuracy_score(Target[i], Y[i]):.2f}') # # Second approach (taking the bias into account) Xn= np.column_stack((X,np.ones(150))) Percn=[Perceptron(),Perceptron(),Perceptron()] for i in range(3): Percn[i].fit(Xn,Target[i]); Axe1 = Percn[0].coef_; Axe1 = Axe1[0,0:4]/np.sqrt(Axe1[0,0]**2+Axe1[0,1]**2+Axe1[0,2]**2+Axe1[0,3]**2); Axe2 = Percn[1].coef_; Axe2 = Axe2[0,0:4]/np.sqrt(Axe2[0,0]**2+Axe2[0,1]**2+Axe2[0,2]**2+Axe2[0,3]**2); Axe3 = Percn[2].coef_; Axe3 = Axe3[0,0:4]/np.sqrt(Axe3[0,0]**2+Axe3[0,1]**2+Axe3[0,2]**2+Axe3[0,3]**2); Axes=np.array([Axe1,Axe2,Axe3]) Axes Coor = np.ones(3*150).reshape(150,3); for j in range(3): for i in range(150): Coor[i,j]= np.dot(X[i,:],Axes[j,:]) # + fig = pyplot.figure(figsize=(10,6), dpi= 100) ax = fig.add_subplot(111, projection='3d') ax.set_title('Visualisation Séparation linéaire sur jeu Iris') #ax.axis('off') m=['o','^','+'] c=['r','b','g'] for i in range(3): idx = np.where(Target[i] == 1) ax.scatter(Coor[idx, 0], Coor[idx, 1],Coor[idx, 2],marker=m[i],c=c[i]) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') pyplot.show() # - Y = [np.ones(150),np.ones(150),np.ones(150)] for i in [0,1,2]: Y[i]=Percn[i].predict(Xn); for i in range(3): print(f'Accuracy {classes[i]}: {accuracy_score(Target[i], Y[i]):.2f}') # <hr> # <h2 style="color:#FF0000;">Part 3 - TP1 ter (<NAME>)</h2> # 
# # # ### Preamble # # Source: https://www.python-course.eu/neural_networks_with_scikit.php from sklearn.datasets import load_iris iris = load_iris() # load the iris Bunch, whose fields we overwrite below iris.data = X; iris.target = np.ones(150) for i in range(3): idx=np.where(Target[i]==1) iris.target[idx]=i # + # splitting into train and test datasets from sklearn.model_selection import train_test_split datasets = train_test_split(iris.data, iris.target, test_size=0.2) train_data, test_data, train_labels, test_labels = datasets # + # scaling the data from sklearn.preprocessing import StandardScaler scaler = StandardScaler() # we fit the train data scaler.fit(train_data) # scaling the train data train_data = scaler.transform(train_data) test_data = scaler.transform(test_data) print(train_data[:3]) # + # Training the Model from sklearn.neural_network import MLPClassifier # creating a classifier from the model: mlp = MLPClassifier(hidden_layer_sizes=(60,50,4), max_iter=10000) # let's fit the training data to our model mlp.fit(train_data, train_labels) # + from sklearn.metrics import accuracy_score predictions_train = mlp.predict(train_data) print(accuracy_score(predictions_train, train_labels)) predictions_test = mlp.predict(test_data) print(accuracy_score(predictions_test, test_labels)) # + from sklearn.metrics import confusion_matrix confusion_matrix(predictions_train, train_labels) # + from sklearn.metrics import classification_report print(classification_report(predictions_test, test_labels)) # - # <hr> # <h2 style="color:#FF0000;">Part 4 - Overfitting</h2> # # + import numpy as np from sklearn.neural_network import MLPClassifier h=0.001 xx, yy = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h)) # + n=50 p=50 for test in range(1,10): Sample = np.random.rand(n,2) Sample_test = np.random.rand(p,2) Target = np.random.randint(0,2,n) Target_test = np.random.randint(0,2,p) mlp = MLPClassifier(hidden_layer_sizes=(141,51,21), max_iter=100000, tol=1e-12,solver='lbfgs') mlp.fit(Sample, Target) Predict = mlp.predict(Sample) Predict_test = mlp.predict(Sample_test)
print('Accuracy Sample : ', end=" ") print(accuracy_score(Target, Predict), end=" Test : ") print(accuracy_score(Target_test, Predict_test), end="\n") # - fig, ax = pyplot.subplots() Z = mlp.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) ax.set_title('Visualisation') ax.contourf(xx, yy, Z) #ax.axis('off') for value in [0,1]: idx = np.where(Target == value) pyplot.scatter(Sample[idx, 0], Sample[idx, 1]) pyplot.show()
Perceptron/Perceptron.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # # Production facility 04 - Flow Analysis # *This notebook illustrates how to perform an analysis of the flows within a production facility.* # *** # <NAME>, Ph.D. 2021 # ### Import packages # + # %% append functions path import sys; sys.path.insert(0, '..') # add the parent level containing the package import os import pandas as pd import numpy as np from IPython.display import display, HTML # display dataframes # - # ### Set data fields string_casestudy = 'TOY_DATA' # ### Import data # + # %% import data from analogistics.data.data_generator_distribution import generateDistributionData # random generation of distribution data _, _, _, D_mov = generateDistributionData(num_movements=2500, num_parts = 100) # - # print the movements dataframe display(HTML(D_mov.head().to_html())) # ### Create folder hierarchy # + # %% create folder hierarchy pathResults = 'C:\\Users\\aletu\\desktop' root_path = os.path.join(pathResults,f"{string_casestudy}_results") prediction_results_path = os.path.join(root_path,"P7_lotSizing") os.makedirs(root_path, exist_ok=True) os.makedirs(prediction_results_path, exist_ok=True) # - # ### Estimate the flows from analogistics.supply_chain.P3_flow_problem.assessFlows import defineFromToTable # compute the from-to matrices dict_df, dict_fig = defineFromToTable(D_flows=D_mov, colFrom='LOADING_NODE_DESCRIPTION', colTo='DISCHARGING_NODE_DESCRIPTION', colQty='QUANTITY') # print the from-to table DataFrame (quantity) dict_df['fromToQuantity'] # print the from-to table DataFrame (count) dict_df['fromToCount']
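For readers who do not have the analogistics package installed, the from-to quantity table returned by defineFromToTable can be approximated with a plain pandas pivot. The sketch below reuses the column names from the call above, but the movement records are invented purely for illustration.

```python
import pandas as pd

# Toy movement records (invented for illustration only)
D_mov_demo = pd.DataFrame({
    "LOADING_NODE_DESCRIPTION":     ["A", "A", "B", "B"],
    "DISCHARGING_NODE_DESCRIPTION": ["B", "B", "A", "C"],
    "QUANTITY":                     [10, 5, 3, 7],
})

# Total quantity moved from each row node to each column node
from_to_qty = D_mov_demo.pivot_table(index="LOADING_NODE_DESCRIPTION",
                                     columns="DISCHARGING_NODE_DESCRIPTION",
                                     values="QUANTITY",
                                     aggfunc="sum",
                                     fill_value=0)
print(from_to_qty)
```

Each cell holds the total quantity moved from the row node to the column node; swapping aggfunc="sum" for "count" yields the analogue of the fromToCount table.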
examples/Production Facility 04 - Flow Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: conda-env-tutorials-py # --- # <!--BOOK_INFORMATION--> # <img style="float: right; width: 100px" src="https://raw.github.com/pyomeca/design/master/logo/logo_cropped_doc.svg?sanitize=true"> # <font size="+3">Effective computation in Biomechanics</font> # # <font size="+2"><NAME></font> <a href="https://github.com/romainmartinez"><img src="https://img.shields.io/badge/github-romainmartinez-green?logo=github&style=social" /></a> # <!--NAVIGATION--> # < [Scientific computing with Numpy](01.02-intro-to-numpy.ipynb) | [Contents](index.ipynb) | [Programming tips and tricks](01.04-tips-and-tricks.ipynb) > # # Biomechanical analysis with Pyomeca # <center> # <img # src="https://raw.githubusercontent.com/pyomeca/design/master/logo/logo_plain_doc.svg?sanitize=true" # alt="logo" # width="800px" # /> # </center> # <center style="margin: 10px"> # <a href="https://travis-ci.org/pyomeca/pyomeca" # ><img # alt="Actions Status" # src="https://travis-ci.org/pyomeca/pyomeca.svg?branch=travis" # /></a> # <a href="https://anaconda.org/conda-forge/pyomeca" # ><img # alt="License" # src="https://anaconda.org/conda-forge/pyomeca/badges/license.svg" # /></a> # <a href="https://anaconda.org/conda-forge/pyomeca" # ><img # alt="latest release" # src="https://anaconda.org/conda-forge/pyomeca/badges/latest_release_date.svg" # /></a> # <a href="https://anaconda.org/conda-forge/pyomeca" # ><img # alt="Downloads" # src="https://anaconda.org/conda-forge/pyomeca/badges/downloads.svg" # /></a> # <a href="https://github.com/psf/black" # ><img # alt="Code style: black" # src="https://img.shields.io/badge/code%20style-black-000000.svg" # /></a> # </center> # # # Pyomeca is a python library allowing you to carry out a complete biomechanical analysis; in a simple, logical and concise way. 
# # ## Pyomeca documentation # # See Pyomeca's [documentation site](https://romainmartinez.github.io/motion). # # ## Example # # Here is an example of a complete EMG pipeline in just one command: # # ```python # from pyomeca import Analogs3d # # emg = ( # Analogs3d.from_c3d("your_c3d.c3d", names=['anterior_deltoid', 'biceps']) # .band_pass(freq=2000, order=4, cutoff=[10, 425]) # .center() # .rectify() # .low_pass(freq=2000, order=4, cutoff=5) # .normalization() # .time_normalize() # ) # ``` # # ## Features # # - Object-oriented architecture where each class is associated with common and specialized functionalities: # - **Markers3d**: 3d markers positions # - **Analogs3d**: analogs (emg, force or any analog signal) # - **GeneralizedCoordinate**: generalized coordinate (joint angle) # - **RotoTrans**: roto-translation matrix # # # - Specialized functionalities include signal processing routine commonly used in biomechanics: filters, normalization, onset detection, outliers detection, derivative, etc. # # # - Each functionality can be chained. In addition to making it easier to write and read code, it allows you to add and remove analysis steps easily (such as Lego blocks). # # # - Each class inherits from a numpy array, so you can create your own analysis step easily. # # # - Easy reading and writing interface to common files in biomechanics (c3d, csv, mat, sto, trc, mot, xlsx) # # # - Common linear algebra routine implemented: get Euler angles to/from roto-translation matrix, create a system of axes, set a rotation or translation, transpose or inverse, etc. # # ## Installation # # ### Using Conda # # First, install [miniconda](https://conda.io/miniconda.html) or [anaconda](https://www.anaconda.com/download/). 
# Then type: # # ```bash # conda install pyomeca -c conda-forge # ``` # # ## Integration with other modules # # Pyomeca is designed to work well with other libraries that we have developed: # # - [pyosim](https://github.com/pyomeca/pyosim): interface between [OpenSim](http://opensim.stanford.edu/) and pyomeca to perform batch musculoskeletal analyses # - [ezc3d](https://github.com/pyomeca/ezc3d): Easy to use C3D reader/writer in C++, Python and Matlab # - [biorbd](https://github.com/pyomeca/biorbd): C++ interface and add-ons to the Rigid Body Dynamics Library, with Python and Matlab binders. # # ## Bug Reports & Questions # # Pyomeca is Apache-licensed and the source code is available on [GitHub](https://github.com/pyomeca/pyomeca). If any questions or issues come up as you use pyomeca, please get in touch via [GitHub issues](https://github.com/pyomeca/pyomeca/issues). We welcome any input, feedback, bug reports, and contributions. # # --- # ## Reading and writing files # # | Type | Reading example | Writing example | Class | Description | # |---------|--------------------------|------------------------------------------|-----------------------------|----------------------------------------| # | `c3d` | `Markers3d.from_c3d()` | | `Markers3d` and `Analogs3d` | C3d file | # | `csv` | `Markers3d.from_csv()` | `Markers3d.to_csv()` | `Markers3d` and `Analogs3d` | Csv file | # | `excel` | `Markers3d.from_excel()` | | `Markers3d` and `Analogs3d` | Excel file | # | `sto` | `Analogs3d.from_sto()` | `Analogs3dOsim.to_sto()` (pyosim needed) | `Analogs3d` | Analogs file used in Opensim | # | `mot` | `Analogs3d.from_mot()` | `Analogs3dOsim.to_mot()` (pyosim needed) | `Analogs3d` | Joint angles file used in Opensim | # | `trc` | `Markers3d.from_trc()` | `Markers3dOsim.to_trc()` (pyosim needed) | `Markers3d` | Markers positions file used in OpenSim | # + from pathlib import Path from pyomeca import Markers3d, Analogs3d from utils import describe_data # %load_ext lab_black 
data_path = Path("..") / "data" / "markers_analogs.c3d" analogs = Analogs3d.from_c3d(data_path) # - describe_data(analogs) markers = Markers3d.from_c3d( data_path, prefix=":", names=["EPICm", "LARMm", "LARMl", "LARM_elb"] ) describe_data(markers) # ## Signal processing # + import matplotlib.pyplot as plt import seaborn as sns sns.set(style="ticks", context="talk") raw = analogs["Delt_ant.EMG1"].abs() def create_plots(data, labels): _, ax = plt.subplots(figsize=(12, 6)) for datum, label in zip(data, labels): datum.plot(label=label, lw=3, ax=ax) plt.legend() sns.despine() # + moving_average = raw.moving_average(window_size=100) create_plots(data=[raw, moving_average], labels=["raw", "moving average"]) # - # ### Your turn # # From the `raw` array: # # 1. Render the same plot using the `moving_median` and `moving_rms` methods # # 2. Plot the three kind of smoothing methods together # ## Filtering methods # + import numpy as np # fake data freq = 100 time = np.arange(0, 1, 0.01) w = 2 * np.pi * 1 y = np.sin(w * time) + 0.1 * np.sin(10 * w * time) y = Analogs3d(y.reshape(1, 1, -1)) # + low_pass = y.low_pass(freq=freq, order=2, cutoff=5) create_plots(data=[y, low_pass], labels=["raw", "low-pass @ 5Hz"]) # - # ### Your turn # # From the `raw` array: # # 1. 
Render the same plot using the `band_pass` (4th order with 10-200Hz cutoff), `band_stop` (2nd order with 40-60Hz cutoff) and `high_pass` (2nd order with 100Hz cutoff) methods # ## Utils methods # ### FFT # + amp, freqs = y.fft(freq=freq) amp_filtered, freqs_filtered = low_pass.fft(freq=freq) _, ax = plt.subplots(2, 1, figsize=(12, 6)) y.plot("k-", ax=ax[0], label="raw") low_pass.plot("r-", ax=ax[0], label="low-pass @ 5Hz") ax[0].set_title("Temporal domain") ax[1].plot(freqs, amp.squeeze(), "k-", label="raw") ax[1].plot(freqs_filtered, amp_filtered.squeeze(), "r-", label="low-pass @ 5Hz") ax[1].set_title("Frequency domain") ax[1].legend() plt.tight_layout() sns.despine() # - # ### Normalization # if `ref` (e.g. an MVC value) is not specified, normalize with the signal's maximum raw.rectify().normalization().plot() sns.despine() # + # raw.normalization?? # - # ### Time normalization raw.moving_rms(100).time_normalization().plot() sns.despine() # ### Detect onset # + # insert some zeros in the signal signal = moving_average.copy() signal[..., 6000:6500] = 0 mu = signal[..., : int(signal.get_rate)].mean() # - onset = signal.detect_onset( threshold=mu + mu * 0.1, # mean of the first second + 10% above=int(signal.get_rate) / 2, # we want at least 1/2 second above the threshold below=int(signal.get_rate) / 2, # we accept points below the threshold for 1/2 second ) onset _, ax = plt.subplots(figsize=(12, 6)) signal.plot(ax=ax) for (inf, sup) in onset: ax.axvline(x=inf, color="g") ax.axvline(x=sup, color="r") sns.despine() # ## Your turn # # Apply the following pipeline on the `raw` channel in the `analogs` array: # # 1. band-pass (4th order with 10-425Hz cutoff) # 2. center # 3. rectify # 4. low-pass (4th order with 5Hz cutoff) # 5. normalize (mvc = 0.0005562179366360516 mV) # 6. time_normalize # # Then, plot the result # <!--NAVIGATION--> # < [Scientific computing with Numpy](01.02-intro-to-numpy.ipynb) | [Contents](index.ipynb) | [Programming tips and tricks](01.04-tips-and-tricks.ipynb) >
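The exercise pipeline above can also be sketched with plain NumPy, independently of pyomeca's API. This is an illustration under stated assumptions: a moving average stands in for pyomeca's Butterworth band-/low-pass filters, and the signal and `mvc` value are synthetic placeholders, not the notebook's data.

```python
import numpy as np

def emg_pipeline(signal, rate, mvc, n_points=100, window=None):
    """Center, rectify, smooth, amplitude-normalize, and time-normalize an EMG trace.

    A moving average stands in for a proper Butterworth low-pass filter;
    `mvc` is the reference amplitude used for normalization.
    """
    if window is None:
        window = int(rate // 10)                  # 100 ms smoothing window
    centered = signal - signal.mean()             # center
    rectified = np.abs(centered)                  # rectify
    kernel = np.ones(window) / window
    smoothed = np.convolve(rectified, kernel, mode="same")  # smooth (low-pass stand-in)
    normalized = smoothed / mvc * 100             # express as % of MVC
    # time-normalize to n_points samples (0-100% of the trial)
    x_old = np.linspace(0, 100, normalized.size)
    x_new = np.linspace(0, 100, n_points)
    return np.interp(x_new, x_old, normalized)

rng = np.random.default_rng(0)
emg = rng.normal(scale=0.1, size=2000)            # synthetic 2 s recording at 1 kHz
out = emg_pipeline(emg, rate=1000, mvc=0.5)
print(out.shape)  # (100,)
```

With pyomeca itself, the same chain is expressed as the sequence of method calls the exercise lists, applied to the `Analogs3d` array.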
notebooks/01.03-intro-to-pyomeca.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # + import astropy.coordinates as coord import astropy.table as at import astropy.units as u import matplotlib as mpl import matplotlib.pyplot as plt # %matplotlib inline import numpy as np # gala import gala.coordinates as gc import gala.dynamics as gd import gala.potential as gp from gala.units import galactic # - galcen_frame = coord.Galactocentric() galcen_frame # - Sky coordinates from # - Proper motions from Table 2: (Blue sample) https://arxiv.org/pdf/2012.09204.pdf # - Distance and RV from McConnachie 2012 m31_c = coord.SkyCoord( ra=10.68470833 * u.deg, dec=41.26875 * u.deg, distance=731 * u.kpc, pm_ra_cosdec=48.98 * u.microarcsecond/u.yr, pm_dec=-36.85 * u.microarcsecond/u.yr, radial_velocity=-300*u.km/u.s ) m31_galcen = m31_c.transform_to(galcen_frame) # From [Petersen & Peñarrubia 2021](https://arxiv.org/pdf/2011.10581.pdf) vtravel_c = coord.SkyCoord( lon=56*u.deg, lat=-34*u.deg, d_distance=32*u.km/u.s, frame=galcen_frame, representation_type=coord.UnitSphericalRepresentation, differential_type=coord.RadialDifferential ) vsun = galcen_frame.galcen_v_sun m31_galcen.velocity + vtravel_c.velocity m31_galcen.velocity.norm(), (m31_galcen.velocity + vtravel_c.velocity).norm()
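The `vtravel_c` object above encodes a purely radial velocity: 32 km/s along the direction (lon = 56 deg, lat = -34 deg). As a plain-NumPy sketch of what that `RadialDifferential` construction represents (an illustration of the geometry, not astropy's implementation):

```python
import numpy as np

def radial_velocity_vector(lon_deg, lat_deg, v_r):
    """Cartesian velocity for a purely radial motion of speed v_r,
    directed along the unit vector of the spherical direction (lon, lat)."""
    lon, lat = np.deg2rad([lon_deg, lat_deg])
    unit = np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])
    return v_r * unit

# travel-velocity direction and amplitude used above (km/s)
v_travel = radial_velocity_vector(56, -34, 32)
print(np.linalg.norm(v_travel))  # magnitude stays 32: the motion is purely radial
```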
notebooks/Coordinate-velocity-stuff.ipynb
// -*- coding: utf-8 -*- // --- // jupyter: // jupytext: // text_representation: // extension: .cpp // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: C++17 // language: C++17 // name: xcpp17 // --- // <p style="font-size: 30px; font-weight: bold;">Lock-Free Concurrent Data Structures</p> // # Lock-based concurrent data structures // <img src="img/lock_based_approach.png" alt="Lock-based concurrent data structures" width="80%" style="margin: 0 auto;"> // ## Blocking vs non-blocking // * Programs that use mutexes, condition variables, and futures are called **blocking** // * They call library functions that will suspend the execution of a thread until another thread performs an action // * A thread can’t progress past this point until the block is removed // * Typically, the OS will suspend a blocked thread completely // * Programs that don’t use blocking library calls are non-blocking // # Lock-free structures // * More than one thread can access the data structure concurrently // * If one thread is suspended by the scheduler midway through its operation, other threads must still be able to complete their operations // * Caveat: compare/exchange loops // * Can still result in a thread being subject to starvation // * Consequences of “wrong” timing // * One thread makes progress while another one continuously retries its operation // <img src="img/lock_based_vs_lock_free.png" alt="Lock-based concurrent data structures" width="80%" style="margin: 0 auto;"> // ## Wait-free and Lock-free definitions // **Wait-free:** A method is wait-free if it guarantees that every call to it finishes its execution in a finite number of steps. It is **bounded wait-free** if there is a bound on the number of steps a method call can take. // // **Lock-free:** A method is lock-free if it guarantees that infinitely often some method call finishes in a finite number of steps. Clearly, any wait-free method is also lock-free, but not vice versa.
Lock-free algorithms admit the possibility that some threads could starve. // ## Advantages of lock-free data structures // * Performance – some thread makes progress with every step // * In a wait-free DS every thread can make forward progress // * Robustness – if a thread dies during an operation, only its data is lost // * If a thread dies while holding a lock, the DS is broken forever // * Deadlocks are impossible – although livelocks may occur // ## Livelock // * Two threads concurrently try to change the DS // * Actions performed by one cause the other to fail, and vice versa // * Each thread has to continuously restart its operation // * Analogy – two people trying to go simultaneously through a narrow gap // * They keep retrying until they agree on an order // * Typically short-lived, like the scheduling conditions that cause them // * Decreases performance rather than preventing progress entirely // ## Disadvantages of lock-free data structures // * Hard to uphold invariants in the absence of mutexes // * Avoiding data races requires atomic operations // * Important to ensure that updates become visible in the correct order // * May improve concurrency but decrease overall performance // * Atomic operations can be slower than non-atomic ones // * Hardware must synchronize data between threads // * Performance is not necessarily portable // # Lock-free Stack // ## Push Method // * The `compare_exchange_weak` loop ensures the thread always works with the latest value of the head, so no node is lost when more than one thread is pushing onto the stack concurrently // * Once a thread has returned from the push method, the value has been added to the stack without a problem // <img src="img/thread_safe_lock_free_push.png" alt="Thread Safe Lock-Free Push" width="80%" style="margin: 0 auto;"> // ## Pop Method // ### Remarks on the code in the diagram `C04_stack_pop` // * The hard part of this code is to **not have memory leaks**.
C++ is not a garbage-collected language like C# or Java, so it is necessary to implement a scheme that frees the nodes that are no longer needed after the pop. // * The pop functionality itself is done in the first few lines, using the `while` loop to get the latest value of the `head` // * The following lines in this approach try to delete the residual pointers once they are no longer used. However, as mentioned in the diagram, this code exhibits a *wrong behavior* when a thread reads the head but only resumes after another thread has completed a full pop. // ### Discussion of the `pop()` with hazard pointers // * This code differs from the previous one: it uses hazard pointers, which verify that a node is not being used by another thread before it is deleted // * Hazard pointers store a list of the nodes in use // // Although this simple implementation does indeed safely reclaim the deleted nodes, it adds quite a bit of overhead to the process. Scanning the hazard pointer array requires checking `max_hazard_pointers` atomic variables, and this is done for every `pop()` call. Atomic operations are inherently slow—often 100 times slower than an equivalent nonatomic operation on desktop CPUs—so this makes `pop()` an expensive operation. Not only do you scan the hazard pointer list for the node you’re about to remove, but you also scan it for each node in the waiting list. Clearly this is a bad idea. There may well be `max_hazard_pointers` nodes in the list, and you’re checking all of them against `max_hazard_pointers` stored hazard pointers. Ouch! There has to be a better way. // ### Reference Counting // * Hazard pointers store a list of the nodes in use // * Reference counting stores a count of the number of threads accessing each node // * Idea similar to `std::shared_ptr<>` // * Why not just use `std::shared_ptr<>`?
// * Not guaranteed to be lock-free // * A lock-free implementation would impose overheads in many use-case scenarios for which lock freedom is not needed // * If `std::shared_ptr<>` were lock-free, the problem would be easily solved // // ### Reference Counting with split counters // * External count kept alongside the pointer // * Increased every time the pointer is read // * Internal count kept alongside the node // * Decreased when a reader is finished with the node // * Their sum equals the total number of references // * A simple operation reading the pointer will leave the // * External counter increased by one // * Internal counter decreased by one // * When the external count/pointer pairing is no longer needed (i.e., the node is no longer accessible from a location reachable by multiple threads) // * The value of the external count is added to the internal count // * The internal count is decreased by one // * The external count is discarded // * Once the internal count is zero, there are no outstanding references and the node can be deleted
jupyter-notebooks/04_lock_free_data_structures.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np from matplotlib import pyplot as plt data = None with open('NP0029_20150731085611_Raw.txt', 'r', encoding='utf-8') as f: data = f.read() data = np.array(data.replace('\n', '').split(',')[:-1]).reshape(-1, 5) print(data) # - positions = data[:, 2:4].astype(int) # np.int is deprecated; use the builtin int plt.scatter(positions[:, 1], positions[:, 0], alpha=0.2, s=10) # + import pandas as pd int_data = data.astype(int) time_and_pressure = np.column_stack((int_data[:, 0], int_data[:, -1])) df = pd.DataFrame(time_and_pressure)\ .groupby(by=0, as_index=False)\ .sum() time_and_pressure_sum = df.values[:, :] print(time_and_pressure_sum) # - plt.plot(time_and_pressure_sum[:, 0].tolist(), time_and_pressure_sum[:, 1].tolist()) # + df1 = pd.DataFrame(time_and_pressure)\ .groupby(by=0, as_index=False)\ .sum() df2 = pd.DataFrame(time_and_pressure)\ .groupby(by=0, as_index=False)\ .count() time_and_pressure_mean = np.column_stack((df1.values[:, 0], df1.values[:, 1] / df2.values[:, 1])) print(time_and_pressure_mean) # - plt.plot(time_and_pressure_mean[:, 0].tolist(), time_and_pressure_mean[:, 1].tolist()) tps_filtered = time_and_pressure_sum[time_and_pressure_sum[:, 1] > 5000] tpm_filtered = time_and_pressure_mean[time_and_pressure_mean[:, 1] > 30] plt.plot(tps_filtered[:, 0].tolist(), tps_filtered[:, 1].tolist()) plt.show() plt.plot(tpm_filtered[:, 0].tolist(), tpm_filtered[:, 1].tolist()) plt.show() positions_with_number = np.column_stack((int_data[:, 3], int_data[:, 2], int_data[:, 1])) print(positions_with_number) # + offsets = [[0, 0], [0, 0],[60, 0],[120,0],[180,0],[240,0],[300,0],[360,0],[420,0], [0,120],[60,120],[360,120],[420,120], [0,240],[60,240],[360,240],[420,240], [0,360],[60,360], [0,480],[60,480], [0,600],[60,600], [0,720],[60,720], [0,840],[60,840],
[0,960],[60,960], ] def rela_to_abso(position_with_number): return np.array([position_with_number[0] + offsets[position_with_number[2]][0], position_with_number[1] + offsets[position_with_number[2]][1]]) abso_positions = np.apply_along_axis(rela_to_abso, 1, positions_with_number) print(abso_positions) # - plt.figure(figsize=(9, 16)) ax = plt.gca() ax.xaxis.set_ticks_position('top') ax.invert_yaxis() plt.scatter(abso_positions[:, 0], abso_positions[:, 1], alpha=0.2, s=4) pd.read_excel('NP29_1_右脚.xls') # + import matplotlib.patches as mpathes plt.figure(figsize=(9, 16)) ax = plt.gca() ax.xaxis.set_ticks_position('top') ax.invert_yaxis() plt.scatter(abso_positions[:, 0], abso_positions[:, 1], alpha=0.2, s=4) right_feet = pd.read_excel('NP29_1_右脚.xls').to_numpy() for right_foot in right_feet: ax.add_patch(mpathes.Rectangle([right_foot[2], right_foot[0]], right_foot[3] - right_foot[2], right_foot[1] - right_foot[0], facecolor='none', edgecolor='r')) plt.show() # + feet = pd.read_excel('NP29_1_右脚.xls').to_numpy() right_feet = feet[::2] left_feet = feet[1::2] print(right_feet) print(left_feet) right_filter = [] left_filter = [] for pos in abso_positions: is_this_pos_right_foot = False is_this_pos_left_foot = False for right_foot in right_feet: if right_foot[2] <= pos[0] <= right_foot[3] and right_foot[0] <= pos[1] <= right_foot[1]: is_this_pos_right_foot = True break for left_foot in left_feet: if left_foot[2] <= pos[0] <= left_foot[3] and left_foot[0] <= pos[1] <= left_foot[1]: is_this_pos_left_foot = True break right_filter.append(is_this_pos_right_foot) left_filter.append(is_this_pos_left_foot) right_positions = abso_positions[np.array(right_filter)] left_positions = abso_positions[np.array(left_filter)] # - plt.figure(figsize=(9, 16)) ax = plt.gca() ax.xaxis.set_ticks_position('top') ax.invert_yaxis() plt.scatter(right_positions[:, 0], right_positions[:, 1], alpha=0.2, s=4, c='red') plt.scatter(left_positions[:, 0], left_positions[:, 1], alpha=0.2, s=4, c='blue') # + 
# store right and left data # NP0029_20150731085611 right_data = int_data[np.array(right_filter)] left_data = int_data[np.array(left_filter)] print(right_data) print(left_data) # - pd.DataFrame(right_data).to_csv('NP0029_20150731085611_左脚.csv', index=False, header=None) pd.DataFrame(left_data).to_csv('NP0029_20150731085611_右脚.csv', index=False, header=None) # + df = pd.DataFrame(np.column_stack((right_data[:, 0], right_data[:, -1])))\ .groupby(by=0, as_index=False)\ .sum() right_sum = df.values[:, :] df = pd.DataFrame(np.column_stack((left_data[:, 0], left_data[:, -1])))\ .groupby(by=0, as_index=False)\ .sum() left_sum = df.values[:, :] pd.DataFrame(right_sum).to_csv('NP0029_20150731085611_左脚求和.csv', index=False, header=None) pd.DataFrame(left_sum).to_csv('NP0029_20150731085611_右脚求和.csv', index=False, header=None) # + right_sum_without_time = right_sum[:, 1] left_sum_without_time = left_sum[:, 1] plt.figure(figsize=(12, 8)) l1, = plt.plot(right_sum[:, 0], right_sum_without_time) l2, = plt.plot(left_sum[:, 0], left_sum_without_time) plt.legend(handles=[l1, l2], labels=['left', 'right']) # + from statsmodels.tsa.holtwinters import ExponentialSmoothing, SimpleExpSmoothing, Holt plt.figure(figsize=(14, 8)) fit1 = SimpleExpSmoothing(right_sum_without_time).fit(smoothing_level=0.2, optimized=False) l1, = plt.plot(list(fit1.fittedvalues), linewidth=2) fit2 = SimpleExpSmoothing(right_sum_without_time).fit(smoothing_level=0.6, optimized=False) l2, = plt.plot(list(fit2.fittedvalues), linewidth=2, linestyle='dashed') l3, = plt.plot(right_sum_without_time, linewidth=2, linestyle='dotted') plt.legend(handles = [l1, l2, l3], labels = ['a=0.2', 'a=0.6', 'data']) plt.show() # - N = 5 n = np.ones(N) weights = n / N sma = np.convolve(weights, right_sum_without_time)[N-1:-N+1] time = np.arange(N-1, len(right_sum_without_time.tolist())) plt.figure(figsize=(14, 8)) plt.plot(time, right_sum_without_time[N-1:], linewidth=2, linestyle='dashed') plt.plot(time, sma, linewidth=2) # + from 
statsmodels.tsa.stattools import acf from statsmodels.graphics.tsaplots import plot_acf plt.figure(figsize=(12, 6)) ax = plt.gca() plot_acf(right_sum_without_time, lags=200, ax=ax) # -
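`plot_acf` above visualizes the sample autocorrelation function of the pressure series. A plain-NumPy sketch of the underlying computation (a simplified version of what statsmodels computes, without confidence intervals), run on a synthetic periodic series rather than the notebook's data:

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation for lags 0..max_lag (what plot_acf visualizes)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

# a periodic series shows ACF peaks at multiples of its period
t = np.arange(200)
series = np.sin(2 * np.pi * t / 20)   # period of 20 samples
acf_vals = autocorr(series, max_lag=40)
print(acf_vals[0])   # lag 0 is always exactly 1.0
```

For a gait-pressure signal like the one above, the first strong positive peak of the ACF estimates the stride period in samples.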
应用数据分析(时序分析部分)/a.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # CODE 1161: Data Project # # Rainfall in Centennial Park # + [markdown] slideshow={"slide_type": "subslide"} # Collect & analyze the rainfall data in Centennial Park # + slideshow={"slide_type": "notes"} import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd import os import geopandas as gp from datetime import datetime # + slideshow={"slide_type": "notes"} aus_poas = gp.read_file('aus_poas.shp') # file from https://spatialvision.com.au/blog-open-source-spatial-geopandas-part-1/ # + slideshow={"slide_type": "notes"} def plot_australia(): # get the data, look up the postcodes, and plot them # red in the middle #print("Australia's geographic information from") #print("https://spatialvision.com.au") aus_map = aus_poas.plot() # map of Australia plt.xlabel('Longitude',fontsize = 15) plt.ylabel('Latitude',fontsize = 15) plt.title("Australia",fontsize = 15) nsw = aus_poas.query('code >= 2000 & code <= 2999') nsw_map = nsw.plot(ax=aus_map,color='green') # map of NSW around_centennial_park = aus_poas.query('code >= 2000 & code <= 2150') around_centennial_park.map = around_centennial_park.plot(ax=nsw_map,color='red') # map around Centennial Park, in red return around_centennial_park.map # + slideshow={"slide_type": "slide"} plot_australia() # + slideshow={"slide_type": "subslide"} aus_poas.query('code == 2021').plot(color = 'red') plt.title('Centennial Park',fontsize = 15) # + slideshow={"slide_type": "notes"} # %matplotlib inline plt.rcParams['figure.figsize'] = (10, 5) saved_style_state = matplotlib.rcParams.copy() #give us a style state to go back to # + [markdown] slideshow={"slide_type": "notes"} # Collect and organize the rainfall data # # (data from: http://www.bom.gov.au/) # + slideshow={"slide_type": "notes"} os.path.isfile("rainfall.csv")
filepath = 'rainfall.csv' #print("load from the file") centennial_park_rainfall = pd.read_csv(filepath) #print("done") # load the data # + slideshow={"slide_type": "notes"} centennial_park_rainfall.head(8) # + [markdown] slideshow={"slide_type": "notes"} # Year Month Day --> Date # # 1900 1 1 --> 1900-01-01 # + slideshow={"slide_type": "notes"} # merge year/month/day into a date # run together with the next cell centennial_park_rainfall["Date"] = centennial_park_rainfall['Year'].astype(str) + ('-' + centennial_park_rainfall['Month'].astype(str) ) + ('-' + centennial_park_rainfall['Day'].astype(str) ) # drop unused columns rainfall_data = centennial_park_rainfall[['Date','Rainfall amount (millimetres)']] # set the date as index #penalty_data3.set_index("Date") centennial_park_rainfall.index = pd.DatetimeIndex(rainfall_data['Date']) del rainfall_data['Date'] # + slideshow={"slide_type": "notes"} # merge year/month/day into a date # run together with the next cell centennial_park_rainfall["Date"] = centennial_park_rainfall['Year'].astype(str) + ('-' + centennial_park_rainfall['Month'].astype(str) ) + ('-' + centennial_park_rainfall['Day'].astype(str) ) # drop unused columns rainfall_data = centennial_park_rainfall[['Date','Rainfall amount (millimetres)']] # set the date as index centennial_park_rainfall.index = pd.DatetimeIndex(rainfall_data['Date']) del rainfall_data['Date'] # not sure why, but this needs to run twice # + slideshow={"slide_type": "notes"} # drop dates before the records start rainfall_data = rainfall_data.loc['1900-6':'2021-6'] #rainfall_data.T # transpose (rotate 90 degrees) # + slideshow={"slide_type": "notes"} # index a single year penalty_data3_2020 = rainfall_data.loc['2020'] # + slideshow={"slide_type": "notes"} penalty_data3_2020 # + slideshow={"slide_type": "notes"} rainfall_data.loc['2020-01'].T # + [markdown] slideshow={"slide_type": "slide"} # Graphs and analysis # + slideshow={"slide_type": "notes"} def scatter_rainfall(): x = np.arange(len(rainfall_data["Rainfall amount (millimetres)"])) y = rainfall_data["Rainfall amount (millimetres)"] plt.scatter(x,y,alpha = 0.6) plt.ylabel('Rainfall amount (millimetres)',{'size' : 15}) plt.xlabel('Date',{'size' : 15}) plt.title("Rainfall in Centennial Park",{'size':20})
plt.xticks([0,10000,20000,30000,40000],["1900","1925","1950","1975","2000"]) #plt.yticks([1,10,25,50],['Light','Moderate','Heavy','Violent']) y_ticks = np.arange(0, np.max(centennial_park_rainfall["Rainfall amount (millimetres)"])+1, 20) plt.yticks(y_ticks) plt.show() # + slideshow={"slide_type": "slide"} scatter_rainfall() # + slideshow={"slide_type": "notes"} max_rainfall = np.max(centennial_park_rainfall["Rainfall amount (millimetres)"]) print("Highest single-day rainfall (all data)") print(max_rainfall) c = np.where(rainfall_data["Rainfall amount (millimetres)"] == max_rainfall) d = centennial_park_rainfall["Date"][c[0][0]] #print(c[0][0]) print("on this date") print(d) # all data # + slideshow={"slide_type": "notes"} def highest_rainfall(): print('The highest rainfall happened on',d) print('Rainfall amount reached',max_rainfall,'mm') print("Violent rain day") # + slideshow={"slide_type": "subslide"} highest_rainfall() # + slideshow={"slide_type": "notes"} wuyu = len(rainfall_data[rainfall_data['Rainfall amount (millimetres)'] == 0]) #no rain xiaoyu = len(rainfall_data[rainfall_data['Rainfall amount (millimetres)'] < 10]) - wuyu #light rain baoyu= len(rainfall_data[rainfall_data['Rainfall amount (millimetres)'] > 50]) #violent rain dayu = len(rainfall_data[rainfall_data['Rainfall amount (millimetres)'] >= 25]) - baoyu #heavy rain zhongyu = len(rainfall_data[rainfall_data['Rainfall amount (millimetres)'] >= 10]) - baoyu - dayu #moderate rain typeyu = [xiaoyu,zhongyu,dayu,baoyu] typeyu_lable = ['Light rain', 'Moderate rain', 'Heavy rain','Violent rain'] # + slideshow={"slide_type": "notes"} def distribution_of_rain_type(): plt.figure(2, figsize=(6,6)) #colors = ['cyan','lightskyblue','steelblue','darkblue'] explodes =(0.1,0,0,0) plt.pie(typeyu, explode=explodes, colors=None, labels=typeyu_lable, autopct='%1.1f%%',pctdistance=0.8, shadow=True) plt.title('Distribution of Rain Type', bbox={'facecolor':'0.8', 'pad':5}) plt.legend(typeyu_lable,loc = 1) plt.show() plt.close() # this covers all years # + [markdown]
slideshow={"slide_type": "subslide"} # Rain day: Light | Moderate | Heavy | Violent # # Rainfall (millimetres): 0 ~ 9.9 | 10 ~ 24.9 | 25 ~ 49.9 | >= 50 # # # + slideshow={"slide_type": "slide"} distribution_of_rain_type() # + slideshow={"slide_type": "notes"} # number of light/moderate/heavy/violent rain days; line chart per rain type # all data # violent-rain days baoyutian = (rainfall_data[rainfall_data['Rainfall amount (millimetres)'] > 50]).groupby('Date')['Rainfall amount (millimetres)'].count() baoyutiana = baoyutian.resample('Y').sum() baoyutianaa = baoyutiana #print(baoyutianaa) # heavy-rain days dayutian = (rainfall_data[rainfall_data['Rainfall amount (millimetres)'] >= 25]).groupby('Date')['Rainfall amount (millimetres)'].count() dayutiana = dayutian.resample('Y').sum() dayutianaa = dayutiana - baoyutiana #print(dayutianaa) # this is the correct one # moderate-rain days zhongyutian = (rainfall_data[rainfall_data['Rainfall amount (millimetres)'] >=10 ]).groupby('Date')['Rainfall amount (millimetres)'].count() zhongyutiana = zhongyutian.resample('Y').sum() zhongyutianaa = zhongyutiana - dayutiana #print(zhongyutianaa) # this is the correct one # light-rain days xiaoyutian = (rainfall_data[rainfall_data['Rainfall amount (millimetres)'] > 0]).groupby('Date')['Rainfall amount (millimetres)'].count() xiaoyutiana = xiaoyutian.resample('Y').sum() xiaoyutianaa = xiaoyutiana - zhongyutiana #print(xiaoyutianaa) # this is the correct one # + slideshow={"slide_type": "notes"} def rainydays_year(): xiaoyu = plt.plot(xiaoyutianaa) zhongyu = plt.plot(zhongyutianaa) dayu = plt.plot(dayutianaa) baoyu = plt.plot(baoyutianaa) plt.xlabel('Year') plt.ylabel("Days") plt.title('Rainy days per year') plt.legend(typeyu_lable,loc = 1) # + slideshow={"slide_type": "slide"} rainydays_year() # + [markdown] slideshow={"slide_type": "slide"} # Consecutive Rainy Days # + slideshow={"slide_type": "notes"} def continuous_rainy_2020(): data_2020_yue =rainfall_data.loc['2020'] count = 1 maxcount = 0 for i in range(len(data_2020_yue['Rainfall amount (millimetres)'])-1): if (data_2020_yue['Rainfall amount (millimetres)'][i] > 0 ) & (data_2020_yue['Rainfall amount (millimetres)'][i+1] > 0):
count = count + 1 #print(count) if count > maxcount: maxcount = count index = i + 1 maxindex = data_2020_yue['Rainfall amount (millimetres)'][i+1] else: count = 1 #print(index,maxcount,maxindex) kaishi = centennial_park_rainfall['Date']['2020'][index - maxcount +1] jiewei = centennial_park_rainfall['Date']['2020'][index] print('Last year, starting from',kaishi,'it rained every day until',jiewei) print('It rained for',maxcount,'consecutive days') # pronunciation mnemonic for 'consecutive' print('It rained every day for a week') # + slideshow={"slide_type": "subslide"} continuous_rainy_2020() # + slideshow={"slide_type": "notes"} def continuous_rainy(): countall = 0 maxcountall = 0 for i in range(len(rainfall_data['Rainfall amount (millimetres)'])-1): if rainfall_data['Rainfall amount (millimetres)'][i] == 0: countall = 0 elif rainfall_data['Rainfall amount (millimetres)'][i] > 0: countall = countall +1 if countall > maxcountall: maxcountall = countall indexall = i maxindexall = rainfall_data['Rainfall amount (millimetres)'][i] # maxindex is probably unused #print(indexall,maxcountall,maxindexall) kaishiall = centennial_park_rainfall['Date'][indexall - maxcountall +1 +(31+28+31+30+31)] jieweiall = centennial_park_rainfall['Date'][indexall +(31+28+31+30+31)] #print(kaishiall,jieweiall,maxcountall) print('Starting from',kaishiall,'it rained every day until',jieweiall) print('It rained for',maxcountall,'consecutive days') # pronunciation mnemonic for 'consecutive' print('It rained for about a month') # + slideshow={"slide_type": "subslide"} continuous_rainy() # + [markdown] slideshow={"slide_type": "slide"} # About last year (2020) # + [markdown] slideshow={"slide_type": "subslide"} # Spring September October November # Summer December January February # Autumn March April May # Winter June July August # + slideshow={"slide_type": "notes"} xiatian_2020 = rainfall_data.loc['2020-1':'2020-2'] xiatian_2020_12 = (rainfall_data.loc['2020-12']) #print(xiatian_2020,xiatian_2020_12) # 2020 summer qiutian_2020 = rainfall_data.loc['2020-3':'2020-5'] # 2020 autumn
dongtian_2020 = rainfall_data.loc['2020-6':'2020-8'] # 2020 winter chuntian_2020 = rainfall_data.loc['2020-9':'2020-11'] # 2020 spring # + slideshow={"slide_type": "notes"} def pie_seasons(labels,datas): plt.figure(1, figsize=(6,6)) #expl = [0] colors = ["blue","green","yellow","orange"] # set the colors (cycled if needed) #plt.shadow = True # Pie Plot # autopct: format of the percentage string plt.pie(datas, explode=None, colors=colors, labels=labels, autopct='%1.1f%%',pctdistance=0.8, shadow=True) plt.title('2020 - rainfall in seasons', bbox={'facecolor':'0.8', 'pad':5}) plt.legend(labels,loc = 1) plt.show() #plt.savefig("pie.jpg") plt.close() # seasonal pie chart # summer > autumn > winter > spring # + slideshow={"slide_type": "notes"} seasons_label = ['Spr.','Sum.','Aut.','Win.'] # datas: total rainfall per season datas = [np.sum(chuntian_2020)[0], np.sum(xiatian_2020)[0] + np.sum(xiatian_2020_12)[0], np.sum(qiutian_2020)[0], np.sum(dongtian_2020)[0]] # + slideshow={"slide_type": "slide"} pie_seasons(seasons_label,datas) # + slideshow={"slide_type": "notes"} def draw_pie_month(label_months,data_months): # pie chart, 12-month version plt.figure(2, figsize=(6,6)) colors = ['cornflowerblue',"blue",'royalblue','forestgreen','green','lime','khaki','yellow','gold','y',"orange",'lightcoral'] plt.pie(data_months, explode=None, colors=colors, labels=label_months, autopct='%1.1f%%',pctdistance=0.8, shadow=True) # use the label_months parameter, not the global plt.title('2020 - rainfall in months', bbox={'facecolor':'0.8', 'pad':5}) plt.legend(label_months,loc = 1) plt.show() plt.close() # monthly pie chart # + slideshow={"slide_type": "notes"} labels_months = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sept','Oct','Nov','Dec'] # + slideshow={"slide_type": "notes"} # this should be moved earlier data_2020_yue =rainfall_data.loc['2020'] # select the year wenhao2= data_2020_yue.index.month # inspect the index wenhao3 = data_2020_yue.resample('M').sum() # sum per group; 'M' = monthly # experiment: grouping by year wenhao4 = wenhao3['Rainfall amount (millimetres)'] print(wenhao4[8]) wenhao5 = [] for i in range(12): wenhao5 = wenhao5 + [wenhao4[i]] print(wenhao5) wenhao7 = wenhao5[8:10] + wenhao5[0:8] +wenhao5[10:12]
print(wenhao7) # reorder to spring, summer, autumn, winter data_months = wenhao7 # + slideshow={"slide_type": "slide"} draw_pie_month(labels_months,data_months) # + slideshow={"slide_type": "notes"} #c = data_2020_yue.resample('M').sum() # rainy days per month c = (data_2020_yue[data_2020_yue['Rainfall amount (millimetres)'] > 0]).groupby('Date')['Rainfall amount (millimetres)'].count() cc = c.resample('M').sum() #print(c) #[31,28,31,30,31,30,31,31,30,31,30,31] # days in each month year = 2020 if int(year/4) == float(year/4): days_in_month = [31,29,31,30,31,30,31,31,30,31,30,31] else: days_in_month = [31,28,31,30,31,30,31,31,30,31,30,31] ccc = cc/days_in_month print(np.mean(ccc)) # + slideshow={"slide_type": "notes"} def rainy_in_2020(): #labels_months 1~12 #days_in_month 31,28,31 otherday = days_in_month - cc #print(otherday) plt.bar(range(len(otherday)),days_in_month,label = 'no rain',fc = 'snow',ec='black', ls='-', lw=0.5,tick_label = labels_months) plt.bar(range(len(otherday)),cc,label = 'rainy',fc = 'silver',ec='k', lw=0.5, hatch='/') for a,b in zip(np.arange(len(labels_months)),cc): # show the numbers on the bars plt.text(a,b,'%.2d'%b,ha='center',va='bottom',fontsize=10) for a,b in zip(np.arange(len(labels_months)),days_in_month): # show the numbers on the bars plt.text(a,b,'%.2d'%b,ha='center',va='bottom',fontsize=10) plt.yticks([]) plt.xlabel("month",fontsize = 15) plt.legend() plt.show() # + slideshow={"slide_type": "slide"} rainy_in_2020() # + slideshow={"slide_type": "notes"} def rainfall_in_month(): # monthly rainfall bar chart # example for one month (August) tiaozhuang = data_2020_yue.loc['2020-08']["Rainfall amount (millimetres)"] plt.bar(range(len(tiaozhuang)),tiaozhuang,fc = 'silver',ec='k') plt.ylabel("Rainfall (millimetres)",fontsize = 15) plt.xlabel('Aug',fontsize =15) # + slideshow={"slide_type": "notes"} def rainydays_2020(): print(np.mean(ccc)) # + slideshow={"slide_type": "skip"} rainydays_2020() # + slideshow={"slide_type": "slide"} rainfall_in_month() # + [markdown] slideshow={"slide_type": "slide"} # Thanks
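The consecutive-rainy-day searches above can also be written without manual index bookkeeping; a sketch using `itertools.groupby` on sample data (not the notebook's rainfall file):

```python
from itertools import groupby

def longest_rain_streak(rainfall):
    """Length and end index of the longest run of days with rainfall > 0."""
    best_len, best_end, i = 0, -1, 0
    for rained, group in groupby(rainfall, key=lambda mm: mm > 0):
        run = len(list(group))          # length of this run of rainy/dry days
        if rained and run > best_len:
            best_len, best_end = run, i + run - 1
        i += run
    return best_len, best_end

daily_mm = [0, 1.2, 0.4, 3.0, 0, 0, 5.1, 2.2, 0.8, 7.0, 0]
print(longest_rain_streak(daily_mm))  # (4, 9): the streak covering days 6-9
```

The same function applied to the `Rainfall amount (millimetres)` column would replace both `continuous_rainy_2020()` and `continuous_rainy()` loops, with the dates recovered by indexing the `Date` column at the returned end index.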
Rainfall_for_persatation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Acquiring and processing NN data from NanoJ Core # # 1) Code for an ImageJ/Fiji macro that takes images, splits them, measures NN values for the separate channels, and saves them in a file. # # 2) Different functions in Python that are needed to process the files obtained in step (1): # # - remove entries with a zero for x/y; these seem to be NanoJ NN artifacts # # - create new files with entries combined for one condition, also listing cell identifier and replicate for subsequent statistical analysis # # - create lists that contain either distances per cell or distances per replicate # # - create histograms from lists # # - calculate average distance per cell per channel # # - calculate FWHM per cell # # # ## ImageJ plugin to get NN values with NanoJ and save them in a separate file # # The code expects an image with three channels, the first being a labeling used for creating a focal adhesion mask (anti paxillin 633), the second being the transfected, RFP-labeled construct, and the third being anything (anti pFAK, pPax 488 for example). # # The first channel is used to create a rather "tight"-fitting mask for focal adhesions; the second channel (transfected protein) applies another, very broad mask in order to exclude untransfected cells in the field of view. # # A different order of color channels or labeled proteins will require adapting the numbering. Check how ImageJ names the respective channels when you "split channels" (C1-..., C2-..., etc).
#
# explanation of variables:
#
# dir1: where to save the resulting file; either use "getDirectory" to browse folders or give a string with the file path; comment out what you don't need
#
# NN NanoJ Core needs the pixel size and a tolerance level as parameters
#
# --> getPixelSize and px provide exactly that
#
# --> tol is the tolerance level for the NN analysis; e.g. 100 for Airyscan, 1000 for SIM
#
# The tol value is also saved in the file name of the end result to make sure this information is not lost.
#
# ### macro code starts here; copy-paste this to use it in ImageJ/Fiji

# // Script to batch and save tables from NanoJ core plugin - fixed cells\
# // C.Henry and M.Bachmann - V3 - 2021-11-25\
#
# // dir1 = getDirectory("Choose Directory To Save Files"); //un-comment if you want to choose the directory every time
# dir1 = "/Volumes/LaCie/_iFAK iSrc 3T3/20211122_3T3_Test_iFAK_iSrc/" //comment out if you want to select the directory every time, or change the path to the correct one
# NbSlice = nSlices();
# ImageName = getTitle();
# ScriptTitle = "Nearest-Neighbour Table";
# getPixelSize(unit, pw, ph, pd);
# px = pw*1000 //pixel size in nm for the NN plugin, to make sure it uses the right pixel size
# tol = 100 // tolerance level for the NN plugin
#
# // split image and create mask based on channel 1, usually Pax 633
# selectWindow(ImageName);
# run("Split Channels");
# selectWindow("C1-"+ImageName);
# run("Duplicate...", " ");
# rename("mask");
# run("Subtract Background...", "rolling=1 sliding"); //settings should work for Airyscan images
# setAutoThreshold("Otsu dark");
# run("Threshold...");
# run("Create Selection");
# run("Enlarge...", "enlarge=-0.1");
# run("Enlarge...", "enlarge=0.1"); // reduce and enlarge to remove small features
# // run("Clear Outside"); //replaced with invert + set value = 0
# run("Make Inverse"); // delete everything outside of the selection
# run("Set...", "value=0");
# run("Select None");
#
# // for transfected cells: make a mask with 
transfected signal to remove all un-transfected cells, assuming channel 2 is the transfected signal
# // using it for antibody-staining-only images shouldn't be a problem in general; it mostly depends on the signal distribution in channel 2
# selectWindow("C2-"+ImageName);
# setAutoThreshold("Otsu dark");
# run("Threshold...");
# run("Create Selection");
# run("Enlarge...", "enlarge=0.5");
# selectWindow("mask");
# run("Restore Selection"); //apply the mask of the transfected signal to the paxillin mask and remove adhesions outside of the transfected cell
# run("Make Inverse");
# run("Set...", "value=0");
# run("Select None");
# setAutoThreshold("Otsu dark");
# run("Threshold...");
# run("Create Selection"); // final selection of adhesions only within the transfected cell
#
# // loop through all channels, apply the mask to limit the analysis to adhesions within the transfected cell, and measure NN
# for (i=1; i<NbSlice+1; i++) {
# 	selectWindow("mask");
# 	selectWindow("C"+i+"-"+ImageName);
# 	run("Restore Selection");
# 	run("Make Inverse");
# 	run("Set...", "value=0");
# 	run("Select None");
# 	run("Nearest-Neighbours Analysis", "tolerance=" + tol +" number=250 pixel="+px);
# 	if (i == 1) {
# 		ColName = newArray(4);
# 		ColName[0] = "x-position";
# 		ColName[1] = "y-position";
# 		ColName[2] = "closest neighbour distance (pixels)";
# 		ColName[3] = "closest neighbour distance (nm)";
# 		DataArray1 = newArray();
# 		DataArray2 = newArray();
# 		DataArray3 = newArray();
# 		DataArray4 = newArray();
# 		DataArrayIdx = newArray();
# 	}
# 	ColTemp = Table.getColumn(ColName[0], ScriptTitle);
# 	DataArray1 = Array.concat(DataArray1, ColTemp);
# 	ColTemp = Table.getColumn(ColName[1], ScriptTitle);
# 	DataArray2 = Array.concat(DataArray2, ColTemp);
# 	ColTemp = Table.getColumn(ColName[2], ScriptTitle);
# 	DataArray3 = Array.concat(DataArray3, ColTemp);
# 	ColTemp = Table.getColumn(ColName[3], ScriptTitle);
# 	DataArray4 = Array.concat(DataArray4, ColTemp);
#
# 	NbValues = ColTemp.length;
# 	for (j=0; j<NbValues; j++) {
# 		DataArrayIdx = Array.concat(DataArrayIdx, i);
# 	}
# 	close("Nearest Neighbours Voronoi (nm)");
# 	close("Histogram of nearest-neighbour distances");
# }
#
# Table.create("FinalTable; tolerance: " + tol);
# Table.setColumn(ColName[0], DataArray1, "FinalTable; tolerance: " + tol);
# Table.setColumn(ColName[1], DataArray2, "FinalTable; tolerance: " + tol);
# Table.setColumn(ColName[2], DataArray3, "FinalTable; tolerance: " + tol);
# Table.setColumn(ColName[3], DataArray4, "FinalTable; tolerance: " + tol);
# Table.setColumn("Slice", DataArrayIdx, "FinalTable; tolerance: " + tol);
# Table.update("FinalTable; tolerance: " + tol);
# Table.save(dir1 + "_" + ImageName + "_Table-" + tol + ".txt", "FinalTable; tolerance: " + tol);
#
# print("Job done with tolerance level " + tol + " and pixel size " + px);

# ## collection of python functions to make the NN data files obtained with the macro above accessible
# ### general strategy:
#
# The ImageJ macro produced many single files with NN data for several cells from several independent experiments.
#
# The code has to go through a folder, read all files, and save the relevant information into separate files for each color channel. The new files should contain distance values (>= 100 nm to <= 1000 nm), a cell identifier, and an identifier for the independent experiment.
#
# I always name files the same way: "yyyymmdd_cellLine_condition_...".
# --> independent experiments are identified by the date (yyyymmdd format) in the file name; the file name is split on "_". A different naming system requires adapting this part of the code, or the file naming system.

# +
import numpy as np

def ident_replicate(file):
    """
    The file name is split on '_' --> make sure the file name contains the
    experiment date separated by underscores and in yyyymmdd format. The date
    is recognized as an integer > 20000000, which means that another integer
    in the file name that is not the experiment date will create problems.

    Input is the name of a file; the return value is the experiment date as
    an integer.
    """
    file_name = file.split('_')  # split the file name into parts separated by "_"
    exp_date = 0  # variable to hold the experiment date

    # go through the parts of the split file name and look with int() for
    # possible integers; an integer bigger than 20000000 is considered to be
    # a date and is used as the output of the function
    for part in file_name:
        try:
            date = int(part)
            if date > 20000000:
                exp_date = date
        except ValueError:
            pass
    return exp_date


def NN_summary(file_path, output_path, condition, counter=1):
    """
    Create new summary files for each color channel based on all files of a
    given experimental condition. Input files are expected to be .txt files;
    output files are .csv and can be saved in a separate folder.

    Output files are opened in append mode; be aware of this when calling the
    function several times.

    Input:
    file_path = path of the data file
    output_path = folder for the output data
    condition = used to name the output files so the experimental condition
    can be identified later on
    counter = counter for the number of files analysed; the first call writes
    a header into the output files, and the default of 1 leads to a header.
    The counter is also used as an identifier of separate cells. This
    information will be lost if the counter is not increased when the
    function is called in a loop.
""" # naming of output files for three different channels output01 = output + condition + '_slice01.csv' output02 = output + condition + '_slice02.csv' output03 = output + condition + '_slice03.csv' # opening data files for reading and output files for writing in append mode # to hold information from all single files with open(file_path, 'r') as f_in, open(output01, 'a') as f_out01, open(output02, 'a') as f_out02, open(output03, 'a') as f_out03: # remove header of data files line = f_in.readline() # get file name from file path file = (file_path.split('/'))[-1] # call function to get date as int from file name to identify # independent experiments later on date_experiment = ident_replicate(file) # writing headers for output files # put counter either to != 1 or increase with a loop if you don't want the # header several times in your file; but be aware that counter is also used # as cell identifier if counter == 1: print ('distance', 'date_experiment', 'cell', file = f_out01, sep = ',') print ('distance', 'date_experiment', 'cell', file = f_out02, sep = ',') print ('distance', 'date_experiment', 'cell', file = f_out03, sep = ',') # looping through lines of data file for line in f_in: data = line.strip().split('\t') x_pos = float(data[0]) distance = float(data[3]) slice_nr = float(data[4]) # identifying suitable data, x = 0 is artifical entry from NanoJ Core, # limit of 1000 nm is set by me assuming that longer distances are # unlikely within the same adhesion # data will be printed into respective output files based on slice # entry in data file if (x_pos >= 100) and (distance <= 1000): if slice_nr == 1: print (distance, date_experiment, counter, file = f_out01, sep = ',') elif slice_nr == 2: print (distance, date_experiment, counter, file = f_out02, sep = ',') elif slice_nr == 3: print (distance, date_experiment, counter, file = f_out03, sep = ',') def create_arrays_average(file_path): """ Creates and returns a dictionarry with date of experiments as keys and all 
distances with same key as values. Input: path to file that contains all distance values for one labeling, supposed to be a .csv file (i.e. file created by function NN_summary) with distance at first position in line and replicate at second position in line Such a dictionnary can later be used to get data sorted per independent experiment. But connection to single cell data is lost. """ with open(file_path,'r') as f_01: # remove header f_01.readline() #dic_replicates = 0 dic_replicates = {} #initialize dictionary with keys = replicates and values = distances dic_cells = {} #initialize dictionary with keys = replicates and values = cell ID for line in f_01: data = line.strip().split(',') distance = float(data[0]) replicate = float(data[1]) if not replicate in dic_replicates: #key with respective name is created if not already existent dic_replicates[replicate] = [] dic_replicates[replicate].append(distance) #add distance values to resp key return(dic_replicates) # returns dic with replicates as keys and distances as values def create_arrays_average_02(file_path): """ Creates and returns a dictionarry with cell identifier as keys and all distances with same key as values. Input: path to file that contains all distance values for one labeling, supposed to be a .csv file (i.e. file created by function NN_summary) with distance at first position in line and cell identifier at third position in line. replicate is currently not used (replicate is second position in line) Such a dictionnary can later be used to use all the distance data connected to individuell cells. 
""" with open(file_path,'r') as f_01: # remove header f_01.readline() dic_cells = {} #dic with cells as keys and distances as values for line in f_01: data = line.strip().split(',') distance = float(data[0]) cell = float(data[2]) if not cell in dic_cells: #key with respective name is created if not #already existent dic_cells[cell] = [] dic_cells[cell].append(distance) #add distance values to resp key return(dic_cells) # returns dic with cells as keys and distances as values def make_histo(distance_values): #give a list with distance values as input, (values from dic with keys as replic) """ Takes dictionary created with function 'create_arrays_average' as input and make numpy array and subsequently calculate histogram distribution for each replicate. Histograms are also normalized to max value. Histogram bins are set within function. Returns normalized histogram data and list with bins """ bins = [0,100,150,200,250,300,350,400,450,500,550,600,650,700,750,800,850,900,950,1000] array = np.array(distance_values) histo, bin_edges = np.histogram(array, bins=bins, density=False) histo_norm = [float(i)/max(histo) for i in histo] del bins[0] return histo_norm, bins def create_arrays(file_path): """ """ dic_cells = {} # dic_replicate = {} with open(file_path,'r') as f_01: # remove header f_01.readline() for line in f_01: data = line.strip().split(',') distance = float(data[0]) replicate = float(data[1]) cell = float(data[-1]) #key with respective name is created if not already existent if not replicate in dic_replicate: dic_replicate[replicate] = [] #add cell nr to resp key dic_replicate[replicate].append(cell) #key with respective name is created if not already existent if not cell in dic_cells: dic_cells[cell] = [] dic_cells[cell].append(distance) #add distance values to resp key return(dic_cells, dic_replicate) # - # # ## Calling functions for creating output files with summarized distance values and histogram data # # The idea is that you have one folder per condition 
 that contains all the NN .txt files from several experiments for this condition, obtained with the ImageJ macro above. The file names need to contain the date in yyyymmdd format, separated by "_".
# The folder should also contain an empty output folder.

# +
# path of the input files and folder for the output files,
# as well as "condition" = name used for the output files
path = "/Users/michaelbachmann/Python/pPax paper/Figure05/test"
output = path + "/output/"

# condition and name for the output files
condition = 'FAKwt'

# change these if the conditions are not like this in your ImageJ output files
channel01 = 'Pax'
channel02 = condition
channel03 = 'pFAK'

# +
# create summary files that contain the distance data from all separate files;
# the files are saved in the output folder and named after the variable
# 'condition', with slice01, slice02, slice03 for the channels according to
# the ImageJ macro (in Bachmann et al., pPax paper: 1 = Pax, 2 = FAK construct, 3 = pFAK)

import os

# change the working directory to the one given by "path" in the first code cell
os.chdir(path)

# index for file numbers, used to identify data from individual cells and to
# write the header at the start of the output files
index = 1

# iterate through all files in the folder given by "path"
for file in os.listdir():
    # only process text files
    if file.endswith(".txt"):
        file_path = f"{path}/{file}"
        # process the data, i.e. remove zero entries and split by color channel
        NN_summary(file_path, output, condition, index)
        index += 1  # increase the counter of analyzed cells

# +
# create a file that contains the histogram data from distances pooled per
# independent experiment; the file is named after the variable 'condition'
# plus '_histo_output.txt'.
# The output folder should contain three files ending with 'slice01', ...,
# 'slice03'; they are labeled assuming that 01 is Pax, 03 is pFAK, and 02 is
# the variable 'condition'.

import os

# change to the directory of the output files
os.chdir(output)

with open(condition + "_histo_output.txt", 'w') as f_out:
    print('index', 'condition', 'bin', 'bin value', file=f_out, sep='\t')

    # iterate through all files
    index_counter = 1
    for file in os.listdir():
        # only process csv files
        if file.endswith(".csv"):
            file_path = f"{output}{file}"

            # make a dict mapping independent experiments to distance values
            dic_replicates = create_arrays_average(file_path)

            # go through the dictionary key by key (= independent experiment)
            # and create a histogram for all distances of this replicate
            for key in dic_replicates:
                bin_value, distance = make_histo(dic_replicates[key])

                # file[-5] is the slice number in the file name and therefore
                # indicates the channel
                if int(file[-5]) == 1:
                    channel = channel01
                elif int(file[-5]) == 2:
                    channel = channel02
                elif int(file[-5]) == 3:
                    channel = channel03

                # go through the list of bin values and print them together
                # with the number of the respective bin
                for i, dist in enumerate(bin_value):
                    print(index_counter, channel, distance[i], bin_value[i], file=f_out, sep='\t')

                # increase the counter that indicates separate experiments
                index_counter += 1

# +
# create output files that contain the average distance per cell, named
# 'average-distance.txt'. The column replicate is empty; replicates have to
# be matched with the second part, which creates the file 'cell-rep.txt'.
# Not very elegant, but I didn't find another way.
import os

# change to the directory of the output files
os.chdir(output)

# this part creates the file with the average values
with open(condition + "_average-distance.txt", 'w') as f_out:
    print('index', 'condition', 'average distance', 'replicate', file=f_out, sep='\t')

    # iterate through all files
    for file in os.listdir():
        average_dis = []  # empty list for average distances
        # only process csv files
        if file.endswith(".csv"):
            file_path = f"{output}{file}"

            # file[-5] is the slice number in the file name and therefore
            # indicates the channel
            if int(file[-5]) == 1:
                channel = channel01
            elif int(file[-5]) == 2:
                channel = channel02
            elif int(file[-5]) == 3:
                channel = channel03

            # create_arrays_average_02 creates a dict per cell, not per
            # independent experiment as create_arrays_average does
            dic_cells = create_arrays_average_02(file_path)
            for cell in dic_cells:
                print(cell, channel, sum(dic_cells[cell]) / len(dic_cells[cell]), file=f_out, sep='\t')

# this part creates a file with the list of experiment dates; it can later be
# copy-pasted into the _average-distance file to connect average distances
# with replicates.
# The way it works: one of the output files (...slice01.csv) is read, and the
# order of the experiment dates is written into a new file.
with open(condition+'_slice01.csv', 'r') as f_in, open(condition+"_cell-rep.txt", 'w') as f_out:
    f_in.readline()

    # dict pairing the cell identifier with the replicate
    dic_pairing = {}

    for line in f_in:
        data = line.strip().split(',')
        replicate = float(data[1])
        cell = float(data[2])
        if cell not in dic_pairing:
            dic_pairing[cell] = []  # the cell identifier becomes a key
        # if this key has no entry yet, it gets the experiment date as its
        # entry, connecting the cell identifier with the experiment date and
        # thereby with the replicate
        if replicate not in dic_pairing[cell]:
            dic_pairing[cell].append(replicate)

    for cell in dic_pairing:
        print(dic_pairing[cell], file=f_out, sep=',')

# +
# create a new file containing FWHM values, named 'histo_output_FWHM.txt';
# input comes again from the csv files slice01, ..., slice03.
# The output file contains the experiment date to identify replicates, the
# condition analyzed, and the number of bins with a value >= 0.5 multiplied
# by 50 as the FWHM (nm) value.

import os

# change to the directory of the output files
os.chdir(output)

with open(condition + "_histo_output_FWHM.txt", 'w') as f_out:
    print('date', 'condition', 'FWHM(nm)', file=f_out, sep='\t')

    # iterate through all files
    for file in os.listdir():
        index_counter = 1
        # only process csv files
        if file.endswith(".csv"):
            file_path = f"{output}{file}"

            # create two dicts: one with cells as keys, one with replicates as keys
            dic_cells, dic_replicate = create_arrays(file_path)

            # go through all cells/keys from dic_cells to calculate the FWHM per cell
            for cell in dic_cells:
                # connect the cell with a replicate by checking whether the
                # cell is a value in dic_replicate for a given experiment date
                for date_experiment in dic_replicate:
                    if cell in dic_replicate[date_experiment]:
                        cell_date = date_experiment

                # counter for bins >= 0.5
                counter_FWHM = 0

                # create the histogram for the individual cell data
                bin_value, distance = make_histo(dic_cells[cell])

                # file[-5] is the slice number in the file name and therefore
                # indicates the channel
                if int(file[-5]) == 1:
                    channel = channel01
                elif int(file[-5]) == 2:
                    channel = channel02
                elif int(file[-5]) == 3:
                    channel = channel03

                # go through the bin values of the histogram, check if >= 0.5,
                # add 1 to the counter, and print all information when the
                # last bin per cell is reached; the number of bins >= 0.5 is
                # multiplied by 50 according to the bin width
                for i, dist in enumerate(bin_value):
                    if bin_value[i] >= 0.5:
                        counter_FWHM += 1
                    if distance[i] == 1000:
                        print(cell_date, channel, counter_FWHM*50, file=f_out, sep='\t')
                        index_counter += 1
# -
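The FWHM logic above (count the histogram bins whose normalized value is >= 0.5 and multiply by the 50 nm bin width) can be checked in isolation on synthetic distances; `fwhm_from_distances` is a helper name introduced here for illustration, not a function used elsewhere in the notebook:

```python
import numpy as np

def fwhm_from_distances(distances, bin_width=50):
    """Estimate the FWHM as (number of bins >= half maximum) * bin width,
    mirroring the counting scheme of the notebook: bins from 100 to 1000 nm
    plus a catch-all first bin from 0 to 100 nm."""
    edges = [0] + list(range(100, 1001, bin_width))
    histo, _ = np.histogram(np.asarray(distances), bins=edges)
    histo_norm = histo / histo.max()
    return int(np.sum(histo_norm >= 0.5)) * bin_width

# a tight cluster of distances gives a narrow FWHM,
# uniformly spread distances give a wide one
print(fwhm_from_distances([300] * 10 + [350] * 10 + [400] * 2))  # 100
print(fwhm_from_distances(list(range(100, 1000, 10))))           # 900
```

Note that this is a coarse, bin-width-limited estimate: the true half-maximum crossing points are not interpolated, so the result is always a multiple of 50 nm.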
Notebook_NN-Analysis.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Sampling In Continuous Graphical Models

# Since inference means asking conditional probability questions of a model, exact solutions quickly become intractable. Sampling algorithms can be used to get approximate inference results by generating a large number of coherent samples that converge to the original distribution. In this notebook we take a look at two sampling algorithms that can be used to sample from continuous models:
#
# 1. Hamiltonian Monte Carlo
# 2. No-U-Turn Sampler
#
# ## Hamiltonian Monte Carlo
# Hamiltonian Monte Carlo (HMC) is a Markov Chain Monte Carlo (MCMC) algorithm that proposes future states of the Markov chain using Hamiltonian dynamics. Before looking at the HMC algorithm, let's first understand Hamiltonian dynamics.
#
# ### Hamiltonian Dynamics
# Hamiltonian dynamics describe how objects move through a system. They are defined in terms of an object's location $x$ and its momentum $p$ (the object's mass times its velocity) at some time $t$. Each location has an associated potential energy $U(x)$, and each momentum an associated kinetic energy $K(p)$. The total energy of the system is constant and is called the Hamiltonian $H(x, p)$, defined as the sum of the potential and kinetic energy:
#
# $$ H(x, p) = U(x) + K(p) $$
#
# The partial derivatives of the Hamiltonian determine how position $x$ and momentum $p$ change over time $t$, according to Hamilton's equations:
#
# $$ \frac{dx_i}{dt} = \frac{\partial H}{\partial p_i} = \frac{\partial K(p)}{\partial p_i}$$
#
# $$ \frac{dp_i}{dt} = -\frac{\partial H}{\partial x_i} = -\frac{\partial U(x)}{\partial x_i}$$
#
# These equations operate on a *d-dimensional position vector $x$* and a *d-dimensional momentum vector $p$*, for $i = 1, 2, \cdots, d$.
#
# Thus, if we can evaluate $\frac{\partial U(x)}{\partial x_i}$ and $\frac{\partial K(p)}{\partial p_i}$ and have a set of initial conditions, i.e. an initial position and an initial momentum at time $t_0$, then we can predict the location and momentum of the object at any future time $t = t_0 + T$ by simulating the dynamics for a duration $T$.

# ### Discretizing Hamilton's Equations
#
# Hamilton's equations describe an object's motion in continuous time. For simulating the dynamics on a computer, the equations must be numerically approximated by discretizing time. This is done by splitting the time interval $T$ into small steps of size $\epsilon$.
# #### Euler's Method
# For Hamilton's equations, this method performs the following update for each component of position and momentum (indexed by $i = 1, \ldots, d$):
#
# $$ p_i(t + \epsilon) = p_i(t) + \epsilon \frac{dp_i}{dt}(t) = p_i(t) - \epsilon \frac{\partial U}{\partial x_i(t)} $$
#
# $$ x_i(t + \epsilon) = x_i(t) + \epsilon \frac{dx_i}{dt} = x_i(t) + \epsilon \frac{\partial K}{\partial p_i(t)} $$
#
# Even better results are obtained if we use the updated value of the momentum in the second equation:
#
# $$ x_i(t + \epsilon) = x_i(t) + \epsilon \frac{\partial K}{\partial p_i(t + \epsilon)} $$
#
# This method is called the **modified Euler's method**.
#
# #### Leapfrog Method
# Unlike Euler's method, where we take full steps for updating position and momentum, in the leapfrog method we take half-steps to update the momentum:
#
# $$ p_i(t + \epsilon / 2) = p_i(t) - (\epsilon / 2) \frac{\partial U}{\partial x_i(t)} $$
#
# $$ x_i(t + \epsilon) = x_i(t) + \epsilon \frac{\partial K}{\partial p_i(t + \epsilon /2)} $$
#
# $$ p_i(t + \epsilon) = p_i(t + \epsilon / 2) - (\epsilon / 2) \frac{\partial U}{\partial x_i(t + \epsilon)} $$
#
# The leapfrog method yields even better results than the modified Euler method.
#
# #### Example: Simulating Hamiltonian dynamics of a simple pendulum
#
# Imagine a bob of mass $m = 1$ attached to a string of length $l = 1.5$, one end of which is fixed at the point $(x=0, y=0)$. The equilibrium position of the pendulum is at $x = 0$. Keeping the string stretched, we move the bob some distance $x_0$ horizontally. The corresponding change in potential energy is
#
# $U(h) = mg\Delta h$, where $\Delta h$ is the change in height and $g$ is the gravitational acceleration.
#
# Using simple trigonometry one can derive the relationship between $x$ and $\Delta h$:
#
# $$ U(x) = mgl(1 - \cos(\sin^{-1}(x/l)))$$
#
# The kinetic energy of the bob can be written in terms of momentum as
#
# $$ K(v) = \frac{mv^2}{2} = \frac{(mv)^2}{2m} = \frac{p^2}{2m} = K(p)$$
#
# Further, the partial derivatives of the potential and kinetic energy are
#
# $$ \frac{\partial U}{\partial x} = \frac{mgx}{\sqrt{l^2 - x^2}}$$
#
# and
#
# $$ \frac{\partial K}{\partial p} = \frac{p}{m} $$
#
# Here is an animation that uses these equations to simulate the dynamics of a simple pendulum:

# <img src="../images/simple_pendulum.gif" alt="Drawing" style="width: 700px;"/>

# The sub-plot in the upper right of the output shows the energies: the red portion of the first bar represents potential energy and the black portion kinetic energy, while the second bar represents the Hamiltonian. The lower right sub-plot shows the phase space, i.e. how momentum and position vary. We can see that the phase space maps out an ellipse without deviating from its path. With the Euler method, in contrast, the particle does not trace a full ellipse but slowly diverges towards infinity. One can clearly see that the values of position and momentum are not completely random but follow a deterministic, roughly circular trajectory. If we use the leapfrog method to propose future states, we can therefore avoid the random-walk behavior we saw in the Metropolis-Hastings algorithm. This is the main reason for the good performance of the HMC algorithm.
#
# ### Hamiltonian and Probability: Canonical Distributions
#
# Having some understanding of the Hamiltonian and how to simulate Hamiltonian dynamics, we now need to relate a probability distribution to a Hamiltonian, so that we can use Hamiltonian dynamics to explore the distribution. To relate $H(x, p)$ to the target distribution $P(x)$ we use a concept from statistical mechanics known as the canonical distribution. For any energy function $E(q)$ defined over a set of variables $q$, we can find a corresponding distribution $P(q)$:
#
# $$ P(q) = \frac{1}{Z} \exp \left( \frac{-E(q)}{T} \right) $$
#
# where $Z$ is a normalizing constant called the partition function and $T$ is the temperature of the system. For our use case we take $T = 1$.
#
# Since the Hamiltonian is an energy function for the joint state of "position" $x$ and "momentum" $p$, we can define a joint distribution for them as follows:
#
# $$ P(x, p) = \frac{e^{-H(x, p)}}{Z} $$
#
# Since $H(x, p) = U(x) + K(p)$, we can write this as
#
# $$P(x, p) = \frac{e^{-U(x)-K(p)}}{Z}$$
#
# $$P(x, p) = \frac{e^{-U(x)}e^{-K(p)}}{Z}$$
#
# Furthermore, we can associate a probability distribution with each of the potential and kinetic energies ($P(x)$ with the potential energy and $P(p)$ with the kinetic energy), so the equation above becomes
#
# $$P(x, p) = \frac{P(x)P(p)}{Z'} $$
#
# where $Z'$ is a new normalizing constant. Since the joint distribution factorizes over $x$ and $p$, we can conclude that $P(x)$ and $P(p)$ are independent. Because of this independence we can choose any distribution from which to sample the momentum variable; a common choice is a zero-mean, unit-variance normal distribution $N(0, I)$. The target distribution of interest $P(x)$, from which we actually want to sample, is associated with the potential energy:
#
# $$U(x) = - \log (P(x))$$
#
# Thus, if we can calculate $\frac{\partial \log(P(x))}{\partial x_i}$, then we are in business and can use Hamiltonian dynamics to generate samples.
#
# ### Hamiltonian Monte Carlo Algorithm
# Given an initial state $x_0$, stepsize $\epsilon$, number of steps $L$, the energy function $U$ (the negative log density), and the number of samples to be drawn $M$, the HMC algorithm is:
#
# - set $m = 0$
# - repeat until $m = M$
#
#   1. set $m \leftarrow m + 1$
#
#   2. sample a new initial momentum $p_0 \sim N(0, I)$
#
#   3. set $x_m \leftarrow x_{m-1}, x' \leftarrow x_{m-1}, p' \leftarrow p_0$
#
#   4. repeat for $L$ steps:
#
#      - set $x', p' \leftarrow Leapfrog(x', p', \epsilon)$
#
#   5. calculate the acceptance probability $\alpha = min \left(1, \frac{\exp( -U(x') - (p'.p')/2 )}{\exp( -U(x_{m-1}) - (p_0.p_0)/2 )} \right)$
#
#   6. draw a random number $u \sim Uniform(0, 1)$
#
#   7. if $u \leq \alpha$ then $x_m \leftarrow x', p_m \leftarrow -p'$
#
# $Leapfrog$ is a function that runs a single iteration of the leapfrog method.
#
# In practice, instead of explicitly giving the number of steps $L$, we sometimes use the **trajectory length**, which is the product of the number of steps $L$ and the stepsize $\epsilon$.
#
# ### Hamiltonian Monte Carlo in pgmpy
# In pgmpy one can use the Hamiltonian Monte Carlo algorithm by importing HamiltonianMC from pgmpy.sampling

from pgmpy.sampling import HamiltonianMC as HMC

# Let's use the HamiltonianMC implementation and draw some samples from a multivariate distribution $P(x) = N(\mu, \Sigma)$, where
# $$
# \mu = [0, 0], \qquad
# \Sigma = \left[
# \begin{array}{cc}
# 1 & 0.97 \\
# 0.97 & 1
# \end{array}
# \right]
# $$

# +
# %matplotlib inline
from pgmpy.factors.distributions import GaussianDistribution as JGD
from pgmpy.sampling import LeapFrog, GradLogPDFGaussian
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(77777)

# defining a multivariate Gaussian model
mean = np.array([0, 0])
covariance = np.array([[1, 0.97], [0.97, 1]])
model = JGD(['x', 'y'], mean, covariance)

# creating a HMC sampling instance
sampler = HMC(model=model, grad_log_pdf=GradLogPDFGaussian, simulate_dynamics=LeapFrog)

# drawing samples
samples = sampler.sample(initial_pos=np.array([7, 0]), num_samples=1000,
                         trajectory_length=10, stepsize=0.25)

plt.figure(figsize=(7, 7))
plt.scatter(samples['x'], samples['y'], label='HMC samples', color='k')
plt.plot(samples['x'][0:100], samples['y'][0:100], 'r-', label='First 100 samples')
plt.legend()
plt.show()
# -

# 
One can change the values of the parameters `stepsize` and `trajectory_length` and watch the convergence towards the target distribution. For example, set `stepsize = 0.5`, leave the other parameters unchanged, and look at the output. This gives a feel for how critically the performance of HMC depends on the choice of these parameters.
#
# The `stepsize` parameter of the HamiltonianMC implementation is optional, but it should only be used as a starting value; one should hand-tune the stepsize for good performance.
#
# ### Hamiltonian Monte Carlo with dual averaging
# pgmpy also implements another variant of HMC in which the stepsize is adapted during the course of sampling, which completely eliminates the need to specify `stepsize` (but still requires the user to specify `trajectory_length`). This variant is known as Hamiltonian Monte Carlo with dual averaging (HamiltonianMCDA in pgmpy). One can also use the modified Euler method to simulate the dynamics instead of leapfrog, or even plug in one's own implementation by subclassing the base class for simulating dynamics.

from pgmpy.sampling import HamiltonianMCDA as HMCda, ModifiedEuler

# using modified Euler instead of leapfrog for simulating the dynamics
sampler_da = HMCda(model, GradLogPDFGaussian, simulate_dynamics=ModifiedEuler)

# num_adapt is the number of iterations used for stepsize adaptation
samples = sampler_da.sample(initial_pos=np.array([7, 0]), num_adapt=10,
                            num_samples=10, trajectory_length=10)
print(samples)
print("\nAcceptance rate:", sampler_da.acceptance_rate)

# The type returned by HamiltonianMC and HamiltonianMCDA depends on the installation available in the working environment: if the environment has pandas installed, a pandas.DataFrame is returned; otherwise a numpy.recarray (numpy record array).
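For intuition, the leapfrog update that these samplers rely on can be sketched in plain NumPy for a standard Gaussian target, where $U(x) = x^\top x / 2$ and hence $\nabla U(x) = x$. This is an illustrative sketch of the integrator described earlier, not pgmpy's implementation:

```python
import numpy as np

def leapfrog(x, p, grad_U, stepsize, n_steps):
    """One leapfrog trajectory: initial half-step for the momentum,
    alternating full steps, and a final momentum half-step."""
    x, p = x.copy(), p.copy()
    p = p - 0.5 * stepsize * grad_U(x)          # initial half-step for momentum
    for i in range(n_steps):
        x = x + stepsize * p                    # full position step
        if i < n_steps - 1:
            p = p - stepsize * grad_U(x)        # two merged momentum half-steps
    p = p - 0.5 * stepsize * grad_U(x)          # final half-step for momentum
    return x, p

# standard Gaussian target: U(x) = x.x/2, so grad U(x) = x
grad_U = lambda x: x
x0 = np.array([1.0, 0.0])
p0 = np.array([0.0, 1.0])
x1, p1 = leapfrog(x0, p0, grad_U, stepsize=0.1, n_steps=100)

# the Hamiltonian H = U(x) + p.p/2 should be nearly conserved by leapfrog
H0 = 0.5 * x0 @ x0 + 0.5 * p0 @ p0
H1 = 0.5 * x1 @ x1 + 0.5 * p1 @ p1
print(abs(H1 - H0))  # close to zero for a stable stepsize
```

The near-conservation of $H$ is exactly why the acceptance probability in step 5 of the HMC algorithm stays close to 1 for a well-chosen stepsize.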
#
# Let's now use the base class for simulating Hamiltonian dynamics and write our own Modified Euler method

# +
from pgmpy.sampling import BaseSimulateHamiltonianDynamics


class ModifiedEulerMethod(BaseSimulateHamiltonianDynamics):

    def __init__(self, model, position, momentum, stepsize, grad_log_pdf, grad_log_position=None):
        BaseSimulateHamiltonianDynamics.__init__(self, model, position, momentum,
                                                 stepsize, grad_log_pdf, grad_log_position)
        self.new_position, self.new_momentum, self.new_grad_logp = self._get_proposed_values()

    def _get_proposed_values(self):
        # Modified Euler: update the momentum first, then use the new
        # momentum to update the position
        new_momentum = self.momentum + self.stepsize * self.grad_log_position
        new_position = self.position + self.stepsize * new_momentum
        grad_log, _ = self.grad_log_pdf(new_position, self.model).get_gradient_log_pdf()
        return new_position, new_momentum, grad_log


hmc_sampler = HMC(model, GradLogPDFGaussian, simulate_dynamics=ModifiedEulerMethod)
samples = hmc_sampler.sample(initial_pos=np.array([0, 0]), num_samples=10,
                             trajectory_length=10, stepsize=0.2)
print(samples)
print("Total accepted proposals:", hmc_sampler.accepted_proposals)
# -

# ## No-U-Turn Sampler
# Both of these algorithms (HMC and HMCda) require some hand-tuning from the user,
# which can be time consuming, especially for high-dimensional complex models.
# The No-U-Turn Sampler (NUTS) is an extension of HMC that eliminates the need to specify the
# trajectory length, but still requires the user to specify a stepsize.
#
# NUTS removes the need for the number-of-steps parameter by using a metric
# to evaluate whether we have run the leapfrog algorithm for long enough, that
# is, when running the simulation for more steps would no longer increase
# the distance between the proposed value of $x$ and the initial value of
# $x$.
#
# At a high level, NUTS uses the leapfrog method to trace out a path forward
# and backward in fictitious time, first running forwards or backwards
# 1 step, then forwards or backwards 2 steps, then forwards or backwards
# 4 steps, etc.
# This doubling process builds a balanced binary tree whose
# leaf nodes correspond to position-momentum states. The doubling process is
# halted when the sub-trajectory from the leftmost to the rightmost nodes of
# any balanced subtree of the overall binary tree starts to double back on
# itself (i.e., the fictional particle starts to make a "U-Turn"). At
# this point NUTS stops the simulation and samples from among the set of
# points computed during the simulation, taking care to preserve detailed
# balance.
#
# Let's use NUTS and draw some samples.

# +
from pgmpy.sampling import NoUTurnSampler as NUTS

mean = np.array([1, 2, 3])
covariance = np.array([[2, 0.4, 0.5], [0.4, 3, 0.6], [0.5, 0.6, 4]])
model = JGD(['x', 'y', 'z'], mean, covariance)

# Creating a sampling instance of NUTS
NUTS_sampler = NUTS(model=model, grad_log_pdf=GradLogPDFGaussian)
samples = NUTS_sampler.sample(initial_pos=[10, 10, 0], num_samples=2000, stepsize=0.25)

from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=(9, 9))
ax = Axes3D(fig)
ax.scatter(samples['x'], samples['y'], samples['z'], label='NUTS Samples')
ax.plot(samples['x'][:50], samples['y'][:50], samples['z'][:50], 'r-', label='Warm-up period')
plt.legend()
plt.title("Scatter plot of samples")
plt.show()
# -

# The **warm-up period**, a.k.a. the **burn-in period**, of a Markov chain is the amount of time it takes for the Markov chain
# to reach its target stationary distribution. The samples generated during this period are usually thrown away because they do not show the characteristics of the distribution from which they are sampled.
#
# Since it is difficult to visualize more than 3 dimensions, we generally use a trace plot of the Markov chain to determine this
# warm-up period.
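# Before looking at the trace plot, the two mechanical ingredients described above — the leapfrog integrator and the U-turn check — can be sketched in isolation. These are simplified illustrations, not pgmpy's actual classes:

```python
import numpy as np

def leapfrog(grad_log_pdf, x, p, eps):
    # half-step momentum, full-step position, half-step momentum
    p = p + 0.5 * eps * grad_log_pdf(x)
    x = x + eps * p
    p = p + 0.5 * eps * grad_log_pdf(x)
    return x, p

def is_u_turn(x_minus, x_plus, p_minus, p_plus):
    # The trajectory starts doubling back once the momentum at either end
    # points against the displacement between the two endpoints.
    dx = x_plus - x_minus
    return np.dot(dx, p_minus) < 0 or np.dot(dx, p_plus) < 0

# Sanity check on a standard normal, where grad log P(x) = -x:
# leapfrog approximately conserves the energy H = x^2/2 + p^2/2.
grad = lambda x: -x
x, p = np.array([1.0]), np.array([0.5])
h0 = 0.5 * (x @ x + p @ p)
for _ in range(100):
    x, p = leapfrog(grad, x, p, 0.1)
h1 = 0.5 * (x @ x + p @ p)
```

# The near-conservation of energy is what keeps the Metropolis acceptance probability high even for long trajectories.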
plt.figure(figsize=(9, 9))
plt.plot(samples['x'], 'b-', label='x')
plt.plot(samples['y'], 'g-', label='y')
plt.plot(samples['z'], 'c-', label='z')
plt.plot([50, 50], [-5, 9], 'k-', label='Warm-up period', linewidth=4)
plt.legend()
plt.title("Trace plot of Markov chain")
plt.show()

# ### No-U-Turn Sampler with dual averaging
# As with HMCda, in the No-U-Turn Sampler with dual averaging (NUTSda) we adapt the stepsize during the course of sampling, which completely eliminates the need to specify `stepsize`. Thus we can use NUTSda without any hand-tuning at all.

from pgmpy.sampling import NoUTurnSamplerDA as NUTSda

NUTSda_sampler = NUTSda(model=model, grad_log_pdf=GradLogPDFGaussian)
samples = NUTSda_sampler.sample(initial_pos=[0.457420, 0.500307, 0.211056],
                                num_adapt=10, num_samples=10)
print(samples)

# Apart from the `sample` method, all four algorithms (HMC, HMCda, NUTS, NUTSda) provide another method for sampling from the model, named `generate_sample`. The `generate_sample` method returns a generator object, each iteration of which yields a sample. The sample returned is a plain numpy.array object. The arguments of the `generate_sample` method and the `sample` method are the same for all algorithms.

generate_samples = NUTSda_sampler.generate_sample(initial_pos=[0.4574, 0.503, 0.211],
                                                  num_adapt=10, num_samples=10)
samples = np.array([sample for sample in generate_samples])
print(samples)

# ## Support for custom models
# One can also use HMC, HMCda, NUTS and NUTSda to sample from a user-defined model. All one has to do is create a class for computing the log of the probability density function and its gradient using the base class provided by pgmpy, and pass this class as the `grad_log_pdf` argument to any of these algorithms (HMC[da], NUTS[da]).
#
# In this example we will define our own logistic distribution and use NUTSda to sample from it.
#
# The probability density of the logistic distribution is given by:
#
# $$ P(x; \mu, s) = \frac{e^{-\frac{x- \mu}{s}}}{s(1 + e^{-\frac{x - \mu}{s}})^2} $$
#
# Thus the log of this probability density function (the potential energy function) can be written as:
#
# $$ log(P(x; \mu, s)) = -\frac{x - \mu}{s} - log(s) - 2 log(1 + e^{-\frac{x - \mu}{s}}) $$
#
# And the gradient of the potential energy:
#
# $$ \frac{\partial log(P(x; \mu, s))}{\partial x} = - \frac{1}{s} + \frac{2e^{-\frac{x - \mu}{s}}}{s(1 + e^{-\frac{x - \mu}{s}})}$$

# +
# Importing the base class structure for the log and gradient-log of a probability density function
from pgmpy.sampling import BaseGradLogPDF

# Base class for a user-defined continuous factor
from pgmpy.factors.distributions import CustomDistribution


# Defining the pdf of a logistic distribution with mu = 5, s = 2
def logistic_pdf(x):
    power = - (x - 5.0) / 2.0
    return np.exp(power) / (2 * (1 + np.exp(power))**2)


# Calculating the log of the logistic pdf
def log_logistic(x):
    power = - (x - 5.0) / 2.0
    return power - np.log(2.0) - 2 * np.log(1 + np.exp(power))


# Calculating the gradient of the log of the logistic pdf
def grad_log_logistic(x):
    power = - (x - 5.0) / 2.0
    return - 0.5 - (2 / (1 + np.exp(power))) * np.exp(power) * (-0.5)


# Creating a logistic model
logistic_model = CustomDistribution(['x'], logistic_pdf)


# Creating a class using the base class for the gradient-log and log probability density function
class GradLogLogistic(BaseGradLogPDF):

    def __init__(self, variable_assignments, model):
        BaseGradLogPDF.__init__(self, variable_assignments, model)
        self.grad_log, self.log_pdf = self._get_gradient_log_pdf()

    def _get_gradient_log_pdf(self):
        return (grad_log_logistic(self.variable_assignments),
                log_logistic(self.variable_assignments))


# Generating samples using NUTS
sampler = NUTSda(model=logistic_model, grad_log_pdf=GradLogLogistic)
samples = sampler.sample(initial_pos=np.array([0.0]),
                         num_adapt=10000, num_samples=10000)

x = np.linspace(-30, 30, 10000)
y = [logistic_pdf(i) for i in x]
plt.figure(figsize=(8, 8))
plt.plot(x, y, label='real logistic pdf')
plt.hist(samples.values, density=True, histtype='step', bins=200, label='Samples NUTSda')
plt.legend()
plt.show()
# -
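# As a quick sanity check on the derivation above (not part of the original notebook), the analytic gradient can be compared against a central finite difference of the log-pdf. The functions are restated here standalone with the same $\mu = 5$, $s = 2$:

```python
import numpy as np

MU, S = 5.0, 2.0

def log_logistic_pdf(x):
    power = -(x - MU) / S
    return power - np.log(S) - 2 * np.log(1 + np.exp(power))

def grad_log_logistic_pdf(x):
    power = -(x - MU) / S
    return -1.0 / S + (2.0 / S) * np.exp(power) / (1 + np.exp(power))

# The central finite difference of the log-pdf should match the analytic gradient
x0, h = 1.3, 1e-6
fd = (log_logistic_pdf(x0 + h) - log_logistic_pdf(x0 - h)) / (2 * h)
analytic = grad_log_logistic_pdf(x0)
```

# This kind of check catches most sign and chain-rule mistakes before handing the gradient class to a sampler.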
notebooks/8. Sampling Algorithms.ipynb
// # Automatic generation of Notebook using PyCropML // This notebook implements a crop model. // ### Domain Class EnergybalanceAuxiliary // + import java.io.*; import java.util.*; import java.time.LocalDateTime; public class EnergybalanceAuxiliary { private double minTair; private double maxTair; private double solarRadiation; private double vaporPressure; private double extraSolarRadiation; private double hslope; private double plantHeight; private double wind; private double deficitOnTopLayers; private double VPDair; private double netRadiation; private double netOutGoingLongWaveRadiation; private double netRadiationEquivalentEvaporation; private double energyLimitedEvaporation; private double soilEvaporation; public EnergybalanceAuxiliary() { } public EnergybalanceAuxiliary(EnergybalanceAuxiliary toCopy, boolean copyAll) // copy constructor { if (copyAll) { this.minTair = toCopy.minTair; this.maxTair = toCopy.maxTair; this.solarRadiation = toCopy.solarRadiation; this.vaporPressure = toCopy.vaporPressure; this.extraSolarRadiation = toCopy.extraSolarRadiation; this.hslope = toCopy.hslope; this.plantHeight = toCopy.plantHeight; this.wind = toCopy.wind; this.deficitOnTopLayers = toCopy.deficitOnTopLayers; this.VPDair = toCopy.VPDair; this.netRadiation = toCopy.netRadiation; this.netOutGoingLongWaveRadiation = toCopy.netOutGoingLongWaveRadiation; this.netRadiationEquivalentEvaporation = toCopy.netRadiationEquivalentEvaporation; this.energyLimitedEvaporation = toCopy.energyLimitedEvaporation; this.soilEvaporation = toCopy.soilEvaporation; } } public double getminTair() { return minTair; } public void setminTair(double _minTair) { this.minTair= _minTair; } public double getmaxTair() { return maxTair; } public void setmaxTair(double _maxTair) { this.maxTair= _maxTair; } public double getsolarRadiation() { return solarRadiation; } public void setsolarRadiation(double _solarRadiation) { this.solarRadiation= _solarRadiation; } public double getvaporPressure() { return 
vaporPressure; } public void setvaporPressure(double _vaporPressure) { this.vaporPressure= _vaporPressure; } public double getextraSolarRadiation() { return extraSolarRadiation; } public void setextraSolarRadiation(double _extraSolarRadiation) { this.extraSolarRadiation= _extraSolarRadiation; } public double gethslope() { return hslope; } public void sethslope(double _hslope) { this.hslope= _hslope; } public double getplantHeight() { return plantHeight; } public void setplantHeight(double _plantHeight) { this.plantHeight= _plantHeight; } public double getwind() { return wind; } public void setwind(double _wind) { this.wind= _wind; } public double getdeficitOnTopLayers() { return deficitOnTopLayers; } public void setdeficitOnTopLayers(double _deficitOnTopLayers) { this.deficitOnTopLayers= _deficitOnTopLayers; } public double getVPDair() { return VPDair; } public void setVPDair(double _VPDair) { this.VPDair= _VPDair; } public double getnetRadiation() { return netRadiation; } public void setnetRadiation(double _netRadiation) { this.netRadiation= _netRadiation; } public double getnetOutGoingLongWaveRadiation() { return netOutGoingLongWaveRadiation; } public void setnetOutGoingLongWaveRadiation(double _netOutGoingLongWaveRadiation) { this.netOutGoingLongWaveRadiation= _netOutGoingLongWaveRadiation; } public double getnetRadiationEquivalentEvaporation() { return netRadiationEquivalentEvaporation; } public void setnetRadiationEquivalentEvaporation(double _netRadiationEquivalentEvaporation) { this.netRadiationEquivalentEvaporation= _netRadiationEquivalentEvaporation; } public double getenergyLimitedEvaporation() { return energyLimitedEvaporation; } public void setenergyLimitedEvaporation(double _energyLimitedEvaporation) { this.energyLimitedEvaporation= _energyLimitedEvaporation; } public double getsoilEvaporation() { return soilEvaporation; } public void setsoilEvaporation(double _soilEvaporation) { this.soilEvaporation= _soilEvaporation; } } // - // ### Domain Class 
EnergybalanceRate // + import java.io.*; import java.util.*; import java.time.LocalDateTime; public class EnergybalanceRate { private double evapoTranspirationPriestlyTaylor; private double evapoTranspirationPenman; private double evapoTranspiration; private double potentialTranspiration; private double soilHeatFlux; private double cropHeatFlux; public EnergybalanceRate() { } public EnergybalanceRate(EnergybalanceRate toCopy, boolean copyAll) // copy constructor { if (copyAll) { this.evapoTranspirationPriestlyTaylor = toCopy.evapoTranspirationPriestlyTaylor; this.evapoTranspirationPenman = toCopy.evapoTranspirationPenman; this.evapoTranspiration = toCopy.evapoTranspiration; this.potentialTranspiration = toCopy.potentialTranspiration; this.soilHeatFlux = toCopy.soilHeatFlux; this.cropHeatFlux = toCopy.cropHeatFlux; } } public double getevapoTranspirationPriestlyTaylor() { return evapoTranspirationPriestlyTaylor; } public void setevapoTranspirationPriestlyTaylor(double _evapoTranspirationPriestlyTaylor) { this.evapoTranspirationPriestlyTaylor= _evapoTranspirationPriestlyTaylor; } public double getevapoTranspirationPenman() { return evapoTranspirationPenman; } public void setevapoTranspirationPenman(double _evapoTranspirationPenman) { this.evapoTranspirationPenman= _evapoTranspirationPenman; } public double getevapoTranspiration() { return evapoTranspiration; } public void setevapoTranspiration(double _evapoTranspiration) { this.evapoTranspiration= _evapoTranspiration; } public double getpotentialTranspiration() { return potentialTranspiration; } public void setpotentialTranspiration(double _potentialTranspiration) { this.potentialTranspiration= _potentialTranspiration; } public double getsoilHeatFlux() { return soilHeatFlux; } public void setsoilHeatFlux(double _soilHeatFlux) { this.soilHeatFlux= _soilHeatFlux; } public double getcropHeatFlux() { return cropHeatFlux; } public void setcropHeatFlux(double _cropHeatFlux) { this.cropHeatFlux= _cropHeatFlux; } } // - // 
### Domain Class EnergybalanceState // + import java.io.*; import java.util.*; import java.time.LocalDateTime; public class EnergybalanceState { private double diffusionLimitedEvaporation; private double conductance; private double minCanopyTemperature; private double maxCanopyTemperature; public EnergybalanceState() { } public EnergybalanceState(EnergybalanceState toCopy, boolean copyAll) // copy constructor { if (copyAll) { this.diffusionLimitedEvaporation = toCopy.diffusionLimitedEvaporation; this.conductance = toCopy.conductance; this.minCanopyTemperature = toCopy.minCanopyTemperature; this.maxCanopyTemperature = toCopy.maxCanopyTemperature; } } public double getdiffusionLimitedEvaporation() { return diffusionLimitedEvaporation; } public void setdiffusionLimitedEvaporation(double _diffusionLimitedEvaporation) { this.diffusionLimitedEvaporation= _diffusionLimitedEvaporation; } public double getconductance() { return conductance; } public void setconductance(double _conductance) { this.conductance= _conductance; } public double getminCanopyTemperature() { return minCanopyTemperature; } public void setminCanopyTemperature(double _minCanopyTemperature) { this.minCanopyTemperature= _minCanopyTemperature; } public double getmaxCanopyTemperature() { return maxCanopyTemperature; } public void setmaxCanopyTemperature(double _maxCanopyTemperature) { this.maxCanopyTemperature= _maxCanopyTemperature; } } // - // ### Domain Class EnergybalanceExogenous // + import java.io.*; import java.util.*; import java.time.LocalDateTime; public class EnergybalanceExogenous { public EnergybalanceExogenous() { } public EnergybalanceExogenous(EnergybalanceExogenous toCopy, boolean copyAll) // copy constructor { if (copyAll) { } } } // - // ### Model Potentialtranspiration import java.io.*; import java.util.*; import java.text.ParseException; import java.text.SimpleDateFormat; import java.time.LocalDateTime; public class Potentialtranspiration { private double tau; public double gettau() { 
return tau; } public void settau(double _tau) { this.tau= _tau; } public Potentialtranspiration() { } public void Calculate_potentialtranspiration(EnergybalanceState s, EnergybalanceState s1, EnergybalanceRate r, EnergybalanceAuxiliary a, EnergybalanceExogenous ex) { //- Name: PotentialTranspiration -Version: 1.0, -Time step: 1 //- Description: // * Title: PotentialTranspiration Model // * Author: <NAME> // * Reference: Modelling energy balance in the wheat crop model SiriusQuality2: // Evapotranspiration and canopy and soil temperature calculations // * Institution: INRA/LEPSE Montpellier // * ExtendedDescription: SiriusQuality2 uses availability of water from the soil reservoir as a method to restrict // transpiration as soil moisture is depleted // * ShortDescription: It uses the availability of water from the soil reservoir as a method to restrict // transpiration as soil moisture is depleted //- inputs: // * name: evapoTranspiration // ** description : evapoTranspiration // ** variablecategory : rate // ** datatype : DOUBLE // ** default : 830.958 // ** min : 0 // ** max : 10000 // ** unit : mm // ** uri : http://www1.clermont.inra.fr/siriusquality/?page_id=547 // ** inputtype : variable // * name: tau // ** description : plant cover factor // ** parametercategory : species // ** datatype : DOUBLE // ** default : 0.9983 // ** min : 0 // ** max : 1 // ** unit : // ** uri : http://www1.clermont.inra.fr/siriusquality/?page_id=547 // ** inputtype : parameter //- outputs: // * name: potentialTranspiration // ** description : potential Transpiration // ** variablecategory : rate // ** datatype : DOUBLE // ** min : 0 // ** max : 10000 // ** unit : g m-2 d-1 // ** uri : http://www1.clermont.inra.fr/siriusquality/?page_id=547 double evapoTranspiration = r.getevapoTranspiration(); double potentialTranspiration; potentialTranspiration = evapoTranspiration * (1.0d - tau); r.setpotentialTranspiration(potentialTranspiration); } } class Test { EnergybalanceState s = new 
EnergybalanceState();
    EnergybalanceState s1 = new EnergybalanceState();
    EnergybalanceRate r = new EnergybalanceRate();
    EnergybalanceAuxiliary a = new EnergybalanceAuxiliary();
    EnergybalanceExogenous ex = new EnergybalanceExogenous();
    Potentialtranspiration mod = new Potentialtranspiration();

    // test1
    public void test1()
    {
        mod.settau(0.9983);
        r.setevapoTranspiration(830.958);
        mod.Calculate_potentialtranspiration(s, s1, r, a, ex);
        // expected potentialTranspiration: 1.413
        System.out.println("potentialTranspiration estimated :");
        System.out.println(r.getpotentialTranspiration());
    }
}

Test t = new Test();
t.test1();
test/java/Potentialtranspiration.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Environment (conda_pytorch_p36)
#     language: python
#     name: conda_pytorch_p36
# ---

# +
import rasterio as rio
from cmr import CollectionQuery, GranuleQuery
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon
import ipywidgets as widgets

import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.io.img_tiles as cimgt

import requests
import os
from tqdm import tqdm
import math
import subprocess
# -

# # Step 1: ASO Acquisition ❄️ ✈️ 🖥 ❄
#
#
# This notebook represents the first step of the Planet SCA data pipeline to set up for model training. Each instance of model training is based around ASO collects in a single region, which form the basis for the imagery used to train the model (described in Step 2: Planet Ordering).
#
# The steps to complete this process are as follows:
#
# 1. Query the NASA Common Metadata Repository (CMR) for Airborne Snow Observatory (ASO) collects.
# 1. Select an ASO collect **region**.
# 1. Filter CMR results to the selected region.
# 1. Select ASO collects based on collection date for the given region.
# 1. Download data from the NSIDC DAAC.
# 1. Process the ASO collects (thresholding, creation of spatial footprint).
# 1. Store raw and processed ASO collects in S3.
# 1. Tile processed collects and upload to S3.
#
# ⚠️ **Requirements** ⚠️
#
# 1. This notebook must be run with a version of the `pytorch_p36` conda environment (we'll check this).
# 2. You need an account with NASA Earthdata. Create one [here](https://urs.earthdata.nasa.gov/users/new). Once you do, set the shell variables **`EARTHDATA_USERNAME`** and **`EARTHDATA_PASSWORD`** by running `export EARTHDATA_USERNAME=<username> EARTHDATA_PASSWORD=<password>` on your terminal and restarting Jupyter.
# 3. You need at least 5GB of free space available on this computer's storage.
# 4.
You need to have configured your AWS Command Line tools described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) via `aws configure`, and know the "profile name" of your chosen profile, if you have multiple. # 5. You also need to have configured an AWS S3 Bucket to store these resulting images and their tiled versions. assert "pytorch_p36" in os.environ['CONDA_DEFAULT_ENV'], "Need to activate pytorch_p36 environment" # ## ASO Catalog Search 🔎 # # Below we use the NASA CMR to search for ASO collects. # Load the ASO Catalog nasaAPI = GranuleQuery() aso = pd.DataFrame(nasaAPI.short_name('ASO_3M_SD').get()) aso.head() # + ## Parse into Useful Data # Parse dates for easier sorting aso.time_start = pd.to_datetime(aso.time_start) # Store ASO region aso['aso_region'] = [granId.split('_')[3] for granId in aso.producer_granule_id] # Convert polygons into Shapely Polygons. polygons = [] for raw_poly in aso.polygons.values: t = pd.to_numeric(raw_poly[0][0].split(" ")) polygons.append(Polygon(list(zip(t[1::2], t[::2])))) aso['geom'] = polygons # - # Compute ASO region area. aso['area'] = [_g.area for _g in polygons] # + # Examine ASO regions by area. asoRegions = sorted(aso.aso_region.unique()) asoRegions = list(zip(["{}-{:.2f}".format(a[0], a[1]) for a in zip(asoRegions, ([aso[aso.aso_region == a].area.min() for a in asoRegions]))], asoRegions)) asoRegions # + ## Create figure of all ASO regions. ## NOTE: This takes ~2-3 minutes to complete. ## If there's already a figure below this cell, there's no need to run it. 
fig = plt.figure(figsize=(15, 15)) gs = plt.GridSpec(5, 5) mapbox = cimgt.MapboxTiles( access_token="<KEY>", map_id = 'streets' ) for i, region in enumerate(asoRegions): geom = aso[aso.aso_region == region[1]].geom.values[0] _a = plt.subplot(gs[i], projection=ccrs.AlbersEqualArea()) _a.get_xaxis().set_visible(False) _a.get_yaxis().set_visible(False) _a.add_geometries([geom], crs=ccrs.PlateCarree(), alpha=0.76) _a.set_title("{}-{:.2f}".format(region[1], geom.area)) bds = geom.buffer(0.2).bounds _a.set_extent([bds[0], bds[2], bds[1], bds[3]], crs = ccrs.PlateCarree()) _a.add_image(mapbox, 7) # - # ## Select ASO Region # # Choose a focal ASO region to examine collects. region = widgets.Dropdown( options=asoRegions, description='ASO Region:', disabled=False, ) region region.value = 'USCOGE' # ## Select ASO Candidates for Selected Region # # Choose ASO collects for selected region. regionalCandidates = aso[ (aso.aso_region == region.value) ] options = list(zip(regionalCandidates.time_start, regionalCandidates.producer_granule_id)) granuleSelector = widgets.SelectMultiple( options= options, rows=10, description='{}'.format(region.value), disabled=False ) granuleSelector filename = granuleSelector.value # ## Download Data 🖥 # # We'll build up the correct URLs for the selected ASO assets and download them via cURL from Earthdata. regionalCandidates[regionalCandidates.producer_granule_id.isin(filename)].links downloadTo = '/tmp/work/' urls = [l[0]['href'] for l in regionalCandidates[ regionalCandidates.producer_granule_id.isin(filename)].links ] # ! mkdir /tmp/work urls # ### Configure cURL to connect with NASA Earthdata Databases. # # ⚠️ This requires the environment variables **`EARTHDATA_USERNAME`** and **`EARTHDATA_PASSWORD`** to be set, as described above. # + username = os.environ['EARTHDATA_USERNAME'] password = os.environ['<PASSWORD>'] # ! echo "machine urs.earthdata.nasa.gov login {username} password {password}" > ~/.netrc && chmod 0600 ~/.netrc # ! 
touch .urs_cookies

# +
for url in urls:
    # download_file(url, downloadTo, username, password)
    !(cd {downloadTo} && curl -O -b ~/.urs_cookies -c ~/.urs_cookies -L -n {url})
# -

os.listdir(downloadTo)

# ## Preprocess for Tiling
#
# The raw ASO collect(s) have to be thresholded to a binary raster and their footprints calculated and saved for imagery acquisition. We use the `/preprocess` module for this.

# +
og_dir = os.getcwd()

for url in urls:
    imgPath = os.path.join(downloadTo, url.split('/')[-1])
    try:
        os.chdir("/home/ubuntu/planet-snowcover/")
        subprocess.check_call(['/home/ubuntu/anaconda3/envs/pytorch_p36/bin/python',
                               '-m', 'preprocess', 'gt_pre',
                               '--gt_file', imgPath, downloadTo,
                               '--footprint', '--threshold', '0.1'])
    except Exception as e:
        print(e)
# -

# check result
os.listdir(downloadTo)

# ## Upload Raw & Processed Artifacts to Cloud Storage ☁︎🗑
#
# After preprocessing the raw ASO collects, we want to add them to a cloud storage bucket so that we can access them later. We're going to use the Amazon **S**imple **S**torage **S**ervice (S3) to do this.
#
# Specify your AWS Command Line Tools profile and the destination S3 Bucket below:
#

awsProfile = widgets.Text(description='AWS Profile', value="esip")
awsProfile

awsBucket = widgets.Text(description="S3 Bucket", value='planet-snowcover-snow')
awsBucket

awsUploadCommand = "aws s3 --profile {} sync {} s3://{}".format(awsProfile.value, downloadTo, awsBucket.value)
print(awsUploadCommand)

# **🚨 IMPORTANT 🚨:** To avoid over-uploading files to your S3 bucket (+ potentially incurring unnecessary costs), we will perform a "dry run" of this S3 command to verify accuracy. Check it below. If there are extraneous files, remove them from the directory specified by `downloadTo`.

# ! {awsUploadCommand + ' --dryrun'}

# Look OK? Run the below command to perform the upload.

# ! {awsUploadCommand}

# Confirm upload by listing contents of bucket:

# !
aws s3 --profile {awsProfile.value} ls {awsBucket.value} # ## Tile ASO 🗺 # # We take advantage of the Spherical Mercator tiling specification for spatial data, which allows for efficient data storage and model training. We will tile the thresholded ASO raster files we've downloaded above, again using the `preprocess` module. # # This module uploads tiles directly to the same S3 bucket specified above. # # ⚠️ *If you run this in a Jupyter notebook/lab setting, there will be no output to the notebook itself. To see the output of the below command, visit the Jupyter console running in the terminal.* # + og_dir = os.getcwd() for url in urls: imgPath = os.path.join(downloadTo, url.split('/')[-1]) try: os.chdir("/home/ubuntu/planet-snowcover/") subprocess.check_call(['/home/ubuntu/anaconda3/envs/pytorch_p36/bin/python', '-m', 'preprocess', 'tile', '--indexes', '1', '--zoom', '15', '--skip-blanks', "--aws_profile" , awsProfile.value, "s3://" + awsBucket.value, imgPath]) except Exception as e: print(e) # - # Check output. If you see the name of the ASO collect with a label `PRE` next to it, the tiling was successful and the tiles are hosted in the S3 bucket specified above. # ! aws s3 --profile {awsProfile.value} ls {awsBucket.value} # 👌 # # --- # by <NAME>. # # © University of Washington, 2019. # # Support from the National Science Foundation, NASA THP, Earth Science Information partners, and the UW eScience Institute.</small>
pipeline/1_Acquire_ASO.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import shutil import numpy import pandas as pd from sklearn.model_selection import train_test_split DATADIR = "/home/shared/workspace/rose_ntu_dataset/our_videos/" TARGET_DIR = "/home/shared/workspace/rose_ntu_dataset/our_videos/" LABEL_DIR = "/home/shared/workspace/rose_ntu_dataset/our_videos_labels/" ACTION_NAMES = ["sneezeCough", "staggering", "fallingDown", "headache", "chestPain", "backPain", "neckPain", "nauseaVomiting", "fanSelf"] def filter_dir(dirname): for filename in os.listdir(dirname): left, _ = filename.split("_") action_num = int(left[-3:]) if (action_num in range(41,50)): class_dir = TARGET_DIR+ACTION_NAMES[action_num - 41] + "/" print(dirname+filename) print(class_dir+filename) if not os.path.exists(class_dir): os.mkdir(class_dir) if not os.path.exists(class_dir+filename): shutil.copy(dirname+filename, class_dir+filename) filter_dir(DATADIR) def index_mapping(targetdir): df = pd.DataFrame({ "Index":list(range(41, 50)), "ActionNames":ACTION_NAMES }) df.to_csv(LABEL_DIR+"classInd.txt", header=None, index=None, sep=' ', mode='a') index_mapping(TARGET_DIR) def extract_labels(targetdir): X,y = [],[] for actioname in os.listdir(targetdir): actiondir = targetdir + actioname + "/" for filename in os.listdir(actiondir): videoname,_ = filename.split("_") X.append(actioname+"/"+filename.split('.')[0]) y.append(int(videoname[-3:])) return X,y X,y = extract_labels(TARGET_DIR) X y def create_test_set(X,y): testdf = pd.DataFrame({ "filename":X, "label": y }) testdf = testdf.sort_values(by=['label']) testdf.to_csv(LABEL_DIR+"testlist01.txt", header=None, index=None, sep=' ', mode='a') create_test_set(X,y) # cd /home/shared/workspace/Resnet3D/3D-ResNets-PyTorch/ # !python -m util_scripts.generate_video_jpgs 
/home/shared/workspace/rose_ntu_dataset/our_videos/ our_data/jpg/ ucf101 # !python -m util_scripts.ntu_test_json /home/shared/workspace/rose_ntu_dataset/our_videos_labels/ our_data/jpg/ our_data/ # ls our_data # cp our_data/ucf101_01.json our_data/ntutest.json # !python /home/shared/workspace/human-activity-recognition/Efficient-3DCNNs/utils/n_frames_ucf101_hmdb51.py /home/shared/workspace/Resnet3D/3D-ResNets-PyTorch/our_data/jpg/
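# The filename convention that `filter_dir` relies on — the action index encoded in the last three digits before the first underscore — can be exercised in isolation. The filename below is a made-up NTU-style example:

```python
ACTION_NAMES = ["sneezeCough", "staggering", "fallingDown", "headache",
                "chestPain", "backPain", "neckPain", "nauseaVomiting", "fanSelf"]

def action_for(filename):
    """Map an NTU-style video filename to its action class (actions 41-49),
    or None if the action number falls outside that range."""
    left, _ = filename.split("_", 1)
    action_num = int(left[-3:])
    if action_num not in range(41, 50):
        return None
    return ACTION_NAMES[action_num - 41]

action_for("S001C001P001R001A043_rgb.avi")  # → 'fallingDown'
```

# Returning None for out-of-range actions mirrors how `filter_dir` silently skips files outside actions 41-49.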
Efficient-3DCNNs/Our_test_set_preprocessing.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Feature-Opinion Pairing

# + jupyter={"outputs_hidden": true}
import pandas as pd

# + jupyter={"outputs_hidden": true}
# For monitoring the duration of pandas processes
from tqdm import tqdm, tqdm_pandas

# To avoid RuntimeError: Set changed size during iteration
tqdm.monitor_interval = 0

# Register `pandas.progress_apply` and `pandas.Series.map_apply` with `tqdm`
# (can use `tqdm_gui`, `tqdm_notebook`, optional kwargs, etc.)
tqdm.pandas(desc="Progress:")

# Now you can use `progress_apply` instead of `apply`
# and `progress_map` instead of `map`
# can also groupby:
# df.groupby(0).progress_apply(lambda x: x**2)
# -

# ## Load important nouns

# + jupyter={"outputs_hidden": true}
df00 = pd.read_pickle('../data/interim/005_important_nouns.p')
# -

# df00.head()
len(df00)

df01 = df00.assign(num_of_imp_nouns=df00['imp_nns'].progress_apply(lambda imp_nouns: len(imp_nouns)))

df02 = df01.loc[df01['num_of_imp_nouns'] != 0]

len(df02)

df02.head()

# ## Load book tagged reviews

# + jupyter={"outputs_hidden": true}
df10 = pd.read_pickle('../data/interim/002_pos_tagged_keyed_reviews.p')
# -

df10.head()

len(df10)

df11 = pd.DataFrame(df10.uniqueKey.str.split('##', 1).tolist(), columns=['userId', 'asin'])

df11.head()

df_12 = pd.DataFrame(df10['reviewText'])

df_12.head()

df_13 = pd.concat([df11, df_12], axis=1)

df_13.head()

# ## Join reviews with important nouns

df_joined = df_13.merge(df02, left_on='asin', right_on='asin', how='inner')

df_joined[0:31]

df_joined.describe()

1 - 511364/582711

582711 - 511364

import numpy as np

# DataFrame.as_matrix() was removed in pandas 1.0; to_numpy() is its replacement
matrix_m01 = df_joined.to_numpy()

len(matrix_m01)

matrix_m02 = np.append(matrix_m01, np.zeros([len(matrix_m01), 1]), 1)

sample = pd.DataFrame(matrix_m02[0:10])

sample

def get_pair(index, tagged_review):
    possible_pairs_dictionary = {}
    # left window
    counter = 0
    left_index = index - 1
    while (left_index != -1) and (counter < 10):
        if tagged_review[left_index][1] in {'JJ', 'JJR', 'JJS'}:
            distance = index - left_index
            possible_pairs_dictionary.update({tagged_review[left_index][0]: distance})
        left_index -= 1
        counter += 1
    # right window
    counter = 0
    right_index = index + 1
    while (right_index != len(tagged_review)) and (counter < 10):
        if tagged_review[right_index][1] in {'JJ', 'JJR', 'JJS'}:
            distance = right_index - index
            possible_pairs_dictionary.update({tagged_review[right_index][0]: distance})
        right_index += 1
        counter += 1
    # pick the adjective with the shortest distance if multiple are found
    if len(possible_pairs_dictionary) > 1:
        return (min(possible_pairs_dictionary, key=lambda k: possible_pairs_dictionary[k]),
                tagged_review[index][0])
    elif len(possible_pairs_dictionary) == 1:
        return (next(iter(possible_pairs_dictionary)), tagged_review[index][0])
    else:
        return (None, tagged_review[index][0])

# +
from tqdm import tqdm

with tqdm(total=len(matrix_m02)) as pbar:
    for i in range(len(matrix_m02)):
        pairs = []
        tagged_review = matrix_m02[i][2]
        imp_nns = matrix_m02[i][3]
        index = 0
        for (word, tag) in tagged_review:
            if tag in {'NN', 'NNS', 'NNP', 'NNPS'}:
                if word.strip() in imp_nns:
                    (adj, nn) = get_pair(index, tagged_review)
                    if adj is not None:
                        pairs.append((adj.strip(), nn.strip()))
            index += 1
        matrix_m02[i][5] = pairs
        pbar.update(1)
# -

sample = pd.DataFrame(matrix_m02[0:100])

sample

df20 = pd.DataFrame(matrix_m02)
df20.columns = ['userId', 'asin', 'reviewText', 'imp_nns', 'num_of_imp_nouns', 'pairs']

df20.head()

len(df20)

reviews_vs_feature_opinion_pairs = df20[df20['pairs'].map(lambda pairs: len(pairs)) > 0]

len(reviews_vs_feature_opinion_pairs)

reviews_vs_feature_opinion_pairs[0:100]

249871/511364

reviews_vs_feature_opinion_pairs = reviews_vs_feature_opinion_pairs.assign(
    num_of_pairs=reviews_vs_feature_opinion_pairs['pairs'].progress_apply(lambda pairs: len(pairs)))

reviews_vs_feature_opinion_pairs.head()

reviews_vs_feature_opinion_pairs[0:100]
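# The windowed search above can be exercised on a toy tagged review. This is a simplified re-sketch of the pairing rule (the single closest adjective on either side of the noun), not the notebook's exact helper:

```python
def nearest_adjective(index, tagged, window=10):
    """Find the adjective closest to the noun at `index`,
    scanning up to `window` tokens on each side."""
    best = None  # (distance, adjective)
    for j in range(max(0, index - window), min(len(tagged), index + window + 1)):
        word, tag = tagged[j]
        if tag in {'JJ', 'JJR', 'JJS'} and j != index:
            d = abs(j - index)
            if best is None or d < best[0]:
                best = (d, word)
    return best[1] if best else None

tagged = [('a', 'DT'), ('great', 'JJ'), ('plot', 'NN'), ('with', 'IN'),
          ('dull', 'JJ'), ('characters', 'NNS')]
nearest_adjective(2, tagged)  # → 'great' (distance 1 beats 'dull' at distance 2)
```

# Proximity is only a heuristic: in "plot with dull characters" the nearest adjective to "plot" is still the correct one, but longer-range dependencies would need a parser.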
pairs_per_book = reviews_vs_feature_opinion_pairs.groupby(['asin'])[["num_of_pairs"]].sum()
pairs_per_book = pairs_per_book.reset_index()

pairs_per_book.head()

len(pairs_per_book)

48939 - 48853

import plotly
import plotly.plotly as py
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import cufflinks as cf
print(cf.__version__)

# Configure cufflinks
cf.set_config_file(offline=False, world_readable=True, theme='pearl')

pairs_per_book['num_of_pairs'].iplot(kind='histogram', bins=100, xTitle='Number of Pairs', yTitle='Number of Books')

# + jupyter={"outputs_hidden": true}
# Save data
pairs_per_book.to_pickle("../data/interim/006_pairs_per_book.p")

# + jupyter={"outputs_hidden": true}
reviews_vs_feature_opinion_pairs.to_pickle("../data/interim/006_pairs_per_review.p")

# + jupyter={"outputs_hidden": true}
## END_OF_FILE

# + jupyter={"outputs_hidden": true}
notebooks/006_pairing_features_to_opinions.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # **Web scraping 1**

import pandas as pd
from bs4 import BeautifulSoup
import numpy as np
import requests
from time import sleep
from tqdm import tqdm
from multiprocessing.dummy import Pool

# ## **Using APIs**

# APIs (application programming interfaces) allow you to remotely execute a given function. In this class we are going to see how to use APIs to get real-life data with Python, using the [Requests library](https://docs.python-requests.org/en/master/)
# - [Great place for finding useful APIs](https://rapidapi.com/hub)

import datetime as dt

# - Let's see an example with the [Binance](https://binance-docs.github.io/apidocs/spot/en/#change-log) API to get ETHUSDT prices for every hour from May 2017 to May 2018. In this case we don't have a subscription to the Binance API, so there is a limit on the size of our request responses. I strongly recommend getting a free subscription if you are developing a project with Binance data

url = "https://api.binance.com/api/v3/klines"
startTime = str(int(dt.datetime(2017, 5, 1).timestamp() * 1000))
endTime = str(int(dt.datetime(2018, 5, 1).timestamp() * 1000))
limit = '1000'
req_params = {"symbol": 'ETHUSDT', 'interval': '1h', 'startTime': startTime, 'endTime': endTime, 'limit': limit}
response = requests.get(url, params=req_params)

# - The response to the request we made is delivered as JSON. Let's make a DataFrame out of this data.
# + jupyter={"outputs_hidden": true}
response.text
# -

import json

df = pd.DataFrame(json.loads(response.text))

df

df = df.iloc[:, 0:6]
df.columns = ['datetime', 'open', 'high', 'low', 'close', 'volume']
df = df.set_index('datetime')
df.index = [dt.datetime.fromtimestamp(x / 1000.0) for x in df.index]

# + [markdown] jupyter={"outputs_hidden": true}
# - Can we make a function that requests data of any symbol for every specified time period?
# -

def extraer_data_Symbol(symbol, interval, start, end):
    # define limit before building req_params (the original assigned it afterwards,
    # so req_params referenced it before assignment); uses the global klines `url`
    limit = '1000'
    req_params = {"symbol": symbol, 'interval': interval, 'startTime': start, 'endTime': end, 'limit': limit}
    response = requests.get(url, params=req_params)
    df = pd.DataFrame(json.loads(response.text))
    df = df.iloc[:, 0:6]
    df.columns = ['datetime', 'open', 'high', 'low', 'close', 'volume']
    df.index = [dt.datetime.fromtimestamp(x / 1000.0) for x in df.datetime]
    return df

Sym = 'https://api.binance.com/api/v3/exchangeInfo'
Symbols = json.loads(requests.get(Sym).text)
activos = []
for i in Symbols['symbols']:
    activos.append(i['symbol'])

# ## **Web scraping with BeautifulSoup**

# [BeautifulSoup documentation](https://beautiful-soup-4.readthedocs.io/en/latest/)

# - Generally, the first thing we want to do is request the page and parse its HTML

url = 'https://www.imdb.com/chart/moviemeter/?sort=ir,desc&mode=simple&page=1'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')

datas = []
for i in tqdm(range(0, len(soup.find_all('td', {'class': 'titleColumn'})))):
    data = {}
    try:
        data['titulo'] = soup.find_all('td', {'class': 'titleColumn'})[i].find('a').text
    except: pass
    try:
        data['link'] = 'https://www.imdb.com/' + soup.find_all('td', {'class': 'titleColumn'})[i].find('a').get('href')
    except: pass
    try:
        data['puntaje'] = soup.find_all('td', {'class': 'ratingColumn imdbRating'})[i].find('strong').text
    except: pass
    datas.append(data)

len(datas)

datas[0]

data = pd.DataFrame(datas)

links = data['link'].to_list()

# + jupyter={"outputs_hidden": true}
tags = []
infos = []
for i in tqdm(links):
    soup = 0
    page = requests.get(i)
    soup = BeautifulSoup(page.content, 'html.parser')
    info = {}
    try:
        info['año_lanzamiento'] = soup.find('ul', {'class': 'ipc-inline-list ipc-inline-list--show-dividers TitleBlockMetaData__MetaDataList-sc-12ein40-0 dxizHm baseAlt'}).findAll('li')[0].text[:4]
    except: pass
    try:
        info['duracion_peli'] = soup.find('ul', {'class': 'ipc-inline-list ipc-inline-list--show-dividers TitleBlockMetaData__MetaDataList-sc-12ein40-0 dxizHm baseAlt'}).findAll('li')[2].text
    except: pass
    infos.append(info)
# -

# ## **Parallelizing scraping**

a = int(len(links) / 10)

infos = []
def scrap(x):
    global infos
    for i in tqdm(links[x:x+10]):
        soup = 0
        page = requests.get(i)
        soup = BeautifulSoup(page.content, 'html.parser')
        info = {}
        try:
            info['año_lanzamiento'] = soup.find('ul', {'class': 'ipc-inline-list ipc-inline-list--show-dividers TitleBlockMetaData__MetaDataList-sc-12ein40-0 dxizHm baseAlt'}).findAll('li')[0].text[:4]
        except: pass
        try:
            info['duracion_peli'] = soup.find('ul', {'class': 'ipc-inline-list ipc-inline-list--show-dividers TitleBlockMetaData__MetaDataList-sc-12ein40-0 dxizHm baseAlt'}).findAll('li')[2].text
        except: pass
        infos.append(info)

# + jupyter={"outputs_hidden": true}
tags = []
j = 0
k = []
for i in range(0, 10):
    k.append(j)
    j += 10
pool = Pool(10)
re = pool.map(scrap, k)
# +
tags = []
pd.DataFrame(infos)
# -

# - **Using headers**: Sometimes pages won't respond to your request if it is not made the way a human browser would make it. One way to make your request more "human-like" is by adding headers.
headers = {
    'authority': 'www.imdb.com',
    'method': 'GET',
    'path': '/chart/moviemeter/?sort=ir,desc&mode=simple&page=1',
    'scheme': 'https',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'es-ES,es;q=0.9',
    'cache-control': 'max-age=0',
    'cookie': '<KEY>; session-id=136-8467877-9681758; adblk=adblk_no; ubid-main=133-3750919-3113945; session-id-time=2082787201l; session-token=<KEY>; csm-hit=tb:s-PGX2A0861DEA1XENQNVS|1629483407253&t:1629483408943&adb:adblk_no',
    'sec-ch-ua': '"Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"',
    'sec-ch-ua-mobile': '?0',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'none',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

url = 'https://www.imdb.com/chart/moviemeter/?sort=ir,desc&mode=simple&page=1'
page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')

# - [Great place for finding useful APIs](https://rapidapi.com/hub)

# - Scraping an image

# +
### nordvpn
### headers
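# - The "Scraping an image" note above has no code; a hedged sketch of one way to do it: collect the `src` of every `<img>` tag, then fetch the bytes with `requests` (the sample HTML and file name below are made up; the commented lines show the network step):

```python
from bs4 import BeautifulSoup

def extract_image_urls(html):
    """Collect the src attribute of every <img> tag in an HTML document."""
    soup = BeautifulSoup(html, 'html.parser')
    return [img.get('src') for img in soup.find_all('img') if img.get('src')]

# In practice, fetch a page first and then download one of the images:
#   page = requests.get(url, headers=headers)
#   urls = extract_image_urls(page.text)
#   with open('poster.jpg', 'wb') as f:
#       f.write(requests.get(urls[0], headers=headers).content)

sample_html = '<div><img src="/img/a.jpg"><img alt="no src"><img src="/img/b.png"></div>'
print(extract_image_urls(sample_html))  # ['/img/a.jpg', '/img/b.png']
```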
e-ta4_webscraping_basics/e-ta4_webscraping_basics.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import sys import importlib sys.path.insert(0, '/n/groups/htem/Segmentation/shared-nondev/cb2_segmentation/analysis_mf_grc') import my_plot importlib.reload(my_plot) from my_plot import MyPlotData, my_box_plot def to_ng_coord(coord): return ( int(coord[0]/4), int(coord[1]/4), int(coord[2]/40), ) script_n = 'grc_locations_all_210520' # def get_eucledean_dist(a, b): # return np.linalg.norm( # (a[0]-b[0], a[1]-b[1], a[2]-b[2])) # def get_distance(u, v): # return get_eucledean_dist(u, v) import compress_pickle # input_graph = compress_pickle.load('/n/groups/htem/Segmentation/shared-nondev/cb2_segmentation/analysis_mf_grc/mf_grc_model/input_graph_201114_restricted_z.gz') input_graph = compress_pickle.load('/n/groups/htem/Segmentation/shared-nondev/cb2_segmentation/analysis_mf_grc/mf_grc_model/input_graph_201114.gz') grcs = [k for k in input_graph.grcs.keys()] z_min = 19800 z_max = 29800 # z_min = 19800-10000 # z_max = 29800+10000 # z_min = 19800+2500 # z_max = 29800-2500 z_min = 19800-2500 z_max = 29800+2500 z_min = 19800-5000 z_max = 29800+5000 # GrCs are fully reconstructed and proofread from 90k to 150k # x_min = 100*1000*4 # x_max = 140*1000*4 x_min = 105*1000*4 x_max = 135*1000*4 # x_max = 115*1000*4 # x_max = 125*1000*4 mpd = MyPlotData() # claw_lengths = defaultdict(int) num_grcs = 0 for grc_id in input_graph.grcs: grc = input_graph.grcs[grc_id] soma_loc = grc.soma_loc x, y, z = soma_loc if x < x_min or x > x_max: continue if z < z_min or z > z_max: continue mpd.add_data_point( x=x/1000, y=-y/1000, z=z/1000, claw_count=max(len(grc.edges), 2), ) num_grcs += 1 print(f'Counted {num_grcs} grcs within bounds') # + def custom_legend_fn(plt): # plt.legend(bbox_to_anchor=(1.025, .8), loc='upper left', borderaxespad=0.) 
plt.legend(bbox_to_anchor=(1, .8), loc='upper left', frameon=False) importlib.reload(my_plot); my_plot.my_relplot( mpd, kind='scatter', x="x", y="y", # ax=ax, hue="claw_count", size="claw_count", xlim=(420, 540), ylim=(-480, -380), width=10, custom_legend_fn=custom_legend_fn, ) # + importlib.reload(my_plot); my_plot.my_displot( mpd, x="x", y='y', cbar=True, ) # + def custom_legend_fn(plt): # plt.legend(bbox_to_anchor=(1.025, .8), loc='upper left', borderaxespad=0.) plt.legend(bbox_to_anchor=(1, .925), loc='upper left', frameon=False) save_filename=f'{script_n}_xy.svg' import seaborn as sns importlib.reload(my_plot); my_plot.my_relplot( mpd, context='paper', font_scale=.65, kind='scatter', x="x", y="y", s=10, linewidth=0, alpha=.9, aspect=2.9, width=3.5, xlim=(None, x_max-x_min+10), hue="claw_count", palette=sns.color_palette("mako_r", as_cmap=True), save_filename=save_filename, y_axis_label='Y (μm)', # title='Granule Cell Cell Body Locations', # x_axis_label='Dorsal-ventral Axis: X (μm)', custom_legend_fn=custom_legend_fn, show=True, ) # + save_filename=f'{script_n}_xz.svg' import seaborn as sns importlib.reload(my_plot); my_plot.my_relplot( mpd, # context='paper', # font_scale=1, kind='scatter', x="x", y="z", aspect=2.9, width=10, xlim=(None, x_max-x_min+10), # size="claw_count", hue="claw_count", palette=sns.color_palette("mako_r", as_cmap=True), # alpha=.9, y_axis_label='Z (μm)', x_axis_label='X (μm)', save_filename=save_filename, custom_legend_fn=custom_legend_fn, show=True, ) # + # Counting all GrCs within common subvolume x_min = 280*1000 x_max = 600*1000 mpd = MyPlotData() # claw_lengths = defaultdict(int) num_grcs = 0 for grc_id in input_graph.grcs: grc = input_graph.grcs[grc_id] soma_loc = grc.soma_loc x, y, z = soma_loc if x < x_min or x > x_max: continue # if z < z_min or z > z_max: # continue mpd.add_data_point( x=x/1000, y=-y/1000, z=z/1000, claw_count=max(len(grc.edges), 2), ) num_grcs += 1 print(f'Counted {num_grcs} grcs within bounds')
analysis/mf_grc_analysis/grc_locs/grc_locations_all_210520.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import stats_functions as sf
import emission.storage.timeseries.aggregate_timeseries as estag
import emission.storage.timeseries.timequery as estt
import arrow
import numpy as np
import emission.core.get_database as edb
from emission.core.wrapper.user import User
import matplotlib.pyplot as plt
import pandas as pd
import uuid
from datetime import timedelta, date
import math
from scipy import stats

all_users = pd.DataFrame(list(edb.get_uuid_db().find({}, {"uuid": 1, "_id": 0})))
num_users = all_users.shape[0]
if num_users <= 0:
    raise Exception("No users in DB")

def calc_weeks(d1, d2):
    monday1 = (d1 - timedelta(days=d1.weekday()))
    monday2 = (d2 - timedelta(days=d2.weekday()))
    return int(math.floor((monday2 - monday1).days / 7))

# +
# Create a dataframe with columns user_id, carbon intensity, day, week number, and group (info/emotion).
df = pd.DataFrame()
for i in range(len(all_users)):
    # index into the uuid column; the original `all_users[i]` would raise a KeyError
    # because integer keys are treated as column labels on a DataFrame
    user_id = all_users.iloc[i].uuid

    # Determine group for the user.
    group = "none"
    try:
        client = edb.get_profile_db().find_one({"user_id": user_id})['client']
        if client == 'urap-2017-information':
            group = "information"
        elif client == 'urap-2017-emotion':
            group = "emotion"
        elif client == 'urap-2017-control':
            group = "control"
    except:
        continue

    start = arrow.get('2018-03-28', 'YYYY-MM-DD')
    end = arrow.get('2018-06-06', 'YYYY-MM-DD')
    for day in arrow.Arrow.range('day', start, end):
        begin_ts = day.timestamp
        end_ts = (day + timedelta(days=1)).timestamp
        val = User.computeCarbon(user_id, begin_ts, end_ts)
        if val is not None:
            # Append a row to the df.
            week = calc_weeks(start, day)
            df = df.append({'uuid': user_id, 'carbon_intensity': val, 'week': week, 'group': group}, ignore_index=True)
# -

# Mean carbon intensity for each user.
mean_user_carbon_df = (df.groupby(['group' , 'uuid', 'week']).sum().reset_index()).drop('week', axis=1).groupby(['group' , 'uuid']).mean() mean_user_carbon_df = mean_user_carbon_df.reset_index() mean_user_carbon_df # + diff_df = pd.DataFrame() # Only includes users with carbon intensities for more than one week. curr_uuid = None for index, row in df.groupby(['group' , 'uuid', 'week']).sum().iterrows(): curr_c_intensity = row['carbon_intensity'] group = index[0] uuid = index[1] week = index[2] if curr_uuid == None: curr_uuid = uuid if uuid == curr_uuid: if week == 0: val = math.nan else: val = 100 * (curr_c_intensity - prev_c_intensity)/mean_user_carbon_df[mean_user_carbon_df.uuid == curr_uuid].iloc[0].carbon_intensity diff_df = diff_df.append({'uuid': uuid, 'carbon_intensity_diff (%)': val, 'week': week, 'group': group}, ignore_index=True) if uuid != curr_uuid: curr_uuid = uuid prev_c_intensity = curr_c_intensity diff_df = diff_df[1:len(diff_df)] diff_df # - # Averaged change in carbon intensity across users' weekly total carbon intensity. 
mean_df = diff_df.groupby(['group', 'uuid']).sum()
mean_df

df_group_change = mean_df.groupby(['group']).mean()
df_group_change

import numpy as np
df_group_change = mean_df.groupby(['group']).var()
print("control: ", np.var(mean_df.loc['control']))
print("emotion: ", np.var(mean_df.loc['emotion']))
print("information: ", np.var(mean_df.loc['information']))
df_group_change

# # Permutation Testing

# +
mean_df = mean_df.reset_index()
control_diff_simple_avg_df = mean_df.loc[mean_df.group == "control"]
emotion_diff_simple_avg_df = mean_df.loc[mean_df.group == "emotion"]
information_diff_simple_avg_df = mean_df.loc[mean_df.group == "information"]

control_emotion_diff_df = mean_df[mean_df.group != "information"]
control_information_diff_df = mean_df[mean_df.group != "emotion"]
emotion_information_diff_df = mean_df[mean_df.group != "control"]
control_emotion_diff_df
# -

print(sf.perm_test(control_emotion_diff_df['group'], control_emotion_diff_df['carbon_intensity_diff (%)'], sf.mean_diff, 100000))
print("Control vs Emotion")

print(sf.perm_test(control_information_diff_df['group'], control_information_diff_df['carbon_intensity_diff (%)'], sf.mean_diff, 100000))
print("Control vs Info")

print(sf.perm_test(emotion_information_diff_df['group'], emotion_information_diff_df['carbon_intensity_diff (%)'], sf.mean_diff, 100000))
print("Info vs Emotion")

# # Bootstrapping Tests

print(sf.bootstrap_test(control_information_diff_df['group'], control_information_diff_df['carbon_intensity_diff (%)'], sf.mean_diff, 100000))

# # Mann Whitney U Tests

# +
from scipy.stats import mannwhitneyu

control = mean_df[mean_df['group'] == 'control']
# DataFrame.as_matrix was removed in pandas 1.0; select the columns and call to_numpy instead
control_array = control[control.columns[2:]].to_numpy()

info = mean_df[mean_df['group'] == 'information']
info_array = info[info.columns[2:]].to_numpy()

print(mannwhitneyu(info_array, control_array))
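# The permutation tests above call `sf.perm_test` from the local `stats_functions` module; as a point of reference, a minimal self-contained permutation test for a difference in group means could look like this (the numbers below are toy values, not study data):

```python
import numpy as np

def perm_test_mean_diff(group_a, group_b, n_perms=2000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = np.random.default_rng(seed)
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)  # random relabeling of the pooled observations
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            count += 1
    return count / n_perms

# Clearly separated groups should yield a small p-value.
p = perm_test_mean_diff([1, 2, 3, 4, 5], [11, 12, 13, 14, 15])
print(p)
```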
tripaware_2017/Uncleared Outputs Notebooks/Change in Carbon Footprint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Monte Carlo Simulation # # Run MC simulation from imported module # + import os import mcsim.monte_carlo # - help(mcsim.monte_carlo) # + # Set simulation parameters reduced_temperature = 1.5 num_steps = 2000 max_displacement = 0.1 cutoff = 3 freq = 1000 # Read initial coordinates file_path = os.path.join('lj_sample_configurations', 'lj_sample_config_periodic1.txt') atomic_coordinates, box_length = mcsim.monte_carlo.read_xyz(file_path) # - len(atomic_coordinates) final_coordinates = mcsim.monte_carlo.run_simulation(atomic_coordinates, box_length, cutoff, reduced_temperature, num_steps) final_coordinates final_coordinates_14 = mcsim.monte_carlo.run_simulation(atomic_coordinates, box_length, cutoff, 1.4, num_steps)
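# The energy function itself is encapsulated in `mcsim.monte_carlo`; for context, a Lennard-Jones simulation in reduced units (suggested by the `reduced_temperature` and dimensionless `cutoff` above) typically uses the pair energy $U(r) = 4[(1/r)^{12} - (1/r)^{6}]$. This is an assumption about the module's internals, sketched here:

```python
def lj_potential(r):
    """Lennard-Jones pair energy in reduced units: U(r) = 4[(1/r)**12 - (1/r)**6]."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

print(lj_potential(1.0))           # 0.0: the potential crosses zero at r = 1 (i.e. r = sigma)
print(lj_potential(2 ** (1 / 6)))  # ~ -1.0: the depth of the well minimum, at r = 2**(1/6)
```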
run_mc.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# +
# Copyright 2021 Google LLC
# Use of this source code is governed by an MIT-style
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.
# Notebook authors: <NAME> (<EMAIL>)
# and <NAME> (<EMAIL>)

# This notebook reproduces figures for chapter 2 from the book
# "Probabilistic Machine Learning: An Introduction"
# by <NAME> (MIT Press, 2021).
# Book pdf is available from http://probml.ai
# -

# <a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a>

# <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/figures/chapter2_probability_univariate_models_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# ## Figure 2.1:<a name='2.1'></a> <a name='multinomApp'></a>
#
# Some discrete distributions on the state space $\mathcal{X} = \{1,2,3,4\}$. (a) A uniform distribution with $p(x=k)=1/4$. (b) A degenerate distribution (delta function) that puts all its mass on $x=1$.
# Figure(s) generated by [discrete_prob_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/discrete_prob_dist_plot.py)

#@title Click me to run setup { display-mode: "form" }
try:
    if PYPROBML_SETUP_ALREADY_RUN:
        print('skipping setup')
except:
    PYPROBML_SETUP_ALREADY_RUN = True
    print('running setup...')
    # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
    # %cd -q /pyprobml/scripts
    import pyprobml_utils as pml
    import colab_utils
    import os
    os.environ["PYPROBML"] = ".."
# one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./discrete_prob_dist_plot.py") # %run discrete_prob_dist_plot.py # ## Figure 2.2:<a name='2.2'></a> <a name='gaussianQuantileApp'></a> # # (a) Plot of the cdf for the standard normal, $\mathcal N (0,1)$. # Figure(s) generated by [gauss_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/gauss_plot.py) [quantile_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/quantile_plot.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./gauss_plot.py") # %run gauss_plot.py google.colab.files.view("./quantile_plot.py") # %run quantile_plot.py # ## Figure 2.3:<a name='2.3'></a> <a name='roweis-xtimesy'></a> # # Computing $p(x,y) = p(x) p(y)$, where $ X \perp Y $. Here $X$ and $Y$ are discrete random variables; $X$ has 6 possible states (values) and $Y$ has 5 possible states. A general joint distribution on two such variables would require $(6 \times 5) - 1 = 29$ parameters to define it (we subtract 1 because of the sum-to-one constraint). By assuming (unconditional) independence, we only need $(6-1) + (5-1) = 9$ parameters to define $p(x,y)$. 
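# The parameter counting in the caption is easy to check numerically: under independence the joint is the outer product of the marginals (the marginal values below are made up for illustration):

```python
import numpy as np

# Made-up marginals: X has 6 states, Y has 5 states.
px = np.array([0.1, 0.1, 0.2, 0.2, 0.2, 0.2])
py = np.array([0.3, 0.3, 0.2, 0.1, 0.1])

# Under independence, p(x, y) = p(x) p(y) for every state pair.
joint = np.outer(px, py)

print(joint.shape)  # (6, 5)
print(joint.sum())  # ~1.0: still a valid distribution
# A general joint needs 6 * 5 - 1 = 29 free parameters;
# the factored form needs only (6 - 1) + (5 - 1) = 9.
```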
#@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') # ## Figure 2.4:<a name='2.4'></a> <a name='bimodal'></a> # # Illustration of a mixture of two 1d Gaussians, $p(x) = 0.5 \mathcal N (x|0,0.5) + 0.5 \mathcal N (x|2,0.5)$. # Figure(s) generated by [bimodal_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/bimodal_dist_plot.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./bimodal_dist_plot.py") # %run bimodal_dist_plot.py # ## Figure 2.5:<a name='2.5'></a> <a name='kalmanHead'></a> # # (a) Any planar line-drawing is geometrically consistent with infinitely many 3-D structures. From Figure 11 of <a href='#Sinha1993'>[PE93]</a> . Used with kind permission of <NAME>. (b) Vision as inverse graphics. 
# The agent (here represented by a human head) has to infer the scene $h$ given the image $y$ using an estimator, which computes $\hat{h}(y) = \operatorname*{argmax}_h p(h|y)$. From Figure 1 of <a href='#Rao1999'>[RP99]</a>. Used with kind permission of <NAME>.

#@title Click me to run setup { display-mode: "form" }
try:
    if PYPROBML_SETUP_ALREADY_RUN:
        print('skipping setup')
except:
    PYPROBML_SETUP_ALREADY_RUN = True
    print('running setup...')
    # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
    # %cd -q /pyprobml/scripts
    import pyprobml_utils as pml
    import colab_utils
    import os
    os.environ["PYPROBML"] = ".."
    # one above current scripts directory
    import google.colab
    from google.colab.patches import cv2_imshow
    # %reload_ext autoreload
    # %autoreload 2
    def show_image(img_path, size=None, ratio=None):
        img = colab_utils.image_resize(img_path, size)
        cv2_imshow(img)
    print('finished!')

show_image("/pyprobml/book1/figures/images/Figure_2.5_A.png")
show_image("/pyprobml/book1/figures/images/Figure_2.5_B.png")
show_image("/pyprobml/book1/figures/images/Figure_2.5_C.png")

# ## Figure 2.6:<a name='2.6'></a> <a name='binomDist'></a>
#
# Illustration of the binomial distribution with $N=10$ and (a) $\theta=0.25$ and (b) $\theta=0.9$.
# Figure(s) generated by [binom_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/binom_dist_plot.py)

#@title Click me to run setup { display-mode: "form" }
try:
    if PYPROBML_SETUP_ALREADY_RUN:
        print('skipping setup')
except:
    PYPROBML_SETUP_ALREADY_RUN = True
    print('running setup...')
    # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
    # %cd -q /pyprobml/scripts
    import pyprobml_utils as pml
    import colab_utils
    import os
    os.environ["PYPROBML"] = ".."
    # one above current scripts directory
    import google.colab
    from google.colab.patches import cv2_imshow
    # %reload_ext autoreload
    # %autoreload 2
    def show_image(img_path, size=None, ratio=None):
        img = colab_utils.image_resize(img_path, size)
        cv2_imshow(img)
    print('finished!')

google.colab.files.view("./binom_dist_plot.py")
# %run binom_dist_plot.py

# ## Figure 2.7:<a name='2.7'></a> <a name='sigmoidHeaviside'></a>
#
# (a) The sigmoid (logistic) function $\sigma(a)=(1+e^{-a})^{-1}$. (b) The Heaviside function $\mathbb{I}\left(a>0\right)$.
# Figure(s) generated by [activation_fun_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/activation_fun_plot.py)

#@title Click me to run setup { display-mode: "form" }
try:
    if PYPROBML_SETUP_ALREADY_RUN:
        print('skipping setup')
except:
    PYPROBML_SETUP_ALREADY_RUN = True
    print('running setup...')
    # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
    # %cd -q /pyprobml/scripts
    import pyprobml_utils as pml
    import colab_utils
    import os
    os.environ["PYPROBML"] = ".."
    # one above current scripts directory
    import google.colab
    from google.colab.patches import cv2_imshow
    # %reload_ext autoreload
    # %autoreload 2
    def show_image(img_path, size=None, ratio=None):
        img = colab_utils.image_resize(img_path, size)
        cv2_imshow(img)
    print('finished!')

google.colab.files.view("./activation_fun_plot.py")
# %run activation_fun_plot.py

# ## Figure 2.8:<a name='2.8'></a> <a name='iris-logreg-1d'></a>
#
# Logistic regression applied to a 1-dimensional, 2-class version of the Iris dataset.
# Figure(s) generated by [iris_logreg.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_logreg.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./iris_logreg.py") # %run iris_logreg.py # ## Figure 2.9:<a name='2.9'></a> <a name='softmaxDemo'></a> # # Softmax distribution $\mathcal S (\mathbf a /T)$, where $\mathbf a =(3,0,1)$, at temperatures of $T=100$, $T=2$ and $T=1$. When the temperature is high (left), the distribution is uniform, whereas when the temperature is low (right), the distribution is ``spiky'', with most of its mass on the largest element. # Figure(s) generated by [softmax_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/softmax_plot.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." 
# one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./softmax_plot.py") # %run softmax_plot.py # ## Figure 2.10:<a name='2.10'></a> <a name='iris-logistic-2d-3class-prob'></a> # # Logistic regression on the 3-class, 2-feature version of the Iris dataset. Adapted from Figure of 4.25 <a href='#Geron2019'>[Aur19]</a> . # Figure(s) generated by [iris_logreg.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_logreg.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./iris_logreg.py") # %run iris_logreg.py # ## Figure 2.11:<a name='2.11'></a> <a name='gaussianPdf'></a> # # (a) Cumulative distribution function (cdf) for the standard normal. 
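# The standard normal cdf plotted here has the closed form $\Phi(x) = \frac{1}{2}(1 + \mathrm{erf}(x/\sqrt{2}))$, so it can be evaluated with the standard library alone:

```python
import math

def std_normal_cdf(x):
    """Cdf of N(0, 1): Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(std_normal_cdf(0.0))   # 0.5, by symmetry
print(std_normal_cdf(1.96))  # ~0.975, the familiar two-sided 95% quantile
```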
# Figure(s) generated by [gauss_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/gauss_plot.py) [quantile_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/quantile_plot.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./gauss_plot.py") # %run gauss_plot.py google.colab.files.view("./quantile_plot.py") # %run quantile_plot.py # ## Figure 2.12:<a name='2.12'></a> <a name='hetero'></a> # # Linear regression using Gaussian output with mean $\mu (x)=b + w x$ and (a) fixed variance $\sigma ^2$ (homoskedastic) or (b) input-dependent variance $\sigma (x)^2$ (heteroscedastic). # Figure(s) generated by [linreg_1d_hetero_tfp.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_1d_hetero_tfp.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." 
# one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./linreg_1d_hetero_tfp.py") # %run linreg_1d_hetero_tfp.py # ## Figure 2.13:<a name='2.13'></a> <a name='studentPdf'></a> # # (a) The pdf's for a $\mathcal N (0,1)$, $\mathcal T (\mu =0,\sigma =1,\nu =1)$, $\mathcal T (\mu =0,\sigma =1,\nu =2)$, and $\mathrm Lap (0,1/\sqrt 2 )$. The mean is 0 and the variance is 1 for both the Gaussian and Laplace. When $\nu =1$, the Student is the same as the Cauchy, which does not have a well-defined mean and variance. (b) Log of these pdf's. Note that the Student distribution is not log-concave for any parameter value, unlike the Laplace distribution. Nevertheless, both are unimodal. # Figure(s) generated by [student_laplace_pdf_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/student_laplace_pdf_plot.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./student_laplace_pdf_plot.py") # %run student_laplace_pdf_plot.py # ## Figure 2.14:<a name='2.14'></a> <a name='robustDemo'></a> # # Illustration of the effect of outliers on fitting Gaussian, Student and Laplace distributions. 
(a) No outliers (the Gaussian and Student curves are on top of each other). (b) With outliers. We see that the Gaussian is more affected by outliers than the Student and Laplace distributions. Adapted from Figure 2.16 of <a href='#BishopBook'>[Bis06]</a> . # Figure(s) generated by [robust_pdf_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/robust_pdf_plot.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./robust_pdf_plot.py") # %run robust_pdf_plot.py # ## Figure 2.15:<a name='2.15'></a> <a name='gammaDist'></a> # # (a) Some beta distributions. # Figure(s) generated by [beta_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/beta_dist_plot.py) [gamma_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/gamma_dist_plot.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." 
# one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./beta_dist_plot.py") # %run beta_dist_plot.py google.colab.files.view("./gamma_dist_plot.py") # %run gamma_dist_plot.py # ## Figure 2.16:<a name='2.16'></a> <a name='changeOfVar1d'></a> # # (a) Mapping a uniform pdf through the function $f(x) = 2x + 1$. (b) Illustration of how two nearby points, $x$ and $x+dx$, get mapped under $f$. If $\frac{dy}{dx} > 0$, the function is locally increasing, but if $\frac{dy}{dx} < 0$, the function is locally decreasing. From <a href='#JangBlog'>[Jan18]</a> . Used with kind permission of <NAME> #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') show_image("/pyprobml/book1/figures/images/Figure_2.16_A.png") show_image("/pyprobml/book1/figures/images/Figure_2.16_B.png") # ## Figure 2.17:<a name='2.17'></a> <a name='affine2d'></a> # # Illustration of an affine transformation applied to a unit square, $f(\mathbf{x}) = \mathbf{A} \mathbf{x} + \mathbf{b}$. (a) Here $\mathbf{A} = \mathbf{I}$. (b) Here $\mathbf{b} = \boldsymbol{0}$. From <a href='#JangBlog'>[Jan18]</a> .
Used with kind permission of <NAME> #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') show_image("/pyprobml/book1/figures/images/Figure_2.17.png") # ## Figure 2.18:<a name='2.18'></a> <a name='polar'></a> # # Change of variables from polar to Cartesian. The area of the shaded patch is $r \, dr \, d\theta$. Adapted from Figure 3.16 of <a href='#Rice95'>[Ric95]</a> #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') show_image("/pyprobml/book1/figures/images/Figure_2.18.png") # ## Figure 2.19:<a name='2.19'></a> <a name='bellCurve'></a> # # Distribution of the sum of two dice rolls, i.e., $p(y)$ where $y=x_1 + x_2$ and $x_i \sim \mathrm{Unif}(\{1,2,\ldots,6\})$. From https://en.wikipedia.org/wiki/Probability\_distribution .
Used with kind permission of Wikipedia author <NAME>. #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') show_image("/pyprobml/book1/figures/images/Figure_2.19.png") # ## Figure 2.20:<a name='2.20'></a> <a name='clt'></a> # # The central limit theorem in pictures. We plot a histogram of $\mu_N^s = \frac{1}{N} \sum_{n=1}^N x_{ns}$, where $x_{ns} \sim \mathrm{Beta}(1,5)$, for $s=1:10000$. As $N\rightarrow \infty$, the distribution tends towards a Gaussian. (a) $N=1$. (b) $N=5$. Adapted from Figure 2.6 of <a href='#BishopBook'>[Bis06]</a> . # Figure(s) generated by [centralLimitDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/centralLimitDemo.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".."
# one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./centralLimitDemo.py") # %run centralLimitDemo.py # ## Figure 2.21:<a name='2.21'></a> <a name='changeOfVars'></a> # # Computing the distribution of $y=x^2$, where $p(x)$ is uniform (left). The analytic result is shown in the middle, and the Monte Carlo approximation is shown on the right. # Figure(s) generated by [change_of_vars_demo1d.py](https://github.com/probml/pyprobml/blob/master/scripts/change_of_vars_demo1d.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./change_of_vars_demo1d.py") # %run change_of_vars_demo1d.py # ## Figure 2.22:<a name='2.22'></a> <a name='PIT'></a> # # Illustration of the probability integral transform. Left column: 3 different pdf's for $p(X)$ from which we sample $x_n \sim p(x)$. Middle column: empirical cdf of $y_n=P_X(x_n)$. Right column: empirical pdf of $p(y_n)$ using a kernel density estimate. Adapted from Figure 11.17 of <a href='#BMC'>[MKL11]</a> . 
# Figure(s) generated by [ecdf_sample.py](https://github.com/probml/pyprobml/blob/master/scripts/ecdf_sample.py) #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') google.colab.files.view("./ecdf_sample.py") # %run ecdf_sample.py # ## Figure 2.23:<a name='2.23'></a> <a name='KStest'></a> # # Illustration of the Kolmogorov–Smirnov statistic. The red line is a model CDF, the blue line is an empirical CDF, and the black arrow is the K–S statistic. From https://en.wikipedia.org/wiki/Kolmogorov\_Smirnov_test . Used with kind permission of Wikipedia author Bscan. #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." 
# one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') show_image("/pyprobml/book1/figures/images/Figure_2.23.png") # ## References: # <a name='Geron2019'>[Aur19]</a> <NAME> "Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for BuildingIntelligent Systems (2nd edition)". (2019). # # <a name='BishopBook'>[Bis06]</a> <NAME> "Pattern recognition and machine learning". (2006). # # <a name='JangBlog'>[Jan18]</a> <NAME> "Normalizing Flows Tutorial". (2018). # # <a name='BMC'>[MKL11]</a> <NAME>, <NAME> and <NAME>. "Bayesian Modeling and Computation in Python". (2011). # # <a name='Sinha1993'>[PE93]</a> <NAME> and <NAME>. "Recovering reflectance and illumination in a world of paintedpolyhedra". (1993). # # <a name='Rao1999'>[RP99]</a> <NAME> "An optimal estimation approach to visual perception and learning". In: Vision Res. (1999). # # <a name='Rice95'>[Ric95]</a> <NAME> "Mathematical statistics and data analysis". (1995). # #
book1/figures/chapter2_probability_univariate_models_figures.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.5 64-bit (''mediapipe_env'': conda)' # name: python3 # --- # # 1. Install and import Dependencies # # !pip install mediapipe opencv-python import mediapipe as mp import cv2 import numpy as np import uuid import os mp_drawing = mp.solutions.drawing_utils mp_hands = mp.solutions.hands # # 2. Draw Hands # <img src=https://google.github.io/mediapipe/images/mobile/hand_landmarks.png> # + cap = cv2.VideoCapture(0) # Confidence # Detctioon : # treshold for the initial detection to be succesful # Tracking : # treshold for tracking after detection with mp_hands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.5) as hands: while cap.isOpened(): ret, frame = cap.read() # BGR to RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # Flip on horizontal # Flip the image horizontally for a later selfie-view display # no mirror effect ! image = cv2.flip(image, 1) # Set flag # To improve performance, optionally mark the image as not writeable to # pass by reference. image.flags.writeable = False # Detections results = hands.process(image) # Set flag to true in order to be able to draw on the image image.flags.writeable = True # RGB 2 BGR image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Detections #print(results) # Rendering results if results.multi_hand_landmarks: # we check here whether or not we have results for num, hand in enumerate(results.multi_hand_landmarks): mp_drawing.draw_landmarks(image, hand, mp_hands.HAND_CONNECTIONS, mp_drawing.DrawingSpec(color=(121, 22, 76), thickness=2, circle_radius=4), mp_drawing.DrawingSpec(color=(250, 44, 250), thickness=2, circle_radius=2), ) cv2.imshow('Hand Tracking', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # - # ## Be careful ! 
# If no hand is detected, results.multi_hand_landmarks is None, so always check it before iterating over it! # + print(results.multi_hand_landmarks) # Coordinates # X : Landmark position in the horizontal axis # Y : Landmark position in the vertical axis # Z : Landmark depth from the camera # - print(list(enumerate(results.multi_hand_landmarks))) mp_hands.HAND_CONNECTIONS # # 3. Output Images os.makedirs('Output images', exist_ok=True) # + cap = cv2.VideoCapture(0) # Confidence # Detection : # threshold for the initial detection to be successful # Tracking : # threshold for tracking after detection with mp_hands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.5) as hands: while cap.isOpened(): ret, frame = cap.read() # BGR to RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # Flip on horizontal # Flip the image horizontally for a later selfie-view display # no mirror effect ! image = cv2.flip(image, 1) # Set flag # To improve performance, optionally mark the image as not writeable to # pass by reference. image.flags.writeable = False # Detections results = hands.process(image) # Set flag to true in order to be able to draw on the image image.flags.writeable = True # RGB 2 BGR image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Detections #print(results) # Rendering results print(results.multi_hand_landmarks) # we check here whether or not we have results if results.multi_hand_landmarks: for num, hand in enumerate(results.multi_hand_landmarks): mp_drawing.draw_landmarks(image, hand, mp_hands.HAND_CONNECTIONS, mp_drawing.DrawingSpec(color=(121, 22, 76), thickness=2, circle_radius=4), mp_drawing.DrawingSpec(color=(250, 44, 250), thickness=2, circle_radius=2), ) # Save our image #cv2.imwrite(os.path.join('Output images','{}.jpg'.format(uuid.uuid1())),image) cv2.imshow('Hand Tracking', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # - # ls print(results.multi_hand_landmarks) # # 4.
Detect Left and Right Hands mp_hands.HandLandmark.WRIST results.multi_hand_landmarks[0].landmark[mp_hands.HandLandmark.PINKY_TIP] results.multi_handedness[0].classification[0] # in order to have the label name of the hand # ### Function get_label arguments # - index: the hand result i.e. 0 or 1 # - hand: the actual hand landmarks # - results: all detections from model def get_label(index, hand, results): output = None for idx, classification in enumerate(results.multi_handedness): if classification.classification[0].index == index: # Process results label = classification.classification[0].label score = classification.classification[0].score text = '{} {}'.format(label, round(score, 2)) # Extract Coordinates """ coords = tuple ( np.multiply ( np.array ( ( hand.landmark[mp_hands.HandLandmark.WRIST].x, hand.landmark[mp_hands.HandLandmark.WRIST].y ) ), [640,480] ) .astype(int) ) """ coords = tuple(np.multiply(np.array((hand.landmark[mp_hands.HandLandmark.WRIST].x, hand.landmark[mp_hands.HandLandmark.WRIST].y)),[640,480]).astype(int)) output = text, coords return output hand.landmark[mp_hands.HandLandmark.WRIST].x # in order to understand results.multi_handedness[0].classification[0].score # + #print(num) #print(hand) #print(results) # - get_label(num,hand,results) # + cap = cv2.VideoCapture(0) # Confidence # Detection : # threshold for the initial detection to be successful # Tracking : # threshold for tracking after detection with mp_hands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.5) as hands: while cap.isOpened(): ret, frame = cap.read() # BGR to RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # Flip on horizontal # Flip the image horizontally for a later selfie-view display # no mirror effect ! image = cv2.flip(image, 1) # Set flag # To improve performance, optionally mark the image as not writeable to # pass by reference.
image.flags.writeable = False # Detections results = hands.process(image) # Set flag to true in order to be able to draw on the image image.flags.writeable = True # RGB 2 BGR image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Detections #print(results) # Rendering results # print(results.multi_hand_landmarks) # we check here whether or not we have results if results.multi_hand_landmarks: for num, hand in enumerate(results.multi_hand_landmarks): # becareful doesn't work ! mp_drawing.draw_landmarks(image,hand, mp_hands.HAND_CONNECTIONS, mp_drawing.DrawingSpec(color=(121, 22, 76), thickness=2, circle_radius=4),mp_drawing.DrawingSpec(color=(250, 44, 250), thickness=2, circle_radius=2),) # Render left or right detection if get_label(num, hand, results): text, coord = get_label(num, hand, results) cv2.putText(image, text, coord, cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA) # Save our image #cv2.imwrite(os.path.join('Output images','{}.jpg'.format(uuid.uuid1())),image) cv2.imshow('Hand Tracking', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # - # # 5. 
Calculate Multiple Angles from matplotlib import pyplot as plt joint_list = [[8,7,6],[12,11,10],[1,2,3]] def draw_finger_angles(image, results, joint_list): # Loop through hands for hand in results.multi_hand_landmarks: #Loop through joint sets for joint in joint_list: a = np.array([hand.landmark[joint[0]].x, hand.landmark[joint[0]].y]) # First b = np.array([hand.landmark[joint[1]].x, hand.landmark[joint[1]].y]) # Mid c = np.array([hand.landmark[joint[2]].x, hand.landmark[joint[2]].y]) # End # a(x_a,y_a) et b(x_b,y_b) # coord vect ba : (x_a - x_b, y_a - y_b) # take the vector ba and not ab because angle in b v_ba_x = a[0]-b[0] v_ba_y = a[1]-b[1] v_ba = np.array([v_ba_x,v_ba_y]) v_bc_x = c[0]-b[0] v_bc_y = c[1]-b[1] v_bc = np.array([v_bc_x,v_bc_y]) # angle = atan2(vector2.y, vector2.x) - atan2(vector1.y, vector1.x); radians = np.arctan2(v_bc_y, v_bc_x) - np.arctan2(v_ba_y, v_ba_x) #radian = np.arctan2(c[1]-b[1],c[0]-b[0]) - np.arctan2(a[1]-b[1],a[0]-b[0]) angle = np.abs(radians*180.0/np.pi) if angle > 180.0: angle = 360 - angle cv2.putText(image,str(round(angle,2)), tuple(np.multiply(b, [640, 480]).astype(int)),cv2.FONT_HERSHEY_SIMPLEX, 0.5,(255,255,255),2,cv2.LINE_AA) return image # + cap = cv2.VideoCapture(0) # Confidence # Detctioon : # treshold for the initial detection to be succesful # Tracking : # treshold for tracking after detection with mp_hands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.5) as hands: while cap.isOpened(): ret, frame = cap.read() # BGR to RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # Flip on horizontal # Flip the image horizontally for a later selfie-view display # no mirror effect ! image = cv2.flip(image, 1) # Set flag # To improve performance, optionally mark the image as not writeable to # pass by reference. 
image.flags.writeable = False # Detections results = hands.process(image) # Set flag to true in order to be able to draw on the image image.flags.writeable = True # RGB 2 BGR image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Detections #print(results) # Rendering results #print(results.multi_hand_landmarks) # we check here whether or not we have results if results.multi_hand_landmarks: for num, hand in enumerate(results.multi_hand_landmarks): # becareful doesn't work ! mp_drawing.draw_landmarks(image,hand, mp_hands.HAND_CONNECTIONS, mp_drawing.DrawingSpec(color=(121, 22, 76), thickness=2, circle_radius=4),mp_drawing.DrawingSpec(color=(250, 44, 250), thickness=2, circle_radius=2),) # Render left or right detection if get_label(num, hand, results): text, coord = get_label(num, hand, results) cv2.putText(image, text, coord, cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA) # Draw angles to image from joint list draw_finger_angles(image,results,joint_list) # Save our image #cv2.imwrite(os.path.join('Output images','{}.jpg'.format(uuid.uuid1())),image) cv2.imshow('Hand Tracking', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # -
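The vector math inside `draw_finger_angles` is easy to sanity-check away from the webcam loop. Below is a minimal sketch of the same computation; `angle_at` is a hypothetical helper name, and the points are made-up test coordinates rather than MediaPipe landmarks:

```python
import numpy as np

def angle_at(a, b, c):
    # Hypothetical helper mirroring draw_finger_angles: the angle (in
    # degrees, folded into [0, 180]) at vertex b of the polyline a-b-c.
    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = np.abs(np.degrees(radians))
    return 360.0 - angle if angle > 180.0 else angle

print(angle_at((1, 0), (0, 0), (0, 1)))   # a right angle at the origin
print(angle_at((1, 0), (0, 0), (-1, 0)))  # collinear points give a straight angle
```

Because the difference of two `arctan2` headings can land anywhere in (-360, 360) degrees, the fold back into [0, 180] at the end is what keeps the on-screen numbers readable.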
AI_Hand_Pose.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Summarising, Aggregating, and Grouping data in Python Pandas # ![pandas-python-group-by-named-aggregation-update-1024x451.jpg](attachment:pandas-python-group-by-named-aggregation-update-1024x451.jpg) # ## Pandas – Python Data Analysis Library # I use Python's excellent Pandas library as a data analysis tool. # # One aspect that will be explored is the task of grouping large data frames by different variables and applying summary functions to each group. This is accomplished in Pandas using the "groupby()" and "agg()" functions of Pandas DataFrame objects. # ### A Sample DataFrame # In order to demonstrate the effectiveness and simplicity of the grouping commands, we will need some data. As an example dataset, I have mobile phone usage records; we will analyse this type of data using Pandas. The full CSV file is available at "./data/sample_data.csv". # The dataset contains 830 mobile phone log entries spanning a total time of 5 months. The CSV file can be loaded into a pandas DataFrame using the pandas.read_csv() function. # + # Ignore warnings import warnings warnings.filterwarnings('always') warnings.filterwarnings('ignore') import pandas as pd import dateutil import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # - # Read sample Data df = pd.read_csv("./data/sample_data.csv") df.head(10) # The main columns in the file are: # # * *date :* The date and time of the entry # * *duration :* The duration (in seconds) for each call, the amount of data (in MB) for each data entry, and the number of texts sent (usually 1) for each sms entry. # * *item :* A description of the event occurring – can be one of call, sms, or data. # * *month :* The billing month that each entry belongs to – of form ‘YYYY-MM’.
# * *network :* The mobile network that was called/texted for each entry. # * *network_type :* Whether the number being called was a mobile, international (‘world’), voicemail, landline, or other (‘special’) number. # # Phone numbers were removed for privacy. The date column can be parsed using the extremely handy dateutil library. df['date'] = df['date'].apply(dateutil.parser.parse, dayfirst=True) # ## Summarising the DataFrame # Once the data has been loaded into Python, Pandas makes the calculation of different statistics very simple. For example, mean, max, min, standard deviations and more for columns are easily calculable: # How many rows the dataset df['item'].count() # What was the longest phone call / data entry ? df['duration'].max() # How many seconds of phone calls are recorded in total ? df['duration'][df['item'] == 'call'].sum() # How many entries are there for each month? df['month'].value_counts() # Number of non-null unique network entries df['network'].nunique() # The need for custom functions is minimal unless you have very specific requirements. 
The full range of basic statistics that are quickly calculable and built into the base Pandas package are : # # * *count :* Number of non-null observations # * *sum :* Sum of values # * *mean :* Mean of values # * *mad :* Mean absolute deviation # * *median :* Arithmetic median of values # * *min :* Minimum # * *max :* Maximum # * *mode :* Mode # * *abs :* Absolute Value # * *prod :* Product of values # * *std :* Unbiased standard deviation # * *var :* Unbiased variance # * *sem :* Unbiased standard error of the mean # * *skew :* Unbiased skewness (3rd moment) # * *kurt :* Unbiased kurtosis (4th moment) # * *quantile :* Sample quantile (value at %) # * *cumsum :* Cumulative sum # * *cumprod :* Cumulative product # * *cummax :* Cumulative maximum # * *cummin :* Cumulative minimum # # The *describe()* function is a useful summarisation tool that will quickly display statistics for any variable or group it is applied to. The *describe()* output varies depending on whether you apply it to a numeric or character column. # ## Summarising Groups in the DataFrame # There’s further power put into your hands by mastering the Pandas “groupby()” functionality. Groupby essentially splits the data into different groups depending on a variable of your choice. For example, the expression data.groupby(‘month’) will split our current DataFrame by month. # # The groupby() function returns a GroupBy object, but essentially describes how the rows of the original data set has been split. the GroupBy object .groups variable is a dictionary whose keys are the computed unique groups and corresponding values being the axis labels belonging to each group. For example: df.groupby(['month']).groups.keys() len(df.groupby(['month']).groups['2014-11']) # Functions like max(), min(), mean(), first(), last() can be quickly applied to the GroupBy object to obtain summary statistics for each group – an immensely useful function. 
Different variables can be excluded / included from each summary requirement. # Get the first entry for each month df.groupby('month').first() # Get the sum of the durations per month df.groupby('month')['duration'].sum() # Get the number of dates / entries in each month df.groupby('month')['date'].count() # What is the sum of durations, for calls only, to each network df[df['item'] == 'call'].groupby('network')['duration'].sum() # We can also group by more than one variable, allowing more complex queries. # How many calls, sms, and data entries are in each month? df.groupby(['month', 'item'])['date'].count() # How many calls, texts, and data are sent per month, split by network_type ? df.groupby(['month', 'network_type'])['date'].count() # ## Groupby output format – Series or DataFrame? # The output from a groupby and aggregation operation varies between Pandas Series and Pandas Dataframes, which can be confusing for new users. As a rule of thumb, if you calculate more than one column of results, your result will be a Dataframe. For a single column of results, the agg function, by default, will produce a Series. # # We can change this by selecting our operation column differently: # produces Pandas Series df.groupby('month')['duration'].sum() # Produces Pandas DataFrame df.groupby('month')[['duration']].sum() # The groupby output will have an index or multi-index on rows corresponding to your chosen grouping variables. To avoid setting this index, pass “as_index=False” to the groupby operation. df.groupby('month', as_index=False).agg({"duration": "sum"}) # Using the as_index parameter while Grouping data in pandas prevents setting a row index on the result. # ## Multiple Statistics per Group # The final piece of syntax that we’ll examine is the “agg()” function for Pandas. The aggregation functionality provided by the agg() function allows multiple statistics to be calculated per group in one calculation. 
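Since the notebook's CSV isn't bundled here, the idea can be previewed on a self-contained, invented stand-in frame (the column names mirror the phone-log data; the values are made up):

```python
import pandas as pd

# Invented stand-in for the phone-log data used in this notebook
toy = pd.DataFrame({
    "month":    ["2014-11", "2014-11", "2014-12", "2014-12"],
    "item":     ["call", "sms", "call", "data"],
    "duration": [13.0, 1.0, 124.0, 34.4],
})

# One agg() call computes several per-group statistics at once
toy_stats = toy.groupby("month").agg({"duration": "sum", "item": "count"})
print(toy_stats)
```

The same pattern scales directly to the real dataset loaded earlier; the following cells apply it to `df`.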
# ### Applying a single function to columns in groups # # Instructions for aggregation are provided in the form of a python dictionary or list. The dictionary keys are used to specify the columns upon which you’d like to perform operations, and the dictionary values to specify the function to run. # # For example: # Group the data frame by month and item and extract a number of stats from each group df.groupby( ['month', 'item'] ).agg( { 'duration':sum, # Sum duration per group 'network_type': "count", # get the count of networks 'date': 'first' # get the first date per group } ) # The aggregation dictionary syntax is flexible and can be defined before the operation. You can also define functions inline using “lambda” functions to extract statistics that are not provided by the built-in options. # Define the aggregation procedure outside of the groupby operation aggregations = { 'duration':'sum', 'date': lambda x: max(x) } df.groupby('month').agg(aggregations) # ### Applying multiple functions to columns in groups # To apply multiple functions to a single column in your grouped data, expand the syntax above to pass in a list of functions as the value in your aggregation dataframe. See below : # Group the data frame by month and item and extract a number of stats from each group df.groupby( ['month', 'item'] ).agg( { # Find the min, max, and sum of the duration column 'duration': [min, max, sum], # find the number of network type entries 'network_type': "count", # minimum, first, and number of unique dates 'date': [min, 'first', 'nunique'] } ) # The agg(..) syntax is flexible and simple to use. Remember that you can pass in custom and lambda functions to your list of aggregated calculations, and each will be passed the values from the column in your grouped data. # ### Renaming grouped aggregation columns # We’ll examine two methods to group Dataframes and rename the column results in your work. 
# ![pandas-python-group-by-named-aggregation-update-1024x451.jpg](attachment:pandas-python-group-by-named-aggregation-update-1024x451.jpg) # Grouping, calculating, and renaming the results can be achieved in a single command using the "agg" functionality in Python. A "pd.NamedAgg" is used for clarity, but normal tuples of form (column_name, grouping_function) can also be used. # ### Recommended: Tuple Named Aggregations # Introduced in Pandas 0.25.0, groupby aggregation with relabelling is supported using "named aggregation" with simple tuples. Python tuples are used to provide the column to work on, along with the function to apply. # # For example: df[df['item'] == 'call'].groupby('month').agg( # Get max of the duration column for each group max_duration=('duration', max), # Get min of the duration column for each group min_duration=('duration', min), # Get sum of the duration column for each group total_duration=('duration', sum), # Apply a lambda to date column num_days=("date", lambda x: (max(x) - min(x)).days) ) # Grouping with named aggregation using the Pandas 0.25 syntax. Tuples are used to specify the columns to work on and the functions to apply to each grouping. # # For clearer naming, Pandas also provides the NamedAgg named-tuple, which can be used to achieve the same as normal tuples: df[df['item'] == 'call'].groupby('month').agg( max_duration=pd.NamedAgg(column='duration', aggfunc=max), min_duration=pd.NamedAgg(column='duration', aggfunc=min), total_duration=pd.NamedAgg(column='duration', aggfunc=sum), num_days=pd.NamedAgg( column="date", aggfunc=lambda x: (max(x) - min(x)).days) ) # Note that in some Pandas versions, applying lambda functions only works for these named aggregations when they are the only function applied to a single column, otherwise causing a KeyError.
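A self-contained sketch of the tuple-based named aggregation, run on invented stand-in data (only the syntax mirrors the phone-log example above):

```python
import pandas as pd

# Made-up call durations standing in for the notebook's dataset
toy_calls = pd.DataFrame({
    "month":    ["2014-11", "2014-11", "2014-12"],
    "duration": [10.0, 30.0, 5.0],
})

# Keyword names become the output columns; each value is (column, function)
toy_named = toy_calls.groupby("month").agg(
    max_duration=("duration", "max"),
    min_duration=("duration", "min"),
    total_duration=("duration", "sum"),
)
print(toy_named)
```

A nice property of this syntax is that the result has plain, single-level column names, so no renaming step is needed afterwards.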
# ### Renaming the index using droplevel and ravel
#
# When multiple statistics are calculated on columns, the resulting dataframe will have a multi-index set on the column axis. The multi-index can be difficult to work with, and I typically have to rename columns after a groupby operation.
#
# One option is to drop the top level of the newly created column multi-index using droplevel:

grouped = df.groupby('month').agg({"duration": [min, max, "mean"]})
grouped.columns = grouped.columns.droplevel(level=0)
grouped = grouped.rename(columns={
    "min": "min_duration",
    "max": "max_duration",
    "mean": "mean_duration"
})
grouped.head()

# However, this approach loses the original column names, leaving only the function names as column headers. A neater approach is to use the ravel() method on the grouped columns. ravel() turns a Pandas multi-index into a simpler array of tuples, which we can join into sensible column names:

grouped = df.groupby('month').agg({"duration": [min, max, "mean"]})
# Using ravel and a string join, we can create better names for the columns:
grouped.columns = ["_".join(x) for x in grouped.columns.ravel()]
grouped.head()

# +
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

month = grouped.index
duration_max = grouped.duration_max
duration_mean = grouped.duration_mean

# Make square figures and axes
plt.figure(1, figsize=(20, 10))
the_grid = GridSpec(2, 2)
cmap = plt.get_cmap('Spectral')
colors = [cmap(i) for i in np.linspace(0, 1, 8)]

plt.subplot(the_grid[0, 1], aspect=1, title='Duration MAX')
source_pie = plt.pie(duration_max, labels=month, autopct='%1.1f%%',
                     shadow=True, colors=colors)

plt.subplot(the_grid[0, 0], aspect=1, title='Duration MEAN')
source_1_pie = plt.pie(duration_mean, labels=month, autopct='%.0f%%',
                       shadow=True, colors=colors)

plt.suptitle('Repartition of MAX & MEAN duration through months', fontsize=16)
plt.show()
# -

# Quick renaming of grouped columns from the groupby() multi-index can be achieved using the ravel() method.
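# A self-contained sketch of the ravel idiom on invented data (on recent Pandas versions, `columns.to_flat_index()` achieves the same flattening):

```python
import pandas as pd

df = pd.DataFrame({
    'month': ['2014-11', '2014-11', '2014-12'],
    'duration': [10.0, 20.0, 5.0],
})

grouped = df.groupby('month').agg({'duration': ['min', 'max', 'mean']})
# The column axis is now a MultiIndex with entries like ('duration', 'min')
print(grouped.columns.tolist())

# Flatten it into single strings with a join
grouped.columns = ['_'.join(col) for col in grouped.columns.ravel()]
print(grouped.columns.tolist())
```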
# ### *DEPRECATED* Dictionary groupby format
#
# *There were substantial changes to the Pandas aggregation function in May 2017. Renaming of variables using dictionaries within the agg() function, as in the diagram below, has been deprecated and removed from Pandas – see notes.*
#
# ![pandas_aggregation-1024x409.png](attachment:pandas_aggregation-1024x409.png)
#
# Aggregation of variables in a Pandas dataframe using the agg() function. Note that from Pandas version 0.20.1 onwards, the renaming of results needs to be done separately.
#
# In older Pandas releases (< 0.20.1), renaming the newly calculated columns was possible through nested dictionaries, or by passing a list of functions for a column. Our final example calculates multiple values from the duration column and names the results appropriately. Note that the results have multi-indexed column headers.
#
# #### Note: this syntax will no longer work in new installations of Python Pandas.

# Define the aggregation calculations
aggregations = {
    # work on the "duration" column
    'duration': {
        # get the sum, and call this result 'total_duration'
        'total_duration': 'sum',
        # get the mean, and call the result 'average_duration'
        'average_duration': 'mean',
        'num_calls': 'count'
    },
    # Now work on the "date" column
    'date': {
        # Find the max, and call the result "max_date"
        'max_date': 'max',
        'min_date': 'min',
        # Calculate the date range per group
        'num_days': lambda x: max(x) - min(x)
    },
    # Calculate two results for the 'network' column with a list
    'network': ["count", "max"]
}

# Perform groupby aggregation by "month",
# but only on the rows that are of type "call"
df[df['item'] == 'call'].groupby('month').agg(aggregations)

# ## Wrap up
#
# The groupby functionality in Pandas is well documented in the official docs. There are plenty of resources online on this functionality, and I'd recommend really mastering this syntax if you're using Pandas in earnest at any point.
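# As a final aside, the calculation in the deprecated dictionary example above can be reproduced with the modern named-aggregation syntax. The frame below is invented for illustration:

```python
import pandas as pd

calls = pd.DataFrame({
    'month': ['2014-11', '2014-11', '2014-12'],
    'duration': [10.0, 20.0, 5.0],
    'date': pd.to_datetime(['2014-11-01', '2014-11-05', '2014-12-01']),
})

# Modern replacement for the deprecated nested-dictionary renaming
modern = calls.groupby('month').agg(
    total_duration=('duration', 'sum'),
    average_duration=('duration', 'mean'),
    num_calls=('duration', 'count'),
    max_date=('date', 'max'),
    min_date=('date', 'min'),
)
print(modern)
```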
#
# * *DataQuest Tutorial on Data Analysis*: https://www.dataquest.io/blog/pandas-tutorial-python-2/
# * *<NAME> notes on Groups*: https://chrisalbon.com/python/pandas_apply_operations_to_groups.html
# * *<NAME> Tutorial*: http://www.gregreda.com/2013/10/26/working-with-pandas-dataframes/
Summarising, Aggregating, and Grouping data in Python Pandas.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] pycharm={"name": "#%% md\n"}
# # AutoML solution vs single model
# #### FEDOT version = 0.3.0
# -

# !pip install fedot==0.3.0

# Below is an example of running an AutoML solution for a classification problem.

# ## Description of the task and dataset

# + pycharm={"is_executing": false}
import pandas as pd
import warnings
warnings.filterwarnings("ignore")

# Input data from csv files
train_data_path = '../data/scoring_train.csv'
test_data_path = '../data/scoring_test.csv'

df = pd.read_csv(train_data_path)
df.head(5)
# -

# ## Baseline model
#
# Let's use the API features to solve the classification problem. First, we create a chain from a single "xgboost" model. To do this, we substitute the appropriate name in the predefined_model field.

# + pycharm={"is_executing": false, "name": "#%%\n"}
from fedot.api.main import Fedot

# task selection, initialisation of the framework
baseline_model = Fedot(problem='classification')

# fit model without optimisation - a single XGBoost node is used
baseline_model.fit(features=train_data_path, target='target',
                   predefined_model='xgboost')

# evaluate the prediction with test data
baseline_model.predict_proba(features=test_data_path)

# evaluate the quality metric for the test sample
baseline_metrics = baseline_model.get_metrics()
print(baseline_metrics)

# + [markdown] pycharm={"name": "#%% md\n"}
# ## FEDOT AutoML for classification
#
# We can identify the model using an evolutionary algorithm built into the core of the FEDOT framework.
# + pycharm={"is_executing": false, "name": "#%%\n"}
# new instance to be used as an AutoML tool
auto_model = Fedot(problem='classification', seed=42, verbose_level=4)
# -

# *Due to the specifics of the Jupyter notebook format, and in order not to overload the page with unnecessary logs, we do not show the output of the cell below.

# + pycharm={"is_executing": true, "name": "#%%\n"}
# run the AutoML-based model generation
pipeline = auto_model.fit(features=train_data_path, target='target')

# + pycharm={"is_executing": true, "name": "#%%\n"}
prediction = auto_model.predict_proba(features=test_data_path)
auto_metrics = auto_model.get_metrics()
print(auto_metrics)

# + pycharm={"is_executing": true, "name": "#%%\n"}
# comparison with the manual pipeline
print('Baseline', round(baseline_metrics['roc_auc'], 3))
print('AutoML solution', round(auto_metrics['roc_auc'], 3))
# -

# Thus, with just a few lines of code, we were able to launch the FEDOT framework and obtain a better result*.
#
# *Due to the stochastic nature of the algorithm, the metrics for the found solution may differ between runs.
#
# If you want to learn more about FEDOT, you can use [this notebook](2_intro_to_fedot.ipynb).
notebooks/version_03/1_intro_to_automl.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# # Ubuntu
# > Ubuntu format, server setting
#
# - toc: true
# - branch: master
# - badges: true
# - comments: false
# - author: 최서연
# - categories: [Ubuntu]

# Based on the Ubuntu installation and server setup done on 2021-12-23

# ### Installing the Ubuntu operating system

# 1. Go to www.ubuntu.com/download/desktop
#    - Choose the build that matches your machine (32-bit or 64-bit)
#    - Download
#    - Skip past the donation prompt (it seems to be optional): `not now, take me to the download`
# 2. Plug in a USB stick for Ubuntu
#    - Run the downloaded Ubuntu image writer
#    - Selecting "Diskimage" in the bottom left seems to auto-select ISO on the right
#    - Choose the downloaded Ubuntu file
# 3. Restart the computer
#    - Press the Delete key to bring up the BIOS settings
#    - Change the boot order there (to boot from the Ubuntu USB)
# 4. After rebooting, install
#    - Choose "Install Ubuntu" ("Try Ubuntu" is more of a trial mode)
#    - English
#    - Select "install this third-party software" (skip this on a first Ubuntu install!)
#    - Select "Something else" (lets you adjust the partitions yourself)
# 5. Set up partitions
#    - https://remocon33.tistory.com/328
#    - or... I don't remember exactly
# 6. Set the region to Seoul
# 7. Enter a username and password
# 8. Installation proceeds
#    - After installation and reboot, Ubuntu is installed

# ### Server setting
# ref: https://miruetoto.github.io/yechan/%EC%9A%B0%EB%B6%84%ED%88%AC/2000/01/03/(%EB%85%B8%ED%8A%B8)-%EC%9A%B0%EB%B6%84%ED%88%AC-%ED%8F%AC%EB%A7%B7-%EB%B0%8F-%EA%B0%9C%EB%B0%9C%EC%9A%A9-%EC%84%9C%EB%B2%84-%EC%85%8B%ED%8C%85-%EC%9A%B0%EB%B6%84%ED%88%AC.html

# 1. Open gedit and enter, in order:
#    - `blacklist nouveau`
#    - `options nouveau modeset=0`
# 2. Save the file as `blacklist-nouveau.conf` in the home directory, then press Ctrl+Alt+F3 to switch to the text console.
#    - Enter `sudo -i`
#    - Enter the username/password chosen during installation to get root privileges
#    - `sudo cp /home/cgb2/blacklist-nouveau.conf /etc/modprobe.d`
#    - `sudo update-initramfs -u`
#    - `exit`
#    - `sudo reboot` (reboot); here `sudo` means "run with administrator privileges"
# 3. Run in the command line:
#    - `sudo apt install gcc`
#    - `sudo apt install build-essential`
#    - Go to the official NVIDIA site https://www.nvidia.com/en-us/geforce/drivers/
#    - Choose LINUX 64-BIT as the OS and search (the download proceeds)
#    - After downloading, move to the folder with the file and run `chmod +x NVIDIA-Linux-x86_64-410.78.run`
#    - Check that the driver installed correctly with `nvidia-smi`

# ### Anaconda
# 1. Download
#    - Go to https://www.anaconda.com/products/individual and download Anaconda
#    - It will be in the Downloads folder; go there in the command line
#    - Run `bash Anaconda3-2019.03-Linux-x86_64.sh`
# 2. Set up an environment
#    - (base) `conda create -n py38r40 python=3.8`
#    - (base) `conda create --name py38r40 python=3.8`
#    - Pick either one to install Python
#    - If the Python version is too high, `conda tensorflow-gpu` may not work later
# 3. SSH
#    - If the machine doesn't have ssh, look it up and install it first
#    - Assuming it is present, proceed
#    - Run in the command line of the environment you want to connect to:
#    - `sudo apt install openssh-server`
# > note: if the computer ever needs rebooting later, it can be done from the university server, but only while connected to the campus wifi. To do it from outside, log into a specific server (noted on my phone) and reboot! (after announcing the reboot first..)

# ### Jupyter remote access
# 1. Install JupyterLab
#    - Install JupyterLab in the conda environment:
#    - (py38r40) `conda install -c conda-forge jupyterlab`
# 2. Set a password
#    - Make JupyterLab accessible remotely
#    - Run in the command line:
#    - (py38r40) `jupyter lab --generate-config`
#    - (py38r40) `jupyter lab password`
# 3. Configure JupyterLab
#    - Open the file `/home/"the username you entered!!"/.jupyter/jupyter_lab_config.py`
#    - Change the IP address, and the port too if desired:
#    - c.ServerApp.ip = '192.168.0.4'
#    - c.ServerApp.port = 1306
# > note: 192.168.0.4 is an internal IP; if you have a static IP, enter the static IP
#    - How did I stop JupyterLab from opening locally again..? If I get another chance to do this, figure it out and note it down!

# ### CUDA, cuDNN, TensorFlow, PyTorch
# 1. In the conda environment, run:
#    - (py38r40) `conda install -c conda-forge tensorflow-gpu`
#    - (py38r40) `conda install -c conda-forge pytorch-gpu`
#    - Running these installs CUDA, cuDNN, TensorFlow, and PyTorch all at once

# ### Connecting Jupyter to an R kernel
# 1. In the conda environment, run:
#    - (py38r40) `conda install -c conda-forge r-essentials=4.0`
#    - R is installed only in the conda environment, not in base.
#    - Run R in the conda environment
#    - Start R from the command line (just type `R`)
#    - Install `IRkernel` with `install.packages("IRkernel")`
#    - Running `IRkernel::installspec()` connects JupyterLab to the R environment

# ### Packages
# ```
# conda install -c conda-forge jupyterlab
# conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
# pip install fastai
# pip install plotly
# pip install ipywidgets
# pip install jupyter-dash
# pip install dash
# pip install plotnine
# pip install seaborn
# pip install opencv-python
# pip install folium
# pip install pandas_datareader
# conda install -c conda-forge r-essentials=4
# pip install rpy2
# conda install -c conda-forge python-graphviz
# ```
# Install the basic tools in advance

# ### Dropbox
# - After `conda activate "your id"`:
# - `dropbox status`
#   - You will see the message "Dropbox isn't running!"
# - `dropbox start`
#   - starts it

# ---

# ### car package install error
# Enter `sudo apt-get install cmake` in the terminal, then try the install again in R.
# - Make is used for fast incremental builds over large amounts of data, but Make is cumbersome to maintain, so CMake is a tool that generates it conveniently.
_notebooks/2022-04-08-ubuntu.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ---
#
# _You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
#
# ---

# # Assignment 2 - Introduction to NLTK
#
# In part 1 of this assignment you will use nltk to explore the <NAME> novel <NAME>. Then in part 2 you will create a spelling recommender function that uses nltk to find words similar to the misspelling.

# ## Part 1 - Analyzing <NAME>

# +
import nltk
import pandas as pd
import numpy as np

# If you would like to work with the raw text you can use 'moby_raw'
with open('moby.txt', 'r') as f:
    moby_raw = f.read()

# If you would like to work with the novel in nltk.Text format you can use 'text1'
moby_tokens = nltk.word_tokenize(moby_raw)
text1 = nltk.Text(moby_tokens)
# -

# ### Example 1
#
# How many tokens (words and punctuation symbols) are in text1?
#
# *This function should return an integer.*

# +
def example_one():
    return len(nltk.word_tokenize(moby_raw))  # or alternatively len(text1)

example_one()
# -

# ### Example 2
#
# How many unique tokens (unique words and punctuation) does text1 have?
#
# *This function should return an integer.*

# +
def example_two():
    return len(set(nltk.word_tokenize(moby_raw)))  # or alternatively len(set(text1))

example_two()
# -

# ### Example 3
#
# After lemmatizing the verbs, how many unique tokens does text1 have?
#
# *This function should return an integer.*

# +
from nltk.stem import WordNetLemmatizer

def example_three():
    lemmatizer = WordNetLemmatizer()
    lemmatized = [lemmatizer.lemmatize(w, 'v') for w in text1]
    return len(set(lemmatized))

example_three()
# -

# ### Question 1
#
# What is the lexical diversity of the given text input? (i.e. the ratio of unique tokens to the total number of tokens)
#
# *This function should return a float.*

# +
def answer_one():
    # Ratio of unique tokens to total tokens
    ratio = example_two() / example_one()
    return ratio

answer_one()
# -

# ### Question 2
#
# What percentage of tokens is 'whale' or 'Whale'?
#
# *This function should return a float.*

# +
def answer_two():
    dist = nltk.FreqDist(text1)
    whales = (dist['whale'] + dist['Whale']) / example_one() * 100
    return whales

answer_two()
# -

# ### Question 3
#
# What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
#
# *This function should return a list of 20 tuples where each tuple is of the form `(token, frequency)`. The list should be sorted in descending order of frequency.*

# +
def answer_three():
    dist = nltk.FreqDist(text1)
    return dist.most_common(20)

answer_three()
# -

# ### Question 4
#
# What tokens have a length greater than 5 and a frequency of more than 150?
#
# *This function should return an alphabetically sorted list of the tokens that match the above constraints. To sort your list, use `sorted()`*

# +
def answer_four():
    dist = nltk.FreqDist(text1)
    words = [w for w in dist.keys() if len(w) > 5 and dist[w] > 150]
    return sorted(words)

answer_four()
# -

# ### Question 5
#
# Find the longest word in text1 and that word's length.
#
# *This function should return a tuple `(longest_word, length)`.*

# +
def answer_five():
    length = max(len(w) for w in text1)
    longest_word = [w for w in text1 if len(w) == length]
    return (longest_word[0], length)

answer_five()
# -

# ### Question 6
#
# What unique words have a frequency of more than 2000? What is their frequency?
#
# "Hint: you may want to use `isalpha()` to check if the token is a word and not punctuation."
#
# *This function should return a list of tuples of the form `(frequency, word)` sorted in descending order of frequency.*

# +
def answer_six():
    dist = nltk.FreqDist(text1)
    list_words = [(dist[w], w) for w in dist.keys() if w.isalpha() and dist[w] > 2000]
    list_words.sort(key=lambda x: x[0], reverse=True)
    return list_words

answer_six()
# -

# ### Question 7
#
# What is the average number of tokens per sentence?
#
# *This function should return a float.*

def answer_seven():
    sentences = nltk.sent_tokenize(moby_raw)
    avg_s = len(text1) / len(sentences)
    return avg_s

answer_seven()

# ### Question 8
#
# What are the 5 most frequent parts of speech in this text? What is their frequency?
#
# *This function should return a list of tuples of the form `(part_of_speech, frequency)` sorted in descending order of frequency.*

# +
def answer_eight():
    import collections
    pos_list = nltk.pos_tag(text1)
    pos_counts = collections.Counter(tag[1] for tag in pos_list)
    return pos_counts.most_common(5)

answer_eight()
# -

# ## Part 2 - Spelling Recommender
#
# For this part of the assignment you will create three different spelling recommenders, each of which takes a list of misspelled words and recommends a correctly spelled word for every word in the list.
#
# For every misspelled word, the recommender should find the word in `correct_spellings` that has the shortest distance*, and starts with the same letter as the misspelled word, and return that word as a recommendation.
#
# *Each of the three different recommenders will use a different distance measure (outlined below).
#
# Each of the recommenders should provide recommendations for the three default words provided: `['cormulent', 'incendenece', 'validrate']`.

# +
from nltk.corpus import words

correct_spellings = words.words()
# -

# ### Question 9
#
# For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
#
# **[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the trigrams of the two words.**
#
# *This function should return a list of length three:
# `['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*

# +
def answer_nine(entries=['cormulent', 'incendenece', 'validrate']):
    recommended = []
    for entry in entries:
        # Build a list of correct words that start with the same letter as the misspelled one
        match_spell = [w for w in correct_spellings if w[0] == entry[0]]
        # Compute the Jaccard distance on trigrams
        jaccard_dist = [nltk.jaccard_distance(set(nltk.ngrams(entry, 3)),
                                              set(nltk.ngrams(w, 3)))
                        for w in match_spell]
        # Recommend the word with the smallest Jaccard distance
        recommended.append(match_spell[np.argmin(jaccard_dist)])
    return recommended

answer_nine()
# -

# ### Question 10
#
# For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
#
# **[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the 4-grams of the two words.**
#
# *This function should return a list of length three:
# `['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*

def answer_ten(entries=['cormulent', 'incendenece', 'validrate']):
    recommended = []
    for entry in entries:
        # Build a list of correct words that start with the same letter as the misspelled one
        match_spell = [w for w in correct_spellings if w[0] == entry[0]]
        # Compute the Jaccard distance on 4-grams
        jaccard_dist = [nltk.jaccard_distance(set(nltk.ngrams(entry, 4)),
                                              set(nltk.ngrams(w, 4)))
                        for w in match_spell]
        # Recommend the word with the smallest Jaccard distance
        recommended.append(match_spell[np.argmin(jaccard_dist)])
    return recommended

answer_ten()

# ### Question 11
#
# For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
#
# **[Edit distance on the two words with transpositions.](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance)**
#
# *This function should return a list of length three:
# `['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*

def answer_eleven(entries=['cormulent', 'incendenece', 'validrate']):
    recommended = []
    for entry in entries:
        # Build a list of correct words that start with the same letter as the misspelled one
        match_spell = [w for w in correct_spellings if w[0] == entry[0]]
        # Compute the edit distance (with transpositions) between the two words
        tw_dist = [nltk.edit_distance(entry, w, transpositions=True) for w in match_spell]
        # Recommend the word with the smallest edit distance
        recommended.append(match_spell[np.argmin(tw_dist)])
    return recommended

answer_eleven()
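# The two distance measures used by these recommenders can also be written from scratch in a few lines. The sketch below follows the definitions directly (Jaccard distance on character n-grams; optimal-string-alignment edit distance with transpositions) — it is an illustration, not nltk's own implementation:

```python
def char_ngrams(word, n):
    """Set of character n-grams of a word."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| for two sets."""
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

def edit_distance_with_transpositions(s, t):
    """Optimal string alignment distance: Levenshtein plus adjacent transpositions."""
    d = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        d[i][0] = i
    for j in range(len(t) + 1):
        d[0][j] = j
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(s)][len(t)]

# Trigram Jaccard distance between a misspelling and a candidate
print(jaccard_distance(char_ngrams('validrate', 3), char_ngrams('validate', 3)))
print(edit_distance_with_transpositions('ca', 'ac'))  # a single transposition counts as 1
```

Seeing the definitions spelled out makes it clearer why the recommenders pre-filter candidates by first letter: both distances are cheap but still cost too much to run against every word in the corpus for every misspelling.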
Course4_Applied_Text_Mining_in_Python/Week-2/Assignment+2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #not needed for pipeline, just for testing import seaborn as sns import matplotlib.pyplot as plt # + from __future__ import division from os.path import join, basename, exists from os import makedirs from glob import glob from nilearn import input_data, datasets, plotting, regions from nilearn.image import concat_imgs from nilearn.input_data import NiftiLabelsMasker from nilearn.connectome import ConnectivityMeasure from scipy.stats import pearsonr import bct import json import numpy as np import pandas as pd # + subjects = ['101', '102', '103', '104', '106', '107', '108', '110', '212', '214', '215', '216', '217', '218', '219', '320', '321', '323', '324', '325', '327', '328', '330', '331', '333', '334', '335', '336', '337', '338', '339', '340', '341', '342', '343', '344', '345', '346', '347', '348', '349', '350', '451', '453', '455', '458', '459', '460', '462', '463', '464', '465', '467', '468', '469', '470', '502', '503', '571', '572', '573', '574', '577', '578', '581', '582', '584', '585', '586', '587', '588', '589', '591', '592', '593', '594', '595', '596', '597', '598', '604', '605', '606', '607', '608', '609', '610', '612', '613', '614', '615', '617', '618', '619', '620', '621', '622', '623', '624', '625', '626', '627', '629', '630', '631', '633', '634'] #all subjects 102 103 101 104 106 107 108 110 212 X213 214 215 216 217 218 219 320 321 X322 323 324 325 #327 328 X329 330 331 X332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 451 #X452 453 455 X456 X457 458 459 460 462 463 464 465 467 468 469 470 502 503 571 572 573 574 X575 577 578 #X579 X580 581 582 584 585 586 587 588 589 X590 591 592 593 594 595 596 597 598 604 605 606 607 608 609 #610 X611 612 613 614 615 X616 617 618 619 620 621 622 623 624 625 626 627 X628 
629 630 631 633 634 #errors in fnirt-to-mni: 213, 322, 329, 332, 452, 456, 457, 575, 579, 580, 590, 611, 616, 628 #subjects without post-IQ measure: 452, 461, 501, 575, 576, 579, 583, 611, 616, 628, 105, 109, 211, 213, 322, 326, 329, 332 #subjects = ['101','103'] #data_dir = '/home/data/nbc/physics-learning/data/pre-processed' data_dir = '/home/data/nbc/physics-learning/retrieval-graphtheory/output' timing_dir = '/home/data/nbc/physics-learning/data/behavioral-data/vectors' #sink_dir = '/Users/Katie/Dropbox/Projects/physics-retrieval/data/out' sessions = ['pre','post'] tasks = {'fci': [{'conditions': ['Physics', 'NonPhysics']}, {'runs': [0,1,2]}], 'reas': [{'conditions': ['Reasoning', 'Baseline']}, {'runs': [0,1]}], 'retr': [{'conditions': ['Physics', 'General']}, {'runs': [0,1]}]} masks = ['shen2015', 'craddock2012'] connectivity_metric = 'partial correlation' conds = ['high-level', 'lower-level'] #find a way to estimate this threshold range... #or threshold it thresh_range = np.arange(0.1, 1, 0.1) highpass = 1/55. 
correlation_measure = ConnectivityMeasure(kind=connectivity_metric) index = pd.MultiIndex.from_product([subjects, sessions, tasks.keys(), conds, masks], names=['subject', 'session', 'task', 'condition', 'mask']) df = pd.DataFrame(columns=['k_scale-free', 'k_connected'], index=index, dtype=np.float64) # - #the reasoning task timing is modeled a little differently #"events" fall halfway between the presentation of the third screen and the ppt's button press #so I'm thinking of taking the TR in which the event is modeled, the one before, and the one after #ideally capturing the 6 seconds around decision making #I don't think those 6 seconds overlap between trials, but I'll check task = 'reas' timing = {} for run in tasks[task][1]['runs']: for condition in tasks[task][0]['conditions']: print(task, run, condition) timing[condition] = np.genfromtxt(join('/Users/Katie/Dropbox/Projects/physics-retrieval/data/', '{0}-{1}-{2}.txt'.format(task, run, condition)), delimiter='\t', dtype='float') print(np.average(timing[condition][:,1])) print(timing[condition]) #divide by 2 bc timing is in seconds and TRs are 2 seconds each #so data points represent 2s. 
intervals #subtracting 2 to (1) zero-index and (2) caputure the preceding TR timing[condition][:,0] = np.round(timing[condition][:,0]/2,0) - 2 #all timing is 3 because I want the TR before #the TR halfway between screen 3 and decision #and the TR after halfway timing[condition][:,1] = np.round(np.round(timing[condition][:,1],0)/2,0) print(timing[condition]) timing[condition] = timing[condition][:,0:2] print(timing[condition]) task = 'fci' timing = {} for run in tasks[task][1]['runs']: for condition in tasks[task][0]['conditions']: print(task, run, condition) timing[condition] = np.genfromtxt(join('/Users/Katie/Dropbox/Projects/physics-retrieval/data/', '{0}-{1}-{2}.txt'.format(task, run, condition)), delimiter='\t', dtype='float') print(np.average(timing[condition][:,1])) print(timing[condition]) timing[condition][:,0] = np.round(timing[condition][:,0]/2,0) - 1 timing[condition][:,1] = np.round(np.round(timing[condition][:,1],0)/2,0) print(timing[condition]) timing[condition] = timing[condition][:,0:2] print(timing[condition]) session = '1' timing = {} for task in tasks.keys(): for run in tasks[task][1]['runs']: for condition in tasks[task][0]['conditions']: print(task, run, condition) if task != 'reas': if task == 'retr': timing[condition] = np.genfromtxt(join('/Users/Katie/Dropbox/Projects/physics-retrieval/data/', 'RETRcondition{0}Sess{1}.txt'.format(condition,session)), delimiter='\t', dtype='float') if task == 'fci': timing[condition] = np.genfromtxt(join('/Users/Katie/Dropbox/Projects/physics-retrieval/data/', '{0}-{1}-{2}.txt'.format(task, run, condition)), delimiter='\t', dtype='float') timing[condition][:,0] = np.round(timing[condition][:,0]/2,0) - 1 timing[condition][:,1] = np.round(np.round(timing[condition][:,1],0)/2,0) timing[condition] = timing[condition][:,0:2] print(timing[condition]) else: timing[condition] = np.genfromtxt(join('/Users/Katie/Dropbox/Projects/physics-retrieval/data/', '{0}-{1}-{2}.txt'.format(task, run, condition)), delimiter='\t', 
dtype='float') print(np.average(timing[condition][:,1])) timing[condition][:,0] = np.round(timing[condition][:,0]/2,0) - 1 timing[condition][:,1] = np.round(np.round(timing[condition][:,1],0)/2,0) timing[condition] = timing[condition][:,0:2] print(timing[condition]) task = 'retr' tasks[task][0]['conditions'][0] # + for subject in subjects: for i in np.arange(0,len(sessions)): spliced_ts = {} for task in tasks.keys: timing = {} conditions = tasks[task][0]['conditions'] for mask in masks: for run in tasks[task][1]['runs']: for condition in conditions: print(task, run, condition) if task != 'reas': if task == 'retr': timing['{0}-{1}'.format(run, condition)] = np.genfromtxt(join('/Users/Katie/Dropbox/Projects/physics-retrieval/data/', 'RETRcondition{0}Sess{1}.txt'.format(condition,session)), delimiter='\t', dtype='float') if task == 'fci': timing['{0}-{1}-{2}'.format(task, run, condition)] = np.genfromtxt(join('/Users/Katie/Dropbox/Projects/physics-retrieval/data/', '{0}-{1}-{2}.txt'.format(task, run, condition)), delimiter='\t', dtype='float') timing['{0}-{1}-{2}'.format(task, run, condition)][:,0] = np.round(timing['{0}-{1}-{2}'.format(task, run, condition)][:,0]/2,0) - 1 timing['{0}-{1}-{2}'.format(task, run, condition)][:,1] = np.round(np.round(timing['{0}-{1}-{2}'.format(task, run, condition)][:,1],0)/2,0) timing['{0}-{1}-{2}'.format(task, run, condition)] = timing['{0}-{1}-{2}'.format(task, run, condition)][:,0:2] print(timing['{0}-{1}-{2}'.format(task, run, condition)]) else: timing['{0}-{1}-{2}'.format(task, run, condition)] = np.genfromtxt(join('/Users/Katie/Dropbox/Projects/physics-retrieval/data/', '{0}-{1}-{2}.txt'.format(task, run, condition)), delimiter='\t', dtype='float') print(np.average(timing['{0}-{1}-{2}'.format(task, run, condition)][:,1])) timing['{0}-{1}-{2}'.format(task, run, condition)][:,0] = np.round(timing['{0}-{1}-{2}'.format(task, run, condition)][:,0]/2,0) - 1 timing['{0}-{1}-{2}'.format(task, run, condition)][:,1] = 
np.round(np.round(timing['{0}-{1}-{2}'.format(task, run, condition)][:,1],0)/2,0) timing['{0}-{1}-{2}'.format(task, run, condition)] = timing['{0}-{1}-{2}'.format(task, run, condition)][:,0:2] print(timing['{0}-{1}-{2}'.format(task, run, condition)]) #epi = join(data_dir, sessions[i], subject,'{0}-session-{1}_{2}-{3}_mcf.nii.gz'.format(subject, i, task, run)) #confounds = join(data_dir, subject,'{0}-{1}_{2}-confounds.txt'.format(subject, run, task)) #for each parcellation, extract BOLD timeseries mask_file = join(data_dir, sessions[i], subject,'{0}-session-{1}_{2}-{3}_shen2015.nii.gz'.format(subject, i, task, run)) print(mask_file) #masker = NiftiLabelsMasker(mask_file, standardize=True, high_pass=highpass, t_r=2., verbose=1) #timeseries = masker.fit_transform(epi, confounds=confounds) #and now we slice into conditions for condition in conditions: run_cond['{0}-{1}-{2}'.format(task, run, condition)] = np.vstack((timeseries[timing['{0}-{1}-{2}'.format(task, run, condition)][0,0].astype(int):(timing['{0}-{1}-{2}'.format(task, run, '{0}-{1}-{2}'.format(task, run, condition))][0,0]+timing['{0}-{1}-{2}'.format(task, run, condition)][0,1]+1).astype(int), :], timeseries[timing['{0}-{1}-{2}'.format(task, run, condition)][1,0].astype(int):(timing'{0}-{1}-{2}'.format(task, run, condition)[1,0]+timing[condition][1,1]+1).astype(int), :], timeseries[timing[condition][2,0].astype(int):(timing[condition][2,0]+timing[condition][2,1]+1).astype(int), :])) print(run_cond) #and paste together the timeseries from each run together per condition for j in np.arange(0,len(conditions)): sliced_ts[conditions[j]] = np.vstack((run_cond['{0}-0-{1}'.format(task, conditions[j])], run_cond['{0} 1'.format(conditions[j])])) corrmat = correlation_measure.fit_transform([sliced_ts[conditions[j]]])[0] np.savetxt(join(data_dir, sessions[i], subject,'{0}-session-{1}_{2}-{3}-{4}_{5}-corrmat.csv'.format(subject, i, task, run, condition, mask)), corrmat) #reset kappa starting point #calculate proportion of 
# connections that can be retained
        # before the degree distribution ceases to be scale-free
        kappa = 0.01
        skewness = 1
        while skewness > 0.3:
            w = bct.threshold_proportional(corrmat, kappa)
            skewness = skew(bct.degrees_und(w))
            kappa += 0.01
        df.at[(subject, sessions[i], task, conds[j], mask), 'k_scale-free'] = kappa

        # reset kappa starting point
        # calculate proportion of connections that need to be retained
        # for node connectedness
        kappa = 0.01
        num = 2
        while num > 1:
            w = bct.threshold_proportional(corrmat, kappa)
            [comp, num] = bct.get_components(w)
            num = np.unique(comp).shape[0]
            kappa += 0.01
        df.at[(subject, sessions[i], task, conds[j], mask), 'k_connected'] = kappa

df.to_csv(join(data_dir, 'kappa.csv'), sep=',')
# -

from scipy.stats import skew

test = np.random.random(size=[150, 150])
test = np.corrcoef(test)

kappa_lower = 0
num = 100
while num > 1:
    w = bct.threshold_proportional(test, kappa_lower)
    w_bin = np.where(w > 0, 1, 0)
    [comp, num] = bct.get_components(w_bin)
    num = np.unique(comp).shape[0]
    print(kappa_lower, num)
    kappa_lower += 0.01

q = bct.threshold_proportional(test, kappa_lower)
sns.heatmap(q)

fig, ax = plt.subplots(figsize=(20, 5))
for i in np.arange(0, 0.4, 0.1):
    print(i)
    w = bct.threshold_proportional(test, i)
    sns.distplot(np.ravel(w), hist=False, label=np.round(i, 2))

subjects = ['101', '102', '103']
degree = []
thresh_df = pd.DataFrame(index=subjects, columns=['k_connected', 'k_skewed'])
for subject in subjects:
    matrix = pd.read_csv('/Volumes/Macintosh HD/Users/Katie/Dropbox/Projects/physics-retrieval/data/out/{0}-phy-corrmat-regionwise.csv'.format(subject),
                         header=0, index_col=0).values
    kappa = 0.01
    skewness = 1
    while skewness > 0.3:
        w = bct.threshold_proportional(matrix, kappa)
        skewness = skew(bct.degrees_und(w))
        kappa += 0.01
    thresh_df.at[subject, 'k_skewed'] = kappa

    kappa = 0.01
    num = 2
    while num > 1:
        w = bct.threshold_proportional(matrix, kappa)
        [comp, num] = bct.get_components(w)
        num = np.unique(comp).shape[0]
        kappa += 0.01
    thresh_df.at[subject, 'k_connected'] = kappa

thresh_df

ranger = np.arange(0.05, kappa, 0.05)
for i in ranger:
    sns.set_palette('husl', ranger.shape[0], desat=0.8)
    thresh = bct.threshold_proportional(matrix, i)
    degrees = bct.degrees_und(thresh)
    print(i, skew(degrees))
    sns.kdeplot(degrees, label='{0}: {1}'.format(np.round(i, 2), np.round(skew(degrees), 2)))
examples/data-wrangling/pipeline_testing_tasks.ipynb
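The notebook above repeatedly calls `bct.threshold_proportional` while sweeping `kappa` to find the smallest edge density at which the network stays connected or its degree distribution stops being skewed. As a rough illustration of what that call does, here is a minimal numpy-only sketch of proportional thresholding on a symmetric matrix. This is an assumption about the behaviour of the real `bct` function (which also handles signed and directed matrices), not a drop-in replacement.

```python
import numpy as np

def threshold_proportional(W, p):
    """Keep the strongest fraction p of off-diagonal weights; zero the rest.

    Simplified sketch for symmetric matrices of what
    bct.threshold_proportional is assumed to do.
    """
    W = W.copy()
    np.fill_diagonal(W, 0)
    iu = np.triu_indices_from(W, k=1)       # each undirected edge once
    weights = W[iu]
    n_keep = int(round(p * weights.size))   # number of edges to retain
    if n_keep == 0:
        cutoff = np.inf
    elif n_keep < weights.size:
        cutoff = np.sort(weights)[::-1][n_keep - 1]  # n_keep-th largest weight
    else:
        cutoff = -np.inf
    thr = np.zeros_like(W)
    thr[iu] = np.where(weights >= cutoff, weights, 0)
    return thr + thr.T                      # restore symmetry

rng = np.random.default_rng(0)
corr = np.corrcoef(rng.random((10, 50)))    # toy 10-node correlation matrix
w = threshold_proportional(corr, 0.2)
print(np.count_nonzero(np.triu(w, 1)))      # 9 of the 45 possible edges survive
```

With ties in the weights (unlikely for correlation data) slightly more than the requested fraction could survive; the sketch keeps the simple `>=` rule for readability.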
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# I need to manually check each study's metadata because they entered controls in different ways...
#
# Studies 10321 and 13450 don't seem to have a control.

import pandas as pd

pd.__version__

id714 = pd.read_csv('714/templates/714_20191015-105331.txt', sep='\t')
id714 = id714[id714['treatment']=='control'].reset_index(drop=True)
id714.to_csv('714/templates/control_714.txt', sep='\t', index=False)
id714[['sample_name']].to_csv('714/templates/samples_to_keep_714.tsv', sep='\t', index=False)
id714.head()

id1609 = pd.read_csv('1609/templates/1609_20181018-131039.txt', sep='\t')
id1609 = id1609[id1609['control']=='y'].reset_index(drop=True)
id1609.to_csv('1609/templates/control_1609.txt', sep='\t', index=False)
id1609[['sample_name']].to_csv('1609/templates/samples_to_keep_1609.tsv', sep='\t', index=False)
id1609.head()

id10141 = pd.read_csv('10141/templates/10141_20190104-105500.txt', sep='\t')
id10141 = id10141[id10141['sample_type']=='control'].reset_index(drop=True)
id10141.to_csv('10141/templates/control_10141.txt', sep='\t', index=False)
id10141[['sample_name']].to_csv('10141/templates/samples_to_keep_10141.tsv', sep='\t', index=False)
id10141.head()

id10143 = pd.read_csv('10143/templates/10143_20191021-130723.txt', sep='\t')
id10143 = id10143[id10143['sample_name'].str.contains('ctrl')].reset_index(drop=True)
id10143.to_csv('10143/templates/control_10143.txt', sep='\t', index=False)
id10143[['sample_name']].to_csv('10143/templates/samples_to_keep_10143.tsv', sep='\t', index=False)
id10143.head()

# +
## All samples come from flies, and there doesn't seem to be a control
# id10321 = pd.read_csv('10321/templates/10321_20180418-105405.txt', sep='\t')
# id10321
# -
2_filter_control_samples_per_study.ipynb
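Because every study above encodes its controls differently (a `treatment` column, a `control` flag, a `sample_type` value, or a `ctrl` substring in the sample name), one way to avoid repeating the same four-line block per study is a table of per-study predicates. This is a hypothetical refactoring sketch, not code from the notebook; the study IDs and column names are taken from the cells above.

```python
import pandas as pd

# One predicate per study, mirroring the checks done by hand above.
control_filters = {
    '714':   lambda df: df['treatment'] == 'control',
    '1609':  lambda df: df['control'] == 'y',
    '10141': lambda df: df['sample_type'] == 'control',
    '10143': lambda df: df['sample_name'].str.contains('ctrl'),
}

def keep_controls(study_id, df):
    """Return only the control samples for one study, reindexed from 0."""
    return df[control_filters[study_id](df)].reset_index(drop=True)

# Tiny demo frame with made-up sample names:
demo = pd.DataFrame({'sample_name': ['a.ctrl', 'b'], 'treatment': ['x', 'y']})
print(keep_controls('10143', demo)['sample_name'].tolist())  # ['a.ctrl']
```

Writing the per-study logic out as data also makes it obvious at a glance which studies (like 10321 and 13450) have no usable rule at all.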
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

# +
# We'll use requests and BeautifulSoup again in this tutorial:
import requests
from bs4 import BeautifulSoup

## We'll also use the re module for regular expressions.
import re

# +
## Let's look at this list of state universities in the US:
top_url = 'https://en.wikipedia.org/wiki/List_of_state_universities_in_the_United_States'

# Use requests.get to fetch the HTML at the specific url:
response = requests.get(top_url)
print(type(response))  # This returns an object of type Response:
# -

# And it contains all the HTML of the URL:
print(response.content)

# Create the nested data object using the BeautifulSoup() function:
soup = BeautifulSoup(response.content, "html.parser")
print(type(soup))

# The prettify method makes our output more readable.
## The example below looks at characters 50,000 to 51,000 of the scraped HTML:
print(soup.prettify()[50000:51000])

# We can use the find method to find the first tag (and its contents) of a certain type.
soup.find("p")

# ### Exploring and Inspecting a Webpage
#
# Similar to the `find` method, we can use the `find_all` method to find all the tags of a certain type. But what tags are we looking for? We can look at the code for any individual part of an HTML page by clicking on it from within a browser and selecting `inspect`.

# ![Inspecting an HTML Element](03-images/inspect.png)
#
# ### Inspected Elements
# This will show you the underlying code that generates this element.
#
# ![Results of Inspection](03-images/inspected.png)
#
# You can see that the colleges appear both as list items (within `<li>` tags) and as links (within `<a>` tags).

# This gets us somewhere, but there are links in here that are not colleges, and some of the colleges do not have links.
soup.find_all("a")

# Searching for <li> tags gets us closer, but there are still some non-universities in here.
list_items = soup.find_all("li")
print(type(list_items))
print(list_items[200:210])

# Let's search for the first and last university in the list and return their index numbers:
for i in range(0, len(list_items)):
    content = str(list_items[i].contents)
    if "University of Alabama System" in content:
        print("Index of first university is: " + str(i))
    if "University of Wyoming" in content:
        print("Index of last university is: " + str(i))

# +
# Now we can use those indexes to subset out everything that isn't a university:
universities = list_items[71:840]
print(len(universities))
print(universities)

# +
# We can grab the university names and the URLs of the Wikipedia pages for the schools that have them:
name_list = []
url_list = []
for uni in universities:
    name_list.append(uni.text)
    a_tag = uni.find("a")
    if a_tag:
        ref = a_tag.get("href")
        print(ref)
        url_list.append(ref)
    else:
        print("No URL for this University")
        url_list.append("")

# +
import pandas as pd

d = {
    "name": pd.Series(name_list),
    "html_tag": pd.Series(universities),
    "url": pd.Series(url_list)}
df = pd.DataFrame(d)
df["url"] = "https://en.wikipedia.org" + df["url"]
df.shape
df[:10]
# -

# How many names contain 'College':
df['name'].str.contains("College", na=False).value_counts()

# How many names contain 'University':
df['name'].str.contains("University", na=False).value_counts()

# ## From Scraping to Crawling
#
# So, you might have noticed that the information we collected from this scraper isn't that interesting. However, it does include a list of URLs for each university we found, and we can scrape these pages as well. The individual pages for each university hold data on the school type, location, endowment, and founding year, as well as other interesting information that we may be able to get to.
#
# At this point, you could start to consider our task a basic form of web crawling - the systematic or automated browsing of multiple web pages. This is certainly a simple application of web crawling, but the idea of following hyperlinks from one URL to another is representative.

uni_pages = []
for url in df["url"]:
    if url != "":
        resp = requests.get(url)
        uni_pages.append(resp.content)
    else:
        uni_pages.append("")

## Add this newly scraped data to our pandas dataframe:
df["wikipedia_page"] = uni_pages
df.shape

## Our pandas dataframe now has a column containing the entire HTML Wikipedia page for each university:
df["wikipedia_page"][:10]

# +
# Let's see what we can get from one page:
soup = BeautifulSoup(df["wikipedia_page"][0], "html.parser")
table = soup.find("table", {"class": "infobox"})
rows = table.find_all("tr")
print(rows[:])
# -

## Now we can search across these rows for various data of interest:
for row in rows:
    header = row.find("th")
    data = row.find("td")
    # Make sure there was actually both a th and td tag in that row, and proceed if so.
    if header is not None and data is not None:
        if header.contents[0] == "Type":
            print("The type of this school is " + data.text)
        if header.contents[0] == "Location":
            print("The location of this school is " + data.text)
        if header.contents[0] == "Website":
            print("The website for this school is " + data.text)
        if "Endowment" in str(header.contents[0]):
            print("The endowment for this school is " + data.text)

# +
## Create empty columns of our dataframe to fill with new information:
df["type"] = ""
df["location"] = ""
df["website"] = ""
df["established"] = ""
df["endowment"] = ""

## Loop over every wikipedia page in our dataframe and populate our new columns with the pertinent data:
for i in range(0, len(df["wikipedia_page"])):
    tmp_soup = BeautifulSoup(df["wikipedia_page"][i], "html.parser")
    tmp_table = tmp_soup.find("table", {"class": "infobox"})
    if tmp_table is not None:
        tmp_rows = tmp_table.find_all("tr")
        for row in tmp_rows:
            header = row.find("th")
            data = row.find("td")
            if header is not None and data is not None:
                if header.contents[0] == "Type":
                    df["type"][i] = data.text
                if header.contents[0] == "Location":
                    df["location"][i] = data.text
                if header.contents[0] == "Website":
                    df["website"][i] = data.text
                ## Note that below we convert to unicode using utf-8, rather than simply str().
                ## This is more robust in handling special characters.
                if "Endowment" in header.contents[0].encode('utf-8'):
                    df["endowment"][i] = data.text
                if "Established" in header.contents[0].encode('utf-8'):
                    df["established"][i] = data.text
# -

## Now we have dramatically more actionable data that could have been very difficult to collect manually.
df[:200]
notebooks/session_10/Web_crawling.ipynb
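The row loop in the notebook above pairs each `<th>` header with its `<td>` value inside the infobox table. The same idea can be shown without BeautifulSoup, using only the standard library's `html.parser` module (Python 3 here, although the tutorial above runs on Python 2); this is an illustrative sketch, not part of the original tutorial.

```python
from html.parser import HTMLParser

class InfoboxParser(HTMLParser):
    """Collect (header, data) pairs from <tr><th>...</th><td>...</td></tr> rows."""
    def __init__(self):
        super().__init__()
        self.rows = []       # finished (th, td) pairs
        self._tag = None     # 'th' or 'td' while inside one
        self._pair = {}      # text collected for the current row

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self._pair = {}
        elif tag in ('th', 'td'):
            self._tag = tag
            self._pair[tag] = ''

    def handle_endtag(self, tag):
        if tag in ('th', 'td'):
            self._tag = None
        elif tag == 'tr' and 'th' in self._pair and 'td' in self._pair:
            # Only rows that had BOTH a header and a data cell are kept,
            # matching the `if header is not None and data is not None` check above.
            self.rows.append((self._pair['th'], self._pair['td']))

    def handle_data(self, data):
        if self._tag:
            self._pair[self._tag] += data

doc = "<table><tr><th>Type</th><td>Public</td></tr><tr><th>Established</th><td>1831</td></tr></table>"
p = InfoboxParser()
p.feed(doc)
print(p.rows)  # [('Type', 'Public'), ('Established', '1831')]
```

BeautifulSoup is still the more convenient tool for real pages; the stdlib version just makes the th/td pairing logic explicit.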
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# + [markdown] id="_ETK8_kd_fgb"
# # Energy Efficiency Prediction

# + [markdown] id="wN6l80Ho_fgh"
# ## Problem Statement

# + [markdown] id="jEMXdL7b_fgi"
# >The effect of eight input variables (relative compactness, surface area, wall area, roof
# area, overall height, orientation, glazing area, glazing area distribution) on two output
# variables, namely heating load (HL) and cooling load (CL), of residential buildings is
# investigated using a statistical machine learning framework. We use a number of
# classical and non-parametric statistical analytic tools to analyse how strongly each
# input variable correlates with each of the output variables, in order to discover the
# most strongly associated input variables. To estimate HL and CL, we compare a
# traditional linear regression approach to a sophisticated state-of-the-art nonlinear
# non-parametric method, random forests.
# + [markdown] id="i4WyEnMP_fgj" # ## Importing Dependencies # + executionInfo={"elapsed": 1980, "status": "ok", "timestamp": 1648915819220, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="PkTHPNeB_fgk" # !pip install jovian --upgrade --quiet # - # !pip install ruamel-yaml --quiet # + executionInfo={"elapsed": 3707, "status": "ok", "timestamp": 1648915822920, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="aHC2eGjh_fgk" # !pip install pandas-profiling numpy matplotlib seaborn --quiet # + executionInfo={"elapsed": 4217, "status": "ok", "timestamp": 1648915827127, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="PttZQvKN_fgl" # !pip install opendatasets scikit-learn jovian --quiet --upgrade # + executionInfo={"elapsed": 3711, "status": "ok", "timestamp": 1648915830831, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="znHJ_Ptw_fgm" # !pip install openpyxl --quiet --upgrade # + executionInfo={"elapsed": 18, "status": "ok", "timestamp": 1648915830833, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="E5rV3RaP_fgn" import opendatasets as od import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np import matplotlib import jovian import os # %matplotlib inline pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', 150) sns.set_style('darkgrid') matplotlib.rcParams['font.size'] = 14 matplotlib.rcParams['figure.figsize'] = (10, 6) matplotlib.rcParams['figure.facecolor'] = '#00000000' # + [markdown] id="dNCX2U97_fgo" # ## Downloading the Data # + executionInfo={"elapsed": 15, "status": "ok", "timestamp": 1648915830833, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="4gMBjBzs_fgp" data_dir = "https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx" # 
+ executionInfo={"elapsed": 1522, "status": "ok", "timestamp": 1648915832342, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="Tq0CoagF_fgq" data_df = pd.read_excel(data_dir) # - data_df.drop(data_df.filter(regex="Unname"),axis=1, inplace=True) # + colab={"base_uri": "https://localhost:8080/", "height": 423} executionInfo={"elapsed": 26, "status": "ok", "timestamp": 1648915832342, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="kzuTSfbu_fgq" outputId="9e784950-e1c7-4c36-e7e1-47108b43b712" data_df # + [markdown] id="uq3-KxQ0_fgr" # ## Data Labeling # + executionInfo={"elapsed": 23, "status": "ok", "timestamp": 1648915832343, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="Au8XdkDF_fgr" df_cols = ['Relative Compactness', 'Surface Area', 'Wall Area', 'Roof Area', 'Overall Height', 'Orientation', 'Glazing Area','Glazing Area Distribution', 'Heating Load', 'Cooling Load'] # + colab={"base_uri": "https://localhost:8080/", "height": 485} executionInfo={"elapsed": 25, "status": "ok", "timestamp": 1648915832345, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="Pu90x_yY_fgs" outputId="bd4058a2-2b70-43bb-b048-62fefeb59903" data_df.columns = df_cols # - data_df # + [markdown] id="4aUMRLZe_fgs" # ## Train-Val-Test Split # + executionInfo={"elapsed": 24, "status": "ok", "timestamp": 1648915832346, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="3v4Njz_R_fgt" inputs_cols = df_cols[:-2] targets_cols = df_cols[-2:] inputs_df = data_df[inputs_cols].copy() targets_df = data_df[targets_cols].copy() # + executionInfo={"elapsed": 444, "status": "ok", "timestamp": 1648915832766, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="iwni2vw2_fgu" from sklearn.model_selection import train_test_split # + [markdown] id="E85nfjlY_fgv" # ### Train-Test 
Split # + executionInfo={"elapsed": 38, "status": "ok", "timestamp": 1648915832766, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="AbvEW-86_fgw" test_split_var = train_test_split(inputs_df, targets_df, test_size=0.20, random_state=42) (raw_inputs, test_inputs, raw_targets, test_targets) = test_split_var # + [markdown] id="7AkCpCQF_fgx" # ### Train-Val Split # + executionInfo={"elapsed": 38, "status": "ok", "timestamp": 1648915832767, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="cdGBFv6d_fgy" val_split_var = train_test_split(raw_inputs, raw_targets, test_size=0.20, random_state=42) (train_inputs, val_inputs, train_targets, val_targets) = val_split_var # + [markdown] id="zndFhGlJ_fgy" # ## EDA # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 3, "status": "ok", "timestamp": 1648916637357, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="f7WyY_r3_fgy" outputId="3f1e5801-45db-4c7d-ab49-7ea9debac2e1" train_inputs.info() # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 4, "status": "ok", "timestamp": 1648916656663, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="9odkfaBMFz72" outputId="94816a1a-2c6a-4785-dc08-b3534050e2ea" train_targets.info() # - train_inputs plt.figure(figsize=(5,5)) sns.histplot(train_inputs['Relative Compactness'], bins=5) # + colab={"base_uri": "https://localhost:8080/", "height": 300} executionInfo={"elapsed": 528, "status": "ok", "timestamp": 1648916624912, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="SD8ZniPe_fgz" outputId="9e0d0df6-410b-45cb-93d1-9e89580c7d78" train_inputs.describe() # + colab={"base_uri": "https://localhost:8080/", "height": 300} executionInfo={"elapsed": 9, "status": "ok", "timestamp": 1648916677054, "user": {"displayName": "<NAME>", "userId": 
"05607408892905365280"}, "user_tz": -330} id="JB0k_QdPF5DH" outputId="d90c8f49-9778-4eb4-b382-627c16f92a1d" train_targets.describe() # + colab={"base_uri": "https://localhost:8080/", "height": 344} executionInfo={"elapsed": 524, "status": "ok", "timestamp": 1648917037773, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="qNWLa_T5_fgz" outputId="2d6b80ad-9b36-4e0b-861f-d4d320726e44" input_corr = train_inputs.corr() input_corr # + colab={"base_uri": "https://localhost:8080/", "height": 563} executionInfo={"elapsed": 1814, "status": "ok", "timestamp": 1648917046961, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="PqRI9VEMFLM0" outputId="314cc4e5-4ad9-4b32-93c6-3dafac7a452b" sns.heatmap(input_corr, cmap='Reds', annot=True) # + colab={"base_uri": "https://localhost:8080/", "height": 112} executionInfo={"elapsed": 435, "status": "ok", "timestamp": 1648917059219, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="pkoB--P7FLGi" outputId="6b0d9553-8866-48ba-a885-517a8f1a835f" target_corr = train_targets.corr() target_corr # + colab={"base_uri": "https://localhost:8080/", "height": 401} executionInfo={"elapsed": 991, "status": "ok", "timestamp": 1648917104480, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="c22Fe5CbFK_z" outputId="53c7b370-6446-4168-c85d-3184c535a850" sns.heatmap(target_corr, cmap='Reds', annot=True) # - train_targets.value_counts() # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 3, "status": "ok", "timestamp": 1648917561283, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="6p2_LKOcFK37" outputId="a839fab8-2b49-4ce9-9b31-3d3aff807805" train_inputs.isna().sum() # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 3, "status": "ok", "timestamp": 1648917647484, "user": {"displayName": "<NAME>", "userId": 
"05607408892905365280"}, "user_tz": -330} id="XVvJ36ktFKcr" outputId="f64b43d0-4f11-4bbb-f733-ddf2d28093d0" train_inputs.dtypes.value_counts() # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 7, "status": "ok", "timestamp": 1648917656778, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="bN_TDn8JJkNm" outputId="f277553e-445f-463c-db20-e972a7800699" train_targets.dtypes.value_counts() # + [markdown] id="At4k6eBz_fgz" # ## Data Processing # + [markdown] id="LB2TlAczJtSK" # ### Scaling the Features # + colab={"base_uri": "https://localhost:8080/", "height": 112} executionInfo={"elapsed": 417, "status": "ok", "timestamp": 1648917867162, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="tqT3caNe_fg0" outputId="83a87ed4-84e2-4ddd-a96b-2ccfd0278953" train_inputs.describe().loc[['min', 'max']] # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 447, "status": "ok", "timestamp": 1648917894942, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="yp-aZnam_fg0" outputId="e90e7b66-b1c8-476d-ddfc-3a4fc2fd88df" from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scaler.fit(train_inputs[inputs_cols]) # + executionInfo={"elapsed": 396, "status": "ok", "timestamp": 1648917958255, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="ZNqLOuoYKs3H" train_inputs[inputs_cols] = scaler.transform(train_inputs[inputs_cols]) val_inputs[inputs_cols] = scaler.transform(val_inputs[inputs_cols]) test_inputs[inputs_cols] = scaler.transform(test_inputs[inputs_cols]) # + colab={"base_uri": "https://localhost:8080/", "height": 166} executionInfo={"elapsed": 8, "status": "error", "timestamp": 1648917961273, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="0ZjaL05gKl4e" outputId="a5aced4c-f791-46cd-9b60-090b4025dfe1" 
train_inputs.describe().loc[['min', 'max']] # + [markdown] id="xUD6NlQ2_fg0" # ## Model Training # + executionInfo={"elapsed": 25, "status": "ok", "timestamp": 1648915832770, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="sz0o_jJv_fg0" from sklearn.ensemble import RandomForestRegressor model = RandomForestRegressor(n_jobs=-1, random_state=42) # + executionInfo={"elapsed": 25, "status": "ok", "timestamp": 1648915832770, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="-t9Crtuw_fg0" model.fit(train_inputs, train_targets) # + [markdown] id="VJwQ2hko_fg0" # ## Model Evaluation # + executionInfo={"elapsed": 26, "status": "ok", "timestamp": 1648915832771, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="zwnCNKFn_fg1" from sklearn.metrics import (mean_squared_error, mean_absolute_error, median_absolute_error, explained_variance_score, r2_score) # + executionInfo={"elapsed": 25, "status": "ok", "timestamp": 1648915832771, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="sxmJpntj_fg1" prediction_val = model.predict(val_inputs) # - print(f'Mean Squared Error: {round(mean_squared_error(val_targets, prediction_val, squared=False), 3)}') print(f'Mean Absolute Error: {round(mean_absolute_error(val_targets, prediction_val), 3)}') print(f'Median Absolute Error: {round(median_absolute_error(val_targets, prediction_val), 3)}') print(f'Explained Variance Score: {round(explained_variance_score(val_targets, prediction_val), 3)}') print(f'R2 Score: {round(r2_score(val_targets, prediction_val, multioutput= "variance_weighted"), 3)}') # ### Visualizing the Prediction # + plt.scatter(val_targets['Heating Load'], pd.DataFrame(prediction_val)[0]) plt.xlabel('Actual Labels') plt.ylabel('Predicted Labels') plt.title('Heating Load Predictions') plt.plot(val_targets, val_targets, color='Magenta') # + plt.scatter(val_targets['Cooling 
Load'], pd.DataFrame(prediction_val)[1]) plt.xlabel('Actual Labels') plt.ylabel('Predicted Labels') plt.title('Cooling Load Predictions') plt.plot(val_targets, val_targets, color='Magenta') # - # ## Dummy Models HL_mean = train_targets.mean()[0] CL_mean = train_targets.mean()[1] dummy_targets = val_targets.copy() dummy_targets['Heating Load']=HL_mean dummy_targets['Cooling Load']=CL_mean print(f'Mean Squared Error: {round(mean_squared_error(val_targets, dummy_targets, squared=False), 3)}') print(f'Mean Absolute Error: {round(mean_absolute_error(val_targets, dummy_targets), 3)}') print(f'Median Absolute Error: {round(median_absolute_error(val_targets, dummy_targets), 3)}') print(f'Explained Variance Score: {round(explained_variance_score(val_targets, dummy_targets), 3)}') print(f'R2 Score: {round(r2_score(val_targets, dummy_targets, multioutput= "variance_weighted"), 3)}') # >The Dummy Model predicts the outcome with Mean error loss of around 10 # ### Visualizing the Predictions # + plt.scatter(val_targets['Heating Load'], dummy_targets['Heating Load']) plt.xlabel('Actual Labels') plt.ylabel('Predicted Labels') plt.title('Heating Load Predictions') plt.plot(val_targets, val_targets, color='Magenta') # + plt.scatter(val_targets['Cooling Load'], dummy_targets['Cooling Load']) plt.xlabel('Actual Labels') plt.ylabel('Predicted Labels') plt.title('Cooling Load Predictions') plt.plot(val_targets, val_targets, color='Magenta') # - # ## K-Fold Cross Validation from sklearn.model_selection import cross_val_score rf_model = RandomForestRegressor(n_jobs=-1, random_state=42) scores = cross_val_score(rf_model, train_inputs, train_targets, scoring='neg_mean_squared_error', cv=5) scores scores = abs(scores) print(f'Mean: {np.mean(scores)}, standard: {np.std(scores)}') # + from sklearn.model_selection import KFold X = (train_inputs) y = np.array(train_targets) model_eval = [] kf = KFold(n_splits=5) kf.get_n_splits(X, y) for train_index,test_index in kf.split(X,y): print("TRAIN:", 
    train_index, "TEST:", test_index)
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y[train_index], y[test_index]
    rf_model.fit(X_train, y_train)
    prediction = rf_model.predict(X_test)
    model_loss = mean_squared_error(y_test, prediction, squared=False)
    model_eval.append(model_loss)
# -

model_eval

# + [markdown] id="_8mChN0o_fg1"
# ## Tune Hyperparameters

# + executionInfo={"elapsed": 20, "status": "ok", "timestamp": 1648915832771, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="-vPATILK_fg1"
from sklearn.model_selection import GridSearchCV

# + executionInfo={"elapsed": 20, "status": "ok", "timestamp": 1648915832772, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="ikOlfeIW_fg1"
param_grid = {
    'bootstrap': [True],
    'max_depth': [10, 15, 20],
    'max_features': [4, 5, 6],
    'n_estimators': [200, 250, 300],
    'min_samples_split': [2,3,4]
    # 'min_samples_leaf': [1,2,3]
}

rfr = RandomForestRegressor(random_state=1)

g_search = GridSearchCV(estimator = rfr, param_grid = param_grid, cv=3,
                        n_jobs=1, verbose=0, return_train_score=True,
                        scoring='neg_mean_squared_error')
g_search.fit(X_train, y_train)
print(g_search.best_params_)

# + [markdown] executionInfo={"elapsed": 19, "status": "ok", "timestamp": 1648915832772, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="v6jzhc8W_fg1"
# ## Evaluating Best Parameters
# -

rfr_best = RandomForestRegressor(random_state=1, max_depth=10, max_features=4,
                                 # min_samples_leaf=1,
                                 min_samples_split=3, n_estimators=200)
rfr_best.fit(train_inputs, train_targets)
rfr_predictions = rfr_best.predict(val_inputs)

print(f'Mean Squared Error: {round(mean_squared_error(val_targets, rfr_predictions, squared=False), 3)}')
print(f'Mean Absolute Error: {round(mean_absolute_error(val_targets, rfr_predictions), 3)}')
print(f'Median Absolute Error: {round(median_absolute_error(val_targets, rfr_predictions), 3)}')
print(f'Explained Variance Score: {round(explained_variance_score(val_targets, rfr_predictions), 3)}') print(f'R2 Score: {round(r2_score(val_targets, rfr_predictions, multioutput= "variance_weighted"), 3)}') # ### Visualizing the Predictions # + plt.scatter(val_targets['Heating Load'], pd.DataFrame(rfr_predictions)[0]) plt.xlabel('Actual Labels') plt.ylabel('Predicted Labels') plt.title('Heating Load Predictions') plt.plot(val_targets, val_targets, color='Magenta') # + plt.scatter(val_targets['Cooling Load'], pd.DataFrame(rfr_predictions)[1]) plt.xlabel('Actual Labels') plt.ylabel('Predicted Labels') plt.title('Cooling Load Predictions') plt.plot(val_targets, val_targets, color='Magenta') # + [markdown] id="hLJj5Xbw_fg1" # ## Final Performance # - # >First we will merge the training and Validation dataset to provide more data for the model to learn # + executionInfo={"elapsed": 19, "status": "ok", "timestamp": 1648915832772, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="FOQ84NSR_fg2" frames1 = [train_inputs, val_inputs] frames2 = [train_targets,val_targets] X_inputs = pd.concat(frames1) X_targets = pd.concat(frames2) # + executionInfo={"elapsed": 19, "status": "ok", "timestamp": 1648915832773, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="75FPkJuf_fg2" rfr_best.fit(X_inputs, X_targets) # + executionInfo={"elapsed": 19, "status": "ok", "timestamp": 1648915832773, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="WEeCT0ug_fg2" prediction_final = rfr_best.predict(test_inputs) # - print(f'Mean Squared Error: {round(mean_squared_error(test_targets, prediction_final, squared=False), 3)}') print(f'Mean Absolute Error: {round(mean_absolute_error(test_targets, prediction_final), 3)}') print(f'Median Absolute Error: {round(median_absolute_error(test_targets, prediction_final), 3)}') print(f'Explained Variance Score: 
{round(explained_variance_score(test_targets, prediction_final), 3)}') print(f'R2 Score: {round(r2_score(test_targets, prediction_final, multioutput= "variance_weighted"), 3)}') # ### Visualizing the Model Prediction # + plt.scatter(test_targets['Heating Load'], pd.DataFrame(prediction_final)[0]) plt.xlabel('Actual Labels') plt.ylabel('Predicted Labels') plt.title('Heating Load Predictions') plt.plot(test_targets, test_targets, color='Magenta') # + plt.scatter(test_targets['Cooling Load'], pd.DataFrame(prediction_final)[1]) plt.xlabel('Actual Labels') plt.ylabel('Predicted Labels') plt.title('Cooling Load Predictions') plt.plot(test_targets, test_targets, color='Magenta') # - # ## Saving the Model import joblib joblib.dump(rfr_best, 'energy_efficiency_prediction.joblib') model = joblib.load('energy_efficiency_prediction.joblib') # + colab={"base_uri": "https://localhost:8080/", "height": 87} executionInfo={"elapsed": 2650, "status": "ok", "timestamp": 1648917233672, "user": {"displayName": "<NAME>", "userId": "05607408892905365280"}, "user_tz": -330} id="aPkQIsO2_fg2" outputId="19d9de64-e890-4f3f-b446-c46ddfba0df1" # Execute this to save new versions of the notebook jovian.commit(project="energy-efficiency-prediction") # + id="5t9cltau_fg2"
energy-efficiency-prediction.ipynb
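One detail worth noting about the evaluation cells above: `mean_squared_error(..., squared=False)` returns the *root* mean squared error, even though the notebook prints it under the label "Mean Squared Error". A small numpy sketch of what that metric computes for the two targets, assuming scikit-learn's behaviour of taking the per-output RMSE first and then averaging uniformly across outputs (its default `multioutput='uniform_average'`):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-squared error averaged over outputs.

    Assumed to match sklearn.metrics.mean_squared_error(..., squared=False)
    with the default multioutput='uniform_average' setting.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Per-output RMSE (one value each for Heating Load and Cooling Load) ...
    per_output = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
    # ... then a uniform average across the two targets.
    return per_output.mean()

# Toy HL/CL targets and predictions, each off by exactly 1:
y_true = np.array([[20.0, 25.0], [30.0, 35.0]])
y_pred = np.array([[21.0, 24.0], [29.0, 36.0]])
print(rmse(y_true, y_pred))  # 1.0
```

Keeping the metric's name straight matters here because the notebook compares this value against a dummy model's "mean error loss of around 10"; both numbers are RMSEs in the targets' original units.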
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Projekt1 - Beizen der Schweiz, die Ochsentour # # ## Arbeiten mit API, requests BeautifulSoup, arrangieren mit Pandas, Visualisierung # # ### Nenn mir deine Stammbeiz und ich sage dir, wo du wohnst # # Welches sind die vorherrschenden Beizennamen pro Kanton? # # Nachdem die **Swisscom Directories AG API mich in die Sackgasse geführt hat,** die Übung auf die manuelle Tour (ziemlich sicher ohne Selenium, da muss ich mich noch besser einarbeiten)... ups, funktioniert auch nicht, denn es lassen sich nicht mehr als 200 Einträge scrapen! # # Also von vorne: in Handarbeit habe ich alle Beizennamen rausgesucht, die schweizweit mehr als 200 Einträge haben. Dies muss ich nun **via API** pro Kanton absuchen und zusammenstellen. # # Folgende zehn Beizennamen sind in der Auswahl (inkl. französische und italienische Übersetzung): # # **-Rössli -Löwen -Hirschen -Bären -Sonne -Linde -Kreuz -Sternen -Engel -Post** # # Mein persönlicher API Schlüssel: XXXX # # https://tel.search.ch/api/help # # # BEISPIEL: https://tel.search.ch/api/?was=john+meier&key=Ihr Schlüssel # # # # + import requests import pandas as pd from bs4 import BeautifulSoup # %matplotlib inline # Als Erstes alles importieren, was ich glaube zu brauchen... # - # ### Dann hole ich mir die Website, mit deren API ich arbeite # # Anhand der API Spezifikationen kann ich mir die genauen Parameter raussuchen, nach denen ich suchen will. # Hier **was** (für Restaurant) , **wo** (für Kanton), und da TelSearch die requests auf 200 beschränkt hat die **maxnum** und **pos** (um eine Schlaufe bilden zu können für alle Einträge die mehr als 200 (maxnum) sind. "https://tel.search.ch/api/?was=Restaurant&key=XXXX" # Durch die Spezifikationen passe ich meine URL an und speichere sie gleich in eine Variable. 
Das unten heisst so viel wie alle Einträge in Restaurants des Kantons Schaffhausen, einer der zehn Namen (wegen der maxnum!!! GRRRR), bitte. url = "https://tel.search.ch/api/?was=Restaurant+Loewen&wo=SH&key=XXXX" # Mit **requests** rufe ich die Website auf, auch dies wird gleich in eine Variable **response** gepackt. response = requests.get(url) # Was spuckt er aus? Erste Kontrolle, ob mein request richtig verpackt war. Und das dann gleich noch als Textausgabe. response response.text # Nun verpacke ich die URL in verschiedene Variablen um damit übersichtlicher arbeiten zu können. Dies gibt dann die nächste Variable **url_sh_loewen** und meine nächste Kontrolle, ob die Anfrage richtig gestellt war. basis = "https://tel.search.ch/api/?" resto_loewen = "was=Restaurant+Loewen&" wo = "wo=SH&" key = "&key=XXXX" url_sh_loewen = basis + resto_loewen + wo + key requests.get(url_sh_loewen) requests.get(url_sh_loewen).text # Ich will mir die zusammengesetzte Website-URL nochmals anschauen... print(url_sh_loewen) # Wie in JupyterNotebook "Beizen der Schweiz" beschrieben, nix mit json.XML. # Ich pack die ganze Sache in BeautifulSoup. # Und wie im ersten Notebook beschrieben, suche ich nach "entry". r = requests.get(url_sh_loewen) contents = r.text #Wandeln wir den Text in ein Format um, mit dem BeautifulSoup umgehen kann. soup = BeautifulSoup(contents,'html.parser') #Geben wir das an BeautifulSoup weiter entries = soup.find_all('entry') #Nun lesen wir Titel aus. len(entries) #Schauen wir, wie lange die Titel sind. # ### Der neue Workaround... # # Alles wie in Teil 1 : ) # Also baue ich mir ein leeres Dataframe, wo ich dann wie in einen Container alles abfüllen kann. Ich brauche einen Boolean, verschiedene Variablen, neu für die verschiedenen Namen der Beizen pro Kanton. # # Und ja, wo es pro Kanton weniger als 200 Restaurants gibt, ziehe ich natürlich alle raus, um mir mehrere Requests pro Kanton sparen zu können... 
# # The whole thing is saved into several CSVs so I can continue working with them afterwards.

# +
df = pd.DataFrame({"Restaurantname": [], "Kanton": []})
df.head()

kanton = "FR"
restoname = "Ange"

basis = "https://tel.search.ch/api/?"
was = "was=Restaurant%20" + restoname + "&"
wo = "wo=" + kanton + "&"
key = "&key=XXXX"
url_fr_ange = basis + was + wo + key
#print(url_sh_sternen)

r = requests.get(url_fr_ange)
soup = BeautifulSoup(r.text, 'html.parser')
#print(soup.find("opensearch:totalresults").text)

# Search all entries of the current file
entries = soup.find_all("entry")
for entry in entries:
    print(entry.find("title").text)
    restaurantname = entry.find("title").text
    # Also record the canton, otherwise the "Kanton" column stays empty.
    df = df.append({"Restaurantname": restaurantname, "Kanton": kanton}, ignore_index=True)
print("finito")
# -

df.to_csv("export_csv/export_" + kanton + "_" + restoname + ".csv", sep=";", index=False)

# ## Cleaning the CSVs
#
# Some of the files come with odd (wrong) entries and names. Since the API locks you out after a certain number of queries per day, I clean the CSVs in Atom in the meantime and merge all entries into one CSV per canton. These are then merged into one all-of-Switzerland CSV and read back in.

"AG\n".strip()

with open("CH_alle.csv") as file:
    for line in file:
        data = line.split(";")
        print(data)

df = pd.read_csv("CH_alle_b.csv", sep=";")

# ## Packing the data into a DataFrame with Pandas and checking it
#
# As wonderfully described in the course material, I take my CH_alle CSV and pack it into a DataFrame. Is everything merged cleanly? With df.head and df.tail I look at the beginning and the end, with df.shape at the number of entries (it matches!), and with df.describe I get a description of the file.

df.head()

df.tail(5)

df.shape

len(df)

df.describe()

df['Kanton']

# ## First values I need
#
# As a basis for my visualization I need the number of restaurants per canton. A simple value_counts, packed into a variable and saved out as a new CSV... then double-check the whole thing right away.

df["Kanton"].value_counts()

anz_beizen_kanton = df["Kanton"].value_counts()
path = 'aus_python/anz_beizen_kanton.csv'
anz_beizen_kanton.to_csv(path, index=True)

check = pd.read_csv(path)
check

df["Kanton"].value_counts().plot(kind="barh", figsize=(10,7))

# ## The individual restaurant names...
#
# As with the canton query above, I need the dataset for the number of restaurant names (careful: Post, Poste and Posta still have to be added together). This query, too, is saved to a CSV and double-checked.

df["Restaurantname"].value_counts().head(30)

# +
from collections import Counter
#z = df["Restaurantname"]
#Counter(z)
# -

anz_beizennamen = df["Restaurantname"].value_counts()
path = 'aus_python/anz_beizennamen.csv'
anz_beizennamen.to_csv(path, index=True)

check = pd.read_csv(path)
check

df["Restaurantname"].value_counts().plot(kind="barh", figsize=(10,7))

# ## The individual pubs by canton
#
# Now I still need the combined query of pub names by canton. So I read the CH_alle file in once more.

df = pd.read_csv("CH_alle_b.csv", sep=";")
df.head()

df.groupby(['Kanton']).sum()

kt_beiz = df.groupby(['Kanton']).sum()
path = 'aus_python/kt_beizennamen.csv'
kt_beiz.to_csv(path, index=True)

# ## I stop after...
#
# After a lot of trial and error with the API, I ended up with the following cleaned dataset (image 1). At first glance you can see that something is off: a name "exists" more than 10 times only when it occurs in several languages! That cannot be right. This API pushes me not just to my limits, but far beyond them! All these restrictions... I re-checked manually on TelSearch and arrived at the following result (image 2). So if I were to pick a dataset to draw a piece from, it would be the manually corrected Excel list.
# # Learnings: many : )
#
# The most important one: no dataviz with the help of the Swisscom API ; )))

from IPython.display import Image
Image(filename='Beizen_API.png')

from IPython.display import Image
Image(filename='Beizen_nachrecherche.png')
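# The maxnum/pos loop described at the top (to get past the 200-entry cap) can be sketched as below. This is a hedged sketch, not code that was run for the project: treating `pos` as a 1-based offset is my reading of the API help page, and `XXXX` remains a placeholder key. The pagination logic is split from the network call so it can be tried without a key.

```python
def paginate(fetch_page, maxnum=200):
    """Collect results page by page until a page comes back shorter than maxnum."""
    results, pos = [], 1
    while True:
        page = fetch_page(pos, maxnum)
        results += page
        if len(page) < maxnum:   # short page -> nothing left to fetch
            return results
        pos += maxnum            # next page starts after the last entry

def tel_search_fetcher(was, wo, key):
    """Build a fetch_page function for tel.search.ch; imports are deferred so the
    pagination logic can be tested without network access or an API key."""
    def fetch(pos, maxnum):
        import requests
        from bs4 import BeautifulSoup
        url = ("https://tel.search.ch/api/?was=" + was + "&wo=" + wo +
               "&maxnum=" + str(maxnum) + "&pos=" + str(pos) + "&key=" + key)
        soup = BeautifulSoup(requests.get(url).text, 'html.parser')
        return [e.find('title').text for e in soup.find_all('entry')]
    return fetch

# e.g. every "Loewen" restaurant in canton Bern, however many there are:
# titles = paginate(tel_search_fetcher("Restaurant+Loewen", "BE", "XXXX"))
```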
Eigene Projekte/Projekt1_Beizen/Ochsentour_Beizen_2.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# <div class="alert alert-success">
# <b>Author</b>:
#
# <NAME>
# <EMAIL>
#
# </div>
#
# [Click here to see class lecture](https://photos.app.goo.gl/jwkcj4t3afHXrAJy7)

# Host (Client, Router, Server)
#
# IP address : Network Part + Host Part
#
# **Classful IP addressing**
#
# - Class A : N.H.H.H<br>
#   network : 8 bit (because there is one N part, and each part is an octet)<br>
#   host : 24 bit (because there are three H parts, each an octet)<br>
#   no. of hosts that can connect : $2^{24}-2\approx16$ million<br>
#   First octet : (1~127)<br>
#   Subnet Mask : 255.0.0.0 (11111111.00000000.00000000.00000000)
# <br><br>
# - Class B : N.N.H.H<br>
#   network : 16 bit (because there are two N parts, each an octet)<br>
#   host : 16 bit (because there are two H parts, each an octet)<br>
#   no. of hosts that can connect : $2^{16}-2 = 65534$<br>
#   First octet : (128~191)<br>
#   Subnet Mask : 255.255.0.0 (11111111.11111111.00000000.00000000)
# <br><br>
# - Class C : N.N.N.H<br>
#   network : 24 bit (because there are three N parts, each an octet)<br>
#   host : 8 bit (because there is one H part, an octet)<br>
#   no. of hosts that can connect : $2^{8}-2 = 254$<br>
#   First octet : (192~223)<br>
#   Subnet Mask : 255.255.255.0 (11111111.11111111.11111111.00000000)

# ` What if we want to connect more than 254 hosts? Then we'll use class B.`

# **University of Asia Pacific**
#
# No. of hosts per dept.
#
# 1. CSE = 2000
# 2. CE = 1500
# 3. EEE = 1200
# 4. PHARMA = 1000
# 5. ENG = 100
# 6. BBA = 64
#
# We have to connect `2000 hosts` just in CSE. So we can't choose class C, because it would take us 7~8 networks to connect all the hosts in CSE. But if we choose class B, then we can connect all hosts of CSE using a single network.
# This saves us the cost of buying 8 (minimum) routers that class C would need just to connect CSE. That's why we choose a class B network. If we choose class B we'll only need 6 networks to connect the whole university, whereas if we had chosen class C it would take 7~8 networks just to connect CSE, let alone the other depts. (There are 6 depts, and we want a separate network for each dept; that's why there are 6 networks in this class B setup.)

# Every device in a network wants to know about the other devices connected to the network. This can be done using a message transfer mechanism called broadcast. There are three types of message transfer mechanisms.
#
# - Unicast : One-to-one communication; I create one message and send it only to you.
# - Multicast : One to a group; I create 10 messages and send them to the group members (considering there are 10 members in the group).
# - Broadcast : One to all; I create 65534 messages and send them to everyone (considering we are in a class B network).
#
# Now the drawback: CSE has just 2000 hosts, but broadcasting will create 65534 messages, so the remaining 63534 messages will not be utilized, and that will increase traffic. So we need to work out how many host bits to use so that there are about 2000 connection points in the CSE network.
#
# Manual counting :
#
# $2^0 = 1$
#
# $2^1 = 2$
#
# $2^2 = 4$
#
# $2^3 = 8$<br>
# .<br>
# .<br>
# .<br>
# $2^{11} = 2048$
#
# $2^{12} = 4096$
#
# So with 11 bits we can connect $2^{11} - 2 = 2046$ hosts, which is close to the 2000 connections needed for CSE. There's much less traffic now, as only 46 connections will be unused. So we need 11 bits rather than 16 bits in the host part (N.N.H.H).

# ## Variable Length Subnet Mask (VLSM)
#
# Now let's find the host bits needed in each dept. This bit calculation can be done using the method above. `Usually the subnet mask of class B is 255.255.0.0 (11111111.11111111.00000000.00000000), but as we are using fewer host bits, we'll use more network bits, as given below.`
#
# 1. **CSE = 2000**
#    - Host bit : 11 bit
#    - Network bit : 32-11 = 21 bit
#    - Subnet Mask : 11111111.11111111.11111 000.00000000 = 255.255.248.0
#
# 2. **CE = 1500**
#    - Host bit : 11 bit
#    - Network bit : 32-11 = 21 bit
#    - Subnet Mask : 11111111.11111111.11111 000.00000000 = 255.255.248.0
#
# 3. **EEE = 1200**
#    - Host bit : 11 bit
#    - Network bit : 32-11 = 21 bit
#    - Subnet Mask : 11111111.11111111.11111 000.00000000 = 255.255.248.0
#
# 4. **PHARMA = 1000**
#    - Host bit : 10 bit
#    - Network bit : 32-10 = 22 bit
#    - Subnet Mask : 11111111.11111111.111111 00.00000000 = 255.255.252.0
#
# 5. **ENG = 100**
#    - Host bit : 7 bit
#    - Network bit : 32-7 = 25 bit
#    - Subnet Mask : 11111111.11111111.11111111.1 0000000 = 255.255.255.128
#
# 6. **BBA = 64**
#    - Host bit : 7 bit
#    - Network bit : 32-7 = 25 bit
#    - Subnet Mask : 11111111.11111111.11111111.1 0000000 = 255.255.255.128
#
# Generally we use a fixed-length subnet mask like 255.255.0.0 (for class B), but the subnet masks above aren't fixed. This type of subnet mask is called a **Variable Length Subnet Mask (VLSM).** Using VLSM we design **CIDR : Classless Inter Domain Routing.** When the subnet mask was fixed we designed `Classful IP addressing`, but when using VLSM we design `CIDR`. CIDR is classless because we can't say which class the network part (21 bit) belongs to.
#
# Using these methods we can greatly reduce wasted connections and traffic.

# ## CIDR : Classless Inter Domain Routing
#
# Base network : 10.10.0.0/16 (will be provided by the instructor; we write /16 because there are 16 bits in the network part, so this network belongs to class B). Now we'll design CIDR based on the requirements mentioned above.
#
# **Package (Network Address, IP Address (First host, Last host), Subnet Mask, Broadcast)**<br>
# Let's clarify the Package concept using class C.<br>
# Network address : 192.168.5.0/24 (assumed base address; we write /24 because there are 24 bits in the network part.
From the 2nd iteration we'll add binary 1 to the host part of the previous iteration's base address to get the network address.)<br> # Subnet Mask : 255.255.255.0<br> # First host : 192.168.5.1 (Network address + 1)<br> # Last host : 192.168.5.254 (Broadcast address - 1)<br> # Broadcast address : 192.168.5.255<br> # Broadcast address equation is we set all the bit to 1 in the host part. Here host part has only 8 bit thats why it has 255 in last octet. # # # # # Now let's see package for CIDR,<br> # # ## For CSE # # Base network/ Network address : 10.10.0.0/21<br> # First host : 10.10.0.1<br> # Subnet Mask : 255.255.248.0 (for 21 bit network)<br> # Last host : 10.10.7.254<br> # Broadcast address : 10.10.00000 111.11111111 (11 bit host) = 10.10.7.255<br> # <br><br> # # ## For CE # # To get the network address we add 1 to the binary form of CSE's broadcast address.<br> # Network address : 00001010.00001010.00000 111.11111111 +1 = 00001010.00001010.00001 000.00000000 = 10.10.8.0/21<br> # First host : 10.10.8.1<br> # Subnet Mask : 255.255.248.0 (for 21 bit network)<br> # Last host = 10.10.15.254<br> # Broadcast address : 00001010.00001010.00001 111.11111111 (11 bit host) = 10.10.15.255<br> # <br><br> # # ## For EEE # # To get the network address we add 1 to the binary form of CE's broadcast address.<br> # Network address : 00001010.00001010.00001 111.11111111 + 1 = 00001010.00001010.00010 000.00000000 = 10.10.16.0/21<br> # First host : 10.10.16.1<br> # Subnet Mask : 255.255.248.0 (for 21 bit network)<br> # Last host : 10.10.23.254<br> # Broadcast address : 00001010.00001010.00010 111.11111111 (11 bit host) = 10.10.23.255<br> # <br><br> # # ## For PHARMA # # To get the network address we add 1 to the binary form of EEE's broadcast address.<br> # Network address : 00001010.00001010.00010 111.11111111 + 1 = 00001010.00001010.000110 00.00000000 = 10.10.24.0/22<br> # First host : 10.10.24.1<br> # Subnet Mask : 255.255.252.0 (for 22 bit network)<br> # Last host : 
10.10.27.254<br> # Broadcast address : 00001010.00001010.000110 11.11111111 (10 bit host) = 10.10.27.255<br> # <br><br> # # ## For ENG # # To get the network address we add 1 to the binary form of PHARMA's broadcast address.<br> # Network address : 00001010.00001010.000110 11.11111111 + 1 = 00001010.00001010.00011100.0 0000000 = 10.10.28.0/25<br> # First host : 10.10.28.1<br> # Subnet Mask : 255.255.255.128 (for 25 bit network)<br> # Last host : 10.10.28.126<br> # Broadcast address : 00001010.00001010.00011100.0 1111111 = 10.10.28.127 (for 7 bit host)<br> # <br><br> # # ## For BBA # # To get the network address we add 1 to the binary form of ENG's broadcast address.<br> # Network address : 00001010.00001010.00011100.0 1111111 + 1 = 00001010.00001010.00011100.1 0000000 = 10.10.28.128/25<br> # First host : 10.10.28.129<br> # Subnet Mask : 255.255.255.128 (for 25 bit network)<br> # Last host : 10.10.28.254<br> # Broadcast address : 00001010.00001010.00011100.1 1111111 = 10.10.28.255 (for 7 bit host)<br> # <br><br> # # # That's all for this lab!
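# The host-bit counting and the whole VLSM/CIDR package table worked out by hand above can be cross-checked with a short script. This is a sketch (not part of the lab) using Python's standard `ipaddress` module; the department sizes and the 10.10.0.0 base network are the ones from this lab.

```python
import ipaddress

def host_bits(n):
    """Smallest h such that 2**h - 2 >= n usable hosts."""
    h = 2
    while 2**h - 2 < n:
        h += 1
    return h

def vlsm_plan(base, depts):
    """Allocate one subnet per (name, hosts) pair, largest first, starting at base.
    Returns name -> (network, mask, first host, last host, broadcast)."""
    plan = {}
    addr = ipaddress.ip_address(base)
    for name, n in sorted(depts, key=lambda d: -d[1]):
        net = ipaddress.ip_network(f"{addr}/{32 - host_bits(n)}")
        plan[name] = (str(net), str(net.netmask),
                      str(net.network_address + 1),    # first host
                      str(net.broadcast_address - 1),  # last host
                      str(net.broadcast_address))
        addr = net.broadcast_address + 1               # next network = broadcast + 1
    return plan

plan = vlsm_plan("10.10.0.0", [("CSE", 2000), ("CE", 1500), ("EEE", 1200),
                               ("PHARMA", 1000), ("ENG", 100), ("BBA", 64)])
for name, row in plan.items():
    print(name, row)
```

# The printed rows reproduce the hand-derived packages, e.g. CSE gets 10.10.0.0/21 with broadcast 10.10.7.255 and BBA gets 10.10.28.128/25.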
CSE_320_Computer Networks Lab/Lab_4_23.07.2020.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [conda env:py_geo]
#     language: python
#     name: conda-env-py_geo-py
# ---

# ### Benchmarking SCINet GeoCDL Options
#
# This notebook is designed to benchmark the throughput from various data sources to the USDA ARS HPC systems. Much of the code and inspiration for these metrics comes from a similar effort via the Pangeo project ([see this link](http://gallery.pangeo.io/repos/earthcube2020/ec20_abernathey_etal/cloud_storage.html) or this [preprint](https://www.authorea.com/doi/full/10.22541/au.160443768.88917719/v1)).
#
# This benchmark uses dask arrays and a dask distributed computing system to scale the number of workers fetching data from remote data repositories. To run a benchmark:
# 1. Copy the `template.ipynb` file.
# 2. Rename the file like:<br>
#    * DataSource__FileType.ipynb (Note the double _ characters)
#    * e.g. `template.ipynb` --> `aws__netcdf.ipynb`
#    * e.g. `template.ipynb` --> `DukeFTP__Gdal_tif.ipynb`
# 3. Fill in the IO code in the "blank" section.
# 4. Run the entire notebook.
# 5. Confirm a file was written into the result folder.
# **NOTE: You need to update the `container` variable in the 3rd cell below**

import warnings
warnings.filterwarnings('ignore')
from utils import DevNullStore,DiagnosticTimer,total_nthreads,total_ncores,total_workers,get_chunksize
import time, datetime, os, dask
import pandas as pd
import numpy as np
import dask.array as da
import dask_jobqueue as jq
from tqdm.notebook import tqdm
from dask.distributed import Client

# +
## This environment is set to optimize reading COG files - see https://github.com/pangeo-data/cog-best-practices
env = dict(GDAL_DISABLE_READDIR_ON_OPEN='EMPTY_DIR',
           AWS_NO_SIGN_REQUEST='YES',
           GDAL_MAX_RAW_BLOCK_CACHE_SIZE='200000000',
           GDAL_SWATH_SIZE='200000000',
           VSI_CURL_CACHE_SIZE='200000000')
os.environ.update(env)
# -

partition='brief-low'
num_processes = 3
num_threads_per_processes = 2
mem = 3.2*num_processes*num_threads_per_processes#*1.25,
n_cores_per_job = num_processes*num_threads_per_processes
container = 'docker://rowangaffney/data_science_im_rs:latest'
env = 'py_geo'
cluster = jq.SLURMCluster(queue=partition,
                          processes=num_processes,
                          memory=str(mem)+'GB',
                          cores=n_cores_per_job,
                          interface='ib0',
                          local_directory='$TMPDIR',
                          death_timeout=30,
                          python="singularity exec {} /opt/conda/envs/{}/bin/python".format(container,env),
                          walltime='00:30:00',
                          scheduler_options={'dashboard_address': ':8777'},
                          job_extra=["--output=/dev/null","--error=/dev/null"])
client=Client(cluster)
client

# ### Dataset Specific IO Code
#
# Load data into a dask array named `data`. Ideally subset the data so it has ~100MB chunks and totals 25-200 GBs (depending on throughput and # of chunks). An example approach is:
# ```python
# ds = xr.open_rasterio('http://hydrology.cee.duke.edu/POLARIS/PROPERTIES/v1.0/vrt/clay_mode_0_5.vrt',
#                       chunks='auto')
# data = ds.isel(x=slice(0,5400*5),y=slice(0,5400*5)).data
# ```
#
# Define the following metadata:
#
# **cloud_source** = Where the data is housed <br>
# **format** = Format of the data.
# For gdal drivers use gdal_drivername (aka gdal_tif). For other sources, use the file format suffix (aka zarr).<br>
# **system** = The system (Ceres, Atlas, AWS, GCP, AZURE, etc...)
#
# Example:
# ```python
# cloud_source = 'aws_nasa_daac'
# d_format = 'gdal_cog'
# system = 'Ceres'
# ```

# +
### ADD CODE BELOW ###
import xarray as xr
ds = xr.open_rasterio('http://hydrology.cee.duke.edu/POLARIS/PROPERTIES/v1.0/vrt/clay_mode_0_5.vrt',chunks='auto')
data = ds.isel(x=slice(0,5400*15),y=slice(0,5310*15)).data
d_format = 'gdal_vrt'
cloud_source = 'ftp_duke'
system = 'Ceres'
data
# -

# ### Run Diagnostics on Throughput

# +
diag_timer = DiagnosticTimer()
devnull = DevNullStore()
chunksize = get_chunksize(data)
totalsize = data.nbytes*1e-9
diag_kwargs = dict(nbytes=data.nbytes,
                   chunksize=chunksize,
                   cloud_source=cloud_source,
                   system=system,
                   format=d_format)

n_worker_lst = [3,6,12,18,30,42,66,90]
runtime = datetime.datetime.now().strftime("%Y%m%d_%H%M")
for nworkers in tqdm(n_worker_lst):
    client.restart()
    cluster.scale(nworkers)
    time.sleep(10)
    client.wait_for_workers(nworkers)
    with diag_timer.time(nthreads=total_nthreads(client),
                         ncores=total_ncores(client),
                         nworkers=total_workers(client),
                         **diag_kwargs):
        future = da.store(data, devnull, lock=False, compute=False)
        dask.compute(future, retries=5)
df = diag_timer.dataframe()
cluster.scale(0)
df['throughput_MBps'] = df.nbytes/1e6/df.runtime
# -

# ### Visualize and save the results

# +
# Save results
df.to_csv('../results/Python__'+system+'__'+cloud_source+'__'+d_format+'__'+runtime+'.csv',index=False)
p = df.plot(x='nworkers',
            y='throughput_MBps',
            title='System: '+system+' | Data Source: '+cloud_source+' | File Format: '+d_format,
            grid=True,
            style='.-',
            figsize=(10,5),
            ylabel='Throughput MB/s',
            xlabel='# of Workers',
            ms=12,
            legend=False)
# -
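# The helper classes imported from `utils` are not shown in this notebook. Based on the Pangeo cloud-storage benchmark this notebook credits, they plausibly look like the sketch below (an assumption about `utils`, not its actual contents): `DevNullStore` discards every chunk `da.store` writes, so the timing measures read/network throughput rather than write speed, and `DiagnosticTimer` collects one row of metadata plus runtime per timed block.

```python
import time
from contextlib import contextmanager

import pandas as pd

class DevNullStore:
    """Dict-like target for dask.array.store() that throws every chunk away."""
    def __setitem__(self, key, value):
        pass  # discard the data

class DiagnosticTimer:
    """Context-manager timer collecting one row of metadata + runtime per run."""
    def __init__(self):
        self.diagnostics = []

    @contextmanager
    def time(self, **kwargs):
        tic = time.time()
        yield
        kwargs["runtime"] = time.time() - tic
        self.diagnostics.append(kwargs)

    def dataframe(self):
        return pd.DataFrame(self.diagnostics)

# Tiny demonstration: time a do-nothing "store" of one chunk.
dt = DiagnosticTimer()
with dt.time(nworkers=3, nbytes=1024):
    DevNullStore()["chunk-0"] = b"x" * 1024
print(dt.dataframe())
```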
throughput_benchmarks/python/ftpDuke__gdal_vrt.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [default]
#     language: python
#     name: python3
# ---

# +
import geopy.distance

#103.8125
#1.2937
coords_1 = (52.2296756, 21.0122287)
coords_2 = (52.406374, 16.9251681)

# geopy.distance.vincenty was deprecated and removed in geopy 2.0;
# geodesic is the drop-in replacement.
print(geopy.distance.geodesic(coords_1, coords_2))

# +
import pandas as pd
import numpy as np

rain_df = pd.read_csv("rain_shifted.csv")
rain_df.head()
# -

rain_grouped = rain_df.groupby(['station_id'], axis=0)
print(rain_grouped.size())

# ## Carpark data

carpark_df = pd.read_csv("carpark_2018-01-08.csv")
carpark_df.head()

# ## Carpark info with longitude, latitude

carparkinfo_df = pd.read_csv("carpark_info.csv")
carparkinfo_df.head()

carparkinfo_df.shape

carparkinfo_filter = carparkinfo_df.filter(['carpark_number','longitude','latitude'])
carparkinfo_filter

# + active=""
# rain_location_distance = geopy.distance(2 coordinates)
# rain influence = summation_i(e^-(rain_location_distance)*rain_value)

# +
#pd.merge(df, carparkinfo_filter, how='outer', on=['timestamp (SGT)','station_id'])
# -

carparkinfo_df.groupby(['carpark_number'], axis=0).size()

# +
#temp_df = grouped.get_group('S94')
carpark_grouped = carpark_df.groupby(['carpark_number'], axis=0)
print(carpark_grouped.size())
# -

carpark_points = carpark_grouped.size().index
#carpark_points[0]

carpark_loc_df = pd.DataFrame(carpark_points)
carpark_loc_df['longitude'] = np.nan
carpark_loc_df['latitude'] = np.nan
carpark_loc_df.head()

carpark_loc_df['carpark_number']

# +
#for i in range(0,len(carpark_loc_df['carpark_number'])):
#    carpark_loc_df['longitude'][i] = carpark_df.loc[carpark_df['carpark_number'] == carpark_loc_df['carpark_number'][i]]['longitude'].values[0]
# -

carpark_df.loc[carpark_df['carpark_number'] == carpark_points[0]]

carpark_df.loc[carpark_df['carpark_number'] == carpark_points[0]]['longitude'].values[0]

info_df = pd.read_csv("hdb-carpark-information.csv")
info_df.head()
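# The rain-influence formula noted in the raw cell above, influence = sum_i exp(-d_i) * rain_i, can be written out directly. This is a sketch with made-up station coordinates and readings (they are not from the real rain or carpark files); it uses a hand-rolled haversine distance instead of geopy so it has no external dependency.

```python
import math

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def rain_influence(carpark, stations):
    """Sum over stations of exp(-distance_km) * rain_value."""
    return sum(math.exp(-haversine_km(carpark, (lat, lon))) * rain
               for lat, lon, rain in stations)

# Made-up example: one carpark and two rain stations (lat, lon, rain in mm).
carpark = (1.2937, 103.8125)
stations = [(1.30, 103.82, 5.0), (1.35, 103.90, 2.0)]
print(rain_influence(carpark, stations))
```

# The exponential weight means a station at the carpark itself contributes its full rain value, while stations more than a few kilometres away contribute almost nothing.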
rain_influence.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     name: python2
# ---

# Imports
import numpy
import pandas
import sklearn
import sklearn.dummy
import sklearn.ensemble
import sklearn.metrics  # used below for classification_report, accuracy_score, etc.

# Matplotlib setup
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn
seaborn.set()

# Load justice-centered SCDB data
scdb_data = pandas.read_csv("data/SCDB_2013_01_justiceCentered_Citation.csv")

# ## Disposition outcome coding
#
# In the section below, we transform the SCDB vote and caseDisposition variables into an outcome variable indicating whether the case overall, and each Justice, affirmed or reversed.
#
# * vote: [http://scdb.wustl.edu/documentation.php?var=vote#norms](http://scdb.wustl.edu/documentation.php?var=vote#norms)
# * caseDisposition: [http://scdb.wustl.edu/documentation.php?var=caseDisposition#norms](http://scdb.wustl.edu/documentation.php?var=caseDisposition#norms)

# +
"""
Setup the outcome map.

Rows correspond to vote types.
Columns correspond to disposition types.
Element values correspond to:
 * -1: no precedential issued opinion or uncodable, i.e., DIGs
 * 0: affirm, i.e., no change in precedent
 * 1: reverse, i.e., change in precedent
"""
outcome_map = pandas.DataFrame([[-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
                                [-1, 1, 0, 0, 0, 1, 0, -1, -1, -1, -1],
                                [-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
                                [-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
                                [-1, 0, 1, 1, 1, 0, 1, -1, -1, -1, -1],
                                [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
                                [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
                                [-1, 0, 0, 0, -1, 0, -1, -1, -1, -1, -1]])
outcome_map.columns = range(1, 12)
outcome_map.index = range(1, 9)

def get_outcome(vote, disposition):
    """
    Return the outcome code.
""" if pandas.isnull(vote) or pandas.isnull(disposition): return -1 return outcome_map.loc[int(vote), int(disposition)] # + # Map the case-level disposition outcome scdb_data.loc[:, "case_outcome_disposition"] = outcome_map.loc[1, scdb_data.loc[:, "caseDisposition"]].values # Map the justice-level disposition outcome scdb_data.loc[:, "justice_outcome_disposition"] = scdb_data.loc[:, ("vote", "caseDisposition")] \ .apply(lambda row: get_outcome(row["vote"], row["caseDisposition"]), axis=1) # - # ## Running a simulation # # In the section below, we define methods that handle the execution and analysis of simulations. Simulations are based around the following concepts: # # * __prediction methods__: prediction methods take historical data and determine, for each term-justice, what prediction to make. # + def predict_court_case_rate(historical_data, justice_list): """ Prediction method based on the entire Court case-level historical reversal rate. :param historical_data: SCDB DataFrame to use for out-of-sample calculationi; must be a subset of SCDB justice-centered data known up to point in time :param justice_list: list of Justices to generate predictions for :return: dictionary containing a prediction score for reversal for each Justice """ # Calculate the rate counts = historical_data.loc[:, "case_outcome_disposition"].value_counts() rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1]) # Create return dictionary prediction_map = dict([(justice, rate) for justice in justice_list]) return prediction_map def predict_court_justice_rate(historical_data, justice_list): """ Prediction method based on the entire Court justice-level historical reversal rate. 
:param historical_data: SCDB DataFrame to use for out-of-sample calculationi; must be a subset of SCDB justice-centered data known up to point in time :param justice_list: list of Justices to generate predictions for :return: dictionary containing a prediction score for reversal for each Justice """ # Calculate the rate counts = historical_data.loc[:, "justice_outcome_disposition"].value_counts() rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1]) # Create return dictionary prediction_map = dict([(justice, rate) for justice in justice_list]) return prediction_map def predict_justice_rate(historical_data, justice_list): """ Prediction method based on the per-Justice historical reversal rate. :param historical_data: SCDB DataFrame to use for out-of-sample calculationi; must be a subset of SCDB justice-centered data known up to point in time :param justice_list: list of Justices to generate predictions for :return: dictionary containing a prediction score for reversal for each Justice """ # Create return dictionary prediction_map = dict([(justice, numpy.nan) for justice in justice_list]) # Calculate the rate for justice_id, justice_data in historical_data.groupby('justice'): # Check justice ID if justice_id not in justice_list: continue # Else, get the rate. counts = justice_data.loc[:, "justice_outcome_disposition"].value_counts() rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1]) prediction_map[justice_id] = rate # In some cases, we have a new Justice without historical data. Fill their value with the overall rate. counts = historical_data.loc[:, "justice_outcome_disposition"].value_counts() rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1]) for justice in justice_list: if pandas.isnull(prediction_map[justice]): prediction_map[justice] = rate return prediction_map def predict_justice_last_rate(historical_data, justice_list, last_terms=1): """ Prediction method based on the per-Justice historical reversal rate. 
:param historical_data: SCDB DataFrame to use for out-of-sample calculationi; must be a subset of SCDB justice-centered data known up to point in time :param justice_list: list of Justices to generate predictions for :param last_terms: number of recent terms to use for rate estimate :return: dictionary containing a prediction score for reversal for each Justice """ # Create return dictionary prediction_map = dict([(justice, numpy.nan) for justice in justice_list]) # Calculate the rate for justice_id, justice_data in historical_data.groupby('justice'): # Check justice ID if justice_id not in justice_list: continue # Else, get the rate. max_term = justice_data["term"].max() counts = justice_data.loc[justice_data["term"] >= (max_term-last_terms+1), "justice_outcome_disposition"].value_counts() rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1]) prediction_map[justice_id] = rate # In some cases, we have a new Justice without historical data. Fill their value with the overall rate. counts = historical_data.loc[:, "justice_outcome_disposition"].value_counts() rate = float(counts.ix[1]) / (counts.ix[0] + counts.ix[1]) for justice in justice_list: if pandas.isnull(prediction_map[justice]): prediction_map[justice] = rate return prediction_map def run_simulation(simulation_data, term_list, prediction_method, score_method="binary"): """ This method defines the simulation driver. 
:param simulation_data: SCDB DataFrame to use for simulation; must be a subset of SCDB justice-centered data :param term_list: list of terms to simulate, e.g., [2000, 2001, 2002] :param prediction_method: method that takes historical data and indicates, by justice, predictions for term :param score_method: "binary" or "stratified"; binary maps to score >= 0.5, stratified maps to score <= random :return: copy of simulation_data with additional columns representing predictions """ # Initialize predictions return_data = simulation_data.copy() return_data.loc[:, "prediction"] = numpy.nan return_data.loc[:, "prediction_score"] = numpy.nan # Iterate over all terms for term in term_list: # Get indices for dockets to predict and use for historical data before_term_index = simulation_data.loc[:, "term"] < term current_term_index = simulation_data.loc[:, "term"] == term # Get the list of justices term_justices = sorted(simulation_data.loc[current_term_index, "justice"].unique().tolist()) # Get the prediction map prediction_map = prediction_method(simulation_data.loc[before_term_index, :], term_justices) # Get the predictions return_data.loc[current_term_index, "prediction_score"] = [prediction_map[j] for j in simulation_data.loc[current_term_index, "justice"].values] # Support both most_frequent and stratified approaches if score_method == "binary": return_data.loc[current_term_index, "prediction"] = (return_data.loc[current_term_index, "prediction_score"] >= 0.5).apply(int) elif score_method == "stratified": return_data.loc[current_term_index, "prediction"] = (return_data.loc[current_term_index, "prediction_score"] >= numpy.random.random(return_data.loc[current_term_index].shape[0])).apply(int) else: raise NotImplementedError # Get the return range and return term_index = (return_data.loc[:, "term"].isin(term_list)) & (return_data.loc[:, "case_outcome_disposition"] >= 0) & (return_data.loc[:, "justice_outcome_disposition"] >= 0) return return_data.loc[term_index, :] # - # 
Set parameters start_term = 1953 end_term = 2013 # ## Predicting case outcomes with court reversal rate # # In the simulation below, we demonstrate the performance of the baseline model to predict case outcome based solely on historical court reversal rate. # # The results indicate an accuracy of 63.72% with a frequency-weighted F1 score of 49.6%. # + # Run simulation for simplest model print("predict_court_case_rate") output_data = run_simulation(scdb_data, range(start_term, end_term), predict_court_case_rate) # Analyze results print(sklearn.metrics.classification_report(output_data["case_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.confusion_matrix(output_data["case_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.accuracy_score(output_data["case_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.f1_score(output_data["case_outcome_disposition"], output_data["prediction"])) # Get accuracy over time and store results output_data.loc[:, "correct"] = (output_data["case_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1)) output_data.to_csv("results/baseline_court_case_rate.csv", index=False) court_case_accuracy_by_year = output_data.groupby("term")["correct"].mean() # - # ## Predicting case outcomes with Justice reversal rate # # In the simulation below, we demonstrate the performance of the baseline model to predict case outcome based solely on historical Justice reversal rate. # # The results are identical to the simulation above, and indicate an accuracy of 63.72% with a frequency-weighted F1 score of 49.6%. 
# + # Run simulation for simplest model print("predict_court_justice_rate") output_data = run_simulation(scdb_data, range(start_term, end_term), predict_court_justice_rate) # Analyze results print(sklearn.metrics.classification_report(output_data["case_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.confusion_matrix(output_data["case_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.accuracy_score(output_data["case_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.f1_score(output_data["case_outcome_disposition"], output_data["prediction"])) # Get accuracy over time and store results output_data.loc[:, "correct"] = (output_data["case_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1)) output_data.to_csv("results/baseline_court_justice_rate.csv", index=False) court_justice_accuracy_by_year = output_data.groupby("term")["correct"].mean() # - # ## Predicting Justice outcomes with Justice reversal rate # # In the simulation below, we demonstrate the performance of the baseline model to predict Justice outcome based solely on historical Justice reversal rate. # # The results indicate an accuracy of 57.1% with a frequency-weighted F1 score of 49.7%. 
# + # Run simulation for simplest model print("predict_justice_rate") output_data = run_simulation(scdb_data, range(start_term, end_term), predict_justice_rate) # Analyze results print(sklearn.metrics.classification_report(output_data["justice_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.confusion_matrix(output_data["justice_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.accuracy_score(output_data["justice_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.f1_score(output_data["justice_outcome_disposition"], output_data["prediction"])) # Get accuracy over time and store results output_data.loc[:, "correct"] = (output_data["justice_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1)) output_data.to_csv("results/baseline_justice_justice_rate.csv", index=False) justice_accuracy_by_year = output_data.groupby("term")["correct"].mean() # - # ## Predicting Justice outcomes with trailing Justice reversal rate # # In the simulation below, we demonstrate the performance of the baseline model to predict Justice outcome based solely on historical Justice reversal rate over the last one term. # # The results indicate an accuracy of 56.7% with a frequency-weighted F1 score of 52.0%. 
# + # Run simulation for simplest model print("predict_justice_last_rate") output_data = run_simulation(scdb_data, range(start_term, end_term), predict_justice_last_rate) # Analyze results print(sklearn.metrics.classification_report(output_data["justice_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.confusion_matrix(output_data["justice_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.accuracy_score(output_data["justice_outcome_disposition"], output_data["prediction"])) print(sklearn.metrics.f1_score(output_data["justice_outcome_disposition"], output_data["prediction"])) # Get accuracy over time and store results output_data.loc[:, "correct"] = (output_data["justice_outcome_disposition"].fillna(-1) == output_data["prediction"].fillna(-1)) output_data.to_csv("results/baseline_justice_justice_last_rate.csv", index=False) justice_last_accuracy_by_year = output_data.groupby("term")["correct"].mean() # - # ## Simulating case votes outcomes with trailing Justice reversal rate # # In addition to assessing accuracy of Justice predictions, we can simulate case outcomes by simulating voting dynamics based thereon. 
# +
# Run vote simulation
print("predict_justice_last_rate")
output_data = run_simulation(scdb_data, range(start_term, end_term), predict_justice_last_rate)
output_data.loc[:, "case_prediction"] = numpy.nan

# Iterate over all dockets and take the majority vote of the Justice-level
# predictions; note the row selection on docketId, so each docket receives
# only its own majority prediction rather than overwriting every row.
for docket_id, docket_data in output_data.groupby("docketId"):
    output_data.loc[output_data["docketId"] == docket_id, "case_prediction"] = \
        int(docket_data["prediction"].value_counts().idxmax())

# Output case level predictions
output_data.to_csv("results/baseline_case_justice_last_rate.csv", index=False)
# -

print(sklearn.metrics.classification_report(output_data["case_outcome_disposition"].fillna(-1), output_data["case_prediction"].fillna(-1)))
print(sklearn.metrics.confusion_matrix(output_data["case_outcome_disposition"].fillna(-1), output_data["case_prediction"].fillna(-1)))
print(sklearn.metrics.accuracy_score(output_data["case_outcome_disposition"].fillna(-1), output_data["case_prediction"].fillna(-1)))
print(sklearn.metrics.f1_score(output_data["case_outcome_disposition"].fillna(-1), output_data["case_prediction"].fillna(-1)))

# +
# Plot all accuracies
f = plt.figure(figsize=(10, 8))
plt.plot(court_case_accuracy_by_year.index, court_case_accuracy_by_year, marker='o', alpha=0.75)
plt.plot(court_justice_accuracy_by_year.index, court_justice_accuracy_by_year, marker='o', alpha=0.75)
plt.plot(justice_accuracy_by_year.index, justice_accuracy_by_year, marker='o', alpha=0.75)
plt.plot(justice_last_accuracy_by_year.index, justice_last_accuracy_by_year, marker='o', alpha=0.75)
plt.title("Accuracy by term and model", size=24)
plt.xlabel("Term")
plt.ylabel("% correct")
plt.legend(("Court by case disposition", "Court by Justice disposition",
            "Justice by justice disposition", "Justice by trailing justice disposition"))
# -

# Plot case disposition rate by term
rate_data = scdb_data.groupby("term")["case_outcome_disposition"].value_counts(normalize=True, sort=True).unstack()
f = plt.figure(figsize=(10, 8))
plt.plot(rate_data.index, rate_data, marker="o", alpha=0.75)
plt.title("Outcome rates by year", size=24)
plt.xlabel("Term")
plt.ylabel("Rate (% of outcomes/term)")
plt.legend(("NA", "Affirm", "Reverse"))
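As an aside, the per-docket majority vote used in the simulation above can also be computed without an explicit loop via a grouped transform. The sketch below uses toy data with the notebook's column names; it is an illustration, not part of the original simulation code:

```python
import pandas as pd

# Toy stand-in for output_data: Justice-level predictions per docket
df = pd.DataFrame({
    "docketId": ["a", "a", "a", "b", "b"],
    "prediction": [1, 1, 0, 0, 0],
})

# Broadcast each docket's most common prediction back to all of its rows
df["case_prediction"] = df.groupby("docketId")["prediction"].transform(
    lambda s: int(s.value_counts().idxmax())
)
```

`transform` with a scalar-returning function broadcasts the result to every row of the group, which is exactly the docket-loop behavior.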
simulate_baseline_performance.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (foss4g) # language: python # name: foss4g # --- # # Prepare Data for Workshop # # Workshop: https://github.com/IBMDeveloperUK/foss4g-geopandas # # Data downloaded from https://data.police.uk/data/ for Metropolitan Police Service into `/data/crime_data` folder, unzipped and all files moved to one folder `/data/crime_data/by_year`. # %matplotlib inline import numpy as np import pandas as pd import geopandas as gpd import zipfile # ## Convert crime data to easier readable format # # These files are uploaded to https://github.com/IBMDeveloperUK/foss4g-geopandas/tree/master/data # + years = ['2017','2018'] months = ['01','02','03','04','05','06','07','08','09','10','11','12'] for year in years: for month in months: if month == '01': df = pd.read_csv("/Users/margriet/projects/geopandas-workshop/data/crime_data/"+year+"-"+month+"-metropolitan-stop-and-search.csv") else: df2 = pd.read_csv("/Users/margriet/projects/geopandas-workshop/data/crime_data/"+year+"-"+month+"-metropolitan-stop-and-search.csv") df = df.append(df2) df = df.drop(columns=['Self-defined ethnicity', 'Officer-defined ethnicity','Removal of more than just outer clothing']) df.to_csv("/Users/margriet/projects/geopandas-workshop/data/crime_data/by_year/"+year+"-metropolitan-stop-and-search.csv") zip_file = zipfile.ZipFile("/Users/margriet/projects/geopandas-workshop/data/crime_data/by_year/"+year+"-metropolitan-stop-and-search.zip", 'w') zip_file.write("/Users/margriet/projects/geopandas-workshop/data/crime_data/by_year/"+year+"-metropolitan-stop-and-search.csv", compress_type=zipfile.ZIP_DEFLATED) zip_file.close() # !rm data/crime_data/by_year/*.csv # + years = ['2017','2018'] months = ['01','02','03','04','05','06'] for year in years: for month in months: if month == '01': df = 
pd.read_csv("data/crime_data/"+year+"-"+month+"-metropolitan-street.csv") else: df2 = pd.read_csv("data/crime_data/"+year+"-"+month+"-metropolitan-street.csv") df = df.append(df2) df = df.drop(columns=['Reported by','Falls within','LSOA name']) df.to_csv("data/crime_data/by_year/"+year+"-1-metropolitan-street.csv") zip_file = zipfile.ZipFile("data/crime_data/by_year/"+year+"-1-metropolitan-street.zip", 'w') zip_file.write("data/crime_data/by_year/"+year+"-1-metropolitan-street.csv", compress_type=zipfile.ZIP_DEFLATED) zip_file.close() # !rm data/crime_data/by_year/*.csv # + years = ['2017','2018'] months = ['07','08','09','10','11','12'] for year in years: for month in months: if month == '07': df = pd.read_csv("data/crime_data/"+year+"-"+month+"-metropolitan-street.csv") else: df2 = pd.read_csv("data/crime_data/"+year+"-"+month+"-metropolitan-street.csv") df = df.append(df2) df = df.drop(columns=['Reported by','Falls within','LSOA name']) df.to_csv("data/crime_data/by_year/"+year+"-2-metropolitan-street.csv") zip_file = zipfile.ZipFile("data/crime_data/by_year/"+year+"-2-metropolitan-street.zip", 'w') zip_file.write("data/crime_data/by_year/"+year+"-2-metropolitan-street.csv", compress_type=zipfile.ZIP_DEFLATED) zip_file.close() # !rm data/crime_data/by_year/*.csv # - # ## Clean up Borough shape files # + # https://data.london.gov.uk/dataset/2011-boundary-files # Included are Output Area (OA), Lower Super Output Area (LSOA) and Middle-Level Super Output Area (MSOA) boundaries. # Each geography is provided at Extent of the Realm (BFE), Coastline (BFC) and Generalised Coastline (BGC).
Boundaries = gpd.read_file("data/2011_london_boundaries/LSOA_2011_BFE_London/LSOA_2011_BFE_City_of_London.shp") #Boundaries[] Boundaries.plot(); # - Boundaries = Boundaries.append(gpd.read_file("data/2011_london_boundaries/LSOA_2011_BFE_London/LSOA_2011_BFE_Westminster.shp")) Boundaries.plot(); # + Boundaries = Boundaries.append(gpd.read_file("data/2011_london_boundaries/LSOA_2011_BFE_London/LSOA_2011_BFE_Camden.shp")) Boundaries = Boundaries.append(gpd.read_file("data/2011_london_boundaries/LSOA_2011_BFE_London/LSOA_2011_BFE_Islington.shp")) Boundaries = Boundaries.append(gpd.read_file("data/2011_london_boundaries/LSOA_2011_BFE_London/LSOA_2011_BFE_Hackney.shp")) Boundaries = Boundaries.append(gpd.read_file("data/2011_london_boundaries/LSOA_2011_BFE_London/LSOA_2011_BFE_Tower_Hamlets.shp")) Boundaries = Boundaries.append(gpd.read_file("data/2011_london_boundaries/LSOA_2011_BFE_London/LSOA_2011_BFE_Southwark.shp")) Boundaries = Boundaries.append(gpd.read_file("data/2011_london_boundaries/LSOA_2011_BFE_London/LSOA_2011_BFE_Lambeth.shp")) Boundaries.plot(); # - Boundaries.plot(column='POPDEN',cmap="Reds",scheme='quantiles'); Boundaries.head() Boundaries.to_file("data/boundaries.shp") # ## Convert to lat/lon - work in progress # + #https://pypi.org/project/OSGridConverter/ # #!pip install OSGridConverter #from OSGridConverter import grid2latlong #l=grid2latlong('TG 532151 181867') #(l.latitude,l.longitude) # + #from OSGridConverter import latlong2grid #g=latlong2grid(51.993742,-0.975257, tag = ‘WGS84’) #str(g) # - # ## boroughs in lat/lon # + #https://skgrange.github.io/www/data/london_boroughs.json #boroughs2 = gpd.read_file("data/london_boroughs.json") # load data from a url boroughs2 = gpd.read_file("https://skgrange.github.io/www/data/london_boroughs.json") boroughs2.tail() # - boroughs2.plot(); # + #london = boroughs2.dissolve(by='inner_statistical',aggfunc='sum') #london #london.head() boroughs2['all'] = 1 london = boroughs2.dissolve(by='all',aggfunc='sum') 
london.head() # - # ## Bounding box for extracting London OSM data bounding_box = london.envelope bb = gpd.GeoDataFrame(gpd.GeoSeries(bounding_box), columns=['geometry']) bb.head() bb.plot(); # + #london2 = london.drop([0, 0]) #london2.head() # - london.plot(); xmin, ymin, xmax, ymax = london.total_bounds xmin, ymin, xmax, ymax # ## Open Street Map data # + # http://download.geofabrik.de/europe/great-britain.html roads_all = gpd.read_file("/Users/margriet/projects/geopandas-workshop/data/england-latest-free/gis_osm_roads_free_1.shp") roads_all.head() # - roads = roads_all.cx[xmin:xmax, ymin:ymax] roads.to_file("data/london_inner_roads.shp") pois_all = gpd.read_file("/Users/margriet/projects/geopandas-workshop/data/england-latest-free/gis_osm_pois_free_1.shp") pois_all.head() pois = pois_all.cx[xmin:xmax, ymin:ymax] pois.to_file("../data/london_pois.shp")
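The month-by-month `append` loops earlier in this notebook can also be written with `glob` and `pd.concat` (`DataFrame.append` is deprecated in recent pandas). A minimal sketch; the helper name and path pattern are illustrative, not from the original notebook:

```python
import glob
import pandas as pd

def combine_year(year, pattern="data/crime_data/{}-*-metropolitan-street.csv"):
    """Read all monthly CSVs matching the year and stack them into one frame."""
    files = sorted(glob.glob(pattern.format(year)))
    return pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
```

Passing a generator of frames to `pd.concat` avoids the quadratic copying that repeated `append` calls incur.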
notebooks/prepare-uk-crime-data.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# # FIBONACCI SERIES USING DECORATOR

def smart_fibo(func):
    def wrapper(n):
        if n <= 0:
            print("Incorrect input")
            return None
        # Call the decorated function first, then compute the n-th
        # Fibonacci number (with the convention fib(1) = 0, fib(2) = 1).
        func(n)
        def fib(k):
            if k == 1:
                return 0
            if k == 2:
                return 1
            return fib(k - 1) + fib(k - 2)
        return fib(n)
    return wrapper

@smart_fibo
def fibo(n):
    print("The number is", n)

fibo(2)
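The naive recursion above takes exponential time in `n`. A memoized variant, added here for illustration (it is not part of the original notebook), keeps the same convention that the 1st Fibonacci number is 0 while staying fast for large inputs:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """n-th Fibonacci number with the convention fib(1) == 0, fib(2) == 1."""
    if n <= 0:
        raise ValueError("Incorrect input")
    if n <= 2:
        return n - 1
    return fib(n - 1) + fib(n - 2)
```

With the cache, each value is computed once, so `fib(50)` returns instantly instead of taking billions of recursive calls.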
DAY 8.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Comparing Encoder-Decoders Analysis # ### Model Architecture # + report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drb/encdec_noing10_bow_200_512_04drb.json"] log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drb/encdec_noing10_bow_200_512_04drb_logs.json"] reports = [] logs = [] import json import matplotlib.pyplot as plt import numpy as np for report_file in report_files: with open(report_file) as f: reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read()))) for log_file in log_files: with open(log_file) as f: logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read()))) for report_name, report in reports: print '\n', report_name, '\n' print 'Encoder: \n', report['architecture']['encoder'] print 'Decoder: \n', report['architecture']['decoder'] # - # ### Perplexity on Each Dataset # + # %matplotlib inline from IPython.display import HTML, display def display_table(data): display(HTML( u'<table><tr>{}</tr></table>'.format( u'</tr><tr>'.join( u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data) ) )) def bar_chart(data): n_groups = len(data) train_perps = [d[1] for d in data] valid_perps = [d[2] for d in data] test_perps = [d[3] for d in data] fig, ax = plt.subplots(figsize=(10,8)) index = np.arange(n_groups) bar_width = 0.3 opacity = 0.4 error_config = {'ecolor': '0.3'} train_bars = plt.bar(index, 
train_perps, bar_width, alpha=opacity, color='b', error_kw=error_config, label='Training Perplexity') valid_bars = plt.bar(index + bar_width, valid_perps, bar_width, alpha=opacity, color='r', error_kw=error_config, label='Valid Perplexity') test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width, alpha=opacity, color='g', error_kw=error_config, label='Test Perplexity') plt.xlabel('Model') plt.ylabel('Scores') plt.title('Perplexity by Model and Dataset') plt.xticks(index + bar_width / 3, [d[0] for d in data]) plt.legend() plt.tight_layout() plt.show() data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']] for rname, report in reports: data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']]) display_table(data) bar_chart(data[1:]) # - # ### Loss vs. Epoch # %matplotlib inline plt.figure(figsize=(10, 8)) for rname, l in logs: for k in l.keys(): plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)') plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)') plt.title('Loss v. Epoch') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() plt.show() # ### Perplexity vs. Epoch # %matplotlib inline plt.figure(figsize=(10, 8)) for rname, l in logs: for k in l.keys(): plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)') plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)') plt.title('Perplexity v. 
Epoch') plt.xlabel('Epoch') plt.ylabel('Perplexity') plt.legend() plt.show() # ### Generations # + def print_sample(sample, best_bleu=None): enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>']) gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>']) print('Input: '+ enc_input + '\n') print('Gend: ' + sample['generated'] + '\n') print('True: ' + gold + '\n') if best_bleu is not None: cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>']) print('Closest BLEU Match: ' + cbm + '\n') print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n') print('\n') def display_sample(samples, best_bleu=False): for enc_input in samples: data = [] for rname, sample in samples[enc_input]: gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>']) data.append([rname, '<b>Generated: </b>' + sample['generated']]) if best_bleu: cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>']) data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')']) data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>']) display_table(data) def process_samples(samples): # consolidate samples with identical inputs result = {} for rname, t_samples, t_cbms in samples: for i, sample in enumerate(t_samples): enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>']) if t_cbms is not None: sample.update(t_cbms[i]) if enc_input in result: result[enc_input].append((rname, sample)) else: result[enc_input] = [(rname, sample)] return result # - samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports]) display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1]) samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for 
(rname, r) in reports]) display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1]) samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports]) display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1]) # ### BLEU Analysis def print_bleu(blue_structs): data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']] for rname, blue_struct in blue_structs: data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']]) display_table(data) # Training Set BLEU Scores print_bleu([(rname, report['train_bleu']) for (rname, report) in reports]) # Validation Set BLEU Scores print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports]) # Test Set BLEU Scores print_bleu([(rname, report['test_bleu']) for (rname, report) in reports]) # All Data BLEU Scores print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports]) # ### N-pairs BLEU Analysis # # This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. 
We can expect very low scores for the ground truth, while high scores can expose hyper-common generations. # Training Set n-pairs BLEU Scores print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports]) # Validation Set n-pairs BLEU Scores print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports]) # Test Set n-pairs BLEU Scores print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports]) # Combined n-pairs BLEU Scores print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports]) # Ground Truth n-pairs BLEU Scores print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports]) # ### Alignment Analysis # # This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as n-pairs BLEU, in that we expect low scores for the ground truth, while hyper-common generations raise the scores # + def print_align(reports): data = [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']] for rname, report in reports: data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']]) display_table(data) print_align(reports)
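The n-pairs procedure described above can be sketched as follows. The overlap score here is a simple unigram-Jaccard stand-in for sentence-level BLEU, and the function name is illustrative rather than the notebook's actual implementation:

```python
import random

def n_pairs_score(texts, n_pairs=1000, score_fn=None, seed=0):
    """Average pairwise score over randomly sampled pairs of texts."""
    if score_fn is None:
        # Stand-in for sentence BLEU: unigram Jaccard overlap
        def score_fn(a, b):
            wa, wb = set(a.split()), set(b.split())
            return len(wa & wb) / max(len(wa | wb), 1)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        a, b = rng.sample(texts, 2)  # two distinct items per pair
        total += score_fn(a, b)
    return total / n_pairs
```

A diverse set of generations yields low pairwise scores; a model that emits near-identical outputs for different inputs is exposed by a high score.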
old_comparisons/noing10_LSTM_v_BOW.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # #### **Title**: PolyEdit # # **Description**: A linked streams example demonstrating how to use the PolyEdit stream. # # **Dependencies**: Bokeh # # **Backends**: [Bokeh](./PolyEdit.ipynb) # + import numpy as np import holoviews as hv from holoviews import opts, streams hv.extension('bokeh') # - # The ``PolyEdit`` stream adds a bokeh tool to the source plot, which allows drawing, dragging and deleting vertices on polygons and making the drawn data available to Python. The tool supports the following actions: # # **Show vertices** # # Double tap an existing patch or multi-line # # **Add vertex** # # Double tap an existing vertex to select it, the tool will draw the next point, to add it tap in a new location. # To finish editing and add a point double tap otherwise press the ESC key to cancel. # # **Move vertex** # # Drag an existing vertex and let go of the mouse button to release it. # # **Delete vertex** # # After selecting one or more vertices press BACKSPACE while the mouse cursor is within the plot area. # # As a simple example we will draw a number of boxes and ellipses by displaying them using a ``Polygons`` element and then link that element to two ``PolyEdit`` streams. Enabling the ``shared`` option allows editing multiple ``Polygons``/``Paths`` with the same tool. 
You may also supply a ``vertex_style`` dictionary defining the visual attributes of the vertices once you double tapped a polygon: # + np.random.seed(42) polys = hv.Polygons([hv.Box(*i, spec=np.random.rand()/3) for i in np.random.rand(10, 2)]) ovals = hv.Polygons([hv.Ellipse(*i, spec=np.random.rand()/3) for i in np.random.rand(10, 2)]) poly_edit = streams.PolyEdit(source=polys, vertex_style={'color': 'red'}, shared=True) poly_edit2 = streams.PolyEdit(source=ovals, shared=True) (polys * ovals).opts( opts.Polygons(active_tools=['poly_edit'], fill_alpha=0.4, height=400, width=400)) # - # <center><img src="https://assets.holoviews.org/gifs/examples/streams/bokeh/poly_edit.gif" width=400></center> # Whenever the data source is edited the data is synced with Python, both in the notebook and when deployed on the bokeh server. The data is made available as a dictionary of columns: poly_edit.data # Alternatively we can use the ``element`` property to get an Element containing the returned data: poly_edit.element
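The dictionary-of-columns that `poly_edit.data` returns can be loaded into pandas for inspection. The coordinate values below are made-up stand-ins for actually drawn polygons; only the row-per-polygon, list-per-cell shape is the point:

```python
import pandas as pd

# Hypothetical PolyEdit-style payload: one row per polygon, each cell
# holding that polygon's list of vertex coordinates.
data = {"xs": [[0.0, 0.5, 1.0], [0.2, 0.4]],
        "ys": [[0.0, 1.0, 0.0], [0.3, 0.6]]}
df = pd.DataFrame(data)
n_vertices = df["xs"].map(len)  # vertex count per polygon
```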
examples/reference/streams/bokeh/PolyEdit.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# +
import syft as sy
sy.logger.remove()
import numpy as np
from syft.core.tensor.smpc.party import Party
from syft.core.tensor.smpc.mpc_tensor import MPCTensor

data = sy.Tensor(np.array([1, 2, 3], dtype=np.int32))
# -

party_ca = Party(url="localhost", port=8081)
party_it = Party(url="localhost", port=8082)

ca_client = sy.login(email="<EMAIL>", password="<PASSWORD>", port=8081)
it_client = sy.login(email="<EMAIL>", password="<PASSWORD>", port=8082)

clients = [ca_client, it_client]
parties = [party_ca, party_it]

tensor_1 = data.send(ca_client)
tensor_2 = data.send(it_client)

mpc_tensor_1 = MPCTensor(shape=(3,), clients=clients, parties_info=parties, secret=tensor_1)
mpc_tensor_2 = MPCTensor(shape=(3,), clients=clients, parties_info=parties, secret=tensor_2)

res = mpc_tensor_1 * mpc_tensor_2

# The two private tensors above each belong to a single domain; wrapping
# them as MPCTensors secret-shares each value with all the parties in the
# computation, so the product can be computed without revealing the inputs.
res.reconstruct()
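For intuition, an MPCTensor-style secret can be split into additive shares that only yield the value when recombined. A plain-numpy sketch of additive sharing and reconstruction, not Syft's actual implementation (which operates over a fixed-size ring):

```python
import numpy as np

def share(secret, n_parties, rng=None):
    """Split an integer array into n additive shares that sum to the secret."""
    rng = rng or np.random.default_rng(0)
    shares = [rng.integers(-1000, 1000, size=secret.shape)
              for _ in range(n_parties - 1)]
    # Last share is chosen so that all shares sum back to the secret
    shares.append(secret - sum(shares))
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares)
```

Each party holds one share; summing every party's share reconstructs the secret, which is what `res.reconstruct()` does across the connected domains.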
notebooks/smpc/SMPC Tensor pointer Abstraction-Mul.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Lab 1 - Working with NAPALM
#
# This is an example lab to prove out the concept of building a lab with jupyter notebooks in conjunction with the vMX docker image. We'll do a quick example of using NAPALM to connect to a Junos device and print the facts.

# In order to work with our network device via NAPALM, we need to first import the library. This is done with a simple `import` statement:

import napalm

# Next, we want to call `napalm`'s `get_network_driver` function, and pass in the name of the driver we wish to use. In this case, we want `junos` since we know the device we're about to get facts from is a Junos device:

driver = napalm.get_network_driver("junos")

# Now that we have a driver, we can call the driver like a function to get a handle on a specific device using its FQDN and the username/password we generated earlier:

device = driver(hostname="vqfx1", username="root", password="<PASSWORD>")

# We can initiate the connection to the device with a call to `device.open()`:

device.open()

# Finally, call `device.get_facts()` to retrieve and print the device facts:

device.get_facts()

# That's it!
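`get_facts()` returns a plain dictionary (keys such as `vendor`, `model`, and `os_version`, per NAPALM's getters documentation), so it can be pretty-printed or summarized like any other dict. The dict below is a made-up stand-in for a live device's output:

```python
import json

# Stand-in for device.get_facts(); values are illustrative only.
facts = {"vendor": "Juniper", "model": "VQFX-10000", "os_version": "18.1R1"}

print(json.dumps(facts, indent=2))
summary = f"{facts['vendor']} {facts['model']} running {facts['os_version']}"
```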
lessons/lesson-13/stage1/notebook.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os, time from tqdm import tqdm os.environ["CUDA_DEVICE_ORDER"]= "PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"] = "0" from keras.models import Model from keras.layers import Dense, Input, Lambda, Concatenate from keras.preprocessing.image import img_to_array from keras.callbacks import ModelCheckpoint from keras import backend as K from keras.datasets import mnist import tensorflow as tf import tensorflow_probability as tfp # for tf version 2.0.0, tfp version 0.8 is needed import numpy as np import cv2 import matplotlib.pyplot as plt # %matplotlib inline # - # tf.test.is_gpu_available() tf.config.list_physical_devices('GPU') IMG_ROWS, IMG_COLS, CHANNELS = 64, 52, 3 IMG_ROWS_EVAL, IMG_COLS_EVAL = 64, 52 # *** # + dir_data = "../img_align_celeba" #"data/img_align_celeba/" Ntrain = 162770 Ntest = 19962 nm_imgs = np.sort(os.listdir(dir_data)) print(nm_imgs.shape) ## name of the jpg files for training set nm_imgs_train = nm_imgs[:Ntrain] ## name of the jpg files for the testing data nm_imgs_test = nm_imgs[Ntrain:Ntrain + Ntest] def get_npdata(nm_imgs_train, img_rows = IMG_ROWS, img_cols = IMG_COLS): X_train = np.zeros([nm_imgs_train.shape[0], img_rows, img_cols, CHANNELS], dtype='uint8') for i, myid in enumerate(nm_imgs_train): X_train[i] = np.array(img_to_array(cv2.resize(cv2.imread(dir_data + "/" + myid)[...,::-1], dsize=(img_cols, img_rows), interpolation=cv2.INTER_AREA))) return(X_train) X_train = get_npdata(nm_imgs_train, IMG_ROWS, IMG_COLS) print("X_train.shape = {}".format(X_train.shape)) X_test = get_npdata(nm_imgs_test, IMG_ROWS_EVAL, IMG_COLS_EVAL) print("X_test.shape = {}".format(X_test.shape)) # + # np.save("X_train-full_162770-res_64x52.npy", X_train) # np.save("X_test-full_19962-res_64x52.npy", X_test) # - # *** X_train = 
np.load("X_train-full_162770-res_64x52.npy") X_test = np.load("X_test-full_19962-res_64x52.npy") # --- fig = plt.figure(figsize=(30,10)) nplot = 6 for count in range(1, nplot): ax = fig.add_subplot(1,nplot,count) ax.imshow(X_train[count]) plt.show() # --- def img_coordinates(img_rows, img_cols): # [img_rows, img_cols] row_dim = np.tile(np.expand_dims(np.arange(0., 1., 1. / img_rows, dtype='float32'), axis=0), [img_cols,1]) # [img_cols, img_rows] col_dim = np.tile(np.expand_dims(np.arange(0., 1., 1. / img_cols, dtype='float32'), axis=0), [img_rows,1]) # [img_rows*img_cols, 2] x_pairs = np.reshape(np.concatenate((np.expand_dims(np.transpose(row_dim), axis=-1), np.expand_dims(col_dim, axis=-1)), axis=-1), [img_rows*img_cols,2]) return x_pairs def generate_data(batch_size, max_num_context, img_rows, img_cols, testing=False, index=None, num_context=None): max_num_points = img_rows*img_cols if num_context == None: if max_num_context > max_num_points: num_context = np.random.randint(3, max_num_points, dtype='int32') else: num_context = np.random.randint(3, max_num_context, dtype='int32') if testing: num_target_points = max_num_points set_x = X_test else: num_target_points = np.random.randint(num_context, max_num_points+1, dtype='int32') set_x = X_train context_x = np.zeros([batch_size, num_context, 2]) context_y = np.zeros([batch_size, num_context, CHANNELS]) target_x = np.zeros([batch_size, num_target_points, 2]) target_y = np.zeros([batch_size, num_target_points, CHANNELS]) x_pairs = img_coordinates(img_rows, img_cols) if index!=None: idx_1 = np.array([index]) else: idx_1 = np.arange(set_x.shape[0]) np.random.shuffle(idx_1) for i in range(batch_size): img = np.reshape(set_x[idx_1[i]], [max_num_points, CHANNELS])/255 idx_2 = np.arange(max_num_points) np.random.shuffle(idx_2) context_x[i] = x_pairs[idx_2[:num_context]] context_y[i] = img[idx_2[:num_context]] if testing: target_x[i] = x_pairs target_y[i] = img else: target_x[i] = x_pairs[idx_2[:num_target_points]] 
target_y[i] = img[idx_2[:num_target_points]] context_xy = np.concatenate([context_x, context_y], axis=-1) return [context_xy, target_x], target_y def generate(batch_size, max_num_context, img_rows, img_cols, testing): """a generator for batches, so model.fit_generator can be used. """ while True: inputs, targets = generate_data(batch_size, max_num_context, img_rows, img_cols, testing=testing) yield (inputs, targets) [context_xy, target_x], target_y = generate_data(1, IMG_ROWS_EVAL*IMG_COLS_EVAL, IMG_ROWS_EVAL, IMG_COLS_EVAL, testing=True) plt.imshow(np.array(np.round(target_y[0]*255), dtype='uint8').reshape([IMG_ROWS_EVAL, IMG_COLS_EVAL,3])) plt.show() # --- def log_prob(y_true, y_pred): mu, sigma = tf.split(y_pred, 2, axis=-1) dist = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=sigma) log_p = dist.log_prob(y_true) loss = -K.mean(log_p) return loss # ___ # # + """encoder""" input_context_xy = Input((None, 2+CHANNELS), name="Input_layer_contxt_xy") # [num_pts, 2] input_target_x = Input((None, 2), name="Input_layer_target_x") # [num_pts, 1] encoder = input_context_xy encoder = Dense(128, activation='relu', name="Encoder_layer_0")(encoder) encoder = Dense(128, activation='relu', name="Encoder_layer_1")(encoder) encoder = Dense(128, activation='relu', name="Encoder_layer_2")(encoder) # encoder = Dense(128, activation='relu', name="Encoder_layer_3")(encoder) # encoder = Dense(128, activation='relu', name="Encoder_layer_4")(encoder) representation_r_i = Dense(128, activation='linear', name="Encoder_layer_3")(encoder) """aggregate""" representation_r = Lambda(lambda x: K.mean(x, axis=-2), name="Mean_layer_r")(representation_r_i) """decoder""" representation_r_tiled = Lambda(lambda x: K.tile(K.expand_dims(x, axis=-2), [1, K.shape(input_target_x)[-2], 1]), name="Tile_layer_r")(representation_r) decoder_input = Concatenate(axis=-1, name="Concat_layer_r_target_x")([representation_r_tiled, input_target_x]) decoder = Dense(128, activation='relu', 
name="Decoder_layer_0")(decoder_input) decoder = Dense(128, activation='relu', name="Decoder_layer_1")(decoder) decoder = Dense(128, activation='relu', name="Decoder_layer_2")(decoder) decoder = Dense(128, activation='relu', name="Decoder_layer_3")(decoder) # decoder = Dense(128, activation='relu', name="Decoder_layer_4")(decoder) # decoder = Dense(128, activation='relu', name="Decoder_layer_5")(decoder) decoder = Dense(2*CHANNELS, activation='linear', name="Decoder_layer_4")(decoder) mu, log_sigma = Lambda(lambda x: tf.split(x, 2, axis=-1), name="Split_layer")(decoder) sigma = Lambda(lambda x: 0.1 + 0.9 * K.softplus(x), name="Softplus_layer_sigma")(log_sigma) """build model""" output = Concatenate(axis=-1, name="Concat_layer_mu_sigma")([mu, sigma]) model = Model([input_context_xy, input_target_x], output) model.compile(loss=log_prob, optimizer='adam') model.summary() model.load_weights("CNP_celebA_e4_d5_v3_b500_rs64x52.h5") # - # ___ # # + def process_to_plot(inputs, target_y, pred, img_rows, img_cols): target_y = np.array(np.round(target_y[0]*255), dtype='uint8').reshape([img_rows, img_cols,3]) context_x = np.array(np.round(np.matmul(inputs[0][0, :, :2], [[img_rows, 0], [0, img_cols]])), dtype='uint16') context_y = np.array(np.round(inputs[0][0, :, 2:]*255), dtype='uint8') context_img = np.zeros([img_rows, img_cols, 3], dtype='uint8') context_img[:,:,2] = 255 # set background blue for i in range(context_x.shape[0]): context_img[context_x[i,0], context_x[i,1], :] = context_y[i] pred_y = np.array(np.round(np.clip(pred[0, :, :3], 0.0, 1.0)*255), dtype='uint8').reshape([img_rows, img_cols,3]) # for i in range(context_x.shape[0]): # pred_y[context_x[i,0], context_x[i,1], :] = context_y[i] var = np.array(np.round(np.clip(pred[0, :, 3:], 0.0, 1.0)*255), dtype='uint8').reshape([img_rows, img_cols,3]) return target_y, pred_y, context_img, var, img_rows*img_cols, context_x.shape[0] # + def plot_function_1(target_y, pred_y, context_img, var, max_context, num_context): 
font_s=18 fig, [[ax1,ax2],[ax3,ax4]] = plt.subplots(2, 2, figsize=(6,6)) ax1.imshow(target_y) ax1.set_title("True", fontsize = font_s) ax1.set_xticks([]) ax1.set_yticks([]) ax2.imshow(pred_y) ax2.set_title("Predict", fontsize = font_s) ax2.set_xticks([]) ax2.set_yticks([]) ax3.imshow(context_img) ax3.set_title("Context Points: \n Num pts=%i/%i" % (num_context, max_context), fontsize = font_s) ax3.set_xticks([]) ax3.set_yticks([]) ax4.imshow(var) ax4.set_title("Variance", fontsize = font_s) ax4.set_xticks([]) ax4.set_yticks([]) fig.tight_layout() # plt.savefig('celebA1_1000dpi.png', dpi = 1000, bbox_inches = 'tight') plt.show() # - # --- # + MAX_CONTEXT_POINTS = 25e3 repeats = 10 epochs = 5 steps_per_epoch = 2e2 steps_per_validation = 2e2 batch_size = 500 # - # log: r15, e1, spe2e2, spv2e2, b500 &nbsp;&nbsp;&nbsp; CNP_celebA_e4_d5_v1_b500_rs64x52.h5<br> # log: r25, e5, spe2e2, spv2e2, b500 &nbsp;&nbsp;&nbsp; CNP_celebA_e4_d5_v2_b500_rs64x52.h5<br> # log: r10, e5, spe2e2, spv2e2, b500 &nbsp;&nbsp;&nbsp; CNP_celebA_e4_d5_v3_b500_rs64x52.h5<br> # # Total Train: 38000 iterations of batch size <br> # # + print('Training model') hist = np.zeros([2, repeats*epochs], dtype='float32') for i in range(repeats): print("*****************************") print("Repeat %i :" % (i+1), time.ctime()) history = model.fit(generate(batch_size, MAX_CONTEXT_POINTS, IMG_ROWS, IMG_COLS, testing=False), steps_per_epoch=steps_per_epoch, epochs=epochs, # callbacks=[ModelCheckpoint('CNP_celebA_e4_d5_v2_b500_MIN_LOSS.h5', monitor='val_loss', save_best_only=True)], validation_data=generate(batch_size, MAX_CONTEXT_POINTS, IMG_ROWS_EVAL, IMG_COLS_EVAL, testing=True), validation_steps=steps_per_validation) hist[0,i*epochs:(i+1)*epochs] = history.history['loss'] hist[1,i*epochs:(i+1)*epochs] = history.history['val_loss'] inputs, target_y = generate_data(1, MAX_CONTEXT_POINTS, IMG_ROWS_EVAL, IMG_COLS_EVAL, testing=True) pred = model.predict(inputs, steps=1) plot_function_1(*process_to_plot(inputs, 
target_y, pred, IMG_ROWS_EVAL, IMG_COLS_EVAL)) # model.save_weights("CNP_celebA_e4_d5_v4_b500_rs64x52.h5") # - plt.figure(figsize=(9,6)) plt.plot(hist[1]) plt.plot(hist[0]) plt.title('Model Loss', fontsize=18) plt.ylabel('Loss', fontsize=15) plt.xlabel('Epoch', fontsize=15) plt.legend(['Validation Loss', 'Train Loss'], loc='upper right', fontsize=15) plt.yticks(fontsize=15) plt.xticks(fontsize=15) plt.grid() plt.show() # + inputs, target_y = generate_data(1, None, IMG_ROWS_EVAL, IMG_COLS_EVAL, testing=True, index=10, num_context=340) pred = model.predict(inputs, steps=1) plot_function_1(*process_to_plot(inputs, target_y, pred, IMG_ROWS_EVAL, IMG_COLS_EVAL)) # - # *** # + contexts_pts = np.array([3,30,300,3000]) labels = ["Context", "Mean", "Variance"] font_s = 30 fig, ax = plt.subplots(3, 4, figsize=(2.44*contexts_pts.shape[0], 9)) for i in range(contexts_pts.shape[0]): inputs, target_y = generate_data(1, None, IMG_ROWS_EVAL, IMG_COLS_EVAL, testing=True, index=1, num_context=contexts_pts[i]) pred = model.predict(inputs, steps=1) _, pred_y, context_img, var, _, _ = process_to_plot(inputs, target_y, pred, IMG_ROWS_EVAL, IMG_COLS_EVAL) ax[0, i].imshow(context_img) ax[0, i].set_title("%i" % (contexts_pts[i]), fontsize = font_s) ax[1, i].imshow(pred_y) ax[2, i].imshow(var) [ax[j, i].set_xticks([]) for j in range(len(labels))] [ax[j, i].set_yticks([]) for j in range(len(labels))] [ax[j, 0].set_ylabel(labels[j], fontsize = font_s) for j in range(len(labels))] fig.suptitle("Context Points: Num pts=#/%i (%ipx, %ipx)" % (IMG_ROWS_EVAL*IMG_COLS_EVAL, IMG_ROWS_EVAL, IMG_COLS_EVAL), fontsize = font_s) fig.tight_layout() fig.subplots_adjust(top=0.87) # plt.savefig('celebA1_1000dpi.png', dpi = 1000, bbox_inches = 'tight') plt.show() # - # *** # + inputs, target_y = generate_data(1, 500, IMG_ROWS_EVAL, IMG_COLS_EVAL, testing=True, index=10, num_context=500) pred = model.predict(inputs, steps=1) plot_function_1(*process_to_plot(inputs, target_y, pred, IMG_ROWS_EVAL, IMG_COLS_EVAL)) 
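The model above is compiled with a custom `log_prob` loss, which is defined earlier in the notebook and not shown in this excerpt. As a hedged sketch: a conditional neural process of this kind is typically trained by minimizing the Gaussian negative log-likelihood of the target pixels under the predicted per-pixel `mu` and `sigma` (the two halves of the concatenated output). A minimal NumPy version of that quantity — illustrative only; the notebook's actual `log_prob` may differ in reduction or sign conventions:

```python
import numpy as np

def gaussian_nll(y_true, mu, sigma):
    """Mean negative log-likelihood of y_true under N(mu, sigma^2)."""
    return np.mean(0.5 * np.log(2.0 * np.pi * sigma**2)
                   + 0.5 * ((y_true - mu) / sigma) ** 2)

# Toy check: predictions centered on the targets score better than shifted ones
y = np.array([0.2, 0.5, 0.8])
sigma = np.full(3, 0.5)
nll_centered = gaussian_nll(y, y, sigma)
nll_shifted = gaussian_nll(y, y + 0.3, sigma)
```

Note that the `0.1 + 0.9 * softplus` transform in the model keeps `sigma` bounded away from zero, which prevents a loss of this form from diverging.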
# + def findLikeImages(inputs, img_rows, img_cols): context_x = np.array(np.round(np.matmul(inputs[0][0, :, :2], [[img_rows, 0], [0, img_cols]])), dtype='uint16') context_y = np.array(np.round(inputs[0][0, :, 2:]*255), dtype='uint8') # setx = X_test setx = X_train mse = np.zeros(setx.shape[0]) for i in tqdm(range(mse.shape[0])): for j in range(context_x.shape[0]): mse[i] += np.mean(np.power(setx[i, context_x[j, 0], context_x[j, 1]] - context_y[j], 2)) idx = np.argsort(mse) return idx # - idx = findLikeImages(inputs, IMG_ROWS_EVAL, IMG_COLS_EVAL) # + fig = plt.figure(figsize=(30,15)) nplot = 5 for count in range(1, nplot): ax = fig.add_subplot(2,nplot,count) ax.imshow(X_test[idx[count-1]]) ax = fig.add_subplot(2,nplot,nplot+1) ax.imshow(np.mean(X_test[idx[:50]], axis=0)/255) pred_y = np.array(np.round(np.clip(pred[0, :, :3], 0.0, 1.0)*255), dtype='uint8').reshape([IMG_ROWS_EVAL, IMG_COLS_EVAL,3]) ax = fig.add_subplot(2,nplot,nplot+2) ax.imshow(pred_y) plt.show() # -
conditional_neural_process_with_tensorflow_ver_2.0.0_Ex3_CelebA_IMG-RESIZE_64x52.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline import numpy as np import matplotlib.pyplot as plt from ipywidgets import interact, fixed from matplotlib import cm plt.rcParams['font.size'] = 14 plt.rcParams['axes.spines.right'] = False plt.rcParams['ytick.right'] = False plt.rcParams['axes.spines.top'] = False plt.rcParams['xtick.top'] = False # Get n grayscale colors def getGrayColors(n): colors = []; for i in range(1, n+1): colors.append(i/(n+1.)*np.ones(3)) return colors # - # ### Example case # Suppose we have observed spike responses $y_i$ to some one-dimensional stimulus $x_i$, $i \in 1,\dots, N$. Furthermore, suppose that the cell exhibited some thresholding mechanism: hardly any spikes were observed for stimulus values below a certain threshold, whereas the response grew roughly linearly above the threshold. We would like to describe the cell's stimulus response mapping using a simple model, and one alternative is a soft-rectifying function: # \begin{equation} # \hat{y_i} = f(z_i) = # \begin{cases} # \exp(z_i),& \text{if } z_i \leq 0\\ # z_i + 1,& \text{if } z_i > 0\\ # \end{cases} # \end{equation} # where $\hat{y_i}$ is the predicted output, $z_i=w_0+w_1x_i$, $w_0$ the threshold, and $w_1$ the slope. However, it is often simpler to work with vector notations, in such cases, we define $z_i$ as a dot product between a parameter vector and a stimulus vector $\mathbf{w}^T\mathbf{x}_i$, where $\mathbf{w}=[w_0,\; w_1]$ and $\mathbf{x}_i=[1; x_i]$. # # Below, we generate simulated data from the proposed model, and we illustrate what the proposed model looks like for various choices of $w_0$ and $w_1$. 
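One useful property of the piecewise definition above: the two branches agree at $z_i = 0$ (since $\exp(0) = 1 = 0 + 1$), so $f$ is continuous, and it is strictly positive everywhere — which is what lets it serve as a Poisson rate later on. A quick standalone sketch (illustrative names, not the notebook's own code):

```python
import numpy as np

def soft_rectify(z):
    """f(z) = exp(z) for z <= 0, z + 1 for z > 0."""
    z = np.asarray(z, dtype=float)
    # np.minimum guards the unused exp branch against overflow for large z
    return np.where(z <= 0, np.exp(np.minimum(z, 0.0)), z + 1.0)

# Both branches meet at z = 0 and the output is strictly positive
vals = soft_rectify(np.array([-2.0, 0.0, 2.0]))
```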
# +
# The soft-rectifying function (f), also known as the inverse of the link function for generalized linear models
def invLinkFun(X, w):
    z = np.dot(X, w)
    mu = np.zeros(z.shape)
    mu[z<0] = np.exp(z[z<0])
    mu[z>=0] = z[z>=0] + 1
    return mu

# Parameters
n = 200  # Number of data points
wTrue = [-1, 0.5]  # w0 and w1 for the simulated data

# Generate example data
x = np.random.rand(n)*30 - 15
x = np.sort(x)
X = np.vstack([-np.ones(n), x]).T
mu = invLinkFun(X, wTrue)
y = np.random.poisson(mu)

# Callback plotting function for interactive plots
def plot_fun(x, y, w0, w1):
    fig = plt.figure(figsize=(7.5, 5))
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(x, y, 'k.', label='data')
    wTmp = np.array([w0, w1])
    muTmp = invLinkFun(X, wTmp)
    ax.plot(x, muTmp, '-', color=0.5*np.ones(3))
    ax.set_ylabel('Spike count')
    ax.set_xlabel('x')
    ax.set_xticks([-10, 0, 10])
    ax.set_ylim([-1, 11])

interact(plot_fun, x=fixed(x), y=fixed(y), w0=(-5., 5.), w1=(-5., 5.));
# -

# ### Objective functions
# A manual search for good parameter values can be quite tricky and cumbersome, especially if there are more than just two model parameters. An automatic search would be a lot simpler, but for that we need some measure of how "good" a specific selection of model parameters is. One natural way of quantifying the "goodness" of parameters is through an objective function which scores each parameter combination. It might seem tricky to come up with such a function, but one common approach is to define it as a likelihood function. The idea is that the likelihood function tells you how likely you are to observe the current data given some model parameters. For example, if we assume that the observed spike counts come from a Poisson distribution whose mean is given by our soft-rectifying function, then the probability $p$ of observing a specific data point ($x_i, y_i$) is given by:
# \begin{equation}
# p(y_i|x_i, w_0, w_1) = \frac{f(z_i)^{y_i} \exp(-f(z_i))}{y_i!}.
# \end{equation}
# The likelihood function $l$ is then defined as the probability of observing all data points, where independence is essentially always assumed to simplify the expression into a product:
# \begin{equation}
# l(w_0, w_1) = \prod_i^N \frac{f(z_i)^{y_i} \exp(-f(z_i))}{y_i!}.
# \end{equation}
# The likelihood function is, however, often cumbersome to work with, and people tend to use the log of the likelihood function instead. Logarithms turn products into sums, and the log-likelihood function $ll$ thus has the form:
# \begin{equation}
# ll(w_0, w_1) = \sum_i^N y_i\log(f(z_i)) - f(z_i) - \log(y_i!).
# \end{equation}
# Finally, we note that the last term can be ignored (it does not depend on our model parameters), whereupon we are left with:
# \begin{equation}
# ll(w_0, w_1) = \sum_i^N y_i\log(f(z_i)) - f(z_i),
# \end{equation}
# which also corresponds to the objective function for **Poisson regression**.
#
# Equipped with a function that quantifies the "goodness" of parameter choices, we can now select parameter values that maximize the log-likelihood function.
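Dropping the $\log(y_i!)$ term shifts the log-likelihood by a constant, so it cannot move the maximum. A small sketch illustrating this for a single observation (toy values, not the notebook's data):

```python
import math
import numpy as np

y_i = 4                                  # one observed spike count
rates = np.linspace(0.5, 10.0, 200)      # candidate values of f(z_i)

# Simplified objective from the text vs. the exact Poisson log-pmf
ll_simplified = y_i * np.log(rates) - rates
ll_exact = ll_simplified - math.lgamma(y_i + 1)   # subtract log(y_i!)

best_simplified = rates[np.argmax(ll_simplified)]
best_exact = rates[np.argmax(ll_exact)]
```

Both curves peak at the same rate (analytically at $f(z_i) = y_i$), so optimizing the simplified objective yields the same parameters as the full log-likelihood.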
# Poisson regression, log-likelihood
logLikFun = lambda X, w, y: np.sum(y*np.log(invLinkFun(X, w)) - invLinkFun(X, w))

# ##### Example 1, find w0 given w1

# +
# Example parameters
w0Ex = np.array([wTrue[0]-4, wTrue[0], wTrue[0]+4])
w0Vals = np.linspace(wTrue[0]-5, wTrue[0]+5, 101)

# Log-likelihood function over the whole interval
ll = np.empty_like(w0Vals)
for i, w0 in enumerate(w0Vals):
    wTmp = np.array([w0, wTrue[1]])
    ll[i] = logLikFun(X, wTmp, y)

# Log-likelihood values for our example parameters
llEx = np.empty_like(w0Ex)
for i, w0 in enumerate(w0Ex):
    wTmp = np.array([w0, wTrue[1]])
    llEx[i] = logLikFun(X, wTmp, y)

# Plotting
grayColors = getGrayColors(3)
fig = plt.figure(figsize=(15, 5))

# Log-likelihood
ax = fig.add_subplot(1, 2, 1)
ax.plot(w0Vals, ll, 'k-')
for i in range(w0Ex.size):
    ax.plot(w0Ex[i], llEx[i], 'o', ms=10, color=grayColors[i])
ax.set_xlabel('$w_0$')
ax.set_ylabel('Log-likelihood')

# Data and example response function mappings
ax = fig.add_subplot(1, 2, 2)
ax.plot(x, y, 'k.', label='data')
for i in range(w0Ex.size):
    wTmp = np.array([w0Ex[i], wTrue[1]])
    pTmp = invLinkFun(X, wTmp)
    ax.plot(x, pTmp, '-', color=grayColors[i])
ax.set_ylabel('Spike count')
ax.set_xlabel('x')
ax.set_xticks([-10, 0, 10]);
# -

# ##### Example 2, find both w0 and w1

# +
# Example parameters
wEx = np.array([[wTrue[0]-0.9, wTrue[0], wTrue[0]+0.9],
                [wTrue[1]+0.3, wTrue[1], wTrue[1]-0.3]])

# Get w0 and w1 combinations over a grid
nGrid = 51
W0, W1 = np.meshgrid(np.linspace(wTrue[0]-1, wTrue[0]+1, nGrid),
                     np.linspace(wTrue[1]-0.5, wTrue[1]+0.5, nGrid))

# Get the log-likelihood for each parameter combination
llVals = np.zeros([nGrid, nGrid])
for i in range(nGrid):
    for j in range(nGrid):
        wTmp = np.array([W0[i, j], W1[i, j]])
        llVals[i, j] = logLikFun(X, wTmp, y)

# Plotting
grayColors = getGrayColors(3)
fig = plt.figure(figsize=(15, 5))

# Log-likelihood and example parameters
ax = plt.subplot(1, 2, 1)
contourHandle = ax.contourf(W0, W1, llVals, 50, cmap=cm.coolwarm)
for i in range(wEx.shape[1]):
    ax.plot(wEx[0, i], wEx[1, i], 'o', ms=10, color=grayColors[i])
ax.set_xlabel('$w_0$')
ax.set_ylabel('$w_1$');
cBarHandle = plt.colorbar(contourHandle)
cBarHandle.set_label('Log-likelihood')

# Data and example response function mappings
ax = fig.add_subplot(1, 2, 2)
ax.plot(x, y, 'k.', label='data')
for i in range(wEx.shape[1]):
    wTmp = wEx[:, i]
    pTmp = invLinkFun(X, wTmp)
    ax.plot(x, pTmp, '-', color=grayColors[i])
ax.set_ylabel('Spike count')
ax.set_xlabel('x')
ax.set_xticks([-10, 0, 10]);
# -

# ### Optimization
# At first sight, it might look like the objective function only slightly simplifies our problem of finding good parameter values: finding the maxima is easy when you can see the objective function, but visualizing the function for more than three parameters is extremely difficult. Luckily, however, there is a whole optimization field devoted to finding the maxima or minima of functions using methods that don't require you to visualize the objective function. Before continuing, though, we will modify our objective function once more by multiplying it by -1. This just turns the maximization problem into a minimization problem, which is the usual convention. Finding the best parameter values thus corresponds to minimizing the negative log-likelihood $nll$:
#
# \begin{equation}
# nll(w_0, w_1) = \sum_i^N f(z_i) - y_i\log(f(z_i)).
# \end{equation}

# Poisson regression, negative log-likelihood
negLogLikFun = lambda X, w, y: np.sum(invLinkFun(X, w) - y*np.log(invLinkFun(X, w)))

# ### Gradient
# Several commonly used optimization methods utilize gradients. The gradient of a function is by definition a vector consisting of the partial derivatives with respect to each parameter.
# For our example from above, the gradient thus becomes:
# \begin{equation}
# \nabla nll = \frac{\partial nll}{\partial \mathbf{w}} = \left[ \frac{\partial nll}{\partial w_0}, \frac{\partial nll}{\partial w_1} \right],
# \end{equation}
# where $\nabla$ denotes the linear nabla operator. The fact that $\nabla$ is linear means that we can obtain the gradient of the negative log-likelihood function by just summing up the gradients of each term in the sum:
# \begin{align}
# \nabla nll &= \sum_i^N \frac{\partial \left( f(z_i) - y_i\log(f(z_i)) \right)}{\partial \mathbf{w}}, \\
# \frac{\partial \left( f(z_i) - y_i\log(f(z_i)) \right)}{\partial \mathbf{w}} &=
# \begin{cases}
# (\exp(z_i) - y_i)\mathbf{x}_i,& \text{if } z_i \leq 0\\
# \left( 1-\frac{y_i}{z_i+1} \right) \mathbf{x}_i,& \text{if } z_i > 0\\
# \end{cases}.
# \end{align}
# We can thus compute the gradient for any parameter values of our choice, but the real benefit of this possibility only becomes apparent when considering what the gradient really denotes. The partial derivative of any parameter tells us how quickly the function increases when that parameter is changed slightly, and consequently the gradient, which is the vector containing all partial derivatives, indicates the direction in parameter space in which the function value increases the most.
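An easy way to catch mistakes in a hand-derived gradient like the one above is to compare it against central finite differences. The sketch below re-implements the pieces under illustrative names on its own toy data (it does not reuse the notebook's variables):

```python
import numpy as np

def inv_link(X, w):
    """Soft-rectifier applied to z = Xw (illustrative re-implementation)."""
    z = X @ w
    # np.minimum guards the unused exp branch against overflow
    return np.where(z < 0, np.exp(np.minimum(z, 0.0)), z + 1.0)

def nll(X, w, y):
    mu = inv_link(X, w)
    return np.sum(mu - y * np.log(mu))

def nll_grad(X, w, y):
    """Analytic gradient from the piecewise derivation above."""
    z = X @ w
    neg, pos = z < 0, z >= 0
    g = (np.exp(z[neg]) - y[neg]) @ X[neg]
    g = g + (1.0 - y[pos] / (z[pos] + 1.0)) @ X[pos]
    return g

# Toy data, independent of the notebook's variables
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.uniform(-15, 15, 50)])
y = rng.poisson(2.0, 50).astype(float)
w = np.array([-1.0, 0.5])

# Central finite differences along each coordinate
eps = 1e-6
grad_fd = np.array([(nll(X, w + eps * e, y) - nll(X, w - eps * e, y)) / (2 * eps)
                    for e in np.eye(2)])
grad_analytic = nll_grad(X, w, y)
```

If the two disagree beyond finite-difference noise, the derivation (or its implementation) has a bug.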
# Poisson regression, gradient of the negative log-likelihood def negLogLikDerFun(X, w, y): z = np.dot(X, w) der = np.dot(np.exp(z[z<0])-y[z<0], X[z<0, :]) der += np.dot(1-y[z>=0]/(z[z>=0]+1), X[z>=0, :]) return der # ##### Example 3, the gradient of the negative log-likelihood function # + # Use a lower resolution for gradient evaluations nGrid = 21 W0q, W1q = np.meshgrid(np.linspace(wTrue[0]-1, wTrue[0]+1, nGrid), np.linspace(wTrue[1]-0.5, wTrue[1]+0.5, nGrid)) # Get the gradient for each combination nllVals = np.zeros([nGrid, nGrid]) W0der = np.zeros([nGrid, nGrid]) W1der = np.zeros([nGrid, nGrid]) for i in range(nGrid): for j in range(nGrid): wTmp = np.array([W0q[i, j], W1q[i, j]]) derTmp = negLogLikDerFun(X, wTmp, y) W0der[i, j] = derTmp[0] W1der[i, j] = derTmp[1] # Normalize to unit length W0derNorm = W0der / np.sqrt(W0der**2. + W1der**2.) W1derNorm = W1der / np.sqrt(W0der**2. + W1der**2.) # Plotting grayColors = getGrayColors(3) fig = plt.figure(figsize=(25, 7)) ax = plt.subplot(1, 1, 1) ax.set_aspect('equal') # Negative log-likelihood contourHandle = ax.contourf(W0, W1, -llVals, 50, cmap=cm.coolwarm) # Gradient ax.quiver(W0q, W1q, W0derNorm, W1derNorm, angles='xy', scale_units='xy', scale=30, width=2e-3) # Axes settings ax.set_xlabel('$w_0$') ax.set_ylabel('$w_1$'); cBarHandle = plt.colorbar(contourHandle) cBarHandle.set_label('negative log-likelihood') # - # ### Gradient descent # A stupidly simple idea for finding the minima of the negative log-likelihood function would then be to start from some parameters $\mathbf{w}_\mathrm{init}$ and then just iteratively move in the opposite direction to the gradient. That is, we evaluate the gradient for our current parameter guess and then we update our guess by subtracting the gradient times a constant $\eta$. More specifically, at each step $k$ of our iterative algorithm, we update our parameters as: # \begin{equation} # \mathbf{w}_{k+1} = \mathbf{w}_k - \eta \nabla nll(\mathbf{w}_k). 
# \end{equation}
# For sufficiently small $\eta$, this will guarantee that we backtrack along the gradients all the way to the minima of the negative log-likelihood function, as illustrated below.

# +
# Gradient descent parameters
eta = 2e-4
nIterations = 25
wInit = np.array([-1.8, 0.85])
wUpdates = np.zeros([nIterations, 2])
wUpdates[0, :] = wInit
nllVals = np.zeros(nIterations)
nllVals[0] = negLogLikFun(X, wUpdates[0, :], y)

# Gradient descent loop
for i in range(1, nIterations):
    wUpdates[i, :] = wUpdates[i-1, :] - eta*negLogLikDerFun(X, wUpdates[i-1, :], y)
    nllVals[i] = negLogLikFun(X, wUpdates[i, :], y)

# Plotting
grayColors = getGrayColors(3)
fig = plt.figure(figsize=(25, 7))
ax = plt.subplot(1, 1, 1)
ax.set_aspect('equal')

# Negative log-likelihood
contourHandle = ax.contourf(W0, W1, -llVals, 50, cmap=cm.coolwarm)

# Gradient
ax.quiver(W0q, W1q, W0derNorm, W1derNorm, angles='xy', scale_units='xy', scale=30, width=2e-3)

# Gradient descent progress
ax.plot(wUpdates[:, 0], wUpdates[:, 1], '-o', ms=6, color='white', lw=3)

# Axes settings
ax.set_xlabel('$w_0$')
ax.set_ylabel('$w_1$');
cBarHandle = plt.colorbar(contourHandle)
cBarHandle.set_label('negative log-likelihood')
# -

# ### Backtracking line search
# Finding a suitable $\eta$ value for the gradient descent approach might not be trivial: too small a value of $\eta$ will hamper progress, whereas too large a value will cause the algorithm to diverge. It would therefore be very convenient to have an adaptive method that automatically adjusts $\eta$ to take large, but not too large, steps on each iteration of gradient descent. One common method for convex problems is backtracking line search. The idea is that we start off with a large $\eta$ value and then successively decrease it by a factor $\beta$. This continues until the observed decrease of the objective function is within a factor $\alpha$ of what would be expected for a linear approximation of the objective function.
# This is illustrated below for a one-dimensional case (find $w_0$ given $w_1$) by showing the real negative log-likelihood and the linear approximation around our current guess of $w_0$.

# +
w0Init = w0Vals.min()
nllInit = negLogLikFun(X, np.array([w0Init, wTrue[1]]), y)
w0InitDer = negLogLikDerFun(X, np.array([w0Init, wTrue[1]]), y)[0]
offset = nllInit - w0Init*w0InitDer

# Calculate real and expected decreases
realNllDecrease = -ll - nllInit
expectedNllDecrease = offset + w0InitDer*w0Vals - nllInit

# Plotting
fig = plt.figure(figsize=(15, 5))

# Log-likelihood
ax = fig.add_subplot(1, 2, 1)
ax.plot(w0Vals, -ll, 'k-', label='Real nll')
ax.plot(w0Vals, offset + w0InitDer*w0Vals, 'k:', label='linear approx.')
ax.plot(w0Init, nllInit, 'o', ms=10, color='k')
ax.set_xlabel('$w_0$')
ax.set_ylabel('Negative log-likelihood')
ax.legend()

# Log-likelihood decrease
ax = fig.add_subplot(1, 2, 2)
ax.plot(w0Vals[1:], realNllDecrease[1:] / expectedNllDecrease[1:], 'k-')
ax.set_xlabel('$w_0$')
ax.set_ylabel('nll / linear approx.');
# -

# The linear approximation will always lie below the real negative log-likelihood for convex objective functions, but it provides a good approximation close to $w_0$. The idea of backtracking line search is thus to take the largest step possible while still ensuring that the linear approximation is good enough. Goodness is measured by how close the real objective function is to the linear approximation, and the acceptance criterion is set by $\alpha$. $\alpha=0.5$ thus means that we want the largest step size for which the decrease observed for the real objective function is at least half of what would be predicted by the linear approximation. However, we don't want to spend time solving this line search problem exactly; rather, we start off with a high value of $\eta$ and decrease it successively by a factor $\beta$ until the goodness criterion is met.
# That is, we keep decreasing $\eta$ to $\eta\beta$ as long as
# \begin{equation}
# nll(\mathbf{w}_{k}-\eta \nabla nll(\mathbf{w}_k)) > nll(\mathbf{w}_k) - \alpha \eta \nabla nll(\mathbf{w}_k)^T\nabla nll(\mathbf{w}_k),
# \end{equation}
# where $\eta \nabla nll(\mathbf{w}_k)^T\nabla nll(\mathbf{w}_k)$ is the predicted decrease for our linear approximation. Standard values for $\alpha$ and $\beta$ are 0.5 and 0.8, respectively. The illustration below uses these standard values to show how gradient descent progresses for our example problem of finding both $w_0$ and $w_1$ using a backtracking line search implementation.

# +
# Gradient descent parameters
eta = 0.1
alpha = 0.5
beta = 0.8
wUpdates = np.zeros([nIterations, 2])
wUpdates[0, :] = wInit
nllVals = np.zeros(nIterations)
nllVals[0] = negLogLikFun(X, wUpdates[0, :], y)

# Gradient descent with backtracking
for i in range(1, nIterations):
    wUpdates[i, :] = wUpdates[i-1, :]
    delta = negLogLikDerFun(X, wUpdates[i, :], y)
    tooLarge = True
    # Backtracking loop
    while tooLarge:
        wUpdates[i, :] -= eta*delta
        nllTmp = negLogLikFun(X, wUpdates[i, :], y)
        if nllTmp > nllVals[i-1] - alpha*eta*np.dot(delta, delta):
            wUpdates[i, :] += eta*delta
            eta *= beta
        else:
            tooLarge = False
    nllVals[i] = nllTmp

# Plotting
grayColors = getGrayColors(3)
fig = plt.figure(figsize=(25, 7))
ax = plt.subplot(1, 1, 1)
ax.set_aspect('equal')

# Negative log-likelihood
contourHandle = ax.contourf(W0, W1, -llVals, 50, cmap=cm.coolwarm)

# Gradient
ax.quiver(W0q, W1q, W0derNorm, W1derNorm, angles='xy', scale_units='xy', scale=30, width=2e-3)

# Gradient descent progress
ax.plot(wUpdates[:, 0], wUpdates[:, 1], '-o', ms=6, color='white', lw=3)

# Axes settings
ax.set_xlabel('$w_0$')
ax.set_ylabel('$w_1$');
cBarHandle = plt.colorbar(contourHandle)
cBarHandle.set_label('negative log-likelihood')
# -

# ### Acceleration
# Gradient descent can be made to converge faster using a method called Nesterov accelerated gradient, named after Yurii Nesterov, who introduced it.
# The basic idea behind Nesterov's accelerated gradient method is to first make a move in the direction of the previous parameter change, and then evaluate the gradient at this intermediate position $v$ in order to let the standard gradient descent rule make a correction step. That is, at each iteration, we first determine intermediate parameter values $v$, and then take a step in the direction of the negative gradient evaluated at $v$:
# \begin{align}
# \mathbf{v}_{k+1} &= \mathbf{w}_k + \frac{k-2}{k+1}(\mathbf{w}_k - \mathbf{w}_{k-1}), \\
# \mathbf{w}_{k+1} &= \mathbf{v}_{k+1} - \eta \nabla nll(\mathbf{v}_{k+1}).
# \end{align}
# We thus carry with us some momentum from previous weight updates, and the coefficient $\frac{k-2}{k+1}$ tells us to use more momentum later on (it converges to 1 for large $k$). However, the momentum term gives rise to oscillations in the objective function value, and Nesterov's accelerated gradient is therefore not a strict descent method. Acceleration can nonetheless be combined with backtracking line search, as for the standard gradient descent method, but now from the intermediate parameter values $v$.
# + # Gradient descent parameters eta = 0.1 wUpdates = np.zeros([nIterations, 2]) wUpdates[0:2, :] = [-1.8, 0.85] nllVals = np.zeros(nIterations) nllVals[0:2] = negLogLikFun(X, wUpdates[0, :], y) # Nesterov accelerated gradient with backtracking for i in range(2, nIterations): v = wUpdates[i-1, :] + (i-2.)/(i+1)*(wUpdates[i-1, :]-wUpdates[i-2, :]) delta = negLogLikDerFun(X, v, y) nllTmp = negLogLikFun(X, v , y) tooLarge = True # Backtracking loop while tooLarge: wUpdates[i, :] = v - eta*delta nllNew = negLogLikFun(X, wUpdates[i, :] , y) if nllNew > nllTmp - eta*np.dot(delta, delta)/2.: wUpdates[i, :] = wUpdates[i-1, :] eta *= beta else: tooLarge = False nllVals[i] = nllNew # Plotting grayColors = getGrayColors(3) fig = plt.figure(figsize=(25, 7)) ax = plt.subplot(1, 1, 1) ax.set_aspect('equal') # Negative log-likelihood contourHandle = ax.contourf(W0, W1, -llVals, 50, cmap=cm.coolwarm) # Gradient ax.quiver(W0q, W1q, W0derNorm, W1derNorm, angles='xy', scale_units='xy', scale=30, width=2e-3) # Gradient descent progress ax.plot(wUpdates[:, 0], wUpdates[:, 1], '-o', ms=6, color='white', lw=3) # Axes settings ax.set_xlabel('$w_0$') ax.set_ylabel('$w_1$'); cBarHandle = plt.colorbar(contourHandle) cBarHandle.set_label('negative log-likelihood')
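The same machinery applies to any smooth convex objective, not just this Poisson log-likelihood. Below is a compact, self-contained sketch of gradient descent with backtracking line search (the acceptance test described above, often called the Armijo condition) on a simple quadratic where the exact minimizer is known; the function names and the quadratic itself are illustrative:

```python
import numpy as np

def backtracking_gd(f, grad, w0, eta=1.0, alpha=0.5, beta=0.8, iters=100):
    """Gradient descent; eta shrinks by beta until the acceptance test passes."""
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        g = grad(w)
        # Shrink eta while the observed decrease is less than alpha * eta * ||g||^2
        while f(w - eta * g) > f(w) - alpha * eta * (g @ g):
            eta *= beta
        w = w - eta * g
    return w

# A simple convex quadratic with a known minimizer
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])
f = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b

w_star = np.linalg.solve(A, b)           # exact minimizer (where grad = 0)
w_hat = backtracking_gd(f, grad, np.zeros(2))
```

Because the quadratic is convex and the step size adapts automatically, the iterates converge to `w_star` without any manual tuning of `eta`.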
ConvexOptimization/Part 1, basic ideas and gradient descent.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ALIGN Tutorial Notebook: CHILDES # This notebook provides an introduction to **ALIGN**, # a tool for quantifying multi-level linguistic similarity # between speakers, using parent-child transcript data from # the Kuczaj Corpus # (https://childes.talkbank.org/access/Eng-NA/Kuczaj.html). # This method was introduced in "ALIGN: Analyzing Linguistic Interactions with Generalizable techNiques" (Duran, Paxton, & Fusaroli, 2019). # ## Tutorial Overview # While many studies of interpersonal linguistic alignment # compare alignment between different dyads across different # conditions (i.e., typically a within- or between-dyads # design in which each dyad contributes only one or two # conversations), there may also be interest in understanding # longer-scale temporal dynamics *within* a given dyad. # This tutorial provides an example of how ALIGN may be used # to just that end: analyzing how a single dyad's multilevel # alignment changes across different conversations held # at different points over a longer range of time. # To do so, this tutorial walks users through an analysis of # conversations from a single English corpus from the CHILDES # database (MacWhinney, 2000)---specifically, Kuczaj’s Abe # corpus (Kuczaj, 1976), used under a Creative Commons # Attribution-ShareAlike 3.0 Unported License (see GitHub # repository or `data/CHILDES` directory for license). # We analyze the last 20 conversations in the corpus in order # to explore how ALIGN can be used to track multi-level # linguistic alignment between a parent and child over time, # which may be of interest to developmental language # researchers. Specifically, we explore how alignment between a parent # and a child changes over a brief span of developmental # trajectory. 
# Data for this tutorial are shipped with the `align` # package on PyPI (https://pypi.python.org/pypi/align) and GitHub # (https://github.com/nickduran/align-linguistic-alignment/). # *** # ## Table of Contents # * [Getting Started](#Getting-Started) # * [Prerequisites](#Prerequisites) # * [Preparing input data](#Preparing-input-data) # * [Filename conventions](#Filename-conventions) # * [Highest-level functions](#Highest-level-functions) # * [Setup](#Setup) # * [Import libraries](#Import-libraries) # * [Specify ALIGN path settings](#Specify-ALIGN-path-settings) # * [Phase 1: Prepare transcripts](#Phase-1:-Prepare-transcripts) # * [Preparation settings](#Preparation-settings) # * [Run preparation phase](#Run-preparation-phase) # * [Phase 2: Calculate alignment](#Phase-2:-Calculate-alignment) # * [For real data: Alignment calculation settings](#For-real-data:-Alignment-calculation-settings) # * [For real data: Run alignment calculation](#For-real-data:-Run-alignment-calculation) # * [For surrogate data: Alignment calculation settings](#For-surrogate-data:-Alignment-calculation-settings) # * [For surrogate data: Run alignment calculation](#For-surrogate-data:-Run-alignment-calculation) # * [ALIGN output overview](#ALIGN-output-overview) # * [Speed calculations](#Speed-calculations) # * [Printouts!](#Printouts!) # *** # # Getting Started # ## Preparing input data # * Each input text file needs to contain a single conversation organized in an `N x 2` matrix # * Text file must be tab-delimited. # * Each row must correspond to a single conversational turn from a speaker. # * Rows must be temporally ordered based on their occurrence in the conversation. # * Rows must alternate between speakers. # * Speaker identifier and content for each turn are divided across two columns. # * Column 1 must have the header `participant`. # * Each cell specifies the speaker. # * Each speaker must have a unique label (e.g., `P1` and `P2`, `0` and `1`). 
# * Column 2 must have the header `content`. # * Each cell corresponds to the transcribed utterance from the speaker. # * Each cell must end with a newline character: `\n` # * See `examples` directory in Github repository for an example # ## Filename conventions # * Each conversation text file must be regularly formatted, including a prefix for dyad and a prefix for conversation prior to the identifier for each that are separated by a unique character. By default, ALIGN looks for patterns that follow this convention: `dyad1-condA.txt` # * However, users may choose to include any label for dyad or condition so long as the two labels are distinct from one another and are not subsets of any possible dyad or condition labels. Users may also use any character as a separator so long as it does not occur anywhere else in the filename. # * The chosen file format **must** be used when saving **all** files for this analysis. # ## Highest-level functions # Given appropriately prepared transcript files, ALIGN can be run in 3 high-level functions: # **`prepare_transcripts`**: Pre-process each standardized # conversation, checking it conforms to the requirements. # Each utterance is tokenized and lemmatized and has # POS tags added. # **`calculate_alignment`**: Generates turn-level and # conversation-level alignment scores (lexical, # conceptual, and syntactic) across a range of # *n*-gram sequences. # **`calculate_baseline_alignment`**: Generate a surrogate corpus # and run alignment analysis (using identical specifications # from `calculate_alignment`) on it to produce a baseline. # *** # # Setup # ## Import libraries # Install ALIGN if you have not already. import sys # !{sys.executable} -m pip install align # Import packages we'll need to run ALIGN. import align, os import pandas as pd # Import `time` so that we can get a sense of how # long the ALIGN pipeline takes. import time # Import `warnings` to flag us if required files aren't provided. 
import warnings

# ## Install additional NLTK packages
# Download some additional `nltk` packages for `align` to work.
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')

# ## Specify ALIGN path settings
# ALIGN will need to know where the raw transcripts are stored, where to store the processed data, and where to read in any additional files needed for optional ALIGN parameters.

# ### Required directories
# For the sake of this tutorial, specify a base path that will serve as our jumping-off point for our saved data. All of the CHILDES data and shipped data will be called from the package directory.

# **`BASE_PATH`**: Containing directory for this tutorial.
BASE_PATH = os.getcwd()

# **`CHILDES_EXAMPLE`**: Subdirectories for output and other files for this tutorial. (We'll create a default directory if one doesn't already exist.)
CHILDES_EXAMPLE = os.path.join(BASE_PATH, 'CHILDES/')
if not os.path.exists(CHILDES_EXAMPLE):
    os.makedirs(CHILDES_EXAMPLE)

# **`TRANSCRIPTS`**: Path to raw transcript files. Automatically provided by `align`.
TRANSCRIPTS = align.datasets.CHILDES_directory

# **`PREPPED_TRANSCRIPTS`**: Set variable for folder name (as string) for relative location of folder into which prepared transcript files will be saved. (We'll create a default directory if one doesn't already exist.)
PREPPED_TRANSCRIPTS = os.path.join(CHILDES_EXAMPLE, 'childes-prepped/')
if not os.path.exists(PREPPED_TRANSCRIPTS):
    os.makedirs(PREPPED_TRANSCRIPTS)

# **`ANALYSIS_READY`**: Set variable for folder name (as string) for relative location of folder into which analysis-ready dataframe files will be saved. (We'll create a default directory if one doesn't already exist.)
ANALYSIS_READY = os.path.join(CHILDES_EXAMPLE, 'childes-analysis/')
if not os.path.exists(ANALYSIS_READY):
    os.makedirs(ANALYSIS_READY)

# **`SURROGATE_TRANSCRIPTS`**: Set variable for folder name (as string) for relative location of folder into which all prepared surrogate transcript files will be saved. (We'll create a default directory if one doesn't already exist.)
SURROGATE_TRANSCRIPTS = os.path.join(CHILDES_EXAMPLE, 'childes-surrogate/')
if not os.path.exists(SURROGATE_TRANSCRIPTS):
    os.makedirs(SURROGATE_TRANSCRIPTS)

# ### Paths for optional parameters
# **`OPTIONAL_PATHS`**: If using the Stanford POS tagger or pretrained vectors, the path to these files. If these files are provided in other locations, be sure to change the file paths for them. (We'll create a default directory if one doesn't already exist.)
OPTIONAL_PATHS = os.path.join(CHILDES_EXAMPLE, 'optional_directories/')
if not os.path.exists(OPTIONAL_PATHS):
    os.makedirs(OPTIONAL_PATHS)

# #### Stanford POS Tagger
# The Stanford POS tagger **will not be used** by default in this example. However, you may use it by uncommenting and providing the requested file paths in the cells in this section and then changing the relevant parameters in the ALIGN calls below.
# If desired, we could use the Stanford part-of-speech tagger along with the Penn part-of-speech tagger (which is always used in ALIGN). To do so, the files will need to be downloaded separately: https://nlp.stanford.edu/software/tagger.shtml#Download

# **`STANFORD_POS_PATH`**: If using the Stanford POS tagger with the Penn POS tagger, path to the Stanford directory.

# +
# STANFORD_POS_PATH = os.path.join(OPTIONAL_PATHS,
#                                  'stanford-postagger-full-2018-10-16/')

# +
# if os.path.exists(STANFORD_POS_PATH) == False:
#     warnings.warn('Stanford POS directory not found at the specified '
#                   'location.
Please update the file path with ' # 'the folder that can be directly downloaded here: ' # 'https://nlp.stanford.edu/software/stanford-postagger-full-2018-10-16.zip ' # '- Alternatively, comment out the ' # '`STANFORD_POS_PATH` information.') # - # **`STANFORD_LANGUAGE`**: If using Stanford tagger, # set language model to be used for POS tagging. # + # STANFORD_LANGUAGE = os.path.join('models/english-left3words-distsim.tagger') # + # if os.path.exists(STANFORD_POS_PATH + STANFORD_LANGUAGE) == False: # warnings.warn('Stanford tagger language not found at the specified ' # 'location. Please update the file path or comment ' # 'out the `STANFORD_POS_PATH` information.') # - # #### Google News pretrained vectors # The Google News pretrained vectors **will be used** # by default in this example. The file is available for # download here: https://code.google.com/archive/p/word2vec/ # If desired, researchers may choose to read in pretrained # `word2vec` vectors rather than creating a semantic space # from the corpus provided. This may be especially useful # for small corpora (i.e., fewer than 30k unique words), # although the choice of semantic space corpus should be # made with careful consideration about the nature of the # linguistic context (for further discussion, see Duran, # Paxton, & Fusaroli, *submitted*). # **`PRETRAINED_INPUT_FILE`**: If using pretrained vectors, path # to pretrained vector files. You may choose to download the file # directly to this path or change the path to a different one. PRETRAINED_INPUT_FILE = os.path.join(OPTIONAL_PATHS, 'GoogleNews-vectors-negative300.bin') if os.path.exists(PRETRAINED_INPUT_FILE) == False: warnings.warn('Google News vector not found at the specified ' 'location. 
Please update the file path with ' 'the .bin file that can be accessed here: ' 'https://code.google.com/archive/p/word2vec/ ' '- Alternatively, comment out the `PRETRAINED_INPUT_FILE` information') # *** # # Phase 1: Prepare transcripts # In Phase 1, we take our raw transcripts and get them ready # for later ALIGN analysis. # ## Preparation settings # There are a number of parameters that we can set for the # `prepare_transcripts()` function: print(align.prepare_transcripts.__doc__) # For the sake of this demonstration, we'll keep everything as # defaults. Among other parameters, this means that: # * any turns fewer than 2 words will be removed from the corpus # (`minwords=2`), # * we'll be using regex to strip out any filler words # (e.g., "uh," "um," "huh"; `use_filler_list=None`), # * if you like, you can supply additional filler words as `use_filler_list=["string1", "string2"]` but be sure to set `filler_regex_and_list=True` # * we'll be using the Project Gutenberg corpus to create our # spell-checker algorithm (`training_dictionary=None`), # * we'll rely only on the Penn POS tagger # (`add_stanford_tags=False`), and # * our data will be saved both as individual conversation files # and as a master dataframe of all conversation outputs # (`save_concatenated_dataframe=True`). # ## Run preparation phase # First, we prepare our transcripts by reading in individual `.txt` # files for each conversation, clean up undesired text and turns, # spell-check, tokenize, lemmatize, and add POS tags. 
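# As a rough illustration of what this cleaning step does, here is a
# hypothetical mini-version of the default filler-stripping and `minwords`
# filtering. This is *not* the package's implementation — the filler pattern
# and helper below are invented for illustration only.

```python
import re

# Toy filler pattern; the actual default regex in `align` may differ.
FILLERS = re.compile(r"\b(uh+|um+|huh)\b", flags=re.IGNORECASE)

def clean_turn(turn, minwords=2):
    """Strip filler words; drop the turn if fewer than `minwords` words remain."""
    words = FILLERS.sub("", turn).split()
    return " ".join(words) if len(words) >= minwords else None

turns = ["um I think so", "uh huh", "yes really"]
cleaned = [t for t in (clean_turn(t) for t in turns) if t is not None]
```

# Here "uh huh" disappears entirely: once its fillers are stripped, it falls
# below the two-word minimum and the whole turn is dropped.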
start_phase1 = time.time()

model_store = align.prepare_transcripts(
    input_files=TRANSCRIPTS,
    output_file_directory=PREPPED_TRANSCRIPTS,
    minwords=2,
    use_filler_list=None,
    filler_regex_and_list=False,
    training_dictionary=None,
    add_stanford_tags=False,
    ### if you want to run the Stanford POS tagger, be sure to uncomment the next two lines
    # stanford_pos_path=STANFORD_POS_PATH,
    # stanford_language_path=STANFORD_LANGUAGE,
    save_concatenated_dataframe=True)

end_phase1 = time.time()

# ***

# # Phase 2: Calculate alignment

# ## For real data: Alignment calculation settings

# There are a number of parameters that we can set for the
# `calculate_alignment()` function:

print(align.calculate_alignment.__doc__)

# For the sake of this tutorial, we'll keep everything at the defaults. Among
# other parameters, this means that we'll:
#
# * use only unigrams and bigrams for our *n*-grams (`maxngram=2`),
# * use pretrained vectors instead of creating our own semantic space, since
#   our tutorial corpus is quite small (`use_pretrained_vectors=True` and
#   `pretrained_input_file=PRETRAINED_INPUT_FILE`),
# * ignore exact lexical duplicates when calculating syntactic alignment
#   (`ignore_duplicates=True`),
# * rely only on the Penn POS tagger (`add_stanford_tags=False`), and
# * implement high- and low-frequency cutoffs to clean our transcript data
#   (`high_sd_cutoff=3` and `low_n_cutoff=1`).

# Whenever we calculate a baseline level of alignment, we need to include the
# same parameter values for any parameters that are present in both
# `calculate_alignment()` (this step) and `calculate_baseline_alignment()`
# (next step). As a result, we'll specify these here:

# set standards to be used for real and surrogate
INPUT_FILES = PREPPED_TRANSCRIPTS
MAXNGRAM = 2
USE_PRETRAINED_VECTORS = True
SEMANTIC_MODEL_INPUT_FILE = os.path.join(CHILDES_EXAMPLE,
                                         'align_concatenated_dataframe.txt')
PRETRAINED_FILE_DIRECTORY = PRETRAINED_INPUT_FILE
ADD_STANFORD_TAGS = False
IGNORE_DUPLICATES = True
HIGH_SD_CUTOFF = 3
LOW_N_CUTOFF = 1

# ## For real data: Run alignment calculation

start_phase2real = time.time()

[turn_real, convo_real] = align.calculate_alignment(
    input_files=INPUT_FILES,
    maxngram=MAXNGRAM,
    use_pretrained_vectors=USE_PRETRAINED_VECTORS,
    pretrained_input_file=PRETRAINED_INPUT_FILE,
    semantic_model_input_file=SEMANTIC_MODEL_INPUT_FILE,
    output_file_directory=ANALYSIS_READY,
    add_stanford_tags=ADD_STANFORD_TAGS,
    ignore_duplicates=IGNORE_DUPLICATES,
    high_sd_cutoff=HIGH_SD_CUTOFF,
    low_n_cutoff=LOW_N_CUTOFF)

end_phase2real = time.time()

# ## For surrogate data: Alignment calculation settings

# For the surrogate or baseline data, we have many of the same parameters for
# `calculate_baseline_alignment()` as we do for `calculate_alignment()`:

print(align.calculate_baseline_alignment.__doc__)

# As mentioned above, when calculating the baseline, it is **vital** to
# include the *same* parameter values for any parameters that are included in
# both `calculate_alignment()` and `calculate_baseline_alignment()`. As a
# result, we re-use those values here.

# We demonstrate other possible uses for labels by setting
# `dyad_label='time'`, allowing us to compare alignment over time across the
# same speakers. We also demonstrate how to generate a subset of surrogate
# pairings rather than all possible pairings.
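# Conceptually, the surrogate baseline re-pairs speakers who never actually
# conversed and runs the same alignment analysis on those impossible pairs.
# Below is a toy sketch of one such re-pairing scheme — a rotation — which is
# hypothetical and *not* the algorithm `calculate_baseline_alignment()`
# actually uses:

```python
def surrogate_pairs(dyads):
    """Pair each conversation's first speaker with the second speaker
    of a *different* conversation by rotating the partner list."""
    partners = [b for _, b in dyads]
    rotated = partners[1:] + partners[:1]
    return [(a, b) for (a, _), b in zip(dyads, rotated)]

real_dyads = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]
surrogates = surrogate_pairs(real_dyads)
```

# Every surrogate pair crosses two real conversations, so any alignment it
# shows can only reflect chance-level similarity — exactly what a baseline
# should capture.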
# In addition to the parameters that we're re-using from the
# `calculate_alignment()` values (see above), we'll keep most parameters at
# their defaults by:
#
# * preserving the turn order when creating surrogate pairs
#   (`keep_original_turn_order=True`),
# * specifying condition with the `cond` prefix (`condition_label='cond'`),
#   and
# * using a hyphen to separate the condition and dyad identifiers
#   (`id_separator='\-'`).
#
# However, we will also change some of these defaults, including:
#
# * generating only a subset of surrogate data equal to the size of the real
#   data (`all_surrogates=False`) and
# * specifying that we'll be shuffling the baseline data by time instead of
#   by dyad (`dyad_label='time'`).

# ## For surrogate data: Run alignment calculation

start_phase2surrogate = time.time()

[turn_surrogate, convo_surrogate] = align.calculate_baseline_alignment(
    input_files=INPUT_FILES,
    maxngram=MAXNGRAM,
    use_pretrained_vectors=USE_PRETRAINED_VECTORS,
    pretrained_input_file=PRETRAINED_INPUT_FILE,
    semantic_model_input_file=SEMANTIC_MODEL_INPUT_FILE,
    output_file_directory=ANALYSIS_READY,
    add_stanford_tags=ADD_STANFORD_TAGS,
    ignore_duplicates=IGNORE_DUPLICATES,
    high_sd_cutoff=HIGH_SD_CUTOFF,
    low_n_cutoff=LOW_N_CUTOFF,
    surrogate_file_directory=SURROGATE_TRANSCRIPTS,
    all_surrogates=False,
    keep_original_turn_order=True,
    id_separator='\-',
    dyad_label='time',
    condition_label='cond')

end_phase2surrogate = time.time()

# ***

# # ALIGN output overview

# ## Speed calculations

# As promised, let's take a look at how long it takes to run each section.
# Time is given in seconds.

# **Phase 1:**

end_phase1 - start_phase1

# **Phase 2, real data:**

end_phase2real - start_phase2real

# **Phase 2, surrogate data:**

end_phase2surrogate - start_phase2surrogate

# **All phases:**

end_phase2surrogate - start_phase1

# ## Printouts!

# And that's it! Before we go, let's take a look at the output from the real
# data analyzed at the turn level for each conversation (`turn_real`) and at
# the conversation level for each dyad (`convo_real`). We'll then look at our
# surrogate data, analyzed both at the turn level (`turn_surrogate`) and at
# the conversation level (`convo_surrogate`). In our next step, we would then
# take these data and plug them into our statistical model of choice, but
# we'll stop here for the sake of our tutorial.

turn_real.head(10)

convo_real.head(10)

turn_surrogate.head(10)

convo_surrogate.head(10)
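# As a toy illustration of that next analytic step — the scores below are
# made up, and real ALIGN output has separate per-measure columns rather than
# one generic score — the key comparison is real alignment against the
# surrogate chance baseline:

```python
from statistics import mean

# Hypothetical per-turn alignment scores, standing in for a column of
# turn_real and turn_surrogate; names and values are illustrative only.
real_scores = [0.40, 0.50, 0.30, 0.60]
surrogate_scores = [0.20, 0.10, 0.25, 0.15]

# Alignment above the surrogate (chance) baseline suggests genuine adaptation
# between interlocutors rather than shared task language.
effect = mean(real_scores) - mean(surrogate_scores)
```

# In practice this contrast would go into a mixed-effects model with dyad as
# a random effect, rather than a raw difference of means.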
# examples/align-childes_example.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
import torch
import torch.nn as nn
import time
import argparse
import os
import datetime
from torch.distributions.categorical import Categorical
from scipy.spatial import distance

# visualization
# %matplotlib inline
from IPython.display import set_matplotlib_formats, clear_output
set_matplotlib_formats('png2x', 'pdf')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
try:
    import networkx as nx
    from scipy.spatial.distance import pdist, squareform
except ImportError:
    pass

import warnings
warnings.filterwarnings("ignore", category=UserWarning)

device = torch.device("cpu"); gpu_id = -1  # select CPU
gpu_id = '0'  # select a single GPU
# gpu_id = '2,3'  # select multiple GPUs
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
if torch.cuda.is_available():
    device = torch.device("cuda")
    print('GPU name: {:s}, gpu_id: {:s}'.format(torch.cuda.get_device_name(0), gpu_id))
print(device)
# -

# # Model Architecture

# +
import math
import numpy as np
import torch.nn.functional as F
import random
import torch.optim as optim
from torch.autograd import Variable
from torch.optim import lr_scheduler
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook


class Transformer_encoder_net(nn.Module):
    """
    Encoder network based on self-attention transformer
    Inputs :
      h of size     (bsz, nb_nodes+1, dim_emb)     batch of input cities
    Outputs :
      h of size     (bsz, nb_nodes+1, dim_emb)     batch of encoded cities
      score of size (bsz, nb_nodes+1, nb_nodes+1)  batch of attention scores
    """
    def __init__(self, nb_layers, dim_emb, nb_heads, dim_ff, batchnorm):
        super(Transformer_encoder_net, self).__init__()
        assert dim_emb == nb_heads * (dim_emb // nb_heads)  # check if dim_emb is divisible by nb_heads
        self.MHA_layers = nn.ModuleList([nn.MultiheadAttention(dim_emb, nb_heads) for _ in range(nb_layers)])
        self.linear1_layers = nn.ModuleList([nn.Linear(dim_emb, dim_ff) for _ in range(nb_layers)])
        self.linear2_layers = nn.ModuleList([nn.Linear(dim_ff, dim_emb) for _ in range(nb_layers)])
        if batchnorm:
            self.norm1_layers = nn.ModuleList([nn.BatchNorm1d(dim_emb) for _ in range(nb_layers)])
            self.norm2_layers = nn.ModuleList([nn.BatchNorm1d(dim_emb) for _ in range(nb_layers)])
        else:
            self.norm1_layers = nn.ModuleList([nn.LayerNorm(dim_emb) for _ in range(nb_layers)])
            self.norm2_layers = nn.ModuleList([nn.LayerNorm(dim_emb) for _ in range(nb_layers)])
        self.nb_layers = nb_layers
        self.nb_heads = nb_heads
        self.batchnorm = batchnorm

    def forward(self, h):
        # PyTorch nn.MultiheadAttention requires input size (seq_len, bsz, dim_emb)
        h = h.transpose(0, 1)  # size(h)=(nb_nodes, bsz, dim_emb)
        # L layers
        for i in range(self.nb_layers):
            h_rc = h  # residual connection, size(h_rc)=(nb_nodes, bsz, dim_emb)
            h, score = self.MHA_layers[i](h, h, h)  # size(h)=(nb_nodes, bsz, dim_emb), size(score)=(bsz, nb_nodes, nb_nodes)
            # add residual connection
            h = h_rc + h  # size(h)=(nb_nodes, bsz, dim_emb)
            if self.batchnorm:
                # PyTorch nn.BatchNorm1d requires input size (bsz, dim, seq_len)
                h = h.permute(1, 2, 0).contiguous()  # size(h)=(bsz, dim_emb, nb_nodes)
                h = self.norm1_layers[i](h)          # size(h)=(bsz, dim_emb, nb_nodes)
                h = h.permute(2, 0, 1).contiguous()  # size(h)=(nb_nodes, bsz, dim_emb)
            else:
                h = self.norm1_layers[i](h)  # size(h)=(nb_nodes, bsz, dim_emb)
            # feedforward
            h_rc = h  # residual connection
            h = self.linear2_layers[i](torch.relu(self.linear1_layers[i](h)))
            h = h_rc + h  # size(h)=(nb_nodes, bsz, dim_emb)
            if self.batchnorm:
                h = h.permute(1, 2, 0).contiguous()  # size(h)=(bsz, dim_emb, nb_nodes)
                h = self.norm2_layers[i](h)          # size(h)=(bsz, dim_emb, nb_nodes)
                h = h.permute(2, 0, 1).contiguous()  # size(h)=(nb_nodes, bsz, dim_emb)
            else:
                h = self.norm2_layers[i](h)  # size(h)=(nb_nodes, bsz, dim_emb)
        # Transpose h
        h = h.transpose(0, 1)  # size(h)=(bsz, nb_nodes, dim_emb)
        return h, score


class Attention(nn.Module):
    def __init__(self, n_hidden):
        super(Attention, self).__init__()
        self.size = 0
        self.batch_size = 0
        self.dim = n_hidden

        v = torch.FloatTensor(n_hidden).cuda()
        self.v = nn.Parameter(v)
        self.v.data.uniform_(-1 / math.sqrt(n_hidden), 1 / math.sqrt(n_hidden))

        # parameters for pointer attention
        self.Wref = nn.Linear(n_hidden, n_hidden)
        self.Wq = nn.Linear(n_hidden, n_hidden)

    def forward(self, q, ref):  # query and reference
        self.batch_size = q.size(0)
        self.size = int(ref.size(0) / self.batch_size)
        q = self.Wq(q)  # (B, dim)
        ref = self.Wref(ref)
        ref = ref.view(self.batch_size, self.size, self.dim)  # (B, size, dim)

        q_ex = q.unsqueeze(1).repeat(1, self.size, 1)  # (B, size, dim)
        # v_view: (B, dim, 1)
        v_view = self.v.unsqueeze(0).expand(self.batch_size, self.dim).unsqueeze(2)

        # (B, size, dim) * (B, dim, 1)
        u = torch.bmm(torch.tanh(q_ex + ref), v_view).squeeze(2)

        return u, ref


class LSTM(nn.Module):
    def __init__(self, n_hidden):
        super(LSTM, self).__init__()

        # parameters for input gate
        self.Wxi = nn.Linear(n_hidden, n_hidden)  # W(xt)
        self.Whi = nn.Linear(n_hidden, n_hidden)  # W(ht)
        self.wci = nn.Linear(n_hidden, n_hidden)  # w(ct)

        # parameters for forget gate
        self.Wxf = nn.Linear(n_hidden, n_hidden)  # W(xt)
        self.Whf = nn.Linear(n_hidden, n_hidden)  # W(ht)
        self.wcf = nn.Linear(n_hidden, n_hidden)  # w(ct)

        # parameters for cell gate
        self.Wxc = nn.Linear(n_hidden, n_hidden)  # W(xt)
        self.Whc = nn.Linear(n_hidden, n_hidden)  # W(ht)

        # parameters for output gate
        self.Wxo = nn.Linear(n_hidden, n_hidden)  # W(xt)
        self.Who = nn.Linear(n_hidden, n_hidden)  # W(ht)
        self.wco = nn.Linear(n_hidden, n_hidden)  # w(ct)

    def forward(self, x, h, c):
        # input gate
        i = torch.sigmoid(self.Wxi(x) + self.Whi(h) + self.wci(c))
        # forget gate
        f = torch.sigmoid(self.Wxf(x) + self.Whf(h) + self.wcf(c))
        # cell gate
        c = f * c + i * torch.tanh(self.Wxc(x) + self.Whc(h))
        # output gate
        o = torch.sigmoid(self.Wxo(x) + self.Who(h) + self.wco(c))
        h = o * torch.tanh(c)
        return h, c


class HPN(nn.Module):
    def __init__(self, n_feature, n_hidden):
        super(HPN, self).__init__()
        self.city_size = 0
        self.batch_size = 0
        self.dim = n_hidden

        # lstm for first turn
        # self.lstm0 = nn.LSTM(n_hidden, n_hidden)

        # pointer layer
        self.pointer = Attention(n_hidden)
        self.TransPointer = Attention(n_hidden)

        # lstm encoder
        self.encoder = LSTM(n_hidden)

        # trainable first hidden input
        h0 = torch.FloatTensor(n_hidden)
        c0 = torch.FloatTensor(n_hidden)

        # trainable latent variable coefficient
        alpha = torch.ones(1).cuda()

        self.h0 = nn.Parameter(h0)
        self.c0 = nn.Parameter(c0)
        self.alpha = nn.Parameter(alpha)
        self.h0.data.uniform_(-1 / math.sqrt(n_hidden), 1 / math.sqrt(n_hidden))
        self.c0.data.uniform_(-1 / math.sqrt(n_hidden), 1 / math.sqrt(n_hidden))

        r1 = torch.ones(1)
        r2 = torch.ones(1)
        r3 = torch.ones(1)
        self.r1 = nn.Parameter(r1)
        self.r2 = nn.Parameter(r2)
        self.r3 = nn.Parameter(r3)

        # embedding
        self.embedding_x = nn.Linear(n_feature, n_hidden)
        self.embedding_all = nn.Linear(n_feature, n_hidden)
        self.Transembedding_all = Transformer_encoder_net(6, 128, 8, 512, batchnorm=True)

        # vector to start decoding
        self.start_placeholder = nn.Parameter(torch.randn(n_hidden))

        # weights for GNN
        self.W1 = nn.Linear(n_hidden, n_hidden)
        self.W2 = nn.Linear(n_hidden, n_hidden)
        self.W3 = nn.Linear(n_hidden, n_hidden)

        # aggregation function for GNN
        self.agg_1 = nn.Linear(n_hidden, n_hidden)
        self.agg_2 = nn.Linear(n_hidden, n_hidden)
        self.agg_3 = nn.Linear(n_hidden, n_hidden)

    def forward(self, context, Transcontext, x, X_all, mask, h=None, c=None, latent=None):
        '''
        Inputs (B: batch size, size: city size, dim: hidden dimension)
        x: current city coordinate (B, 2)
        X_all: all cities' coordinates (B, size, 2)
        mask: mask visited cities
        h: hidden variable (B, dim)
        c: cell gate (B, dim)
        latent: latent pointer vector from previous layer (B, size, dim)

        Outputs
        softmax: probability distribution of next city (B, size)
        h: hidden variable (B, dim)
        c: cell gate (B, dim)
        latent_u: latent pointer vector for next layer
        '''
        self.batch_size = X_all.size(0)
        self.city_size = X_all.size(1)

        # the weights are shared across all the cities
        # embed all cities
        if h is None or c is None:
            x = self.start_placeholder
            context = self.embedding_all(X_all)
            Transcontext, _ = self.Transembedding_all(context)

            # =============================
            # graph neural network encoder
            # =============================

            # (B, size, dim)
            context = context.reshape(-1, self.dim)
            Transcontext = Transcontext.reshape(-1, self.dim)

            context = self.r1 * self.W1(context) \
                + (1 - self.r1) * F.relu(self.agg_1(context / (self.city_size - 1)))
            context = self.r2 * self.W2(context) \
                + (1 - self.r2) * F.relu(self.agg_2(context / (self.city_size - 1)))
            context = self.r3 * self.W3(context) \
                + (1 - self.r3) * F.relu(self.agg_3(context / (self.city_size - 1)))

            h0 = self.h0.unsqueeze(0).expand(self.batch_size, self.dim)
            c0 = self.c0.unsqueeze(0).expand(self.batch_size, self.dim)
            h0 = h0.unsqueeze(0).contiguous()
            c0 = c0.unsqueeze(0).contiguous()

            # let h0, c0 be the hidden variables of the first turn
            h = h0.squeeze(0)
            c = c0.squeeze(0)
        else:
            x = self.embedding_x(x)

        # LSTM encoder
        h, c = self.encoder(x, h, c)

        # query vector
        q = h

        # pointer
        u1, _ = self.pointer(q, context)
        u2, _ = self.TransPointer(q, Transcontext)
        u = u1 + u2
        latent_u = u.clone()
        u = 10 * torch.tanh(u) + mask
        if latent is not None:
            u += self.alpha * latent
        return context, Transcontext, F.softmax(u, dim=1), h, c, latent_u
# -

# # Data Generation

# +
'''
generate training data
'''
DataGen = HPN(n_feature=2, n_hidden=128)
DataGen = DataGen.to(device)
DataGen.eval()

# Upload checkpoint for the pre-trained model "HPN for TSP"
checkpoint_file = "../input/checkpoint_21-09-05--08-53-44-n50-gpu0.pkl"
checkpoint = torch.load(checkpoint_file, map_location=device)
DataGen.load_state_dict(checkpoint['model_baseline'])
print("Done")
del checkpoint


def ModelSolution(B, size, Critic):
    zero_to_bsz = torch.arange(B, device=device)  # [0,1,...,bsz-1]
    X = torch.rand(B, size, 2, device=device)
    mask = torch.zeros(B, size, device=device)
    solution = []
    Y = X.view(B, size, 2)  # to the same batch size
    x = Y[:, 0, :]
    h = None
    c = None
    context = None
    Transcontext = None
    with torch.no_grad():
        for k in range(size):
            context, Transcontext, output, h, c, _ = Critic(context, Transcontext, x=x, X_all=X, h=h, c=c, mask=mask)
            idx = torch.argmax(output, dim=1)
            x = Y[zero_to_bsz, idx.data]
            solution.append(x.cpu().numpy())
            mask[zero_to_bsz, idx.data] += -np.inf
    graph = torch.tensor(solution).permute(1, 0, 2)  # shape = (B, size, 2)
    return graph


def generate_data(model, B=512, size=50):
    # X = np.zeros([B, size, 4])  # xi, yi, ei, li, ci
    solutions = torch.zeros(B, device='cuda')
    route = [x for x in range(size)] + [0]
    route = torch.tensor(route).unsqueeze(0).repeat(B, 1)
    X = ModelSolution(B, size, model).to('cuda')
    arange_vec = torch.arange(B, device=X.device)
    ColAdded = torch.zeros((B, size, 2), device=X.device)
    X = torch.cat((X, ColAdded), dim=2).to(X.device)
    X[arange_vec, 0, 3] = 2 * torch.rand(B, device=X.device)  # l0 = rand

    first_cities = X[arange_vec, route[:, 0], :2]  # size(first_cities)=(bsz,2)
    previous_cities = first_cities

    cur_time = torch.zeros(B, device=X.device)
    tour_len = torch.zeros(B, device=X.device)
    zeros = torch.zeros(B, device=X.device)
    with torch.no_grad():
        for k in range(1, size):  # generate data with approximate solutions
            current_cities = X[arange_vec, route[:, k], :2]
            cur_time += torch.sum((current_cities - previous_cities)**2, dim=1)**0.5
            tour_len += torch.sum((current_cities - previous_cities)**2, dim=1)**0.5
            previous_cities = current_cities
            X[arange_vec, k, 2] = torch.maximum(zeros, (cur_time - 2 * torch.rand(B, device=X.device)))  # entering time 0 <= ei <= cur_time
            X[arange_vec, k, 3] = cur_time + 2 * torch.rand(B, device=X.device) + 1  # leaving time li >= cur_time
        tour_len += torch.sum((current_cities - first_cities)**2, dim=1)**0.5
        solutions += tour_len
    X = np.array(X.cpu().numpy())
    np.random.shuffle(X)
    X = torch.tensor(X).to('cuda')
    return X, solutions
# -

# # Training

# +
size = 50
learn_rate = 1e-4  # learning rate
B = 512  # batch size
TOL = 1e-3
B_val = 1000  # validation size
B_valLoop = 20
steps = 2500  # training steps
n_epoch = 100  # epochs

print('=========================')
print('prepare to train')
print('=========================')
print('Hyperparameters:')
print('size', size)
print('learning rate', learn_rate)
print('batch size', B)
print('validation size', B_val)
print('steps', steps)
print('epoch', n_epoch)
print('=========================')

###################
# Instantiate a training network and a baseline network
###################
try:
    del ActorLow   # remove existing model
    del CriticLow  # remove existing model
except NameError:
    pass

ActorLow = HPN(n_feature=4, n_hidden=128)
CriticLow = HPN(n_feature=4, n_hidden=128)
optimizer = optim.Adam(ActorLow.parameters(), lr=learn_rate)

# Put the critic model in eval mode
ActorLow = ActorLow.to(device)
CriticLow = CriticLow.to(device)
CriticLow.eval()

########################
# Remember to first initialize the model and optimizer, then load the dictionary locally.
####################### epoch_ckpt = 0 tot_time_ckpt = 0 val_mean = [] val_std = [] val_accuracy = [] plot_performance_train = [] plot_performance_baseline = [] #********************************************# Uncomment these lines to re-start training with saved checkpoint #********************************************# """ checkpoint_file = "../input/nonhiersize20/checkpoint_21-09-05--08-55-01-n50-gpu0.pkl" checkpoint = torch.load(checkpoint_file, map_location=device) epoch_ckpt = checkpoint['epoch'] + 1 tot_time_ckpt = checkpoint['tot_time'] plot_performance_train = checkpoint['plot_performance_train'] plot_performance_baseline = checkpoint['plot_performance_baseline'] CriticLow.load_state_dict(checkpoint['model_baseline']) ActorLow.load_state_dict(checkpoint['model_train']) optimizer.load_state_dict(checkpoint['optimizer']) print('Re-start training with saved checkpoint file={:s}\n Checkpoint at epoch= {:d} and time={:.3f}min\n'.format(checkpoint_file,epoch_ckpt-1,tot_time_ckpt/60)) del checkpoint """ #*********************************************# Uncomment these lines to re-start training with saved checkpoint #********************************************# ################### # Main training loop ################### start_training_time = time.time() time_stamp = datetime.datetime.now().strftime("%y-%m-%d--%H-%M-%S") C = 0 # baseline R = 0 # reward zero_to_bsz = torch.arange(B, device=device) # [0,1,...,bsz-1] zero_to_bsz_val = torch.arange(B_val, device=device) # [0,1,...,bsz-1] for epoch in range(0,n_epoch): # re-start training with saved checkpoint epoch += epoch_ckpt ################### # Train model for one epoch ################### start = time.time() ActorLow.train() for i in range(1,steps+1): X, _ = generate_data(DataGen,B=B, size=size) Enter = X[:,:,2] # Entering time Leave = X[:,:,3] # Leaving time mask = torch.zeros(B,size).cuda() R = 0 logprobs = 0 reward = 0 time_wait = torch.zeros(B).cuda() time_penalty = torch.zeros(B).cuda() 
total_time_penalty_train = torch.zeros(B).cuda() total_time_cost_train = torch.zeros(B).cuda() total_time_wait_train = torch.zeros(B).cuda() # X = X.view(B,size,3) # Time = Time.view(B,size) x = X[:,0,:] h = None c = None context = None Transcontext = None #Actor Sampling phase for k in range(size): context,Transcontext,output, h, c, _ = ActorLow(context,Transcontext,x=x, X_all=X, h=h, c=c, mask=mask) sampler = torch.distributions.Categorical(output) idx = sampler.sample() y_cur = X[zero_to_bsz, idx.data].clone() if k == 0: y_ini = y_cur.clone() if k > 0: reward = torch.norm(y_cur[:,:2] - y_pre[:,:2], dim=1) y_pre = y_cur.clone() x = X[zero_to_bsz, idx.data].clone() R += reward total_time_cost_train += reward # enter time enter = Enter[zero_to_bsz, idx.data] leave = Leave[zero_to_bsz, idx.data] # determine the total reward and current enter time time_wait = torch.lt(total_time_cost_train, enter).float()*(enter - total_time_cost_train) total_time_wait_train += time_wait # total time cost total_time_cost_train += time_wait time_penalty = torch.lt(leave, total_time_cost_train).float()*10 total_time_cost_train += time_penalty total_time_penalty_train += time_penalty logprobs += torch.log(output[zero_to_bsz, idx.data]) mask[zero_to_bsz, idx.data] += -np.inf R += torch.norm(y_cur[:,:2] - y_ini[:,:2], dim=1) total_time_cost_train += torch.norm(y_cur[:,:2] - y_ini[:,:2], dim=1) # Critic Baseline phase C = 0 baseline = 0 mask = torch.zeros(B,size).cuda() time_wait = torch.zeros(B).cuda() time_penalty = torch.zeros(B).cuda() total_time_penalty_base = torch.zeros(B).cuda() total_time_cost_base = torch.zeros(B).cuda() total_time_wait_base = torch.zeros(B).cuda() x = X[:,0,:] h = None c = None context = None Transcontext = None # compute tours for baseline without grad with torch.no_grad(): for k in range(size): context,Transcontext,output, h, c, _ = CriticLow(context,Transcontext,x=x, X_all=X, h=h, c=c, mask=mask) idx = torch.argmax(output, dim=1) # ----> greedy baseline 
critic y_cur = X[zero_to_bsz, idx.data].clone() if k == 0: y_ini = y_cur.clone() if k > 0: baseline = torch.norm(y_cur[:,:2] - y_pre[:,:2], dim=1) y_pre = y_cur.clone() x = X[zero_to_bsz, idx.data].clone() C += baseline total_time_cost_base += baseline # enter time enter = Enter[zero_to_bsz, idx.data] leave = Leave[zero_to_bsz, idx.data] # determine the total reward and current enter time time_wait = torch.lt(total_time_cost_base, enter).float()*(enter - total_time_cost_base) total_time_wait_base += time_wait # total time cost total_time_cost_base += time_wait time_penalty = torch.lt(leave, total_time_cost_base).float()*10 total_time_cost_base += time_penalty total_time_penalty_base += time_penalty mask[zero_to_bsz, idx.data] += -np.inf C += torch.norm(y_cur[:,:2] - y_ini[:,:2], dim=1) total_time_cost_base += torch.norm(y_cur[:,:2] - y_ini[:,:2], dim=1) ################### # Loss and backprop handling ################### loss = torch.mean((total_time_cost_train - total_time_cost_base) * logprobs) optimizer.zero_grad() loss.backward() optimizer.step() if i % 50 == 0: print("epoch:{}, batch:{}/{}, total time:{}, reward:{}, time:{}" .format(epoch, i, steps, total_time_cost_train.mean().item(), R.mean().item(), total_time_wait_train.mean().item())) time_one_epoch = time.time() - start time_tot = time.time() - start_training_time + tot_time_ckpt ################### # Evaluate train model and baseline on 1k random TSP instances ################### ActorLow.eval() mean_tour_length_actor = 0 mean_tour_length_critic = 0 for step in range(0,B_valLoop): # compute tour for model and baseline X, solutions = generate_data(DataGen,B=B, size=size) Enter = X[:,:,2] # Entering time Leave = X[:,:,3] # Leaving time mask = torch.zeros(B,size).cuda() R = 0 reward = 0 time_wait = torch.zeros(B).cuda() time_penalty = torch.zeros(B).cuda() total_time_penalty_train = torch.zeros(B).cuda() total_time_cost_train = torch.zeros(B).cuda() total_time_wait_train = torch.zeros(B).cuda() # X = 
X.view(B,size,3) # Time = Time.view(B,size) x = X[:,0,:] h = None c = None context = None Transcontext = None #Actor ِGreedy phase with torch.no_grad(): for k in range(size): context,Transcontext,output, h, c, _ = ActorLow(context,Transcontext,x=x, X_all=X, h=h, c=c, mask=mask) idx = torch.argmax(output, dim=1) # ----> greedy baseline critic y_cur = X[zero_to_bsz, idx.data].clone() if k == 0: y_ini = y_cur.clone() if k > 0: reward = torch.norm(y_cur[:,:2] - y_pre[:,:2], dim=1) y_pre = y_cur.clone() x = X[zero_to_bsz, idx.data].clone() R += reward total_time_cost_train += reward # enter time enter = Enter[zero_to_bsz, idx.data] leave = Leave[zero_to_bsz, idx.data] # determine the total reward and current enter time time_wait = torch.lt(total_time_cost_train, enter).float()*(enter - total_time_cost_train) total_time_wait_train += time_wait # total time cost total_time_cost_train += time_wait time_penalty = torch.lt(leave, total_time_cost_train).float()*10 #total_time_cost_train += time_penalty total_time_penalty_train += time_penalty mask[zero_to_bsz, idx.data] += -np.inf R += torch.norm(y_cur[:,:2] - y_ini[:,:2], dim=1) total_time_cost_train += torch.norm(y_cur[:,:2] - y_ini[:,:2], dim=1) # Critic Baseline phase C = 0 baseline = 0 mask = torch.zeros(B,size).cuda() time_wait = torch.zeros(B).cuda() time_penalty = torch.zeros(B).cuda() total_time_penalty_base = torch.zeros(B).cuda() total_time_cost_base = torch.zeros(B).cuda() total_time_wait_base = torch.zeros(B).cuda() x = X[:,0,:] h = None c = None context = None Transcontext = None # compute tours for baseline without grad with torch.no_grad(): for k in range(size): context,Transcontext,output, h, c, _ = CriticLow(context,Transcontext,x=x, X_all=X, h=h, c=c, mask=mask) idx = torch.argmax(output, dim=1) # ----> greedy baseline critic y_cur = X[zero_to_bsz, idx.data].clone() if k == 0: y_ini = y_cur.clone() if k > 0: baseline = torch.norm(y_cur[:,:2] - y_pre[:,:2], dim=1) y_pre = y_cur.clone() x = X[zero_to_bsz, 
idx.data].clone() C += baseline total_time_cost_base += baseline # enter time enter = Enter[zero_to_bsz, idx.data] leave = Leave[zero_to_bsz, idx.data] # determine the total reward and current enter time time_wait = torch.lt(total_time_cost_base, enter).float()*(enter - total_time_cost_base) total_time_wait_base += time_wait # total time cost total_time_cost_base += time_wait time_penalty = torch.lt(leave, total_time_cost_base).float()*10 #total_time_cost_base += time_penalty total_time_penalty_base += time_penalty mask[zero_to_bsz, idx.data] += -np.inf C += torch.norm(y_cur[:,:2] - y_ini[:,:2], dim=1) total_time_cost_base += torch.norm(y_cur[:,:2] - y_ini[:,:2], dim=1) mean_tour_length_actor += total_time_cost_train.mean().item() mean_tour_length_critic += total_time_cost_base.mean().item() mean_tour_length_actor = mean_tour_length_actor / B_valLoop mean_tour_length_critic = mean_tour_length_critic / B_valLoop # evaluate train model and baseline and update if train model is better update_baseline = mean_tour_length_actor < mean_tour_length_critic print('Avg Actor {} --- Avg Critic {}'.format(mean_tour_length_actor,mean_tour_length_critic)) if update_baseline: CriticLow.load_state_dict(ActorLow.state_dict()) print('My actor is going on the right road Hallelujah :) Updated') ################### # Valdiation train model and baseline on 1k random TSP instances ################### with torch.no_grad(): print("optimal upper bound:{}".format(solutions.mean())) X_val, _ = generate_data(DataGen,B=B_val, size=size) Enter = X_val[:,:,2] # Entering time Leave = X_val[:,:,3] # Leaving time mask = torch.zeros(B_val, size).to(device) baseline = 0 time_wait = torch.zeros(B_val).to(device) time_penalty = torch.zeros(B_val).to(device) total_time_cost = torch.zeros(B_val).to(device) total_time_penalty = torch.zeros(B_val).to(device) x = X_val[:,0,:] h = None c = None context = None Transcontext = None for k in range(size): context,Transcontext,output, h, c, _ = 
CriticLow(context,Transcontext,x=x, X_all=X_val, h=h, c=c, mask=mask) idx = torch.argmax(output, dim=1) # greedy baseline y_cur = X_val[zero_to_bsz_val, idx.data].clone() if k == 0: y_ini = y_cur.clone() if k > 0: baseline = torch.norm(y_cur[:,:2] - y_pre[:,:2], dim=1) y_pre = y_cur.clone() x = X_val[zero_to_bsz_val, idx.data].clone() total_time_cost += baseline # enter time enter = Enter[zero_to_bsz_val, idx.data] leave = Leave[zero_to_bsz_val, idx.data] # determine the total reward and current enter time time_wait = torch.lt(total_time_cost, enter).float()*(enter - total_time_cost) total_time_cost += time_wait time_penalty = torch.lt(leave, total_time_cost).float()*10 total_time_cost += time_penalty total_time_penalty += time_penalty mask[zero_to_bsz_val, idx.data] += -np.inf total_time_cost += torch.norm(y_cur[:,:2] - y_ini[:,:2], dim=1) accuracy = 1 - torch.lt(torch.zeros_like(total_time_penalty), total_time_penalty).sum().float() / total_time_penalty.size(0) print('validation result:{}, accuracy:{}' .format(total_time_cost.mean().item(), accuracy)) val_mean.append(total_time_cost.mean().item()) val_std.append(total_time_cost.std().item()) val_accuracy.append(accuracy) # For checkpoint plot_performance_train.append([(epoch+1), mean_tour_length_actor]) plot_performance_baseline.append([(epoch+1), mean_tour_length_critic]) # Compute optimality gap if size==50: gap_train = mean_tour_length_actor/5.692- 1.0 elif size==100: gap_train = mean_tour_length_actor/7.765- 1.0 else: gap_train = -1.0 # Print and save in txt file mystring_min = 'Epoch: {:d}, epoch time: {:.3f}min, tot time: {:.3f}day, L_actor: {:.3f}, L_critic: {:.3f}, gap_train(%): {:.3f}, update: {}'.format( epoch, time_one_epoch/60, time_tot/86400, mean_tour_length_actor, mean_tour_length_critic, 100 * gap_train, update_baseline) print(mystring_min) print('Save Checkpoints') # Saving checkpoint checkpoint_dir = os.path.join("checkpoint") if not os.path.exists(checkpoint_dir): os.makedirs(checkpoint_dir) 
torch.save({ 'epoch': epoch, 'time': time_one_epoch, 'tot_time': time_tot, 'loss': loss.item(), 'plot_performance_train': plot_performance_train, 'plot_performance_baseline': plot_performance_baseline, 'mean_tour_length_val': total_time_penalty, 'model_baseline': CriticLow.state_dict(), 'model_train': ActorLow.state_dict(), 'optimizer': optimizer.state_dict(), }, '{}.pkl'.format(checkpoint_dir + "/checkpoint_" + time_stamp + "-n{}".format(size) + "-gpu{}".format(gpu_id))) # -
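The time-window bookkeeping used in the rollouts above (wait if the tour arrives before a node's entering time, fixed penalty of 10 if it arrives after the leaving time) can be sketched in isolation. A minimal NumPy version of the per-step update — variable names mirror the torch code, and the batch below is hypothetical:

```python
import numpy as np

def step_time_update(total_time, enter, leave, penalty_weight=10.0):
    """One time-window update step, mirroring the torch logic above.

    If the tour arrives before a node's entering time it waits until `enter`;
    if it arrives after the leaving time a fixed penalty is added.
    """
    # equivalent of torch.lt(total_time, enter).float() * (enter - total_time)
    wait = np.maximum(enter - total_time, 0.0)
    total_time = total_time + wait
    # equivalent of torch.lt(leave, total_time).float() * 10
    penalty = (total_time > leave).astype(float) * penalty_weight
    return total_time, wait, penalty

# Hypothetical batch of 3 instances
total = np.array([2.0, 5.0, 9.0])
enter = np.array([3.0, 4.0, 1.0])
leave = np.array([6.0, 6.0, 6.0])
total, wait, pen = step_time_update(total, enter, leave)
# first instance waits 1 unit; the third arrives after `leave` and is penalized
```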
Size50/twtsp-hpn-size50.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/XavierCarrera/platzi-master-ml-exercises/blob/main/Mall_Clients_Data_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="XzhNk91xthIT" # # Introduction and Problem Definition # # [EN] # # For this problem, we're going to analyze a dataset that contains data regarding a mall's customers. The main goal of this exercise is to segment them by their business value. # # The main approach to solve this problem is to cluster customers based on the features in the dataset. Because no labels are provided, we're going to use a K-means algorithm for this case. # # For further info on how to use KMeans, best practices, and when to use this algorithm, consult the official documentation: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html # # [ES] # # Para este problema, vamos a analizar un dataset que contiene datos sobre los clientes de un centro comercial. El objetivo del ejercicio es segmentarlos por su valor de negocio. # # La aproximación principal para resolver este problema es agrupar los clientes observando las features del dataset. Debido a que no se dan etiquetas, vamos a utilizar el algoritmo K-means para este caso. 
# # Para mayor información sobre Kmeans, mejores prácticas y cuándo usar este algoritmo, consulta la documentación oficial: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html # + id="tOwE0g1bklyR" import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.cluster import KMeans # %matplotlib inline from mpl_toolkits.mplot3d import Axes3D plt.rcParams['figure.figsize'] = (16, 9) plt.style.use('ggplot') # + id="lifyzs8GtL2_" outputId="f328f4f5-b41b-4fcd-cc91-22beaee612d4" colab={"base_uri": "https://localhost:8080/", "height": 419} df = pd.read_csv('/content/drive/My Drive/Colab Notebooks/db/Mall_Customers.csv') df # + [markdown] id="_vU8Ouf45qH3" # [EN] # # At this point, the only feature that provides no added value to analyze the problem is the Customer ID. Thus, we're dropping this variable. # # [ES] # # En este punto, el único feature que no da ningún valor agregado para analizar el problema es el Customer ID. Por ello, eliminaremos dicha variable. # + id="Cxl9pQR8t91c" outputId="fd956d38-cc5c-452d-b9d5-a839302d3863" colab={"base_uri": "https://localhost:8080/", "height": 419} df.drop(["CustomerID"], axis=1, inplace=True) df # + [markdown] id="kvldjHNE6QSG" # [EN] # # Before starting the analysis, we check for any null values. # # [ES] # # Antes de comenzar el análisis, verificamos si hay algún valor nulo. # + id="yaQXNB-svDRd" outputId="92c3f5f9-59bc-429d-fab6-14ade35d6f86" colab={"base_uri": "https://localhost:8080/"} df.isnull().sum().sum() # + [markdown] id="hqFOPAdut5KI" # # Exploratory Data Analysis # + [markdown] id="PHTLo-_n6dtS" # [EN] # # For the EDA process, we're going to focus on looking for obvious patterns that might exist among our features. Therefore, the following charts and plots will give us insight into the remaining features. # # [ES] # # Para el análisis exploratorio de datos, nos enfocaremos en buscar patrones obvios que existan entre nuestros features. 
Por consiguiente, las siguientes tablas y gráficas nos darán una perspectiva sobre las features restantes. # + id="b9VXNfXrbSVU" outputId="4f6b6cbf-c33c-4e94-de6f-9757868feb5e" colab={"base_uri": "https://localhost:8080/", "height": 297} df.describe() # + id="jgKq4pgWddeB" outputId="ffda7bd7-7ec5-4312-88e2-2bf7d0f8e000" colab={"base_uri": "https://localhost:8080/", "height": 717} df.hist(figsize=(12,12)) plt.show() # + [markdown] id="024OXTCZ-Vfl" # [EN] # # First observation: The numeric values don't strictly follow a Gaussian distribution. The Spending Score is the most evenly distributed variable. It's also an interesting feature to analyze, so we're going to use it as a dependent variable. # # [ES] # # Primera observación: Los valores numéricos no tienen estrictamente una distribución gaussiana. El Spending Score es la variable con la distribución más uniforme. Es también un feature interesante de analizar, por lo que lo usaremos como variable dependiente. # + id="DfzuCm8kCnTv" outputId="7ec5fa7f-2685-4ea4-a857-712fa979896c" colab={"base_uri": "https://localhost:8080/", "height": 571} sns.regplot(x=df["Age"], y=df["Spending Score (1-100)"]) # + id="udWa44rRCyHr" outputId="ab460089-1e82-43d8-b05a-2d9a4d37a316" colab={"base_uri": "https://localhost:8080/", "height": 571} sns.regplot(x=df["Annual Income (k$)"], y=df["Spending Score (1-100)"]) # + [markdown] id="Un4TXjmh_bNm" # [EN] # # Second observation: The scatter plots show no linear distribution among our data. There's no obvious pattern in the Age-Spending Score pair. However, there's certain symmetry in the Annual Income-Spending Score pair -- thus, we're going to look into it more closely. # # [ES] # # Segunda observación: Los gráficos de dispersión no muestran una distribución lineal en nuestros datos. No hay un patrón obvio en el par Age-Spending Score. Sin embargo, hay cierta simetría en el par Annual Income-Spending Score por lo que lo observaremos más de cerca. 
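The visual impressions from the scatter plots can be backed with numbers via a correlation matrix over the numeric columns. A sketch — the toy frame below stands in for `df`, whose values we don't reproduce here:

```python
import pandas as pd

# Toy stand-in for the mall dataframe: numeric columns only
toy = pd.DataFrame({
    "Age": [20, 35, 50, 65],
    "Annual Income (k$)": [15, 40, 70, 90],
    "Spending Score (1-100)": [80, 60, 40, 20],
})

# Pearson correlations between every pair of numeric features
corr = toy.corr(numeric_only=True)
# In this toy data the spending score decreases linearly with age,
# so the Age/Spending correlation comes out strongly negative
```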
# + [markdown] id="H-aly8hE9WN6" # [EN] # # Because Genre is a categorical value, we're going to use the get_dummies method to transform it into binary values through two columns. # # [ES] # # Como Género es un valor categórico, vamos a usar el método get_dummies para transformarlo en valores binarios por medio de dos columnas. # + id="1V03EN3eGjd7" df = pd.get_dummies(df, prefix=["Gender"], columns=["Genre"]) # + id="MB9Y67EEJcrd" outputId="279db2b5-0949-411b-9d76-92bb85c215bd" colab={"base_uri": "https://localhost:8080/", "height": 571} sns.regplot(x=df["Gender_Female"], y=df["Spending Score (1-100)"]) # + id="fEDOSVrzJtN9" outputId="9dc3417d-0631-481e-8ab0-22dc87e86a19" colab={"base_uri": "https://localhost:8080/", "height": 571} sns.regplot(x=df["Gender_Male"], y=df["Spending Score (1-100)"]) # + [markdown] id="Avjmv-BE919O" # [EN] # # Third observation: When using the scatter plot to observe any obvious distribution, we see no correlation. Thus, we're dropping this variable from further analysis. # # [ES] # # Tercera observación: Al usar una gráfica de dispersión para observar alguna distribución obvia, no se ve correlación alguna. Por lo que eliminaremos dicha variable de un análisis más extenso. # + [markdown] id="j0Z45fQFKpMP" # # Model Training # + [markdown] id="m_4QIGBdCZB9" # [EN] # # In order to train our model, we're going to use only the remaining three variables. First, we're going to show our data in a 3D space and then make the Elbow Analysis for our KMeans algorithm. # # [ES] # # Para entrenar nuestro modelo, vamos a usar las variables restantes. Primero, vamos a ubicar nuestros datos en un espacio 3D y luego hacer el Análisis de Codo para nuestro algoritmo K-means. 
# + id="LhDehG3UHwuX" outputId="00f2f12c-c195-4c6c-9042-ef7b24901924" colab={"base_uri": "https://localhost:8080/"} X = np.array(df[["Age","Annual Income (k$)", "Spending Score (1-100)"]]) X.shape # + id="C80loxVPfgTm" outputId="dc9304e6-3b66-476b-ebae-efa6b02b3985" colab={"base_uri": "https://localhost:8080/", "height": 696} fig = plt.figure() ax = Axes3D(fig) ax.scatter(X[:, 0], X[:, 1], X[:, 2], s=60) # + id="JUoI4ApbK6bZ" outputId="8c02ff29-558f-47ef-abe9-fdea4a78e192" colab={"base_uri": "https://localhost:8080/", "height": 571} Nc = range(1, 20) kmeans = [KMeans(n_clusters=i) for i in Nc] kmeans score = [kmeans[i].fit(X).score(X) for i in range(len(kmeans))] score plt.plot(Nc,score) plt.xlabel('Number of Clusters') plt.ylabel('Score') plt.title('Elbow Curve') plt.show() # + [markdown] id="n6W923n8DBF-" # [EN] # # As we can see, the Elbow Analysis suggests that we need 5 centroids (cluster means) to group our data well. # # [ES] # # Como podemos ver, el Análisis de Codo calcula que necesitamos 5 centroides (promedios de grupos) para agrupar bien nuestros datos. # + [markdown] id="zyMOQ0xADZ-K" # [EN] # # At this point it's important to plot our data again, but now with assigned color groups once our model is trained. # # Centroids are identified as stars. # # [ES] # # En este punto hay que ubicar nuestros datos distribuidos otra vez, pero ahora con grupos de colores asignados una vez que hayamos entrenado el modelo. # # Los centroides quedan identificados como estrellas. 
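Eyeballing the elbow curve can be complemented with the silhouette score, which peaks at a well-separated clustering. A sketch on synthetic blobs (not the mall data — the planted cluster count here is 3 by construction):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic data with 3 well-separated clusters
X_demo, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

# Silhouette score for each candidate number of clusters
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_demo)
    scores[k] = silhouette_score(X_demo, labels)

best_k = max(scores, key=scores.get)  # the k with the highest silhouette
```

Unlike the elbow, the silhouette gives a single number per `k`, so "pick the maximum" replaces the visual judgment of where the curve bends.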
# + id="lgGAA7aGhpgH" outputId="5392e2a9-099a-41d1-bbd9-1e187b4504d8" colab={"base_uri": "https://localhost:8080/"} kmeans = KMeans(n_clusters=5).fit(X) centroids = kmeans.cluster_centers_ print(centroids) # + id="fdvjTh4rh6nn" outputId="1b175cdf-47bc-4eb7-84d4-108783c031c0" colab={"base_uri": "https://localhost:8080/", "height": 696} # Predicting the clusters labels = kmeans.predict(X) # Getting the cluster centers C = kmeans.cluster_centers_ colors=['red','green','blue','cyan','yellow'] assigned=[] for row in labels: assigned.append(colors[row]) fig = plt.figure() ax = Axes3D(fig) ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=assigned,s=60) ax.scatter(C[:, 0], C[:, 1], C[:, 2], marker='*', c=colors, s=1000) # + [markdown] id="PovqKrmAD7Oz" # [EN] # # We've already trained our model. When we plot our data in a 3D space, only the green group seems to be clearly separated from the rest of the data points. # # [ES] # # Hemos entrenado ya nuestro modelo. Al ubicar nuestros datos en un espacio 3D, solo el grupo verde parece ser un grupo obviamente separado del resto de los puntos. # + [markdown] id="n7ZJUQa3lP-1" # # Model Evaluation # + [markdown] id="7OQZVTNmEiW_" # [EN] # # Because we haven't found any obvious distribution in the 3D space, we're going to evaluate our model by plotting our data on 2D spaces. # # [ES] # # Como no hemos encontrado una distribución obvia en el espacio 3D, vamos a evaluar nuestro modelo localizando nuestros datos en espacios 2D. 
# + id="6RT6QnunijT_" outputId="0e570cb4-2474-4ea3-ed3a-27c9f95d845f" colab={"base_uri": "https://localhost:8080/", "height": 537} f1 = df['Age'].values f2 = df['Annual Income (k$)'].values plt.scatter(f1, f2, c=assigned, s=70) plt.scatter(C[:, 0], C[:, 1], marker='*', c=colors, s=1000) plt.show() # + id="5KRqIZAxjhzc" outputId="7b4759e0-c114-4ed6-d4ad-2f33433b8071" colab={"base_uri": "https://localhost:8080/", "height": 537} f1 = df['Annual Income (k$)'].values f2 = df['Spending Score (1-100)'].values plt.scatter(f1, f2, c=assigned, s=70) plt.scatter(C[:, 1], C[:, 2], marker='*', c=colors, s=1000) plt.show() # + id="LBrI48hYj31T" outputId="97ea9454-5938-4075-ea97-de88fc37ee7a" colab={"base_uri": "https://localhost:8080/", "height": 537} f1 = df['Age'].values f2 = df['Spending Score (1-100)'].values plt.scatter(f1, f2, c=assigned, s=70) plt.scatter(C[:, 0], C[:, 2], marker='*', c=colors, s=1000) # Age is column 0 of C, Spending Score is column 2 plt.show() # + [markdown] id="rfEHj2RcFGfg" # [EN] # # The point of this last analysis is to dig deeper into the patterns our model found. In this case, Age isn't a relevant variable. Thus, in a hypothetical second iteration we might consider dropping this variable. # # We end our model evaluation by grouping our data by color groups. # # [ES] # # La importancia del anterior análisis es profundizar en los patrones encontrados en nuestro modelo. En este caso, Age no es una variable relevante. Por ende, en una segunda iteración hipotética deberíamos considerar borrar dicha variable. # # Terminamos nuestra evaluación de modelo agrupando nuestros datos por grupos de colores. 
# + id="iSu8BGjAlLSa" outputId="1eeb66a3-5fd2-425a-e63c-db22692f244c" colab={"base_uri": "https://localhost:8080/", "height": 204} copy = pd.DataFrame() copy['label'] = labels num_group = pd.DataFrame() num_group['Color']=colors num_group['Number']=copy.groupby('label').size() num_group # + [markdown] id="eEHNIc6Fna9K" # # Group Assignment and Conclusion # + [markdown] id="UjsUUqmTGZRL" # [EN] # # After training this model we have five customer groups. # # **Green**: **High-value customers**, because they have a high # spending score and high annual income. # # Cyan: Customers with a high spending score but low annual income. # # Red: Average customers. # # Blue: Customers with high annual income, but low spending score. # # Yellow: Dispensable customers due to a low annual income and low spending score. # # The final step is to assign the group of each customer to our original dataframe. # # [ES] # # Después de entrenar este modelo tenemos cinco grupos. # # **Verde**: Que son **clientes de alto valor** debido a que tienen un alto spending score y annual income. # # Cyan: Clientes con un spending score alto pero un annual income bajo. # # Rojo: Clientes promedio. # # Azul: Clientes con un annual income alto, pero bajo spending score. # # Amarillo: Clientes dispensables debido a un annual income y spending score bajos. # # El último paso es asignar el grupo a cada cliente en nuestro dataframe original. # + id="ZBQcRFtNker-" outputId="f79eed14-3dc5-44f2-db14-150ec13e868c" colab={"base_uri": "https://localhost:8080/", "height": 419} final_df = pd.read_csv('/content/drive/My Drive/Colab Notebooks/db/Mall_Customers.csv') final_df["Group"] = kmeans.predict(X) final_df
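Beyond counting members per color, each cluster can be profiled by averaging its features, which makes the business labels above (high-value, dispensable, ...) easy to read off. A sketch with a toy frame standing in for the labeled mall data:

```python
import pandas as pd

# Toy stand-in: features plus the cluster label predicted by KMeans
labeled = pd.DataFrame({
    "Age": [22, 25, 60, 58],
    "Annual Income (k$)": [80, 85, 20, 25],
    "Spending Score (1-100)": [90, 85, 10, 15],
    "Group": [0, 0, 1, 1],
})

# Mean feature value per cluster: a quick per-group business profile
profile = labeled.groupby("Group").mean()
# Group 0 here reads as young / high income / high spending; Group 1 the opposite
```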
Mall_Clients_Data_Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # Agenda # ============ # # * Self Introduction # * Aibo Introduction # * UDS (Unix Domain Socket) ROS Overview # * UDSROS Hands-On # * QA # # Who am I ? # ---------- # # <img src="images/Selfie.JPG" width="200"> <NAME> # # * 2008 - # * Embedded Firmware Engineer for ARM Platform # * MPEG Audio Encoder/Decoder Implementation (AAC, v1, v2) # * Audio Core Algorithm Development # * 2012 - # * Unix Operating System # * Coherent Memory Interface beyond Node Instances # * Userspace Message Queue Framework # * Virtualization / Hypervisor # * 2016 - Sony Corporation # * System Software Engineer / Architect # * Robotics Product Development # * ROS/ROS2 System Engineer # # Why ROS ? # --------- # # * Common Software Framework for robotics and autonomous driving development # * Community activity is brisk # * Useful Tools for debug/build/documentation # * A lot of packages ready for robots # * Newest and Modern Software Framework and Architecture # # ROS Related Work # ---------------- # [![ROSCon2018 aibo](http://img.youtube.com/vi/twxWZKseo2M/0.jpg)](https://www.youtube.com/watch?reload=9&v=twxWZKseo2M) # # Many Thanks # ----------- # ### Gracias # ### Thank you # ### 谢谢 # ### Danke # ### 감사 합니다 # ### ありがとう # # Next # ---- # [Aibo Introduction](2-Aibo_Introductin.ipynb)
notebook_examples/sony/1-Overview.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # argv: # - python # - -m # - ipykernel_launcher # - -f # - '{connection_file}' # display_name: Python 3 # env: null # interrupt_mode: signal # language: python # metadata: null # name: python3 # --- import pandas as pd import matplotlib.pyplot as plt plt.style.use('dark_background') import seaborn as sns print(f'seaborn version: {sns.__version__}') # wine dataset from sklearn.datasets import load_wine data = load_wine() print(type(data)) wdf = pd.read_csv('https://raw.githubusercontent.com/insaid2018/Term-1/master/Data/Projects/winequality.csv') print(wdf.head()) print(wdf.dtypes) print(wdf.shape) # + wdf['overall'] = wdf['quality'].apply(lambda x : 'Poor' if x < 5 else 'Medium' if x < 8 else 'Good' ) wdf.overall = wdf.overall.astype('category') wdf.overall.value_counts() # - sns.scatterplot(x=wdf.sulphates, y=wdf.chlorides,hue=wdf.overall.cat.codes, palette="Set2") #not very good for clustering # + from sklearn.datasets import make_blobs import numpy as np X, y = make_blobs(n_samples=500, centers=3, n_features=2,random_state=0) print(X.shape) print(y.shape, np.unique(y, return_counts=True)) # - sns.scatterplot(x=X[:,0], y=X[:,1],hue=y, palette="Set2") # + X, y = make_blobs(n_samples=500, centers=[(0,2), (2,4), (-2,-2)], n_features=3,random_state=0) sns.scatterplot(x=X[:,0], y=X[:,1],hue=y, palette="Set2") # - X, y = make_blobs(n_samples=400, centers=4, n_features=4, cluster_std=0.60, random_state=0) sns.scatterplot(x=X[:,0], y=X[:,1],hue=y, palette="Set2") print(X.shape, y.shape) d = np.stack((X,y[:,np.newaxis])) print(d.shape) d = {'x1' : X[:,0],'x2': X[:,1],'label':y} df = pd.DataFrame(data=d) print(df.head()) df.to_csv("test.csv") # !head test.csv i = 7 # + from sklearn.datasets import make_blobs import numpy as np import pandas as pd for i in range(10): std = .5+i*.04 X, y = make_blobs(n_samples=400, 
centers=4, n_features=4, cluster_std=0.60, random_state=i) label = (y > 1).astype(np.int8) d = {'x1' : X[:,0],'x2': X[:,1], 'x3': X[:,2],'label': label,'raw':y} df = pd.DataFrame(data=d) fn = "clustering_data_{:02.0f}.csv".format(i) df.to_csv(fn) # - # !ls # !head clustering_data_00.csv from sklearn.datasets import load_iris iris = load_iris() iris.target data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target']) data1.head() # + from sklearn.datasets import load_breast_cancer bc = load_breast_cancer() dir(bc) print(bc['data'].dtype) np.c_[bc['data'], bc['target']] [*bc['feature_names'], 'target'] # - data2 = pd.DataFrame(data= np.c_[bc['data'], bc['target']], columns= [*bc['feature_names'], 'target']) data2.head()
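The `np.c_` helper used above concatenates arrays column-wise, which is what lets a 1-D target ride along a 2-D feature matrix before building the DataFrame. A minimal check:

```python
import numpy as np

features = np.arange(6).reshape(3, 2)   # shape (3, 2)
target = np.array([0, 1, 0])            # shape (3,)

# np.c_ stacks along the second axis, promoting the 1-D target to a column
combined = np.c_[features, target]
# combined has shape (3, 3); the last column is the target
```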
source/lesson03/datasets.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/nazmul-karim170/cs230-code-examples/blob/master/Homework_1_Machine_Learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="u5RXinm7wv4y" colab_type="text" # **First, we import the necessary Python packages that we need for our homework.** # + id="6HKA2JZ9u_Y6" colab_type="code" colab={} import numpy as np from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense # + [markdown] id="m2rDnkhaxeio" colab_type="text" # **Then we load the data and normalize it** # + id="XSSgDFfpxdf8" colab_type="code" outputId="02786c30-a3de-4bf2-a752-6d44f5342cd3" colab={"base_uri": "https://localhost:8080/", "height": 51} (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.reshape(x_train.shape[0], 28*28)/255 x_test = x_test.reshape(x_test.shape[0], 28*28)/255 # + [markdown] id="V6EK3yt3z1CV" colab_type="text" # **Here are some activation and loss functions we are going to need for solving our problems. I am putting them all here together.** # + id="cr0bAZAw0d-c" colab_type="code" colab={} def sigmoid(z): return 1/(1+ np.exp(-z)) ## differentiation of sigmoid def sigmoid_dif(x): return sigmoid(x)*(1-sigmoid(x)) ## softmax function def softmax(z): z -= np.max(z) sm = (np.exp(z).T/ np.sum(np.exp(z), axis=1)).T return sm #### mean square error loss def mse_loss(y, y_hat): return 0.5*np.mean((y_hat-y)**2) ## binary cross entropy loss def logistic_loss(y, y_hat): return -np.mean(y* np.log(y_hat) + (1-y) * np.log(1-y_hat)) ## categorical cross entropy loss def CCE_loss(y, y_hat): return -np.mean(y * np.log(y_hat)) # + [markdown] id="6BKOBIBC2Our" colab_type="text" # **We define the minibatch size and number of epochs here for our training.** # + id="Rmp793eI2jmA" colab_type="code" colab={} minibatch_size = 40 n_epochs = 20 # + [markdown] id="iwkPmmEf3GHk" colab_type="text" # **Problem 1: Designing ten classifiers using logistic regression with the mean square error loss function** Here, we have a separate classifier for each class. After training, each classifier has its own parameters. 
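Before training, the helper functions defined above can be sanity-checked numerically: softmax rows should sum to one, and sigmoid should satisfy sigma(x) + sigma(-x) = 1. A small self-contained check restating the same definitions:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    z = z - np.max(z)                       # subtract max for numerical stability
    return (np.exp(z).T / np.sum(np.exp(z), axis=1)).T

probs = softmax(np.array([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]]))
row_sums = probs.sum(axis=1)                # each row is a probability distribution

sym = sigmoid(np.array(2.0)) + sigmoid(np.array(-2.0))  # should equal 1 exactly
```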
# + id="63ac8_H93V_Y" colab_type="code" outputId="a8af40b7-e2f4-4ead-eb81-e79b7b78449b" colab={"base_uri": "https://localhost:8080/", "height": 187} ##Forward propagation caculation def forward(x,W,b): Z = np.dot(x,W) + b A = sigmoid(Z) return A ##Learning rate lr = 0.05 ##Define the number of features and classes n_feature = 28*28 #input size(could be whole image or just the extracted features) n_class = 1 #output size ##Accumulated_Loss loss_a =[] ##stacking up all the parameters of all classifiers model_parameters_w = [] model_parameters_b = [] ## We do it for ten classes that's why we are using the for loop for label in range(10): ## Weight_initialization W = np.random.randn(n_feature) b = np.random.randn(n_class) ### TRAINING PHASE for epoch in range(n_epochs): ##Shuffling the dataset for each epoch m= np.random.permutation(x_train.shape[0]) x_train = x_train[m] y_train = y_train[m] for i in range(int(x_train.shape[0]/ minibatch_size)): #minibatch_training x_train_m = x_train[i:i+minibatch_size] y_train_m = y_train[i:i+minibatch_size] ##get the value A = forward(x_train_m, W, b) ## Create probability distribution of true label y = y_train_m==label ## Mean Square loss loss = mse_loss(y, A) ## gradient calculation dz = sigmoid_dif(A)*(A - y)/minibatch_size ## Update of W and b dw = np.matmul(x_train_m.T,dz) db = np.sum(dz, axis = 0) W = W - lr*dw b = b - lr*db loss_a.append(loss) ###TESTING PHASE y_p = forward(x_test, W,b)>=0.5 y_test1 = y_test == label accs = np.sum(y_p == y_test1)/y_test.shape[0] print('The accuracy of the classifier designed for class label {} is:: {}'.format(label,accs)) model_parameters_w.append(W) model_parameters_b.append(b) # + [markdown] id="guGCCGbd87Ft" colab_type="text" # Now we take the parameters of all the classifiers and evaluate the accuracy by using argmax operation. 
# + id="962pmOP286Hd" colab_type="code" outputId="4d3b6d65-86d4-4db4-b15a-ddcee885060a" colab={"base_uri": "https://localhost:8080/", "height": 34} ## classification accuracy if we make a joint classifier for all classes Y_P = np.zeros((x_test.shape[0], 10)) for label in range(10): Y_P[:,label] = np.squeeze(forward(x_test,model_parameters_w[label], model_parameters_b[label])) index = np.argmax(Y_P, axis = 1) acc = np.sum(index == y_test)/y_test.shape[0] print ("The overall accuracy is now: %f" %acc) # + [markdown] id="wHOhHb7BDoO5" colab_type="text" # **Problem 2: Designing ten classifier using logistic regression binary cross entropy loss function** Here, we do the same thing as problem 1 just use different loss function. the code are almost same. # + id="cXKAQ86_DxWH" colab_type="code" outputId="1606c21c-900a-4ae0-fadc-e91822d529a7" colab={"base_uri": "https://localhost:8080/", "height": 187} ##Forward propagation caculation def forward(x,W,b): Z = np.dot(x,W) + b A = sigmoid(Z) return A ##Learning rate lr = 0.005 ##Define the number of features and classes n_feature = 28*28 #input size(could be whole image or just the extracted features) n_class = 1 #output size ##Accumulated_Loss loss_a =[] ##stacking up all the parameters of all classifiers model_parameters_w = [] model_parameters_b = [] ## We do it for ten classes that's why we are using the for loop for label in range(10): ## Weight_initialization W = np.random.randn(n_feature) b = np.random.randn(n_class) ### TRAINING PHASE for epoch in range(n_epochs): ##Shuffling the dataset for each epoch m= np.random.permutation(x_train.shape[0]) x_train = x_train[m] y_train = y_train[m] for i in range(int(x_train.shape[0]/ minibatch_size)): #minibatch_training x_train_m = x_train[i:i+minibatch_size] y_train_m = y_train[i:i+minibatch_size] ##get the value A = forward(x_train_m, W, b) ## Create probability distribution of true label y = y_train_m==label ## Binary Cross Entropy loss loss = logistic_loss(y, A) ## gradient 
calculation dz = (A - y)/minibatch_size ## Update of W and b dw = np.matmul(x_train_m.T,dz) db = np.sum(dz, axis = 0) W = W - lr*dw b = b - lr*db loss_a.append(loss) ###TESTING PHASE y_p = forward(x_test, W,b)>=0.5 y_test1 = y_test == label accs = np.sum(y_p == y_test1)/y_test.shape[0] print('The accuracy of the classifier designed for class label {} is:: {}'.format(label,accs)) model_parameters_w.append(W) model_parameters_b.append(b) # + [markdown] id="v1VC4cuFHJeI" colab_type="text" # Now we take the parameters of all the classifiers and evaluate the accuracy by using argmax operation. # + id="I-gZdw82HJ6_" colab_type="code" outputId="b7e28de0-f19b-4d54-a723-c1bbaa7de861" colab={"base_uri": "https://localhost:8080/", "height": 34} ## classification accuracy if we make a joint classifier for all classes Y_P = np.zeros((x_test.shape[0], 10)) for label in range(10): Y_P[:,label] = np.squeeze(forward(x_test,model_parameters_w[label], model_parameters_b[label])) index = np.argmax(Y_P, axis = 1) acc = np.sum(index == y_test)/y_test.shape[0] print ("The overall accuracy is now: %f" %acc) # + [markdown] id="Hybs4-4EI07g" colab_type="text" # **Problelm 3: A classifier using softmax regression ** # + id="1W12e8qxJSPL" colab_type="code" outputId="06247f4b-516c-4288-bcb7-c764ec7f2794" colab={"base_uri": "https://localhost:8080/", "height": 34} ##Learning rate lr = 0.05 ##Define the number of features and classes n_feature = 28*28 #input size(could be whole image or just the extracted features) n_class = 10 # output size # Weight_initialization W = np.random.randn(n_feature, n_class) b = np.random.randn(1,n_class) ## Create probability distribution of true label ##One hot encoding y_train1 = np.zeros((60000,n_class)) y_train = np.array(y_train) y_train1[np.arange(60000), y_train] = 1 ### TRAINING PHASE for epoch in range(n_epochs): ##Shuffling m= np.random.permutation(x_train.shape[0]) x_train = x_train[m] y_train1 = y_train1[m] for i in range(int(x_train.shape[0]/ 
minibatch_size)): #minibatch_training x_train_m = x_train[i:i+minibatch_size] y_train_m = y_train1[i:i+minibatch_size] A = forward(x_train_m, W, b) ## Categorical Cross Entropy loss loss = CCE_loss(y_train_m, A) ## gradient calculation dz = (A - y_train_m)/minibatch_size #if we vectorize the whole thing our gradient function look like this ## Update of W and b dw = np.matmul(x_train_m.T,dz) db = np.sum(dz, axis = 0) W = W - lr*dw b = b - lr*db ###TESTING PHASE y_p = np.argmax(forward(x_test, W,b),axis = 1) accs = np.sum(y_p == y_test)/y_test.shape[0] print("The accuracy for MNIST dataset using softmax is:", accs) # + [markdown] id="ybq1DwCqK-ml" colab_type="text" # **Problem 4: Implemeting problem 3 with keras** # + id="u1b2ZrO-LGMU" colab_type="code" outputId="343303db-6678-480e-b1d4-92a4429aba3f" colab={"base_uri": "https://localhost:8080/", "height": 408} import numpy as np from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense ##Load the data (x_train, y_train),(x_test, y_test) = mnist.load_data() # Normalize the input x_train = x_train.reshape(x_train.shape[0], 28*28)/255 x_test = x_test.reshape(x_test.shape[0], 28*28)/255 n_feature = 28*28 # input size (could be whole image or just the extracted features) n_class = 10 # output size #One hot encoding y_train1 = np.zeros((60000,n_class)) y_train = np.array(y_train) y_train1[np.arange(60000), y_train] = 1 y_test1 = np.zeros((10000,n_class)) y_test = np.array(y_test) y_test1[np.arange(10000), y_test] = 1 #Neural Network input size input_size = x_train.shape[1] # Using a dense layer with input and output size matched model = Sequential() model.add(Dense(output_dim = n_class, activation = 'softmax', input_dim = input_size)) ## Training and testing phase model.compile(loss='categorical_crossentropy', optimizer = 'sgd', metrics=['accuracy']) model.fit(x_train, y_train1, epochs=10, batch_size = 40) model.evaluate(x_test, y_test1) # + [markdown] id="NTShiektN857" 
colab_type="text" # **Problem5 : Using DFS for new feature(It takes hour to run)** # + id="yyhXtJYEdNeV" colab_type="code" outputId="2b1f90fa-9276-41b7-b172-11599cfc98bb" colab={"base_uri": "https://localhost:8080/", "height": 408} import numpy as np from keras.models import Sequential from keras.layers import Dense from keras.datasets import mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train1 = np.round(x_train/255) x_test1 = np.round(x_test/255) def is_white(image): a = image == 0 return a def get_adjacent(n): x,y = n return [(x-1,y),(x,y-1),(x+1,y),(x,y+1)] ## DFS algorithm for for finding the number of connected components def DFS(start, end, pixels): stack = [start] visited = [] ls = np.ndindex((28,28)) count =0 k = 0 s = start while s != end: if not is_white(pixels[s]): ##if it is not white, append it to the visited list visited.append(s) if s not in visited: ## start from a white pixel stack.append(s) while len(stack) != 0: ## length of the stack should be empty at the end of traversing a connected region x = stack[-1] if is_white(pixels[x]): pixels[x] = 1 ## if it is white and you visited it, make it black pixel visited.append(x) remove_from_stack = True for adjacent in get_adjacent(x): if adjacent in np.ndindex((28,28)): ## adjacent pixel should be inside the image if is_white(pixels[adjacent]): pixels[adjacent] = 1 stack.append(adjacent) visited.append(adjacent) remove_from_stack = False if remove_from_stack: ## returning to the parent node stack.pop() count += 1 k += 1 y = next(ls) if y != end: s = y else: s=start if k == pixels.shape[0]* pixels.shape[1]: s = end return count ## storing up the information from DFS count_train = np.zeros((x_train.shape[0],1)) count_test = np.zeros((x_test.shape[0],1)) ##RUN DFS for training dataset for i in range(x_train.shape[0]): count_train[i,0] = DFS((27,27), (0,0), np.reshape(x_train1[i], [28,28])) x_train_dfs = np.append(np.reshape(x_train, 28*28)/255, count_train/3,axis =1) ## RUN DFS for test 
dataset for i in range(x_test.shape[0]): count_test[i,0] = DFS((27,27), (0,0), np.reshape(x_test1[i], [28,28]) ) x_test_dfs = np.append(np.reshape(x_test, (x_test.shape[0], 28*28))/255, count_test/3, axis =1) #output size n_class = 10 #One hot encoding y_train1 = np.zeros((60000,n_class)) y_train = np.array(y_train) y_train1[np.arange(60000), y_train] = 1 y_test1 = np.zeros((10000,n_class)) y_test = np.array(y_test) y_test1[np.arange(10000), y_test] = 1 #Neural Network input size input_size = x_train_dfs.shape[1] # Using a dense layer with input and output size matched model = Sequential() model.add(Dense(output_dim = n_class, activation = 'softmax', input_dim = input_size)) model.compile(loss='categorical_crossentropy', optimizer = 'sgd', metrics=['accuracy']) model.fit(x_train_dfs, y_train1, epochs=100, batch_size = 40) model.evaluate(x_test_dfs, y_test1)
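The vectorized update used above (`dz = (A - y)/minibatch_size`, then `dw = X.T @ dz`) can be checked in isolation. The sketch below uses minimal stand-ins for the notebook's `forward` and `CCE_loss` helpers (assumptions, since their definitions fall outside this excerpt) and verifies on random data that one SGD step lowers the loss.

```python
import numpy as np

def forward(x, W, b):
    # softmax over a linear layer; each output row sums to one
    z = x @ W + b
    z -= z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def CCE_loss(y, A, eps=1e-12):
    # mean categorical cross-entropy over the minibatch
    return -np.mean(np.sum(y * np.log(A + eps), axis=1))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))            # minibatch of 8 samples, 4 features
y = np.eye(3)[rng.integers(0, 3, 8)]   # one-hot labels, 3 classes
W, b = np.zeros((4, 3)), np.zeros(3)

A = forward(x, W, b)
dz = (A - y) / x.shape[0]              # gradient of the loss w.r.t. the logits
dw, db = x.T @ dz, dz.sum(axis=0)
W, b = W - 0.1 * dw, b - 0.1 * db      # one SGD step with lr = 0.1
assert CCE_loss(y, forward(x, W, b)) < CCE_loss(y, A)  # loss decreased
```

The same three lines (`dz`, `dw`/`db`, parameter update) are what the minibatch loop above repeats for every slice of the training set.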
Homework_1_Machine_Learning.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # # Ising models: illustration of hill climbing and simulated annealing # # *Selected Topics in Mathematical Optimization* # # *2016-2017* # # **<NAME>** from ising import * import matplotlib.pyplot as plt # %matplotlib inline # ## Ising models # # - discrete mathematical model of ferromagnetism # - described by an $N$-dimensional state $\mathbf{x}$ # - two possible states of each component: $x_i \in\{-1, 1\}$ # - short-range coupling between neighbouring components using $J_{ij}$ # - (optionally) applying a field $H$ # ## Energy of a state # # $$ # E(\mathbf{x}) = -\left[\frac{1}{2} \sum_{i,j}J_{ij} x_i x_j + \sum_iH_i x_i\right] # $$ # with # - $J_{ij}$ the coupling between components, $J_{ij}=J$ if $x_i$ and $x_j$ are neighbours and $J_{ij}=0$ otherwise # - $H_i$ the field applied on component $x_i$ (often set to zero) # # The system tries to minimize this energy. # ## Rectangular Ising models # # Components are ordered on a rectangular lattice. # # ![The Von Neumann neighbourhood](Figures/ising_neighb.jpg) # # We use the Von Neumann neighbourhood (four neighbours that can be reached in a single step). 
# Either: # # - $J=1$: neighbouring magnets like to align # - $J=-1$: neighbouring magnets like to have the opposite alignment # ## Simulations # random 50 x 50 grid x0 = random_ising((50, 50)) Jij = 1 #Jij = -1 H = None # no field #H = np.random.randn(*x0.shape) / 2 # random field #H = np.ones(x0.shape) / 2; H[:,25:] = -1/2 # half half def plot_state(x, ax, Jij=Jij, H=H): """ Plots the state with the energy """ ax.imshow(x, interpolation='nearest') ax.set_title('Energy = {}'.format(ising_energy(x, Jij, H))) fig, ax = plt.subplots() plot_state(x0, ax) # ### Hill climbing # %%time x_hc, energies_hc = hill_climbing_ising(x0, Jij, H) plt.plot(energies_hc) fig, ax = plt.subplots() plot_state(x_hc, ax) # ### Simulated annealing # %%time x_sa, energies_sa = simulated_annealing_ising(x0, Jij=Jij, H=H, Tmax=100, Tmin=0.01, r=0.7, kT=1000) plt.plot(energies_sa) fig, ax = plt.subplots() plot_state(x_sa, ax) # ### Comparison of methods # + fig, (ax0, ax1, ax2) = plt.subplots(ncols=3) plot_state(x0, ax0) plot_state(x_hc, ax1) plot_state(x_sa, ax2) # -
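The `ising` helper module used above is not shown, but the energy formula is easy to sketch directly. The version below assumes periodic boundary conditions (via `np.roll`) and a uniform coupling `J`, which may differ from the module's actual implementation; it is only meant to make the formula concrete.

```python
import numpy as np

def ising_energy(x, J=1, H=None):
    # E(x) = -[ (J/2) * sum over neighbour pairs x_i x_j + sum_i H_i x_i ];
    # the factor 1/2 corrects for counting every neighbour pair twice
    neigh = (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0) +
             np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1))
    E = -0.5 * J * np.sum(x * neigh)
    if H is not None:
        E -= np.sum(H * x)
    return E

aligned = np.ones((50, 50))   # fully aligned grid: the ground state for J = 1
print(ising_energy(aligned))  # -> -5000.0 (2 bonds per site * 2500 sites, each -J)
```

Flipping any spin in the aligned state raises the energy, which is why hill climbing alone already finds large aligned domains for `J = 1`.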
Chapters/Old/06.Bioinspired/ising_demonstration.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import numpy as np import pandas as pd import tensorflow as tf # + pycharm={"name": "#%%\n"} # Init data path root_dir = '~/Data/OSIC' train_csv_path = os.path.join(root_dir, 'train.csv') test_csv_path = os.path.join(root_dir, 'test.csv') img_dir = os.path.join(root_dir, 'preprocessing_data') # + pycharm={"name": "#%%\n"} # Read CSV train_csv = pd.read_csv(train_csv_path) test_csv = pd.read_csv(test_csv_path) train_csv.head() # + pycharm={"name": "#%%\n"} # Preprocessing # + pycharm={"name": "#%%\n"} # Dataset class Dataset: label_col_name = 'FVC' root_dir = '/Users/younghun/Data/OSIC/' # root_dir = '~/Data/OSIC' def __init__(self, data_list, epoch=1, batch_size=10): # init labels label_list = data_list[self.label_col_name].to_numpy() self.patient_list = data_list['Patient'].to_numpy() data_list.drop([self.label_col_name, 'Patient'], axis=1, inplace=True) data_list = pd.get_dummies(data_list) # init dataset self.dataset = tf.data.Dataset.from_tensor_slices((data_list, label_list, np.arange(len(self.patient_list)))) self.dataset = self.dataset.map(lambda data, label, index: tf.py_function(self.read_img, [data, label, index], [tf.float64, tf.float64, tf.int64])) self.dataset = self.dataset.repeat(epoch) # self.dataset = self.dataset.shuffle(buffer_size=(int(len(data_list) * 0.4) + 3 * batch_size)) self.dataset = self.dataset.batch(batch_size, drop_remainder=False) def __iter__(self): return self.dataset.__iter__() def read_img(self, data, label, index: tf.Tensor): img_path = os.path.join(self.root_dir, 'preprocessing_data', f'{self.patient_list[index]}.npy') img = np.load(img_path) img.resize((1, 38, 334, 334)) return img, data, label # + pycharm={"name": "#%%\n"} from tensorflow.keras import Model from tensorflow.keras.layers import Dense, 
Flatten, Conv3D class PFPModel(Model): def __init__(self): super().__init__() self.build_model() def build_model(self): self.conv1 = tf.keras.Sequential([ Conv3D(filters=200, kernel_size=3, padding='same', activation='relu'), Conv3D(filters=100, kernel_size=3, padding='same', activation='relu'), Conv3D(filters=100, kernel_size=3, padding='same', activation='relu'), Conv3D(filters=50, kernel_size=3, padding='same', activation='relu'), Flatten(), ]) self.fc = tf.keras.Sequential([ Dense(500, activation='relu'), Dense(100, activation='relu'), Dense(1) ]) def fit(self, dataset, epoch_num=100): # compile self.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.MSE, metrics=['mse']) for step, (img, x, y) in enumerate(dataset): y = tf.cast(y, tf.float32) with tf.GradientTape() as tape: output = self.call((img, x)) loss = self.loss(output, y) gradients = tape.gradient(loss, self.trainable_variables) self.optimizer.apply_gradients(zip(gradients, self.trainable_variables)) print('STEP:', step, np.mean(loss.numpy())) def call(self, inputs, *args, **kwargs): """ inputs: {imgs: [], info: []} """ imgs = inputs[0] info = inputs[1] imgs = tf.cast(imgs, float) info = tf.cast(info, float) conv_out = self.conv1(imgs) info = tf.concat((conv_out, info), axis=1) out = self.fc(info) return out # + pycharm={"name": "#%%\n"} dataset = Dataset(train_csv) # load model model = PFPModel() model.fit(dataset)
train.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import graphlab import matplotlib.pyplot as plt import numpy as np from scipy.sparse import csr_matrix import time from copy import copy # %matplotlib inline def norm(x): sum_sq=x.dot(x.T) norm=np.sqrt(sum_sq) return norm # - wiki = graphlab.SFrame('people_wiki.gl/') print wiki wiki=wiki.add_row_number() print wiki wiki['tf_idf']=graphlab.text_analytics.tf_idf(wiki['text']) print wiki def sframe_to_scipy(column): x = graphlab.SFrame({'X1':column}) x=x.add_row_number() x=x.stack('X1',['feature','value']) f=graphlab.feature_engineering.OneHotEncoder(features=['feature']) f.fit(x) x=f.transform(x) mapping=f['feature_encoding'] x['feature_id']=x['encoded_features'].dict_keys().apply(lambda x: x[0]) i=np.array(x['id']) j=np.array(x['feature_id']) v=np.array(x['value']) width=x['id'].max()+1 height= x['feature_id'].max()+1 mat=csr_matrix((v,(i,j)),shape=(width,height)) return mat,mapping start=time.time() corpus,mapping = sframe_to_scipy(wiki['tf_idf']) end=time.time() print end-start corpus assert corpus.shape==(59071,547979) print "working" def generate_random_vectors(num_vector,dim): return np.random.randn(dim,num_vector) np.random.seed(0) generate_random_vectors(3,5) np.random.seed(0) random_vectors=generate_random_vectors(16,547979) random_vectors.shape doc = corpus[0,:] np.array(doc.dot(random_vectors)>=0,dtype=int) np.array(corpus.dot(random_vectors)>=0,dtype=int).shape index_bits=(doc.dot(random_vectors)>=0) print index_bits powers_of_two = (1<<np.arange(15,-1,-1)) powers_of_two index_bits=corpus.dot(random_vectors)>=0 index_bits.dot(powers_of_two) def train_lsh(data,num_vector=16,seed=None): dim=data.shape[1] if seed is not None: np.random.seed(seed) random_vectors=generate_random_vectors(num_vector,dim) 
powers_of_two=1<<np.arange(num_vector-1,-1,-1) table={} bin_index_bits = (data.dot(random_vectors)>=0) bin_indices = bin_index_bits.dot(powers_of_two) for data_index,bin_index in enumerate(bin_indices): if bin_index not in table: table[bin_index]=list() doc_ids=table[bin_index] doc_ids.append(data_index) table[bin_index]=doc_ids model = { 'data':data, 'bin_index_bits':bin_index_bits, 'bin_indices':bin_indices, 'table':table, 'random_vectors':random_vectors, 'num_vector':num_vector } return model # + model = train_lsh(corpus,num_vector=16,seed=143) table=model['table'] # - if 0 in table and table[0] == [39583] and \ 143 in table and table[143] == [19693, 28277, 29776, 30399]: print 'Passed!' else: print 'Check your code.' obama=wiki[wiki['name'] == '<NAME>'] obama model['bin_indices'][35817] biden=wiki[wiki['name'] == '<NAME>'] biden # + b = model['bin_index_bits'][24478] o = model['bin_index_bits'][35817] print np.array(b==o,dtype=int).sum() # - print np.array(model['bin_index_bits'][22745], dtype=int) # list of 0/1's print model['bin_indices'][22745] # integer format model['bin_index_bits'][35817] == model['bin_index_bits'][22745] model['table'][model['bin_indices'][35817]] # + doc_ids = list(model['table'][model['bin_indices'][35817]]) doc_ids.remove(35817) # display documents other than Obama docs = wiki.filter_by(values=doc_ids, column_name='id') # filter by id column docs # + def cosine_distance(x, y): xy = x.dot(y.T) dist = xy/(norm(x)*norm(y)) return 1-dist[0,0] obama_tf_idf = corpus[35817,:] biden_tf_idf = corpus[24478,:] print '================= Cosine distance from Barack Obama' print 'Barack Obama - {0:24s}: {1:f}'.format('<NAME>', cosine_distance(obama_tf_idf, biden_tf_idf)) for doc_id in doc_ids: doc_tf_idf = corpus[doc_id,:] print '<NAME> - {0:24s}: {1:f}'.format(wiki[doc_id]['name'], cosine_distance(obama_tf_idf, doc_tf_idf)) # - from itertools import combinations # + num_vector=16 search_radius=3 for diff in 
combinations(range(num_vector),search_radius): print diff # - def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()): """ For a given query vector and trained LSH model, return all candidate neighbors for the query among all bins within the given search radius. Example usage ------------- >>> model = train_lsh(corpus, num_vector=16, seed=143) >>> q = model['bin_index_bits'][0] # vector for the first document >>> candidates = search_nearby_bins(q, model['table']) """ num_vector = len(query_bin_bits) powers_of_two = 1 << np.arange(num_vector-1, -1, -1) # Allow the user to provide an initial set of candidates. candidate_set = copy(initial_candidates) for different_bits in combinations(range(num_vector), search_radius): # Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector. ## Hint: you can iterate over a tuple like a list alternate_bits = copy(query_bin_bits) for i in different_bits: alternate_bits[i] = 1-alternate_bits[i] # Convert the new bit vector to an integer index nearby_bin = alternate_bits.dot(powers_of_two) # Fetch the list of documents belonging to the bin indexed by the new bit vector. # Then add those documents to candidate_set # Make sure that the bin exists in the table! 
# Hint: update() method for sets lets you add an entire list to the set if nearby_bin in table: candidate_set.update(table[nearby_bin]) return candidate_set obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0) if candidate_set == set([35817, 21426, 53937, 39426, 50261]): print 'Passed test' else: print 'Check your code' print 'List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261' candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set) if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547, 23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676, 19699, 2804, 20347]): print 'Passed test' else: print 'Check your code' from sklearn.metrics.pairwise import pairwise_distances def query(vec, model, k, max_search_radius): data = model['data'] table = model['table'] random_vectors = model['random_vectors'] num_vector = random_vectors.shape[1] # Compute bin index for the query vector, in bit representation. 
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten() # Search nearby bins and collect candidates candidate_set = set() for search_radius in xrange(max_search_radius+1): candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set) # Sort candidates by their true distances from the query nearest_neighbors = graphlab.SFrame({'id':candidate_set}) candidates = data[np.array(list(candidate_set)),:] nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten() return nearest_neighbors.topk('distance', k, reverse=True), len(candidate_set) query(corpus[35817,:], model, k=10, max_search_radius=3) query(corpus[35817,:], model, k=10, max_search_radius=3)[0].join(wiki[['id', 'name']], on='id').sort('distance') wiki[wiki['name']=='<NAME>'] # + num_candidates_history = [] query_time_history = [] max_distance_from_query_history = [] min_distance_from_query_history = [] average_distance_from_query_history = [] for max_search_radius in xrange(17): start=time.time() result, num_candidates = query(corpus[35817,:], model, k=10, max_search_radius=max_search_radius) end=time.time() query_time = end-start print 'Radius:', max_search_radius print result.join(wiki[['id', 'name']], on='id').sort('distance') average_distance_from_query = result['distance'][1:].mean() print average_distance_from_query max_distance_from_query = result['distance'][1:].max() min_distance_from_query = result['distance'][1:].min() num_candidates_history.append(num_candidates) query_time_history.append(query_time) average_distance_from_query_history.append(average_distance_from_query) max_distance_from_query_history.append(max_distance_from_query) min_distance_from_query_history.append(min_distance_from_query) # + plt.figure(figsize=(7,4.5)) plt.plot(num_candidates_history, linewidth=4) plt.xlabel('Search radius') plt.ylabel('# of documents searched') plt.rcParams.update({'font.size':16}) plt.tight_layout() 
plt.figure(figsize=(7,4.5)) plt.plot(query_time_history, linewidth=4) plt.xlabel('Search radius') plt.ylabel('Query time (seconds)') plt.rcParams.update({'font.size':16}) plt.tight_layout() plt.figure(figsize=(7,4.5)) plt.plot(average_distance_from_query_history, linewidth=4, label='Average of 10 neighbors') plt.plot(max_distance_from_query_history, linewidth=4, label='Farthest of 10 neighbors') plt.plot(min_distance_from_query_history, linewidth=4, label='Closest of 10 neighbors') plt.xlabel('Search radius') plt.ylabel('Cosine distance of neighbors') plt.legend(loc='best', prop={'size':15}) plt.rcParams.update({'font.size':16}) plt.tight_layout() # -
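The bit-flipping scheme inside `search_nearby_bins` can be distilled into a few lines (written for Python 3, unlike the notebook's Python 2 kernel). `bin_index` packs the sign pattern of the projections onto the random hyperplanes into an integer, exactly as `train_lsh` does with `powers_of_two`; `nearby_bins` enumerates every bin reachable by flipping exactly `radius` bits, which is the set of bins a wider search radius adds to the candidate pool.

```python
import numpy as np
from itertools import combinations

def bin_index(vec, random_vectors):
    # sign of the projection onto each hyperplane, packed into one integer
    bits = (vec @ random_vectors >= 0).astype(int)
    powers_of_two = 1 << np.arange(len(bits) - 1, -1, -1)
    return bits, int(bits @ powers_of_two)

def nearby_bins(bits, radius):
    # all bin indices whose bit vector differs in exactly `radius` positions
    powers_of_two = 1 << np.arange(len(bits) - 1, -1, -1)
    bins = set()
    for flips in combinations(range(len(bits)), radius):
        alternate = bits.copy()
        alternate[list(flips)] ^= 1          # flip the chosen hash bits
        bins.add(int(alternate @ powers_of_two))
    return bins

rng = np.random.default_rng(0)
random_vectors = rng.normal(size=(5, 4))     # 5-dim data hashed to 4 bits
bits, idx = bin_index(rng.normal(size=5), random_vectors)
print(len(nearby_bins(bits, 1)))             # -> 4: one neighbouring bin per flipped bit
```

With `num_vector` bits, radius `r` adds "num_vector choose r" bins, which is why the number of candidate documents in the plots above grows so quickly with the search radius.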
Machine Learning/Clustering and Retrieval/Week 2/Nearest neighbours using LDA solution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from matplotlib.pyplot import cm import random from mpl_toolkits.mplot3d import Axes3D import diff_classifier.pca as pca # + figsize = (12, 5) hue_order = ['yes', 'no'] fsubset = [0, 2] ncomp = 3 label = 'label' np.random.seed(seed=1) dataset = {'label': 500*['yes'] + 500*['no'], 0: np.random.normal(0.5, 1, size=1000), 1: np.random.normal(1, 2, size=1000), 2: np.random.normal(3, 10, size=1000) } df = pd.DataFrame(data=dataset) # + import random axes = {} fig = plt.figure(figsize=(14, 14)) axes[1] = fig.add_subplot(221, projection='3d') axes[2] = fig.add_subplot(222, projection='3d') axes[3] = fig.add_subplot(223, projection='3d') axes[4] = fig.add_subplot(224, projection='3d') color = iter(cm.viridis(np.linspace(0, 0.9, 3))) angle=240 angle1 = [60, 0, 0, 0] angle2 = [240, 240, 0, 180] labels = ['100', '200', '500'] counter = 0 for key in tgroups: c = next(color) to_plot = random.sample(range(0, len(tgroups[key][0].tolist())), 1000) xy = (list(tgroups[key][0].tolist()[i] for i in to_plot), list(tgroups[key][1].tolist()[i] for i in to_plot), list(tgroups[key][2].tolist()[i] for i in to_plot)) acount = 0 for ax in axes: axes[ax].scatter(xy[0], xy[1], xy[2], c=c, s=18, alpha=0.3, label=labels[counter]) axes[ax].set_xlim3d(-8, 8) axes[ax].set_ylim3d(-8, 8) axes[ax].view_init(angle1[acount], angle2[acount]) acount = acount + 1 counter = counter + 1 plt.legend(fontsize=20, frameon=False) plt.show() # - def feature_plot_3D(dataset, label, features=[0, 1, 2], randsel=True, randcount=200, **kwargs): """Plots three features against each other from feature dataset. 
Parameters ---------- dataset : pandas.core.frames.DataFrame Must contain a group column and numerical features columns label : string or int Group column name features : list of int Names of columns to be plotted randsel : bool If True, downsamples from original dataset randcount : int Size of downsampled dataset **kwargs : variable figsize : tuple of int or float Size of output figure dotsize : float or int Size of plotting markers alpha : float or int Transparency factor xlim : list of float or int X range of output plot ylim : list of float or int Y range of output plot zlim : list of float or int Z range of output plot legendfontsize : float or int Font size of legend labelfontsize : float or int Font size of labels fname : string Filename of output figure Returns ------- xy : list of lists Coordinates of data on plot """ defaults = {'figsize': (8, 8), 'dotsize': 70, 'alpha': 0.7, 'xlim': None, 'ylim': None, 'zlim': None, 'legendfontsize': 12, 'labelfontsize': 10, 'fname': None} for defkey in defaults.keys(): if defkey not in kwargs.keys(): kwargs[defkey] = defaults[defkey] axes = {} fig = plt.figure(figsize=(14, 14)) axes[1] = fig.add_subplot(221, projection='3d') axes[2] = fig.add_subplot(222, projection='3d') axes[3] = fig.add_subplot(223, projection='3d') axes[4] = fig.add_subplot(224, projection='3d') color = iter(cm.viridis(np.linspace(0, 0.9, 3))) angle1 = [60, 0, 0, 0] angle2 = [240, 240, 10, 190] tgroups = {} xy = {} counter = 0 labels = dataset[label].unique() for lval in labels: tgroups[counter] = dataset[dataset[label] == lval] counter = counter + 1 N = len(tgroups) color = iter(cm.viridis(np.linspace(0, 0.9, N))) counter = 0 for key in tgroups: c = next(color) xy = [] if randsel: to_plot = random.sample(range(0, len(tgroups[key][0].tolist())), randcount) for key2 in features: xy.append(list(tgroups[key][key2].tolist()[i] for i in to_plot)) else: for key2 in features: xy.append(tgroups[key][key2]) acount = 0 for ax in axes: axes[ax].scatter(xy[0], 
xy[1], xy[2], c=c, s=kwargs['dotsize'], alpha=kwargs['alpha'], label=labels[counter]) if kwargs['xlim'] is not None: axes[ax].set_xlim3d(kwargs['xlim']) if kwargs['ylim'] is not None: axes[ax].set_ylim3d(kwargs['ylim']) if kwargs['zlim'] is not None: axes[ax].set_zlim3d(kwargs['zlim']) axes[ax].view_init(angle1[acount], angle2[acount]) axes[ax].set_xlabel('Prin. Component {}'.format(features[0]), fontsize=kwargs['labelfontsize']) axes[ax].set_ylabel('Prin. Component {}'.format(features[1]), fontsize=kwargs['labelfontsize']) axes[ax].set_zlabel('Prin. Component {}'.format(features[2]), fontsize=kwargs['labelfontsize']) acount = acount + 1 counter = counter + 1 #plt.legend(fontsize=kwargs['legendfontsize'], frameon=False) axes[3].set_xticks([]) axes[4].set_xticks([]) if kwargs['fname'] is None: plt.show() else: plt.savefig(kwargs['fname']) feature_plot_3D(df, label='label', features=[0, 1, 2]) def test_feature_plot_3D(): np.random.seed(seed=1) dataset = {'label': 250*['yes'] + 250*['no'], 0: np.random.normal(0.5, 1, size=500), 1: np.random.normal(1, 2, size=500), 2: np.random.normal(3, 10, size=500) } df = pd.DataFrame(data=dataset) xy = pca.feature_plot_3D(df, label='label', features=[0, 1], randsel=True, fname='test1.png') # assert len(xy[1]) == 200 # assert os.path.isfile('test1.png') # # xy = pca.feature_plot_3D(df, label='label', features=[0, 1], randsel=False) # assert len(xy[1]) == 250
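One detail worth calling out in `feature_plot_3D` is that `randsel` draws a single index sample (`to_plot`) and reuses it for every feature column, so each plotted point keeps its coordinates aligned across the three axes. A minimal sketch of that logic, using a plain dict of columns rather than a DataFrame:

```python
import random

def downsample_group(group, features, randcount):
    # draw the row indices once, then slice every column with the same draw,
    # so row k of each returned column still refers to the same sample
    idx = random.sample(range(len(group[features[0]])), randcount)
    return [[group[feat][i] for i in idx] for feat in features]

group = {0: list(range(100)),        # three toy feature columns where
         1: list(range(100, 200)),   # column k is column 0 shifted by 100*k
         2: list(range(200, 300))}
xy = downsample_group(group, [0, 1, 2], 20)
print(all(b == a + 100 for a, b in zip(xy[0], xy[1])))  # -> True
```

Sampling each column independently instead would scatter unrelated (x, y, z) triples and silently destroy any cluster structure in the plot.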
notebooks/development/09_06_18_test_functions_pca.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # > **How to run this notebook (command-line)?** # 1. Install the `ReinventCommunity` environment: # `conda env create -f environment.yml` # 2. Activate the environment: # `conda activate ReinventCommunity` # 3. Execute `jupyter`: # `jupyter notebook` # 4. Copy the link to a browser # # # # `REINVENT 3.0`: reinforcement learning exploration demo # This illustrates a use case where we aim to achieve an exploration behavior by generating as many diverse solutions as possible, using a predictive model as the main component. # # NOTE: The generated solutions might not be entirely reliable since they could be outside of the applicability domain of the predictive model. Predictive models could score highly compounds that are outside of the applicability domain, but this score would likely be inaccurate. This mode would be more reliable if we also include a `matching_substructure` component with a list of desired core structural patterns/scaffolds. Alternatively this mode can be quite successful in combination with docking or pharmacophore similarity. Such examples will be provided with the next releases. # # ## 1. 
Set up the paths # _Please update the following code block such that it reflects your system's installation and execute it._ # + # load dependencies import os import re import json import tempfile # --------- change these path variables as required reinvent_dir = os.path.expanduser("~/Desktop/Reinvent") reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent.v3.0") output_dir = os.path.expanduser("~/Desktop/REINVENT_RL_Exploration_demo") # --------- do not change # get the notebook's root path try: ipynb_path except NameError: ipynb_path = os.getcwd() # if required, generate a folder to store the results try: os.mkdir(output_dir) except FileExistsError: pass # - # ## 2. Setting up the configuration # In the cells below we will build a nested dictionary object that will eventually be converted to a JSON file, which in turn will be consumed by `REINVENT`. # You can find this file in your `output_dir` location. # ### A) Declare the run type # initialize the dictionary configuration = { "version": 3, # we are going to use REINVENT's newest release "run_type": "reinforcement_learning" # other run types: "sampling", "validation", # "transfer_learning", # "scoring" and "create_model" } # ### B) Sort out the logging details # This includes the `result_folder` path where the results will be produced. # # Also: `REINVENT` can send custom log messages to a remote location. We have retained this capability in the code. If the `recipient` value differs from `"local"`, `REINVENT` will attempt to POST the data to the specified `recipient`. 
# add block to specify whether to run locally or not and # where to store the results and logging configuration["logging"] = { "sender": "http://0.0.0.1", # only relevant if "recipient" is set to "remote" "recipient": "local", # either to local logging or use a remote REST-interface "logging_frequency": 10, # log every x-th steps "logging_path": os.path.join(output_dir, "progress.log"), # load this folder in tensorboard "result_folder": os.path.join(output_dir, "results"), # will hold the compounds (SMILES) and summaries "job_name": "Reinforcement learning demo", # set an arbitrary job name for identification "job_id": "demo" # only relevant if "recipient" is set to a specific REST endpoint } # Create the `"parameters"` field # add the "parameters" block configuration["parameters"] = {} # ### C) Set Diversity Filter # During each step of Reinforcement Learning the compounds scored above the `minscore` threshold are kept in memory. The scored SMILES are written out to a file in the results folder, `scaffold_memory.csv`. In the example here we set the filter to `"IdenticalMurckoScaffold"` (use `"NoFilter"` to disable it). This will help to explore the chemical space, since the diversity filter stimulates generation of more diverse solutions. The maximum average value of the scoring function will be lower in exploration mode because the Agent is encouraged to search for diverse scaffolds rather than to only optimize the ones that have been found so far. The number of generated compounds should be higher in comparison to the exploitation scenario since diversity is encouraged.
# add a "diversity_filter" configuration["parameters"]["diversity_filter"] = { "name": "IdenticalMurckoScaffold", # other options are: "IdenticalTopologicalScaffold", # "IdenticalMurckoScaffold" and "ScaffoldSimilarity" # -> use "NoFilter" to disable this feature "nbmax": 25, # the bin size; penalization will start once this is exceeded "minscore": 0.4, # the minimum total score to be considered for binning "minsimilarity": 0.4 # the minimum similarity to be placed into the same bin } # ### D) Set Inception # * `smiles` provide here a list of smiles to be incepted # * `memory_size` the number of smiles allowed in the inception memory # * `sample_size` the number of smiles that can be sampled at each reinforcement learning step from inception memory # prepare the inception (we do not use it in this example, so "smiles" is an empty list) configuration["parameters"]["inception"] = { "smiles": [], # fill in a list of SMILES here that can be used (or leave empty) "memory_size": 100, # sets how many molecules are to be remembered "sample_size": 10 # how many are to be sampled each epoch from the memory } # ### E) Set the general Reinforcement Learning parameters # * `n_steps` is the number of Reinforcement Learning steps to perform. Best start with 1000 steps and see if that's enough. # * `agent` is the generative model that undergoes transformation during the Reinforcement Learning run. # # We recommend keeping the other parameters at their default values. 
# set all "reinforcement learning"-specific run parameters configuration["parameters"]["reinforcement_learning"] = { "prior": os.path.join(ipynb_path, "models/random.prior.new"), # path to the pre-trained model "agent": os.path.join(ipynb_path, "models/random.prior.new"), # path to the pre-trained model "n_steps": 1000, # the number of epochs (steps) to be performed; often 1000 "sigma": 128, # used to calculate the "augmented likelihood", see publication "learning_rate": 0.0001, # sets how strongly the agent is influenced by each epoch "batch_size": 128, # specifies how many molecules are generated per epoch "reset": 0, # if not '0', reset the agent if the threshold is reached to get # more diverse solutions "reset_score_cutoff": 0.5, # if resetting is enabled, this is the threshold "margin_threshold": 50 # specify the (positive) margin between agent and prior } # ### F) Define the scoring function # We will use a `custom_product` type. The component types included are: # * `predictive_property` which is the target activity to _Aurora_ kinase represented by the predictive `regression` model. Note that we set the weight of this component to be the highest. # * `qed_score` is the implementation of QED in RDKit. It biases the generation of molecules towards more "drug-like" space. Depending on the study case it can have a beneficial or detrimental effect. 
# * `custom_alerts` the `"smiles"` field also can work with SMILES or SMARTS # # Note: The model used in this example is a regression model # # prepare the scoring function definition and add at the end scoring_function = { "name": "custom_product", # this is our default one (alternative: "custom_sum") "parallel": False, # sets whether components are to be executed # in parallel; note, that python uses "False" / "True" # but the JSON "false" / "true" # the "parameters" list holds the individual components "parameters": [ # add component: an activity model { "component_type": "predictive_property", # this is a scikit-learn model, returning # activity values "name": "<NAME>", # arbitrary name for the component "weight": 6, # the weight ("importance") of the component (default: 1) "specific_parameters": { "model_path": os.path.join(ipynb_path, "models/Aurora_model.pkl"), # absolute model path "transformation": { "transformation_type": "sigmoid", # see description above "high": 9, # parameter for sigmoid transformation "low": 4, # parameter for sigmoid transformation "k": 0.25, # parameter for sigmoid transformation }, "scikit": "regression", # model can be "regression" or "classification" "descriptor_type": "ecfp_counts", # sets the input descriptor for this model "size": 2048, # parameter of descriptor type "radius": 3, # parameter of descriptor type "use_counts": True, # parameter of descriptor type "use_features": True # parameter of descriptor type } }, # add component: QED { "component_type": "qed_score", # this is the QED score as implemented in RDKit "name": "QED", # arbitrary name for the component "weight": 2 # the weight ("importance") of the component (default: 1) }, # add component: enforce to NOT match a given substructure { "component_type": "custom_alerts", "name": "Custom alerts", # arbitrary name for the component "weight": 1, # the weight of the component (default: 1) "specific_parameters": { "smiles": [ # specify the substructures (as list) to 
penalize "[*;r8]", "[*;r9]", "[*;r10]", "[*;r11]", "[*;r12]", "[*;r13]", "[*;r14]", "[*;r15]", "[*;r16]", "[*;r17]", "[#8][#8]", "[#6;+]", "[#16][#16]", "[#7;!n][S;!$(S(=O)=O)]", "[#7;!n][#7;!n]", "C#C", "C(=[O,S])[O,S]", "[#7;!n][C;!$(C(=[O,N])[N,O])][#16;!s]", "[#7;!n][C;!$(C(=[O,N])[N,O])][#7;!n]", "[#7;!n][C;!$(C(=[O,N])[N,O])][#8;!o]", "[#8;!o][C;!$(C(=[O,N])[N,O])][#16;!s]", "[#8;!o][C;!$(C(=[O,N])[N,O])][#8;!o]", "[#16;!s][C;!$(C(=[O,N])[N,O])][#16;!s]" ] } }] } configuration["parameters"]["scoring_function"] = scoring_function # #### NOTE: Getting the selectivity score component to reach satisfactory levels is non-trivial and might take considerably higher number of steps # ## 3. Write out the configuration # We now have successfully filled the dictionary and will write it out as a `JSON` file in the output directory. Please have a look at the file before proceeding in order to see how the paths have been inserted where required and the `dict` -> `JSON` translations (e.g. `True` to `true`) have taken place. # write the configuration file to the disc configuration_JSON_path = os.path.join(output_dir, "RL_config.json") with open(configuration_JSON_path, 'w') as f: json.dump(configuration, f, indent=4, sort_keys=True) # ## 4. Run `REINVENT` # Now it is time to execute `REINVENT` locally. Note, that depending on the number of epochs (steps) and the execution time of the scoring function components, this might take a while. 
# # The command-line execution looks like this: # ``` # # activate environment # conda activate reinvent.v3.0 # # # execute REINVENT # python <your_path>/input.py <config>.json # ``` # + # %%capture captured_err_stream --no-stderr # execute REINVENT from the command-line # !{reinvent_env}/bin/python {reinvent_dir}/input.py {configuration_JSON_path} # + # print the output to a file, just to have it for documentation with open(os.path.join(output_dir, "run.err"), 'w') as file: file.write(captured_err_stream.stdout) # prepare the output to be parsed list_epochs = re.findall(r'INFO.*?local', captured_err_stream.stdout, re.DOTALL) data = [epoch for idx, epoch in enumerate(list_epochs) if idx in [1, 75, 124]] data = ["\n".join(element.splitlines()[:-1]) for element in data] # - # Below you see the print-out of the first epoch, one from the middle, and the last epoch, respectively. Note that the fraction of valid `SMILES` is high right from the start (because we use a pre-trained prior). You can see the partial scores for each component for the first couple of compounds, but the most important information is the average score. You can clearly see how it increases over time. for element in data: print(element) # ## 5. Analyse the results # In order to analyze the run in a more intuitive way, we can use `tensorboard`: # # ``` # # go to the root folder of the output # cd <your_path>/REINVENT_RL_demo # # # make sure you have activated the proper environment # conda activate reinvent.v3.0 # # # start tensorboard # tensorboard --logdir progress.log # ``` # # Then copy the link provided to a browser window, e.g. "http://workstation.url.com:6006/". The following figures are example plots - remember that there is always some randomness involved. In `tensorboard` you can monitor the individual scoring function components. # # The score for predicted Aurora Kinase activity. # # ![](img/explore_aurora_kinase.png) # # The average score over time.
# # ![](img/explore_avg_score.png) # # It might also be informative to look at the results from the prior (dark blue), the agent (blue) and the augmented likelihood (purple) over time. # # ![](img/explore_nll_plot.png) # # And last but not least, there is an "Images" tab that lets you easily browse through the generated compounds. In the molecules, the substructure matches that were defined to be required are highlighted in red (if present). Also, the total scores are given per molecule. # # ![](img/molecules.png) # The results folder will hold four different files: the agent (pickled), the input JSON (just for reference purposes), the memory (highest scoring compounds in `CSV` format) and the scaffold memory (in `CSV` format). # !head -n 15 {output_dir}/results/memory.csv
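The `custom_product` aggregation used in the scoring function combines the per-component scores according to their weights (6 for activity, 2 for QED, 1 for custom alerts). How REINVENT implements it exactly is not shown here; a common way to realize a weighted product over [0, 1] scores is a weighted geometric mean, sketched below purely for illustration.

```python
def weighted_geometric_mean(scores, weights):
    """Combine per-component scores in [0, 1] into one total score.
    Sketch of a weighted product; assumed behavior, not REINVENT's code."""
    total_w = sum(weights)
    product = 1.0
    for s, w in zip(scores, weights):
        product *= s ** (w / total_w)  # each score contributes by its weight share
    return product

# e.g. activity 0.8 (weight 6), QED 0.6 (weight 2), alerts pass 1.0 (weight 1)
print(weighted_geometric_mean([0.8, 0.6, 1.0], [6, 2, 1]))
```

Note the multiplicative behavior: a single component score of 0 (e.g. a matched alert) drives the total score to 0, which is exactly why `custom_alerts` acts as a hard filter.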
notebooks/Reinforcement_Learning_Exploration_Demo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/alik604/CMPT-419/blob/master/ML_algo_compare_code.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="G8bFiuchPYim" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 425} outputId="8f477d43-9ebf-44df-e266-d2265230bbf4" # Importing the libraries import math import datetime import numpy as np import pandas as pd from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, LSTM import matplotlib.pyplot as plt # Import matplotlib # %matplotlib inline from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # !pip install fastparquet # + id="xPWTBBefIaxi" colab_type="code" outputId="14e46286-e63a-4159-ead9-6bba7630bcb7" colab={"base_uri": "https://localhost:8080/", "height": 419} dataset = pd.read_parquet('https://github.com/ddkpham/Stock_Prediction_NLP_ML/blob/master/wsb.hourly.joined.parquet.gz?raw=true', engine="fastparquet") # dataset = data.set_index('created_utc') dataset = dataset.iloc[:,29:] dataset # + id="fO0G7gxnQA5l" colab_type="code" outputId="2fdcdd3b-f527-4d91-edcd-a192a00907a5" colab={"base_uri": "https://localhost:8080/", "height": 419} opening_dataset = dataset.SPY_StockPrice dataset = dataset.drop(["SPY_StockPrice"],axis=1) dataset # + id="Ml86n07VeS77" colab_type="code" outputId="82eb61aa-3d08-4982-cbd3-80572cf53618" colab={"base_uri": "https://localhost:8080/", "height": 136} scaler = MinMaxScaler(feature_range=(0,1)) openScaler = MinMaxScaler(feature_range=(0,1)) scaled_dataset = 
scaler.fit_transform(dataset) scaled_open_prices = openScaler.fit_transform(opening_dataset.values.reshape(-1, 1)) scaled_open_prices # + id="WYkKUoZ0KV_A" colab_type="code" colab={} training_data_len = math.ceil(len(dataset) * 0.8) train_xvals = scaled_dataset[0:training_data_len, :] train_yvals = scaled_open_prices[0:training_data_len, :] # + id="qS6ZzUUYNwT0" colab_type="code" colab={} test_xvals = scaled_dataset[training_data_len:, :] test_yvals = scaled_open_prices[training_data_len:, :] # + id="YndPC_STbn-f" colab_type="code" colab={} y_train = train_yvals x_train = train_xvals y_test = test_yvals x_test = test_xvals # + id="o_sL6gWENwQM" colab_type="code" outputId="92fd2be1-6684-4917-840f-5b9db18c973c" colab={"base_uri": "https://localhost:8080/", "height": 34} train_xvals.shape test_xvals.shape scaled_dataset.shape # + id="_wCo0iPFo8kC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c0de7de9-fcfb-44e5-92c2-f126f4316331" x_train = [] y_train = [] maxlen = 20 for i in range(maxlen, len(train_xvals)): # the last window of x_train ends at the last row of train_xvals, so the two stay in sync x_train.append(train_xvals[i-maxlen: i, :]) y_train.append(train_yvals[i]) np.array(x_train).shape
prevent cuDNN, using TF2 model.add(Dense(160)) model.add(LSTM(160, return_sequences=False)) model.add(Dense(60)) model.add(Dense(1)) # Compile Model model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001), loss='mean_squared_error') # metrics=['mean_squared_error', 'mae'] batch_size = 16 callbacks=[] # callbacks.append(EarlyStopping(monitor='loss', patience=12, verbose=1, mode='min', restore_best_weights=False)) # end if loss converges stop = EarlyStopping(monitor='loss', patience=10, verbose=1, mode='min', restore_best_weights=True) # end if val_loss converges callbacks.append(stop) callbacks.append(ModelCheckpoint('./my_model.hdf5', save_best_only=True, monitor='loss', mode='min')) callbacks.append(ReduceLROnPlateau(monitor='loss', factor=0.5, patience=6, verbose=1, mode='min')) history = model.fit(x_train, y_train, batch_size=batch_size, epochs=45, callbacks=callbacks,shuffle=True) # + id="-z8j1-w3qeFi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="1a5c00ce-4498-46dd-c382-95d89cd2bab1" model.summary() # + id="ONPU_2Xfs28F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 661} outputId="3ed6b444-9e31-4061-fbb2-dead6bd052ee" from tensorflow.keras.utils import plot_model plot_model(model) # + id="R38uC5k0qeL-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2654af14-76ba-49be-9133-102d7b1a3310" x_test = [] y_test = test_yvals[:-20] for i in range(maxlen, len(test_xvals)): x_test.append(test_xvals[i-maxlen:i, :]) x_test = np.array(x_test) predictions = model.predict(x_test, batch_size=batch_size) rmse_LSTM = np.sqrt(np.mean((openScaler.inverse_transform(predictions.reshape(-1, 1))- openScaler.inverse_transform(y_test.reshape(-1, 1)))**2)) rmse_LSTM # 12.474914716503145 # + id="ZEHflymlrIO-" colab_type="code" colab={} # + id="PrdanopLrIR_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} 
outputId="24f0563b-adaf-4839-84dd-cf07968ad3b4" from tensorflow.keras.layers import Dense, LSTM,GRU model = Sequential() model.add(GRU(100, return_sequences=True, input_shape=(x_train.shape[1], x_train.shape[2]))) model.add(GRU(135, return_sequences=True)) # recurrent_dropout=0.1 will prevent cuDNN, using TF2 model.add(Dense(160)) model.add(GRU(160, return_sequences=False)) model.add(Dense(60)) model.add(Dense(1)) # Compile Model model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001), loss='mean_squared_error') batch_size = 16 callbacks=[] stop = EarlyStopping(monitor='loss', patience=10, verbose=1, mode='min', restore_best_weights=True) callbacks.append(stop) callbacks.append(ModelCheckpoint('./my_model.hdf5', save_best_only=True, monitor='loss', mode='min')) callbacks.append(ReduceLROnPlateau(monitor='loss', factor=0.5, patience=6, verbose=1, mode='min')) history = model.fit(x_train, y_train, batch_size=batch_size, epochs=45, callbacks=callbacks,shuffle=True) predictions = model.predict(x_test, batch_size=batch_size) rmse_GRU = np.sqrt(np.mean((openScaler.inverse_transform(predictions.reshape(-1, 1))- openScaler.inverse_transform(y_test.reshape(-1, 1)))**2)) rmse_GRU # 15.126561829394966 # + id="wMtsPjHJrIYD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 661} outputId="76baff8f-92b7-4d23-d13c-bad2a22f99c7" plot_model(model) # + id="dgMBXeHOw9tc" colab_type="code" colab={} y_train = train_yvals x_train = train_xvals y_test = test_yvals x_test = test_xvals # + id="0RiLJLfBrIVm" colab_type="code" colab={} # from tensorflow.keras.layers import Dense, LSTM,GRU # model = Sequential() # model.add(Dense(160)) # model.add(Dense(60)) # model.add(Dense(1)) # # Compile Model # model.compile(optimizer='adam', loss='mean_squared_error') # batch_size = 16 # callbacks=[] # stop = EarlyStopping(monitor='loss', patience=10, verbose=1, mode='min', restore_best_weights=True) # callbacks.append(stop) # 
callbacks.append(ModelCheckpoint('./my_model.hdf5', save_best_only=True, monitor='loss', mode='min')) # callbacks.append(ReduceLROnPlateau(monitor='loss', factor=0.5, patience=6, verbose=1, mode='min')) # history = model.fit(x_train, y_train, batch_size=batch_size, epochs=150, callbacks=callbacks,shuffle=True) # + id="Zls9MhpLqeR7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e6df4227-26bf-47ae-b289-c5e68f3f126c" # predictions = model.predict(x_test, batch_size=batch_size) # rmse_MLP = np.sqrt(np.mean((openScaler.inverse_transform(yhat.reshape(-1, 1))- openScaler.inverse_transform(y_test.reshape(-1, 1)))**2)) # rmse_MLP #17.856727259041 # + id="QzfUsxq9qeOy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="da720d95-0e69-4c99-acf6-f5dea42a39ee" plt.plot(predictions) plt.plot(y_test) # + id="o9GADjJaqeJW" colab_type="code" colab={} # + id="npFiDGGSpK6V" colab_type="code" colab={} y_train = train_yvals x_train = train_xvals y_test = test_yvals x_test = test_xvals # + [markdown] colab_type="text" id="bSJeyHcZQzbc" # ## SVM & RandomForestRegressor & bayesian regression # + colab_type="code" id="S5miV8c_Qzbc" colab={} from sklearn.svm import SVR from sklearn.ensemble import RandomForestRegressor # + id="JaPSwuSZbIZO" colab_type="code" colab={} # + colab_type="code" id="bKXBfOfSQzbe" outputId="05ff8f3a-2948-4470-c1e8-fa430779c651" colab={"base_uri": "https://localhost:8080/", "height": 221} reg = RandomForestRegressor(n_jobs=-1,n_estimators=999,random_state=42,criterion='mse') reg.fit(x_train,y_train) yhat = reg.predict(x_test) MSE = mean_squared_error(openScaler.inverse_transform(yhat.reshape(-1, 1)), openScaler.inverse_transform(y_test.reshape(-1, 1))) print('\nMSE: ',MSE) RMSE_RFR = np.sqrt(MSE) print('\nRMSE: ',RMSE_RFR) # + colab_type="code" id="G3Yiln0eQzbg" colab={} # plt.title("RandomForestRegressor") # plt.plot(y,label ='obs'); # plt.plot(yhat,c='r',label = "pred"); # plt.legend(); 
# plt.show() # + colab_type="code" id="OG3KhnbMQzbh" outputId="9ecc863b-2217-4505-e4d8-523e777f956e" colab={"base_uri": "https://localhost:8080/", "height": 136} reg = SVR(kernel='rbf') reg.fit(x_train,y_train) yhat = reg.predict(x_test) MSE = mean_squared_error(openScaler.inverse_transform(yhat.reshape(-1, 1)), openScaler.inverse_transform(y_test.reshape(-1, 1))) print('\nMSE: ',MSE) RMSE_SVM = np.sqrt(MSE) print('\nRMSE: ',RMSE_SVM) # + id="DHQNRPC8WPZc" colab_type="code" colab={} y_train = y_train.reshape(1,-1)[0] y_test = y_test.reshape(1,-1)[0] # + id="9L6Uyv6tZxNv" colab_type="code" colab={} x_train = x_train.tolist() x_test = x_test.tolist() y_test = y_test.tolist() y_train = y_train.tolist() # + id="wM7xud6ybW0G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="63ee2b2a-400d-43c2-a620-7f69057f07b7" from sklearn.ensemble import GradientBoostingRegressor reg = GradientBoostingRegressor(n_estimators=200,max_depth=4) reg.fit(x_train,y_train) yhat = reg.predict(x_test) MSE = mean_squared_error(openScaler.inverse_transform(yhat.reshape(-1, 1)), openScaler.inverse_transform(y_test.reshape(-1, 1))) print('\nMSE: ',MSE) RMSE_RGB = np.sqrt(MSE) print('\nRMSE: ',RMSE_RGB) # + id="ZoLBoUzxWNaf" colab_type="code" colab={} # from sklearn.linear_model import ARDRegression # https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.BayesianRidge.html#sklearn.linear_model.BayesianRidge # reg = ARDRegression(n_iter=300,tol = 0.001,compute_score=False) # reg.fit(x_train,y_train) # yhat = reg.predict(X_test) # MSE = mean_squared_error(yhat,y_test) # print('\nMSE - ARDRegression: ',MSE) # RMSE_ARDRegression = np.sqrt(MSE) # print('\nRMSE: ',RMSE_ARDRegression) # + id="vLi6Tknyaq6W" colab_type="code" colab={} # from sklearn.linear_model import BayesianRidge # reg = BayesianRidge(n_iter=300, tol = 0.001,compute_score=False) # reg.fit(x,y) # yhat = reg.predict(x_train) # MSE = mean_squared_error(yhat,y_train) # print('\nMSE - BayesianRidge : ',MSE) # RMSE_BayesianRidge =
np.sqrt(MSE) # print('\nRMSE_BayesianRidge: ',RMSE_BayesianRidge) # + id="EgMD9UPraq9c" colab_type="code" colab={} # + id="czn2OLjPPN8P" colab_type="code" colab={} from sklearn.linear_model import * # https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.BayesianRidge.html#sklearn.linear_model.BayesianRidge # + id="qaPSeaPZPN_u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="bb652e4c-d49f-4abb-d889-e694a00f9e32" scores # + id="CYiV8QSQPOCW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 366} outputId="69599d03-658b-44da-d9b6-2333ba6fbcb7" import matplotlib import matplotlib.pyplot as plt import numpy as np rmse_MLP = 17.856727259041 # value recorded in the commented-out MLP cell above scores = [RMSE_RGB, RMSE_RFR, RMSE_SVM, rmse_LSTM, rmse_GRU, rmse_MLP] #, RMSE_ARDRegression, RMSE_BayesianRidge labels = ['Grad. boosting','Random Forest', 'SVM','LSTM', 'GRU','Multilayer perceptron'] # 'Bayesian ARD regression', 'Bayesian ridge regression' y_pos = np.arange(len(labels)) fig, ax = plt.subplots() plt.bar(y_pos, scores,width=0.7, align='center', alpha=0.7) ls = [str(round(i,2)) for i in scores] plt.legend(ls, labels) ax.set_ylabel('RMSE') ax.set_title('Comparison') # ax.set_xticks(y_pos,labels) # ax.set_xticklabels(labels, rotation = 45, ha="left") # + [markdown] id="TTyQeJjK6fd3" colab_type="text" # # Candlestick plotting # + id="fYEZzw4QKWC4" colab_type="code" colab={} # # !pip install https://github.com/matplotlib/mpl_finance/archive/master.zip # + id="cf2VjlRSSVwh" colab_type="code" colab={} # from matplotlib.dates import DateFormatter, WeekdayLocator, DayLocator, MONDAY # from mpl_finance import candlestick_ohlc # def pandas_candlestick_ohlc(dat, stick = "day", otherseries = None): # """ # :param dat: pandas DataFrame object with datetime64 index, and float columns "Open", "High", "Low", and "Close", likely created via DataReader from "yahoo" # :param stick: A string or number indicating the period of time covered by a single candlestick.
Valid string inputs include "day", "week", "month", and "year", ("day" default), and any numeric input indicates the number of trading days included in a period # :param otherseries: An iterable that will be coerced into a list, containing the columns of dat that hold other series to be plotted as lines # This will show a Japanese candlestick plot for stock data stored in dat, also plotting other series if passed. # """ # mondays = WeekdayLocator(MONDAY) # major ticks on the mondays # alldays = DayLocator() # minor ticks on the days # dayFormatter = DateFormatter('%d') # e.g., 12 # # Create a new DataFrame which includes OHLC data for each period specified by stick input # transdat = dat.loc[:,["Open", "High", "Low", "Close"]] # if (type(stick) == str): # if stick == "day": # plotdat = transdat # stick = 1 # Used for plotting # elif stick in ["week", "month", "year"]: # if stick == "week": # transdat["week"] = pd.to_datetime(transdat.index).map(lambda x: x.isocalendar()[1]) # Identify weeks # elif stick == "month": # transdat["month"] = pd.to_datetime(transdat.index).map(lambda x: x.month) # Identify months # transdat["year"] = pd.to_datetime(transdat.index).map(lambda x: x.isocalendar()[0]) # Identify years # grouped = transdat.groupby(list(set(["year",stick]))) # Group by year and other appropriate variable # plotdat = pd.DataFrame({"Open": [], "High": [], "Low": [], "Close": []}) # Create empty data frame containing what will be plotted # for name, group in grouped: # plotdat = plotdat.append(pd.DataFrame({"Open": group.iloc[0,0], # "High": max(group.High), # "Low": min(group.Low), # "Close": group.iloc[-1,3]}, # index = [group.index[0]])) # if stick == "week": stick = 5 # elif stick == "month": stick = 30 # elif stick == "year": stick = 365 # elif (type(stick) == int and stick >= 1): # transdat["stick"] = [np.floor(i / stick) for i in range(len(transdat.index))] # grouped = transdat.groupby("stick") # plotdat = pd.DataFrame({"Open": [], "High": [], "Low": [], 
"Close": []}) # Create empty data frame containing what will be plotted # for name, group in grouped: # plotdat = plotdat.append(pd.DataFrame({"Open": group.iloc[0,0], # "High": max(group.High), # "Low": min(group.Low), # "Close": group.iloc[-1,3]}, # index = [group.index[0]])) # else: # raise ValueError('Valid inputs to argument "stick" include the strings "day", "week", "month", "year", or a positive integer') # # Set plot parameters, including the axis object ax used for plotting # fig, ax = plt.subplots() # fig.subplots_adjust(bottom=0.2) # if plotdat.index[-1] - plotdat.index[0] < pd.Timedelta('730 days'): # weekFormatter = DateFormatter('%b %d') # e.g., Jan 12 # ax.xaxis.set_major_locator(mondays) # ax.xaxis.set_minor_locator(alldays) # else: # weekFormatter = DateFormatter('%b %d, %Y') # ax.xaxis.set_major_formatter(weekFormatter) # ax.grid(True) # # Create the candelstick chart # candlestick_ohlc(ax, list(zip(list(date2num(plotdat.index.tolist())), plotdat["Open"].tolist(), plotdat["High"].tolist(), # plotdat["Low"].tolist(), plotdat["Close"].tolist())), # colorup = "black", colordown = "red", width = stick * .4) # # Plot other series (such as moving averages) as lines # if otherseries != None: # if type(otherseries) != list: # otherseries = [otherseries] # dat.loc[:,otherseries].plot(ax = ax, lw = 1.3, grid = True) # ax.xaxis_date() # ax.autoscale_view() # plt.setp(plt.gca().get_xticklabels(), rotation=90, horizontalalignment='right') # plt.show() # #pandas_candlestick_ohlc(apple) # FNGU = web.DataReader("FNGU", "yahoo", start, end) # FNGU["14d"] = np.round(FNGU["Close"].rolling(window = 20, center = False).mean(), 2) # pandas_candlestick_ohlc(FNGU.loc['2019-01-04':], otherseries = "14d") # FNGU.loc['201-01-04':'2016-08-07',:] # + id="Hpzvy5HvZV1q" colab_type="code" colab={} # + id="Z7idjOg9ZV5D" colab_type="code" colab={} # + id="PZKOvn_PZV8d" colab_type="code" colab={} # + id="Uwa5QLbq6f3P" colab_type="code" colab={}
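Every model in this notebook is scored with the same pattern: inverse-transform the scaled predictions and targets, then compute RMSE in original price units. That pattern can be factored into a helper; the sketch below uses a made-up price series and its own `MinMaxScaler` (standing in for the `openScaler` fitted earlier) so it is self-contained.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def rmse_original_scale(scaler, y_pred_scaled, y_true_scaled):
    """RMSE after undoing the MinMax scaling, so the error is reported
    in the original price units (same pattern as the cells above)."""
    y_pred = scaler.inverse_transform(np.asarray(y_pred_scaled).reshape(-1, 1))
    y_true = scaler.inverse_transform(np.asarray(y_true_scaled).reshape(-1, 1))
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

# tiny demo with made-up prices
prices = np.array([100.0, 110.0, 120.0, 130.0]).reshape(-1, 1)
scaler = MinMaxScaler((0, 1)).fit(prices)
scaled = scaler.transform(prices).ravel()
print(rmse_original_scale(scaler, scaled, scaled))  # identical series -> 0.0
```

A constant offset of 0.1 in scaled units corresponds to 0.1 times the fitted data range (here 30 price units), so the helper also makes the unit conversion explicit.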
ML_algo_compare_code.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import numpy as np url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data' iris_2d = np.genfromtxt(url, delimiter=',', dtype='object') # + # Solution # Compute volume sepallength = iris_2d[:, 0].astype('float') petallength = iris_2d[:, 2].astype('float') volume = (np.pi * petallength * (sepallength**2))/3 print("iris_2d shape: {}, volume shape: {}".format(iris_2d.shape, volume.shape)) # Introduce new dimension to match iris_2d's volume = volume[:, np.newaxis] print(volume.shape) # Add the new column out = np.hstack([iris_2d, volume]) print(out.shape) # - out[:4] # + # Input np.random.seed(100) a = np.random.uniform(1,50, 6) # Solution: print(a) print(a.argsort()) #> [18 7 3 10 15] # Solution 2: np.argpartition(-a, 5)[:5] #> [15 10 3 7 18] # Below methods will get you the values. # Method 1: a[a.argsort()][-5:] # Method 2: np.sort(a)[-5:] # Method 3: np.partition(a, kth=-5)[-5:] # Method 4: a[np.argpartition(-a, 5)][:5] # - x = np.array([[3, 1, 2], [5, 4, 6]]) print("x:{}".format(x)) sorted_x_idx = np.argsort(x) print("sorted_x_idx: {}".format(sorted_x_idx)) print("shape: {}".format(sorted_x_idx.shape)) y1 = x[0][sorted_x_idx[0]] y2 = x[1][sorted_x_idx[1]] y = np.vstack((y1, y2)) print("y: {}".format(y)) col1 = x[:, 0] col2 = x[:, 1] col3 = x[:, 2] print("col1: {}, shape: {}".format(col1, col1.shape)) print(col2) print(col3) col1_max = np.maximum(4, col1) print(col1_max) X = np.array([1, 2, 3]) for data in X: print(data) np.squeeze(X) m, n = 5, 5 [[0 for _ in range(n)] for _ in range(m)] [[0]*m]*n
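The last two expressions in the cell above are not interchangeable: the list comprehension builds independent inner lists, while `[[0]*m]*n` repeats n references to one and the same inner list, so mutating one "row" mutates all of them:

```python
m, n = 3, 3

independent = [[0 for _ in range(n)] for _ in range(m)]
aliased = [[0] * n] * m  # m references to the SAME inner list

independent[0][0] = 1
aliased[0][0] = 1

print(independent)  # only the first row changed
print(aliased)      # every row changed, because the rows are shared
```

The comprehension form is therefore the correct way to initialize a 2-D grid in Python.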
Numpy_Python_Exercise.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.2 64-bit # language: python # name: python38264bita307c9f5d73a4e6bba0d2b5e435e778a # --- # + import cv2 import matplotlib.pyplot as plt vc = cv2.VideoCapture(0) plt.figure(figsize=(10,10)) plt.axis("off") if vc.isOpened(): _, frame = vc.read() frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) plt.imshow(frame); width = vc.get(cv2.CAP_PROP_FRAME_WIDTH) height = vc.get(cv2.CAP_PROP_FRAME_HEIGHT) print(f"Dimensions = {width} x {height}") else: is_capturing = False vc.release() # -
video-classification/OpenCV.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 with Spark 1.6 # language: python # name: python2 # --- # ### Loading text data from remote sources # Data files commonly reside in remote sources, such as public or private marketplaces or GitHub repositories. You can load comma-separated value (CSV) data files using Pixiedust's `sampleData` method. # #### Prerequisites # Import PixieDust import pixiedust # When you run a notebook cell (that loads or processes data) it might trigger execution of one or more jobs. # # #### Enable Apache Spark job monitoring pixiedust.enableJobMonitor() # #### Loading data # # To load a data set invoke `pixiedust.sampleData` and specify the data set URL: homes = pixiedust.sampleData("https://openobjectstore.mybluemix.net/misc/milliondollarhomes.csv") # <div class="alert alert-block alert-info"> # `pixiedust.sampleData` loads the data into an [Apache Spark DataFrame](https://spark.apache.org/docs/latest/sql-programming-guide.html#datasets-and-dataframes), which you can inspect and visualize using `display()`. # </div> # # #### Inspecting and previewing the loaded data # # To inspect the automatically inferred schema and preview a small subset of the data you can use the _DataFrame Table_ view, as shown in the preconfigured example below: # # + pixiedust={"displayParams": {"handlerId": "dataframe"}} display(homes) # - # #### Simple visualization using bar charts # # With PixieDust `display()` you can visually explore the loaded data using built-in charts such as bar charts, line charts, scatter plots or maps.
# # To explore a data set # * choose the desired chart type from the drop down # * configure chart options # * configure display options # # We can analyze the average home price for each city by choosing # * chart type: bar chart # * chart options # * _Options > Keys_: `CITY` # * _Options > Values_: `PRICE` # * _Options > Aggregation_: `AVG` # # Run the next cell to review the results. # + pixiedust={"displayParams": {"legend": "true", "valueFields": "PRICE", "handlerId": "barChart", "rowCount": "100", "title": "Average home price by city", "keyFields": "CITY", "rendererId": "matplotlib", "stretch": "true", "mpld3": "false", "aggregation": "AVG"}} display(homes) # - # #### Exploring the data # # By changing the display **Options** you can continue to explore the loaded data set without having to pre-process the data. # # For example, changing # * _Options > Keys_ to `YEAR_BUILT` and # * _Options > Aggregation_ to `COUNT` # # you can find out how old the listed properties are: # + pixiedust={"displayParams": {"legend": "false", "valueFields": "PRICE", "handlerId": "barChart", "rowCount": "100", "keyFields": "YEAR BUILT", "title": "Property age", "stretch": "true", "aggregation": "COUNT"}} display(homes) # - # #### Using sample data sets # # PixieDust comes with a set of curated data sets that you can use to get familiar with the different chart types and options. # # Type `pixiedust.sampleData()` to display those data sets. pixiedust.sampleData() # <div class="alert alert-block alert-info"> The home sales data set we've loaded earlier is one of the samples. Therefore we could have also loaded it by specifying the displayed data set id as parameter: `homes = pixiedust.sampleData(6)`</div> # If your data isn't stored in CSV files, you can load it into a DataFrame from any supported Spark [data source](https://spark.apache.org/docs/latest/sql-programming-guide.html#data-sources).
Refer to [these Python code snippets](https://apsportal.ibm.com/docs/content/analyze-data/python_load.html) for more information.
notebook/PixieDust 2 - Working with External Data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## This notebook is used to visualize the various audio features obtained import os import pickle import soundfile as sf import numpy as np import matplotlib.pyplot as plt import matplotlib.style as ms from tqdm import tqdm import librosa import math import random import pandas as pd from IPython.display import Audio import librosa.display ms.use('seaborn-muted') # %matplotlib inline audio_vectors = pickle.load(open('data/pre-processed/audio_vectors_1.pkl', 'rb')) y1 = audio_vectors['Ses01F_script01_2_F011'] # Angry y2 = audio_vectors['Ses01F_script02_2_F036'] # Sad min_len = min(len(y1), len(y2)) y1, y2 = y1[:min_len], y2[:min_len] sr = 44100 Audio(y1, rate=sr) plt.figure(figsize=(15,2)) librosa.display.waveplot(y1, sr=sr, max_sr=1000, alpha=0.25, color='r') librosa.display.waveplot(y2, sr=sr, max_sr=1000, alpha=0.25, color='b') rmse1 = librosa.feature.rmse(y1 + 0.0001)[0] # renamed to librosa.feature.rms in librosa >= 0.7 rmse2 = librosa.feature.rmse(y2 + 0.0001)[0] # plt.figure(figsize=(15,2)) plt.plot(rmse1, color='r') plt.plot(rmse2, color='b') plt.ylabel('RMSE') # + silence1 = 0 for e in rmse1: if e <= 0.3 * np.mean(rmse1): silence1 += 1 silence2 = 0 for e in rmse2: if e <= 0.3 * np.mean(rmse2): silence2 += 1 print(silence1/float(len(rmse1)), silence2/float(len(rmse2))) # - y1_harmonic = librosa.effects.hpss(y1)[0] y2_harmonic = librosa.effects.hpss(y2)[0] # plt.figure(figsize=(5,2)) plt.plot(y1_harmonic, color='r') plt.plot(y2_harmonic, color='b') plt.ylabel('Harmonics') autocorr1 = librosa.core.autocorrelate(y1) autocorr2 = librosa.core.autocorrelate(y2) plt.figure(figsize=(15,2)) plt.plot(autocorr2, color='b') plt.ylabel('Autocorrelations') cl = 0.45 * np.mean(abs(y2)) center_clipped = [] for s in y2: if s >= cl: center_clipped.append(s - cl) elif s <= -cl: center_clipped.append(s + cl)
elif np.abs(s) < cl: center_clipped.append(0) new_autocorr = librosa.core.autocorrelate(np.array(center_clipped)) plt.figure(figsize=(15,2)) plt.plot(autocorr2, color='yellow') plt.plot(new_autocorr, color='pink') plt.ylabel('Center-clipped Autocorrelation')
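The center-clipping loop above can also be written as a single vectorized NumPy expression, which is considerably faster on long audio vectors. The sketch below applies the same rule (zero out samples within the clipping level, shift the rest toward zero) to a made-up signal.

```python
import numpy as np

def center_clip(signal, clip_fraction=0.45):
    """Center clipping, equivalent to the element-wise loop above:
    cl = clip_fraction * mean(|signal|); samples with |s| < cl become 0,
    the rest are shifted toward zero by cl."""
    signal = np.asarray(signal, dtype=float)
    cl = clip_fraction * np.mean(np.abs(signal))
    return np.where(signal >= cl, signal - cl,
                    np.where(signal <= -cl, signal + cl, 0.0))

demo = np.array([0.9, 0.1, -0.5, 0.05, -0.05])
print(center_clip(demo))
```

Center clipping like this suppresses low-amplitude structure (e.g. formant detail) before autocorrelation, which sharpens the pitch peak in `new_autocorr`.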
6_analyze.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MAT281 - Lab N°07 # Name: <NAME> # # Rol: 201610519-0 # <a id='p1'></a> # # I.- Problem 01 # # # <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/Anscombe.svg/1200px-Anscombe.svg.png" width="360" height="360" align="center"/> # # # **Anscombe's quartet** comprises four data sets that have the same statistical properties but are evidently different when their respective plots are inspected. # # Each set consists of eleven (x, y) points and was constructed by the statistician F. J. Anscombe. The quartet is a demonstration of the importance of looking at a data set graphically before analyzing it. # + import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings("ignore") # %matplotlib inline sns.set_palette("deep", desat=.6) sns.set(rc={'figure.figsize':(11.7,8.27)}) # - # load the data df = pd.read_csv(os.path.join("data","anscombe.csv"), sep=",") df.head() x = df['x'] # + [markdown] tags=[] # Based on the information presented, answer the following questions: # # 1. Plot each group using a **scatter** plot. At first glance, are the groups very different from each other? # 2. Summarize the most significant statistical measures using the **describe** command for each group. Interpret. # 3. Perform a linear fit for each group. Also, plot the linear regression results for each group. Interpret. # 4. Compute the results of the metrics for each group. Interpret. # 5. It is clear that the linear fit is not the right one for some groups. There are several ways to solve this problem (removing outliers, other models, etc.).
Identify a strategy so that the linear regression model fits better, and implement other models in the cases where you find it necessary. # + [markdown] tags=[] # ### Question 1 # + tags=[] fig, axes = plt.subplots(4, sharex=True, figsize=(15,15)) word = 'Grupo_' for i in range(1,5): word_2 = word+str(i) # iterate over each group; this generates Grupo_i df1 = df.loc[df['grupo'] == word_2] # create a DataFrame for each group of the 'grupo' column sns.scatterplot(ax=axes[i-1], x ='x', y ='y', data=df1) # scatter plot axes[i-1].set_title(word_2) # use the group as the plot title for a in axes[:]: # plot adjustments a.tick_params(labelbottom=True) fig.tight_layout() # - # They all have different distributions; in particular, group 4 shows a vertical line indicating multiple values for x = 8. # ### Question 2 # + tags=[] word = 'Grupo_' for i in range(1,5): word_2 = word+str(i) # iterate over each group; this generates Grupo_i df1 = df.loc[df['grupo'] == word_2] # create a DataFrame for each group of the 'grupo' column df2 = df1.describe() print(word_2) print(df2) print('-----------------------O------------------------') # - # All groups have 11 data points in each of the x and y columns and, as mentioned, practically the same statistical characteristics, but they differ in their actual values. # ### Question 3 # The interpretations of the plots for each group are given in Question 5.
# + tags=[] import statsmodels.api as sm word = 'Grupo_' for i in range(1,5): word_2 = word+str(i) # iterate over each group; this generates Grupo_i df1 = df.loc[df['grupo'] == word_2] # dataframe for each group in the 'grupo' column model = sm.OLS(df1['y'], sm.add_constant(df1['x'])) # linear regression results = model.fit() print('------------------------O---------------------------------') print(word_2) print(results.summary()) # prints a summary of the results print('------------------------O---------------------------------') print(' ') print(' ') # + tags=[] # plots word = 'Grupo_' for i in range(1,5): word_2 = word+str(i) # iterate over each group; this generates Grupo_i df1 = df.loc[df['grupo'] == word_2] # dataframe for each group in the 'grupo' column sns.lmplot(x ='x', y ='y', data=df1, height = 7) # linear regression plot ax = plt.gca() ax.set_title(word_2) # plot title # + [markdown] tags=[] # ### Question 4 # + tags=[] import statsmodels.api as sm from sklearn.metrics import r2_score from metrics_regression import * word = 'Grupo_' for i in range(1,5): word_2 = word+str(i) # iterate over each group; this generates Grupo_i df1 = df.loc[df['grupo'] == word_2] # dataframe for each group in the 'grupo' column model = sm.OLS(df1['y'], sm.add_constant(df1['x'])) # linear regression results = model.fit() y_pred = results.predict(sm.add_constant(df1['x'])) df_temp = pd.DataFrame( # dataframe used to compute the different metrics per group { 'y':df1['y'], 'yhat': y_pred } ) print('------------------------O---------------------------------') print(word_2) print(summary_metrics(df_temp)) # summary of metrics print('------------------------O---------------------------------') print(' ') print(' ') # - # The 4 groups have similar values for some of the error metrics while differing in others, above all group 4, and that makes sense given its distribution of points.
# ### Question 5 # #### Grupo 4 # It makes no sense to fit a linear regression for group 4, since there are multiple y values at x = 8, which may indicate independent measurements of different phenomena at x = 8. Moreover, from a mathematical point of view, it makes no sense to look for a linear function representing these data, since by the definition of a function there cannot be distinct values at the same point x. # #### Grupo 1 # There is no need for a model other than linear regression for group 1: looking at its scatter plot (question 1), no clear polynomial or exponential relationship is observed, and in particular, given the small amount of data, it is not viable to claim that another model would better represent its trend. # # Moreover, the $R^2$ value of approximately 0.6 indicates that the data do not behave entirely linearly. # # In conclusion, linear regression is the best fit for this small dataset; furthermore, removing possible outliers is not viable in this case because there are very few points and their existence is not clear from the plot. # + [markdown] tags=[] # #### Grupo 2 # - # The scatter plot for group 2 is the following. # + sns.set(rc={'figure.figsize':(10,8)}) df1 = df.loc[df['grupo'] == 'Grupo_2'] # dataframe for group 2 sns.scatterplot(x ='x', y ='y', data=df1) # scatter plot # - # Clearly, this does not represent linear behavior, given the "waterfall" that appears from x = 12 onward; moreover, at first glance there is a tendency toward polynomial behavior, so that will be the approach. ax = sns.regplot(x="x", y="y", data= df1, scatter_kws={"s": 80}, order=2, ci=None) # order-2 polynomial fit using Seaborn # From this plot we conclude that an order-2 polynomial regression is the perfect fit for this dataset; nevertheless, let us check the $R^2$ score using sklearn.
# + from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression poly = PolynomialFeatures(degree=2) # polynomial features X_poly = poly.fit_transform(df1['x'].values.reshape(-1, 1)) # transform the data poly.fit(X_poly, df1['y']) lin2 = LinearRegression() lin2.fit(X_poly, df1['y']) # linear regression on the transformed data y_pred = lin2.predict(X_poly) print('R2 = ', r2_score(df1['y'], y_pred)) # R2 score for the polynomial fit # - # With this $R^2$ value we conclude that the order-2 polynomial fit is the appropriate one for this dataset. # #### Grupo 3 # The scatter plot shows the following # + tags=[] sns.set(rc={'figure.figsize':(10,8)}) df1 = df.loc[df['grupo'] == 'Grupo_3'] # dataframe for group 3 sns.scatterplot(x ='x', y ='y', data=df1) # scatter plot # - # Clearly there is a significant outlier, so we will remove it. As a rule of thumb, an outlier is a data point whose z-score is greater than 3 (in absolute value); there is none here, so we consider those greater than 2.5 as outliers.
# + from scipy import stats import statsmodels.api as sm pd.options.mode.chained_assignment = None # ignore the warnings df1 = df.loc[df['grupo'] == 'Grupo_3'] # dataframe for group 3 df1['z_score'] = stats.zscore(df1['y']) # z-scores for group 3 # keep only the data points whose z-score is below 2.5 df1['abs_z_score'] = df1['z_score'].abs() df2 = df1.loc[df1['abs_z_score'] < 2.5] sns.lmplot(x ='x', y ='y', data=df2, height = 7) # linear regression plot model = sm.OLS(df2['y'], sm.add_constant(df2['x'])) # linear regression results = model.fit() print('------------------------O---------------------------------') print('Grupo_3') print(results.summary()) # results table print('------------------------O---------------------------------') print(' ') print(' ') # - # We observe that $R^2$ is now 1, indicating that the data fit a straight line perfectly. # + #<NAME> - Rol: 201610519-0 # -
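The z-score filter used for Grupo_3 can also be sketched without pandas or scipy. A minimal pure-Python version of the same idea, on made-up illustrative data (not the notebook's anscombe.csv values):

```python
# Minimal sketch of z-score-based outlier filtering, as applied to Grupo_3 above.
# The data below are illustrative, not the actual anscombe.csv values.

def z_scores(values):
    """Return the z-score of each value: (v - mean) / std (population std)."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

def drop_outliers(xs, ys, threshold=2.5):
    """Keep only the (x, y) pairs whose y has |z-score| below the threshold."""
    zs = z_scores(ys)
    return [(x, y) for x, y, z in zip(xs, ys, zs) if abs(z) < threshold]

xs = list(range(11))
ys = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30]  # one obvious outlier at the end

kept = drop_outliers(xs, ys)
print(len(kept))  # 10 — the (10, 30) outlier is removed
```

The same 2.5 threshold is used as in the notebook; with only eleven points, the single extreme value is the only one whose |z-score| exceeds it.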
labs/lab_07_Cristobal_Lobos.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Splitting Datasets # In supervised training, you will want to split your training set into training and evaluation samples, and only use the test set for the final model test. # To achieve that, we zip the dataset with an index range, filter by a threshold, and then map the dataset back to drop the index. import tensorflow as tf import numpy as np # If you rerun this notebook, any open session should be closed first. try: sess.close() except NameError: print("Don't worry. Need to ignore this error once") sess = tf.InteractiveSession() # ### Scenario # We have 20 samples with their labels and want to split at 20% tr_img = np.array([[0,0],[1,1],[2,2],[3,3],[4,4],[5,5],[6,6],[7,7],[8,8],[9,9], [10,0],[11,1],[12,2],[13,3],[14,4],[15,5],[16,6],[17,7],[18,8],[19,9]]) tr_lbl = np.array([0,3,2,0,2,1,2,3,0,3,0,2,1,3,1,0,3,2,3,3]) N = 20 ratio = 0.2 idx = np.array(range(N)) # ### Creating the indexed datasets from the numpy arrays # + tr_img_tensor = tf.constant(tr_img) tr_lbl_tensor = tf.constant(tr_lbl) idx_tensor = tf.constant(idx) tr_img_ds = tf.data.Dataset.from_tensor_slices(tr_img_tensor) tr_lbl_ds = tf.data.Dataset.from_tensor_slices(tr_lbl_tensor) idx_ds = tf.data.Dataset.from_tensor_slices(idx_tensor) tr_ds = tf.data.Dataset.zip((tr_img_ds, tr_lbl_ds)).shuffle(buffer_size=N) tr_ds_i = tf.data.Dataset.zip((tr_ds, idx_ds)) # - # ### Now we filter by the index - that's equivalent to splitting the ```Dataset``` ds_ev = tr_ds_i.filter(lambda x,y: y < int(N * ratio)) ds_tr = tr_ds_i.filter(lambda x,y: y >= int(N * ratio)) it_ev = ds_ev.map(lambda x,y: x).batch(2).make_one_shot_iterator() it_tr = ds_tr.map(lambda x,y: x).repeat(3).batch(8).make_one_shot_iterator() # ### 20% of 20 samples means 4.
With batch size 2 that's *two* batches, before we run out of data sess.run(it_ev.get_next()) # ### The other 80% are repeated over three epochs with batch size 8. That's six batches to expect. sess.run(it_tr.get_next()) # ### Let's implement that in a single utility method def split(ds, N, ratio): idx = np.array(range(N)) idx_ds = tf.data.Dataset.from_tensor_slices(tf.constant(idx)) ds_i = tf.data.Dataset.zip((ds, idx_ds)) ds1 = ds_i.filter(lambda x,y: y < int(N * ratio)).map(lambda x,y: x) ds2 = ds_i.filter(lambda x,y: y >= int(N * ratio)).map(lambda x,y: x) return (ds1, ds2) ds = tf.data.Dataset.from_tensor_slices(tr_img_tensor) ds1, ds2 = split(ds, 20, 0.4) ds1 = ds1.batch(20).make_one_shot_iterator().get_next() ds2 = ds2.batch(20).make_one_shot_iterator().get_next() ds1, ds2 = sess.run([ds1, ds2]) # ### The dataset is split into 40% / 60% parts! ds1, ds2
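The index-zip/filter/map pattern is independent of TensorFlow; a minimal pure-Python sketch of the same split logic (the split is positional, so, as in the notebook, any shuffling must happen before zipping with the index):

```python
def split(samples, ratio):
    """Split a sequence by position: the first `ratio` fraction vs. the rest.

    Mirrors the tf.data version above: zip each sample with its index,
    filter by a threshold, then drop the index again.
    """
    n = len(samples)
    cut = int(n * ratio)
    indexed = list(zip(samples, range(n)))      # zip with an index range
    part1 = [s for s, i in indexed if i < cut]  # "evaluation" part
    part2 = [s for s, i in indexed if i >= cut] # "training" part
    return part1, part2

data = [[i, i % 10] for i in range(20)]
ev, tr = split(data, 0.2)
print(len(ev), len(tr))  # 4 16
```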
experiments/mnist_sota/splitting_datasets.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Question 1 import requests url = requests.get ('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M') import pandas as pd from bs4 import BeautifulSoup soup = BeautifulSoup (url.text, 'html.parser') table = soup.find ('table', attrs = {'class': 'wikitable sortable'}) table # + #Process data headers=table.findAll('th') for i, head in enumerate(headers): headers[i]=str(headers[i]).replace("<th>","").replace("</th>","").replace ('\n','') rows = table.findAll ('tr') rows = rows [1:len(rows)] for i, row in enumerate (rows): rows [i] = str (rows [i]).replace ('\n</td></tr>','').replace('<tr>\n<td>','') df=pd.DataFrame(rows) df[headers] = df[0].str.split("</td>\n<td>", n = 2, expand = True) df.drop(columns=[0],inplace=True) df = df.replace ('\n','', regex = True) df.head (10) # + #Process dataframe df = df.drop(df[(df.Borough == "Not assigned")].index) df.Neighborhood.replace ('Not assigned\n','Not assigned') df.Neighborhood.replace("Not assigned", df.Borough, inplace=True) df.Neighborhood.fillna (df.Borough, inplace = True) df = df.drop_duplicates () df.update( df.Neighborhood.loc[ lambda x: x.str.contains('title') ].str.extract('title=\"([^\"]*)', expand = False)) df.update( df.Borough.loc[ lambda x: x.str.contains('title') ].str.extract('title=\"([^\"]*)', expand = False)) df.update( df.Neighborhood.loc[ lambda x: x.str.contains('Toronto') ].str.replace(", Toronto","")) df.update( df.Neighborhood.loc[ lambda x: x.str.contains('Toronto') ].str.replace("\(Toronto\)","")) df2 = pd.DataFrame({'Postal Code':df['Postal Code'].unique()}) df2['Borough']=pd.DataFrame(list(set(df['Borough'].loc[df['Postal Code'] == x['Postal Code']])) for i, x in df2.iterrows()) df2['Neighborhood']=pd.Series(list(set(df['Neighborhood'].loc[df['Postal Code'] == 
x['Postal Code']])) for i, x in df2.iterrows()) df2['Neighborhood']=df2['Neighborhood'].apply(lambda x: ', '.join(x)) df2.dtypes df2 = df2.replace ('\n', '', regex = True) df2.head(20) # - # <h1>Question 2</h1> df3 = pd.read_csv("http://cocl.us/Geospatial_data") df3.set_index("Postal Code") df2.set_index("Postal Code") toronto_data=pd.merge(df2, df3) toronto_data.head() # <h1>Question 3</h1> # + from geopy.geocoders import Nominatim address = 'Toronto, ON, Canada' geolocator = Nominatim(user_agent="to_explorer") location = geolocator.geocode(address) latitude = location.latitude longitude = location.longitude print('The geographical coordinates of Toronto, ON, Canada are {}, {}.'.format(latitude, longitude)) # + import folium map_toronto = folium.Map (location = [latitude, longitude], zoom_start = 10.5) for lat, lng, borough, neighborhood in zip(toronto_data['Latitude'], toronto_data['Longitude'], toronto_data['Borough'], toronto_data['Neighborhood']): label = '{}, {}'.format(neighborhood, borough) label = folium.Popup(label, parse_html=True) folium.CircleMarker( [lat, lng], radius=5, popup=label, color='blue', fill=True, fill_color='#3186cc', fill_opacity=0.7, parse_html=False).add_to(map_toronto) map_toronto # + CLIENT_ID = 'ZCJD3ZOTFXDU4WYZ1OLMFWFENJWAEVO2TNXEKR1WFW1VE4QB' CLIENT_SECRET = '<KEY>' VERSION = '20200702' neighborhood_latitude = toronto_data.loc[0, 'Latitude'] # neighborhood latitude value neighborhood_longitude = toronto_data.loc[0, 'Longitude'] # neighborhood longitude value neighborhood_name = toronto_data.loc[0, 'Neighborhood'] # neighborhood name print('Latitude and longitude values of {} are {}, {}.'.format(neighborhood_name, neighborhood_latitude, neighborhood_longitude)) # - LIMIT = 100 # limit of number of venues radius = 500 url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format( CLIENT_ID, CLIENT_SECRET, VERSION, neighborhood_latitude, neighborhood_longitude, radius, LIMIT)
print (url) results = requests.get(url).json() print ('OK') # + import json from pandas.io.json import json_normalize def get_category_type(row): try: categories_list = row['categories'] except: categories_list = row['venue.categories'] if len(categories_list) == 0: return None else: return categories_list[0]['name'] venues = results['response']['groups'][0]['items'] nearby_venues = json_normalize(venues) # flatten JSON # filter columns filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng'] nearby_venues =nearby_venues.loc[:, filtered_columns] # filter the category for each row nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1) # clean columns nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns] nearby_venues.head() # - def getNearbyVenues(names, latitudes, longitudes, radius=500): venues_list=[] for name, lat, lng in zip(names, latitudes, longitudes): print(name) # create the API request URL url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format( CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, LIMIT) # make the GET request results = requests.get(url).json()["response"]['groups'][0]['items'] # return only relevant information for each nearby venue venues_list.append([( name, lat, lng, v['venue']['name'], v['venue']['location']['lat'], v['venue']['location']['lng'], v['venue']['categories'][0]['name']) for v in results]) nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list]) nearby_venues.columns = ['Neighborhood', 'Neighborhood Latitude', 'Neighborhood Longitude', 'Venue', 'Venue Latitude', 'Venue Longitude', 'Venue Category'] return (nearby_venues) toronto_venues = getNearbyVenues(names=toronto_data['Neighborhood'], latitudes=toronto_data['Latitude'], longitudes=toronto_data['Longitude'] ) toronto_venues.head () toronto_venues.shape 
toronto_venues.groupby('Neighborhood').count() # + toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="") toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood'] fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1]) toronto_onehot = toronto_onehot[fixed_columns] print (toronto_onehot.head()) toronto_onehot.shape # - # Grouping by neighbourhood toronto_neigh = toronto_onehot.groupby('Neighborhood').mean().reset_index() toronto_neigh # + limit_top = 5 for hood in toronto_neigh['Neighborhood']: print("----"+hood+"----") temp = toronto_neigh[toronto_neigh['Neighborhood'] == hood].T.reset_index() temp.columns = ['venue','freq'] temp = temp.iloc[1:] temp['freq'] = temp['freq'].astype(float) temp = temp.round({'freq': 2}) print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(limit_top)) print('\n') # - def return_most_common_venues(row, limit_top): row_categories = row.iloc[1:] row_categories_sorted = row_categories.sort_values(ascending=False) return row_categories_sorted.index.values[0:limit_top] # + import numpy as np num_top_venues = 10 indicators = ['st', 'nd', 'rd'] # create columns according to number of top venues columns = ['Neighborhood'] for ind in np.arange(limit_top): try: columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind])) except: columns.append('{}th Most Common Venue'.format(ind+1)) # create a new dataframe neighborhoods_venues_sorted = pd.DataFrame(columns=columns) neighborhoods_venues_sorted['Neighborhood'] = toronto_neigh['Neighborhood'] for ind in np.arange(toronto_neigh.shape[0]): neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_neigh.iloc[ind, :], limit_top) neighborhoods_venues_sorted.head() # - # <h2> Clustering # + from sklearn.cluster import KMeans k = 5 toronto_neigh_clustering = toronto_neigh.drop('Neighborhood', 1) kmeans = KMeans(n_clusters=k, random_state=0).fit(toronto_neigh_clustering) 
kmeans.labels_[0:10] # + neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_) toronto_merged = toronto_data # merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood') # check the last columns! toronto_merged.head() # + import matplotlib.cm as cm import matplotlib.colors as colors map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11) # set color scheme for the clusters x = np.arange(k) ys = [i + x + (i*x)**2 for i in range(k)] colors_array = cm.rainbow(np.linspace(0, 1, len(ys))) rainbow = [colors.rgb2hex(i) for i in colors_array] # add markers to the map markers_colors = [] for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']): label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True) folium.CircleMarker( [lat, lon], radius=5, popup=label, #color=rainbow[cluster-1], fill=True, #fill_color=rainbow[cluster-1], fill_opacity=0.7).add_to(map_clusters) map_clusters # - # <h2> Examine toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 1, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 2, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 3, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]] toronto_merged.loc[toronto_merged['Cluster Labels'] == 4, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
Week 3 Assignment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # A simple automatic differentiation example # + from typing import Union, List import numpy as np np.set_printoptions(precision=4) # - a = 3 a.__add__(4) # + a = np.array([2,3,1,0]) print(a) print("Addition using '__add__':", a.__add__(4)) print("Addition using '+':", a + 4) # + Numberable = Union[float, int] def ensure_number(num: Numberable): if isinstance(num, NumberWithGrad): return num else: return NumberWithGrad(num) class NumberWithGrad(object): def __init__(self, num: Numberable, depends_on: List[Numberable] = None, creation_op: str = ''): self.num = num self.grad = None self.depends_on = depends_on or [] self.creation_op = creation_op def __add__(self, other: Numberable): return NumberWithGrad(self.num + ensure_number(other).num, depends_on = [self, ensure_number(other)], creation_op = 'add') def __mul__(self, other: Numberable = None): return NumberWithGrad(self.num * ensure_number(other).num, depends_on = [self, ensure_number(other)], creation_op = 'mul') def backward(self, backward_grad: Numberable = None): if backward_grad is None: # backward was called for the first time self.grad = 1 else: # this is where gradients are accumulated # if there is no gradient yet, set it to backward_grad if self.grad is None: self.grad = backward_grad # if there is already a gradient, add backward_grad to the existing value else: self.grad += backward_grad if self.creation_op == "add": # propagate self.grad backward # increasing either operand increases the output by the same amount self.depends_on[0].backward(self.grad) self.depends_on[1].backward(self.grad) if self.creation_op == "mul": # compute the derivative with respect to the first operand new = self.depends_on[1] * self.grad # propagate this derivative backward self.depends_on[0].backward(new.num) # compute the derivative with respect to the second operand new = self.depends_on[0] * self.grad # propagate this derivative backward self.depends_on[1].backward(new.num) # - a = NumberWithGrad(3) b = a * 4 c = b + 3 c.backward() print(a.grad) # prints the expected value
print(b.grad) # prints the expected value a = NumberWithGrad(3) b = a * 4 c = b + 3 d = (a + 2) e = c * d e.backward() a.grad # prints the expected value
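The value of `a.grad` from the second graph can be sanity-checked numerically: with c = 4a + 3 and d = a + 2, the product rule gives d(c·d)/da = 4(a + 2) + (4a + 3) = 8a + 11, so the gradient at a = 3 should be 35. A central-difference sketch, independent of the `NumberWithGrad` class:

```python
def f(a):
    # same computation graph as above: c = 4a + 3, d = a + 2, e = c * d
    return (4 * a + 3) * (a + 2)

def numeric_grad(fn, x, h=1e-6):
    """Central finite-difference approximation of fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

analytic = 8 * 3 + 11  # f'(a) = 8a + 11 evaluated at a = 3
approx = numeric_grad(f, 3.0)
print(analytic, round(approx, 4))  # 35 35.0
```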
06_rnns/Autograd_Simple.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # GenericSimulationLibrary # # All the documentation can be found at https://readthedocs.org/projects/pypsdier/ # ## 1. Install Libraries # You *might* need to install the GenericSimulationLibrary and other libraries. # # If the notebook is run by mybinder, it should already have all the required dependencies. Otherwise, just run the following cells: # #!pip install git+https://github.com/sebastiandres/GenericSimulationLibrary # To install the latest development version from the repository # !pip install -i https://pypi.org/simple/ GenericSimulationLibrary # To install the latest official version # !pip install numpy # !pip install matplotlib # ## 2. The data # # Define the inputs for the simulation and plot options to be used. This can then later be stored in a simulation seed, if the `save` method is used. # The data inputs = { "x_min":0, "x_max":10, "N_points":12, "m":3.0, "b":-2.0 } plot_options = { "xlabel":"x [x_units]", "ylabel":"y [y_units]", "title":"My title", "data_x":[ 0.1, 2.1, 3.9, 6.1, 7.9, 9.9], "data_y":[-2.8, 3.6, 10.7, 13.6, 22.8, 27.1], # -2 + 3*x + error "data_kwargs": {'label':'exp', 'color':'red', 'marker':'s', 'markersize':6, 'linestyle':'none','linewidth':2, }, "sim_kwargs": {'label':'sim', 'color':'black', 'marker':'o', 'markersize':6, 'linestyle':'dashed','linewidth':2, }, } # ## 3. Using the SimulationInterface # You can have all your code in a single cell or in separate cells. # # Here we split the code into two cells to test the save method (first cell) and the load method (second cell). # # Don't forget to store/download the simulation seed, in case you want to avoid simulating again later.
# + from GenericSimulationLibrary import SimulationInterface SI = SimulationInterface() SI.new(inputs, plot_options) SI.simulate() SI.save("test.sim") SI.status() del SI # - # In the **second cell**, we create a new SimulationInterface, but using a simulation seed (no reference to the previous data). SI_2 = SimulationInterface() SI_2.load("test.sim") SI_2.status() # Should be exactly the same as before SI_2.plot(filename="test.png")
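The save/load round trip above comes down to serializing the interface state to disk and restoring it later. If you only need the pattern (not the library), a minimal sketch with `pickle` shows the idea — note that `SimulationSketch` and its fields are hypothetical stand-ins for illustration, not the GenericSimulationLibrary API:

```python
import pickle

class SimulationSketch:
    """Hypothetical stand-in illustrating the save/load round-trip pattern."""
    def __init__(self, inputs=None, results=None):
        self.inputs = inputs
        self.results = results

    def simulate(self):
        # toy "simulation": y = m*x + b on N_points integer x values
        m, b = self.inputs["m"], self.inputs["b"]
        self.results = [m * x + b for x in range(self.inputs["N_points"])]

    def save(self, filename):
        with open(filename, "wb") as fh:
            pickle.dump({"inputs": self.inputs, "results": self.results}, fh)

    @classmethod
    def load(cls, filename):
        with open(filename, "rb") as fh:
            state = pickle.load(fh)
        return cls(state["inputs"], state["results"])

s = SimulationSketch({"m": 3.0, "b": -2.0, "N_points": 5})
s.simulate()
s.save("sketch.sim")
s2 = SimulationSketch.load("sketch.sim")  # no reference to the original object
print(s2.results)  # [-2.0, 1.0, 4.0, 7.0, 10.0]
```

As with the real `load`, the restored object carries everything needed to plot or inspect results without re-running the simulation.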
tests/jupyter_test.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import nltk from konlpy.corpus import kolaw from konlpy.tag import * import pandas as pd import platform from collections import Counter from matplotlib import font_manager, rc from konlpy.tag import Twitter; t = Twitter() import numpy as np dfa = pd.read_csv("../petition_data_all.csv") df1 = dfa[dfa["count"]>=10] df1 ko_con_text = list(df1["title"]) ko_con_text = ' '.join(ko_con_text) print(len(ko_con_text)) tokens_ko = t.nouns(ko_con_text) tokens_ko nlist = list(df1["num"]) nlist tokens_ko = list(set(tokens_ko)) tokens_ko all_set = tokens_ko len(all_set) all_set = [] f = open("keyword.txt", 'r',encoding="UTF8") while True: line = f.readline() if not line: break all_set.append(line[:-1]) f.close() all_set df_name = list(df1["title"]) conut_lists = [] for i in range(len(df_name)): conut_list = [] for one_set in all_set: conut_list.append(df_name[i].count(one_set)) conut_lists.append(conut_list) len(all_set), len(set(all_set)), len(conut_lists[0]) len(nlist) columns = nlist columns.append("key_word") # + test = pd.DataFrame(columns=columns) test # - len(all_set) # + for i in range(len(all_set)): tlist = [] for j in range(len(conut_lists)): tlist.append(conut_lists[j][i]) tlist.append(all_set[i]) test.loc[i] = tlist if i % 1000 == 0: print(i) test # - print("TEst") test.to_csv("up10_t.csv",index=False)
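The nested counting loop above builds, for each title, a vector of keyword occurrence counts. A minimal pure-Python sketch of that step, with illustrative English strings rather than the petition data:

```python
def keyword_counts(titles, keywords):
    """For each title, count how often each keyword occurs (substring count)."""
    return [[title.count(kw) for kw in keywords] for title in titles]

titles = ["tax reform now", "school tax petition", "road repair petition"]
keywords = ["tax", "petition", "school"]

matrix = keyword_counts(titles, keywords)
print(matrix)  # [[1, 0, 0], [1, 1, 1], [0, 1, 0]]
```

Each row is one title's count vector over the keyword vocabulary — the same structure the notebook writes out column-by-column into `test` before saving it to CSV.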
Find-similar/make_keyword_vectorization - 복사본.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # Put the source code files into folders named after the corresponding language, e.g. Java code into the java folder. Then use ext-cp.py and build-sent-datasets.py to build the datasets; this finally produces two files, train.csv and test.csv. import torch import torchtext from torchtext.datasets import text_classification NGRAMS = 2 import os train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS']( root='./data/lang_codes', ngrams=NGRAMS, vocab=None) BATCH_SIZE = 16 device = torch.device("cuda" if torch.cuda.is_available() else "cpu") import torch.nn as nn import torch.nn.functional as F class TextSentiment(nn.Module): def __init__(self, vocab_size, embed_dim, num_class): super().__init__() self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True) self.fc = nn.Linear(embed_dim, num_class) self.init_weights() def init_weights(self): initrange = 0.5 self.embedding.weight.data.uniform_(-initrange, initrange) self.fc.weight.data.uniform_(-initrange, initrange) self.fc.bias.data.zero_() def forward(self, text, offsets): embedded = self.embedding(text, offsets) return self.fc(embedded) VOCAB_SIZE = len(train_dataset.get_vocab()) EMBED_DIM = 32 NUM_CLASS = len(train_dataset.get_labels()) model = TextSentiment(VOCAB_SIZE, EMBED_DIM, NUM_CLASS).to(device) def generate_batch(batch): label = torch.tensor([entry[0] for entry in batch]) text = [entry[1].long() for entry in batch] offsets = [0] + [len(entry) for entry in text] # torch.Tensor.cumsum returns the cumulative sum # of elements in the dimension dim.
# torch.Tensor([1.0, 2.0, 3.0]).cumsum(dim=0) offsets = torch.tensor(offsets[:-1]).cumsum(dim=0) # print(text) text = torch.cat(text) return text, offsets, label # + from torch.utils.data import DataLoader def train_func(sub_train_): # Train the model train_loss = 0 train_acc = 0 data = DataLoader(sub_train_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=generate_batch) for i, (text, offsets, cls) in enumerate(data): optimizer.zero_grad() text, offsets, cls = text.to(device), offsets.to(device), cls.to(device) output = model(text, offsets) loss = criterion(output, cls) train_loss += loss.item() loss.backward() optimizer.step() train_acc += (output.argmax(1) == cls).sum().item() # Adjust the learning rate scheduler.step() return train_loss / len(sub_train_), train_acc / len(sub_train_) def test(data_): loss = 0 acc = 0 data = DataLoader(data_, batch_size=BATCH_SIZE, collate_fn=generate_batch) for text, offsets, cls in data: text, offsets, cls = text.to(device), offsets.to(device), cls.to(device) with torch.no_grad(): output = model(text, offsets) loss = criterion(output, cls) loss += loss.item() acc += (output.argmax(1) == cls).sum().item() return loss / len(data_), acc / len(data_) # + import time from torch.utils.data.dataset import random_split N_EPOCHS = 5 min_valid_loss = float('inf') criterion = torch.nn.CrossEntropyLoss().to(device) optimizer = torch.optim.SGD(model.parameters(), lr=4.0) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9) train_len = int(len(train_dataset) * 0.95) sub_train_, sub_valid_ = \ random_split(train_dataset, [train_len, len(train_dataset) - train_len]) for epoch in range(N_EPOCHS): try: start_time = time.time() train_loss, train_acc = train_func(sub_train_) valid_loss, valid_acc = test(sub_valid_) secs = int(time.time() - start_time) mins = secs / 60 secs = secs % 60 print('Epoch: %d' %(epoch + 1), " | time in %d minutes, %d seconds" %(mins, secs)) print(f'\tLoss: {train_loss:.4f}(train)\t|\tAcc: {train_acc * 
100:.1f}%(train)') print(f'\tLoss: {valid_loss:.4f}(valid)\t|\tAcc: {valid_acc * 100:.1f}%(valid)') except Exception as ex: print('Epoch: %d' %(epoch + 1), ex) # - print('Checking the results of test dataset...') test_loss, test_acc = test(test_dataset) print(f'\tLoss: {test_loss:.4f}(test)\t|\tAcc: {test_acc * 100:.1f}%(test)') # + import re from torchtext.data.utils import ngrams_iterator from torchtext.data.utils import get_tokenizer ag_news_label = {1 : "cpp", 2 : "java", 3 : "python"} def predict(text, model, vocab, ngrams): tokenizer = get_tokenizer("basic_english") with torch.no_grad(): text = torch.tensor([vocab[token] for token in ngrams_iterator(tokenizer(text), ngrams)]) output = model(text, torch.tensor([0])) return output.argmax(1).item() + 1 ex_text_str = ''' static CPLErr OGRPolygonContourWriter( double dfLevelMin, double dfLevelMax, const OGRMultiPolygon& multipoly, void *pInfo ) { OGRContourWriterInfo *poInfo = static_cast<OGRContourWriterInfo *>(pInfo); OGRFeatureDefnH hFDefn = OGR_L_GetLayerDefn( static_cast<OGRLayerH>(poInfo->hLayer) ); OGRFeatureH hFeat = OGR_F_Create( hFDefn ); if( poInfo->nIDField != -1 ) OGR_F_SetFieldInteger( hFeat, poInfo->nIDField, poInfo->nNextID++ ); if( poInfo->nElevFieldMin != -1 ) OGR_F_SetFieldDouble( hFeat, poInfo->nElevFieldMin, dfLevelMin ); if( poInfo->nElevFieldMax != -1 ) OGR_F_SetFieldDouble( hFeat, poInfo->nElevFieldMax, dfLevelMax ); const bool bHasZ = wkbHasZ(OGR_FD_GetGeomType(hFDefn)); OGRGeometryH hGeom = OGR_G_CreateGeometry( bHasZ ? wkbMultiPolygon25D : wkbMultiPolygon ); for ( int iPart = 0; iPart < multipoly.getNumGeometries(); iPart++ ) { OGRPolygon* poNewPoly = new OGRPolygon(); const OGRPolygon* poPolygon = static_cast<const OGRPolygon*>(multipoly.getGeometryRef(iPart)); for ( int iRing = 0; iRing < poPolygon->getNumInteriorRings() + 1; iRing++ ) { const OGRLinearRing* poRing = iRing == 0 ? 
poPolygon->getExteriorRing() : poPolygon->getInteriorRing(iRing - 1); OGRLinearRing* poNewRing = new OGRLinearRing(); for ( int iPoint = 0; iPoint < poRing->getNumPoints(); iPoint++ ) { const double dfX = poInfo->adfGeoTransform[0] + poInfo->adfGeoTransform[1] * poRing->getX(iPoint) + poInfo->adfGeoTransform[2] * poRing->getY(iPoint); const double dfY = poInfo->adfGeoTransform[3] + poInfo->adfGeoTransform[4] * poRing->getX(iPoint) + poInfo->adfGeoTransform[5] * poRing->getY(iPoint); if( bHasZ ) OGR_G_SetPoint( OGRGeometry::ToHandle( poNewRing ), iPoint, dfX, dfY, dfLevelMax ); else OGR_G_SetPoint_2D( OGRGeometry::ToHandle( poNewRing ), iPoint, dfX, dfY ); } poNewPoly->addRingDirectly( poNewRing ); } OGR_G_AddGeometryDirectly( hGeom, OGRGeometry::ToHandle( poNewPoly ) ); } OGR_F_SetGeometryDirectly( hFeat, hGeom ); const OGRErr eErr = OGR_L_CreateFeature(static_cast<OGRLayerH>(poInfo->hLayer), hFeat); OGR_F_Destroy( hFeat ); return eErr == OGRERR_NONE ? CE_None : CE_Failure; } ''' vocab = train_dataset.get_vocab() model = model.to("cpu") print("This is a %s " %ag_news_label[predict(ex_text_str, model, vocab, 2)]) # -
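The `offsets` construction in `generate_batch` — sequence lengths prefixed with 0, then a cumulative sum dropping the last entry — can be sketched in pure Python to see why each offset is the start position of one sequence inside the concatenated batch tensor that `nn.EmbeddingBag` consumes:

```python
def bag_offsets(lengths):
    """Start offset of each sequence in a flat concatenation (EmbeddingBag-style)."""
    offsets, total = [], 0
    for n in lengths:
        offsets.append(total)  # this sequence starts where the previous ones ended
        total += n
    return offsets

# e.g. three token sequences of lengths 4, 2 and 5, concatenated into one tensor:
print(bag_offsets([4, 2, 5]))  # [0, 4, 6]
```

This matches `torch.tensor([0, 4, 2]).cumsum(dim=0)` from the notebook's construction: each sequence begins where the previous ones end.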
rnn_news_classification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + # Copyright 2021 Google LLC # Use of this source code is governed by an MIT-style # license that can be found in the LICENSE file or at # https://opensource.org/licenses/MIT. # Notebook authors: <NAME> (<EMAIL>) # and <NAME> (<EMAIL>) # This notebook reproduces figures for chapter 21 from the book # "Probabilistic Machine Learning: An Introduction" # by <NAME> (MIT Press, 2021). # Book pdf is available from http://probml.ai # - # <a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a> # <a href="https://colab.research.google.com/github/probml/pml-book/blob/main/pml1/figure_notebooks/chapter21_clustering_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # ## Figure 21.1:<a name='21.1'></a> <a name='purity'></a> # # Three clusters with labeled objects inside. #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') # !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null # %cd -q /pyprobml/scripts import pyprobml_utils as pml import colab_utils import os os.environ["PYPROBML"] = ".." # one above current scripts directory import google.colab from google.colab.patches import cv2_imshow # %reload_ext autoreload # %autoreload 2 def show_image(img_path,size=None,ratio=None): img = colab_utils.image_resize(img_path, size) cv2_imshow(img) print('finished!') # <img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_21.1.png" width="256"/> # ## Figure 21.2:<a name='21.2'></a> <a name='agglom'></a> # # (a) An example of single link clustering using city block distance. 
# Pairs (1,3) and (4,5) are both distance 1 apart, so get merged first. (b) The resulting dendrogram. Adapted from Figure 7.5 of <a href='#Alpaydin04'>[Alp04]</a> .
# Figure(s) generated by [agglomDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/agglomDemo.py)

# %run agglomDemo.py

# ## Figure 21.3:<a name='21.3'></a> <a name='hclust'></a>
#
# Illustration of (a) Single linkage. (b) Complete linkage. (c) Average linkage.

# <img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_21.3_A.png" width="256"/>

# <img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_21.3_B.png" width="256"/>

# <img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_21.3_C.png" width="256"/>

# ## Figure 21.4:<a name='21.4'></a> <a name='yeastHier'></a>
#
# Hierarchical clustering of yeast gene expression data. (a) Single linkage. (b) Complete linkage. (c) Average linkage.
# Figure(s) generated by [hclust_yeast_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/hclust_yeast_demo.py)

# %run hclust_yeast_demo.py

# ## Figure 21.5:<a name='21.5'></a> <a name='yeast'></a>
#
# (a) Some yeast gene expression data plotted as a heat map. (b) Same data plotted as a time series.
# Figure(s) generated by [yeast_data_viz.py](https://github.com/probml/pyprobml/blob/master/scripts/yeast_data_viz.py)

# %run yeast_data_viz.py

# ## Figure 21.6:<a name='21.6'></a> <a name='yeastHierClustering'></a>
#
# Hierarchical clustering applied to the yeast gene expression data. (a) The rows are permuted according to a hierarchical clustering scheme (average link agglomerative clustering), in order to bring similar rows close together. (b) 16 clusters induced by cutting the average linkage tree at a certain height.
# Figure(s) generated by [hclust_yeast_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/hclust_yeast_demo.py)

# %run hclust_yeast_demo.py

# ## Figure 21.7:<a name='21.7'></a> <a name='kmeansVoronoi'></a>
#
# Illustration of K-means clustering in 2d. We show the result of using two different random seeds. Adapted from Figure 9.5 of <a href='#Geron2019'>[Aur19]</a> .
# Figure(s) generated by [kmeans_voronoi.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_voronoi.py)

# %run kmeans_voronoi.py

# ## Figure 21.8:<a name='21.8'></a> <a name='yeastClustering'></a>
#
# Clustering the yeast data from \cref fig:yeast using K-means clustering with $K=16$. (a) Visualizing all the time series assigned to each cluster. (b) Visualizing the 16 cluster centers as prototypical time series.
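# The sensitivity of K-means to its random initialization (Figure 21.7) can be
# reproduced with scikit-learn. A hedged sketch on synthetic blobs, not the
# book's kmeans_voronoi.py:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=4, cluster_std=0.8, random_state=42)
# n_init=1 so each run uses a single random initialization; different seeds
# can converge to different local optima with different distortions.
for seed in (0, 1, 2):
    km = KMeans(n_clusters=4, n_init=1, random_state=seed).fit(X)
    print(seed, round(km.inertia_, 2))  # inertia_ is the distortion (sum of squared distances)
```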
# Figure(s) generated by [kmeans_yeast_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_yeast_demo.py)

# %run kmeans_yeast_demo.py

# ## Figure 21.9:<a name='21.9'></a> <a name='vqDemo'></a>
#
# An image compressed using vector quantization with a codebook of size $K$. (a) $K=2$. (b) $K=4$.
# Figure(s) generated by [vqDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/vqDemo.py)

# %run vqDemo.py

# ## Figure 21.10:<a name='21.10'></a> <a name='kmeansMinibatch'></a>
#
# Illustration of batch vs mini-batch K-means clustering on the 2d data from \cref fig:kmeansVoronoi . Left: distortion vs $K$. Right: training time vs $K$. Adapted from Figure 9.6 of <a href='#Geron2019'>[Aur19]</a> .
# Figure(s) generated by [kmeans_minibatch.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_minibatch.py)

# %run kmeans_minibatch.py

# ## Figure 21.11:<a name='21.11'></a> <a name='kmeansVsK'></a>
#
# Performance of K-means and GMM vs $K$ on the 2d dataset from \cref fig:kmeansVoronoi . (a) Distortion on validation set vs $K$.
# Figure(s) generated by [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py) [gmm_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_2d.py)

# %run kmeans_silhouette.py
# %run gmm_2d.py

# ## Figure 21.12:<a name='21.12'></a> <a name='kmeansSilhouetteVoronoi'></a>
#
# Voronoi diagrams for K-means for different $K$ on the 2d dataset from \cref fig:kmeansVoronoi .
# Figure(s) generated by [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py)

# %run kmeans_silhouette.py

# ## Figure 21.13:<a name='21.13'></a> <a name='kmeansSilhouetteDiagram'></a>
#
# Silhouette diagrams for K-means for different $K$ on the 2d dataset from \cref fig:kmeansVoronoi .
# Figure(s) generated by [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py)

# %run kmeans_silhouette.py

# ## Figure 21.14:<a name='21.14'></a> <a name='gmm2dShapes'></a>
#
# Some data in 2d fit using a GMM with $K=5$ components. Left column: marginal distribution $p(\mathbf x )$. Right column: visualization of each mixture distribution, and the hard assignment of points to their most likely cluster. (a-b) Full covariance. (c-d) Tied full covariance. (e-f) Diagonal covariance. (g-h) Spherical covariance. Color coding is arbitrary.
# Figure(s) generated by [gmm_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_2d.py)

# %run gmm_2d.py

# ## Figure 21.15:<a name='21.15'></a> <a name='gmmIdentifiabilityData'></a>
#
# Some 1d data, with a kernel density estimate superimposed. Adapted from Figure 6.2 of <a href='#Martin2018'>[Mar18]</a> .
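# The four covariance constraints compared in Figure 21.14 correspond to
# scikit-learn's covariance_type options. A sketch on synthetic blobs, not the
# book's gmm_2d.py; lower BIC indicates a better fit/complexity trade-off:

```python
from sklearn.mixture import GaussianMixture
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=5, random_state=0)
for cov in ('full', 'tied', 'diag', 'spherical'):
    gmm = GaussianMixture(n_components=5, covariance_type=cov, random_state=0).fit(X)
    labels = gmm.predict(X)  # hard assignment of points to their most likely cluster
    print(cov, round(gmm.bic(X), 1))
```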
# Figure(s) generated by [gmm_identifiability_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_identifiability_pymc3.py)

# %run gmm_identifiability_pymc3.py

# ## Figure 21.16:<a name='21.16'></a> <a name='gmmIdentifiability'></a>
#
# Illustration of the label switching problem when performing posterior inference for the parameters of a GMM. We show a KDE estimate of the posterior marginals derived from 1000 samples from 4 HMC chains. (a) Unconstrained model. Posterior is symmetric. (b) Constrained model, where we add a penalty to ensure $\mu _0 < \mu _1$. Adapted from Figures 6.6-6.7 of <a href='#Martin2018'>[Mar18]</a> .
# Figure(s) generated by [gmm_identifiability_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_identifiability_pymc3.py)

# %run gmm_identifiability_pymc3.py

# ## Figure 21.17:<a name='21.17'></a> <a name='gmmChooseKkde'></a>
#
# Fitting GMMs with different numbers of clusters $K$ to the data in \cref fig:gmmIdentifiabilityData . Black solid line is the KDE fit. Solid blue line is the posterior mean; faint blue lines are posterior samples. Dotted lines show the individual Gaussian mixture components, evaluated by plugging in their posterior mean parameters. Adapted from Figure 6.8 of <a href='#Martin2018'>[Mar18]</a> .
# Figure(s) generated by [gmm_chooseK_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_chooseK_pymc3.py)

# %run gmm_chooseK_pymc3.py

# ## Figure 21.18:<a name='21.18'></a> <a name='gmmChooseKwaic'></a>
#
# WAIC scores for the different GMMs. The empty circle is the posterior mean WAIC score for each model, and the black lines represent the standard error of the mean. The solid circle is the in-sample deviance of each model, i.e., the unpenalized log-likelihood. The dashed vertical line corresponds to the maximum WAIC value. The gray triangle is the difference in WAIC score for that model compared to the best model. Adapted from Figure 6.10 of <a href='#Martin2018'>[Mar18]</a> .
# Figure(s) generated by [gmm_chooseK_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_chooseK_pymc3.py)

# %run gmm_chooseK_pymc3.py

# ## Figure 21.19:<a name='21.19'></a> <a name='scRings'></a>
#
# Clustering data consisting of 2 spirals. (a) K-means. (b) Spectral clustering.
# Figure(s) generated by [spectral_clustering_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/spectral_clustering_demo.py)

# %run spectral_clustering_demo.py

# ## Figure 21.20:<a name='21.20'></a> <a name='IRManimals'></a>
#
# Illustration of biclustering. We show 5 of the 12 organism clusters, and 6 of the 33 feature clusters. The original data matrix is shown, partitioned according to the discovered clusters. From Figure 3 of <a href='#Kemp06'>[KTU06]</a> . Used with kind permission of <NAME>.

# <img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_21.20.png" width="256"/>

# ## Figure 21.21:<a name='21.21'></a> <a name='biclustering'></a>
#
# (a) Example of biclustering. Each row is assigned to a unique cluster, and each column is assigned to a unique cluster. (b) Example of multi-clustering using a nested partition model. The rows can belong to different clusters depending on which subset of column features we are looking at.

# <img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_21.21_A.png" width="256"/>

# <img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_21.21_B.png" width="256"/>

# ## Figure 21.22:<a name='21.22'></a> <a name='crosscat'></a>
#
# MAP estimate produced by the crosscat system when applied to a binary data matrix of animals (rows) by features (columns). See text for details. From Figure 7 of <a href='#Shafto06'>[Sha+06]</a> . Used with kind permission of <NAME>.

# <img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_21.22.png" width="256"/>

# ## References:
# <a name='Alpaydin04'>[Alp04]</a> <NAME> "Introduction to machine learning". (2004).
#
# <a name='Geron2019'>[Aur19]</a> <NAME> "Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for Building Intelligent Systems (2nd edition)". (2019).
#
# <a name='Kemp06'>[KTU06]</a> <NAME>, <NAME> and <NAME>. "Learning systems of concepts with an infinite relational model". (2006).
#
# <a name='Martin2018'>[Mar18]</a> <NAME> "Bayesian analysis with Python". (2018).
#
# <a name='Shafto06'>[Sha+06]</a> <NAME>, <NAME>, <NAME>, <NAME> and <NAME>. "Learning cross-cutting systems of categories". (2006).
pml1/figure_notebooks/chapter21_clustering_figures.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import pandas as pd
import numpy as np

# +
df_results = pd.read_csv('/Users/joejohns/data_bootcamp/results_Leung.csv')
df_data = pd.read_csv('/Users/joejohns/Downloads/nhl_data_Leung.csv')
df_data.dropna(inplace=True)
# -

df_data.shape

df_results.shape

df_results.loc[(df_results['HomeTeam'] == 'Anaheim Ducks') | (df_results['AwayTeam'] == 'Anaheim Ducks'),
               ['HomeTeam', 'AwayTeam', 'Date']][0:20]

df = df_data  # 'df' is used below but was never defined; assuming it refers to the nhl data

df.columns

df.groupby(['endDate', 'teamAbbreviation'])['teamAbbreviation'].nunique().value_counts()  # bueno!
# looks like endDate is the date and 'date' is a range with right end point = endDate

df2 = df.loc[:, ['teamAbbreviation', 'endDate', 'teamId', 'gA', 'gF', 'gF%', 'gA60']].sort_values(
    by=['teamAbbreviation', 'endDate']).copy()

# Sample endDate values:
# 0   2007/09/29
# 1   2007/09/30
# 3   2007/10/03
# 15  2007/10/05
# 26  2007/10/06
# 50  2007/10/10
# 75  2007/10/14
# 79  2007/10/15
# 87  2007/10/17
# 109 2007/10/20
# 122 2007/10/23
# 137 2007/10/25
# 157 2007/10/28
# 180 2007/11/01
# 195 2007/11/03
# 208 2007/11/05
# 218 2007/11/07
# 232 2007/11/09
# 255 2007/11/13
# 269 2007/11/15

df2.head(40)

df.loc[:50, ['date', 'endDate', 'teamAbbreviation', 'teamId', 'gF', 'gA', 'gF%', 'gA60', 'gF60', 'gP',
             'xFSh%', 'xFSv%', 'xGA', 'xGA60', 'xGF', 'xGF%', 'xGF60', 'xGplusminus', 'xPDO']]

def make_mp_date(s):
    # 'YYYY-M-D' -> int YYYYMMDD, zero-padding single-digit month/day
    ymd = s.split('-')
    y, m, d = ymd[0], ymd[1], ymd[2]
    if len(m) == 1:
        m = '0' + m
    if len(d) == 1:
        d = '0' + d
    return int(y + m + d)

'2014-1-16'.split('-')

make_mp_date('2014-10-16')

v_mp_date = np.vectorize(make_mp_date)
df['mp_date'] = v_mp_date(df[''])  # the column name is missing in the original; left as-is
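# The hand-rolled make_mp_date parser above can be cross-checked against pandas.
# A self-contained sketch; the sample dates are made up:

```python
import pandas as pd

def make_mp_date(s):
    # 'YYYY-M-D' -> int YYYYMMDD, zero-padding month and day
    y, m, d = s.split('-')
    return int(y + m.zfill(2) + d.zfill(2))

dates = pd.Series(['2014-10-16', '2014-1-6', '2007-9-29'])
# pandas equivalent: parse, then format back out as YYYYMMDD integers
via_pandas = pd.to_datetime(dates).dt.strftime('%Y%m%d').astype(int)
assert list(dates.map(make_mp_date)) == list(via_pandas)
print(list(via_pandas))  # [20141016, 20140106, 20070929]
```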
Note_books/Explore_Data/Explore_Leung_Data.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + active=""
# Problem:
# Given an integer n, return a string of n characters such that each character
# occurs an odd number of times. The returned string may only contain lowercase
# English letters. If there are multiple valid strings, return any of them.
#
# Example 1:
# Input: n = 4
# Output: "pppz"
# Explanation: "pppz" is a valid string since the character 'p' occurs three times and the character 'z' occurs once. Note that there are many other valid strings such as "ohhh" and "love".
#
# Example 2:
# Input: n = 2
# Output: "xy"
# Explanation: "xy" is a valid string since the characters 'x' and 'y' occur once. Note that there are many other valid strings such as "ag" and "ur".
#
# Example 3:
# Input: n = 7
# Output: "holasss"
#
# Constraints:
# 1. 1 <= n <= 500
# -

# First attempt: only valid for n <= 26 (each of the first n letters occurs
# exactly once, which is odd); the n > 26 case was never finished and
# left_val is unused.
class Solution:
    def generateTheString(self, n: int) -> str:
        alphabets = list(range(97, 123))
        res = ''
        if n <= 26:
            for i in range(n):
                res += chr(alphabets[i])
        left_val = n - 26
        return res


class Solution:
    def generateTheString(self, n: int) -> str:
        # Use the largest odd number s1 <= n for 'a', so that 'a' occurs an
        # odd number of times; the remainder s2 (0 or 1) goes to 'b'.
        s1 = n - 1 if n % 2 == 0 else n
        s2 = n - s1
        if s2 > 0:
            return 'a' * s1 + 'b' * s2
        return 'a' * s1


solution = Solution()
solution.generateTheString(7)

print(ord('a'))
print(ord('z'))
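# The accepted solution above can be checked exhaustively over the full
# constraint range 1 <= n <= 500. A small verification sketch:

```python
from collections import Counter

def generate_the_string(n):
    # Largest odd s1 <= n goes to 'a'; the remainder (0 or 1) goes to 'b'.
    s1 = n - 1 if n % 2 == 0 else n
    s2 = n - s1
    return 'a' * s1 + 'b' * s2

for n in range(1, 501):
    s = generate_the_string(n)
    assert len(s) == n and s.islower()
    assert all(count % 2 == 1 for count in Counter(s).values())
print('all n in 1..500 pass')
```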
String/1012/1374. Generate a String With Characters That Have Odd Counts.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # importing documnets for all of 10 files: # # 1. Athabasca Glaciers: with open('doc_AthabascaGlaciers.txt','r') as myfile: doc_AthabascaGlaciers=myfile.read().replace('\n','') print("This string has", len(doc_AthabascaGlaciers), "characters.") type(doc_AthabascaGlaciers) # # 2. Athabasca Falls with open('doc_AthabscaFalls.txt','r') as myfile: doc_AthabscaFalls=myfile.read().replace('\n','') print("This string has", len(doc_AthabscaFalls), "characters.") type(doc_AthabscaFalls) # # 3. Columbia Ice Fields: with open('doc_ColumbiaIcefield.txt','r') as myfile: doc_ColumbiaIcefield=myfile.read().replace('\n','') print("This string has", len(doc_ColumbiaIcefield), "characters.") type(doc_ColumbiaIcefield) # # 4. Maligne Canyon: with open('doc_MaligneCanyon.txt','r') as myfile: doc_MaligneCanyon=myfile.read().replace('\n','') print("This string has", len(doc_MaligneCanyon), "characters.") type(doc_MaligneCanyon) # # 5. Maligne Lake: with open('doc_MaligneLake.txt','r') as myfile: doc_MaligneLake=myfile.read().replace('\n','') print("This string has", len(doc_MaligneLake), "characters.") type(doc_MaligneLake) # # 6. Mount <NAME> Trail: with open('doc_MountEdithCavellTrail.txt','r') as myfile: doc_MountEdithCavellTrail=myfile.read().replace('\n','') print("This string has", len(doc_MountEdithCavellTrail), "characters.") type(doc_MountEdithCavellTrail) # # 7. <NAME>: with open('doc_MtEdithCavell.txt','r') as myfile: doc_MtEdithCavell=myfile.read().replace('\n','') print("This string has", len(doc_MtEdithCavell), "characters.") type(doc_MtEdithCavell) # # 8. 
Pyramid and Patricia Lakes: with open('doc_PyramidPatriciaLakes.txt','r') as myfile: doc_PyramidPatriciaLakes=myfile.read().replace('\n','') print("This string has", len(doc_PyramidPatriciaLakes), "characters.") type(doc_PyramidPatriciaLakes) # # 9. Sunwapta Falls: with open('doc_SunwaptaFalls.txt','r') as myfile: doc_SunwaptaFalls=myfile.read().replace('\n','') print("This string has", len(doc_SunwaptaFalls), "characters.") type(doc_SunwaptaFalls) # # 10. Sunwapta Falls and Canyons: with open('doc_SunwaptaFallsCanyons.txt','r') as myfile: doc_SunwaptaFallsCanyons=myfile.read().replace('\n','') print("This string has", len(doc_SunwaptaFallsCanyons), "characters.") type(doc_SunwaptaFallsCanyons) # # A list that contains all of the 10 documents: doc_set = [doc_AthabascaGlaciers, doc_AthabscaFalls, doc_ColumbiaIcefield, doc_MaligneCanyon, doc_MaligneLake, doc_MountEdithCavellTrail, doc_MtEdithCavell, doc_PyramidPatriciaLakes, doc_SunwaptaFalls, doc_SunwaptaFallsCanyons] type(doc_set) len(doc_set) # # Now that we have all of 10 documents in string format, we try to clean them: # # 1. Cleaning Athabsca Glaciers: from nltk.tokenize import RegexpTokenizer tokenizer = RegexpTokenizer(r'\w+') raw1 = doc_AthabascaGlaciers.lower() tokens1 = tokenizer.tokenize(raw1) print(tokens1[:20]) # #stop words: # + from stop_words import get_stop_words # create English stop words list en_stop = get_stop_words('en') # + # remove stop words from tokens stopped_tokens1 = [i for i in tokens1 if not i in en_stop] print(stopped_tokens1[:20]) # - # #Stemming: # + from nltk.stem.porter import PorterStemmer # Create p_stemmer of class PorterStemmer p_stemmer = PorterStemmer() # + # stem token text1 = [p_stemmer.stem(i) for i in stopped_tokens1] print(text1[:20]) # - # # 2. 
Cleaning Athabsca falls: raw2 = doc_AthabscaFalls.lower() tokens2 = tokenizer.tokenize(raw2) stopped_tokens2 = [i for i in tokens2 if not i in en_stop] text2 = [p_stemmer.stem(i) for i in stopped_tokens2] print(text2[:20]) # # 3. Cleaning Columbia Ice Fields: raw3 = doc_ColumbiaIcefield.lower() tokens3 = tokenizer.tokenize(raw3) stopped_tokens3 = [i for i in tokens3 if not i in en_stop] text3 = [p_stemmer.stem(i) for i in stopped_tokens3] print(text3[:20]) # # 4. Cleaning Maligne Canyon: raw4 = doc_MaligneCanyon.lower() tokens4 = tokenizer.tokenize(raw4) stopped_tokens4 = [i for i in tokens4 if not i in en_stop] text4 = [p_stemmer.stem(i) for i in stopped_tokens4] print(text4[:20]) # # 5. Cleaning Maligne Lake: raw5 = doc_MaligneLake.lower() tokens5 = tokenizer.tokenize(raw5) stopped_tokens5 = [i for i in tokens5 if not i in en_stop] text5 = [p_stemmer.stem(i) for i in stopped_tokens5] print(text5[:20]) # # 6. Cleaning Mount Edith Cavell Trail: raw6 = doc_MountEdithCavellTrail.lower() tokens6 = tokenizer.tokenize(raw6) stopped_tokens6 = [i for i in tokens6 if not i in en_stop] text6 = [p_stemmer.stem(i) for i in stopped_tokens6] print(text6[:20]) # # 7. Cleaning Mt. Edith Cavell: raw7 = doc_MtEdithCavell.lower() tokens7 = tokenizer.tokenize(raw7) stopped_tokens7 = [i for i in tokens7 if not i in en_stop] text7 = [p_stemmer.stem(i) for i in stopped_tokens7] print(text7[:20]) # # 8. Cleaning Pyramid and Patricia Lakes: raw8 = doc_PyramidPatriciaLakes.lower() tokens8 = tokenizer.tokenize(raw8) stopped_tokens8 = [i for i in tokens8 if not i in en_stop] text8 = [p_stemmer.stem(i) for i in stopped_tokens8] print(text8[:20]) # # 9. Cleaning Sunwapta Falls: raw9 = doc_SunwaptaFalls.lower() tokens9 = tokenizer.tokenize(raw9) stopped_tokens9 = [i for i in tokens9 if not i in en_stop] text9 = [p_stemmer.stem(i) for i in stopped_tokens9] print(text9[:20]) # # 10. 
Cleaning Sunwapta Falls and Canyons:

raw10 = doc_SunwaptaFallsCanyons.lower()
tokens10 = tokenizer.tokenize(raw10)
stopped_tokens10 = [i for i in tokens10 if i not in en_stop]
text10 = [p_stemmer.stem(i) for i in stopped_tokens10]
print(text10[:20])

# # Making a list of lists:

texts = [text1, text2, text3, text4, text5, text6, text7, text8, text9, text10]
type(texts)

# # Constructing a document-term matrix:

# +
from gensim import corpora, models

dictionary = corpora.Dictionary(texts)
# -

corpus = [dictionary.doc2bow(text) for text in texts]
print(corpus[0])

# # Applying the LDA model:

import gensim

ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=5, id2word=dictionary, passes=20)
print(ldamodel.print_topics(num_topics=5, num_words=10))

# # An alternative way to train the model:

print('Number of unique tokens: %d' % len(dictionary))
print('Number of documents: %d' % len(corpus))

# +
# Train the LDA model.
from gensim.models import LdaModel

# Set training parameters.
num_topics = 5
chunksize = 2000
passes = 20
iterations = 400
eval_every = None  # Don't evaluate model perplexity -- it takes too much time.

# Make an index-to-word dictionary.
temp = dictionary[0]  # This is only to "load" the dictionary.
id2word = dictionary.id2token

# %time model = LdaModel(corpus=corpus, id2word=id2word, chunksize=chunksize, \
#     alpha='auto', eta='auto', \
#     iterations=iterations, num_topics=num_topics, \
#     passes=passes, eval_every=eval_every)

# +
top_topics = model.top_topics(corpus=corpus, dictionary=dictionary, coherence='u_mass', topn=10, processes=-1)

# The average topic coherence is the sum of the topic coherences of all topics, divided by the number of topics.
avg_topic_coherence = sum([t[1] for t in top_topics]) / num_topics
print('Average topic coherence: %.4f.' % avg_topic_coherence)

from pprint import pprint
pprint(top_topics)

# +
top_topics = model.show_topics(num_topics=5, num_words=10, log=False, formatted=True)

from pprint import pprint
pprint(top_topics)
# -
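# The ten per-document cleaning cells above all repeat the same tokenize, stop-word, stem pipeline, which can be collapsed into one function. The sketch below is self-contained and therefore swaps in stand-ins: `re.findall(r'\w+', ...)` matches the behaviour of nltk's `RegexpTokenizer(r'\w+')`, the small `EN_STOP` set stands in for the much larger `get_stop_words('en')` list, and the crude suffix stripping is only a toy substitute for the notebook's `PorterStemmer`.

```python
import re

# Small stand-in stop-word list; the notebook uses the full English
# list from the stop_words package.
EN_STOP = {"the", "a", "an", "and", "of", "in", "is", "are", "to", "it"}

def clean(doc, stop_words=EN_STOP):
    """Tokenize, lowercase, drop stop words, and crudely stem a document."""
    tokens = re.findall(r'\w+', doc.lower())          # like RegexpTokenizer(r'\w+')
    stopped = [t for t in tokens if t not in stop_words]
    stemmed = []
    for t in stopped:
        # toy stemmer: strip one common suffix (PorterStemmer stand-in)
        for suffix in ("ing", "ed", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

# the list-of-lists fed to corpora.Dictionary would then be:
texts = [clean(d) for d in ["The glacier is melting fast.",
                            "Sunwapta Falls and the canyons."]]
print(texts)
```

# With a helper like this, `text1` through `text10` reduce to `texts = [clean(d) for d in doc_set]`.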
.ipynb_checkpoints/LDA Topic moeling-Finished!!-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: sins # language: python # name: sins # --- # %matplotlib inline # %reload_ext autoreload # %autoreload 2 import IPython.display as ipd from matplotlib import pyplot as plt import matplotlib.gridspec as gridspec # + from pathlib import Path import numpy as np import json import torch from pprint import pprint from sins.database.database import SINS, AudioReader from sins.database.utils import prepare_sessions from sins.systems.sad.model import BinomialClassifier from sins.systems.modules import CNN2d, CNN1d, AutoPool from sins.features.stft import STFT from sins.features.mel_transform import MelTransform from sins.features.normalize import Normalizer from sins.systems.utils import Collate from sins import paths from sins.systems.sad.utils import load_sections, get_sections_in_range # - exp_dir = paths.exp_dir / 'sad' / '2019-10-07-06-57-23' with (exp_dir / '1' / 'config.json').open() as f: config = json.load(f) # ## Load Model # + model = BinomialClassifier( cnn_2d=CNN2d(**config['model']['cnn_2d']), cnn_1d=CNN1d(**config['model']['cnn_1d']), pooling=AutoPool(**config['model']['pool']) ) ckpt = torch.load(exp_dir / 'ckpt-best.pth') model.load_state_dict(ckpt) model.eval() model.pooling.alpha = 2.0 pool_sizes = [ pool_size[1] if isinstance(pool_size, (list, tuple)) else pool_size for pool_size in (model._cnn_2d.pool_sizes + model._cnn_1d.pool_sizes) ] total_pool_size = np.prod(pool_sizes) # - # ## Example sad # + db = SINS() presence = prepare_sessions( db.sessions, room='living', include_absence=True, discard_other_rooms=True, discard_ambiguities=False, label_map_fn=lambda label: (False if label == "absence" else True) ) eval_sets = db.get_segments( db.room_to_nodes['living'], max_segment_length=60., time_ranges=db.eval_ranges, sessions=presence, session_key='presence' ) audio_reader = 
AudioReader(**config['audio_reader']) stft = STFT(**config['stft']) mel_transform = MelTransform(**config['mel_transform']) normalizers = [ Normalizer("mel_transform", name=node, **config['normalizer']) for node in db.room_to_nodes['living'] ] [normalizer.initialize_moments(verbose=True) for normalizer in normalizers] def pretend_batch(example): example['features'] = torch.Tensor(example['mel_transform'].transpose((0,2,1))[:, None]) example['seq_len'] = 4*[example['mel_transform'].shape[-1]] return example eval_sets = [ ds.map(audio_reader).map(stft).map(mel_transform).map(normalizer).map(pretend_batch) for ds, normalizer in zip(eval_sets, normalizers) ] # - example = eval_sets[0][200] print(example['presence']) sad, seq_len = model(example) sad = sad.cpu().data.numpy().mean((0,1)) n_on, n_off = 0, 300 x = example['mel_transform'][0, 10*n_on:10*n_off].T y = sad[n_on:n_off] fig, axes = plt.subplots(2,1) axes[0].imshow(x, interpolation='nearest', aspect='auto', origin="lower") axes[1].plot(y, linewidth=2) axes[1].set_xlim([-0.5, y.shape[0]-0.5]) # ## Evaluate sad from sins.systems.sad.evaluate import prepare_dataset, fscore, prepare_targets, simple_sad sound_events_dir = '/path/to/dcase2016_task2_train_sounds' # ### test metric scores = np.array([[[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]]]) targets = prepare_targets( scores, onset_frames=np.array([6]), offset_frames=np.array([12.]) ) print(targets) print(fscore(targets, scores>0.5)) # ### plot example mix # + eval_sets, onset_frames, offset_frames = prepare_dataset( exp_dir, sound_events_dir, mixtures_per_sound=1, segment_length=20., snr=0., nodes=["Node1"], num_workers=8, prefetch_buffer=16, seed=0, dataset_name='eval', prefetch=False ) example_idx = 100 batch = eval_sets[0][example_idx] y, seq_len = model(Collate()(batch)) y = y.cpu().data.numpy().mean((0,1)) x = batch[1]['features'][0] targets = prepare_targets( y[None,None], np.array([onset_frames[example_idx] / total_pool_size]), np.array([offset_frames[example_idx] 
/ total_pool_size]) )[0, 0] print(targets.shape) n_on = 0 n_off = 1000 x = x[:, n_on:n_off] y = y[n_on//total_pool_size:n_off//total_pool_size] targets = targets[n_on//total_pool_size:n_off//total_pool_size] fig, axes = plt.subplots(3,1) axes[0].imshow( x,#[...,int(onset_frames[n]-10):int(offset_frames[n]+10)], interpolation='nearest', aspect='auto', origin="lower" ) axes[1].plot(targets, linewidth=2) axes[1].set_ylim([0., 1.]) axes[1].set_xlim([-0.5, targets.shape[-1]-0.5]) axes[2].plot(y, linewidth=2) axes[2].set_ylim([0., 1.]) axes[2].set_xlim([-0.5, y.shape[-1]-0.5]); # - # ## Listen to active sections nodes = config["nodes"] sections = load_sections(exp_dir / 'ensemble_sections.json') train_sections = get_sections_in_range(sections, db.train_ranges) datasets = db.get_segments( nodes, min_segment_length=1., max_segment_length=60., time_ranges=train_sections ) datasets = [ds.map(audio_reader) for ds in datasets] sec_idx = 100 for ds in datasets: ipd.display(ipd.Audio(ds[sec_idx]['audio_data'][0], rate=16000)) # ## Correlate Node sections # + def compute_overlap(sections_a, sections_b, shift=0.): sections_b = [(start - shift, stop - shift) for (start, stop) in sections_b] overlap = 0. idx_b = 0 for section_a in sections_a: while idx_b < len(sections_b) and sections_b[idx_b][-1] < section_a[-2]: idx_b += 1 while idx_b < len(sections_b) and sections_b[idx_b][-1] < section_a[-1]: overlap += sections_b[idx_b][-1] - max(section_a[-2], sections_b[idx_b][-2]) idx_b += 1 if idx_b < len(sections_b): overlap += max( section_a[-1] - max(section_a[-2], sections_b[idx_b][-2]), 0.) 
return overlap def correlate_segments(sections_a, sections_b, shifts=np.arange(-1., 1.1, .2)): return np.array([ compute_overlap(sections_a, sections_b, shift) for shift in shifts ]) def compute_shift_mat(sections, shifts=np.arange(-1., 1.1, .2)): shift_mat = np.array([ [ shifts[np.argmax(correlate_segments( sections[node_1], sections[node_2], shifts ))] for node_2 in range(len(sections)) ] for node_1 in range(len(sections)) ]) return shift_mat # - node_sections = [] for node in db.room_to_nodes['living']: with (exp_dir / f'{node}_sections.json').open() as f: node_sections.append(json.load(f)) shifts = np.arange(-1., 1.1, .1) autocorrelation = correlate_segments(node_sections[5], node_sections[5], shifts=shifts) crosscorrelation = correlate_segments(node_sections[5], node_sections[6], shifts=shifts) fig, ax = plt.subplots(1,1, figsize=(6,2)) ax.plot(shifts, autocorrelation/1000) ax.plot(shifts, crosscorrelation/1000) ax.set_xlim(-1,1) ax.set_xlabel('Lag / s', size=14) ax.set_ylabel('Correlation / ($10^3$ s)', size=14) ax.legend(['Auto', 'Cross'], fontsize=13) ax.tick_params(axis='both', which='major', labelsize=13) ax.grid() plt.locator_params(axis='x', nbins=5) plt.savefig('correlation.pdf', bbox_inches = 'tight', pad_inches = 0.02) shift_mat = compute_shift_mat(node_sections) shifts = shift_mat[0] shift_mat = shift_mat - shifts + shifts[:, None] assert (np.abs(shift_mat) < 0.1).all()
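# The linear-scan `compute_overlap` above is efficient but a little opaque; its meaning is easier to see in a brute-force form. The sketch below is an illustrative O(n*m) restatement (my naming), not the notebook's implementation: total overlap between two lists of `(start, stop)` sections, with the second list shifted earlier by `shift` seconds.

```python
def interval_overlap(sections_a, sections_b, shift=0.0):
    """Total overlap duration between two lists of (start, stop) sections,
    with sections_b shifted earlier by `shift` seconds."""
    shifted_b = [(start - shift, stop - shift) for start, stop in sections_b]
    total = 0.0
    for a_start, a_stop in sections_a:
        for b_start, b_stop in shifted_b:
            # overlap of two intervals, clipped at zero when disjoint
            total += max(0.0, min(a_stop, b_stop) - max(a_start, b_start))
    return total
```

# `correlate_segments` then just evaluates this overlap on a grid of shifts; the lag that maximizes it estimates the clock offset between two nodes' detected sections.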
notebooks/sad.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d, UnivariateSpline
from scipy.signal import find_peaks
import warnings

warnings.filterwarnings("ignore")
# -

# # Data

# +
# unknown sample data
data_uk = pd.read_excel("radiation_datas.xlsx", sheet_name="unknown")
uk_channel = data_uk["u_channel"]
uk_counts = data_uk["u_counts"]

# Cobalt-60 data
data_co = pd.read_excel("radiation_datas.xlsx", sheet_name="cobalt")
co_channel = data_co["c_channel"]
co_counts = data_co["c_counts"]

# Barium-133 data
data_ba = pd.read_excel("radiation_datas.xlsx", sheet_name="barium")
ba_channel = data_ba["b_channel"]
ba_counts = data_ba["b_counts"]
# -

# # Functions

# +
var = 100 / 50  # the peak height is divided by this factor, i.e. half maximum

# interpolation function
def interpolate(x, y):
    f = interp1d(x, y, kind="cubic", fill_value="extrapolate")
    a = np.arange(x[0], x[len(x) - 1], 0.001)
    b = f(a)
    return a, b

# full width at half maximum (FWHM) function
def FWHM(x_n, y_n):
    # create a spline
    spline = UnivariateSpline(x_n, y_n - np.max(y_n) / var, s=0)
    x1, x2 = spline.roots()  # find the roots
    return x1, x2

def FWHM_co(x_n, y_n):
    # create a spline
    spline = UnivariateSpline(x_n, y_n - np.max(y_n) / var, s=0)
    x1, x2, x3, x4 = spline.roots()  # find the roots
    return x1, x2, x3, x4

# function for polynomial fitting
def polfit(a, b, c):
    z = np.polyfit(a, b, c)
    f = np.poly1d(z)
    x = np.arange(a[0], a[len(a) - 1], 0.001)
    y = f(x)
    return x, y
# -

# # Interpolation

# +
channel_interpolated_ba, counts_interpolated_ba = interpolate(ba_channel, ba_counts)
channel_interpolated_co, counts_interpolated_co = interpolate(co_channel, co_counts)
channel_interpolated_uk, counts_interpolated_uk = interpolate(uk_channel, uk_counts)

element_name = ["Barium-133", "Cobalt-60", "Unknown Source"]
channel_interpolated = [channel_interpolated_ba, channel_interpolated_co, channel_interpolated_uk]
counts_interpolated = [counts_interpolated_ba, counts_interpolated_co, counts_interpolated_uk]
channel_original = [ba_channel, co_channel, uk_channel]
counts_original = [ba_counts, co_counts, uk_counts]
# -

# # Calculations
# ## Finding the full width at half maximum (FWHM)
#
# Using the interpolated data from above, we now find the FWHM of each peak.

# +
# plt.style.use("seaborn-poster")
# plt.figure(figsize=(15, 24))

del_V = []
vi = []
for i in range(len(element_name)):
    if element_name[i] != "Cobalt-60":
        r1, r2 = FWHM(channel_interpolated[i], counts_interpolated[i])
        vi.append(r1)
        vi.append(r2)
        del_V.append(r2 - r1)
        if element_name[i] == "Unknown Source":
            print(f"{element_name[i]}: \n\t V1 = {r1:.2f}, V2 = {r2:.2f}, del_V = {del_V[i+1]:.2f}")
        elif element_name[i] == "Barium-133":
            print(f"{element_name[i]}: \n\t V1 = {r1:.2f}, V2 = {r2:.2f}, del_V = {del_V[i]:.2f}")
    if element_name[i] == "Cobalt-60":
        r1, r2, r3, r4 = FWHM_co(channel_interpolated[i], counts_interpolated[i])
        vi.append(r1)
        vi.append(r2)
        vi.append(r3)
        vi.append(r4)
        del_V.append(r2 - r1)
        del_V.append(r4 - r3)
        print(
            f"{element_name[i]}: \n\t V1 = {r1:.2f}, V2 = {r2:.2f}, del_V = {del_V[i]:.2f} \n\t V3 = {r3:.2f}, V4 = {r4:.2f}, del_V = {del_V[i+1]:.2f}"
        )
# -

# ## Peak determination

# +
res_name = ["Barium-133", "Cobalt-60 lower peak", "Cobalt-60 upper peak"]
for i in range(3):
    peak_id_max = find_peaks(counts_interpolated[i], height=np.max(counts_interpolated[i]) - 5000)
    heights = peak_id_max[1]["peak_heights"]
    pos = channel_interpolated[i][peak_id_max[0]]
    print(f"{element_name[i]}: \n\t channel = {pos} and peak = {heights}")

peak_counts = [110920, 28887, 25867]
peak_channel = [13, 42, 49]
known_energy = [0.356, 1.173, 1.332]
# -

# # Spectra
#
# ## Barium-133

# +
plt.style.use("seaborn-poster")
plt.figure(figsize=(15, 8))
plt.axvspan(vi[0], vi[1], alpha=0.2)
for i in range(2):
    plt.annotate(f"{vi[i]:.2f}", xy=(vi[i]-0.25, 0), fontsize=14)

plt.annotate(f"{peak_counts[0]}", xy=(peak_channel[0] + 0.5, peak_counts[0]), fontsize=14)
# plt.title(f"{element_name[0]} Spectrum")
plt.xlabel("channel number (V)")
plt.ylabel("counts per minute")
plt.plot(channel_interpolated_ba, counts_interpolated_ba, "--", label="interpolated points")
plt.plot(ba_channel, ba_counts, "o", markersize=9, label="original points")
plt.legend(loc="upper left")
plt.grid(alpha=0.3, which="major")
plt.minorticks_on()
plt.grid(alpha=0.2, which="minor", ls="--")
# -

# ## Cobalt-60

# +
plt.style.use("seaborn-poster")
plt.figure(figsize=(15, 8))
plt.axvspan(vi[2], vi[3], alpha=0.2)
plt.axvspan(vi[4], vi[5], alpha=0.2)
for i in range(2, 6):
    plt.annotate(f"{vi[i]:.2f}", xy=(vi[i]-1, 300), fontsize=14)
for i in range(1, 3):
    plt.annotate(f"{peak_counts[i]}", xy=(peak_channel[i] + 0.5, peak_counts[i]), fontsize=14)
# plt.title(f"{element_name[1]} Spectrum")
plt.xlabel("channel number (V)")
plt.ylabel("counts per minute")
plt.plot(channel_interpolated_co, counts_interpolated_co, "--", label="interpolated points")
plt.plot(co_channel, co_counts, "o", markersize=9, label="original points")
plt.legend(loc="upper left")
plt.grid(alpha=0.3, which="major")
plt.minorticks_on()
plt.grid(alpha=0.2, which="minor", ls="--")
plt.show()
# -

# ## Cesium-137 (unknown source)

# +
plt.style.use("seaborn-poster")
plt.figure(figsize=(15, 8))
plt.axvspan(vi[6], vi[7], alpha=0.2)
for i in range(6, 8):
    plt.annotate(f"{vi[i]:.2f}", xy=(vi[i]-0.5, 0), fontsize=14)
plt.annotate(f"43029", xy=(24 + 0.5, 43029), fontsize=14)
# plt.title(f"{element_name[2]} Spectrum")
plt.xlabel("channel number (V)")
plt.ylabel("counts per minute")
plt.plot(channel_interpolated_uk, counts_interpolated_uk, "--", label="interpolated points")
plt.plot(uk_channel, uk_counts, "o", markersize=9, label="original points")
plt.legend(loc="upper left")
plt.grid(alpha=0.3, which="major")
plt.minorticks_on()
plt.grid(alpha=0.2, which="minor", ls="--")
# -

# ## Unknown sample's energy
#
# The energy of the unknown sample is read off the calibration curve.

# +
unknown_energy = np.interp(24, peak_channel, known_energy)
print(f"Energy of the unknown sample from the calibration curve = {unknown_energy:.3f} MeV")

peak_channel.append(24)
known_energy.append(unknown_energy)
res_name.append("Unknown Source")
# -

# # Finding the Energy
#
# ## Calibration Curve

# +
plt.style.use("seaborn-poster")
plt.figure(figsize=(15, 8))
# plt.title(f"Calibration curve")
plt.xlabel("Channel Number (V)")
plt.ylabel("Energy of element (MeV)")
plt.plot(np.sort(peak_channel), np.sort(known_energy))
for i in range(len(res_name)):
    plt.plot(peak_channel[i], known_energy[i], "o", label=res_name[i])
    plt.annotate(f"({peak_channel[i]}, {known_energy[i]:.3f})", xy=(peak_channel[i]+0.5, known_energy[i]-0.025), fontsize=14)
plt.legend(loc="upper left")
plt.grid(alpha=0.3, which="major")
plt.minorticks_on()
plt.grid(alpha=0.2, which="minor", ls="--")
plt.show()
# -

# # Resolution Curve

# +
# resolution
V = peak_channel
resolution = []
for i in range(len(res_name)):
    res = (del_V[i] / V[i]) * 100
    resolution.append(res)
    print(
        f"{res_name[i]}: \n\t resolution = {resolution[i]:.2f}%, del_V = {del_V[i]:.2f}, V = {V[i]}"
    )

sqrt_energy = 1 / np.sqrt(known_energy)
# for i in range(4):
#     print(f"{sqrt_energy[i]:0.2f}, {resolution[i]:.2f}")
sqe_int, res_int = polfit(np.sort(sqrt_energy), np.sort(resolution), 1)
# print(sqe_int, res_int)

# +
plt.style.use("seaborn-poster")
plt.figure(figsize=(15, 8))
# plt.title(f"Resolution curve")
plt.xlabel(r"$1/\sqrt{E}$")
plt.ylabel("Resolution %")
# plt.plot(np.sort(sqrt_energy[:3]), np.sort(resolution[:3]))
plt.plot(np.sort(sqe_int), np.sort(res_int))
for i in range(len(res_name)):
    plt.plot(sqrt_energy[i], resolution[i], "o", label=res_name[i])
    plt.annotate(f"{resolution[i]:.2f}%", xy=(sqrt_energy[i]+0.02, resolution[i]-0.2), fontsize=14)
plt.legend(loc="upper left")
plt.grid(alpha=0.3, which="major")
plt.minorticks_on()
plt.grid(alpha=0.2, which="minor", ls="--")
plt.show()
# -

# The plot follows the expected relation between resolution and the square root of the energy. Only the unknown sample lies off the curve, probably because of some instrumental error.
#
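# The notebook extracts the FWHM from the roots of a spline fitted to `counts - max/2`. The same quantity can also be read off directly by linearly interpolating the two half-maximum crossings, with no spline at all. The sketch below (my own helper, not from the notebook) assumes a single isolated peak whose tails fall below half maximum at both ends; the Co-60 double peak would still need the four-root variant. For a Gaussian, the result should match the analytic FWHM of 2*sqrt(2*ln 2)*sigma.

```python
import numpy as np

def fwhm(x, y):
    """FWHM via linear interpolation of the two half-maximum crossings."""
    half = y.max() / 2.0
    idx = np.where(y >= half)[0]  # indices of samples at or above half maximum
    i0, i1 = idx[0], idx[-1]
    # interpolate the rising crossing (y increasing through half)...
    x_left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    # ...and the falling crossing (y decreasing through half)
    x_right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return x_right - x_left

# sanity check on a unit-sigma Gaussian
x = np.linspace(-5, 5, 2001)
y = np.exp(-x**2 / 2)
print(fwhm(x, y))  # should be close to 2*sqrt(2*ln 2) * 1
```

# On the spline route, the factor `var = 100/50` plays exactly the role of the `2.0` divisor here.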
3rd sem practicals/gamma ray scintillator sprectroscopy/gamma ray.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %matplotlib notebook

# # Solar Data Processing with Python
#
# Now we have a grasp of the basics of Python and know what kinds of solar data exist using SunPy, but the whole reason for downloading Python in the first place was to analyze solar data. Let's take a closer look at examples of solar data analysis.

# ## Fitting a Gaussian to Data
#
# One of the most common data types in solar data processing is a time series: a measurement of how one physical parameter changes as a function of time. This example shows how to fit a Gaussian to a spectral line, and it will be as "real world" as possible.
#
# First, let's import some useful libraries.

# +
from datetime import datetime, timedelta  # we saw these in the last tutorial
import numpy as np
from astropy.io import fits  # how to read .fits files
from astropy.modeling import models, fitting  # some handy fitting tools from astropy
import matplotlib.pyplot as plt
from scipy.integrate import trapz  # numerical integration tool
import astropy.units as u  # units!!
import sunpy  # solar data analysis tools
import sunpy.data.sample  # data interaction tools

# +
sunpy.data.download_sample_data()  # download some sample data
# -

# Next we need to load in the data set we want to work with:

filename = sunpy.data.sample.GBM_TIMESERIES
hdulist = fits.open(filename)

# So what did we get when we opened the file? Let's take a look:

len(hdulist)

# We got 4 items in the list. Let's take a look at the first one:

hdulist[0].header

# It looks like this data is from the GLAST telescope measuring gamma rays. Let's take a look at the second item:

hdulist[1].header

# Alright, now we are getting somewhere. This has data in units of 'keV' and max/min measurements.
# Let's take a look at the other elements of the list we got:

hdulist[2].header

hdulist[3].header

# So it looks like we are working with some energy-counts data, temporal information, quality measurements, etc.

# ### Plotting Spectral Data

# Let's take a look at some of the data we've got.

len(hdulist[2].data)

hdulist[2].data.names

hdulist[2].data["COUNTS"]

hdulist[2].data["COUNTS"].shape

# There is a large array of counts at 128 different energies. Let's take a look at the lowest-energy measurements:

plt.plot(hdulist[2].data["counts"][:,0])

# So now we have a plot of counts over some period of time. We can see there is one major spike in the data. Let's filter the data so that we just have the major peak without the spike.

w = np.logical_and(hdulist[2].data["counts"][:,0] > 300, hdulist[2].data["counts"][:,0] < 2000)
w

# This function, `np.logical_and`, returns a logical array. We can see that `w` is now an array of true and false values. To take the subsection of our data where our filter is true:

counts = hdulist[2].data["counts"][:,0][w]
plt.plot(counts)

counts

len(counts)

# Now, it is good to add units to data when you can. The header of the file tells us what the units are, but in this case, counts have no units.

# ### Fitting the data with a Gaussian
#
# Now that we have extracted a detection feature from the whole data set, let's fit it with a Gaussian. To do this we will make use of a couple of packages in AstroPy.
# We will initialize the Gaussian fit with some approximations (max, center, FWHM):

g_init = models.Gaussian1D(1500, 300, 100)

# Now let's define a fitting method and produce a fit:

fit_g = fitting.LevMarLSQFitter()

# Since this fitting routine expects both X and Y coordinate data, we need to define an X vector:

t = np.arange(0, len(counts))
g = fit_g(g_init, t, counts)

# Let's take a look at some of the qualities of our fitted Gaussian:

g.mean

g.stddev

g.amplitude

g

# Our guesses weren't too bad, but we overestimated the standard deviation by about a factor of 5. The variable `g` has the fitted parameters of our Gaussian, but it doesn't actually contain an array. To plot it over the data, we need to create an array of values. We will make an array from 0 to 1410 with 2820 points in it.

x = np.linspace(0, 1410, 2820)

# Finding the values of our fit at each location is easy:

y = g(x)

# Now we can plot it:

plt.plot(counts)
plt.plot(x, y, linewidth=2)

# That isn't a very good fit. If we chose a more clever way to filter our data, or fit two Gaussians, we could improve things.

# ### Integrating under the curve
#
# Let's find the area under the curve we just created. We can numerically integrate it easily:

intensity = trapz(y, x)
intensity

# ## Creating a Histogram from a Map Image
#
# Often when working with images, it is useful to look at a histogram of the values to understand how the image is constructed. When working with solar data, we can use the `Map` object we saw earlier to help us construct one.
#
# First let's import some more libraries:

import sunpy.map
from astropy.coordinates import SkyCoord
from sunpy.data.sample import AIA_171_IMAGE

# We first create the Map using the sample data, and we will create a submap of a quiet region.
aia = sunpy.map.Map(AIA_171_IMAGE)
bl = SkyCoord(-400 * u.arcsec, 0 * u.arcsec, frame=aia.coordinate_frame)
tr = SkyCoord(0 * u.arcsec, 400 * u.arcsec, frame=aia.coordinate_frame)
aia_smap = aia.submap(bl, tr)

# We now create a histogram of the data in this region.

dmin = aia_smap.min()
dmax = aia_smap.max()
num_bins = 50
hist, bins = np.histogram(aia_smap.data, bins=np.linspace(dmin, dmax, num_bins))
width = 0.7 * (bins[1] - bins[0])
x = (bins[:-1] + bins[1:]) / 2

# Let's plot the histogram as well as some standard values, such as the mean, the minimum and maximum, and the one-sigma range.

plt.figure()
plt.bar(x, hist, align='center', width=width, label='Histogram')
plt.xlabel('Intensity')
plt.axvline(dmin, label='Data min={:.2f}'.format(dmin), color='black')
plt.axvline(dmax, label='Data max={:.2f}'.format(dmax), color='black')
plt.axvline(aia_smap.data.mean(),
            label='mean={:.2f}'.format(aia_smap.data.mean()), color='green')
one_sigma = np.array([aia_smap.data.mean() - aia_smap.data.std(),
                      aia_smap.data.mean() + aia_smap.data.std()])
plt.axvspan(one_sigma[0], one_sigma[1], alpha=0.3, color='green',
            label='mean +/- std = [{0:.2f}, {1:.2f}]'.format(
                one_sigma[0], one_sigma[1]))
plt.axvline(one_sigma[0], color='green')
plt.axvline(one_sigma[1], color='green')
plt.legend()
plt.show()

# Finally, let's overplot what the one-sigma range means on the map.

fig = plt.figure()
fig.add_subplot(projection=aia_smap)
aia_smap.plot()
levels = one_sigma / dmax * u.percent * 100
aia_smap.draw_contours(levels=levels, colors=['red', 'green'])
plt.show()
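# The `LevMarLSQFitter` used above needs reasonable starting values, and rather than eyeballing `(1500, 300, 100)` they can be estimated from moments of the data. The helper below is my own illustrative sketch in plain NumPy (`moment_guesses` is not an astropy function): the intensity-weighted mean and spread of a clean peak recover the Gaussian's center and width well.

```python
import numpy as np

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def moment_guesses(t, counts):
    """Moment-based starting values (amplitude, mean, stddev), the kind
    of numbers you would hand to models.Gaussian1D before fitting."""
    amp = counts.max()
    mu = np.sum(t * counts) / np.sum(counts)  # intensity-weighted mean
    sigma = np.sqrt(np.sum(counts * (t - mu) ** 2) / np.sum(counts))
    return amp, mu, sigma

# synthetic "light curve" shaped like the example's initial guess
t = np.linspace(0, 1400, 1401)
counts = gaussian(t, 1500.0, 300.0, 100.0)
print(moment_guesses(t, counts))
```

# On noisy real data with a baseline, these moment estimates degrade, but they still make better initial guesses than hand-picked constants.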
TU_Lecture_PyDat.ipynb