# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="6bYaCABobL5q" # ##### Copyright 2018 The TensorFlow Authors. # + cellView="form" id="FlUw7tSKbtg4" #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] id="xc1srSc51n_4" # # 使用 SavedModel 格式 # + [markdown] id="-nBUqG2rchGH" # <table class="tfo-notebook-buttons" align="left"> # <td><a target="_blank" href="https://tensorflow.google.cn/guide/saved_model" class=""><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" class="">在 TensorFlow.org 上查看</a></td> # <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/saved_model.ipynb" class=""> <img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">在 Google Colab 中运行</a></td> # <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/saved_model.ipynb" class=""><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class="">在 GitHub 上查看源代码</a></td> # <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/saved_model.ipynb" class=""><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="">下载笔记本</a></td> # </table> # + [markdown] id="CPE-fshLTsXU" # SavedModel 包含一个完整的 TensorFlow 
程序——不仅包含权重值,还包含计算。它不需要原始模型构建代码就可以运行,因此,对共享和部署(使用 [TFLite](https://tensorflow.google.cn/lite)、[TensorFlow.js](https://js.tensorflow.google.cn/)、[TensorFlow Serving](https://tensorflow.google.cn/tfx/serving/tutorials/Serving_REST_simple) 或 [TensorFlow Hub](https://tensorflow.google.cn/hub))非常有用。 # # 本文深入探讨有关使用低级别 `tf.saved_model` API 的一些详细内容: # # - 如果使用 `tf.keras.Model`,`keras.Model.save(output_path)` 可能正是您需要的方法:参阅 [Keras 保存和序列化](keras/save_and_serialize.ipynb) # # - 在训练过程中,如果您只想保存/加载权重,请参阅[训练检查点](./checkpoint.ipynb)指南。 # # + [markdown] id="9SuIC7FiI9g8" # ## 从 Keras 创建 SavedModel # # 为便于简单介绍,本部分将导出一个预训练 Keras 模型来处理图像分类请求。本指南的其他部分将详细介绍和讨论创建 SavedModel 的其他方式。 # + id="Le5OB-fBHHW7" import os import tempfile from matplotlib import pyplot as plt import numpy as np import tensorflow as tf tmpdir = tempfile.mkdtemp() # + id="wlho4HEWoHUT" physical_devices = tf.config.experimental.list_physical_devices('GPU') if physical_devices: tf.config.experimental.set_memory_growth(physical_devices[0], True) # + id="SofdPKo0G8Lb" file = tf.keras.utils.get_file( "grace_hopper.jpg", "https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg") img = tf.keras.preprocessing.image.load_img(file, target_size=[224, 224]) plt.imshow(img) plt.axis('off') x = tf.keras.preprocessing.image.img_to_array(img) x = tf.keras.applications.mobilenet.preprocess_input( x[tf.newaxis,...]) # + [markdown] id="sqVcFL10JkF0" # 我们会使用 Grace Hopper 的一张照片作为运行示例,并使用一个预先训练的 Keras 图像分类模型,因为它简单易用。您也可以使用自定义模型,后文会作详细介绍。 # + id="JhVecdzJTsKE" labels_path = tf.keras.utils.get_file( 'ImageNetLabels.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt') imagenet_labels = np.array(open(labels_path).read().splitlines()) # + id="aEHSYjW6JZHV" pretrained_model = tf.keras.applications.MobileNet() result_before_save = pretrained_model(x) decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1] print("Result before saving:\n", decoded) # + [markdown] 
id="r4KIsQDZJ5PS" # 对该图像置信度最高的预测是“军服”。 # + id="8nfznDmHCW6F" mobilenet_save_path = os.path.join(tmpdir, "mobilenet/1/") tf.saved_model.save(pretrained_model, mobilenet_save_path) # + [markdown] id="pyX-ETE3wX63" # 保存路径遵循 TensorFlow Serving 使用的惯例,路径的最后一个部分(此处为 `1/`)是模型的版本号——它可以让 TensorFlow Serving 之类的工具判断版本的相对新旧。 # + [markdown] id="VCZZ8avqLF1g" # 我们可以使用 `tf.saved_model.load` 将 SavedModel 重新加载到 Python 中,看看海军上将 Hopper 的照片是如何分类的。 # + id="NP2UpVFRV7N_" loaded = tf.saved_model.load(mobilenet_save_path) print(list(loaded.signatures.keys())) # ["serving_default"] # + [markdown] id="K5srGzowfWff" # 导入的签名总是会返回字典。要自定义签名名称和输出字典键,请参阅[在导出过程中指定签名](#specifying_signatures_during_export)。 # + id="ChFLpegYfQGR" infer = loaded.signatures["serving_default"] print(infer.structured_outputs) # + [markdown] id="cJYyZnptfuru" # 从 SavedModel 运行推理会产生与原始模型相同的结果。 # + id="9WjGEaS3XfX7" labeling = infer(tf.constant(x))[pretrained_model.output_names[0]] decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1] print("Result after saving and loading:\n", decoded) # + [markdown] id="SJEkdXjTWbtl" # ## 在 TensorFlow Serving 中运行 SavedModel # # 可以通过 Python 使用 SavedModel(下文中有详细介绍),但是,生产环境通常会使用专门服务进行推理,而不会运行 Python 代码。借助 TensorFlow Serving,可以很容易地基于 SavedModel 完成这种部署。 # # 有关服务的详细信息,请参阅 [TensorFlow Serving REST 教程](https://tensorflow.google.cn/tfx/tutorials/serving/rest_simple),其中包括在笔记本中或本地机器上安装 `tensorflow_model_server` 的说明。简而言之,要为上面导出的 `mobilenet` 模型提供服务,只需将模型服务器指向 SavedModel 目录即可。 # # ```bash # nohup tensorflow_model_server \ --rest_api_port=8501 \ --model_name=mobilenet \ --model_base_path="/tmp/mobilenet" >server.log 2>&1 # ``` # # 然后发送请求。 # # ```python # # # !pip install requests import json import numpy import requests data = json.dumps({"signature_name": "serving_default", "instances": x.tolist()}) headers = {"content-type": "application/json"} json_response = requests.post('http://localhost:8501/v1/models/mobilenet:predict', data=data, headers=headers) predictions = 
numpy.array(json.loads(json_response.text)["predictions"]) # ``` # # 生成的 `predictions` 结果与 Python 的结果相同。 # + [markdown] id="Bi0ILzu1XdWw" # ## 磁盘上的 SavedModel 格式 # # SavedModel 是一个包含序列化签名和运行这些签名所需的状态的目录,其中包括变量值和词汇表。 # # + id="6u3YZuYZXyTO" # !ls {mobilenet_save_path} # + [markdown] id="ple4X5utX8ue" # `saved_model.pb` 文件用于存储实际 TensorFlow 程序或模型,以及一组已命名的签名——每个签名标识一个接受张量输入和产生张量输出的函数。 # # SavedModel 可能包含模型的多个变体(多个 `v1.MetaGraphDefs`,通过 `saved_model_cli` 的 `--tag_set` 标记进行标识),但这种情况很少见。可以为模型创建多个变体的 API 包括 [tf.Estimator.experimental_export_all_saved_models](https://tensorflow.google.cn/api_docs/python/tf/estimator/Estimator#experimental_export_all_saved_models) 和 TensorFlow 1.x 中的 `tf.saved_model.Builder`。 # + id="Pus0dOYTYXbI" # !saved_model_cli show --dir {mobilenet_save_path} --tag_set serve # + [markdown] id="eALHpGvRZOhk" # `variables` 目录包含一个标准训练检查点(参阅[训练检查点指南](./checkpoint.ipynb))。 # + id="EDYqhDlNZAC2" # !ls {mobilenet_save_path}/variables # + [markdown] id="VKmaZQpHahGh" # `assets` 目录包含 TensorFlow 计算图使用的文件,例如,用于初始化词汇表的文本文件。本例中没有使用这种文件。 # # SavedModel 可能有一个用于保存 TensorFlow 计算图未使用的任何文件的 `assets.extra` 目录,例如,为使用者提供的关于如何处理 SavedModel 的信息。TensorFlow 本身并不会使用此目录。 # + [markdown] id="zIceoF_CYmaF" # ## 保存自定义模型 # # `tf.saved_model.save` 支持保存 `tf.Module` 对象及其子类,如 `tf.keras.Layer` 和 `tf.keras.Model`。 # # 我们来看一个保存和恢复 `tf.Module` 的示例。 # # + id="6EPvKiqXMm3d" class CustomModule(tf.Module): def __init__(self): super(CustomModule, self).__init__() self.v = tf.Variable(1.) @tf.function def __call__(self, x): print('Tracing with', x) return x * self.v @tf.function(input_signature=[tf.TensorSpec([], tf.float32)]) def mutate(self, new_v): self.v.assign(new_v) module = CustomModule() # + [markdown] id="J4FcP-Co3Fnw" # 当您保存 `tf.Module` 时,任何 `tf.Variable` 特性、`tf.function` 装饰的方法以及通过递归遍历找到的 `tf.Module` 都会得到保存。(参阅[检查点教程](./checkpoint.ipynb),了解此递归遍历的详细信息。)但是,所有 Python 特性、函数和数据都会丢失。也就是说,当您保存 `tf.function` 时,不会保存 Python 代码。 # # 如果不保存 Python 代码,SavedModel 如何知道怎样恢复函数? 
# # 简单地说,`tf.function` 的工作原理是,通过跟踪 Python 代码来生成 ConcreteFunction(一个可调用的 `tf.Graph` 包装器)。当您保存 `tf.function` 时,实际上保存的是 `tf.function` 的 ConcreteFunction 缓存。 # # 要详细了解 `tf.function` 与 ConcreteFunction 之间的关系,请参阅 [tf.function 指南](../../guide/function)。 # + id="85PUO9iWH7xn" module_no_signatures_path = os.path.join(tmpdir, 'module_no_signatures') module(tf.constant(0.)) print('Saving model...') tf.saved_model.save(module, module_no_signatures_path) # + [markdown] id="2ujwmMQg7OUo" # ## 加载和使用自定义模型 # + [markdown] id="QpxQy5Eb77qJ" # 在 Python 中加载 SavedModel 时,所有 `tf.Variable` 特性、`tf.function` 装饰方法和 `tf.Module` 都会按照与原始保存的 `tf.Module` 相同的对象结构进行恢复。 # + id="EMASjADPxPso" imported = tf.saved_model.load(module_no_signatures_path) assert imported(tf.constant(3.)).numpy() == 3 imported.mutate(tf.constant(2.)) assert imported(tf.constant(3.)).numpy() == 6 # + [markdown] id="CDiauvb_99uk" # 由于没有保存 Python 代码,所以使用新输入签名调用 `tf.function` 会失败: # # ```python # imported(tf.constant([3.])) # ``` # # <pre>ValueError: Could not find matching function to call for canonicalized inputs ((&lt;tf.Tensor shape=(1,) dtype=float32&gt;,), {}). Only existing signatures are [((TensorSpec(shape=(), dtype=tf.float32, name=u'x'),), {})].</pre> # + [markdown] id="4Vsva3UZ-2sf" # ### 基本微调 # # 导入的对象中提供了变量对象,还可以通过导入的函数进行反向传播。对于简单情形,这足以支持 SavedModel 的微调(即重新训练)。 # + id="PEkQNarJ-7nT" optimizer = tf.optimizers.SGD(0.05) def train_step(): with tf.GradientTape() as tape: loss = (10. 
- imported(tf.constant(2.))) ** 2 variables = tape.watched_variables() grads = tape.gradient(loss, variables) optimizer.apply_gradients(zip(grads, variables)) return loss # + id="p41NM6fF---3" for _ in range(10): # "v" approaches 5, "loss" approaches 0 print("loss={:.2f} v={:.2f}".format(train_step(), imported.v.numpy())) # + [markdown] id="XuXtkHSD_KSW" # ### 一般微调 # # 与普通 `__call__` 相比,Keras 的 SavedModel 提供了[更多详细信息](https://github.com/tensorflow/community/blob/master/rfcs/20190509-keras-saved-model.md#serialization-details)来解决更复杂的微调情形。TensorFlow Hub 建议在共享的 SavedModel 中提供以下详细信息(如果适用),以便进行微调: # # - 如果模型使用随机失活,或者其他会使训练与推理的前向传递有所不同的技术(如批次归一化),则 `__call__` 方法会接受一个可选的 Python 布尔值参数 `training=`。该参数的默认值为 `False`,但可将其设置为 `True`。 # - 除了 `__call__` 特性之外,还有 `.variables` 和 `.trainable_variables` 特性,分别对应完整的变量列表和可训练变量列表。`.trainable_variables` 中省略了那些原本可训练、但在微调时应当冻结的变量。 # - 对于 Keras 等将权重正则化项表示为层或子模型特性的框架,还有一个 `.regularization_losses` 特性。它包含一个零参数函数的列表,这些函数的值应加到总损失中。 # # 回到初始 MobileNet 示例,我们来看看具体操作: # + id="Y6EUFdY8_PRD" loaded = tf.saved_model.load(mobilenet_save_path) print("MobileNet has {} trainable variables: {}, ...".format( len(loaded.trainable_variables), ", ".join([v.name for v in loaded.trainable_variables[:5]]))) # + id="B-mQJ8iP_R0h" trainable_variable_ids = {id(v) for v in loaded.trainable_variables} non_trainable_variables = [v for v in loaded.variables if id(v) not in trainable_variable_ids] print("MobileNet also has {} non-trainable variables: {}, ...".format( len(non_trainable_variables), ", ".join([v.name for v in non_trainable_variables[:3]]))) # + [markdown] id="qGlHlbd3_eyO" # ## 导出时指定签名 # # TensorFlow Serving 之类的工具和 `saved_model_cli` 可以与 SavedModel 交互。为了帮助这些工具确定要使用的 ConcreteFunction,我们需要指定服务上线签名。`tf.keras.Model` 会自动指定服务上线签名,但是,对于自定义模块,我们必须明确声明服务上线签名。 # # 默认情况下,自定义 `tf.Module` 中不会声明签名。 # + id="h-IB5Xa0NxLa" assert len(imported.signatures) == 0 # + [markdown] id="BiNtaMZSI8Tb" # 要声明服务上线签名,请使用 `signatures` 关键字参数指定 ConcreteFunction。当指定单个签名时,签名键为 `'serving_default'`,并将保存为常量 
`tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY`。 # + id="_pAdgIORR2yH" module_with_signature_path = os.path.join(tmpdir, 'module_with_signature') call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32)) tf.saved_model.save(module, module_with_signature_path, signatures=call) # + id="nAzRHR0UT4hv" imported_with_signatures = tf.saved_model.load(module_with_signature_path) list(imported_with_signatures.signatures.keys()) # + [markdown] id="_gH91j1IR4tq" # 要导出多个签名,请将签名键的字典传递给 ConcreteFunction。每个签名键对应一个 ConcreteFunction。 # + id="6VYAiQmLUiox" module_multiple_signatures_path = os.path.join(tmpdir, 'module_with_multiple_signatures') signatures = {"serving_default": call, "array_input": module.__call__.get_concrete_function(tf.TensorSpec([None], tf.float32))} tf.saved_model.save(module, module_multiple_signatures_path, signatures=signatures) # + id="8IPx_0RWEx07" imported_with_multiple_signatures = tf.saved_model.load(module_multiple_signatures_path) list(imported_with_multiple_signatures.signatures.keys()) # + [markdown] id="43_Qv2W_DJZZ" # 默认情况下,输出张量名称非常通用,如 `output_0`。为了控制输出的名称,请修改 `tf.function`,以便返回将输出名称映射到输出的字典。输入的名称来自 Python 函数参数名称。 # + id="ACKPl1X8G1gw" class CustomModuleWithOutputName(tf.Module): def __init__(self): super(CustomModuleWithOutputName, self).__init__() self.v = tf.Variable(1.) 
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)]) def __call__(self, x): return {'custom_output_name': x * self.v} module_output = CustomModuleWithOutputName() call_output = module_output.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32)) module_output_path = os.path.join(tmpdir, 'module_with_output_name') tf.saved_model.save(module_output, module_output_path, signatures={'serving_default': call_output}) # + id="1yGVy4MuH-V0" imported_with_output_name = tf.saved_model.load(module_output_path) imported_with_output_name.signatures['serving_default'].structured_outputs # + [markdown] id="Dk5wWyuMpuHx" # ## Estimator 中的 SavedModel # # Estimator 通过 [tf.Estimator.export_saved_model](https://tensorflow.google.cn/api_docs/python/tf/estimator/Estimator#export_saved_model) 导出 SavedModel。有关详细信息,请参阅 [Estimator 指南](https://tensorflow.google.cn/guide/estimator)。 # + id="B9KQq5qzpzbK" input_column = tf.feature_column.numeric_column("x") estimator = tf.estimator.LinearClassifier(feature_columns=[input_column]) def input_fn(): return tf.data.Dataset.from_tensor_slices( ({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16) estimator.train(input_fn) serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn( tf.feature_column.make_parse_example_spec([input_column])) estimator_base_path = os.path.join(tmpdir, 'from_estimator') estimator_path = estimator.export_saved_model(estimator_base_path, serving_input_fn) # + [markdown] id="XJ4PJ-Cl4060" # 此 SavedModel 接受序列化 `tf.Example` 协议缓冲区,这对服务非常有用。但是,我们也可以使用 `tf.saved_model.load` 进行加载,并从 Python 运行该模型。 # + id="c_BUBBNB1UH9" imported = tf.saved_model.load(estimator_path) def predict(x): example = tf.train.Example() example.features.feature["x"].float_list.value.extend([x]) return imported.signatures["predict"]( examples=tf.constant([example.SerializeToString()])) # + id="C1ylWZCQ1ahG" print(predict(1.5)) print(predict(3.5)) # + [markdown] id="_IrCCm0-isqA" # 通过 
`tf.estimator.export.build_raw_serving_input_receiver_fn` 可以创建输入函数,这些函数使用原始张量,而不是 `tf.train.Example`。 # + [markdown] id="Co6fDbzw_UnD" # ## 在 C++ 中加载 SavedModel # # SavedModel [加载器](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/saved_model/loader.h)的 C++ 版本提供了一个从路径中加载 SavedModel 的 API,同时允许使用 SessionOptions 和 RunOptions。您必须指定与计算图相关联的标记才能加载模型。加载的 SavedModel 版本称为 SavedModelBundle,其中包含 MetaGraphDef 以及加载该版本所处的会话。 # # ```C++ # const string export_dir = ... SavedModelBundle bundle; ... LoadSavedModel(session_options, run_options, export_dir, {kSavedModelTagTrain}, &bundle); # ``` # + [markdown] id="b33KuyEuAO3Z" # <a id="saved_model_cli"></a> # # ## SavedModel 命令行界面详解 # # 使用 SavedModel 命令行界面 (CLI) 可以检查和执行 SavedModel。例如,您可以使用 CLI 来检查模型的 `SignatureDef`。通过 CLI,您可以快速确认与模型相符的输入张量的 dtype 和形状。此外,如果要测试模型,您可以通过 CLI 传入各种格式的样本输入(例如,Python 表达式),然后获取输出,从而执行健全性检查。 # # ### 安装 SavedModel CLI # # 一般来说,通过以下两种方式都可以安装 TensorFlow: # # - 安装预构建的 TensorFlow 二进制文件。 # - 从源代码构建 TensorFlow。 # # 如果您是通过预构建的 TensorFlow 二进制文件安装的 TensorFlow,则 SavedModel CLI 已安装到您的系统上,路径为 `bin/saved_model_cli`。 # # 如果是从源代码构建的 TensorFlow,则还必须运行以下附加命令才能构建 `saved_model_cli`: # # ``` # $ bazel build tensorflow/python/tools:saved_model_cli # ``` # # ### 命令概述 # # SavedModel CLI 支持在 SavedModel 上使用以下两个命令: # # - `show`:用于显示 SavedModel 中可用的计算。 # - `run`:用于从 SavedModel 运行计算。 # # ### `show` 命令 # # SavedModel 包含一个或多个模型变体(技术上即 `v1.MetaGraphDef`),这些变体通过 tag-set 进行标识。要为模型提供服务,您可能想知道每个模型变体中使用的具体是哪一种 `SignatureDef`,以及它们的输入和输出是什么。那么,利用 `show` 命令,您就可以按照层级顺序检查 SavedModel 的内容。具体语法如下: # # ``` # usage: saved_model_cli show [-h] --dir DIR [--all] [--tag_set TAG_SET] [--signature_def SIGNATURE_DEF_KEY] # ``` # # 例如,以下命令会显示 SavedModel 中的所有可用 tag-set: # # ``` # $ saved_model_cli show --dir /tmp/saved_model_dir The given SavedModel contains the following tag-sets: serve serve, gpu # ``` # # 以下命令会显示 tag-set 的所有可用 `SignatureDef` 键: # # ``` # $ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve The given SavedModel 
`MetaGraphDef` contains `SignatureDefs` with the following keys: SignatureDef key: "classify_x2_to_y3" SignatureDef key: "classify_x_to_y" SignatureDef key: "regress_x2_to_y3" SignatureDef key: "regress_x_to_y" SignatureDef key: "regress_x_to_y2" SignatureDef key: "serving_default" # ``` # # 如果 tag-set 中有*多个*标记,则必须指定所有标记(标记之间用逗号分隔)。例如: # # <pre>$ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve,gpu</pre> # # 要显示特定 `SignatureDef` 的所有输入和输出 TensorInfo,请将 `SignatureDef` 键传递给 `signature_def` 选项。如果您想知道输入张量的张量键值、dtype 和形状,以便随后执行计算图,这会非常有用。例如: # # ``` # $ saved_model_cli show --dir \ /tmp/saved_model_dir --tag_set serve --signature_def serving_default The given SavedModel SignatureDef contains the following input(s): inputs['x'] tensor_info: dtype: DT_FLOAT shape: (-1, 1) name: x:0 The given SavedModel SignatureDef contains the following output(s): outputs['y'] tensor_info: dtype: DT_FLOAT shape: (-1, 1) name: y:0 Method name is: tensorflow/serving/predict # ``` # # 要显示 SavedModel 中的所有可用信息,请使用 `--all` 选项。例如: # # <pre>$ saved_model_cli show --dir /tmp/saved_model_dir --all<br>MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:<br><br>signature_def['classify_x2_to_y3']:<br> The given SavedModel SignatureDef contains the following input(s):<br> inputs['inputs'] tensor_info:<br> dtype: DT_FLOAT<br> shape: (-1, 1)<br> name: x2:0<br> The given SavedModel SignatureDef contains the following output(s):<br> outputs['scores'] tensor_info:<br> dtype: DT_FLOAT<br> shape: (-1, 1)<br> name: y3:0<br> Method name is: tensorflow/serving/classify<br><br>...<br><br>signature_def['serving_default']:<br> The given SavedModel SignatureDef contains the following input(s):<br> inputs['x'] tensor_info:<br> dtype: DT_FLOAT<br> shape: (-1, 1)<br> name: x:0<br> The given SavedModel SignatureDef contains the following output(s):<br> outputs['y'] tensor_info:<br> dtype: DT_FLOAT<br> shape: (-1, 1)<br> name: y:0<br> Method name is: tensorflow/serving/predict</pre> # # 
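如果需要在脚本中进一步处理 `show` 命令列出的 tag-set,可以按上面所示的输出格式做一个简单的解析。下面是一段示意性代码(其中 `parse_tag_sets` 是为说明而假设的辅助函数,并非 `saved_model_cli` 提供的 API):

```python
# 示意性代码:解析 `saved_model_cli show --dir ...` 的 tag-set 列表输出。
# parse_tag_sets 是为说明而假设的辅助函数,并非 saved_model_cli 提供的 API。
def parse_tag_sets(cli_output: str) -> list:
    """按上文所示的输出格式,把每个 tag-set 解析成标记列表。"""
    lines = [line.strip() for line in cli_output.splitlines() if line.strip()]
    # 跳过 "The given SavedModel contains the following tag-sets:" 标题行
    body = [line for line in lines if not line.startswith("The given SavedModel")]
    return [[tag.strip() for tag in line.split(",")] for line in body]

sample = """The given SavedModel contains the following tag-sets:
serve
serve, gpu"""
print(parse_tag_sets(sample))  # [['serve'], ['serve', 'gpu']]
```

注意,tag-set 中的多个标记以逗号分隔,解析结果正好可以拼回 `--tag_set` 参数(如 `",".join(tags)`)。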
### `run` 命令 # # 调用 `run` 命令即可运行计算图计算,传递输入,然后显示输出(还可以选择将其保存)。具体语法如下: # # ``` # usage: saved_model_cli run [-h] --dir DIR --tag_set TAG_SET --signature_def SIGNATURE_DEF_KEY [--inputs INPUTS] [--input_exprs INPUT_EXPRS] [--input_examples INPUT_EXAMPLES] [--outdir OUTDIR] [--overwrite] [--tf_debug] # ``` # # 要将输入传递给模型,`run` 命令提供了以下三种方式: # # - `--inputs` 选项:可传递文件中的 NumPy ndarray。 # - `--input_exprs` 选项:可传递 Python 表达式。 # - `--input_examples` 选项:可传递 `tf.train.Example`。 # # #### `--inputs` # # 要传递文件中的输入数据,请指定 `--inputs` 选项,一般格式如下: # # ```bash # --inputs <INPUTS> # ``` # # 其中,*INPUTS* 采用以下格式之一: # # - `<input_key>=<filename>` # - `<input_key>=<filename>[<variable_name>]` # # 您可以传递多个 *INPUTS*。如果要传递多个输入,请使用分号分隔每个 *INPUTS*。 # # `saved_model_cli` 使用 `numpy.load` 来加载 *filename*。*filename* 可能为以下任何格式: # # - `.npy` # - `.npz` # - pickle 格式 # # `.npy` 文件始终包含一个 NumPy ndarray。因此,从 `.npy` 文件加载时,会将内容直接分配给指定的输入张量。如果使用 `.npy` 文件来指定 *variable_name*,则会忽略 *variable_name*,并且会发出警告。 # # 从 `.npz` (zip) 文件加载时,您可以选择指定 *variable_name*,以便标识 zip 文件中要为输入张量键加载的变量。如果不指定 *variable_name*,SavedModel CLI 会检查 zip 文件中是否只包含一个文件。如果是,则为指定的输入张量键加载该文件。 # # 从 pickle 文件加载时,如果未在方括号中指定 `variable_name`,则会将该 pickle 文件中的任何内容全部传递给指定的输入张量键。否则,SavedModel CLI 会假设该 pickle 文件中有一个字典,并将使用与 *variable_name* 对应的值。 # # #### `--input_exprs` # # 要通过 Python 表达式传递输入,请指定 `--input_exprs` 选项。当您没有现成的数据文件,而又想使用与模型的 `SignatureDef` 的 dtype 和形状相符的一些简单输入来对模型进行健全性检查时,这非常有用。例如: # # ```bash # `<input_key>=[[1],[2],[3]]` # ``` # # 除了 Python 表达式,您还可以传递 NumPy 函数。例如: # # ```bash # `<input_key>=np.ones((32,32,3))` # ``` # # (请注意,`numpy` 模块已作为 `np` 提供。) # # #### `--input_examples` # # 要传递 `tf.train.Example` 作为输入,请指定 `--input_examples` 选项。对于每个输入键,它会获取一个字典列表,其中每个字典是 `tf.train.Example` 的一个实例。字典键就是特征,而值则是每个特征的值列表。例如: # # ```bash # `<input_key>=[{"age":[22,24],"education":["BS","MS"]}]` # ``` # # #### 保存输出 # # 默认情况下,SavedModel CLI 会将输出写入 stdout。如果向 `--outdir` 选项传递一个目录,则会在该目录下将输出保存为以输出张量键命名的 `.npy` 文件。 # # 使用 `--overwrite` 可覆盖现有输出文件。 #
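作为补充,下面的示意性代码演示如何用 NumPy 生成一个可供 `--inputs` 选项使用的 `.npz` 文件,并按 `<input_key>=<filename>[<variable_name>]` 的格式组装对应的参数字符串(其中文件名 `inputs.npz` 与输入键 `x` 均为假设的示例名称):

```python
# 示意性代码:生成一个可供 `saved_model_cli run --inputs` 使用的 .npz 文件。
# 文件名 inputs.npz 与输入键 x 均为假设的示例名称。
import os
import tempfile

import numpy as np

outdir = tempfile.mkdtemp()
npz_path = os.path.join(outdir, "inputs.npz")

# 一批形状为 (3, 1) 的输入,对应形如 (-1, 1) 的签名输入
batch = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)
np.savez(npz_path, x=batch)

# 按 <input_key>=<filename>[<variable_name>] 的格式组装 --inputs 参数
inputs_arg = "x={}[x]".format(npz_path)
print(inputs_arg)

# 如上文所述,saved_model_cli 通过 numpy.load 读取该文件;这里验证可以取回同样的数组
assert np.load(npz_path)["x"].shape == (3, 1)
```

随后即可把 `inputs_arg` 作为 `--inputs` 的值传给 `saved_model_cli run`。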
site/zh-cn/guide/saved_model.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] azdata_cell_guid="5cd39845-9699-4a6b-895e-19a75e6f062d" # SOSNEP02NB01 # # <img src="https://raw.githubusercontent.com/microsoft/dataexposed/main/graphics/sosn-white-new-very-small.jpg" alt="Logo" height="150"> # + [markdown] azdata_cell_guid="e8fed3cb-77bf-4c53-b22c-05b43ec86af7" # ## Anscombe's quartet # # Four datasets with the same mean, standard deviation, and regression line. # # code from [Anscombe's quartet — Matplotlib 3.3.4 documentation](https://matplotlib.org/stable/gallery/specialty_plots/anscombe.html) # + azdata_cell_guid="aca95bd7-8ac2-426d-8422-15d59b5506b2" import matplotlib.pyplot as plt import numpy as np x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5] y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68] y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74] y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73] x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8] y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89] datasets = { 'I': (x, y1), 'II': (x, y2), 'III': (x, y3), 'IV': (x4, y4) } # + azdata_cell_guid="27dacdda-5c4b-41a6-b286-8d4ad99c8d06" tags=[] means = { 'I': (np.mean(y1).round(2)), 'II': (np.mean(y2).round(2)), 'III': (np.mean(y3).round(2)), 'IV': (np.mean(y4).round(2)) } stddevs = { 'I': (np.std(y1).round(2)), 'II': (np.std(y2).round(2)), 'III': (np.std(y3).round(2)), 'IV': (np.std(y4).round(2)) } corrs = { 'I': (np.corrcoef(x,y1).round(2)[1][0]), 'II': (np.corrcoef(x,y2).round(2))[1][0], 'III': (np.corrcoef(x,y3).round(2))[1][0], 'IV': (np.corrcoef(x4,y4).round(2)[1][0]) } print('Mean: ', means) print('Standard Deviation: ', stddevs) print('Correlation Coefficient: ', corrs) # + azdata_cell_guid="e0cd749d-4c27-4759-af3c-5c3c3c2d76f0" fig, axs 
= plt.subplots(2, 2, sharex=True, sharey=True, figsize=(6, 6), gridspec_kw={'wspace': 0.08, 'hspace': 0.08}) axs[0, 0].set(xlim=(0, 20), ylim=(2, 14)) axs[0, 0].set(xticks=(0, 10, 20), yticks=(4, 8, 12)) for ax, (label, (x, y)) in zip(axs.flat, datasets.items()): ax.text(0.1, 0.9, label, fontsize=20, transform=ax.transAxes, va='top') ax.tick_params(direction='in', top=True, right=True) ax.plot(x, y, 'o') # linear regression p1, p0 = np.polyfit(x, y, deg=1) x_lin = np.array([np.min(x), np.max(x)]) y_lin = p1 * x_lin + p0 ax.plot(x_lin, y_lin, 'r-', lw=2) # add text box for the statistics stats = (f'$\\mu$ = {np.mean(y):.2f}\n' f'$\\sigma$ = {np.std(y):.2f}\n' f'$r$ = {np.corrcoef(x, y)[0][1]:.2f}') bbox = dict(boxstyle='round', fc='blanchedalmond', ec='orange', alpha=0.5) ax.text(0.95, 0.07, stats, fontsize=9, bbox=bbox, transform=ax.transAxes, horizontalalignment='right') plt.show()
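As a quick numerical check on the statistics shown in each panel above, the fitted regression line is (to two decimal places) the same for all four datasets, roughly `y = 3.00 + 0.50x`. A minimal sketch reusing the quartet data from the cells above:

```python
# Quick check: all four Anscombe datasets share (to two decimals)
# the same least-squares regression line.
import numpy as np

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]
y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]

fits = {label: np.polyfit(xs, ys, deg=1)  # returns (slope, intercept)
        for label, (xs, ys) in
        {'I': (x, y1), 'II': (x, y2), 'III': (x, y3), 'IV': (x4, y4)}.items()}

for label, (slope, intercept) in fits.items():
    print(f"{label}: y = {intercept:.2f} + {slope:.2f}x")  # y = 3.00 + 0.50x for all four
```

This is the same `np.polyfit` call used inside the plotting loop, just applied to each dataset in turn so the near-identical coefficients are visible side by side.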
notebooks/SOSNEP02NB01.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=false editable=false # # AIT Development notebook # # # ## Notebook structure # # |#|area name|cell num|description|edit or not| # |---|---|---|---|---| # | 1|flags set|1|Sets the flag that determines whether the notebook runs under Jupyter or as an AIT.|no edit| # | 2|ait-sdk install|1|Used only when launched from Jupyter.<br>Finds and installs the ait-sdk.|no edit| # | 3|create requirements and pip install|3|Used only when launched from Jupyter.<br>Creates requirements.txt and installs the packages listed in it.|should edit (in the second cell, list the modules you use)| # | 4|import|2|Write the import statements you need here.<br>Do not edit the bottom cell.|should edit (in the first cell, import your modules)| # | 5|create manifest|1|Used only when launched from Jupyter.<br>Creates ait.manifest.json.|should edit| # | 6|create input|1|Used only when launched from Jupyter.<br>Creates ait.input.json.|should edit| # | 7|initialize|1|Initializes the AIT run.|no edit| # | 8|functions|N|You declare measures, resources, and downloads in ait.manifest.json. 
<br>Define any functions needed to add them.|should edit| # | 9|main|1|Reads the dataset or model and calls the functions defined in the `functions` area.|should edit| # |10|entrypoint|1|Calls the main function.|no edit| # |11|license attribute set|1|Used only when launched from a notebook.<br>Sets the license attributes.|should edit| # |12|prepare deploy|1|Used only when launched from a notebook.<br>Converts the notebook to Python programs and creates dag.py.|no edit| # # ## Notebook template revision history # # ### 1.0.1 2020/10/21 # # * add revision history # * split `create requirements and pip install` into editable and non-editable cells # * split `import` into editable and non-editable cells # # ### 1.0.0 2020/10/12 # # * initial creation # + deletable=false editable=false ######################################### # area:flags set # do not edit ######################################### # Determine from the startup arguments whether to run as AIT or under Jupyter import sys is_ait_launch = (len(sys.argv) == 2) # + deletable=false editable=false ######################################### # area:ait-sdk install # do not edit ######################################### if not is_ait_launch: # get ait-sdk file name from pathlib import Path from glob import glob import re def numericalSort(value): numbers = re.compile(r'(\d+)') parts = numbers.split(value) parts[1::2] = map(int, parts[1::2]) return parts latest_sdk_file_path = sorted(glob('../lib/*.whl'), key=numericalSort)[-1] ait_sdk_name = Path(latest_sdk_file_path).name # copy to develop dir import shutil # current_dir = %pwd shutil.copyfile(f'../lib/{ait_sdk_name}', f'{current_dir}/{ait_sdk_name}') # install ait-sdk # !pip install --upgrade pip # !pip install --force-reinstall ./$ait_sdk_name # + deletable=false editable=false ######################################### # area:create requirements and pip install # do not edit ######################################### if not is_ait_launch: from ait_sdk.common.files.ait_requirements_generator import AITRequirementsGenerator requirements_generator = 
AITRequirementsGenerator() # - ######################################### # area:create requirements and pip install # should edit ######################################### if not is_ait_launch: requirements_generator.add_package('Cython') requirements_generator.add_package('docopt') requirements_generator.add_package('clint') requirements_generator.add_package('crontab') requirements_generator.add_package('tablib') requirements_generator.add_package('matplotlib') requirements_generator.add_package('Pillow') requirements_generator.add_package('pycocotools') requirements_generator.add_package('tensorflow','2.5.0') requirements_generator.add_package('lxml') requirements_generator.add_package('tf_slim') requirements_generator.add_package('pandas') requirements_generator.add_package('numpy') requirements_generator.add_package('ipython') requirements_generator.add_package('zipp','3.1.0') # + deletable=false editable=false ######################################### # area:create requirements and pip install # do not edit ######################################### if not is_ait_launch: requirements_generator.add_package(f'./{ait_sdk_name}') requirements_path = requirements_generator.create_requirements(current_dir) # !pip install -r $requirements_path # + ######################################### # area:import # should edit ######################################### # import if you need modules cell import numpy as np import os import csv import pathlib import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile import json import pandas as pd import glob import shutil from pathlib import Path from os import makedirs, path from collections import Iterable from io import StringIO from matplotlib import pyplot as plt from PIL import Image from IPython.display import display # + deletable=false editable=false ######################################### # area:import # do not edit ######################################### # must use modules 
import shutil # do not remove from ait_sdk.common.files.ait_input import AITInput # do not remove from ait_sdk.common.files.ait_output import AITOutput # do not remove from ait_sdk.common.files.ait_manifest import AITManifest # do not remove from ait_sdk.develop.ait_path_helper import AITPathHelper # do not remove from ait_sdk.utils.logging import get_logger, log, get_log_path # do not remove from ait_sdk.develop.annotation import measures, resources, downloads, ait_main # do not remove # must use modules # - ######################################### # area:create manifest # should edit ######################################### if not is_ait_launch: from ait_sdk.common.files.ait_manifest_generator import AITManifestGenerator manifest_genenerator = AITManifestGenerator(current_dir) manifest_genenerator.set_ait_name('eval_bdd100k_aicc_tf2.3') manifest_genenerator.set_ait_description('''The image classification model infers the image data (.jpg). Compare the inference result with the correct answer data (.json). Output the coverage of the comparison result. !!!Caution!!! 
Please set the memory allocation of docker to 4GB or more.''') manifest_genenerator.set_ait_author('AIST') manifest_genenerator.set_ait_email('') manifest_genenerator.set_ait_version('0.1') manifest_genenerator.set_ait_quality('https://airc.aist.go.jp/aiqm/quality/internal/Accuracy_of_trained_model') manifest_genenerator.set_ait_reference('') manifest_genenerator.add_ait_inventories(name='trained_model_checkpoint', type_='model', description='trained_model_checkpoint', format_=['zip'], schema='https://www.tensorflow.org/guide/saved_model') manifest_genenerator.add_ait_inventories(name='trained_model_graph', type_='model', description='trained_model_graph', format_=['zip'], schema='https://www.tensorflow.org/guide/saved_model') manifest_genenerator.add_ait_inventories(name='trained_model_protobuf', type_='model', description='trained_model_protobuf', format_=['zip'], schema='https://www.tensorflow.org/guide/saved_model') manifest_genenerator.add_ait_inventories(name='test_set_images', type_='dataset', description='image_dataset(bdd100K)', format_=['zip'], schema='https://bdd-data.berkeley.edu/') manifest_genenerator.add_ait_inventories(name='test_set_labels', type_='dataset', description='image_label_dataset(bdd100K)', format_=['json'], schema='https://bdd-data.berkeley.edu/') manifest_genenerator.add_ait_inventories(name='labels_define', type_='dataset', description='labels_define', format_=['txt'], schema='https://github.com/tensorflow/models/tree/master/research/object_detection/data') manifest_genenerator.add_ait_measures(name='traffic_sign_accuracy', type_='float', description='accuracy predicted of traffic_sign', structure='single', min='0', max='1') manifest_genenerator.add_ait_measures(name='traffic_light_accuracy', type_='float', description='accuracy predicted of traffic_light', structure='single', min='0', max='1') manifest_genenerator.add_ait_measures(name='car_accuracy', type_='float', description='accuracy predicted of car', structure='single', 
min='0', max='1') manifest_genenerator.add_ait_measures(name='rider_accuracy', type_='float', description='accuracy predicted of rider', structure='single', min='0', max='1') manifest_genenerator.add_ait_measures(name='motor_accuracy', type_='float', description='accuracy predicted of motor', structure='single', min='0', max='1') manifest_genenerator.add_ait_measures(name='person_accuracy', type_='float', description='accuracy predicted of person', structure='single', min='0', max='1') manifest_genenerator.add_ait_measures(name='bus_accuracy', type_='float', description='accuracy predicted of bus', structure='single', min='0', max='1') manifest_genenerator.add_ait_measures(name='truck_accuracy', type_='float', description='accuracy predicted of truck', structure='single', min='0', max='1') manifest_genenerator.add_ait_measures(name='bike_accuracy', type_='float', description='accuracy predicted of bike', structure='single', min='0', max='1') manifest_genenerator.add_ait_measures(name='train_accuracy', type_='float', description='accuracy predicted of train', structure='single', min='0', max='1') manifest_genenerator.add_ait_resources(name='all_label_accuracy_csv', type_='table', description='accuracy of all label') manifest_genenerator.add_ait_resources(name='all_label_accuracy_png', type_='picture', description='accuracy of all label') manifest_genenerator.add_ait_downloads(name='Log', description='AIT_log') manifest_genenerator.add_ait_downloads(name='each_label_accuracy', description='accuracy of each label') manifest_genenerator.add_ait_downloads(name='each_picture', description='predict of each picture') manifest_path = manifest_genenerator.write() ######################################### # area:create input # should edit ######################################### if not is_ait_launch: from ait_sdk.common.files.ait_input_generator import AITInputGenerator input_generator = AITInputGenerator(manifest_path) 
input_generator.add_ait_inventories(name='trained_model_checkpoint', value='trained_model_checkpoint/trained_model_checkpoint.zip') input_generator.add_ait_inventories(name='trained_model_graph', value='trained_model_graph/trained_model_graph.zip') input_generator.add_ait_inventories(name='trained_model_protobuf', value='trained_model_protobuf/trained_model_protobuf.zip') input_generator.add_ait_inventories(name='test_set_images', value='test_set_images/test_set_images.zip') input_generator.add_ait_inventories(name='test_set_labels', value='test_set_labels/bdd100k_labels_images_val.json') input_generator.add_ait_inventories(name='labels_define', value='labels_define/mscoco_complete_label_map.pbtxt') input_generator.write() # + deletable=false editable=false ######################################### # area:initialize # do not edit ######################################### logger = get_logger() ait_manifest = AITManifest() ait_input = AITInput(ait_manifest) ait_output = AITOutput(ait_manifest) if is_ait_launch: # launch from AIT current_dir = path.dirname(path.abspath(__file__)) path_helper = AITPathHelper(argv=sys.argv, ait_input=ait_input, ait_manifest=ait_manifest, entry_point_dir=current_dir) else: # launch from jupyter notebook # ait.input.json make in input_dir input_dir = '/usr/local/qai/mnt/ip/job_args/1/1' # current_dir = %pwd path_helper = AITPathHelper(argv=['', input_dir], ait_input=ait_input, ait_manifest=ait_manifest, entry_point_dir=current_dir) ait_input.read_json(path_helper.get_input_file_path()) ait_manifest.read_json(path_helper.get_manifest_file_path()) ### do not edit cell # - if not is_ait_launch: # !apt-get update -yqq # !apt-get install gcc g++ make libffi-dev libssl-dev -yqq # !apt-get install python-lxml -y # !apt-get install python-dev libxml2 libxml2-dev libxslt-dev curl -y # !export PYTHONPATH=$PYTHONPATH:/tensorflow/models/research:/tensorflow/ # %cd /tmp # !apt-get install -y unzip # !curl -OL 
https://github.com/google/protobuf/releases/download/v3.9.0/protoc-3.9.0-linux-x86_64.zip # !unzip -d protoc3 protoc-3.9.0-linux-x86_64.zip # !mv protoc3/bin/* /usr/local/bin/ # !mv protoc3/include/* /usr/local/include/ # !rm -rf protoc-3.9.0-linux-x86_64.zip protoc3 if not is_ait_launch: # !apt-get install git -y # !git clone https://github.com/cocodataset/cocoapi.git # %cd /tmp/cocoapi/PythonAPI # !python3 setup.py build_ext --inplace # !rm -rf build # !python3 setup.py build_ext install # !rm -rf build # !mkdir /tensorflow # %cd /tensorflow # !git clone https://github.com/tensorflow/models.git # %cd /tensorflow/models/research # !protoc object_detection/protos/*.proto --python_out=. # %cd /workdir/root/develop # + sys.path.append(os.path.join(Path().resolve(), '/tensorflow/models/research')) from object_detection.utils import ops as utils_ops from object_detection.utils import label_map_util from object_detection.utils import visualization_utils as vis_util utils_ops.tf = tf.compat.v1 tf.gfile = tf.io.gfile g_total_accuracy = [] resultFile = [] # With a range of -1 you will get evrything limit_box_range = 0 # Obtaining yolo results g_image_name = "" count_traffic_label = 0 gt_list_boxes = [] total_correct_predicted = [0,0,0,0,0,0,0,0,0,0] total_GT_correct_predicted = [0,0,0,0,0,0,0,0,0,0] total_predicted_per_label = [0,0,0,0,0,0,0,0,0,0] correct_predicted = [0,0,0,0,0,0,0,0,0,0] jsonFileName = "" average_accuracy = [] avg_acc_per_traffic_sign = [] avg_acc_per_traffic_light = [] avg_acc_per_car = [] avg_acc_per_rider = [] avg_acc_per_motor = [] avg_acc_per_person = [] avg_acc_per_bus = [] avg_acc_per_truck = [] avg_acc_per_bike = [] avg_acc_per_train = [] f_each = [] # - def _run_inference_for_single_image(model, image) -> np.ndarray: image = np.asarray(image) # The input needs to be a tensor, convert it using `tf.convert_to_tensor`. input_tensor = tf.convert_to_tensor(image) # The model expects a batch of images, so add an axis with `tf.newaxis`. 
input_tensor = input_tensor[tf.newaxis,...] # Run inference model_fn = model.signatures['serving_default'] output_dict = model_fn(input_tensor) # All outputs are batches tensors. # Convert to numpy arrays, and take index [0] to remove the batch dimension. # We're only interested in the first num_detections. num_detections = int(output_dict.pop('num_detections')) output_dict = {key:value[0, :num_detections].numpy() for key,value in output_dict.items()} output_dict['num_detections'] = num_detections # detection_classes should be ints. output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64) # Handle models with masks: if 'detection_masks' in output_dict: # Reframe the the bbox mask to the image size. detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( output_dict['detection_masks'], output_dict['detection_boxes'], image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5, tf.uint8) output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy() return output_dict def _read_labels(label_path) -> np.ndarray: labels = json.load(open(label_path, 'r')) if not isinstance(labels, Iterable): labels = [labels] return labels def _get_boxes(objects) -> np.ndarray: return [o for o in objects if ('box2d' in o and o['box2d'] is not None) or ('box3d' in o and o['box3d'] is not None)] def _get_id_label(current_object_label) -> int: if current_object_label == "traffic sign": #traffic_sign = 0 return 0 elif current_object_label == "traffic light": #traffic light = 1 return 1 elif current_object_label == "car": #car = 2 return 2 elif current_object_label == "rider": #rider = 3 return 3 elif current_object_label == "motor": #motor = 4 return 4 elif current_object_label == "person": #person = 5 return 5 elif current_object_label == "bus": #bus = 6 return 6 elif current_object_label == "truck": #truck = 7 return 7 elif current_object_label == "bike": #bike = 8 return 8 elif 
current_object_label == "train": #train = 9 return 9 return -1 def _get_label_from_id(id) -> str: if id == 0: return "traffic sign" elif id == 1: return "traffic light" elif id == 2: return "car" elif id == 3: return "rider" elif id == 4: return "motor" elif id == 5: return "person" elif id == 6: return "bus" elif id == 7: return "truck" elif id == 8: return "bike" elif id == 9: return "train" return "" def _adding_avg_label_from_id(id, value, avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train): if id == 0: avg_acc_per_traffic_sign.append(value) elif id == 1: avg_acc_per_traffic_light.append(value) elif id == 2: avg_acc_per_car.append(value) elif id == 3: avg_acc_per_rider.append(value) elif id == 4: avg_acc_per_motor.append(value) elif id == 5: avg_acc_per_person.append(value) elif id == 6: avg_acc_per_bus.append(value) elif id == 7: avg_acc_per_truck.append(value) elif id == 8: avg_acc_per_bike.append(value) elif id == 9: avg_acc_per_train.append(value) return avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train def _search_image_information(objects, image_name) -> np.ndarray: list_boxes = [] for counter,o in enumerate(objects): if o['name'] != image_name: continue for b in _get_boxes(o["labels"]): current_object_label = b['category'] x1 = b['box2d']['x1'] y1 = b['box2d']['y1'] x2 = b['box2d']['x2'] y2 = b['box2d']['y2'] if abs(x1 - x2) < limit_box_range and abs(y1 - y2) < limit_box_range: continue # Last value is to determine if this box has been found or not list_boxes.append([current_object_label,x1,y1,x2,y2, False]) break return list_boxes def _load_labels_ground_truth(gt_list_boxes, total_GT_correct_predicted): list_ground_truth = [0,0,0,0,0,0,0,0,0,0] for counter, 
box in enumerate(gt_list_boxes): id = _get_id_label(box[0]) list_ground_truth[id] = list_ground_truth[id] + 1 total_GT_correct_predicted[id] = total_GT_correct_predicted[id] + 1 return list_ground_truth,total_GT_correct_predicted def _averageOfList(num) -> float: sumOfNumbers = 0 for t in num: sumOfNumbers = sumOfNumbers + t avg = sumOfNumbers / len(num) return avg # JSONファイルと推論結果を比較する def _compare_json(gt_list_boxes, objects, total_GT_correct_predicted,avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train) -> None: for line in resultFile: listWords = line.split(" ") if (listWords[0] == "Enter"): # CHECK ACCURACY OF PREVIOUS IMAGE if len(gt_list_boxes) > 0: total_labels = len(gt_list_boxes) list_ground_truth,total_GT_correct_predicted = _load_labels_ground_truth(gt_list_boxes,total_GT_correct_predicted) labels_found = 0 # WRITE RESULTS IN A FILE for box in gt_list_boxes: if box[5]: labels_found = labels_found + 1 labels_found = 0 for counter, items in enumerate(list_ground_truth): labels_found = labels_found + correct_predicted[counter] f_each.append(str(g_image_name)+" ACCURACY of labels found: "+str(labels_found/total_labels)+"\n") avg_acc = 0 for counter, items in enumerate(list_ground_truth): if items == 0: f_each.append(_get_label_from_id(counter)+ "- Correct Predicted/labels in image:"+ str(correct_predicted[counter]) + "/" + str(items) + " = 0\n") else: f_each.append(_get_label_from_id(counter)+ "- Correct Predicted/labels in image:"+ str(correct_predicted[counter]) + "/" + str(items) + " = " + str(correct_predicted[counter]/items) + "\n") if predicted_per_label[counter] == 0: f_each.append(str(_get_label_from_id(counter))+ "- Good predicted(TP)/total predicted(FP):"+ str(correct_predicted[counter])+ "/"+str(predicted_per_label[counter])+" = 0\n") f_each.append(str(_get_label_from_id(counter))+ "- False 
predicted(FP)/total predicted(FP):"+ str(predicted_per_label[counter] - correct_predicted[counter])+ "/"+str(predicted_per_label[counter])+" = 0\n") else: f_each.append(str(_get_label_from_id(counter))+ "- Good predicted(TP)/total predicted:"+ str(correct_predicted[counter]) + "/"+str(predicted_per_label[counter])+" = "+ str(correct_predicted[counter]/predicted_per_label[counter]) + "\n") f_each.append(str(_get_label_from_id(counter))+ "- False predicted(FP)/total predicted:"+ str(predicted_per_label[counter] - correct_predicted[counter]) + "/"+str(predicted_per_label[counter])+" = "+ str((predicted_per_label[counter] - correct_predicted[counter])/predicted_per_label[counter]) + "\n") if predicted_per_label[counter] > 0 and items == 0: f_each.append(str(_get_label_from_id(counter))+ "- Accuracy: 0 \n") label_avg_acc = 0 avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train = _adding_avg_label_from_id(counter, 0, avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train) elif predicted_per_label[counter] == 0 and items > 0: f_each.append(str(_get_label_from_id(counter))+ "- Accuracy: 0 \n") label_avg_acc = 0 avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train = _adding_avg_label_from_id(counter, 0, avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train) elif predicted_per_label[counter] == 0 and items == 0: f_each.append(str(_get_label_from_id(counter))+ "- Accuracy: 0 \n") 
avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train = _adding_avg_label_from_id(counter, 100, avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train) label_avg_acc = 0 else: # It has predicted correctly that there are any label of the object searched f_each.append(str(_get_label_from_id(counter))+ "- Accuracy: " + str((correct_predicted[counter]/items)*100) + " \n") label_avg_acc = (correct_predicted[counter]/items)*100 avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train = _adding_avg_label_from_id(counter, ((correct_predicted[counter]/items)*100), avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train) avg_acc = avg_acc + label_avg_acc f_each.append("---\n") f_each.append(str(g_image_name)+" AVERAGE ACCURACY: "+str(avg_acc/10)+"\n") average_accuracy.append(avg_acc/10) f_each.append("--------------------------------------------------------------------\n") correct_predicted = [0,0,0,0,0,0,0,0,0,0] predicted_per_label = [0,0,0,0,0,0,0,0,0,0] g_image_name = listWords[3] # ADD HERE THE INFORMATION LABELS OF THAT images gt_list_boxes = _search_image_information(objects, g_image_name) else: # using remove() to # perform removal while("" in listWords) : listWords.remove("") current_object_label = listWords[0][:-1] if current_object_label == "traffi": current_object_label = "traffic" + " " + listWords[1][:-1] count_traffic_label = 1 else: count_traffic_label = 0 # 他の物体を認識したときはスルーする if 
_get_id_label(current_object_label) == -1: print(current_object_label) continue # Left_x left_x = listWords[2+count_traffic_label] # top_y top_y = listWords[4+count_traffic_label] # width width = listWords[6+count_traffic_label] # height height = listWords[8+count_traffic_label][:-2] # Get the X-Y info x1 = int(float(left_x)) y1 = int(float(top_y)) x2 = x1 + int(float(width)) y2 = y1 + int(float(height)) if abs(x1 - x2) < limit_box_range and abs(y1 - y2) < limit_box_range: continue total_predicted_per_label[_get_id_label(current_object_label)] = total_predicted_per_label[_get_id_label(current_object_label)] + 1 predicted_per_label[_get_id_label(current_object_label)] = predicted_per_label[_get_id_label(current_object_label)] + 1 position = -1 range = 200 for counter, box in enumerate(gt_list_boxes): box_label= box[0] box_x1 = box[1] if box_label == current_object_label: if range >= abs(box_x1 - x1): range = abs(box_x1 - x1) position = counter if position >= 0: if gt_list_boxes[position][5] == False: gt_list_boxes[position][5] = True id = _get_id_label(current_object_label) correct_predicted[id] = correct_predicted[id] + 1 total_correct_predicted[id] = total_correct_predicted[id] + 1 def _accuracy_count(total_GT_correct_predicted, avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train) -> None: f_each.append("\n") f_each.append("TOTAL\n") f_each.append("--------------------------------------------------------------------\n") f_each.append("Average accuracy: " + str(_averageOfList(average_accuracy))+"\n") string_result_amount_labels = "" string_result_acc = "" for counter,amount_label in enumerate(total_GT_correct_predicted): f_each.append(str(_get_label_from_id(counter)) +"- Amount of labels: " + str(amount_label) + "\n") if counter == 0: f_each.append(str(_get_label_from_id(counter)) +"- Average accuracy: " + 
str(_averageOfList(avg_acc_per_traffic_sign))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_traffic_sign)) + "," elif counter == 1: f_each.append(str(_get_label_from_id(counter)) +"- Average accuracy: " + str(_averageOfList(avg_acc_per_traffic_light))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_traffic_light)) + "," elif counter == 2: f_each.append(str(_get_label_from_id(counter)) +"- Average accuracy: " + str(_averageOfList(avg_acc_per_car))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_car)) + "," elif counter == 3: f_each.append(str(_get_label_from_id(counter)) +"- Average accuracy: " + str(_averageOfList(avg_acc_per_rider))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_rider)) + "," elif counter == 4: f_each.append(str(_get_label_from_id(counter)) +"- Average accuracy: " + str(_averageOfList(avg_acc_per_motor))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_motor)) + "," elif counter == 5: f_each.append(str(_get_label_from_id(counter)) +"- Average accuracy: " + str(_averageOfList(avg_acc_per_person))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_person)) + "," elif counter == 6: f_each.append(str(_get_label_from_id(counter)) +"- Average accuracy: " + str(_averageOfList(avg_acc_per_bus))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_bus)) + "," elif counter == 7: f_each.append(str(_get_label_from_id(counter)) +"- Average accuracy: " + str(_averageOfList(avg_acc_per_truck))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_truck)) + "," elif counter == 8: f_each.append(str(_get_label_from_id(counter)) +"- Average accuracy: " + str(_averageOfList(avg_acc_per_bike))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_bike)) + "," elif counter == 9: f_each.append(str(_get_label_from_id(counter)) +"- 
Average accuracy: " + str(_averageOfList(avg_acc_per_train))+"\n") string_result_acc = string_result_acc + str(_averageOfList(avg_acc_per_train)) + "," string_result_amount_labels = string_result_amount_labels + str(amount_label) + "," f_each.append("---\n") def _view_accuracy_label(total_GT_correct_predicted) -> None: false_result = [] for index,amount_label in enumerate(total_predicted_per_label): correct_buffer = total_correct_predicted[index] false_buffer = 0 if amount_label > total_GT_correct_predicted[index]: false_buffer = amount_label - correct_buffer else: false_buffer = total_GT_correct_predicted[index] - correct_buffer false_result.append(false_buffer) if correct_buffer == 0: g_total_accuracy.append(0) else: g_total_accuracy.append( correct_buffer / (correct_buffer + false_buffer)) df = pd.DataFrame({'Name': ["traffic sign","traffic light","car","rider","motor","person","bus","truck","bike","train"], 'Original_data': total_GT_correct_predicted, 'Predicted_num': total_predicted_per_label, 'Predicted_correct_num': total_correct_predicted, 'Predicted_false_num': false_result, 'Accuracy': g_total_accuracy}) print(df) # resourcesに追加 save_all_label_accuracy_csv(df) save_all_label_accuracy_png(df) # ワイルドカードを使ってファイルの削除 def _remove_glob(pathname, recursive=True): for p in glob.glob(pathname, recursive=recursive): if os.path.isfile(p): os.remove(p) # 画像データの解凍 def _load_image(file_path: str) -> np.ndarray: with zipfile.ZipFile(file_path) as existing_zip: existing_zip.extractall('/tmp') path_to_test_images_dir = pathlib.Path('/tmp') test_image_path = sorted(list(path_to_test_images_dir.glob("*.jpg"))) data = test_image_path return data # + ######################################### # area:functions # should edit ######################################### @log(logger) @downloads(ait_output, path_helper, 'each_picture', '{}predict.jpg') def save_images(model, image_path_list, category_index, file_path: str=None) -> None: out_files = [] for image_path in image_path_list: 
        # The array-based representation of the image will be used later to prepare the
        # result image with boxes and labels on it.
        image_np = np.array(Image.open(image_path))
        # Actual detection.
        output_dict = _run_inference_for_single_image(model, image_np)
        tmp_image_name = 'Enter Image Path: ' + image_path.name
        resultFile.append(tmp_image_name)
        for index, item in enumerate(output_dict['detection_boxes']):
            kind = output_dict['detection_classes'][index]
            accuracy = output_dict['detection_scores'][index]
            accuracy_int = int(accuracy * 100)
            # Box coordinates are (ymin, xmin, ymax, xmax), normalized to the 0-1 range.
            # Images are 1280x720, so scale x by 1280 and y by 720.
            if accuracy_int >= 50:
                left_x = int(output_dict['detection_boxes'][index][1] * 1280)
                top_y = int(output_dict['detection_boxes'][index][0] * 720)
                width = int(output_dict['detection_boxes'][index][3] * 1280)
                height = int(output_dict['detection_boxes'][index][2] * 720)
                kind_name = ""
                if category_index[kind]['name'] == "bicycle":
                    kind_name = "bike"
                elif category_index[kind]['name'] == "motorcycle":
                    kind_name = "motor"
                else:
                    kind_name = category_index[kind]['name']
                tmp_buffer = kind_name + ': ' + str(accuracy_int) + '%\t(left_x: ' + str(left_x) + \
                    ' top_y: ' + str(top_y) + ' width: ' + str(width) + ' height: ' + str(height) + ')'
                resultFile.append(tmp_buffer)
        # Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks_reframed', None), use_normalized_coordinates=True, min_score_thresh=.5, line_thickness=8) predict_image = file_path.format(image_path.name[0:-4]) out_files.append(predict_image) Image.fromarray(image_np).save(predict_image) return out_files # + ######################################### # area:functions # should edit ######################################### @log(logger) @resources(ait_output, path_helper, 'all_label_accuracy_csv', 'all_label_accuracy.csv') def save_all_label_accuracy_csv(df, file_path: str=None) -> None: df.to_csv(file_path) # + ######################################### # area:functions # should edit ######################################### @log(logger) @resources(ait_output, path_helper, 'all_label_accuracy_png', 'all_label_accuracy.png') def save_all_label_accuracy_png(df, file_path: str=None) -> None: # 以前の画像ファイルがあれば削除 _remove_glob(str(Path(file_path).parent)+ '/*.png') fig = plt.figure(figsize=(12, 8), dpi=100) ax = fig.subplots() ax.axis('off') ax.axis('tight') tmp_table = ax.table(cellText=df.values, colLabels=df.columns, rowLabels=df.index, bbox=[0,0,1,1]) plt.savefig(file_path) plt.close('all') # + ######################################### # area:functions # should edit ######################################### @log(logger) @downloads(ait_output, path_helper, 'each_label_accuracy', 'each_label_accuracy.csv') def save_each_label_accuracy(file_path: str=None) -> None: with open(file_path, 'w') as f: writer = csv.writer(f) for buf in f_each: writer.writerow(buf) # + ######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'traffic_sign_accuracy') def measure_traffic_sign_accuracy(accuracy): return accuracy # + 
######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'traffic_light_accuracy') def measure_traffic_light_accuracy(accuracy): return accuracy # + ######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'car_accuracy') def measure_car_accuracy(accuracy): return accuracy # + ######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'rider_accuracy') def measure_rider_accuracy(accuracy): return accuracy # + ######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'motor_accuracy') def measure_motor_accuracy(accuracy): return accuracy # + ######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'person_accuracy') def measure_person_accuracy(accuracy): return accuracy # + ######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'bus_accuracy') def measure_bus_accuracy(accuracy): return accuracy # + ######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'truck_accuracy') def measure_truck_accuracy(accuracy): return accuracy # + ######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'bike_accuracy') def measure_bike_accuracy(accuracy): return accuracy # + ######################################### # area:functions # should edit ######################################### @log(logger) @measures(ait_output, 'train_accuracy') def 
measure_train_accuracy(accuracy): return accuracy # + ######################################### # area:functions # should edit ######################################### @log(logger) @downloads(ait_output, path_helper, 'Log', 'ait.log') def move_log(file_path: str=None) -> None: shutil.move(get_log_path(), file_path) # + ######################################### # area:main # should edit ######################################### @log(logger) @ait_main(ait_output, path_helper) def main() -> None: # インベントリのMNISTラベル・画像を読み込み image_path = ait_input.get_inventory_path('test_set_images') model_checkpoint_path = ait_input.get_inventory_path('trained_model_checkpoint') model_graph_path = ait_input.get_inventory_path('trained_model_graph') model_protobuf_path = ait_input.get_inventory_path('trained_model_protobuf') # モデル読み込み zipfile.ZipFile(model_checkpoint_path).extractall('/tmp') zipfile.ZipFile(model_graph_path).extractall('/tmp/ssd_mobilenet_v2_coco') zipfile.ZipFile(model_protobuf_path).extractall('/tmp/ssd_mobilenet_v2_coco') model = tf.saved_model.load(str('/tmp/ssd_mobilenet_v2_coco/saved_model')) # jsonファイルのパス jsonFileName = ait_input.get_inventory_path('test_set_labels') objects = _read_labels(jsonFileName) # ラベルの定義読み込み label_map_name = ait_input.get_inventory_path('labels_define') category_index = label_map_util.create_category_index_from_labelmap(label_map_name, use_display_name=True) # 画像読み込み image_list = _load_image(image_path) # 推論 (resources) save_images(model, image_list, category_index) resultFile.append('Enter Image Path: END') # JSONファイルと推論結果を比較する _compare_json(gt_list_boxes, objects, total_GT_correct_predicted, avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus, avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train) # 推論結果の集計 _accuracy_count(total_GT_correct_predicted, avg_acc_per_traffic_sign, avg_acc_per_traffic_light, avg_acc_per_car, avg_acc_per_rider, 
                    avg_acc_per_motor, avg_acc_per_person, avg_acc_per_bus,
                    avg_acc_per_truck, avg_acc_per_bike, avg_acc_per_train)

    # Aggregate accuracy per label (resources)
    _view_accuracy_label(total_GT_correct_predicted)

    # download
    save_each_label_accuracy()

    # measure
    measure_traffic_sign_accuracy(g_total_accuracy[0])
    measure_traffic_light_accuracy(g_total_accuracy[1])
    measure_car_accuracy(g_total_accuracy[2])
    measure_rider_accuracy(g_total_accuracy[3])
    measure_motor_accuracy(g_total_accuracy[4])
    measure_person_accuracy(g_total_accuracy[5])
    measure_bus_accuracy(g_total_accuracy[6])
    measure_truck_accuracy(g_total_accuracy[7])
    measure_bike_accuracy(g_total_accuracy[8])
    measure_train_accuracy(g_total_accuracy[9])

    move_log()


# + deletable=false editable=false
#########################################
# area:entry point
# do not edit
#########################################
if __name__ == '__main__':
    main()
# -

#########################################
# area:license attribute set
# should edit
#########################################
ait_owner='AIST'
ait_creation_year='2020'

# + deletable=false editable=false
#########################################
# area:prepare deploy
# do not edit
#########################################
if not is_ait_launch:
    from ait_sdk.deploy import prepare_deploy
    from ait_sdk.license.license_generator import LicenseGenerator
    # current_dir = %pwd
    prepare_deploy(ait_manifest, ait_sdk_name, current_dir, requirements_path, is_remote_deploy=True)

    # output License.txt
    license_generator = LicenseGenerator()
    license_generator.write('../top_dir/LICENSE.txt', ait_creation_year, ait_owner)
# -
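`_view_accuracy_label` above derives each label's accuracy as correct / (correct + false), where the false count comes from over-prediction when the model predicted more boxes than the ground truth holds, and from missed ground-truth boxes otherwise. A minimal sketch of that rule, on made-up counts:

```python
def label_accuracy(predicted, ground_truth, correct):
    """Accuracy rule used in _view_accuracy_label: the false count is the
    larger of over-predictions and missed ground-truth boxes."""
    if predicted > ground_truth:
        false = predicted - correct   # extra detections count as false
    else:
        false = ground_truth - correct  # missed ground-truth boxes count as false
    if correct == 0:
        return 0.0
    return correct / (correct + false)

# Hypothetical counts: 12 boxes predicted, 10 in the ground truth, 8 matched.
print(label_accuracy(predicted=12, ground_truth=10, correct=8))  # -> 0.6666666666666666
```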
ait_repository/ait/eval_bdd100k_aicc_tf2.3_0.1/develop/my_ait.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import os
import numpy as np
import pprint
from fastestimator.summary.logs import parse_log_file
import matplotlib.pyplot as plt


def plot_box_within_exp(ylim=None):
    acc = {}
    model_pool = [2, 3, 4, 5, 6, 8, 9, 10]  # leave out 1 and 7 because they diverge too much
    for init_lr in ["0.1", "0.001", "0.00001"]:
        acc[init_lr] = []
        for model_run in model_pool:
            stat = []
            for run in range(5):
                summary = parse_log_file(
                    f"../../../logs/supplementary/proxy_test/selection/run_{model_run}/{init_lr}_{run}.txt",
                    ".txt")
                max_acc = np.max([x for x in summary.history["eval"]["accuracy"].values()])
                stat.append(max_acc)
            acc[init_lr].append(stat)
    fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(15, 5))
    for i, key in enumerate(acc.keys()):
        bplot1 = axs[i].boxplot(acc[key],
                                vert=True,          # vertical box alignment
                                patch_artist=True)  # fill with color
        # set ax title
        axs[i].set_title(f"init_lr = {key}")
        axs[i].set_ylim(ylim)
        # adding horizontal grid lines
        axs[i].yaxis.grid(True)
        axs[i].set_xticks([y + 1 for y in range(len(model_pool))])
        axs[i].set_xticklabels(model_pool)
    plt.savefig("../../../results/supplementary/model_selection.jpeg", dpi=300, transparent=True)


plot_box_within_exp()


def get_total(ylim=None):
    acc = {}
    for init_lr in ["0.1", "0.001", "0.00001"]:
        acc[init_lr] = []
        for model_run in range(1, 11):  # svhn
            stat = []
            for run in range(5):
                summary = parse_log_file(
                    f"../../../logs/supplementary/proxy_test/selection/run_{model_run}/{init_lr}_{run}.txt",
                    ".txt")
                max_acc = np.max([x for x in summary.history["eval"]["accuracy"].values()])
                stat.append(max_acc)
            acc[init_lr].append(stat)
    total = [0] * 10
    for i in range(10):
        for init_lr in ["0.1", "0.001", "0.00001"]:
            total[i] += sum(acc[init_lr][i])
    return total


get_total()
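`get_total` above collapses each candidate model into one score by summing its best eval accuracy over every learning rate and run; choosing the proxy model is then an argmax over those totals. A small sketch under that assumption (the totals below are placeholder numbers, not real results):

```python
import numpy as np

def pick_best_model(totals, model_ids=None):
    """Return the model id with the highest summed accuracy."""
    totals = np.asarray(totals)
    if model_ids is None:
        model_ids = list(range(1, len(totals) + 1))  # run directories are 1-indexed
    return model_ids[int(np.argmax(totals))]

# Hypothetical totals for three models (each the sum of 15 max-accuracy values).
print(pick_best_model([12.1, 13.4, 12.9]))  # -> 2
```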
source/supplementary/graph/plot_proxy.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + pycharm={"is_executing": false}
import tensorflow as tf

# + pycharm={"name": "#%%\n", "is_executing": false}
# Requires TensorFlow 1.x: this tutorial module was removed in TensorFlow 2.
from tensorflow.examples.tutorials.mnist import input_data

# + pycharm={"name": "#%%\n", "is_executing": false}
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# + pycharm={"name": "#%%\n", "is_executing": false}
# Create a placeholder for the input image vectors
x = tf.placeholder(tf.float32, [None, 28*28])

# + pycharm={"name": "#%%\n", "is_executing": false}
# Create the variables W and b
W = tf.Variable(tf.zeros([28*28, 10]))
b = tf.Variable(tf.zeros([10]))

# + pycharm={"name": "#%%\n", "is_executing": false}
# Define the model output
y = tf.nn.softmax(tf.matmul(x, W) + b)

# + pycharm={"name": "#%%\n", "is_executing": false}
# The actual image labels
y_ = tf.placeholder(tf.float32, [None, 10])

# + pycharm={"name": "#%%\n"}
chapter_01/softmax.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernel_info:
#     name: python3-azureml
#   kernelspec:
#     display_name: Python 3.6 - AzureML
#     language: python
#     name: python3-azureml
# ---

# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1618817830904}
import json
import numpy as np
import pandas as pd
import os

try:
    import joblib
except ImportError:  # older scikit-learn environments ship a bundled copy
    from sklearn.externals import joblib


def init(model_path="model.pkl"):
    global model
    model = joblib.load(model_path)


def predict(data):
    try:
        # data = np.array(json.loads(data))
        result = model.predict(data)
        # You can return any data type, as long as it is JSON serializable.
        return result
    except Exception as e:
        error = str(e)
        return error


def predict_proba(data):
    try:
        # data = np.array(json.loads(data))
        result = model.predict_proba(data)
        # You can return any data type, as long as it is JSON serializable.
        return result
    except Exception as e:
        error = str(e)
        return error


# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1618817347252}
id_name = ''
target = 'y'
model_path = "blogFeedBack_model.pkl"
test_file = "blogFeedBack_test.csv"

init(model_path)
test_df = pd.read_csv(test_file)
y_true = test_df[target]
y_pred = predict(test_df.drop([target], axis=1))

from sklearn.metrics import mean_squared_error
mse = mean_squared_error(y_true, y_pred)
print("test mse = {}".format(mse))

# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616676962376}
# Predict the test dataset on Kaggle
kaggle_submit_head = ["Id", "Action"]
kaggle_test_file = "kaggle-test-d_aea.csv"
kaggle_pred_file = "kaggle-test-d_aea-predictions.csv"

kaggle_test_df = pd.read_csv(kaggle_test_file)
kaggle_test_id = np.array(kaggle_test_df[id_name])
# kaggle_y_true = np.array(kaggle_test_df[target])
print(kaggle_test_df.head())
print(kaggle_test_df.shape)
kaggle_y_pred = predict(kaggle_test_df.drop([id_name], axis=1))
print(kaggle_y_pred.shape)

# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616676939313}
print(kaggle_test_id.shape)
assert kaggle_y_pred.shape == kaggle_test_id.shape
kaggle_pred_df = pd.DataFrame({kaggle_submit_head[0]: kaggle_test_id,
                               kaggle_submit_head[1]: kaggle_y_pred})
print(kaggle_pred_df.head())
kaggle_pred_df.to_csv(kaggle_pred_file, index=False)
code/azure/test_ blogFeedBack.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %reload_ext autoreload
# %autoreload 2

# +
from nbc_analysis import concat_filtered_events
from nbc_analysis.utils.debug_utils import runit
from toolz import first, take, concatv
import re
import pandas as pd
import numpy as np
# -

df = runit(concat_filtered_events)
df

g, df = next(iter(grouping))  # NOTE: `grouping` is not defined earlier in this notebook
ds = df.show.value_counts()
list(ds.items())

# ## Exploring dataset

df = ret  # NOTE: `ret` is not defined earlier in this notebook
mpids = df.groupby(level=0).size()
mpids.sort_values(ascending=False, inplace=True)
mpids

dx = df.loc[[-3460882801377953287, 3665820832050183348]]
dx = dx.set_index('show', append=True)
dx.groupby(dx.index.names).size().unstack().T.fillna('')

ret.loc[655341257817720116].T
notebooks/name_parse/concat_filtered_events.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import cv2
import matplotlib.pyplot as plt
# %matplotlib inline

face_classifier = cv2.CascadeClassifier('Models/haarcascade_frontalface_default.xml')
eye_classifier = cv2.CascadeClassifier('Models/haarcascade_eye.xml')

# +
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Run the cascades on a grayscale copy; they operate on intensity values
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (234, 0, 0), 3)
    eyes = eye_classifier.detectMultiScale(gray)
    for (ix, iy, iw, ih) in eyes:
        cv2.rectangle(frame, (ix, iy), (ix + iw, iy + ih), (0, 255, 0), 1)
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) == 27:  # Esc key
        break

cap.release()
cv2.destroyAllWindows()
# -
Human face and eye detection project/Video_eyes_faces_detection.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# need to remove outliers
import pandas as pd
import glob
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import scipy as sc
import scipy.stats  # imported explicitly so that sc.stats is available below
import statistics as stats
from statsmodels.graphics.gofplots import qqplot, qqline

plt.rcParams['font.sans-serif'] = "Arial"
plt.rcParams['font.family'] = "sans-serif"

visitsPerMothPath = r"E:\Downloads\ManducaMultiSenseData\Step5"
duration_path = r"E:\Downloads\ManducaMultiSenseData\Moth_Visits_Information\Duration"
outpath = r"E:\Downloads\ManducaMultiSenseData\Moth_Visits_Information"

moth_list_files = glob.glob(visitsPerMothPath + "\\*.csv")
moth_list_files[0][41:-30]

names = []
visit_per_moth = []
light_level = []
successful_visits = []
fed_status = []
ratio_list = []
for moth in moth_list_files:
    df = pd.read_csv(moth)
    name = moth[41:-30]
    visit_number = len(df.In_Frame)
    successful_number = len(df.ProboscisDetect[df.ProboscisDetect.values > 0])
    successful_visits.append(successful_number)
    visit_per_moth.append(visit_number)
    ratio_list.append(successful_number / visit_number)
    names.append(name)
    if name.startswith("L50"):
        light = ["high"]
    elif name.startswith("L0.1"):
        light = ["low"]
    if successful_number > 0:
        fed = ["yes"]
    else:
        fed = ["no"]
    fed_status.append(fed)
    light_level.append(light)

    duration = []
    pre_duration = []
    post_duration = []
    for dset in df.index:
        dur = df.Out_Frame.values[dset] - df.In_Frame.values[dset]
        if df.ProboscisDetect.values[dset] > 0:
            pre_dur = df.ProboscisDetect.values[dset] - df.In_Frame.values[dset]
            post_dur = df.Out_Frame.values[dset] - df.ProboscisDetect.values[dset]
        else:
            pre_dur = dur
            post_dur = 0
        duration.append(dur)
        pre_duration.append(pre_dur)
        post_duration.append(post_dur)
    new = pd.DataFrame({'duration': duration,
                        'pre_duration': pre_duration,
                        'post_duration': post_duration})
    step5_duration = pd.concat([df, new], axis=1)
    step5_duration.to_csv(duration_path + "\\" + name + "_duration.csv")

new_df = pd.DataFrame({'name': names, 'visits': visit_per_moth, 'ratio': ratio_list,
                       'successful_visits': successful_visits,
                       'Fed_Status': list(np.squeeze(fed_status)),
                       'Light_Level': list(np.squeeze(light_level))})
new_df.to_csv(outpath + "\\Moth_Visits_Table.csv")

names, visit_per_moth, successful_visits, fed_status, light_level

moth_data = pd.read_csv(outpath + "\\Moth_Visits_Table.csv")
moth_data

high_visit_list = moth_data[(moth_data.Light_Level.str.contains("high")) & moth_data.visits.notnull()].visits.values
low_visit_list = moth_data[(moth_data.Light_Level.str.contains("low")) & moth_data.visits.notnull()].visits.values
qqplot(low_visit_list, line='s', c="purple"), qqplot(high_visit_list, line='s', c="orangered"),
print(sc.stats.shapiro(low_visit_list), 'low_visit'), print(sc.stats.shapiro(high_visit_list), 'high_visit')

high_succ_list = moth_data[(moth_data.Light_Level.str.contains("high"))].successful_visits.values
low_succ_list = moth_data[(moth_data.Light_Level.str.contains("low"))].successful_visits.values
qqplot(low_visit_list, line='s', c="purple"), qqplot(high_visit_list, line='s', c="orangered")
print(sc.stats.shapiro(low_succ_list), 'low_succ'), print(sc.stats.shapiro(high_succ_list), 'high_succ')

high_ratio_list = moth_data[(moth_data.Light_Level.str.contains("high"))].ratio.values
low_ratio_list = moth_data[(moth_data.Light_Level.str.contains("low"))].ratio.values
qqplot(low_ratio_list, line='s', c="purple"), qqplot(high_visit_list, line='s', c="orangered")
print(sc.stats.shapiro(low_ratio_list), 'low_ratio'), print(sc.stats.shapiro(high_ratio_list), 'high_ratio')

# KS p-value = 0.2, U-test p-value = 0.053. Small largest gap, but orange is always higher
# (I guess not significantly? The magnitude of the difference is small.)
n_bins = 50
fig, ax = plt.subplots(figsize=(8, 4))
n, bins, patches = ax.hist(low_visit_list, n_bins, density=True, histtype='step',
                           cumulative=True, label='Empirical', color='purple')
n, bins, patches = ax.hist(high_visit_list, n_bins, density=True, histtype='step',
                           cumulative=True, label='Empirical', color='darkorange')
sc.stats.ks_2samp(high_visit_list, low_visit_list), sc.stats.mannwhitneyu(low_visit_list, high_visit_list, use_continuity=True, alternative='two-sided')

# KS p-value = 0.026, U-test p-value = 0.055. Massive largest gap, but orange is always higher
# (I guess not significantly? The magnitude of the difference is small.)
fig, ax = plt.subplots(figsize=(8, 4))
n, bins, patches = ax.hist(low_succ_list, n_bins, density=True, histtype='step',
                           cumulative=True, label='Empirical', color='purple')
n, bins, patches = ax.hist(high_succ_list, n_bins, density=True, histtype='step',
                           cumulative=True, label='Empirical', color='darkorange')
sc.stats.ks_2samp(low_succ_list, high_succ_list), sc.stats.mannwhitneyu(low_succ_list, high_succ_list, use_continuity=True, alternative='two-sided')

# KS p-value = 0.11, U-test p-value = 0.034. Intermediate largest gap, but orange is always higher
fig, ax = plt.subplots(figsize=(8, 4))
n, bins, patches = ax.hist(low_ratio_list, n_bins, density=True, histtype='step',
                           cumulative=True, label='Empirical', color='purple')
n, bins, patches = ax.hist(high_ratio_list, n_bins, density=True, histtype='step',
                           cumulative=True, label='Empirical', color='darkorange')
sc.stats.ks_2samp(low_ratio_list, high_ratio_list), sc.stats.mannwhitneyu(low_ratio_list, high_ratio_list, use_continuity=True, alternative='two-sided')

# +
sns.set(style="ticks")
f, ax = plt.subplots(figsize=(20.52, 18.30))

# Box plot of visits per moth by light level
sns.boxplot(x="Light_Level", y="visits", data=moth_data, palette=['mediumpurple', 'orange'])

# Add in points to show each observation
sns.swarmplot(x="Light_Level", y="visits", data=moth_data, size=6, color="0.2", linewidth=0)

high_visit = len(moth_data[(moth_data.Light_Level.str.contains("high")) & moth_data.visits.notnull()].index)
low_visit = len(moth_data[(moth_data.Light_Level.str.contains("low")) & moth_data.visits.notnull()].index)

lineb = ax.plot(np.zeros(0), '-b')
liner = ax.plot(np.zeros(0), '-r')
lines = (lineb, liner)

for tick in ax.get_yticklabels():
    tick.set_fontsize(55)

# Tweak the visual presentation
ax.xaxis.grid(False)
ax.set_ylabel("Visits", size=70)
ax.set_xlabel("Light Level", size=60)
# ax.set_title("Visits Per Moth", size=16)
ax.set_xticklabels(["Low", "High"], size=60)
sns.despine(trim=True, left=True)
plt.savefig(outpath + "\\VisitsPerMoth_Box.png")
# -

# +
sns.set(style="ticks")
f, ax = plt.subplots(figsize=(10, 10))

# Box plot of successful visits by light level
sns.boxplot(x="Light_Level", y="successful_visits", data=moth_data, palette=['mediumpurple', 'orange'])

# Add in points to show each observation
sns.swarmplot(x="Light_Level", y="successful_visits", data=moth_data, size=6, color="0.2", linewidth=0)

high_fed = len(moth_data[(moth_data.Light_Level.str.contains("high")) & moth_data.Fed_Status.str.contains("yes")].index)
low_fed = len(moth_data[(moth_data.Light_Level.str.contains("low")) & moth_data.Fed_Status.str.contains("yes")].index)

# Tweak the visual presentation
ax.xaxis.grid(False)
ax.set_ylabel("Successful Visits", size=30)
ax.set_xlabel("Light Level", size=40)
ax.set_xticklabels(["Low", "High"], size=40)
for tick in ax.get_yticklabels():
    tick.set_fontsize(30)
# ax.set_title("Ratio of Successful Visits Per Moth", size=40)
ax.set_ylim(-3, 60)
ax.locator_params(nbins=4, axis="y")
# ax.text(0.66, 1.3, 'N = ' + str(high_fed) + "/" + str(high_visit), color='orange', weight='bold', size=30)
# ax.text(0.66, 1.1, 'N = ' + str(low_fed) + "/" + str(low_visit), color='mediumpurple', weight='bold', size=30)
sns.despine(trim=True, left=True)
plt.tight_layout()
plt.savefig(outpath + "//SuccessfulVisits.png")
# -

# +
# do a qqplot of this followed by normality tests
# (as well as something to quantitatively show that they have a similar shape)
sns.set(style="ticks")
f, ax = plt.subplots(figsize=(20.52, 18.30))

# Box plot of the success ratio by light level
sns.boxplot(x="Light_Level", y="ratio", data=moth_data, palette=['mediumpurple', 'orange'])

# Add in points to show each observation
sns.swarmplot(x="Light_Level", y="ratio", data=moth_data, size=6, color="0.2", linewidth=0)

high_fed = len(moth_data[(moth_data.Light_Level.str.contains("high")) & moth_data.Fed_Status.str.contains("yes")].index)
low_fed = len(moth_data[(moth_data.Light_Level.str.contains("low")) & moth_data.Fed_Status.str.contains("yes")].index)

# Tweak the visual presentation
ax.xaxis.grid(False)
ax.set_ylabel("Fraction Successful", size=70)
ax.set_xlabel("Light Level", size=60)
ax.set_xticklabels(["Low", "High"], size=60)
for tick in ax.get_yticklabels():
    tick.set_fontsize(55)
# ax.set_title("Ratio of Successful Visits Per Moth", size=40)
ax.set_ylim(-0.1, 1.1)
ax.locator_params(nbins=4, axis="y")
# ax.text(0.66, 1.3, 'N = ' + str(high_fed) + "/" + str(high_visit), color='orange', weight='bold', size=30)
# ax.text(0.66, 1.1, 'N = ' + str(low_fed) + "/" + str(low_visit), color='mediumpurple', weight='bold', size=30)
sns.despine(trim=True, left=True)
plt.tight_layout()
plt.savefig(outpath + "//FractionSuccessfulVisits.png")
# -

test_stat = np.mean(moth_data.ratio[moth_data.Light_Level == "low"]) - np.mean(moth_data.ratio[moth_data.Light_Level == "high"])
test_stat

# +
# resample
def null_perm_test():
    null_trt = np.random.choice(moth_data.Light_Level, replace=False, size=len(moth_data.Light_Level))
    null_test_stat = np.mean(moth_data.ratio[null_trt == "low"]) - np.mean(moth_data.ratio[null_trt == "high"])
    return null_test_stat

# resample 10000 times to generate the sampling distribution under the null hypothesis
null_dist = np.array([null_perm_test() for ii in range(10000)])
# -

plt.hist(null_dist, bins=20)
plt.vlines(x=test_stat, ymin=0, ymax=1000, color="red")
plt.vlines(x=-test_stat, ymin=0, ymax=1000, color="red")
plt.show()

pval = np.mean((null_dist >= test_stat) | (-test_stat >= null_dist))
pval

moth_data[(moth_data.Light_Level.str.contains("high"))].values

moth_data = pd.read_csv(r"C:\Users\Daniellab\Documents\TanviStuff\MultiSensory\Lightlevel-FlowerShape\MothChart-LightLevel-FlowerShape.csv")
high_visit_flower = len(moth_data[(moth_data.Animal_Name.str.contains("L50_c-3")) & moth_data.Total_trials.notnull()].index)
low_visit_flower = len(moth_data[(moth_data.Animal_Name.str.contains("L0.1_c-3")) & moth_data.Total_trials.notnull()].index)
high_visit_flower, low_visit_flower

# +
duration_list = glob.glob(duration_path + "\\*duration.csv")
df1 = []
df2 = []
df3 = []
df4 = []
df5 = []
df6 = []
df7 = []
df8 = []
df9 = []
for file in duration_list:
    df = pd.read_csv(file)
    nam = file[79:-13]
    name = [nam] * len(df["In_Frame"])
    if nam.startswith("L50"):
        light = ["high"] * len(df["In_Frame"])
    else:
        light = ["low"] * len(df["In_Frame"])
    df1.extend(name)
    df2.extend(df.In_Frame)
    df3.extend(df.Out_Frame)
    df4.extend(df.ProboscisDetect)
    df5.extend(df.DiscoveryTime)
    df6.extend(df.duration)
    df7.extend(df.pre_duration)
    df8.extend(df.post_duration)
    df9.extend(light)

new_df = pd.DataFrame({'name': df1, 'In_Frame': df2, 'Out_Frame': df3,
                       'ProboscisDetect': df4, 'DiscoveryTime': df5,
                       'duration': df6, 'pre_duration': df7,
                       'post_duration': df8, 'Light_Level': list(np.squeeze(df9))})
new_df.to_csv(duration_path + "\\all_moth_durations.csv")
# -

# +
duration_data = pd.read_csv(duration_path + "\\all_moth_durations.csv")
duration_data['duration(s)'] = pd.Series(duration_data['duration'].values / 100, index=duration_data.index)

sns.set(style="ticks")
f, ax = plt.subplots(figsize=(30, 15))

# Violin plot of visit durations per moth
sns.violinplot(x="name", y="duration(s)", hue="Light_Level", dodge=False,
               data=duration_data, inner="quart", palette=['mediumpurple', 'darkorange'])

# Add in points to show each observation
sns.swarmplot(x="name", y="duration(s)", data=duration_data, size=5, color="0.6", linewidth=0)

# Tweak the visual presentation
ax.xaxis.grid(True)
ax.tick_params(axis='x', which='major', labelsize=17, rotation=90)
ax.tick_params(axis='y', which='major', labelsize=17)
ax.set_ylabel("Duration (s)", size=30, labelpad=10)
ax.set_xlabel("Moth", size=30, labelpad=20)
ax.set_title("Duration of each visit for each moth", size=40)
ax.legend(prop={'size': 20})
sns.despine(trim=True, left=True)
plt.savefig(outpath + "\\DurationPerVisitPerMoth_Violin.png")
# -

# duration without outliers
from scipy import stats
z = np.abs(stats.zscore(duration_data['duration(s)'].values))
outlier_ID = z > 5
duration_data['zscore'] = z
duration_data['outlier_ID'] = outlier_ID
notoutliers_dur = duration_data[duration_data.outlier_ID == False]

# +
f, ax = plt.subplots(figsize=(30, 15))

# Violin plot of visit durations per moth, outliers removed
sns.violinplot(x="name", y="duration(s)", hue="Light_Level", dodge=False,
               data=notoutliers_dur, inner="quart", palette=['mediumpurple', 'darkorange'])

# Add in points to show each observation
sns.swarmplot(x="name", y="duration(s)", data=notoutliers_dur, size=5, color="0.6", linewidth=0)

# Tweak the visual presentation
ax.xaxis.grid(True)
ax.tick_params(axis='x', which='major', labelsize=17, rotation=90)
ax.tick_params(axis='y', which='major', labelsize=17)
ax.set_ylabel("Duration (s)", size=30, labelpad=10)
ax.set_xlabel("Moth", size=30, labelpad=20)
ax.set_title("Duration of each visit for each moth. No outliers", size=40)
ax.legend(prop={'size': 20})
sns.despine(trim=True, left=True)
plt.savefig(outpath + "\\NoOutliers_DurationPerVisitPerMoth_Violin.png")
# -
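# The two-group permutation test used above can be sanity-checked on synthetic
# data. This is a minimal sketch, not the moth dataset: the group sizes, means,
# and spreads below are made up so the difference is clearly significant.

```python
import numpy as np

rng = np.random.default_rng(0)
low = rng.normal(0.4, 0.1, size=30)   # hypothetical "low light" success ratios
high = rng.normal(0.6, 0.1, size=30)  # hypothetical "high light" success ratios
values = np.concatenate([low, high])
labels = np.array(["low"] * 30 + ["high"] * 30)

# Observed statistic: difference in group means
test_stat = values[labels == "low"].mean() - values[labels == "high"].mean()

def null_stat():
    # Shuffle the group labels to break any real association
    perm = rng.permutation(labels)
    return values[perm == "low"].mean() - values[perm == "high"].mean()

null_dist = np.array([null_stat() for _ in range(2000)])

# Two-sided p-value: fraction of null statistics at least as extreme as observed
pval = np.mean(np.abs(null_dist) >= np.abs(test_stat))
print(test_stat, pval)
```

# With groups two standard deviations apart, the p-value should be near zero.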
A_Step1_Moth_Visits_Table.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Linear Regression Example
#
# A linear regression learning algorithm example using the TensorFlow library.
#
# - Author: <NAME>
# - Project: https://github.com/aymericdamien/TensorFlow-Examples/

# NOTE: this example uses the TensorFlow 1.x API (tf.placeholder, tf.Session).
import tensorflow as tf
import numpy
import matplotlib.pyplot as plt
rng = numpy.random

# Parameters
learning_rate = 0.01
training_epochs = 1000
display_step = 50

# Training Data
train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
                         7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
                         2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_samples = train_X.shape[0]

# +
# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")
# -

# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)

# Mean squared error
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)
# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c),
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')

    # Graphic display
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()

# +
# Regression result
# -
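# The gradient-descent fit above can be sanity-checked against the closed-form
# least-squares solution for y ≈ W·x + b on the same training data; both should
# converge to roughly the same slope and intercept. A minimal NumPy-only sketch:

```python
import numpy as np

train_X = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
                      7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
                      2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])

# Design matrix [x, 1]; lstsq returns the least-squares optimal [W, b]
A = np.stack([train_X, np.ones_like(train_X)], axis=1)
(W, b), *_ = np.linalg.lstsq(A, train_Y, rcond=None)
print(W, b)
```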
tests/tf/linear_regression.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3.7 (tensorflow)
#     language: python
#     name: tensorflow
# ---

# <a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_4_transfer_nlp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#
# # T81-558: Applications of Deep Neural Networks
# **Module 9: Regularization: L1, L2 and Dropout**
# * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).

# # Module 9 Material
#
# * Part 9.1: Introduction to Keras Transfer Learning [[Video]](https://www.youtube.com/watch?v=WLlP6S-Z8Xs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_1_keras_transfer.ipynb)
# * Part 9.2: Popular Pretrained Neural Networks for Keras [[Video]](https://www.youtube.com/watch?v=ctVA1_46YEE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_2_popular_transfer.ipynb)
# * Part 9.3: Transfer Learning for Computer Vision and Keras [[Video]](https://www.youtube.com/watch?v=61vMUm_XBMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_3_transfer_cv.ipynb)
# * **Part 9.4: Transfer Learning for Languages and Keras** [[Video]](https://www.youtube.com/watch?v=ajmAAg9FxXA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_4_transfer_nlp.ipynb)
# * Part 9.5: Transfer Learning for Keras Feature Engineering [[Video]](https://www.youtube.com/watch?v=Dttxsm8zpL8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_5_transfer_feature_eng.ipynb)

# # Google CoLab Instructions
#
# The following code ensures that Google CoLab is running the correct version of TensorFlow.

try:
    # %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False

# # Part 9.4: Transfer Learning for Languages and Keras
#
# Transfer learning is commonly used with Natural Language Processing (NLP). This course has an entire module that covers NLP. However, for now we will look at how an NLP network can be loaded into Keras for transfer learning. The following three sources were helpful in the creation of this section.
#
# * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & <NAME>. (2018). [Universal sentence encoder](https://arxiv.org/abs/1803.11175). arXiv preprint arXiv:1803.11175.
# * [Deep Transfer Learning for Natural Language Processing: Text Classification with Universal Embeddings](https://towardsdatascience.com/deep-transfer-learning-for-natural-language-processing-text-classification-with-universal-1a2c69e5baa9)
# * [Keras Tutorial: How to Use Google's Universal Sentence Encoder for Spam Classification](http://hunterheidenreich.com/blog/google-universal-sentence-encoder-in-keras/)
#
# These examples make use of TensorFlow Hub, which allows pretrained models to be loaded into TensorFlow easily. To install TensorFlow Hub use the following command.

# !pip install tensorflow_hub

# It is also necessary to install TensorFlow Datasets. This can be done with the following command.

# !pip install tensorflow_datasets

# https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb#scrollTo=2ew7HTbPpCJH

# Load the Internet Movie DataBase (IMDB) reviews data set.
# +
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"],
                                  batch_size=-1, as_supervised=True)

train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)
# /Users/jheaton/tensorflow_datasets/imdb_reviews/plain_text/0.1.0
# -

# Load a pretrained embedding model called [gnews-swivel-20dim](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1). This was trained by Google on gnews data and can convert raw text into vectors.

model = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(model, output_shape=[20], input_shape=[],
                           dtype=tf.string, trainable=True)

# Consider the following 3 movie reviews.

train_examples[:3]

# The embedding layer can convert each to 20-number vectors.

hub_layer(train_examples[:3])

# We add additional layers to attempt to classify the movie reviews as either positive or negative.

# +
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))

model.summary()
# -

# Compile the neural network.

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Split and train the neural network.

# +
x_val = train_examples[:10000]
partial_x_train = train_examples[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
# -

history = model.fit(partial_x_train, partial_y_train, epochs=40, batch_size=512,
                    validation_data=(x_val, y_val), verbose=1)

# Evaluate the neural network.

# +
# Evaluate on the held-out NumPy test arrays
results = model.evaluate(test_examples, test_labels)
print(results)
# -

history_dict = history.history
history_dict.keys()

# +
# %matplotlib inline
import matplotlib.pyplot as plt

acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# -

# +
plt.clf()  # clear figure

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# -
t81_558_class_09_4_transfer_nlp.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Enumerate
#
# The enumerate() function returns an iterable object.

seq = ['a', 'b', 'c']

enumerate(seq)

list(enumerate(seq))

for indice, valor in enumerate(seq):
    print(indice, valor)

for indice, valor in enumerate(seq):
    if indice >= 2:
        break
    else:
        print(valor)

lista = ['Marketing', 'Tecnologia', 'Business']

for i, item in enumerate('Isso é uma string'):
    print(i, item)

for i, item in enumerate(range(10)):
    print(i, item)
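# enumerate() also accepts an optional `start` argument, which sets the value of
# the first index. A short example (not part of the original notebook):

```python
seq = ['a', 'b', 'c']
pairs = list(enumerate(seq, start=1))
print(pairs)  # → [(1, 'a'), (2, 'b'), (3, 'c')]
```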
04-TratamentoDeArquivos_Modulos_Pacotes_FuncBuilt-in/10-Enumerate.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: mediapipe # language: python # name: mediapipe # --- # # 0. Install and Import Dependencies # !pip install mediapipe opencv-python import cv2 import mediapipe as mp import numpy as np mp_drawing = mp.solutions.drawing_utils mp_pose = mp.solutions.pose # + # VIDEO FEED cap = cv2.VideoCapture(0) while cap.isOpened(): ret, frame = cap.read() cv2.imshow('Mediapipe Feed', frame) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # - # # 1. Make Detections cap = cv2.VideoCapture(0) ## Setup mediapipe instance with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose: while cap.isOpened(): ret, frame = cap.read() # Recolor image to RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) image.flags.writeable = False # Make detection results = pose.process(image) # Recolor back to BGR image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Render detections mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS, mp_drawing.DrawingSpec(color=(245,117,66), thickness=2, circle_radius=2), mp_drawing.DrawingSpec(color=(245,66,230), thickness=2, circle_radius=2) ) cv2.imshow('Mediapipe Feed', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # + # mp_drawing.DrawingSpec?? # - # # 2. 
Determining Joints # <img src="https://i.imgur.com/3j8BPdc.png" style="height:300px" > cap = cv2.VideoCapture(0) ## Setup mediapipe instance with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose: while cap.isOpened(): ret, frame = cap.read() # Recolor image to RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) image.flags.writeable = False # Make detection results = pose.process(image) # Recolor back to BGR image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Extract landmarks try: landmarks = results.pose_landmarks.landmark print(landmarks) except: pass # Render detections mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS, mp_drawing.DrawingSpec(color=(245,117,66), thickness=2, circle_radius=2), mp_drawing.DrawingSpec(color=(245,66,230), thickness=2, circle_radius=2) ) cv2.imshow('Mediapipe Feed', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() len(landmarks) for lndmrk in mp_pose.PoseLandmark: print(lndmrk) landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].visibility landmarks[mp_pose.PoseLandmark.LEFT_ELBOW.value] landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value] # # 3. 
Calculate Angles def calculate_angle(a,b,c): a = np.array(a) # First b = np.array(b) # Mid c = np.array(c) # End radians = np.arctan2(c[1]-b[1], c[0]-b[0]) - np.arctan2(a[1]-b[1], a[0]-b[0]) angle = np.abs(radians*180.0/np.pi) if angle >180.0: angle = 360-angle return angle shoulder = [landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].x,landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].y] elbow = [landmarks[mp_pose.PoseLandmark.LEFT_ELBOW.value].x,landmarks[mp_pose.PoseLandmark.LEFT_ELBOW.value].y] wrist = [landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].x,landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].y] shoulder, elbow, wrist calculate_angle(shoulder, elbow, wrist) tuple(np.multiply(elbow, [640, 480]).astype(int)) cap = cv2.VideoCapture(0) ## Setup mediapipe instance with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose: while cap.isOpened(): ret, frame = cap.read() # Recolor image to RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) image.flags.writeable = False # Make detection results = pose.process(image) # Recolor back to BGR image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Extract landmarks try: landmarks = results.pose_landmarks.landmark # Get coordinates shoulder = [landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].x,landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].y] elbow = [landmarks[mp_pose.PoseLandmark.LEFT_ELBOW.value].x,landmarks[mp_pose.PoseLandmark.LEFT_ELBOW.value].y] wrist = [landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].x,landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].y] # Calculate angle angle = calculate_angle(shoulder, elbow, wrist) # Visualize angle cv2.putText(image, str(angle), tuple(np.multiply(elbow, [640, 480]).astype(int)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA ) except: pass # Render detections mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS, mp_drawing.DrawingSpec(color=(245,117,66), thickness=2, 
circle_radius=2), mp_drawing.DrawingSpec(color=(245,66,230), thickness=2, circle_radius=2) ) cv2.imshow('Mediapipe Feed', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # # 4. Curl Counter # + cap = cv2.VideoCapture(0) # Curl counter variables counter = 0 stage = None ## Setup mediapipe instance with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose: while cap.isOpened(): ret, frame = cap.read() # Recolor image to RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) image.flags.writeable = False # Make detection results = pose.process(image) # Recolor back to BGR image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Extract landmarks try: landmarks = results.pose_landmarks.landmark # Get coordinates shoulder = [landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].x,landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].y] elbow = [landmarks[mp_pose.PoseLandmark.LEFT_ELBOW.value].x,landmarks[mp_pose.PoseLandmark.LEFT_ELBOW.value].y] wrist = [landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].x,landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].y] # Calculate angle angle = calculate_angle(shoulder, elbow, wrist) # Visualize angle cv2.putText(image, str(angle), tuple(np.multiply(elbow, [640, 480]).astype(int)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA ) # Curl counter logic if angle > 160: stage = "down" if angle < 30 and stage =='down': stage="up" counter +=1 print(counter) except: pass # Render curl counter # Setup status box cv2.rectangle(image, (0,0), (225,73), (245,117,16), -1) # Rep data cv2.putText(image, 'REPS', (15,12), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0,0,0), 1, cv2.LINE_AA) cv2.putText(image, str(counter), (10,60), cv2.FONT_HERSHEY_SIMPLEX, 2, (255,255,255), 2, cv2.LINE_AA) # Stage data cv2.putText(image, 'STAGE', (65,12), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0,0,0), 1, cv2.LINE_AA) cv2.putText(image, stage, (60,60), cv2.FONT_HERSHEY_SIMPLEX, 2, 
(255,255,255), 2, cv2.LINE_AA) # Render detections mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS, mp_drawing.DrawingSpec(color=(245,117,66), thickness=2, circle_radius=2), mp_drawing.DrawingSpec(color=(245,66,230), thickness=2, circle_radius=2) ) cv2.imshow('Mediapipe Feed', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # -
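The joint-angle helper used in both loops above is pure NumPy, so it can be sanity-checked without a webcam or MediaPipe installed. A minimal, self-contained sketch (the test points are made up for illustration):

```python
import numpy as np

def calculate_angle(a, b, c):
    """Angle in degrees at the mid joint b, formed by 2D points a (first), b (mid), c (end)."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = np.abs(radians * 180.0 / np.pi)
    # Fold reflex angles back into [0, 180]
    return 360 - angle if angle > 180.0 else angle

# Collinear points (fully extended arm) give ~180°; a right angle gives ~90°.
print(calculate_angle([0, 0], [1, 0], [2, 0]))  # ≈ 180.0
print(calculate_angle([0, 0], [0, 1], [1, 1]))  # ≈ 90.0
```

With the curl-counter thresholds above, ~180° corresponds to the "down" stage (arm extended, angle > 160) and a tight curl to "up" (angle < 30).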
Media Pipe Pose Tutorial-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.9 64-bit (''cygan'': virtualenv)' # language: python # name: python369jvsc74a57bd0987972a5af0063b4ad0c1bc948181328354b4e09dc76cc55a6609915adabc773 # --- # ## Notebook for calculating Mask Consistency Score for GAN-transformed images from PIL import Image import cv2 from matplotlib import pyplot as plt import tensorflow as tf import glob, os import numpy as np import matplotlib.image as mpimg #from keras.preprocessing.image import img_to_array, array_to_img # ## 1. Resize GAN-transformed Dataset to 1024*1024 # #### 1.1 Specify Args: Directory, folder name and the new image size dir = '/mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Powertrain14_Blattfeder/Results/training4_batch4_400trainA_250trainB/samples_testing' # #### 1.2 Create new Folder "/A2B_FID_1024" in Directory # + folder = 'A2B_FID' image_size = 1024 old_folder = (os.path.join(dir, folder)) new_folder = (os.path.join(dir, folder+'_'+str(image_size))) if not os.path.exists(new_folder): try: os.mkdir(new_folder) except FileExistsError: print('Folder already exists') pass # - print(os.path.join(old_folder)) print(os.path.join(dir, folder+'_'+str(image_size))) # #### 1.3 Function for upsampling images of 256-256 or 512-512 to images with size 1024-1024 # + def resize_upsampling(old_folder, new_folder, size): dim = (size, size) for image in os.listdir(old_folder): img = cv2.imread(os.path.join(old_folder, image)) # INTER_CUBIC or INTER_LANCZOS4 img_resized = cv2.resize(img, dim, interpolation = cv2.INTER_CUBIC) print('Shape: '+str(img.shape)+' is now resized to: '+str(img_resized.shape)) cv2.imwrite(os.path.join(new_folder , image),img_resized) def resize_downsampling(old_folder, new_folder, size): dim = (size, size) for image in os.listdir(old_folder): img = cv2.imread(os.path.join(old_folder, image)) img_resized = 
cv2.resize(img, dim, interpolation = cv2.INTER_AREA) print('Shape: '+str(img.shape)+' is now resized to: '+str(img_resized.shape)) cv2.imwrite(os.path.join(new_folder , image),img_resized) # - # #### 1.4 Run the aforementioned function resize_upsampling(old_folder, new_folder, 1024) # #### Resize the synthetic image masks to 1024-1024 dir2 = '/mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Evaluation/BatchSize/Blattfeder/' folder = 'SegmentationMasks' # + size = 1024 old_folder = (os.path.join(dir2, folder)) new_folder = (os.path.join(dir2, folder+'_'+str(size))) if not os.path.exists(new_folder): try: os.mkdir(new_folder) except FileExistsError: print('Folder already exists') pass resize_downsampling(old_folder, new_folder, size) # - # ## 2. Use the annotation tool Labelme to create polygons in JSON format # We then use the JSON files with polygon data to create semantic segmentation masks - no instance segmentation is needed, because we do not need to differentiate between distinct features. We use the bash and Python scripts in this directory to do the mask translation. # !ls # !pwd # Insert the folder path as **input_dir** where the GAN-transformed images with corresponding JSON labels are located. input_dir = '/mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Evaluation/BatchSize/Blattfeder/Batch4' output_dir = input_dir+'_mask' print(output_dir) # !python3 labelme2voc.py $input_dir $output_dir --labels labels.txt masks_gan = output_dir+'/SegmentationObjectPNG' # ## 3. GAN Image Data # ### 3.1 Prepare Data: Create Folder with binary images def binarize(im_path, size=1024): """Read, binarize and save images as png. Args: path: A string, path of images.
""" img = Image.open(im_path).convert('L') img = np.array(img) thresh = 10 im_bool = img > thresh maxval = 255 im_bin = (img > thresh) * maxval #print(im_bin.shape) #save array to images im_save_bi = Image.fromarray(np.uint8(im_bin)) im_save_bool = Image.fromarray((im_bool)) return im_save_bool # + #test GAN Data masks_gan = masks_gan masks_gan_save = output_dir+'/binarized' if not os.path.exists(masks_gan_save): try: os.mkdir(masks_gan_save) except FileExistsError: print('Folder already exists') pass path = os.path.join(masks_gan, '*.png') files = list(glob.glob(path)) files.sort(reverse=True) for file in files: image= binarize(file) plt.imshow(image) bbox = image.getbbox() plt.title(f'Bbox: {bbox} Name: {file[-10:]}') image.save(os.path.join(masks_gan_save,file[-10:])) # - # ## 4. Syntetic Image Masks # ### 4.1 Prepare Data: Create Folder with binary images # #### Operation for reading png segmentation masks from folder path, resize, convert to greyscale and save imagesin new folder # + #test GAN Data masks_syn = '/mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Evaluation/BatchSize/Blattfeder/SegmentationMasks_1024' masks_syn_save = masks_syn path = os.path.join(masks_syn, '*.png') files = list(glob.glob(path)) files.sort(reverse=True) for file in files: image = binarize(file) plt.imshow(image) bbox = image.getbbox() plt.title(f'Bbox: {bbox} Name: {file[-19:]}') image.save(os.path.join(masks_syn_save,file[-19:])) # - def loadpolygon(): return # Since True is regarded as 1 and False is regarded as 0, when multiplied by 255 which is the Max value of uint8, True becomes 255 (white) and False becomes 0 (black) def convexhull(): return def calculatescore(ground_truth, prediction_gan): """ Compute feature consitency score of two segmentation masks. IoU(A,B) = |A & B| / (| A U B|) Dice(A,B) = 2*|A & B| / (|A| + |B|) Args: y_true: true masks, one-hot encoded. y_pred: predicted masks, either softmax outputs, or one-hot encoded. 
metric_name: metric to be computed, either 'iou' or 'dice'. metric_type: one of 'standard' (default), 'soft', 'naive'. In the standard version, y_pred is one-hot encoded and the mean is taken only over classes that are present (in y_true or y_pred). The 'soft' version of the metrics is computed without one-hot encoding y_pred. Returns: IoU of the ground truth and the GAN-transformed synthetic image, as a float. Inputs are B*W*H*N tensors, with B = batch size, W = width, H = height, N = number of classes """ # check that the image shapes are the same assert ground_truth.shape == prediction_gan.shape, 'Input masks should be same shape, instead are {}, {}'.format(ground_truth.shape, prediction_gan.shape) #print('Ground truth shape: '+str(ground_truth.shape)) #print('Predicted GAN image shape: '+str(prediction_gan.shape)) intersection = np.logical_and(ground_truth, prediction_gan) union = np.logical_or(ground_truth, prediction_gan) # |union| + |intersection| == |A| + |B|, the Dice denominator mask_sum = np.sum(np.abs(union)) + np.sum(np.abs(intersection)) iou_score = np.sum(intersection) / np.sum(union) dice_score = 2*np.sum(intersection) / mask_sum print('IoU is: '+str(iou_score)) print('Dice/F1 Score is: '+str(dice_score)) return iou_score, dice_score # ## 6.
Calculate mean IoU # Translate image mask to white RGB(255,255,255), fill convex hull, and compare masks to calculate 'Feature Consistency Score' # + path_syn = masks_syn_save path_gan = masks_gan_save path_syn = os.path.join(path_syn, '*.png') path_gan = os.path.join(path_gan, '*.png') files_syn = list(glob.glob(path_syn)) files_gan = list(glob.glob(path_gan)) files_syn.sort(reverse=True) files_gan.sort(reverse=True) combined_list = zip(files_syn, files_gan) z = list(combined_list) # + iou_list = [] dice_list = [] for syn, gan in zip(files_syn, files_gan): img_syn = np.array(Image.open(syn)) img_gan = np.array(Image.open(gan)) print(f'Image name: {syn[-10:]}') iou, dice = calculatescore(img_syn, img_gan) print('\n') iou_list.append(iou) dice_list.append(dice) mean_iou = np.mean(iou_list) mean_dice = np.mean(dice_list) print(f'Mean IoU is: {mean_iou}') print(f'{iou_list}\n') print(f'Mean Dice score is: {mean_dice}') print(dice_list) # -
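The IoU and Dice computation in `calculatescore` can be verified on tiny hand-made masks; a minimal sketch (the 2×2 boolean arrays are invented for illustration). It also shows why the notebook's `mask_sum` is a valid Dice denominator: `|A ∪ B| + |A ∩ B| = |A| + |B|`.

```python
import numpy as np

def iou_dice(ground_truth, prediction):
    """IoU(A,B) = |A & B| / |A U B|; Dice(A,B) = 2*|A & B| / (|A| + |B|)."""
    assert ground_truth.shape == prediction.shape
    intersection = np.logical_and(ground_truth, prediction).sum()
    union = np.logical_or(ground_truth, prediction).sum()
    # |A| + |B| equals |union| + |intersection|, matching the notebook's mask_sum
    return intersection / union, 2 * intersection / (union + intersection)

a = np.array([[1, 1], [0, 0]], dtype=bool)  # mask A: 2 pixels set
b = np.array([[1, 0], [1, 0]], dtype=bool)  # mask B: 2 pixels set
iou, dice = iou_dice(a, b)
print(iou, dice)  # 1 overlapping pixel, 3 in the union -> IoU = 1/3, Dice = 0.5
```

Identical masks give IoU = Dice = 1.0, and disjoint masks give 0.0 for both, which bounds the feature consistency score in [0, 1].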
Notebook_Archive/FeatureConsistencyScore_2.0-BlattfederBatch4.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # CountDiv # https://app.codility.com/programmers/lessons/5-prefix_sums/count_div/ def solution(A, B, K) : smallestdiv = int(A/K if A % K == 0 else A/K + 1) smallestdiv = smallestdiv * K if smallestdiv > B : return 0 numberofdiv = (B - smallestdiv)/K return int(numberofdiv) + 1 assert (solution(6,12,3) == 3) assert (solution(5,10,20) == 0) # # GenomicRangeQuery # https://app.codility.com/programmers/lessons/5-prefix_sums/genomic_range_query/ # # (bad solution, requires improvement) # # + # failed for one test # def find(A,first,last,k) : mid = int(first + (last-first)/2) if A[mid] == k : return k if first >= last : if first < 0 : return A[first] if last >= len(A)-1: return A[len(A)-1] return A[last+1] if A[mid] > k: return find(A,first,mid-1,k) else: return find(A,mid+1,last,k) def findeqorgreater(A,k) : if len(A) == 0 : return -1 if (k > A[len(A)-1]) : return -1 return find(A,0,len(A)-1,k) def mapl(c) : if c == 'A' : return 0 elif c == 'C' : return 1 elif c == 'G' : return 2 else: return 3 def preparesorted(S) : P = [[],[],[],[]] for i in range(len(S)) : P[mapl(S[i])].append(i) return P def solution(S, P, Q) : PREP = preparesorted(S) res = [] for i in range(len(P)) : mini = P[i] maxi = Q[i] for c in range(0,4) : ind = findeqorgreater(PREP[c],mini) if ind == -1 : continue if ind >= mini and ind <= maxi : res.append(c+1) break return res # - S = "CAGCCTA" print (preparesorted(S)) P = [2,5,0] Q = [4,5,6] print(solution(S,P,Q)) S = "C" P = [0] Q = [0] print(solution(S,P,Q)) S = "GAGGGGGAGGGGGGG" #S = "CGGGGGGG" P = [0,0,6,8] Q = [0,4,9,11] print(solution(S,P,Q)) # # Question 8 # https://app.codility.com/programmers/lessons/5-prefix_sums/genomic_range_query/ # <br> # Solution is valid, performance and correctness # # # Solution # The 
problem can be narrowed to finding the existence of the letters A, C, G and T in the range S[P[i]:Q[i]].<br/> # If the letter A exists then the impact is 1, if letter A does not exist but letter C exists then the impact is 2, etc. # # Solution implementation # Create a sorted list of positions for every letter. <br> # For instance, assuming CAGCCTA:<br> # A = [1,6]<br> # C = [0,3,4]<br> # G = [2]<br> # Letter T can be ignored, because if neither letter A, C nor G exists in the range, the minimal impact must be letter T's impact, 4<br> # Then take the range P[i] : Q[i] and, for each of the letters A, C and G, look for an equal or greater position using binary search.<br> # <br> # For instance:<br> # Range 2:4<br> # Lower boundary 2 -> look for letter A -> returns 6 -> greater than upper boundary 4 -> letter A does not exist in the range<br> # Look for letter C -> returns 3 -> within upper boundary 4 -> impact 2<br> # + from typing import List def findmingerec(num : int, sorted : List[int], firsti: int, lasti : int) -> int : # return the greater or equal value or -1 assert(firsti <= lasti) mid : int = firsti + int((lasti - firsti)/2) if sorted[mid] == num : return sorted[mid] if sorted[mid] > num : if firsti == mid : return sorted[mid] return findmingerec(num,sorted,firsti, mid-1) if lasti == mid : if lasti == len(sorted) - 1 : return -1 else: return sorted[lasti+1] return findmingerec(num,sorted,mid+1,lasti) def findminge(num : int, sorted : List[int]) -> int : if len(sorted) == 0 : return -1 return findmingerec(num,sorted,0,len(sorted)-1) def solution(S : str, P : List[int] , Q : List[int]) -> List[int] : assert(len(S) > 0) assert(len(Q) == len(P)) # prepare a sorted position list for every letter sortedA : List[int] = [] sortedC : List[int] = [] sortedG : List[int] = [] for i in range(len(S)) : if S[i] == 'A' : sortedA.append(i) elif S[i] == 'C' : sortedC.append(i) elif S[i] == 'G' : sortedG.append(i) # we have sorted lists answer : List[int] = [] for i in range(len(P)) : mini : int = 4 currentL : List[int] for j in range(3)
: currentL : List[i] if j == 0 : currentL = sortedA elif j == 1: currentL = sortedC else : currentL = sortedG if len(currentL) == 0: continue inde : int = findminge(P[i],currentL) if inde != -1 and inde <= Q[i] : mini = j+1 break # mini - contains a minimal value answer.append(mini) return answer # + assert(findminge(0,[]) == -1) assert(findminge(3,[1]) == -1) assert(findminge(0,[1]) == 1) assert(findminge(5,[1,6,7,8]) == 6) assert(findminge(0,[1,6,7,8]) == 1) assert (findminge(8,[1,6,7,8]) == 8) assert (findminge(2,[1,6,7,8]) == 6) print(solution("CAGCCTA",[2,5,0],[4,5,6])) # - # # MinAvgTwoSlice # # https://app.codility.com/programmers/lessons/5-prefix_sums/min_avg_two_slice/ # # Calculate only 2 and 3 size slices # + from typing import List def solution(A: List[int]) -> int : minavg :float = (A[0] + A[1]) /2 mini : int = 0 for i in range(len(A) - 2) : avg2 : float = (A[i] + A[i+1])/2 avg3 : float = (A[i] + A[i+1] + A[i+2]) / 3 if avg2 < minavg : minavg = avg2 mini = i if avg3 < minavg : minavg = avg3 mini = i if (A[len(A) -2] + A[len(A) - 1]) / 2 < minavg : mini = len(A) - 2 return mini # - A = [4,2,2,5,1,5,8] assert(solution(A) == 1) A = [2,5,2,29] assert(solution(A) == 0) A = [29,2,5,2,29] assert(solution(A) == 1) A = [29,2,5,2] assert(solution(A) == 1) A = [29,2,5,2,1] assert(solution(A) == 3) # # PassingCars # https://app.codility.com/programmers/lessons/5-prefix_sums/passing_cars/ # + from typing import List def solution(A : List[int]) -> int : assert(len(A) > 0) # find first 0 found : bool = False for cur0 in range(0,len(A)) : if A[cur0] == 0 : found = True break if not found : return 0 sumpair = sum(A[cur0:]) sum1 = sumpair # cur0 : position of 0 element # sumpair: number of 1 up the list cur0 += 1 while cur0 < len(A) : if A[cur0] == 1: sum1 -= 1 elif sumpair != -1 : sumpair += sum1 if sumpair > 1000000000 : sumpair = -1 cur0 += 1 return sumpair # + assert (solution([0,1,0,1,1]) == 5) assert (solution([1]) == 0) assert (solution([0]) == 0) assert 
(solution([0,1,0,0,0]) == 1) assert(solution([0,1,0,0,1]) == 4) # -
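The PassingCars solution above scans for the first 0 and then maintains the pair count manually; the lesson's namesake prefix-sum idea collapses this to a single pass, since each westbound car (1) passes exactly the number of eastbound cars (0) seen before it. A compact alternative sketch:

```python
from typing import List

def passing_cars(A: List[int]) -> int:
    """Count (0, 1) pairs with the 0 before the 1; return -1 if the count exceeds 1e9."""
    zeros = 0  # eastbound cars (0) seen so far — a running prefix count
    pairs = 0
    for x in A:
        if x == 0:
            zeros += 1
        else:
            pairs += zeros  # this westbound car passes every eastbound car so far
            if pairs > 1_000_000_000:
                return -1
    return pairs

print(passing_cars([0, 1, 0, 1, 1]))  # 5, matching the task's example
```

This keeps the same O(n) time as the solution above but avoids the separate search for the first 0 and the special cases around it.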
codility-lessons/5 Prefix Sums.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/dimi-fn/Google-Playstore-Apps/blob/master/Google_playstore.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="OoH1VXsjcNlm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="2131a232-0ab1-47f1-8ecd-6fa659548d24" # Import statements # Import necessary python libraries and packages # For data analysis & manipulation import pandas as pd import numpy as np # For visualising distributional values import seaborn as sns import matplotlib.pyplot as plt # + id="tNjT7pqMcNlu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="c7dca5d9-f115-438f-86a8-1df1469570c2" # Python version import sys print ("The Python version is: {}".format(sys.version)) # + id="P507x5zMcNl1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="ed0061eb-40da-4449-87a6-a466625a821a" # Generating the version of a wide variety of packages/libraries used pd.__version__ pd.show_versions(as_json=False) # + id="QAZJ8sfIhnmM" colab_type="code" colab={} # Code to read csv file into colaboratory: # !pip install -U -q PyDrive from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # + id="S7HGdKWQh3sG" colab_type="code" colab={} auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) # + id="WFioYw7OcNl9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="00f937c2-4495-4c3a-e7f1-6e0f54945271" # 
Assigning the dataset with the name: "app" downloaded = drive.CreateFile({'id':'1s5mJCccMkkVGSAVrzRYn0rdP-gJz849C'}) downloaded.GetContentFile('Google-Playstore-Full.csv') app = pd.read_csv('Google-Playstore-Full.csv') # + id="2JTD9WTUcNmC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="95e139b0-fe18-4d20-e454-71937e2af39d" # The type of this dataset is a dataframe type(app) # + id="erUfJO9TcNmI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="eb84cb5a-a589-46b8-de68-b64d101f79f5" # The columns of this dataframe are "series" type(app["Installs"]) # + id="9APotmTVcNmN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 346} outputId="c66c3703-5a2b-4526-aeb7-fada6721d238" # First 5 rows of the dataframe app.head() # + id="f050eccdcNmT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="c776d978-55bf-4538-9bdd-ab73662030aa" # Getting the last five rows app.tail() # + id="E02DdjQjcNma" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1699548b-e5e9-455e-b9d4-5b2d3e4d2763" # Getting the number of rows and columns of the dataframe app.shape # + id="3ruc3CUgcNme" colab_type="code" colab={} # Removing the columns with index position: 11, 12, 13, 14. 
They do not seem to offer any substantial value to the data analysis app=app.drop("Unnamed: 11", axis=1) app=app.drop("Unnamed: 12", axis=1) app=app.drop("Unnamed: 13", axis=1) app=app.drop("Unnamed: 14", axis=1) # + id="UeibMlFqcNmk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="83cea696-badc-4f4a-f54b-e86ddd137058" # Number of rows and columns after removing the useless columns app.shape # + id="ECJS-X_wcNmq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="b72fa24e-03bc-4e24-c1c7-bfc09ae15a86" # Columns after removing the useless ones app.columns # + id="HA20SY8OcNmv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b49502db-2064-42c0-f3cb-5c90de051bf4" # Number of app categories app["Category"].nunique() # + id="XVmwIoE4cNm1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 319} outputId="55ab29c3-7707-40a4-82bd-ea29d7bd179a" # The app categories app.Category.unique() # + id="y3Xqj1jvcNm7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="02248daa-1fd9-401f-ac98-7ed337f9d9fe" # Viewing the number of classes (gradation) of the number of installations # There are 38 different classes app["Installs"].nunique() # + id="NljuZ3SZcNnA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="dd9b6925-e913-4a1e-b2be-671f682b0ffa" # The gradation of installations in the dataframe # There seem to be some input mistakes, such as "EDUCATION", which should not belong in this column. 
They will be edited app.Installs.unique() # + id="28KOeszWcNnG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d36e0d99-d09b-4c2a-b0a6-b166daca285c" # There are a lot of app sizes app["Size"].nunique() # + id="lvudevvTcNnL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="12375576-942c-41ff-e963-9121912b2682" # Viewing the content rating; who is permitted to download these apps # There are some invalid contents. They will be edited app["Content Rating"].unique() # + id="Cvfx6vSGcNnU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f9ffd3c6-e109-49cd-8180-e662f777affe" # the number of categories of the age content rating len(app["Content Rating"].unique()) # + id="esxlzRr4cNnY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="1bbcd72c-b80b-487c-8bb9-61190fb55912" #current first five rows app.head() # + id="gLnpt3WucNnc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="f941c126-e20f-4d5f-98ac-6fd2bc4c14c3" app.isnull().sum() # + id="HrUVGyAycNnh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9b5d7ff7-432c-45d1-eca5-23c99f6bd4f7" # There are totally 11 empty data entries which will be dropped len(app.isnull().sum()) # + id="a0X_YNCUcNnm" colab_type="code" colab={} # Dropping the entries where there are missing values app=app.dropna() # + id="uqfcZu5PcNnt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="2c79a843-fc1c-4458-bdbe-d39aeab0d30d" app.isnull().any() # False for every category means that there are no longer missing values # + id="bpQlTWeCcNn1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d1b03337-eb23-4bfc-c8e5-5037fa4180e5" # Ensuring there are no missing values in any column, in any data of every column app.isnull().any().any() # + 
[markdown] id="B43z8t9wcNoL" colab_type="text" # # Cleaning of the Data - Exploring and Managing the Data # + id="kD2YJZdgcNoM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 77} outputId="917d358e-a424-4d7b-e6e9-96bfdae5ebc8" # Start of cleaning # Some commands were run to locate invalid data # I noticed that some values are misplaced, e.g. here, "4" should move to "Rating", and "GAME_STRATEGY" should move to "Category" # Wherever the data are misplaced but valid, the data will be kept and edited (correcting the entry positions) # Wherever the data are misplaced and invalid too (with lots of mistakes), the data will be removed app[app["Rating"]== "GAME_STRATEGY"] # + id="qo2w_7utcNoQ" colab_type="code" colab={} # dropping the invalid entry app.drop(index=13504, inplace=True) # + id="K1jtvbd2cNoU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="882c803e-f95e-40a1-c5b6-880086bc97df" # Now the column "Rating" is fixed app[app["Rating"]== "GAME_STRATEGY"] # + id="VLViuuJ1cNod" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 157} outputId="a84e9255-2c18-4e97-84b9-ff876eaae1e2" # Noticing the same pattern.
Wrong entry data in the columns app[app["Rating"]== "NEWS_AND_MAGAZINES"] # + id="9j1cjSyCcNoh" colab_type="code" colab={} # Here the data are misplaced but valid # I am manually fixing the misplacing data values app.loc[23457, ["Category", "Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = "NEWS_AND_MAGAZINES", "3.857798815", "11976", "1,000,000+", "Varies with device", "0", "Everyone 10+", "March 16, 2019", "Varies with device", "NaN" # + id="8w6PDIYucNol" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="95cb1fe2-d0bf-4aa7-f9bb-f00014262fc6" app.loc[23457] # + id="HT-eKomfcNoq" colab_type="code" colab={} app.loc[48438, ["Category", "Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = "NEWS_AND_MAGAZINES", "4.775640965", "156", "10,000+", "6.9M", "0", "Teen", "March 30, 2019","4.1 and up", "NaN" # + id="hNHaWLj_cNov" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="5d40194d-b495-493a-877b-84aa9f9533c0" app.loc[48438] # + id="YRDP1Hr3cNo5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 77} outputId="b02b37a8-b3fd-49a7-cfea-aaf3c809d5eb" # Here is an example of misplaced data with a lot of mistakes. 
It does not seem important to be fixed, it will be dropped app[app["Rating"]== "ENTERTAINMENT"] # + id="fRXyY0fIcNo9" colab_type="code" colab={} app.drop(index=113151, inplace=True) # + id="0ILhP7zfcNpB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="6cbe28ed-5c80-4b97-8166-42fc63d7c8d0" # Ensuring that there are no longer wrong entries in the column "Rating" app[app["Rating"]== "ENTERTAINMENT"] # + id="j2C77A68cNpO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="1a79e4c3-80ae-42a1-d1f2-d1f1f823940b" app[app["Rating"]== "EDUCATION"] # + id="W0De3fNFcNpS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="c574a33c-323a-4a7e-c477-e5c213d772fd" # Dropping these data entries which do not seem important and they have a lot of mistakes app.drop(index=125479, inplace=True) app.drop(index=125480, inplace=True) app.drop(index=180371, inplace=True) app[app["Rating"]== "EDUCATION"] # + id="t7KQjhf1cNpe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 77} outputId="3b468d53-256a-42cc-d4f8-8784cb7acc1b" # In this line respecting the column "Rating" there are misplaced but valid data # Data will be fixed manually, putting them in the correct position app[app["Rating"]== "SOCIAL"] # + id="RYS3TzXfcNpj" colab_type="code" colab={} # Fixing the data entry positions manually app.loc[165230, ["Category", "Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = "SOCIAL", "4.098591328", "71", "5,000+", "7.7M", "0", "Everyone", "March 30, 2019","4.1 and up", "NaN" # + id="Hj269oKvcNpo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="f414a0dd-2c73-4bfa-ad75-2884ef5cb86b" app[app["Rating"]== "SOCIAL"] # + id="S2M9QVCbcNpx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 77} 
outputId="de74e54a-01c9-476e-d195-ec4925fafc7b" app[app["Rating"]== "PRODUCTIVITY"] # + id="0HY8VF_0cNp0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="217697d9-06f1-4a4f-d264-b91e98308627" # Fixing the data entry positions manually for the index position 168914 app.loc[168914, ["Category", "Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = "PRODUCTIVITY", "4.389830589", "59", "10,000+", "16M", "0", "Everyone", "December 21, 2018","4.1 and up", "NaN" app[app["Rating"]== "PRODUCTIVITY"] # Ensuring that column "Rating" is fixed from this kind of data entry # + id="u_oyrSP4cNqG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 186} outputId="5312956a-888d-41ad-aee4-81b8e4b3533d" app[app["Rating"]== "MUSIC_AND_AUDIO"] # + id="T-ATjiURcNqJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="a8def986-372b-4093-da7e-df3eaea87076" # Same logic here. Misplaced but valid data. 
They will be edited manually app.loc[177165, ["Category", "Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = "MUSIC_AND_AUDIO", "4.538461685", "13", "1,000+", "Varies with device", "0", "Teen", "October 24, 2018","Varies with device", "NaN" app.loc[193869, ["Category", "Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = "MUSIC_AND_AUDIO", "4.632093906", "511", "10,000+", "2.5M", "0", "Everyone", "September 25, 2018","2.3 and up", "NaN" app.loc[257773, ["Category", "Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = "MUSIC_AND_AUDIO", "4.400000095", "10", "1,000+", "3.5M", "0", "Everyone", "November 7, 2018","4.0 and up", "NaN" app[app["Rating"]== "PRODUCTIVITY"] # + id="8MmKdnf-cNqR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 77} outputId="9a6b39c8-51c1-44a6-e753-b4fc316177d7" app[app["Rating"]== "TRAVEL_AND_LOCAL"] # + id="HZKoJYzjcNqV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="406bbceb-137a-4d80-e5fd-131b224d8efa" # Fixing the entries in index position 190759 manually (misplaced but substantial values) app.loc[190759, ["Category", "Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = "TRAVEL_AND_LOCAL", "5", "6", "1,000+", "27M", "0", "Everyone", "October 16, 2017", "4.0 and up", "NaN" app[app["Rating"]== "TRAVEL_AND_LOCAL"] # + id="77u6B-s5cNqf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 77} outputId="4fb64c5a-4282-402a-93ee-e40020b0748b" app[app["Rating"]== "LIFESTYLE"] # + id="q1esWb7VcNqj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="f4377563-b3de-409b-fa5f-1e6c4e418356" # same logic as previously app.loc[194165, ["Category", 
"Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = "LIFESTYLE", "4.388349533", "927", "100,000+", "3.7M", "0", "Everyone", "May 23, 2018", "4.0 and up", "NaN" app[app["Rating"]== "LIFESTYLE"] # + id="_TlOxf0PcNqs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 77} outputId="f780dabd-b2f7-43e7-b3eb-e6994c7a7a1d" app[app["Rating"]== " Economics"] # + id="qeYiFn50cNqu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="4451ced7-1315-4988-9026-d57545b31447" # Applying the same logic. Correting the misplaced (but valid) data app.loc[232811, ["Category", "Rating", "Reviews", "Installs", "Size", "Price", "Content Rating", "Last Updated", "Minimum Version", "Latest Version"]] = " Economics", "4.823529243", "17", "1,000+", "17M", "0", "Everyone", "October 22, 2018", "NaN", "NaN" app[app["Rating"]== " Economics"] # + id="9utOPNIScNqw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="c57cb7ac-28a1-422b-b0c0-99286452eac4" # Here we had an entry in column "Rating" which was 7. But we want "Rating<=5". 
# It was fixed, so there are no longer ratings greater than 5 app[app["Rating"]==7.000000] # + id="QehYYweKcNqy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 47} outputId="36d47156-4d34-4168-9485-c678cb3a3730" app.drop(index=99584, inplace=True) app[app["Rating"]==7.000000] # + id="343BiWC5cNq0" colab_type="code" colab={} # Converting the column "Rating" to float so that we can apply statistics app.Rating= app.Rating.astype(float) # + id="BGOKdk0XcNq2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="5522f71d-c394-482e-dadd-d6c9cb25974d" app.describe() # + id="dC3uU8qlcNq4" colab_type="code" colab={} # Converting the data in the column "Reviews" to float so that we can apply statistics app.Reviews= app.Reviews.astype(float) # + id="RUQii6sjcNq5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="e32bda08-75e6-4693-fd03-567fa79f393f" app.describe() # + id="ozY_CxEccNq8" colab_type="code" colab={} # I want to convert the column "Installs" into float # I firstly have to remove the "+" app.Installs= app["Installs"].str.replace("+", "") # + id="GC2jBktUcNrC" colab_type="code" colab={} # I had some problems converting the column into float, even when I removed "+" # I am removing the commas app.Installs= app["Installs"].str.replace(",", "") # + id="0Dk5D2MacNrF" colab_type="code" colab={} app["Installs"] = pd.to_numeric(app["Installs"]) # + id="KQQx2HZfcNrI" colab_type="code" colab={} # Removing "$" from the data entries in the column "Price" so that it can be converted to float app["Price"]= app["Price"].str.replace("$", "") # + id="ge4e7pr_cNrN" colab_type="code" colab={} # Convert the data in "Price" to float app["Price"]= app.Price.astype(float) # + id="sL6xktWjcNrQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="cf154ffc-b568-4f71-f25f-398585d3df24" # the data in the column "Price" successfully converted to
float # In these columns i can do various statistical applications app.describe() # + id="3k0BvM0icNrT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="93ed81f5-ebb6-47e6-84db-ca72fb2f190b" # procedure for converting the column "Size" to float # there are sizes counted in mb, kb, in numbers without measurement unit and with "varies with device" app.Size.unique() # + id="3y0iuCZ2cNrX" colab_type="code" colab={} # removing the "m" which is the mb for the size app.Size= app["Size"].str.replace("M", "") # + id="BEMB5gAwcNrc" colab_type="code" colab={} # assigning "Varies with device" with a number like "-1" so that i can seperate it later # app.Size= app["Size"].str.replace("Varies with device", "-1") # + id="XJL-zTNacNrl" colab_type="code" colab={} # Segmenting the column of the size y= app.iloc[:, 5:6] # + id="gjp3QjUHcNro" colab_type="code" colab={} # I tried to fix the last problems in converting the column "Size" to float # Here i am trying to remove "k" (kbs) and the blanks, and to convert kbs to mbs # It keeps giving me errors and i cannot fix it # i will not use the column "Size" for statistical applications #for x in y: # x = str(x) # x= x.replace(" ", "") # x= x.replace(",", ".") # if "k" in x: # x= x.replace("k", "") # x=x.replace(" k", "") # x=x.replace("k ", "") # x= float(x) # x= x/1024 # + id="ofqCtTQicNrr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d27bd2fc-6267-476b-ac68-b15c5533d271" # There are 11,728 apps whose size varies with device len(app[app["Size"]== "Varies with device"]) # + id="K4lebuGmcNrv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="2edc45e4-2787-4c7b-8042-cc1f105e361f" app.Size.describe() # + id="oLAOMY78cNry" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="dd9dc4f6-83e3-492b-ab7a-b20fe1b37696" print ("Apps whose size varies with device are {}% of the 
dataset".format(11728/267040*100)) # + [markdown] id="CDlQQ65ycNr8" colab_type="text" # # Statistical Analysis # + id="NRd5M0HWcNr9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3b1e4eb3-ab32-4425-8806-eee2c67f8888" #ensuring the shape of dataframe before proceeding to further statistics and visualization app.shape # + id="RtCMHv0rcNsB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="6c6f2e61-9cec-49a1-861a-fae24c620c25" # the columns, the data of which we can do statistic manipulation app.describe() # + id="ev_n7qybcNsH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 319} outputId="68482c97-96ff-4ac6-d365-860c76725d8b" app.info() # data type for each column # + id="cGKRWTtscNsK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0e4c0ae6-3505-49b8-fc39-e4aee4e3f223" #reinsuring there are no any missing values app.isnull().any().any() # + id="ByGpLyKncNsN" colab_type="code" colab={} #****************************************************************************************************** # Reviewing the unique values and the number of unique values in each column after the cleaning process # + id="mEg_mU4gcNsR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="d7eb8511-634d-4f17-f165-7b49c230c184" # Values in "Category" app["Category"].unique() # + id="NMpypUcacNsT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d90b20c6-e27b-4d4f-82c9-5a63702c34d0" # There are 51 different categories app["Category"].nunique() # + id="Zk-ELt2LcNsW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="80d44473-1719-407b-8806-5469b0a76adb" # Unique values of Rating app["Rating"].unique() # + id="RbzyDrN8cNsY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} 
outputId="a3149d02-2c79-4e3c-a73e-651973b5707c" # There are 99,845 unique values of Rating app["Rating"].nunique() # + id="ZS1ZzrRKcNsa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="896db27c-c42f-4107-9755-6e77aba8d7f7" # Unique values of the column "Reviews" app["Reviews"].unique() # + id="zfW08LRkcNsd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e19522d7-a676-45c9-998a-0342f202ddce" # There are 24,531 different reviews app["Reviews"].nunique() # + id="IvVboARycNsj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="c79bd5bb-342b-44d5-b3ae-189762031c94" # Unique values of installations app["Installs"].unique() # + id="4NPNoyvQcNsn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1ca53934-f652-484a-f681-0eb116eec30d" # There are 21 different classes of installations app["Installs"].nunique() # + id="M2zeVLQQcNsq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="293e4d1f-47f3-4253-9c0b-40bf6f0c4768" # Unique values in the column "Size" app["Size"].unique() # + id="_yIqc6cFcNsr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="59eaeaaa-e6b7-4459-ec8e-edb337769d1f" # There are 1,236 different sizes for the apps app["Size"].nunique() # + id="9yzzQwH0cNsu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="da800ef9-5394-4de6-bb25-5bef8edc62c6" # There are 488 different prices app["Price"].nunique() # + id="yLgb0zMZcNsx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="2c77eeb5-5d4a-490b-d07f-1edd7f9a06e4" # Unique values of the column "Content Rating" app["Content Rating"].unique() # + id="1KyYt0CPcNs0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e4590375-7d52-44ce-90f0-8358467e71c3" # There are 6 different 
content ratings len(app["Content Rating"].unique()) # + id="IMVmRaC-cNs7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="54edae4d-ea03-4f16-8b82-d0f707529397" print("***************************************************") print("Minimum number of ratings: %.2f" %app["Rating"].min()) print("Maximum number of ratings: %.2f" %app["Rating"].max()) print("***************************************************") print("Minimum number of reviews: %.2f" %app["Reviews"].min()) print("Maximum number of reviews: %.2f" %app["Reviews"].max()) print("***************************************************") print("Minimum number of installs: %.2f" %app["Installs"].min()) print("Maximum number of installs: %.2f" %app["Installs"].max()) print("***************************************************") print("Minimum number of prices: %.2f" %app["Price"].min()) print("Maximum number of prices: %.2f" %app["Price"].max()) print("***************************************************") # + id="jVT6iVfHcNs_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="8be3ab64-d401-4cfc-d270-d0a28d811372" # Getting the measures of central tendency for all the installation grouped by "Category" app.groupby("Category").Installs.agg(["min", "mean", "median", "max"]) # + id="yJjrCUCZcNtC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 536} outputId="0aaaba1b-b730-480c-edd2-851ea7506ee8" # Sorting (descending sorting) the dataframe by number of installs app.sort_values(by="Installs", ascending= False) # + id="FREMgQgZcNtE" colab_type="code" colab={} top_installed_apps=app.sort_values(by="Installs", ascending= False) # + id="I7ZCCqi9cNtH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 511} outputId="7c97a5cf-ef11-4dce-83cb-df0a699768b7" #************************************************************* # top 10 apps based on the number of installations 
#************************************************************* top_installed_apps.head(10) # + id="8f71N-RvcNtN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="57abf549-6a4d-4a58-baa2-a7ce3f677ba2" # Apps with 5 billion installations (5b is the 1st greater class of installations in the dataset) len(app[app["Installs"]>= 5000000000]) # + id="oZnr8aRlcNtP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2f91353f-855f-4da1-b3a5-0a161b8c3e3a" # Apps with more than 1 billion installations (1b is the 2nd greater class of installations in the dataset) len(app[app["Installs"]>= 1000000000]) # + id="WyNVgiblcNtT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 536} outputId="4294a09e-aa8e-4f3b-a24f-b51ec6facc6d" top_installed_and_rated_apps = app.sort_values(by=["Installs", "Rating"], ascending=False) top_installed_and_rated_apps # main top apps # + id="SCgCKBIBcNtZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 478} outputId="b823eb62-caef-4bc1-c5ce-4dc5c513be8e" #************************************************************************** # top 10 apps based on the number of installations and rating together (main top apps) #************************************************************************** top_installed_and_rated_apps.head(10) # + id="pF0rGPhMcNtb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 586} outputId="e5fb8f70-b49a-4dce-880c-bd1d8f91afc6" top_installed_and_reviewed_apps = app.sort_values(by=["Installs", "Reviews"], ascending=False) top_installed_and_reviewed_apps # + id="flf5pOQgcNtd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} outputId="380ab9b4-8f29-48b0-a87d-844f73a38e26" #************************************************************************** # top 10 apps based on the number of installations and reviews together 
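# Before listing the top apps, the overall installs-reviews association can be
# quantified instead of only eyeballed from top-10 lists. This is a hedged sketch,
# not part of the original analysis; the helper name rank_correlation is mine:
def rank_correlation(df, col_a, col_b):
    """Spearman rank correlation between two numeric columns of a dataframe."""
    return df[[col_a, col_b]].corr(method="spearman").iloc[0, 1]

# e.g. rank_correlation(app, "Installs", "Reviews")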
#**************************************************************************
top_installed_and_reviewed_apps.head(10)
# + id="zVCmJ-3bcNtf" colab_type="code" colab={}
top_10_installed_and_rated_apps= top_installed_and_rated_apps.head(10)
# + id="PI3hNCqFcNth" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="46173684-c653-46f6-cc64-1714b8110b0f"
top_10_installed_and_rated_apps.Category.sort_values(ascending=False)
# + id="t145Tr7wcNtj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2861b8a4-b6cf-4a41-a023-927f41b72ad2"
# In total there are 244,396 different app names
app["App Name"].nunique()
# + id="NiboSUbjcNtl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 302} outputId="0cbb008f-9ad6-4caf-cfed-c9c4380a1ff5"
# Here I will count the apps which belong to the categories of the most installed and rated apps
count_VIDEO_PLAYERS=0
count_TRAVEL_AND_LOCAL=0
count_TOOLS=0
count_SOCIAL=0
count_PHOTOGRAPHY=0
count_GAME_ARCADE=0
count_COMMUNICATION=0
for x in app["Category"]:
    if x== "VIDEO_PLAYERS":
        count_VIDEO_PLAYERS=count_VIDEO_PLAYERS+1
    elif x== "TRAVEL_AND_LOCAL":
        count_TRAVEL_AND_LOCAL= count_TRAVEL_AND_LOCAL+1
    elif x== "TOOLS":
        count_TOOLS= count_TOOLS+1
    elif x== "SOCIAL":
        count_SOCIAL= count_SOCIAL+1
    elif x== "PHOTOGRAPHY":
        count_PHOTOGRAPHY= count_PHOTOGRAPHY+1
    elif x== "GAME_ARCADE":
        count_GAME_ARCADE= count_GAME_ARCADE+1
    elif x== "COMMUNICATION":
        count_COMMUNICATION= count_COMMUNICATION+1
print ("*****************************************************************************************************")
print ("*****************************************************************************************************")
print ("Number of apps that belong in category: \"Video Players\" is: {}".format(count_VIDEO_PLAYERS))
print ("*****************************************************************************************************")
print ("Number of apps that belong in category: \"Travel and Local\" is: {}".format(count_TRAVEL_AND_LOCAL))
print ("*****************************************************************************************************")
print ("Number of apps that belong in category: \"Tools\" is: {}".format(count_TOOLS))
print ("*****************************************************************************************************")
print ("Number of apps that belong in category: \"Social\" is: {}".format(count_SOCIAL))
print ("*****************************************************************************************************")
print ("Number of apps that belong in category: \"Photography\" is: {}".format(count_PHOTOGRAPHY))
print ("*****************************************************************************************************")
print ("Number of apps that belong in category: \"Game Arcade\" is: {}".format(count_GAME_ARCADE))
print ("*****************************************************************************************************")
print ("Number of apps that belong in category: \"Communication\" is: {}".format(count_COMMUNICATION))
print ("*****************************************************************************************************")
print ("*****************************************************************************************************")
# + id="VC7oIYW-cNtn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="000c31f7-89a4-4ae9-8b34-899aac2d113e"
top_10_installed_and_rated_apps["Content Rating"].sort_values(ascending=False)
# + id="3Bp4Bzb8cNto" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="15fed55a-7ab7-4571-e33c-00abf5d08d06"
app["Content Rating"].nunique()
# + id="YKH1MJ6rcNtq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 319} outputId="94741e35-de6b-4afa-be03-8067f3421cc2"
# There are 6 categories of content rating in total
# In the top 10 installed and rated apps, there are 3 different content ratings
# I will now see their performance in the whole dataset, along with the 3 remaining content ratings
count_Teen=0
count_Everyone_10 = 0
count_Everyone=0
count_Mature_17=0
count_Adults_only=0
count_Unrated=0
for x in app["Content Rating"]:
    if x== "Teen":
        count_Teen= count_Teen+1
    elif x== "Everyone 10+":
        count_Everyone_10= count_Everyone_10+1
    elif x== "Everyone":
        count_Everyone= count_Everyone+1
    elif x== "Mature 17+":
        count_Mature_17 = count_Mature_17+1
    elif x== "Adults only 18+":
        count_Adults_only= count_Adults_only+1
    elif x== "Unrated":
        count_Unrated= count_Unrated+1
print ("*****************************************************************************************************")
print ("Number of apps in the whole dataset whose content rating also appears in the top apps:")
print ("*")
print ("*")
print ("Number of apps that belong to the content rating \"Teen\" is: {}".format(count_Teen))
print ("*****************************************************************************************************")
print ("Number of apps that belong to the content rating \"Everyone 10+\" is: {}".format(count_Everyone_10))
print ("*****************************************************************************************************")
print ("Number of apps that belong to the content rating \"Everyone\" is: {}".format(count_Everyone))
print ("*****************************************************************************************************")
print ("*****************************************************************************************************")
print ("#####################################################################################################")
print ("Number of apps whose content rating is not included in the top apps:")
print ("*")
print ("*")
print ("Number of apps that belong to the content rating \"Mature 17+\" is: {}".format(count_Mature_17))
print ("Number of apps that belong to the content rating \"Adults only 18+\" is: {}".format(count_Adults_only))
print ("Number of apps that belong to the content rating \"Unrated\" is: {}".format(count_Unrated))
# + id="nYjZLFVEcNts" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="2b8b2199-6409-4674-9f0b-bef88e2a4a26"
# The aforementioned can be found more easily with the command below
app["Content Rating"].value_counts(ascending=False)
# + id="OuSn3rvUcNtw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 478} outputId="b4b69234-3796-4e90-fcbb-4d5c460b0031"
# In this and in the next 2 commands, I will try to see if there is any correlation between installations, Rating and Reviews
top_10_installed_and_rated_apps
# + id="Tx1JvwobcNty" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} outputId="26db501a-b9ab-435a-e3d9-9cd15ec01fad"
#****************************************************************************************************************
# It seems that none of the best rated apps belong to the top installed (filtered by rating too) apps
#****************************************************************************************************************
app.sort_values(by="Rating", ascending= False).head(10)
# + id="TCwin5KHcNt0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 494} outputId="5e4d5f3a-7762-45ef-a755-c4ee91c2daef"
#****************************************************************************************************************
# Relationship between Reviews and the main top apps
# Instagram, Clean Master - Antivirus, Youtube and Subway Surfers belong to the top installed
# (filtered by rating too) apps and simultaneously to the top reviewed apps
# So 4 out of the 10 top installed-and-rated apps also appear among the top reviewed apps
#****************************************************************************************************************
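# The "4 out of 10" overlap noted above can be computed directly rather than read
# off by eye. A hedged sketch, not from the original notebook; it assumes that
# "App Name" identifies an app well enough for this comparison:
def top_overlap(df, by_a, by_b, name_col="App Name", n=10):
    """App names appearing in both top-n lists, sorted by by_a and by_b respectively."""
    top_a = set(df.sort_values(by=by_a, ascending=False).head(n)[name_col])
    top_b = set(df.sort_values(by=by_b, ascending=False).head(n)[name_col])
    return top_a & top_b

# e.g. top_overlap(app, ["Installs", "Rating"], "Reviews")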
app.sort_values(by="Reviews", ascending= False).head(10)
# + id="b4gy2bi0cNt2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 478} outputId="a4258078-625a-4ac8-d2a9-79bf0be4b561"
top_10_installed_and_rated_apps
# + id="jul-iugwcNt4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="71e0a7f6-fd01-4373-cf89-8645473aeddb"
# Prices of the apps
app["Price"].value_counts().sort_values(ascending=False).head(10)
# + id="r-XDPPmjcNt7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b92f19e7-89ec-4b1e-f703-e39f2fe0fd0b"
app.Price.nunique()
# + [markdown] id="WPigHvshcNuI" colab_type="text"
# # Visualising Data
# + id="Rtm3GkUscNuJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 106} outputId="917acc95-2b41-4ab0-9c1e-e31442ed5c33"
app.head(2)
# + id="3fxapDrkcNuL" colab_type="code" colab={}
import seaborn as sns
import matplotlib.pyplot as plt
# + id="ziFbi0uKcNuN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="a99653d2-8acb-4859-88cf-8426996c8f9d"
# Top 5 app categories of the whole dataset
app["Category"].value_counts().nlargest(5).sort_values(ascending=True).plot.barh()
plt.ylabel("Categories")
plt.xlabel("Count")
plt.title("Google Playstore - Top 5 App Categories")
plt.show()
# + id="kMT-UGxfcNuP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="1489fc95-5fda-4ba5-d983-7bfa5bc9f1c9"
app["Category"].value_counts().nlargest(5).sort_values(ascending=False)
# + id="YY8RB4PrcNuR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 420} outputId="bae1426f-cfbc-4806-bd9e-f3f9271eff04"
# Which categories do the main top 100 apps belong to
top_installed_and_rated_apps["Category"].head(100).value_counts()
# + id="ndM8jr0tcNuU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="70f7dd74-3306-401a-cb44-11e8b65323e2"
status= ("PRODUCTIVITY", "TOOLS", "COMMUNICATION", "SOCIAL")
y_pos= np.arange(len(status))
numbers= [17,16,11,6]
plt.bar(y_pos, numbers, align="center", alpha=0.6)
plt.xticks(y_pos, status)
plt.ylabel("Count")
plt.title("Categories - (Main) Top 100 Apps")
plt.show()
# + id="iVQLhceTcNuW" colab_type="code" colab={}
x=top_installed_and_rated_apps.head(100)
# + id="UwtSVQFwcNuY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 348} outputId="0ccba27e-e719-4607-c4b2-e6eee2697e20"
# Relationship between the classes of installations and the number of apps in each class
app["Installs"].value_counts().sort_values(ascending=False).plot.bar()
plt.ylabel("Number of Apps")
plt.xlabel("Classes of Installations")
plt.title("Google Playstore - Grading in Number of Installations")
plt.show()
# + id="7bR85tdHcNuc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="c47c0db2-d81c-4639-f5f1-f59cc432bdd5"
# Top 5 installation classes by number of apps
app["Installs"].value_counts().nlargest(5)
# + id="EVfUU0wQcNue" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 308} outputId="dd7f0d48-00cb-4898-fbd2-e910974cad3b"
app["Price"].value_counts().nlargest(5).sort_values(ascending=False).plot.bar()
plt.ylabel("Number of Apps")
plt.xlabel("Prices in Dollars")
plt.title("Google Playstore - Prices")
plt.show()
# + id="wSOxnTU9cNug" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="bbb2b617-f5d1-43aa-c4d6-2b4c2250ce0e"
app["Price"].value_counts().nlargest(5)
# + id="kypRkDvlcNui" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 365} outputId="869ec711-2106-4a25-b6b1-d455514a1bb7"
app["Content Rating"].value_counts().sort_values(ascending=False).plot.bar()
plt.ylabel("Number of Apps")
plt.xlabel("Content Rating")
plt.title("Google Playstore - Content Rating")
plt.show()
# + id="22oJe917cNul" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="2e5920fd-da9c-4625-d7e3-58065f5d38cf"
app["Content Rating"].value_counts()
# + id="FlOKebmzcNun" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="6b304571-7ca9-4c79-ab8a-71c8008d74d9"
top_installed_and_rated_apps["Content Rating"].head(100).value_counts()
# + id="iTiLG8n8cNup" colab_type="code" colab={}
#####################
# + id="gASv1bUBcNut" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="b2ba949a-6469-42b0-b40d-cafbf9e74b8a"
top_installed_and_rated_apps.head(100).Installs.value_counts(ascending=False)
# + id="i04l27BucNuw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 278} outputId="5568d4cb-d6f9-462e-8d8b-fbf9bfc3269b"
app_category= top_installed_and_rated_apps.head(100).Installs
app_category.plot.density().set_yscale("log")
# + id="qnijynBTcNuy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="4395d242-dcd9-440f-ffb3-17613b1c0e1b"
app_category= top_installed_and_rated_apps.head(100).Rating
app_category.plot.density().set_yscale("log")
# + id="eW7nYehOcNu3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="eb6486c9-c394-4df3-a186-dc890b7cd38b"
top_installed_and_rated_apps.head(100).Rating.value_counts(ascending=False)
# + id="39tqr_UicNu4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="8418c916-715d-4887-895c-26dc84c45ebc"
top_installed_and_rated_apps.head(100).Reviews.value_counts()
# + id="_FXItN__cNu6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="9a694917-3b07-462e-de2e-732941892b0c"
app_category= top_installed_and_rated_apps.head(100).Reviews
app_category.plot.density().set_yscale("log")
# + id="ZoFpACzNcNu8" colab_type="code" colab={}
####################
# + id="HvvD6DbncNvB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="cd341111-640e-42a0-b382-be7e3de9797e"
app["Rating"].value_counts()
# + id="2OY3tHLicNvD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="2c686bb9-bc4c-4530-e6af-d5cb7b14ad9a"
app_rating= app["Rating"]
num_bins=7
plt.hist(app_rating, num_bins, facecolor="green", alpha = 1)
plt.xlabel("Google Playstore - App Ratings")
plt.ylabel("Number of Apps")
plt.show()
# + id="tD8LyjcOcNvE" colab_type="code" colab={}
###############################
# + id="h3zZDG1qcNvG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="d2fc0e0f-bc71-4bb6-aa27-3fa5623c9db8"
app1=top_installed_and_rated_apps.head(100)
app1["Content Rating"].value_counts().plot.pie()
plt.title("Content Rating - Top 100 (Main) Apps")
plt.show()
# + id="YVkU9wrEcNvI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="28e10968-91ba-460d-d815-bff1cc842bb2"
app1["Content Rating"].value_counts()
# + id="_MDL92yjcNvJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="747893fc-16a6-4ba1-ce85-46cc8301b130"
app2= top_installed_and_rated_apps.head(100)
app2["Installs"].value_counts().plot.pie()
plt.title("Gradation of Installations - Main Top 100 Apps")
plt.show()
# + id="8BRjbPl7cNvL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="c4a0c188-6ede-469f-9f4b-d3025aecc93d"
app2["Installs"].value_counts()
# + id="MD_09PNccNvO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 478} outputId="c3a284ed-fb28-4505-b7ea-ce241e044e48"
top_10_installed_and_rated_apps
# + id="Pizz_eNTcNvQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 332} outputId="5c7968ad-8bcb-477e-a43f-cce210e3d1bc"
# top 10 main apps
app4= top_10_installed_and_rated_apps
top_apps=app4.groupby(["Installs", "App Name"]).size().unstack()
top_apps.plot(kind="bar", stacked=True)
ax=plt.gca()
plt.show()
# + id="FYDwEjEocNvT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="f7f789fb-077b-47ed-d53f-aec3ebbbd5a6"
top_installed_and_rated_apps.head(100)["Content Rating"].value_counts(ascending= False)
# + id="S6-z6MMGcNvU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 616} outputId="49012068-6ce8-4796-8286-b9460c67de68"
# violin plot - main top 100 apps - content rating vs installs
app5=top_installed_and_rated_apps.head(100)
plt.figure(figsize=(15,10))
sns.violinplot(x= "Content Rating", y="Installs", data= app5)
plt.show()
# + id="vlJ428BncNvY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 487} outputId="aa64e9b8-3b34-42eb-e698-33b1c692f716"
plt.figure(figsize=(16,8))
corr= app.corr()
sns.heatmap(corr)
plt.show()
# + [markdown] id="6diIChfAcNvh" colab_type="text"
# # Unsupervised Methods
# + id="RdCQ1Kn9cNvh" colab_type="code" colab={}
new_app = app.head(1000) # Taking a part (sample) of the dataset to apply the supervised and unsupervised methods
# I did not take a bigger sample because of memory crashes
# + id="4QS6OwT9cNvp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="85b17604-28a7-4877-848f-8a11fcfdc6ed"
new_app.shape
# + id="dS2FNR_AcNvr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="6cae5e69-c792-40dc-83af-27a834899299"
new_app["Installs"].value_counts().sort_values(ascending=False)
# + id="fkOHxd_OcNvt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 348} outputId="3198eec3-83bf-4699-a0fd-e973d70bc343"
# I want to see the gradation of the number of installations in the new dataframe (sample),
# so as to compare it later with the unsupervised and supervised methods' results
new_app["Installs"].value_counts().sort_values(ascending=False).plot.bar()
plt.ylabel("Number of Apps in the Sample")
plt.xlabel("Classes of Installations")
plt.title("Gradation in Number of Installations - Sample (first 1000 Lines of the Dataset)")
plt.show()
# + id="iV6DTfK-cNvv" colab_type="code" colab={}
#**********************************
# Import packages from Scikit Learn
from sklearn import cluster # in unsupervised methods the data are grouped into clusters
from sklearn import metrics # for the distances between the data
from sklearn.preprocessing import scale # for scaling
from sklearn.preprocessing import LabelEncoder # for converting strings to floats
from sklearn.preprocessing import OrdinalEncoder # for converting strings to floats when the x (attributes) are strings
# + id="kSAvhGETcNvy" colab_type="code" colab={}
#********************************************************************************
# Segmenting the chosen data into attributes (features) = x and target = y
# y will be the number of installations
# x will be: Category, Rating, Reviews and Content Rating
x= new_app[['Category', 'Rating', 'Reviews', 'Content Rating']] # attributes
y= new_app["Installs"] # y contains the classes of installations, e.g. 100,000 in the dataset means more than 100,000 installations
# + id="DCdouT-scNv3" colab_type="code" colab={}
# x has strings. This command is for converting strings to floats
x_transformed= OrdinalEncoder().fit_transform(x)
# + id="fZaia-kucNv9" colab_type="code" colab={}
# Preparing the data - scaling, i.e. handling the data so that they fall in a specific range
# and represent the same degree of difference
#***********************************************************************************************************
scaled_data= scale(x_transformed)
# + id="jaN6A6UgcNwC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="290174fe-9586-4100-b870-a784222ae1ca"
scaled_data
# + id="vVW1VWeHcNwG" colab_type="code" colab={}
# import python libraries for creating clusters, for converting and for scaling
from sklearn import cluster
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import scale
# + id="tkbwQjoQcNwI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6bb397c8-1ca6-4c8c-bb6a-4e88b4b05092"
# I have taken a sample, so there are now 14 installation classes instead of the 21 for the whole dataset
# creating clusters using Agglomerative Clustering
len(np.unique(y))
# + id="Cb_Fdrz1cNwQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="ee5745b6-2379-4ecc-c160-c63fef21428b"
y.unique()
# + id="hGa4-4qQcNwT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="e7698bbc-02bd-4e2d-f7b4-20468e5e5572"
#******************************************************************************************************
# Hierarchical agglomerative clustering - bottom-up approach
#******************************************************************************************************
# Using linkage="average" means the distance between two clusters is the average of the pairwise distances between their observations
from sklearn.cluster import AgglomerativeClustering
n_samples, n_features = scaled_data.shape
n_digits = len(np.unique(y))
model = cluster.AgglomerativeClustering(n_clusters=n_digits, linkage="average", affinity="cosine")
# (note: in scikit-learn >= 1.2 the "affinity" parameter is named "metric")
model.fit(scaled_data) # this is the model created
# + id="Xqkscn7acNwW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 722} outputId="91f4e847-ab35-4578-a8a9-62dca9d974fb"
print (model.labels_)
# + id="uKTrtQNEcNwc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="abf44a54-daf6-48d2-c971-5d2532d005b9"
# Silhouette score: compares the similarity of an object to its own cluster with its similarity to other clusters
# model.labels_ = the cluster label the model assigned to each observation
#
#
print (metrics.silhouette_score(scaled_data,model.labels_))
print (metrics.completeness_score(y, model.labels_))
print (metrics.homogeneity_score(y, model.labels_))
# + id="w0Ocy8U_cNwe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5e9553ec-ebec-4127-a0b4-9e77c44c6174"
len(np.unique(y))
# + id="HQE3AkNHcNwk" colab_type="code" colab={}
from scipy.cluster.hierarchy import dendrogram, linkage
# + id="UVsSMBJBcNwl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 301} outputId="5307d085-4418-fc56-28317b171ecf"
# Creating Hierarchical Clustering Dendrogram
model= linkage(scaled_data, "ward")
plt.figure()
plt.title("Hierarchical Clustering Dendrogram")
plt.xlabel("sample index")
plt.ylabel("distance")
dendrogram(model, leaf_rotation=90., leaf_font_size=8.)
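# As a hedged aside (not part of the original notebook): the same linkage matrix
# plotted below can also be cut into flat cluster labels, e.g. to compare against
# the installation classes. cut_tree_labels is my own helper name:
from scipy.cluster.hierarchy import fcluster

def cut_tree_labels(linkage_matrix, n_clusters):
    """Cut a scipy linkage matrix into a flat assignment with n_clusters labels."""
    return fcluster(linkage_matrix, t=n_clusters, criterion="maxclust")

# e.g. flat_labels = cut_tree_labels(model, n_digits)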
plt.show() # + id="xpAfab24cNwm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e1b79a50-be28-4c7d-a8b1-9cda820b9b91" len(np.unique(y)) # + id="kCuWOVvbcNwo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 890} outputId="7049d002-1f95-4794-a85f-1791430b9783" #****************************************************************************************************** # Clustering using K-means # need for spesification of numbers of clusters # clusters in this sample are 14 #****************************************************************************************************** from sklearn import cluster from sklearn.preprocessing import LabelEncoder n_samples, n_features = scaled_data.shape n_digits = len(np.unique(y)) for k in range(2, 15): kmeans = cluster.KMeans(n_clusters=k) kmeans.fit(scaled_data) print(k) print(metrics.silhouette_score(scaled_data, kmeans.labels_)) print(metrics.completeness_score(y, kmeans.labels_)) print(metrics.homogeneity_score(y, kmeans.labels_)) # different results on every iteration because we are using random starting points# best score seems to be when k=13 (sometimes when k=14) # + id="LvJfle14cNwp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="75e5d208-8339-407b-be38-1d927dc30ec1" # same command with above, but now creating a list for every score in order to show it to a graph n_samples, n_features = scaled_data.shape n_digits = len(np.unique(y)) y_silhouette=[] y_completeness=[] y_homogeneity=[] for k in range(2, 15): kmeans = cluster.KMeans(n_clusters=k) kmeans.fit(scaled_data) print(k) print(metrics.silhouette_score(scaled_data, kmeans.labels_)) y_silhouette.append(metrics.silhouette_score(scaled_data, kmeans.labels_)) print(metrics.completeness_score(y, kmeans.labels_)) y_completeness.append(metrics.completeness_score(y, kmeans.labels_)) print(metrics.homogeneity_score(y, kmeans.labels_)) 
y_homogeneity.append(metrics.homogeneity_score(y, kmeans.labels_)) print("*********************************************************************************************************************") print("*********************************************************************************************************************") print ("silhouette scores are:\n{}".format(y_silhouette)) print("*********************************************************************************************************************") print ("completeness scores are:\n{}".format(y_completeness)) print("*********************************************************************************************************************") print ("homogeneity scores are:\n{}".format(y_homogeneity)) print("*********************************************************************************************************************") # + id="hualPI25cNwt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="20a8173a-6247-473f-ab75-266874b614b0" plt.plot(y_silhouette) plt.plot(y_completeness) plt.plot(y_homogeneity) plt.legend(["Silhouette", "Completeness", "Homogeneity"]) plt.title("K-means Scores") plt.show() # + [markdown] id="h_hM90-VcNw3" colab_type="text" # # Supervised Methods # + id="lV9ZnPZ4cNw3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e2261c3f-58bb-40de-9c8c-de78d1954eea" new_app.shape # + id="GJLd-FsycNw6" colab_type="code" colab={} supervised_app_x= new_app[['Category', 'Rating', 'Reviews', 'Content Rating']] supervised_app_y= new_app["Installs"] # + id="zsdWoFVXcNw7" colab_type="code" colab={} supervised_x=supervised_app_x.values # attributes supervised_y= supervised_app_y.values #target # + id="_HNXz0i5cNw8" colab_type="code" colab={} supervised_x_transformed= OrdinalEncoder().fit_transform(supervised_x) # conversting the string values to floats for applying distance metrics # + id="Wwer17pycNw-" colab_type="code" colab={} # 
# Segmenting the data into a training and a test set with a 60/40 split

# + id="QCsqoHGUcNw_" colab_type="code" colab={}
from sklearn.model_selection import train_test_split

# + id="IuxxXByncNxB" colab_type="code" colab={}
supervised_x_transformed_train, supervised_x_transformed_test, supervised_y_train, supervised_y_test = train_test_split(supervised_x_transformed, supervised_y, test_size=0.4)

# + id="R9keL_vVcNxC" colab_type="code" colab={}
# Creating classifiers to predict the class of installations, using:
# i. Logistic regression
# ii. KNN

# + id="TUKy90CfcNxE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="ab2b4b8c-c806-4755-b990-af7432fd4f0b"
print("LOGISTIC REGRESSION")
print("**************************************")
from sklearn.linear_model import LogisticRegression
lm = LogisticRegression()
lm.fit(supervised_x_transformed_train, supervised_y_train)
lm.predict_proba(supervised_x_transformed_test)

# + id="sxmNF0dicNxF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="d51bc5a4-401e-49ac-f572-194098396791"
print(lm.intercept_)

# + id="Zg9NJrpOcNxH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 252} outputId="e7610d93-9aa4-4f40-9303-af2790761f43"
print(lm.coef_)

# + id="YR6KW3SQcNxJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 658} outputId="39182833-e50c-47fd-bfaf-f32d84b40165"
predicted = lm.predict(supervised_x_transformed_test)
print(metrics.classification_report(supervised_y_test, predicted))
print(metrics.confusion_matrix(supervised_y_test, predicted))

# + id="4j1zKqR9cNxK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="a86379b4-67a9-405f-ae37-91cb5c2d157b"
# K nearest neighbours
print("KNN")
print("**************************************")
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier()
model.fit(supervised_x_transformed_train, supervised_y_train)
print(model)

# + id="nWMxSTQ2cNxL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 658} outputId="75559cf8-04db-4e51-ea67-bed0e1f94c98"
predicted = model.predict(supervised_x_transformed_test)
print(metrics.classification_report(supervised_y_test, predicted))
print(metrics.confusion_matrix(supervised_y_test, predicted))

# + id="IWJ3cy78cNxN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="33700b1b-cfdc-44f7-af56-e5525f3e8356"
print(metrics.accuracy_score(supervised_y_test, predicted))
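The accuracy above comes from a single random 60/40 split, so it will change from run to run. A minimal sketch, on synthetic stand-in data (the Play Store dataframe itself is not available here), of how fixing `random_state` in `train_test_split` makes the split, and therefore the score, reproducible:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: 100 samples, 4 features, 3 classes
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 3, size=100)

# Fixing random_state makes the 60/40 split deterministic across calls
Xtr1, Xte1, ytr1, yte1 = train_test_split(X, y, test_size=0.4, random_state=42)
Xtr2, Xte2, ytr2, yte2 = train_test_split(X, y, test_size=0.4, random_state=42)

print(np.array_equal(Xte1, Xte2))  # True: the two splits are identical
```

Without `random_state`, every call reshuffles the rows, which is why repeated runs of the classifier cells report slightly different scores.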
Google_playstore.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="jovZkMjIa4rA" colab_type="text"
# # Import Libraries

# + id="Tl6lR9imbHZH" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as pt
import pandas as pd

# + [markdown] id="3-khL_wzcEjC" colab_type="text"
# # Importing the dataset

# + id="I4Q3_KrQcLTN" colab_type="code" colab={}
datasheet = pd.read_excel('TableLibrary.xlsx')
X = datasheet.iloc[:,:-1].values
Y = datasheet.iloc[:,-1].values

# + id="niiczixTfEkc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="94b15d7c-f812-4623-b8e9-22a2f9b45b0c" executionInfo={"status": "ok", "timestamp": 1589568238216, "user_tz": -60, "elapsed": 790, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(X)

# + id="KwvWbVYPf-s9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="bcf31572-8bc8-43dd-8bd9-0c96a284c4cb" executionInfo={"status": "ok", "timestamp": 1589568242326, "user_tz": -60, "elapsed": 874, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(Y)

# + [markdown] id="x0GDl1P-gG-B" colab_type="text"
# # Taking care of Missing Data

# + id="O1Igu8k0gW_w" colab_type="code" colab={}
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer.fit(X[:,1:])
X[:,1:] = imputer.transform(X[:,1:])

# + id="OlmB0ZkpifKz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="7539cae6-4bfd-47cd-931a-54b590e02559" executionInfo={"status": "ok", "timestamp": 1589568250042, "user_tz": -60, "elapsed": 824, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(X)

# + [markdown] id="5RM_8H5xizTM" colab_type="text"
# # Encoding Category
# + [markdown] id="x5GiKwo0mqfK" colab_type="text"
# ## Encoding Independent Variable

# + id="4Iqskiwmm58R" colab_type="code" colab={}
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
X[:,0] = le.fit_transform(X[:,0])

# + id="Yynt2b0MlvZ5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="120708f9-1624-4b23-c88b-b0348919ba14" executionInfo={"status": "ok", "timestamp": 1589569761720, "user_tz": -60, "elapsed": 836, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(X)

# + [markdown] id="RZtIjEnUnJob" colab_type="text"
# ## Encoding dependent variable.
# We do not want to do any encoding for the dependent variable

# + [markdown] id="a0d58pbgnlfj" colab_type="text"
# # Splitting the data into training set and test set

# + id="Nh-fvkSPnvcB" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
X_Tran, X_Test, Y_Tran, Y_Test = train_test_split(X, Y, test_size=0.2, random_state=1)

# + id="p2SnMJRInve-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="542737fe-c62b-4062-a861-26e3bacea1d4" executionInfo={"status": "ok", "timestamp": 1589569846487, "user_tz": -60, "elapsed": 702, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(X_Tran)

# + id="KlKnTdzJv01w" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="718d54ae-e76a-47e7-be2c-2c34cdbb3eeb" executionInfo={"status": "ok", "timestamp": 1589569874301, "user_tz": -60, "elapsed": 736, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(Y_Tran)

# + id="A2SgPLKKv696" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="543056fe-9231-428b-d334-34b34186d4e7" executionInfo={"status": "ok", "timestamp": 1589569909353, "user_tz": -60, "elapsed": 753, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(X_Test)

# + id="p36gZLDMwBrA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="24404a7a-8f90-4aa5-c0c8-00668c0e81fb" executionInfo={"status": "ok", "timestamp": 1589569933042, "user_tz": -60, "elapsed": 790, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(Y_Test)

# + [markdown] id="49i43-AUwIYN" colab_type="text"
# # Feature Scaling

# + id="jroHdTwewLbO" colab_type="code" colab={}
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_Tran = sc.fit_transform(X_Tran)
X_Test = sc.transform(X_Test)  # scale the test set with the training-set statistics, not a fresh fit

# + id="hvkPJ5qFw8Wn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="4ba2c253-24a8-42c3-c791-18ecb2f62c6d" executionInfo={"status": "ok", "timestamp": 1589570214808, "user_tz": -60, "elapsed": 775, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(X_Tran)

# + id="zFDVXttrxMQf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="66b6bc27-c100-4b94-abf3-0732468c4187" executionInfo={"status": "ok", "timestamp": 1589570234211, "user_tz": -60, "elapsed": 755, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
print(X_Test)
Regression/DataProcessingTool.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import pandas as pd

# inspect the raw JSON file (a context manager closes the file afterwards)
with open('dados/aluguel.json') as json_file:
    print(json_file.read())

df_js = pd.read_json('dados/aluguel.json')
df_js

with open('dados/aluguel.txt') as txt_file:
    print(txt_file.read())

df_text = pd.read_table('dados/aluguel.txt')
df_text

df_xlsx = pd.read_excel('dados/aluguel.xlsx')
df_xlsx

df_html = pd.read_html('dados/dados_html_1.html')
df_html[0]

df_html2 = pd.read_html('dados/dados_html_2.html')
df_html2[0]

df_html2[1]

df_html2[2]
extras/Importando Dados.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nbgrader={} # # Matplotlib Exercise 2 # + [markdown] nbgrader={} # ## Imports # + nbgrader={} # %matplotlib inline import matplotlib.pyplot as plt import numpy as np # + [markdown] nbgrader={} # ## Exoplanet properties # + [markdown] nbgrader={} # Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets. # # http://iopscience.iop.org/1402-4896/2008/T130/014001 # # Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo: # # https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue # # A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data: # + import os assert os.path.isfile('open_exoplanet_catalogue.txt') # + nbgrader={} # !head -n 30 open_exoplanet_catalogue.txt # + [markdown] nbgrader={} # Use `np.genfromtxt` with a delimiter of `','` to read the data into a NumPy array called `data`: # + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true} data = np.genfromtxt('open_exoplanet_catalogue.txt', comments="#", delimiter=',') z = np.array(data) # + deletable=false nbgrader={"checksum": "5dcbc888bcd5ce68169a037e67cdd37f", "grade": true, "grade_id": "matplotlibex02a", "points": 2} assert data.shape==(1993,24) # + [markdown] nbgrader={} # Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper. # # * Customize your plot to follow Tufte's principles of visualizations. # * Customize the box, grid, spines and ticks to match the requirements of this data. 
# * Pick the number of bins for the histogram appropriately.

# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
PM = z[:,2]
plt.figure(figsize=(20,20))
plt.hist(PM, bins=range(0,12,1))
plt.xlabel('Planetary Mass')
plt.ylabel('Number of Planets')
plt.title('Number of Planets per Planetary Mass Range')

# + deletable=false nbgrader={"checksum": "27c6f50d571df0da41b2bed77769300e", "grade": true, "grade_id": "matplotlibex02b", "points": 4}
assert True # leave for grading

# + [markdown] nbgrader={}
# Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
#
# * Customize your plot to follow Tufte's principles of visualizations.
# * Customize the box, grid, spines and ticks to match the requirements of this data.

# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
orbital_e = z[:,8]
semimajor = z[:,7]
plt.figure(figsize=(10,10))
plt.scatter(semimajor, orbital_e)
plt.xscale('log')  # the semimajor axes span several orders of magnitude
plt.grid(True)
plt.tight_layout()
plt.xlabel('Semimajor axis')
plt.ylabel('Orbital eccentricity')
plt.title('Orbital eccentricity vs Semimajor axis of Planets')

# + deletable=false nbgrader={"checksum": "eac3900a2375e914caac56021476284b", "grade": true, "grade_id": "matplotlibex02c", "points": 4}
assert True # leave for grading
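One caveat when plotting catalogue columns: `np.genfromtxt` fills missing fields with `nan`, and many catalogue rows lack a measured mass or eccentricity, so filtering those out before plotting keeps the counts honest. A small sketch with made-up values:

```python
import numpy as np

# Hypothetical mass column with missing entries, as genfromtxt would produce
PM = np.array([0.5, np.nan, 2.3, np.nan, 11.0, 0.02])

# Keep only rows where a value was actually measured
PM_clean = PM[~np.isnan(PM)]
print(len(PM_clean))  # 4
```

The same mask idea extends to paired columns: `mask = ~np.isnan(x) & ~np.isnan(y)` keeps only planets with both quantities measured before a scatter plot.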
assignments/assignment04/MatplotlibEx02.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import scipy.integrate as spi
import numpy as np
import pylab as pl

miu = 0.1
N = 1000
beta = 1.4247
gamma = 0.14286
TS = 1.0
ND = 70.0

S0 = 1-1e-6
I0 = 1e-6
INPUT = (S0, I0, 0.0)

def diff_eqs(INP, t):
    '''The main set of equations'''
    Y = np.zeros((3))
    V = INP
    Y[0] = miu*N - beta * V[0] * V[1]/N - miu*V[0]
    Y[1] = beta * V[0] * V[1]/N - gamma * V[1] - miu*V[1]
    Y[2] = gamma * V[1] - miu*V[2]
    return Y

# For odeint
t_start = 0.0; t_end = ND; t_inc = TS
t_range = np.arange(t_start, t_end+t_inc, t_inc)
RES = spi.odeint(diff_eqs, INPUT, t_range)

print(RES)

# Plotting
pl.subplot(111)
pl.plot(RES[:,1], '-r', label='Infectious')
pl.plot(RES[:,0], '-g', label='Susceptibles')
pl.plot(RES[:,2], '-k', label='Recovereds')
pl.legend(loc=0)
pl.title('SIR_Model.py')
pl.xlabel('Time')
pl.ylabel('Susceptibles, Infectious and Recovereds')
pl.show()

# +
import scipy.integrate as spi
import numpy as np
import pylab as pl
from plotdf import plotdf

def f(x, g=1, m=1, b=1, N=1):
    return np.array([m*N - b*x[0]*x[1]/N - m*x[0],
                     b*x[0]*x[1]/N - g*x[1] - m*x[1]])

plotdf(f,                   # Function giving the rhs of the diff. eq. system
       np.array([0,1000]),  # [xmin,xmax]
       np.array([0,1000]),  # [ymin,ymax]
       # Additional parameters for `f` (optional)
       parameters={"g":0.14,"m":0.1,"b":1.4,'N':1000})
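For this SIR variant with equal birth and death rate $\mu$, the standard textbook result (not derived in this notebook) is that the basic reproduction number is $R_0 = \beta/(\gamma+\mu)$; with the parameter values used above it is well above 1, which is why the infectious curve grows before settling. A quick arithmetic check:

```python
# Parameters taken from the model above
beta = 1.4247    # transmission rate
gamma = 0.14286  # recovery rate
miu = 0.1        # birth/death rate

# Basic reproduction number for SIR with vital dynamics (standard formula)
R0 = beta / (gamma + miu)
print(R0)  # ~5.87: each initial case causes roughly 6 secondary cases
```

Since $R_0 > 1$, the disease-free equilibrium is unstable and the trajectories plotted above approach an endemic equilibrium instead.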
Assignment2_A.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# + nbsphinx="hidden"
#jupman-purge-output
#REMEMBER TO RUN THIS CELL
import sys; sys.path.append('../../../'); import jupman;
# -

# # Exam Fri 16 Apr 2021 A
#
# **Python Seminars @Sociologia, Università di Trento**
#
# ## [Download](../../../_static/generated/sps-2021-04-16-exam.zip) exercises and solutions

# ## Exercise - prendilettere
#
# ✪ Given a `frase` (sentence) that contains **exactly** 3 words and **always** has a number $n$ as its middle word, write some code that PRINTS the first $n$ characters of the third word
#
# Example - given:
#
# ```python
# frase = "Prendi 4 lettere"
# ```
#
# your code must print:
#
# ```
# lett
# ```

# +
#jupman-purge-output
frase = "Prendi 4 lettere"       # lett
#frase = "Prendere 5 caratteri"  # carat
#frase = "Take 10 characters"    # characters

# write here
parole = frase.split()
n = int(parole[1])
print(parole[2][:n])
# -

# ## Exercise - brico
#
# ✪✪ A warehouse for DIY enthusiasts has a `catalogo` (catalogue) that maps item types to the shelves where they should be placed. Every day, a list of `arrivi` (arrivals) is filled with the item types that arrived that day. These types must be placed into the `magazzino` (warehouse), a dictionary that maps each shelf to the item type prescribed by the catalogue. Write some code that, given the list of `arrivi` and the `catalogo`, populates the `magazzino` dictionary.
# # Example - given:
#
# ```python
#
# arrivi = ['sedie', 'lampade', 'cavi']
#
# catalogo = {'stufe' : 'A',
#             'sedie' : 'B',
#             'caraffe' : 'D',
#             'lampade' : 'C',
#             'cavi' : 'F',
#             'giardinaggio' : 'E'}
#
# magazzino = {}
# ```
#
# after your code, the result must be:
#
# ```python
# >>> magazzino
# {'B': 'sedie', 'C': 'lampade', 'F': 'cavi'}
# ```

# +
#jupman-purge-output
arrivi = ['sedie', 'lampade', 'cavi']  # magazzino becomes: {'B': 'sedie', 'C': 'lampade', 'F': 'cavi'}
#arrivi = ['caraffe', 'giardinaggio']  # magazzino becomes: {'D': 'caraffe', 'E': 'giardinaggio'}
#arrivi = ['stufe']                    # magazzino becomes: {'A': 'stufe'}

catalogo = {'stufe' : 'A',
            'sedie' : 'B',
            'caraffe' : 'D',
            'lampade' : 'C',
            'cavi' : 'F',
            'giardinaggio' : 'E'}

# write here
magazzino = {}
for consegna in arrivi:
    magazzino[ catalogo[consegna] ] = consegna
magazzino
# -

# ## Exercise - The longest word
#
# ✪✪ Write some code that, given a `frase` (sentence), prints the **length** of the longest word.
#
# - **NOTE**: we only want the length of the longest word, not the word itself!
#
# Example - given:
#
# ```python
# frase = "La strada si inerpica lungo il ciglio della montagna"
# ```
#
# your code must print
#
# ```
# 8
# ```
#
# which is the length of the longest words, `inerpica` and `montagna`, which are tied

# +
#jupman-purge-output
frase = "La strada si inerpica lungo il ciglio della montagna"  # 8
#frase = "Il temibile pirata Le Chuck dominava spietatamente i mari del Sud"  # 13
#frase = "Praticamente ovvio"  # 12

# write here
max([len(parola) for parola in frase.split()])
# -

# ## Exercise - Staircases
#
# ✪✪✪ Given a list of odd length filled with zeros except for the number in the middle, write some code that MODIFIES the list so that the numbers decrease step by step as you move away from the centre.
# # - the length of the list is always odd
# - assume the list will always be long enough to reach zero at each edge
# - a list of size 1 will contain only a zero
#
# Example 1 - given:
#
# ```python
# lista = [0, 0, 0, 0, 4, 0, 0, 0, 0]
# ```
#
# after your code, the result must be:
#
# ```python
# >>> lista
# [0, 1, 2, 3, 4, 3, 2, 1, 0]
# ```
#
# Example 2 - given:
#
# ```python
# lista = [0, 0, 0, 3, 0, 0, 0]
# ```
#
# after your code, the result must be:
#
# ```python
# >>> lista
# [0, 1, 2, 3, 2, 1, 0]
# ```

# +
#jupman-purge-output
lista = [0, 0, 0, 0, 4, 0, 0, 0, 0]  # -> [0, 1, 2, 3, 4, 3, 2, 1, 0]
#lista = [0, 0, 0, 3, 0, 0, 0]       # -> [0, 1, 2, 3, 2, 1, 0]
#lista = [0, 0, 2, 0, 0]             # -> [0, 1, 2, 1, 0]
#lista = [0]                         # -> [0]

# write here
m = len(lista) // 2
for i in range(m):
    lista[m+i] = m - i
for i in range(m):
    lista[i] = i
lista
# -

# ## Exercise - First before second
#
# ✪✪✪ Given a `stringa` (string) and two characters `car1` and `car2`, write some code that PRINTS `True` if every occurrence of `car1` in the string is **always** followed by `car2`.
# # Example - given:
#
# ```python
# stringa,car1,car2 = "accatastare la posta nella stiva", 's','t'
# ```
#
# it prints `True` because every occurrence of `s` is followed by `t`
#
#
# ```python
# stringa,car1,car2 = "dadaista entusiasta", 's','t'
# ```
#
# it prints `False`, because the sequence `si` is found, in which `s` is not followed by `t`
#
# - **USE** a `while` loop; try to make it efficient by terminating it as soon as you can
# - do **NOT** use **break**

# +
#jupman-purge-output
stringa,car1,car2 = "accatastare la posta nella stiva", 's','t'  # True
#stringa,car1,car2 = "dadaista entusiasta", 's','t'  # False
#stringa,car1,car2 = "barbabietole", 't','o'  # True
#stringa,car1,car2 = "barbabietole", 'b','a'  # False
#stringa,car1,car2 = "a", 'a','b'   # False
#stringa,car1,car2 = "ab", 'a','b'  # True
#stringa,car1,car2 = "aa", 'a','b'  # False

# write here
i = 0
res = True
while i < len(stringa) and res:
    if stringa[i] == car1:
        # car1 must be immediately followed by car2,
        # so an occurrence as the last character also fails
        if i + 1 >= len(stringa) or stringa[i+1] != car2:
            res = False
    i += 1
res
# -
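Outside the constraints of the exercise above (which mandates a `while` with early termination and no `break`), the same predicate can be written declaratively; a sketch:

```python
stringa, car1, car2 = "accatastare la posta nella stiva", 's', 't'

# every occurrence of car1 must be immediately followed by car2;
# an occurrence as the last character fails by definition
ok = all(i + 1 < len(stringa) and stringa[i+1] == car2
         for i, c in enumerate(stringa) if c == car1)
print(ok)  # True
```

`all` over a generator also short-circuits on the first failing occurrence, so it matches the efficiency goal of the `while` version.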
exams/2021-04-16/solutions/exam-2021-04-16-sol.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# #### Copyright IBM All Rights Reserved.
# #### SPDX-License-Identifier: Apache-2.0

# # Db2 Sample For Scikit-Learn
#
# In this code sample, we will show how to use the Db2 Python driver to import data from our Db2 database. Then, we will use that data to create a machine learning model with scikit-learn.
#
# Many wine connoisseurs love to taste different wines from all over the world. Most importantly, they want to know how the quality differs between wines based on their ingredients. Some of them also want to be able to predict the quality before even tasting a wine. In this notebook, we will be using a dataset that collects attributes of many wine bottles which determine the quality of the wine. Using this dataset, we will help our wine connoisseurs predict the quality of wine.
#
# This notebook will demonstrate how to use Db2 as a data source for creating machine learning models.
#
# Prerequisites:
# 1. Python 3.6 and above
# 2. Db2 on Cloud instance (using free-tier option)
# 3. Data already loaded in your Db2 instance
# 4. Have Db2 connection credentials on hand
#
# We will be importing two libraries - `ibm_db` and `ibm_db_dbi`. `ibm_db` is a library with low-level functions that connect directly to our Db2 database. To make things easier, we will use `ibm_db_dbi`, which communicates with `ibm_db` and gives us an easy interface for interacting with our data and importing it as a pandas dataframe.
#
# For this example, we will be using the [winequality-red dataset](../data/winequality-red.csv), which we have loaded into our Db2 instance.
#
# NOTE: if `!easy_install ibm_db` does not work in your normal Jupyter environment, you may need to run this notebook inside a Docker container.

# ## 1. Import Data
# Let's first install and import all the libraries needed for this notebook. Most importantly, we will install and import the Db2 Python driver `ibm_db`.

# !pip install scikit-learn
# !easy_install ibm_db

# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline

# The two python ibm db2 drivers we need
import ibm_db
import ibm_db_dbi
# -

# Now let's import our data from our data source using the python db2 driver.

# +
# replace only the <> credentials
dsn = "DRIVER={{IBM DB2 ODBC DRIVER}};" + \
      "DATABASE=<DATABASE NAME>;" + \
      "HOSTNAME=<HOSTNAME>;" + \
      "PORT=50000;" + \
      "PROTOCOL=TCPIP;" + \
      "UID=<USERNAME>;" + \
      "PWD=<<PASSWORD>>;"

hdbc = ibm_db.connect(dsn, "", "")
hdbi = ibm_db_dbi.Connection(hdbc)

sql = 'SELECT * FROM <SCHEMA NAME>.<TABLE NAME>'

wine = pd.read_sql(sql, hdbi)  # use the imported 'pd' alias
#wine = pd.read_csv('../data/winequality-red.csv', sep=';')
# -

wine.head()

# ## 2. Data Exploration
# In this step, we are going to explore our data in order to gain insight. We hope to be able to make some assumptions about our data before we start modeling.
wine.describe()

# +
# Minimum quality in the data
minimum_quality = np.amin(wine['quality'])

# Maximum quality in the data
maximum_quality = np.amax(wine['quality'])

# Mean quality of the data
mean_quality = np.mean(wine['quality'])

# Median quality of the data
median_quality = np.median(wine['quality'])

# Standard deviation of quality in the data
std_quality = np.std(wine['quality'])

# Show the calculated statistics
print("Statistics for the wine quality dataset:\n")
print("Minimum quality: {}".format(minimum_quality))
print("Maximum quality: {}".format(maximum_quality))
print("Mean quality: {}".format(mean_quality))
print("Median quality: {}".format(median_quality))
print("Standard deviation of quality: {}".format(std_quality))
# -

wine.corr()

corr_matrix = wine.corr()
corr_matrix["quality"].sort_values(ascending=False)

# ## 3. Data Visualization

wine.hist(bins=50, figsize=(30,25))
plt.show()

boxplot = wine.boxplot(column=['quality'])

# ## 4. Creating Machine Learning Model
# Now that we have cleaned and explored our data, we are ready to build the model that will predict the attribute `quality`.

wine_value = wine['quality']
wine_attributes = wine.drop(['quality'], axis=1)

# +
from sklearn.preprocessing import StandardScaler

# Let us scale our data first
sc = StandardScaler()
wine_attributes = sc.fit_transform(wine_attributes)

# +
from sklearn.decomposition import PCA

# Apply PCA to our data
# (x_pca is computed here for illustration; the train/test split below uses the scaled attributes directly)
pca = PCA(n_components=8)
x_pca = pca.fit_transform(wine_attributes)
# -

# We need to split our data into train and test data.
# +
from sklearn.model_selection import train_test_split

# Split our data into test and train data
x_train, x_test, y_train, y_test = train_test_split(
    wine_attributes, wine_value, test_size=0.25)
# -

# We will be using Logistic Regression to model our data

# +
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

lr = LogisticRegression()

# Train our model
lr.fit(x_train, y_train)

# Predict using our trained model and our test data
lr_predict = lr.predict(x_test)
# -

# Print confusion matrix and accuracy score
lr_conf_matrix = confusion_matrix(y_test, lr_predict)
lr_acc_score = accuracy_score(y_test, lr_predict)
print(lr_conf_matrix)
print(lr_acc_score*100)
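The accuracy above is measured on a single 25% hold-out, so it carries sampling noise. A hedged sketch of k-fold cross-validation on synthetic stand-in data (the wine dataframe itself requires the Db2 connection), which averages the score over several splits:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: 120 samples, 5 features, quality-like labels 3..5
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = rng.integers(3, 6, size=120)

# 5-fold cross-validation: fit on 4 folds, score on the held-out fold, repeat
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())  # mean accuracy over 5 folds
```

The same call applied to the real `wine_attributes` / `wine_value` would give a less split-dependent estimate than one `train_test_split`.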
db2_for_machine_learning_samples/notebooks/Db2 Sample For Scikit-Learn.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + language="javascript" # MathJax.Hub.Config({ # TeX: { equationNumbers: { autoNumber: "AMS" } } # }); # - # <NAME> # <NAME> # <NAME> # <NAME> # <NAME> # # $$\text{Exercise 1}$$ # 1. Let us consider the array $M$ defined below import numpy as np M = np.array([[ 80842, 333008, 202553, 140037, 81969, 63857, 42105, 261540], [481981, 176739, 489984, 326386, 110795, 394863, 25024, 38317], [ 49982, 408830, 485118, 16119, 407675, 231729, 265455, 109413], [103399, 174677, 343356, 301717, 224120, 401101, 140473, 254634], [112262, 25063, 108262, 375059, 406983, 208947, 115641, 296685], [444899, 129585, 171318, 313094, 425041, 188411, 335140, 141681], [ 59641, 211420, 287650, 8973, 477425, 382803, 465168, 3975], [ 32213, 160603, 275485, 388234, 246225, 56174, 244097, 9350], [496966, 225516, 273338, 73335, 283013, 212813, 38175, 282399], [318413, 337639, 379802, 198049, 101115, 419547, 260219, 325793], [148593, 425024, 348570, 117968, 107007, 52547, 180346, 178760], [305186, 262153, 11835, 449971, 494184, 472031, 353049, 476442], [ 35455, 191553, 384154, 29917, 187599, 68912, 428968, 69638], [ 20140, 220691, 163112, 47862, 474879, 15713, 279173, 161443], [ 76851, 450401, 193338, 189948, 211826, 384657, 17636, 189954], [182580, 403458, 63004, 87202, 17143, 308769, 316192, 408259], [379223, 322891, 302814, 309594, 13650, 68095, 327748, 27138], [ 33346, 202960, 221234, 61986, 96524, 230606, 266487, 299766], [ 46264, 33510, 434647, 26750, 412754, 346058, 263454, 270444], [182770, 486269, 402948, 105055, 463911, 378799, 329053, 16793], [379586, 293162, 121602, 144605, 105553, 71148, 40126, 183534], [493338, 5241, 498084, 189451, 219466, 12869, 482845, 16529], [145955, 461866, 427009, 70841, 318111, 199631, 99330, 407887], [133826, 214053, 295543, 339707, 199545, 
194494, 304822, 337793], [395524, 206326, 308488, 297633, 472409, 487606, 422244, 443487], [464749, 75275, 299486, 248984, 76076, 197544, 75103, 394086], [404365, 67309, 267556, 96483, 166099, 203867, 57120, 211249], [ 86992, 286426, 130451, 419955, 109839, 332663, 101095, 50820], [108699, 436141, 333670, 45890, 425798, 6967, 155851, 360687], [252907, 121138, 81509, 367876, 471831, 226950, 349068, 197092]])
M

# 1. What is the average value of the second and fourth columns?

# +
np.average(M[:, 1:4:2])
# -

# 2. What is the average value of the last 5 rows of the second and fourth columns?

# +
np.average(M[25:30, 1:4:2])
# -

# 3. Update $M$ by replacing all its even integers by 0.

# +
# np.where returns a new array, so assign the result back to M to actually update it
M = np.where(M % 2 == 0, 0, M)
M
# -

# # $$\text{Exercise 2}$$
#
# Let $\{ x_k\}$ be a partition of $[a,b]$ such that $a=x_0<x_1<\cdots<x_{N-1}<x_N=b$ and $H$ be the length of the $k$-th subinterval ($H = x_k - x_{k-1}$),
# then we have
# $$\int_a^bf(x)dx \approx \sum_{k=1}^N \frac{f(x_{k-1})+f(x_k)}{2}H = A$$
#
# Write a function named <b>Trap</b> that takes $a,b,N, f$ as inputs and returns A

import numpy as np
def Trap(a, b, N, f):
    x = np.linspace(a, b, N+1)
    A = 0
    for k in range(1, N+1):
        H = x[k] - x[k-1]
        A = A + H*((f(x[k-1]) + f(x[k]))/2)
    return A

Trap(0, 1, 10000, lambda x: x)

# # $$\text{Exercise 3}$$
#
# Let us consider the function $f$ defined on the set of integers as follows
#
# \begin{equation}
# f(n) =
# \begin{cases}
# n/2 & \quad \text{if } n \text{ is even}\\
# -(n)/3 +1 & \quad \text{if } n \text{ is odd and divisible by 3}\\
# (n+1)/2 +1 & \quad \text{else}
# \end{cases}
# \end{equation}
#
# Write a python function named <b>ComptFunc</b> that takes two integers $a$ and $b$, $a<b$, and returns an array $flist$ of all $f(n), n \in nlist$ where $nlist$ is the list of all the integers in the interval $[a,b]$.
# + def First(n): if n%2==0: return (n/2) elif n%2!=0 and n%3==0: return((-n/3)+1) else: return(1+ (n+1)/2) flist= np.vectorize(First) # - def ComptFunc(a,b): nlist=np.arange(a,b+1) return(flist(nlist)) ComptFunc(-10,15) # # $$\text{Exercise 4}$$ # # Write a python code to create the following numpy arrays # $$ # A = \begin{pmatrix} # 1 & 4 & 6 \\ # 0 & -3 & 2 \\ # -2 & -2 & -2 # \end{pmatrix}, \quad # B = \begin{pmatrix} # 2 & -1 & 0 \\ # 2 & -1 & 0 \\ # 2 & -3 & 1 # \end{pmatrix}, # $$ # and compute # - $A-B$, # - $4A + B$, # - $trace(A)$, # - $B^t$ the transpose of $B$, # - $AB$, # - $BA^t$, # - the determinant of $A$ # + A=np.array([[1,4,6],[0,-3,2],[-2,-2,-2]]) B=np.array([ [2,-1,0], [2,-1,0],[2,-3,1]]) print(A-B) print("********") print(4*A+B) print("********") print(np.trace(A)) print("********") print(B.T) print("********") print(np.linalg.det(A)) print("********") print(A@B) print("********") print(B@A.T) # - # # $$\text{Exercise 5}$$ # # write a python code to solve the system of equation # \begin{equation}\label{sysEqu} # \left\lbrace # \begin{array}{ll} # 2x + y -2z &= 3,\\ # x-y-z &= 0,\\ # x+y+3z &=12 # \end{array}\right. # \end{equation} import numpy as np A=np.array([[2,1,-2], [1, -1, -1],[1,1,3]]) L= np.linalg.inv(A) B=np.array([[3],[0],[12]]) L@B print(L@B) # # $$\text{Exercise 6}$$ # # Let us consider the sequence $U_n$ given by # \begin{equation}\label{fib} # \left\lbrace # \begin{array}{ll} # U_0 &= 1,\\ # U_1 &= 2,\\ # U_{n} &=-3U_{n-1} +U_{n-2}, \;\; \forall\; n=2,3,4\cdots # \end{array}\right. # \end{equation} # # Write a python function named <b>SeqTerms</b> that takes as input an integer $n,\;\;n\geq 0$ and return an array of the first $n+1$ terms (i.e. $U_0, \cdots, U_{n}$) of the sequence \eqref{fib}. 
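Before writing `SeqTerms`, the first few terms can be checked by hand from the recurrence with $U_0=1$, $U_1=2$: $U_2 = -3\cdot 2 + 1 = -5$ and $U_3 = -3\cdot(-5) + 2 = 17$. A two-line sanity check:

```python
U0, U1 = 1, 2
U2 = -3 * U1 + U0   # -5
U3 = -3 * U2 + U1   # 17
print(U2, U3)
```

Any implementation can be tested against these values.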
# +
import numpy as np

def SeqTerms(n):
    # return the first n+1 terms U_0, ..., U_n of the sequence
    U0 = 1
    U1 = 2
    if n == 0:
        return np.array([U0])
    y = np.array([U0, U1])
    for i in range(2, n + 1):
        Un = -3*U1 + U0
        U0 = U1
        U1 = Un
        y = np.append(y, Un)
    return y
# -

SeqTerms(3)

# # $$\text{Exercise 7}$$
#
# Let $\{ x_k\}$ be a partition of $[0,1]$ such that $0=x_0<x_1<\cdots<x_{99}<x_{100}=1$ and $H$ be the length of the $k$-th subinterval ($H = x_k - x_{k-1}$).
#
# 1. Write a python code that uses Euler's method to solve the initial value problem
# \begin{equation}\label{eul1}
# \begin{cases}
# y' = \dfrac{2-2xy}{x^2+1}, & \quad \text{on } [0, 1]\\\\
# y(0) = 1,
# \end{cases}
# \end{equation}
# i.e. generates an array of $y_k\approx g(x_k)=g_k$ where $g$ is the exact solution of \eqref{eul1}
#
# 2. The exact solution of the initial value problem is given by
# \begin{equation}\label{exact}
# g(x) = \dfrac{2x+1}{x^2+1}
# \end{equation}
#
# use $subplot$ to plot side by side
# - the exact and approximate solution in the same window (i.e. $y_k$ vs $x_k$ and $g_k$ vs $x_k$)
# - the error $e_k = |y_k -g_k|$ against $x_k$.

import numpy as np

def Euler(N):
    f = lambda x, y: (2 - 2*x*y)/(x**2 + 1)
    y = 1
    x = 0
    AA = [0]
    BB = [1]
    H = 1/N
    for k in range(1, N + 1):
        # forward Euler: advance y using the slope at the *current* point, then step x
        y = y + H*f(x, y)
        x = x + H
        AA.append(x)
        BB.append(y)
    return AA, BB

AA, BB = Euler(100)

# +
import matplotlib.pyplot as plt

plt.subplot(1, 2, 1)
x = np.linspace(0, 1, 101)
y_x = (2*x + 1)/(x**2 + 1)
plt.plot(x, y_x)
plt.plot(AA[::5], BB[::5], '*')
plt.title('exact and approximate')
plt.legend(['exact', 'approximation'])

plt.subplot(1, 2, 2)
plt.plot(AA, np.abs(np.array(BB) - y_x))
plt.title('error')
plt.show()
# -

# # $$\text{Exercise 8}\quad (\text{generalization of previous exercise})$$
#
# Let $\{ x_k\}$ be a partition of $[a,b]$ such that $a=x_0<x_1<\cdots<x_{N-1}<x_{N}=b$ and $H$ be the constant length of the $k$-th subinterval ($H = x_k - x_{k-1}$).
Let us consider the initial value problem
#
# \begin{equation}\label{eul2}
# \begin{cases}
# y' = f(x,y), & \quad \text{on } [a, b]\\\\
# y(a) = c,
# \end{cases}
# \end{equation}
#
# 1. Write a python function <b>EulerMethod</b> that takes $a,b,c,N,$ and $f$ and plots the approximate solution of \eqref{eul2} using Euler's method.

import numpy as np
import matplotlib.pyplot as plt

def EulerMethod(a, b, c, N, f):
    y = c
    x = a
    A = [a]
    B = [c]
    H = (b - a)/N
    # take exactly N Euler steps; the step count must not depend on the value of a
    for k in range(N):
        y = y + H*f(x, y)
        x = x + H
        A.append(x)
        B.append(y)
    # the exercise asks for a plot of the approximate solution
    plt.plot(A, B)
    plt.show()
    return A, B

# 2. Test the function <b>EulerMethod</b> by using the data in Exercise 7

EulerMethod(0, 1, 1, 100, lambda x, y: (2 - 2*x*y)/(x**2 + 1))

# # $$\text{Exercise 9}$$
#
# Consider a 25 L tank that will be filled with water in two different ways. In the first case, the water volume that enters the tank per time (rate of volume increase) is piecewise constant, while in the second case, it is continuously increasing.
#
# For each of these two cases, you are asked to develop a code that can compute how the total water volume V in the tank will develop with time t over a period of 3 s. Your calculations must be based on the information given: the initial volume of water (1 L in both cases), and the volume of water entering the tank per time. Write a python code to solve and plot the exact volume against the time for the rate $r(t)$ given by
# 1. Case 1: piecewise constant
# \begin{equation}
# r(t) =
# \begin{cases}
# 1 L/s & \quad \text{if }\quad 0s<t<1s\\
# 3 L/s & \quad \text{if }\quad 1s\leq t< 2s\\
# 7 L/s & \quad \text{if }\quad 2s\leq t\leq 3s
# \end{cases}
# \end{equation}
#
# 2. Case 2: $r(t) = e^t$ for all $0< t\leq 3$.
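For Case 1 the exact volume can be written down directly: integrating the piecewise-constant rate gives a piecewise-linear $V(t)$ with $V(0)=1$, $V(1)=2$, $V(2)=5$, $V(3)=12$. A sketch of the closed form (the name `V_exact` is ours, not part of the exercise):

```python
import numpy as np

def V_exact(t):
    # integral of the piecewise-constant rate, starting from V(0) = 1 L
    t = np.asarray(t, dtype=float)
    return np.where(t < 1, 1 + 1*t,              # rate 1 L/s on [0, 1)
           np.where(t < 2, 2 + 3*(t - 1),        # rate 3 L/s on [1, 2)
                    5 + 7*(t - 2)))              # rate 7 L/s on [2, 3]

t = np.linspace(0, 3, 301)
# plt.plot(t, V_exact(t)) would show the exact curve next to a numerical one
```

This gives a reference curve to compare any numerical solution against.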
# +
def func(t):
    # rates from the exercise; the boundaries follow the piecewise definition
    # (t = 0 is included in the first branch so Euler's method can start at the left endpoint)
    if 0 <= t < 1:
        return 1
    elif 1 <= t < 2:
        return 3
    elif 2 <= t <= 3:
        return 7

f = np.vectorize(func)
# -

import numpy as np

def Method(a, b, c, N, f):
    # Euler's method for V' = r(t) with V(a) = c
    v = c
    x = a
    E = [a]
    F = [c]
    H = (b - a)/N
    for k in range(N):
        # use the rate at the current time, then step forward
        v = v + H*f(x)
        x = x + H
        E.append(x)
        F.append(v)
    return E, F

# Case 1: piecewise constant rate
E, F = Method(0, 3, 1, 10, f)
print(E, F)

# Case 2: exponentially growing rate
C, D = Method(0, 3, 1, 10, lambda x: np.exp(x))
print(C, D)

# +
import matplotlib.pyplot as plt

plt.subplot(1, 2, 1)
plt.plot(E, F)
plt.title('Case 1: piecewise constant rate')

plt.subplot(1, 2, 2)
plt.plot(C, D)
plt.title('Case 2: r(t) = e^t')
plt.show()
# -
FirstGroupProject_Group3.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # From Linear Regression to Deep Learning in Pytorch
#
# In this chapter we'll see how to build supervised learning models of seemingly arbitrary complexity without having to manually specify a particular parametric form. The key to this is to stack regression models on top of each other so that the outputs of earlier models become the inputs to subsequent models. Instead of training these models one at a time, we can use gradient descent to simultaneously search the combined parameter space of all of these "layers". Deriving the expressions for the gradients of the loss with respect to each of these parameters would be incredibly tedious, so we will introduce an auto-differentiation tool provided by the `torch` package that will make gradient descent a breeze to implement.
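Before reaching for `torch`, it helps to see what an auto-differentiation tool actually does. Below is a toy, scalar-only reverse-mode sketch in plain Python (the class name `Var` is ours; real autograd handles tensors and far more operations, but the chain-rule bookkeeping is the same idea):

```python
class Var:
    """Toy scalar with reverse-mode autodiff (add and mul only)."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self, upstream=1.0):
        # accumulate d(output)/d(self), then push to parents via the chain rule;
        # gradients sum path by path, which is correct for shared nodes too
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)

# y = w * x + b  at  x=3, w=2, b=1  ->  dy/dw = 3, dy/db = 1
x, w, b = Var(3.0), Var(2.0), Var(1.0)
y = w * x + b
y.backward()
print(w.grad, b.grad)  # 3.0 1.0
```

`torch` automates exactly this bookkeeping over tensor operations, which is why we never have to derive gradient expressions by hand.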
_site/content/linreg-dl-torch/intro.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # SymPy

# + jupyter={"outputs_hidden": false}
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
# -

# ---
# ## Part I
#
# $$ \Large {\displaystyle f(x)=3e^{-{\frac {x^{2}}{8}}}} \sin(x/3)$$
# * Find the **first four terms** of the Taylor series of the above equation (at x = 0) (use `sympy` variables)
# * Make a plot of the function (use `numpy` variables)
# * Plot size 10 in x 4 in
# * X limits -5, 5
# * Y limits -2, 2
# * Use labels for the different lines
# * On the same plot:
# * Over-plot the 1st-term Taylor series using a different color/linetype/linewidth/label
# * Over-plot the 1st-term + 2nd-term Taylor series using a different color/linetype/linewidth/label
# * Over-plot the 1st-term + 2nd-term + 3rd-term Taylor series using a different color/linetype/linewidth/label
# * Over-plot the 1st-term + 2nd-term + 3rd-term + 4th-term Taylor series using a different color/linetype/linewidth/label

# + jupyter={"outputs_hidden": false}

# + jupyter={"outputs_hidden": false}

# + jupyter={"outputs_hidden": false}
# -

# ---
# ## Part II
#
# $$\Large
# {\displaystyle g(x)=\frac{1}{5}x^{3} + \frac{1}{2}x^{2} + \frac{1}{3}x - \frac{1}{2}}
# $$
#
# #### Plot `f(x)` and `g(x)` on the same plot
# #### What are the value(s) for `x` where `f(x) = g(x)`?

# + jupyter={"outputs_hidden": false}

# + jupyter={"outputs_hidden": false}

# + jupyter={"outputs_hidden": false}
# -

# ---
# ### Due Wed Mar 03 - 1 pm
# - `File -> Download as -> HTML (.html)`
# - `upload your .html file to the class Canvas page`

# + jupyter={"outputs_hidden": false}
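`sympy` can produce the Taylor terms for you. A quick sketch of the pattern on a simpler function (`exp`, not the assignment's `f(x)`, so the exercise is left to the reader):

```python
import sympy as sp

x = sp.symbols('x')
taylor = sp.series(sp.exp(x), x, 0, 4)   # expand around x = 0, up to (not including) x**4
poly = taylor.removeO()                  # drop the O(x**4) remainder term
f_num = sp.lambdify(x, poly, 'numpy')    # numeric version, e.g. for plotting with numpy
print(taylor)
```

`sp.series(expr, x, x0, n)` returns the expansion with a big-O term; `removeO()` strips it so the polynomial can be lambdified and plotted.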
HW_Sympy.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/AuleVala/cap-comp215/blob/main/Dexter_Hine_COMP215_Lab2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="nbRPZPKZU5Pk" # COMP 215 - LAB 2 # ---------------- # #### Name: <NAME> # #### Date: 18/1/2022 # This lab exercise is mostly a review of strings, tuples, lists, dictionaries, and functions. # We will also see how "list comprehension" provides a compact form for "list accumulator" algorithms. # # As usual, the first code cell simply imports all the modules we'll be using... # + pycharm={"name": "#%%\n"} id="DCwJEzl1NN6z" import json, requests import matplotlib.pyplot as plt from pprint import pprint # + [markdown] pycharm={"name": "#%% md\n"} id="Z3BscYh_NN60" # We'll answer some questions about movies and TV shows with the IMDb database: https://www.imdb.com/ # > using the IMDb API: https://imdb-api.com/api # # You can register for your own API key, or simply use the one provided below. 
# # Here's an example query: # * search for TV Series with title == "Lexx" # + id="JDGbYTySU3BP" outputId="4c25cb20-2598-462e-b3a1-e720704e127a" colab={"base_uri": "https://localhost:8080/"} API_KEY = '<KEY>' title = 'lexx' url = "https://imdb-api.com/en/API/SearchTitle/{key}/{title}".format(key=API_KEY, title=title) response = requests.request("GET", url, headers={}, data={}) data = json.loads(response.text) # recall json.loads for lab 1 results = data['results'] pprint(results) # + [markdown] pycharm={"name": "#%% md\n"} id="s4FjimDQNN62" # Next we extract the item we want from the data set by applying a "filter": # + pycharm={"name": "#%%\n"} id="GlwDPCgfNN62" colab={"base_uri": "https://localhost:8080/"} outputId="fcca51a4-360d-4fa5-dc18-35841bd3c0c3" items = [item for item in results if item['title']=='Lexx' and "TV" in item['description']] assert len(items) == 1 lexx = items[0] pprint(lexx) # + [markdown] pycharm={"name": "#%% md\n"} id="A_Tz1ATJNN62" # ## Exercise 1 # # In the code cell below, re-write the "list comprehension" above as a loop so you understand how it works. # Notice how the "conditional list comprehension" is a compact way to "filter" items of interest from a large data set. # # + pycharm={"name": "#%%\n"} id="dAFL0HqrNN63" # Your code here def find_IMDB(title,description): items = [] for item in results: if item['title'] == title and description in item['description']: items.append(item) assert len(items) == 1 return items[0] lexx = find_IMDB('Lexx','TV') # + [markdown] id="DNRs7ynOYwYk" # Notice that the `lexx` dictionary contains an `id` field that uniquely identifies this record in the database. 
# # We can use the `id` to fetch other information about the TV series, for example,
# * get names of all actors in the TV Series Lexx
#

# + colab={"base_uri": "https://localhost:8080/"} id="tiyXTDfnZAd0" outputId="3c7173ba-ab85-42f9-fb90-653ca8b7de70"
url = "https://imdb-api.com/en/API/FullCast/{key}/{id}".format(key=API_KEY, id=lexx['id'])

response = requests.request("GET", url, headers={}, data={})
data = json.loads(response.text)
actors = data['actors']
pprint(actors[:3])  # just the first few - recall the slice operator (it's a long list!)

# + [markdown] id="iOZspDBVbBns"
# Notice that the `asCharacter` field contains a number of different pieces of data as a single string, including the character name.
# This kind of "free-form" text data is notoriously challenging to parse...
#
# ## Exercise 2
#
# In the code cell below, write a python function that takes a string input (the text from the `asCharacter` field)
# and returns the number of episodes, if available, or None.
#
# Hints:
# * notice this is a numeric value followed by the word "episodes"
# * recall str.split() and str.isdigit() and other string built-ins.
#
# Add unit tests to cover as many cases from the `actors` data set above as you can.
#

# + pycharm={"name": "#%%\n"} id="yo41zAMHNN65"
# your code here
def actor_in_show(name):
    # return the asCharacter string for the named actor, or False if they are not in the cast
    for actor in actors:
        if name in actor['name']:
            return actor['asCharacter']
    return False

def how_many_episodes(name):
    # return the episode count parsed from the asCharacter text, or None if unavailable
    as_character = actor_in_show(name)
    if as_character == False:
        return None
    episodes = ""
    # collect the digits that appear before the word "episode"
    for character in as_character.split("episode")[0]:
        if character in "0123456789":
            episodes = episodes + character
    return int(episodes) if episodes else None

assert how_many_episodes("<NAME>") == 1
This one may be even a little harder!
#
# Hints:
# * notice the character name is usually followed by a forward slash, `/`
# * don't worry if your algorithm does not perfectly parse every character's name --
# it may not really be possible to correctly handle all cases because the field format does not follow consistent rules
#
# Add unit tests to cover as many cases from the `actors` data set above as you can.
#

# + pycharm={"name": "#%%\n"} id="Q4KWNep9NN65" colab={"base_uri": "https://localhost:8080/"} outputId="f806ef4e-1d9a-4509-e5a8-e5f299fc48e0"
# Your code here
def who_did_they_play(name):
    if actor_in_show(name) == False:
        return "Actor Not In Show"
    personage = actor_in_show(name)
    # all of the character names precede one of: '/ ...', '(uncredited)', '-', or 'episode'
    if '/ ...' in personage:
        return personage.split('/ ...')[0]
    elif '(uncredited)' in personage:
        return personage.split(" (uncredited)")[0]
    elif 'Self -' in personage:
        return personage.split(" -")[0]
    else:
        personage = personage.split(" episode")[0]
        # the episode numbers are preceded by a space, so split on spaces
        # and drop the last token (the episode count)
        personage = personage.split(" ")
        x = ""
        for i in range(len(personage) - 1):
            x = x + " " + personage[i]
        return x

assert who_did_they_play("<NAME>") == "Fifi "

for n in range(len(actors)):
    print(who_did_they_play(actors[n]['name']))
# + id="Ds1YevErlzCe" pycharm={"name": "#%%\n"} colab={"base_uri": "https://localhost:8080/"} outputId="cfc377c8-9e19-4da3-c34f-39eccc5461e1"
# your code here

# list of (actor name, character description) 2-tuples, built with a comprehension
role = [(actor['name'], who_did_they_play(actor['name'])) for actor in actors]
pprint(role)

# list of dictionaries with 'actor' and 'character' keys, built with a comprehension
role2 = [{'actor': actor['name'], 'character': who_did_they_play(actor['name'])}
         for actor in actors]
pprint(role2)
Dexter_Hine_COMP215_Lab2.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Importing Necessary Packages

import numpy as np
import pandas as pd

# Loading the dataset
df = pd.read_csv("Paste the path of times.csv here for reading the dataset")

# DataFrame.head() returns the first five rows of the dataset
df.head()

# Check each column's datatype, for further calculations
datatype = df.dtypes
print(datatype)

# Convert Time.Spotted from a String object to a DateTime object

# +
# Enter code below (note: the dtype change should be reflected in the initial dataset)
# -
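The conversion the last cell asks for is typically one call to `pd.to_datetime`. A sketch on a toy frame (the column name `Time.Spotted` comes from the exercise; the sample timestamps are made up):

```python
import pandas as pd

# toy stand-in for the times.csv data
toy = pd.DataFrame({'Time.Spotted': ['2021-01-05 14:30:00', '2021-02-11 09:15:00']})

# assign back so the dtype change is reflected in the DataFrame itself
toy['Time.Spotted'] = pd.to_datetime(toy['Time.Spotted'])
print(toy.dtypes)
```

After the conversion, datetime accessors such as `.dt.hour` become available on the column.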
src/zodiac.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
def generate(numRows):
    # return the first numRows rows of Pascal's triangle
    if numRows <= 0:
        return []
    rows = [[1]]
    for i in range(1, numRows):
        prev = rows[i - 1]
        row = [1]
        # each interior entry is the sum of the two entries above it
        for j in range(len(prev) - 1):
            row.append(prev[j] + prev[j + 1])
        row.append(1)
        rows.append(row)
    return rows

generate(1)
# -
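Each row of Pascal's triangle is the previous row's pairwise neighbour sums bracketed by two 1s; with `zip` the inner loop collapses to one line. A compact variant (the helper name `next_row` is ours):

```python
def next_row(row):
    # pairwise sums of neighbours, bracketed by the two 1s
    return [1] + [a + b for a, b in zip(row, row[1:])] + [1]

row = [1]
for _ in range(4):
    row = next_row(row)
print(row)  # [1, 4, 6, 4, 1]
```

`zip(row, row[1:])` pairs each entry with its right neighbour, which is exactly the `prev[j] + prev[j + 1]` step above.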
Anjani/Leetcode/Array/Pascal's Triangle.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_style('whitegrid') # # Functions # function to get single breath (id must exist...there are some that are missing) # also once removed u_out=1, lengths are not always 30 (some 28, etc) def get_breath(df,my_id): return df[df.breath_id == my_id] def plot_breath(df,my_id): id1 = get_breath(df,my_id) r = id1.R.iloc[0] c = id1.C.iloc[0] plt.figure(figsize=(8,5)) plt.plot(id1.pressure,label='pressure') plt.plot(id1.u_in,label='u_in') plt.title(f'Pressure and u_in for Breath id={my_id}, R={r}, C={c}') plt.legend(); # # Load Files # + # %%time # Load files train = pd.read_csv(r'F:\Sync\Work\Kaggle Competitions\Ventilator Pressure Prediction\Data\train.csv') test = pd.read_csv(r'F:\Sync\Work\Kaggle Competitions\Ventilator Pressure Prediction\Data\test.csv') y = train['pressure'] # - print(f'Train memory usage: {train.memory_usage().sum()}') print(f'Test memory usage: {test.memory_usage().sum()}') # # Memory Reduction # + # down convert columns to save memory... # probably do float64's too # train train['id'] = train['id'].astype(np.int32) train['breath_id'] = train['breath_id'].astype(np.int32) train['R'] = train['R'].astype(np.int8) #or OHC? train['C'] = train['C'].astype(np.int8) train['u_out'] = train['u_out'].astype(np.int8) train['u_in'] = train['u_in'].astype(np.float32) train['time_step'] = train['time_step'].astype(np.float32) #test test['id'] = test['id'].astype(np.int32) test['breath_id'] = test['breath_id'].astype(np.int32) test['R'] = test['R'].astype(np.int8) #or OHC? 
test['C'] = test['C'].astype(np.int8) test['u_out'] = test['u_out'].astype(np.int8) test['u_in'] = test['u_in'].astype(np.float32) test['time_step'] = test['time_step'].astype(np.float32) for col in test.columns: print(test[col].dtype) # - print(f'Train memory usage: {train.memory_usage().sum()}') print(f'Test memory usage: {test.memory_usage().sum()}') # # Split data into inhalitory and exhalitory phase (only scored on inhale) # + train_in = train[train.u_out == 0] test_in = test[test.u_out == 0] y_in = train_in['pressure'] train_out = train[train.u_out == 1] test_out = test[test.u_out == 1] # - train_in.head() # # Add Features # # 1. Apply lag shift (tested shift =1 ) # Shift = 2 performs better (2.37 vs. 2.0x) # 2. Add differentials for dt and du_in # 3. Add integral column for d_uin # # # + # apply lag shift in training set u_in_lag = train_in.u_in.shift(1,fill_value=0) train_in['u_in_lag'] = u_in_lag train_in.drop(['u_in'],axis=1,inplace=True) # and for test set u_in_lag = test_in.u_in.shift(1,fill_value=0) test_in['u_in_lag'] = u_in_lag test_in.drop(['u_in'],axis=1,inplace=True) # - # ### Add integral column # make an index of breaths train_breath_idx = train_in.breath_id.unique() test_breath_idx = test_in.breath_id.unique() len(test_breath_idx) # + # function that creates the derivative and integral features # can send either test or train to... 
def make_deriv_int_features(df, prefix='train'):
    dt_vals = np.zeros(len(df))
    u_in_integ = np.zeros(len(df))
    u_in_slope = np.zeros(len(df))
    start_idx = 0
    end_idx = 0

    # make an index of breaths
    breath_idx = df.breath_id.unique()

    for breath in breath_idx:
        num_vals = len(df.breath_id[df.breath_id == breath])
        end_idx = start_idx + num_vals
        u_in_vals = df.u_in[df.breath_id == breath]
        time_vals = df.time_step[df.breath_id == breath]

        # create numpy arrays for derivative and integral values
        dt = np.zeros(num_vals)
        du_in = np.zeros(num_vals)
        integ = np.zeros(num_vals)

        # fill in first value of arrays (post)
        dt[0] = time_vals.iloc[1] - time_vals.iloc[0]
        du_in[0] = 0  # extrapolate?
        integ[0] = 0  # start with avg starting value of pressure?

        # loop over samples within the breath
        for i in range(num_vals - 1):
            dt[i + 1] = time_vals.iloc[i + 1] - time_vals.iloc[i]
            du_in[i + 1] = u_in_vals.iloc[i + 1] - u_in_vals.iloc[i]
            integ[i + 1] = integ[i] + u_in_vals.iloc[i] * dt[i]

        dt_vals[start_idx:end_idx] = dt
        u_in_slope[start_idx:end_idx] = du_in
        u_in_integ[start_idx:end_idx] = integ
        start_idx = end_idx

    df['dt'] = dt_vals
    df['du_in'] = u_in_slope
    df['u_in_integ'] = u_in_integ
# -

# build the features for both the training set and the test set
make_deriv_int_features(train_in, prefix='train')
make_deriv_int_features(test_in, prefix='test')

test_in

test_breath_idx

b1 = get_breath(train_in, 1)
plt.plot(b1.u_in_integ, label='u_in integ')
plt.plot(b1.du_in, label='u_in slope')
plt.plot(b1.u_in, label='u_in')
#plt.plot(b1.pressure, label='pressure')
plt.legend()

test_in.head()

train_in.to_pickle('train_in_w_feat')
test_in.to_pickle('test_in_w_feat')

# dt seems one index point off
#plt.plot(train_in.u_in_integ[0:30])
#plt.plot(train_in.du_in[0:30])

# # Model

from sklearn.metrics import mean_absolute_error
#confusion_matrix, classification_report # + # Split data - after all analysis is done from sklearn.model_selection import train_test_split train_in.drop(columns = ['pressure','id'], inplace = True) #test = test.drop(columns = 'id', inplace = True) X_train, X_valid, y_train, y_valid = train_test_split(train_in, y_in, train_size=0.8, test_size=0.2, random_state=12) X_test_in = test_in.drop(columns=['id'],inplace=False) # - # Logistic Regression - not working...yet. X_test_in # %%time from catboost import CatBoostRegressor # loop for manual type cv #preds = [] for i in np.arange(1,2): # X_train, X_valid, y_train, y_valid = train_test_split(train, y, train_size=0.8, test_size=0.2, # random_state=i) model_cat = CatBoostRegressor(loss_function="MAE", eval_metric="MAE", #task_type="GPU", learning_rate=.6, iterations=8000, l2_leaf_reg=50, random_seed=12, od_type="Iter", depth=5, #early_stopping_rounds=6500, border_count=64, verbose=False ) model_cat.fit(X_train,y_train) pred_cat = model_cat.predict(X_valid) score_cat = mean_absolute_error(y_valid,pred_cat) #print(f'iters={i}, lr={j}, CatBoost MAE Score: {score_cat}') print(f'CatBoost MAE Score: {score_cat}') #preds.append(model_cat.predict_proba(X_test)[:,1]) # 400, .6 = 3.976 # ### Final Model # create outpreds = average out value out_preds = np.ones(len(test_out)) i = list(test_out.id) out_preds_s = pd.Series(out_preds,index = i) out_preds_s pred_final = model_cat.predict(X_test_in) # add indexs to recombine with out preds pred_final_s = pd.Series(pred_final,index=list(test_in.id)) pred_final_s.head() both = pred_final_s.append(out_preds_s).sort_index() both.values output = pd.DataFrame({'id': test.id, 'pressure': both.values}) output.to_csv('submission.csv', index=False) print("Submission saved!")
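The per-breath Python loop in `make_deriv_int_features` is slow on millions of rows; the same kind of `dt`/`du_in`/integral features can usually be produced with grouped, vectorized pandas operations. A sketch on a toy frame (column names match the competition data, the values are made up; the per-breath boundary conventions may differ slightly from the loop version, so verify before swapping it in):

```python
import pandas as pd

toy = pd.DataFrame({
    'breath_id': [1, 1, 1, 2, 2, 2],
    'time_step': [0.0, 0.1, 0.2, 0.0, 0.1, 0.2],
    'u_in':      [0.0, 2.0, 4.0, 1.0, 1.0, 1.0],
})

g = toy.groupby('breath_id')
# per-breath time and u_in differences (first sample of each breath gets 0)
toy['dt'] = g['time_step'].diff().fillna(0.0)
toy['du_in'] = g['u_in'].diff().fillna(0.0)
# left-endpoint rectangle rule: running integral of u_in within each breath
toy['u_in_integ'] = (g['u_in'].shift(1, fill_value=0.0) * toy['dt']) \
                        .groupby(toy['breath_id']).cumsum()
print(toy)
```

`groupby(...).diff()`, `shift(fill_value=...)`, and `cumsum()` all respect breath boundaries, so no explicit loop over `breath_id` is needed.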
notebooks/Ventilator-integral column added.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_pytorch_p36 # language: python # name: conda_pytorch_p36 # --- # # Predicting Boston Housing Prices # # ## Using XGBoost in SageMaker (Hyperparameter Tuning) # # _Deep Learning Nanodegree Program | Deployment_ # # --- # # As an introduction to using SageMaker's High Level Python API for hyperparameter tuning, we will look again at the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass. # # The documentation for the high level API can be found on the [ReadTheDocs page](http://sagemaker.readthedocs.io/en/latest/) # # ## General Outline # # Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons. # # 1. Download or otherwise retrieve the data. # 2. Process / Prepare the data. # 3. Upload the processed data to S3. # 4. Train a chosen model. # 5. Test the trained model (typically using a batch transform job). # 6. Deploy the trained model. # 7. Use the deployed model. # # In this notebook we will only be covering steps 1 through 5 as we are only interested in creating a tuned model and testing its performance. # ## Step 0: Setting up the notebook # # We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need. 
# + # %matplotlib inline import os import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_boston import sklearn.model_selection # - # In addition to the modules above, we need to import the various bits of SageMaker that we will be using. # + import sagemaker from sagemaker import get_execution_role from sagemaker.amazon.amazon_estimator import get_image_uri from sagemaker.predictor import csv_serializer # This is an object that represents the SageMaker session that we are currently operating in. This # object contains some useful information that we will need to access later such as our region. session = sagemaker.Session() # This is an object that represents the IAM role that we are currently assigned. When we construct # and launch the training job later we will need to tell it what IAM role it should have. Since our # use case is relatively simple we will simply assign the training job the role we currently have. role = get_execution_role() # - # ## Step 1: Downloading the data # # Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward. boston = load_boston() # ## Step 2: Preparing and splitting the data # # Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets. # + # First we package up the input data and the target variable (the median value) as pandas dataframes. This # will make saving the data to a file a little easier later on. X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names) Y_bos_pd = pd.DataFrame(boston.target) # We split the dataset into 2/3 training and 1/3 testing sets. X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33) # Then we split the training set further into 2/3 training and 1/3 validation sets. 
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33) # - # ## Step 3: Uploading the data files to S3 # # When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. # # ### Save the data locally # # First we need to create the test, train and validation csv files which we will then upload to S3. # This is our local data directory. We need to make sure that it exists. data_dir = '../data/boston' if not os.path.exists(data_dir): os.makedirs(data_dir) # + # We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header # information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and # validation data, it is assumed that the first entry in each row is the target variable. X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # - # ### Upload to S3 # # Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project. 
# +
prefix = 'boston-xgboost-tuning-HL'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
# -

# ## Step 4: Train the XGBoost model
#
# Now that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. Unlike in the previous notebooks, instead of training a single model, we will use SageMaker's hyperparameter tuning functionality to train multiple models and use the one that performs the best on the validation set.
#
# To begin with, as in the previous approaches, we will need to construct an estimator object.

# +
# As stated above, we use this utility method to construct the image name for the training container.
container = get_image_uri(session.boto_region_name, 'xgboost')

# Now that we know which container to use, we can construct the estimator object.
xgb = sagemaker.estimator.Estimator(container, # The name of the training container
                                    role,      # The IAM role to use (our current role in this case)
                                    train_instance_count=1, # The number of instances to use for training
                                    train_instance_type='ml.m4.xlarge', # The type of instance to use for training
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                                                        # Where to save the output (the model artifacts)
                                    sagemaker_session=session) # The current SageMaker session
# -

# Before beginning the hyperparameter tuning, we should make sure to set any model specific hyperparameters that we wish to have default values. There are quite a few that can be set when using the XGBoost algorithm; below are just a few of them.
If you would like to change the hyperparameters below or modify additional ones you can find additional information on the [XGBoost hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html).

xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        objective='reg:linear',
                        early_stopping_rounds=10,
                        num_round=200)

# Now that we have our estimator object completely set up, it is time to create the hyperparameter tuner. To do this we need to construct a new object which contains each of the parameters we want SageMaker to tune. In this case, we wish to find the best values for the `max_depth`, `eta`, `min_child_weight`, `subsample`, and `gamma` parameters. Note that for each parameter that we want SageMaker to tune we need to specify both the *type* of the parameter and the *range* of values that parameter may take on.
#
# In addition, we specify the *number* of models to construct (`max_jobs`) and the number of those that can be trained in parallel (`max_parallel_jobs`). In the cell below we have chosen to train `20` models, of which we ask that SageMaker train `4` at a time in parallel. Note that this results in a total of `20` training jobs being executed which can take some time, in this case almost a half hour. With more complicated models this can take even longer so be aware!

# +
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner

xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
                                               objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
                                               objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 20, # The total number of models to train max_parallel_jobs = 4, # The number of models to train in parallel hyperparameter_ranges = { 'max_depth': IntegerParameter(3, 12), 'eta' : ContinuousParameter(0.05, 0.5), 'min_child_weight': IntegerParameter(2, 8), 'subsample': ContinuousParameter(0.5, 0.9), 'gamma': ContinuousParameter(0, 10), }) # - # Now that we have our hyperparameter tuner object completely set up, it is time to train it. To do this we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method. # + # This is a wrapper around the location of our train and validation data, to make sure that SageMaker # knows our data is in csv format. s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation}) # - # As in many of the examples we have seen so far, the `fit()` method takes care of setting up and fitting a number of different models, each with different hyperparameters. If we wish to wait for this process to finish, we can call the `wait()` method. xgb_hyperparameter_tuner.wait() # Once the hyperparameter tuner has finished, we can retrieve information about the best performing model. xgb_hyperparameter_tuner.best_training_job() # In addition, since we'd like to set up a batch transform job to test the best model, we can construct a new estimator object from the results of the best training job. The `xgb_attached` object below can now be used as though we constructed an estimator with the best performing hyperparameters and then fit it to our training data. xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job()) # ## Step 5: Test the model # # Now that we have our best performing model, we can test it. To do this we will use the batch transform functionality.
To start with, we need to build a transformer object from our fitted model. xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') # Next we ask SageMaker to begin a batch transform job using our trained model and applying it to the test data we previously stored in S3. We need to make sure to provide SageMaker with the type of data that we are providing to our model, in our case `text/csv`, so that it knows how to serialize our data. In addition, we need to make sure to let SageMaker know how to split our data up into chunks if the entire data set happens to be too large to send to our model all at once. # # Note that when we ask SageMaker to do this it will execute the batch transform job in the background. Since we need to wait for the results of this job before we can continue, we use the `wait()` method. An added benefit of this is that we get some output from our batch transform job which lets us know if anything went wrong. xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() # Now that the batch transform job has finished, the resulting output is stored on S3. Since we wish to analyze the output inside of our notebook we can use a bit of notebook magic to copy the output file from its S3 location and save it locally. # !aws s3 cp --recursive $xgb_transformer.output_path $data_dir # To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model were completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) plt.scatter(Y_test, Y_pred) plt.xlabel("Median Price") plt.ylabel("Predicted Price") plt.title("Median Price vs Predicted Price") # ## Optional: Clean up # # The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. # + # First we will remove all of the files contained in the data_dir directory # !rm $data_dir/* # And then we delete the directory itself # !rmdir $data_dir # -
Tutorials/Boston Housing - XGBoost (Hyperparameter Tuning) - High Level.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:aparent] # language: python # name: conda-env-aparent-py # --- # + import pandas as pd import scipy import numpy as np import scipy.sparse as sp import scipy.io as spio import operator import matplotlib.pyplot as plt import isolearn.io as isoio def sort_and_balance_library(library_dict, included_libs=None, count_filter_dict=None) : #Filter on read count print('Library size before count filtering = ' + str(len(library_dict['data']))) if count_filter_dict is not None : included_index = [] for index, row in library_dict['data'].iterrows() : if row['library_index'] not in count_filter_dict : included_index.append(index) elif row['total_count'] >= count_filter_dict[row['library_index']] : included_index.append(index) library_dict['data'] = library_dict['data'].iloc[included_index].reset_index(drop=True) library_dict['cuts'] = library_dict['cuts'][included_index] print('Library size after count filtering = ' + str(len(library_dict['data']))) #Sort and balance library dataframe and cut matrix L_included = included_libs arranged_index_len = int(np.sum([len(np.nonzero(library_dict['data']['library_index'] == lib)[0]) for lib in L_included])) min_join_len = int(np.min([len(np.nonzero(library_dict['data']['library_index'] == lib)[0]) for lib in L_included])) arranged_index = np.zeros(arranged_index_len, dtype=int) arranged_remainder_index = 0 arranged_join_index = arranged_index_len - len(L_included) * min_join_len for lib_i in range(0, len(L_included)) : lib = L_included[lib_i] print('Arranging lib ' + str(lib)) #1. Get indexes of each Library lib_index = np.nonzero(library_dict['data']['library_index'] == lib)[0] #2.
Sort indexes of each library by count lib_count = library_dict['data'].iloc[lib_index]['total_count'] sort_index_lib = np.argsort(lib_count) lib_index = lib_index[sort_index_lib] #3. Shuffle indexes of each library modulo 2 even_index_lib = np.nonzero(np.arange(len(lib_index)) % 2 == 0)[0] odd_index_lib = np.nonzero(np.arange(len(lib_index)) % 2 == 1)[0] lib_index_even = lib_index[even_index_lib] lib_index_odd = lib_index[odd_index_lib] lib_index = np.concatenate([lib_index_even, lib_index_odd]) #4. Join modulo 2 i = 0 for j in range(len(lib_index) - min_join_len, len(lib_index)) : arranged_index[arranged_join_index + i * len(L_included) + lib_i] = lib_index[j] i += 1 #5. Append remainder for j in range(0, len(lib_index) - min_join_len) : arranged_index[arranged_remainder_index] = lib_index[j] arranged_remainder_index += 1 library_dict['data'] = library_dict['data'].iloc[arranged_index].reset_index(drop=True) library_dict['cuts'] = library_dict['cuts'][arranged_index] print('Done sorting library.') return library_dict def plot_cut_2mers(datafr, cut_mat) : cut_mer2 = {} seqs = list(datafr['seq'].values) seqs = np.array(seqs, dtype=object) total_count = np.array(datafr['total_count']) cx = sp.coo_matrix(cut_mat) for i,j,v in zip(cx.row, cx.col, cx.data) : seq = seqs[i] #mer2 = seq[j-1:j+1] mer2 = seq[j:j+2] if mer2 not in cut_mer2 : cut_mer2[mer2] = 0 cut_mer2[mer2] += v cut_mer2_sorted = sorted(cut_mer2.items(), key=operator.itemgetter(1)) mer2_list = [] mer2_vals = [] for i in range(0, len(cut_mer2_sorted)) : mer2_list.append(cut_mer2_sorted[i][0]) mer2_vals.append(cut_mer2_sorted[i][1]) f = plt.figure(figsize=(6, 4)) plt.bar(mer2_list, mer2_vals, color='black') plt.title('Proximal cleavage dinuc.', fontsize=14) plt.xlabel('Dinucleotide', fontsize=14) plt.ylabel('Read count', fontsize=14) plt.xticks(fontsize=14, rotation=45) plt.yticks(fontsize=14) plt.tight_layout() plt.show() # + #Read legacy library data frame and cut matrix iso_df =
pd.read_csv('processed_data_legacy/apa_general3_antimisprime_orig.csv', sep=',') cut_df = pd.read_csv('processed_data_legacy/apa_general_cuts_antimisprime_orig.csv', sep=',') cut_mat = spio.loadmat('processed_data_legacy/apa_general_cuts_antimisprime_orig_cutdistribution.mat')['cuts'] # + iso_df = iso_df.drop(columns=['total_count_vs_distal', 'proximal_avgcut', 'proximal_stdcut']) iso_df = iso_df.rename(columns={'total_count_vs_all' : 'total_count'}) iso_df = iso_df.copy().set_index('seq') cut_df['row_index_cuts'] = np.arange(len(cut_df), dtype=int) cut_df = cut_df[['seq', 'total_count', 'row_index_cuts']].copy().set_index('seq') # + joined_df = iso_df.join(cut_df, how='inner', rsuffix='_cuts') joined_cuts = cut_mat[np.ravel(joined_df['row_index_cuts'].values), :] joined_df = joined_df.drop(columns=['row_index_cuts']).copy().reset_index() joined_df = joined_df.rename(columns={'library' : 'library_index', 'library_name' : 'library'}) print(len(joined_df)) print(joined_cuts.shape) print(joined_df.head()) # + #Sort library data library_dict = sort_and_balance_library({'data' : joined_df, 'cuts' : joined_cuts}, included_libs=[2, 5, 8, 11, 20, 22, 30, 31, 32, 33, 34, 35]) # + print('Dataframe length = ' + str(len(library_dict['data']))) print('Cut matrix size = ' + str(library_dict['cuts'].shape)) # + #Check sublibrary counts in the full library libs = library_dict['data']['library'].unique() total_size = len(library_dict['data']) for lib in libs : lib_size = len(np.nonzero((library_dict['data']['library'] == lib))[0]) print('len(' + lib + ') = ' + str(lib_size)) # + #Dump random MPRA dataframe and cut matrix isoio.dump({'plasmid_df' : library_dict['data'], 'plasmid_cuts' : library_dict['cuts']}, 'processed_data_lifted/apa_plasmid_data_legacy') # + #Plot combined library cut dinucleotides plot_cut_2mers(library_dict['data'], library_dict['cuts']) # + #Plot overlayed cut profiles f = plt.figure(figsize=(12, 8)) libs =
library_dict['data']['library'].unique() ls = [] for lib in libs : lib_index = np.nonzero((library_dict['data']['library'] == lib))[0] lib_cut_probs = np.array(library_dict['cuts'][lib_index].todense()) lib_cuts = lib_cut_probs * np.ravel(library_dict['data']['total_count'].values)[lib_index].reshape(-1, 1) proximal_profile = np.ravel(np.sum(lib_cuts, axis=0))[:-1] proximal_profile /= np.sum(proximal_profile) la, = plt.plot(np.arange(len(proximal_profile)), proximal_profile, linewidth=2, label=lib) ls.append(la) #Proximal plt.axvline(x=50, linewidth=2, c='black', linestyle='--') plt.axvline(x=50 + 6, linewidth=2, c='black', linestyle='--') plt.axvline(x=50 + 21, linewidth=2, c='orange', linestyle='--') plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel('Position', fontsize=16) plt.ylabel('Read count', fontsize=16) plt.title('Proximal site', fontsize=16) plt.tight_layout() plt.legend(handles = ls, fontsize=14) plt.show() # + #Check sublibrary counts in top readcount portion of library from_fraction = 0.075 libs = library_dict['data']['library'].unique() total_size = len(library_dict['data']) for lib in libs : lib_slice = library_dict['data'].iloc[-int(from_fraction * total_size):] lib_size = len(np.nonzero((lib_slice['library'] == lib))[0]) print('len(' + lib + ') = ' + str(lib_size)) # + #Plot sublibrary cumulative proportions library_fractions_from_top = np.linspace(0, 1, num=41)[1:] libs = library_dict['data']['library'].unique() cum_fraction = np.zeros((len(library_fractions_from_top), len(libs))) total_lib_size = float(len(library_dict['data'])) frac_i = 0 for library_fraction in library_fractions_from_top : lib_i = 0 for lib in libs : lib_slice = library_dict['data'].iloc[-int(library_fraction * total_lib_size):] lib_size = len(np.nonzero((lib_slice['library'] == lib))[0]) curr_frac = float(lib_size) / float(len(lib_slice)) cum_fraction[frac_i, lib_i] = curr_frac lib_i += 1 frac_i += 1 fig = plt.subplots(figsize=(12, 8)) 
plt.stackplot(library_fractions_from_top, np.fliplr(cum_fraction.T), labels=libs) plt.legend(loc='upper left', fontsize=14) plt.xticks(library_fractions_from_top, np.flip(np.round(1.0 - library_fractions_from_top, 2), axis=0), fontsize=16, rotation=45) plt.yticks(np.linspace(0, 1, num=10 + 1), np.round(np.linspace(0, 1, num=10 + 1), 2), fontsize=16) plt.xlim(np.min(library_fractions_from_top), np.max(library_fractions_from_top)) plt.ylim(0, 1) plt.xlabel('Percentile of data (low to high read count)', fontsize=16) plt.ylabel('Library proportion of Percentile to 100%', fontsize=16) plt.title('Cumulative library proportion', fontsize=16) plt.tight_layout() plt.show() # + total_count = np.ravel(library_dict['data']['total_count'].values) lib_frac = np.arange(total_count.shape[0]) / float(total_count.shape[0]) libs = library_dict['data']['library'].unique() fig = plt.figure(figsize = (12, 8)) ls = [] for lib in libs : lib_index = np.nonzero(library_dict['data']['library'] == lib)[0] lib_slice = library_dict['data'].iloc[lib_index] lib_count = np.ravel(lib_slice['total_count'].values) lib_frac = np.arange(len(lib_slice)) / float(len(lib_slice)) lt, = plt.plot(lib_frac, lib_count, linewidth=2, label=lib) ls.append(lt) plt.legend(handles=ls, loc='upper left', fontsize=14) plt.xticks(np.round(np.linspace(0, 1, num=10 + 1), 2), np.round(np.linspace(0, 1, num=10 + 1), 2), fontsize=16, rotation=45) plt.yticks(fontsize=16) plt.xlim(0, 1) plt.ylim(0, 500) plt.xlabel('Percentile of data', fontsize=16) plt.ylabel('Read count', fontsize=16) plt.title('Individual Sublibrary Read count distribution', fontsize=16) plt.tight_layout() plt.show() # + total_count = np.ravel(library_dict['data']['total_count'].values) total_lib_frac = np.arange(total_count.shape[0]) / float(total_count.shape[0]) libs = library_dict['data']['library'].unique() fig = plt.figure(figsize = (12, 8)) ls = [] for lib in libs : lib_index = np.nonzero(library_dict['data']['library'] == lib)[0] lib_slice = 
library_dict['data'].iloc[lib_index] lib_count = np.ravel(lib_slice['total_count'].values) lib_frac = total_lib_frac[lib_index] lt, = plt.plot(lib_frac, lib_count, linewidth=2, label=lib) ls.append(lt) plt.legend(handles=ls, loc='upper left', fontsize=14) plt.xticks(np.round(np.linspace(0, 1, num=10 + 1), 2), np.round(np.linspace(0, 1, num=10 + 1), 2), fontsize=16, rotation=45) plt.yticks(fontsize=16) plt.xlim(0.85, 1) plt.ylim(0, 500) plt.xlabel('Percentile of data', fontsize=16) plt.ylabel('Read count', fontsize=16) plt.title('Ordered Library Read count distribution', fontsize=16) plt.tight_layout() plt.show() # -
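The sparse-matrix walk inside `plot_cut_2mers` above relies on COO format exposing aligned `row`, `col`, and `data` arrays. A minimal self-contained sketch of that pattern, on a toy matrix rather than the MPRA cut data:

```python
import numpy as np
import scipy.sparse as sp

# Toy 2x3 cut matrix; COO format stores aligned row/col/data arrays
mat = sp.coo_matrix(np.array([[0, 2, 0],
                              [1, 0, 3]]))

# Walk the nonzero (row, col, value) triplets, as plot_cut_2mers does
triplets = [(int(i), int(j), int(v)) for i, j, v in zip(mat.row, mat.col, mat.data)]
print(triplets)  # [(0, 1, 2), (1, 0, 1), (1, 2, 3)]
```

Only the nonzero cells are visited, which is what makes accumulating the dinucleotide counts cheap even for a very wide cut matrix.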
data/random_mpra_legacy/combined_library/aparent_random_mpra_lift_legacy.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sentiment Analysis from textblob import TextBlob texts=["The movie was good.", "The movie was not good.", "I really think this product sucks.", "Really great product.", "I don't like this product"] for t in texts: print(t, "==>", TextBlob(t).sentiment.polarity) text=TextBlob("""The movie was good. The movie was not good. I really think this product sucks. Really great product. I don't like this product""") for s in text.sentences: print("=>", s) for s in text.sentences: print(s, "==> ", s.sentiment.polarity) # # Creating our own classifier # Let's use [Sentiment Polarity Dataset 2.0](https://www.cs.cornell.edu/people/pabo/movie-review-data/), included in the `NLTK` library.<br> # It consists of 1000 positive and 1000 negative processed reviews. Introduced in Pang/Lee ACL 2004. Released June 2004. # + import nltk from nltk.corpus import stopwords from collections import defaultdict from nltk import word_tokenize import string from nltk.corpus import movie_reviews as mr print("The corpus contains %d reviews" % len(mr.fileids())) for i in mr.fileids()[995:1005]: # Reviews 995 to 1005 print(i, "==>", i.split('/')[0]) # - # Let's see the content of one of these reviews print(mr.raw(mr.fileids()[1])) # Calculating the frequency of each word in the document ... from nltk.probability import FreqDist FreqDist(mr.raw(mr.fileids()[1]).split()) # Let's take a look at the most frequent words in the corpus wordfreq = FreqDist() for i in mr.fileids(): wordfreq += FreqDist(w.lower() for w in mr.raw(i).split()) # The previous code has flaws because split() is a very basic way of finding the words. Let's use `word_tokenize()` or `mr.words()` instead...
wordfreq = FreqDist() for i in mr.fileids(): wordfreq += FreqDist(w.lower() for w in mr.words(i)) print(wordfreq) print(wordfreq.most_common(10)) # stop words and punctuation are causing trouble, let's remove them... stopw = stopwords.words('english') wordfreq = FreqDist() for i in mr.fileids(): wordfreq += FreqDist(w.lower() for w in mr.words(i) if w.lower() not in stopw and w.lower() not in string.punctuation) print(wordfreq) print(wordfreq.most_common(10)) # ## Shuffling # Let's shuffle the documents, otherwise they will remain sorted ["neg", "neg" ... "pos"] import random docnames=mr.fileids() random.shuffle(docnames) # Let's split each document into words ... documents=[] for i in docnames: y = i.split('/')[0] documents.append( ( mr.words(i) , y) ) # Let's take a look at our documents... for docs in documents[:5]: print(docs) # ## Document representation # # Now, let's produce the final document representation, in the form of a Frequency Distribution ... # # First, without stop words and punctuation ... (you could use another technique, such as IDF) stopw = stopwords.words('english') docrep=[] for words,tag in documents: features = FreqDist(w for w in words if w.lower() not in stopw and w.lower() not in string.punctuation) docrep.append( (features, tag) ) # Let's take a look at our documents again... for doc in docrep[:5]: print(doc) # ## NLTK classifier: Naive Bayes # # Defining our training and test sets... # + numtrain = int(len(documents) * 80 / 100) # number of training documents train_set, test_set = docrep[:numtrain], docrep[numtrain:] print(test_set[0]) # + from nltk.classify import NaiveBayesClassifier as nbc classifier = nbc.train(train_set) print("Accuracy:", nltk.classify.accuracy(classifier, test_set)) classifier.show_most_informative_features(5) # -
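The same train/classify cycle can be seen in isolation on a hypothetical miniature training set (toy documents invented here, not the movie-review corpus), using the same `(FreqDist, label)` representation as `docrep`:

```python
from nltk import FreqDist
from nltk.classify import NaiveBayesClassifier as nbc

# Hypothetical toy corpus in the same (FreqDist, label) shape as docrep above
toy_train = [
    (FreqDist("great great fun".split()), "pos"),
    (FreqDist("wonderful acting great plot".split()), "pos"),
    (FreqDist("boring awful plot".split()), "neg"),
    (FreqDist("awful waste boring".split()), "neg"),
]
toy_clf = nbc.train(toy_train)

# Classify new text represented the same way the training documents were
print(toy_clf.classify(FreqDist("great great fun".split())))  # pos
```

With such a tiny, cleanly separable training set the classifier recovers the label of a training document; on real data you hold out a test set, as done above.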
#8 NLP/SentimentAnalysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="UliMe-8qSyC1" executionInfo={"status": "ok", "timestamp": 1633434784527, "user_tz": -420, "elapsed": 859, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgmWib-TGQsrsgMeCL0ATFrkLN8Dk4RucPuSUMPGw=s64", "userId": "04070286001301206388"}} outputId="033e121a-c04f-42f4-ff01-ca5c7019d31a" # !wget --no-check-certificate \ # https://docs.google.com/spreadsheets/d/1NZiHg1709-wU8mdXth2CrXCNT-aQq_faoTUMyPvAHBk/edit?usp=sharing # + [markdown] id="fh2WgApXOnMB" # **VIEW INFORMATION FROM THE DATASET** # + id="1Dwr4SpuDZzj" # Import pandas library to read the dataset import pandas as pd # Import load_iris function from sklearn library in .datasets packages from sklearn.datasets import load_iris # Read Iris.csv file to 'iris' variable with read_csv() function iris = pd.read_csv('Iris.csv') # View information from the dataset on the first five rows with head() function iris.head() # + [markdown] id="NQurDmrWO98Y" # **SPLIT THE DATASET (ATTRIBUTE & LABEL)** # + id="0hct5OlOHcz_" # Delete unnecessary attribute iris.drop('Id', axis=1, inplace=True) # Split the attribute and label from the dataset # X variable as a flowers size (attribute) X = iris[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']] # Y variable as a label from the flowers Y = iris['Species'] # + [markdown] id="UKKVH0vUPLbu" # **TRAINING THE MODEL** # + id="Eyf2VBAkIpjM" # Import DecisionTreeClassifier function from sklearn library in .tree packages from sklearn.tree import DecisionTreeClassifier # Create a Decision Tree model tree_model = DecisionTreeClassifier() # Model training tree_model.fit(X, Y) # + [markdown] id="OW4SvMU_PQ04" # **PREDICTION** # + id="zD7PDHALI8T0" # User input sepal_length = 
float(input('Sepal Length: ')) sepal_width = float(input('Sepal Width: ')) petal_length = float(input('Petal Length: ')) petal_width = float(input('Petal Width: ')) # Model prediction using tree_model.predict(); the result is stored in the flowers variable flowers = tree_model.predict([[sepal_length, sepal_width, petal_length, petal_width]]) # Prediction output using conditional statements (predict returns an array, so take the first element) if flowers[0] == 'Iris-versicolor': print('This is Iris Versicolor Flower') elif flowers[0] == 'Iris-setosa': print('This is Iris Setosa Flower') else: print('This is Iris Virginica Flower')
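Since `input()` makes the cell interactive, here is a non-interactive sketch of the same fit-and-predict flow using the `load_iris` loader imported earlier (the hard-coded measurements are an illustrative sample, not user input):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Fit the same kind of model on the bundled iris data
iris_data = load_iris()
model = DecisionTreeClassifier(random_state=42)
model.fit(iris_data.data, iris_data.target)

# Hard-coded measurements replace the input() calls
sample = [[5.1, 3.5, 1.4, 0.2]]  # sepal length, sepal width, petal length, petal width
pred = model.predict(sample)[0]
print(iris_data.target_names[pred])  # setosa
```

The bundled loader encodes labels as integers, so `target_names` maps the prediction back to a species name.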
Classification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd from pandas_datareader import data as wb import matplotlib.pyplot as plt import chart_studio.plotly as py import plotly.express as px import plotly.graph_objs as go import seaborn as sns Gl = wb.DataReader('GOOGL', data_source = 'yahoo', start='2004-1-1') Gl.head() Gl.tail() # # Simple Rate of Return # # # Calculation of rate of return using the simple rate of return method. # # ## $ \frac{P_{1} -P_{0}}{P_{0}} = \frac{P_{1}}{P_{0}}-1 $ # # Calculation of daily rate of return of the stock since the day of listing Gl['simple_return'] = Gl['Adj Close'] / Gl['Adj Close'].shift(1) - 1 print(Gl) # The average daily rate of return of the stock over the 16 years since Google's listing is less than 1 percent, showing that significant movements in the stock price are not a daily occurrence.
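The `shift(1)` arithmetic can be sanity-checked on a toy price series (illustrative numbers only); the first return is NaN because there is no prior day:

```python
import pandas as pd

# Toy closing prices (illustrative values only)
prices = pd.Series([100.0, 110.0, 99.0])

# Simple return: P_t / P_{t-1} - 1; the first entry is NaN (no prior day)
simple_returns = prices / prices.shift(1) - 1
print(simple_returns.round(2).tolist())  # [nan, 0.1, -0.1]
```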
Gl = Gl.rename(columns={"simple_return": "simple return"}) Gl.head() Gl["simple return"].plot(figsize=(15, 9)) fig = sns.lineplot(data=Gl, x='Date', y='simple return', sizes=(20, 200)) # average daily rate of return avg_return = Gl['simple return'].mean() print(avg_return) # annual rate of return annual_avg_return = Gl['simple return'].mean() * 250 # 250 trading days arr = annual_avg_return * 100 arr = str(round(arr, 3)) + ' %' print(arr) # The average annual rate of return for Google stock over 16 years is 25.881% # # Log Returns # # ## $ \ln(\frac{P_{t}}{P_{t-1}}) $ Gl["log return"] = np.log(Gl['Adj Close']/ Gl['Adj Close'].shift(1)) print(Gl) Gl["log return"].plot(figsize=(15,9)) # avg daily return log_avg_return = Gl['log return'].mean() print(log_avg_return) # annual average log return log_avg_arr = round((log_avg_return * 250), 3) log_avg_arr = log_avg_arr * 100 log_avg_arr = str(log_avg_arr) + ' %' print(log_avg_arr) # # Calculating Return on a Portfolio of Securities tickers = ['GOOGL', 'AMZN', 'AAPL', 'MSFT'] data = pd.DataFrame() for t in tickers: data[t] = wb.DataReader(t, data_source='yahoo', start="2004-08-19")['Adj Close'] data.info() data.head() # ## Normalisation to 100 # # # ## $ \frac{P_{t}}{P_{0}} * 100 $ (data/data.iloc[0] * 100).plot(figsize = (20, 10)); plt.show() # Normalising the technology stocks lets us compare the behaviour of each stock over time from a single landmark. The best performing stock over the 16-year period has been Apple, having been at par with Google in terms of performance but then peaking past it in 2007. Since 2004, its worst dips were between mid 2018 and mid 2019, suffering a 50% dip in price, followed by a 30% dip between late 2019 and early 2020. # # The second best performing stock was Amazon, with a steady rise in pricing from the onset of the financial crisis, eventually experiencing a surge from 2015 with net positive performance to date.
# # Microsoft has been the worst performing stock among the top technology stocks, maintaining a gradual, steady increase over the years, marginally different from the start. # # Calculating Return of the portfolio of assets returns = (data/data.shift(1)) - 1 returns.head() returns.plot(figsize = (20, 10)) # weighting of stocks on the assumption that they are held in equal proportion in the portfolio weights = np.array([0.25,0.25,0.25,0.25]) annual_returns = returns.mean() * 250 annual_returns = round((annual_returns * 100), 2) annual_returns p1_return = np.dot(annual_returns, weights) p1_return = str(p1_return) + '%' p1_return # From the portfolio of the four stocks, Apple generated the highest returns at 38.71%, followed by Amazon at 34.42%, Google at 25.88%, and MSFT at 18.92%, with the weighted return of the portfolio being 29.48% # # Calculating Return of Indices # + tickers = ['^GSPC', '^IXIC', '^GDAXI', '^N225'] ind_data = pd.DataFrame() for t in tickers: ind_data[t] = wb.DataReader(t, data_source='yahoo', start='2004-08-19')['Adj Close'] # - ind_data.head() # + (ind_data/ind_data.iloc[0] * 100).plot(figsize=(15, 6));plt.show() # - # Over the years Germany had the most stability and growth, according to the performance of its stock market, peaking in 2008 then tumbling down along with the rest of the global economies during the global financial and economic crisis of 2008. Recovery ensued from 2009 up until the sovereign debt crisis, when the western economies were greatly affected, indicated by the dip in the indices. Japan was the worst hit during the global financial crisis of 2008 and did not recover economically unlike most of the other countries, probably indicative of a period of stagnation of economic growth. # # It was however insulated from the 2012 sovereign debt crisis, suffering zero shocks unlike western economies which were largely exposed to it.
Globally, recovery ensued after the 2012 crisis, with the American and German economies performing quite well at the same level up until mid 2017, when the American economy outpaced the German one - probably driven by increased liquidity and market confidence following business-friendly tax measures introduced by Trump's government. # # There were a couple of shocks in the market, with the greatest being in 2019, driven by a market sell-off fuelled by fears of a recession, followed by a greater dip in Q1 2020, driven by the impact the mitigative measures against the outbreak of COVID-19 had on the global economy. ind_returns = (ind_data/ind_data.shift(1)) -1 annual_ind_returns = ind_returns.mean() * 250 annual_ind_returns # + tickers = ['AAPL','MSFT','^IXIC', '^GDAXI', '^GSPC'] ind_data2 = pd.DataFrame() for t in tickers: ind_data2[t] = wb.DataReader(t, 'yahoo', start='2004-08-19')['Adj Close'] # - (ind_data2/ind_data2.iloc[0] * 100).plot(figsize=(15, 6)); plt.show() # From the data, both the best and worst performing stocks in our portfolio returned more than the average rate of return of the examined indices, 16 years to date. However, for 13.5 years, Microsoft had a similar rate of return, year on year, to the main indices, with positive deviation starting in Q3 2017, after which it outperformed the indices by a small margin.
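As a quick check, the weighted portfolio return computed earlier with `np.dot` reduces to simple arithmetic on the reported annualised returns and equal weights:

```python
import numpy as np

# Annualised returns (%) reported above for GOOGL, AMZN, AAPL, MSFT
annual_returns = np.array([25.88, 34.42, 38.71, 18.92])
weights = np.array([0.25, 0.25, 0.25, 0.25])  # equal weighting

portfolio_return = float(np.dot(annual_returns, weights))
print(round(portfolio_return, 2))  # 29.48
```

With equal weights the dot product is just the arithmetic mean of the four returns, which reproduces the 29.48% figure quoted above.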
Rate of Return Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Wild Mushroom Classification # # An applied machine learning project to classify whether a wild mushroom is toxic or safe to eat. # # ## Objectives # # - The dataset is already cleaned; the goal of this project is to create a working model as quickly as possible, identify any issues with it, then fix those issues # # ## Data Dictionary # # - **safe**: 1=safe to eat, 0=poisonous # - **cap-shape**: bell=b, conical=c, convex=x, flat=f, knobbed=k, sunken=s # - **cap-surface**: fibrous=f, grooves=g, scaly=y, smooth=s # - **cap-color**: brown=n, buff=b, cinnamon=c, gray=g, green=r, pink=p, purple=u, red=e, white=w, yellow=y # - **bruises**: bruises=t, no=f # - **odor**: almond=a, anise=l, creosote=c, fishy=y, foul=f, musty=m, none=n, pungent=p, spicy=s # - **gill-attachment**: attached=a, descending=d, free=f, notched=n # - **gill-spacing**: close=c, crowded=w, distant=d # - **gill-size**: broad=b, narrow=n # - **gill-color**: black=k, brown=n, buff=b, chocolate=h, gray=g, green=r, orange=o, pink=p, purple=u, red=e, white=w, yellow=y # - **stalk-shape**: enlarging=e, tapering=t # - **stalk-root**: bulbous=b, club=c, cup=u, equal=e, rhizomorphs=z, rooted=r, missing=?
# - **stalk-surface-above-ring**: fibrous=f, scaly=y, silky=k, smooth=s # - **stalk-surface-below-ring**: fibrous=f, scaly=y, silky=k, smooth=s # - **stalk-color-above-ring**: brown=n, buff=b, cinnamon=c, gray=g, orange=o, pink=p, red=e, white=w, yellow=y # - **stalk-color-below-ring**: brown=n, buff=b, cinnamon=c, gray=g, orange=o, pink=p, red=e, white=w, yellow=y # - **veil-type**: partial=p, universal=u # - **veil-color**: brown=n, orange=o, white=w, yellow=y # - **ring-number**: none=n, one=o, two=t # - **ring-type**: cobwebby=c, evanescent=e, flaring=f, large=l, none=n, pendant=p, sheathing=s, zone=z # - **spore-print-color**: black=k, brown=n, buff=b, chocolate=h, green=r, orange=o, purple=u, white=w, yellow=y # - **population**: abundant=a, clustered=c, numerous=n, scattered=s, several=v, solitary=y # - **habitat**: grasses=g, leaves=l, meadows=m, paths=p, urban=u, waste=w, woods=d # # # ## Machine Learning Task # # - Apply supervised learning classification techniques to determine whether a wild mushroom, based on its attributes, is toxic or safe to eat # - Win condition: not specified # + # Module imports import numpy as np import pandas as pd pd.set_option('display.max_columns', 200) import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns # Import models and metrics from sklearn.utils import resample from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score, precision_score, recall_score, f1_score # Import train-test split from sklearn.model_selection import train_test_split # + # Uncomment to load XGBoost if necessary # import sys # # !{sys.executable} -m pip install xgboost # - from xgboost import DMatrix from xgboost import XGBClassifier # + # Import data df = pd.read_csv('data_vs_wild.csv') df.head() # - df.info() df.describe() # **Note**: the dataset doesn't have any missing values, but does show imbalanced classes within the target variable. Less than 1% of the observations are safe to eat. 
It will be important to stratify the train and test split on the `safe` column and try training models on both up-scaled positive class (safe) observations and down-scaled negative class (poisonous) observations. Also, `accuracy` won't be an appropriate scoring metric to evaluate models given the imbalanced class situation. df.describe(include='object') for col in df.dtypes[df.dtypes == 'object'].index: print(col) print(df[col].value_counts()) # Data observations: # # - `veil-type` column has zero variance and doesn't add any information, all observations are the same value (drop column) # - `ring-number` values are ordinal, should convert to numbers 0, 1, 2 # - `bruises` is an indicator variable, should convert to 0, 1 # - `stalk-root` uses "?" to indicate missing values, but not necessary to change (would just change it to another character) # Drop veil-type column as all values are same df.drop('veil-type', axis=1, inplace=True) # + # Convert ring-number to ordinal number values def ring_to_num(val): # Converts n (none) to 0, o (one) to 1, t (two) to 2 if val == 'n': return 0 elif val == 'o': return 1 elif val == 't': return 2 else: return np.nan df['ring-number'] = df['ring-number'].apply(ring_to_num) # - # Convert bruises column to indicator variable df['bruises'] = (df['bruises'] == 't').astype(int) df.head() # Create count plots for categorical variables for col in df.dtypes[df.dtypes == 'object'].index: sns.countplot(data=df, y=col) plt.title(col.title()) plt.show() # No sparse columns for the remaining features # + # Create analytical base table cols = df.dtypes[df.dtypes == 'object'].index abt = pd.get_dummies(df, columns=cols) abt.head() # + # Create train and test splits of data X = abt.drop('safe', axis=1) y = abt['safe'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42) print(X_train.shape, X_test.shape, y_train.shape, y_test.shape) # - # The large number of observations requires substantial 
computing power to train and tune a model. This is especially true when using a non-GPU personal laptop, so for efficiency's sake, training is done on only one baseline model.

# +
xgb = XGBClassifier(random_state=42)
xgb.fit(X_train, y_train)

xgb_pred = xgb.predict(X_test)
xgb_pred_probs = xgb.predict_proba(X_test)
xgb_pred_probs = [p[1] for p in xgb_pred_probs]

# -
print(confusion_matrix(y_test, xgb_pred))
print(classification_report(y_test, xgb_pred))

print('Baseline XGB Precision: {0:.4f}'.format(precision_score(y_test, xgb_pred)))
print('Baseline XGB Recall: {0:.4f}'.format(recall_score(y_test, xgb_pred)))
print('Baseline XGB F1 Score: {0:.4f}'.format(f1_score(y_test, xgb_pred)))
print('Baseline XGB AUROC Score: {0:.4f}'.format(roc_auc_score(y_test, xgb_pred_probs)))

# The baseline model has very low recall and F1 scores for the positive class due to the imbalanced target variable. To address the issue, a resampling technique will try both upsampling the positive class and downsampling the negative class, based on the current training data.
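# To make concrete why `accuracy` is ruled out above: on data this imbalanced, a classifier that never predicts the positive class still scores near-perfect accuracy while finding zero safe mushrooms. A quick sketch (the 1% positive rate here is illustrative, not taken from this dataset):

```python
import numpy as np

# Illustrative labels: 1% positive ("safe"), 99% negative ("poisonous")
y_true = np.zeros(1000, dtype=int)
y_true[:10] = 1

# A useless classifier that always predicts "poisonous"
y_pred = np.zeros(1000, dtype=int)

accuracy = (y_true == y_pred).mean()   # 0.99 -- looks great, means nothing
recall = y_pred[y_true == 1].mean()    # 0.0  -- finds no safe mushrooms
print(accuracy, recall)
```

This is why the evaluation below leans on precision, recall, F1, and AUROC instead.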
# +
# Re-combine training data to split by safe/poisonous
X_all_train = X_train.copy()
X_all_train['safe'] = y_train
print(X_all_train.shape)

poison = X_all_train[X_all_train['safe'] == 0]  # length 796673
safe = X_all_train[X_all_train['safe'] == 1]  # length 3327

# +
# Upsample safe mushrooms (with replacement, up to the majority-class size)
safe_upsamp = resample(safe, replace=True, n_samples=len(poison), random_state=42)
upsampled = pd.concat([poison, safe_upsamp])
print(upsampled.shape)

X_up_train = upsampled.drop('safe', axis=1)
y_up_train = upsampled['safe']
print(X_up_train.shape, y_up_train.shape)

# +
# Check model performance with upsampled data
xgb_upsamp = XGBClassifier(random_state=42)
xgb_upsamp.fit(X_up_train, y_up_train)

xgb_up_pred = xgb_upsamp.predict(X_test)
xgb_up_pred_probs = xgb_upsamp.predict_proba(X_test)
xgb_up_pred_probs = [p[1] for p in xgb_up_pred_probs]

# -
print(confusion_matrix(y_test, xgb_up_pred))
print(classification_report(y_test, xgb_up_pred))

print('Upsampled XGB Precision: {0:.4f}'.format(precision_score(y_test, xgb_up_pred)))
print('Upsampled XGB Recall: {0:.4f}'.format(recall_score(y_test, xgb_up_pred)))
print('Upsampled XGB F1 Score: {0:.4f}'.format(f1_score(y_test, xgb_up_pred)))
print('Upsampled XGB AUROC: {0:.4f}'.format(roc_auc_score(y_test, xgb_up_pred_probs)))

# +
# Downsample poisonous mushrooms (without replacement, down to the minority-class size)
pois_down = resample(poison, replace=False, n_samples=len(safe), random_state=42)
downsampled = pd.concat([safe, pois_down])
print(downsampled.shape)

X_down_train = downsampled.drop('safe', axis=1)
y_down_train = downsampled['safe']
print(X_down_train.shape, y_down_train.shape)

# +
# Check model performance with downsampled data
xgb_down = XGBClassifier(random_state=42)
xgb_down.fit(X_down_train, y_down_train)

xgb_down_pred = xgb_down.predict(X_test)
xgb_down_pred_probs = xgb_down.predict_proba(X_test)
xgb_down_pred_probs = [p[1] for p in xgb_down_pred_probs]

# -
print(confusion_matrix(y_test, xgb_down_pred))
print(classification_report(y_test, xgb_down_pred))

print('Downsampled
XGB Precision: {0:.4f}'.format(precision_score(y_test, xgb_down_pred))) print('Downsampled XGB Recall: {0:.4f}'.format(recall_score(y_test, xgb_down_pred))) print('Downsampled XGB F1 Score: {0:.4f}'.format(f1_score(y_test, xgb_down_pred))) print('Downsampled XGB AUROC: {0:.4f}'.format(roc_auc_score(y_test, xgb_down_pred_probs))) # ## Conclusion # # After applying upsampling and downsampling techniques on the training set to balance the target classes, there was a positive impact on the models' recall (3% to 88%) and a slight bump to AUROC (may not be statistically significant). However, the resampled F1 scores were slightly worse, mainly because the precision score declined dramatically. The resampled models predicted a lot more false positives after training on a balanced-class dataset, and therefore offset any benefits from the recall improvement. # # Since there was no specific win condition for this exercise, there isn't a clear metric to focus on. The AUROC scores were similar, and visualizing each model's ROC curve doesn't show much separation at different thresholds. If you're looking for food in the woods and you can't tolerate any level of poison, then a high precision rate is key. The baseline model wins here - while it won't classify much as safe to eat, it'll be correct 90% of the time when it does. If you could tolerate some level of poison, or were under pressure trying to collect a lot of safe samples, then either of the resampled models with higher recall would be better. 
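# Since the three models differ mainly in where they sit on the precision-recall tradeoff, another lever - lighter-weight than retraining on resampled data - is to tune the decision threshold applied to `predict_proba` outputs rather than using the default 0.5. A sketch of the idea on toy data (these helper functions are illustrative, not part of scikit-learn):

```python
import numpy as np

def f1_at_threshold(y_true, probs, t):
    # Binarize probabilities at threshold t and compute F1 by hand
    pred = (probs >= t).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    fp = np.sum((pred == 1) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(y_true, probs, grid=np.linspace(0.05, 0.95, 19)):
    # Pick the grid threshold that maximizes F1 (in practice, tune this on a
    # validation split, never on the test set)
    scores = [f1_at_threshold(y_true, probs, t) for t in grid]
    return float(grid[int(np.argmax(scores))])

# Toy example: a threshold between the two classes separates them perfectly
y = np.array([0, 0, 1, 1])
p = np.array([0.10, 0.20, 0.30, 0.90])
t = best_threshold(y, p)
```

The same sweep could optimize for recall subject to a minimum precision, which maps directly onto the "how much poison can you tolerate" question in the conclusion.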
# +
# roc_curve was not among the metrics imported above, so import it here
from sklearn.metrics import roc_curve

# Baseline model
fpr, tpr, thresh = roc_curve(y_test, xgb_pred_probs)

# Upsampled safe mushrooms
fpr_up, tpr_up, thresh_up = roc_curve(y_test, xgb_up_pred_probs)

# Downsampled poisonous mushrooms
fpr_dn, tpr_dn, thresh_dn = roc_curve(y_test, xgb_down_pred_probs)

# +
# Plot the ROC Curves
plt.figure(figsize=(10, 8))
plt.title('Receiver Operating Characteristic Curve')
plt.plot(fpr, tpr, label='Baseline AUROC: {0:.2f}'.format(roc_auc_score(y_test, xgb_pred_probs)))
plt.plot(fpr_up, tpr_up, label='Upsampled AUROC: {0:.2f}'.format(roc_auc_score(y_test, xgb_up_pred_probs)))
plt.plot(fpr_dn, tpr_dn, label='Downsampled AUROC: {0:.2f}'.format(roc_auc_score(y_test, xgb_down_pred_probs)))
plt.legend(loc='lower right')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
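# For intuition when reading the curves above: AUROC is the probability that a randomly chosen positive (safe) example is scored higher than a randomly chosen negative (poisonous) one. A small sketch of that rank-based identity on toy data (not the mushroom predictions):

```python
import numpy as np

def auroc_by_hand(y_true, scores):
    # AUROC == P(score of a random positive > score of a random negative),
    # counting ties as 1/2 (the Mann-Whitney U formulation)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
print(auroc_by_hand(y, s))  # 0.75 -- matches sklearn's roc_auc_score here
```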
WildMushroomClassifier/WildMushroomClassification.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] colab_type="text" id="_IizNKWLomoA"
# <img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
# <br></br>
# <br></br>
#
# ## *Data Science Unit 4 Sprint 4 Lesson 3*
#
# # Generative Adversarial Networks (GANs)
#
# *PS: The person below does not exist*
#
# <img src="https://thispersondoesnotexist.com/image" width=500>

# + [markdown] colab_type="text" id="0EZdBzC6pvV9"
# # Lecture
#
# Learning Objectives:
# 1. What is a GAN?
#    - Describe the mechanisms of a Generator & Discriminator
#    - Describe the Adversarial process
# 2. How does a GAN achieve good results?
#    - Talk about the relationship with Game Theory
#    - Illustrate Nash equilibrium

# + [markdown] colab_type="text" id="W0hA8noPn94y"
# ## GAN Overview
# <img src="GAN Overview.jpeg" width=800>
#
# <br></br>
# <br></br>
# <br></br>
#
# ## GAN Framework
# <img src="GAN Framework.jpeg" width=800>

# + [markdown] colab_type="text" id="4Lg1r7f3lfCw"
# ## *Two* neural networks - adversaries!
#
# ![Spy vs. Spy](https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Comikaze_Expo_2011_-_Spy_vs_Spy_%286325381362%29.jpg/360px-Comikaze_Expo_2011_-_Spy_vs_Spy_%286325381362%29.jpg)
#
# Generative Adversarial Networks are an approach to unsupervised learning, based on the insight that we can train *two* networks simultaneously and pit them against each other.
#
# - The discriminator is trained with real - but unlabeled - data, and has the goal of identifying whether or not a new item belongs to that data.
# - The generator starts from noise (it doesn't see the data at all!), and tries to generate output that fools the discriminator (and gets to update based on the discriminator's feedback).
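# The adversarial setup above can be written as a single zero-sum objective. In the original GAN formulation (Goodfellow et al., 2014), the discriminator maximizes and the generator minimizes $V(D, G) = \mathbb{E}[\log D(x)] + \mathbb{E}[\log(1 - D(G(z)))]$; at the theoretical optimum the discriminator outputs 1/2 everywhere, giving $V = -\log 4$. A minimal NumPy sketch of that value function (a Monte Carlo estimate over arrays of discriminator outputs, not a trainable model):

```python
import numpy as np

def gan_value(d_real, d_fake):
    # Monte Carlo estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]
    # d_real: discriminator outputs on real samples; d_fake: on generated ones
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A perfectly confused discriminator outputs 0.5 everywhere,
# giving V = 2 * log(0.5) = -log(4) -- the equilibrium value
v = gan_value(np.full(8, 0.5), np.full(8, 0.5))
print(v)
```

A discriminator that is actually winning (high outputs on reals, low on fakes) pushes V above this equilibrium value, which is exactly what the generator's updates fight against.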
#
# GANs can be considered a zero-sum game, in the [game theory](https://en.wikipedia.org/wiki/Game_theory) sense. Game theory is a common approach to modeling strategic competitive behavior between rational decision makers, and is heavily used in economics as well as computer science.
#
# If you've also heard the hype about [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning), one way to understand it is:
#
# ```
# Reinforcement Learning : GAN :: Decision Theory : Game Theory
# ```
#
# That is, Reinforcement Learning is more closely analogous to [decision theory](https://en.wikipedia.org/wiki/Decision_theory), a relative of game theory featuring the behavior of an "agent" against "nature" (the environment). The agent is strategic and rational, but the environment simply is.

# + [markdown] colab_type="text" id="poUB58kaP7J5"
# ## A Foray into Game Theory
#
# What is a "zero sum" game? It is a model of the interaction of two strategic agents in a situation where, for one to gain, the other must lose, and vice versa.
#
# A famous example is the [Prisoner's Dilemma](https://en.wikipedia.org/wiki/Prisoner's_dilemma). The typical story behind this game goes something like this:
#
# > Two criminals who committed a crime together are caught by the authorities. There is enough evidence to put them each away for 3 years, but the police interrogate them separately and offer each of them a deal - "Tell us what the other criminal did, and we'll go lighter on you."
#
# > The deal is tempting - the person who takes it shaves 2 years off their sentence. But it adds 5 years to the sentence of the other person. So if both talk, they each get 3 - 2 + 5 = 6 years, twice as much as if neither talks. But if one talks and the other doesn't, the talker gets 1 year and the non-talker gets 8!
#
# > The result is that, individually, they both prefer defecting (talking with the police) regardless of what the other person does.
But, they'd both be better off if they could somehow trust one another not to talk to the police.
#
# Mathematically, we consider this outcome a *Nash equilibrium* - a stable situation where neither player would want to unilaterally change strategy. But it's one where a *Pareto superior* outcome exists (an outcome that both players would prefer to what they have now).
#
# An illustration (with different numbers) of the Prisoner's Dilemma:
#
# ![Prisoner's Dilemma](https://upload.wikimedia.org/wikipedia/commons/thumb/6/65/Dilema_do_Prisioneiro.png/480px-Dilema_do_Prisioneiro.png)
#
# More generally, these could be referred to as "constant sum" games - "zero sum" implies that for any player to get ahead, the other must inevitably end up behind. The above illustration could be of a game where people are "splitting loot", and so everybody *gets* something - it's just that some get more than others. The utility can be normalized so it sums to zero, or to any other constant.
#
# Game theory is one of the core tools used in social science and other areas to model and explain behavior. The main path to overcoming "dilemmas" is *iteration* - through repetition, players can build a reputation, and value that reputation more than the outcome of any single round. For example, think of the lengths some restaurants go to to ensure positive reviews.
#
# *Exercise* - think of at least two scenarios that could be explained with the Prisoner's Dilemma, and of one other scenario that you think could also be modeled as some sort of strategic game between agents.
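# The equilibrium claim above is easy to check mechanically. A short sketch using the sentence lengths from the story (years in prison, so lower is better; `is_nash` is an illustrative helper, not a library function):

```python
# Strategies: 0 = stay silent, 1 = talk (defect)
# Payoffs are years in prison for (player A, player B) -- lower is better
years = {
    (0, 0): (3, 3),  # both silent
    (0, 1): (8, 1),  # A silent, B talks
    (1, 0): (1, 8),  # A talks, B silent
    (1, 1): (6, 6),  # both talk
}

def is_nash(profile):
    # Nash equilibrium: neither player can cut their own sentence
    # by unilaterally switching strategy
    a, b = profile
    return (years[profile][0] <= years[(1 - a, b)][0]
            and years[profile][1] <= years[(a, 1 - b)][1])

print(is_nash((1, 1)))  # True  -- both talking is the equilibrium
print(is_nash((0, 0)))  # False -- yet both staying silent is Pareto superior
```

This is the same structure GAN training aims for: a profile of strategies (generator and discriminator parameters) where neither network can improve by deviating alone.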
# + [markdown] colab_type="text" id="5z5Z0pnYPwSf" # ## Minimum Viable GAN # # Courtesy of Keras: # + colab={} colab_type="code" id="penxpauRuyWt" import os import numpy as np import matplotlib.pyplot as plt from tqdm import tqdm # performance timing # Building on Keras from keras.layers import Input from keras.models import Model, Sequential from keras.layers.core import Dense, Dropout from keras.layers.advanced_activations import LeakyReLU from keras.datasets import fashion_mnist from keras.optimizers import Adam from keras import initializers # + colab={} colab_type="code" id="2qQOHKrRu-rN" np.random.seed(10) random_dim = 100 def load_minst_data(): # load the data - we'll use Fashion MNIST, for a change of pace (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data() # normalize our inputs to be in the range[-1, 1] x_train = (x_train.astype(np.float32) - 127.5)/127.5 # convert x_train with a shape of (60000, 28, 28) to (60000, 784) so we have # 784 columns per row x_train = x_train.reshape(60000, 784) return (x_train, y_train, x_test, y_test) # + colab={} colab_type="code" id="_FDog5HivCwV" def get_discriminator(optimizer): discriminator = Sequential() discriminator.add(Dense( 1024, input_dim=784, kernel_initializer=initializers.RandomNormal(stddev=0.02))) discriminator.add(LeakyReLU(0.2)) discriminator.add(Dropout(0.3)) discriminator.add(Dense(512)) discriminator.add(LeakyReLU(0.2)) discriminator.add(Dropout(0.3)) discriminator.add(Dense(256)) discriminator.add(LeakyReLU(0.2)) discriminator.add(Dropout(0.3)) discriminator.add(Dense(1, activation='sigmoid')) discriminator.compile(loss='binary_crossentropy', optimizer=optimizer) return discriminator def get_generator(optimizer): generator = Sequential() generator.add(Dense( 256, input_dim=random_dim, kernel_initializer=initializers.RandomNormal(stddev=0.02))) generator.add(LeakyReLU(0.2)) generator.add(Dense(512)) generator.add(LeakyReLU(0.2)) generator.add(Dense(1024)) generator.add(LeakyReLU(0.2)) 
    generator.add(Dense(784, activation='tanh'))
    generator.compile(loss='binary_crossentropy', optimizer=optimizer)
    return generator

def get_gan_network(discriminator, random_dim, generator, optimizer):
    # We initially set trainable to False since we only want to train either the
    # generator or discriminator at a time
    discriminator.trainable = False
    # gan input (noise) will be 100-dimensional vectors
    gan_input = Input(shape=(random_dim,))
    # the output of the generator (an image)
    x = generator(gan_input)
    # get the output of the discriminator (probability if the image is real/not)
    gan_output = discriminator(x)
    gan = Model(inputs=gan_input, outputs=gan_output)
    gan.compile(loss='binary_crossentropy', optimizer=optimizer)
    return gan

def plot_generated_images(epoch, generator, examples=100, dim=(10, 10), figsize=(10, 10)):
    noise = np.random.normal(0, 1, size=[examples, random_dim])
    generated_images = generator.predict(noise)
    generated_images = generated_images.reshape(examples, 28, 28)
    plt.figure(figsize=figsize)
    for i in range(generated_images.shape[0]):
        plt.subplot(dim[0], dim[1], i+1)
        plt.imshow(generated_images[i], interpolation='nearest', cmap='gray_r')
        plt.axis('off')
    plt.tight_layout()
    plt.savefig('gan_generated_image_epoch_%d.png' % epoch)

# + colab={"base_uri": "https://localhost:8080/", "height": 3681} colab_type="code" id="YKsazCE-vFLy" outputId="dd5b57f9-e4c7-496d-c79e-9cc21a66aa0a"
def train(epochs=1, batch_size=128):
    # Get the training and testing data
    x_train, y_train, x_test, y_test = load_minst_data()
    # Split the training data into batches
    batch_count = x_train.shape[0] // batch_size

    # Build our GAN network
    adam = Adam(lr=0.0002, beta_1=0.5)
    generator = get_generator(adam)
    discriminator = get_discriminator(adam)
    gan = get_gan_network(discriminator, random_dim, generator, adam)

    for e in range(1, epochs+1):
        print('-'*15, 'Epoch %d' % e, '-'*15)
        for _ in tqdm(range(batch_count)):
            # Get a random set of input noise and images
            noise = 
np.random.normal(0, 1, size=[batch_size, random_dim]) image_batch = x_train[np.random.randint(0, x_train.shape[0], size=batch_size)] # Generate fake MNIST images generated_images = generator.predict(noise) X = np.concatenate([image_batch, generated_images]) # Labels for generated and real data y_dis = np.zeros(2*batch_size) # One-sided label smoothing y_dis[:batch_size] = 0.9 # Train discriminator discriminator.trainable = True discriminator.train_on_batch(X, y_dis) # Train generator noise = np.random.normal(0, 1, size=[batch_size, random_dim]) y_gen = np.ones(batch_size) discriminator.trainable = False gan.train_on_batch(noise, y_gen) if e == 1 or e % 20 == 0: plot_generated_images(e, generator) train(40, 64) # + [markdown] colab_type="text" id="jHwENTrvL5pP" # Pretty decent results, even after not too many iterations. # # We can do even better, with pretrained StyleGAN! # + [markdown] colab_type="text" id="R8XpLKVincLu" # ## StyleGAN - A Style-Based Generator Architecture for Generative Adversarial Networks # # Original paper: https://arxiv.org/abs/1812.04948 # # Source code: https://github.com/NVlabs/stylegan # # Many applications: # - https://thispersondoesnotexist.com # - https://thiscatdoesnotexist.com # - https://thisairbnbdoesnotexist.com # - https://stackroboflow.com # + colab={"base_uri": "https://localhost:8080/", "height": 104} colab_type="code" id="e1FaXXDoi5Z2" outputId="9ce51778-43f5-4617-8093-e3afaf2337eb" # !git clone https://github.com/NVlabs/stylegan # %cd stylegan/ # + colab={"base_uri": "https://localhost:8080/", "height": 1580} colab_type="code" id="GkJUFfsgnqr_" outputId="559f89d5-b07c-4966-f326-485410039faa" # From stylegan/pretrained_example.py import os import pickle import numpy as np import PIL.Image import dnnlib import dnnlib.tflib as tflib import config def main(): # Initialize TensorFlow. tflib.init_tf() # Load pre-trained network. 
url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ' # karras2019stylegan-ffhq-1024x1024.pkl with dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f: _G, _D, Gs = pickle.load(f) # _G = Instantaneous snapshot of the generator. Mainly useful for resuming a previous training run. # _D = Instantaneous snapshot of the discriminator. Mainly useful for resuming a previous training run. # Gs = Long-term average of the generator. Yields higher-quality results than the instantaneous snapshot. # Print network details. Gs.print_layers() # Pick latent vector. rnd = np.random.RandomState(5) latents = rnd.randn(1, Gs.input_shape[1]) # Generate image. fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True, output_transform=fmt) # Save image. os.makedirs(config.result_dir, exist_ok=True) png_filename = os.path.join(config.result_dir, 'example.png') PIL.Image.fromarray(images[0], 'RGB').save(png_filename) main() # + colab={"base_uri": "https://localhost:8080/", "height": 1041} colab_type="code" id="7rqXQzb6N1jF" outputId="0a5b7fda-861e-4d6d-c692-c4c00643e957" from IPython.display import Image Image(filename='results/example.png') # + [markdown] colab_type="text" id="0lfZdD_cp1t5" # # Assignment - ⭐ EmojiGAN ⭐ # # Using the provided "minimum viable GAN" code, train a pair of networks to generate emoji. 
To get you started, here's some emoji data: # + colab={"base_uri": "https://localhost:8080/", "height": 193} colab_type="code" id="Ltj1je1fp5rO" outputId="98ced068-b9a4-442c-9659-d2a36f9c6791" # !pip install emoji_data_python # + colab={"base_uri": "https://localhost:8080/", "height": 15285} colab_type="code" id="U6pPH5jkak29" outputId="4598d5ff-ab8d-4470-f104-a0430ff2a55d" # !wget https://github.com/LambdaSchool/DS-Unit-4-Sprint-4-Deep-Learning/raw/master/module3-generative-adversarial-networks/emoji.zip # !unzip emoji.zip # + colab={} colab_type="code" id="THt33z4SbBQ3" import imageio import matplotlib.pyplot as plt from skimage import color example_emoji = imageio.imread('emoji/1f683.png') grayscale_emoji = color.rgb2gray(example_emoji) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="p10_F1XEeRmc" outputId="c7126430-0e09-4880-b889-1a9292bda21e" example_emoji.shape # + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="vE49epWUetuF" outputId="4fb62854-ce68-45f7-bcd2-d9886cc3a558" plt.imshow(example_emoji); # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="vRrcs3ihiFXo" outputId="d0161cb5-a0cd-425e-e2e2-ef32dd78a49e" grayscale_emoji.shape # + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="VTytktY0iIyX" outputId="6951a4d9-f125-4332-8410-d7ed5a6c181f" plt.imshow(grayscale_emoji, cmap="gray"); # + [markdown] colab_type="text" id="AC_HfDXYhs-_" # **Your goal** - *train a GAN that makes new emoji!* # # The good news - the data is naturally 28x28, which is the same size as the earlier example (resulting in an input layer size of $28 \times 28=784$). It's big enough to kinda look like a thing, but small enough to be feasible to train with limited resources. # # The bad news - the emoji are 4 layer PNGs (RGBA), and grayscale conversion is inconsistent at best (the above looks pretty good, but experiment and you'll see). 
It's OK to convert to grayscale and train that way to start (since it'll pretty much drop in to the example code with minimal modification), but you may want to see if you can figure out handling all 4 layers of the input image (basically - growing the dimensionality of the data). # # The worse news - this dataset may not be large enough to get the same quality of results as MNIST. The resources/stretch goals section links to additional sources, so feel free to get creative (and practice your scraping/ingest skills) - but, it is suggested to do so only *after* working some with this as a starting point. # # *Hint* - the main challenge in getting an MVP running will just be loading and converting all the images. [os.listdir](https://docs.python.org/3.7/library/os.html#os.listdir) plus a loop, and refactoring the image processing code into a function, should go a long way. # + [markdown] colab_type="text" id="zE4a4O7Bp5x1" # # Resources and Stretch Goals # + [markdown] colab_type="text" id="uT3UV3gap9H6" # Stretch goals # - [emoji-data](https://github.com/iamcal/emoji-data) - more, bigger, emoji # - [Slackmojis](https://slackmojis.com) - even more - many of them animated, which would be a significant additional challenge (probably not something for a day) # # Resources # - [StyleGAN Explained](https://towardsdatascience.com/explained-a-style-based-generator-architecture-for-gans-generating-and-tuning-realistic-6cb2be0f431) - blog post describing GANs and StyleGAN in particular # - [Implementing GANs in TensorFlow](https://blog.paperspace.com/implementing-gans-in-tensorflow/) - blog post showing TF implementation of a simple GAN # - [Training GANs using Google Colaboratory](https://towardsdatascience.com/training-gans-using-google-colaboratory-f91d4e6f61fe) - an approach using Torch and GPU instances # - [Gym](https://gym.openai.com) - a toolkit for reinforcement learning, another innovative ML approach # - [deep emoji generative adversarial 
network](https://github.com/anoff/deep-emoji-gan) - yes, the idea of an emoji GAN has been done - so check out this extended analysis of the results # - [DeepMoji](http://deepmoji.mit.edu) - not a GAN, but a cool application of deep learning to emoji
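# Circling back to the assignment hint - `os.listdir` plus a loop around a refactored image-processing function - a loader might look like the sketch below. The `read_fn` parameter, the 28x28 size, and the rescaling to [-1, 1] (to match the generator's `tanh` output range) are my assumptions, not code from this lesson:

```python
import os
import numpy as np

def load_emoji_dir(path, read_fn):
    # read_fn is any function mapping a file path to a 28x28 array,
    # e.g. imageio.imread composed with a grayscale conversion
    vectors = []
    for name in sorted(os.listdir(path)):
        if name.endswith('.png'):
            img = np.asarray(read_fn(os.path.join(path, name)), dtype=np.float32)
            # Flatten to 784 and rescale pixel values from [0, 255] to [-1, 1]
            vectors.append((img.reshape(-1) - 127.5) / 127.5)
    return np.stack(vectors)
```

Swapping `read_fn` is also where RGBA handling would go if you tackle the four-channel stretch goal.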
module3-generative-adversarial-networks/LS_DS_443_Generative_Adversarial_Networks.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Introduction
#
# Pandoc (http://pandoc.org) is a document processing program that runs on multiple operating systems (Mac, Windows, Linux) and can read and write a wide variety of file formats. In many respects Pandoc can be thought of as a *universal translator* for documents. As shown in the following figure, there are many formats supported for input and output by Pandoc. In this workshop we will focus on a specific subset of input and output document types, while remembering that we are just scratching the surface of the transformations we can perform with Pandoc.
#
# <a href="http://pandoc.org/index.html"><img src="diagram.jpg" style="width:50%" /></a>
#
# In particular, we will focus on converting documents written in Pandoc's extended version of Markdown (originally developed by <NAME> as a simple ASCII text syntax for writing blog posts - https://daringfireball.net/projects/markdown/) into some useful output formats including:
#
# * HTML Pages
# * HTML-based presentation slide decks
# * PDF documents (including memos, letters, reports, manuscripts, presentation slides, and poster presentations)
# * Word documents (DOCX)
#
# The bottom line is that Pandoc is a useful tool that provides a number of significant benefits, including:
#
# * Allowing for a clear separation between *content development* and the *styling and presentation* of that content.
# * A simple text-based working file format that can be created in any text editor
# * A source format that integrates very well into version control systems for collaborative development
# * A toolchain that is supported on all major operating systems, providing platform-independent document generation
# * A simple command syntax that can be used to automate document generation processes for easy replication of document workflows
# * A model for developing customized document templates for output formats that allows for a high degree of customization and consistency
# * The use of a powerful page layout system (LaTeX - http://www.latex-project.org/about/) that is broadly used to generate high-quality print-format documents.

# ---
# # Installing Pandoc on Your Computer
#
# Pandoc is available for installation on Mac, Windows, and Linux. While basic functionality is provided by Pandoc itself, the PDF generation capabilities use the LaTeX system, so for a really useful setup you will want to install both. The installation page for Pandoc provides links and guidance for installing both Pandoc and LaTeX on all supported operating systems.
#
# http://pandoc.org/installing.html

# ---
# # Some sample documents

# ## A simple `hello world` of documents
#
# Markdown document: http://localhost:8888/edit/PandocTraining/00-Instructor/01-HelloWorld/helloWorld.md
#
# ```
# % My title
# % Author
# % Some date
#
# # Heading 1
#
# Hello World - this is as simple as it gets ...
# ``` # # Commands to generate different representations of this document: # # pandoc -o helloWorld.pdf helloWorld.md # pandoc -o helloWorld.docx helloWorld.md # pandoc -o helloWorld.html helloWorld.md # # Try it yourself by running (ctrl-enter) the following set of commands # + language="bash" # pandoc -o helloWorld.pdf helloWorld.md # pandoc -o helloWorld.docx helloWorld.md # pandoc -o helloWorld.html helloWorld.md # - # Let's take a look at the generated documents: # # * [PDF file](/files/PandocTraining/00-Instructor/01-HelloWorld/helloWorld.pdf) # * [Word Document](/files/PandocTraining/00-Instructor/01-HelloWorld/helloWorld.docx) # * [HTML File](/files/PandocTraining/00-Instructor/01-HelloWorld/helloWorld.html) # ## Templated & Styled Content # # [Markdown document](/edit/PandocTraining/00-Instructor/01-HelloWorld/templates.md) # # ``` # % My title # % My name # % Today # # --- # recipientSalutation: Recipient Salutation # recipientName: Recipient Name # recipientTitle: Recipient Title # recipientAddress: Recipient Address # ... # # Biltong qui pancetta ball tip turkey eiusmod, tongue bresaola ham dolore. Tempor eiusmod ground round pork strip steak sirloin tongue. Magna cillum consequat, minim do tenderloin in porchetta ham officia qui. Picanha swine minim, ham hock boudin aliqua nisi ball tip aliquip deserunt ribeye in est burgdoggen voluptate. Cupim velit landjaeger nisi flank exercitation sunt laboris dolore. 
#
# ```
#
# PDF Generation Commands:
#
#     pandoc -o templates.pdf templates.md
#     pandoc -o templates.pdf --template "main.tex" templates.md
#
# Word Document Generation Commands (limited to styling - other template characteristics can't currently be set):
#
#     pandoc -o templates.docx templates.md
#     pandoc -o templates.docx --reference-docx "docxTemplate.docx" templates.md
#
# HTML File Generation Commands:
#
#     pandoc -o templates.html templates.md
#     pandoc -o templatesTemplated.html --template "ulPage.html" templates.md
#     pandoc -o templatesStyled.html --css=page.css -s templates.md

# + language="bash"
# pandoc -o templates.pdf templates.md
# pandoc -o templatesTemplated.pdf --template="formal_letter_4.tex" templates.md
# -

# Generated Files:
#
# * [Untemplated PDF File](/files/PandocTraining/00-Instructor/01-HelloWorld/templates.pdf)
# * [Templated PDF File](/files/PandocTraining/00-Instructor/01-HelloWorld/templatesTemplated.pdf)

# + language="bash"
# pandoc -o templates.docx templates.md
# pandoc -o templatesTemplated.docx --reference-docx "docxTemplate.docx" templates.md
# -

# Generated Files:
#
# * [Untemplated DOCX File](/files/PandocTraining/00-Instructor/01-HelloWorld/templates.docx)
# * [Templated DOCX File](/files/PandocTraining/00-Instructor/01-HelloWorld/templatesTemplated.docx)

# + language="bash"
# pandoc -o templates.html templates.md
# pandoc -o templatesTemplated.html --template "ulPage.html" templates.md
# pandoc -o templatesStyled.html --css=page.css -s templates.md
# -

# Generated Files:
#
# * [Untemplated HTML file](/files/PandocTraining/00-Instructor/01-HelloWorld/templates.html)
# * [Templated HTML file](/files/PandocTraining/00-Instructor/01-HelloWorld/templatesTemplated.html)
# * [Styled HTML file](/files/PandocTraining/00-Instructor/01-HelloWorld/templatesStyled.html)

# ---
# ## Some Actual Documents
#
# ### A Class Syllabus
#
# [The source markdown file](/edit/PandocTraining/00-Instructor/01-HelloWorld/OILS515_syllabus.md)
#
# The commands
to generate multiple representations of the syllabus: # # pandoc --standalone --toc --latex-engine=pdflatex -V geometry:margin=1in -V fontsize:11pt -o OILS515_syllabus.pdf OILS515_syllabus.md # # pandoc --toc -s --standalone --css=page2.css -o OILS515_syllabus.html OILS515_syllabus.md # # pandoc -s -o OILS515_syllabus.epub OILS515_syllabus.md # + language="bash" # pandoc --standalone --toc --latex-engine=pdflatex -V geometry:margin=1in -V fontsize:11pt -o OILS515_syllabus.pdf OILS515_syllabus.md # pandoc --toc -s --standalone --css=page2.css -o OILS515_syllabus.html OILS515_syllabus.md # pandoc -s -o OILS515_syllabus.epub OILS515_syllabus.md # - # The generated files: # # * [The PDF file](/files/PandocTraining/00-Instructor/01-HelloWorld/OILS515_syllabus.pdf) # * [The HTML file](/files/PandocTraining/00-Instructor/01-HelloWorld/OILS515_syllabus.html) # * [The EPub file](/files/PandocTraining/00-Instructor/01-HelloWorld/OILS515_syllabus.epub) # ### A Recently Presented Conference Poster # # [The source markdown file](/edit/PandocTraining/00-Instructor/01-HelloWorld/AgileCuration_2016AGUPoster/2016-12_AGUPoster.md) # # ```bash # # cd AgileCuration_2016AGUPoster # pandoc -s -S \ # --normalize \ # --filter pandoc-citeproc \ # --csl ./science.csl \ # --template=poster.tex \ # -f markdown+raw_tex \ # -o 2016-12_AGUPoster.pdf \ # 2016-12_AGUPoster.md # ``` # + language="bash" # cd AgileCuration_2016AGUPoster # change into the directory that has all the files # pandoc -s -S \ # --normalize \ # --filter pandoc-citeproc \ # --csl ./science.csl \ # --template=poster.tex \ # -f markdown+raw_tex \ # -o 2016-12_AGUPoster.pdf \ # 2016-12_AGUPoster.md # # pandoc -s -S \ # --normalize \ # --filter pandoc-citeproc \ # --csl ./science.csl \ # -o 2016-12_AGUPoster.html \ # 2016-12_AGUPoster.md # - # The generated file: # # * [The PDF file](AgileCuration_2016AGUPoster/2016-12_AGUPoster.pdf) # * [The HTML file](AgileCuration_2016AGUPoster/2016-12_AGUPoster.html) # ### A Collection of 
Slide Presentations
#
# [`01_DataManagement.md`](/edit/PandocTraining/00-Instructor/01-HelloWorld/GMT200_DataManagement/01_DataManagement.md)
#
# [`02_DataSecurity.md`](/edit/PandocTraining/00-Instructor/01-HelloWorld/GMT200_DataManagement/02_DataSecurity.md)
#
# [`03_DataManagementPlanning.md`](/edit/PandocTraining/00-Instructor/01-HelloWorld/GMT200_DataManagement/03_DataManagementPlanning.md)
#
# Commands to generate each of the slide shows:
#
# ```
# pandoc --section-divs --slide-level 3 -c lobo_slides.css --standalone -t dzslides -o 01_DataManagement.slides.html 01_DataManagement.md
#
# pandoc --section-divs --slide-level 3 -c lobo_slides.css --standalone -t dzslides -o 02_DataSecurity.slides.html 02_DataSecurity.md
#
# pandoc --section-divs --slide-level 3 -c lobo_slides.css --standalone -t dzslides -o 03_DataManagementPlanning.slides.html 03_DataManagementPlanning.md
# ```
#
# Commands to generate the corresponding PDF files:
#
# ```
# pandoc --template=default.latex --latex-engine=xelatex --self-contained --standalone -o 01_DataManagement.pdf 01_DataManagement.md
#
# pandoc --template=default.latex --latex-engine=xelatex --self-contained --standalone -o 02_DataSecurity.pdf 02_DataSecurity.md
#
# pandoc --template=default.latex --latex-engine=xelatex --self-contained --standalone -o 03_DataManagementPlanning.pdf 03_DataManagementPlanning.md
# ```

# + language="bash"
# cd GMT200_DataManagement
# pandoc --section-divs --slide-level 3 -c lobo_slides.css --standalone -t dzslides -o 01_DataManagement.slides.html 01_DataManagement.md
# pandoc --section-divs --slide-level 3 -c lobo_slides.css --standalone -t dzslides -o 02_DataSecurity.slides.html 02_DataSecurity.md
# pandoc --section-divs --slide-level 3 -c lobo_slides.css --standalone -t dzslides -o 03_DataManagementPlanning.slides.html 03_DataManagementPlanning.md
# pandoc --template=default.latex --latex-engine=xelatex --self-contained --standalone -o 01_DataManagement.pdf 01_DataManagement.md
# pandoc
--template=default.latex --latex-engine=xelatex --self-contained --standalone -o 02_DataSecurity.pdf 02_DataSecurity.md # pandoc --template=default.latex --latex-engine=xelatex --self-contained --standalone -o 03_DataManagementPlanning.pdf 03_DataManagementPlanning.md # - # [All the files related to this example](/tree/PandocTraining/00-Instructor/01-HelloWorld/GMT200_DataManagement) # --------------------------------------- # [NEXT - Pandoc Markdown Syntax](/notebooks/PandocTraining/00-Instructor/02-Syntax/02%20-%20Pandoc%20Mardown%20Syntax.ipynb)
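# Since the six commands above differ only in the file stem, they can be driven
# from a single loop. A minimal sketch, written as a dry run that only prints each
# command (drop the `echo` prefixes to execute; assumes `pandoc` and `xelatex` are
# installed and the three `.md` files are in the current directory):

```shell
# Dry-run sketch: print one dzslides command and one PDF command per lecture.
# Delete the word "echo" on both lines to actually perform the conversions.
for src in 01_DataManagement 02_DataSecurity 03_DataManagementPlanning; do
  echo pandoc --section-divs --slide-level 3 -c lobo_slides.css --standalone \
    -t dzslides -o "${src}.slides.html" "${src}.md"
  echo pandoc --template=default.latex --latex-engine=xelatex --self-contained \
    --standalone -o "${src}.pdf" "${src}.md"
done
```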
00-Instructor/01-HelloWorld/.ipynb_checkpoints/01 - Introduction to Pandoc-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# <table>
# <tr><td><img style="height: 150px;" src="images/geo_hydro1.jpg"></td>
# <td bgcolor="#FFFFFF">
# <p style="font-size: xx-large; font-weight: 900; line-height: 100%">AG Dynamics of the Earth</p>
# <p style="font-size: large; color: rgba(0,0,0,0.5);">Jupyter notebooks</p>
# <p style="font-size: large; color: rgba(0,0,0,0.5);"><NAME></p>
# </td>
# </tr>
# </table>

# # Applied Geophysics II: Trigonometric Functions
# ----
# *<NAME>,
# Geophysics Section,
# Institute of Geological Sciences,
# Freie Universität Berlin,
# Germany*

# *Trigonometry* deals with the measurement of triangles
# (from the Greek: trigonon, triangle; metron, measure).

# ## The General Triangle
#
# A triangle is defined by
# - three vertices: A, B, C
# - three sides: a, b, c
# - three angles: $\alpha, \beta, \gamma$
#
# <img src=figures/trig_triangle.jpg style=width:6cm>

# ### The Right Triangle
#
# If one of the angles is 90 degrees, the triangle is *right-angled*.

# <img src=figures/trig_triangle_rect.jpg style=width:6cm>

# In this case, the side $c$ is the *hypotenuse* and the sides $a$ and $b$ are the *legs* (catheti).

# In a right triangle, the trigonometric functions *sine*,
# *cosine*, and *tangent* relate the angles to the sides:
# $$
# \begin{array}{rcl}
# \sin \alpha &=& {a \over c} \\
# \cos \alpha &=& {b \over c} \\
# \tan \alpha &=& {a \over b}
# \end{array}
# $$

import numpy as np
a = np.sqrt(2)
b = np.sqrt(2)
c = np.sqrt(a**2 + b**2)
print(" a: %10.2f\n b: %10.2f\n c: %10.2f" % (a,b,c))

# ## Polar Coordinates
#
# Using the trigonometric functions, we can *transform* coordinates. As an example we choose **polar coordinates**.
# <img src=figures/trig_polar.jpg style=width:6cm>

# We define the Cartesian coordinates $x$ and $y$:
# $$
# \begin{array}{rcl}
# x &=& r \cos(\alpha) \\
# y &=& r \sin(\alpha)
# \end{array}
# $$
#
# Conversely, the polar coordinates $r$ and $\alpha$ are given by:
# $$
# \begin{array}{rcl}
# r &=& \sqrt{x^2 + y^2} \\
# \alpha &=& \arccos({x \over r}) = \arcsin({y \over r}) = \arctan({y \over x})
# \end{array}
# $$

# ### Relations in the Triangle

# - **Angles**: The angles in a triangle sum to 180$^{\circ}$:
# $$
# \alpha + \beta + \gamma = 180
# $$
# - **Pythagoras**: In the right triangle we use the *Pythagorean theorem*:
# $$
# a^2 + b^2 = c^2
# $$
# Dividing this equation by $c^2$ gives
# $$
# ({a \over c})^2 + ({b \over c})^2 = ({c \over c})^2
# $$
# We recognize the ratios as *sine* and *cosine*:
# $$
# \sin^2 \alpha + \cos^2 \alpha = 1
# $$

# Further identities:
# - **Opposite angles**:
# $$
# \begin{array}{rcl}
# \sin(\alpha) &=& -\sin(-\alpha) \\
# \cos(\alpha) &=& \cos(-\alpha) \\
# \tan(\alpha) &=& -\tan(-\alpha)
# \end{array}
# $$
# - **Double angles**:
# $$
# \begin{array}{rcl}
# \sin(2\alpha) &=& 2 \sin(\alpha) \cos(\alpha) \\
# \cos(2\alpha) &=& \cos^2(\alpha) - \sin^2(\alpha) \\
# \tan(2\alpha) &=& {{2\tan(\alpha)} \over {1-\tan^2(\alpha)}}
# \end{array}
# $$
# - **Half angles**:
# $$
# \begin{array}{rcl}
# \sin({\alpha \over 2}) &=& \pm \sqrt{{{1-\cos(\alpha)} \over {2}}}\\
# \cos({\alpha \over 2}) &=& \pm \sqrt{{{1+\cos(\alpha)} \over {2}}}\\
# \tan({\alpha \over 2}) &=& \pm \sqrt{{{1-\cos(\alpha)} \over {1+\cos(\alpha)}}}
# \end{array}
# $$
# The sign depends on the angle $\alpha$.
# - **Angle sums and differences**:
# $$
# \begin{array}{rcl}
# \sin(\alpha \pm \beta) &=& \sin(\alpha) \cos(\beta) \pm \cos(\alpha) \sin(\beta) \\
# \cos(\alpha \pm \beta) &=& \cos(\alpha) \cos(\beta) \mp \sin(\alpha) \sin(\beta) \\
# \tan(\alpha \pm \beta) &=& {{\tan(\alpha) \pm \tan(\beta)} \over {1 \mp \tan(\alpha) \tan(\beta)}}
# \end{array}
# $$
# Note the change from $\pm$ to $\mp$.

# ## The Unit Circle

# We set the hypotenuse to one, $c=1$, and place the point $A$ at the origin; this yields the *unit circle*.
#
# If we now rotate the angle $\alpha$ away from zero in the positive direction, we trace out the
# *sine* and *cosine* functions.
#
# <img src=figures/trig_unitcircle.jpg style=width:10cm>

# ## Harmonic Functions

# Because the *sine* and *cosine* functions repeat with a fixed periodicity, we use them as
# **harmonic functions**. We define:
#
# - amplitude $A$
# - period $B$
# - phase $C$
# - offset $D$
#
# $$
# y(t) = A \sin\left({{2\pi t} \over {B}} +C \right) + D
# $$

# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interactive
import ipywidgets as widgets

# +
# define interval
t = np.linspace(-np.pi,3*np.pi,101)

A = 1
B = 2*np.pi
C = 0
D = 0

def plot_sinus(A,B,C,D):
    # define function
    y1 = A*np.sin(2*np.pi*t/B+C) + D
    # plot function
    plt.figure(figsize=(12.0, 6.0))
    plt.xlim([-np.pi,2*np.pi])
    plt.xticks([-np.pi,0,np.pi,2*np.pi])
    plt.plot([-np.pi,2*np.pi],[0,0],linestyle='dashed',color='grey')
    plt.plot([0,0],[-1,1],linestyle='dashed',color='grey')
    plt.plot(t,y1)

# call interactive module
interactive_plot = interactive(plot_sinus,
                               A=widgets.FloatSlider(min=0,max=3,step=0.1,value=1,description='Amplitude'),
                               B=widgets.FloatSlider(min=np.pi,max=3*np.pi,step=np.pi/2,value=2*np.pi,description='Period'),
                               C=widgets.FloatSlider(min=-3,max=3,step=0.1,value=0,description='Phase'),
                               D=widgets.FloatSlider(min=0,max=3,step=0.1,value=0,description='Offset'),
                               )
output = interactive_plot.children[-1]
interactive_plot
# -

# ## Spectrum

# We have seen that a harmonic function can be described, among other things, by its period $B$.
#
# The period $B$ is related to the frequency $f$:
# $$
# f = {{1}\over{B}}
# $$

# We now recast our sine function as a *time series* by replacing the variable $x$ with the time $t$:
# $$
# f(t) = A \sin\left({{2\pi t}\over{B}} \right)
# $$

# We build a *time series* from two sine functions,
# one with frequency $f_1=4$ Hz and amplitude $A_1=1$,
# and one with frequency $f_2=7$ Hz and amplitude $A_2=0.5$:
# $$
# f(t) = A_1 \sin(2\pi f_1 t) + A_2 \sin(2\pi f_2 t)
# $$

# +
def plot_timeseries(A1,A2,f1,f2,noise):
    # define time series
    rate = 120.0
    t = np.arange(0, 10, 1/rate)
    x = A1*np.sin(2*np.pi*f1*t) + A2*np.sin(2*np.pi*f2*t) + np.random.randn(len(t))*noise
    # plot time series
    plt.figure(figsize=(12.0, 6.0))
    plt.xlim([0,10])
    plt.xticks([0,2,4,6,8,10])
    plt.xlabel('Time [s]')
    plt.ylim([-3.5,3.5])
    plt.yticks([-2,-1,0,1,2])
    plt.plot(t,x)
    plt.show()

# call interactive module
interactive_plot = interactive(plot_timeseries,
                               A1=widgets.FloatSlider(min=0,max=3,step=0.1,value=1,description='Amplitude 1'),
                               A2=widgets.FloatSlider(min=0,max=3,step=0.1,value=0.5,description='Amplitude 2'),
                               f1=widgets.IntSlider(min=1,max=10,step=1,value=4,description='Frequency 1'),
                               f2=widgets.IntSlider(min=1,max=10,step=1,value=7,description='Frequency 2'),
                               noise=widgets.FloatSlider(min=0,max=1,step=0.1,value=0.2,description='Noise'),
                               )
output = interactive_plot.children[-1]
interactive_plot
# -

# We use the *one-dimensional real Fourier transform* to analyze the spectrum of this time series.
# +
def plot_spectrum(A1,A2,f1,f2,noise):
    # define time series
    rate = 120.0
    t = np.arange(0, 10, 1/rate)
    x = A1*np.sin(2*np.pi*f1*t) + A2*np.sin(2*np.pi*f2*t) + np.random.randn(len(t))*noise
    # calculate spectrum / power spectrum in dB
    p1 = np.abs(np.fft.rfft(x))
    p2 = 20*np.log10(np.abs(np.fft.rfft(x)))
    f = np.linspace(0, rate/2, len(p1))
    # plot spectrum
    plt.figure(figsize=(12.0, 6.0))
    plt.xticks([0,4,7,10,15,20,25,30])
    plt.xlabel('Frequency [Hz]')
    plt.plot(f,p1,label='Spectrum')
    plt.plot(f,p2,label='Spectrum (dB)')
    plt.legend()
    plt.show()

# call interactive module
interactive_plot = interactive(plot_spectrum,
                               A1=widgets.FloatSlider(min=0,max=3,step=0.1,value=1,description='Amplitude 1'),
                               A2=widgets.FloatSlider(min=0,max=3,step=0.1,value=0.5,description='Amplitude 2'),
                               f1=widgets.IntSlider(min=1,max=10,step=1,value=4,description='Frequency 1'),
                               f2=widgets.IntSlider(min=1,max=10,step=1,value=7,description='Frequency 2'),
                               noise=widgets.FloatSlider(min=0,max=1,step=0.1,value=0.2,description='Noise'),
                               )
output = interactive_plot.children[-1]
interactive_plot
# -

# ... done
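# A quick non-interactive cross-check of the spectrum cell above: with the noise
# turned off, the two largest peaks of the real FFT should sit at exactly $f_1=4$ Hz
# and $f_2=7$ Hz (the 10 s window holds a whole number of cycles, so both components
# fall on exact frequency bins):

```python
import numpy as np

# Noise-free version of the two-component signal used above
rate = 120.0
t = np.arange(0, 10, 1/rate)
x = 1.0*np.sin(2*np.pi*4*t) + 0.5*np.sin(2*np.pi*7*t)

# Real FFT; the frequency axis runs from 0 Hz to the Nyquist frequency rate/2
p = np.abs(np.fft.rfft(x))
f = np.linspace(0, rate/2, len(p))

# Sort bins by amplitude: the strongest is the A1=1.0 component at 4 Hz,
# the second strongest the A2=0.5 component at 7 Hz
order = np.argsort(p)[::-1]
print(round(f[order[0]]), round(f[order[1]]))  # prints: 4 7
```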
.ipynb_checkpoints/AGII_lab01_Trigonometrie-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CoNLL_4.ipynb # # This notebook contains the fourth part of the model training and analysis code from our CoNLL-2020 paper, ["Identifying Incorrect Labels in the CoNLL-2003 Corpus"](https://www.aclweb.org/anthology/2020.conll-1.16/). # # If you're new to the Text Extensions for Pandas library, we recommend that you start # by reading through the notebook [`Analyze_Model_Outputs.ipynb`](https://github.com/CODAIT/text-extensions-for-pandas/blob/master/notebooks/Analyze_Model_Outputs.ipynb), which explains the # portions of the library that we use in the notebooks in this directory. # # ### Summary # # This notebook repeats the model training process from `CoNLL_3.ipynb`, but performs a 10-fold cross-validation. This process involves training a total of 170 models -- 10 groups of 17. Next, this notebook evaluates each group of models over the holdout set from the associated fold of the cross-validation. Then it aggregates together these outputs and uses the same techniques used in `CoNLL_2.ipynb` to flag potentially-incorrect labels. Finally, the notebook writes out CSV files containing ranked lists of potentially-incorrect labels. # # # # Libraries and constants # + # Libraries import numpy as np import pandas as pd import os import sys import time import torch import transformers from typing import * import sklearn.model_selection import sklearn.pipeline import matplotlib.pyplot as plt import multiprocessing import gc # And of course we need the text_extensions_for_pandas library itself. try: import text_extensions_for_pandas as tp except ModuleNotFoundError as e: raise Exception("text_extensions_for_pandas package not found on the Jupyter " "kernel's path. 
Please either run:\n" " ln -s ../../text_extensions_for_pandas .\n" "from the directory containing this notebook, or use a Python " "environment on which you have used `pip` to install the package.") # Code shared among notebooks is kept in util.py, in this directory. import util # BERT Configuration # Keep this in sync with `CoNLL_3.ipynb`. #bert_model_name = "bert-base-uncased" #bert_model_name = "bert-large-uncased" bert_model_name = "dslim/bert-base-NER" tokenizer = transformers.BertTokenizerFast.from_pretrained(bert_model_name, add_special_tokens=True) bert = transformers.BertModel.from_pretrained(bert_model_name) # If False, use cached values, provided those values are present on disk _REGENERATE_EMBEDDINGS = True _REGENERATE_MODELS = True # Number of dimensions that we reduce the BERT embeddings down to when # training reduced-quality models. #_REDUCED_DIMS = [8, 16, 32, 64, 128, 256] _REDUCED_DIMS = [32, 64, 128, 256] # How many models we train at each level of dimensionality reduction _MODELS_AT_DIM = [4] * len(_REDUCED_DIMS) # Consistent set of random seeds to use when generating dimension-reduced # models. Index is [index into _REDUCED_DIMS, model number], and there are # lots of extra entries so we don't need to resize this matrix. from numpy.random import default_rng _MASTER_SEED = 42 rng = default_rng(_MASTER_SEED) _MODEL_RANDOM_SEEDS = rng.integers(0, 1e6, size=(8, 8)) # Create a Pandas categorical type for consistent encoding of categories # across all documents. _ENTITY_TYPES = ["LOC", "MISC", "ORG", "PER"] token_class_dtype, int_to_label, label_to_int = tp.io.conll.make_iob_tag_categories(_ENTITY_TYPES) # Parameters for splitting the corpus into folds _KFOLD_RANDOM_SEED = _MASTER_SEED _KFOLD_NUM_FOLDS = 10 # - # # Read inputs # # Read in the corpus, retokenize it with the BERT tokenizer, add BERT embeddings, and convert # to a single dataframe. # Download and cache the data set. # NOTE: This data set is licensed for research use only. 
Be sure to adhere
# to the terms of the license when using this data set!
data_set_info = tp.io.conll.maybe_download_conll_data("outputs")
data_set_info

# The raw dataset in its original tokenization
corpus_raw = {}
for fold_name, file_name in data_set_info.items():
    df_list = tp.io.conll.conll_2003_to_dataframes(file_name,
                                                   ["pos", "phrase", "ent"],
                                                   [False, True, True])
    corpus_raw[fold_name] = [
        df.drop(columns=["pos", "phrase_iob", "phrase_type"])
        for df in df_list
    ]

# Use Ray to make things faster
import ray
if ray.is_initialized():
    ray.shutdown()
ray.init()

# +
# Retokenize with the BERT tokenizer and optionally regenerate embeddings.
actor_pool = ray.util.actor_pool.ActorPool([
    util.BertActor.remote(bert_model_name, token_class_dtype,
                          compute_embeddings=_REGENERATE_EMBEDDINGS)
    for i in range(multiprocessing.cpu_count())
])

bert_toks_by_fold = {}
for fold_name in corpus_raw.keys():
    print(f"Processing fold '{fold_name}'...")
    raw = corpus_raw[fold_name]
    for tokens_df in raw:
        actor_pool.submit(lambda a, v: a.process_doc.remote(v), tokens_df)
    bert_toks_by_fold[fold_name] = tp.jupyter.run_with_progress_bar(
        len(raw), lambda i: actor_pool.get_next())

# The actors will stay active until their associated Python objects
# go out of scope and are garbage-collected.
del actor_pool
gc.collect(0)

bert_data = bert_toks_by_fold

# +
# Create a single dataframe of annotated tokens for the entire corpus
if _REGENERATE_EMBEDDINGS:
    corpus_df = tp.io.conll.combine_folds(bert_data)
    # We can't currently serialize span columns that cover multiple documents (see issue 73),
    # so the Feather file won't contain them. Drop these columns for consistency when
    # we regenerate the embeddings here.
    cols_to_drop = [c for c in corpus_df.columns if "span" in c]
    corpus_df.drop(columns=cols_to_drop, inplace=True)
else:
    # Use embeddings computed in CoNLL_3.ipynb
    _EMBEDDINGS_FILE = "outputs/corpus.feather"
    if not os.path.exists(_EMBEDDINGS_FILE):
        raise ValueError(f"Precomputed embeddings not found at {_EMBEDDINGS_FILE}. "
                         f"Please rerun CoNLL_3.ipynb to regenerate this file, or "
                         f"set _REGENERATE_EMBEDDINGS to True in the previous cell.")
    corpus_df = pd.read_feather(_EMBEDDINGS_FILE)
corpus_df.head()
# -

# # Prepare folds for a 10-fold cross-validation
#
# We divide the documents of the corpus into 10 random samples.

# IDs for each of the keys
doc_keys = corpus_df[["fold", "doc_num"]].drop_duplicates().reset_index(drop=True)
doc_keys

# +
# We want to split the documents randomly into _KFOLD_NUM_FOLDS sets, then
# for each stage of cross-validation train a model on the union of
# (_KFOLD_NUM_FOLDS - 1) of them while testing on the remaining fold.
# sklearn.model_selection doesn't implement this approach directly,
# but we can piece it together with some help from NumPy.
#from numpy.random import default_rng
rng = np.random.default_rng(seed=_KFOLD_RANDOM_SEED)
iloc_order = rng.permutation(len(doc_keys.index))

kf = sklearn.model_selection.KFold(n_splits=_KFOLD_NUM_FOLDS)
train_keys = []
test_keys = []
for train_ix, test_ix in kf.split(iloc_order):
    # sklearn.model_selection.KFold gives us a partitioning of the
    # numbers from 0 to len(iloc_order). Use that partitioning to
    # choose elements from iloc_order, then use those elements to
    # index into doc_keys.
    train_iloc = iloc_order[train_ix]
    test_iloc = iloc_order[test_ix]
    train_keys.append(doc_keys.iloc[train_iloc])
    test_keys.append(doc_keys.iloc[test_iloc])

train_keys[1].head(10)
# -

# # Dry run: Train and evaluate models on the first fold
#
# Train models on the first of our 10 folds and manually examine some of the
# model outputs.
# Gather the training set together by joining our list of documents # with the entire corpus on the composite key <fold, doc_num> train_inputs_df = corpus_df.merge(train_keys[0]) train_inputs_df # Repeat the same process for the test set test_inputs_df = corpus_df.merge(test_keys[0]) test_inputs_df # ## Train an ensemble of models # + import importlib util = importlib.reload(util) import sklearn.linear_model # Wrap util.train_reduced_model in a Ray task @ray.remote def train_reduced_model_task( x_values: np.ndarray, y_values: np.ndarray, n_components: int, seed: int, max_iter: int = 10000) -> sklearn.base.BaseEstimator: return util.train_reduced_model(x_values, y_values, n_components, seed, max_iter) # Ray task that trains a model using the entire embedding @ray.remote def train_full_model_task(x_values: np.ndarray, y_values: np.ndarray, max_iter: int = 10000) -> sklearn.base.BaseEstimator: return ( sklearn.linear_model.LogisticRegression( multi_class="multinomial", max_iter=max_iter ) .fit(x_values, y_values) ) def train_models(train_df: pd.DataFrame) \ -> Dict[str, sklearn.base.BaseEstimator]: """ Train an ensemble of models with different levels of noise. :param train_df: DataFrame of labeled training documents, with one row per token. Must contain the columns "embedding" (precomputed BERT embeddings) and "token_class_id" (integer ID of token type) :returns: A mapping from mnemonic model name to trained model """ X = train_df["embedding"].values Y = train_df["token_class_id"] # Push the X and Y values to Plasma so that our tasks can share them. 
X_id = ray.put(X.to_numpy().copy()) Y_id = ray.put(Y.to_numpy().copy()) names_list = [] futures_list = [] print(f"Training model using all of " f"{X._tensor.shape[1]}-dimension embeddings.") names_list.append(f"{X._tensor.shape[1]}_1") futures_list.append(train_full_model_task.remote(X_id, Y_id)) for i in range(len(_REDUCED_DIMS)): num_dims = _REDUCED_DIMS[i] num_models = _MODELS_AT_DIM[i] for j in range(num_models): model_name = f"{num_dims}_{j + 1}" seed = _MODEL_RANDOM_SEEDS[i, j] print(f"Training model '{model_name}' (#{j + 1} " f"at {num_dims} dimensions) with seed {seed}") names_list.append(model_name) futures_list.append(train_reduced_model_task.remote(X_id, Y_id, num_dims, seed)) #models[model_name] = util.train_reduced_model(X, Y, num_dims, seed) # Block until all training tasks have completed and fetch the resulting models. models_list = ray.get(futures_list) models = { n: m for n, m in zip(names_list, models_list) } return models def maybe_train_models(train_df: pd.DataFrame, fold_num: int): import pickle _CACHED_MODELS_FILE = f"outputs/fold_{fold_num}_models.pickle" if _REGENERATE_MODELS or not os.path.exists(_CACHED_MODELS_FILE): m = train_models(train_df) print(f"Trained {len(m)} models.") with open(_CACHED_MODELS_FILE, "wb") as f: pickle.dump(m, f) else: # Use a cached model when using cached embeddings with open(_CACHED_MODELS_FILE, "rb") as f: m = pickle.load(f) print(f"Loaded {len(m)} models from {_CACHED_MODELS_FILE}.") return m models = maybe_train_models(train_inputs_df, 0) print(f"Model names after loading or training: {', '.join(models.keys())}") # + # Uncomment this code if you need to have the cells that follow ignore # some of the models saved to disk. 
# _MODEL_SIZES_TO_KEEP = [32, 64, 128, 256] # _RUNS_TO_KEEP = [4] * len(_MODEL_SIZES_TO_KEEP) # _OTHER_MODELS_TO_KEEP = ["768_1"] # to_keep = _OTHER_MODELS_TO_KEEP.copy() # for size in _MODEL_SIZES_TO_KEEP: # for num_runs in _RUNS_TO_KEEP: # for i in range(num_runs): # to_keep.append(f"{size}_{i+1}") # models = {k: v for k, v in models.items() if k in to_keep} # print(f"Model names after filtering: {', '.join(models.keys())}") # - # ## Evaluate the models on this fold's test set # + def eval_models(models: Dict[str, sklearn.base.BaseEstimator], test_df: pd.DataFrame): """ Bulk-evaluate an ensemble of models generated by :func:`train_models`. :param models: Output of :func:`train_models` :param test_df: DataFrame of labeled test documents, with one row per token. Must contain the columns "embedding" (precomputed BERT embeddings) and "token_class_id" (integer ID of token type) :returns: A dictionary from model name to results of :func:`util.analyze_model` """ todo = [(name, model) for name, model in models.items()] results = tp.jupyter.run_with_progress_bar( len(todo), lambda i: util.analyze_model(test_df, int_to_label, todo[i][1], bert_data, corpus_raw, expand_matches=True), "model" ) return {t[0]: result for t, result in zip(todo, results)} evals = eval_models(models, test_inputs_df) # + # Summarize how each of the models does on the test set. 
def make_summary_df(evals_df: pd.DataFrame) -> pd.DataFrame: global_scores = [r["global_scores"] for r in evals_df.values()] return pd.DataFrame({ "name": list(evals_df.keys()), "dims": pd.Series([n.split("_")[0] for n in evals_df.keys()]).astype(int), "num_true_positives": [r["num_true_positives"] for r in global_scores], "num_entities": [r["num_entities"] for r in global_scores], "num_extracted": [r["num_extracted"] for r in global_scores], "precision": [r["precision"] for r in global_scores], "recall": [r["recall"] for r in global_scores], "F1": [r["F1"] for r in global_scores] }).sort_values("dims") summary_df = make_summary_df(evals) # + # Plot the tradeoff between dimensionality and F1 score x = summary_df["dims"] y = summary_df["F1"] plt.figure(figsize=(4,4)) plt.scatter(x, y) #plt.yscale("log") #plt.xscale("log") plt.ylim([0.75, 1.0]) plt.xlabel("Number of Dimensions") plt.ylabel("F1 Score") # Also dump the raw data to a local file. pd.DataFrame({"num_dims": x, "f1_score": y}).to_csv("outputs/dims_vs_f1_score_xval.csv", index=False) plt.show() # - # ## Aggregate the model results and compare with the gold standard full_results = util.merge_model_results(evals) full_results # Drop Boolean columns for now results = full_results[["fold", "doc_offset", "span", "ent_type", "gold", "num_models"]] results (results[results["gold"] == True][["num_models", "span"]] .groupby("num_models").count() .rename(columns={"span": "count"})) (results[results["gold"] == False][["num_models", "span"]] .groupby("num_models").count() .rename(columns={"span": "count"})) # Pull out some hard-to-find examples, sorting by document to make labeling easier hard_to_get = results[results["gold"]].sort_values(["num_models", "fold", "doc_offset"]).head(20) hard_to_get # ### TODO: Relabel the above 20 examples with a Markdown table (copy from CSV) # # Hardest results not in the gold standard for models to avoid hard_to_avoid = results[~results["gold"]].sort_values(["num_models", "fold", 
"doc_offset"], ascending=[False, True, True]).head(20) hard_to_avoid # ### TODO: Relabel the above 20 examples (copy from CSV) # # # Remainder of Experiment # # For each of the 10 folds, train a model on the fold's training set and run # analysis on the fold's test set. # + def handle_fold(fold_ix: int) -> Dict[str, Any]: """ The per-fold processing of the previous section's cells, collapsed into a single function. :param fold_ix: 0-based index of fold :returns: a dictionary that maps data structure name to data structure """ # To avoid accidentally picking up leftover data from a previous cell, # variables local to this function are named with a leading underscore _train_inputs_df = corpus_df.merge(train_keys[fold_ix]) _test_inputs_df = corpus_df.merge(test_keys[fold_ix]) _models = maybe_train_models(_train_inputs_df, fold_ix) _evals = eval_models(_models, _test_inputs_df) _summary_df = make_summary_df(_evals) _full_results = util.merge_model_results(_evals) _results = _full_results[["fold", "doc_offset", "span", "ent_type", "gold", "num_models"]] return { "models": _models, "summary_df": _summary_df, "full_results": _full_results, "results": _results } # Start with the (already computed) results for fold 0 results_by_fold = [ { "models": models, "summary_df": summary_df, "full_results": full_results, "results": results } ] for fold in range(1, _KFOLD_NUM_FOLDS): print(f"Starting fold {fold}.") results_by_fold.append(handle_fold(fold)) print(f"Done with fold {fold}.") # - # Combine all the results into a single dataframe for the entire corpus all_results = pd.concat([r["results"] for r in results_by_fold]) all_results # # Generate CSV files for manual labeling # Reformat for output dev_and_test_results = all_results[all_results["fold"].isin(["dev", "test"])] in_gold_to_write, not_in_gold_to_write = util.csv_prep(dev_and_test_results, "num_models") in_gold_to_write not_in_gold_to_write in_gold_to_write.to_csv("outputs/CoNLL_4_in_gold.csv", index=False) 
not_in_gold_to_write.to_csv("outputs/CoNLL_4_not_in_gold.csv", index=False) # Repeat for the contents of the original training set train_results = all_results[all_results["fold"] == "train"] in_gold_to_write, not_in_gold_to_write = util.csv_prep(train_results, "num_models") in_gold_to_write not_in_gold_to_write in_gold_to_write.to_csv("outputs/CoNLL_4_train_in_gold.csv", index=False) not_in_gold_to_write.to_csv("outputs/CoNLL_4_train_not_in_gold.csv", index=False)
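# The fold construction used earlier in this notebook (permute the document keys
# once, then let `KFold` partition the positions) can be replayed in miniature;
# the 12 toy document ids below are illustrative, and scikit-learn is assumed
# to be installed:

```python
import numpy as np
from sklearn.model_selection import KFold

# Toy stand-in for the notebook's doc_keys: 12 document identifiers
doc_ids = np.array(["doc_%d" % i for i in range(12)])

# Shuffle the positions once with a fixed seed, as the notebook does
rng = np.random.default_rng(seed=42)
iloc_order = rng.permutation(len(doc_ids))

# KFold partitions the positions 0..11; indexing through iloc_order turns
# that into a random, non-overlapping document-level train/test split
all_test_docs = []
for train_ix, test_ix in KFold(n_splits=4).split(iloc_order):
    train_docs = doc_ids[iloc_order[train_ix]]
    test_docs = doc_ids[iloc_order[test_ix]]
    assert set(train_docs).isdisjoint(test_docs)  # no leakage inside a fold
    all_test_docs.extend(test_docs)

# Across all folds, each document appears in exactly one test set
print(sorted(all_test_docs) == sorted(doc_ids))  # prints: True
```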
tutorials/corpus/CoNLL_4.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="OMeeiX5hGv7D"
# BFT with Path
#

# + id="-onr1C_pGvRn"
def get_all_social_paths(self, user_id):
    """
    Takes a user's user_id as an argument

    Returns a dictionary containing every user in that user's
    extended network with the shortest friendship path between them.

    The key is the friend's ID and the value is the path.
    """
    visited = {}  # Note that this is a dictionary, not a set
    # !!!! IMPLEMENT ME
    # bft with path
    # create empty queue (Queue is the lesson's helper class with
    # enqueue / dequeue / size methods)
    q = Queue()
    # enqueue the first path: a list holding only the starting user
    q.enqueue([user_id])
    # while the queue is not empty
    while q.size() > 0:
        # dequeue the next path
        path = q.dequeue()
        # the last element in the path [-1] is the current user
        newuser_id = path[-1]
        # check if visited
        if newuser_id not in visited:
            # the first path that reaches a user is a shortest path
            visited[newuser_id] = path
            # for each friend_id in friendships[newuser_id]:
            for friend_id in self.friendships[newuser_id]:
                # copy the path as new_path
                new_path = path.copy()
                # append friend_id (not newuser_id) to new_path
                new_path.append(friend_id)
                # enqueue new_path
                q.enqueue(new_path)
    return visited

# + [markdown] id="d3oXzTsLGwkQ"
# Tracking Collisions using linear paths

# + id="l242IaHVGw6W"
def populate_graph_linear(self, num_users, avg_friendships):
    """
    Takes a number of users and an average number of friendships
    as arguments

    Creates that number of users and a randomly distributed friendships
    between those users.

    The number of users must be greater than the average number of
    friendships.

    ***for use with sparse graphs***
    """
    # Reset graph
    self.last_id = 0
    self.users = {}
    self.friendships = {}
    # !!!! IMPLEMENT ME
    # Add users
    for i in range(0, num_users):
        self.add_user(f'User {i+1}')
    # Create friendships
    # pick random user pairs until we reach the target number of
    # directed friendships; failed picks (duplicates/self) count as collisions
    total_friendships = 0
    collisions = 0
    target_friendships = (num_users * avg_friendships)
    # utilize random library
    while total_friendships < target_friendships:
        user_id = random.randint(1, self.last_id)
        friend_id = random.randint(1, self.last_id)
        if self.add_friendship(user_id, friend_id):
            # a successful add stores the friendship in both directions
            total_friendships += 2
        else:
            collisions += 1
    print(f'COLLISIONS: {collisions}')

# + [markdown] id="TciKucV9nabK"
# Time testing for performance (collisions)
#

# + id="CMU_OhNUngkF"
if __name__ == '__main__':
    # assumes the SocialGraph class (with these methods attached) plus the
    # random and time modules are available from the lesson's module
    sg = SocialGraph()

    num_users = 200
    avg_friendships = 100

    start_time = time.time()
    sg.populate_graph(num_users, avg_friendships)
    end_time = time.time()
    print(f'Quadratic Runtime: {end_time - start_time} sec')

    start_time = time.time()
    sg.populate_graph_linear(num_users, avg_friendships)
    end_time = time.time()
    print(f'Linear Runtime: {end_time - start_time} sec')

    # sg = SocialGraph()
    # sg.populate_graph(10, 2)
    # print(sg.friendships)
    # connections = sg.get_all_social_paths(1)
    # print(connections)
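# The same breadth-first "queue of paths" idea can be run standalone; a minimal
# sketch using `collections.deque` in place of the lesson's `Queue` class, with an
# illustrative adjacency dict (the function name and sample data are not from the
# lesson code):

```python
from collections import deque

def shortest_social_paths(friendships, start_id):
    """Map each user reachable from start_id to a shortest friendship path."""
    visited = {}                 # user_id -> shortest path (list of ids)
    q = deque([[start_id]])      # the queue holds whole paths, not single ids
    while q:
        path = q.popleft()
        user = path[-1]          # last element is the current user
        if user not in visited:  # first arrival = fewest hops (BFS property)
            visited[user] = path
            for friend in friendships[user]:
                q.append(path + [friend])  # copy-and-extend the path
    return visited

friendships = {1: {2, 3}, 2: {1, 4}, 3: {1}, 4: {2, 5}, 5: {4}, 6: set()}
paths = shortest_social_paths(friendships, 1)
print(paths[5])    # prints: [1, 2, 4, 5]
print(6 in paths)  # prints: False  (user 6 is unreachable from user 1)
```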
CS41long_U1S1M4_Notes_RJProctor.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python38-azureml # kernelspec: # display_name: Python 3.8 - AzureML # language: python # name: python38-azureml # --- # + [markdown] nteract={"transient": {"deleting": false}} # # Azure Synapse - Configure Azure ML and Azure Synapse Analytics # # __Notebook Version:__ 1.0<br> # __Python Version:__ Python 3.8 - AzureML<br> # __Required Packages:__ No<br> # __Platforms Supported:__ Azure Machine Learning Notebooks, Spark Version 3.1 # # __Data Source Required:__ No # # ### Description # This notebook provides step-by-step instructions to set up Azure ML and Azure Synapse Analytics environment for your big data analytics scenarios that leverage Azure Synapse Spark engine. It covers: </br> # Configuring your Azure Synapse workspace, # creating a new [Azure Synapse Spark pool](https://docs.microsoft.com/azure/synapse-analytics/spark/apache-spark-overview#spark-pool-architecture), # configuring your Azure Machine Learning workspace, and creating a new [link service](https://docs.microsoft.com/azure/machine-learning/how-to-link-synapse-ml-workspaces) to link Azure Synapse with Azure Machine Learning workspace. # Additionally, the notebook provides the steps to export your data from a Log Analytics workspace to an [Azure Data Lake Storage gen2](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-introduction) that you can use for big data analytics.<br> # *** Python modules download may be needed. ***<br> # *** Please run the cells sequentially to avoid errors. Please do not use "run all cells". *** <br> # # ## Table of Contents # 1. Warm-up # 2. Authentication to Azure Resources # 3. Configure Azure Synapse Workspace # 4. Configure Azure Synapse Spark Pool # 5. Configure Azure ML Workspace and Linked Services # 6. Export Data from Azure Log Analytics to Azure Data Lake Storage Gen2 # 7. 
Bonus # + [markdown] nteract={"transient": {"deleting": false}} # ## 1. Warm-up # + gather={"logged": 1633641554118} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # Load Python libraries that will be used in this notebook from azure.common.client_factory import get_client_from_cli_profile from azure.common.credentials import get_azure_cli_credentials from azure.mgmt.resource import ResourceManagementClient from azure.loganalytics.models import QueryBody from azure.mgmt.loganalytics import LogAnalyticsManagementClient from azure.loganalytics import LogAnalyticsDataClient from azureml.core import Workspace, LinkedService, SynapseWorkspaceLinkedServiceConfiguration, Datastore from azureml.core.compute import SynapseCompute, ComputeTarget import json import os import pandas as pd import ipywidgets from IPython.display import display, HTML, Markdown from urllib.parse import urlparse # + gather={"logged": 1633641556419} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # Functions will be used in this notebook def read_config_values(file_path): "This loads pre-generated parameters for Microsoft Sentinel Workspace" with open(file_path) as json_file: if json_file: json_config = json.load(json_file) return (json_config["tenant_id"], json_config["subscription_id"], json_config["resource_group"], json_config["workspace_id"], json_config["workspace_name"], json_config["user_alias"], json_config["user_object_id"]) return None def has_valid_token(): "Check to see if there is a valid AAD token" try: credentials, sub_id = get_azure_cli_credentials() creds = credentials._get_cred(resource=None) token = creds._token_retriever()[2] print("Successfully signed in.") return True except Exception as ex: if "Please run 'az login' to setup account" in str(ex): print("Please sign in first.") return False elif "AADSTS70043: The refresh token has expired" in str(ex): message = "**The 
refresh token has expired. <br> Please continue your login process. Then: <br> 1. If you plan to run multiple notebooks on the same compute instance today, you may restart the compute instance by clicking 'Compute' on left menu, then select the instance, clicking 'Restart'; <br> 2. Otherwise, you may just restart the kernel from top menu. <br> Finally, close and re-load the notebook, then re-run cells one by one from the top.**" display(Markdown(message)) return False elif "[Errno 2] No such file or directory: '/home/azureuser/.azure/azureProfile.json'" in str(ex): print("Please sign in.") return False else: print(str(ex)) return False except: print("Please restart the kernel, and run 'az login'.") return False def convert_slist_to_dataframe(text, grep_text, grep_field_inx, remove_head, remove_tail): try: "This function converts IPython.utils.text.SList to Pandas.dataFrame" grep_result = text.grep(grep_text,field=grep_field_inx) df = pd.DataFrame(data=grep_result) df[grep_field_inx] = df[grep_field_inx].str[remove_head:].str[:remove_tail] except: df = pd.DataFrame() finally: return df def process_la_result(result): "This function processes data returned from Azure LogAnalyticsDataClient, it returns pandas DataFrame." 
json_result = result.as_dict() cols = pd.json_normalize(json_result['tables'][0], 'columns') final_result = pd.json_normalize(json_result['tables'][0], 'rows') if final_result.shape[0] != 0: final_result.columns = cols.name return final_result def set_continuation_flag(flag): "Set continuation flag message" if flag == False: print("continuation flag is false.") return flag def validate_input(regex, text): "User input validation" import re pattern = re.compile(regex, re.I) if text == None: print("No Input found.") return False; elif not re.fullmatch(pattern, text): print("Input validation failed.") return False; else: return True; # + gather={"logged": 1633641559682} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # Calling the above function to populate Microsoft Sentinel workspace parameters # The file, config.json, was generated by the system, however, you may modify the values, or manually set the variables tenant_id, subscription_id, resource_group, workspace_id, workspace_name, user_alias, user_object_id = read_config_values('config.json'); print("Current Microsoft Sentinel Workspace: " + workspace_name) # + [markdown] nteract={"transient": {"deleting": false}} # ## 2. Authentication to Azure Resources # + gather={"logged": 1633641564008} # Azure CLI is used to get device code to login into Azure, you need to copy the code and open the DeviceLogin site. 
# You may add [--tenant $tenant_id] to the command if has_valid_token() == False: # !echo -e '\e[42m' # !az login --tenant $tenant_id --use-device-code # Initializing Azure Storage and Azure Resource Python clients resource_client = get_client_from_cli_profile(ResourceManagementClient, subscription_id = subscription_id) # Set continuation_flag if resource_client == None: continuation_flag = set_continuation_flag(False) else: continuation_flag = set_continuation_flag(True) # !az account set --subscription $subscription_id # + [markdown] nteract={"transient": {"deleting": false}} # ## 3. Configure Azure Synapse Workspace # In this section, you first select an Azure resource group, then select an Azure Synapse workspace. # # + gather={"logged": 1633628547832} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # 1. Select Azure Resource Group for Synapse if continuation_flag: group_list = resource_client.resource_groups.list() synapse_group_dropdown = ipywidgets.Dropdown(options=sorted([g.name for g in group_list]), description='Groups:') display(synapse_group_dropdown) # + gather={"logged": 1633628557616} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # 2. 
Select an Azure Synapse workspace if continuation_flag and synapse_group_dropdown.value != None: # response = !az synapse workspace list --subscription $subscription_id --resource-group $synapse_group_dropdown.value if response != None: name_list = convert_slist_to_dataframe(response, '"name', 0, 13, -2) if len(name_list) > 0: synapse_workspace_dropdown = ipywidgets.Dropdown(options=name_list[0], description='Synapse WS:') display(synapse_workspace_dropdown) else: print("No workspace found, please select a Resource Group with a Synapse workspace.") else: continuation_flag = False print("Please create an Azure Synapse Analytics Workspace before proceeding.") else: continuation_flag = False print("Need to have an Azure Resource Group to proceed.") # + [markdown] nteract={"transient": {"deleting": false}} # ## 4. Configure Azure Synapse Spark Pool # In this section, you will create a Spark pool if you don't have one yet. # 1. Enter a pool name; naming rules: it must contain only letters or numbers (no special characters), be 15 or fewer characters, start with a letter, not contain reserved words, and be unique in the workspace.<br> # 2. Create the pool <br> # 3. List Spark pools for the Azure Synapse workspace # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633641333970} new_spark_pool_name = input("New Spark pool name:") # + gather={"logged": 1633641369157} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # 1. !!PROCEED THIS ONLY WHEN YOU WANT TO: Create an Azure Synapse Spark Pool!!
if continuation_flag and validate_input(r"[A-Za-z0-9]{1,15}", new_spark_pool_name): # !az synapse spark pool create --name $new_spark_pool_name --subscription $subscription_id \ # --workspace-name $synapse_workspace_dropdown.value \ # --resource-group $synapse_group_dropdown.value \ # --spark-version 3.1 --node-count 3 --node-size Small --debug print('====== Task completed. ======') elif continuation_flag: print("Please enter a valid Spark pool name.") # + [markdown] nteract={"transient": {"deleting": false}} # Run the cell below to select a Spark pool that you want to use from the Spark pool list. # + gather={"logged": 1633641662510} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # 2. List Azure Synapse Spark Pool if continuation_flag and synapse_workspace_dropdown.value != None: # response_pool = !az synapse spark pool list --resource-group $synapse_group_dropdown.value --workspace-name $synapse_workspace_dropdown.value --subscription $subscription_id if response_pool != None and len(response_pool.grep("ERROR: AADSTS70043")) == 0: pool_list = convert_slist_to_dataframe(response_pool, '"name', 0, 13, -2) if len(pool_list) > 0: spark_pool_dropdown = ipywidgets.Dropdown(options=pool_list[0], description='Spark Pools:') display(spark_pool_dropdown) else: print("First make sure you have logged into the system.") else: continuation_flag = False print("Need to have an Azure Synapse Workspace to proceed.") # + [markdown] nteract={"transient": {"deleting": false}} # ## 5. Configure Azure ML Workspace and Linked Services # In this section, you will create a linked service to link the selected Azure ML workspace to the selected Azure Synapse workspace; you need to be an owner of the selected Synapse workspace to proceed. You can then attach a Spark pool to the linked service. <br> # 1. Select Azure resource group for Azure ML <br> # 2. Select Azure ML workspace <br> # 3.
Get existing linked services for selected Azure ML workspace </br> # 4. Enter a linked service name </br> # 5. Create the linked service </br> # 6. Enter a Synapse compute name </br> # 7. Attach the Spark pool to the linked service </br> # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633642278196} # Select Azure Resource Group for Azure ML if continuation_flag: aml_group_list = resource_client.resource_groups.list() aml_group_dropdown = ipywidgets.Dropdown(options=sorted([g.name for g in aml_group_list]), description='Groups:') display(aml_group_dropdown) # + gather={"logged": 1633642285758} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # Select Azure ML Workspace if continuation_flag and aml_group_dropdown.value != None: aml_workspace_result = Workspace.list(subscription_id=subscription_id, resource_group=aml_group_dropdown.value) if aml_workspace_result != None: aml_workspace_dropdown = ipywidgets.Dropdown(options=sorted(list(aml_workspace_result.keys())), description='AML WS:') display(aml_workspace_dropdown) else: continuation_flag = False print("Need to have a Azure Resource Group to proceed.") # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633642290757} # Get Linked services for selected AML workspace if continuation_flag and aml_workspace_dropdown.value != None: has_linked_service = False aml_workspace = Workspace.get(name=aml_workspace_dropdown.value, subscription_id=subscription_id, resource_group=aml_group_dropdown.value) aml_synapse_linked_service_list = LinkedService.list(aml_workspace) if aml_synapse_linked_service_list != None: for ls_name in [ls.name for ls in aml_synapse_linked_service_list]: display(ls_name) has_linked_service = True else: print("No linked service") continuation_flag = False # + [markdown] nteract={"transient": {"deleting": 
false}} # ** EXECUTE THE FOLLOWING CELL ONLY WHEN YOU WANT TO: Create a new AML - Synapse linked service! ** </br> # ** Owner role of the Synapse workspace is required to create a linked service. ** # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633642297847} linked_service_name=input('Linked service name:') # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1632356190410} # !!PROCEED THIS ONLY WHEN YOU WANT TO: Create new linked service!! if continuation_flag and aml_workspace != None and synapse_workspace_dropdown.value != None and linked_service_name != None: # Synapse Link Service Configuration synapse_link_config = SynapseWorkspaceLinkedServiceConfiguration(subscription_id = aml_workspace.subscription_id, resource_group = synapse_group_dropdown.value, name= synapse_workspace_dropdown.value) # Link workspaces and register Synapse workspace in Azure Machine Learning linked_service = LinkedService.register(workspace = aml_workspace, name = linked_service_name, linked_service_config = synapse_link_config) # + [markdown] nteract={"transient": {"deleting": false}} # ** EXECUTE THE FOLLOWING CELL ONLY WHEN YOU WANT TO: Attach the selected Spark pool to the newly created linked service! 
** # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633642305321} synapse_compute_name=input('Synapse compute name:') # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1632356221430} # !!PROCEED THIS ONLY WHEN YOU WANT TO: Attach the selected Spark pool to the above linked service if continuation_flag and aml_workspace != None and synapse_workspace_dropdown.value != None and linked_service != None and spark_pool_dropdown.value != None and synapse_compute_name != None: spark_attach_config = SynapseCompute.attach_configuration(linked_service, type='SynapseSpark', pool_name=spark_pool_dropdown.value) synapse_compute = ComputeTarget.attach(workspace = aml_workspace, name= synapse_compute_name, attach_configuration= spark_attach_config) synapse_compute.wait_for_completion() # + [markdown] nteract={"transient": {"deleting": false}} # ## 6. Export Data from Azure Log Analytics to Azure Data Lake Storage Gen2 # In this section, you can export Microsoft Sentinel data in Log Analytics to a selected ADLS Gen2 storage account. <br> # 1. Authenticate to Log Analytics <br> # 2. Select Log Analytics tables, no more than 10 tables. This step may take a few minutes. <br> # 3. List existing Azure storage accounts in the selected Synapse workspace<br> # 4. Set target ADLS Gen2 storage as the data export destination<br> # 5. List all existing data export rules in the storage account<br> # 6. Enter data export rule name <br> # 7. Create a new data export rule <br> # # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633642693855} # 1. Initialize Azure LogAnalyticsDataClient, which is used to access Microsoft Sentinel log data in Azure Log Analytics. # You may need to change resource_uri for various cloud environments.
resource_uri = "https://api.loganalytics.io" la_client = get_client_from_cli_profile(LogAnalyticsManagementClient, subscription_id = subscription_id) creds, _ = get_azure_cli_credentials(resource=resource_uri) la_data_client = LogAnalyticsDataClient(creds) # + [markdown] nteract={"transient": {"deleting": false}} # * In the following step, you may select no more than 10 tables for data export. This process may take a few minutes, please be patient. # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633642700739} # 2. Get all tables available using Kusto query language. If you need to know more about KQL, please check out the link provided at the introductory section. tables_result = None table_list = None all_tables_query = "union withsource = SentinelTableName * | distinct SentinelTableName | sort by SentinelTableName asc" if la_data_client != None: tables_result = la_data_client.query(workspace_id, QueryBody(query=all_tables_query)) if tables_result != None: table_list = process_la_result(tables_result) tables = sorted(table_list.SentinelTableName.tolist()) table_dropdown = ipywidgets.SelectMultiple(options=tables, row = 5, description='Tables:') display(table_dropdown) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633642712137} # 3. 
List AzureBlobFS Storage URL in Synapse linked service if continuation_flag and synapse_workspace_dropdown.value != None: # synapse_linked_service_response = !az synapse linked-service list --workspace-name $synapse_workspace_dropdown.value sls_list = convert_slist_to_dataframe(synapse_linked_service_response, '"url', 0, 14, -1) if len(sls_list) > 0: synapse_linked_service_dropdown = ipywidgets.Dropdown(options=sls_list[0], description='ADLS URL:') display(synapse_linked_service_dropdown) else: continuation_flag = False print("Please create an Azure Synapse linked service for storage before proceeding.") else: continuation_flag = False print("Need to have an Azure Synapse workspace to proceed.") # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633642899785} # 4. Set target ADLS Gen2 storage as data export destination if continuation_flag and synapse_linked_service_dropdown.value != None: adls_gen2_name = urlparse(synapse_linked_service_dropdown.value).netloc.split('.')[0] if continuation_flag and adls_gen2_name == None: # You may set ADLS Gen2 manually here: adls_gen2_name = "" if continuation_flag and synapse_group_dropdown.value != None and adls_gen2_name != None: adls_resource_id = '/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}'.format(subscription_id, synapse_group_dropdown.value, adls_gen2_name) else: continuation_flag = False print("Need to have a resource group and an ADLS Gen2 account to continue.") # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633642953956} # 5. List all data export rules # Keep in mind that you cannot use a destination that is already defined in a rule. The destination (resource id) must be unique across export rules in your workspace!!
if continuation_flag: # export_response = !az monitor log-analytics workspace data-export list --resource-group $resource_group --workspace-name $workspace_name if export_response != None: export_list = convert_slist_to_dataframe(export_response, '"resourceId', 0, 19, -2) if len(export_list) > 0: data_export_dropdown = ipywidgets.Dropdown(options=export_list[0], description='Data Exports:') display(data_export_dropdown) else: print("No data export rule was found") else: print("No data export rule was found, you may create one in the following step.") # + [markdown] nteract={"transient": {"deleting": false}} # ** EXECUTE THE FOLLOWING CELL ONLY WHEN YOU WANT TO: Export data tables from Log Analytics to the selected Azure Data Lake Storage Gen 2! ** # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633643007863} export_name=input('Export name:') # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1628200011200} # 6. !!PROCEED THIS ONLY WHEN YOU WANT TO: Export data from Log Analytics to Azure Data Lake Storage Gen 2 if continuation_flag and adls_resource_id != None and table_dropdown.value != None and export_name != None: tables = " ".join(table_dropdown.value) # !az monitor log-analytics workspace data-export create --resource-group $resource_group --workspace-name $workspace_name \ # --name $export_name --tables $tables --destination $adls_resource_id # + [markdown] nteract={"transient": {"deleting": false}} # ## Bonus # These are optional steps. 
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1633643058766} # List Log Analytics data export rules if continuation_flag: # export_response = !az monitor log-analytics workspace data-export list --resource-group $resource_group --workspace-name $workspace_name if export_response != None: export_rule_list = convert_slist_to_dataframe(export_response, '"name', 0, 13, -2) if len(export_rule_list) > 0: export_rule_dropdown = ipywidgets.Dropdown(options=export_rule_list[0], description='Export Rules:') display(export_rule_dropdown) # + [markdown] nteract={"transient": {"deleting": false}} # ** EXECUTE THE FOLLOWING CELL ONLY WHEN YOU WANT TO: Delete a data export rule by name! ** # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1628198253214} # 2-b. Delete a Log Analytics data export rule if continuation_flag and export_rule_dropdown.value != None: # result = !az monitor log-analytics workspace data-export delete --resource-group $resource_group --workspace-name $workspace_name --name $export_rule_dropdown.value --yes print(result)
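As an optional reference, the JSON shape that the `process_la_result` helper flattens can be illustrated with a small mock. The `tables`/`columns`/`rows` layout mirrors what `LogAnalyticsDataClient.query(...).as_dict()` returns, but the table names below are invented sample values, not real query output:

```python
import pandas as pd

# Hypothetical mock of result.as_dict() from LogAnalyticsDataClient.query();
# only the tables/columns/rows layout is meaningful, the values are made up.
mock_result = {
    "tables": [{
        "name": "PrimaryResult",
        "columns": [{"name": "SentinelTableName", "type": "string"}],
        "rows": [["SecurityAlert"], ["SigninLogs"]],
    }]
}

# The same flattening steps that process_la_result performs
cols = pd.json_normalize(mock_result["tables"][0], "columns")
final_result = pd.json_normalize(mock_result["tables"][0], "rows")
if final_result.shape[0] != 0:
    final_result.columns = cols.name
print(final_result)
```

This is why the helper reads the column names from the `columns` metadata and assigns them over the positional columns produced from `rows`.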
Configurate Azure ML and Azure Synapse Analytics.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <h1 align="center"> # Further optimizations of Grover’s Search algorithm # </h1> from qiskit import * from qiskit.tools.visualization import plot_histogram from qiskit.compiler import transpile # Loading your IBM Q account(s) + hardware IBMQ.load_account() provider = IBMQ.get_provider('ibm-q') device=provider.get_backend('ibmq_16_melbourne') # Simulator simulator = Aer.get_backend('qasm_simulator') # %run boolean_grover.py # boolean_grover() # %run phase&modified_grover.py # phase_grover() # Nowadays, one of the critical issues in implementing quantum algorithms is the required number of elementary gates, qubits, and delay. Current quantum computers and simulators are mainly prototypes, and there is a lack of computational resources. Therefore, it is necessary to optimize the quantum operations to reduce the necessary number of gates and qubits. This work targets the oracle and diffuser parts of Grover's algorithm, for which I present an optimization. A quantitative analysis has been performed on the number of qubits, circuit depth, and number of gates. # # The oracle plays an important role in Grover's circuit, where it occurs twice: once in the so-called oracle part, and once in the diffuser part. # # As we saw in the previous code (boolean_grover), we implemented the boolean oracle (BO) type for the Grover circuit, which relies on an additional qubit to carry out the difference of phase on the state of interest. There is, however, another type of oracle: the phase oracle (PO). The latter doesn't use any ancillary qubit; it dephases the target ket directly by applying a Z gate on the last qubit (multiplying its amplitude by $e^{i\pi}$). Hence, the PO is better than the BO since it uses fewer qubits to run.
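The relation the phase oracle rests on is the identity $HXH=Z$: sandwiching a (multi-)controlled X between Hadamards on the target turns it into a (multi-)controlled Z. A minimal numpy sketch, independent of the notebook's `oracle`/`boolean_oracle` helpers, checks this directly:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# HXH = Z: an H-sandwiched controlled X acts as a controlled Z
assert np.allclose(H @ X @ H, Z)

# 2-qubit case: H on the target around a CNOT gives CZ = diag(1, 1, 1, -1),
# i.e. only the |11> amplitude picks up the e^{i*pi} phase
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
HT = np.kron(np.eye(2), H)          # H on the target (second) qubit
CZ = HT @ CX @ HT
assert np.allclose(CZ, np.diag([1, 1, 1, -1]))
print("phase-oracle identity verified")
```

The same sandwich generalizes to any number of controls, which is why the PO needs no extra qubit to mark a state.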
# # The next step is implementing it in the quantum circuit. To do that, the obvious choice is the multi-controlled qubit gate of Fig. 1, but the latter, when transpiled to be run on quantum hardware, shows a high circuit depth, which doesn't play in our favor because the circuit exceeds the limits set by the coherence time of the quantum computers currently used. This is a reason why it would be interesting to increase that time, so that jobs like these can be run efficiently; it is a good thing to have a very long coherence time so you can fit all your gates $-$ in recent superconducting-qubit hardware, the highest-coherence qubits fabricated today reach around 100 microseconds. On the other hand, searching for a long time is not the only interest; nowadays scientists are interested in the question of how many operations can be fit within that time. Each of those gates takes some time to apply, so it's a question of how much time those operations take relative to the coherence time the qubit has; thus we need to be able to apply those operations very quickly. Decreasing the circuit depth is mandatory to contribute to achieving that goal. # # <img src="5_noancilla.png" alt="drawing" width="200" label="figure 1"/> # # Another solution, rather than expanding the circuit depth exponentially, is to use ancilla qubits. All we need is multiple Toffoli gates (`ccx`), each of which can be decomposed into 15 elementary gates, 6 CNOT gates among them, acting only on pairs of qubits, as in Fig. 2 (from the Mike & Ike book). # # <img src="5_ ancilla.png" alt="drawing" width="800" label="figure 2"/> # # You can see how many CCNOTs are needed when we use ancilla qubits; it is cheaper, but nevertheless still extremely expensive, because increasing the number of control qubits uses more ancillas, which is out of the scope of the hardware implementation. # # This is why decreasing the number of ancillary qubits in the oracle is mandatory.
Eventually, this is what I did: playing with those two-qubit-controlled Toffoli gates, I figured out that Fig. 2 can be implemented with 9 qubits rather than 10. Hence we reduce the number of qubits (we are going to see that by using the phase oracle we reduce the number of qubits by 3 (2 in the case of three qubits) compared with the boolean_grover code, which uses the boolean oracle and the Mike & Ike $C^n(U)$ implementation) and also the depth, since we are going to cancel one Toffoli gate and the control-U (again, by using the phase oracle we reduce the number of ccx by three (and two in the case of three qubits) compared with the previous file, which uses the boolean oracle and the Mike & Ike $C^n(U)$ implementation). # # ### -The optimized Oracle: # As we can see below, the number of qubits is reduced by one, and this new design is applicable to any n-qubit multi-controlled gate. # ########################### Mike & Ike $C^n(U)$ design: n=5 qubits control gate ########################### # ***** Target qubit is q9 ***** s0=boolean_oracle(['11111'], 'ancilla') s0.draw('mpl') ############################## New design n=5-qubits control gate ############################## # ***** Target qubit is q5 ***** s=oracle(['111111'], 'ancilla') s.draw('mpl') # In the new design implementation, ignore the two Hadamard gates (they are added because $HXH=Z$, and this concerns the phase oracle); also, the length of the digit string inside the functions of the Mike & Ike design is 4 and not 5 because I used the boolean_grover circuit function, which uses one additional qubit, hence it implements a 5-qubit control gate.
# # An exceptional case is the three-control-qubit one, where two Toffoli gates are used rather than the four following the Mike & Ike design; it is implemented as: ############################## New design 3-qubits control gate ############################## # ***** Target qubit is q2 ***** s1=oracle(['111'], 'ancilla') s1.draw('mpl') ############################## Mike & Ike design 3-qubits control gate ############################## # ***** Target qubit is q5 ***** s2=boolean_oracle(['111'], 'ancilla') s2.draw('mpl') # ### -The optimized diffuser # In this part there is a set of operations that realizes the transition from a difference of phase (made by the oracle) to a difference of amplitude (inversion about the mean): # # 1-Cancel the superposition by a layer of H gates; let's call it A. # # 2-Multiply the amplitude of the $|0>_n$ state by -1; this is called a mirror operation, $M_0=X^{\otimes n} M_1 X^{\otimes n}$. # # 3-Restore the superposition by another layer of H gates, called $A^\dagger$, which increases the probability of seeing the desired state. # # The overall effect is $D=-AM_0A^\dagger$; in the standard version of Grover's algorithm $A=H$, whereas in the paper [[1](https://arxiv.org/abs/2005.06468)] they assign it another operation; their main achievement is a generalization of the inversion-about-the-mean step: rather than canceling the superposition, they go forward to another state that makes the reflection easier. They succeeded in combining the action of the first layer of step 1, $H^{\otimes n}$, with the first layer of step 2, $X^{\otimes n}$, into a more efficient operator B, where the Grover diffuser turns into $D=B^\dagger M_B B$.
They supposed an operator B such that $BA=X^{\otimes n}$; then $A^\dagger=X^{\otimes n} B$ and $A=B^\dagger X^{\otimes n}$, and the Grover diffusion becomes: $$\begin{aligned} # D &=A M_{0} A^{\dagger} \\ # &=B^{\dagger} X^{\otimes n} M_{0} X^{\otimes n} B \\ # &=B^{\dagger} X^{\otimes n} X^{\otimes n} M_{1} X^{\otimes n} X^{\otimes n} B \\ # &=B^{\dagger} M_{1} B # \end{aligned}$$ # In this case, $M_B=M_1$, which uses fewer gates than $M_0$, therefore avoiding the $X$ gates on either side of the multi-control gate. The modified version of the algorithm sets $B=R_x(\pi/2)$ and $B^\dagger=R_x(-\pi/2)$; in general this leads to saving $2n$ X gates per mirror operation. # # Hence the modification is as follows: $HX \rightarrow R_x(\pi/2)$ and $X^\dagger H^\dagger \rightarrow R_x(-\pi/2)$ # # The preparation layer is also replaced: $H \rightarrow R_x(\pi/2)$ # # And now let's do a comparison between the boolean Grover circuit and the phase+modified Grover circuit by evaluating the circuit depth, width, and quantum cost (the latter is obtained from the transpiler, the step performed before running the circuit on the quantum hardware, which does 2 main things: # # 1-Expresses high-level gate definitions in terms of the basis gates actually supported by the quantum hardware, in this case `ibmq_16_melbourne`. # # 2-Optimizes the code. ) # # ### 1-Phase oracle + new design Grover's circuit evaluation: phase=grover(['1111'], 'ancilla',3) print('Circuit width', phase.width() -4) # the method counts the classical register wires plus the quantum ones; #the quantum registers are what matter to us, this is why I subtract 4.
print('Count of operations', phase.count_ops()) phase.draw('mpl') transpile1 = transpile(phase, device) print('Phase circuit depth = ', transpile1.depth()) print('Number of operations ', transpile1.count_ops()) # $\Rightarrow$ The $Circuit\;depth = 336$ and the $Quantum\;cost=305+113+42=460$ # ### 2-Boolean oracle Grover's circuit evaluation: boolean=boolean_grover(['1111'], 'ancilla',3) print('Circuit width', boolean.width() -4) print('Count of operations', boolean.decompose().count_ops()) boolean.decompose().draw('mpl') transpile2 = transpile(boolean, device) print('Boolean circuit depth = ', transpile2.depth()) print('Number of operations', transpile2.count_ops()) # $\Rightarrow$ The $Circuit\;depth = 588$ and the $Quantum\;cost=592+196+63+3=854$ # Let us calculate the difference between the two circuits: # # $\Rightarrow$ The $Circuit\;depth\;diff = 588-336=252$, the $Quantum\;cost\;diff=854-460=394$ and the $Circuit\;width\;diff=8-5=3$ # # **The $Circuit\;depth\;diff$ and the $Quantum\;cost\;diff$ will grow exponentially with the number of qubits, whereas the $Circuit\;width\;diff$ will always be 3 (2 for the case of three qubits), which means that we saved three qubits (two qubits for the case of three qubits) and reduced the number of gates** # ## Test of the *Phase oracle + new design* Grover's circuit # I will do the test with the `ancilla` circuit because the major modifications are made to it.
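The $D=B^\dagger M_1 B$ construction can also be sanity-checked numerically with plain numpy. The following toy sketch is independent of the `grover` function; the marked state $|11\rangle$, the $R_x(\pi/2)$ preparation, and the single iteration are my assumptions for a two-qubit check, where one Grover iteration should find the marked state with certainty:

```python
import numpy as np

def rx(theta):
    """Single-qubit Rx(theta) rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

B = np.kron(rx(np.pi / 2), rx(np.pi / 2))   # B layer (also the preparation layer)
M1 = np.diag([1, 1, 1, -1])                  # mirror about |11>
oracle_11 = np.diag([1, 1, 1, -1])           # phase oracle marking |11>

psi = B @ np.array([1, 0, 0, 0], dtype=complex)   # Rx(pi/2) preparation of |00>
psi = B.conj().T @ M1 @ B @ (oracle_11 @ psi)     # one iteration: oracle, then D = B† M1 B
probs = np.abs(psi) ** 2
print(np.round(probs, 6))   # the |11> basis state carries all the probability
```

Since a single layer of $R_x(\pi/2)$ rotations replaces both the Hadamard preparation and the X-sandwich of the mirror, the diffuser here contains no X gates at all, matching the $2n$-gate saving claimed above.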
######################## three qubits test ######################## test1=grover(['011'],'ancilla',2) count1 = execute(test1,simulator).result().get_counts() print(count1) plot_histogram(count1,color='limegreen') ######################## Six qubits test ######################## test2=grover(['110010'],'ancilla',6) count2 = execute(test2,simulator).result().get_counts() print(count2) plot_histogram(count2,color='c') ######################## eight qubits test with three solutions ######################## test3=grover(['11111111', '00000000','01110111'], 'ancilla', 7) count3 = execute(test3,simulator).result().get_counts() #print(count3) plot_histogram(count3,color='yellowgreen') # <h1 align="center"> # References # </h1> # # [1] [<NAME>., <NAME>., &amp; <NAME>. (2020, May 26). Optimizing Quantum Search Using a Generalized Version of Grover's Algorithm.](https://arxiv.org/abs/2005.06468)
Demo_phase&modified_vs_boolean.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Regression Practical Assessment # This assessment is for determining how much you have learnt in the past sprint, the results of which will be used to determine how EDSA can best prepare you for the working world. This assessment consists of practical questions in Regression. # # The answers for this test will be input into Athena as Multiple Choice Questions. The questions are included in this notebook and are made **bold** and numbered according to the Athena Questions. # # As this is a time-constrained assessment, if you are struggling with a question, move on to a task you are better prepared to answer rather than spending unnecessary time on one question. # # **_Good Luck!_** # ## Honour Code # I **YOUR NAME, <NAME>**, confirm - by submitting this document - that the solutions in this notebook are a result of my own work and that I abide by the EDSA honour code (https://drive.google.com/file/d/1QDCjGZJ8-FmJE3bZdIQNwnJyQKPhHZBn/view?usp=sharing). # # Non-compliance with the honour code constitutes a material breach of contract. # ### Download the data # # Download the Notebook and data files here: https://raw.githubusercontent.com/Explore-AI/Public-Data/master/Data/Machine_Learning_Assessment.zip # # ### Imports # + import numpy as np import pandas as pd from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.tree import DecisionTreeRegressor # - # ### Reading in the data # For this assessment we will be using a dataset about the quality of wine. Read in the data and take a look at it. # # **Note** the feature we will be predicting is quality, i.e. the label is quality.
df = pd.read_csv('winequality.csv') df.head() # ## Task 1 - Data pre-processing # # Write a function to pre-process the data so that we can run it through the classifier. The function should: # * Split the data into features and labels # * Standardise the features using sklearn's ```StandardScaler``` # * Split the data into 75% training and 25% testing data # * Set random_state to equal 16 for this internal method # * If there are any NAN values, fill them with zeros # # _**Function Specifications:**_ # * Should take a dataframe as input. # * Should return two `tuples` of the form `(X_train, y_train), (X_test, y_test)`. # # **Note: be sure to pay attention to the test size and random state you use as the following questions assume you split the data correctly** #question 11 for x in df: print(x, df[x].isna().mean()) def data_preprocess(df): # fill any NaN values, in every column, with zeros df = df.fillna(0) # split into features and label without mutating the caller's dataframe y = df['quality'].values x = df.drop(columns='quality').values # standardise the features from sklearn.preprocessing import StandardScaler sc = StandardScaler() x = sc.fit_transform(x) # 75/25 split with the required random state X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.25, random_state = 16) return (X_train, y_train), (X_test, y_test) (X_train, y_train), (X_test, y_test) = data_preprocess(df) print(X_train[0]) print(y_train[0]) print(X_test[3]) print(y_test[3]) #Question 11 X_train[12][5] #Question 12 print(X_test[12, 5]) #Question 13 print(y_train[15]) #Question 14 print(y_test[15]) # _**Expected Outputs:**_ # # ```python # (X_train, y_train), (X_test, y_test) = data_preprocess(df) # print(X_train[0]) #
print(y_train[0]) # print(X_test[3]) # print(y_test[3]) # # [ 1.75018984 -0.00412596 0.12564592 0.97278786 -0.70253493 0.51297929 # -0.36766435 -1.26942219 0.21456681 0.92881824 2.13682458 0.42611996] # 7 # [-0.57136659 -0.30574457 -0.54115965 -0.12776549 1.33614333 -0.37168026 # 1.26631947 0.30531117 0.18121615 -0.91820734 0.32886116 -0.07697409] # 6 # # ``` # **Q11. What is the result of printing out the 6th column and the 13th row of X_train?** # # **Q12. What is the result of printing out the 6th column and the 13th row of X_test?** # # **Q13. What is the result of printing out the 16th row of y_train?** # # **Q14. What is the result of printing out the 16th row of y_test?** # ## Task 2 - Train Linear Regression Model # # Since this dataset is about predicting quality, which ranges from 1 to 10, let's try fitting the data to a regression model and see how well that performs. # # Fit a model using sklearn's `LinearRegression` class with its default parameters. Write a function that will take as input `(X_train, y_train)` that we created previously, and return a trained model. # # _**Function Specifications:**_ # * Should take two numpy `arrays` as input in the form `(X_train, y_train)`. # * Should return an sklearn `LinearRegression` model. # * The returned model should be fitted to the data. def train_model(X_train, y_train): #your code here model = LinearRegression() model.fit(X_train, y_train) return model # + #Question 15 reg = train_model(X_train, y_train) reg.intercept_ # - #Question 16 reg.coef_[2] # **Q15. What is the result of printing out ***model.intercept_*** for the fitted model rounded to 3 decimal places?** # # **Q16. What is the result of printing out ***model.coef_[2]*** for the fitted model rounded to 2 decimal places?** # ## Task 3 - Test Regression Model # # We would now like to test our regression model.
This test should give the residual sum of squares, which for your convenience is written as # $$ # RSS = \sum_{i=1}^N (p_i - y_i)^2, # $$ # where $p_i$ refers to the $i^{\rm th}$ prediction made from `X_test`, $y_i$ refers to the $i^{\rm th}$ value in `y_test`, and $N$ is the length of `y_test`. # # _**Function Specifications:**_ # * Should take a trained model and two `arrays` as input. This will be the `X_test` and `y_test` variables. # * Should return the residual sum of squares over the input from the predicted values of `X_test` as compared to values of `y_test`. # * The output should be a `float` rounded to 2 decimal places. def test_model(model, X_test, y_test): #your code here y_pred = model.predict(X_test) rss = np.sum((y_pred - y_test)**2) # return a float rounded to 2 decimal places, as the spec requires return round(float(rss), 2) #Question 17 test_model(reg, X_test, y_test) # **Q17. What is the Residual Sum of Squares value for the fitted Linear Regression Model on the test set?** # ## Task 4 - Train Decision Tree Regression Model # # Let us try to improve this accuracy by training a model using sklearn's `DecisionTreeRegressor` class with a random state value of 42. Write a function that will take as input `(X_train, y_train)` that we created previously, and return a trained model. # # _**Function Specifications:**_ # * Should take two numpy `arrays` as input in the form `(X_train, y_train)`. # * Should return an sklearn `DecisionTreeRegressor` model with a random state value of 42. # * The returned model should be fitted to the data. def train_dt_model(X_train, y_train): #your code here model = DecisionTreeRegressor(random_state=42) model.fit(X_train, y_train) return model dt = train_dt_model(X_train, y_train) # Now that you have trained your model, let's see how well it does on the test set. Use the test_model function you previously created to do this. def r_err(predictions, y_test): #your code here rss = np.sum(np.square(y_test - predictions)) print(rss) r_err(reg.predict(X_test), y_test) test_model(dt, X_test, y_test) # **Q18.
What is the Residual Sum of Squares value for the fitted Decision Tree Regression Model on the test set?** # ## Task 5 - Mean Absolute Error # Write a function to compute the Mean Absolute Error (MAE), which is given by: # # $$ # MAE = \frac{1}{N} \sum_{i=1}^N |p_i - y_i| # $$ # # where $p_i$ refers to the $i^{\rm th}$ `prediction`, $y_i$ refers to the $i^{\rm th}$ value in `y_test`, and $N$ is the length of `y_test`. # # _**Function Specifications:**_ # * Should take two `arrays` as input. You can think of these as the `predictions` and `y_test` variables you get when testing a model. # * Should return the mean absolute error over the input from the predicted values of `X_test` as compared to values of `y_test`. # * The output should be a `float` rounded to 3 decimal places. def mean_abs_err(predictions, y_test): #your code here from sklearn import metrics # return a float rounded to 3 decimal places, as the spec requires return round(metrics.mean_absolute_error(y_test, predictions), 3) # **Q9. What is the result of printing out mean_abs_err(np.array([7.5,7,1.2]),np.array([3.2,2,-2]))?** print(mean_abs_err(np.array([7.5,7,1.2]),np.array([3.2,2,-2]))) # **Q10. Which regression model (Linear vs DecisionTree) has the lowest Mean Absolute Error?** mean_abs_err(dt.predict(X_test),y_test) mean_abs_err(reg.predict(X_test),y_test)
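The MAE formula above can also be checked without sklearn. This plain-Python sketch (an illustration, not the assessment's required solution) reproduces the Q9 computation by hand: the absolute errors are 4.3, 5 and 3.2, summing to 12.5 over 3 points:

```python
def mae(predictions, y_true):
    """Mean absolute error, rounded to 3 decimal places as the spec asks."""
    n = len(y_true)
    total = sum(abs(p - y) for p, y in zip(predictions, y_true))
    return round(total / n, 3)

print(mae([7.5, 7, 1.2], [3.2, 2, -2]))  # 4.167
```

Agreement between this hand computation and `metrics.mean_absolute_error` is a cheap way to confirm the argument order and rounding are right.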
Regression/Wine Quality Exam Prac/Regression_Prac_Exam_Student_Version.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import os from datetime import date import itertools cols = ["Resource","CustomAttr15","Summary","LastOccurrence","CustomAttr11"] #Range [A-E] single = os.path.join(os.getcwd(), "single.csv") dff = pd.read_csv(single) df = dff[cols] print(df) # - # # Numpy # + arr = df.to_numpy() #convert df to np print(arr[0][0]) #printing an index value of numpy arr rw, col = arr.shape #number of rows, number of columns print(rw,col) #loop and access lst = [] dic = {} for i in range(rw): lst2 = [] for j in range(col): #print(arr[i][j]) #print Row by index lst2.append(arr[i][j]) #create a list dic.update( {i : lst2} ) #create dict #print(dic) # - #add new column derived from existing one lst3 = [] for i in range(rw): x = arr[i][2] #only checking summary if 'DOWN' in x: lst3.append('down') else: lst3.append('no') arr = np.append(arr, np.array([lst3]).transpose(), axis=1) df = pd.DataFrame(arr, columns=cols+['Down']) #restore the column names plus the new derived column print(df) # # List # + #derived list from df dff = pd.Series(df['CustomAttr15']) mlst1 = dff.to_list() mlst2 = df.values.tolist() mlst3 = df.columns.values.tolist() mlst4 = df['Summary'].values.tolist() mlst5 = df[['Summary','LastOccurrence']].values.tolist() #print(mlst4) def lp_1d_list(mlst1): for i in range(len(mlst1)): print(mlst1[i]) def lp_nested_seperate_2_list(mlst1,mlst4): for a in mlst1: for b in mlst4: print(a,">",b) def lp_nested_list(mlst2): for i in range(len(mlst2)): for j in range(len(mlst2[i])): print(mlst2[i][j]) # List methods: append(), count(), index(), extend(), pop() fruits = ['apple', 'banana', 'cherry','banana'] fruits.append("orange") print(fruits) print(fruits.count("banana")) print(fruits.index("cherry")) fruits = ['apple', 'banana', 'cherry'] cars = ['Ford', 'BMW', 'Volvo'] fruits.extend(cars) print(fruits)
#pop() removes and returns the item at the given index popped = fruits.pop(1) print(popped) print(fruits) # - # # dictionary # + dic1 = {} dic2 = {1: 'apple', 2: 'ball'} dic3 = {'name': 'John', 1: [2, 4, 3]} dic4 = dict({1:'apple', 2:'ball'}) dic5 = dict([(1,'apple'), (2,'ball')]) #create a dictionary from two lists (one as keys, one as values) dlist = dict(zip(mlst1, mlst5)) #print(dlist) #dataframe to dictionary ddf1 = df.to_dict() def lp_dic(): for key in ddf1: print(key,ddf1[key]) for v in ddf1.values(): print(v) def lp_key_wise(dl): for k,v in dl.items(): print("STCODE:", k, ":", v[0],',', v[1]) lp_key_wise(dlist) #Dictionary methods: fromkeys(), get(), items(), keys(), values(), pop(), update() person = {'name': 'Phill', 'age': 22} #print(person.get('name')) d = {1: "one", 2: "three"} d1 = {2: "two"} d.update(d1) #print(d) person = {'name': 'Phill'} person.setdefault('age', 22) #print(person) # -
Z_ALL_FILE/Jy1/diclistnp-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import geopy.distance import networkx as nx coord = pd.read_csv('test.csv') coord = np.array(coord) coord.shape def adjacency_matrix(coord, drone_range): list_edges=[] n_points = coord.shape[0] adj_mat = np.zeros((n_points,n_points)) for i in range(n_points): for j in range(i,n_points): dist = geopy.distance.geodesic(coord[i], coord[j]).miles #geodesic replaces the deprecated vincenty in geopy >= 2.0 if dist <= drone_range and i!=j: adj_mat[i][j]=1 adj_mat[j][i]=1 list_edges.append((i,j)) S = dict() for i in range(n_points): sub =[] for j in range(n_points): if adj_mat[i][j] == 1: sub.append(j) S[i]=sub return adj_mat,list_edges,S adj_mat,list_edges,S = adjacency_matrix(coord,3) adj_mat = list(adj_mat) S def brut_force(S,n): import itertools u = set(range(n)) for i in range(1,n): comb = itertools.combinations(range(n),i) for combination in comb: covered = list(combination) for vertex in combination: covered = covered + S[vertex] if set(covered)==u: return combination a = brut_force(S,22) a def heuristic(S,n): num_neighbors = [len(S[i]) for i in range(n)] cover = [np.argmax(num_neighbors)] covered = S[np.argmax(num_neighbors)]+[cover[0]] print(covered) list_vertices = list(range(n)) remaining_points = [x for x in list_vertices if x!= np.argmax(num_neighbors)] while set(covered)!= set(list_vertices): max_add = len(set(list_vertices)-set(covered)) rem = set(list_vertices)-set(covered) tmp = remaining_points[0] for i in remaining_points: diff = rem - set(S[i]) if len(diff)< max_add: max_add = len(diff) tmp = i covered = covered + S[tmp]+[tmp] cover.append(tmp) remaining_points = [x for x in remaining_points if x != tmp] #drop only the chosen vertex, keep the rest as candidates return cover heuristic(S,22) a = heuristic(S,22) b = S[a[0]]+S[a[1]]+S[a[2]] set(b) len(set(b)) import googlemaps # +
#gmaps = googlemaps.Client(key='<KEY>') # + my_dist = gmaps.distance_matrix('51 California Ave San Francisco CA 94130','135 Fisher Loop San Francisco CA 94129',mode='driving') # - print(my_dist) 26*60 my_dist['rows'][0]['elements'][0]['distance']['value'] df = pd.read_csv('prob32.csv') df.head() def dist_matrix(list_address,cost): n = len(list_address) matrix = np.zeros((n,n)) for i in range(n): for j in range(n): my_dist = gmaps.distance_matrix(list_address[i],list_address[j],mode='driving') matrix[i][j]=my_dist['rows'][0]['elements'][0][cost]['value'] return matrix matrix = dist_matrix(df.address,'distance') # + from ortools.constraint_solver import pywrapcp from ortools.constraint_solver import routing_enums_pb2 # Distance callback class CreateDistanceCallback(object): """Create callback to calculate distances between points.""" def __init__(self,matrix): """Array of distances between points.""" self.matrix = matrix def Distance(self, from_node, to_node): return int(self.matrix[from_node][to_node]) # Cities city_names = df['Name'] tsp_size = len(city_names) num_routes = 1 # The number of routes, which is 1 in the TSP. # Nodes are indexed from 0 to tsp_size - 1. The depot is the starting node of the route. depot = 14 # Create routing model if tsp_size > 0: routing = pywrapcp.RoutingModel(tsp_size, num_routes, depot) search_parameters = routing.DefaultSearchParameters() # Create the distance callback, which takes two arguments (the from and to node indices) # and returns the distance between these nodes. dist_between_nodes = CreateDistanceCallback(matrix) dist_callback = dist_between_nodes.Distance routing.SetArcCostEvaluatorOfAllVehicles(dist_callback) # Solve, returns a solution if any. assignment = routing.SolveWithParameters(search_parameters) if assignment: # Solution cost. print ("Total distance: " + str(assignment.ObjectiveValue()/1000) + " km\n") # Inspect solution. 
# Only one route here; otherwise iterate from 0 to routing.vehicles() - 1 route_number = 0 index = routing.Start(route_number) # Index of the variable for the starting node. route = '' while not routing.IsEnd(index): # Convert variable indices to node indices in the displayed route. route += str(city_names[routing.IndexToNode(index)]) + ' -> ' index = assignment.Value(routing.NextVar(index)) route += str(city_names[routing.IndexToNode(index)]) print ("Route:\n\n" + route) else: print ('No solution found.') else: print ('Specify an instance greater than 0.') # -
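Earlier in this notebook, `brut_force` and `heuristic` both return a candidate cover of the drone graph. A small verification helper (hypothetical, not part of the original notebook, assuming the same `S` neighbour-dictionary shape built by `adjacency_matrix`) makes it easy to check any returned combination:

```python
def is_cover(S, cover, n):
    """True if every vertex 0..n-1 is in `cover` or adjacent to a vertex in it.

    `S` maps each vertex to the list of its neighbours.
    """
    covered = set(cover)
    for v in cover:
        covered.update(S[v])
    return covered == set(range(n))

# Tiny star graph: vertex 0 reaches everyone, so {0} alone is a cover
S_demo = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(is_cover(S_demo, [0], 4))  # True
print(is_cover(S_demo, [1], 4))  # False (vertices 2 and 3 uncovered)
```

Checking the greedy `heuristic` result this way is worthwhile, since greedy covers are not guaranteed minimal and an implementation slip can silently leave vertices uncovered.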
hw_4/traveling_bridegroom.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/BirHic/Depth-Asena/blob/main/DepthV1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="ufKsJWw_NUxY" # #**SUPPORT HAS BEEN DISCONTINUED** # # #**DON'T USE THIS, USE THE V2** # #**Contact me on Telegram @DevXenon** # `Usage:` # - Create a folder named "`DAIN`" on Google Drive. # - Create a folder named "`videos`" on Google Drive. # # - Important note: Upload your videos to the "`videos`" folder. Do not upload any files to the "`DAIN`" folder. # + [markdown] id="iH9-RLwWOjgE" # - `Important note`: If the page is refreshed or the connection is lost, you will have to redo everything from the start. To avoid this you can use a second screen. # + cellView="form" id="MFjr1BIUPC4c" #@markdown # Required Setup #@markdown ## Input file #@markdown Please upload videos only in `mp4` or `gif` format. Only videos with a resolution of `720p` or lower are supported. Replace only the `input.mp4` part with the name of the video you uploaded. INPUT_FILEPATH = "videos/input.mp4" #@param{type:"string"} #@markdown ## Output file #@markdown Replace only the `output.mp4` part with a name of your choice. OUTPUT_FILE_PATH = "DAIN/output.mp4" #@param{type:"string"} #@markdown ## Target FPS #@markdown A higher FPS value requires a longer processing time TARGET_FPS = 60 #@param{type:"number"} #@markdown `DO NOT TOUCH THE CODE BELOW UNLESS YOU KNOW WHAT YOU ARE DOING` #@markdown ## Frame input directory #@markdown In the main GDrive root directory, choose the folder in which the extracted PNG files will be created.
Example PNG names: 00001.png, 00002.png FRAME_INPUT_DIR = '/content/DAIN/input_frames' #@param{type:"string"} #@markdown ## Frame output directory #@markdown In the main GDrive root directory, choose the folder into which the generated PNG files will be placed. FRAME_OUTPUT_DIR = '/content/DAIN/output_frames' #@param{type:"string"} #@markdown ## Start Frame #@markdown Choose the frame the video will start from. Do not change this unless necessary! START_FRAME = 1 #@param{type:"number"} #@markdown ## End Frame #@markdown Choose the frame the video will end at. Do not change this unless necessary! END_FRAME = -1 #@param{type:"number"} #@markdown ## Seamless playback #@markdown Joins the first and last frames for a perfect loop. Tick to enable. SEAMLESS = True #@param{type:"boolean"} #@markdown ## Resize hotfix #@markdown The interpolated frames are slightly "shifted / smaller" compared to the original input frames. This can be partially mitigated by resizing the interpolated frames to +2px resolution and cropping the result back to the original size from origin (1,1). Without this fix the interpolation tends to produce "jittery" output, which is quite noticeable on static elements such as text. #@markdown #@markdown This fix tries to make such effects less visible in order to get a smoother video. I do not know what CVPR 2018 used as a fix for this issue, but the original shows this behaviour on the default test images. More advanced users can change the interpolation method. The cv2.INTER_CUBIC and cv2.INTER_LANCZOS4 methods are recommended. The current default value is cv2.INTER_LANCZOS4. RESIZE_HOTFIX = True #@param{type:"boolean"} #@markdown ## Auto-remove PNG directory #@markdown Automatically deletes the output PNG directory after the ffmpeg video has been created. Set this to "False" if you want to keep the PNG files.
AUTO_REMOVE = True #@param{type:"boolean"} # + [markdown] id="X2EY3xonRyNh" # # Loading modules... # No module is downloaded over your own internet connection. It does not use your data allowance or memory. # + id="OKrwCZFrRq0G" # Connect Google Drive from google.colab import drive drive.mount('/content/gdrive') print('Google Drive connected.') # + id="HJ0qubNwR8CE" # Installing the best-known versions of the PyTorch and SciPy modules... # !apt-get update # !pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html # !pip install scipy==1.1.0 # !CUDA_VISIBLE_DEVICES=0 # !sudo apt-get install imagemagick imagemagick-doc print("Done.") # + id="LSTmNqkySG_g" # Cloning the interpolation files... # %cd /content # !git clone -b master --depth 1 https://github.com/baowenbo/DAIN /content/DAIN # %cd /content/DAIN # !git log -1 # + id="_CqiDjUCSZbW" # This stage will take 10-15 minutes. # Building the interpolation package. # %cd /content/DAIN/my_package/ # !./build.sh print("Done") # + id="MvZJ4xwOSmn-" # This stage will take about 5 minutes. # Building the PyTorch files. # %cd /content/DAIN/PWCNet/correlation_package_pytorch1_0 # !./build.sh print("Done.") # + [markdown] id="an-6iYvwTGl2" # - `Important note:` To run more than one job without refreshing the page or dropping the connection, change the video name, target FPS, etc. above and then run again from the code below onwards. # + id="yu188fLxTJBN" # USE THIS CODE FOR MULTIPLE RUNS. # Before interpolating a second video, use this code to delete the old PNG files. if(AUTO_REMOVE): # !rm -rf {FRAME_OUTPUT_DIR}/* # + id="EA2ZpOMRTayq" # Downloading the pre-trained models. # %cd /content/DAIN # !mkdir model_weights # !wget -O model_weights/best.pth http://vllab1.ucmerced.edu/~wenbobao/DAIN/best.pth # + id="nd4wpDfiTscC" # Calculating the FPS of the video.
# %shell yes | cp -f /content/gdrive/My\ Drive/{INPUT_FILEPATH} /content/DAIN/ import os filename = os.path.basename(INPUT_FILEPATH) import cv2 cap = cv2.VideoCapture(f'/content/DAIN/(unknown)') fps = cap.get(cv2.CAP_PROP_FPS) print(f"Input file has {fps} fps") if(fps/TARGET_FPS>0.5): print("ENTER A HIGHER FPS VALUE. THE FPS VALUE YOU ENTER MUST BE AT LEAST TWICE THAT OF THE ORIGINAL VIDEO") # + id="7qhYERkOT8ZL" # ffmpeg extract - Generating individual frame PNGs from the source file. # %shell rm -rf '{FRAME_INPUT_DIR}' # %shell mkdir -p '{FRAME_INPUT_DIR}' if (END_FRAME==-1): # %shell ffmpeg -i '/content/DAIN/(unknown)' -vf 'select=gte(n\,{START_FRAME}),setpts=PTS-STARTPTS' '{FRAME_INPUT_DIR}/%05d.png' else: # %shell ffmpeg -i '/content/DAIN/(unknown)' -vf 'select=between(n\,{START_FRAME}\,{END_FRAME}),setpts=PTS-STARTPTS' '{FRAME_INPUT_DIR}/%05d.png' from IPython.display import clear_output clear_output() # png_generated_count_command_result = %shell ls '{FRAME_INPUT_DIR}' | wc -l frame_count = int(png_generated_count_command_result.output.strip()) import shutil if SEAMLESS: frame_count += 1 first_frame = f"{FRAME_INPUT_DIR}/00001.png" new_last_frame = f"{FRAME_INPUT_DIR}/{frame_count:05d}.png" shutil.copyfile(first_frame, new_last_frame) print(f"{frame_count} frame PNGs generated.") # + id="04P1qy7fUQie" # Checking the PNGs for an alpha channel... import subprocess as sp # %cd {FRAME_INPUT_DIR} channels = sp.getoutput('identify -format %[channels] 00001.png') print (f"{channels} detected") # Removing alpha if detected if "a" in channels: print("Alpha channel detected and will be removed.") print(sp.getoutput('find . 
-name "*.png" -exec convert "{}" -alpha off PNG24:"{}" \;')) # + id="GMrdzEAoUXk7" # Interpolation # %shell mkdir -p '{FRAME_OUTPUT_DIR}' # %cd /content/DAIN # !python -W ignore colab_interpolate.py --netName DAIN_slowmotion --time_step {fps/TARGET_FPS} --start_frame 1 --end_frame {frame_count} --frame_input_dir '{FRAME_INPUT_DIR}' --frame_output_dir '{FRAME_OUTPUT_DIR}' # + id="lQ6iv2U3UdSY" # Finding the files and resizing them to match the original... # %cd {FRAME_OUTPUT_DIR} if (RESIZE_HOTFIX): images = [] for filename in os.listdir(FRAME_OUTPUT_DIR): img = cv2.imread(os.path.join(FRAME_OUTPUT_DIR, filename)) filename = os.path.splitext(filename)[0] if(not filename.endswith('0')): dimensions = (img.shape[1]+2, img.shape[0]+2) resized = cv2.resize(img, dimensions, interpolation=cv2.INTER_LANCZOS4) crop = resized[1:(dimensions[1]-1), 1:(dimensions[0]-1)] cv2.imwrite(f"(unknown).png", crop) # + id="WcRBjOI2UuMi" # Creating the video... # %cd {FRAME_OUTPUT_DIR} # %shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '*.png' '/content/gdrive/My Drive/{OUTPUT_FILE_PATH}' # + [markdown] id="KLhrSlXvVDr0" # ## After waiting about 30 seconds your video will be in the `DAIN` folder. You can check by opening Drive. # + [markdown] id="TGhqmgudVVcI" # `THE COMMANDS BELOW MAY NOT ALWAYS WORK. DO NOT PLAY WITH THESE CODES UNLESS YOU KNOW HOW TO USE THEM` # + id="-J9NsCFqVbnw" # [Experimental] Create video with sound # Only run this, if the original had sound. # %cd {FRAME_OUTPUT_DIR} # %shell ffmpeg -i '/content/DAIN/(unknown)' -acodec copy output-audio.aac # %shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '*.png' -i output-audio.aac -shortest '/content/gdrive/My Drive/{OUTPUT_FILE_PATH}' if (AUTO_REMOVE): # !rm -rf {FRAME_OUTPUT_DIR}/* # !rm -rf output-audio.aac
DepthV1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Time normalization of data # # <NAME> # Time normalization is usually employed for the temporal alignment of cyclic data obtained from different trials with different duration (number of points). The simplest and most common procedure for time normalization used in Biomechanics and Motor Control is known as the normalization to percent cycle (although it might not be the most adequate procedure in certain cases ([Helwig et al., 2011](http://www.sciencedirect.com/science/article/pii/S0021929010005038))). # # In the percent cycle, a fixed number (typically a temporal base from 0 to 100%) of new equally spaced data is created based on the old data with a mathematical procedure known as interpolation. # **Interpolation** is the estimation of new data points within the range of known data points. This is different from **extrapolation**, the estimation of data points outside the range of known data points. # Time normalization of data using interpolation is a simple procedure and it doesn't matter if the original data have more or fewer data points than desired. # # The Python function `tnorm.py` (code at the end of this text) implements the normalization to percent cycle procedure for time normalization. The function signature is: # ```python # yn, tn, indie = tnorm(y, axis=0, step=1, k=3, smooth=0, mask=None, show=False, ax=None) # ``` # Let's see now how to perform interpolation and time normalization; first let's import the necessary Python libraries and configure the environment: # # <!-- TEASER_END --> # Import the necessary libraries import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import sys sys.path.insert(1, r'./../functions') # add to pythonpath # For instance, consider the data shown next.
The time normalization of these data to represent a cycle from 0 to 100%, with a step of 1% (101 data points) is: y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3] print("y data:") y t = np.linspace(0, 100, len(y)) # time vector for the original data tn = np.linspace(0, 100, 101) # new time vector for the new time-normalized data yn = np.interp(tn, t, y) # new time-normalized data print("y data interpolated to 101 points:") yn # The key is the Numpy `interp` function, from its help: # # >interp(x, xp, fp, left=None, right=None) # >One-dimensional linear interpolation. # >Returns the one-dimensional piecewise linear interpolant to a function with given values at discrete data-points. # # A plot of the data will show what we have done: plt.figure(figsize=(10,5)) plt.plot(t, y, 'bo-', lw=2, label='original data') plt.plot(tn, yn, '.-', color=[1, 0, 0, .5], lw=2, label='time normalized') plt.legend(loc='best', framealpha=.5) plt.xlabel('Cycle [%]') plt.show() # The function `tnorm.py` implements this kind of normalization, with options for an interpolation other than the linear one used above, for dealing with missing points in the data (as long as the missing points are not at the extremities, because the interpolation function cannot extrapolate data), and other things.
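The piecewise linear interpolation that `np.interp` performs can be written out for a single query point, which makes the formula explicit (a minimal plain-Python sketch, assuming the query lies between the two given bracketing samples):

```python
def lerp(x, x0, y0, x1, y1):
    """Linearly interpolate y at x, given two bracketing samples (x0, y0), (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Halfway between the first two samples (0, 5) and (t[1], 4) of the y data above
print(lerp(0.5, 0.0, 5.0, 1.0, 4.0))  # 4.5
```

`np.interp` simply applies this formula inside each interval `[xp[i], xp[i+1]]`, which is why the time-normalized curve passes exactly through every original datum.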
# Let's see the `tnorm.py` examples: from tnorm import tnorm >>> # Default options: cubic spline interpolation passing through >>> # each datum, 101 points, and no plot >>> y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3] >>> tnorm(y) >>> # Linear interpolation passing through each datum >>> yn, tn, indie = tnorm(y, k=1, smooth=0, mask=None, show=True) >>> # Cubic spline interpolation with smoothing >>> yn, tn, indie = tnorm(y, k=3, smooth=1, mask=None, show=True) >>> # Cubic spline interpolation with smoothing and 50 points >>> x = np.linspace(-3, 3, 60) >>> y = np.exp(-x**2) + np.random.randn(60)/10 >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True) # + run_control={"breakpoint": false} >>> # Deal with missing data (use NaN as mask) >>> x = np.linspace(-3, 3, 100) >>> y = np.exp(-x**2) + np.random.randn(100)/10 >>> y[0] = np.NaN # first point is also missing >>> y[30: 41] = np.NaN # make other 10 missing points >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True) # + run_control={"breakpoint": false} >>> # Deal with 2-D array >>> x = np.linspace(-3, 3, 100) >>> y = np.exp(-x**2) + np.random.randn(100)/10 >>> y = np.vstack((y-1, y[::-1])).T >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True) # - # ## Function tnorm.py # + run_control={"breakpoint": false} # # %load './../functions/tnorm.py' """Time normalization (from 0 to 100% with step interval).""" from __future__ import division, print_function import numpy as np __author__ = '<NAME>, https://github.com/demotu/BMC' __version__ = "1.0.5" __license__ = "MIT" def tnorm(y, axis=0, step=1, k=3, smooth=0, mask=None, show=False, ax=None): """Time normalization (from 0 to 100% with step interval). Time normalization is usually employed for the temporal alignment of data obtained from different trials with different duration (number of points). 
This code implements a procedure known as the normalization to percent cycle, the simplest and most common method used among the ones available, but may not be the most adequate [1]_. NaNs and any value inputted as a mask parameter and that appears at the extremities are removed before the interpolation because this code does not perform extrapolation. For a 2D array, the entire row with NaN or a mask value at the extremity is removed because of alignment issues with the data from different columns. NaNs and any value inputted as a mask parameter and that appears in the middle of the data (which may represent missing data) are ignored and the interpolation is performed through these points. This code can perform simple linear interpolation passing through each datum or spline interpolation (up to quintic splines) passing through each datum (knots) or not (in case a smoothing parameter > 0 is inputted). See this IPython notebook [2]_. Parameters ---------- y : 1-D or 2-D array_like Array of independent input data. Must be increasing. If 2-D array, the data in each axis will be interpolated. axis : int, 0 or 1, optional (default = 0) Axis along which the interpolation is performed. 0: data in each column are interpolated; 1: for row interpolation step : float or int, optional (default = 1) Interval from 0 to 100% to resample y or the number of points y should be interpolated. In the later case, the desired number of points should be expressed with step as a negative integer. For instance, step = 1 or step = -101 will result in the same number of points at the interpolation (101 points). If step == 0, the number of points will be the number of data in y. k : int, optional (default = 3) Degree of the smoothing spline. Must be 1 <= k <= 5. If 3, a cubic spline is used. The number of data points must be larger than k. smooth : float or None, optional (default = 0) Positive smoothing factor used to choose the number of knots.
If 0, spline will interpolate through all data points. If None, smooth=len(y). mask : None or float, optional (default = None) Mask to identify missing values which will be ignored. It can be a list of values. NaN values will be ignored and don't need to be in the mask. show : bool, optional (default = False) True (1) plot data in a matplotlib figure. False (0) to not plot. ax : a matplotlib.axes.Axes instance, optional (default = None). Returns ------- yn : 1-D or 2-D array Interpolated data (if axis == 0, column oriented for 2-D array). tn : 1-D array New x values (from 0 to 100) for the interpolated data. inds : list Indexes of first and last rows without NaNs at the extremities of `y`. If there is no NaN in the data, this list is [0, y.shape[0]-1]. Notes ----- This code performs interpolation to create data with the desired number of points using a one-dimensional smoothing spline fit to a given set of data points (scipy.interpolate.UnivariateSpline function). References ---------- .. [1] http://www.sciencedirect.com/science/article/pii/S0021929010005038 .. [2] http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/TimeNormalization.ipynb See Also -------- scipy.interpolate.UnivariateSpline: One-dimensional smoothing spline fit to a given set of data points. 
Examples -------- >>> # Default options: cubic spline interpolation passing through >>> # each datum, 101 points, and no plot >>> y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3] >>> tnorm(y) >>> # Linear interpolation passing through each datum >>> y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3] >>> yn, tn, indie = tnorm(y, k=1, smooth=0, mask=None, show=True) >>> # Cubic spline interpolation with smoothing >>> y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3] >>> yn, tn, indie = tnorm(y, k=3, smooth=1, mask=None, show=True) >>> # Cubic spline interpolation with smoothing and 50 points >>> x = np.linspace(-3, 3, 100) >>> y = np.exp(-x**2) + np.random.randn(100)/10 >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True) >>> # Deal with missing data (use NaN as mask) >>> x = np.linspace(-3, 3, 100) >>> y = np.exp(-x**2) + np.random.randn(100)/10 >>> y[0] = np.NaN # first point is also missing >>> y[30: 41] = np.NaN # make other 10 missing points >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True) >>> # Deal with 2-D array >>> x = np.linspace(-3, 3, 100) >>> y = np.exp(-x**2) + np.random.randn(100)/10 >>> y = np.vstack((y-1, y[::-1])).T >>> yn, tn, indie = tnorm(y, step=-50, k=3, smooth=1, show=True) """ from scipy.interpolate import UnivariateSpline y = np.asarray(y) if axis: y = y.T if y.ndim == 1: y = np.reshape(y, (-1, 1)) # turn mask into NaN if mask is not None: y[y == mask] = np.NaN # delete rows with missing values at the extremities iini = 0 iend = y.shape[0]-1 while y.size and np.isnan(np.sum(y[0])): y = np.delete(y, 0, axis=0) iini += 1 while y.size and np.isnan(np.sum(y[-1])): y = np.delete(y, -1, axis=0) iend -= 1 # check if there are still data if not y.size: return None, None, [] if y.size == 1: return y.flatten(), None, [0, 0] indie = [iini, iend] t = np.linspace(0, 100, y.shape[0]) if step == 0: tn = t elif step > 0: tn = np.linspace(0, 100, np.round(100 / step + 1)) else: tn = np.linspace(0, 100, -step) yn = np.empty([tn.size, y.shape[1]]) * np.NaN for col in 
np.arange(y.shape[1]): # ignore NaNs inside data for the interpolation ind = np.isfinite(y[:, col]) if np.sum(ind) > 1: # at least two points for the interpolation spl = UnivariateSpline(t[ind], y[ind, col], k=k, s=smooth) yn[:, col] = spl(tn) if show: _plot(t, y, ax, tn, yn) if axis: y = y.T if yn.shape[1] == 1: yn = yn.flatten() return yn, tn, indie def _plot(t, y, ax, tn, yn): """Plot results of the tnorm function, see its help.""" try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') else: if ax is None: _, ax = plt.subplots(1, 1, figsize=(8, 5)) ax.set_prop_cycle('color', ['b', 'r', 'b', 'g', 'b', 'y', 'b', 'c', 'b', 'm']) #ax.set_color_cycle(['b', 'r', 'b', 'g', 'b', 'y', 'b', 'c', 'b', 'm']) for col in np.arange(y.shape[1]): if y.shape[1] == 1: ax.plot(t, y[:, col], 'o-', lw=1, label='Original data') ax.plot(tn, yn[:, col], '.-', lw=2, label='Interpolated') else: ax.plot(t, y[:, col], 'o-', lw=1) ax.plot(tn, yn[:, col], '.-', lw=2, label='Col= %d' % col) ax.locator_params(axis='y', nbins=7) ax.legend(fontsize=12, loc='best', framealpha=.5, numpoints=1) plt.xlabel('[%]') plt.tight_layout() plt.show() # -
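The essence of the percent-cycle normalization above, in its linear form, can be sketched with plain NumPy and no spline or NaN handling (`tnorm_linear` is an illustrative name, not part of the module):

```python
import numpy as np

def tnorm_linear(y, n=101):
    # Map the original samples onto 0-100% of the cycle and
    # linearly resample to n points (default 101, i.e. 1% steps)
    t = np.linspace(0, 100, len(y))
    tn = np.linspace(0, 100, n)
    return np.interp(tn, t, y), tn

y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3]
yn, tn = tnorm_linear(y)
print(yn.size)  # 101
```

This is what `tnorm` with `k=1, smooth=0` computes for clean data; the full function adds splines, smoothing, masking, and 2-D support on top.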
notebooks/TimeNormalization.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Word2vecTokenizer
# <div style="position: absolute; right:0;top:0"><a href="./tokenizer.ipynb" style="text-decoration: none"> <font size="5">←</font></a>
# <a href="../evaluation.py.ipynb" style="text-decoration: none"> <font size="5">↑</font></a></div>
#
# This module provides the `W2VTokenizer` class that transforms the `text` of a document into `tokens`.
# It keeps only those tokens that appear in the vocabulary of the corresponding embedding model,
# but tries to combine tokens into phrases if they appear in the model.

# ---
# ## Setup and Settings
# ---

# +
from __init__ import init_vars
init_vars(vars(), ('info', {}), ('runvars', {}))

import re

import data
import config
from base import nbprint
from widgetbase import nbbox
from util import ProgressIterator, add_method
from embedding.main import get_model

import tokenizer.common
from tokenizer.token_util import TokenizerBase
from tokenizer.default_tokenizer import DefaultTokenizer
from tokenizer.widgets import token_picker, run_and_compare, show_comparison

if RUN_SCRIPT:
    token_picker(info, runvars, 'C')
# -

# ---
# ## Tokenize Document
# ---
# The following functions constitute the `W2VTokenizer` class that transforms the raw text of a document into tokens.
# +
class W2VTokenizer(TokenizerBase):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.embedding_model = get_model(self.info)
        self.filter = self.embedding_model.filter.filter

if RUN_SCRIPT:
    nbbox()
    w2v_tokenizer = W2VTokenizer(info)
    w2v_tokenizer.text = runvars['document']['text']
# -

# ### Prepare Text
#
# This step lowercases all characters and replaces the following:
# - `separator_token` by `separator_token_replacement`
# - all whitespace sequences by a single whitespace
# - `#` by nothing

# +
_re_whitespace = re.compile('[\s]+', re.UNICODE)
_re_url = re.compile('(http://[^\s]+)|(https://[^\s]+)|(www\.[^\s]+)')

@add_method(W2VTokenizer)
def prepare(self):
    self.text = self.text.lower()
    self.text = self.text.replace(tokenizer.common.separator_token, tokenizer.common.separator_token_replacement)
    self.text = self.text.replace('#', '')
    self.text, count = _re_url.subn(' ', self.text)
    self.text, count = _re_whitespace.subn(' ', self.text)

if RUN_SCRIPT:
    run_and_compare(w2v_tokenizer, w2v_tokenizer.prepare, 'text')
# -

# ### Replace numbers
#
# All numbers are replaced by `#`. This includes all characters in the Unicode 'Number, Decimal Digit' category.

# +
_re_decimal = re.compile('\d', re.UNICODE)

@add_method(W2VTokenizer)
def replace_numbers(self):
    self.text, count = _re_decimal.subn('#', self.text)

if RUN_SCRIPT:
    run_and_compare(w2v_tokenizer, w2v_tokenizer.replace_numbers, 'text')
# -

# ### Split at breaking characters
#
# This step splits the string into substrings $s_i$ at all sequences of characters that are not alphanumeric (`\w`), whitespace (`\s`), apostrophes (`\'` or `’`), or `#`. Later, the algorithm will only try to combine tokens from each $s_i$ separately into phrases, but not tokens from different substrings.
# +
_re_breaking = re.compile('[^\w\s\'\’#]+', re.UNICODE)

@add_method(W2VTokenizer)
def split_text(self):
    self.subtexts = _re_breaking.split(self.text)

if RUN_SCRIPT:
    run_and_compare(w2v_tokenizer, w2v_tokenizer.split_text, 'text', 'subtexts')
# -

# ### Split at nonbreaking characters

# +
@add_method(W2VTokenizer)
def split_subtexts(self):
    self.tokenlists = [subtext.split() for subtext in self.subtexts if len(subtext) > 0]

if RUN_SCRIPT:
    run_and_compare(w2v_tokenizer, w2v_tokenizer.split_subtexts, 'subtexts', 'tokenlists')
# -

# ### Filter

# +
@add_method(W2VTokenizer)
def build_tokens(self):
    self.tokens = []
    for tokenlist in self.tokenlists:
        self.tokens = self.tokens + self.filter(tokenlist)

if RUN_SCRIPT:
    run_and_compare(w2v_tokenizer, w2v_tokenizer.build_tokens, 'tokenlists', 'tokens')
# -

# ---
# ## Complete function
# ---

@add_method(W2VTokenizer)
def tokenize(self, text, *args):
    self.text = text
    self.prepare()
    self.replace_numbers()
    self.split_text()
    self.split_subtexts()
    self.build_tokens()
    return self.tokens

# ## Test tokenizer

if RUN_SCRIPT:
    w2v_tokenizer.tokenize(runvars['document']['text'])
    show_comparison(runvars['document']['text'], w2v_tokenizer.tokens, 'Text', 'Tokens')
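The prepare/replace/split pipeline can be sketched with plain `re`, independent of the embedding model (the regexes mirror the ones defined above; the vocabulary-dependent `filter` step is omitted, and `sketch_tokenize` is an illustrative name):

```python
import re

# Regexes analogous to the ones used by W2VTokenizer
_re_whitespace = re.compile(r'\s+', re.UNICODE)
_re_url = re.compile(r'(http://\S+)|(https://\S+)|(www\.\S+)')
_re_decimal = re.compile(r'\d', re.UNICODE)
_re_breaking = re.compile(r"[^\w\s'\u2019#]+", re.UNICODE)

def sketch_tokenize(text):
    # Prepare: lowercase, drop '#', strip URLs, collapse whitespace
    text = text.lower().replace('#', '')
    text = _re_url.sub(' ', text)
    text = _re_whitespace.sub(' ', text)
    # Replace digits by '#'
    text = _re_decimal.sub('#', text)
    # Split at breaking characters, then at whitespace
    subtexts = _re_breaking.split(text)
    return [token for subtext in subtexts for token in subtext.split()]

print(sketch_tokenize('Check www.example.com: 2 cats, 10 dogs!'))
# → ['check', '#', 'cats', '##', 'dogs']
```

Note the ordering matters: literal `#` characters are removed in the prepare step *before* digits are replaced by `#`, so a `#` in the output always marks a digit.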
tomef/tokenizer/word2vec_tokenizer.py.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Monpa
#     language: python
#     name: monpa
# ---

# +
import os
os.chdir("C:\\Users\\ricardo\\Documents\\GitHub\\Kaggle\\1910_TMU_EnglishReviewClassification\\Data\\training_dataset_v1\\training_dataset")
os.chdir("./neg")

# +
import os
import pandas as pd
pd.options.display.max_colwidth = 100000000

test_data = pd.DataFrame()
text_list = []
class_list = []

for i, j in enumerate(os.listdir()):
    with open(j, 'r', encoding='UTF-8') as file:
        for line in file.readlines():
            text_list.append(line)
            class_list.append("neg")

os.chdir("../pos")
for i, j in enumerate(os.listdir()):
    with open(j, 'r', encoding='UTF-8') as file:
        for line in file.readlines():
            text_list.append(line)
            class_list.append("pos")

test_data.loc[:, 'text'] = text_list
test_data.loc[:, 'class'] = class_list
# -

test_data.to_csv("../train_dataframe.csv", header=True, index=False)

# # Prepare test data

# +
import os
os.chdir("C:/Users/ricardo/Documents/GitHub/College/Courses/10801_MachineLearningAndDeepLearning/Assignments/Assignment_2/Data/training_dataset_v1/training_dataset")
os.chdir("../../")

# +
import pandas as pd
pd.options.display.max_colwidth = 100000000

test_data = pd.read_csv("./test_dataset.csv", header=None)
test_list = test_data.iloc[:, 1].tolist()

output = pd.DataFrame({
    'text': test_list,
})
output.loc[:, 'class'] = ""
output.to_csv("./Ludwig_test_data.csv", header=True, index=False)
# -

# # Configure Output

# +
predict_list = []

# Open file
fp = open("C:\\Users\\ricardo\\Documents\\GitHub\\Kaggle\\1910_TMU_EnglishReviewClassification\\Data\\191031_LudwigPredict\\class_predictions.csv", "r")

# The variable `lines` stores the content of the file
lines = fp.readlines()

# Close file
fp.close()

# Collect non-empty prediction lines
for i in range(len(lines)):
    if len(lines[i]) > 1:
        result = lines[i].strip("\n")
        predict_list.append(result)
# -

print(len(predict_list))

# +
import pandas as pd
import os
os.chdir("C:\\Users\\ricardo\\Documents\\GitHub\\Kaggle\\1910_TMU_EnglishReviewClassification\\Data\\")
submission = pd.read_csv("C:\\Users\\ricardo\\Documents\\GitHub\\Kaggle\\1910_TMU_EnglishReviewClassification\\Data\\submission.csv")
submission.loc[:, 'Label'] = predict_list
submission.to_csv("./final.csv", header=True, index=None)
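The two folder-reading loops above repeat the same pattern; a minimal standard-library helper could factor it out (the `neg`/`pos` layout is taken from the notebook, everything else — the function name, the sorting — is illustrative):

```python
import os

def read_labeled_folder(folder, label):
    """Read every line of every file in `folder` and pair it with `label`."""
    texts, labels = [], []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        with open(path, 'r', encoding='UTF-8') as file:
            for line in file:
                texts.append(line)
                labels.append(label)
    return texts, labels
```

With this helper, the negative and positive reviews become `read_labeled_folder('.../neg', 'neg')` and `read_labeled_folder('.../pos', 'pos')`, with no `os.chdir` calls mutating global state.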
Kaggle/TMU_EnglishReviewClassification/.ipynb_checkpoints/LudwigPreprocessing-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="Ro8uR3UYlJeA"
# Here is the [glide-finetune repo](https://github.com/afiaka87/glide-finetune) code modified to make it work on Colab. Tested only with Colab Pro so far.
#
# [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eliohead/glide-finetune-colab/blob/main/glide_finetune_colab.ipynb)

# + id="QiRAvRnAG5U1"
# !git clone https://github.com/eliohead/glide-finetune
# !cd glide-finetune/

# + id="8xegIOxgHKRj"
# !pip install -r '/content/glide-finetune/requirements.txt'

# + id="IOl0UiG6HZ7S"
# !pip install webdataset

# + [markdown] id="O1VUUXtfHezC"
# **Train GLIDE**
# - **Make sure** your dataset folder is named '**data**'. The exact path has to be **'/content/data'**.
# - Images and text files have to be all together in the folder. If an image is named *001.jpg*, its corresponding txt file should be named *001.txt*, and so on.
# - I deactivated sampling during training because it kept giving me errors here in Colab. Only checkpoints will be saved (.pt files) and you can test image quality directly in the testing notebook.

# + id="eTcnT-FKHcmD"
# !python '/content/glide-finetune/train_glide.py' \
#   --epochs 20 \
#   --use_captions \
#   --project_name 'finetune1' \
#   --batch_size 4 \
#   --learning_rate 1e-04 \
#   --side_x 64 \
#   --side_y 64 \
#   --resize_ratio 1.0 \
#   --uncond_p 0.2 \
#   --checkpoints_dir '/content/checkpoints' \

# + [markdown] id="3HeZ5B5eJL6b"
# **Train Upsampler**
# - Also here I deactivated sampling during training. Only checkpoints will be saved (.pt files) and you can test image quality directly in the testing notebook.
# + id="BKMfXBfuJKfV"
# !python '/content/glide-finetune/train_glide.py' \
#   --train_upsample \
#   --epochs 40 \
#   --learning_rate 1e-04 \
#   --side_x 64 \
#   --side_y 64 \
#   --uncond_p 0.0 \
#   --checkpoints_dir '/content/checkpoints' \

# + [markdown] id="a6yqXrEuJlL6"
# ### Once training is complete, save checkpoints and go to the testing notebook.
glide_finetune_colab.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# * Within a Python file, create a new SQLAlchemy class called `Garbage` that holds the following values...
#   * `__tablename__`: Should be "garbage_collection"
#   * `id`: The primary key for the table that is an integer and automatically increments
#   * `item`: A string that describes what kind of item was collected
#   * `weight`: A double that explains how heavy the item is
#   * `collector`: A string that lets users know which garbage man collected the item
# * Create a connection and a session before adding a few items into the SQLite database crafted.
# * Update the values within at least two of the rows added to the table.
# * Delete the row with the lowest weight from the table.
# * Print out all of the data within the database.

# +
# Import SQL Alchemy
from sqlalchemy import create_engine

# Import and establish Base for which classes will be constructed
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

# Import modules to declare columns and column data types
from sqlalchemy import Column, Integer, String, Float
# -

# + deletable=false nbgrader={"checksum": "8af1cb904695d814b2f78e98ad1e76d6", "grade": false, "grade_id": "cell-55d75c21dbb2c337", "locked": false, "schema_version": 1, "solution": true}
# YOUR CODE HERE
class Garbage(Base):
    __tablename__ = 'garbage_collection'
    id = Column(Integer, primary_key=True)
    item = Column(String)
    weight = Column(Float)
    collector = Column(String)

# + deletable=false nbgrader={"checksum": "59c29987b266a9c395699d181bb087a6", "grade": false, "grade_id": "cell-4230ec6995aad3fd", "locked": false, "schema_version": 1, "solution": true}
# Create a connection to a SQLite database
# YOUR CODE HERE
engine = create_engine('sqlite:///trashcollection.sqlite')
# -

# Create the garbage_collection table within the database
Base.metadata.create_all(engine)

# To push the objects made and query the server we use a Session object
from sqlalchemy.orm import Session
session = Session(engine)

# + deletable=false nbgrader={"checksum": "eea32595c61b7a5ae6d9d0e22b1d18df", "grade": false, "grade_id": "cell-544743e14f0f9a25", "locked": false, "schema_version": 1, "solution": true}
# Create some instances of the Garbage class
# YOUR CODE HERE
session.add(Garbage(id=1, item="bin", weight=50, collector="Bart"))
session.add(Garbage(id=2, item="bin", weight=70, collector="Ken"))
session.add(Garbage(id=3, item="can", weight=15, collector="Homer"))
session.add(Garbage(id=4, item="haul", weight=100, collector="Lisa"))

# + deletable=false nbgrader={"checksum": "2e13e07533fce1b94f2693a49958066b", "grade": false, "grade_id": "cell-7ead20b8cbdbfb73", "locked": false, "schema_version": 1, "solution": true}
# Add these objects to the session and commit them to the database
# YOUR CODE HERE
session.commit()
engine.execute('select * from garbage_collection').fetchall()

# + deletable=false nbgrader={"checksum": "d4a7717476260b79dd53ffdcee0b352e", "grade": false, "grade_id": "cell-a66cda367a0b1515", "locked": false, "schema_version": 1, "solution": true}
# Update two rows of data
# YOUR CODE HERE
update_one = session.query(Garbage).filter(Garbage.id == 1).first()
update_one.collector = "<NAME>"
update_two = session.query(Garbage).filter(Garbage.id == 2).first()
update_two.weight = 75
session.dirty
session.commit()

# + deletable=false nbgrader={"checksum": "fc1a996c75d8d3b91c9f7d1c7a74feee", "grade": false, "grade_id": "cell-2cee82afd03d3679", "locked": false, "schema_version": 1, "solution": true}
session.query(Garbage.id, Garbage.item, Garbage.weight, Garbage.collector).all()

# Delete the row with the lowest weight
# YOUR CODE HERE
lightest = session.query(Garbage).order_by(Garbage.weight).first()
session.delete(lightest)
session.commit()

# + deletable=false nbgrader={"checksum": "ecc333c6906d189152b758d27e331e36", "grade": false, "grade_id": "cell-23672c755f55dd5d", "locked": false, "schema_version": 1, "solution": true}
# Collect all of the items and print their information
# YOUR CODE HERE
garbagecollection = session.query(Garbage).all()
for garbage in garbagecollection:
    print(garbage.id, garbage.item, garbage.weight, garbage.collector)
# -

session.query(Garbage.id, Garbage.item, Garbage.weight, Garbage.collector).all()
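The delete-the-lowest-weight step is the part most easily gotten wrong; as a cross-check, the same CRUD flow can be sketched with the standard-library `sqlite3` module (an in-memory database; the table and column names mirror the exercise, and the collector name "Marge" used for the update is an illustrative value standing in for the redacted `<NAME>`):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE garbage_collection (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    item TEXT, weight REAL, collector TEXT)""")

# Insert the same four rows the exercise uses
rows = [("bin", 50.0, "Bart"), ("bin", 70.0, "Ken"),
        ("can", 15.0, "Homer"), ("haul", 100.0, "Lisa")]
conn.executemany(
    "INSERT INTO garbage_collection (item, weight, collector) VALUES (?, ?, ?)",
    rows)

# Update two rows
conn.execute("UPDATE garbage_collection SET collector = ? WHERE id = ?", ("Marge", 1))
conn.execute("UPDATE garbage_collection SET weight = ? WHERE id = ?", (75.0, 2))

# Delete the row with the lowest weight (a subquery finds its id)
conn.execute("""DELETE FROM garbage_collection
                WHERE id = (SELECT id FROM garbage_collection
                            ORDER BY weight LIMIT 1)""")

# Print out all of the data
for row in conn.execute("SELECT * FROM garbage_collection ORDER BY id"):
    print(row)
```

The subquery is the raw-SQL equivalent of `session.query(Garbage).order_by(Garbage.weight).first()` followed by `session.delete(...)` in the ORM.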
Day-2/Activities/04-Par_CruddyDB/Unsolved/Par_CruddyDB.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
# %matplotlib inline

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
# -

# # Homework #3
#
# Upload the solved notebook via this [form](http://bit.ly/dafe_hw)
# Deadline: __2019-05-22__
#
# ## Loading the data (5%)

lfw_people = datasets.fetch_lfw_people(
    min_faces_per_person=50,
    resize=0.4,
)

# Print:
# - the number of objects
# - the number of features
# - the number of classes
# - check how many images correspond to each class
# - plot the images for random samples with the class names as titles

# +
# Your code here
# -

# ## Time to train neural networks! (5%)

from keras.models import Sequential
from keras.layers import Dense, Flatten, Activation, Conv2D, MaxPooling2D, Dropout
from keras.utils import np_utils
from keras.optimizers import Adam

# +
# Of course, we first have to normalize and center the data:
x = # Your code here
y = # Your code here
# use lfw_people.target and lfw_people.images

# +
# Split the data into train/validation/test sets:
x_train, x_test, y_train_cat, y_test_cat = train_test_split(x, y, train_size=0.6, test_size=0.4, random_state=42)
x_val, x_test, y_val_cat, y_test_cat = train_test_split(x_test, y_test_cat, train_size=0.5, test_size=0.5, random_state=42)

# Apply one-hot encoding to the target variable
# Your code here
# -

# ## Fully connected neural network (20%)

# Let's create our first model; we will use a fully connected neural network
# - First (input) layer: 64 neurons, ReLU activation
# - Hidden layer: 32 neurons, ReLU activation
# - To keep the network from overfitting, use a Dropout layer with rate = 0.4 after each fully connected layer
# - Output layer: up to you :)
#
# We will train with Adam for 100 epochs, mini-batch size: 10
#
# Train on train, validate on val

# +
# Your code here
# -

# Build a confusion_matrix (use sklearn) on the test set and compute the accuracy (test)

# +
confusionmatrix = # Your code here

plt.figure(figsize=(12, 6))
sns.heatmap(confusionmatrix, cmap='Greens', annot=True,
            xticklabels=lfw_people.target_names,
            yticklabels=lfw_people.target_names)
plt.show()
# -

# Assess whom you confuse with whom most often. Why does our network confuse some people with others more often?
#
# Remember that the classes in our problem are not balanced. What accuracy would we get if we always predicted the constant value: <NAME>

# +
# Your code here
# -

# ## These are images! Let's try a Convolutional neural network (30%)
#
# - Use two convolutional layers (16 neurons each, ReLU activation, padding='same')
# - Then a MaxPooling layer of size 2x2
# - Then fully connected layers: 32 neurons and 16 neurons
# - Output layer as before
#
# Remember the dimensionality requirements convolutions impose on the input data and use `.reshape`

# +
# Your code here
# -

# Build a confusion_matrix (use sklearn) on the test set and compute the accuracy (test)

# +
confusionmatrix = # Your code here

plt.figure(figsize=(12, 6))
sns.heatmap(confusionmatrix, cmap='Greens', annot=True,
            xticklabels=lfw_people.target_names,
            yticklabels=lfw_people.target_names)
plt.show()
# -

# It seems to have improved, but plot the accuracy against the epoch on the training and validation sets. What conclusions can be drawn?

# ### Your answers here

# ## Data augmentation (40%)

# Remember that augmentation can help us fight overfitting; for this we will use the standard capabilities of Keras. Take a look at the [docs](https://keras.io/preprocessing/image/) or at this [article](https://towardsdatascience.com/image-augmentation-for-deep-learning-using-keras-and-histogram-equalization-9329f6ae5085)
#
# You may use any augmentations

from keras.preprocessing.image import ImageDataGenerator

# +
# Your code here
# -

# Check the accuracy on the test set and plot it against the epochs for training and validation

# Answer the questions and explain your answers:
# - Did you manage to beat overfitting?
# - Would the horizontal_flip augmentation be useful?

# ### Your answers here

# ## Extra task (extra +30%)

# Train a neural network that achieves more than 92% accuracy on the test set

# +
# Your code here
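As a reminder for the one-hot encoding step asked for above, the transformation can be sketched with plain NumPy, independent of `keras.utils` (the function name `one_hot` is illustrative):

```python
import numpy as np

def one_hot(labels, num_classes=None):
    # Each integer label becomes a row with a single 1 at that index
    labels = np.asarray(labels)
    if num_classes is None:
        num_classes = labels.max() + 1
    encoded = np.zeros((labels.size, num_classes))
    encoded[np.arange(labels.size), labels] = 1
    return encoded

print(one_hot([0, 2, 1, 2]))
```

This is equivalent in effect to `keras.utils.to_categorical`, which the homework expects you to use.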
archieve/2019/homeworks/Part 1/hw3/hw3.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# <img align="right" src="../../doc/banner_siegel.png" style="width:1000px;">

# # Xarray-I: Data Structure
#
# ## Background
#
# In the previous notebook, we experienced that the data we want to access are loaded in a form called **`xarray.Dataset`**. This is the form in which earth observation data are usually stored in a datacube.
#
# **`xarray`** is an open source project and Python package which offers a toolkit for working with ***multi-dimensional arrays*** of data. An **`xarray.Dataset`** is an in-memory representation of a netCDF (network Common Data Form) file. Understanding the structure of an **`xarray.Dataset`** is the key that enables us to work with these data. Thus, in this notebook, we are mainly dedicated to helping users of our datacube understand its data structure.
#
# First, let's come to the end stage of the previous notebook, where we have loaded a data product. The data product "s2_l2a_bavaria" is used as the example in this notebook.

# ## Description
#
# The following topics are covered in this notebook:
# * **What is inside an `xarray.Dataset` (the structure)?**
# * **(Basic) Subset Dataset / DataArray**
# * **Reshape a Dataset**

# +
import datacube
import pandas as pd
from odc.ui import DcViewer
from odc.ui import with_ui_cbk
import xarray as xr
import matplotlib.pyplot as plt

# Set config for displaying tables nicely
# !! USEFUL !! otherwise parts of longer infos won't be displayed in tables
pd.set_option("display.max_colwidth", 200)
pd.set_option("display.max_rows", None)

# Connect to DataCube
# argument "app" --- user defined name for a session (e.g. choose one matching the purpose of this notebook)
dc = datacube.Datacube(app="nb_understand_ndArrays")

# +
# Load Data Product
ds = dc.load(product="s2_l2a",
             x=(-2.44, -2.42),
             y=(13.58, 13.6),
             output_crs="EPSG:32734",
             time=("2020-06-01", "2020-06-30"),
             measurements=["blue", "green", "red", "nir"],
             resolution=(-10, 10),
             group_by="solar_day",
             progress_cbk=with_ui_cbk())
ds
# -

da = ds.to_array().rename({"variable": "band"})
print(da)

ds2 = da.to_dataset(dim="time")
ds2

# ## **What is inside an `xarray.Dataset`?**

# The figure below is a diagram depicting the structure of the **`xarray.Dataset`** we've just loaded. Combined with the diagram, we hope you may better interpret the text below explaining the data structure of an **`xarray.Dataset`**.
#
# ![xarray data structure](https://live.staticflickr.com/65535/51083605166_70dd29baa8_k.jpg)

# As read from the output block, this dataset has three ***Data Variables***, "blue", "green" and "red" (shown with colors in the diagram), each referring to an individual spectral band.
#
# Each data variable can be regarded as a **multi-dimensional *Data Array*** of the same structure; in this case, it is a **three-dimensional array** (shown as a 3D cube in the diagram) where `time`, `x` and `y` are its ***Dimensions*** (shown as axes along each cube in the diagram).
#
# In this dataset, there are 35 ***Coordinates*** under the `time` dimension, which means there are 35 time steps along the `time` axis. There are 164 coordinates under the `x` dimension and 82 coordinates under the `y` dimension, indicating that there are 164 pixels along the `x` axis and 82 pixels along the `y` axis.
#
# As for the term ***Dataset***, it is like a *container* holding all the multi-dimensional arrays of the same structure (shown as the red-lined box holding all 3D cubes in the diagram).
#
# So this example dataset has a spatial extent of 164 by 82 pixels at given lon/lat locations, and spans 35 time stamps and 3 spectral bands.
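The structure just described can be reproduced with a tiny synthetic Dataset built directly from NumPy arrays, without a datacube connection (the variable names, coordinate values, and sizes here are illustrative, not the ones from `s2_l2a`):

```python
import numpy as np
import xarray as xr

# Two data variables, each a 3-D array over the dimensions (time, y, x)
times = np.array(['2020-06-01', '2020-06-02'], dtype='datetime64[ns]')
ds_demo = xr.Dataset(
    data_vars={
        'blue': (('time', 'y', 'x'), np.random.rand(2, 3, 4)),
        'red': (('time', 'y', 'x'), np.random.rand(2, 3, 4)),
    },
    coords={'time': times, 'y': np.arange(3), 'x': np.arange(4)},
    attrs={'crs': 'EPSG:32734'},  # shared metadata lives in attrs
)

print(list(ds_demo.data_vars))   # the "container" contents: ['blue', 'red']
print(ds_demo.blue.shape)        # one 3-D DataArray: (2, 3, 4)
print(ds_demo.isel(time=0).blue.shape)  # drop the time dimension: (3, 4)
```

Playing with such a toy Dataset is a quick way to confirm how Data Variables, Dimensions, Coordinates, and Attributes fit together before working with real datacube loads.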
# # **In summary, an *`xarray.Dataset`* is substantially a container for high-dimensional *`DataArray`s* with common attributes (e.g. crs) attached**:
# * **Data Variables (`values`)**: **it's generally the first/highest dimension to subset from a high-dimensional array.** Each `data variable` contains a multi-dimensional array of all other dimensions.
# * **Dimensions (`dims`)**: other dimensions arranged in hierarchical order *(e.g. 'time', 'y', 'x')*.
# * **Coordinates (`coords`)**: coordinates along each `Dimension` *(e.g. time steps along the 'time' dimension, latitudes along the 'y' dimension, longitudes along the 'x' dimension)*
# * **Attributes (`attrs`)**: a dictionary (`dict`) containing metadata.

# Now let's deconstruct the dataset we have just loaded a bit further to have things more clarified! :D

# * **To check the existing dimensions of a dataset**

# + tags=[]
ds.dims
# -

# * **To check the coordinates of a dataset**

ds.coords  # ['time']

# * **To check all coordinates along a specific dimension**
# <br>
# <img src=https://live.staticflickr.com/65535/51115452191_ec160d4514_o.png, width="450">

ds.time
# OR
#ds.coords['time']

# * **To check the attributes of the dataset**

ds.attrs

# ## **Subset Dataset / DataArray**
#
# * **To select all data of the "blue" band**
# <br>
# <img src=https://live.staticflickr.com/65535/51115092614_366cb774a8_o.png, width="350">

ds.blue
# OR
#ds['blue']

# Only print pixel values
ds.blue.values

# * **To select blue band data at the first time stamp**
# <br>
# <img src=https://live.staticflickr.com/65535/51116131265_8464728bc1_o.png, width="350">

ds.blue[0]

# * **To select blue band data at the first time stamp where the latitude is the largest in the defined spatial extent**
# <img src=https://live.staticflickr.com/65535/51115337046_aeb75d0d03_o.png, width="350">

ds.blue[0][0]

# * **To select the upper-left corner pixel**
# <br>
# <img src=https://live.staticflickr.com/65535/51116131235_b0cca9589f_o.png, width="350">

ds.blue[0][0][0]

# ### **Subset a dataset with `isel` vs. `sel`**
# * Use `isel` when subsetting with **index**
# * Use `sel` when subsetting with **labels**

# * **To select data of all spectral bands at the first time stamp**
# <br>
# <img src=https://live.staticflickr.com/65535/51114879732_7d62db54f4_o.png, width="750">

ds.isel(time=[0])

# * **To select data of all spectral bands of June 2020**
# <br>
# <img src=https://live.staticflickr.com/65535/51116281070_75f1b46a9c_o.png, width="750">

ds.sel(time='2020-06')
#print(ds.sel(time='2019'))

# ***Tip: More about indexing and subsetting a Dataset or DataArray is presented in [Notebook_05](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/05_xarrayII.ipynb).***

# ## **Reshape Dataset**
#
# * **Convert the Dataset (subset to June 2020) to a *4-dimension* DataArray**

da = ds.sel(time='2020-06').to_array().rename({"variable": "band"})
da

# * **Convert the *4-dimension* DataArray back to a Dataset by setting "time" as the Data Variable (reshaped)**
#
# ![ds_reshaped](https://live.staticflickr.com/65535/51151694092_ca550152d6_o.png)

ds_reshp = da.to_dataset(dim="time")
print(ds_reshp)

# ## Recommended next steps
#
# If you now understand the **data structure** of `xarray.Dataset` and the **basic indexing** methods illustrated in this notebook, you are ready to move on to the next notebook, where you will learn more about **advanced indexing** and calculating some **basic statistical parameters** of the n-dimensional arrays! :D
#
# In case you are gaining interest in exploring the world of **xarray**, you may dive into the [Xarray user guide](http://xarray.pydata.org/en/stable/index.html).
#
# <br>
# To continue working through the notebooks in this beginner's guide, the following notebooks are designed to be worked through in the following order:
#
# 1. [Jupyter Notebooks](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/01_jupyter_introduction.ipynb)
# 2. [eo2cube](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/02_eo2cube_introduction.ipynb)
# 3. [Loading Data](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/03_data_lookup_and_loading.ipynb)
# 4. ***Xarray I: Data Structure (this notebook)***
# 5. [Xarray II: Index and Statistics](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/05_xarrayII.ipynb)
# 6. [Plotting data](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/06_plotting_basics.ipynb)
# 7. [Spatial analysis](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/07_basic_analysis.ipynb)
# 8. [Parallel processing with Dask](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/08_parallel_processing_with_dask.ipynb)
#
# The additional notebooks are designed for users to build up both basic and advanced skills which are not covered by the beginner's guide. Self-motivated users can go through them according to their own needs. They act as complements to the guide:
# <br>
#
# 1. [Python's file management tools](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/I_file_management.ipynb)
# 2. [Image Processing basics using NumPy and Matplotlib](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/II_numpy_image_processing.ipynb)
# 3. [Vector Processing](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/III_process_vector_data.ipynb)
# 4. [Advanced Plotting](https://github.com/eo2cube/eo2cube_notebooks/blob/main/get_started/intro_to_eo2cube/IV_advanced_plotting.ipynb)

# ***
# ## Additional information
#
# This notebook is for the usage of Jupyter Notebook of the [Department of Remote Sensing](http://remote-sensing.org/), [University of Wuerzburg](https://www.uni-wuerzburg.de/startseite/).
#
# **License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
#
# **Contact:** If you would like to report an issue with this notebook, you can file one on [Github](https://github.com).
#
# **Last modified:** January 2022
3_Data_Cube_Basics/notebooks/.ipynb_checkpoints/03_xarrayI_data_structure-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Step-by-step MNIST Digits Classification - Convolutional Neural Networks # # 1. Load dataset, explore it (display images, mean, min, max values, etc.) and split it into train, validation and test sets # 2. Data scaling # 3. One hot encoding # 4. Define your model, cost function, optimizer, learning rate # 5. Define your callbacks (save your model, patience, etc.) # 6. Train your model # # 6.1 If you are satisfied with the train and validation performance go to the next step # # 6.2 If you are not satisfied with the train and validation performance go back to step 5 # 7. Test your model on the test and extract relevant metrics # + # %matplotlib inline import matplotlib.pylab as plt import numpy as np import tensorflow as tf physical_devices = tf.config.experimental.list_physical_devices('GPU') tf.config.experimental.set_memory_growth(physical_devices[0], True) # - # ## Load dataset, explore it and split it into train, validation and test sets # # - [Load MNIST Keras](https://keras.io/api/datasets/mnist/#load_data-function) # Loading the data using the Keras function (X_dev, Y_dev), (X_test, Y_test) = tf.keras.datasets.mnist.load_data() # The data comes already split # in dev and test sets print("Development set") print("Images: ",X_dev.shape) print("Labels shape:",Y_dev.shape) print("\nNumber of classes:",np.unique(Y_dev).size) print("\nClasses:",np.unique(Y_dev)) print("\nTest set") print("Images: ",X_test.shape) print("Labels shape: ",Y_test.shape) plt.figure() plt.hist(Y_dev, bins = range(11)) plt.xlabel("Labels") plt.ylabel("Label Frequency") plt.show() # Disaplying some samples from the development set sample_indexes = np.random.choice(np.arange(X_dev.shape[0], dtype = int),size = 30, replace = False) plt.figure(figsize = (24,18)) for 
(ii,jj) in enumerate(sample_indexes): plt.subplot(5,6,ii+1) plt.imshow(X_dev[jj], cmap = "gray") plt.title("Label: %d" %Y_dev[jj]) plt.show() # + #The number of classes across samples looks balanced # Let's shuffle the samples and split them indexes = np.arange(X_dev.shape[0], dtype = int) np.random.shuffle(indexes) X_dev = X_dev[indexes] Y_dev = Y_dev[indexes] nsplit = int(0.75*X_dev.shape[0]) # Train/validation split X_train = X_dev[:nsplit] Y_train = Y_dev[:nsplit] X_val = X_dev[nsplit:] Y_val = Y_dev[nsplit:] print("\nTrain set") print("Images: ",X_train.shape) print("Labels shape: ",Y_train.shape) print("\nValidation set") print("Images: ",X_val.shape) print("Labels shape: ",Y_val.shape) # - print(X_train.min(),X_train.max(),X_train.mean(),X_train.std()) print(X_val.min(),X_val.max(),X_val.mean(),X_val.std()) # ## 2. Data Scaling # + norm_type = 0 # 0 -> min-max; 1-> standardization if norm_type == 0: X_train = X_train/255 X_val = X_val/255 X_test = X_test/255 elif norm_type == 1: train_mean, train_std = X_train.mean(),X_train.std() X_train = (X_train - train_mean)/train_std X_val = (X_val - train_mean)/train_std X_test = (X_test - train_mean)/train_std else: pass # - # ## 3. One hot encoding # + Y_train_oh = tf.keras.utils.to_categorical(Y_train) Y_val_oh = tf.keras.utils.to_categorical(Y_val) Y_test_oh = tf.keras.utils.to_categorical(Y_test) print("Labels:") print(Y_train[:5]) print() print("One hot encoded labels:") print(Y_train_oh[:5]) # - # ## 4.
Define your model, cost function, optimizer, learning rate def my_model(ishape = (28,28,1),k = 10, lr = 1e-4): model_input = tf.keras.layers.Input(shape = ishape) l1 = tf.keras.layers.Conv2D(48, (3,3), padding='same', activation='relu')(model_input) l2 = tf.keras.layers.Conv2D(48, (3,3), padding='same', activation='relu')(l1) l2_drop = tf.keras.layers.Dropout(0.25)(l2) l3 = tf.keras.layers.MaxPool2D((2,2))(l2_drop) l4 = tf.keras.layers.Conv2D(96, (3,3), padding='same', activation='relu')(l3) l5 = tf.keras.layers.Conv2D(96, (3,3), padding='same', activation='relu')(l4) l5_drop = tf.keras.layers.Dropout(0.25)(l5) flat = tf.keras.layers.Flatten()(l5_drop) out = tf.keras.layers.Dense(k,activation = 'softmax')(flat) model = tf.keras.models.Model(inputs = model_input, outputs = out) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss='categorical_crossentropy', metrics = ["accuracy"]) return model model = my_model() print(model.summary()) # ## 5. Define your callbacks (save your model, patience, etc.) # # - [Keras callbacks](https://keras.io/api/callbacks/) # + model_name = "best_model_mnist_cnn.h5" early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience = 20) monitor = tf.keras.callbacks.ModelCheckpoint(model_name, monitor='val_loss',\ verbose=0,save_best_only=True,\ save_weights_only=True,\ mode='min') # Learning rate schedule def scheduler(epoch, lr): if epoch%10 == 0: lr = lr/2 return lr lr_schedule = tf.keras.callbacks.LearningRateScheduler(scheduler,verbose = 0) # - # ## 6. Train your model model.fit(X_train[:,:,:,np.newaxis],Y_train_oh,batch_size = 32, epochs = 1000, \ verbose = 1, callbacks= [early_stop, monitor, lr_schedule],validation_data=(X_val[:,:,:,np.newaxis],Y_val_oh)) # ## 7. 
Test your model on the test set and extract relevant metrics model.load_weights(model_name) metrics = model.evaluate(X_test[:,:,:,np.newaxis],Y_test_oh) print("Categorical cross-entropy:", metrics[0]) print("Accuracy:", metrics[1]) # + Ypred = model.predict(X_test[:,:,:,np.newaxis]).argmax(axis = 1) wrong_indexes = np.where(Ypred != Y_test)[0] print(wrong_indexes.size) # Displaying some misclassified samples from the test set sample_indexes = np.random.choice(np.arange(wrong_indexes.shape[0], dtype = int),size = 30, replace = False) plt.figure(figsize = (24,18)) for (ii,jj) in enumerate(sample_indexes): plt.subplot(5,6,ii+1) plt.imshow(X_test[wrong_indexes[jj]], cmap = "gray") plt.title("Label: %d, predicted: %d" %(Y_test[wrong_indexes[jj]],Ypred[wrong_indexes[jj]])) plt.show()
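As a quick check of the evaluation logic above, the argmax decoding and misclassification lookup can be reproduced with plain NumPy on toy arrays (hypothetical values, not actual MNIST predictions):

```python
import numpy as np

# Toy predicted class probabilities for 4 samples over 3 classes
probs = np.array([[0.1, 0.8, 0.1],
                  [0.7, 0.2, 0.1],
                  [0.2, 0.3, 0.5],
                  [0.6, 0.3, 0.1]])
y_true = np.array([1, 0, 2, 2])

# Same argmax decoding applied to the model.predict output above
y_pred = probs.argmax(axis=1)
wrong_indexes = np.where(y_pred != y_true)[0]
accuracy = (y_pred == y_true).mean()

print(y_pred, wrong_indexes, accuracy)  # [1 0 2 0] [3] 0.75
```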
JNotebooks/tutorial10_step_by_step_MNIST_digits_classification_cnn.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <h1>Examples</h1> # <strong>Example #1: Function Method</strong> # # Let's create an iterator using a generator function that yields a company's one-week revenue (in million dollars) in reverse order. # + def reverse_revenue_generator(x): i = len(x)-1 while i>=0: yield x[i] i-=1 #company's revenue revenue = [3,4,10,11,18,21,39] for rev in reverse_revenue_generator(revenue): print(rev) # - # <strong>Example #2: Expression Method</strong> # # Using the expression method. # + # creating a list of numbers some_vals = [1, 16, 81, 100] # take square root of each term list_ = [x**0.5 for x in some_vals] # This can be done using a generator expression # generator expressions are surrounded by parentheses () genr = (x**0.5 for x in some_vals) print(list_) print(*genr) # - # Copyright © 2020, Mass Street Analytics, LLC. All Rights Reserved.
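One property worth noting about both methods above: a generator is lazy and single-use. Values are produced on demand, and once exhausted the generator yields nothing. A short sketch:

```python
def reverse_revenue_generator(x):
    # yields items of x from last to first, one at a time
    i = len(x) - 1
    while i >= 0:
        yield x[i]
        i -= 1

revenue = [3, 4, 10, 11, 18, 21, 39]
gen = reverse_revenue_generator(revenue)

print(next(gen))  # 39 -- produced lazily, on demand
print(next(gen))  # 21
print(list(gen))  # [18, 11, 10, 4, 3] -- consumes the rest
print(list(gen))  # [] -- a generator can only be iterated once
```

Calling `next()` on an already-exhausted generator raises `StopIteration`, which is how `for` loops and `list()` know when to stop.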
03 Advanced/18-generators.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- """ Quickstart to Tensorboard The dataset is using MNIST handwritten dataset. author: <NAME> """ from __future__ import print_function import os import tensorflow as tf print("Tensorflow Version: {}".format(tf.__version__)) # # Prepare # ## Load MNIST dataset from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/mnist_data", one_hot=True) # # Parameters / Model # ## hyper-parameters full_training = True learning_rate = 1e-2 training_epochs = 100 if full_training else 1 batch_size = 64 if full_training else 32 display_step = 1 log_path = os.path.join("/","tmp","tensorboard_log","mnist_example") # ## network def conv2d(input, weight_shape, bias_shape): in_count = weight_shape[0] * weight_shape[1] * weight_shape[2] weight_init = tf.random_normal_initializer(stddev=(2/in_count)**0.5) bias_init = tf.constant_initializer(value=0) weight = tf.get_variable("W", weight_shape, initializer=weight_init) bias = tf.get_variable("b", bias_shape, initializer=bias_init) conv = tf.nn.conv2d(input, weight, strides=[1, 1, 1, 1], padding="SAME") return tf.nn.relu(tf.nn.bias_add(conv, bias)) def pool2d(input, k=2): return tf.nn.max_pool(value=input, ksize=[1,k,k,1], strides=[1,k,k,1], padding="SAME") def dense(input, weight_shape, bias_shape): in_count = weight_shape[0] * weight_shape[1] weight_init = tf.random_normal_initializer(stddev=(2/in_count)**0.5) bias_init = tf.constant_initializer(value=0) weight = tf.get_variable("W", weight_shape, initializer=weight_init) bias = tf.get_variable("b", bias_shape, initializer=bias_init) logits = tf.matmul(input, weight) return tf.nn.relu(tf.nn.bias_add(logits, bias)) def inference(input, keep_prob=0.5): x = tf.reshape(input, [-1, 28, 28, 1]) with 
tf.variable_scope("hidden_1"): conv_1 = conv2d(x, [3, 3, 1, 32], [32]) # 28 x 28 x 32 pool_1 = pool2d(conv_1, 2) # 14 x 14 x 32 with tf.variable_scope("hidden_2"): conv_2 = conv2d(pool_1, [3, 3, 32, 64], [64]) # 14 x 14 x 64 pool_2 = pool2d(conv_2, 2) # 7 x 7 x 64 with tf.variable_scope("fc"): pool_2_flat = tf.reshape(pool_2, [-1, 7 * 7 * 64]) fc_1 = dense(pool_2_flat, [7 * 7 * 64, 1024], [1024]) # dropout fc_1_dropout = tf.nn.dropout(fc_1, keep_prob=keep_prob) with tf.variable_scope("output"): output = dense(fc_1_dropout, [1024, 10], [10]) return output # # Learning # ## Target def loss(output, y): """ output: the logits value from inference y: the labeling data """ cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=output, labels=y) loss = tf.reduce_mean(cross_entropy) return loss def training(loss, global_step): """ loss: the loss value global_step: the global training step index """ tf.summary.scalar("loss", loss) optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) grads = tf.gradients(loss, tf.trainable_variables()) grads = list(zip(grads, tf.trainable_variables())) apply_grads = optimizer.apply_gradients(grads_and_vars=grads, global_step=global_step) return grads, apply_grads def evaluate(output, y): """ output: the logits value from inference y: the labeling data """ compare = tf.equal(tf.argmax(output, axis=1), tf.argmax(y, axis=1)) accuracy = tf.reduce_mean(tf.cast(compare, tf.float32)) tf.summary.scalar("eval", accuracy) return accuracy # ## Training print(""" Run 'tensorboard --logdir=/tmp/tensorboard_log/' to monitor the training process. 
""") with tf.Graph().as_default(): with tf.variable_scope("mlp"): x = tf.placeholder("float", [None, 784]) # x is batch input y = tf.placeholder("float", [None, 10]) # y is output for 10 classification keep_prob = tf.placeholder(tf.float32) output = inference(x, keep_prob=keep_prob) # get the inference result loss_val = loss(output=output, y=y) # get the loss global_step = tf.Variable(0, name="global_step", trainable=False) # training step train_grads, train_opt = training(loss=loss_val, global_step=global_step) # training body eval_opt = evaluate(output=output, y=y) # evaluation result # show all training variable info # may cause summary name error # INFO:tensorflow:Summary name mlp/hidden_1/W:0 is illegal; using mlp/hidden_1/W_0 instead. for var in tf.trainable_variables(): tf.summary.histogram(var.name, var) # show grads info for grad, var in train_grads: tf.summary.histogram(var.name + '/gradient', grad) init_var = tf.global_variables_initializer() summary_opt = tf.summary.merge_all() # merge all summaries saver = tf.train.Saver() # for saving checkpoints with tf.Session() as sess: summary_writer = tf.summary.FileWriter(log_path, graph=sess.graph) # write the summary sess.run(init_var) # initialize all variables for epoch in range(training_epochs): avg_loss = 0. 
total_batch = int(mnist.train.num_examples / batch_size) for idx in range(total_batch): batch_x, batch_y = mnist.train.next_batch(batch_size=batch_size) # get the batch data feed_dict_data = {x: batch_x, y: batch_y, keep_prob: 0.5} grads, _ = sess.run([train_grads, train_opt], feed_dict=feed_dict_data) # run training batch_loss = sess.run(loss_val, feed_dict=feed_dict_data) avg_loss += batch_loss / total_batch # calculate the average loss if epoch % display_step == 0: # record log feed_dict_val_data = {x: mnist.validation.images, y: mnist.validation.labels, keep_prob: 1.0} acc = sess.run(eval_opt, feed_dict=feed_dict_val_data) # calculate the accuracy print("Epoch: {}, Accuracy: {}, Validation Error: {}".format(epoch+1, round(acc,2), round(1-acc,2))) summary_str = sess.run(summary_opt, feed_dict=feed_dict_val_data) summary_writer.add_summary(summary_str, sess.run(global_step)) # write out the summary saver.save(sess, os.path.join(log_path, "model-checkpoint"), global_step=global_step) print("Training finished.") feed_dict_test_data = {x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0} acc = sess.run(eval_opt, feed_dict=feed_dict_test_data) # test result print("Test Accuracy:",acc)
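For reference, the `evaluate` op above (argmax of logits vs. argmax of one-hot labels, then the mean of the comparison) computes the same quantity as this NumPy sketch with made-up logits:

```python
import numpy as np

logits = np.array([[2.0, 0.5],   # predicted class 0
                   [0.1, 1.2],   # predicted class 1
                   [3.0, 0.1]])  # predicted class 0
labels = np.array([[1, 0], [1, 0], [1, 0]])  # one-hot: every true class is 0

# mirrors tf.equal(tf.argmax(output, 1), tf.argmax(y, 1))
compare = np.argmax(logits, axis=1) == np.argmax(labels, axis=1)
# mirrors tf.reduce_mean(tf.cast(compare, tf.float32))
accuracy = compare.astype(np.float32).mean()
print(accuracy)  # 2 of 3 predictions match the labels
```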
frameworks/tensorflow/CNN_Tensorboard.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Practice Problems # ### Lecture 11 # Answer each number in a separate cell # # Rename this notebook with your last name, first initial and the lecture # # ex. Cych_B_11 # # Turn-in this notebook on Canvas. # 1. Lambda functions # - Write a regular function **s** that takes one argument and returns the square of that argument # - Write a **lambda** function called **square** that squares the input parameter # - Call your function **s** with the value 4. Print the output # - Call your function **square** with the value 4. Print the output # # 2. map( ) # - create a list of values from 0 to 300 in intervals of 10 # - create a list of values from 0 to 90 in intervals of 3 # - write an anonymous function that calculates the difference between two values # - apply **map( )** and the anonymous function to the two lists you defined # - print the results of the **map( )** function - remember to use **list( )** to make a list from the list generator made by **map( )** # # 3. filter( ) # - Copy the following dictionary into a code cell: # # **lastEruption = {"Mt.Etna": 2017, "Mt. St. Helens" :1980, "Mt. Erebus": 2017, "Mount Teide" : 1909, "Mt. Hood": 1800}** # # - Define an anonymous function **active** that returns a boolean. It should return **True** if a volcano has erupted in the last 5 years # - use the function **filter( )**, the function **active**, and the dictionary of volcanoes to determine which volcanoes have erupted in the last 5 years. # - Print the names of the recently active volcanoes # # 4.
reduce( ) # - write an anonymous function that finds multiples of 7 # - write a different anonymous function that returns the greater of two numbers # - use your two anonymous functions with **filter( )** and **reduce( )** to find the greatest multiple of 7 in this list: # \[234, 55, 40, 100, 450, 335, 308, 693, 333, 405, 303, 109, 321, 565, 891\] # - print the final value # # 5. List comprehensions # - The following dictionary, **atomicNumbers**, has the atomic number of an element as the key and the element name as the value # # **atomicNumbers = {1:'H', 2:"He", 3: "Li", 4:"Be", 5:"B", 6:"C", 7: "N", 8:"O", 9:"F", 10:"Ne", 11:"Na", 12:"Mg", 13:"Al", 14:"Si", 15:"P", 16:"S", 17:"Cl", 18:"Ar"}** # - The following list, **lifeElements**, contains the atomic numbers essential for life # # **lifeElements = \[6,1, 8,7,15,16\]** # - use a list comprehension to print out the names of the elements that are essential for life. # # 6. Dictionary comprehensions # - The following list - elements- is a list of the first 18 elements in the periodic table # # **elements = \["H", "He", "Li", "Be", "B", "C", "N", "O", "F", "N", "Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar"\]** # - Create a dictionary comprehension of the elements and their atomic number. The key is the element name while the value is the atomic number # - print the dictionary
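As a reminder of the call patterns (shown on different data, so it does not solve the exercises above): `filter( )` keeps the values for which an anonymous function returns `True`, and `reduce( )` folds a list pairwise down to a single value.

```python
from functools import reduce

nums = [4, 9, 12, 7, 18, 25]

# filter: keep multiples of 3
multiples_of_3 = list(filter(lambda n: n % 3 == 0, nums))

# reduce: keep the greater of each pair while folding the list
greatest = reduce(lambda a, b: a if a > b else b, multiples_of_3)

print(multiples_of_3, greatest)  # [9, 12, 18] 18
```

Note that in Python 3, `reduce` must be imported from `functools`; `map` and `filter` are built-ins.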
originals/Lecture_11_Practice_Problems.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import auxiliary_tools from tqdm import tqdm from geopy.distance import geodesic import array import random import numpy as np import json import pickle import numpy from math import sqrt from deap import algorithms from deap import base from deap import benchmarks from deap.benchmarks.tools import diversity, convergence, hypervolume from deap import creator from deap import tools def generate_individual(creator, route_requests, rand_dist_min, rand_dist_max): individual = [] for request in route_requests: rand_distance = random.randint(rand_dist_min, rand_dist_max)/1000 rand_angle = random.randint(1, 360) gene = geodesic(kilometers=rand_distance).destination(request, rand_angle)[:2] individual.append(gene) individual = np.array(individual) return creator.individual(individual) def mutation(individual, mutation_probability, rand_dist_min, rand_dist_max): mutated_individual = [] for gene in individual: if random.random() < mutation_probability: rand_distance = random.randint(rand_dist_min, rand_dist_max)/1000 rand_angle = random.randint(1, 360) mutated_gene = geodesic(kilometers=rand_distance).destination(gene, rand_angle)[:2] mutated_individual.append( mutated_gene ) else: mutated_individual.append( gene ) return creator.individual(np.array(mutated_individual)) def crossover(individual_a, individual_b, crossover_probability): child_a = [] child_b = [] for i, (gene_a, gene_b) in enumerate(zip(individual_a, individual_b)): if random.random() < crossover_probability: child_a.append(gene_b) child_b.append(gene_a) else: child_a.append(gene_a) child_b.append(gene_b) return (creator.individual(np.array(child_a)), creator.individual(np.array(child_b))) # + def client_fitness(route_requests, individual): c_fitness = [] for i in range(len(route_requests)): 
request_r = route_requests[i] request_origin = [request_r[0], request_r[1]] vs_individual = individual[i] vs_destination = vs_individual c_fitness.append(auxiliary_tools.getGeoDistanceETA_OSRM(request_origin, vs_destination, 5005, 'walking')) fitness_value = np.sum([f[0] for f in c_fitness]) return fitness_value def operator_fitness(individual, penalty_const): ori_dest = [(first, second) for first, second in zip(individual, individual[1:])] penalty_sum = 0 for pair in ori_dest: if max(pair[0] != pair[1]) == True: penalty_sum+=penalty_const o_fitness = [] for od_r in ori_dest: o_fitness.append(auxiliary_tools.getGeoDistanceETA_OSRM(od_r[0], od_r[1], 5004, 'driving')) fitness_value = np.sum([f[0] for f in o_fitness]) + penalty_sum return fitness_value def fitness(individual, route_requests, penalty_const): import time # start_time = time.time() from pexecute.thread import ThreadLoom loom = ThreadLoom(max_runner_cap=10) loom.add_function(client_fitness, [route_requests, individual], {}) loom.add_function(operator_fitness, [individual, penalty_const], {}) output = loom.execute() client_f = output[0]['output'] operator_f = output[1]['output'] # print("--- %s seconds ---" % round(time.time() - start_time)) return client_f, operator_f # + penalty_const = auxiliary_tools.getPenaltyConst(2) route_requests = auxiliary_tools.loadPrep(1, 0) crossover_probability = 0.4 mutation_probability = 0.5 rand_dist_min = 0 rand_dist_max = 500 population_size = 25 number_generations = 100 idx_evol = 5 # - route_requests import time start_time = time.time() # + creator.create("min_fitness", base.Fitness, weights=(-1.0, -1.0)) creator.create("individual", list, fitness=creator.min_fitness) toolbox = base.Toolbox() toolbox.register("create_individual", generate_individual, creator, route_requests=route_requests, rand_dist_min=rand_dist_min, rand_dist_max=rand_dist_max) toolbox.register("initialize_population", tools.initRepeat, list, toolbox.create_individual) toolbox.register("evaluate", 
fitness, route_requests=route_requests, penalty_const=penalty_const) toolbox.register("crossover", crossover, crossover_probability=crossover_probability) toolbox.register("mutate", mutation, mutation_probability=mutation_probability, rand_dist_min=rand_dist_min, rand_dist_max=rand_dist_max) toolbox.register("select", tools.selNSGA2) # - population = toolbox.initialize_population(n=population_size) population_fitnesses = [toolbox.evaluate(individual) for individual in tqdm(population)] for individual, fitness in zip(population, population_fitnesses): individual.fitness.values = fitness # + list_generations_mean_fitness = [] list_generations_std_fitness = [] list_generations_min_fitness = [] list_generations_max_fitness = [] list_population_generations = [] list_best_individuo_generations = [] list_best_individuo_fitness_generations = [] # - current_generation = 0 while current_generation < number_generations: offspring = list(map(toolbox.clone, population)) crossed_offspring = [] # Crossing for child_1, child_2 in zip(offspring[::2], offspring[1::2]): child_a, child_b = toolbox.crossover(child_1, child_2) del child_a.fitness.values del child_b.fitness.values crossed_offspring.append(child_a) crossed_offspring.append(child_b) # Mutation for mu in range(0,len(crossed_offspring)): mutant = toolbox.mutate(crossed_offspring[mu]) del mutant.fitness.values crossed_offspring[mu] = mutant # Fitness crossed_offspring_fitnesses = [toolbox.evaluate(individual) for individual in tqdm(crossed_offspring)] for individual, fitness in zip(crossed_offspring, crossed_offspring_fitnesses): individual.fitness.values = fitness new_population = population+crossed_offspring # Selection selected = toolbox.select(new_population, population_size) population_selected = list(map(toolbox.clone, selected)) population[:] = population_selected print("---- STATISTICS GENERATION %s ----" % (current_generation)) fits_client = [ind.fitness.values[0] for ind in population] fits_operator =
[ind.fitness.values[1] for ind in population] length = len(population) mean_client = sum(fits_client) / length mean_operator= sum(fits_operator) / length sum2_client = sum(x*x for x in fits_client) sum2_operator = sum(x*x for x in fits_operator) std_client = abs(sum2_client / length - mean_client**2)**0.5 std_operator = abs(sum2_operator / length - mean_operator**2)**0.5 list_generations_mean_fitness.append((mean_client, mean_operator)) list_generations_std_fitness.append((std_client, std_operator)) list_generations_min_fitness.append((min(fits_client), min(fits_operator))) list_generations_max_fitness.append((max(fits_client), max(fits_operator))) list_population_generations.append(population) print(" Min %s %s" % (min(fits_client), min(fits_operator))) print(" Max %s %s" % (max(fits_client), max(fits_operator))) print(" Avg %s %s" % (mean_client, mean_operator)) print(" Std %s %s" % (std_client, std_operator)) current_generation+= 1 best_ind = tools.selBest(population, 1)[0] list_best_individuo_generations.append(best_ind) list_best_individuo_fitness_generations.append(tuple(best_ind.fitness.values)) print("Best individual is %s, %s" % (best_ind.fitness.values)) data_ga = [list_generations_mean_fitness, list_generations_std_fitness, list_generations_min_fitness, list_generations_max_fitness, list_population_generations, list_best_individuo_generations, list_best_individuo_fitness_generations] pickle.dump(data_ga, open("data_ga_evol_"+str(idx_evol)+".pkl", "wb")) print("--- %s seconds ---" % round(time.time() - start_time)) import pickle best_ind = tools.selBest(population, 1)[0] pickle.dump(np.array(best_ind), open("best_individual"+str(idx_evol)+".pkl", "wb"))
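The `crossover` operator above swaps each coordinate pair between parents independently. Stripped of the DEAP `creator` wrapper, its behavior can be verified on toy routes; with probability 1.0 every gene is swapped, since `random.random()` always returns a value below 1:

```python
import random

def crossover(parent_a, parent_b, crossover_probability):
    # uniform crossover: swap each gene pair independently with the given probability
    child_a, child_b = [], []
    for gene_a, gene_b in zip(parent_a, parent_b):
        if random.random() < crossover_probability:
            child_a.append(gene_b)
            child_b.append(gene_a)
        else:
            child_a.append(gene_a)
            child_b.append(gene_b)
    return child_a, child_b

a = [(50.0, 4.0), (50.1, 4.1), (50.2, 4.2)]
b = [(51.0, 5.0), (51.1, 5.1), (51.2, 5.2)]
ca, cb = crossover(a, b, 1.0)  # always swap -> children are the swapped parents
print(ca == b and cb == a)     # True
```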
notebooks/evolving_1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook prepares edgar v5.0 emissions to be used with the WRF-Chem anthro_emiss preprocessing tool. It adds the date and datesec variables and the units attribute that the tool needs. import xarray as xr import os import numpy as np import pandas as pd # + #define input and output paths ed_pth = '/geos/d21/s1878599/edgarv5_process/monthly_all_sectors_with_tot/' save_dir = '/geos/d21/s1878599/edgarv5_process/edgar2015_mozart_wrfchem/' #create save dir if it doesn't exist. if not os.path.isdir(save_dir): # !mkdir -p $save_dir # - ds=xr.open_dataset(ed_pth +'monthly_v50_2015_CO_.0.1x0.1.nc') ds # ## Add monthly date and datesec variables and units attribute for f in os.listdir(ed_pth): sp=f.split('_')[3] print(sp) s=xr.open_dataset(ed_pth+f) #clean variables attributes and add units attributes for sec in list(s.keys()): s[sec].attrs['units']='kg m-2 s-1' #add datetime variables. s=s.assign({'date': xr.DataArray(data=np.array([20150101, 20150201, 20150301, 20150401, 20150501, 20150601, 20150701, 20150801, 20150901, 20151001, 20151101, 20151201]),dims=["time"])}) s=s.assign({'datesec': xr.DataArray(data=np.zeros(12),dims=["time"])}) #save s.to_netcdf(save_dir +'edgar_v5_2015_'+ sp +'_0.1x0.1.nc',format='NETCDF3_64BIT') ds=xr.open_dataset('/geos/d21/s1878599/edgarv5_process/edgar2015_mozart_wrfchem/edgar_v5_2015_TOLUENE_0.1x0.1.nc') ds ds.datesec # # Run anthro_emiss and check outputs wrfchemi t=xr.open_dataset('../../../../WRF-Chem3.9.1.1/PREPEMISS/ANTHRO/src/wrfchemi_d01_2019-10-01_06:00:00') t t.E_NH3.plot()
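The hard-coded `date` values above are YYYYMMDD integers for the first day of each month of 2015. They can equally be generated, which is less error-prone if the year changes (a sketch; the notebook hard-codes them):

```python
year = 2015
# YYYYMMDD integers for the first of each month; datesec (seconds into
# that day) is set to zero, matching the notebook's np.zeros(12)
date = [year * 10000 + month * 100 + 1 for month in range(1, 13)]
datesec = [0] * 12

print(date[0], date[5], date[-1])  # 20150101 20150601 20151201
```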
code/edgarv5_to_WRFChem_anthroemiss.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Import Libraries # + colab={"base_uri": "https://localhost:8080/", "height": 978} colab_type="code" executionInfo={"elapsed": 44811, "status": "ok", "timestamp": 1568867168947, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA2F8QyAQn5_Xz3Osyijxy72tkVAj62JKUXBDwmYg=s64", "userId": "18121850103730925571"}, "user_tz": 240} id="4JUjbCbbrhDp" outputId="e2443d27-3f82-4c66-c38d-b2b74ceae468" import gym import utils import numpy as np import random from tqdm import tqdm import matplotlib.pyplot as plt # %matplotlib inline # - # ### Create Bandit class as environment # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 195, "status": "ok", "timestamp": 1568867485471, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA2F8QyAQn5_Xz3Osyijxy72tkVAj62JKUXBDwmYg=s64", "userId": "18121850103730925571"}, "user_tz": 240} id="EEwWn8Lb3n45" outputId="3cbed62a-834d-46b0-a0b1-48e88034aeb2" # Here the default setting is to have 2 arms, # first one having a probability of winning 0.5 and reward 1, and # the second one having probability of winning 0.1 and reward 100 DEFAULT_BANDIT = [(0.5,1), (0.10,100)] class KBandit: def __init__(self, k= None): """ input: k list of tuples each tuple like (win propability, score) returns: K armed bandit based on input tuple """ if k is None: k = DEFAULT_BANDIT self.k = k self.actions = range(len(k)) def step(self, action): """ input: pulls lever with number action returns: reward from lever pull """ win_p, score = self.k[action] return np.random.choice(a = [0,score], size=1, p=[1-win_p,win_p]) # - # If we create a Bandit object with default setting and pull either arm to see results # + B = 
KBandit() print('Pulling first arm 10 times') # This gives us a reward of 1 roughly half of the times for i in range(10): print(B.step(0),end='') print('\n\nPulling second arm 10 times') # This gives us a reward of 100 roughly 1/10 of the times for i in range(10): print(B.step(1),end='') # - # ### The pseudo code for simple Epsilon-Greedy bandit algorithm is shown below # + [markdown] colab_type="text" id="ealT0gK6WXWl" # ![cliffworld](https://drive.google.com/uc?id=1qg4bZPVvs88CARX353b2kzvxoCtv9Gzy) # - # ### Implement Epsilon-Greedy algorithm # + colab={} colab_type="code" id="MvD4GsfzyoCv" class Epsilon_greedy: def __init__(self, epsilon): self.epsilon = epsilon self.q = None self.n = None def set_epsilon(self, epsilon): self.epsilon = epsilon def _init_model(self, env): num_actions = len(env.actions) self.q = np.array([0.0]*num_actions) self.n = np.array([0.0]*num_actions) def select_action(self): if self.epsilon > np.random.uniform(size=1)[0]: return self.random_action() else: return self.greedy_action() def random_action(self): return np.random.choice(range(len(self.q))) def greedy_action(self): best_actions = np.where(max(self.q)==self.q)[0] return np.random.choice(best_actions) def update(self, action, reward): self.n[action] += 1 self.q[action] += 1/self.n[action]*(reward - self.q[action]) def train(self, env, num_epochs= 1000, force_init=False): print('Training the algorithm') if self.q is None or force_init or self.n is None: self._init_model(env) for i in range(num_epochs): action = self.select_action() reward = env.step(action) self.update(action, reward) # - # ### Perform Epsilon-Greedy on Bandit # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 7328, "status": "ok", "timestamp": 1568867826758, "user": {"displayName": "<NAME> Nasrabad", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA2F8QyAQn5_Xz3Osyijxy72tkVAj62JKUXBDwmYg=s64", "userId": "18121850103730925571"}, "user_tz": 240} 
id="bAxkveA38RHe" outputId="1272fd1c-385a-49eb-b841-ba1f602787d7" # Create a Bandit object bandit_env = KBandit() # Create an epsilon-greedy object e_greedy = Epsilon_greedy(0.05) # Train the algorithm e_greedy.train(bandit_env, num_epochs=10000) # Results print('Number of times arms pulled',e_greedy.n) print('Estimated rewards', e_greedy.q) # - # If we train for a large number of epochs, we can see the estimated rewards converging to 0.5 and 10 # + [markdown] colab_type="text" id="-ocaa28bNcdF" # # Frozen lake Dynamic programming # # Winter is here. You and your friends were tossing around a frisbee at the park # when you made a wild throw that left the frisbee out in the middle of the lake. # The water is mostly frozen, but there are a few holes where the ice has melted. # If you step into one of those holes, you'll fall into the freezing water. # At this time, there's an international frisbee shortage, so it's absolutely imperative that # you navigate across the lake and retrieve the disc. # However, the ice is slippery, so you won't always move in the direction you intend. # The surface is described using a grid like the following # SFFF # FHFH # FFFH # HFFG # S : starting point, safe # F : frozen surface, safe # H : hole, fall to your doom # G : goal, where the frisbee is located # The episode ends when you reach the goal or fall in a hole. # You receive a reward of 1 if you reach the goal, and zero otherwise.
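The dynamic-programming solution below relies on Gym's transition model `env.P`, a dict of `P[s][a] = [(prob, next_state, reward, done), ...]` tuples. Its core Bellman backup can be previewed on a tiny hand-made MDP with the same layout (a sketch with a two-state deterministic chain, not the FrozenLake environment):

```python
# states: 0 (start) and 1 (terminal); actions: 0 = "stay", 1 = "go"
# P[s][a] = list of (prob, next_state, reward, done), same layout as gym's env.P
P = {
    0: {0: [(1.0, 0, 0.0, False)], 1: [(1.0, 1, 1.0, True)]},
    1: {0: [(1.0, 1, 0.0, True)], 1: [(1.0, 1, 0.0, True)]},
}
df, theta = 0.9, 1e-8
V = [0.0, 0.0]

while True:
    delta = 0.0
    for s in P:
        # one-step lookahead over both actions, then greedy Bellman backup
        best = max(sum(p * (r + df * V[ns]) for p, ns, r, done in P[s][a])
                   for a in P[s])
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < theta:
        break

print(V)  # V[0] = 1.0 (take action 1 immediately), V[1] stays 0.0
```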
# + colab={} colab_type="code" id="EkMkhsfQHOmc" # Create a frozen lake environment from GYM env = gym.make("FrozenLake-v0").env # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 321, "status": "ok", "timestamp": 1568868035956, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA2F8QyAQn5_Xz3Osyijxy72tkVAj62JKUXBDwmYg=s64", "userId": "18121850103730925571"}, "user_tz": 240} id="WlfIFQtIIApv" outputId="bdf91f51-fa30-49ca-a2b3-4a1b26cc26c8" # Test run of frozen lake observation = env.reset() while True: #your agent goes here action = env.action_space.sample() observation, reward, done, info = env.step(action) env.render() if done: break env.close() # - # # Pseudo code for Value Iteration Algorithm # + [markdown] colab_type="text" id="NskYzMwXWl-m" # ![cliffworld](https://drive.google.com/uc?id=1CKukd0rhRMCRm1HEWAony42jGV1HoydO) # + colab={} colab_type="code" id="WUqqSIa7FUnL" # Create Class for Value Iteration Using Dynamic Programming class DP_value_iteration: def __init__(self, theta=0.001, discount_factor=0.9): self.theta = theta self.df = discount_factor self.V = None self.P = None def _get_greedy_policy_from_V(self, env): # Create a deterministic policy using the optimal value function policy = np.zeros([env.nS, env.nA]) for s in range(env.nS): # One step lookahead to find the best action for this state A = self._one_step_lookahead(env, s) best_action = np.argmax(A) # Always take the best action policy[s, best_action] = 1.0 self.P = policy def _one_step_lookahead(self, env, state): A = np.zeros(env.nA) for a in range(env.nA): for prob, next_state, reward, done in env.P[state][a]: A[a] += prob * (reward + self.df * self.V[next_state]) return A def _get_delta(self, best_action_value, s): return np.abs(best_action_value - self.V[s]) def _init_model(self, env): self.V = np.zeros(env.nS) def train(self, env, force_init=False): if self.V is None or force_init:
self._init_model(env) while True: delta = 0 # Update each state... for s in range(env.nS): # Do a one-step lookahead to find the best action A = self._one_step_lookahead(env, s) best_action_value = np.max(A) # Calculate delta across all states seen so far delta = max(delta, np.abs(best_action_value - self.V[s])) # Update the value function. Ref: Sutton book eq. 4.10. self.V[s] = best_action_value if delta < self.theta: break self._get_greedy_policy_from_V(env) # + colab={} colab_type="code" id="vBN9nqYUPXMb" # Perform the Dynamic Programming Value Iteration on Frozen Lake Environment env = gym.make("FrozenLake-v0", is_slippery=False).env dp_agent = DP_value_iteration() dp_agent.train(env) # - # ### Value function and Policy After Learning # + colab={"base_uri": "https://localhost:8080/", "height": 337} colab_type="code" executionInfo={"elapsed": 342, "status": "ok", "timestamp": 1568868807579, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA2F8QyAQn5_Xz3Osyijxy72tkVAj62JKUXBDwmYg=s64", "userId": "18121850103730925571"}, "user_tz": 240} id="nB73I-SQSFRV" outputId="421a557b-53be-4eaa-aa1b-683f1bdc7ee0" print('Game world start') env.render() print() policy, v = dp_agent.P, dp_agent.V print("Reshaped Grid Policy (0=left, 1=down, 2=right, 3=up):") print(np.reshape(np.argmax(policy, axis=1), (env.nA, env.nA))) print("") print("Reshaped Grid Value Function:") print(v.reshape((env.nA, env.nA))) # - # # Pseudo -Code for Monte Carlo Algorithm # + [markdown] colab_type="text" id="NGhfkc8yXFI3" # ![cliffworld](https://drive.google.com/uc?id=1lZRPjVflaigujbzg9CLX2wCD-K2-eAmW) # + colab={} colab_type="code" id="2EHJDfkGYVA1" import sys from collections import defaultdict # First create random and greedy policy class Random_policy: def __init__(self, num_actions): self.A = np.ones(num_actions, dtype=float) / num_actions def get_policy_for_state(self, s): return self.A class Greedy_policy(): def __init__(self, Q): self.Q = Q def 
set_Q(self, Q): self.Q = Q def get_policy_for_state(self, s): A = np.zeros_like(self.Q[s], dtype=float) best_action = np.argmax(self.Q[s]) A[best_action] = 1.0 return A class Episode_generator(): def __init__(self, agent, env): self.agent = agent self.env = env def create_episode(self): episode = [] state = self.env.reset() for t in range(100): # Sample an action from our policy probs = self.agent.get_policy_for_state(state) action = np.random.choice(np.arange(len(probs)), p=probs) next_state, reward, done, _ = self.env.step(action) episode.append((state, action, reward)) if done: break state = next_state return episode class MC_controll_importance_sampling: def __init__(self, num_episodes = 100000, verbose=False, discount_factor=0.9): self.verbose = verbose self.nE = num_episodes self.tp = None self.df=discount_factor self.Q = None self.C = None def _init_model(self, env): self.Q = defaultdict(lambda: np.zeros(env.action_space.n)) self.C = defaultdict(lambda: np.zeros(env.action_space.n)) self.tp = Greedy_policy(self.Q.copy()) self.behavorial_agent = Random_policy(env.action_space.n) self.episode_gen = Episode_generator(self.behavorial_agent, env) def _update(self, episode): G = 0.0 W = 1.0 # for each step in episode backwards for t in range(len(episode))[::-1]: state, action, reward = episode[t] G = self.df * G + reward self.C[state][action] += W self.Q[state][action] += (W / self.C[state][action]) * (G - self.Q[state][action]) self.tp.set_Q(self.Q.copy()) if action != np.argmax(self.tp.get_policy_for_state(state)): break W = W * 1./self.behavorial_agent.get_policy_for_state(state)[action] def train(self, env, force_init=False): if self.Q is None or self.C is None or force_init: self._init_model(env) for i_episode in tqdm(range(1, self.nE + 1)): if i_episode % 10000 == 0 and self.verbose: print("Episode %i" % (i_episode)) sys.stdout.flush() episode = self.episode_gen.create_episode() self._update(episode) # + colab={} colab_type="code" id="_Jt-ppxTYU9s" # Create 
Frozen Lake Environment env = gym.make("FrozenLake-v0", is_slippery=False).env # Initialize Monte Carlo Importance Sampling Object mc_ctrl = MC_controll_importance_sampling() mc_ctrl._init_model(env) # Train with MC algorithm mc_ctrl.train(env) # - # ### Grid Policy After MC Learning # + colab={"base_uri": "https://localhost:8080/", "height": 225} colab_type="code" executionInfo={"elapsed": 138393, "status": "ok", "timestamp": 1568773083339, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="mTSP7_0OnZDY" outputId="dbc60d4a-881a-4c66-a33e-cf8e06cf05f4" policy = [mc_ctrl.tp.get_policy_for_state(s) for s in range(env.nS)] print('Game world start') env.reset() env.render() print("Reshaped Grid Policy (0=left, 1=down, 2=right, 3=up):") print(np.reshape(np.argmax(policy, axis=1), (env.nA, env.nA))) print("") # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 279010, "status": "ok", "timestamp": 1568773226704, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="B34A7FSjws2I" outputId="fd8a2964-a996-445b-88a9-a3440897de20" env = gym.make("FrozenLake-v0", is_slippery=True).env mc_ctrl = MC_controll_importance_sampling(verbose = True) mc_ctrl._init_model(env) mc_ctrl.train(env) policy = [mc_ctrl.tp.get_policy_for_state(s) for s in range(env.nS)] print('Game world start') env.reset() env.render() print("Reshaped Grid Policy (0=left, 1=down, 2=right, 3=up):") print(np.reshape(np.argmax(policy, axis=1), (env.nA, env.nA))) print("") # + colab={} colab_type="code" id="o6WsQJqkx2mz" # Try the MC algorithm on another learning task env = gym.make("Blackjack-v0") mc_ctrl = MC_controll_importance_sampling(verbose = False) mc_ctrl._init_model(env) mc_ctrl.train(env) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 1757, "status": "ok", "timestamp": 
1568773427962, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="I5ody2Aey35H" outputId="009c6d41-e578-433f-8c2b-ab82848c4117" # For plotting: Create value function from action-value function # by picking the best action at each state V = defaultdict(float) for state, action_values in mc_ctrl.Q.items(): action_value = np.max(action_values) V[state] = action_value utils.plot_value_function(V, title="Optimal Value Function") # + [markdown] colab_type="text" id="CHI2A0Vfiyu8" # # Q-Learning # + [markdown] colab_type="text" id="3QlWEA5jLWwj" # # Cliff walking # # # This is a simple implementation of the Gridworld Cliff # reinforcement learning task. # Adapted from Example 6.6 (page 106) from Reinforcement Learning: An Introduction # by <NAME> Barto: # http://incompleteideas.net/book/bookdraft2018jan1.pdf # With inspiration from: # https://github.com/dennybritz/reinforcement-learning/blob/master/lib/envs/cliff_walking.py # The board is a 4x12 matrix, with (using Numpy matrix indexing): # [3, 0] as the start at bottom-left # [3, 11] as the goal at bottom-right # [3, 1..10] as the cliff at bottom-center # Each time step incurs -1 reward, and stepping into the cliff incurs -100 reward # and a reset to the start. An episode terminates when the agent reaches the goal. 
# # ![cliffworld](https://drive.google.com/uc?id=1FbykOcXgrhxbz7goBhQWztR6apQ2gPUV) # - # # Pseudo-Code for Q-learning Algorithm # + [markdown] colab_type="text" id="kCi94pBsXVKS" # ![cliffworld](https://drive.google.com/uc?id=1U0KWkrCGIZ0AenGyorNGlAN88Yp8us-N) # - # # Create Q-learning Code # + colab={} colab_type="code" id="eQBZPAQsixeV" import itertools class Greedy_policy(): def __init__(self, Q): self.Q = Q def set_Q(self, Q): self.Q = Q def get_policy_for_state(self, s): A = np.zeros_like(self.Q[s], dtype=float) best_action = np.argmax(self.Q[s]) A[best_action] = 1.0 return A class Epsilon_greedy_policy(Greedy_policy): def __init__(self, Q, epsilon): super(Epsilon_greedy_policy, self).__init__(Q) self.epsilon = epsilon def get_policy_for_state(self, s): greedy_policy = super(Epsilon_greedy_policy, self).get_policy_for_state(s) return self.make_greedy_policy_epsilon_greedy(greedy_policy) def make_greedy_policy_epsilon_greedy(self, greedy_pol): best_action = np.argmax(greedy_pol) greedy_pol += self.epsilon/len(greedy_pol) greedy_pol[best_action] -= self.epsilon return greedy_pol class Q_Learning: def __init__(self, num_episodes = 10000, verbose=False, discount_factor=0.9, alpha=0.5, epsilon=0.1): self.verbose = verbose self.nE = num_episodes self.df=discount_factor self.alpha=alpha self.epsilon=epsilon self.greedy_pol = None self.Q = None def _init_model(self, env): self.Q = defaultdict(lambda: np.zeros(env.action_space.n)) self.greedy_pol = Epsilon_greedy_policy(self.Q.copy(), self.epsilon) def _td_update(self, state, action, env): next_state, reward, done, _ = env.step(action) best_next_action = np.argmax(self.Q[next_state]) td_target = reward + self.df * self.Q[next_state][best_next_action] td_delta = td_target - self.Q[state][action] self.Q[state][action] += self.alpha * td_delta self.greedy_pol.set_Q(self.Q.copy()) return done, next_state def run_episode(self, env): state = env.reset() for t in itertools.count(): # Take a step action_probs = 
self.greedy_pol.get_policy_for_state(state) action = np.random.choice(np.arange(len(action_probs)), p=action_probs) done, next_state = self._td_update(state, action, env) if done or t == 50: break state = next_state def train(self, env, force_init=False): if self.Q is None or force_init: self._init_model(env) for i_episode in range(1, self.nE + 1): if i_episode % 10 == 0 and self.verbose: print("Episode %i" % (i_episode)) sys.stdout.flush() self.run_episode(env) # + colab={} colab_type="code" id="kK6esCMHiapM" from gym.envs.toy_text.cliffwalking import CliffWalkingEnv env = CliffWalkingEnv() q_learner = Q_Learning(num_episodes=100) q_learner.train(env) env.render() world = np.zeros(env.shape) for s, v in q_learner.Q.items(): pos = np.unravel_index(s, env.shape) world[pos] = np.argmax(v) # UP:0, RIGHT: 1, DOWN: 2, LEFT: 3 world # + colab={} colab_type="code" id="u8XHhFYQi6cV" env = CliffWalkingEnv() q_learner = Q_Learning(num_episodes=100) q_learner.train(env) # + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" executionInfo={"elapsed": 54046, "status": "ok", "timestamp": 1568773280912, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="xjtVyJBaqK1L" outputId="d6b97412-0204-49fb-d31b-eb2de59c7f49" env.render() world = np.zeros(env.shape) for s, v in q_learner.Q.items(): pos = np.unravel_index(s, env.shape) world[pos] = np.argmax(v) # UP:0, RIGHT: 1, DOWN: 2, LEFT: 3 world # - # ## Visualize Q-learning Result # + colab={} colab_type="code" id="_Wj4Pfz4oM9k" import seaborn as sns def mk_heatmap(Q, action, action_name): world = np.zeros(env.shape) for s, v in Q.items(): pos = np.unravel_index(s, env.shape) world[pos] = v[action] sns.set(rc={'figure.figsize':(11.7,5.27)}) sns.heatmap(world, vmax = 10, vmin = -10, annot=True, cmap = sns.diverging_palette(240, 10, n=9), linewidths=.5, linecolor='black').set_title(action_name) # + colab={"base_uri":
"https://localhost:8080/", "height": 353} colab_type="code" executionInfo={"elapsed": 54735, "status": "ok", "timestamp": 1568773281637, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="tB818z2EtUnc" outputId="bb58d29b-5388-452c-9900-a067c69c0a44" mk_heatmap(q_learner.Q, 0, 'UP') # + colab={"base_uri": "https://localhost:8080/", "height": 353} colab_type="code" executionInfo={"elapsed": 55224, "status": "ok", "timestamp": 1568773282150, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="0WxPolYFtU2v" outputId="a14a2615-6622-452c-c867-af9b03115667" mk_heatmap(q_learner.Q, 1, 'RIGHT') # + colab={"base_uri": "https://localhost:8080/", "height": 353} colab_type="code" executionInfo={"elapsed": 55728, "status": "ok", "timestamp": 1568773282683, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="MvtVTBv3tU57" outputId="ed954ead-59fe-42fb-db79-a23b6ce24862" mk_heatmap(q_learner.Q, 2, 'DOWN') # + colab={"base_uri": "https://localhost:8080/", "height": 353} colab_type="code" executionInfo={"elapsed": 56484, "status": "ok", "timestamp": 1568773283468, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="j8IioRATtU9Y" outputId="8758c32e-70c9-4ecd-a1e9-012cb2469c82" mk_heatmap(q_learner.Q, 3, 'LEFT') # + [markdown] colab_type="text" id="zSHT4qtj9DQP" # # SARSA # - # ## Pseudo-Code for SARSA Algorithm # + [markdown] colab_type="text" id="wLD8UuhEXfRN" # ![cliffworld](https://drive.google.com/uc?id=1Gcet8XATip6pmJkamO4OKbL1b71I3TfN) # + colab={} colab_type="code" id="D_P7xU0S9KPH" class SARSA(Q_Learning): def _td_update(self, state, action, env): next_state, reward, done, _ = env.step(action) # Pick the next action next_action_probs = self.greedy_pol.get_policy_for_state(state) next_action = np.random.choice(np.arange(len(next_action_probs)), p=next_action_probs) 
td_target = reward + self.df * self.Q[next_state][next_action] td_delta = td_target - self.Q[state][action] self.Q[state][action] += self.alpha * td_delta self.greedy_pol.set_Q(self.Q.copy()) return done, next_state # + colab={} colab_type="code" id="SCtrTp3r97tU" env = CliffWalkingEnv() sarsa_learner = SARSA(num_episodes=500, epsilon = 0.01, verbose=False) sarsa_learner.train(env) # + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" executionInfo={"elapsed": 57295, "status": "ok", "timestamp": 1568773284320, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="5ffXuDGj9YqO" outputId="93be9cd9-996b-4a84-9c77-10a235a330f9" env.render() world = np.zeros(env.shape) for s, v in sarsa_learner.Q.items(): pos = np.unravel_index(s, env.shape) world[pos] = np.argmax(v) # UP:0, RIGHT: 1, DOWN: 2, LEFT: 3 world # - # # Visualize SARSA Result # + colab={"base_uri": "https://localhost:8080/", "height": 353} colab_type="code" executionInfo={"elapsed": 57789, "status": "ok", "timestamp": 1568773284837, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="jb-_HEuw9Yqy" outputId="e2a33559-183a-44ae-cb34-74299ea3c60e" mk_heatmap(sarsa_learner.Q, 0, 'UP') # + colab={"base_uri": "https://localhost:8080/", "height": 353} colab_type="code" executionInfo={"elapsed": 58357, "status": "ok", "timestamp": 1568773285423, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="jBk6ksoq9Yq5" outputId="f595b9aa-ebfa-467f-e192-a6596a276a7b" mk_heatmap(sarsa_learner.Q, 1, 'RIGHT') # + colab={"base_uri": "https://localhost:8080/", "height": 353} colab_type="code" executionInfo={"elapsed": 58823, "status": "ok", "timestamp": 1568773285913, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="EuGRHLIr9YrA" outputId="100e8d7c-6599-407c-bdc6-be41239ade6c" 
mk_heatmap(sarsa_learner.Q, 2, 'DOWN') # + colab={"base_uri": "https://localhost:8080/", "height": 353} colab_type="code" executionInfo={"elapsed": 59525, "status": "ok", "timestamp": 1568773286636, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="OVfsYRue9YrI" outputId="303cc615-49a6-43f1-c5ce-437d357fe982" mk_heatmap(sarsa_learner.Q, 3, 'LEFT') # + [markdown] colab_type="text" id="JbZ4cR8OXnsT" # # Exercises # # ## Implement on-policy MC # # Implement the On-policy first-visit MC control method: # # ![cliffworld](https://drive.google.com/uc?id=16a330_POv-RkJtZilwVXHgmdgvylEENL) # # + [markdown] colab_type="text" id="ZE3Bq-xSX0MX" # ## Mountain car # # A car is on a one-dimensional track, positioned between two "mountains". The goal is to drive up the mountain on the right; however, the car's engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum. # # ### Observational space # Here the core problem is that the observational space is continuous (Note that action space is still discrete). 
# # | Num | Observation | Min | Max | # |------|-------------|------|------| # | 0 | position | -1.2 | 0.6 | # | 1 | velocity | -0.07| 0.07 | # # # ### Action space # # | Num | Action | # |------|-------------| # | 0 | push left | # | 1 | no push | # | 2 | push right | # # + colab={"base_uri": "https://localhost:8080/", "height": 422} colab_type="code" executionInfo={"elapsed": 6074, "status": "ok", "timestamp": 1568773313809, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12953947379101213852"}, "user_tz": 240} id="36enIK2J3m2q" outputId="3de48a43-2717-49aa-d859-d33b705685a7" # Example run with random agent env = utils.wrap_env(gym.make("MountainCar-v0")) observation = env.reset() while True: env.render() #your agent goes here action = env.action_space.sample() observation, reward, done, info = env.step(action) if done: break env.close() utils.show_video() # -
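The mountain-car exercise hinges on the observation space being continuous while tabular methods like the Q-learning class above need discrete state keys. One common bridge is uniform binning of each observation dimension. Below is a minimal sketch; the bounds come from the observation table above, while `n_bins`, `make_discretizer`, and `discretize` are illustrative names, not part of the workshop code.

```python
import numpy as np

# Bounds from the MountainCar observation table above; n_bins is a free choice.
OBS_LOW = np.array([-1.2, -0.07])
OBS_HIGH = np.array([0.6, 0.07])

def make_discretizer(n_bins=20):
    # Keep only the interior bin edges, so indices always land in [0, n_bins - 1]
    edges = [np.linspace(lo, hi, n_bins + 1)[1:-1]
             for lo, hi in zip(OBS_LOW, OBS_HIGH)]
    def discretize(obs):
        # One integer bin index per observation dimension
        return tuple(int(np.digitize(x, e)) for x, e in zip(obs, edges))
    return discretize

discretize = make_discretizer(n_bins=20)
print(discretize(np.array([-0.5, 0.0])))
```

A tuple like this can serve directly as the state key in the tabular `Q_Learning` class above, whose `Q` is a `defaultdict` keyed by state.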
reinforcement learning/session1/RL_Workshop_Session1_with_solutions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: work # language: python # name: work # --- # + import os import cv2 import imagehash import pandas as pd import numpy as np import matplotlib.pyplot as plt from tqdm import tqdm from PIL import Image #import pytesseract from scipy.stats import entropy # - DATASET_FOLDER = "test_task_logo_dataset" files = next(os.walk(DATASET_FOLDER))[2] files = list(filter(lambda x: x.endswith("jpg"), files)) files = list(map(lambda x: os.path.join(DATASET_FOLDER, x), files)) files = np.array(files) df = pd.DataFrame([]) df["path"] = files df["hash"] = df["path"].apply(lambda x: str(imagehash.phash(Image.open(x)))) # + def compute_groups(series, threshold=50): series = series.apply(lambda x: 2 * imagehash.hex_to_hash(x).hash.astype(np.int8) - 1) series = np.array(series.tolist()) matrix = np.einsum("kij,mij->km", series, series) groups = map(lambda x: list(np.where(x > threshold)[0]), list(matrix)) return groups def compute_entropy(img): r, _ = np.histogram(img[..., 0], bins=255) g, _ = np.histogram(img[..., 1], bins=255) b, _ = np.histogram(img[..., 2], bins=255) return entropy(r), entropy(g), entropy(b) # - groups = compute_groups(df.hash) df["group"] = -1 with tqdm(ascii=True, leave=False, total=len(df)) as bar: for i, group in enumerate(groups): if (df["group"].loc[group] == -1).any(): df.loc[group, "group"] = i bar.update() df.to_csv("dataset_full.csv", index=False) counts = df.groupby("group").count().reset_index()[["group", "path"]] counts = counts.rename(columns={"path": "n_images"}) clean = df.groupby("group").apply(lambda g: g["path"].iloc[0]).reset_index() clean = pd.DataFrame(clean) clean = clean.rename(columns={0: "path"}) clean = clean.merge(counts, how="inner", on="group") clean.to_csv("dataset.csv", index=False) clean # + df["entropy_r"] = 0 df["entropy_g"] = 0 df["entropy_b"] = 0 df["h"] = 0 
df["w"] = 0 with tqdm(ascii=True, leave=False, total=len(df)) as bar: for index, row in df.iterrows(): img = np.array(Image.open(row.path)) r, g, b = compute_entropy(img) df.loc[index, "entropy_r"] = r df.loc[index, "entropy_g"] = g df.loc[index, "entropy_b"] = b df.loc[index, "h"] = img.shape[0] df.loc[index, "w"] = img.shape[1] bar.update() df["entropy"] = (df["entropy_r"] + df["entropy_g"] + df["entropy_b"]) / 3.0 df = df.sort_values(by="entropy") # - def display_images(csv, rows, cols, show=True, title_column=None, fname=None): fig, axes = plt.subplots(nrows=rows, ncols=cols, figsize=(8, 8), dpi=150) n_total = len(csv) n_grid = rows * cols subset = csv.sample(n=n_grid, replace=n_grid > n_total) axes = axes.ravel() if n_grid > 1 else [axes] i = 0 for index, row in subset.iterrows(): image = cv2.cvtColor(cv2.imread(row.path), cv2.COLOR_BGR2RGB) axes[i].imshow(image) if title_column: title = row[title_column] if title != "no_logo": title = "logo" #title = "\n".join(title.split()) axes[i].set_title(title, fontsize=10) axes[i].set_axis_off() axes[i].imshow(image) axes[i].set_axis_off() i += 1 if fname is not None: plt.savefig(fname, dpi=150) if show: #plt.tight_layout() plt.show() plt.close(fig) df = pd.read_csv("dataset_with_labels.csv") df[df.n_images > 1] df df[df.api_name == "no_logo"] counts = df.groupby(["h", "w"]).count().sort_values(by="group") counts = counts[counts.group > 25] counts df[df.n_images > 3] df.entropy.hist(bins=50) display_images(df[df.api_name == "no_logo"], 7, 7) (16 / 36 + 14 / 32 + 11 / 36) / 3 100 - 19.4
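The `compute_groups` helper above maps each hex phash to a ±1 bit matrix and thresholds the `einsum` dot products at 50. For 64-bit hashes the dot product and the Hamming distance are tied by the identity `dot = 64 - 2 * hamming`, so `dot > 50` accepts pairs whose hashes differ in at most 6 of 64 bits. A small self-contained check of that identity (random vectors stand in for real phashes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 64-bit hashes as +/-1 vectors, the same encoding compute_groups
# applies to imagehash.hex_to_hash(...) output (2 * bits - 1).
a = rng.choice([-1, 1], size=64)
b = a.copy()
flip = rng.choice(64, size=6, replace=False)  # force Hamming distance 6
b[flip] *= -1

dot = int(a @ b)
hamming = int(np.sum(a != b))
assert dot == 64 - 2 * hamming  # identity behind the threshold
print(dot, hamming)  # 52 6
```

With the default `threshold=50`, two images therefore land in the same group only when their perceptual hashes are fewer than 7 bits apart.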
EDA.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from csv import reader, DictReader, DictWriter from langdetect import detect from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer import re import cld2 clean_up_re = re.compile('(\[[0-9]{4,4}-[0-9]{2,2}-[0-9]{2,2}\])? *[0-9]*(Food ordered:)?') def clean_comment(comment): m = clean_up_re.match(comment) if m: comment = comment[m.span()[1]:] comment = comment.strip() return comment def get_simple_label(label): return label.split(" ")[0] def get_labels(el): labels = [] for i in range(3): label = el[f'Topic {i+1}'] # print(label) if label: labels.append(get_simple_label(label)) return labels def mylang_detect(text): try: lang = detect(text) except: lang = "en" if lang not in ['en', 'de']: lang = cld2.detect(text)[2][0][1] if lang not in ['en', 'de']: lang = "en" return lang # - csvdict = DictReader(open("/Users/aliosha/Downloads/Tagging.csv"), dialect='excel') csvdict = DictReader(open("/Users/aliosha/Downloads/emails.csv"), dialect='excel') email_list = [] for i, e in enumerate(csvdict): email_list.append(e) if i > 10000: break email_list[1000:1010] corpus = [] for el in csvdict: comment = clean_comment(el['comment']) labels = get_labels(el) score = el.get('score') if comment and labels: item = {"comment":comment, "labels":labels, "score":score} corpus.append(item) len(corpus) corpus[:10] import textacy import spacy import textblob import textblob_de from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer analyzer = SentimentIntensityAnalyzer() analyzer.polarity_scores("hey") # + analyzer = SentimentIntensityAnalyzer() def get_sentiment(comment): sentiment_values = analyzer.polarity_scores(comment) if sentiment_values['compound'] > 0.5: return 'pos' elif sentiment_values['compound'] < -0.5: return 'neg' else: return 
'neu' # test_score = {'compound': 0.9551, 'neg': 0.0, 'neu': 0.645, 'pos': 0.355} # get_sentiment(test_score) # - textblob.TextBlob("not a kill review").sentiment # + def get_emotion_polarity(text, lang=None): if not lang: lang = mylang_detect(text) if lang == "en": tb = textblob.TextBlob(text) else: tb = textblob_de.TextBlobDE(text) # ts = doc_text[i] # em = textacy.lexicon_methods.emotional_valence(ts) return tb.sentiment.polarity def get_emotion(text, lang=None): polarity = get_emotion_polarity(text, lang) if polarity < -0.05: emotion = "negative" elif polarity > 0.05: emotion = "positive" else: emotion = "neutral" return emotion def create_corpus(csvdict): corpus = [] for el in csvdict: comment = clean_comment(el['comment']) labels = get_labels(el) if comment: item = {"comment":comment, "orig_labels":labels, "orig_posit":el['+ / -']} corpus.append(item) return corpus def predict_labels(text): return get_labels_from_dict(magpie.predict_from_text(text)) # - csvdict = DictReader(open("/Users/aliosha/Downloads/Tagging.csv"), dialect='excel') my_dict = list(csvdict) len(my_dict) element = my_dict[10] element corpus = create_corpus(my_dict) corpus[:10] for el in corpus: if el.get('orig_posit') not in ['positive', 'negative']: print(el.get('orig_posit'), get_sentiment(el['comment']), get_emotion(el['comment']), el['comment']) analyzer.polarity_scores("wrong sandwich") # + def translate(rev): if rev == "pos": return "positive" if rev == "neg": return "negative" return "neutral" def are_equal(rev1, rev2): if translate(rev1) == rev2: return True return False for el in corpus: if not are_equal(get_sentiment(el['comment']), get_emotion(el['comment'])): print(el.get('orig_posit'), get_sentiment(el['comment']), get_emotion(el['comment']), el['comment']) # - t_doc = textacy.doc.Doc("Got the chicken burritos and quesadillas.") textacy.preprocess_text("Got the chicken burritos and quesadillas.") textacy def get_labels_from_dict(labels, min_val=0.1): return_labels = [] for 
label, val in labels: # print(label, val) if val > min_val: return_labels.append(label) return return_labels # + # for el in corpus: # comment = el["comment"] # el["predict_labels"] = predict_labels(comment) # el["predict_posit"] = get_emotion(comment) # corpus[:10] # - with open('results.csv', 'w', newline='') as csvresults: csv_labels = ['comment', 'orig_labels', 'predict_labels', 'orig_posit', 'predict_posit'] writer = DictWriter(csvresults, fieldnames=csv_labels) writer.writeheader() for el in corpus: writer.writerow(el)
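The loop above prints the comments where the VADER three-way label and the TextBlob polarity label disagree, via `translate`/`are_equal`. The same comparison can be summarized as an overall agreement rate. The sketch below uses illustrative label pairs standing in for `(get_sentiment(c), get_emotion(c))` outputs, so it runs without vaderSentiment or textblob installed.

```python
def translate(rev):
    # Same mapping as the translate() helper above
    return {"pos": "positive", "neg": "negative"}.get(rev, "neutral")

def agreement_rate(pairs):
    # Fraction of comments where both labelers produce the same label
    hits = sum(1 for vader_label, blob_label in pairs
               if translate(vader_label) == blob_label)
    return hits / len(pairs)

pairs = [("pos", "positive"), ("neg", "neutral"),
         ("neu", "neutral"), ("pos", "negative")]
print(agreement_rate(pairs))  # 0.5
```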
experiments vader sent.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import torch import time from torchvision.models import * import pandas as pd import os # make models from str model_name = "resnet18" # make results os.makedirs("results", exist_ok=True) def computeTime(model, input_size=[1, 3, 224, 224], device='cuda', FP16=False): inputs = torch.randn(input_size) if device == 'cuda': model = model.cuda() inputs = inputs.cuda() if FP16: model = model.half() inputs = inputs.half() model.eval() i = 0 time_spent = [] while i < 200: start_time = time.time() with torch.no_grad(): _ = model(inputs) if device == 'cuda': torch.cuda.synchronize() # wait for cuda to finish (cuda is asynchronous!) if i != 0: time_spent.append(time.time() - start_time) i += 1 print('Avg execution time (ms): {:.3f}'.format(np.mean(time_spent))) return np.mean(time_spent) # + modellist = ["resnet18", "resnet34", "resnet50", "resnet101", "resnet152", \ "resnext50_32x4d", "resnext101_32x8d", "mnasnet1_0", "squeezenet1_0", "densenet121", "densenet169", "inception_v3"] # resnet is enought for now modellist = ["resnet18", "resnet34", "resnet50"] # + batchlist = [1, 4, 8, 16, 32] imsize = [128, 256, 512, 1024] for i, model_name in enumerate(modellist): runtimes = [] # define model print("model: {}".format(model_name)) mdl = globals()[model_name] model = mdl() for batch in batchlist: runtimes.append(computeTime(model, input_size=[batch, 3, 256, 256], device="cuda", FP16=False)/batch) if i == 0: dfbatch = pd.DataFrame({model_name: runtimes}, index = batchlist) else: dfbatch[model_name] = runtimes runtimes = [] for isize in imsize: runtimes.append(computeTime(model, input_size=[1, 3, isize, isize], device="cuda", FP16=False)) if i == 0: dfimsize = pd.DataFrame({model_name: runtimes}, index = imsize) else: dfimsize[model_name] = 
runtimes # - dfbatch.to_csv("results/batch.csv") dfimsize.to_csv("results/imsize.csv") dfbatch dfimsize
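`computeTime` above measures with `time.time` and discards only the first call. A sketch of the same idea using `time.perf_counter`, which is monotonic and higher resolution, with an explicit warm-up phase; `time_fn` is an illustrative helper, and for CUDA models a `torch.cuda.synchronize()` before each clock reading would still be needed, as in the notebook.

```python
import time

def time_fn(fn, n_warmup=5, n_runs=50):
    # Discard warm-up calls entirely, then average over the timed runs
    for _ in range(n_warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) / n_runs

avg = time_fn(lambda: sum(range(10_000)))
print(f"{avg * 1e6:.1f} us/call")
```

Dividing the per-call time by the batch size, as the benchmark loop does with `/batch`, converts this into time per image, which is what makes batch sizes comparable.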
inference_batch_vs_imsize.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torchtext.datasets import Multi30k from torchtext.data import Field, BucketIterator import spacy import random import math import os # - SEED = 2222 random.seed(SEED) torch.manual_seed(SEED) torch.backends.cudnn.deterministic = True spacy_de = spacy.load('de') spacy_en = spacy.load('en') def process_en(text): return [tok.text for tok in spacy_en.tokenizer(text)] def process_de(text): return [tok.text for tok in spacy_de.tokenizer(text)] Source = Field(tokenize=process_de, init_token='<sos>', eos_token='<eos>', lower=True) Target = Field(tokenize=process_en, init_token='<sos>', eos_token='<eos>', lower=True) train_data, valid_data, test_data = Multi30k.splits(exts=('.de', '.en'), fields=(Source, Target)) len(train_data),len(valid_data),len(test_data) Source.build_vocab(train_data, min_freq=2) Target.build_vocab(train_data, min_freq=2) BATCH_SIZE = 128 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') device.type train_iterator, valid_iterator, test_iterator = BucketIterator.splits( (train_data, valid_data, test_data), batch_size=BATCH_SIZE, device=device) class Encoder(nn.Module): def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout): super().__init__() self.input_dim = input_dim self.emb_dim = emb_dim self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.embedding = nn.Embedding(input_dim, emb_dim) self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional=True) self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim) self.dropout = nn.Dropout(dropout) def forward(self, src): embedded = self.dropout(self.embedding(src)) #embedded -> [sent len, batch size, emb dim] outputs, hidden = self.rnn(embedded) #outputs -> [sent len, batch size, hid dim *
n directions] #hidden -> [n layers * n directions, batch size, hid dim] hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim=1))) #hidden -> [batch size, dec hid dim] return outputs, hidden class Attention(nn.Module): def __init__(self, enc_hid_dim, dec_hid_dim): super().__init__() self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.attn = nn.Linear((enc_hid_dim * 2) + dec_hid_dim, dec_hid_dim) self.vec = nn.Parameter(torch.rand(dec_hid_dim)) def forward(self, hidden, encoder_outputs): #hidden -> [batch size, dec hid dim] #encoder_outputs -> [src sent len, batch size, enc hid dim * 2] batch_size = encoder_outputs.shape[1] src_len = encoder_outputs.shape[0] hidden = hidden.unsqueeze(1).repeat(1, src_len, 1) encoder_outputs = encoder_outputs.permute(1, 0, 2) #hidden -> [batch size, src sent len, dec hid dim] #encoder_outputs -> [batch size, src sent len, enc hid dim * 2] association = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim=2))) #association -> [batch size, src sent len, dec hid dim] association = association.permute(0, 2, 1) #association -> [batch size, dec hid dim, src sent len] #vec -> [dec hid dim] vec = self.vec.repeat(batch_size, 1).unsqueeze(1) #vec -> [batch size, 1, dec hid dim] attention = torch.bmm(vec, association).squeeze(1) #attention-> [batch size, src len] return F.softmax(attention, dim=1) class Decoder(nn.Module): class Seq2Seq(nn.Module): def __init__(self, encoder, decoder, device): super().__init__() self.encoder = encoder self.decoder = decoder self.device = device def forward(self, src, trg, teacher_forcing_ratio=0.5): #src->[sent len, batch size] #trg->[sent len, batch size] batch_size = trg.shape[1] max_len = trg.shape[0] target_voc_size = self.decoder.output_dim outputs = torch.zeros(max_len, batch_size, target_voc_size).to(self.device) context = self.encoder(src) hidden = context input = trg[0,:] for t in range(1, max_len): output, hidden = self.decoder(input, hidden, context) 
outputs[t] = output teacher_force = random.random() < teacher_forcing_ratio top1 = output.max(1)[1] input = (trg[t] if teacher_force else top1) return outputs # + INPUT_DIM = len(Source.vocab) OUTPUT_DIM = len(Target.vocab) ENC_EMB_DIM = 256 DEC_EMB_DIM = 256 HID_DIM = 512 ENC_DROPOUT = 0.5 DEC_DROPOUT = 0.5 enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, HID_DIM, ENC_DROPOUT) dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, DEC_DROPOUT) model = Seq2Seq(enc, dec, device).to(device) # - model optimizer = optim.Adam(model.parameters()) pad_idx = Target.vocab.stoi['<pad>'] criterion = nn.CrossEntropyLoss(ignore_index=pad_idx) def train(model, iterator, optimizer, criterion, clip): model.train() epoch_loss = 0 for i, batch in enumerate(iterator): source = batch.src target = batch.trg#[sent len, batch size] optimizer.zero_grad() output = model(source, target)#[sent len, batch size, output dim] loss = criterion(output[1:].view(-1, output.shape[2]), target[1:].view(-1)) #trg->[(sent len - 1) * batch size] #output->[(sent len - 1) * batch size, output dim] loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() epoch_loss += loss.item() return epoch_loss / len(iterator) def evaluate(model, iterator, criterion): model.eval() epoch_loss = 0 with torch.no_grad(): for i, batch in enumerate(iterator): source = batch.src target = batch.trg output = model(source, target, 0) loss = criterion(output[1:].view(-1, output.shape[2]), target[1:].view(-1)) epoch_loss += loss.item() return epoch_loss / len(iterator) # + DIR = 'models' MODEL_DIR = os.path.join(DIR, 'seq2seq_model.pt') N_EPOCHS = 10 CLIP = 10 best_loss = float('inf') if not os.path.isdir(f'{DIR}'): os.makedirs(f'{DIR}') for epoch in range(N_EPOCHS): train_loss = train(model, train_iterator, optimizer, criterion, CLIP) valid_loss = evaluate(model, valid_iterator, criterion) if valid_loss < best_loss: best_loss = valid_loss torch.save(model.state_dict(), MODEL_DIR) print(f'| Epoch: {epoch+1:03} | Train
Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f} | Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f} |') # + model.load_state_dict(torch.load(MODEL_DIR)) test_loss = evaluate(model, test_iterator, criterion) print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |') # -
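The `Attention.forward` method above reduces the `association` tensor to one score per source position with a batched matrix product (`torch.bmm`), then softmaxes over the source length. The same shape walk-through in plain numpy, with random arrays standing in for the learned parameters:

```python
import numpy as np

batch, src_len, dec_hid = 2, 5, 8
rng = np.random.default_rng(1)

# association after permute(0, 2, 1): [batch, dec hid dim, src sent len]
association = rng.standard_normal((batch, dec_hid, src_len))
# self.vec repeated per batch element and unsqueezed: [batch, 1, dec hid dim]
vec = rng.standard_normal((batch, 1, dec_hid))

scores = (vec @ association).squeeze(1)        # torch.bmm(...).squeeze(1) -> [batch, src_len]
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)  # F.softmax(attention, dim=1)

assert weights.shape == (batch, src_len)
assert np.allclose(weights.sum(axis=1), 1.0)   # one distribution per batch element
```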
section6/s6v2-Modifying Seq2Seq - Attention.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Functions for Model Testing # This tutorial covers different methods of **comparing data to given (fixed) QIP models**. This is distinct from model-based *tomography*, which finds the best-fitting model for a data set within a space of models set by a `Model` object's parameterization. You might use this as a tool alongside or separate from GST. Perhaps you suspect that a given noisy QIP model is compatible with your data - model *testing* is the way to find out. Because there is no optimization involved, model testing requires much less time than GST does, and doesn't place any requirements on which circuits are used in performing the test (though some circuits will give a more precise result). # # ## Setup # First, after some usual imports, we'll create some test data based on a depolarized and rotated version of a standard 1-qubit model consisting of $I$ (the identity), $X(\pi/2)$ and $Y(\pi/2)$ gates. import pygsti import numpy as np import scipy from scipy import stats from pygsti.modelpacks import smq1Q_XYI datagen_model = smq1Q_XYI.target_model().depolarize(op_noise=0.05, spam_noise=0.1).rotate((0.05,0,0.03)) exp_list = pygsti.circuits.create_lsgst_circuits( smq1Q_XYI.target_model(), smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(), [1,2,4,8,16,32,64]) ds = pygsti.data.simulate_data(datagen_model, exp_list, num_samples=1000, sample_error='binomial', seed=100) # ## Step 1: Construct a test model # After we have some data, the first step is creating a model or models that we want to test. This just means creating a `Model` object containing the operations (including SPAM) found in the data set. 
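The underlying idea of model testing — checking whether observed counts are plausible under a fixed model's outcome probabilities — can be illustrated without pyGSTi at all. This is a generic Pearson chi-squared sketch on made-up counts, not pyGSTi's actual test statistic (pyGSTi uses log-likelihood-based statistics):

```python
import numpy as np
from scipy import stats

model_probs = np.array([0.9, 0.1])   # hypothetical fixed model: P(0), P(1) for one circuit
observed = np.array([880, 120])      # made-up counts from 1000 shots
expected = model_probs * observed.sum()

# Pearson chi-squared statistic; a small p-value means the data is unlikely under the model
chi2 = ((observed - expected) ** 2 / expected).sum()
p_value = stats.chi2.sf(chi2, df=len(observed) - 1)
```

Here the deviation is statistically significant at the 5% level, so this toy "model" would be rejected; no optimization over model parameters is ever performed, which is why testing is so much cheaper than GST.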
We'll create several models that are meant to look like guesses (some including more types of noise) of the true underlying model. target_model = smq1Q_XYI.target_model() test_model1 = target_model.copy() test_model2 = target_model.depolarize(op_noise=0.07, spam_noise=0.07) test_model3 = target_model.depolarize(op_noise=0.07, spam_noise=0.07).rotate( (0.02,0.02,0.02) ) # ## Step 2: Test it! # There are three different ways to test a model. Note that in each case the default behavior (and the only behavior demonstrated here) is to **never gauge-optimize the test `Model`**. (Whenever gauge-optimized versions of an `Estimate` are useful for comparisons with other estimates, *copies* of the test `Model` are used *without* actually performing any modification of the original `Model`.) # # ### Method 1: `run_model_test` # First, you can do it "from scratch" by calling `run_model_test`, which has a similar signature to `run_long_sequence_gst` and follows its pattern of returning a `Results` object. The "estimateLabel" advanced option, which names the `Estimate` within the returned `Results` object, can be particularly useful. 
# + # creates a Results object with a "default" estimate results = pygsti.run_model_test(test_model1, ds, target_model, smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(), [1,2,4,8,16,32,64]) # creates a Results object with a "default2" estimate results2 = pygsti.run_model_test(test_model2, ds, target_model, smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(), [1,2,4,8,16,32,64], advanced_options={'estimate_label': 'default2'}) # creates a Results object with a "default3" estimate results3 = pygsti.run_model_test(test_model3, ds, target_model, smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(), [1,2,4,8,16,32,64], advanced_options={'estimate_label': 'default3'}) # - # Like any other set of `Results` objects which share the same `DataSet` and operation sequences, we can collect all of these estimates into a single `Results` object and easily make a report containing all three. # + results.add_estimates(results2) results.add_estimates(results3) pygsti.report.construct_standard_report( results, title="Model Test Example Report", verbosity=1 ).write_html("../tutorial_files/modeltest_report", auto_open=True, verbosity=1) # - # ### Method 2: `add_model_test` # Alternatively, you can add a model-to-test to an existing `Results` object. This is convenient when running GST via `run_long_sequence_gst` or `run_stdpractice_gst` has left you with a `Results` object and you also want to see how well a hand-picked model fares. Since the `Results` object already contains a `DataSet` and list of sequences, all you need to do is provide a `Model`. This is accomplished using the `add_model_test` method of a `Results` object. 
# + #Create some GST results using run_stdpractice_gst gst_results = pygsti.run_stdpractice_gst(ds, target_model, smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(), [1,2,4,8,16,32,64]) #Add a model to test gst_results.add_model_test(target_model, test_model3, estimate_key='MyModel3') #Create a report to see that we've added an estimate labeled "MyModel3" pygsti.report.construct_standard_report( gst_results, title="GST with Model Test Example Report 1", verbosity=1 ).write_html("../tutorial_files/gstwithtest_report1", auto_open=True, verbosity=1) # - # ### Method 3: `models_to_test` argument # Finally, yet another way to perform model testing alongside GST is by using the `models_to_test` argument of `run_stdpractice_gst`. This essentially combines calls to `run_stdpractice_gst` and `Results.add_model_test` (demonstrated above) with the added control of being able to specify the ordering of the estimates via the `modes` argument. Two important remarks are in order: # # 1. You *must* specify the names (keys of the `models_to_test` argument) of your test models in the comma-delimited string that is the `modes` argument. Just giving a dictionary of `Model`s as `models_to_test` will not automatically test those models in the returned `Results` object. # # 2. You don't actually need to run any GST modes, and can use `run_stdpractice_gst` in this way to create, in one call, a single `Results` object containing multiple model tests, with estimate names that you specify. Thus `run_stdpractice_gst` can replace the multiple `run_model_test` calls (with "estimateLabel" advanced options) followed by collecting the estimates using `Results.add_estimates` demonstrated under "Method 1" above. 
# + gst_results = pygsti.run_stdpractice_gst(ds, target_model, smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(), [1,2,4,8,16,32,64], modes="full TP,Test2,Test3,Target", # You MUST name the test models here models_to_test={'Test2': test_model2, 'Test3': test_model3}) pygsti.report.construct_standard_report( gst_results, title="GST with Model Test Example Report 2", verbosity=1 ).write_html("../tutorial_files/gstwithtest_report2", auto_open=True, verbosity=1) # - # That's it! Now that you know more about model testing you may want to go back to the [overview of pyGSTi applications](../02-Using-Essential-Objects.ipynb).
jupyter_notebooks/Tutorials/algorithms/ModelTesting-functions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Create new MIRI MRS reference files for CDP8b ## # First import the things that we need from the pipeline code import os as os import numpy as np import pdb as pdb from astropy.modeling import models from asdf import AsdfFile from jwst import datamodels from jwst.assign_wcs import miri # Import the MIRI coordinates code from https://github.com/STScI-MIRI/miricoord and ensure that it is on the PYTHONPATH. Also ensure that the output data directory is set: # setenv MIRICOORD_DATA_DIR /YourLocalPathToData/ (this is where output will happen) data_dir=os.path.join(os.path.expandvars('$MIRICOORD_DATA_DIR'),'temp/') import miricoord.miricoord.mrs.mrs_tools as mrst mrst.set_toolversion('cdp8b') import miricoord.miricoord.mrs.mrs_pipetools as mrspt mrspt.set_toolversion('cdp8b') # Import the python scripts that do the heavy lifting for reference file creation: import miricoord.miricoord.mrs.makecrds.makecrds_mrs_cdp8b as makecrds # Make new CDP-8b reference file for all channels and test them. makecrds.create_cdp8b_all(data_dir)
miricoord/mrs/makecrds/makecrds_mrs_cdp8b.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="bAf0tVYBeCdh" colab_type="text" # ## Ethereum Stock Prices # + [markdown] id="lg251WVdeCdj" colab_type="text" # * Import data # * Show prices, market capital and volume # * Analysis # * Preprocessing data # * Complete the Index # * Find NaN and Fix it # * Closed Price Column # * Split data into training and test datasets # * Normalizing datasets # * Building the LSTM model # * Regressor # * Sequence # * Special Normalizations for Sequences # * Custom: window steps by change rate # * Testing the model # # + [markdown] id="F2_5SOMweCdj" colab_type="text" # ### Import Libraries # + id="k2Fr7ZF5eCdk" colab_type="code" colab={} import os import io import math import random import requests from tqdm import tqdm import numpy as np import pandas as pd import sklearn import matplotlib.dates as mdates import datetime as dt from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score import matplotlib.pyplot as plt # %matplotlib inline plt.style.use('seaborn-colorblind') # + [markdown] id="kaMo2ToyeCdo" colab_type="text" # ### Import Data and Analysis # + id="Y5cZIv3GeCdo" colab_type="code" colab={} def download_file(url, filename): r = requests.get(url, stream=True) total_size = int(r.headers.get('content-length', 0)); block_size = 1024 total_kb_size = math.ceil(total_size//block_size) wrote = 0 with open(filename, 'wb') as f: for data in tqdm(r.iter_content(block_size), total=total_kb_size , unit='KB', unit_scale=True): wrote = wrote + len(data) f.write(data) # + id="KFtgY5YgeCds" colab_type="code" colab={} datafile = "eth-eur.csv" #import from server if not os.path.exists(datafile): download_file("https://www.coingecko.com/price_charts/export/279/eur.csv", 
datafile) # + id="fn9Vevo2eCdv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="716be403-d33e-4856-cdd2-d8a70bbb2d2c" data = pd.read_csv(datafile) #print a random sample (upper bound must be len - 1: randint is inclusive) data.iloc[random.randint(0, data.shape[0] - 1)] # + id="Z39JecV4eCdy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 212} outputId="b8bc1373-8ca1-4902-9340-9af4870b102c" data.info() # + id="ocxDI6WEKB7_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 446} outputId="67a70478-f99d-4fb8-b06b-43fb0e5f7064" import seaborn as sns # Generate a mask for the upper triangle mask = np.triu(np.ones_like(data.corr(), dtype=bool)) # Set up the matplotlib figure f, ax = plt.subplots(figsize=(9, 7)) # Generate a custom diverging colormap cmap = sns.diverging_palette(220, 10, as_cmap=True) sns.heatmap(data.corr(), mask=mask, cmap=cmap, vmax=1, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5}) # + [markdown] id="v_zsVH9heCd1" colab_type="text" # Here we can see that every sample corresponds to a day, which is defined by the __day in a date format, the current price, the market capital and the total volume of transactions__ performed that day. # # At first glance, the __market_cap__ by itself is a good indicator of the price since it has the highest correlation. 
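The "highest correlation" claim can be reproduced mechanically with `DataFrame.corr()` on a tiny synthetic frame (toy numbers, not the real Ethereum data): a column built as a scaled copy of `market_cap` plus noise correlates strongly with it, while an unrelated column does not.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
market_cap = rng.normal(100.0, 10.0, n)
toy = pd.DataFrame({
    'price': market_cap * 0.01 + rng.normal(0.0, 0.05, n),  # tied to market_cap
    'market_cap': market_cap,
    'total_volume': rng.normal(50.0, 5.0, n),               # independent noise
})
# Correlation of every column with price, highest first
corr_with_price = toy.corr()['price'].sort_values(ascending=False)
```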
# + id="ow2tzdMyeCd2" colab_type="code" colab={} #customize index data.snapped_at[0].split()[0] data.snapped_at = data.snapped_at.apply(lambda x: x.split()[0]) # + id="7YTBJbqyeCd5" colab_type="code" colab={} data.set_index('snapped_at', inplace=True) data.index = pd.to_datetime(data.index) # + id="CUfZXSmweCd8" colab_type="code" colab={} features = ['price', 'market_cap', 'total_volume'] # + id="-x7GGoZQeCd_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="35e4580b-a85f-4fcc-a195-edd1260399e3" data[features].plot(subplots=True, layout=(1,3), figsize=(20,4)); # + [markdown] id="yvDGtymueCeF" colab_type="text" # --- # ### Preprocessing Data # # #### Complete the Index # + [markdown] id="bM3UmwSoeCeG" colab_type="text" # The list is not complete _(2015-08-09 is missing)_ so we have to fill in the blanks. # + id="3zK2l7bweCeG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="aa0415a6-724f-4c86-86f1-4b6009830271" #check '2015-08-09 00:00:00' in data.index # + id="OHkSLcoueCeJ" colab_type="code" colab={} #Generate all the possible days and use them to reindex start = data.index[data.index.argmin()] end = data.index[data.index.argmax()] index_complete = pd.date_range(start, end) data = data.reindex(index_complete) # + [markdown] id="TDBpMD1meCeM" colab_type="text" # Now, the index is complete but the missing samples must be filled in. 
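The reindex-then-fill pattern used here can be sketched on a toy series (dates invented). For an isolated missing day, averaging the two neighbours — what the notebook's fill loop does — coincides with linear interpolation:

```python
import pandas as pd

s = pd.Series([1.0, 3.0], index=pd.to_datetime(['2015-08-08', '2015-08-10']))
full_index = pd.date_range(s.index.min(), s.index.max())  # every calendar day
s_full = s.reindex(full_index)                 # 2015-08-09 shows up as NaN
filled = s_full.interpolate(method='linear')   # NaN -> mean of its two neighbours
```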
# # #### Find NaN and Fix it # + id="FJnoL7OjeCeN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="9b1d10d7-9388-4f4e-cdfe-62cf6960973c" #Fill the blanks with the mean of the previous day and the day after for idx in data.index: dayloc = data.index.get_loc(idx) day = data.loc[idx] if day.hasnans: #updating rg = slice(dayloc-1, dayloc+2) data.loc[idx] = data.iloc[rg].mean() print("Day <{}> updated".format(idx)) # + id="6YB9JItYeCeQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="6613f08b-8d1e-4afe-9339-530bb15af731" #Check data.loc['2015-08-09 00:00:00'] # + id="KM3Athx9eCeT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="ba2b2d1a-b43d-4b36-90fb-6eda56d28e02" #Checking if we have NaN in another place data[data.isnull().any(axis=1)].count() # + [markdown] id="W4FVcKB4eCeW" colab_type="text" # #### Closed Price Column # # Now we need to include a new feature which will define the closed price for every sample. The Ethereum market is always open, so we can forget about weekends and directly use the open price of the next sample. # # Afterwards the model will use this __*closed_price* as the target__ since it's the value we are trying to predict. # # The following script will help us with that. 
# + id="etrTS7NDeCeW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 197} outputId="ba4a32c6-869e-4b8f-f7e0-3b98f69e1b7d" new_column = 'closed_price' datab = data.copy() nc = list() for idx in data.index: dayloc = data.index.get_loc(idx) #we use the next day's price as the closed price if dayloc == len(data.index)-1: #last position will not have closed_price closed_price = np.nan else: closed_price = data.iloc[dayloc+1].price nc.append(closed_price) data[new_column] = nc data.tail(5) # + id="HZBAlgPueCea" colab_type="code" colab={} #Delete the last row because we still don't know its closed price data = data.drop(data.index[len(data)-1]) # + [markdown] id="_jcJL4A7hbPP" colab_type="text" # #### Final Dataset # + id="u9F0L-eYWlmL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 194} outputId="8e41fbfb-110d-4e03-e927-edbaebb2ccf1" # Keep the feature with the highest correlation features = ['market_cap'] data = data.drop(['price', 'total_volume'], axis=1) data.info() # + [markdown] id="YKRURq24eCed" colab_type="text" # ---- # ### Split Data into Training and Test Datasets # + id="MumnS3QjeCee" colab_type="code" colab={} # 80% for training, 10% for validation and 10% for testing idx1 = round(len(data)*0.8) idx2 = round(len(data)*0.9) data_train, data_val, data_test = data[:idx1].copy(), data[idx1:idx2].copy(), data[idx2:].copy() # + id="gJ9K0awkeCeg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="91a7086a-0639-4853-fe00-6cc8d8dbfe6c" print("Size data_train: {}".format(data_train.shape[0])) print("Size data_val: {}".format(data_val.shape[0])) print("Size data_test: {}".format(data_test.shape[0])) # + [markdown] id="1XEGNzrNeCej" colab_type="text" # ### Normalizing Datasets # # Take care here because we __don't know whether future values will fall within the training range__. For this reason we'll __fit the scaler using only the training data and NOT the testing data__. 
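Fitting the scaler only on training data, as stressed here, means later values can land outside the familiar range; a minimal sketch with toy arrays:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

train = np.array([[1.0], [2.0], [3.0]])
future = np.array([[4.0]])                # a value never seen during fitting

scaler = StandardScaler()
train_norm = scaler.fit_transform(train)  # statistics come from train ONLY
future_norm = scaler.transform(future)    # reuses those training statistics
```

The transformed training data has zero mean by construction, while the unseen value maps well outside that range — exactly the "out-of-scale" situation the notebook revisits before introducing window normalization.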
# # __Standardization__ is a well-known normalizer that uses the standard deviation, assuming the dataset follows a Gaussian distribution (it's especially robust for new values outside of the expected range). # # _*__Note:__ this method assumes the data follows a Gaussian distribution._ # + id="fKAK0n_jeCej" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="85aaad28-f1b4-40d8-b9ca-f2e682e9d499" #Scale the data scaler = StandardScaler() data_train_norm, data_val_norm, data_test_norm = data_train.copy(), data_val.copy(), data_test.copy() data_train_norm[data.columns] = scaler.fit_transform(data_train[data.columns]) data_val_norm[data.columns] = scaler.transform(data_val[data.columns]) data_test_norm[data.columns] = scaler.transform(data_test[data.columns]) data_val_norm.describe() # + [markdown] id="UtJTecDAeCem" colab_type="text" # --- # ## Building the Models # + [markdown] id="5v6DVHbFeCem" colab_type="text" # ### Check TensorFlow and GPU # + id="7cWdX-yUeCem" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="a33c3c60-6eb5-487d-de72-467d5f9d2ff8" from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. 
Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) # + [markdown] id="HsB6HiDteCep" colab_type="text" # ### Sequence Regressor 1-step # + id="xtnay2_ibuqR" colab_type="code" colab={} FEATURES_LEN = len(data.columns) - 1 # + id="8A2zmAqaeCeq" colab_type="code" colab={} from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers import Flatten X_train = data_train_norm[features].values.reshape((data_train_norm.shape[0], 1, FEATURES_LEN)) y_train = data_train_norm.closed_price.values X_val = data_val_norm[features].values.reshape((data_val_norm.shape[0], 1, FEATURES_LEN)) y_val = data_val_norm.closed_price.values # + id="kEJfUv1DeCet" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="fc6f4592-c72c-483e-a538-f11d31431a6e" print(X_train.shape) print(y_train.shape) print(X_val.shape) print(y_val.shape) # + id="uswK6050eCez" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="8b9f9dbc-8067-41f5-a1ff-628675f54b47" model = Sequential() model.add(LSTM(32, input_shape=(1, FEATURES_LEN) )) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0) # + id="l_CbpPvCeCe5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 317} outputId="eb4599cc-4977-4752-fd34-316ecbd16df9" print("Training) R^2 score: {:.3f}".format(r2_score(y_train, model.predict(X_train)))) print("Validation) R^2 score: {:.3f}".format(r2_score(y_val, model.predict(X_val)))) pred = model.predict(X_val) plt.plot(y_val, label='Actual') plt.plot(pred, label='Prediction') plt.legend() # + id="zLVLrHNseCe8" colab_type="code" colab={} #saving model_1step = model # + [markdown] id="ZGxB3VN8eCfG" colab_type="text" # ### Sequence Regressor 7-steps # + id="LvP_AsIYeCfH" colab_type="code" colab={"base_uri": 
"https://localhost:8080/", "height": 34} outputId="949ace85-c4fd-483f-e617-afe4723cd6f8" ''' Helper function to transform the dataset to shapes defined by 7 steps and 3 features ''' def prepare_sequence(data, sequence_size=7): sequence = [] buckets = data.shape[0]//sequence_size init_sample = data.shape[0] - buckets*sequence_size samples = 0 for i in range(init_sample, data.shape[0] - sequence_size + 1): sequence.append(data[i:i+sequence_size]) samples += 1 return np.concatenate(sequence).reshape((samples, sequence_size, data.shape[1])) prepare_sequence(data[features]).shape # + id="BfNE9OAMeCfK" colab_type="code" colab={} #getting (samples, steps, features) X_train = prepare_sequence(data_train_norm[features]) X_val = prepare_sequence(data_val_norm[features]) y_train = data_train_norm.iloc[-len(X_train):].closed_price.values y_val = data_val_norm.iloc[-len(X_val):].closed_price.values # + id="92Tz9oJgeCfM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="8a166098-1a83-466f-a0ce-84a425a26c8e" print(X_train.shape) print(y_train.shape) print(X_val.shape) print(y_val.shape) # + id="vjeL_cSpeCfO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b20e7907-4a36-44f7-b212-55237f7aa070" model = Sequential() model.add(LSTM(32, input_shape=(7, FEATURES_LEN) )) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0) # + id="rOMpZAaweCfS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 317} outputId="17ff00c3-7d2b-46b8-f5d1-543ed2aa0fa8" print("Training) R^2 score: {:.3f}".format(r2_score(y_train, model.predict(X_train)))) print("Validation) R^2 score: {:.3f}".format(r2_score(y_val, model.predict(X_val)))) pred = model.predict(X_val) plt.plot(y_val, label='Actual') plt.plot(pred, label='Prediction') plt.legend() # + id="5k1EVPZYeCfV" colab_type="code" colab={} #saving model_7steps = model 
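The `(samples, steps, features)` reshaping performed by `prepare_sequence` can be reduced to a simpler sliding-window sketch (this version takes every overlapping window from the start and skips the notebook's bucket alignment):

```python
import numpy as np

def sliding_windows(data, steps):
    """Stack every contiguous run of `steps` rows into a
    (samples, steps, features) array."""
    n_windows = data.shape[0] - steps + 1
    return np.stack([data[i:i + steps] for i in range(n_windows)])

data = np.arange(10, dtype=float).reshape(5, 2)  # 5 timesteps, 2 features
X = sliding_windows(data, steps=3)               # -> shape (3, 3, 2)
```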
# + [markdown] id="E5JbdzimeCfX" colab_type="text" # ### ⇩ SPECIAL NORMALIZATION FOR SEQUENCES ⇩ # # The neural network is not able to make good predictions for data it has not seen before. For that reason we can find days that are not well fitted. This problem is related to __'out-of-scale'__ data inputs. # # #### Custom: window steps by the rate of change #### # # Thinking of the batch size as a window of days that defines how the neural network learns, one idea is to normalize each window by its last sample. In this way we'll be able to keep almost all data on the same scale. # + id="bNIUNyvNeCfX" colab_type="code" colab={} def print_mean_std(data): mean = np.mean(data) std = np.std(data) print("mean:{:.3f} std:{:.3f}".format(mean, std)) # + id="J41g68e1eCfZ" colab_type="code" colab={} def window_normalization(data, window_size): y = np.empty_like(data, dtype='float64') normalizer = list() for i in range(0,len(data), window_size): j = min(i+window_size, len(data)) y[i:j] = data[i:j]/np.abs(data[j-1]) normalizer.append(np.abs(data[j-1])) #print_mean_std(y[i:j]) return y, normalizer def window_denormalization(norm_data, normalizer, window_size): y = np.empty_like(norm_data, dtype='float64') idx = 0 for i in range(0,len(norm_data), window_size): j = min(i+window_size, len(norm_data)) y[i:j] = norm_data[i:j]*normalizer[idx] idx += 1 return y # + id="S4aooL_veCfc" colab_type="code" colab={} #testing the function a = np.array([[1, 1, 1], [2, 2, 2], [2, 2, 2], [8, 8, 8]]) expected_result = np.array([[0.5, 0.5, 0.5], [1, 1, 1], [0.25, 0.25, 0.25], [1, 1, 1]]) norm_a, normalizer = window_normalization(a, 2) assert ( np.array_equal(norm_a, expected_result) ) assert ( np.array_equal(a, window_denormalization(norm_a, normalizer, 2)) ) # + id="bV4WJDrIeCfh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="b565b0ab-e5a0-4722-ce53-cab5ff535ec6" #Showing the last sample data.index[-1].strftime("%d-%m-%Y") # + id="8_odWXPDeCfk" 
colab_type="code" colab={} window_size=32 X_train = data_train[features].values y_train = data_train.closed_price.values X_train_norm, _ = window_normalization(X_train, window_size) y_train_norm, y_normalizer = window_normalization(y_train, window_size) #getting (samples, steps, features) X_train_norm = prepare_sequence(X_train_norm) y_train_norm = y_train_norm[-len(X_train_norm):] # + id="DiRgbjOdeCfo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e4486c1b-fe81-4ad1-f366-33d199ff9b84" model = Sequential() model.add(LSTM(32, input_shape=(7, FEATURES_LEN) )) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(X_train_norm, y_train_norm, epochs=50, batch_size=window_size, verbose=0) # + id="qzJ8fB-JeCfq" colab_type="code" colab={} X_val = data_val[features].values y_val = data_val.closed_price.values X_val_norm, _ = window_normalization(X_val, window_size) y_val_norm, y_scaler = window_normalization(y_val, window_size) #getting (samples, steps, features) X_val_norm = prepare_sequence(X_val_norm) y_val_norm = y_val_norm[-len(X_val_norm):] # + id="jdjSC3TYeCf0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 317} outputId="a1463612-90a4-492a-db29-eda6f4fa525d" print("Training) R^2 score: {:.3f}".format(r2_score(y_train_norm, model.predict(X_train_norm)))) print("Validation) R^2 score: {:.3f}".format(r2_score(y_val_norm, model.predict(X_val_norm)))) pred = model.predict(X_val_norm) plt.plot(y_val_norm, label='Actual') plt.plot(pred, label='Prediction') plt.legend() # + id="Vbd661rTeCf3" colab_type="code" colab={} #saving model_win = model # + [markdown] id="g6I_S-LpeCf5" colab_type="text" # --- # ## Testing the Best Model # # Seeing the last results our best chance of accurate predictions (__at a glance__) is to use: # # * LSTM sequence by 7 steps # * Data Standardization # + id="AWAMgqSSeCf5" colab_type="code" colab={} X_test = 
prepare_sequence(data_test_norm[features]) y_test = data_test_norm.iloc[-len(X_test):].closed_price.values pred = model_7steps.predict(X_test) # Correction due to sequence of 7 values data_test_norm = data_test_norm[8:] # + id="WmBkhMgijyK6" colab_type="code" colab={} X_test = data_test_norm[features].values.reshape((data_test_norm.shape[0], 1, FEATURES_LEN)) y_test = data_test_norm.closed_price.values pred = model_1step.predict(X_test) # + id="DiN6oi4CfAgD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 578} outputId="c124a933-89e0-4ed8-dbc9-0b699831f565" #test_prices = scaler.inverse_transform(data_test_norm[8:]) test_prices = scaler.inverse_transform(data_test_norm) pred_prices = test_prices.copy() pred_prices[:, FEATURES_LEN] = pred.reshape(-1) pred_prices = scaler.inverse_transform(pred_prices) plt.figure(figsize=(20,10)) plt.plot(data_test_norm.index, test_prices[:, FEATURES_LEN], label='Actual Price') plt.plot(data_test_norm.index, pred_prices[:, FEATURES_LEN], label='Prediction') plt.ylabel('Price (€)') plt.xlabel('Date') plt.legend() plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=5)) plt.gcf().autofmt_xdate() # + id="1OYpsraNlCDj" colab_type="code" colab={}
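The plotting cell above leans on `inverse_transform` undoing the earlier `StandardScaler` exactly; a quick round-trip check on invented prices:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

prices = np.array([[100.0], [110.0], [125.0]])  # toy values, not real data
scaler = StandardScaler()
norm = scaler.fit_transform(prices)             # forward: (x - mean) / std
restored = scaler.inverse_transform(norm)       # backward: x * std + mean
round_trip_ok = np.allclose(restored, prices)
```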
stock/Ethereum_Stock.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/AZICO/AZICO.github.io/blob/master/Azamat_Jalilov_M4_LS_DS_214_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="fDdEY-Nub9tq" # Lambda School Data Science # # *Unit 2, Sprint 1, Module 4* # # --- # + [markdown] id="7IXUfiQ2UKj6" # # Logistic Regression # # # ## Assignment 🌯 # # You'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'? # # > We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions. # # - [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later. # - [ ] Begin with baselines for classification. # - [ ] Use scikit-learn for logistic regression. # - [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.) # - [ ] Get your model's test accuracy. (One time, at the end.) # - [ ] Commit your notebook to your fork of the GitHub repo. # # # ## Stretch Goals # # - [ ] Add your own stretch goal(s) ! # - [ ] Make exploratory visualizations. # - [ ] Do one-hot encoding. # - [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html). # - [ ] Get and plot your coefficients. # - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). 
# + id="o9eSnDYhUGD7" # %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/' # !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' # + id="JGzCxi_lb9tv" # Load data downloaded from https://srcole.github.io/100burritos/ import pandas as pd df = pd.read_csv(DATA_PATH+'burritos/burritos.csv') # + id="ws6WJrTub9tx" # Derive binary classification target: # We define a 'Great' burrito as having an # overall rating of 4 or higher, on a 5 point scale. # Drop unrated burritos. df = df.dropna(subset=['overall']) df['Great'] = df['overall'] >= 4 # + id="U_zuSTVHb9t0" # Clean/combine the Burrito categories df['Burrito'] = df['Burrito'].str.lower() california = df['Burrito'].str.contains('california') asada = df['Burrito'].str.contains('asada') surf = df['Burrito'].str.contains('surf') carnitas = df['Burrito'].str.contains('carnitas') df.loc[california, 'Burrito'] = 'California' df.loc[asada, 'Burrito'] = 'Asada' df.loc[surf, 'Burrito'] = 'Surf & Turf' df.loc[carnitas, 'Burrito'] = 'Carnitas' df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other' # + id="dOEPOUgHb9t5" # Drop some high cardinality categoricals df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood']) # + id="val3mHeVb9t8" # Drop some columns to prevent "leakage" df = df.drop(columns=['Rec', 'overall']) # + id="0HoIsrdKb9t-" outputId="3f68fdb3-180c-4f46-d9e4-bc985231bdce" colab={"base_uri": "https://localhost:8080/", "height": 241} # Checking the dataset df.sample(5) # + id="TXb2DzbMci35" outputId="a69077e1-a5ef-4a0d-b6c2-1850f03c52ae" colab={"base_uri": "https://localhost:8080/"} # Checking for missing values df.isnull().sum() # + id="FoXfZmQNeC7p" outputId="a08c1d4d-c3f5-415c-9798-ef4c800506f4" colab={"base_uri": "https://localhost:8080/"} # Checking the shape of the df df.shape # + id="GJfHaVXueiJS" # 
Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later. # Begin with baselines for classification. # Use scikit-learn for logistic regression. # Get your model's validation accuracy. (Multiple times if you try multiple iterations.) # Get your model's test accuracy. (One time, at the end.) # Commit your notebook to your fork of the GitHub repo. # + id="iVKGXrpLhWI-" # Transforming date format df['Date'] = pd.to_datetime(df['Date']) # + id="JerxRGxvkXZv" outputId="b66ec428-0752-4e65-a529-b2b7cfd02bf2" colab={"base_uri": "https://localhost:8080/"} # Importing sklearn library for train/test split from sklearn.model_selection import train_test_split train = df[df['Date'] <= '2016-12-31'] val = df[(df['Date'] >= '2017-01-01') & (df['Date'] <= '2017-12-31')] test = df[df['Date'] >= '2018-01-01'] train.shape, val.shape, test.shape # + id="hfUKmnvS4E9O" outputId="47ad997a-7d9b-4560-cf00-086e97619442" colab={"base_uri": "https://localhost:8080/"} # Guessing the majority class target = 'Great' y_train = train[target] y_train.value_counts(normalize=True) # + id="begawofH5dFO" # Baseline majority_class = y_train.mode()[0] y_pred = [majority_class] * len(y_train) # + id="FenrZNYj5tiS" outputId="9004532a-b79f-44b7-9dce-054aecf8b25c" colab={"base_uri": "https://localhost:8080/"} # Calculating accuracy score using sklearn from sklearn.metrics import accuracy_score accuracy_score(y_train, y_pred) # + id="6ujkRroN6u10" outputId="a217f67f-36af-46c2-fb6e-b0b619637d33" colab={"base_uri": "https://localhost:8080/"} # Guessing based on majority class y_val = val[target] y_pred = [majority_class] * len(y_val) accuracy_score(y_val, y_pred) # + id="IDIrT6tl7G8L" outputId="ebca76e4-8bcf-4ecc-a9bc-61bc83323fbe" colab={"base_uri": "https://localhost:8080/"} # 1. Importing the estimator class from sklearn.linear_model import LinearRegression # 2. Instantiating this class linear_reg = LinearRegression() # 3. 
Arranging X feature matrices features = ['Cost', 'Hunger', 'Meat', 'Salsa', 'Synergy', 'Wrap'] X_train = train[features] X_val = val[features] # Impute missing values before fitting the model from sklearn.impute import SimpleImputer imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train) X_val_imputed = imputer.transform(X_val) # 4. Fitting the model linear_reg.fit(X_train_imputed, y_train) # 5. Applying the model linear_reg.predict(X_val_imputed) # + id="OzPqhWjN-Dxq" outputId="6d3632d6-ebda-42bc-93b6-89f2b66de691" colab={"base_uri": "https://localhost:8080/"} # Logistic Regression from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver='lbfgs') log_reg.fit(X_train_imputed, y_train) print('Validation Accuracy', log_reg.score(X_val_imputed, y_val)) # + id="U7RCGsBN-Qed" outputId="3434ac29-82a5-4cb6-f079-e7a683950806" colab={"base_uri": "https://localhost:8080/", "height": 326} train.head()
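The assignment above also calls for a single, final test-set accuracy. Chaining the imputer and the classifier in a scikit-learn `Pipeline` keeps that last step leak-free, since the imputation statistics are fit only on whatever is passed to `.fit()`. A minimal sketch on synthetic data (the arrays below are stand-ins, not the burrito features):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0)            # synthetic binary target
X[rng.random(X.shape) < 0.1] = np.nan  # sprinkle in missing values

# The imputer's fit_transform happens only on the rows passed to .fit(),
# so no statistics leak from the holdout rows into training.
pipe = make_pipeline(SimpleImputer(), LogisticRegression(solver='lbfgs'))
pipe.fit(X[:150], y[:150])
acc = pipe.score(X[150:], y[150:])
print('holdout accuracy:', acc)
```

With the real data this would be `pipe.fit(X_train, y_train)` followed by one call to `pipe.score(X_test, y_test)` at the very end, with `X_test` built the same way as `X_val`.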
Azamat_Jalilov_M4_LS_DS_214_assignment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/Inha-AI/DACON-semiconductor-competition/blob/feature%2FYoonSungLee/submission_09.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="0vVeUdyssBKw" colab_type="code" colab={} import numpy as np import pandas as pd import matplotlib.pyplot as plt import keras from keras.models import Sequential from keras.layers import Dense, Activation, Dropout from keras.layers import BatchNormalization from sklearn.model_selection import train_test_split # + id="EwHXDEigsNB0" colab_type="code" outputId="2dd53a11-87c9-4621-d2c1-e8acb54041a0" colab={"base_uri": "https://localhost:8080/", "height": 131} from google.colab import drive drive.mount('/gdrive') # + id="-BNXAU5rsPJS" colab_type="code" colab={} df_train = pd.read_csv('/gdrive/My Drive/DACON-semiconductor-competition/dataset/train.csv') df_test = pd.read_csv('/gdrive/My Drive/DACON-semiconductor-competition/dataset/test.csv') # + id="TdYXclfgsVr6" colab_type="code" colab={} # Separate the independent variables from the target variables. train_X = df_train.iloc[:,4:] train_Y = df_train.iloc[:,0:4] test_X = df_test.iloc[:,1:] # + id="0s42dM3KndH-" colab_type="code" colab={} # Shuffle the train set and split it into new train and validation sets. 
train_X, val_X, train_Y, val_Y = train_test_split(train_X, train_Y, test_size=0.25, random_state=42) # + [markdown] id="8tT4w6XasaSQ" colab_type="text" # # Model 9 # + [markdown] id="Y25a2iOC4-2x" colab_type="text" # * 6 layers # * (239, 252, 265, 178, 91) units, he_normal, relu # * BatchNormalization # * Dropout(0.15) # * Adam(0.008) # * epochs 300 # * batch_size 1000 # <br><br> # * Restored the earlier layer stack # * Reintroduced Dropout with a lower rate to try to suppress overfitting # * Instead, greatly increased the number of epochs to increase the amount of weight updates # * Used sklearn's train_test_split to sample the train and validation sets randomly # * Models 1~8 drew the validation set from a fixed slice at the end of the train set, and at too small a ratio, so they merely looked overfit. As this experiment shows, once the validation set is sampled randomly at a reasonable ratio, the models so far do not actually have an overfitting problem. Dropout is therefore less necessary, and the next step is to refine the model. # + id="4PSSfoS5sfWK" colab_type="code" outputId="19579d78-ffff-4427-c21a-76162a38e189" colab={"base_uri": "https://localhost:8080/", "height": 1000} # Start building the model with Keras. 
model_09 = Sequential() model_09.add(Dense(units=239, input_dim=226, kernel_initializer='he_normal')) model_09.add(BatchNormalization()) model_09.add(Activation('relu')) model_09.add(Dropout(0.15)) model_09.add(Dense(units=252, kernel_initializer='he_normal')) model_09.add(BatchNormalization()) model_09.add(Activation('relu')) model_09.add(Dropout(0.15)) model_09.add(Dense(units=265, kernel_initializer='he_normal')) model_09.add(BatchNormalization()) model_09.add(Activation('relu')) model_09.add(Dropout(0.15)) model_09.add(Dense(units=178, kernel_initializer='he_normal')) model_09.add(BatchNormalization()) model_09.add(Activation('relu')) model_09.add(Dropout(0.15)) model_09.add(Dense(units=91, kernel_initializer='he_normal')) model_09.add(BatchNormalization()) model_09.add(Activation('relu')) model_09.add(Dense(units=4, activation='linear')) adam = keras.optimizers.Adam(0.008) model_09.compile(loss='mae', optimizer=adam, metrics=['accuracy']) hist = model_09.fit(train_X, train_Y, epochs=300, batch_size=1000, validation_data=(val_X, val_Y)) # + id="EzHV2L2swuGQ" colab_type="code" outputId="0699a692-8550-4b27-bbaf-dd4e603398f6" colab={"base_uri": "https://localhost:8080/", "height": 1000} # Model architecture from IPython.display import SVG from keras.utils.vis_utils import model_to_dot # %matplotlib inline SVG(model_to_dot(model_09, show_shapes=True).create(prog='dot', format='svg')) # + id="U_mWcOE6vyUH" colab_type="code" outputId="165d62a7-616f-4eff-fe29-43ee96825954" colab={"base_uri": "https://localhost:8080/", "height": 279} # Training curves # %matplotlib inline import matplotlib.pyplot as plt fig, loss_ax = plt.subplots() acc_ax = loss_ax.twinx() loss_ax.plot(hist.history['loss'], 'y', label='train loss') loss_ax.plot(hist.history['val_loss'], 'r', label='val loss') acc_ax.plot(hist.history['acc'], 'b', label='train acc') acc_ax.plot(hist.history['val_acc'], 'g', label='val acc') loss_ax.set_xlabel('epoch') loss_ax.set_ylabel('loss') acc_ax.set_ylabel('accuracy') 
loss_ax.legend(loc='upper left') acc_ax.legend(loc='lower left') plt.show() # + id="wBwHpc7VvyZv" colab_type="code" colab={} # Generate the predictions. pred_test_09 = model_09.predict(test_X) # + id="jDk6Qnu3vyfX" colab_type="code" colab={} # Create the submission file. sample_sub = pd.read_csv('/gdrive/My Drive/DACON-semiconductor-competition/dataset/sample_submission.csv', index_col=0) submission = sample_sub+pred_test_09 submission.to_csv('/gdrive/My Drive/DACON-semiconductor-competition/submission_09.csv')
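One caveat with the plotting cell above: the history keys `'acc'` and `'val_acc'` are version-dependent. Standalone Keras records `'acc'`, while `tf.keras` 2.x records `'accuracy'`, so the same cell raises a `KeyError` after a backend switch. A small helper (a sketch; `history_key` is not part of Keras) makes the cell robust to either spelling:

```python
def history_key(history_dict, base):
    """Return whichever spelling of a metric key ('acc' vs 'accuracy',
    'val_acc' vs 'val_accuracy') is present in a Keras history dict."""
    alias = {'acc': 'accuracy', 'val_acc': 'val_accuracy'}
    for key in (base, alias.get(base, base)):
        if key in history_dict:
            return key
    raise KeyError('neither spelling of %r found in %s' % (base, sorted(history_dict)))

# usage with a Keras History object:
# acc_ax.plot(hist.history[history_key(hist.history, 'acc')], 'b', label='train acc')
```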
submission_09.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Converting scanned kuzushiji sheets to bw images with a single character # + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" import pandas as pd import numpy as np import matplotlib.pyplot as plt import cv2 import re # %matplotlib inline # - # ## Visualising the training data # Don't forget to add dataset! # https://www.kaggle.com/c/kuzushiji-recognition df_train = pd.read_csv('../input/kuzushiji-recognition/train.csv') unicode_map = {codepoint: char for codepoint, char in pd.read_csv('../input/kuzushiji-recognition/unicode_translation.csv').values} # + def convert_labels_set(labels_str): labels = [] for one_label_str in re.findall(r'U\+\S+\s\S+\s\S+\s\S+\s\S+', labels_str): charcode, x, y, w, h = one_label_str.split(' ') labels.append([charcode, int(x), int(y), int(w), int(h)]) return labels def visualize_training_data(image_path, labels): fs = 8 # Read image img = cv2.imread(image_path, cv2.IMREAD_COLOR) for label in convert_labels_set(labels): _, x, y, w, h = label cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 3) return img # + n_sheets = 2 for _ in range(n_sheets): img_filename, labels = df_train.values[np.random.randint(len(df_train))] viz_img = visualize_training_data('../input/kuzushiji-recognition/train_images/{}.jpg'.format(img_filename), labels) plt.figure(figsize=(15, 15)) plt.title(img_filename) plt.imshow(viz_img) plt.show() # - # ## Statistics # + n_labels = 0 chars_counts = {} for labels_set in df_train.values[:, 1]: if type(labels_set) is not str: continue labels = convert_labels_set(labels_set) n_labels += len(labels) for label in labels: try: chars_counts[label[0]] += 1 except KeyError: chars_counts.update({label[0]: 1}) # - chars_counts_list = [chars_counts[k] for 
k in chars_counts] n_classes = len([k for k in chars_counts]) print('Number of labels: {}'.format(n_labels)) print('Number of classes: {}'.format(n_classes)) print('Min max number of items per class: {} {}'.format(np.min(chars_counts_list), np.max(chars_counts_list))) print('Median number of items per class: {}'.format(np.median(chars_counts_list))) print('Mean number of items per class: {}'.format(np.mean(chars_counts_list))) # ## Kuzushiji images mining def get_char_images_from_sheet(src_image_path, labels_str, blur_kernel_size=3, img_size=32): src_img = cv2.imread(src_image_path, cv2.IMREAD_COLOR) char_imgs = [] for label in convert_labels_set(labels_str): char_img = np.zeros((img_size, img_size), dtype=np.uint8) _, x, y, w, h = label label_img = src_img[y:y + h, x:x + w, :] label_img = cv2.GaussianBlur(label_img, (blur_kernel_size, blur_kernel_size), cv2.BORDER_DEFAULT) label_img = cv2.cvtColor(label_img, cv2.COLOR_RGB2GRAY) # _, label_img = cv2.threshold(label_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) label_img = 255 - label_img if w > h: label_img = cv2.resize(label_img, (img_size, int(img_size * h / w))) dy = int((img_size - int(img_size * h / w)) / 2) char_img[dy:dy + int(img_size * h / w), :] += label_img else: label_img = cv2.resize(label_img, (int(img_size * w / h), img_size)) dx = int((img_size - int(img_size * w / h)) / 2) char_img[:, dx:dx + int(img_size * w / h)] += label_img char_imgs.append(char_img) return char_imgs # + # img_filename, labels = df_train.values[np.random.randint(len(df_train))] char_imgs = get_char_images_from_sheet('../input/kuzushiji-recognition/train_images/{}.jpg'.format(img_filename), labels) for img in char_imgs: plt.figure(figsize=(2, 2)) plt.imshow(img, cmap='Greys') plt.show()
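The aspect-ratio handling inside `get_char_images_from_sheet` (scale the longer side to `img_size`, then centre the shorter side with zero padding) is easy to check in isolation. A dependency-free sketch of just the placement arithmetic (`letterbox_box` is a name invented here, not from the notebook):

```python
def letterbox_box(w, h, img_size=32):
    """Return (dx, dy, new_w, new_h): the top-left offset and scaled size
    that centre a w x h crop inside an img_size x img_size square while
    preserving its aspect ratio, mirroring the branches in
    get_char_images_from_sheet."""
    if w > h:
        new_w, new_h = img_size, int(img_size * h / w)
    else:
        new_w, new_h = int(img_size * w / h), img_size
    return (img_size - new_w) // 2, (img_size - new_h) // 2, new_w, new_h
```

For example, a wide 64x32 crop maps to a 32x16 strip centred vertically, and a square crop fills the whole output.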
kuzushiji_2_bw_images.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: canopy-inference # language: python # name: canopy_inference # --- # # Inference Data Mount # !sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 172.31.91.151:/ ./efs_inference_data # # For Docker Run / Sagemaker # !pip install rasterio geopandas shapely tensorflow==2.4.1 tensorflow-addons[tensorflow] # !pip install tensorflow tensorflow-addons[tensorflow] # !pip list # !pip uninstall h5py # !pip install h5py # !pip list # # Start Local / Sagemaker Imports import os import rasterio as rio import numpy as np from rasterio.windows import Window from glob import glob from shapely.geometry import Polygon from shapely.geometry import box import geopandas as gpd from rasterio.windows import get_data_window import rasterio as rio from inference_predict import * import boto3 import matplotlib.pyplot as plt #import gdal # # Windowing def get_windows(img_dim, patch_size=(240, 240), stride=(240, 240)): patch_size = np.array(patch_size) stride = np.array(stride) img_dim = np.array(img_dim) # to take into account edges, add additional blocks around right side edge and bottom edge of raster new_img_dim = [img_dim[0] + stride[0],img_dim[1] + stride[0]] max_dim = (new_img_dim//patch_size)*patch_size - patch_size ys = np.arange(0, img_dim[0], stride[0]) xs = np.arange(0, img_dim[1], stride[1]) tlc = np.array(np.meshgrid(ys, xs)).T.reshape(-1, 2) tlc = tlc[tlc[:, 0] <= max_dim[0]] tlc = tlc[tlc[:, 1] <= max_dim[1]] windows = [] for y,x in tlc.astype(int): windows.append(Window(x, y, patch_size[1], patch_size[0])) return windows def add_ndvi(data, dtype_1=rio.float32): nir = data[3].astype(dtype_1) red = data[0].astype(dtype_1) # Allow division by zero np.seterr(divide='ignore', invalid='ignore') # Calculate NDVI ndvi = ((nir - red) / (nir + red)).astype(dtype_1) # 
Rescaling for use in 16bit output ndvi = (ndvi + 1) * (2**15 - 1) # Add NDVI band to end of array rast = np.concatenate((data,[ndvi]),axis=0) rast = rast.astype(rio.uint16) return rast # # Download Model Files import h5py # + model_url = 's3://canopy-production-ml/inference/model_files/model-best.h5' weights_url = 's3://canopy-production-ml/inference/model_files/model_weights_best.h5' download_model(model_url,weights_url) # - model = load_model("model.h5","model_weights.h5") label_list = ["Industrial_agriculture","ISL","Mining","Roads","Shifting_cultivation"] # + def output_windows(granule_dir,patch_size=100, stride=100,SAVE=False,SAVE_INDIVIDUAL=False, bands=[2, 3, 4, 8, 11, 12], model=model, predict_thresh=.5, label_list=label_list, job_name="test_inference", output_filename="./inference_output/result.json"): granule_list = glob(f'{granule_dir}/*.tif') output_dict = {} granule_id_list = [] window_id_list = [] window_geom_list = [] data_list = [] label_master_list = [] gdf_list = [] timestamp = gen_timestamp() for j,granule_path in enumerate(granule_list): granule_id = granule_path.split("/")[-1].split("_")[0] with rio.open(granule_path) as src: windows = get_windows(src.shape, (patch_size, patch_size), (stride, stride)) for i, window in enumerate(windows): print(f"predicting window {i + 1} of {len(windows)} of granule {j + 1} of {len(granule_list)}",end='\r', flush=True) label_name_list = [] window_id = i+1 data = src.read(bands,window=window, masked=True) data = add_ndvi(data) shape = data.shape new_shape = (data.shape[0],patch_size,patch_size) if shape != new_shape: filled_array = np.full(new_shape, 0) filled_array[:shape[0],:shape[1],:shape[2]] = data data = filled_array window = Window(window.col_off,window.row_off,shape[2],shape[1]) #image pre-processing / inference prediction = model.predict(read_image_tf_out(data)) prediction = np.where(prediction > predict_thresh, 1, 0) prediction_i = np.where(prediction == 1)[1] for i in prediction_i: 
label_name_list.append(label_list[i]) label_master_list.append(label_name_list) #vectorizing raster bounds for visualization window_bounds = rio.windows.bounds(window, src.transform, height=patch_size, width=patch_size) geom = box(*window_bounds) geom_coords = list(geom.exterior.coords) # window_geom_list.append(geom) #create or append to dict.... if granule_id in output_dict: output_dict[granule_id].append({"window_id":window_id,"polygon_coords":geom_coords,"labels":label_name_list}) else: output_dict[granule_id] = [{"window_id":window_id,"polygon_coords":geom_coords,"labels":label_name_list}] save_to_s3(output_dict,output_filename,job_name,timestamp) # gdf = gpd.GeoDataFrame({"granule_id":granule_id_list,"window_id":window_id_list,"geometry":window_geom_list,"labels":label_master_list}) # gdf["labels"] = gdf["labels"].astype(str) # gdf_list.append(gdf) return output_dict # if SAVE: # if SAVE_INDIVIDUAL: # meta = raster.meta.copy() # # Get the window specific transform - IF we want to save windows independantly # # trans = raster.window_transform(window) # meta.update({ # # 'transform': trans, # 'dtype': src.dtype # }) # with rasterio.open(f"{out_path}/some_chip_{j}.tif", 'w', **meta) as dest: # dest.write(data) # else: # meta = raster.meta.copy() # with rasterio.open(f"{out_path}/some_chip_{j}.tif", 'w+', **meta) as dest: # dest.write(data, window=window) # + # granule_dir = "./efs_inference_data/" granule_dir = "/Volumes/Lacie/zhenyadata/Project_Canopy_Data/PC_Data/Sentinel_Data/Chips/misha_polygons_cloudfreemerge/yes/ISL/100/91" output_dict = output_windows(granule_dir) # - np.array(data).max() plt.figure() plt.imshow(data[0,:,:,0]) # + jupyter={"outputs_hidden": true} output_dict # + data = output_dict count = {} label_match_results = [] granule_count = len(data.keys()) granule_list = data.keys() count["granule_count"] = granule_count for k1 in list(data.keys()): for i in range(len(data[k1])): if len(data[k1][i]['labels']) == 0: if "null_chips" not in 
count.keys(): count["null_chips"] = 1 else: count["null_chips"] += 1 for label in data[k1][i]['labels']: if label not in count.keys(): count[label] = 1 else: count[label] += 1 # - count # + jupyter={"outputs_hidden": true} for i in range (len(output_dict['101'])): print(output_dict['101'][i]['labels']) # - new_gdf.shape gdf.plot() new_gdf.to_file("./inference_output/test.geojson", driver='GeoJSON') gdf.to_file("./inference_output/test.geojson", driver='GeoJSON') # # Read Output Files def process_output_files(json_path=json_path,download=True, filepath = "predict_test-2021-05-03-23-37-03.json", label_match="Shifting_cultivation"): s3 = boto3.resource('s3') #Download Model, Weights if download: bucket = json_path.split("/")[2] model_key = "/".join(json_path.split("/")[3:]) filename = json_path.split("/")[-1] s3.Bucket(bucket).download_file(model_key, filename ) filepath = filename with open(filepath) as jsonfile: data = json.load(jsonfile) count = {} label_match_results = [] granule_count = len(data.keys()) granule_list = data.keys() count["granule_count"] = granule_count for k1 in list(data.keys()): for i in range(len(data[k1])): if len(data[k1][i]['labels']) == 0: if "null_chips" not in count.keys(): count["null_chips"] = 1 else: count["null_chips"] += 1 for label in data[k1][i]['labels']: if label == label_match: label_match_results.append([k1,data[k1][i]]) if label not in count.keys(): count[label] = 1 else: count[label] += 1 return count, label_match_results, granule_list # + json_path = "s3://canopy-production-ml/inference/output/predict_test-2021-05-03-23-37-03.json" count, match_results, granule_list = download_model_files(json_path) # - count # # Output Vectorized Predicted Granules def s3_dir_match(s3_dir_url,granule_list): objs = [] bucket = s3_dir_url.split("/")[2] key = "/".join(s3_dir_url.split("/")[3:5]) s3 = boto3.resource('s3') my_bucket = s3.Bucket(bucket) window_geom_list = [] granule_id_list = [] for obj in my_bucket.objects.filter(Prefix=key): 
granule_id = obj.key.split("/")[-1].split("_")[0] if granule_id in granule_list: obj_url = "s3://" + bucket + "/" + obj.key with rio.open(obj_url) as src: bounds = src.bounds geom = box(*bounds) window_geom_list.append(geom) granule_id_list.append(granule_id) gdf = gpd.GeoDataFrame({"geometry":window_geom_list,"granule_id":granule_id_list}) return gdf gdf = s3_dir_match("s3://canopy-production-ml/full_congo_basin/02.17.21_CB_GEE_Pull/",granule_list) gdf gdf.to_file("granules.json", driver="GeoJSON", index=True) # # Create and Export GDF of Original Labels Data # + FILE_NAME = "/Users/purgatorid/Downloads/polygons_021521.csv" df = pd.read_csv( FILE_NAME) gdf = gpd.GeoDataFrame( df, crs={'init': 'epsg:4326'}) # - polygons = [] for polygon in df["polygon"]: polygons.append(Polygon(json.loads(polygon)["coordinates"][0])) gdf["geometry"] = polygons gdf.loc[90] gdf.to_file("output.json", driver="GeoJSON", index=True) # # Load and Reproject One Granulate Containing ISL # + def convert_raster(input_file, dest_dir, epsg_format='EPSG:3257', windows=False): """Converts the rasters in the src_dir into a different EPSG format, keeping the same folder structure and saving them in the dest_dir.""" print(input_file) filename = "test.tif" # print(filename) # If the respective grouping folders are not available output_filepath = dest_dir + filename print(output_filepath) # Finally, we convert converted = gdal.Warp(output_filepath, [input_file],format='GTiff', dstSRS=epsg_format, resampleAlg='near') converted = None print('Finished') # + granule = "/Users/purgatorid/Downloads/1241_full_congo_export_v12_all_bands_Feb_11_12_44_53_2021.tif" dest_dir = "/Users/purgatorid/Downloads/" convert_raster(granule,dest_dir) # - # # Visualize Results (Incomplete Code) # + jupyter={"outputs_hidden": true} def visualize_results(match_results,s3_url): for window in match_results: granule_id = window[0] # - t = {1,2,4} # # Running Without Windows Code - Direct Chip Predict # + def 
output_predictions(granule_dir,patch_size=100, stride=100,SAVE=False,SAVE_INDIVIDUAL=False, bands=[2, 3, 4, 8, 11, 12], model=model, predict_thresh=.5, label_list=label_list, job_name="test_inference", output_filename="./inference_output/result.json"): granule_list = glob(f'{granule_dir}/*.tif') output_dict = {} granule_id_list = [] window_id_list = [] window_geom_list = [] data_list = [] label_master_list = [] gdf_list = [] timestamp = gen_timestamp() for j,granule_path in enumerate(granule_list[0:1]): label_name_list = [] granule_id = granule_path.split("/")[-1].split("_")[0] with rio.open(granule_path) as src: data = src.read(bands,masked=True) data = add_ndvi(data) shape = data.shape # new_shape = (data.shape[0],patch_size,patch_size) # if shape != new_shape: # filled_array = np.full(new_shape, 0) # filled_array[:shape[0],:shape[1],:shape[2]] = data # data = filled_array # window = Window(window.col_off,window.row_off,shape[2],shape[1]) #image pre-processing / inference prediction = model.predict(read_image_tf_out(data)) print("original_prediction:",prediction) prediction = np.where(prediction > predict_thresh, 1, 0) print("sigmoid prediction gate:",prediction) prediction_i = np.where(prediction == 1)[1] print("index of matching labels:",prediction_i) for i in prediction_i: label_name_list.append(label_list[i]) label_master_list.append(label_name_list) #vectorizing raster bounds for visualization data_bounds = src.bounds geom = box(*data_bounds) geom_coords = list(geom.exterior.coords) # window_geom_list.append(geom) #create or append to dict.... 
if granule_id in output_dict: output_dict[granule_id].append({"polygon_coords":geom_coords,"labels":label_name_list}) else: output_dict[granule_id] = [{"polygon_coords":geom_coords,"labels":label_name_list}] save_to_s3(output_dict,output_filename,job_name,timestamp) # gdf = gpd.GeoDataFrame({"granule_id":granule_id_list,"window_id":window_id_list,"geometry":window_geom_list,"labels":label_master_list}) # gdf["labels"] = gdf["labels"].astype(str) # gdf_list.append(gdf) return output_dict # + # granule_dir = "./efs_inference_data/" granule_dir = "/Volumes/Lacie/zhenyadata/Project_Canopy_Data/PC_Data/Sentinel_Data/Chips/misha_polygons_cloudfreemerge/yes/ISL/100/91" output_dict = output_predictions(granule_dir) # - output_dict # + def output_predictions_chips(chip_dir, bands=[2, 3, 4, 8, 11, 12], model=model, predict_thresh=.5, label_list=label_list, job_name="test_inference", output_filename="./inference_output/result.json"): chip_list = glob(f'{chip_dir}/*.tif') output_dict = {} #granule_id_list = [] #window_id_list = [] chip_geom_list = [] data_list = [] label_master_list = [] gdf_list = [] timestamp = gen_timestamp() for j, chip_path in enumerate(chip_list): label_name_list = [] chip_id = chip_path.split("/")[-1].split('.')[0] with rio.open(chip_path) as src: data = src.read(bands, masked=True) data = add_ndvi(data) shape = data.shape # new_shape = (data.shape[0],patch_size,patch_size) # if shape != new_shape: # filled_array = np.full(new_shape, 0) # filled_array[:shape[0],:shape[1],:shape[2]] = data # data = filled_array # window = Window(window.col_off,window.row_off,shape[2],shape[1]) #image pre-processing / inference prediction = model.predict(read_image_tf_out(data)) print("original_prediction:",prediction) prediction = np.where(prediction > predict_thresh, 1, 0) print("sigmoid prediction gate:",prediction) prediction_i = np.where(prediction == 1)[1] print("index of matching labels:",prediction_i) for i in prediction_i: 
label_name_list.append(label_list[i]) label_master_list.append(label_name_list) #vectorizing raster bounds for visualization data_bounds = src.bounds geom = box(*data_bounds) geom_coords = list(geom.exterior.coords) # window_geom_list.append(geom) #create or append to dict.... if chip_id in output_dict: output_dict[chip_id].append({"polygon_coords":geom_coords,"labels":label_name_list}) else: output_dict[chip_id] = [{"polygon_coords":geom_coords,"labels":label_name_list}] save_to_s3(output_dict,output_filename,job_name,timestamp) # gdf = gpd.GeoDataFrame({"granule_id":granule_id_list,"window_id":window_id_list,"geometry":window_geom_list,"labels":label_master_list}) # gdf["labels"] = gdf["labels"].astype(str) # gdf_list.append(gdf) return output_dict
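The tiling in `get_windows` (pad the raster dimensions by one stride so edge pixels land in a final partial block) is the kind of arithmetic that is easy to get off by one. A dependency-free sketch of the same grid logic for the square patch/stride case, returning plain `(col_off, row_off, width, height)` tuples instead of rasterio `Window` objects:

```python
def window_grid(img_dim, patch=240, stride=240):
    """Top-left corners of patch x patch windows covering an img_dim
    (rows, cols) raster, stepping by stride and keeping the extra edge
    windows that extend past the right/bottom edges (rasterio pads the
    read for those)."""
    max_row = ((img_dim[0] + stride) // patch) * patch - patch
    max_col = ((img_dim[1] + stride) // patch) * patch - patch
    return [(x, y, patch, patch)
            for y in range(0, img_dim[0], stride) if y <= max_row
            for x in range(0, img_dim[1], stride) if x <= max_col]
```

A 500 x 700 raster with 240-pixel patches and stride yields a 3 x 3 grid whose last row and column hang over the raster edge by design.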
inference/_archive/inference_pipeline_David.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="Z3GZrM7xWgnX" # # Inspect Glomerulus Training Data # # Inspect and visualize data loading and pre-processing code. # + colab={"base_uri": "https://localhost:8080/", "height": 81} colab_type="code" executionInfo={"elapsed": 11556, "status": "ok", "timestamp": 1584546678502, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="6MBDY9agWgna" outputId="e5725d4a-bd9f-42be-c1a7-2c5f67cdb46d" import os import sys import itertools import math import logging import json import re import random import time import concurrent.futures import numpy as np import matplotlib import matplotlib.pyplot as plt import matplotlib.patches as patches import matplotlib.lines as lines from matplotlib.patches import Polygon import imgaug from imgaug import augmenters as iaa # Root directory of the project ROOT_DIR = os.getcwd() ROOT_DIR = os.path.dirname(ROOT_DIR) # go to the level above # Import Mask RCNN sys.path.append(ROOT_DIR) from mrcnn import utils from mrcnn import visualize from mrcnn.visualize import display_images from mrcnn import model as modellib from mrcnn.model import log sys.path.append(ROOT_DIR+'/Code') import glomerulus # %matplotlib inline # + colab={} colab_type="code" id="_BozyMWLWgnh" # Comment out to reload imported modules if they change # %load_ext autoreload # %autoreload 2 # + [markdown] colab_type="text" id="Ba4wUTHhWgnl" # ## Configurations # + colab={} colab_type="code" id="xHEmsJTNWgnn" # Dataset directory DATASET_DIR = os.path.join("/home/Fred", "Data/glomerulus") # Use configuation from glomerulus.py, but override # image resizing so we see the real sizes here class NoResizeConfig(glomerulus.GlomerulusConfig): IMAGE_RESIZE_MODE = "none" config = 
NoResizeConfig() # + [markdown] colab_type="text" id="IpZdauTWWgnr" # ## Notebook Preferences # + colab={} colab_type="code" id="cIiuSPAuWgns" def get_ax(rows=1, cols=1, size=16): """Return a Matplotlib Axes array to be used in all visualizations in the notebook. Provide a central point to control graph sizes. Adjust the size attribute to control how big to render images """ _, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows)) return ax # + [markdown] colab_type="text" id="-CoWOZx9Wgnw" # ## Dataset # # + colab={"base_uri": "https://localhost:8080/", "height": 87} colab_type="code" executionInfo={"elapsed": 7530, "status": "ok", "timestamp": 1584546696255, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="waWSPQ-GWgnx" outputId="64af75c3-56ea-433c-f1e3-ef601cf74fe0" # Load dataset dataset = glomerulus.GlomerulusDataset() # The subset is the name of the sub-directory, such as stage1_train, # stage1_test, ...etc. You can also use these special values: # train: loads stage1_train but excludes validation images # val: loads validation images from stage1_train. For a list # of validation images see glomerulus.py dataset.load_glomerulus(DATASET_DIR, subset="test") # Must call before using the dataset dataset.prepare() print("Image Count: {}".format(len(dataset.image_ids))) print("Class Count: {}".format(dataset.num_classes)) for i, info in enumerate(dataset.class_info): print("{:3}. 
{:50}".format(i, info['name'])) # + [markdown] colab_type="text" id="KfbG9IihWgn3" # ## Display Samples # + colab={"base_uri": "https://localhost:8080/", "height": 877} colab_type="code" executionInfo={"elapsed": 21094, "status": "ok", "timestamp": 1584546719317, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="cSNwXP8WWgn5" outputId="cbf68d29-6758-42be-ca35-3c8973cd23a1" # Load and display random samples image_ids = np.random.choice(dataset.image_ids, 4) for image_id in image_ids: image = dataset.load_image(image_id) mask, class_ids = dataset.load_mask(image_id) visualize.display_top_masks(image, mask, class_ids, dataset.class_names, limit=1) # + colab={} colab_type="code" id="3k0QL7RI-UMw" to_display = [] titles = [] to_display.append(image) titles.append("H x W={}x{}".format(image.shape[0], image.shape[1])) unique_class_ids = np.unique(class_ids) # + colab={"base_uri": "https://localhost:8080/", "height": 586} colab_type="code" executionInfo={"elapsed": 10295, "status": "ok", "timestamp": 1584546739541, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="jfo0_xSOWgn-" outputId="3b7807e0-d48d-4d8e-e549-4b7cbe6db808" # Example of loading a specific image by its source ID # source_id = "R39 RD VEGF_15.0xC4" source_id = "C363_PAS1_C5" # Map source ID to Dataset image_id # Notice the glomerulus prefix: it's the name given to the dataset in GlomerulusDataset image_id = dataset.image_from_source_map["glomerulus.{}".format(source_id)] # Load and display image, image_meta, class_ids, bbox, mask = modellib.load_image_gt( dataset, config, image_id, use_mini_mask=False) log("molded_image", image) log("mask", mask) visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names, show_bbox=False) # + [markdown] colab_type="text" id="L7SAgYMNWgoB" # ## Dataset Stats # # Loop through all images in the dataset and collect aggregate stats. 
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 111067, "status": "ok", "timestamp": 1584546868207, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="8HFEv-gcWgoD" outputId="4207a88a-decd-4748-f854-20c59df90c53" def image_stats(image_id): """Returns a dict of stats for one image.""" image = dataset.load_image(image_id) try: mask, _ = dataset.load_mask(image_id) except: print('failed on', image_id) bbox = utils.extract_bboxes(mask) # Sanity check assert mask.shape[:2] == image.shape[:2] # Return stats dict return { "id": image_id, "shape": list(image.shape), "bbox": [[b[2] - b[0], b[3] - b[1]] for b in bbox # Uncomment to exclude nuclei with 1 pixel width # or height (often on edges) # if b[2] - b[0] > 1 and b[3] - b[1] > 1 ], "color": np.mean(image, axis=(0, 1)), } # Loop through the dataset and compute stats over multiple threads # This might take a few minutes t_start = time.time() with concurrent.futures.ThreadPoolExecutor() as e: stats = list(e.map(image_stats, dataset.image_ids)) t_total = time.time() - t_start print("Total time: {:.1f} seconds".format(t_total)) # + [markdown] colab_type="text" id="DofIE3fNWgoL" # ### Image Size Stats # + colab={"base_uri": "https://localhost:8080/", "height": 351} colab_type="code" executionInfo={"elapsed": 1388, "status": "ok", "timestamp": 1584547030572, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="zNWSpj4VWgoO" outputId="327467b2-f7e7-49a7-ea14-f766ca1acf7b" # Image stats image_shape = np.array([s['shape'] for s in stats]) image_color = np.array([s['color'] for s in stats]) print("Image Count: ", image_shape.shape[0]) print("Height mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format( np.mean(image_shape[:, 0]), np.median(image_shape[:, 0]), np.min(image_shape[:, 0]), np.max(image_shape[:, 0]))) print("Width mean: {:.2f} median: {:.2f} min: 
{:.2f} max: {:.2f}".format( np.mean(image_shape[:, 1]), np.median(image_shape[:, 1]), np.min(image_shape[:, 1]), np.max(image_shape[:, 1]))) print("Color mean (RGB): {:.2f} {:.2f} {:.2f}".format(*np.mean(image_color, axis=0))) # Histograms fig, ax = plt.subplots(1, 3, figsize=(16, 4)) ax[0].set_title("Height") _ = ax[0].hist(image_shape[:, 0], bins=20) ax[1].set_title("Width") _ = ax[1].hist(image_shape[:, 1], bins=20) ax[2].set_title("Height & Width") _ = ax[2].hist2d(image_shape[:, 1], image_shape[:, 0], bins=10, cmap="Blues") # + [markdown] colab_type="text" id="J8Kgsw7YWgoU" # ### Glomeruli per Image Stats # + colab={"base_uri": "https://localhost:8080/", "height": 333} colab_type="code" executionInfo={"elapsed": 1158, "status": "ok", "timestamp": 1584547100440, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="mGcSMzVBWgoX" outputId="4cba25e7-75a8-4d3b-b7a9-c4195219fac2" # Segment by image area image_area_bins = [1058*1831, 1018*1920] print("Glomeruli/Image") fig, ax = plt.subplots(1, len(image_area_bins), figsize=(16, 4)) area_threshold = 0 for i, image_area in enumerate(image_area_bins): glomeruli_per_image = np.array([len(s['bbox']) for s in stats if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area]) area_threshold = image_area if len(glomeruli_per_image) == 0: print("Image area <= {:4}**2: None".format(np.sqrt(image_area))) continue print("Image area <= {:4.0f}**2: mean: {:.1f} median: {:.1f} min: {:.1f} max: {:.1f}".format( np.sqrt(image_area), glomeruli_per_image.mean(), np.median(glomeruli_per_image), glomeruli_per_image.min(), glomeruli_per_image.max())) ax[i].set_title("Image Area <= {:4}**2".format(np.sqrt(image_area))) _ = ax[i].hist(glomeruli_per_image, bins=10) # + [markdown] colab_type="text" id="JV8ruDULWgoe" # ### Glomeruli Size Stats # + colab={"base_uri": "https://localhost:8080/", "height": 473} colab_type="code" executionInfo={"elapsed": 1398, "status": "ok", "timestamp": 
1584547118910, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="wsqicKw4Wgog" outputId="8540dc16-cea3-45ed-fde7-1936beaf8c3d" # Glomeruli size stats fig, ax = plt.subplots(1, len(image_area_bins), figsize=(16, 4)) area_threshold = 0 for i, image_area in enumerate(image_area_bins): glomerulus_shape = np.array([ b for s in stats if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area for b in s['bbox']]) glomerulus_area = glomerulus_shape[:, 0] * glomerulus_shape[:, 1] area_threshold = image_area print("\nImage Area <= {:.0f}**2".format(np.sqrt(image_area))) print(" Total Glomeruli: ", glomerulus_shape.shape[0]) print(" glomerulus Height. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format( np.mean(glomerulus_shape[:, 0]), np.median(glomerulus_shape[:, 0]), np.min(glomerulus_shape[:, 0]), np.max(glomerulus_shape[:, 0]))) print(" glomerulus Width. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format( np.mean(glomerulus_shape[:, 1]), np.median(glomerulus_shape[:, 1]), np.min(glomerulus_shape[:, 1]), np.max(glomerulus_shape[:, 1]))) print(" glomerulus Area. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format( np.mean(glomerulus_area), np.median(glomerulus_area), np.min(glomerulus_area), np.max(glomerulus_area))) # Show 2D histogram _ = ax[i].hist2d(glomerulus_shape[:, 1], glomerulus_shape[:, 0], bins=20, cmap="Blues") # + colab={"base_uri": "https://localhost:8080/", "height": 337} colab_type="code" executionInfo={"elapsed": 1012, "status": "ok", "timestamp": 1584547162395, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="KD3NumOAWgol" outputId="6ea7f201-01ae-47f0-e40e-0a996c0e29d7" # Nuclei height/width ratio glomerulus_aspect_ratio = glomerulus_shape[:, 0] / glomerulus_shape[:, 1] print("glomerulus Aspect Ratio. 
mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format( np.mean(glomerulus_aspect_ratio), np.median(glomerulus_aspect_ratio), np.min(glomerulus_aspect_ratio), np.max(glomerulus_aspect_ratio))) plt.figure(figsize=(15, 5)) _ = plt.hist(glomerulus_aspect_ratio, bins=100, range=[0, 5]) # + [markdown] colab_type="text" id="4ouZ0_HZWgot" # ## Image Augmentation # # Test out different augmentation methods # + colab={} colab_type="code" id="Olq7P71DWgou" # List of augmentations # http://imgaug.readthedocs.io/en/latest/source/augmenters.html augmentation = iaa.Sometimes(0.9, [ iaa.Fliplr(0.5), iaa.Flipud(0.5), iaa.Multiply((0.8, 1.2)), iaa.GaussianBlur(sigma=(0.0, 5.0)) ]) # + colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "17FvrJ_XtdZltHjcn5uFrMu4X-v-wEkLW"} colab_type="code" executionInfo={"elapsed": 16082, "status": "ok", "timestamp": 1584547190215, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="hLjYsWvKWgoz" outputId="0eed8467-4340-45d9-cf3e-1873f33ccda2" # Load the image multiple times to show augmentations limit = 4 ax = get_ax(rows=2, cols=limit//2) for i in range(limit): image, image_meta, class_ids, bbox, mask = modellib.load_image_gt( dataset, config, image_id, use_mini_mask=False, augment=False, augmentation=augmentation) visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names, ax=ax[i//2, i % 2], show_mask=False, show_bbox=False) # + [markdown] colab_type="text" id="HAbVdImnWgo3" # ## Image Crops # # Microscopy images tend to be large, but glomeruli are small. So it's more efficient to train on random crops from large images. This is handled by `config.IMAGE_RESIZE_MODE = "crop"`. 
# # # + colab={} colab_type="code" id="oit_Q25dWgo5" class RandomCropConfig(glomerulus.GlomerulusConfig): IMAGE_RESIZE_MODE = "crop" IMAGE_MIN_DIM = 1000 IMAGE_MAX_DIM = 1000 crop_config = RandomCropConfig() # + colab={"base_uri": "https://localhost:8080/", "height": 1000, "output_embedded_package_id": "1Y574Y2OfW9zoAkEJcoWi_68j1Xzgqgpg"} colab_type="code" executionInfo={"elapsed": 24876, "status": "ok", "timestamp": 1584547616218, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="zlaGoqUYWgo9" outputId="f094d3da-d8ca-4af4-98d4-57de1fa8a3ae" # Load the image multiple times to show augmentations limit = 4 # image_id = np.random.choice(dataset.image_ids, 1)[0] source_id = "C363_PAS1_C5" image_id = dataset.image_from_source_map["glomerulus.{}".format(source_id)] ax = get_ax(rows=2, cols=limit//2) for i in range(limit): image, image_meta, class_ids, bbox, mask = modellib.load_image_gt( dataset, crop_config, image_id, use_mini_mask=False) visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names, ax=ax[i//2, i % 2], show_mask=False, show_bbox=False) # + [markdown] colab_type="text" id="HwGj2rVCWgpD" # ## Mini Masks # # Instance binary masks can get large when training with high resolution images. For example, if training with 1024x1024 image then the mask of a single instance requires 1MB of memory (Numpy uses bytes for boolean values). If an image has 100 instances then that's 100MB for the masks alone. # # To improve training speed, we optimize masks: # * We store mask pixels that are inside the object bounding box, rather than a mask of the full image. Most objects are small compared to the image size, so we save space by not storing a lot of zeros around the object. # * We resize the mask to a smaller size (e.g. 56x56). For objects that are larger than the selected size we lose a bit of accuracy. 
But most object annotations are not very accurate to begin with, so this loss is negligible for most practical purposes. The size of the mini_mask can be set in the config class. # # To visualize the effect of mask resizing, and to verify the code correctness, we visualize some examples. # + colab={"base_uri": "https://localhost:8080/", "height": 613} colab_type="code" executionInfo={"elapsed": 6052, "status": "ok", "timestamp": 1584547649384, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="m457bu5FWgpF" outputId="f99507d2-f3dc-48d5-aba9-d2112d9b0729" # Load random image and mask. image_id = np.random.choice(dataset.image_ids, 1)[0] image = dataset.load_image(image_id) mask, class_ids = dataset.load_mask(image_id) original_shape = image.shape # Resize image, window, scale, padding, _ = utils.resize_image( image, min_dim=config.IMAGE_MIN_DIM, max_dim=config.IMAGE_MAX_DIM, mode=config.IMAGE_RESIZE_MODE) mask = utils.resize_mask(mask, scale, padding) # Compute Bounding box bbox = utils.extract_bboxes(mask) # Display image and additional stats print("image_id: ", image_id, dataset.image_reference(image_id)) print("Original shape: ", original_shape) log("image", image) log("mask", mask) log("class_ids", class_ids) log("bbox", bbox) # Display image and instances visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names) # + colab={"base_uri": "https://localhost:8080/", "height": 400} colab_type="code" executionInfo={"elapsed": 3050, "status": "ok", "timestamp": 1584547657720, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="EGkW-P7SWgpM" outputId="161b796a-5cc9-4678-c166-651a7375d57c" image_id = np.random.choice(dataset.image_ids, 1)[0] image, image_meta, class_ids, bbox, mask = modellib.load_image_gt( dataset, config, image_id, use_mini_mask=False) log("image", image) log("image_meta", image_meta) log("class_ids", class_ids) log("bbox", 
bbox) log("mask", mask) display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))]) # + colab={"base_uri": "https://localhost:8080/", "height": 509} colab_type="code" executionInfo={"elapsed": 5626, "status": "ok", "timestamp": 1584547671181, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="FDuRQYp-WgpP" outputId="c18d1b6d-c1fd-461a-ca44-11faf0f4452c" visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names) # + colab={"base_uri": "https://localhost:8080/", "height": 418} colab_type="code" executionInfo={"elapsed": 2031, "status": "ok", "timestamp": 1584547680253, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="KORY0PswWgpT" outputId="1a03926c-8650-4cab-ef3b-e287da3d0277" # Add augmentation and mask resizing. image, image_meta, class_ids, bbox, mask = modellib.load_image_gt( dataset, config, image_id, augment=True, use_mini_mask=True) log("mask", mask) display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))]) # + colab={"base_uri": "https://localhost:8080/", "height": 509} colab_type="code" executionInfo={"elapsed": 6073, "status": "ok", "timestamp": 1584547694634, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="yN5JqLCDWgpW" outputId="a865b848-ffe2-48a8-b8c4-25168eb374f3" mask = utils.expand_mask(bbox, mask, image.shape) visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names) # + [markdown] colab_type="text" id="L2ffgt8NWgpa" # ## Anchors # # For an FPN network, the anchors must be ordered in a way that makes it easy to match anchors to the output of the convolution layers that predict anchor scores and shifts. # * Sort by pyramid level first. All anchors of the first level, then all of the second and so on. This makes it easier to separate anchors by level. 
# * Within each level, sort anchors by feature map processing sequence. Typically, a convolution layer processes a feature map starting from top-left and moving right row by row. # * For each feature map cell, pick any sorting order for the anchors of different ratios. Here we match the order of ratios passed to the function. # + colab={"base_uri": "https://localhost:8080/", "height": 855} colab_type="code" executionInfo={"elapsed": 5157, "status": "ok", "timestamp": 1584547727402, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="CoB3VRj7Wgpb" outputId="07a2f18a-db02-4c60-e1a0-8a37e803a02c" ## Visualize anchors of one cell at the center of the feature map # Load and display random image image_id = np.random.choice(dataset.image_ids, 1)[0] image, image_meta, _, _, _ = modellib.load_image_gt(dataset, crop_config, image_id) # Generate Anchors backbone_shapes = modellib.compute_backbone_shapes(config, image.shape) anchors = utils.generate_pyramid_anchors(config.RPN_ANCHOR_SCALES, config.RPN_ANCHOR_RATIOS, backbone_shapes, config.BACKBONE_STRIDES, config.RPN_ANCHOR_STRIDE) # Print summary of anchors num_levels = len(backbone_shapes) anchors_per_cell = len(config.RPN_ANCHOR_RATIOS) print("Count: ", anchors.shape[0]) print("Scales: ", config.RPN_ANCHOR_SCALES) print("ratios: ", config.RPN_ANCHOR_RATIOS) print("Anchors per Cell: ", anchors_per_cell) print("Levels: ", num_levels) anchors_per_level = [] for l in range(num_levels): num_cells = backbone_shapes[l][0] * backbone_shapes[l][1] anchors_per_level.append(anchors_per_cell * num_cells // config.RPN_ANCHOR_STRIDE**2) print("Anchors in Level {}: {}".format(l, anchors_per_level[l])) # Display fig, ax = plt.subplots(1, figsize=(10, 10)) ax.imshow(image) levels = len(backbone_shapes) for level in range(levels): colors = visualize.random_colors(levels) # Compute the index of the anchors at the center of the image level_start = sum(anchors_per_level[:level]) # sum of 
anchors of previous levels level_anchors = anchors[level_start:level_start+anchors_per_level[level]] print("Level {}. Anchors: {:6} Feature map Shape: {}".format(level, level_anchors.shape[0], backbone_shapes[level])) center_cell = backbone_shapes[level] // 2 center_cell_index = (center_cell[0] * backbone_shapes[level][1] + center_cell[1]) level_center = center_cell_index * anchors_per_cell center_anchor = anchors_per_cell * ( (center_cell[0] * backbone_shapes[level][1] / config.RPN_ANCHOR_STRIDE**2) \ + center_cell[1] / config.RPN_ANCHOR_STRIDE) level_center = int(center_anchor) # Draw anchors. Brightness show the order in the array, dark to bright. for i, rect in enumerate(level_anchors[level_center:level_center+anchors_per_cell]): y1, x1, y2, x2 = rect p = patches.Rectangle((x1, y1), x2-x1, y2-y1, linewidth=2, facecolor='none', edgecolor=(i+1)*np.array(colors[level]) / anchors_per_cell) ax.add_patch(p) # + [markdown] colab_type="text" id="YT1o1PAKWgpg" # ## Data Generator # + colab={} colab_type="code" id="fY_7rhQAWgph" # Create data generator random_rois = 2000 g = modellib.data_generator( dataset, crop_config, shuffle=True, random_rois=random_rois, batch_size=4, detection_targets=True) # + colab={} colab_type="code" id="rBSXjetkWgpk" # Uncomment to run the generator through a lot of images # to catch rare errors # for i in range(1000): # print(i) # _, _ = next(g) # + colab={"base_uri": "https://localhost:8080/", "height": 191} colab_type="code" executionInfo={"elapsed": 9719, "status": "ok", "timestamp": 1584547911917, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="Q4io22PXWgpo" outputId="52b27126-3c03-47dc-915b-f050b3829810" # Get Next Image if random_rois: [normalized_images, image_meta, rpn_match, rpn_bbox, gt_class_ids, gt_boxes, gt_masks, rpn_rois, rois], \ [mrcnn_class_ids, mrcnn_bbox, mrcnn_mask] = next(g) log("rois", rois) log("mrcnn_class_ids", mrcnn_class_ids) log("mrcnn_bbox", mrcnn_bbox) 
log("mrcnn_mask", mrcnn_mask) else: [normalized_images, image_meta, rpn_match, rpn_bbox, gt_boxes, gt_masks], _ = next(g) log("gt_class_ids", gt_class_ids) log("gt_boxes", gt_boxes) log("gt_masks", gt_masks) log("rpn_match", rpn_match, ) log("rpn_bbox", rpn_bbox) image_id = modellib.parse_image_meta(image_meta)["image_id"][0] print("image_id: ", image_id, dataset.image_reference(image_id)) # Remove the last dim in mrcnn_class_ids. It's only added # to satisfy Keras restriction on target shape. mrcnn_class_ids = mrcnn_class_ids[:,:,0] # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 5754, "status": "ok", "timestamp": 1584547919547, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="SyIPlvgTWgpx" outputId="263d97d9-fb84-4600-935f-417cf7d98fa7" b = 0 # Restore original image (reverse normalization) sample_image = modellib.unmold_image(normalized_images[b], config) # Compute anchor shifts. 
indices = np.where(rpn_match[b] == 1)[0] refined_anchors = utils.apply_box_deltas(anchors[indices], rpn_bbox[b, :len(indices)] * config.RPN_BBOX_STD_DEV) log("anchors", anchors) log("refined_anchors", refined_anchors) # Get list of positive anchors positive_anchor_ids = np.where(rpn_match[b] == 1)[0] print("Positive anchors: {}".format(len(positive_anchor_ids))) negative_anchor_ids = np.where(rpn_match[b] == -1)[0] print("Negative anchors: {}".format(len(negative_anchor_ids))) neutral_anchor_ids = np.where(rpn_match[b] == 0)[0] print("Neutral anchors: {}".format(len(neutral_anchor_ids))) # ROI breakdown by class for c, n in zip(dataset.class_names, np.bincount(mrcnn_class_ids[b].flatten())): if n: print("{:23}: {}".format(c[:20], n)) # Show positive anchors fig, ax = plt.subplots(1, figsize=(16, 16)) visualize.draw_boxes(sample_image, boxes=anchors[positive_anchor_ids], refined_boxes=refined_anchors, ax=ax) # + colab={"base_uri": "https://localhost:8080/", "height": 683} colab_type="code" executionInfo={"elapsed": 4071, "status": "ok", "timestamp": 1584547934734, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="fD_WPcq9Wgp4" outputId="156763c8-3d43-41f9-e57f-2691fcba64a9" # Show negative anchors visualize.draw_boxes(sample_image, boxes=anchors[negative_anchor_ids]) # + colab={"base_uri": "https://localhost:8080/", "height": 683} colab_type="code" executionInfo={"elapsed": 4295, "status": "ok", "timestamp": 1584547948559, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="bXCbcqFMWgqA" outputId="1655345a-4e79-46ca-c158-4a75ae2a9a87" # Show neutral anchors. They don't contribute to training. visualize.draw_boxes(sample_image, boxes=anchors[np.random.choice(neutral_anchor_ids, 100)]) # + [markdown] colab_type="text" id="ii9YNFpVWgqI" # ## ROIs # # Typically, the RPN network generates region proposals (a.k.a. Regions of Interest, or ROIs). 
The data generator has the ability to generate proposals as well for illustration and testing purposes. These are controlled by the `random_rois` parameter. # + colab={"base_uri": "https://localhost:8080/", "height": 767} colab_type="code" executionInfo={"elapsed": 4364, "status": "ok", "timestamp": 1584547957959, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="i6PP8BxwWgqL" outputId="0c4c1a1a-e6b3-4fbc-e23d-ef7464fba725" if random_rois: # Class aware bboxes bbox_specific = mrcnn_bbox[b, np.arange(mrcnn_bbox.shape[1]), mrcnn_class_ids[b], :] # Refined ROIs refined_rois = utils.apply_box_deltas(rois[b].astype(np.float32), bbox_specific[:,:4] * config.BBOX_STD_DEV) # Class aware masks mask_specific = mrcnn_mask[b, np.arange(mrcnn_mask.shape[1]), :, :, mrcnn_class_ids[b]] visualize.draw_rois(sample_image, rois[b], refined_rois, mask_specific, mrcnn_class_ids[b], dataset.class_names) # Any repeated ROIs? rows = np.ascontiguousarray(rois[b]).view(np.dtype((np.void, rois.dtype.itemsize * rois.shape[-1]))) _, idx = np.unique(rows, return_index=True) print("Unique ROIs: {} out of {}".format(len(idx), rois.shape[1])) # + colab={"base_uri": "https://localhost:8080/", "height": 777} colab_type="code" executionInfo={"elapsed": 3431, "status": "ok", "timestamp": 1584547979439, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="cHMCCGDxWgqT" outputId="457ba266-b126-4020-b8be-a99189dc3e9d" if random_rois: # Display ROIs and corresponding masks and bounding boxes ids = random.sample(range(rois.shape[1]), 8) images = [] titles = [] for i in ids: image = visualize.draw_box(sample_image.copy(), rois[b,i,:4].astype(np.int32), [255, 0, 0]) image = visualize.draw_box(image, refined_rois[i].astype(np.int64), [0, 255, 0]) images.append(image) titles.append("ROI {}".format(i)) images.append(mask_specific[i] * 255) titles.append(dataset.class_names[mrcnn_class_ids[b,i]][:20]) 
display_images(images, titles, cols=4, cmap="Blues", interpolation="none") # + colab={"base_uri": "https://localhost:8080/", "height": 208} colab_type="code" executionInfo={"elapsed": 23199, "status": "ok", "timestamp": 1584548027677, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14463999960576791803"}, "user_tz": -60} id="hbM53r7uWgqZ" outputId="6a11e9c4-eb35-4fcf-aeb9-6662cb8b9a91" # Check ratio of positive ROIs in a set of images. if random_rois: limit = 10 temp_g = modellib.data_generator( dataset, crop_config, shuffle=True, random_rois=10000, batch_size=1, detection_targets=True) total = 0 for i in range(limit): _, [ids, _, _] = next(temp_g) positive_rois = np.sum(ids[0] > 0) total += positive_rois print("{:5} {:5.2f}".format(positive_rois, positive_rois/ids.shape[1])) print("Average percent: {:.2f}".format(total/(limit*ids.shape[1]))) # + colab={} colab_type="code" id="I--V6UxuWgqf"
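# As a closing note, the per-level anchor counts printed in the Anchors section above follow directly from the image size and the backbone strides. Below is a minimal, self-contained sketch of that arithmetic; the image size, strides, and ratios are typical Mask R-CNN defaults assumed for illustration, not values read from this project's config.

```python
import numpy as np

# Hypothetical values mirroring common Mask R-CNN defaults; the real
# config.* values in this project may differ.
IMAGE_SHAPE = (1024, 1024)
BACKBONE_STRIDES = [4, 8, 16, 32, 64]   # one stride per FPN level
RPN_ANCHOR_RATIOS = [0.5, 1, 2]
RPN_ANCHOR_STRIDE = 1

# Feature map shape at each pyramid level: image size divided by the stride.
backbone_shapes = [
    (int(np.ceil(IMAGE_SHAPE[0] / s)), int(np.ceil(IMAGE_SHAPE[1] / s)))
    for s in BACKBONE_STRIDES]

# Anchors per level = cells * anchors_per_cell / anchor_stride^2.
anchors_per_cell = len(RPN_ANCHOR_RATIOS)
anchors_per_level = [
    anchors_per_cell * (h * w) // RPN_ANCHOR_STRIDE ** 2
    for h, w in backbone_shapes]

print(anchors_per_level)       # [196608, 49152, 12288, 3072, 768]
print(sum(anchors_per_level))  # 261888 total anchors
```

# The count shrinks by a factor of 4 per level because each stride doubles, which is why the finest level dominates the total.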
Code/1. inspect_glomerulus_testdata.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os dataset_filename = "../../Datasets/clickbait-headlines.tsv" print("File: {} \nSize: {} MBs".format(dataset_filename, round(os.path.getsize(dataset_filename)/1024/1024, 2))) # + import csv data = [] labels = [] with open(dataset_filename) as f: reader = csv.reader(f, delimiter="\t") for line in reader: try: data.append(line[0]) labels.append(line[1]) except Exception as e: print(e) print(data[:3]) print(labels[:3]) # + # %%time from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer() vectors = vectorizer.fit_transform(data) print("The dimensions of our vectors:") print(vectors.shape) print("- - -") # - print("The data type of our vectors") print(type(vectors)) print("- - -") print("The size of our vectors (MB):") print(vectors.data.nbytes/1024/1024) print("- - -") print("The size of our vectors in dense format (MB):") print(vectors.todense().nbytes/1024/1024) print("- - - ") print("Number of non zero elements in our vectors") print(vectors.nnz) print("- - -") # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(vectors, labels, test_size=0.2) print(X_train.shape) print(X_test.shape) # + # %%time from sklearn.svm import LinearSVC svm_classifier = LinearSVC() svm_classifier.fit(X_train, y_train) predictions = svm_classifier.predict(X_test) # - print("prediction, label") for i in range(10): print(y_test[i], predictions[i]) # + from sklearn.metrics import accuracy_score, classification_report print("Accuracy: {}\n".format(accuracy_score(y_test, predictions))) print(classification_report(y_test, predictions))
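# The TF-IDF weighting that `TfidfVectorizer` applies above can be sketched with the standard library alone. This simplified version shows only the core term weighting (scikit-learn's variant additionally L2-normalizes each document vector); the toy headlines are invented for illustration.

```python
import math
from collections import Counter

# Stand-in corpus; the real clickbait dataset is far larger.
docs = [
    "you won't believe this trick",
    "scientists publish new study",
    "this trick doctors hate",
]
tokenized = [d.split() for d in docs]
n = len(tokenized)

def tfidf(term, doc_tokens):
    tf = Counter(doc_tokens)[term]                    # term frequency
    df = sum(term in d for d in tokenized)            # document frequency
    # Smooth idf as used by scikit-learn: ln((1+n)/(1+df)) + 1
    idf = math.log((1 + n) / (1 + df)) + 1
    return tf * idf

# "trick" appears in 2 of 3 docs, so it gets a lower weight than
# "believe", which appears in only 1.
print(tfidf("trick", tokenized[0]), tfidf("believe", tokenized[0]))
```

# Rare terms end up with larger weights, which is exactly why TF-IDF features work well for a linear classifier like `LinearSVC`.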
Chapter01/Exercise01.01/chapter_01_exercise_01.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Computation on Arrays: Broadcasting # # Another means of vectorizing operations is to use NumPy's *broadcasting* functionality. Broadcasting is simply a set of rules # for applying binary ufuncs (e.g., addition, subtraction, multiplication, etc.) on arrays of different sizes. # ## Introducing Broadcasting # # Recall that for arrays of the same size, binary operations are performed on an element-by-element basis: import numpy as np a = np.array([0, 1, 2]) b = np.array([5, 5, 5]) a + b # Broadcasting allows these types of binary operations to be performed on arrays of different sizes: for example, we can just as easily add a scalar (think of it as a zero-dimensional array) to an array: a + 5 # We can think of this as an operation that stretches or duplicates the value `5` into the array `[5, 5, 5]`, and adds the results. The advantage of NumPy's broadcasting is that this duplication of values does not actually take place, but it is a useful mental model as we think about broadcasting. # # We can similarly extend this to arrays of higher dimension. Observe the result when we add a one-dimensional array to a two-dimensional array: M = np.ones((3, 3)) M M + a # Here the one-dimensional array `a` is stretched, or broadcast, across the second dimension in order to match the shape of `M`. # # While these examples are relatively easy to understand, more complicated cases can involve broadcasting of both arrays. Consider the following example: # + a = np.arange(3) b = np.arange(3)[:, np.newaxis] print(a) print(b) # - a + b # Just as before, we stretched or broadcast one value to match the shape of the other; here we've stretched both `a` and `b` to match a common shape, and the result is a two-dimensional array! 
The geometry of these examples is visualized in the following figure. # ## Rules of Broadcasting # # Broadcasting in NumPy follows a strict set of rules to determine the interaction between the two arrays: # # - Rule 1: If the two arrays differ in their number of dimensions, the shape of the one with fewer dimensions is *padded* with ones on its leading (left) side. # # - Rule 2: If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other # shape. # # - Rule 3: If in any dimension the sizes disagree and neither is equal to 1, an error is raised. # ### Broadcasting example 1 # # Let's look at adding a two-dimensional array to a one-dimensional array: M = np.ones((2, 3)) a = np.arange(3) print(M) print(a) # Let's consider an operation on these two arrays. The shape of the arrays are: # # - `M.shape = (2, 3)` # - `a.shape = (3,)` # # We see by rule 1 that the array `a` has fewer dimensions, so we pad in on the left with ones: # # - `M.shape -> (2, 3)` # - `a.shape -> (1, 3)` # # By rule 2, we now see that the first dimension disagrees, so we stretch this dimension to match: # # - `M.shape -> (2, 3)` # - `a.shape -> (2, 3)` # # The shapes match, and we see that the final shape will be `(2, 3)` M + a # ## Broadcasting example 2 # Let's take a look at an example where both arrays need to be broadcast: a = np.arange(3).reshape((3, 1)) b = np.arange(3) print(a) print() print(b) # Again, we'll start by writing out the shape of the arrays: # # - `a.shape = (3,1)` # - `b.shape = (3, )` # # Rule 1 says we must pad the shape `b` with ones: # # - `a.shape -> (3, 1)` # - `b.shape -> (1, 3) ` # # And rule 2 tells us that we upgrade each of these ones to match the corresponding # size of the other array: # # - `a.shape -> (3, 3)` # - `b.shape -> (3, 3)` a + b # ## Broadcasting example 3 # # Now let's take a look at an example in which the two arrays are not compatible: M = np.ones((3, 2)) a = 
np.arange(3) print(M) print() print(a) # This is just a slightly different situation than in the first example: the matrix `M` is transposed. How does this # affect the calculation? The shapes of the arrays are # # - `M.shape = (3, 2)` # - `a.shape = (3, )` # # Again, rule 1 tells us that we must pad the shape of `a` with ones: # # - `M.shape -> (3, 2)` # - `a.shape -> (1, 3)` # # By rule 2, the first dimension of `a` is stretched to match that of `M`: # # - `M.shape -> (3, 2)` # - `a.shape -> (3, 3)` # # Now we hit rule 3: the final shapes do not match, so these two arrays are # incompatible, as we can observe by attempting this operation: M + a # Note the potential confusion here: you could imagine making `a` and `M` compatible by, say, padding `a`'s shape with ones on the right rather than the left. # But this is not how broadcasting works. That sort of flexibility might be useful in some cases, but it would lead to potential areas of ambiguity. # If right-side padding is what you'd like, you can do this explicitly by reshaping the array (we'll use the `np.newaxis` keyword): a[:, np.newaxis].shape a[:, np.newaxis] np.reshape(a, (3, 1)) M + a[:, np.newaxis] # Also note that while we've been focusing on the `+` operator here, these broadcasting rules apply to *any* binary `ufunc`. For example, here is the `logaddexp(a, b)` function, which computes `log(exp(a) + exp(b))` with more precision than the naive approach: np.logaddexp(M, a[:, np.newaxis]) # ## Broadcasting in Practice # # Broadcasting operations form the core of many examples we'll see throughout this book. We'll now take a look at a couple of simple examples of # where they can be useful. # # ### Centering an array # # In the previous section, we saw that ufuncs allow a NumPy user to remove the need # to explicitly write slow Python loops. Broadcasting extends this ability. One # commonly seen example is when centering an array of data. 
Imagine you have an array of 10 observations, each # of which consists of 3 values. Using the standard convention, we'll store this in a `10 x 3` array: X = np.random.random((10, 3)) # We can compute the mean of each feature using the `mean` aggregate across the first dimension: Xmean = X.mean(axis=0) # And now we can center the `X` array by subtracting the mean (this is a broadcasting operation): X_centered = X - Xmean # To double-check that we've done this correctly, we can check that the centered array has near zero mean: X_centered.mean(axis=0) # ### Plotting a two-dimensional function # # One place that broadcasting is very useful is in displaying images based on two-dimensional functions. If we want # to define a function `z = f(x, y)`, broadcasting can be used to compute the function across the grid: # # + # x and y have 50 steps from 0 to 5 x = np.linspace(0, 5, 50) y = np.linspace(0, 5, 50)[:, np.newaxis] z = np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x) # - # We'll use Matplotlib to plot this two-dimensional array. # %matplotlib inline import matplotlib.pyplot as plt # + plt.imshow(z, origin='lower', extent=[0, 5, 0, 5], cmap='viridis') plt.colorbar();
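# The three broadcasting rules walked through above can also be captured in a few lines of pure Python. This is a sketch for building intuition, not NumPy's actual implementation (recent NumPy versions expose the real computation as `np.broadcast_shapes`):

```python
def broadcast_shape(shape_a, shape_b):
    """Apply the three broadcasting rules to a pair of shapes."""
    # Rule 1: left-pad the shorter shape with ones.
    ndim = max(len(shape_a), len(shape_b))
    a = (1,) * (ndim - len(shape_a)) + tuple(shape_a)
    b = (1,) * (ndim - len(shape_b)) + tuple(shape_b)
    result = []
    for da, db in zip(a, b):
        if da == db or db == 1:
            result.append(da)   # Rule 2: stretch the size-1 dimension.
        elif da == 1:
            result.append(db)
        else:
            # Rule 3: sizes disagree and neither is 1.
            raise ValueError(
                f"shapes {shape_a} and {shape_b} are incompatible")
    return tuple(result)

print(broadcast_shape((2, 3), (3,)))  # (2, 3) -- example 1
print(broadcast_shape((3, 1), (3,)))  # (3, 3) -- example 2
```

# Passing the shapes from example 3, `(3, 2)` and `(3,)`, raises a `ValueError`, matching the error NumPy produces for `M + a` there.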
numpy/computation_on_arrays_broadcasting.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import time def follow(f, target): f.seek(0, 2) while True: line = f.readline() if not line: time.sleep(0.1) continue target.send(line) def print_line(): while True: line = yield print(line, end="") # + line_printer = print_line() next(line_printer) with open('./fake_log.txt') as f: follow(f, target=line_printer) # - def branch_send(targets): while True: value = yield for target in targets: target.send(value) # + NEW_SUB = 0 def new_subscriber(): global NEW_SUB while True: line = yield if 'A new user subscribed' in line: NEW_SUB += 1 # + line_printer = print_line() next(line_printer) new_sub = new_subscriber() next(new_sub) entry = branch_send([line_printer, new_sub]) next(entry) with open('./fake_log.txt') as f: follow(f, target=entry) # - NEW_SUB
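# A common refinement of the pattern above is a priming decorator, so callers never forget the initial `next()` before `send()`. The sketch below assumes a hypothetical `counter` sink; it also avoids the module-level `NEW_SUB` global by writing counts into a dict passed in by the caller.

```python
import functools

def coroutine(func):
    """Decorator that primes a generator-based coroutine, removing the
    need to call next() by hand before the first send()."""
    @functools.wraps(func)
    def primed(*args, **kwargs):
        gen = func(*args, **kwargs)
        next(gen)  # advance to the first yield
        return gen
    return primed

@coroutine
def counter(needle, counts):
    # Count lines containing `needle`, writing into the shared dict.
    while True:
        line = yield
        if needle in line:
            counts[needle] = counts.get(needle, 0) + 1

counts = {}
c = counter("subscribed", counts)   # already primed, ready for send()
c.send("A new user subscribed\n")
c.send("error: disk full\n")
c.send("A new user subscribed\n")
print(counts)  # {'subscribed': 2}
```

# With the decorator in place, `follow(f, target=...)` can be handed any freshly created sink without the explicit `next(...)` calls used above.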
Ex04-Follow-Coroutine.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc=true # <h1>Table of Contents<span class="tocSkip"></span></h1> # <div class="toc"><ul class="toc-item"><li><span><a href="#Word-Count" data-toc-modified-id="Word-Count-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Word Count</a></span><ul class="toc-item"><li><span><a href="#Using-Counter" data-toc-modified-id="Using-Counter-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Using <code>Counter</code></a></span></li><li><span><a href="#Adding-Word-Counts-From-Two-Distinct-Datasets-Together" data-toc-modified-id="Adding-Word-Counts-From-Two-Distinct-Datasets-Together-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Adding Word Counts From Two Distinct Datasets Together</a></span></li></ul></li><li><span><a href="#Removing-Stopwords-Using-gensim" data-toc-modified-id="Removing-Stopwords-Using-gensim-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Removing Stopwords Using <code>gensim</code></a></span></li><li><span><a href="#Finding-Similar-Word-Matches-Using-difflib" data-toc-modified-id="Finding-Similar-Word-Matches-Using-difflib-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Finding Similar Word Matches Using <code>difflib</code></a></span><ul class="toc-item"><li><span><a href="#Fuzzy-Matching" data-toc-modified-id="Fuzzy-Matching-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Fuzzy Matching</a></span><ul class="toc-item"><li><span><a href="#Use-Cases" data-toc-modified-id="Use-Cases-3.1.1"><span class="toc-item-num">3.1.1&nbsp;&nbsp;</span>Use Cases</a></span></li><li><span><a href="#Limitations" data-toc-modified-id="Limitations-3.1.2"><span class="toc-item-num">3.1.2&nbsp;&nbsp;</span>Limitations</a></span></li><li><span><a href="#Slow-Performance" data-toc-modified-id="Slow-Performance-3.1.3"><span 
class="toc-item-num">3.1.3&nbsp;&nbsp;</span>Slow Performance</a></span></li><li><span><a href="#Not-&quot;Language-Aware&quot;" data-toc-modified-id="Not-&quot;Language-Aware&quot;-3.1.4"><span class="toc-item-num">3.1.4&nbsp;&nbsp;</span>Not "Language Aware"</a></span></li></ul></li><li><span><a href="#Install-the-Dependencies-if-Necessary" data-toc-modified-id="Install-the-Dependencies-if-Necessary-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Install the Dependencies if Necessary</a></span><ul class="toc-item"><li><span><a href="#Token-Set-Ratio-(following-examples-from-fuzzywuzzy's-documentation)" data-toc-modified-id="Token-Set-Ratio-(following-examples-from-fuzzywuzzy's-documentation)-3.2.1"><span class="toc-item-num">3.2.1&nbsp;&nbsp;</span>Token Set Ratio (following examples from <code>fuzzywuzzy</code>'s documentation)</a></span></li><li><span><a href="#Token-Set-Ratio" data-toc-modified-id="Token-Set-Ratio-3.2.2"><span class="toc-item-num">3.2.2&nbsp;&nbsp;</span>Token Set Ratio</a></span></li></ul></li></ul></li></ul></div> # - # # Word Count # ## Using `Counter` # A normal dictionary object will return a key error if you do not first initialize the key value: from typing import Dict ordinary_dict: Dict = dict() ordinary_dict["yu"] += 1 # The `Counter` object in `collections` has a default value of 0 for every key. 
# + from collections import Counter counter = Counter() counter["yu"] += 1 counter # - # Moreover, you can pass a list of strings to the `Counter` constructor, and call the # `most_common` method to get the most common words: from typing import List words: List[str] = open("tale-of-two-cities.txt").read().split() dickens_counter = Counter(words) dickens_counter.most_common(5) # You can also use this counter to find the fraction of a corpus that a particular word accounts for: dickens_counter["the"] / sum(dickens_counter.values()) # ## Adding Word Counts From Two Distinct Datasets Together # We can add two `Counter` objects together to get their combined counts. In this example, we'll load in the `fraudulent_emails.txt` dataset and start a new counter called `email_counter`. email_counter = Counter(open("fraudulent_emails.txt").read().split()) email_counter.most_common(5) combined_counter: Counter = dickens_counter + email_counter combined_counter.most_common(5) # You can also subtract counts from one dataset: # get back the original email_counter (combined_counter - dickens_counter).most_common(5) # # Removing Stopwords Using `gensim` # # Removing stopwords in `nltk` often means you first have to tokenize the document into distinct tokens, then check each token against a stopword list. Another commonly used NLP library in Python, `gensim`, has a helper function to do this all in one go: # + from gensim.parsing.preprocessing import remove_stopwords text = ''' Rendered in a manner desperate, by her state and by the beckoning of their conductor, he drew over his neck the arm that shook upon his shoulder, lifted her a little, and hurried her into the room. He sat her down just within the door, and held her, clinging to him. ''' processed_text = remove_stopwords(text) processed_text # - # Note, however, this only works well if you are happy with Gensim's predefined list of stopwords.
To inspect what stopwords are used in Gensim, use # ```python # from gensim.parsing.preprocessing import STOPWORDS # print(STOPWORDS) # ``` # # Finding Similar Word Matches Using `difflib` # Within Python's standard library, the `difflib` module has a variety of tools for identifying differences between pieces of text. It uses the **Ratcliff-Obershelp algorithm**, described briefly below: # # > The idea is to find the longest contiguous matching subsequence that contains no “junk” elements; these “junk” elements are ones that are uninteresting in some sense, such as blank lines or whitespace. (Handling junk is an extension to the Ratcliff and Obershelp algorithm.) The same idea is then applied recursively to the pieces of the sequences to the left and to the right of the matching subsequence. This does not yield minimal edit sequences, but does tend to yield matches that “look right” to people. [Link](https://docs.python.org/3/library/difflib.html) # this loads in the top 20k most popular words in the English language words = set(map(lambda word: word.replace("\n", ""), open("20k.txt").readlines())) # + import difflib w = "knaght" difflib.get_close_matches(w, words) # - # You can combine this with a tokenizer to create your own (very basic) spellcheck function: # + from nltk.tokenize import word_tokenize def spellcheck_document(text): new_tokens = [] for token in word_tokenize(text): matches = difflib.get_close_matches(token.lower(), words, n=1, cutoff=0.7) if len(matches) == 0 or token.lower() in words: new_tokens.append(token) else: new_tokens.append(matches[0]) return " ".join(new_tokens) spellcheck_document("He is a craezy perzon") # - # ## Fuzzy Matching # Fuzzy matching refers to "approximate matching", where we allow a certain degree of error between the query value and the search result.
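As a baseline, the Ratcliff-Obershelp similarity quoted earlier is exposed directly through `difflib.SequenceMatcher`; a minimal sketch:

```python
from difflib import SequenceMatcher

# SequenceMatcher implements the Ratcliff-Obershelp matching described
# above; .ratio() returns 2*M/T, where M is the total length of the
# matched blocks and T is the combined length of both strings.
ratio = SequenceMatcher(None, "knaght", "knight").ratio()
print(round(ratio, 3))  # 0.833
```

`get_close_matches` uses this same ratio under the hood, with a default cutoff of 0.6.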
# # The `fuzzywuzzy` library uses a distance measure called **Levenshtein distance**, which is the minimum number of single-character edits (insertions, deletions, substitutions) needed to transform one string into another. # # * `cat` $\rightarrow$ `cat` : `0` distance # * `dog` $\rightarrow$ `door`: `2` distance # # ### Use Cases # # * spell checking # * DNA analysis # * authorship/plagiarism detection # ### Limitations # ### Slow Performance # this loads in the top 20k most popular words in the English language words = set(map(lambda word: word.replace("\n", ""), open("20k.txt").readlines())) # + from timeit import default_timer as timer from fuzzywuzzy import fuzz, process target = "kerfuffled" start = timer() for i in range(10): bests = process.extractBests(target, words, scorer=fuzz.ratio) end = timer() print(end - start) # time in seconds for 10 lookups print(f'Best results: {bests}') # - fuzz.ratio("kerfuffled", "ruled") # ### Not "Language Aware" # > Comparing the classification proposed by the Levenshtein distance to that of the comparative method shows that the Levenshtein classification is correct only 40% of time. Standardizing the orthography increases the performance, but only to a maximum of 65% accuracy within language subgroups. The accuracy of the Levenshtein classification **decreases rapidly with phylogenetic distance**, failing to discriminate homology and chance similarity across distantly related languages. This poor performance suggests the need for more linguistically nuanced methods for automated language classification tasks. # # ["Levenshtein distances fail to identify language relationships accurately" by <NAME>](https://dl.acm.org/doi/10.1162/COLI_a_00073) # ## Install the Dependencies if Necessary # ```python # # !pip3 install fuzzywuzzy # # !pip3 install python-Levenshtein # ``` from fuzzywuzzy import fuzz fuzz.ratio("cat", "saturday") fuzz.ratio("dog", "cat") fuzz.ratio("dog", "hog") fuzz.ratio("smithy", "smithfield") # is it symmetric?
fuzz.ratio("smithfield", "smithy") fuzz.ratio("photosynthesis", "photosynthetic") # does case matter? fuzz.ratio("Photosynthesis", "photosynthetic") # + # what happens if you arbitrarily increase the size of the strings? fuzz.ratio("dog" * 3, "hog" * 3) # - # ### Token Sort Ratio vs Token Set Ratio (following examples from `fuzzywuzzy`'s documentation) print(fuzz.token_sort_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")) print(fuzz.token_set_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")) # ### Token Sort Ratio print(fuzz.ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")) print(fuzz.token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear"))
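The Levenshtein distance described above is easy to write out by hand; a minimal dynamic-programming sketch (for illustration only — the `python-Levenshtein` package implements the real thing in C):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b."""
    # prev holds the distances from a[:i-1] to every prefix of b
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("cat", "cat"))   # 0
print(levenshtein("dog", "door"))  # 2
```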
week3/Tips and Tricks.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Scoring functions # # Despite our reservations about treating our predictions as "yes/no" predictions of crime, we can consider using a [Scoring rule](https://en.wikipedia.org/wiki/Scoring_rule). # # ## References # # 1. Roberts, "Assessing the spatial and temporal variation in the skill of precipitation forecasts from an NWP model" [DOI:10.1002/met.57](http://onlinelibrary.wiley.com/doi/10.1002/met.57/abstract) # 2. Weijs, "Kullback–Leibler Divergence as a Forecast Skill Score with Classic Reliability–Resolution–Uncertainty Decomposition" [DOI:10.1175/2010MWR3229.1](https://doi.org/10.1175/2010MWR3229.1) # # ## Discussion # # The classic [Brier score](https://en.wikipedia.org/wiki/Brier_score), for example, is appropriate when we have a sequence of events $i$ which may each either occur or not. Let $p_i$ be our predicted probability that event $i$ will occur, and let $o_i$ be $1$ if the event occurred, and $0$ otherwise. The Brier score is # $$ \frac{1}{N} \sum_{i=1}^N (p_i - o_i)^2. $$ # # The paper [1] considers aggregating this over different (spatial) scales. For the moment, we shall use [1] by analogy only, in order to deal with the problem that we might have repeated events ($o_i$ for us is the number of events to occur in a cell, so may be $>1$). We shall follow [1], vaguely, and let $u_i$ be the _fraction_ of the total number of events which occurred in spatial region (typically, grid cell) $i$. The score is then # $$ S = \frac{1}{N} \sum_{i=1}^N (p_i - u_i)^2 $$ # where we sum over all spatial units $i=1,\cdots,N$. # # ### Normalisation # # Notice that this is related to the KDE method.
We can think of the values $(u_i)$ as a histogram estimate of the real probability density, and then $S$ is just the mean squared error, estimating the continuous version # $$ \int_{\Omega} (p(x) - f(x))^2 \ dx $$ # where $\Omega$ is the study area. If we divide by the area of $\Omega$, then we obtain a measure of difference which is invariant under rescaling of $\Omega$. # # The values $(p_i)$, as probabilities, sum to $1$, and the $(u_i)$ by definition sum to $1$. We hence see that an appropriate normalisation factor for $S$ is # $$ S = \frac{1}{NA} \sum_{i=1}^N (p_i - u_i)^2 $$ # where $A$ is the area of each grid cell and so $NA$ is the total area. # # ### Skill scores # # A related [Skill score](https://en.wikipedia.org/wiki/Forecast_skill) is # $$ SS = 1 - \frac{S}{S_\text{worst}} = 1 - \frac{\sum_{i=1}^N (p_i - u_i)^2}{\sum_{i=1}^N p_i^2 + u_i^2} # = \frac{2\sum_{i=1}^N p_iu_i}{\sum_{i=1}^N p_i^2 + u_i^2}. $$ # Here # $$ S_\text{worst} = \frac{1}{NA} \sum_{i=1}^N (p_i^2 + u_i^2) $$ # is the worst possible value for $S$ if there is no spatial association between the $(p_i)$ and $(u_i)$. # # ### Multi-scale issues # # Finally, [1] considers a multi-scale measure by aggregating the values $(p_i)$ and $(u_i)$ over larger and larger areas. # - Firstly we use $(p_i)$ and $(u_i)$ as is, on a grid of size $n\times m$ say. So $N=nm$. # - Now take the "moving average" or "sliding window" by averaging over each $2\times 2$ block. This gives a grid of size $(n-1) \times (m-1)$. # - And so on... # - Ending with the average of $p_i$ over the whole grid compared to the average of $u_i$ over the whole grid. These will always agree. # - If the grid is not square, then we will stop before this. Similarly, non-rectangular regions will need to be dealt with in an ad-hoc fashion.
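The $2\times 2$ sliding-window step in the list above can be sketched with `numpy` (here a plain average, so an $n\times m$ grid becomes $(n-1)\times(m-1)$; renormalisation is a separate concern, as the text notes):

```python
import numpy as np

def window_mean(grid):
    """Average each overlapping 2x2 block: shape (n, m) -> (n-1, m-1)."""
    return (grid[:-1, :-1] + grid[:-1, 1:] + grid[1:, :-1] + grid[1:, 1:]) / 4.0

p = np.array([[0.1, 0.2, 0.1],
              [0.2, 0.3, 0.1]])
print(window_mean(p))  # shape (1, 2): the block means 0.2 and 0.175
```

Applying `window_mean` repeatedly walks down the scales until the smaller grid dimension reaches 1.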
# # Finally, we should not forget to normalise correctly -- at each stage, the "averaged" values should still sum to $1$ (being probabilities) and we should continue to divide by the total area. Let us think a bit more clearly about this. Suppose we group the original cells into (in general, overlapping) regions $(\Omega_i)$ and values (the _sum_ of values in the regions) $(x_i)$ and $(x_i')$ say. We then want to _normalise_ these values in some way, and compute the appropriate Brier score. If each region $\Omega_i$ has the same area (e.g. we start with a rectangular grid) then there is no issue. For more general grids (which have been clipped to geometry, say) we proceed with a vague _analogy_ by pretending that the regions $\Omega_i$ are actually disjoint, cover the whole study area, and that $x_i = \int_{\Omega_i} f$ for some non-normalised function $f$. # # - We renormalise $f$ by setting $g = af$ for some constant $a$ with $\int g=1$ so $a^{-1} = \int f = \sum_i x_i$. So $g = y_i$ on $\Omega_i$ where $y_i = \big( \sum_i x_i \big)^{-1} x_i$. # - Do the same for $x_i'$ leading to $y'_i = \big( \sum_i x'_i \big)^{-1} x'_i$. # - Compute $S = \frac{1}{|\Omega|} \int (g - g')^2 = \big(\sum_i |\Omega_i|\big)^{-1} \sum_i |\Omega_i| (y_i - y'_i)^2$ and similarly $S_{\text{worst}} = \big(\sum_i |\Omega_i|\big)^{-1} \sum_i |\Omega_i| (y_i^2 + (y'_i)^2)$. # ## Use Kullback-Leibler instead # # Following [2] now (and again with an ad-hoc change to allow non-binary variables) we could use the Kullback-Leibler divergence (discussed in more detail, and more rigorously, in another notebook) to form the score: # $$ S_{KL} = \frac{1}{N} \sum_{i=1}^N \Big( u_i \log\big( u_i / p_i \big) # + (1-u_i) \log\big( (1-u_i) / (1-p_i) \big) \Big) $$ # We use the convention that $0 \cdot \log(0) = 0$, and we should adjust zero values of $p_i$ to some very small value.
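A sketch of $S_{KL}$ in code, using the $0 \cdot \log(0) = 0$ convention and clipping $p_i$ away from $0$ and $1$ as suggested (the variable names are illustrative):

```python
import numpy as np

def kl_score(p, u, eps=1e-12):
    """S_KL: mean binary KL divergence between observed fractions u_i
    and predicted probabilities p_i, treating 0*log(0) as 0."""
    p = np.clip(p, eps, 1 - eps)  # avoid log(0) from zero predictions
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = np.where(u > 0, u * np.log(u / p), 0.0)
        t2 = np.where(u < 1, (1 - u) * np.log((1 - u) / (1 - p)), 0.0)
    return float(np.mean(t1 + t2))

p = np.array([0.5, 0.3, 0.2])
u = np.array([0.6, 0.3, 0.1])
print(kl_score(p, u))  # > 0; equals 0 only when p == u
```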
evaluation/Scoring functions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _cell_guid="6c1b5539-1759-e753-6dae-41f655ad31eb" import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import Lasso # %matplotlib inline # + _cell_guid="4966d147-a568-0ce2-d84e-6dd84b2c62eb" train = pd.read_csv("../input/train.csv") test = pd.read_csv("../input/test.csv") # + _cell_guid="a8af6127-0180-983a-8ffd-ec18d569bbb8" all_data = pd.concat((train.loc[:,'MSSubClass':'SaleCondition'], test.loc[:,'MSSubClass':'SaleCondition']), ignore_index=True) # + [markdown] _cell_guid="7ffbc4c8-bf8e-b7ad-8a31-7cd41bfbb1fa" # ## Imputation of missing values ## # + _cell_guid="41f21588-54a2-cccf-ebfb-b82f22818d08" # I have no idea how to do this better. Probably it is better to do nothing x = all_data.loc[np.logical_not(all_data["LotFrontage"].isnull()), "LotArea"] y = all_data.loc[np.logical_not(all_data["LotFrontage"].isnull()), "LotFrontage"] # plt.scatter(x, y) t = (x <= 25000) & (y <= 150) p = np.polyfit(x[t], y[t], 1) all_data.loc[all_data['LotFrontage'].isnull(), 'LotFrontage'] = np.polyval(p, all_data.loc[all_data['LotFrontage'].isnull(), 'LotArea']) # + [markdown] _cell_guid="adad1d8c-1f3a-cc8d-eebd-d0f97110a7da" # There are many features where NaN should be interpreted as the absence of the property.
In other cases, I replace NaN with the most common value # + _cell_guid="3bfae950-1716-ccdd-1fe7-c01a0f07cfd3" all_data.loc[all_data.Alley.isnull(), 'Alley'] = 'NoAlley' all_data.loc[all_data.MasVnrType.isnull(), 'MasVnrType'] = 'None' # no good all_data.loc[all_data.MasVnrType == 'None', 'MasVnrArea'] = 0 all_data.loc[all_data.BsmtQual.isnull(), 'BsmtQual'] = 'NoBsmt' all_data.loc[all_data.BsmtCond.isnull(), 'BsmtCond'] = 'NoBsmt' all_data.loc[all_data.BsmtExposure.isnull(), 'BsmtExposure'] = 'NoBsmt' all_data.loc[all_data.BsmtFinType1.isnull(), 'BsmtFinType1'] = 'NoBsmt' all_data.loc[all_data.BsmtFinType2.isnull(), 'BsmtFinType2'] = 'NoBsmt' all_data.loc[all_data.BsmtFinType1=='NoBsmt', 'BsmtFinSF1'] = 0 all_data.loc[all_data.BsmtFinType2=='NoBsmt', 'BsmtFinSF2'] = 0 all_data.loc[all_data.BsmtFinSF1.isnull(), 'BsmtFinSF1'] = all_data.BsmtFinSF1.median() all_data.loc[all_data.BsmtQual=='NoBsmt', 'BsmtUnfSF'] = 0 all_data.loc[all_data.BsmtUnfSF.isnull(), 'BsmtUnfSF'] = all_data.BsmtUnfSF.median() all_data.loc[all_data.BsmtQual=='NoBsmt', 'TotalBsmtSF'] = 0 all_data.loc[all_data.FireplaceQu.isnull(), 'FireplaceQu'] = 'NoFireplace' all_data.loc[all_data.GarageType.isnull(), 'GarageType'] = 'NoGarage' all_data.loc[all_data.GarageFinish.isnull(), 'GarageFinish'] = 'NoGarage' all_data.loc[all_data.GarageQual.isnull(), 'GarageQual'] = 'NoGarage' all_data.loc[all_data.GarageCond.isnull(), 'GarageCond'] = 'NoGarage' all_data.loc[all_data.BsmtFullBath.isnull(), 'BsmtFullBath'] = 0 all_data.loc[all_data.BsmtHalfBath.isnull(), 'BsmtHalfBath'] = 0 all_data.loc[all_data.KitchenQual.isnull(), 'KitchenQual'] = 'TA' all_data.loc[all_data.MSZoning.isnull(), 'MSZoning'] = 'RL' all_data.loc[all_data.Utilities.isnull(), 'Utilities'] = 'AllPub' all_data.loc[all_data.Exterior1st.isnull(), 'Exterior1st'] = 'VinylSd' all_data.loc[all_data.Exterior2nd.isnull(), 'Exterior2nd'] = 'VinylSd' all_data.loc[all_data.Functional.isnull(), 'Functional'] = 'Typ'
all_data.loc[all_data.SaleCondition.isnull(), 'SaleCondition'] = 'Normal' all_data.loc[all_data.SaleType.isnull(), 'SaleType'] = 'WD' all_data.loc[all_data['PoolQC'].isnull(), 'PoolQC'] = 'NoPool' all_data.loc[all_data['Fence'].isnull(), 'Fence'] = 'NoFence' all_data.loc[all_data['MiscFeature'].isnull(), 'MiscFeature'] = 'None' all_data.loc[all_data['Electrical'].isnull(), 'Electrical'] = 'SBrkr' # only one is null and it has type Detchd all_data.loc[all_data['GarageArea'].isnull(), 'GarageArea'] = all_data.loc[all_data['GarageType']=='Detchd', 'GarageArea'].mean() all_data.loc[all_data['GarageCars'].isnull(), 'GarageCars'] = all_data.loc[all_data['GarageType']=='Detchd', 'GarageCars'].median() # + [markdown] _cell_guid="9bb00740-bde7-a678-c29f-fb1e2709105b" # ## Normalization ## # + _cell_guid="f706c097-313b-d53e-908d-2d02a10868c4" # where there is a natural order, we encode the category as a number all_data = all_data.replace({'Utilities': {'AllPub': 1, 'NoSeWa': 0, 'NoSewr': 0, 'ELO': 0}, 'Street': {'Pave': 1, 'Grvl': 0 }, 'FireplaceQu': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'NoFireplace': 0 }, 'Fence': {'GdPrv': 2, 'GdWo': 2, 'MnPrv': 1, 'MnWw': 1, 'NoFence': 0}, 'ExterQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1 }, 'ExterCond': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1 }, 'BsmtQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'NoBsmt': 0}, 'BsmtExposure': {'Gd': 3, 'Av': 2, 'Mn': 1, 'No': 0, 'NoBsmt': 0}, 'BsmtCond': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'NoBsmt': 0}, 'GarageQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'NoGarage': 0}, 'GarageCond': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'NoGarage': 0}, 'KitchenQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1}, 'Functional': {'Typ': 0, 'Min1': 1, 'Min2': 1, 'Mod': 2, 'Maj1': 3, 'Maj2': 4, 'Sev': 5, 'Sal': 6} }) # + _cell_guid="a22c83af-1b99-80c5-c9e0-ae3b98fc8277" newer_dwelling = all_data.MSSubClass.replace({20: 1, 30: 0, 40: 0, 45: 0, 50: 0, 60: 1, 70: 0, 75: 0, 80: 0, 85: 0, 90: 0,
120: 1, 150: 0, 160: 0, 180: 0, 190: 0}) newer_dwelling.name = 'newer_dwelling' # + _cell_guid="fcc7f095-ba43-0377-66bc-8ebdfbb18e31" all_data = all_data.replace({'MSSubClass': {20: 'SubClass_20', 30: 'SubClass_30', 40: 'SubClass_40', 45: 'SubClass_45', 50: 'SubClass_50', 60: 'SubClass_60', 70: 'SubClass_70', 75: 'SubClass_75', 80: 'SubClass_80', 85: 'SubClass_85', 90: 'SubClass_90', 120: 'SubClass_120', 150: 'SubClass_150', 160: 'SubClass_160', 180: 'SubClass_180', 190: 'SubClass_190'}}) # + _cell_guid="4245cfd6-86f6-4590-fbf3-3f292f3f0495" # The idea is good quality should rise price, poor quality - reduce price overall_poor_qu = all_data.OverallQual.copy() overall_poor_qu = 5 - overall_poor_qu overall_poor_qu[overall_poor_qu<0] = 0 overall_poor_qu.name = 'overall_poor_qu' overall_good_qu = all_data.OverallQual.copy() overall_good_qu = overall_good_qu - 5 overall_good_qu[overall_good_qu<0] = 0 overall_good_qu.name = 'overall_good_qu' overall_poor_cond = all_data.OverallCond.copy() overall_poor_cond = 5 - overall_poor_cond overall_poor_cond[overall_poor_cond<0] = 0 overall_poor_cond.name = 'overall_poor_cond' overall_good_cond = all_data.OverallCond.copy() overall_good_cond = overall_good_cond - 5 overall_good_cond[overall_good_cond<0] = 0 overall_good_cond.name = 'overall_good_cond' exter_poor_qu = all_data.ExterQual.copy() exter_poor_qu[exter_poor_qu<3] = 1 exter_poor_qu[exter_poor_qu>=3] = 0 exter_poor_qu.name = 'exter_poor_qu' exter_good_qu = all_data.ExterQual.copy() exter_good_qu[exter_good_qu<=3] = 0 exter_good_qu[exter_good_qu>3] = 1 exter_good_qu.name = 'exter_good_qu' exter_poor_cond = all_data.ExterCond.copy() exter_poor_cond[exter_poor_cond<3] = 1 exter_poor_cond[exter_poor_cond>=3] = 0 exter_poor_cond.name = 'exter_poor_cond' exter_good_cond = all_data.ExterCond.copy() exter_good_cond[exter_good_cond<=3] = 0 exter_good_cond[exter_good_cond>3] = 1 exter_good_cond.name = 'exter_good_cond' bsmt_poor_cond = all_data.BsmtCond.copy() 
bsmt_poor_cond[bsmt_poor_cond<3] = 1 bsmt_poor_cond[bsmt_poor_cond>=3] = 0 bsmt_poor_cond.name = 'bsmt_poor_cond' bsmt_good_cond = all_data.BsmtCond.copy() bsmt_good_cond[bsmt_good_cond<=3] = 0 bsmt_good_cond[bsmt_good_cond>3] = 1 bsmt_good_cond.name = 'bsmt_good_cond' garage_poor_qu = all_data.GarageQual.copy() garage_poor_qu[garage_poor_qu<3] = 1 garage_poor_qu[garage_poor_qu>=3] = 0 garage_poor_qu.name = 'garage_poor_qu' garage_good_qu = all_data.GarageQual.copy() garage_good_qu[garage_good_qu<=3] = 0 garage_good_qu[garage_good_qu>3] = 1 garage_good_qu.name = 'garage_good_qu' garage_poor_cond = all_data.GarageCond.copy() garage_poor_cond[garage_poor_cond<3] = 1 garage_poor_cond[garage_poor_cond>=3] = 0 garage_poor_cond.name = 'garage_poor_cond' garage_good_cond = all_data.GarageCond.copy() garage_good_cond[garage_good_cond<=3] = 0 garage_good_cond[garage_good_cond>3] = 1 garage_good_cond.name = 'garage_good_cond' kitchen_poor_qu = all_data.KitchenQual.copy() kitchen_poor_qu[kitchen_poor_qu<3] = 1 kitchen_poor_qu[kitchen_poor_qu>=3] = 0 kitchen_poor_qu.name = 'kitchen_poor_qu' kitchen_good_qu = all_data.KitchenQual.copy() kitchen_good_qu[kitchen_good_qu<=3] = 0 kitchen_good_qu[kitchen_good_qu>3] = 1 kitchen_good_qu.name = 'kitchen_good_qu' qu_list = pd.concat((overall_poor_qu, overall_good_qu, overall_poor_cond, overall_good_cond, exter_poor_qu, exter_good_qu, exter_poor_cond, exter_good_cond, bsmt_poor_cond, bsmt_good_cond, garage_poor_qu, garage_good_qu, garage_poor_cond, garage_good_cond, kitchen_poor_qu, kitchen_good_qu), axis=1) bad_heating = all_data.HeatingQC.replace({'Ex': 0, 'Gd': 0, 'TA': 0, 'Fa': 1, 'Po': 1}) bad_heating.name = 'bad_heating' MasVnrType_Any = all_data.MasVnrType.replace({'BrkCmn': 1, 'BrkFace': 1, 'CBlock': 1, 'Stone': 1, 'None': 0}) MasVnrType_Any.name = 'MasVnrType_Any' SaleCondition_PriceDown = all_data.SaleCondition.replace({'Abnorml': 1, 'Alloca': 1, 'AdjLand': 1, 'Family': 1, 'Normal': 0, 'Partial': 0}) 
SaleCondition_PriceDown.name = 'SaleCondition_PriceDown' Neighborhood_Good = pd.DataFrame(np.zeros((all_data.shape[0],1)), columns=['Neighborhood_Good']) Neighborhood_Good[all_data.Neighborhood=='NridgHt'] = 1 Neighborhood_Good[all_data.Neighborhood=='Crawfor'] = 1 Neighborhood_Good[all_data.Neighborhood=='StoneBr'] = 1 Neighborhood_Good[all_data.Neighborhood=='Somerst'] = 1 Neighborhood_Good[all_data.Neighborhood=='NoRidge'] = 1 # TODO: do something with BsmtFinType1, BsmtFinType2 # + [markdown] _cell_guid="8cbb3844-4bef-bbad-3d78-6f834300d110" # I have no idea what to do with Exterior1st, Exterior2nd, RoofMatl, Condition1, Condition2, BldgType. I'll try to convert them into some kind of price brackets # + _cell_guid="f0fdf976-c5bb-fe0b-3ffe-511197bf11b4" from sklearn.svm import SVC svm = SVC(C=100) # price categories pc = pd.Series(np.zeros(train.shape[0])) pc[:] = 'pc1' pc[train.SalePrice >= 150000] = 'pc2' pc[train.SalePrice >= 220000] = 'pc3' columns_for_pc = ['Exterior1st', 'Exterior2nd', 'RoofMatl', 'Condition1', 'Condition2', 'BldgType'] X_t = pd.get_dummies(train.loc[:, columns_for_pc], sparse=True) svm.fit(X_t, pc) pc_pred = svm.predict(X_t) # + _cell_guid="b7ecdeae-937b-a58d-befc-4d2b638e1041" p = train.SalePrice/100000 plt.hist(p[pc_pred=='pc1']) plt.hist(p[pc_pred=='pc2']) plt.hist(p[pc_pred=='pc3']) # + _cell_guid="b444fe09-fd4b-28e9-a08d-6080f8e9ea1d" price_category = pd.DataFrame(np.zeros((all_data.shape[0],1)), columns=['pc']) X_t = pd.get_dummies(all_data.loc[:, columns_for_pc], sparse=True) pc_pred = svm.predict(X_t) price_category[pc_pred=='pc2'] = 1 price_category[pc_pred=='pc3'] = 2 price_category = price_category.to_sparse() # + _cell_guid="df571648-bb68-1eed-ad9f-e924911f4dbf" # Months with the largest number of deals may be significant season = all_data.MoSold.replace( {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1, 8: 0, 9: 0, 10: 0, 11: 0, 12: 0}) season.name = 'season' # The numeric month value is not significant all_data = all_data.replace({'MoSold': {1: 'Jan', 2:
'Feb', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun', 7: 'Jul', 8: 'Aug', 9: 'Sep', 10: 'Oct', 11: 'Nov', 12: 'Dec'}}) # + _cell_guid="7428a0fe-0695-1709-fbee-e57148f7eb39" all_data = all_data.replace({'CentralAir': {'Y': 1, 'N': 0}}) all_data = all_data.replace({'PavedDrive': {'Y': 1, 'P': 0, 'N': 0}}) # + _cell_guid="4a4b1ece-a80f-0071-8020-522e9c75fa26" reconstruct = pd.DataFrame(np.zeros((all_data.shape[0],1)), columns=['Reconstruct']) reconstruct[all_data.YrSold < all_data.YearRemodAdd] = 1 reconstruct = reconstruct.to_sparse() recon_after_buy = pd.DataFrame(np.zeros((all_data.shape[0],1)), columns=['ReconstructAfterBuy']) recon_after_buy[all_data.YearRemodAdd >= all_data.YrSold] = 1 recon_after_buy = recon_after_buy.to_sparse() build_eq_buy = pd.DataFrame(np.zeros((all_data.shape[0],1)), columns=['Build.eq.Buy']) build_eq_buy[all_data.YearBuilt >= all_data.YrSold] = 1 build_eq_buy = build_eq_buy.to_sparse() # + _cell_guid="4aac1977-9a4c-0dfe-8abc-737cb0c945cb" # I hope this will help all_data.YrSold = 2010 - all_data.YrSold # + _cell_guid="54ee71df-fdcd-00df-6adf-f76453daacdf" year_map = pd.concat(pd.Series('YearGroup' + str(i+1), index=range(1871+i*20,1891+i*20)) for i in range(0, 7)) all_data.GarageYrBlt = all_data.GarageYrBlt.map(year_map) all_data.loc[all_data['GarageYrBlt'].isnull(), 'GarageYrBlt'] = 'NoGarage' # + _cell_guid="060e746d-16fb-1d41-468a-584b1c5b1cb3" all_data.YearBuilt = all_data.YearBuilt.map(year_map) all_data.YearRemodAdd = all_data.YearRemodAdd.map(year_map) # + [markdown] _cell_guid="6ebb4194-80fb-a91d-e153-cd38688b4a1d" # Scaling numeric data # + _cell_guid="2b153f12-8081-ebe4-e503-3c55c8350618" numeric_feats = all_data.dtypes[all_data.dtypes != "object"].index t = all_data[numeric_feats].quantile(.95) use_max_scaler = t[t == 0].index use_95_scaler = t[t != 0].index all_data[use_max_scaler] = all_data[use_max_scaler]/all_data[use_max_scaler].max() all_data[use_95_scaler] = all_data[use_95_scaler]/all_data[use_95_scaler].quantile(.95) # +
_cell_guid="89aace43-ac8c-3c7d-eaf7-c6d85adec796" t = ['LotFrontage', 'LotArea', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'GarageArea', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'MiscVal'] all_data.loc[:, t] = np.log1p(all_data.loc[:, t]) # + [markdown] _cell_guid="063dfaf9-7f49-5c62-655b-8c35d5d2f19e" # ## Preparing for sklearn ## # + _cell_guid="83da8155-1385-dce5-9e8b-4630ef12ff61" # all classes in sklearn require numeric data only # transform categorical variables into binary indicators X = pd.get_dummies(all_data, sparse=True) X = X.fillna(0) # + _cell_guid="a64a33b1-9cd0-986a-bc12-76b30a69f912" X = X.drop('RoofMatl_ClyTile', axis=1) # only one is not zero X = X.drop('Condition2_PosN', axis=1) # only two are not zero X = X.drop('MSZoning_C (all)', axis=1) X = X.drop('MSSubClass_SubClass_160', axis=1) # these features definitely cause overfitting # + _cell_guid="4c3e142c-018d-484d-1b49-997d671a8b12" # add new features X = pd.concat((X, newer_dwelling, season, reconstruct, recon_after_buy, qu_list, bad_heating, MasVnrType_Any, price_category, build_eq_buy), axis=1) # + [markdown] _cell_guid="a3d8ee03-439b-1fe1-0170-2948e08f2fb2" # The next step is to guess what new features we need to introduce to make the model better.
I'll make a lot of features and let the model choose the useful ones # + _cell_guid="ab19a761-2876-11c8-3fdb-9fbe703872e3" from itertools import product, chain def poly(X): areas = ['LotArea', 'TotalBsmtSF', 'GrLivArea', 'GarageArea', 'BsmtUnfSF'] # t = [s for s in X.axes[1].get_values() if s not in areas] t = chain(qu_list.axes[1].get_values(), ['OverallQual', 'OverallCond', 'ExterQual', 'ExterCond', 'BsmtCond', 'GarageQual', 'GarageCond', 'KitchenQual', 'HeatingQC', 'bad_heating', 'MasVnrType_Any', 'SaleCondition_PriceDown', 'Reconstruct', 'ReconstructAfterBuy', 'Build.eq.Buy']) for a, t in product(areas, t): x = X.loc[:, [a, t]].prod(1) x.name = a + '_' + t yield x XP = pd.concat(poly(X), axis=1) X = pd.concat((X, XP), axis=1) # + _cell_guid="cbdd91b4-6ea3-d1bd-3f13-8b92f9265048" X_train = X[:train.shape[0]] X_test = X[train.shape[0]:] # + _cell_guid="b4560804-7a54-6f1f-6781-1b173b55e5cd" # the model has become really big X_train.shape # + _cell_guid="b9faea74-013d-0813-9715-44d11b6665ff" y = np.log1p(train.SalePrice) # + _cell_guid="4be5d5c6-91c1-3f5f-c548-82014a276284" # This comes from iterative model improvement.
I was trying to understand why the model predicts a much better price for these two points x_plot = X_train.loc[X_train['SaleCondition_Partial']==1, 'GrLivArea'] y_plot = y[X_train['SaleCondition_Partial']==1] plt.scatter(x_plot, y_plot) # + _cell_guid="7b9a32b8-a62d-08c0-523c-d70b8dd23140" outliers_id = np.array([524, 1299]) outliers_id = outliers_id - 1 # id starts with 1, index starts with 0 X_train = X_train.drop(outliers_id) y = y.drop(outliers_id) # There are definitely more outliers # + _cell_guid="1d04b984-f245-56ed-7fd0-5aff233d4e2e" from sklearn.cross_validation import cross_val_score from sklearn.metrics import make_scorer, mean_squared_error # rmsle is unused below (and references price_scale, which is not defined here); the MSE scorer is used instead def rmsle(y, y_pred): return np.sqrt((( (np.log1p(y_pred*price_scale)- np.log1p(y*price_scale)) )**2).mean()) # scorer = make_scorer(rmsle, False) scorer = make_scorer(mean_squared_error, False) def rmse_cv(model, X, y): return (cross_val_score(model, X, y, scoring=scorer)).mean() # + [markdown] _cell_guid="66d40c8e-cf67-3810-965a-814f140e69c3" # ## Learning ## # The feature matrix is sparse, with n_features > n_samples, and the relationship is likely linear.
This is a classic case for the Lasso model # + _cell_guid="af3513a4-b20d-2ed7-705c-c9a57d9d042a" alphas = [1e-4, 5e-4, 1e-3, 5e-3] cv_lasso = [rmse_cv(Lasso(alpha = alpha, max_iter=50000), X_train, y) for alpha in alphas] pd.Series(cv_lasso, index = alphas).plot() # + [markdown] _cell_guid="c1e2a969-b462-9393-5be4-09d042567cef" # Choose the alpha with the best score # + _cell_guid="93eddc21-2573-06ac-7487-d3ad7180941b" model_lasso = Lasso(alpha=5e-4, max_iter=50000).fit(X_train, y) # + [markdown] _cell_guid="27a3f809-5300-758d-f379-0c9ccaf988b5" # ## Getting results ## # + _cell_guid="09ab72bf-82c6-f2b1-3818-668fef834c62" coef = pd.Series(model_lasso.coef_, index = X_train.columns).sort_values() imp_coef = pd.concat([coef.head(10), coef.tail(10)]) imp_coef.plot(kind = "barh") plt.title("Coefficients in the Model") # + [markdown] _cell_guid="9aefbc31-64a7-9461-8ba0-7a72841589c0" # Some features still look suspicious. Maybe we need to exclude them, like RoofMatl_ClyTile and others # + _cell_guid="f036759b-2fd7-80e5-ecd1-f3facb725451" # This is a good way to see how the model predicts the data p_pred = np.expm1(model_lasso.predict(X_train)) plt.scatter(p_pred, np.expm1(y)) plt.plot([min(p_pred),max(p_pred)], [min(p_pred),max(p_pred)], c="red") # + [markdown] _cell_guid="ca10ffb3-cc8d-b9a5-dd28-5cdeafd238aa" # Some points are far from the red line. Maybe they are outliers like the 524th and the 1299th # + _cell_guid="6772895c-191f-98dd-b99c-f574a68ebe0c" # save to file to make a submission p = np.expm1(model_lasso.predict(X_test)) solution = pd.DataFrame({"id":test.Id, "SalePrice":p}, columns=['id', 'SalePrice']) solution.to_csv("lasso_sol.csv", index = False) # + [markdown] _cell_guid="f05a8111-558e-3d8e-0b91-9febf6e45696" # ## Model improvement ## # With various model tunings I got 0.11720 on the public leaderboard. Possible improvements: find more outliers and exclude (or include) features.
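One way to automate the manual alpha search above is cross-validated selection with `LassoCV`; a sketch on synthetic data (in practice you would substitute the notebook's own `X_train` and `y`):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# toy regression problem: y depends on the first feature only
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

# same candidate alphas as the manual search above
model = LassoCV(alphas=[1e-4, 5e-4, 1e-3, 5e-3], max_iter=50000, cv=3).fit(X, y)
print(model.alpha_)  # the alpha with the best cross-validated score
```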
8 HOUSE PRICES/lasso-model-for-regression-problem.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ___ # <img style="float: right; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/0/08/Pareto_Efficient_Frontier_for_the_Markowitz_Portfolio_selection_problem..png" width="300px" height="100px" /> # # # # Final Project # # In the final project you will design an investment portfolio using real assets (or ETFs) of your choice. You will use all the tools covered throughout the course for this purpose, and afterwards you will evaluate how good your investment would have been had you invested in this portfolio. # # The project will be done in teams of at least two (2) and at most three (3) people. No more, no less. To this end, you must form the teams now, and designate one team member to send me an email with the names of the members. # ___ # ## 1. Asset selection # # The first step is to choose the assets you are going to use. At this point you have three options: # # 1. If you decide to work only with stocks of individual companies, you must choose and list at least 20 assets. # # 2. If you decide to work only with ETFs, you must choose and list at least 10 ETFs. # # 3. If you decide to work with a combination of stocks and ETFs, you must choose and list at least 5 ETFs and 10 stocks. # # In any of the three cases you must explain in detail the choice of each of the assets and/or ETFs you select. # ## 2. Portfolio selection # # Once the assets and/or ETFs have been chosen: # # 1. Using the historical prices of those assets up to 2014-12-31 (the start date is a parameter you must choose), you will design a portfolio in which you would (hypothetically) have invested throughout 2015. For this, you must assume a risk-aversion coefficient. # # 2.
Usando los precios históricos de dichos activos hasta el 2015-12-31, diseñarán un portafolio en el que (hipotéticamente) habrían invertido durante todo el 2016. # # 3. Usando los precios históricos de dichos activos hasta el 2016-12-31, diseñarán un portafolio en el que (hipotéticamente) habrían invertido durante todo el 2017. # # 4. Usando los precios históricos de dichos activos hasta el 2017-12-31, diseñarán un portafolio en el que (hipotéticamente) habrían invertido durante todo el 2018. # # 5. Usando los precios históricos de dichos activos hasta el 2018-12-31, diseñarán un portafolio en el que (hipotéticamente) habrían invertido durante todo el 2019. # # 2. Usando los precios históricos de dichos activos hasta el 2019-12-31, diseñarán un portafolio en el que (hipotéticamente) habrían invertido durante todo el 2020. # ## 3. Evaluación del rendimiento # # Usando los portafolios que encontraron en el punto anterior, deberán encontrar: # # 1. El rendimiento del portafolio 1 durante el 2015. # # 2. El rendimiento del portafolio 2 durante el 2016. # # 3. El rendimiento del portafolio 3 durante el 2017. # # 4. El rendimiento del portafolio 4 durante el 2018. # # 5. El rendimiento del portafolio 5 durante el 2019. # # 6. El rendimiento del portafolio 6 durante lo que va del 2020. # # 7. El rendimiento total durante el periodo de tenencia. # # 8. El rendimiento promedio anual durante el periodo de tenencia. # # 9. Si hubieran invertido 10.000 USD en estos portafolios a lo largo del tiempo y nunca hubieran retirado ni adicionado nada más, ¿Cuánto dinero tendrían invertido en este momento? # ## 4. Adicional # # Todo lo anterior es lo mínimo necesario del proyecto. Sin embargo, se considerarán ampliamente si ustedes miden el comportamiento de los portafolios con otras métricas que se les ocurran. 
# # Además, se considerará ampliamente también si tienen en cuenta los precios reales de los activos en el momento de la compra, y como modifica esto las ponderaciones que encontraron para sus portafolios, considerando que no pueden comprar fracciones de activos y/o ETFs. # ## 5. Presentación # # Todo lo anterior lo deben realizar en un notebook de jupyter considerando que de ahí mismo realizarán la presentación del proyecto. # <script> # $(document).ready(function(){ # $('div.prompt').hide(); # $('div.back-to-top').hide(); # $('nav#menubar').hide(); # $('.breadcrumb').hide(); # $('.hidden-print').hide(); # }); # </script> # # <footer id="attribution" style="float:right; color:#808080; background:#fff;"> # Created with Jupyter by <NAME>. # </footer>
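The return-aggregation arithmetic asked for in section 3 (total return, average annual return, and the final value of a 10,000 USD investment) can be sketched in plain Python. The yearly returns below are made-up placeholder numbers for illustration only, not real portfolio results:

```python
# Hypothetical yearly portfolio returns (illustrative numbers, not real data)
yearly_returns = {2015: 0.05, 2016: -0.02, 2017: 0.12, 2018: -0.04, 2019: 0.20, 2020: 0.03}

initial_investment = 10_000  # USD

# Total return over the holding period: compound the yearly returns
total_growth = 1.0
for r in yearly_returns.values():
    total_growth *= (1 + r)
total_return = total_growth - 1

# Average annual (geometric) return over the holding period
n_years = len(yearly_returns)
annualized_return = total_growth ** (1 / n_years) - 1

# Value of the initial investment at the end of the period
final_value = initial_investment * total_growth

print(f"Total return: {total_return:.2%}")
print(f"Average annual return: {annualized_return:.2%}")
print(f"Final value of 10,000 USD: {final_value:,.2f} USD")
```

Note that the average annual return is the geometric mean of the yearly growth factors, not the arithmetic mean of the returns, so losses and gains compound correctly.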
DescripcionProyecto.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # A toy example for modeling complex drug-protein interactions using RAF kinases and RAF inhibitors # Here, we provide the step-by-step construction code for a toy example to model complex drug-protein interactions using PySB with energy formulation through support for energy BioNetGen (Sekar JAP et al, 2016). This example describes RAF kinases as the drug target and RAF inhibitors as the drug (as developed in Kholodenko B., 2015). To run this code you'll need to have PySB with BNG installed; please follow the instructions at: http://pysb.org/ . # # ### Manual definition of the biochemical reaction system # # To start, we import all required PySB classes and instantiate the model: # + from pysb import Model, Monomer, Parameter, Expression, Rule, Observable, Initial, Annotation, EnergyPattern, ANY from pysb.bng import generate_equations from pysb.export import export from pysb.core import as_complex_pattern, ComplexPattern from sympy import exp, log Model(); model.name='toy_example_RAF_RAFi'; # - # Next, we define the two basic components of the model, RAF kinases (R) and RAF inhibitors (I): #define a monomer R that represents a RAF kinase with a binding site for RAF (r) and another for the drug (i) Monomer('R', ['r', 'i']); #define a monomer I that represents a RAF inhibitor with a binding site for RAF (r) Monomer('I',['r']); # We define the parameters for initializing abundance of components: #define the initial conditions for R and I Parameter('R_0',0.01); # uM Parameter('I_0',0.0); # uM Initial(R(r=None, i=None), R_0); Initial(I(r=None), I_0); # Then, we define the kinetic parameters and thermodynamic factors: # + #define reverse rate (kr), forward rate (kf) and distribution rate (phi) for RAF dimerization Parameter('kr_RR',10); #/s
Parameter('kf_RR',1.0); #/s/uM Parameter('phi_RR',1.0); #unitless #define reverse rate (kr), forward rate (kf) and distribution rate (phi) for drug binding to RAF Parameter('kr_RI',0.1); #/s Parameter('kf_RI',1.0); #/s/uM Parameter('phi_RI',1.0); #unitless #define thermodynamic factors f and g Parameter('f',1.0); #unitless Parameter('g',1.0); #unitless # - # We convert the kinetic parameters into corresponding energy parameters: # + #convert kinetic parameters into energies for RAF dimerization Expression('Gf_RR', log(kr_RR/kf_RR)); #unitless Expression('Ea0_RR',-phi_RR*log(kr_RR/kf_RR)-log(kf_RR)); #unitless #convert kinetic parameters into energies for drug binding to RAF Expression('Gf_RI', log(kr_RI/kf_RI)); #unitless Expression('Ea0_RI',-phi_RI*log(kr_RI/kf_RI)-log(kf_RI)); #unitless #convert thermodynamic factors into energies Expression('Gf_f',log(f)); #unitless Expression('Gf_g',log(g)); #unitless # - # We define the energy patterns to assign energies within biochemical species: # + # define energy in bond between R and R EnergyPattern('ep_RR',R(r=1)%R(r=1),Gf_RR); # define energy in bond between R and I EnergyPattern('ep_RI',R(i=1)%I(r=1),Gf_RI); # define additional energy in bond between RAF dimer and a single drug molecule Expression('Gf_RRI', Gf_f); EnergyPattern('ep_RRI',R(r=1,i=None)%R(r=1,i=2)%I(r=2), Gf_RRI); # define additional energy in bond between RAF dimer and two drug molecules Expression('Gf_IRRI', Gf_f + Gf_g); EnergyPattern('ep_IRRI',I(r=2)%R(r=1,i=2)%R(r=1,i=3)%I(r=3), Gf_IRRI); # - # We define observables that are used later to visualize results from model simulations: # + # define observable for total RAF and total drug Observable('Rtot_obs', R()); Observable('Itot_obs', I()); #define an observable that counts the amount of active RAF when RAF represents a BRAF V600E/K mutant #that is active independently of dimerization status (i.e.
both as a monomer or as a dimer) as long as it is not drug bound Observable('R_BRAFmut_active_obs', R(i=None)); #define an observable that counts the amount of active RAF when RAF here represents a wild type version of BRAF or CRAF #that is active only when dimerized and not drug bound Observable('R_RAFwt_active_obs', R(r=1,i=None)%R(r=1)); # define observable for drug unbound RAF monomer Observable('R_obs', R(i=None,r=None), match='species'); # define observable for RAF dimer unbound by drug Observable('RR_obs', R(r=1,i=None)%R(r=1,i=None), match='species'); # define observable for RAF dimer bound by single drug Observable('RRI_obs', R(r=1,i=None)%R(r=1,i=2)%I(r=2), match='species'); # define observable for RAF dimer bound by double drug Observable('IRRI_obs', I(r=2)%R(r=1,i=2)%R(r=1,i=3)%I(r=3), match='species'); # - # As the last step in the model construction, we define the reactions for RAF dimerization and drug binding: # + #define RAF dimerization reaction Rule('RR', R(r=None)+R(r=None) | R(r=1)%R(r=1) , phi_RR, Ea0_RR, energy=True); #define drug binding to RAF reaction Rule('RI', R(i=None)+I(r=None) | R(i=1)%I(r=1) , phi_RI, Ea0_RI, energy=True); # - # ### Automatic generation of the kinetic model # # We generate the kinetic model by passing the information built via PySB to BNG, parse the returned reaction network and list the properties of the resulting kinetic model: # + from util_display import display_model_info from pysb.export.sbml import SbmlExporter as smblexport # generate the model equations generate_equations(model) #display model information display_model_info(model) #save the generated model in PySB, BNG and SBML formats generated_model_code = export(model, 'pysb_flat') with open(model.name+'.py', 'wt') as f: f.write(generated_model_code); generated_model_code = export(model, 'bngl') with open(model.name+'.bngl', 'wt') as f: f.write(generated_model_code); generated_model_code = export(model, 'sbml') with open(model.name+'.sbml', 'wt') as f:
f.write(generated_model_code) # - # Now, we visualize the species and the forward and backward rates and dissociation constants in the model to check that thermodynamic factors indeed control cooperative reaction rates: # + from util_display import format_species_reactions, display_table import pandas as pd # prevent pandas from truncating long LaTeX expressions when rendering. pd.options.display.max_colwidth=None #obtain dataframe with math latex expression visualization for species and reactions (speciesdisp, reactionsdisp)=format_species_reactions(model); display_table(speciesdisp, caption='SPECIES'); display_table(reactionsdisp, caption='REACTIONS'); # - # ### Model simulation of drug-dose response at steady state # # We use the generated model to simulate the response of RAF kinases to three classes of RAF inhibitors: 1st generation (e.g. Vemurafenib, Dabrafenib and Encorafenib), paradox breakers (e.g. PLX8349) and panRAF (e.g. LY3009120, AZ628) inhibitors. We compare the results to a hypothetical RAF inhibitor that has no cooperative effect with RAF dimerization (f and g thermodynamic parameters are unity so they do not impose any extra energy). We observe the effect of the drugs in situations with low and high propensity for RAF dimerization (controlled by setting the Kd values of the RAF dimerization reactions) to study the effect that the drugs have in the absence or presence of RAF dimerization, for example as induced by a Ras-GTP signal that induces RAF dimerization. This analysis is run to steady state, meaning that the drug-dose response represents the inhibitory response achieved when all reactions are allowed to run until they equilibrate.
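Before running the simulations, it is worth sanity-checking the energy bookkeeping defined earlier. The `Gf`/`Ea0` expressions above can be inverted in plain Python (separate from the PySB model itself) to recover the original forward and reverse rates; the helper names below are illustrative, but the formulas mirror the energy-BNG convention used by the `Expression` definitions:

```python
import math

def rates_to_energies(kf, kr, phi):
    # Mirror of the Expression definitions above: bond energy Gf and
    # baseline activation energy Ea0 (all quantities unitless)
    Gf = math.log(kr / kf)
    Ea0 = -phi * Gf - math.log(kf)
    return Gf, Ea0

def energies_to_rates(Gf, Ea0, phi):
    # Recover the microscopic rates from the energies:
    # kf = exp(-(Ea0 + phi*Gf)), kr = kf * exp(Gf)
    kf = math.exp(-(Ea0 + phi * Gf))
    kr = math.exp(-Ea0 + (1 - phi) * Gf)
    return kf, kr

# Round-trip check with the RAF dimerization parameters used above
Gf, Ea0 = rates_to_energies(1.0, 10.0, 1.0)
kf2, kr2 = energies_to_rates(Gf, Ea0, 1.0)
print(kf2, kr2)  # recovers kf and kr up to floating-point error
```

This confirms that the energy parameterization carries exactly the same information as the (kf, kr, phi) triple, which is why the Rules above can be written with `energy=True`.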
First, we set up the model with the right parameter values and with the range of RAF inhibitor concentrations for the dose response: # + #import ODE simulator from pysb.simulator import ScipyOdeSimulator #import various utility packages import numpy as np #set the dilution range for the RAF inhibitor RAFi_dil=np.logspace(-4, 1, 20, base=10.0); #uM #set the values of f and g to model RAF inhibitors with different complex drug-protein interactions #independent: f=1, g=1 , 1st generation: f= 0.001, g=1000; paradox breaker: f= 1.0, g=1000; panRAF: f= 0.001, g=1 ff=[1.0, 0.001, 1.0, 0.001]; gg=[1.0, 1000, 1000, 1]; fgtitle=['non-cooperative', '1st_gen', 'paradox_breaker', 'panRAF']; #set the forward rate (kf) values used to vary baseline RAF dimerization, simulating Ras-GTP signaling that induces dimerization RR_kfs_exp=np.linspace(-1, 5, 13); RR_kfs=10**RR_kfs_exp; #set up the ODE simulator for the model sim = ScipyOdeSimulator(model); # - # Then, we perform multiple simulations of the system at each defined combination of thermodynamic parameters (f,g), RAF inhibitor concentration (RAFi) and RAF dimerization baseline (RR_Kd): # + # %matplotlib notebook import matplotlib.pyplot as plt import math from tqdm.notebook import tqdm, trange from util_simulation import equilibrate #create a bar to keep track of simulation progress p_bar_sim = tqdm(desc='Simulation progress', total=len(ff)*len(RR_kfs)*len(RAFi_dil)) #define observables to plot plt_obs=['R_BRAFmut_active_obs', 'R_RAFwt_active_obs', 'R_obs', 'RR_obs', 'RRI_obs', 'IRRI_obs']; plot_obs_names=['RAF_mut', 'RAF_wt', 'R', 'RR', 'RRI', 'IRRI']; #define figure fig, ax = plt.subplots(len(plt_obs),len(gg), sharey=True); fig.suptitle("Simulations of RAF inhibitor dose-response effect on RAF activity"); #define plot colors cmap=plt.get_cmap('copper'); col=cmap(np.linspace(0.0, 1.0, len(RR_kfs_exp)))[::-1]; #simulate the different parameter combinations ss_v = np.empty([len(RAFi_dil), len(plt_obs)]); for i in range (len(ff)): for j in
range(len(RR_kfs)): for k in range(len(RAFi_dil)): #run simulation with modified parameters param_values={'f': ff[i] ,'g': gg[i], 'kf_RR': RR_kfs[j], 'I_0': RAFi_dil[k]}; #run to steady state to ensure the model equilibrates res=equilibrate(sim, param_values=param_values) #update progress p_bar_sim.update(1); #extract end of simulation for each observable from the dataframe of simulation results ss_v[k,:]=res.dataframe[plt_obs].iloc[-1]; #plot the results for a given RR_KD and f,g combination for z in range(len(plt_obs)): #plot simulation h=ax[z,i].plot(RAFi_dil, ss_v[:,z], color = col[j,:]); ax[z,i].set_xscale('log'); #set axis names if (i==0): ax[z,i].set_ylabel(plot_obs_names[z]); if (z==0): ax[z,i].title.set_text(fgtitle[i]); if (z==(len(plt_obs)-1)): ax[z,i].set_xlabel('RAFi (uM)'); else: ax[z,i].set_xticklabels([]); #add legend fig.legend(ax, labels= list(map(str, math.log(model.parameters['kr_RR'].value,10)-RR_kfs_exp)) , bbox_to_anchor=(1.04,1), loc="upper right", borderaxespad=0.1, title="Kd_RR (log10)"); # - # The resulting simulations show the expected behavior of these four different RAF inhibitor classes in both situations, RAF representing a BRAF V600E/K mutant or wild-type BRAF/CRAF: # # * **Non-cooperative RAF inhibitor** - In the first column, the independent RAF inhibitor does not change its affinity to RAF depending on dimerization status, nor does it influence dimerization itself. As such, BRAF mutant inhibition is effective (first row) and independent of RAF dimerization from an upstream signal (RR_kD). Similarly, the effect on a wild-type RAF context is a straightforward dose-dependent inhibition of RAF signaling that depends on the baseline RAF dimerization due to upstream signal (RR_kD) (second row). Note that there is no difference in the potency of the drug between the oncogenic BRAF mutant and RAF wild-type signal (compare first and second panel).
# # # * **1st generation RAF inhibitor** - In the second column, the 1st generation RAF inhibitor efficiently inhibits RAF signaling from monomers (light colored line), but RAF dimeric signal generated by either upstream RAF dimerization (RR_Kd) or by the propensity of the drug to cause RAF dimerization (f parameter less than one) creates a resistance mechanism (seen as the increased amount of active RAFs in the first row when comparing light and dark colored lines). This resistance mechanism is due to the low affinity of the drug for the second RAF protomer in a RAF dimer (g parameter more than one), as seen by the rise of single drug-bound RAF dimers in the 5th row. In addition, wild-type RAF signaling is potentiated by the drug, as can be seen by the increase in active RAF signaling (2nd row) that is drug-dependent at otherwise low levels of upstream RAF dimerization. This effect, which can induce toxicity by driving proliferation of other malignant cells through drug-induced MAPK activation, is also known as paradoxical activation. # # # * **Paradox-breaker RAF inhibitor** - In the third column, a paradox breaker RAF inhibitor is seen reducing the extent of resistance due to RAF dimerization (1st row) in the BRAF-mutant case and of RAF dimerization potentiation by paradoxical activation (2nd row), since the drug binding does not induce dimerization (f is equal to one) and thus does not synergize with the upstream RAF dimerization signal. This can be seen as the reduced amount of single drug-bound RAF dimers in the 5th row. # # # * **panRAF inhibitor** - In the last column, a panRAF inhibitor is seen eventually binding both protomers in RAF dimers (g is equal to one), thus completely ablating any resistance mechanism caused by RAF dimerization. This can be seen as the eventual reduction in the amount of single drug-bound RAF dimers in the 5th row and dose-dependent rise of double drug-bound RAF dimers in the 6th row.
Note that in this case, panRAF inhibitors have the same potency on the RAF-mutant and wild-type signal again (compare potencies on 1st and 2nd rows). # # # Thus, the model automatically generated using energy-based rule-based modelling properly describes and simulates the complex drug-protein interactions that are supposed to govern drug efficacy and toxicity in this simplified scenario for RAF kinases and RAF inhibitors. # # # ### Model simulation of temporal dynamics during drug treatment # # The previous simulations analyzed the drug-dose response of RAF to RAF inhibitors applied for long-term drug treatment at fixed conditions (e.g. fixed level of RAF dimerization baseline). However, drugs operate on proteins that often experience temporal perturbations, as for example from the activation of an upstream signal. In this second example, we simulate the behaviour of RAF inhibitors having different energetic properties during temporal dynamic perturbations in RAF dimerization. # # First, we generate the dynamic profile for the addition of a RAF inhibitor and for a subsequent series of square pulses of increased RAF dimerization (through the forward rate kf_RR): # + #define the train of square pulses that define temporal dynamic control over RAF dimerization (Kf_RR) ncycle=6; ton=40; toff=50; toset=10; Kf_RR_min=10**-2; Kf_RR_max=10**2.5; #set the concentration and time of addition of the RAFi inhibitor RAFi_conc_init=0.0; RAFi_conc_add=10.0; RAFi_time_add=20; #generate list of events that dynamically change RAFi t_events=[0.0, RAFi_time_add]; events=['I_0', 'I_0']; events_value=[RAFi_conc_init, RAFi_conc_add]; #generate list of events that dynamically change Kf_RR for i in range(ncycle): t_events= t_events + [t_events[-1] + toff + toset] + [t_events[-1] + ton + toff + toset] ; events= events + ['kf_RR'] + ['kf_RR']; events_value= events_value + [Kf_RR_max] + [Kf_RR_min]; toset=0; #generate dynamic signals for RAFi and Kf_RR t_Kf_RR_dyn=[0.0];
Kf_RR_dyn=[Kf_RR_min]; t_RAFi=[0.0]; RAFi_dyn=[RAFi_conc_init]; for i in range(len(events)): if (events[i]=='kf_RR'): t_Kf_RR_dyn= t_Kf_RR_dyn + [t_events[i]] + [t_events[i]]; Kf_RR_dyn= Kf_RR_dyn + [Kf_RR_dyn[-1]] + [events_value[i]]; elif (events[i]=='I_0'): t_RAFi= t_RAFi + [t_events[i]] + [t_events[i]]; RAFi_dyn= RAFi_dyn + [RAFi_dyn[-1]] + [events_value[i]]; t_RAFi= t_RAFi + [t_Kf_RR_dyn[-1]]; RAFi_dyn = RAFi_dyn + [RAFi_dyn[-1]]; # - # Next, we define the energetic properties of two 1st generation RAF inhibitors which have the same kinetic rates but in which the cooperativity with RAF dimerization is assigned either to the forward (RR_phi=1.0) or backward rate (RR_phi=0.0). This will change how long-lived RAF dimers are once induced by the dynamic pulses of increased RAF dimerization. # + #define energy parameters for the RAF inhibitors to be simulated ff=[0.001, 0.001]; gg=[1000, 1000]; RI_phi=[1.0, 1.0]; RR_phi=[1.0, 0.0]; lgn_dyn=['RAFi','Kd_RR', 'phi_RI=1', 'phi_RR=0']; #create figure for plotting fig, ax = plt.subplots(2+len(plt_obs),1); #plot dynamic RAFi concentration ax[0].plot(t_RAFi, RAFi_dyn, color='k'); ax[0].set_ylabel('RAFi (I)'); ax[0].set_xticklabels([]); #plot dynamic Kd_RR rate ax[1].plot(t_Kf_RR_dyn, Kf_RR_dyn, color='r'); ax[1].set_ylabel('kf_RR'); ax[1].set_xticklabels([]); #for each RAF inhibitor to be simulated for i in range (len(ff)): #set up kinetic parameters and initial conditions param_values={'f': ff[i] ,'g': gg[i], 'phi_RI':RI_phi[i], 'phi_RR':RR_phi[i], 'kf_RR': Kf_RR_dyn[0], 'I_0': RAFi_dyn[0]}; #run it to steady state before running drug addition and Kd_RR pulse train res=equilibrate(sim, param_values=param_values); #run consecutive simulations updating conditions according to events #(drug addition, changes in kf_RR) res_obs=res.dataframe[plt_obs].to_records(index=False)[-1]; t=[0.0]; for j in range(len(events)-1): #save the state of the previous simulation to restart at same point initials_pre
= res.dataframe.iloc[-1, :len(model.species)].copy(); #create the tspan for simulation from this event to the next tspan= np.linspace(t_events[j], t_events[j+1]); #update param values with the event value param_values[events[j]]=events_value[j]; #if the drug changed, set it in the current species states if (events[j]=="I_0"): #get index of inhibitor species i_I_0=model.get_species_index(as_complex_pattern(model.monomers.I(r=None))); initials_pre[i_I_0]=events_value[j]; #run the simulation for the necessary time (until next event) res=sim.run(tspan=tspan, param_values=param_values, initials=np.array(initials_pre)); #append the observables res_obs=np.append(res_obs[plt_obs],res.dataframe[plt_obs].to_records(index=False)); #append simulation time t=np.append(t,tspan); #plot the results for a given setting of parameters for z in range(len(plt_obs)): #set same yaxes if (z>0): ax[z].get_shared_y_axes().join(ax[2], ax[z+2]); #plot simulation h=ax[z+2].plot(t, res_obs[plt_obs[z]]); #set axis names ax[z+2].set_ylabel(plot_obs_names[z]); if (z==(len(plt_obs)-1)): ax[z+2].set_xlabel('Time (s)'); else: ax[z+2].set_xticklabels([]); #add legend fig.legend(ax, labels=lgn_dyn , loc="upper right", borderaxespad=0.1); # - # The simulation shows that distribution rates, which define how changes in cooperativity affect forward or backward rates, can greatly influence drug efficacy during dynamic perturbations. In this case, two 1st generation RAF inhibitors with different distribution rates, but having the exact same forward and backward rates, behave very differently when a dynamic perturbation in RAF dimerization is applied to the system. By changing the distribution rate from RR_phi=0 to RR_phi=1, the cooperativity imposed between drug binding and RAF dimerization shifts from controlling the rate of RAF disassembly to controlling the rate of RAF assembly.
In the case of RR_phi=0, the drug binding to RAF dimers slows their disassembly after the dimerization signal disappears, thus creating a continuous RAF signal that can cause resistance (RAF_mut, second row) or toxicity (RAF_wt). Instead, in the case of RR_phi=1, the drug binding to RAF dimers increases the speed of their assembly when the dimerization signal appears. This causes a slightly faster induction of RAF dimers, but removes the continuous activation seen in the previous case. The behaviour of drugs with complex interactions with targeted proteins is thus influenced not just by their forward and backward rates, but also by the way in which cooperativity affects assembly/disassembly rates of multiprotein-drug complexes.
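The role of the distribution rate can also be seen with simple arithmetic, outside the ODE model: a cooperativity factor f contributes an energy log(f) that is split between the forward and reverse rates according to phi, while the equilibrium constant is scaled by f either way. This is a plain-Python sketch with illustrative helper names, following the same energy convention as the model above:

```python
import math

def cooperative_rates(kf, kr, f, phi):
    # Apply a cooperativity factor f to a binding step.
    # phi controls whether the extra energy log(f) speeds up assembly (phi=1)
    # or slows down disassembly (phi=0); the equilibrium Kd is scaled by f either way.
    Gf_f = math.log(f)
    kf_new = kf * math.exp(-phi * Gf_f)
    kr_new = kr * math.exp((1 - phi) * Gf_f)
    return kf_new, kr_new

kf, kr, f = 1.0, 10.0, 0.001  # 1st-generation-like cooperativity (f << 1)

kf1, kr1 = cooperative_rates(kf, kr, f, phi=1.0)  # assembly 1000x faster, disassembly unchanged
kf0, kr0 = cooperative_rates(kf, kr, f, phi=0.0)  # assembly unchanged, disassembly 1000x slower

# Both parameterizations give the same equilibrium constant Kd' = f * Kd
print(kr1 / kf1, kr0 / kf0)  # both close to 0.01
```

The two parameterizations are indistinguishable at steady state, which is exactly why the dose-response simulations look the same but the temporal simulations above diverge.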
toy_example_RAF_RAFi.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + from pyspark import SparkContext, SparkConf, SQLContext conf = SparkConf() \ .setAppName("test_JDBC2") \ .setMaster("spark://spark-master:7077") \ .set("spark.jars", "/opt/workspace/jars/mysql-connector-java-8.0.21.jar") sc = SparkContext(conf=conf) sqlContext = SQLContext(sc) spark = sqlContext.sparkSession # - df = spark.read.format("jdbc") \ .option("driver", "com.mysql.cj.jdbc.Driver") \ .option("url", "jdbc:mysql://mysql2:3306/employee?serverTimezone=UTC&useSSL=false") \ .option("dbtable", "dept_csv") \ .option("user", "root") \ .option("password", "<PASSWORD>") \ .load() df.show() # + from pyspark.sql.types import StructType, StructField, StringType, IntegerType data = [("HISTORY", "SUWON", 50)] schema = StructType([ StructField("dname", StringType(),True), StructField("loc", StringType(),True), StructField("deptno", IntegerType(),True) ]) df_2 = spark.createDataFrame(data=data, schema=schema) # df_2.printSchema() df_2.show(truncate=False) # - df_2.write.format('jdbc') \ .mode('append') \ .option("driver", "com.mysql.cj.jdbc.Driver") \ .option("url", "jdbc:mysql://mysql2:3306/employee?serverTimezone=UTC&useSSL=false") \ .option("dbtable", "dept_csv") \ .option("user", "root") \ .option("password", "<PASSWORD>") \ .save() sc.stop() spark.stop()
hadoop_spark/scripts/jupyter_notebook/test_mysql_conn_v1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # ---
# ## Load Train and Test Data for 30 Day Mortality

# Import libraries
import numpy as np
import pandas as pd

data_train_30d = pd.read_csv('data_pp_train_30d.csv')
data_test_30d = pd.read_csv('data_pp_test_30d.csv')

# +
# Split features and labels; both outcome columns are dropped from the feature matrices
X_train = data_train_30d.drop(['one_year', 'thirty_days'], axis=1)
y_train = data_train_30d['thirty_days']
X_test = data_test_30d.drop(['one_year', 'thirty_days'], axis=1)
y_test = data_test_30d['thirty_days']

print('Loaded 30 Days Train Sample:')
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print('Loaded 30 Days Test Sample:')
print('X_test shape:', X_test.shape)
print(X_test.shape[0], 'test samples')
Process Books/LoadData30D.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Importance of valid sampling # By <NAME> Nitishinskaya and <NAME> # # Notebook released under the Creative Commons Attribution 4.0 License. # # --- # # In order to evaluate a population based on sample observations, the sample must be unbiased. Otherwise, it is not representative of the population, and conclusions drawn from it will be invalid. For example, we always take into account the number of samples we have, and expect to have more accurate results if we have more observations. Here we will discuss four other types of sampling bias. # # Data-mining bias # # Data mining refers to testing a set of data for the presence of different patterns, and can lead to bias if used excessively. Because our analyses are always probabilistic, we can always try enough things that one will appear to work. For instance, if we test 100 different variables for correlation with a dataset using a 5% significance level, we expect to find 5 that are significantly correlated with the data just by random chance. Below we test this for random variables and a random dataset. The result will be different each time, so try rerunning the cell! # + import numpy as np from scipy.stats import pearsonr # Generate completely random numbers randos = [np.random.rand(100) for i in range(100)] y = np.random.rand(100) # Compute correlation coefficients (Pearson r) and record their p-values (2nd value returned by pearsonr) ps = [pearsonr(x,y)[1] for x in randos] # Print the p-values of the significant correlations, i.e. those that are less than .05 print [p for p in ps if p < .05] # - # Above we data-mined by hand.
There is also intergenerational data mining, which uses previous results about the same dataset you are investigating (or a highly related one) and can also lead to bias. # # The problem here is that there is no reason to believe that the pattern we found will continue; for instance, if we continue to generate random numbers above, they will not continue to be correlated. [Meaningless correlations](http://tylervigen.com/view_correlation?id=866) can arise between datasets by coincidence. This is similar to the problem of overfitting, where a model is contorted to fit historical data perfectly but then fails out of sample. It is important to perform such an out-of-sample test (that is, using data not overlapping with that which was examined when creating the model) in order to check for data-mining bias. # # Sample selection bias # # Bias resulting from data availability is called sample selection bias. Sometimes it is impossible to avoid, but awareness of the phenomenon can help avoid incorrect conclusions. Survivorship bias occurs when securities dropped from databases are not taken into account. This causes a bias because future analyses then do not take into account, for example, stocks of businesses that went bankrupt. However, businesses whose stock you buy now may very well go bankrupt in the future, so it is important to incorporate the behavior of such stocks into your model. # # Look-ahead bias # # Look-ahead bias occurs when attempting to analyze from the perspective of a day in the past and using information that was not available on that day. For instance, fundamentals data may not be reported in real time. Models subject to look-ahead bias cannot be used in practice since they would require information from the future. # # Time-period bias # # The choice of sample period affects results, and a model or analysis may not generalize to future periods. This is known as time-period bias.
If we use only a short time period, we risk putting a lot of weight on a local phenomenon. However, if we use a long time period, we may include data from a prior regime that is no longer relevant.
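The multiple-testing arithmetic from the data-mining section (100 tests at a 5% level should flag about 5 spurious hits) can be checked with a stand-alone simulation: under the null hypothesis, p-values are uniformly distributed on (0, 1). This is a Python 3 sketch using only the standard library, separate from the scipy-based cell above:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

alpha = 0.05    # significance level
n_tests = 100   # variables tested per batch
n_trials = 1000 # number of batches to average over

# Under the null, each test's p-value is Uniform(0, 1), so a "significant"
# result at level alpha occurs with probability alpha
false_positives = [
    sum(1 for _ in range(n_tests) if random.random() < alpha)
    for _ in range(n_trials)
]

avg = sum(false_positives) / n_trials
print(avg)  # close to n_tests * alpha = 5
```

The average number of false positives per batch converges to n_tests * alpha, which is exactly the "5 out of 100" figure quoted above.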
lectures/drafts/Sampling bias.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import json import glob import os import csv # ### Humor detection dataset # + data = pd.DataFrame() for file in glob.glob('../data/raw/humor-detection-corpus/Raw-data/*.json'): df = pd.read_json(file,dtype=str) data = pd.concat([data,df],axis=0) # - labels = pd.read_csv('../data/raw/humor-detection-corpus/HumorCorpusFinal.txt',header=None,sep=' ',dtype=str) labels.columns = ['id','label'] data = pd.merge(data,labels,how='inner') try: os.makedirs('../data/processed/humor-detection-corpus/') except: pass data.head() data[['text','label']].to_csv(os.path.join('../data/processed/humor-detection-corpus/','data.txt'),\ header=None,sep='\t',index=False) data.shape # ### Sarcasm dataset # + data = pd.read_csv('../data/raw/SarcasmDetection_CodeMixed/Dataset/Sarcasm_tweets.txt',sep='\t',\ dtype=str,header=None) ids = [] texts = [] for i in range(data.shape[0]): if i%2 == 0: ids.append(data.values[i][0]) else: texts.append(data.values[i][0]) assert len(ids) == len(texts) data = pd.DataFrame() data['id'] = ids data['text'] = texts # + labels = pd.read_csv('../data/raw/SarcasmDetection_CodeMixed/Dataset/Sarcasm_tweet_truth.txt',sep='\t',\ dtype=str,header=None) ids = [] label = [] for i in range(labels.shape[0]): if i%2 == 0: ids.append(labels.values[i][0]) else: label.append(labels.values[i][0]) assert len(ids) == len(label) labels = pd.DataFrame() labels['id'] = ids labels['label'] = label # - data = pd.merge(data,labels,how='inner') data.head() data.shape try: os.makedirs('../data/processed/SarcasmDetection_CodeMixed/') except: pass data[['text','label']].to_csv(os.path.join('../data/processed/SarcasmDetection_CodeMixed/','data.txt'),\ header=None,sep='\t',index=False) # ### Stance detection dataset # + data =
pd.read_csv('../data/raw/StanceDetection_CodeMixed/Dataset/Notebandi_tweets.txt',sep='\t',\ dtype=str,header=None) ids = [] texts = [] for i in range(data.shape[0]): if i%2 == 0: ids.append(data.values[i][0]) else: texts.append(data.values[i][0]) assert len(ids) == len(texts) data = pd.DataFrame() data['id'] = ids data['text'] = texts # + labels = pd.read_csv('../data/raw/StanceDetection_CodeMixed/Dataset/Notebandi_tweets_stance.txt',sep='\t',\ dtype=str,header=None) ids = [] label = [] for i in range(labels.shape[0]): if i%2 == 0: ids.append(labels.values[i][0]) else: label.append(labels.values[i][0]) assert len(ids) == len(label) labels = pd.DataFrame() labels['id'] = ids labels['label'] = label # - data = pd.merge(data,labels,how='inner') data.head() data.shape try: os.makedirs('../data/processed/StanceDetection_CodeMixed/') except: pass data[['text','label']].to_csv(os.path.join('../data/processed/StanceDetection_CodeMixed/','data.txt'),\ header=None,sep='\t',index=False) # ### Aggression dataset # + train_data = pd.DataFrame() for file in glob.glob('../data/raw/trac1-dataset/*/*_train.csv'): df = pd.read_csv(file,header=None,quoting=csv.QUOTE_ALL) df = df.dropna(axis=1,how='all') df.columns = ['id','text','label'] train_data = pd.concat([train_data,df],axis=0) #train_data.columns = ['id','text','label'] # - train_data.tail() # + val_data = pd.DataFrame() for file in glob.glob('../data/raw/trac1-dataset/*/*_dev.csv'): df = pd.read_csv(file,header=None,quoting=csv.QUOTE_ALL) df = df.dropna(axis=1,how='all') df.columns = ['id','text','label'] val_data = pd.concat([val_data,df],axis=0) #val_data.columns = ['id','text','label'] # - train_data = train_data.fillna('') val_data = val_data.fillna('') train_data = train_data[train_data.text != ''] val_data = val_data[val_data.text != ''] train_data.shape, val_data.shape try: os.makedirs('../data/processed/Aggression_dataset/') except: pass # + 
train_data[['text','label']].to_csv(os.path.join('../data/processed/Aggression_dataset/','train.txt'),\ header=None,sep='\t',index=False) val_data[['text','label']].to_csv(os.path.join('../data/processed/Aggression_dataset/','val.txt'),\ header=None,sep='\t',index=False) # -
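A side note on the `try: os.makedirs(...) except: pass` cells repeated above: a bare `except` also hides unrelated failures (permission errors, a regular file blocking the path). A minimal sketch of the equivalent idempotent call, using a temporary directory as a stand-in for the real `../data/processed/...` paths:

```python
import os
import tempfile

# os.makedirs with exist_ok=True (Python 3.2+) creates the directory tree
# if it is missing and silently succeeds if it already exists, while still
# raising on genuine errors — unlike a bare try/except: pass.
base = tempfile.mkdtemp()
target = os.path.join(base, 'data', 'processed', 'demo')
os.makedirs(target, exist_ok=True)  # first call creates the tree
os.makedirs(target, exist_ok=True)  # second call is a no-op, no exception
print(os.path.isdir(target))  # → True
```

The same one-liner could replace each of the `try`/`except` blocks in this notebook without changing their observable behavior on the happy path.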
notebooks/data_preparation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=true editable=true # # Finding Lanes on the Road # # The objective of this project is to develop a software pipeline to identify lane lines on the road. The software pipeline is first tested on a series of images and then applied over a video stream. # + [markdown] deletable=true editable=true # # Pipeline # # ## Importing Required Packages/Dependencies # + deletable=true editable=true import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 # %matplotlib inline # + [markdown] deletable=true editable=true # ## Helper Functions # # + deletable=true editable=true import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 
3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). 
If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ imshape = img.shape leftMinY = 0 leftMaxY = imshape[0] leftMinX = 0 leftMaxX = 0 rightMinY = 0 rightMaxY = imshape[0] rightMinX = 0 rightMaxX = 0 m1 = 1 m2 = 1 for line in lines: for x1,y1,x2,y2 in line: if x2 == x1: continue # vertical segment: avoid division by zero slope = (y2 - y1)/(x2 - x1) if slope < 0: m1 = slope if(leftMinY < y1): leftMinY = y1 leftMinX = x1 if(leftMaxY > y2): leftMaxY = y2 leftMaxX = x2 elif slope > 0: m2 = slope if(rightMinY < y2): rightMinY = y2 rightMinX = x2 if(rightMaxY > y1): rightMaxY = y1 rightMaxX = x1 leftMinX = np.uint16(leftMinX - ((leftMinY - imshape[0])/m1)) #x2 - (y2 - y1) / m1 rightMinX = np.uint16(rightMinX - ((rightMinY - imshape[0] )/m2)) cv2.line(img, (leftMinX, imshape[0]), (leftMaxX, leftMaxY), color, thickness) cv2.line(img, (rightMinX, imshape[0]), (rightMaxX, rightMaxY), color, thickness) #for line in lines: #for x1,y1,x2,y2 in line: #cv2.line(img, (x1, y1), (x2, y2), [255,255,0], thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines,thickness=6) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., λ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + λ NOTE: initial_img and img must be the same shape!
""" return cv2.addWeighted(initial_img, α, img, β, λ) def laneDetection(inputImage, boundaries, preProcessingFlag=True, intelligentFlag = True, extrapolateFlag= True, showMask=False,kernel_size = 5, min_line_len = 20,max_line_gap = 6 , low_threshold = 95,high_threshold = 120,rho = 1,theta = np.pi/180,threshold = 10, color=[255,0,0],thickness=2,alpha=5,beta=10): #img = mpimg.imread(inputImage) #img = (mpimg.imread(test)*255).astype('uint8') img = inputImage if preProcessingFlag==True: img = preProcess(img, boundaries) gray = grayscale(img) # convert image from RGB to gray grayG = gaussian_blur(gray, kernel_size) #Gaussian filter is applied to remove the scharp edges cannyImg = canny(grayG, low_threshold, high_threshold) # apply canny edge detection algorithm # mask detection if not intelligentFlag: # add simple mask - handmade imshape = cannyImg.shape vertices = np.array([[(0,imshape[0]),(imshape[1]/2+3*imshape[0]/70, imshape[0]/3+imshape[0]/4), (imshape[1]/2+imshape[0]/70, imshape[0]/3+imshape[0]/4), (imshape[1],imshape[0])]], dtype=np.int32) masked = region_of_interest(cannyImg,vertices) else: # find the horizon line - adaptive masking # better way is finding slope and intersection between two lines masked = cannyImg houghImg, successfulFlag = hough_lines(masked, rho, theta, threshold, min_line_len, max_line_gap,color,thickness,intelligentFlag, extrapolateFlag,showMask,alpha,beta) if successfulFlag==True: houghRGB = np.dstack((houghImg*(color[0]//255), houghImg*(color[1]//255), houghImg*(color[2]//255))) # *(color[1]/255) result = weighted_img(inputImage, houghRGB, α=1., β=0.8, λ=0.) 
return result else: return inputImage # + [markdown] deletable=true editable=true # ## Reading a sample Image # + deletable=true editable=true image = mpimg.imread('test_images/solidWhiteRight.jpg') print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # + [markdown] deletable=true editable=true # ## Testing on Images # + deletable=true editable=true import os imgFolder = "test_images" saveFolder = "testResults/images" imgs = os.listdir(imgFolder) for imageName in imgs: inputImagePath = os.path.join(imgFolder, imageName) image = cv2.imread(inputImagePath) gray = grayscale(image) kernel_size = 5 blur_gray = gaussian_blur(gray, kernel_size) high_threshold = 150 low_threshold = high_threshold / 3 edges = canny(blur_gray, low_threshold, high_threshold) imshape = image.shape vertices = np.array([[(0, imshape[0]),(imshape[1],imshape[0]),(imshape[1]/2, imshape[0]/2 + 50) ]], dtype=np.int32) masked_edges = region_of_interest(edges, vertices) rho = 2 theta = np.pi/180 threshold = 25 min_line_len = 40 max_line_gap = 70 line_image = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap) color_edges = np.dstack((edges, edges, edges)) result = weighted_img(color_edges, line_image) outputImagePath = os.path.join(saveFolder, imageName) cv2.imwrite(outputImagePath, result) plt.imshow(result) # + [markdown] deletable=true editable=true # ## Testing on Videos # + deletable=true editable=true from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): gray = grayscale(image) kernel_size = 5 blur_gray = gaussian_blur(gray, kernel_size) high_threshold = 150 low_threshold = high_threshold / 3 edges = canny(blur_gray, low_threshold, high_threshold) imshape = image.shape vertices = np.array([[(0, imshape[0]),(imshape[1],imshape[0]),(imshape[1]/2, imshape[1]/3 - 1) ]], dtype=np.int32) masked_edges = region_of_interest(edges, vertices) rho = 2 theta = np.pi/180 threshold = 25 min_line_len = 40 
max_line_gap = 70 line_image = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap) #color_edges = np.dstack((masked_edges, masked_edges, masked_edges)) result = weighted_img(image, line_image) return result whiteLaneVideo = VideoFileClip("test_videos/solidWhiteRight.mp4") yellowLaneVideo = VideoFileClip("test_videos/solidYellowLeft.mp4") whiteLaneResult = 'testResults/videos/solidWhiteRight.mp4' yellowLaneResult = 'testResults/videos/solidYellowLeft.mp4' whiteLaneProcessed = whiteLaneVideo.fl_image(process_image) yellowLaneProcessed = yellowLaneVideo.fl_image(process_image) # %time whiteLaneProcessed.write_videofile(whiteLaneResult, audio=False) # %time yellowLaneProcessed.write_videofile(yellowLaneResult, audio=False) # + [markdown] deletable=true editable=true # ### White Lane Result # + deletable=true editable=true HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(whiteLaneResult)) # + [markdown] deletable=true editable=true # ### Yellow Lane Result # + deletable=true editable=true HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellowLaneResult)) # + [markdown] deletable=true editable=true # ## Reflection # + [markdown] deletable=true editable=true # ### How the pipeline works # # The pipeline first detects lanes in a set of images and uses the same technique to detect lanes in a stream of video. The RGB images are converted to grayscale and Gaussian blur is applied over them. Using Canny edge detection, we then find the edges and extrapolate lines over the strongest edges. For a video stream, we essentially perform the same operations over every video frame, which is in turn an image. The fl_image method comes in handy during lane detection on video streams. # # ### Shortcomings # # The above algorithm works best for smooth and straight lanes. When used on lanes with bumps and curves, this algorithm will fail to detect lanes.
This algorithm will also fail in the event of bad weather. # # # ### Improvements # # The current implementation has some jitter. Pre-processing the images would result in a smoother extrapolation. We can use the intelligent flag on Hough lines, coupled with other parameters, to detect curved lanes.
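On the jitter point in the improvements above: one common remedy is to smooth the detected lane endpoints across video frames rather than trusting each frame in isolation. A minimal sketch of that idea, assuming per-frame `(x1, y1, x2, y2)` endpoints are available; the `LaneSmoother` class and its interface are illustrative, not part of this notebook:

```python
import numpy as np

# Hypothetical jitter-smoothing sketch: keep an exponential moving average
# of the lane endpoints across video frames, so one noisy Hough fit cannot
# make the drawn line jump between consecutive frames.
class LaneSmoother:
    def __init__(self, alpha=0.2):
        self.alpha = alpha      # weight given to the newest frame
        self.state = None       # smoothed endpoints, shape (4,)

    def update(self, endpoints):
        """endpoints: array-like (x1, y1, x2, y2) from the current frame."""
        pts = np.asarray(endpoints, dtype=float)
        if self.state is None:
            self.state = pts    # first frame: nothing to average yet
        else:
            self.state = self.alpha * pts + (1 - self.alpha) * self.state
        return self.state

smoother = LaneSmoother(alpha=0.5)
smoother.update([0, 0, 100, 100])
smoothed = smoother.update([10, 0, 110, 100])
print(smoothed)  # halfway between the two noisy detections
```

In the pipeline above, one smoother per lane (left and right) would sit between `draw_lines`'s slope-based endpoint selection and the final `cv2.line` calls.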
lanelines.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # default_exp mgmnt.prep # - # # Main Preprocessing # # > This module comprises preprocessing techniques applied to software artifacts (TODO:cite here the papers employed for this preprocessings): # > # >This is an adapted version of Daniel McCrystal Nov 2019 # > # >This version also includes BPE preprocesing and NLTK. It's the main class to execute conventional pipelines. # # >Author: @danaderp March 2020 # #! pip install dit # #! pip install nltk # #! pip install tokenizers # #! pip install tensorflow_datasets # ! pip install -U tensorflow-gpu # ! pip install tensorflow_datasets #export from typing import List, Set, Callable, Tuple, Dict, Optional import re from nltk.stem.snowball import SnowballStemmer import nltk import pandas as pd import glob import os import pathlib from string import punctuation import csv from nltk.stem.snowball import SnowballStemmer englishStemmer=SnowballStemmer("english") # #! pip install nltk nltk.download('stopwords') #export from tensorflow.keras.preprocessing import text from pathlib import Path import glob from datetime import datetime #export # Imports import pandas as pd import sentencepiece as sp import numpy as np import json from pathlib import Path import sys import sentencepiece as spm from tokenizers import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing #export import logging logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) from zipfile import ZipFile # ! 
unzip -qq cisco/CSB-CICDPipelineEdition-master.zip # ## Setup #hide path_data = '../dvc-ds4se/' #dataset path def libest_params(): return { 'system': 'libest', #'path_zip': Path("cisco/sacp-python-common.zip"), 'saving_path': path_data+ 'se-benchmarking/traceability/testbeds/processed/libest_data', 'language': 'english', 'dataset' : path_data + '' #'model_prefix': path_data+'models/bpe/sentencepiece/wiki_py_java_bpe_128k' #For BPE Analysis #'model_prefix': path_data+'models/bpe/sentencepiece/wiki_py_java_bpe_32k' 'model_prefix':path_data+'models/bpe/sentencepiece/wiki_py_java_bpe_8k' } model_prefix = { 'bpe8k' : path_data+'models/bpe/sentencepiece/wiki_py_java_bpe_8k', 'bpe32k' : path_data+'models/bpe/sentencepiece/wiki_py_java_bpe_32k', 'bpe128k' : path_data+'models/bpe/sentencepiece/wiki_py_java_bpe_128k' } #params = default_params() params = libest_params() # # Conventional Preprocessing Class #export class ConventionalPreprocessing(): '''NLTK libraries for Conventional Preprocessing''' def __init__(self, params, bpe = False): self.params = params #If BPE provided, then preprocessing with BPE is allowed on CONV if bpe: self.sp_bpe = spm.SentencePieceProcessor() self.sp_bpe.load(params['model_prefix']+'.model') else: self.sp_bpe = None pass def bpe_pieces_pipeline(self, doc_list): '''Computes BPE preprocessing according to params''' encoded_str = '' if self.sp_bpe is None: logging.info('Provide a BPE Model!') else: encoded_str = [self.sp_bpe.encode_as_pieces(doc) for doc in doc_list] return encoded_str #ToDo Transforme it into a For-Comprenhension def clean_punctuation(self, token): #remove terms !"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~0123456789 return re.sub(r'[^a-zA-Z\s]', ' ', token, re.I|re.A) def split_camel_case_token(self, token): return re.sub('([a-z])([A-Z])', r'\1 \2', token) def remove_terms(self, filtered_tokens): remove_terms = punctuation + '0123456789' return [token for token in filtered_tokens if token not in remove_terms and len(token)>2 and 
len(token)<21] def stemmer(self, filtered_tokens): return [englishStemmer.stem(token) for token in filtered_tokens ] def stop_words(self, filtered_tokens): stop_words = nltk.corpus.stopwords.words(self.params['language']) return [token for token in filtered_tokens if token not in stop_words] def basic_pipeline(self, dict_filenames): '''@dict_filenames: {filename: code}''' pre_process = [( key.replace('.txt', '-pre.txt') , self.clean_punctuation(dict_filenames[key][0]) ) for key in dict_filenames] pre_process = [( doc[0] , self.split_camel_case_token(doc[1]) ) for doc in pre_process] pre_process = [( doc[0] , doc[1].lower() ) for doc in pre_process] pre_process = [( doc[0] , doc[1].strip()) for doc in pre_process] # Leading whitepsace are removed pre_process_tokens = [(doc[0] , nltk.WordPunctTokenizer().tokenize(doc[1])) for doc in pre_process] filtered_tokens = [(doc[0], self.stop_words(doc[1]) ) for doc in pre_process_tokens] #Stop Words filtered_tokens = [(doc[0], self.stemmer(doc[1]) ) for doc in filtered_tokens] #Filtering Stemmings filtered_tokens = [(doc[0], self.remove_terms(doc[1])) for doc in filtered_tokens] #Filtering remove-terms pre_process = [(doc[0], ' '.join(doc[1])) for doc in filtered_tokens] return pre_process def fromdocs_pipeline(self, docs): #TODO """@tokenized_file: a list of tokens that represents a document/code""" pre_process = [ self.clean_punctuation(doc) for doc in docs] logging.info('fromtokens_pipeline: clean punctuation') pre_process = [ self.split_camel_case_token(doc) for doc in pre_process] logging.info('fromtokens_pipeline: camel case') pre_process = [ doc.lower() for doc in pre_process] logging.info('fromtokens_pipeline: lowe case') pre_process = [ doc.strip() for doc in pre_process] # Leading whitepsace are removed logging.info('fromtokens_pipeline: white space removed') pre_process_tokens = [ nltk.WordPunctTokenizer().tokenize(doc) for doc in pre_process] logging.info('fromtokens_pipeline: WordPunctTokenizer') filtered_tokens 
= [ self.stop_words(doc) for doc in pre_process_tokens] #Stop Words logging.info('fromtokens_pipeline: Stop words') filtered_tokens = [ self.stemmer(doc) for doc in filtered_tokens] #Filtering Stemmings logging.info('fromtokens_pipeline: Stemmings') filtered_tokens = [ self.remove_terms(doc) for doc in filtered_tokens] #Filtering remove-terms logging.info('fromtokens_pipeline: Removed Special Terns') pre_process = [ ' '.join(doc) for doc in filtered_tokens] logging.info('fromtokens_pipeline END') return pre_process def frombatch_pipeline(self, batch): #TODO """@batch: a TensorFlow Dataset Batch""" pre_process = [ self.clean_punctuation( doc.decode("utf-8") ) for doc in batch] logging.info('frombatch_pipeline: clean punctuation') pre_process = [ self.split_camel_case_token(doc) for doc in pre_process] logging.info('frombatch_pipeline: camel case') pre_process = [ doc.lower() for doc in pre_process] logging.info('frombatch_pipeline: lowe case') pre_process = [ doc.strip() for doc in pre_process] # Leading whitepsace are removed logging.info('frombatch_pipeline: white space removed') pre_process_tokens = [ nltk.WordPunctTokenizer().tokenize(doc) for doc in pre_process] logging.info('frombatch_pipeline: WordPunctTokenizer') filtered_tokens = [ self.stop_words(doc) for doc in pre_process_tokens] #Stop Words logging.info('frombatch_pipeline: Stop words') filtered_tokens = [ self.stemmer(doc) for doc in filtered_tokens] #Filtering Stemmings logging.info('frombatch_pipeline: Stemmings') filtered_tokens = [ self.remove_terms(doc) for doc in filtered_tokens] #Filtering remove-terms logging.info('frombatch_pipeline: Removed Special Terns') #pre_process = [ ' '.join(doc) for doc in filtered_tokens] logging.info('frombatch_pipeline [END]') return filtered_tokens def fromtensor_pipeline(self, ts_x): """@ts_x: es un elemento del tensor""" #TODO pre_process = self.clean_punctuation(ts_x) pre_process = self.split_camel_case_token(pre_process) pre_process = pre_process.lower() 
pre_process = pre_process.strip() pre_process = nltk.WordPunctTokenizer().tokenize(pre_process) filtered_tokens = self.stop_words(pre_process) filtered_tokens = self.stemmer(filtered_tokens) filtered_tokens = self.remove_terms(filtered_tokens) pre_process = ' '.join(filtered_tokens) logging.info('fromtokens_pipeline END') return pre_process def SaveCorpus(self, df, language='js', sep=',', mode='a'): timestamp = datetime.timestamp(datetime.now()) path_to_link = self.params['saving_path'] + '['+ self.params['system'] + '-' + language + '-{}].csv'.format(timestamp) df.to_csv(path_to_link, header=True, index=True, sep=sep, mode=mode) logging.info('Saving in...' + path_to_link) pass def LoadCorpus(self, timestamp, language='js', sep=',', mode='a'): path_to_link = self.params['saving_path'] + '['+ self.params['system'] + '-' + language + '-{}].csv'.format(timestamp) return pd.read_csv(path_to_link, header=0, index_col=0, sep=sep) #export def open_file(f, encoding='utf-8'): try: #return open(filename, 'r', encoding="ISO-8859-1").read() return open(f, 'r', encoding = encoding).read() except: print("Exception: ", sys.exc_info()[0]) #export def get_files(system, ends): path = Path("cisco/CSB-CICDPipelineEdition-master/") names = [entry for entry in path.glob('**/*' +ends)] filenames = [(filename, os.path.basename(filename), open_file(filename) ) for filename in names] return pd.DataFrame( filenames ,columns = ['names','filenames','content']) # # 2. Utils # > From @Nathan # export def jsonl_list_to_dataframe(file_list, columns=None): """Load a list of jsonl.gz files into a pandas DataFrame.""" return pd.concat([pd.read_json(f, orient='records', compression='gzip', lines=True)[columns] for f in file_list], sort=False) # export #This if for SearchNet Dataset def get_dfs(path): """ Grabs the different data splits and converts them into dataframes. Expects format from Code Search Net Challenge. 
SearchNetDataset """ dfs = [] for split in ["train", "valid", "test"]: files = sorted((path/split).glob("**/*.gz")) df = jsonl_list_to_dataframe(files, ["code", "docstring"]) dfs.append(df) return dfs # export def df_to_txt_file(df, output, cols): """Converts a dataframe and converts it into a text file that SentencePiece can use to train a BPE model""" if cols is None: cols = list(df.columns) merged_df = pd.concat([df[col] for col in cols]) with open(output/'text.txt', 'w') as f: f.write('\n'.join(list(merged_df))) return output/'text.txt' # export def sp_model_from_df(df, output, model_name, cols = None): """Trains a SentencePiece BPE model from a pandas dataframe""" fname = df_to_txt_file(df, output, cols) sp.SentencePieceTrainer.train(f'--input={fname} --model_prefix={output / model_name} --hard_vocab_limit=false') # export def sp_model_from_glob(path, glob, model_name): fns = list(path.glob(glob)) fns = ",".join(map(str, fns)) sp.SentencePieceTrainer.train(f'--input={fns} --model_prefix={path / model_name} --hard_vocab_limit=false') # export def gen_hugface_model(df, output, tokenizer = ByteLevelBPETokenizer(), vocab_sz = 30_000, min_freq = 3, cols = None): fname = df_to_txt_file(df, output, cols) tokenizer.train(files = [str(fname)], vocab_size = vocab_sz, min_frequency = min_freq, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) return tokenizer # export def tokenize_fns(fns, tokenizer, exts, output, data_type): docs = [] for fn in fns: system = fn.parent.name output_path = output/system/data_type output_path.mkdir(parents=True, exist_ok=True) files = [] for ext in exts: files.extend(fn.glob(f'**/*.{ext}')) for file in files: if 'README' not in file.name: with open(file, encoding='ISO-8859-1') as f: docs.append(tokenizer.EncodeAsPieces(f.read())) with open((output_path/file.name).with_suffix('.bpe'), 'w') as f: f.write(' '.join(docs[-1])) return docs # export def read_bpe_files(path): bpe_files = [] for file in path.glob('**/*.bpe'): with 
open(file) as f: bpe_files.append(f.read().split(' ')) return bpe_files # export #This implementation was oriented to traceability datasets def split_lines_to_files(lines, fn_pattern, output_path, tokenizer): for line in lines: fn, content = line.split(fn_pattern) fn = fn.replace('"', '') fn = fn.replace(' Test ', '') content = tokenizer.EncodeAsPieces(content) with open((output_path/fn).with_suffix('.bpe'), 'w') as f: f.write(' '.join(content)) # ### Testing Utils #hide path_data = Path('../dvc-ds4se/') #dataset path def utils_params(): return { 'system': 'searchnet', 'path_data': path_data / 'code/searchnet/java/final/jsonl', 'path_test_out': path_data /'nbs_experiments' } #path = Path('/tf/data/') params = utils_params() df_trn, df_val, df_tst = get_dfs( params['path_data'] ) df_trn.head() params['path_test_out'] / 'trn.csv' #Sampling some data df_trn.sample(frac = 0.01).to_json(params['path_test_out'] /'trn.jsonl', index = False) df_val.sample(frac = 0.01).to_csv(params['path_test_out'] /'val.csv', index = False) df_tst.sample(frac = 0.01).to_csv(params['path_test_out'] /'tst.csv', index = False) df_trn.sample(frac = 0.01).to_json(params['path_test_out'] /'trn.jsonl') df = pd.read_csv(params['path_test_out'] / 'trn.csv') df.head() # ### tst tokenizer hugface tokenizer = gen_hugface_model(df, path) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) print(tokenizer.encode("public static void main(String[] args) { getDirFromLib(); }").tokens) tokenizer.save(str( params['path_test_out'] ), "java_tokenizer") dummy_data = { 'first': ['1', '2', '6', '7', '8'], 'second': ['K', 'M', 'O', 'Q', 'S'], 'third': ['L', 'N', 'P', 'R', 'T']} df = pd.DataFrame(dummy_data); df
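As a standalone illustration of what the first two steps of `ConventionalPreprocessing` do to a source-code identifier, here is a dependency-free sketch of the same two regexes. (As an aside, `re.sub`'s fourth positional argument is `count`, not `flags`, so any flags are best passed as `flags=`.)

```python
import re

# Standalone illustration of the first two regex steps of the
# ConventionalPreprocessing pipeline, applied to a typical identifier.
def clean_punctuation(token):
    # drop everything except letters and whitespace
    return re.sub(r'[^a-zA-Z\s]', ' ', token)

def split_camel_case(token):
    # insert a space at each lowercase->uppercase boundary
    return re.sub('([a-z])([A-Z])', r'\1 \2', token)

raw = 'getUserById(42);'
cleaned = clean_punctuation(raw)           # 'getUserById     '
split = split_camel_case(cleaned).lower()  # 'get user by id'
print(split.split())  # → ['get', 'user', 'by', 'id']
```

The remaining steps in `basic_pipeline` (stop-word removal, Snowball stemming, short/long-token filtering) then operate on exactly this kind of token list.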
nbs/0.1_mgmnt.prep.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # - # # Importing necessary libraries # + import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns import tensorflow as tf # - # # Loading the data # + train = pd.read_csv("../input/digit-recognizer/train.csv") test = pd.read_csv("../input/digit-recognizer/test.csv") train.head() # - # # Taking a look at the features y_train = train['label'].astype('float32') X_train = train.drop(['label'], axis=1).astype('int32') X_test = test.astype('float32') X_train.shape, y_train.shape, X_test.shape # + plt.style.use('dark_background') plt.figure(figsize=(10,8)) plt.xticks(size=20) plt.yticks(size=20) x = sns.countplot(x='label', 
data=train,palette=['cyan','yellow','pink','b','y','purple','magenta','hotpink','turquoise','lightblue']); x.set_xlabel("Label",fontsize=30) x.set_ylabel('Count',fontsize=30) #sns.plt.show() # - # # Data normalization X_train = X_train/255 X_test = X_test/255 # + X_train = X_train.values.reshape(-1,28,28,1) X_test = X_test.values.reshape(-1,28,28,1) X_train.shape, X_test.shape # - # # One-hot encoding from keras.utils.np_utils import to_categorical y_train = to_categorical(y_train, num_classes = 10) y_train.shape print(train['label'].head()) y_train[0:5,:] # # Splitting the data from sklearn.model_selection import train_test_split X_train, X_cv, y_train, y_cv = train_test_split(X_train, y_train, test_size = 0.1, random_state=42) # + plt.imshow(X_train[1][:,:,0]) plt.title(y_train[1].argmax()); # - # # Importing DL libraries from keras.layers import Input,InputLayer, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D from keras.layers import AveragePooling2D, MaxPooling2D, Dropout from keras.models import Sequential,Model from keras.optimizers import SGD from keras.callbacks import ModelCheckpoint,LearningRateScheduler import keras from keras import backend as K # # Building a CNN # + # Building a CNN model input_shape = (28,28,1) X_input = Input(input_shape) # layer 1 x = Conv2D(64,(3,3),strides=(1,1),name='layer_conv1',padding='same')(X_input) x = BatchNormalization()(x) x = Activation('relu')(x) x = MaxPooling2D((2,2),name='maxPool1')(x) # layer 2 x = Conv2D(32,(3,3),strides=(1,1),name='layer_conv2',padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = MaxPooling2D((2,2),name='maxPool2')(x) # layer 3 x = Conv2D(32,(3,3),strides=(1,1),name='conv3',padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = MaxPooling2D((2,2), name='maxPool3')(x) # fc x = Flatten()(x) x = Dense(64,activation ='relu',name='fc0')(x) x = Dropout(0.25)(x) x = Dense(32,activation ='relu',name='fc1')(x) x = Dropout(0.25)(x) x = 
Dense(10,activation ='softmax',name='fc2')(x) conv_model = Model(inputs=X_input, outputs=x, name='Predict') conv_model.summary() # - # # Note : # In SGD 30 epochs is a reasonable choice to use although it takes a long time to compute. # Adam optimizer converges quicker. Use one of the following optimizers. # # Using Adam # Adam optimizer conv_model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy']) conv_model.fit(X_train, y_train, epochs=10, batch_size=100, validation_data=(X_cv,y_cv)) # # Using SGD # + # SGD optimizer sgd = SGD(lr=0.0005, momentum=0.5, decay=0.0, nesterov=False) conv_model.compile(optimizer=sgd,loss='categorical_crossentropy',metrics=['accuracy']) conv_model.fit(X_train, y_train, epochs=30, validation_data=(X_cv, y_cv)) # + y_pred = conv_model.predict(X_test) y_pred = np.argmax(y_pred,axis=1) my_submission = pd.DataFrame({'ImageId': list(range(1, len(y_pred)+1)), 'Label': y_pred}) my_submission.to_csv('submission.csv', index=False) # - # # # Test Accuracy # Adam optimizer (10 epochs, batch size = 100) : 0.98985<br> # SGD optimizer (12 epochs) : 0.97600
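The convergence note above can be made concrete with the textbook update rules behind the two optimizers. This is a schematic sketch with a scalar parameter and a fixed gradient, not Keras internals; the SGD hyperparameters are taken from the cell above, and Adam's learning rate is its usual default of 1e-3:

```python
import numpy as np

# One update step of SGD-with-momentum vs. Adam on a scalar parameter,
# showing why Adam's effective step is larger early in training: its
# bias-corrected, variance-normalized step is close to the raw lr.
def sgd_momentum_step(w, v, grad, lr=0.0005, momentum=0.5):
    v = momentum * v - lr * grad
    return w + v, v

def adam_step(w, m, v, grad, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad**2       # second-moment estimate
    m_hat = m / (1 - b1**t)               # bias correction
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w_sgd, vel = sgd_momentum_step(0.0, 0.0, grad=1.0)
w_adam, m, v = adam_step(0.0, 0.0, 0.0, grad=1.0, t=1)
print(abs(w_adam) > abs(w_sgd))  # → True: Adam takes the bigger first step
```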
digitrecognition-using-adam-sgd-99.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="11xi8CRmVA1d" # *Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [<NAME>](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).* # # Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning). # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 119} colab_type="code" executionInfo={"elapsed": 2889, "status": "ok", "timestamp": 1525034072517, "user": {"displayName": "<NAME>", "photoUrl": "//lh6.googleusercontent.com/-cxK6yOSQ6uE/AAAAAAAAAAI/AAAAAAAAIfw/P9ar_CHsKOQ/s50-c-k-no/photo.jpg", "userId": "118404394130788869227"}, "user_tz": 240} id="WuXDfh6UVA1g" outputId="80c92b95-76aa-444e-9aa3-9bcf5d000a6e" # %load_ext watermark # %watermark -a '<NAME>' -v -p torch # + [markdown] colab_type="text" id="Cii2luqnVA1s" # - Runs on CPU (not recommended here) or GPU (if available) # + [markdown] colab_type="text" id="EYAtjwgyVA1t" # # Model Zoo -- Convolutional Autoencoder with Nearest-neighbor Interpolation (Trained on CelebA) # + [markdown] colab_type="text" id="Ke9a_LDUVA1v" # A convolutional autoencoder using nearest neighbor upscaling layers that compresses 128*128*3=49,152 pixel face images to an intermediate 1000-pixel representation (~50x reduction!). 
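The compression figure quoted above is a quick piece of arithmetic worth checking:

```python
# A 128x128 RGB image has 128*128*3 = 49,152 values; mapping it to a
# 1000-dimensional latent code gives roughly a 49x reduction.
pixels = 128 * 128 * 3
latent = 1000
ratio = pixels / latent
print(pixels, round(ratio, 1))  # → 49152 49.2
```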
# + [markdown] colab_type="text" id="1iZAQwueVA1x" # ## Imports # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="CO_0yUH6VA1z" import numpy as np from PIL import Image import os import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import Dataset from torch.utils.data import DataLoader from torchvision import transforms # + [markdown] colab_type="text" id="uhGdecraVA1-" # ## Dataset # + [markdown] colab_type="text" id="sxaXYk5cVA2A" # ### Downloading the Dataset # + [markdown] colab_type="text" id="T05ELUTiVA2B" # Note that the ~200,000 CelebA face image dataset is relatively large (~1.3 Gb). The download link provided below was provided by the author on the official CelebA website at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html. # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 119} colab_type="code" executionInfo={"elapsed": 2551, "status": "ok", "timestamp": 1525034075604, "user": {"displayName": "<NAME>", "photoUrl": "//lh6.googleusercontent.com/-cxK6yOSQ6uE/AAAAAAAAAAI/AAAAAAAAIfw/P9ar_CHsKOQ/s50-c-k-no/photo.jpg", "userId": "118404394130788869227"}, "user_tz": 240} id="VZAXaPOa7HGe" outputId="2af847fa-a390-4d88-faff-33dc03e8664b" # !curl https://www.dropbox.com/sh/8oqt9vytwxb3s4r/AADIKlz8PR9zr6Y20qbkunrba/Img/img_align_celeba.zip?dl=1 -O -J -L # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 7389, "status": "ok", "timestamp": 1525034083012, "user": {"displayName": "<NAME>", "photoUrl": "//lh6.googleusercontent.com/-cxK6yOSQ6uE/AAAAAAAAAAI/AAAAAAAAIfw/P9ar_CHsKOQ/s50-c-k-no/photo.jpg", "userId": "118404394130788869227"}, "user_tz": 240} id="r4yjlY9A7NTz" outputId="d057ec4d-08a9-4248-cc5d-5bba1da84aea" # ! 
y | unzip img_align_celeba.zip >/dev/null # + [markdown] colab_type="text" id="Z1zF9mPwVA2P" # ### Create a Custom Data Loader # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="Xql81wgGWWHN" batch_size = 64 # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="fV2cRUDx7mqz" class CelebaDataset(Dataset): """Custom Dataset for loading CelebA face images""" def __init__(self, txt_path, img_dir, transform=None): self.img_dir = img_dir self.img_names = [i for i in os.listdir(img_dir) if i.endswith('.jpg')] self.transform = transform def __getitem__(self, index): img = Image.open(os.path.join(self.img_dir, self.img_names[index])) if self.transform is not None: img = self.transform(img) return img def __len__(self): return len(self.img_names) custom_transform = transforms.Compose([#transforms.Grayscale(), transforms.Resize((128, 128)), #transforms.Lambda(lambda x: x/255.), transforms.ToTensor()]) train_dataset = CelebaDataset(txt_path='img_align_celeba', img_dir='img_align_celeba', transform=custom_transform) train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=1) # + [markdown] colab_type="text" id="wHRUkZFFVA2T" # ## Settings # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 330, "status": "ok", "timestamp": 1525034084237, "user": {"displayName": "<NAME>", "photoUrl": "//lh6.googleusercontent.com/-cxK6yOSQ6uE/AAAAAAAAAAI/AAAAAAAAIfw/P9ar_CHsKOQ/s50-c-k-no/photo.jpg", "userId": "118404394130788869227"}, "user_tz": 240} id="KthquBjBVA2V" outputId="4014037d-f8b6-4dcc-e6ec-9db4e4cd38fd" ########################## ### SETTINGS ########################## # Device device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print('Device:', device) # Hyperparameters random_seed = 123 learning_rate = 1e-4 num_epochs = 20 # + [markdown] 
colab_type="text" id="pnSfNaJrVA2Z" # ### Model # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="aKDo_CM9-5eL" class AutoEncoder(nn.Module): def __init__(self, in_channels, dec_channels, latent_size): super(AutoEncoder, self).__init__() self.in_channels = in_channels self.dec_channels = dec_channels self.latent_size = latent_size ############### # ENCODER ############## self.e_conv_1 = nn.Conv2d(in_channels, dec_channels, kernel_size=(4, 4), stride=(2, 2), padding=1) self.e_bn_1 = nn.BatchNorm2d(dec_channels) self.e_conv_2 = nn.Conv2d(dec_channels, dec_channels*2, kernel_size=(4, 4), stride=(2, 2), padding=1) self.e_bn_2 = nn.BatchNorm2d(dec_channels*2) self.e_conv_3 = nn.Conv2d(dec_channels*2, dec_channels*4, kernel_size=(4, 4), stride=(2, 2), padding=1) self.e_bn_3 = nn.BatchNorm2d(dec_channels*4) self.e_conv_4 = nn.Conv2d(dec_channels*4, dec_channels*8, kernel_size=(4, 4), stride=(2, 2), padding=1) self.e_bn_4 = nn.BatchNorm2d(dec_channels*8) self.e_conv_5 = nn.Conv2d(dec_channels*8, dec_channels*16, kernel_size=(4, 4), stride=(2, 2), padding=1) self.e_bn_5 = nn.BatchNorm2d(dec_channels*16) self.e_fc_1 = nn.Linear(dec_channels*16*4*4, latent_size) ############### # DECODER ############## self.d_fc_1 = nn.Linear(latent_size, dec_channels*16*4*4) self.d_conv_1 = nn.Conv2d(dec_channels*16, dec_channels*8, kernel_size=(4, 4), stride=(1, 1), padding=0) self.d_bn_1 = nn.BatchNorm2d(dec_channels*8) self.d_conv_2 = nn.Conv2d(dec_channels*8, dec_channels*4, kernel_size=(4, 4), stride=(1, 1), padding=0) self.d_bn_2 = nn.BatchNorm2d(dec_channels*4) self.d_conv_3 = nn.Conv2d(dec_channels*4, dec_channels*2, kernel_size=(4, 4), stride=(1, 1), padding=0) self.d_bn_3 = nn.BatchNorm2d(dec_channels*2) self.d_conv_4 = nn.Conv2d(dec_channels*2, dec_channels, kernel_size=(4, 4), stride=(1, 1), padding=0) self.d_bn_4 = nn.BatchNorm2d(dec_channels) self.d_conv_5 = nn.Conv2d(dec_channels, in_channels, kernel_size=(4, 4), stride=(1, 1), padding=0) # 
Reinitialize weights using He initialization for m in self.modules(): if isinstance(m, torch.nn.Conv2d): nn.init.kaiming_normal_(m.weight.detach()) m.bias.detach().zero_() elif isinstance(m, torch.nn.ConvTranspose2d): nn.init.kaiming_normal_(m.weight.detach()) m.bias.detach().zero_() elif isinstance(m, torch.nn.Linear): nn.init.kaiming_normal_(m.weight.detach()) m.bias.detach().zero_() def encode(self, x): #h1 x = self.e_conv_1(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = self.e_bn_1(x) #h2 x = self.e_conv_2(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = self.e_bn_2(x) #h3 x = self.e_conv_3(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = self.e_bn_3(x) #h4 x = self.e_conv_4(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = self.e_bn_4(x) #h5 x = self.e_conv_5(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = self.e_bn_5(x) #fc x = x.view(-1, self.dec_channels*16*4*4) x = self.e_fc_1(x) return x def decode(self, x): # h1 #x = x.view(-1, self.latent_size, 1, 1) x = self.d_fc_1(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = x.view(-1, self.dec_channels*16, 4, 4) # h2 x = F.upsample(x, scale_factor=2) x = F.pad(x, pad=(2, 1, 2, 1), mode='replicate') x = self.d_conv_1(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = self.d_bn_1(x) # h3 x = F.upsample(x, scale_factor=2) x = F.pad(x, pad=(2, 1, 2, 1), mode='replicate') x = self.d_conv_2(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = self.d_bn_2(x) # h4 x = F.upsample(x, scale_factor=2) x = F.pad(x, pad=(2, 1, 2, 1), mode='replicate') x = self.d_conv_3(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = self.d_bn_3(x) # h5 x = F.upsample(x, scale_factor=2) x = F.pad(x, pad=(2, 1, 2, 1), mode='replicate') x = self.d_conv_4(x) x = F.leaky_relu(x, negative_slope=0.2, inplace=True) x = self.d_bn_4(x) # out x = F.upsample(x, scale_factor=2) x = F.pad(x, pad=(2, 1, 2, 1), mode='replicate') x = self.d_conv_5(x) x = 
F.sigmoid(x) return x def forward(self, x): z = self.encode(x) decoded = self.decode(z) return z, decoded # + [markdown] colab_type="text" id="PfAT3P8_VA2e" # ## Training # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 1224} colab_type="code" executionInfo={"elapsed": 10453399, "status": "ok", "timestamp": 1525044538453, "user": {"displayName": "<NAME>", "photoUrl": "//lh6.googleusercontent.com/-cxK6yOSQ6uE/AAAAAAAAAAI/AAAAAAAAIfw/P9ar_CHsKOQ/s50-c-k-no/photo.jpg", "userId": "118404394130788869227"}, "user_tz": 240} id="rbZ8ploO_JW2" outputId="42e24455-31a2-425b-fea4-599e7cc144d3" ########################## ### TRAINING ########################## epoch_start = 1 torch.manual_seed(random_seed) model = AutoEncoder(in_channels=3, dec_channels=32, latent_size=1000) model = model.to(device) optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) ################## Load previous # the code saves the autoencoder # after each epoch so that in case # the training process gets interrupted, # we will not have to start training it # from scratch files = os.listdir() for f in files: if f.startswith('autoencoder_i_') and f.endswith('.pt'): print('Load', f) epoch_start = int(f.split('_')[-2]) + 1 model.load_state_dict(torch.load(f)) break ################## for epoch in range(epoch_start, num_epochs+1): for batch_idx, features in enumerate(train_loader): # don't need labels, only the images (features) features = features.to(device) ### FORWARD AND BACK PROP latent_vector, decoded = model(features) cost = F.mse_loss(decoded, features) optimizer.zero_grad() cost.backward() ### UPDATE MODEL PARAMETERS optimizer.step() ### LOGGING if not batch_idx % 500: print ('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f' %(epoch, num_epochs, batch_idx, len(train_dataset)//batch_size, cost)) # Save model if os.path.isfile('autoencoder_i_%d_%s.pt' % (epoch-1, device)): os.remove('autoencoder_i_%d_%s.pt' % (epoch-1, 
device)) torch.save(model.state_dict(), 'autoencoder_i_%d_%s.pt' % (epoch, device)) # + [markdown] colab_type="text" id="OBO9L5FnVA2h" # ## Evaluation # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 323} colab_type="code" executionInfo={"elapsed": 3782, "status": "ok", "timestamp": 1525044542253, "user": {"displayName": "<NAME>", "photoUrl": "//lh6.googleusercontent.com/-cxK6yOSQ6uE/AAAAAAAAAAI/AAAAAAAAIfw/P9ar_CHsKOQ/s50-c-k-no/photo.jpg", "userId": "118404394130788869227"}, "user_tz": 240} id="UpJLf9FnVqSw" outputId="121c6c55-6171-4b1b-c6ea-5a199abb4bc5" # %matplotlib inline import matplotlib.pyplot as plt model = AutoEncoder(in_channels=3, dec_channels=32, latent_size=1000) model = model.to(device) model.load_state_dict(torch.load('autoencoder_i_20_%s.pt' % device)) model.eval() torch.manual_seed(random_seed) for batch_idx, features in enumerate(train_loader): features = features.to(device) logits, decoded = model(features) break ########################## ### VISUALIZATION ########################## n_images = 5 fig, axes = plt.subplots(nrows=2, ncols=n_images, sharex=True, sharey=True, figsize=(18, 5)) orig_images = features.detach().cpu().numpy()[:n_images] orig_images = np.moveaxis(orig_images, 1, -1) decoded_images = decoded.detach().cpu().numpy()[:n_images] decoded_images = np.moveaxis(decoded_images, 1, -1) for i in range(n_images): for ax, img in zip(axes, [orig_images, decoded_images]): ax[i].axis('off') ax[i].imshow(img[i]) # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="x4D9J36yDr5G"
code/model_zoo/pytorch_ipynb/autoencoder-conv-2.ipynb
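# The decoder above upscales with nearest-neighbor interpolation before each convolution (via the since-deprecated `F.upsample`; `F.interpolate` is the modern PyTorch replacement). A minimal NumPy sketch of what 2x nearest-neighbor upscaling does to an NCHW tensor — the function name `nn_upsample` is illustrative, not from the notebook:

```python
import numpy as np

def nn_upsample(x, scale=2):
    """Nearest-neighbor upsampling for an NCHW array:
    every pixel is repeated `scale` times along H and then W."""
    return x.repeat(scale, axis=-2).repeat(scale, axis=-1)

x = np.arange(4, dtype=np.float32).reshape(1, 1, 2, 2)
y = nn_upsample(x)  # shape (1, 1, 4, 4); each input pixel becomes a 2x2 block
```

# Because every output pixel is a copy of an input pixel, this upscaling introduces no new values — the subsequent convolution is what smooths the blocky result.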
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Wide vs. Long Format Data # # ## About the data # In this notebook, we will be using daily temperature data from the [National Centers for Environmental Information (NCEI) API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2). We will use the Global Historical Climatology Network - Daily (GHCND) dataset for the Boonton 1 station (GHCND:USC00280907); see the documentation [here](https://www1.ncdc.noaa.gov/pub/data/cdo/documentation/GHCND_documentation.pdf). # # *Note: The NCEI is part of the National Oceanic and Atmospheric Administration (NOAA) and, as you can see from the URL for the API, this resource was created when the NCEI was called the NCDC. Should the URL for this resource change in the future, you can search for "NCEI weather API" to find the updated one.* # # ## Setup # + import matplotlib.pyplot as plt import pandas as pd wide_df = pd.read_csv('data/wide_data.csv', parse_dates=['date']) long_df = pd.read_csv( 'data/long_data.csv', usecols=['date', 'datatype', 'value'], parse_dates=['date'] )[['date', 'datatype', 'value']] # sort columns # - # ## Wide format # Our variables each have their own column: wide_df.head(6) # Describing all the columns is easy: wide_df.describe(include='all', datetime_is_numeric=True) # It's easy to graph with `pandas` (covered in chapter 5): wide_df.plot( x='date', y=['TMAX', 'TMIN', 'TOBS'], figsize=(15, 5), title='Temperature in NYC in October 2018' ).set_ylabel('Temperature in Celsius') plt.show() # ## Long format # Our variable names are now in the `datatype` column and their values are in the `value` column. 
We now have 3 rows for each date, since we have 3 different `datatypes`: long_df.head(6) # Since we have many rows for the same date, using `describe()` is not that helpful: long_df.describe(include='all', datetime_is_numeric=True) # Plotting long format data in `pandas` can be rather tricky. Instead we use `seaborn` (covered in [`ch_06/1-introduction_to_seaborn.ipynb`](../ch_06/1-introduction_to_seaborn.ipynb)): # + import seaborn as sns sns.set(rc={'figure.figsize': (15, 5)}, style='white') ax = sns.lineplot( data=long_df, x='date', y='value', hue='datatype' ) ax.set_ylabel('Temperature in Celsius') ax.set_title('Temperature in NYC in October 2018') plt.show() # - # With long data and `seaborn`, we can easily facet our plots: # + sns.set( rc={'figure.figsize': (20, 10)}, style='white', font_scale=2 ) g = sns.FacetGrid(long_df, col='datatype', height=10) g = g.map(plt.plot, 'date', 'value') g.set_titles(size=25) g.set_xticklabels(rotation=45) plt.show() # - # <hr> # <div> # <a href="../ch_02/6-adding_and_removing_data.ipynb"> # <button style="float: left;">&#8592; Chapter 2</button> # </a> # <a href="./2-using_the_weather_api.ipynb"> # <button style="float: right;">Next Notebook &#8594;</button> # </a> # </div> # <br> # <hr>
ch_03/1-wide_vs_long.ipynb
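# The wide-to-long conversion discussed above can be reproduced with `melt`, and reversed with `pivot`; a small self-contained sketch whose column names mirror the notebook's `date`/`datatype`/`value` layout (the sample values are made up):

```python
import pandas as pd

wide = pd.DataFrame({
    'date': ['2018-10-01', '2018-10-02'],
    'TMAX': [21.1, 23.9],
    'TMIN': [8.9, 10.0],
})

# wide -> long: one row per (date, datatype) pair
long_df = wide.melt(id_vars='date', var_name='datatype', value_name='value')

# long -> wide: datatypes become columns again
wide_again = long_df.pivot(index='date', columns='datatype',
                           values='value').reset_index()
```

# `melt` is why the long table has 3 rows per date in the notebook: each of the 3 datatypes contributes one row.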
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys import numpy as np import pandas as pd import itertools from collections import Counter import pysubgroup as ps sys.setrecursionlimit(3000) import pickle from SDDeclinations import * from SGDiscovery import * from SDPostprocessing import * from DynamicThreshold import * from scipy.stats import expon, gamma import math import matplotlib.pyplot as plt import matplotlib import seaborn as sns # %sns.set(color_codes=True) # %matplotlib inline plt.rcParams["figure.figsize"] = [16, 6] import warnings warnings.filterwarnings("ignore") from matplotlib.axes._axes import _log as matplotlib_axes_logger matplotlib_axes_logger.setLevel('ERROR') threshold = 10000 requetes = pd.read_csv('requetes_sd_jf.csv', index_col=[0]) requetes['durationMSDecales'] = requetes['durationMS'] - 5000 cond = requetes['durationMSDecales'] < 100000 cond2 = requetes['nbLignes'] < 100000 # ### EXECUTION TIME # #### Real distribution plt.hist(requetes[cond]['durationMSDecales'], 100, alpha=0.5, density = False) plt.show() # #### Simulated distribution 'Loi exponentielle avec lambda = 1 / moyenne' durations = expon.rvs(scale=requetes[cond]['durationMSDecales'].mean(),loc=0,size=requetes[cond].shape[0]) sns.distplot(durations,kde= False ,hist = True, bins=100, color = 'darkblue', hist_kws={'edgecolor':'black'}, kde_kws={'linewidth': 4}) 'Loi exponentielle avec lambda = ln(2) / mediane' durations = expon.rvs(scale=requetes[cond]['durationMSDecales'].median() / math.log(2),loc=0,size=requetes[cond].shape[0]) sns.distplot(durations,kde=False,hist = True, bins=100, color = 'darkblue', hist_kws={'edgecolor':'black'}, kde_kws={'linewidth': 4}) def get_threshold_duration (requetes, pvalue) : requetes['durationMSDecales'] = requetes['durationMS'] - 5000 expo = 
expon(scale=requetes['durationMSDecales'].mean(),loc=0) for i in np.arange(0,100000,100): if expo.cdf(i) < pvalue and expo.cdf(i+100) > pvalue : break print(i + 100 + 5000) get_threshold_duration (requetes, 0.65) def get_dynamic_target_duration (requetes, pvalue) : requetes['durationMSDecales'] = requetes['durationMS'] - 5000 expo = expon(scale=requetes['durationMSDecales'].mean(),loc=0) requetes['pvalue_duration'] = requetes['durationMSDecales'].apply(lambda x : expo.cdf(x)) requetes['class_duration'] = requetes ['pvalue_duration'].apply(lambda x : discretize_duration(x,pvalue)) get_dynamic_target_duration (requetes, 0.65) requetes['class_duration'].value_counts() # ### EXECUTION TIME with #LINES plt.hist(requetes[cond2]['nbLignes'], 100, alpha=0.5, density=False) plt.show() 'Loi exponentielle avec lambda = 1 / moyenne' nbLignes = expon.rvs(scale=requetes[cond2]['nbLignes'].mean(),loc=0,size=requetes[cond2].shape[0]) sns.distplot(nbLignes,kde= False ,hist = True, bins=100, color = 'darkblue', hist_kws={'edgecolor':'black'}, kde_kws={'linewidth': 4}) 'Loi exponentielle avec lambda = ln(2) / mediane' nbLignes = expon.rvs(scale=requetes[cond2]['nbLignes'].median() / math.log(2),loc=0,size=requetes[cond2].shape[0]) sns.distplot(nbLignes,kde= False ,hist = True, bins=100, color = 'darkblue', hist_kws={'edgecolor':'black'}, kde_kws={'linewidth': 4}) expo_nbLignes = expon(scale=requetes[cond2]['nbLignes'].mean(),loc=0) expo_nbLignes.cdf(1000) (requetes[cond2]['nbLignes'] < 1000).value_counts() 82841 / (41109 + 82841) # + 'Loi gamma de parametres K et THETA' 'Estimateurs' esp = requetes[cond2]['durationMS'].mean() # esp = k * theta var = requetes[cond2]['durationMS'].var() # var = k * (theta)**2 theta = var / esp k = esp / theta print('K =',k) print('THETA =',theta) # - nbLignes = gamma.rvs(a = k*2, scale=theta,loc=0,size=requetes[cond2].shape[0]) sns.distplot(nbLignes,kde= False ,hist = True, bins=100, color = 'darkblue', hist_kws={'edgecolor':'black'}, 
kde_kws={'linewidth': 4}) gamma_nbLignes = gamma(a = 0.06, scale=theta,loc=0) gamma_nbLignes.cdf(100) (requetes[cond2]['nbLignes'] <= 100).value_counts() 71520 / (71520 + 52430) # ### Independance between the two distributions requetes[['durationMS','nbLignes']].corr() # ### Product of the two CDF # + 'P(duration <= T) * (1- P(nbLignes <= N))' gamma_nbLignes = gamma(a = 0.06, scale=theta,loc=0) requetes['pvalue_nbLignes'] = requetes['nbLignes'].apply(lambda x : 1 - gamma_nbLignes.cdf(x)) # - requetes['product_pvalue'] = requetes['pvalue_duration'] * requetes['pvalue_nbLignes'] requetes[['durationMS','pvalue_duration','nbLignes','pvalue_nbLignes','product_pvalue']].sort_values(by='product_pvalue', ascending=False).head(10) # #### Real distribution of product of P-values plt.hist(requetes['product_pvalue'], 100, alpha=0.5, density = False) plt.show() # #### Simulated Distribution 'Loi exponentielle avec lambda = 1 / moyenne' product_pvalues = expon.rvs(scale=requetes['product_pvalue'].mean(),loc=0,size=requetes.shape[0]) sns.distplot(product_pvalues,kde= False ,hist = True, bins=100, color = 'darkblue', hist_kws={'edgecolor':'black'}, kde_kws={'linewidth': 4}) 'Loi exponentielle avec lambda = ln(2) / mediane' product_pvalues = expon.rvs(scale=requetes['product_pvalue'].median() / math.log(2),loc=0,size=requetes.shape[0]) sns.distplot(product_pvalues,kde= False ,hist = True, bins=100, color = 'darkblue', hist_kws={'edgecolor':'black'}, kde_kws={'linewidth': 4}) # + 'Loi gamma de parametres K et THETA' 'Estimateurs' esp = requetes['product_pvalue'].mean() # esp = k * theta var = requetes['product_pvalue'].var() # var = k * (theta)**2 theta = var / esp k = esp / theta print('K =',k) print('THETA =',theta) # - product_pvalues = gamma.rvs(a = k, scale = theta, loc = 0,size = requetes.shape[0]) sns.distplot(product_pvalues,kde = False ,hist = True, bins = 100, color = 'darkblue', hist_kws = {'edgecolor':'black'}, kde_kws = {'linewidth': 4}) gamma_product_pvalues = gamma(a 
= k, scale = theta, loc = 0) gamma_product_pvalues.cdf(0.12) def get_dynamic_target_class(requetes, pvalue) : # pvalues duration MS requetes['durationMSDecales'] = requetes['durationMS'] - 5000 expo = expon(scale=requetes['durationMSDecales'].mean(),loc=0) requetes['pvalue_duration'] = requetes['durationMSDecales'].apply(lambda x : expo.cdf(x)) # pvalues nbLignes esp_nbLignes = requetes[cond2]['durationMS'].mean() # esp = k * theta var_nbLignes = requetes[cond2]['durationMS'].var() # var = k * (theta)**2 theta_nbLignes = var_nbLignes / esp_nbLignes k_nbLignes = esp_nbLignes / theta_nbLignes gamma_nbLignes = gamma(a = k_nbLignes*2, scale=theta_nbLignes,loc=0) requetes['pvalue_nbLignes'] = requetes['nbLignes'].apply(lambda x : 1 - gamma_nbLignes.cdf(x)) # product pvalues requetes['product_pvalue'] = requetes['pvalue_duration'] * requetes['pvalue_nbLignes'] #pvalues of product of pvalues esp_pvalues = requetes['product_pvalue'].mean() # esp = k * theta var_pvalues = requetes['product_pvalue'].var() # var = k * (theta)**2 theta_pvalues = var_pvalues / esp_pvalues k_pvalues = esp_pvalues / theta_pvalues gamma_product_pvalues = gamma(a = k_pvalues, scale = theta_pvalues, loc = 0) requetes['pvalue_pvalues'] = requetes['product_pvalue'].apply(lambda x : gamma_product_pvalues.cdf(x)) requetes['class'] = requetes['pvalue_pvalues'].apply(lambda x : discretize_duration(x,pvalue)) get_dynamic_target_class(requetes, 0.65) requetes['class'].value_counts() 51644 / (97152 + 51644) for column in requetes.columns[23:35] : print(column) plt.hist(requetes[column], 50, alpha=0.5, density=False) plt.show() requetes[requetes['requete'].str.contains('mng_batch.noinstance')]
Code/sd-4sql/supplementary/dynamic-threshold.ipynb
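# The gamma fitting repeated above is method-of-moments estimation: solving E[X] = k*theta and Var[X] = k*theta**2 gives theta = var/mean and k = mean/theta. A sketch factoring it into one helper (the function name is illustrative; note that the notebook's pandas `.var()` uses the sample variance, ddof=1, while `np.var` defaults to ddof=0):

```python
import numpy as np

def gamma_mom(x):
    """Method-of-moments estimates (k, theta) for a gamma distribution,
    from mean = k * theta and variance = k * theta**2."""
    mean = np.mean(x)
    var = np.var(x)
    theta = var / mean
    k = mean / theta
    return k, theta
```

# For example, data with mean 2.5 and population variance 1.25 yields theta = 0.5 and k = 5.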
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Setup import json import requests from datetime import datetime as dt from collections import defaultdict import pandas as pd #https://pypi.org/project/gamma-viewer/ from gamma_viewer import GammaViewer from IPython.display import display, Markdown def printjson(j): print(json.dumps(j,indent=4)) def print_json(j): printjson(j) # + def post(name,url,message,params=None): if params is None: response = requests.post(url,json=message) else: response = requests.post(url,json=message,params=params) if not response.status_code == 200: print(name, 'error:',response.status_code) print(response.json()) return {} return response.json() def automat(db,message): automat_url = f'https://automat.renci.org/{db}/query' response = requests.post(automat_url,json=message) print(response.status_code) return response.json() def strider(message): url = 'https://strider.renci.org/1.1/query?log_level=DEBUG' strider_answer = post(strider,url,message) return strider_answer def aragorn(message, coalesce_type='xnone'): if coalesce_type == 'xnone': answer = post('aragorn','https://aragorn.renci.org/1.1/query',message) else: answer = post('aragorn','https://aragorn.renci.org/1.1/query',message, params={'answer_coalesce_type':coalesce_type}) return answer ## def bte(message): url = 'https://api.bte.ncats.io/v1/query' return post(strider,url,message) def geneticskp(message): url = 'https://translator.broadinstitute.org/genetics_provider/trapi/v1.1/query' return post(strider,url,message) def coalesce(message,method='all'): url = 'https://answercoalesce.renci.org/coalesce/graph' return post('AC'+method,url,message) def striderandfriends(message): strider_answer = strider(message) coalesced_answer = post('coalesce','https://answercoalesce.renci.org/coalesce/all',strider_answer) 
omni_answer = post('omnicorp','https://aragorn-ranker.renci.org/omnicorp_overlay',coalesced_answer) weighted_answer = post('weight','https://aragorn-ranker.renci.org/weight_correctness',omni_answer) scored_answer = post('score','https://aragorn-ranker.renci.org/score',weighted_answer) return strider_answer,coalesced_answer,omni_answer,weighted_answer,scored_answer # + def print_errors(strider_result): errorcounts = defaultdict(int) for logmessage in strider_result['logs']: if logmessage['level'] == 'ERROR': jm = json.loads(logmessage['message']) words = jm['error'].split() e = " ".join(words[:-5]) errorcounts[e] += 1 for error,count in errorcounts.items(): print(f'{error} ({count} times)') def print_queried_sources(strider_result): querycounts = defaultdict(int) for logmessage in strider_result['logs']: if 'step' in logmessage and isinstance(logmessage['step'],list): for s in logmessage['step']: querycounts[s['url']] += 1 for url,count in querycounts.items(): print(f'{url} ({count} times)') def print_query_for_source(strider_result,url): for logmessage in strider_result['logs']: if 'step' in logmessage and isinstance(logmessage['step'],list): for s in logmessage['step']: if s['url']==url: print(s) # - def retrieve_ars_results(mid): message_url = f'https://ars.transltr.io/ars/api/messages/{mid}?trace=y' response = requests.get(message_url) j = response.json() results = {} for child in j['children']: if child['actor']['agent'] in ['ara-aragorn', 'ara-aragorn-exp']: childmessage_id = child['message'] child_url = f'https://ars.transltr.io/ars/api/messages/{childmessage_id}' child_response = requests.get(child_url).json() try: nresults = len(child_response['fields']['data']['message']['results']) if nresults > 0: results[child['actor']['agent']] = {'message':child_response['fields']['data']['message']} except: nresults=0 print( child['status'], child['actor']['agent'],nresults ) return results def get_provenance(message): """Given a message with results, find the source 
of the edges""" prov = defaultdict(lambda: defaultdict(int)) # {qedge->{source->count}} results = message['message']['results'] kg = message['message']['knowledge_graph']['edges'] edge_bindings = [ r['edge_bindings'] for r in results ] for bindings in edge_bindings: for qg_e, kg_l in bindings.items(): for kg_e in kg_l: for att in kg[kg_e['id']]['attributes']: if att['attribute_type_id'] == 'MetaInformation:Provenance': source = att['value'] prov[qg_e][source]+=1 qg_edges = [] sources = [] counts = [] for qg_e in prov: for source in prov[qg_e]: qg_edges.append(qg_e) sources.append(source) counts.append(prov[qg_e][source]) prov_table = pd.DataFrame({"QG Edge":qg_edges, "Source":sources, "Count":counts}) return prov_table # ## Query Specific standup_json='StandupDefinitions/standup_20.json' with open(standup_json,'r') as jsonfile: standup_info = json.load(jsonfile) display(Markdown(f"# {standup_info['Query Title']}")) display(Markdown(f"{standup_info['Query Description']}")) print(f'Github Issue: {standup_info["github_issue"]}') # The query as run through the ARS: query = json.loads(requests.get(standup_info['query_location']).content) printjson(query) # ## ARS Assessment ARS_Responses = [(dt.strptime(x['ARS_result_date'],'%Y-%m-%d'),x['ARS_result_id']) for x in standup_info['ARS_Results']] ARS_Responses.sort() for ars_date, ars_id in ARS_Responses: display(Markdown(f'### {ars_date}')) #_ = retrieve_ars_results(ars_id) print(f'https://arax.ncats.io/?source=ARS&id={ars_id}') # ## Strider Direct start = dt.now() strider_result = strider(query) end = dt.now() print(f"Strider produced {len(strider_result['message']['results'])} results in {end-start}.") from collections import defaultdict dd = defaultdict(int) for r in strider_result['message']['results']: eb = r['edge_bindings']['e0'][0]['id'] edge = strider_result['message']['knowledge_graph']['edges'][eb] k = (r['node_bindings']['n0'][0]['id'], edge['predicate'], edge['attributes'][0]['value']) dd[k] += 1 
#print(r['node_bindings']['n0'][0]['id'], eb, edge['predicate'], edge['attributes'][0]['value']) #for k,v in dd.items(): # print(k,v) res = list(dd.keys()) res.sort() for r in res: print(r, dd[r]) # ### Provenance prov = get_provenance(strider_result) display(prov) # ### Queried sources print_queried_sources(strider_result) # ### Errors print_errors(strider_result) # ### Results view = GammaViewer(props={"data":strider_result}) display(view) # ### Strider Assessment # Enter Assessment Here start = dt.now() gkp_result = geneticskp(query) end = dt.now() print(f"GKP produced {len(gkp_result['message']['results'])} results in {end-start}.") printjson(gkp_result['message']['results']) query10 = { "message": { "query_graph": { "edges": { "e01": { "object": "n0", "subject": "n1", "predicate": "biolink:related_to" } }, "nodes": { "n0": { "id": "NCBIGene:23221", "category": "biolink:Gene" }, "n1": { "category": "biolink:Gene" } } } } } start = dt.now() gkp_result = geneticskp(query10) end = dt.now() print(f"GKP produced {len(gkp_result['message']['results'])} results in {end-start}.") # ## ARAGORN start = dt.now() aragorn_result = aragorn(query) end = dt.now() if 'results' in aragorn_result['message']: print(f"ARAGORN produced {len(aragorn_result['message']['results'])} results in {end-start}.") else: print('Error, no result field') print_json(aragorn_result['message']) view = GammaViewer(props={"data":aragorn_result}) display(view) tq={ "message": { "query_graph": { "nodes": { "n1": { "id": "HP:0033127", "category": "biolink:PhenotypicFeature" }, "n2": { "category": "biolink:Disease" } }, "edges": { "e02": { "subject": "n1", "object": "n2", "predicate": "biolink:affected_by" } } } } } r = automat('uberongraph',tq) # ### ARAGORN Assessment # How did we do? query = requests.get('https://kp-registry.renci.org/kps').json() printjson(query)
standups/Query20.ipynb
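# The `get_provenance` helper above boils down to counting (query-graph edge, source) pairs with a nested `defaultdict`; a stripped-down sketch on a toy input (the binding structure is simplified from the TRAPI-style messages the notebook consumes):

```python
from collections import defaultdict

def count_pairs(bindings_per_result):
    """Count how often each (qg_edge, source) pair occurs across results."""
    prov = defaultdict(lambda: defaultdict(int))
    for bindings in bindings_per_result:
        for qg_edge, source in bindings:
            prov[qg_edge][source] += 1
    return prov

counts = count_pairs([
    [('e0', 'kp_a'), ('e1', 'kp_b')],
    [('e0', 'kp_a')],
])
```

# The nested defaultdict avoids key-existence checks; missing pairs simply read back as 0.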
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Rotation Dataset maker import numpy as np import matplotlib.pyplot as plt import matplotlib import os from PIL import Image from numpy import * # SKLEARN from sklearn.utils import shuffle from sklearn.model_selection import train_test_split # note: sklearn.cross_validation was removed in scikit-learn 0.20 # + # input image dimensions img_rows, img_cols = 64, 64 # number of channels img_channels = 3 #%% # data path1 = '/home/mirror/Documents/pic' #path of folder of images path2 = '/home/mirror/Documents/out' #path of folder to save images # + listing = sorted(os.listdir(path1)) num_samples=size(listing) print (num_samples) for file in listing: img = Image.open(path1 + '/' + file) name = file[:len(file)-4] for i in range(1,51): angle = random.uniform(0,360) img1 = img.resize((img_rows, img_cols)) img1 = img1.rotate(angle, expand = True); img1 = img1.resize((img_rows, img_cols)) img1.save(path2 +'/' +name + "_"+str(angle) + ".png", "png") # - random.uniform(0,360)
Thesis/rotation dataset.ipynb
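# The augmentation loop above resizes each face, rotates it by a random angle with `expand=True` (so corners are not clipped), then resizes back to 64x64. A minimal sketch of that per-image step with file I/O omitted — requires Pillow, and the seeded RNG is only for reproducibility here:

```python
import random
from PIL import Image

def augment(img, size=(64, 64), seed=None):
    """Resize, rotate by a uniformly random angle, and resize back to `size`."""
    rng = random.Random(seed)
    angle = rng.uniform(0, 360)
    out = img.resize(size).rotate(angle, expand=True)
    return out.resize(size)

img = Image.new('RGB', (128, 128))
aug = augment(img, seed=0)
```

# With `expand=True` the rotated canvas grows to fit the tilted image, which is why the second `resize` back to the target dimensions is needed.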
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Crime data visualization in San Francisco # # San Francisco has one of the most "open data" policies of any large city. In this lab, we are going to download about 85M of data (238,456) describing all police incidents since 2018 (I'm grabbing data on August 5, 2019). # # ## Getting started # # Download [Police Department Incident Reports 2018 to present](https://data.sfgov.org/Public-Safety/Police-Department-Incident-Reports-2018-to-Present/wg3w-h783) or, if you want, all [San Francisco police department incident since 1 January 2003](https://data.sfgov.org/Public-Safety/SFPD-Incidents-from-1-January-2003/tmnf-yvry). Click the "Export" button and then save in "CSV for Excel" format. (It's fairly large at all 140MB so it could take a while if you have a slow connection.) # # We can easily figure out how many records there are: # # ```bash # $ wc -l Police_Department_Incident_Reports__2018_to_Present.csv # 388670 Police_Department_Incident_Reports__2018_to_Present.csv # ``` # # So 388,669 not including the header row. You can name that data file whatever you want but I will call it `SFPD.csv` for these exercises and save it in `/tmp`. # ## Sniffing the data # # Let's assume the file you downloaded and is in `/tmp`: # + import pandas as pd df_sfpd = pd.read_csv('/tmp/SFPD.csv') df_sfpd.head(2).T # - # To get a better idea of what the data looks like, let's do a simple histogram of the categories and crime descriptions. 
Here is the category histogram: from collections import Counter counter = Counter(df_sfpd['Incident Category']) counter.most_common(10) from collections import Counter counter = Counter(df_sfpd['Incident Description']) counter.most_common(10) # ## Word clouds # # A more interesting way to visualize differences in term frequency is using a so-called word cloud. For example, here is a word cloud showing the categories from 2003 to the present. # # <img src="figures/SFPD-wordcloud.png" width="400"> # # Python has a nice library you can use: # # ```bash # $ pip install wordcloud # ``` # # **Exercise**: In a file called `catcloud.py`, once again get the categories and then create a word cloud object and display it: # # ```python # from wordcloud import WordCloud # import matplotlib.pyplot as plt # from collections import Counter # import pandas as pd # import sys # # df_sfpd = pd.read_csv(sys.argv[1]) # # ... delete Incident Categories with nan ... # categories = ... create Counter object on column 'Incident Category' ... # # wordcloud = WordCloud(width=1800, # height=1400, # max_words=10000, # random_state=1, # relative_scaling=0.25) # # wordcloud.fit_words(categories) # # plt.imshow(wordcloud) # plt.axis("off") # plt.show() # ``` # ### Which neighborhood is the "worst"? # # **Exercise**: Now, pullout the neighborthood and do a word cloud on that in `hoodcloud.py` (it's ok to cut/paste): # # <img src="figures/SFPD-hood-wordcloud.png" width="400"> # ### Crimes per neighborhood # # # **Exercise**: Filter the data using pandas from a particular precinct or neighborhood, such as Mission and South of Market. Modify `catcloud.py` to use a pandas query to filter for those records. Pass the hood as an argument (`sys.argv[2]`): # # ```bash # $ python catcloud.py /tmp/SFPD.csv Mission # ``` # # Run the `catcloud.py` script to get an idea of the types of crimes per those two neighborhoods. 
Here are the Mission and SOMA district crime category clouds: # # <table> # <tr> # <td><b>Mission</b></td><td><b>SOMA</b></td> # </tr> # <tr> # <td><img src="figures/SFPD-mission-wordcloud.png" width="300"></td><td><img src="figures/SFPD-soma-wordcloud.png" width="300"></td> # </tr> # </table> # ### Which neighborhood has the most car break-ins? # # **Exercise**: Modify `hoodcloud.py` to filter for `Motor Vehicle Theft`. Pass the category as an argument (`sys.argv[2]`): # # ```bash # $ python hoodcloud.py /tmp/SFPD.csv 'Motor Vehicle Theft' # ``` # # <img src="figures/SFPD-car-theft-hood-wordcloud.png" width="300"> # # Hmm, OK, so parking in the Mission is ok, but SOMA and Bayview/Hunters Point are bad news. # # If you get stuck in any of these exercises, you can look at the [code associated with these notes](https://github.com/parrt/msds692/tree/master/notes/code/sfpd).
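As an aside, the `wc -l` record count from the top of the lab can be reproduced in Python without loading the whole file into memory; a minimal sketch with the standard `csv` module, shown on an in-memory sample rather than the real `/tmp/SFPD.csv`:

```python
import csv
import io

# Tiny in-memory stand-in for the SFPD export; with the real file you would
# open('/tmp/SFPD.csv') instead of using StringIO.
sample = io.StringIO(
    "Incident Category,Incident Description\n"
    "Larceny Theft,Theft From Locked Vehicle\n"
    "Assault,Battery\n"
)
reader = csv.reader(sample)
next(reader)                          # skip the header row
n_records = sum(1 for _ in reader)    # count the remaining data rows
print(n_records)                      # -> 2
```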
notes/sfpd.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Europe # This notebook contains functions that combine World Database of Protected Areas (WDPA) and Ramsar data for a country or continent to create a full protected-areas dataset for the desired region. It then calculates, by country, the total length of rivers on protected lands and the total area of protected lands affected by proposed dams for the chosen continent. # + # Get Europe figshare data to work # See if it runs with WDPA and ramsar merged # Imports import warnings import os import sys import numpy as np import numpy.ma as ma import pandas as pd import geopandas as gpd import matplotlib.pyplot as plt import matplotlib.patches as mpatches import matplotlib.lines as mlines import seaborn as sns from geopandas import GeoDataFrame as gdf from geopandas import GeoSeries as gs from shapely.geometry import Point, Polygon import earthpy as et import earthpy.plot as ep import contextily as ctx # - # Check path and set working directory.
wd_path = os.path.join(et.io.HOME, 'earth-analytics', 'data') if os.path.exists(wd_path): os.chdir(wd_path) else: print("Path does not exist") # + # Download Data stored on figshare # Free flowing rivers current DOR et.data.get_data(url="https://ndownloader.figshare.com/files/23273213") # Free flowing rivers future DOR et.data.get_data(url="https://ndownloader.figshare.com/files/23273216") # World Database of Protected Areas Europe et.data.get_data(url="https://ndownloader.figshare.com/files/23355098") # Ramsar Sites et.data.get_data(url="https://ndownloader.figshare.com/files/22507082") # Country boundaries et.data.get_data(url="https://ndownloader.figshare.com/files/22507058") # Continent boundaries et.data.get_data(url="https://ndownloader.figshare.com/files/23392280") # Continent-country csv et.data.get_data(url="https://ndownloader.figshare.com/files/23393756") # - # Custom function def all_pa_continent(wdpa_polys, ramsar_polys, cont_name): """ This function takes WDPA polygons for a continent and global ramsar polygons and returns a multipolygon feature of the World Database of Protected Areas merged with the ramsar areas for that continent. Parameters ---------- wdpa_polys: gdf The feature with the WDPA polygons for the selected continent. ramsar_polys: gdf The feature with all global ramsar polygons. cont_name: str The name of the selected continent. Returns ------- wdpa_ramsar: gdf A gdf of both the ramsar and WDPA protected areas for the continent.
""" # Remove ramsar areas from WDPA dataset try: wdpa_polys.set_index('DESIG', inplace=True) wdpa_polys.drop( "Ramsar Site, Wetland of International Importance", inplace=True) except: print('No ramsar areas in WDPA dataset.') # Remove duplicates from WDPA dataset (areas tagged by both state and local authorities) try: wdpa_polys.set_index('NAME', inplace=True) wdpa_polys.drop_duplicates(subset=None, keep='first', inplace=False) except: print('No duplicates in the WDPA dataset.') # Pull out the ramsar areas for the continent or country and merge with protected areas ramsar_polys = ramsar_polys[ramsar_polys["continent"] == cont_name] wdpa_ramsar = wdpa_polys.append(ramsar_polys, 'sort=True') return wdpa_ramsar # + # Open continent & country borders & ISOs country_borders = gpd.read_file(os.path.join(wd_path, "earthpy-downloads", "country-borders", "99bfd9e7-bb42-4728-87b5-07f8c8ac631c2020328-1-1vef4ev.lu5nk.shp")) continent_iso = pd.read_csv(os.path.join(wd_path, "earthpy-downloads", "continent-country.csv")) continent_borders = gpd.read_file(os.path.join(wd_path, "earthpy-downloads", "continent-poly", "Continents.shp")) # Reproject data to World Equidistant Cylindrical, datum WGS84, units meters, EPSG 4087 country_borders = country_borders.to_crs('epsg:4087') continent_borders = continent_borders.to_crs('epsg:4087') # + # Open the dams csv files with pandas fname = os.path.join("earthpy-downloads", "future_dams_2015.csv") df = pd.read_csv(fname) # Covert the pandas dataframe to a shapefile for plotting # Set output path for shp dams_path = os.path.join('earthpy-downloads') if not os.path.exists(dams_path): os.mkdir(dams_path) # Define the geometry for the points geometry = [Point(xy) for xy in zip(df.Lon_Cleaned, df.LAT_cleaned)] crs = {'init': 'epsg:4326'} geo_df = gdf(df, crs=crs, geometry=geometry) geo_df.to_file(driver='ESRI Shapefile', filename=os.path.join( dams_path, 'proposed_dams.shp')) # Open the proposed dams shapefile with geopandas dams_all = 
gpd.read_file(os.path.join(dams_path, "proposed_dams.shp")) # + # Open ramsar areas ramsar_polys = gpd.read_file(os.path.join( "earthpy-downloads", "ramsar-site-data", "ramsar-boundaries", "features_publishedPolygon.shp")) # Rename ramsar columns to match WDPA try: ramsar_polys = ramsar_polys.rename( columns={"iso3": "PARENT_ISO", "officialna": "NAME", "area_off": "Shape_Area"}) except KeyError: print('Ramsar column names already match WDPA dataset.') # Merge continent names with ramsar data for analyzing by continent ramsar_polys = pd.merge(ramsar_polys, continent_iso, left_on='PARENT_ISO', right_on='ISO3') # Data cleaning - take only necessary ramsar columns ramsar_polys = ramsar_polys[['NAME', 'PARENT_ISO', 'Shape_Area', 'continent', 'geometry']] # Reproject ramsar data to World Equidistant Cylindrical, datum WGS84, units meters, EPSG 4087 ramsar_polys = ramsar_polys.to_crs('epsg:4087') # - # Open current DOR shapefiles dor_0to5 = gpd.read_file(os.path.join(wd_path, "earthpy-downloads", "DOR_Binned", "DOR_0to5.shp")) dor_5to10 = gpd.read_file(os.path.join(wd_path, "earthpy-downloads", "DOR_Binned", "DOR_5to10.shp")) dor_10to15 = gpd.read_file(os.path.join(wd_path, "earthpy-downloads", "DOR_Binned", "DOR_10to15.shp")) dor_15to20 = gpd.read_file(os.path.join(wd_path, "earthpy-downloads", "DOR_Binned", "DOR_15to20.shp")) dor_over20 = gpd.read_file(os.path.join(wd_path, "earthpy-downloads", "DOR_Binned", "DOR_over20.shp")) # Get all current rivers in selected continent all_rivers_0to5_eur = dor_0to5[dor_0to5['CONTINENT'] == "Europe"] all_rivers_5to10_eur = dor_5to10[dor_5to10['CONTINENT'] == "Europe"] all_rivers_10to15_eur = dor_10to15[dor_10to15['CONTINENT'] == "Europe"] all_rivers_15to20_eur = dor_15to20[dor_15to20['CONTINENT'] == "Europe"] all_rivers_over20_eur = dor_over20[dor_over20['CONTINENT'] == "Europe"] # + # For loop to (1) calculate the difference in DOR between planned and current, (2) pull only rivers with class > 3, # (3) reproject rivers, (4) buffer by
0.1 km to become polys for overlay function, (5) pull only needed columns gdf_list = [all_rivers_0to5_eur, all_rivers_5to10_eur, all_rivers_10to15_eur, all_rivers_15to20_eur, all_rivers_over20_eur] river_list_prj = [] for shp in gdf_list: shp['DOR_DIFF'] = shp['DOR_PLA'] - shp['DOR'] shp = shp[shp.RIV_CLASS >= 3] shp = shp.to_crs('epsg:4087') shp['geometry'] = shp.buffer(100) shp = shp[['LENGTH_KM', 'RIV_ORD', 'RIV_CLASS', 'CONTINENT', 'ISO_NAME', 'BAS_NAME', 'DOR', 'DOR_PLA', 'DOR_DIFF', 'Shape_Leng', 'geometry']] river_list_prj.append(shp) # Re-assign names based on list index dor_0to5 = river_list_prj[0] dor_5to10 = river_list_prj[1] dor_10to15 = river_list_prj[2] dor_15to20 = river_list_prj[3] dor_over20 = river_list_prj[4] # Concatenate all rivers gdfs for easier analysis later all_rivers = pd.concat([dor_0to5, dor_5to10, dor_10to15, dor_15to20, dor_over20], axis=0) # Remove rivers that have DOR_DIFF of 0 all_rivers_lg = all_rivers[all_rivers.DOR_DIFF > 0] # + # Analyze Europe # Open WDPA polygons wdpa_eur_polys = gpd.read_file(os.path.join(wd_path, "earthpy-downloads", "WDPA_Europe", "WDPA_Europe.shp")) # Data cleaning - remove polygons with no area wdpa_eur_polys = wdpa_eur_polys[wdpa_eur_polys.geometry.notna()] # Merge continent names with WDPA data for analyzing by continent wdpa_eur_polys = pd.merge(wdpa_eur_polys, continent_iso, left_on='PARENT_ISO', right_on='ISO3') # Take only the columns we need wdpa_eur_polys = wdpa_eur_polys[[ 'NAME', 'DESIG', 'PARENT_ISO', 'Shape_Area', 'continent', 'geometry']] # Reproject WDPA data wdpa_eur_polys = wdpa_eur_polys.to_crs('epsg:4087') # Get the combined WDPA & ramsar areas for selected continent (used by the dams overlays below) wdpa_ramsar_europe = all_pa_continent(wdpa_eur_polys, ramsar_polys, "Europe") # - # Getting river length affected # Overlay current rivers on protected areas for selected continent to get ONLY rivers that overlap PAs river_overlap_eur = gpd.overlay( wdpa_eur_polys, all_rivers_lg, how='intersection') # Getting protected
areas affected # Overlay projected rivers on pas for selected continent to get ONLY pas that overlap rivers pa_overlap_eur = gpd.overlay( river_overlap_eur, wdpa_eur_polys, how='intersection') # + # Get a list of countries in each continent for calculating lengths/areas by country later eur_countries = continent_iso[continent_iso.continent == 'Europe'] # Create empty lists country_sums = [] area_sums = [] countries = [] # Sum up the total river length affected by country in the continent for country in eur_countries.ISO3: country_sums.append(( river_overlap_eur.loc[river_overlap_eur['PARENT_ISO'] == country, 'LENGTH_KM'].sum()).round(0)) area_sums.append(( pa_overlap_eur.loc[pa_overlap_eur['PARENT_ISO_1'] == country, 'Shape_Area_1'].sum()).round(0)) countries.append(country) # Create a pandas dataframe of lengths and areas affected eur_output = pd.DataFrame(list(zip(countries, country_sums, area_sums)), columns=[ 'Country', 'Affected_KM', 'Affected_Area']) # - # View data eur_output.head() # + # Remove rows with 0 values for rivers or areas. eur_affected = eur_output[eur_output.Affected_KM != 0] eur_affected = eur_affected[eur_affected.Affected_Area != 0] eur_affected # - # Export to csv.
affected_path = os.path.join(wd_path, "eur_affected.csv" ) eur_affected.to_csv(affected_path) # + # Create Graph # use white grid plot background from seaborn # sns.set(font_scale=1.5, style='whitegrid') fig, ax = plt.subplots(figsize=(20, 10)) ax.bar(eur_affected["Country"], eur_affected["Affected_KM"], color='blue', label="River Length") plt.legend(loc="upper right") ax.set(title="Length of Potential River Reaches impacted by Future Dam Construction\nEurope", xlabel="Country", ylabel="Kilometers") ax.text(0.5, -0.1, "Data Source: Free Flowing Rivers Database", size=12, ha="center", transform=ax.transAxes) # + # Create Graph #use white grid plot background from seaborn # sns.set(font_scale=1.5, style='whitegrid') fig, ax = plt.subplots(figsize=(20, 10)) ax.bar(eur_affected["Country"], eur_affected["Affected_Area"], color='orange', label="Land Area") plt.legend(loc="upper right") ax.set(title="Potential Protected Areas impacted by Future Dam Construction\nEurope", xlabel="Country", ylabel="Square Meters") ax.set_yscale("log") ax.text(0.5, -0.1, "Data Source: World Database of Protected Areas and Ramsar Site Database", size=12, ha="center", transform=ax.transAxes) # + # Turn europe dams from point to poly by buffering so that overlay function works dams_all = dams_all.set_geometry('geometry').to_crs('epsg:4087') dams_all['geometry'] = dams_all.buffer(50000) # Overlay proposed dams on protected areas for selected continent to get ONLY dams that overlap PAs europe_dams = gpd.overlay( dams_all, wdpa_ramsar_europe, how='intersection') # Overlay protected areas on proposed dams for selected continent to get ONLY pas with dams europe_pas_affected = gpd.overlay( wdpa_ramsar_europe, dams_all, how='intersection') # + # Export to csv. affected_dams_path = os.path.join(wd_path, "europe_dams_affected.csv" ) europe_dams.to_csv(affected_dams_path) # Export to csv. 
affected_pas_path = os.path.join(wd_path, "europe_pas_affected.csv" ) europe_pas_affected.to_csv(affected_pas_path) # - pas_with_dams_path = os.path.join(wd_path, "europe_pas_with_dams.csv" ) europe_pas_with_dams = gpd.sjoin(dams_all, wdpa_ramsar_europe, how="inner", op='intersects') europe_pas_with_dams.to_csv(pas_with_dams_path) # + # Get continent border for plotting eur_border = continent_borders[continent_borders['CONTINENT'] == "Europe"] # Buffer rivers for nicer map dor_0to5 = dor_0to5.buffer(5000) dor_5to10 = dor_5to10.buffer(5000) dor_10to15 = dor_10to15.buffer(5000) dor_15to20 = dor_15to20.buffer(5000) dor_over20 = dor_over20.buffer(5000) # + # Get continent dams for plotting eur_dams = dams_all[dams_all['Continent'] == "Europe"] eur_dams = eur_dams.to_crs('epsg:4087') # + # Plot all rivers and all protected areas for selected continent # Create legend black_line = mlines.Line2D([], [], color='black', label='Country Border') blue_line = mlines.Line2D([], [], color='blue', label='0 to 5') yellow_line = mlines.Line2D([], [], color='yellow', label='5 to 10') orange_line = mlines.Line2D([], [], color='orange', label='10 to 15') red_line = mlines.Line2D([], [], color='red', label='15 to 20') magenta_line = mlines.Line2D([], [], color='magenta', label='20 plus') lightgreen_patch = mpatches.Patch(color='lightgreen', label='Protected Area') black_dot = mlines.Line2D([], [], color='white', marker='o', markerfacecolor='gray', label='Proposed Dam Site') fig, ax = plt.subplots(figsize=(15, 15)) eur_border.plot(ax=ax, color="none", edgecolor="black", linewidth=2) wdpa_eur_polys.plot(ax=ax, color="lightgreen", edgecolor='none') dor_0to5.plot(ax=ax, color='blue', edgecolor='none') dor_5to10.plot(ax=ax, color='yellow', edgecolor='none') dor_10to15.plot(ax=ax, color='orange', edgecolor='none') dor_15to20.plot(ax=ax, color='red', edgecolor='none') dor_over20.plot(ax=ax, color='magenta', edgecolor='none') eur_dams.plot(ax=ax, color='gray', markersize=50) ctx.add_basemap(ax, 
crs='epsg:4087') ax.set_title( 'All Protected Areas, Proposed Dams, And Large Rivers (Class > 2)\n by Degree of Regulation in Europe', size=20) ax.set_axis_off() ax.legend(handles=[black_line, blue_line, yellow_line, orange_line, red_line, magenta_line, lightgreen_patch, black_dot], fontsize=15, frameon=True, loc=('lower right'), title="LEGEND") # + fig, ax1 = plt.subplots(figsize=(20, 10)) plt.title("Potential River Reaches and Protected Areas in Europe\nImpacted by Future Dam Construction", size=30) ax1.set_xlabel('Country') ax1.set_ylabel('Area (sq. meters)', color='orange') ax1.bar(eur_affected["Country"], eur_affected["Affected_Area"], color='orange', label="Land Area") ax1.tick_params(axis='y') ax1.grid(False) ax2 = ax1.twinx() ax2.set_ylabel('Length (km)', color='blue') ax2.plot(eur_affected["Country"], eur_affected["Affected_KM"], marker='o', color='blue', markersize=10, linewidth=5, label="River Length") ax2.tick_params(axis='y') ax2.grid(False) ax2.text(0.5, -0.2, "Data Source: Free Flowing Rivers Database, World Database of Protected Areas, and Ramsar Site Database", size=12, ha="center", transform=ax1.transAxes) plt.show() # -
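The per-country summation loop above can also be written as a single `groupby`; a minimal sketch on a toy frame with the same column names as `river_overlap_eur`:

```python
import pandas as pd

# Toy stand-in for the river/PA overlay output used in the loop above.
river_overlap = pd.DataFrame({
    'PARENT_ISO': ['FRA', 'FRA', 'ESP'],
    'LENGTH_KM': [10, 5, 7],
})
# Equivalent of appending .loc[PARENT_ISO == country, 'LENGTH_KM'].sum()
# for every country in the continent:
km_by_country = river_overlap.groupby('PARENT_ISO')['LENGTH_KM'].sum()
print(km_by_country.to_dict())        # -> {'ESP': 7, 'FRA': 15}
```

This avoids the explicit country loop and automatically skips countries with no affected rivers.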
final-notebooks/dam-impacts-on-protected-areas-calc-europe.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # $\Huge Code$ $\hspace{0.1cm}$ $\Huge to$ $\hspace{0.1cm}$ $\Huge do$ $\hspace{0.1cm}$ $\Huge map$ $\hspace{0.1cm}$ $\Huge cutouts$ $\Huge :$ # # Modules : # %matplotlib inline import healpy as hp import matplotlib.pyplot as plt from matplotlib import rc rc('text', usetex=True) from astropy.io import fits import numpy as np from astropy import constants as cst from astropy.cosmology import FlatLambdaCDM import pysm as pysm import ccatp_sky_model as sky cosmo = FlatLambdaCDM(H0=70, Om0=0.3, Tcmb0=2.7255) T_CMB = cosmo.Tcmb0.si.value k_B = cst.k_B.value h = cst.h.value c = cst.c.value data_path = "/vol/arc3/data1/sz/CCATp_sky_model/templates/" # # Function : def project_maps(allsky_map, RA, DEC, map_size = 10, pixel_size = 0.4): '''Creates gnomonic projections of HEALPix all-sky maps. Parameters ---------- allsky_map: float array numpy array containing a healpy all-sky map with a valid nside RA: float or float array Right ascension of objects; fk5 coordinates are required. DEC: float or float array Declination of objects; fk5 coordinates are required. map_size: float, optional Size of the desired projected map in degree, map will be square. Default: 10 pixel_size: float, optional Pixel size of the desired projected map in arcmin.
Default: 0.4 Returns ------- maps: float array Array containing the projected maps ''' RA = np.atleast_1d(RA) DEC = np.atleast_1d(DEC) n = len(RA) npix = int(map_size*60 / pixel_size) maps = np.zeros((n, npix, npix), dtype=np.float32) for i in np.arange(n): maps[i,:,:] = hp.visufunc.gnomview(allsky_map, coord = ['G', 'C'], rot=[RA[i],DEC[i]], reso = pixel_size, xsize = npix, return_projected_map = True, no_plot=True) return(maps) # # Launch : # ## Example map : CMB CMB_path = 'CMB/CMB_unlensed_CITA_mK.fits' CMB = hp.read_map(data_path + CMB_path, dtype = np.float32) hp.mollview(CMB, title="$CMB$ $map$ $from$ $WebSky$", norm='hist',unit='$\mu K_{CMB}$') # ## Applying the function Cutout = project_maps(allsky_map=CMB, RA=[123], DEC=[-81], map_size = 10, pixel_size = 0.4) plt.imshow(Cutout[0])
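The cutout dimensions follow directly from the `map_size`/`pixel_size` arithmetic inside `project_maps`; with the defaults used above:

```python
# npix = map_size (deg) * 60 (arcmin/deg) / pixel_size (arcmin/pixel)
map_size = 10       # degrees on a side
pixel_size = 0.4    # arcmin per pixel
npix = int(map_size * 60 / pixel_size)
print(npix)         # -> 1500, i.e. a 1500 x 1500 pixel cutout
```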
examples/project_maps.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # - <b>Classification algorithms</b> # # - <b>Submitted by Kaushik </b> # - <b>email- <EMAIL></b> # # ## Self-Organizing Map (SOM) # # import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline ds = pd.read_csv("Dataset_II.csv") ds1 = pd.read_csv("Dataset_II.csv") ds.head() # # Data preprocessing ds.columns ds.drop(["Unnamed: 0","Purpose"],axis=1,inplace=True) ds.head() ds_null=ds.columns[ds.isnull().any()] ds[ds_null].isnull().sum() # ds["Saving accounts"].fillna(ds["Saving accounts"].mean(), inplace=True) # ds["Checking account"].fillna(ds["Checking account"].mean(), inplace=True) ds["Saving accounts"].fillna(method='bfill',inplace=True) ds["Checking account"].fillna(method='ffill',inplace=True) ds.isnull().sum() ds.head() ds.dtypes ds_categ = list(ds.select_dtypes(exclude = ["number"]).columns) ds_categ from sklearn.preprocessing import LabelEncoder le = LabelEncoder() for i in ds_categ: print(ds[i].unique()) ds[i] = le.fit_transform(ds[i]) ds.head(20) # ds_copy = ds.copy() # ds_copy.head() ds.tail() ds.dtypes # + X=ds.iloc[:, :].values y= np.genfromtxt('Dataset_II.csv', delimiter=',', usecols=(9), dtype=str) t = np.zeros(len(y), dtype=int) t[y == 'radio/TV'] = 0 t[y == 'education'] = 1 t[y == 'furniture/equipment'] = 2 t[y == 'car'] = 3 t[y == 'business'] = 4 t[y == 'repairs'] = 5 t[y == 'vacation/others'] = 6 t[y == 'domestics appliances'] = 7 # - # Feature scaling from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler(feature_range=(0,1)) X = scaler.fit_transform(X) # Training the SOM from minisom import MiniSom som=MiniSom(x=10,y=10,input_len=8,sigma=1.0,learning_rate=0.5) som.random_weights_init(X) som.train_random(data=X,num_iteration=100) # Visualizing the result from pylab
import bone, pcolor, colorbar, plot, show # + bone() pcolor(som.distance_map().T) colorbar() markers=['o','s','+','-','*','D','p','-s'] colors=['C0','C1','C2','C3','C4','C5','C6','C7'] plt.figure(figsize=(10, 10)) for i,x in enumerate(X): w=som.winner(x) plot(w[0]+0.5, w[1]+0.5, markers[t[i]], markeredgecolor=colors[t[i]],markerfacecolor='None',markersize=10,markeredgewidth=2) show() plt.show() # + bone() pcolor(som.distance_map().T) colorbar() markers=['o','s','+','-','*','D','p','-s'] colors=['C0','C1','C2','C3','C4','C5','C6','C7'] for i,x in enumerate(X): w=som.winner(x) plot(w[0]+0.5, w[1]+0.5, markers[t[i]], markeredgecolor=colors[t[i]],markerfacecolor='None',markersize=10,markeredgewidth=2) show() # - print("Training...") som.train_batch(X, 1000, verbose=True) # batch training print("\n...ready!") # - A self-organizing map (SOM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map, and is therefore a method to do dimensionality reduction. # - With the help of the SOM, different clusters are created, and each cluster shows the types of loans it can contain. # - 'radio/TV' = 0 ---->'o'-->'C0' # - 'education' = 1 ---->'s'-->'C1' # - 'furniture/equipment' = 2 ---->'+'-->'C2' # - 'car' = 3 ---->'-'-->'C3' # - 'business' = 4 ---->'*'-->'C4' # - 'repairs' = 5 ---->'D'-->'C5' # - 'vacation/others' = 6 ---->'p'-->'C6' # - 'domestics appliances' = 7 ---->'-s'-->'C7' # # - Some clusters can contain a number of loan types, e.g. a cluster can contain radio/TV, education, car, etc.
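The `MinMaxScaler(feature_range=(0, 1))` used above maps each feature column to [0, 1] via (x − min) / (max − min); a plain-Python sketch of that transform for one toy column:

```python
# One toy feature column.
col = [2.0, 4.0, 10.0]
lo, hi = min(col), max(col)
scaled = [(x - lo) / (hi - lo) for x in col]
print(scaled)        # -> [0.0, 0.25, 1.0]
```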
label = np.genfromtxt('Dataset_II.csv', delimiter=',', usecols=(9), dtype=str) import matplotlib.pyplot as plt from matplotlib import gridspec # + X=np.genfromtxt('Dataset_II.csv', delimiter=',', usecols=(1, 2, 3, 4, 5, 6, 7, 8)) from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler(feature_range=(0,1)) X = scaler.fit_transform(X) label = np.genfromtxt('Dataset_II.csv', delimiter=',', usecols=(9), dtype=str) labels_map = som.labels_map(X, label) label_names = np.unique(label) plt.figure(figsize=(10, 10)) the_grid = gridspec.GridSpec(10, 10) for position in labels_map.keys(): label_fracs = [labels_map[position][l] for l in label_names] plt.subplot(the_grid[9 - position[1], position[0]], aspect=1) patches, texts = plt.pie(label_fracs) plt.legend(patches, label_names, bbox_to_anchor=(0., 6.5), ncol=3) plt.show() # -
Assignment6/Project2/Assignment_06-Project-2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Max likelihood # + # imports from importlib import reload import pandas import numpy as np from scipy import special import seaborn as sns import mpmath from matplotlib import pyplot as plt # - # # Generate a faux sample # # ## Let $E_{\rm max} = 10^{50}$ and $E_{\rm th} = 10^{40}$ gamma = -2. NFRB = 100 lEmax = 50. Emax = 10**lEmax lEth = 40. Eth = 10**lEth norm = (Emax**(gamma+1) - Eth**(gamma+1))/(1+gamma) norm randu = np.random.uniform(size=NFRB) randE = (randu*(gamma+1)*norm + 10**(lEth*(gamma+1)))**(1/(1+gamma)) df = pandas.DataFrame() df['E'] = randE df['logE'] = np.log10(randE) sns.histplot(data=df, x='logE') # # Max Likelihood Time! # ## Methods # # ### We express the log-likelihood as # # ## $\ln \mathcal{L} = - \int\limits_{E_{\rm th}}^{E_{\rm max}} p(E) dE + \sum\limits_{j=1}^N \ln p(E_j)$ # # ### where $j$ is over all the $N$ FRBs and $p(E) = C E^\gamma$ # ## Likelihood terms # ### This term accounts for the total space explored.
It *decreases* with increasing Emax def misses_term(C, Eth, Emax, gamma): return -C * (Emax**(gamma+1) - Eth**(gamma+1)) / (1+gamma) # ### This term is simply proportional to the probability def hit_term(Eval, C, gamma): NFRB = len(Eval) fterm = NFRB * np.log(C) sterm = gamma * np.sum(np.log(Eval)) return fterm + sterm def guess_C(gamma, Emax, Eth, NFRB): return NFRB * (gamma+1) / (Emax**(gamma+1) - Eth**(gamma+1)) # ## Test case $E_{max} = 10^{42}$ Emax = 1e42 #Eth = 1e40 guessC = guess_C(gamma, Emax, Eth, NFRB) guessC logC = np.log10(guessC) Cvals = 10**(np.linspace(logC-1, logC+1, 1000)) LL_C = misses_term(Cvals, Eth, Emax, gamma) + hit_term(df.E, Cvals, gamma) sns.lineplot(x=np.log10(Cvals), y=LL_C) Cmax = Cvals[np.argmax(LL_C)] Cmax # ## Loop a bit # + LLs = [] Emaxs = 10**(np.linspace(42., 47., 100)) Cmaxs = [] for Emax in Emaxs: guessC = guess_C(gamma, Emax, Eth, NFRB) logC = np.log10(guessC) Cvals = 10**(np.linspace(logC-1, logC+1, 1000)) # misses = misses_term(Cvals, Eth, Emax, gamma) hits = hit_term(df.E, Cvals, gamma) LL_C = misses + hits #print(guessC, Cvals[np.argmax(LL_C)]) imax = np.argmax(LL_C) LLs.append(np.max(LL_C)) Cmaxs.append(Cvals[imax]) #print(misses[imax], hits[imax]) LLs = np.array(LLs) Cmaxs = np.array(Cmaxs) # - # ## Plot ax = sns.lineplot(x=np.log10(Emaxs), y=Cmaxs) ax.set_xlabel('log10 Emax') ax.set_ylabel(r' $C$') ax = sns.lineplot(x=np.log10(Emaxs), y=LLs - np.max(LLs)) ax.set_xlabel('log10 Emax') ax.set_ylabel(r' $\Delta \, LL$') # ### Clearly $\Delta LL$ is small (less than 1!) for all Emax values and there is no preference beyond 1e45. # ### This follows our intuition. # ---- # # Alternative approach # # ## $\ln \mathcal{L} = \ln p_n(N) + \sum\limits_j^{N} \ln p_j(E)$ # # ## with $p_j(E)$ normalized to unity # # ## As with the FRBs, we will assume we have another normalization constant (not $C$) that we can tune to give $N$ events.
# # ## Therefore, we can always maximize $p_n(N)$ def norm_pE(Eth, Emax, gamma): norm = (Emax**(1+gamma) - Eth**(1+gamma))/(1+gamma) return norm # + LLs2 = [] #Emaxs = 10**(np.linspace(42., 47., 100)) Cmaxs = [] for Emax in Emaxs: # norm = norm_pE(Eth, Emax, gamma) #print(guessC, Cvals[np.argmax(LL_C)]) pE = df.E**gamma / norm # LLs2.append(np.sum(np.log(pE))) LLs2 = np.array(LLs2) #Cmaxs = np.array(Cmaxs) # - ax = sns.lineplot(x=np.log10(Emaxs), y=LLs2 - np.max(LLs2), label='CJ version') ax = sns.lineplot(x=np.log10(Emaxs), y=LLs - np.max(LLs), label='x version') ax.set_xlabel('log10 Emax') ax.set_ylabel(r' $\Delta \, LL$') ax.legend() # ---- # # Gamma function def Gamma_misses_term(C, Eth, Emax, gamma): norm = float(mpmath.gammainc(gamma+1, a=Eth/Emax)) # Emax terms cancel return -(C/Emax) * norm def Gamma_hit_term(Eval, C, gamma, Emax): NFRB = len(Eval) fterm = NFRB * (np.log(C) - 2*np.log(Emax)) sterm= np.sum(np.log((Eval/Emax)**(gamma) * np.exp(-Eval/Emax))) #import pdb; pdb.set_trace() return fterm + sterm def Gamma_guess_C(gamma, Emax, Eth, NFRB): return NFRB * Emax / float(mpmath.gammainc(gamma+1, a=Eth/Emax)) gamma # ## Do it # + LLsG = [] Emaxs = 10**(np.linspace(42., 47., 100)) Cmaxs = [] for Emax in Emaxs: guessC = Gamma_guess_C(gamma, Emax, Eth, NFRB) logC = np.log10(guessC) Cvals = 10**(np.linspace(logC-1, logC+1, 1000)) # misses = Gamma_misses_term(Cvals, Eth, Emax, gamma) hits = Gamma_hit_term(df.E, Cvals, gamma, Emax) LL_C = misses + hits #import pdb; pdb.set_trace() #print(guessC, Cvals[np.argmax(LL_C)]) imax = np.argmax(LL_C) LLsG.append(np.max(LL_C)) Cmaxs.append(Cvals[imax]) #print(misses[imax], hits[imax]) LLsG = np.array(LLsG) Cmaxs = np.array(Cmaxs) # - Gamma_guess_C(gamma, 1e44, Eth, NFRB) ax = sns.lineplot(x=np.log10(Emaxs), y=Cmaxs) ax.set_xlabel('log10 Emax') ax.set_ylabel(r' $C$') ax = sns.lineplot(x=np.log10(Emaxs), y=LLsG - np.max(LLsG), label='Gamma function') #ax = sns.lineplot(x=np.log10(Emaxs), y=LLs - np.max(LLs), label='x 
version') ax.set_xlabel('log10 Emax') ax.set_ylabel(r' $\Delta \, LL$') #ax.set_ylim(-1., 0.) ax.legend()
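The faux sample at the top of the notebook is drawn by inverse-transform sampling of $p(E) \propto E^\gamma$; a quick endpoint sanity check of that transform (with smaller numbers than the notebook's $10^{40}$–$10^{50}$, to stay in a comfortable float range):

```python
import math

gamma, Eth, Emax = -2.0, 1e1, 1e5
norm = (Emax**(gamma + 1) - Eth**(gamma + 1)) / (gamma + 1)

def inv_cdf(u):
    # Same transform as randE above, with u ~ Uniform(0, 1):
    # E(u) = (u*(gamma+1)*norm + Eth**(gamma+1)) ** (1/(gamma+1))
    return (u * (gamma + 1) * norm + Eth**(gamma + 1)) ** (1 / (gamma + 1))

print(inv_cdf(0.0), inv_cdf(1.0))   # u=0 maps to Eth, u=1 maps to Emax
```

The endpoints confirm the sample is confined to $[E_{\rm th}, E_{\rm max}]$ by construction.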
docs/nb/Max_Like.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### This code does not run on Colab import pandas as pd pd.options.display.float_format = '{:,.1f}'.format import numpy as np import matplotlib.pyplot as plt import dataframe_image as dfi # # 1. Avalanche df Preparation # url of Zip file: # https://www.doi.org/10.16904/envidat.134 # # File used in zip file: data_set_1_avalanche_observations_wi9899_to_wi1819_davos.csv # + #Information on the columns of the original CSV file # CSV FILE UNIT DESCRIPTION # no Avalanche number # date_release Date of release # snow_type Type of snow: "dry", "wet", "mixed", "unknown" # trigger_type Type of trigger: "HUMAN", "NATURAL", "EXPLOSIVE", "UNKNOWN" # max_elevation_m m Avalanche Maximum elevation - NOT USED # min_elevation_m m Avalanche Minimum elevation - NOT USED # aspect_degrees degree Avalanche Aspect - NOT USED # length_m m Avalanche length - NOT USED # width_m m Avalanche width - NOT USED # perimeter_length_m m Avalanche perimeter - NOT USED # area_m2 m2 Avalanche surface area # aval_size_class Avalanche size class (1 to 5) - NOT USED # weight_AAI Avalanche activity index (AAI) - NOT USED # max.danger.corr Avalanche danger level on the day (1 to 5) # - # import avalanche data df_avalanche = pd.read_csv('../DataSet_Avalanche_Original_EnviDat/data_set_1_avalanche_observations_wi9899_to_wi1819_davos.csv', sep = ';') df_avalanche # Rename columns df_avalanche.rename(columns = {'date_release':'Date', 'snow_type':'Snow_type', 'trigger_type':'Trigger_type', 'area_m2': 'Avalanche_size_m2', 'max.danger.corr':'Avalanche_danger_level'}, inplace = True) # Remove unused columns df_avalanche.drop(columns = {'no','max_elevation_m', 'min_elevation_m', 'aspect_degrees', 'length_m', 'width_m', 'perimeter_length_m', 'aval_size_class', 'weight_AAI'}, inplace = True) # convert
column Date to datetime format df_avalanche['Date']= pd.to_datetime(df_avalanche['Date']) # Remove NaN value df_avalanche = df_avalanche.dropna() # Change one column to int df_avalanche['Avalanche_danger_level'] = df_avalanche['Avalanche_danger_level'].astype(int) # Export example of the data dfi.export(df_avalanche.iloc[11141:11147], '../Tables/df_avalanche.png', max_rows= 7) df_avalanche.iloc[11141:11147] # One wrong date was recorded # found because in Meteo df there was a big snow fall between 2005_12_16 and 2005_12_17, with more than 50cm of new snow # it is not possible that 240 avalanches are happening only in the beginning of this snow fall # Print df_avalanche between 2 dates, as example of the data start_date = "2005-12-15" end_date = "2005-12-19" after_start_date = df_avalanche['Date'] >= start_date before_end_date = df_avalanche['Date'] <= end_date between_two_dates = after_start_date & before_end_date df_avalanche.loc[between_two_dates] # correct the wrong date (the Date column is datetime now, so match Timestamps rather than strings) df_avalanche['Date'] = df_avalanche['Date'].replace(pd.Timestamp('2005-12-16'), pd.Timestamp('2005-12-17')) # Set index = 'Date' df_avalanche = df_avalanche.set_index('Date') #Start_date = df_avalanche.index[0] #End_date = df_avalanche.index[-1] #print('Davos Avalanche Record Timeframe: ' + str(Start_date) + ' to ' + str(End_date)) #print('Total number of avalanches recorded: ' + str(df_avalanche.Snow_type.size)) #print('Total number of "Dry Snow" avalanches : ' + str(df_avalanche[df_avalanche['Snow_type'] == 'dry'].Snow_type.count())) #print('Total number of "Wet Snow" avalanches : ' + str(df_avalanche[df_avalanche['Snow_type'] == 'wet'].Snow_type.count())) #print('Total number of "Mixed Snow" avalanches : ' + str(df_avalanche[df_avalanche['Snow_type'] == 'mixed'].Snow_type.count())) #print('Total number of "Unknown Snow" avalanches : ' + str(df_avalanche[df_avalanche['Snow_type'] == 'unknown'].Snow_type.count())) # + # Rearrange the df so that the number of each type of avalanche can be counted df_avalanche['Num_wet'] =
df_avalanche['Snow_type']* (df_avalanche['Snow_type'] == 'wet') df_avalanche['Num_wet'] = df_avalanche['Num_wet'] == 'wet' df_avalanche['Num_wet'] = df_avalanche['Num_wet'].astype(int) df_avalanche['Num_dry'] = df_avalanche['Snow_type']* (df_avalanche['Snow_type'] == 'dry') df_avalanche['Num_dry'] = df_avalanche['Num_dry'] == 'dry' df_avalanche['Num_dry'] = df_avalanche['Num_dry'].astype(int) df_avalanche['Num_mixed'] = df_avalanche['Snow_type']* (df_avalanche['Snow_type'] == 'mixed') df_avalanche['Num_mixed'] = df_avalanche['Num_mixed'] == 'mixed' df_avalanche['Num_mixed'] = df_avalanche['Num_mixed'].astype(int) df_avalanche['Num_unknown'] = df_avalanche['Snow_type']* (df_avalanche['Snow_type'] == 'unknown') df_avalanche['Num_unknown'] = df_avalanche['Num_unknown'] == 'unknown' df_avalanche['Num_unknown'] = df_avalanche['Num_unknown'].astype(int) df_avalanche['Num_Natural'] = df_avalanche['Trigger_type']* (df_avalanche['Trigger_type'] == 'NATURAL') df_avalanche['Num_Natural'] = df_avalanche['Num_Natural'] == 'NATURAL' df_avalanche['Num_Natural'] = df_avalanche['Num_Natural'].astype(int) df_avalanche['Num_Human'] = df_avalanche['Trigger_type']* (df_avalanche['Trigger_type'] == 'HUMAN') df_avalanche['Num_Human'] = df_avalanche['Num_Human'] == 'HUMAN' df_avalanche['Num_Human'] = df_avalanche['Num_Human'].astype(int) df_avalanche['Num_Explosive'] = df_avalanche['Trigger_type']* (df_avalanche['Trigger_type'] == 'EXPLOSIVE') df_avalanche['Num_Explosive'] = df_avalanche['Num_Explosive'] == 'EXPLOSIVE' df_avalanche['Num_Explosive'] = df_avalanche['Num_Explosive'].astype(int) df_avalanche['Num_Unknown_t'] = df_avalanche['Trigger_type']* (df_avalanche['Trigger_type'] == 'UNKNOWN') df_avalanche['Num_Unknown_t'] = df_avalanche['Num_Unknown_t'] == 'UNKNOWN' df_avalanche['Num_Unknown_t'] = df_avalanche['Num_Unknown_t'].astype(int) df_avalanche = df_avalanche.reset_index() df_avalanche # - # Write processed df_avalanche data to csv file 
df_avalanche.to_csv('../Processed_DataSets/Avalanches.csv', index=False)

# Change columns to int
df_avalanche['Avalanche_danger_level'] = df_avalanche['Avalanche_danger_level'].astype(int)
df_avalanche['Avalanche_size_m2'] = df_avalanche['Avalanche_size_m2'].astype(int)

# Export example of data to file
dfi.export(df_avalanche.iloc[11141:11147], '../Tables/df_avalanche_processed.png', max_rows=7)
df_avalanche.iloc[11141:11147]

# +
# Description of the columns of the processed file Avalanches.csv
#
# DF_AVALANCHE                UNIT   DESCRIPTION
# Date                               Date of release
# no                                 Avalanche number (NOT USED)
# Snow_type                          Type of snow: "dry", "wet", "mixed", "unknown"
# Trigger_type                       Type of trigger: "HUMAN", "NATURAL", "EXPLOSIVE", "UNKNOWN"
# Avalanche_size_m2           m2     Avalanche area
# Avalanche_danger_level             Avalanche danger level on the day (1 to 5)
# Num_wet                            Count of "wet" type avalanches
# Num_dry                            Count of "dry" type avalanches
# Num_mixed                          Count of "mixed" type avalanches
# Num_unknown                        Count of "unknown" type avalanches
# Num_Natural                        Count of "NATURAL" triggered avalanches
# Num_Human                          Count of "HUMAN" triggered avalanches
# Num_Explosive                      Count of "EXPLOSIVE" triggered avalanches
# Num_Unknown_t                      Count of "UNKNOWN" triggered avalanches
# -

# # 2. Some Avalanche Statistics Plots

# ### Histogram - Box plot of avalanche size

df_Level = df_avalanche.set_index(['Avalanche_danger_level', 'Snow_type', 'Date']).copy()

# **Statistics about "dry" avalanche sizes**

df_Level.xs('dry', level='Snow_type').Avalanche_size_m2.describe()

# **Statistics about "wet" avalanche sizes**

df_Level.xs('wet', level='Snow_type').Avalanche_size_m2.describe()

# **Box plot - Avalanche size**

# +
fig, ax1 = plt.subplots(figsize=(6, 10))
labels = ['wet avalanches', 'dry avalanches']

# Rectangular box plot
bplot1 = ax1.boxplot([df_Level.xs('wet', level='Snow_type').Avalanche_size_m2,
                      df_Level.xs('dry', level='Snow_type').Avalanche_size_m2],
                     showfliers=True,
                     vert=True,
                     patch_artist=True,
                     labels=labels)
ax1.set_title('Size of avalanches - WITH outlier points', fontdict={'fontsize': 16}, pad=15)

# Fill with colors
colors = ['pink', 'lightblue']
for patch, color in zip(bplot1['boxes'], colors):
    patch.set_facecolor(color)

# Add horizontal grid lines
ax1.yaxis.grid(True)
ax1.set_ylabel('Size in m2', fontdict={'fontsize': 12})

plt.savefig('../Plots/Avalanche_size_box1.jpg')
plt.show()
# -

# **Box plot - Avalanche size - Without outliers**

fig, ax2 = plt.subplots(figsize=(6, 8))
labels = ['wet avalanches', 'dry avalanches']

# Rectangular box plot
bplot1 = ax2.boxplot([df_Level.xs('wet', level='Snow_type').Avalanche_size_m2,
                      df_Level.xs('dry', level='Snow_type').Avalanche_size_m2],
                     showmeans=True,
                     showfliers=False,
                     vert=True,
                     patch_artist=True,
                     labels=labels)
ax2.set_title('Size of avalanches - WITHOUT outlier points', fontdict={'fontsize': 16}, pad=15)

# Fill with colors
colors = ['pink', 'lightblue']
for patch, color in zip(bplot1['boxes'], colors):
    patch.set_facecolor(color)

# Add horizontal grid lines
ax2.yaxis.grid(True)
ax2.set_ylabel('Size in m2', fontdict={'fontsize': 12})

plt.savefig('../Plots/Avalanche_size_box2.jpg')
plt.show()

# +
fig, ax3 = plt.subplots(figsize=(10, 5))

ax3.hist(df_Level.xs('wet', level='Snow_type').Avalanche_size_m2, bins=50, color='pink',
         range=(0, 35000), fill=False, histtype='step', label='Wet avalanches')
ax3.hist(df_Level.xs('dry', level='Snow_type').Avalanche_size_m2, bins=50, color='lightblue',
         range=(0, 35000), fill=False, histtype='step', label='Dry avalanches')
ax3.set_title('Histogram - Size of avalanches - WITHOUT outlier points', fontdict={'fontsize': 16}, pad=15)
ax3.set_xlabel('Size in m2', fontdict={'fontsize': 12})
ax3.legend()

plt.savefig('../Plots/Avalanche_size_hist.jpg')
plt.show()
# -

# ### Total number of avalanches by year

df_Level_Years = df_Level.reset_index()
df_Level_Years = df_Level_Years.set_index('Date')
del df_Level_Years['Num_mixed']
del df_Level_Years['Num_unknown']
del df_Level_Years['Num_Explosive']
del df_Level_Years['Num_Unknown_t']
df_Level_Years = df_Level_Years.resample('1Y').agg({'Avalanche_size_m2': 'mean',
                                                    'Num_Natural': 'sum',
                                                    'Num_Human': 'sum',
                                                    'Avalanche_danger_level': 'count'})

# +
# General plotting variables
Title_size = {'fontsize': 20, 'fontweight': 'demibold'}
Legend_size = {'fontsize': 16}
OffsetAxes = 15
OffsetTitle = 30
FigWidth = 20
FigHeigth = 5
plt.style.use(r'..\..\..\.matplotlib\stylelib\scientific2.mplstyle')

# Define the variables for this plot
V1 = 'Human'
V2 = 'Natural'

# Define the labels and the x and y values
labels = ['2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019']
x = np.arange(len(labels))  # the label locations
width = 0.35                # the width of the bars

fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, df_Level_Years.Num_Human.iloc[-10:], width, label=str(V1))
rects2 = ax.bar(x + width/2, df_Level_Years.Num_Natural.iloc[-10:], width, label=str(V2))

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_xlabel('Year', fontdict=Legend_size, labelpad=OffsetAxes)  # the bars are grouped by year
ax.set_ylabel('Number', fontdict=Legend_size, labelpad=OffsetAxes)
ax.set_ylim(top=1000)
ax.set_title('Number of avalanches Human/Natural causes', fontdict=Title_size, pad=OffsetTitle)
ax.set_xticks(x)
ax.set_xticklabels(labels, fontdict=Legend_size)
ax.legend()

fig.tight_layout()
fig.set_figwidth(FigWidth)
fig.set_figheight(FigHeigth)

plt.savefig('../Plots/Num_of_avlanches_HUMAN-NATURAL.jpg')
plt.show()
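# The yearly aggregation above hinges on `resample('1Y').agg(...)`: indicator columns are summed per year while sizes are averaged. As a side note, here is a minimal toy sketch of that pattern; the dates and values are invented for illustration and are not part of the real dataset.

```python
import pandas as pd

# Toy daily records standing in for the avalanche frame (values invented)
idx = pd.to_datetime(['2018-01-05', '2018-03-10', '2019-02-01', '2019-12-31'])
df = pd.DataFrame({'Num_Human': [1, 0, 1, 1],
                   'Avalanche_size_m2': [100, 300, 50, 150]}, index=idx)

# One row per calendar year: counts are summed, sizes averaged
yearly = df.resample('1Y').agg({'Num_Human': 'sum', 'Avalanche_size_m2': 'mean'})
print(yearly['Num_Human'].tolist(), yearly['Avalanche_size_m2'].tolist())  # [1, 2] [200.0, 100.0]
```

The same `agg` dictionary idea extends to any mix of per-column aggregations, which is exactly how `df_Level_Years` mixes `'sum'`, `'mean'` and `'count'` above.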
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Optimization and Topological Data Analysis

# <NAME>, https://mathieucarriere.github.io/website/

# In this notebook, we are going to see how to combine Gudhi and Tensorflow in order to do persistence optimization!

import numpy as np
import matplotlib.pyplot as plt
import gudhi as gd
import tensorflow as tf
import tensorflow_addons as tfa
import pandas as pd

# ## Point cloud optimization

# You might have already seen a few examples of persistence diagram computations on point clouds.
# Among the different possibilities, the Rips filtration is the most common option due to its
# simplicity and easy implementation. In this notebook, we will see how Gudhi and Tensorflow can
# be combined to perform optimization of persistence diagrams.

# Let's have a quick refresher ;-)

# First, let's generate a random point cloud in the unit square and visualize it.

X = np.random.uniform(size=[300,2])
plt.scatter(X[:,0], X[:,1], s=3)
plt.show()

# Yep, looks pretty random indeed. Let's now compute its Rips persistence diagram.
# This is literally two lines of code with Gudhi :-)

st = gd.RipsComplex(points=X, max_edge_length=1.).create_simplex_tree(max_dimension=2)
dgm = st.persistence()
plot = gd.plot_persistence_diagram(dgm)

# As usual, there is one point (in dimension 0) at $+\infty$ which represents the whole connected
# component, a bunch of points in dimension 0 with abscissa 0 (actually, as many as there are
# points in the initial point cloud), and some points in dimension 1. There is nothing else
# because we only kept the 2-skeleton of the Rips complex in the previous cell.

# This is great. But have you ever thought about the inverse problem? That is, can you tweak the
# point cloud so that the corresponding persistence diagram satisfies some properties? That sounds hard.
# Turns out it is not, if you combine Gudhi and Tensorflow ;-)

# Before jumping to the code, let's think about what's happening here. If you think about how
# persistence is computed, the coordinates of any point $p$ in a persistence diagram are actually
# given by the filtration values of two very specific simplices: the so-called positive and
# negative simplices of $p$, denoted by $\sigma_+(p)$ and $\sigma_-(p)$ (check chapter VII.1 in
# [this reference book](https://books.google.com/books/about/Computational_Topology.html?id=MDXa6gFRZuIC)
# for more details if you feel lost). So, we have: $$p=(f(\sigma_+(p)), f(\sigma_-(p))),$$
# where $f$ is the filtration function. This means that if $f$ is actually defined by some
# parameters $f = f_\theta$, then the gradient $\nabla_\theta p$ is actually given by
# $\nabla_\theta f_\theta(\sigma_+(p))$ and $\nabla_\theta f_\theta(\sigma_-(p))$.

# Interesting, but what does this look like for the Rips filtration? Well, first, the parameters
# $\theta$ are now the positions of the points in the point cloud. Second, as you may recall, the
# filtration value of any simplex is simply the maximal distance between any two vertices in the
# simplex: $f(\{v_0,\cdots, v_n\})=\|v_a-v_b\|$, $0\leq a,b\leq n$, where
# $\|v_a-v_b\|\geq \|v_i-v_j\|$, $\forall 0\leq i,j \leq n$. This has two consequences. First,
# this means that one can create the persistence diagram by simply picking entries of the
# distance matrix between the points. Second, this also means that the gradient of $f$ only
# depends on the positions of $v_a$ and $v_b$: $\nabla f=\frac{v_a-v_b}{\|v_a-v_b\|}$. You can
# check [this article](https://arxiv.org/abs/1506.03147) for more details.

# All right! So the only thing that remains to do is to compute the positive and negative
# simplices, right? Turns out that Gudhi can do that with the `persistence_pairs()` function.
# Well, let's go then!
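# As a quick sanity check of the gradient formula above, here is a tiny numpy sketch (the two points are made up for illustration): the filtration value of an edge $\{v_a, v_b\}$ is $\|v_a-v_b\|$, and its gradient with respect to $v_a$ is the unit vector pointing from $v_b$ to $v_a$.

```python
import numpy as np

# Two illustrative vertices of an edge in the Rips complex
va, vb = np.array([3.0, 4.0]), np.array([0.0, 0.0])

f = np.linalg.norm(va - vb)   # filtration value of the edge {va, vb}
grad_va = (va - vb) / f       # df/dva = (va - vb) / ||va - vb||, a unit vector

print(f, grad_va)             # 5.0 [0.6 0.8]
```

Moving $v_a$ along `grad_va` increases the edge's filtration value at unit rate, which is exactly what the optimization below exploits through automatic differentiation.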
# First, let's write a function that computes the positive and negative simplices associated to
# the persistence pairs of a Rips persistence diagram, and outputs the vertices $v_a$ and $v_b$
# associated to these simplices.

def Rips(DX, mel, dim, card):
    # Parameters: DX (distance matrix),
    #             mel (maximum edge length for Rips filtration),
    #             dim (homological dimension),
    #             card (number of persistence diagram points, sorted by distance-to-diagonal)

    # Compute the persistence pairs with Gudhi
    rc = gd.RipsComplex(distance_matrix=DX, max_edge_length=mel)
    st = rc.create_simplex_tree(max_dimension=dim+1)
    dgm = st.persistence()
    pairs = st.persistence_pairs()

    # Retrieve vertices v_a and v_b by picking the ones achieving the maximal
    # distance among all pairwise distances between the simplex vertices
    indices, pers = [], []
    for s1, s2 in pairs:
        if len(s1) == dim+1:
            l1, l2 = np.array(s1), np.array(s2)
            i1 = [s1[v] for v in np.unravel_index(np.argmax(DX[l1,:][:,l1]), [len(s1), len(s1)])]
            i2 = [s2[v] for v in np.unravel_index(np.argmax(DX[l2,:][:,l2]), [len(s2), len(s2)])]
            indices += i1
            indices += i2
            pers.append(st.filtration(s2) - st.filtration(s1))

    # Sort points by distance-to-diagonal
    perm = np.argsort(pers)
    indices = list(np.reshape(indices, [-1,4])[perm][::-1,:].flatten())

    # Output indices
    indices = indices[:4*card] + [0 for _ in range(0, max(0, 4*card - len(indices)))]
    return list(np.array(indices, dtype=np.int32))

# Second, we define a Tensorflow model whose parameters are the point coordinates, and which
# outputs the corresponding Rips persistence diagram.
class RipsModel(tf.keras.Model):
    def __init__(self, X, mel=12, dim=1, card=50):
        super(RipsModel, self).__init__()
        self.X = X
        self.mel = mel
        self.dim = dim
        self.card = card

    def call(self):
        m, d, c = self.mel, self.dim, self.card

        # Compute distance matrix
        DX = tfa.losses.metric_learning.pairwise_distance(self.X)
        DXX = tf.reshape(DX, [1, DX.shape[0], DX.shape[1]])

        # Turn numpy function into tensorflow function
        RipsTF = lambda DX: tf.numpy_function(Rips, [DX, m, d, c], [tf.int32 for _ in range(4*c)])

        # Compute vertices associated to positive and negative simplices
        # Don't compute gradient for this operation
        ids = tf.nest.map_structure(tf.stop_gradient, tf.map_fn(RipsTF, DXX, dtype=[tf.int32 for _ in range(4*c)]))

        # Get persistence diagram by simply picking the corresponding entries in the distance matrix
        dgm = tf.reshape(tf.gather_nd(DX, tf.reshape(ids, [2*c,2])), [c,2])
        return dgm

# Finally, we are done! Let's define our parameters.

n_pts = 300    # number of points in the point clouds
card = 50      # max number of points in the diagrams
hom = 1        # homological dimension
ml = 12.       # max distance in Rips
n_epochs = 30  # number of optimization steps

# We randomly initialize our point cloud.

Xinit = np.array(np.random.uniform(size=(n_pts,2)), dtype=np.float32)
X = tf.Variable(initial_value=Xinit, trainable=True)

# We define our Tensorflow model + optimizer.

model = RipsModel(X=X, mel=ml, dim=hom, card=card)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)

# Finally, we can launch the optimization! Our loss is the opposite of the sum of squares of the
# distances to the diagonal of the points in the diagram. This will force the persistence diagram
# to have many prominent points, and thus the point cloud will contain as many loops as possible.
# Let's train and visualize the point cloud every 10 epochs!
for epoch in range(n_epochs+1):

    with tf.GradientTape() as tape:
        # Compute persistence diagram
        dgm = model.call()
        # Loss is the opposite of the sum of squares of the distances to the diagonal
        loss = -tf.math.reduce_sum(tf.square(.5*(dgm[:,1]-dgm[:,0])))

    # Compute and apply gradients
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    if epoch % 10 == 0:
        plt.figure()
        plt.scatter(model.X.numpy()[:,0], model.X.numpy()[:,1])
        plt.title("Point cloud at epoch " + str(epoch))
        plt.show()

# Looks like there are more cycles indeed ;-)

# ## Image optimization

# We can actually play the same game with images! Indeed, Gudhi contains code for computing
# [cubical persistence](http://www2.im.uj.edu.pl/mpd/publications/Wagner_persistence.pdf), which
# is very well-suited for handling images. For instance, it can be used to filter a 2D image with
# its pixel values. Overall, the optimization follows the exact same steps as before, except that
# we use the pixel filtration instead of the Rips filtration. This means that the parameters
# $\theta$ that we are now going to optimize are the pixel values themselves, and that the
# gradients for a positive simplex, $\nabla_\theta f_\theta(\sigma_+(p))$, and a negative simplex,
# $\nabla_\theta f_\theta(\sigma_-(p))$, now simply equal $1$ for the pixels associated to
# $\sigma_+(p)$ and $\sigma_-(p)$ and $0$ for all other pixels.

# Fortunately, Gudhi contains a function `cofaces_of_persistence_pairs()` that exactly retrieves
# the pixels (or, more formally, the cofaces) of the positive and negative simplices of a
# persistence point $p$.

# Let's start with a function that computes those pixels.
def Cubical(X, dim, card):
    # Parameters: X (image),
    #             dim (homological dimension),
    #             card (number of persistence diagram points, sorted by distance-to-diagonal)

    # Compute the persistence pairs with Gudhi
    cc = gd.CubicalComplex(dimensions=X.shape, top_dimensional_cells=X.flatten())
    cc.persistence()
    cof = cc.cofaces_of_persistence_pairs()[0][dim]

    # Sort points by distance-to-diagonal
    Xs = X.shape
    pers = [X[np.unravel_index(cof[idx,1], Xs)] - X[np.unravel_index(cof[idx,0], Xs)] for idx in range(len(cof))]
    perm = np.argsort(pers)
    cof = cof[perm]

    # Retrieve and output image indices/pixels corresponding to positive and negative simplices:
    # each pair occupies 2*D consecutive slots (D for the birth pixel, D for the death pixel)
    D = len(X.shape)
    ocof = np.array([0 for _ in range(D*card*2)])
    for idx in range(min(card, cof.shape[0])):
        ocof[2*D*idx:2*D*idx+D] = np.unravel_index(cof[idx,0], X.shape)
        ocof[2*D*idx+D:2*D*(idx+1)] = np.unravel_index(cof[idx,1], X.shape)
    return list(np.array(ocof.flatten(), dtype=np.int32))

# As before, we now define a corresponding Tensorflow model.

class CubicalModel(tf.keras.Model):
    def __init__(self, X, dim=1, card=50):
        super(CubicalModel, self).__init__()
        self.X = X
        self.dim = dim
        self.card = card

    def call(self):
        d, c, D = self.dim, self.card, len(self.X.shape)
        XX = tf.reshape(self.X, [1, self.X.shape[0], self.X.shape[1]])

        # Turn numpy function into tensorflow function
        CbTF = lambda X: tf.numpy_function(Cubical, [X, d, c], [tf.int32 for _ in range(2*D*c)])

        # Compute pixels associated to positive and negative simplices
        # Don't compute gradient for this operation
        inds = tf.nest.map_structure(tf.stop_gradient, tf.map_fn(CbTF, XX, dtype=[tf.int32 for _ in range(2*D*c)]))

        # Get persistence diagram by simply picking the corresponding entries in the image
        dgm = tf.reshape(tf.gather_nd(self.X, tf.reshape(inds, [-1,D])), [-1,2])
        return dgm

# All right! Let's apply all of these now!
# We are going to use the following image:

I = np.array(pd.read_csv("datasets/mnist_test.csv", header=None, sep=","), dtype=np.float32)
idx = np.argwhere(I[:,0] == 8)
image = np.reshape(-I[idx[8],1:], [28,28])
plt.figure()
plt.imshow(image)
plt.show()

# As you can see, the upper loop of this 8 is not complete. Since it corresponds to 1-dimensional
# topology, we can fix this by optimizing the pixel values so that the points in the 1-dimensional
# persistence diagram have maximal persistence.

# We first define the network parameters.

card = 50       # max number of points in the diagrams
hom = 1         # homological dimension
n_epochs = 100  # number of optimization steps

# We define our Tensorflow model + optimizer.

X = tf.Variable(initial_value=np.array(image, dtype=np.float32), trainable=True)
model = CubicalModel(X, dim=hom, card=card)
optimizer = tf.keras.optimizers.SGD(learning_rate=5*1e-2)

# This time, our loss is the opposite of the sum of the squared birth coordinates of the
# persistence diagram points, so that the loops will appear as soon as possible, and the values
# of the corresponding pixels will be as dark as possible. Let's train and visualize the image
# every 10 epochs!

for epoch in range(n_epochs+1):

    with tf.GradientTape() as tape:
        # Compute persistence diagram
        dgm = model.call()
        # Loss is the opposite of the sum of squared birth coordinates
        loss = -tf.math.reduce_sum(tf.square(dgm[:,0]))

    # Compute and apply gradients
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    if epoch % 10 == 0:
        plt.figure()
        plt.imshow(model.X.numpy())
        plt.title("Image at epoch " + str(epoch))
        plt.show()

# This upper loop looks definitely fixed now! :-)
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="MNbv_uwtN8g8" colab_type="text"
# # Sketch Model
#
# A simple CNN model that classifies doodles into their classes.
#
# ## Dataset Used
#
# ### [Quick, Draw Dataset by Google](https://github.com/googlecreativelab/quickdraw-dataset)

# + [markdown] id="fI2RKG-COgvF" colab_type="text"
# ## Importing Necessary Files

# + id="BrXE9bntwkNs" colab_type="code" colab={}
import os
import glob
import numpy as np
from tensorflow.keras import layers
from tensorflow import keras
import tensorflow as tf

# + id="b5rzuGycxI3W" colab_type="code" outputId="42878b15-8625-4d7c-e958-c3dc343befb3" colab={"base_uri": "https://localhost:8080/", "height": 55}
from google.colab import drive
drive.mount('/content/drive')

# + [markdown] id="zKCnRO0yOnve" colab_type="text"
# ## Reading the 100 selected classes from the file

# + id="II9AKo9mxoVZ" colab_type="code" outputId="17e11708-dabf-4567-c9f8-ba9ff4aea9d6" colab={"base_uri": "https://localhost:8080/", "height": 55}
with open("/content/drive/My Drive/Colab Notebooks/Quickdraw/classes.txt", "r") as f:
    classes = f.read().splitlines()
print(classes)

# + [markdown] id="KF7eRBAgOucX" colab_type="text"
# ## Downloading the Data

# + id="KlqfIlt3zqzI" colab_type="code" colab={}
import urllib.request

def download_data():
    base = 'https://storage.googleapis.com/quickdraw_dataset/full/numpy_bitmap/'
    for c in classes:
        cls_url = c.replace("_", "%20")
        path = base + cls_url + '.npy'
        print(path)
        urllib.request.urlretrieve(path, '/content/drive/My Drive/Colab Notebooks/Quickdraw/data/' + c + '.npy')

# + id="HLqWE0i50s0E" colab_type="code" outputId="5abb2af3-f913-4f0c-ba00-5c0a75944266" colab={"base_uri": "https://localhost:8080/", "height": 1000}
download_data()

# + [markdown] id="Ylk_fg73Oz-J" colab_type="text"
# ## Splitting into train and test set
#
# We are using only 5000 images per class to keep the model light.

# + id="T0s0hT362tE0" colab_type="code" colab={}
from sklearn.model_selection import train_test_split

def load_data(root, ratio=0.2, max_images_per_class=5000):
    all_files = glob.glob(os.path.join(root, '*.npy'))

    x = np.empty([0, 784])
    y = np.empty([0])
    class_names = classes

    for class_id, file in enumerate(all_files):
        file_data = np.load(file)
        file_data = file_data[0:max_images_per_class, :]
        labels = np.full(file_data.shape[0], class_id)
        x = np.concatenate((x, file_data), axis=0)
        y = np.append(y, labels)

    # Use the ratio argument for the test split
    X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=ratio, random_state=42)
    return X_train, X_test, y_train, y_test, class_names

# + id="AK0iqaiUAG2q" colab_type="code" outputId="5af33393-a0d7-4d86-a336-1d8d949e0afa" colab={"base_uri": "https://localhost:8080/", "height": 35}
root = "/content/drive/My Drive/Colab Notebooks/Quickdraw/data/"
X_train, X_test, y_train, y_test, class_names = load_data(root)
num_classes = len(class_names)
image_size = 28
print(X_train.shape)

# + [markdown] id="7wdqHWuZPA2R" colab_type="text"
# ## Visualizing the doodles in the dataset

# + id="jZSQB4J6BHD6" colab_type="code" outputId="0086cbf4-4bec-4668-d05c-81d5f85665ee" colab={"base_uri": "https://localhost:8080/", "height": 287}
import matplotlib.pyplot as plt
from random import randint
# %matplotlib inline

idx = randint(0, len(X_train))
plt.imshow(X_train[idx].reshape(28, 28))
print(class_names[int(y_train[idx].item())])

# + [markdown] id="E_yzvFCSPH13" colab_type="text"
# ## Preprocessing images and labels

# + id="G384P-2oFIOw" colab_type="code" colab={}
from keras.utils import to_categorical

X_train = X_train.reshape(X_train.shape[0], image_size, image_size, 1).astype('float32')
X_test = X_test.reshape(X_test.shape[0], image_size, image_size, 1).astype('float32')
X_train /= 255.0
X_test /= 255.0
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

# + id="Vp35BzcPNLcg" colab_type="code" outputId="dbb86d04-0368-48b0-a314-e8543001c39d" colab={"base_uri": "https://localhost:8080/", "height": 35}
print(X_train.shape)

# + [markdown] id="ZogNzep0PSds" colab_type="text"
# ## Creating the CNN model
#
# Few layers, to keep the model simple

# + id="f78xxHzOFvrJ" colab_type="code" outputId="5d0d8ced-aa1c-43c8-a234-b9951086dbf7" colab={"base_uri": "https://localhost:8080/", "height": 571}
tf.logging.set_verbosity(tf.logging.ERROR)

from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten, MaxPooling2D
from keras import optimizers

model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=X_train.shape[1:], activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(32, (3, 3), padding="same", activation="relu"))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), padding="same", activation="relu"))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), padding="same", activation="relu"))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dense(100, activation="softmax"))

adam = optimizers.Adam()
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['top_k_categorical_accuracy'])
print(model.summary())

# + [markdown] id="64lsoYZ1Pf0S" colab_type="text"
# ## Fitting the model on the training dataset

# + id="yp4khXPONgB3" colab_type="code" outputId="935e8b9e-1f2f-42ba-b0de-541d5bd9cd9c" colab={"base_uri": "https://localhost:8080/", "height": 430}
model.fit(x=X_train, y=y_train, validation_split=0.2, batch_size=256, verbose=2, epochs=10)

# + [markdown] id="LVbUxiytPlvm" colab_type="text"
# ## Evaluating the model on the test set

# + id="DC-2KiSJONQl" colab_type="code" outputId="f9d58688-3bf5-4620-ba51-36a385d5f5b3" colab={"base_uri": "https://localhost:8080/", "height": 35}
score = model.evaluate(X_test, y_test, verbose=0)
# score[1] is top_k_categorical_accuracy (k=5 by default)
print('Test top-5 accuracy: {:0.3f}%'.format(score[1] * 100))

# + [markdown] id="Qsxd8i5XPrJu" colab_type="text"
# ## Saving the Weights of the model

# + id="3xNj8jY9RFO8" colab_type="code" colab={}
model.save("/content/drive/My Drive/Colab Notebooks/Quickdraw/model.h5")

# + [markdown] id="1wBbn8kUPvft" colab_type="text"
# ## Prediction Test

# + colab_type="code" outputId="2ee94e76-e7aa-4a4e-b57a-2a6127e5e9e1" id="zFXhr4P2SOn4" colab={"base_uri": "https://localhost:8080/", "height": 287}
idx = randint(0, len(X_test))
img = X_test[idx]
plt.imshow(img.squeeze())
pred = model.predict(np.expand_dims(img, axis=0))[0]
ind = (-pred).argsort()[:5]
top5 = [class_names[x] for x in ind]
print(top5[0])

# + [markdown] id="OAs4OxlCP1E9" colab_type="text"
# ## Installing Tensorflowjs and converting the model

# + id="js0w0QepLFwK" colab_type="code" colab={}
# !pip install tensorflowjs

# + id="D46vLiemXmtI" colab_type="code" colab={}
# !tensorflowjs_converter \
#   --input_format=keras \
#   /content/model.h5 \
#   /content/model

# + id="rX5RMgmyPE9s" colab_type="code" colab={}
# !mkdir model
# !tensorflowjs_converter --input_format keras model.h5 model/

# + id="1YVFHxI3XvGN" colab_type="code" colab={}
# !zip -r model.zip model

# + id="0xbo-2QlYgfI" colab_type="code" outputId="2c7903fc-ca48-4a58-9b76-5fe3a729b79a" colab={"base_uri": "https://localhost:8080/", "height": 35}
print("End")
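# The metric reported during evaluation is `top_k_categorical_accuracy` (k=5 by default in Keras). As a sanity check, here is a minimal numpy sketch of how a top-k score can be computed by hand; the function name and the toy scores are illustrative, not part of the model above.

```python
import numpy as np

def top_k_accuracy(probs, labels, k=5):
    # Indices of the k highest-scoring classes for each row
    topk = np.argsort(-probs, axis=1)[:, :k]
    # A prediction is a hit if the true label appears among the top k
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

# Toy scores for 2 samples over 3 classes
probs = np.array([[0.1, 0.2, 0.7],
                  [0.5, 0.3, 0.2]])
print(top_k_accuracy(probs, labels=[2, 1], k=2))  # 1.0
```

This mirrors why a top-5 score over 100 classes reads much higher than plain accuracy: a sample counts as correct whenever the true class is anywhere in the five highest-scoring predictions.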